
Rounding up some of the major headlines from RSA

16 May 2024 at 18:00

While I hope to make it to the RSA Conference in person one day, I’ve never had the pleasure of making the trek to San Francisco for one of the largest security conferences in the U.S.

Instead, I had to watch from afar and catch up on the internet every day like the common folk. This at least gave me the advantage of not having my day totally slip away from me on the conference floor, so I don’t feel like I missed much in the way of talks, announcements and buzz. I wanted to use this space to recap what I felt were the top stories and trends coming out of RSA last week.

Here’s a rundown of some things you may have missed if you weren’t able to stay on top of the things coming out of the conference. 

AI is the talk of the town 

This is unsurprising given how every other tech-focused conference and talk has gone since the start of the year, but everyone had something to say about AI at RSA.  

AI and its associated tools were part of all sorts of product announcements, either as a marketing buzzword or as something that truly adds to the security landscape.

Cisco’s own Jeetu Patel gave a keynote on how Cisco Secure is using AI in its newly announced Hypershield product. In the talk, he argued that AI needs to be used natively on networking infrastructure and not as a “bolt-on” to compete with attackers.  

U.S. Secretary of State Antony Blinken was the headliner of the week, delivering a talk outlining the U.S.’ global cybersecurity policies. He spent a decent chunk of his half hour in the spotlight talking about AI, warning that the U.S. needs to maintain its edge in AI and quantum computing, and that losing that race to a geopolitical rival (like China) would have devastating consequences for our national security and economy.

Individual talks ran the gamut from “AI is the best thing ever for security!” to “Oh boy AI is going to ruin everything.” The reality of how this trend shakes out, like most things, is likely going to be somewhere in between those two schools of thought.  

An IBM study released at RSA highlighted how headstrong many executives can be when it comes to embracing AI. It found that security is generally an afterthought when creating generative AI models and tools, with only 24 percent of responding C-suite executives saying they have a security component built into their most recent GenAI project.

Vendors vow to build security into product designs 

Sixty-eight new tech companies signed onto a pledge from the U.S. Cybersecurity and Infrastructure Security Agency, vowing to build security into their products from the earliest stages of the design process.

The list of signees now includes Cisco, Microsoft, Google, Amazon Web Services and IBM, among other large tech companies. The pledge states that the signees will work over the next 12 months to build new security safeguards for their products, including increasing the use of multi-factor authentication (MFA) and reducing the presence of default passwords.  

However, there’s looming speculation about how enforceable the Secure by Design pledge is, and what the consequences are for any company that doesn’t live up to these promises.

New technologies countering deepfakes 

Deepfake images and videos are rapidly spreading online and pose a grave threat to the already fading faith many of us had in the internet.

It can be difficult to detect when users are looking at a digitally manipulated image or video unless they’re educated on common red flags to look for, or are particularly knowledgeable on the subject in question. Deepfakes are getting so good that even targets’ parents are falling for fake videos of their loved ones.

Some potential solutions discussed at RSA include digital “watermarks” in things like virtual meetings and video recordings with immutable metadata.  

A deepfake-detecting startup was also named RSA’s “Most Innovative Startup 2024” for its multi-modal software that can detect and alert users to AI-generated and manipulated content. McAfee also has its own Deepfake Detector that it says “utilizes advanced AI detection models to identify AI-generated audio within videos, helping people understand their digital world and assess the authenticity of content.”

Whether these technologies can keep up with the pace at which attackers are developing and deploying deepfakes on such a wide scale remains to be seen.

The one big thing 

As part of its monthly security update, Microsoft disclosed a zero-day vulnerability that could lead to an adversary gaining SYSTEM-level privileges. After a hefty Microsoft Patch Tuesday in April, this month’s security update from the company only included one critical vulnerability across its massive suite of products and services. In all, May’s slate of vulnerabilities disclosed by Microsoft included 59 total CVEs, most of which are of “important” severity. There is only one moderate-severity vulnerability.

Why do I care? 

The lone critical security issue is CVE-2024-30044, a remote code execution vulnerability in SharePoint Server. An authenticated attacker who obtains Site Owner permissions or higher could exploit this vulnerability by uploading a specially crafted file to the targeted SharePoint Server. Then, they must craft specialized API requests to trigger the deserialization of that file’s parameters, potentially leading to remote code execution in the context of the SharePoint Server. The aforementioned zero-day vulnerability, CVE-2024-30051, could allow an attacker to gain SYSTEM-level privileges, which could have devastating impacts if they were to carry out other attacks or exploit additional vulnerabilities. 

So now what? 

A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page. In response to these vulnerability disclosures, Talos is releasing a new Snort rule set that detects attempts to exploit some of them. Please note that additional rules may be released at a future date and current rules are subject to change pending additional information. Cisco Secure Firewall customers should use the latest update to their ruleset by updating their SRU. Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. The rules included in this release that protect against the exploitation of many of these vulnerabilities are 63419, 63420, 63422 - 63432, 63444 and 63445. There are also Snort 3 rules 300906 - 300912.

Top security headlines of the week 

A massive network intrusion is disrupting dozens of hospitals across the U.S., even forcing some of them to reroute ambulances late last week. Ascension Healthcare Network said it first detected the activity on May 8 and then had to revert to manual systems. The disruption caused some appointments to be canceled or rescheduled and kept patients from visiting MyChart, an online portal for medical records. Doctors also had to start taking pen-and-paper records for patients. Ascension operates more than 140 hospitals in 19 states across the U.S. and works with more than 8,500 medical providers. The company has yet to say whether the disruption was the result of a ransomware attack or some other sort of targeted cyber attack, though there was no timeline for restoring services as of earlier this week. Earlier this year, a ransomware attack on Change Healthcare disrupted health care systems nationwide, pausing many payments providers were expected to receive. UnitedHealth Group Inc., the parent company of Change, told a Congressional panel recently that it paid a requested ransom of $22 million in Bitcoin to the attackers. (CPO Magazine, The Associated Press)

Google and Apple are rolling out new alerts to their mobile operating systems that warn users of potentially unwanted devices tracking their locations. The new features specifically target Bluetooth Low Energy (LE)-enabled accessories that are small enough to track someone’s location without their knowledge, such as an Apple AirTag. Android and iOS users will now receive an alert when such a device has been separated from its owner’s smartphone and is still moving with them. This alert is meant to prevent adversaries or anyone with malicious intentions from tracking targets’ locations without their knowledge. The two companies proposed these new rules for tracking devices a year ago, and other manufacturers of these devices have agreed to add this alert feature to their products going forward. “This cross-platform collaboration — also an industry first, involving community and industry input — offers instructions and best practices for manufacturers, should they choose to build unwanted tracking alert capabilities into their products,” Apple said in its announcement of the rollout. (Security Week, Apple)

The popular Christie’s online art marketplace was still down as of Wednesday afternoon after a suspected cyber attack. The site, known for having many high-profile and wealthy clients, was planning on selling artwork worth at least $578 million this week. Christie’s said it first detected the technology security incident on Thursday but has yet to comment on whether it was any sort of targeted cyber attack or data breach. There was also no information on whether client or user data was potentially at risk. Current items for sale included a Vincent van Gogh painting and a collection of rare watches, some owned by Formula 1 star Michael Schumacher. Potential buyers could instead place bids in person or over the phone. (Wall Street Journal, BBC)

Can’t get enough Talos? 

Upcoming events where you can find Talos 

ISC2 SECURE Europe (May 29) 

Amsterdam, Netherlands 

Gergana Karadzhova-Dangela from Cisco Talos Incident Response will participate in a panel on “Using ECSF to Reduce the Cybersecurity Workforce and Skills Gap in the EU.” Karadzhova-Dangela participated in the creation of the EU cybersecurity framework, and will discuss how Cisco has used it for several of its internal initiatives as a way to recruit and hire new talent.  

Cisco Live (June 2 - 6) 

Las Vegas, Nevada  

AREA41 (June 6 – 7) 

Zurich, Switzerland 

Gergana Karadzhova-Dangela from Cisco Talos Incident Response will highlight the critical importance of actionable incident response documentation for the overall response readiness of an organization. During this talk, she will share commonly observed mistakes when writing IR documentation and ways to avoid them. She will draw on her experiences as a responder who works with customers during proactive activities and actual cybersecurity breaches.

Most prevalent malware files from Talos telemetry over the past week 

SHA 256: 9be2103d3418d266de57143c2164b31c27dfa73c22e42137f3fe63a21f793202 
MD5: e4acf0e303e9f1371f029e013f902262 
Typical Filename: FileZilla_3.67.0_win64_sponsored2-setup.exe 
Claimed Product: FileZilla 
Detection Name: W32.Application.27hg.1201 

SHA 256: a024a18e27707738adcd7b5a740c5a93534b4b8c9d3b947f6d85740af19d17d0 
MD5: b4440eea7367c3fb04a89225df4022a6 
Typical Filename: Pdfixers.exe 
Claimed Product: Pdfixers 
Detection Name: W32.Superfluss:PUPgenPUP.27gq.1201 

SHA 256: 1fa0222e5ae2b891fa9c2dad1f63a9b26901d825dc6d6b9dcc6258a985f4f9ab 
MD5: 4c648967aeac81b18b53a3cb357120f4 
Typical Filename: yypnexwqivdpvdeakbmmd.exe 
Claimed Product: N/A  
Detection Name: Win.Dropper.Scar::1201 

SHA 256: d529b406724e4db3defbaf15fcd216e66b9c999831e0b1f0c82899f7f8ef6ee1 
MD5: fb9e0617489f517dc47452e204572b4e 
Typical Filename: KMSAuto++.exe 
Claimed Product: KMSAuto++ 
Detection Name: W32.File.MalParent 

SHA 256: abaa1b89dca9655410f61d64de25990972db95d28738fc93bb7a8a69b347a6a6 
MD5: 22ae85259273bc4ea419584293eda886 
Typical Filename: KMSAuto++ x64.exe 
Claimed Product: KMSAuto++ 
Detection Name: W32.File.MalParent 

Understanding AddressSanitizer: Better memory safety for your code

16 May 2024 at 13:00

By Dominik Klemba and Dominik Czarnota

This post will guide you through using AddressSanitizer (ASan), a compiler plugin that helps developers detect memory issues in code that can lead to remote code execution attacks (such as WannaCry or this WebP implementation bug). ASan inserts checks around memory accesses during compile time, and crashes the program upon detecting improper memory access. It is widely used during fuzzing due to its ability to detect bugs missed by unit testing and its better performance compared to other similar tools.

ASan was designed for C and C++, but it can also be used with Objective-C, Rust, Go, and Swift. This post will focus on C++ and demonstrate how to use ASan, explain its error outputs, explore implementation fundamentals, and discuss ASan’s limitations and common mistakes, which will help you catch previously undetected bugs.

Finally, we share a concrete example of a real bug we encountered during an audit that was missed by ASan and can be detected with our changes. This case motivated us to research ASan’s bug detection capabilities and contribute dozens of upstreamed commits to the LLVM project, including container-overflow annotations for std::basic_string and std::deque in libc++.

Getting started with ASan

ASan can be enabled in LLVM’s Clang and GNU GCC compilers by using the -fsanitize=address compiler and linker flag. The Microsoft Visual C++ (MSVC) compiler supports it via the /fsanitize=address option. Under the hood, the program’s memory accesses will be instrumented with ASan checks and the program will be linked with ASan runtime libraries. As a result, when a memory error is detected, the program will stop and provide information that may help in diagnosing the cause of memory corruption.

AddressSanitizer’s approach differs from other tools like Valgrind, which may be used without rebuilding a program from its source, but has bigger performance overhead (20x vs 2x) and may detect fewer bugs.

Simple example: detecting out-of-bounds memory access

Let’s see ASan in practice on a simple buggy C++ program that reads data from an array out of its bounds. Figure 1 shows the code of such a program, and figure 2 shows its compilation, linking, and output when running it, including the error detected by ASan. Note that the program was compiled with debugging symbols and no optimizations (-g3 and -O0 flags) to make the ASan output more readable.

Figure 1: Example program that has an out-of-bounds bug on the stack since it reads the fifth item from the buf array while it has only 4 elements (example.cpp)
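
The figure itself isn’t reproduced here; a minimal program matching the caption (a four-element stack array read one element past its end) might look like the following sketch. It is an assumption of what the original example.cpp contained, so line numbers may differ.

#include <cstdio>

int main() {
    int buf[4] = {0, 1, 2, 3};
    // Bug: the loop runs up to index 4, reading a fifth element that doesn't exist.
    for (int i = 0; i <= 4; i++)
        printf("%d\n", buf[i]);
    return 0;
}

Building it with something like clang++ -fsanitize=address -g3 -O0 example.cpp and running the binary should produce a report similar to the one in figure 2.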

Figure 2: Running the program from figure 1 with ASan

When ASan detects a bug, it prints out a best guess of the error type that has occurred, a backtrace where it happened in the code, and other location information (e.g., where the related memory was allocated or freed).

Figure 3: Part of an ASan error message with location in code where related memory was allocated

In this example, ASan detected a stack-buffer overflow (an out-of-bounds read) in the sixth line of the example.cpp file. The problem was that we read the memory of the buf variable out of bounds through the buf[i] code when the loop counter variable (i) had a value of 4.

It is also worth noting that ASan can detect many different types of errors like stack-buffer-overflows, heap-use-after-free, double-free, alloc-dealloc-mismatch, container-overflow, and others. Figures 4 and 5 present another example, where ASan detects a heap-use-after-free bug and shows the exact location where the related heap memory was allocated and freed.

Figure 4: Example program that uses a buffer that was freed (built with -fsanitize=address -O0 -g3)
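
The figure’s code isn’t shown here either; a minimal use-after-free in the same spirit (a hypothetical reconstruction, not the authors’ exact program) could be:

#include <cstring>

int main() {
    char *buffer = new char[16];
    std::memset(buffer, 'A', 16);
    delete[] buffer;
    // Bug: the buffer is read after it was freed; ASan reports a heap-use-after-free
    // and points at both the allocation and the free site.
    return buffer[0];
}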

Figure 5: Excerpt of ASan report from running the program from figure 4

For more ASan examples, refer to the LLVM tests code or Microsoft’s documentation.

Building blocks of ASan

ASan is built upon two key concepts: shadow memory and redzones. Shadow memory is a dedicated memory region that stores metadata about the application’s memory. Redzones are special memory regions placed in between objects in memory (e.g., variables on the stack or heap allocations) so that ASan can detect attempts to access memory outside of the intended boundaries.

Shadow memory

Shadow memory is allocated at a high address of the program, and ASan modifies its data throughout the lifetime of the process. Each byte in shadow memory describes the accessibility status of a corresponding memory chunk that can potentially be accessed by the process. Those memory chunks, typically referred to as “granules,” are commonly 8 bytes in size and are aligned to their size (the granule size is set in GCC/LLVM code). Figure 6 shows the mapping between granules and process memory.

Figure 6: Logical division of process memory and corresponding shadow memory bytes

The shadow memory values detail whether a given granule can be fully or partially addressable (accessible by the process), or whether the memory should not be touched by the process. In the latter case, we call this memory “poisoned,” and the corresponding shadow memory byte value details the reason why ASan thinks so. The shadow memory values legend is printed by ASan along with its reports. Figure 7 shows this legend.

Figure 7: Shadow memory legend (the values are displayed in hexadecimal format)

By updating the state of shadow memory during the process execution, ASan can verify the validity of memory accesses by checking the granule’s value (and so its accessibility status). If a memory granule is fully accessible, a corresponding shadow byte is set to zero. Conversely, if the whole granule is poisoned, the value is negative. If the granule is partially addressable—i.e., only the first N bytes may be accessed and the rest shouldn’t—then the number N of addressable bytes is stored in the shadow memory. For example, freed memory on the heap is described with value fd and shouldn’t be used by the process until it’s allocated again. This allows for detecting use-after-free bugs, which often lead to serious security vulnerabilities.

Partially addressable granules are very common. One example may be a buffer on a heap of a size that is not 8-byte-aligned; another may be a variable on the stack that has a size smaller than 8 bytes.
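
As a rough illustration of these rules, the sketch below shows how a shadow byte could be interpreted for an access of size bytes at addr that fits within a single granule. This is a simplification, not ASan’s actual runtime code; the shadow base constant is the default used on Linux x86-64 and differs on other platforms.

#include <cstddef>
#include <cstdint>

constexpr uintptr_t kShadowOffset = 0x7fff8000; // Linux x86-64 default; platform-specific

// Returns true if the access of `size` bytes at `addr` touches poisoned memory,
// assuming the access does not cross a granule boundary.
bool IsAccessPoisoned(uintptr_t addr, size_t size) {
    int8_t shadow = *reinterpret_cast<int8_t *>((addr >> 3) + kShadowOffset);
    if (shadow == 0) return false;  // whole 8-byte granule is addressable
    if (shadow < 0)  return true;   // whole granule is poisoned (fa, fd, f7, ...)
    // Partially addressable: only the first `shadow` bytes of the granule are valid.
    return static_cast<int8_t>((addr & 7) + size) > shadow;
}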

Redzones

Redzones are memory regions inserted into the process memory (and so reflected in shadow memory) that act as buffer zones, separating different objects in memory with poisoned memory. As a result, compiling a program with ASan changes its memory layout.

Let’s look at the shadow memory for the program shown in figure 8, where we introduced three variables on the stack: “buf,” an array of six items each of 2 bytes, and “a” and “b” variables of 2 and 1 bytes.

Figure 8: Example program with an out of bounds memory access error detected by ASan (built with -fsanitize=address -O0 -g3)

Running the program with ASan, as in figure 9, shows us that the problematic memory access hit the “stack right redzone” as marked by the “[f3]” shadow memory byte. Note that ASan marked this byte with the arrow before the address and the brackets around the value.

Figure 9: Shadow bytes describing the memory area around the stack variables from figure 8. Note that the byte 01 corresponds to the variable “b,” the 02 to variable “a,” and 00 04 to the buf array.

This shadow memory along with the corresponding process memory is shown in figure 10. ASan would detect accesses to the bytes colored in red and report them as errors.

Figure 10: Memory layout with ASan. Each cell represents one byte.

Without ASan, the “a,” “b,” and “buf” variables would likely be next to each other, without any padding between them. The padding comes from the fact that each variable must occupy its own granules (leaving some granules only partially addressable) and from the redzones added between them as well as before and after them.

Redzones are not added between elements in arrays or between member variables in structures, because doing so would break many applications that depend upon the structure layout, the objects’ sizes, or simply the fact that arrays are contiguous in memory.

Sadly, ASan also doesn’t poison the structure padding bytes, since they may be accessed by valid programs when a whole structure is copied (e.g., with the memcpy function).

How does ASan instrumentation work?

ASan instrumentation is fully dependent on the compiler; however, implementations are very similar between compilers. Its shadow memory has the same layout and uses the same values in LLVM and GCC, as the latter is based on the former. The instrumented code also calls special functions defined in compiler-rt, a low-level runtime library from LLVM. It is worth noting that there are also shared or static versions of the ASan libraries, though this may vary based on the compiler or environment.

The ASan instrumentation adds checks to the program code to validate legality of the program’s memory accesses. Those checks are performed by comparing the address and size of the access against the shadow memory. The shadow memory mapping and encoding of values (the fact that granules are of 8 bytes in size) allow ASan to efficiently detect memory access errors and provide valuable insight into the problems encountered.
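
For an 8-byte access, the inserted fast path boils down to something like the sketch below (a conceptual equivalent, not the literal code the compiler emits; the shadow base is again the Linux x86-64 default, and __asan_report_load8 is the runtime reporting function discussed with figure 13):

#include <cstdint>

extern "C" void __asan_report_load8(uintptr_t addr); // provided by the ASan runtime

uint64_t instrumented_load8(uintptr_t addr) {
    constexpr uintptr_t kShadowOffset = 0x7fff8000;   // Linux x86-64 default
    int8_t shadow_value = *reinterpret_cast<int8_t *>((addr >> 3) + kShadowOffset);
    if (shadow_value != 0)
        __asan_report_load8(addr);                    // report the violation and abort
    return *reinterpret_cast<uint64_t *>(addr);       // the original 8-byte load
}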

Let’s look at a simple C++ example compiled and tested on x86-64, where the touch function accesses 8 bytes at the address given in the argument (the touch function takes a pointer to a pointer and dereferences it):

Figure 11: A function accessing memory area of size 8 bytes
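
The function isn’t reproduced in this post; a hedged reconstruction based on the description (a pointer to a pointer that gets dereferenced, i.e., an 8-byte read on x86-64) is:

void *touch(void **ptr) {
    return *ptr; // 8-byte read at the address held in ptr
}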

Without ASan, the function has a very simple assembly code:

Figure 12: The function from figure 11 compiled without ASan

Figure 13 shows that, when compiling the code from figure 11 with ASan, a check is added that confirms the access is correct (i.e., that the whole granule is accessible). We can see that the address we are going to access is first divided by 8 (the shr rax, 3 instruction) to compute its offset in the shadow memory. Then, the program checks if the shadow memory byte is zero; if it’s not, it calls the __asan_report_load8 function, which makes ASan report the memory access violation. The byte is checked against zero because zero means that all 8 bytes are accessible, and the memory dereference that the program performs returns another pointer, which is of course 8 bytes in size.

Figure 13: The function from Figure 11 compiled with ASan using Clang 15

For comparison, we can see that the GCC compiler generates code (figure 14) similar to LLVM’s (figure 13):

Figure 14: The function from Figure 11 compiled with ASan using gcc 12

Of course, if the program accessed a smaller region, a different check would have to be generated by the compiler. This is shown in figures 15 and 16, where the program accesses just a single byte.

Figure 15: A function accessing memory area smaller than a granule
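
Again as a hedged reconstruction, such a function could be as simple as a single-byte read whose position within its 8-byte granule is unknown at compile time:

char touch_byte(char *ptr) {
    return *ptr; // 1-byte read: may hit the start, middle, or end of a granule
}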

Now the function accesses a single byte that may be at the beginning, middle, or the end of a granule, and every granule may be fully addressable, partially addressable, or fully poisoned. The shadow memory byte is first checked against zero, and if it doesn’t match, a detailed check is performed (starting from the .LBB0_1 label). This check will raise an error if the granule is partially addressable and a poisoned byte is accessed (from a poisoned suffix) or if the granule is fully poisoned. (GCC generates similar code.)

Figure 16: An example of a more complex check, confirming legality of the access in function from figure 15, compiled with Clang 15

Can you spot the problem above?

You may have noticed in figures 12-14 that access to poisoned memory may not be detected if the address we read 8 bytes from is unaligned. For such an unaligned memory access, its first and last bytes are in different granules.

The following snippet illustrates a scenario when the address of variable ptr is increased by three and the touch function touches an unaligned address.

Figure 17: Code accessing unaligned memory of size 8 may not be detected by ASan in Clang 15
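
A minimal sketch of that scenario, reusing the touch function reconstructed after figure 11 (again an assumption, not the authors’ exact snippet):

#include <cstdint>

void *touch(void **ptr) { return *ptr; } // the 8-byte read from figure 11

int main() {
    char *buffer = new char[8];
    // Advance the address by 3: the 8-byte read now spans two granules and its
    // last 3 bytes fall into the heap redzone, yet the fast check passes because
    // the shadow byte for the granule containing buffer + 3 is zero.
    uintptr_t unaligned = reinterpret_cast<uintptr_t>(buffer) + 3;
    void *value = touch(reinterpret_cast<void **>(unaligned));
    delete[] buffer;
    return value != nullptr;
}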

The incorrect access from figure 17 is not detected when it is compiled with Clang 15, but it is detected by GCC 12 as long as the function is inlined. If we force non-inlining with __attribute__ ((noinline)), GCC won’t detect it either. It seems that when GCC is aware of address manipulations that may result in unaligned addressing, it generates a more robust check that detects the invalid access correctly.

ASan’s limitations and quirks

While ASan may miss some bugs, it is important to note that it does not report any false positives if used properly. This means that if it detects a bug, it is either a valid bug in the code or a sign that part of the code was not linked with ASan properly (assuming that ASan itself doesn’t have bugs).

However, the ASan implementations in GCC and LLVM include the following limitations and quirks:

  • Redzones are not added between variables in structures.
  • Redzones are not added between array elements.
  • Padding in structures is not poisoned (example).
  • Access to allocated, but not yet used, memory in a container won’t be detected, unless the container annotates itself like C++’s std::vector, std::deque, or std::string (in some cases). Note that std::basic_string (with external buffers) and std::deque are annotated in libc++ (thanks to our patches), while std::string is also annotated in the Microsoft C++ standard library.
  • Incorrect access to memory managed by a custom allocator won’t raise an error unless the allocator performs annotations.
  • Only suffixes of a memory granule may be poisoned; therefore, access before an unaligned object may not be detected.
  • ASan may not detect memory errors if a random address is accessed. As long as the random number generator returns an addressable address, the access won’t be considered incorrect.
  • ASan doesn’t understand context and only checks values in shadow memory. If a random address being accessed is annotated as some error in shadow memory, ASan will correctly report that error, even if its bug title may not make much sense.
  • Because ASan does not understand what programs are intended to do, accessing an array with an incorrect index may not be detected if the resulting address is still addressable, as shown in figure 18.

Figure 18: Access to memory that is addressable but out of bounds of the array. There is no error detected.
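
A hypothetical example of this blind spot (not the figure’s exact code): indexing one element past a member array lands on the neighboring member, which is perfectly addressable, so ASan stays silent.

struct Pair {
    int a[4];
    int b; // lives right after the array; no redzone is placed between members
};

int main() {
    Pair p = {{1, 2, 3, 4}, 5};
    return p.a[4]; // out of bounds of `a`, but the address is addressable (it is `b`)
}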

ASan is not meant for production use

ASan is designed as a debugging tool for use in development and testing environments, and it should not be used in production. Apart from its overhead, ASan shouldn’t be used for hardening, as its use could compromise the security of a program. For example, it decreases the effectiveness of the ASLR security mitigation through its gigantic shadow memory allocation, and it also changes the behavior of the program based on environment variables, which could be problematic, e.g., for suid binaries.

If you have any other doubts, you should check the ASan FAQ; for hardening your application, refer to compiler security flags.

Poisoning only suffixes

Because ASan currently has a very limited number of values in shadow memory, it can only poison suffixes of memory granules. In other words, there is no shadow memory encoding that can tell ASan that a byte within a granule is accessible if it follows an inaccessible (poisoned) byte.

As an example, if the third byte in a granule is not poisoned, the previous two bytes cannot be poisoned either, even if the program’s logic would require them to be.

It also means that up to seven bytes may be left unpoisoned if an object, variable, or buffer starts in the middle or at the last byte of a granule.

False positives due to linking

False positives can occur when only part of a program is built with ASan. These false positives are often (if not always) related to container annotations. For example, linking a library that is both missing instrumentation and modifying annotated objects may result in false positives.

Consider a scenario where the push_back member function of a vector is called. If an object is added at the end of the container in a part of the program that does not have ASan instrumentation, no error will be reported, and the memory where the object is stored will not be unpoisoned. As a result, accessing this memory in the instrumented part of the program will trigger a false positive error.

Similarly, access to poisoned memory in a part of the program that was built without ASan won’t be detected.

To address this situation, the whole application along with all its dependencies should be built with ASan (or at least all parts modifying annotated containers). If this is not possible, you can turn off container annotations by setting the environment variable ASAN_OPTIONS=detect_container_overflow=0.

Do it yourself: user annotations

User annotations may be used to detect incorrect memory accesses—for example, when preallocating a big chunk of memory and managing it with a custom allocator or in a custom container. In other words, user annotations can be used to implement similar checks to those std::vector does under the hood in order to detect out-of-bounds access in between the vector’s data+size and data+capacity addresses.

If you want to make your testing even stronger, you can choose to intentionally “poison” certain memory areas yourself. For this, there are two macros you may find useful:

  • ASAN_POISON_MEMORY_REGION(addr, size)
  • ASAN_UNPOISON_MEMORY_REGION(addr, size)

To use these macros, you need to include the ASan interface header:

Figure 19: The ASan API must be included in the program
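
The header in question is the public sanitizer interface header shipped with both Clang and GCC:

#include <sanitizer/asan_interface.h>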

This makes poisoning and unpoisoning memory quite simple. The following is an example of how to do this:

Figure 20: A program demonstrating user poisoning and its detection.
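
A sketch of such a program (hypothetical sizes and names, matching the description below):

#include <sanitizer/asan_interface.h>

int main() {
    int *buffer = new int[16];
    // Mark the whole allocation as off-limits for the program.
    ASAN_POISON_MEMORY_REGION(buffer, 16 * sizeof(int));
    // Detected: the access hits a granule marked "poisoned by user" (f7), and
    // ASan aborts here with a report like the one in figure 21.
    return buffer[5];
}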

The program allocates a buffer on the heap, poisons the whole buffer (through user poisoning), and then accesses an element from the buffer. This access is detected as forbidden, and the program reports a “Poisoned by user” error (f7). The figure below shows the buffer (poisoned by user) as well as the heap redzone (fa).

Figure 21: A part of the error message generated by program from figure 20 while compiled with ASan

However, if you unpoison part of the buffer (as shown below, for four elements), no error is raised when accessing the first four elements. Accessing any further element will raise an error.

Figure 22: An example of unpoisoning memory by user
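
Continuing the same sketch, unpoisoning a prefix of the buffer makes only those elements legal again:

#include <sanitizer/asan_interface.h>

int main() {
    int *buffer = new int[16];
    ASAN_POISON_MEMORY_REGION(buffer, 16 * sizeof(int));
    // Unpoison only the first four elements; the rest of the buffer stays poisoned.
    ASAN_UNPOISON_MEMORY_REGION(buffer, 4 * sizeof(int));
    int ok = buffer[3];     // fine: within the unpoisoned prefix
    return ok + buffer[4];  // still poisoned: ASan reports an error here
}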

If you want to understand better how those macros impact the code, you can look into their definitions in the ASan interface file.

The ASAN_POISON_MEMORY_REGION and ASAN_UNPOISON_MEMORY_REGION macros simply invoke the __asan_poison_memory_region and __asan_unpoison_memory_region functions from the API. However, when a program is compiled without ASan, these macros do nothing beyond evaluating the macro arguments.

The bug missed by ASan

As we noted previously in the limitations section, ASan does not automatically detect out-of-bounds accesses into containers that preallocate memory and manage it themselves. This was also a case we came across during an audit: we found a bug through manual review in code that we were fuzzing, and we were surprised the fuzzer had not found it. It turned out that this was because of the lack of container overflow detection in the std::basic_string and std::deque collections in libc++.

This motivated us to get involved in ASan development by developing a proof of concept of those ASan container overflow detections in GCC and LLVM and eventually upstream patches to LLVM.

So what was the bug that ASan missed? Figure 23 shows a minimal example of it. The buggy code compared two containers via the std::equal overload that takes only the first1, last1, and first2 iterators, corresponding to the beginning and end of the first sequence and to the beginning of the second sequence, assuming that both sequences have the same length.

However, when the second container is shorter than the first one, this can cause an out-of-bounds read, which ASan did not detect at the time. With our patches, it finally does.

Figure 23: Code snippet demonstrating the nature of the bug we found during the audit. Container type was changed for demonstrative purposes.
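
A hypothetical reconstruction of the pattern (the container type was changed for demonstration, as in the original figure):

#include <algorithm>
#include <string>

int main() {
    // Long enough to force heap-allocated (external) buffers in libc++.
    std::string first  = "a fairly long string used as the first sequence in the comparison";
    std::string second = "a fairly long string that is noticeably shorter";
    // The three-iterator std::equal assumes the second range is at least as long
    // as the first, so it reads past the end of `second`.
    return std::equal(first.begin(), first.end(), second.begin());
}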

Use ASan to detect more memory safety bugs

We hope our efforts to improve ASan’s state-of-the-art bug detection capabilities will cement its status as a powerful tool for protecting codebases against memory issues.

We’d like to express our sincere gratitude to the entire LLVM community for their support during the development of our ASan annotation improvements. From reviewing code patches and brainstorming implementation ideas to identifying issues and sharing knowledge, their contributions were invaluable. We especially want to thank vitalybuka, ldionne, philnik777, and EricWF for their ongoing support!

We hope this explanation of AddressSanitizer has been insightful and demonstrated its value in hunting down bugs within a codebase. We encourage you to leverage this knowledge to proactively identify and eliminate issues in your own projects. If you successfully detect bugs with the help of the information provided here, we’d love to hear about it! Happy hunting!

If you need help with ASan annotations, fuzzing, or anything related to LLVM, contact us! We are happy to help tailor sanitizers or other LLVM tools to your specific needs. If you’d like to read more about our work on compilers, check out the following posts: VAST (GitHub repository) and Macroni (GitHub repository).

Talos releases new macOS open-source fuzzer

16 May 2024 at 12:00
  • Cisco Talos has developed a fuzzer that enables us to test macOS software on commodity hardware.
  • The fuzzer uses a snapshot-based fuzzing approach and is based on the WhatTheFuzz (WTF) framework.
  • Support for VM state extraction was implemented and WhatTheFuzz was extended to support the loading of VMWare virtual machine snapshots.
  • Additional tools support symbolizing and code coverage analysis of fuzzing traces.

Finding novel and unique vulnerabilities often requires the development of unique tools that are best suited for the task. The platform and hardware that the target software runs on usually dictate the tools and techniques that can be used. This is especially true for parts of the macOS operating system and kernel due to their closed-source nature and the lack of tools that support advanced debugging, introspection or instrumentation.

Compared to fuzzing for software vulnerabilities on Linux, where most of the code is open-source, targeting anything on macOS presents a few difficulties. Things are closed-source, so we can’t use compile-time instrumentation. While dynamic binary instrumentation tools like DynamoRIO and TinyInst work on macOS, they cannot be used to instrument kernel components.

There are also hardware considerations – with few exceptions, macOS only runs on Apple hardware. Yes, it can be virtualized, but that has its drawbacks. What this means in practice is that we cannot use our commodity off-the-shelf servers to test macOS code. And fuzzing on laptops isn’t exactly effective.

A while ago, we embarked upon a project that would alleviate most of these issues, and we are making the code available today. 

Using a snapshot-based approach enables us to precisely target closed-source code without custom harnesses. Researchers can obtain full instrumentation and code coverage by executing tests in an emulator, which enables us to perform tests on our existing hardware. While this approach is limited to testing macOS running on Intel hardware, most of the code is still shared between Intel and ARM versions.

Previously in snapshot fuzzing

The simplest way to fuzz a target application is to run it in a loop while changing the inputs. The obvious downside is that you lose time on application initialization and boilerplate code, with less CPU time spent on executing the relevant part of the code.

The approach in snapshot-based fuzzing is to define a point in process execution to inject the fuzzing test case (at an entry point of an important function). Then, you interrupt the program at a given point (via breakpoint or other means) and take a snapshot. The snapshot includes all of the virtual memory being used, and the CPU or other process state required to restore and resume process execution. Then, you insert the fuzzing test case by modifying the memory and resume execution.

When the execution reaches a predefined sink (end of function, error state, etc.) you stop the program, discard and replace the state with the previously saved one.

The benefit of this is that you only pay the penalty of restoring the process to its previous state rather than creating it from scratch. Additionally, if you can rely on OS or CPU mechanisms such as copy-on-write, page-dirty tracking and on-demand paging, the operation of restoring the process can be very fast and have little impact on overall fuzzing speed.

Cory Duplantis championed our previous attempts at utilizing snapshot-based fuzzing in his work on Barbervisor, a bare metal hypervisor developed to support high-performance snapshot fuzzing.

It involved acquiring a snapshot of a full (VirtualBox-based) VM and then transplanting it into Barbervisor where it could be executed. It relied on Intel CPU features to enable high performance by only restoring modified memory pages.

While this showed great potential and gave us a glimpse into the potential utility of snapshot-based fuzzing, it had a few downsides. A similar approach, built on top of KVM and with numerous improvements, was implemented in Snapchange and released by AWS Labs.

Snapshot fuzzing building blocks

Around the time Talos published Barbervisor, Axel Souchet published his WTF project, which takes a different approach. It trades performance for a clean development environment by relying on existing tooling. It uses Hyper-V to run the virtual machines that are to be snapshotted, then uses kd (the Windows kernel debugger) to take the snapshot, saving the state in the Windows memory dump file format, which is optimized for loading. WTF is written in C++, which means it can benefit from the plethora of existing support libraries such as custom mutators or fuzz generators.

It has multiple possible execution backends, but the most fully featured one is based on Bochs, an x86 emulator, which provides a complete instrumentation framework. The user will likely see a dip in performance – it’s slower than native execution – but it can be run on any platform that Bochs runs on (Linux and Windows, virtualized or otherwise) with no special hardware requirements.

The biggest downside is that it was mainly designed to target Windows virtual machines and targets running on Windows.

When modifying WTF to support fuzzing macOS targets, we need to take care of a few mechanisms that aren’t supported out of the box. Split into pre-fuzzing and fuzzing stages, those include:

  • A mechanism to debug the OS and process that is to be fuzzed – this is necessary to precisely choose the point of snapshotting.
  • A mechanism to acquire a copy of physical memory – necessary to transplant the execution into the emulator.
  • CPU state snapshotting – this has to include all the Control Registers, all the MSRs and other CPU-specific registers that aren’t general-purpose registers.

In the fuzzing stage, on the other hand, we need:

  • A mechanism to restore the acquired memory pages – this has to be custom for our environment.
  • A way to catch crashes, as the crashing/faulting mechanisms on Windows and macOS differ greatly.

CPU state, memory modification and coverage analysis will also require adjustments.

Debugging 

For targeting the macOS kernel, we’d want to take a snapshot of an actual, physical machine. That would give us the most accurate attack surface, with all the kernel extensions that require special hardware loaded and set up. There is a significant attack surface reduction in virtualized macOS.

However, debugging physical Mac machines is cumbersome. It requires at least one more machine and special network adapters, and the debug mechanism isn’t perfect for our goal (relies on non-maskable interrupts instead of breakpoints and doesn’t fully stop the kernel from executing code).

Debugging a virtual machine is somewhat easier. VMWare Fusion contains a gdbserver stub that doesn’t care about the underlying operating system. We can also piggyback on VMWare’s snapshotting feature. 

The VMWare debugger stub is enabled in the .vmx file:

debugStub.listen.guest64 = "TRUE"
debugStub.hideBreakpoints = "FALSE"

The first option enables it, and the second tells the gdb stub to use software breakpoints, as opposed to hardware breakpoints, which aren’t supported in Fusion.

Attaching to a VM for debugging relies on GDB’s remote protocol:

$ lldb
(lldb) gdb-remote 8864
Kernel UUID: 3C587984-4004-3C76-8ADF-997822977184
Load Address: 0xffffff8000210000
...
kernel was compiled with optimization - stepping may behave oddly; variables may not be available.
Process 1 stopped
* thread #1, stop reason = signal SIGTRAP
    frame #0: 0xffffff80003d2eba kernel`machine_idle at pmCPU.c:181:3 [opt]
Target 0: (kernel) stopped.
(lldb)

Snapshot acquisition

The second major requirement for snapshot fuzzing is, well, snapshotting. We can piggyback on VMWare Fusion for this, as well.

The usual way to use VMWare’s snapshotting is to either suspend a VM or make an exact copy of the state you can revert to. This is almost exactly what we want to do.

We can set a breakpoint using the debugger and wait for it to be reached. At this point, the whole virtual machine execution is paused. Then, we can take a snapshot of the machine state paused at precisely the instruction we want. There is no need to time anything or inject a sentinel instruction. Since we are debugging the VM, we control it fully. A slightly more difficult part is figuring out how to use these snapshots: to reuse them, we needed to figure out the file formats VMWare Fusion stores them in.


Fusion’s snapshots consist of two separate files: a vmem file that holds the memory state and a vmsn file that holds the device state, which includes the CPU, all the controllers, buses, PCI devices, disks, etc. – everything that’s needed to restore the VM.

As far as the memory dump goes, the vmem file is a linear dump of all of the VM’s RAM. If the VM has 2GB of RAM, the vmem file will be a 2GB byte-for-byte copy of the RAM’s contents. Because we are dealing with virtual machines, this is a physical memory layout and no parsing is required; we just need a loader.

The machine state file, on the other hand, uses a fairly complex, undocumented format that contains a lot of irrelevant information. We only care about the CPU state, as we won’t be trying to restore a complete VM, just enough to run a fair bit of code. While undocumented, it has been mostly reverse-engineered for the Volatility project. By extending Volatility, we can get a CPU state dump in the format usable by WhatTheFuzz.

Snapshot loading into WTF

With both file formats figured out, we can return to WTF to modify it accordingly. The most important modification we need to make is to the physical memory loader.

WTF uses Windows’ dmp file format, so we need our own handler. Since our memory dump file is just a direct one-to-one copy of physical RAM, mapping it into memory and then mapping the pages is very straightforward, as you can see in the following excerpt:

bool BuildPhysmemRawDump(){
  //vmware snapshot is just a raw linear dump of physical memory, with some gaps
  //just fill up a structure for all the pages with appropriate physmem file offsets
  //assuming physmem dump file is from a vm with 4gb of ram
  uint8_t *base = (uint8_t *)FileMap_.ViewBase();
  for(uint64_t i = 0; i < 786432; i++){ //that many pages, first 3gb
    uint64_t offset = i*4096;
    Physmem_.try_emplace(offset, (uint8_t *)base+offset);
  }
  //there's a gap in VMWare's memory dump from 3 to 4gb, last 1gb is mapped above 4gb
  for(uint64_t i = 0; i < 262144; i++){
    uint64_t offset = (i+786432)*4096;
    Physmem_.try_emplace(i*4096+4294967296, (uint8_t *)base+offset);
  }
  return true;
}

 We just need to fake the structures with appropriate offsets. 

Catching crashes

The last piece of the puzzle is how to catch crashes. In WTF, and our modification of it, this is as simple as setting a breakpoint at an appropriate place. On Windows, hooking nt!KeBugCheck2 is the perfect place; we just need a similar thing in the macOS kernel.

Kernel panics, exceptions, faults and the like on macOS go through a complicated call stack that ultimately culminates in a complete OS crash and reboot.

Depending on what type of crash we are trying to catch and the type of kernel we are running, we can put a breakpoint on the exception_triage function, which is in the execution path between a fault happening and the machine panicking or rebooting:
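
In the attached lldb session, that is a single command, for example:

(lldb) breakpoint set -n exception_triage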

With that out of the way, we have all the pieces of the puzzle necessary to fuzz a macOS kernel target.

Case study: IPv6 stack

macOS’ IPv6 stack is a good example to illustrate how the complete scheme works. This is a simple but interesting entry point into some complex code: the attack surface is composed of a complex set of protocols, is reachable over the network and is stateful. It would be difficult to fuzz with traditional fuzzers because network fuzzing is slow, and we wouldn’t have coverage. Additionally, this part of the macOS kernel is open-source, making it easy to see if things work as intended. First, we’ll need to prepare the target virtual machine.

VM preparation

This will assume a few things:

  • The host machine is a MacBook running macOS 12 Monterey. 
  • VMWare Fusion as the virtualization platform.
  • Guest VM running macOS 12 Monterey with the following specs:
    • SIP turned off.
    • 2 or 4 GB of RAM (4 is better, but snapshots are bigger).
    • One CPU/core, as multithreading just complicates things.

Since we are going to be debugging on the VM, it's prudent to disable SIP before doing anything else.

We'll use VMWare's GDB stub to debug the VM instead of Apple’s KDP because it interferes less with the running VM. The VM doesn't and cannot know that it is enabled. 

Enabling it is as simple as editing a VM's .vmx file. Locate it in the VM package and add the following lines to the end:

debugStub.listen.guest64 = "TRUE"
debugStub.hideBreakpoints = "FALSE"

To make debugging, and our lives, easier, we'll want to change some macOS boot options. Since we've disabled SIP, this should be doable from a regular (elevated) terminal:

$ sudo nvram boot-args="slide=0 debug=0x100 keepsyms=1"

The code above changes macOS' boot args to:

  • Disable boot time kASLR via slide=0.
  • Disable the watchdog via debug=0x100, which will prevent the VM from automatically rebooting in case of a kernel panic.
  • keepsyms=1, in conjunction with the previous one, prints out the symbols during a kernel panic.

Setting up a KASAN build of the macOS kernel would be a crucial step for actual fuzzing, but not strictly necessary for testing purposes.

Target function

Our fuzzing target is the function ip6_input, which is the entry point for parsing incoming IPv6 packets.

void
ip6_input(struct mbuf *m)
{
	struct ip6_hdr *ip6;
	int off = sizeof(struct ip6_hdr), nest;
	u_int32_t plen;
	u_int32_t rtalert = ~0;

It has a single parameter that contains an mbuf holding the actual packet data. This is the data we want to mutate and modify to fuzz ip6_input.

Mbufs are a standard structure in XNU and are essentially a linked list of buffers that contain data. We need to find where the actual packet data is (mh_data) and mutate it before resuming execution.

struct mbuf {
    struct m_hdr m_hdr;
    union {
        struct {
            struct pkthdr MH_pkthdr;        /* M_PKTHDR set */
            union {
                struct m_ext MH_ext;    /* M_EXT set */
                char    MH_databuf[_MHLEN];
            } MH_dat;
        } MH;
        char    M_databuf[_MLEN];               /* !M_PKTHDR, !M_EXT */
    } M_dat;
};
struct m_hdr {
    struct mbuf 	*mh_next;       /* next buffer in chain */
    struct mbuf 	*mh_nextpkt;    /* next chain in queue/record */
    caddr_t     	mh_data;        /* location of data */
    int32_t     	mh_len;         /* amount of data in this mbuf */
    u_int16_t   	mh_type;        /* type of data in this mbuf */
    u_int16_t   	mh_flags;       /* flags; see below */
 
}

This means that we will have to, in the WTF fuzzing harness, dereference a pointer to get to the actual packet data.
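
As a rough sketch of that step, the harness could read the mh_data pointer out of the mbuf and overwrite the packet bytes with the mutated test case. The helpers ReadGuestU64, WriteGuestBytes and WriteGuestU32 below are hypothetical stand-ins for whatever guest-memory accessors the backend provides, and the field offsets assume the 64-bit m_hdr layout shown above:

#include <cstddef>
#include <cstdint>

// Hypothetical helpers standing in for the backend's guest-memory accessors.
uint64_t ReadGuestU64(uint64_t gva);
bool WriteGuestBytes(uint64_t gva, const uint8_t *buf, size_t len);
bool WriteGuestU32(uint64_t gva, uint32_t value);

// Sketch: given the mbuf pointer passed to ip6_input (in RDI at the snapshot
// point), replace the packet bytes with the fuzzer-generated test case.
bool InjectPacket(uint64_t mbuf_gva, const uint8_t *data, size_t size) {
    // On 64-bit, mh_next and mh_nextpkt precede mh_data, so mh_data sits at
    // offset 16 and mh_len at offset 24 within the mbuf (assumed layout).
    uint64_t mh_data = ReadGuestU64(mbuf_gva + 16);
    if (!WriteGuestBytes(mh_data, data, size))
        return false;
    // Keep the recorded length in sync; a real harness would also cap `size`
    // to the original packet's allocation.
    return WriteGuestU32(mbuf_gva + 24, static_cast<uint32_t>(size));
}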

Snapshotting

To create a snapshot, we use the debugger to set a breakpoint at ip6_input function. This is where we want to start our fuzzing.

Process 1 stopped
* thread #2, name = '0xffffff96db894540', queue = 'cpu-0', stop reason = signal SIGTRAP
    frame #0: 0xffffff80003d2eba kernel`machine_idle at pmCPU.c:181:3 [opt]
Target 0: (kernel) stopped.
(lldb) breakpoint set -n ip6_input
Breakpoint 1: where = kernel`ip6_input + 44 at ip6_input.c:779:6, address = 0xffffff800078b54c
(lldb) c
Process 1 resuming
(lldb)

Then, we need to provoke the VM to reach that breakpoint. We can either wait until the VM receives an IPv6 packet, or we can do it manually. To send the actual packet, we prefer using `ping6` because it doesn’t send any SYN/ACKs and allows us to easily control packet size and contents.

The actual command is:

ping6 fe80::108f:8a2:70be:17ba%en0 -c 1 -p 41 -s 1016 -b 1064

The above simply sends a controlled ICMPv6 ping packet that is as large as possible and padded with 0x41 bytes. We send the packet to the en0 interface – sending to localhost shortcuts the call stack, and the packet processing is different. This should give us a nice packet in memory, mostly full of AAAs, that we can mutate and fuzz.

When the ping6 command is executed, the VM will receive the IPv6 packet and start parsing it, which will immediately reach our breakpoint.

Process 1 stopped
* thread #3, name = '0xffffff96dbacd540', queue = 'cpu-0', stop reason = breakpoint 1.1
	frame #0: 0xffffff800078b54c kernel`ip6_input(m=0xffffff904e51b000) at ip6_input.c:779:6 [opt]
Target 0: (kernel) stopped.
(lldb)

The VM is now paused and we have the address of our mbuf that contains the packet which we can fuzz. Fusion's gdb stub seems to be buggy, though, and it leaves that int 3 in place. If we were to take a snapshot now, the first instruction we execute would be that int3, which would immediately break our fuzzing. We need to explicitly disable the breakpoint before taking the snapshot:

(lldb) disassemble
kernel`ip6_input:
	0xffffff800078b520 <+0>:  pushq  %rbp
	0xffffff800078b521 <+1>:  movq   %rsp, %rbp
	0xffffff800078b524 <+4>:  pushq  %r15
	0xffffff800078b526 <+6>:  pushq  %r14
	0xffffff800078b528 <+8>:  pushq  %r13
	0xffffff800078b52a <+10>: pushq  %r12
	0xffffff800078b52c <+12>: pushq  %rbx
	0xffffff800078b52d <+13>: subq   $0x1b8, %rsp          	; imm = 0x1B8
	0xffffff800078b534 <+20>: movq   %rdi, %r12
	0xffffff800078b537 <+23>: leaq   0x98ab02(%rip), %rax  	; __stack_chk_guard
	0xffffff800078b53e <+30>: movq   (%rax), %rax
	0xffffff800078b541 <+33>: movq   %rax, -0x30(%rbp)
	0xffffff800078b545 <+37>: movq   %rdi, -0xb8(%rbp)
->  0xffffff800078b54c <+44>: int3
	0xffffff800078b54d <+45>: testl  %ebp, (%rdi,%rdi,8)

Sometimes, it's just buggy enough that it won't update the disassembly listing after the breakpoint is removed.

(lldb) breakpoint disable
All breakpoints disabled. (1 breakpoints)
(lldb) disassemble
kernel`ip6_input:
	0xffffff800078b520 <+0>:  pushq  %rbp
	0xffffff800078b521 <+1>:  movq   %rsp, %rbp
	0xffffff800078b524 <+4>:  pushq  %r15
	0xffffff800078b526 <+6>:  pushq  %r14
	0xffffff800078b528 <+8>:  pushq  %r13
	0xffffff800078b52a <+10>: pushq  %r12
	0xffffff800078b52c <+12>: pushq  %rbx
	0xffffff800078b52d <+13>: subq   $0x1b8, %rsp          	; imm = 0x1B8
	0xffffff800078b534 <+20>: movq   %rdi, %r12
	0xffffff800078b537 <+23>: leaq   0x98ab02(%rip), %rax  	; __stack_chk_guard
	0xffffff800078b53e <+30>: movq   (%rax), %rax
	0xffffff800078b541 <+33>: movq   %rax, -0x30(%rbp)
	0xffffff800078b545 <+37>: movq   %rdi, -0xb8(%rbp)
->  0xffffff800078b54c <+44>: int3
	0xffffff800078b54d <+45>: testl  %ebp, (%rdi,%rdi,8)

So, we can just step over the offending instruction to make sure:

(lldb) step
Process 1 stopped
* thread #3, name = '0xffffff96dbacd540', queue = 'cpu-0', stop reason = step in
	frame #0: 0xffffff800078b556 kernel`ip6_input(m=0xffffff904e51b000) at ip6_input.c:780:12 [opt]
Target 0: (kernel) stopped.
(lldb) disassemble
kernel`ip6_input:
	0xffffff800078b520 <+0>:	pushq  %rbp
	0xffffff800078b521 <+1>:	movq   %rsp, %rbp
	0xffffff800078b524 <+4>:	pushq  %r15
	0xffffff800078b526 <+6>:	pushq  %r14
	0xffffff800078b528 <+8>:	pushq  %r13
	0xffffff800078b52a <+10>:   pushq  %r12
	0xffffff800078b52c <+12>:   pushq  %rbx
	0xffffff800078b52d <+13>:   subq   $0x1b8, %rsp          	; imm = 0x1B8
	0xffffff800078b534 <+20>:   movq   %rdi, %r12
	0xffffff800078b537 <+23>:   leaq   0x98ab02(%rip), %rax  	; __stack_chk_guard
	0xffffff800078b53e <+30>:   movq   (%rax), %rax
	0xffffff800078b541 <+33>:   movq   %rax, -0x30(%rbp)
	0xffffff800078b545 <+37>:   movq   %rdi, -0xb8(%rbp)
	0xffffff800078b54c <+44>:   movl   $0x28, -0xd4(%rbp)
->  0xffffff800078b556 <+54>:   movl   $0x0, -0xe4(%rbp)
	0xffffff800078b560 <+64>:   movl   $0xffffffff, -0xe8(%rbp)  ; imm = 0xFFFFFFFF
	0xffffff800078b56a <+74>:   leaq   -0x1d8(%rbp), %rdi
	0xffffff800078b571 <+81>:   movl   $0xa0, %esi
	0xffffff800078b576 <+86>:   callq  0xffffff80001010f0    	; __bzero
	0xffffff800078b57b <+91>:   movq   $0x0, -0x100(%rbp)
	0xffffff800078b586 <+102>:  movq   $0x0, -0x108(%rbp)
	0xffffff800078b591 <+113>:  movq   $0x0, -0x110(%rbp)
	0xffffff800078b59c <+124>:  movq   $0x0, -0x118(%rbp)
	0xffffff800078b5a7 <+135>:  movq   $0x0, -0x120(%rbp)
	0xffffff800078b5b2 <+146>:  movq   $0x0, -0x128(%rbp)
	0xffffff800078b5bd <+157>:  movq   $0x0, -0x130(%rbp)
	0xffffff800078b5c8 <+168>:  movzwl 0x1e(%r12), %r8d
	0xffffff800078b5ce <+174>:  movl   0x18(%r12), %edx 

Now, we should be in a good place to take our snapshot before something goes wrong. To do that, we simply need to use Fusion's "Snapshot" menu while the VM is stuck on a breakpoint.

VM snapshot state

As mentioned previously, the .vmsn file contains a virtual machine state. The file format is partially documented and we can use a modified version of  Volatility (a patch is available in the repository).  

Simply execute Volatility like so, making sure to point it at the correct `vmsn` file:  

 $ python2 ./vol.py -d -v -f ~/Virtual\ Machines.localized/macOS\ 11.vmwarevm/macOS\ 11-Snapshot3.vmsn vmwareinfo

It will spit out the relevant machine state in the JSON format that WTF expects. For example:

{
	"rip": "0xffffff800078b556",
	"rax": "0x715d862e57400011",
	"rbx": "0xffffff904e51b000",
	"rcx": "0xffffff80012f1860",
	"rdx": "0xffffff904e51b000",
	"rsi": "0xffffff904e51b000",
	"rdi": "0xffffff904e51b000",
	"rsp": "0xffffffe598ca3ab0",
	"rbp": "0xffffffe598ca3c90",
	"r8": "0x42",
	"r9": "0x989680",
	"r10": "0xffffff80010fdfb8",
	"r11": "0xffffff96dbacd540",
	"r12": "0xffffff904e51b000",
	"r13": "0xffffffa0752ddbd0",
	"r14": "0x0",
	"r15": "0x0",
	"tsc": "0xfffffffffef07619",
	"rflags": "0x202",
	"cr0": "0x8001003b",
	"cr2": "0x104ca5000",
	"cr3": "0x4513000",
	"cr4": "0x3606e0",
	"cr8": "0x0",
	"dr0": "0x0",
	"dr1": "0x0",
	"dr2": "0x0",
	"dr3": "0x0",
	"dr6": "0xffff0ff0",
	"dr7": "0x400",
	"gdtr": {
    	"base": "0xfffff69f40039000",
    	"limit": "0x97"
	},
	"idtr": {
    	"base": "0xfffff69f40084000",
    	"limit": "0x1000"
	},
	"sysenter_cs": "0xb",
	"sysenter_esp": "0xfffff69f40085200",
	"sysenter_eip": "0xfffff69f400027a0",
	"kernel_gs_base": "0x114a486e0",
	"efer": "0xd01",
	"tsc_aux": "0x0",
	"xcr0": "0x7",
	"pat": "0x1040600070406",
	"es": {
    	"base": "0x0",
    	"limit": "0xfffff",
    	"attr": "0xc000",
    	"present": true,
    	"selector": "0x0"
	},
	"cs": {
    	"base": "0x0",
    	"limit": "0xfffff",
    	"attr": "0xa09b",
    	"present": true,
    	"selector": "0x8"
	},
	"ss": {
    	"base": "0x0",
    	"limit": "0xfffff",
    	"attr": "0xc093",
    	"present": true,
    	"selector": "0x10"
	},
	"ds": {
    	"base": "0x0",
    	"limit": "0xfffff",
    	"attr": "0xc000",
    	"present": true,
    	"selector": "0x0"
	},
	"fs": {
    	"base": "0x0",
    	"limit": "0xfffff",
    	"attr": "0xc000",
    	"present": true,
    	"selector": "0x0"
	},
	"gs": {
    	"base": "0xffffff8001089140",
    	"limit": "0xfffff",
    	"attr": "0xc000",
    	"present": true,
    	"selector": "0x0"
	},
	"ldtr": {
    	"base": "0xfffff69f40087000",
    	"limit": "0x17",
    	"attr": "0x82",
    	"present": true,
    	"selector": "0x30"
	},
	"tr": {
    	"base": "0xfffff69f40086000",
    	"limit": "0x67",
    	"attr": "0x8b",
    	"present": true,
    	"selector": "0x40"
	},
	"star": "0x001b000800000000",
	"lstar": "0xfffff68600002720",
	"cstar": "0x0000000000000000",
	"sfmask": "0x0000000000004700",
	"fpcw": "0x27f",
	"fpsw": "0x0",
	"fptw": "0x0",
	"fpst": [
    	"0x-Infinity",
    	"0x-Infinity",
    	"0x-Infinity",
    	"0x-Infinity",
    	"0x-Infinity",
    	"0x-Infinity",
    	"0x-Infinity",
    	"0x-Infinity"
	],
	"mxcsr": "0x00001f80",
	"mxcsr_mask": "0x0",
	"fpop": "0x0",
	"apic_base": "0x0"
}

Notice that the above output contains the same register content our debugger shows, but also MSRs, control registers, the GDTR and more. This is all we need to start running the snapshot under WTF.

Fuzzing harness and fixups

Our fuzzing harness needs to do a couple of things:

  • Set a few meaningful breakpoints.
    • A breakpoint on target function return so we know where to stop fuzzing.
    • A breakpoint on the kernel exception handler so we can catch crashes. 
    • Other handy breakpoints that would patch things, or stop the test case if it reaches a certain state.
  • For every test case, find a proper place in memory, write it there, and adjust the size.

All WTF fuzzers need to implement at least two methods: 

  • bool Init(const Options_t &Opts, const CpuState_t &)
  • bool InsertTestcase(const uint8_t *Buffer, const size_t BufferSize) 

Init 

The Init method performs the fuzzing initialization steps, and this is where we would register our breakpoints.

To begin, we need the end of the ip6_input function, which we will use as the end of execution:

(lldb) disassemble -n ip6_input
...	
    0xffffff800078cdf2 <+6354>: testl  %ecx, %ecx
	0xffffff800078cdf4 <+6356>: jle	0xffffff800078cfc9    	; <+6825> at ip6_input.c:1415:2
	0xffffff800078cdfa <+6362>: addl   $-0x1, %ecx
	0xffffff800078cdfd <+6365>: movl   %ecx, 0x80(%rax)
	0xffffff800078ce03 <+6371>: leaq   0x989236(%rip), %rax  	; __stack_chk_guard
	0xffffff800078ce0a <+6378>: movq   (%rax), %rax
	0xffffff800078ce0d <+6381>: cmpq   -0x30(%rbp), %rax
	0xffffff800078ce11 <+6385>: jne	0xffffff800078d07f    	; <+7007> at ip6_input.c
	0xffffff800078ce17 <+6391>: addq   $0x1b8, %rsp          	; imm = 0x1B8
	0xffffff800078ce1e <+6398>: popq   %rbx
	0xffffff800078ce1f <+6399>: popq   %r12
	0xffffff800078ce21 <+6401>: popq   %r13
	0xffffff800078ce23 <+6403>: popq   %r14
	0xffffff800078ce25 <+6405>: popq   %r15
	0xffffff800078ce27 <+6407>: popq   %rbp
	0xffffff800078ce28 <+6408>: retq

This function has only one ret, so we can use that. We'll add a breakpoint at 0xffffff800078ce28 to stop the execution of the test case:

  Gva_t retq = Gva_t(0xffffff800078ce28);
  if (!g_Backend->SetBreakpoint(retq, [](Backend_t *Backend) {
        Backend->Stop(Ok_t());
      })) {
    return false;
  }

The above code sets up a breakpoint at the desired address, which executes the anonymous handler function when hit. The handler stops the execution with the Ok_t() type, which signifies a non-crashing end of the test case.

Next, we'll want to catch actual exceptions, crashes and panics. Whenever an exception happens in the macOS kernel, the function exception_triage is called. Regardless of whether it was triggered by an actual crash or by something else, if this function is called, we may as well stop test case execution.

We need to get the address of exception_triage first:

(lldb)  p exception_triage
(kern_return_t (*)(exception_type_t, mach_exception_data_t, mach_msg_type_number_t)) $4 = 0xffffff8000283cb0 (kernel`exception_triage at exception.c:671)
(lldb)

Now, we just need to add a breakpoint at 0xffffff8000283cb0:

  Gva_t exception_triage = Gva_t(0xffffff8000283cb0);
  if (!g_Backend->SetBreakpoint(exception_triage, [](Backend_t *Backend) {
        const Gva_t rdi = Gva_t(g_Backend->Rdi());
        const std::string Filename = fmt::format("crash-{:#x}", rdi);
        DebugPrint("Crash: {}\n", Filename);
        Backend->Stop(Crash_t(Filename));
      })) {
    return false;
  }

This breakpoint is slightly more complicated because we want to gather some information at the time of the crash. When the breakpoint is hit, we read a register that carries exception context (rdi, the first argument passed to exception_triage) and use it to form a filename for the saved test case. This helps differentiate unique crashes.

Finally, since this is a crashing test case, the execution is stopped with Crash_t(), which marks the test case as a crash and saves it under that filename.

With that, the basic Init function is complete. 

InsertTestcase

The function InsertTestcase is what inserts the mutated data into the target's memory before resuming execution. This is where you would sanitize any necessary input and figure out where you want to put your mutated data in memory. 

Our target function's signature is ip6_input(struct mbuf *), so the mbuf struct will hold the actual data. We can use lldb at our first breakpoint to figure out where the data is:  

(lldb) p m->m_hdr
(m_hdr) $7 = {
  mh_next = 0xffffff904e3f4700
  mh_nextpkt = NULL
  mh_data = 0xffffff904e51b0d8 "`\U00000004\U00000003"
  mh_len = 40
  mh_type = 1
  mh_flags = 66
}
(lldb) memory read 0xffffff904e51b0d8
0xffffff904e51b0d8: 60 04 03 00 04 00 3a 40 fe 80 00 00 00 00 00 00  `.....:@........
0xffffff904e51b0e8: 10 8f 08 a2 70 be 17 ba fe 80 00 00 00 00 00 00  ....p...........
(lldb) p (struct mbuf *)0xffffff904e3f4700
(struct mbuf *) $8 = 0xffffff904e3f4700
(lldb) p ((struct mbuf *)0xffffff904e3f4700)->m_hdr
(m_hdr) $9 = {
  mh_next = NULL
  mh_nextpkt = NULL
  mh_data = 0xffffff904e373000 "\x80"
  mh_len = 1024
  mh_type = 1
  mh_flags = 1
}
(lldb) memory read 0xffffff904e373000
0xffffff904e373000: 80 00 30 d7 02 69 00 00 62 b4 fd 25 00 0a 2f d3  ..0..i..b..%../.
0xffffff904e373010: 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41  AAAAAAAAAAAAAAAA
(lldb)

At the start of the ip6_input function, inspecting m_hdr of the first parameter shows that it has 40 bytes of data at 0xffffff904e51b0d8, which looks like a standard IPv6 header (the dumped bytes decode to a payload length of 0x400 = 1,024 and a next header value of 58, i.e., ICMPv6). Additionally, grabbing mh_next and inspecting it shows that it contains data at 0xffffff904e373000 of size 1,024, which consists of ICMPv6 data and our AAAAs.

To properly fuzz all IPv6 protocols, we'll mutate both the IPv6 header and the encapsulated packet. We'll need to copy the first 40 bytes into the first mbuf and the rest into the second mbuf.

For the second mbuf (the ICMPv6 packet), we need to write our mutated data at 0xffffff904e373000. This is fairly straightforward, as we don't need to read or dereference registers or deal with offsets:

bool InsertTestcase(const uint8_t *Buffer, const size_t BufferSize) {
  if (BufferSize < 40) return true; // mutated data too short

  Gva_t ipv6_header = Gva_t(0xffffff904e51b0d8);
  if (!g_Backend->VirtWriteDirty(ipv6_header, Buffer, 40)) {
    DebugPrint("VirtWriteDirty failed\n");
  }

  Gva_t icmp6_data = Gva_t(0xffffff904e373000);
  if (!g_Backend->VirtWriteDirty(icmp6_data, Buffer + 40, BufferSize - 40)) {
    DebugPrint("VirtWriteDirty failed\n");
  }

  return true;
}

We could also update the mbuf size (a sketch of what that might look like is below), but we'll limit the mutated test case size instead. And that's it – our fuzzing harness is pretty much ready.
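For completeness, if we did want to keep the mbuf length in sync with the mutated payload, a minimal sketch could write the new length directly into the second mbuf from InsertTestcase. This is not part of the harness described above; the mh_len offset (24 bytes, i.e., after the three pointer fields visible in the lldb dump) and the reuse of the snapshot's mbuf address are assumptions:

  // Sketch only: keep the second mbuf's mh_len in sync with the mutated size.
  constexpr uint64_t SecondMbuf = 0xffffff904e3f4700;  // from the lldb mh_next dump
  constexpr uint64_t MhLenOffset = 24;                 // assumed offset of mh_len in struct mbuf
  const uint32_t PayloadLen = uint32_t(BufferSize - 40);
  if (!g_Backend->VirtWriteDirty(Gva_t(SecondMbuf + MhLenOffset),
                                 reinterpret_cast<const uint8_t *>(&PayloadLen),
                                 sizeof(PayloadLen))) {
    DebugPrint("Failed to update mh_len\n");
  }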

Everything together

Every WTF fuzzer needs to have a state directory and three things in it:

  • Mem.dmp: A full dump of RAM.
  • Regs.json: A JSON file describing CPU state.
  • Symbol-store.json: Not really required, can be empty, but we can populate it with addresses of known symbols, so we can use those instead of hardcoded addresses in the fuzzer.

Next, copy the snapshot's .vmem file over to your fuzzing machine and rename it to mem.dmp. Write the VM state that we got from Volatility into a file called regs.json.
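Assuming the state directory is simply called state (matching the --state flag used below), the layout ends up looking like this:

state/
    mem.dmp             <- the renamed .vmem memory dump
    regs.json           <- the CPU state extracted by Volatility
    symbol-store.json   <- optional; may be left empty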

With the state set up, we can make a test run. Compile the fuzzer and test it like so:

c:\work\codes\wtf\targets\ipv6_input>..\..\src\build\wtf.exe  run  --backend=bochscpu --name IPv6_Input --state state --input inputs\ipv6 --trace-type 1 --trace-path .

The debugger instance is loaded with 0 items

load raw mem dump1
Done
Setting debug register status to zero.
Setting debug register status to zero.
Segment with selector 0 has invalid attributes.
Segment with selector 0 has invalid attributes.
Segment with selector 8 has invalid attributes.
Segment with selector 0 has invalid attributes.
Segment with selector 10 has invalid attributes.
Segment with selector 0 has invalid attributes.
Trace file .\ipv6.trace
Running inputs\ipv6
--------------------------------------------------
Run stats:
Instructions executed: 13001 (4961 unique)
      	Dirty pages: 229376 bytes (0 MB)
  	Memory accesses: 46135 bytes (0 MB)
#1 cov: 4961 exec/s: infm lastcov: 0.0s crash: 0 timeout: 0 cr3: 0 uptime: 0.0s
 
c:\work\codes\wtf\targets\ipv6_input>

In the above, we run WTF in run mode with tracing enabled. We want it to run the fuzzer with the specified input and save a RIP trace file that we can then examine. As we can see from the output, the fuzzer run completed successfully. The total number of instructions was 13,001 (4,961 of which were unique) and, most notably, the run completed without a crash or a timeout.

Analyzing coverage and symbolizing

WTF's symbolizer relies on the fact that the targets it runs are on Windows and generally have PDBs. Emulating that completely would be too much work, so I've opted to instead do some LLDB scripting and symbolization.

First, we need LLDB to dump out all known symbols and their addresses. That's fairly straightforward with the script supplied in the repository. The script parses the output of the image dump symtab command and performs some additional querying to resolve as many symbols as possible. The result is a symbol-store.json file that looks something like this:

{"0xffffff8001085204": ".constructors_used",
"0xffffff800108520c": ".destructors_used",
"0xffffff8000b15172": "Assert",
"0xffffff80009e52b0": "Block_size",
"0xffffff80008662a0": "CURSIG",
"0xffffff8000a05a10": "ConfigureIOKit",
"0xffffff8000c8fd00": "DTRootNode",
"0xffffff8000282190": "Debugger",
"0xffffff8000281fb0": "DebuggerTrapWithState",
"0xffffff80002821b0": "DebuggerWithContext",
"0xffffff8000a047b0": "IOAlignmentToSize",
"0xffffff8000aa8840": "IOBSDGetPlatformUUID",
"0xffffff8000aa89e0": "IOBSDMountChange",
"0xffffff8000aa6df0": "IOBSDNameMatching",
"0xffffff8000aa87b0": "IOBSDRegistryEntryForDeviceTree",
"0xffffff8000aa87f0": "IOBSDRegistryEntryGetData",
"0xffffff8000aa87d0": "IOBSDRegistryEntryRelease",
"0xffffff8000ad6740": "IOBaseSystemARVRootHashAvailable",
"0xffffff8000a68e20": "IOCPURunPlatformActiveActions",
"0xffffff8000a68ea0": "IOCPURunPlatformHaltRestartActions",
"0xffffff8000a68f20": "IOCPURunPlatformPanicActions",
"0xffffff8000a68ff0": "IOCPURunPlatformPanicSyncAction",
"0xffffff8000a68db0": "IOCPURunPlatformQuiesceActions",
"0xffffff8000aa6d20": "IOCatalogueMatchingDriversPresent",
"0xffffff8000a04480": "IOCopyLogNameForPID",
"0xffffff8000a023c0": "IOCreateThread",
"0xffffff8000aa8c30": "IOCurrentTaskHasEntitlement",
"0xffffff8000a07940": "IODTFreeLoaderInfo",
"0xffffff8000a07a90": "IODTGetDefault",
"0xffffff8000a079b0": "IODTGetLoaderInfo",
"0xffffff8000381fd0": "IODefaultCacheBits",
"0xffffff8000a03f00": "IODelay",
"0xffffff8000a02430": "IOExitThread",
"0xffffff8000aa7830": "IOFindBSDRoot",
"0xffffff8000a043c0": "IOFindNameForValue",
"0xffffff8000a04420": "IOFindValueForName",
"0xffffff8000a03e30": "IOFlushProcessorCache",
"0xffffff8000a02580": "IOFree",
"0xffffff8000a029e0": "IOFreeAligned",
"0xffffff8000a02880": "IOFreeAligned_internal",
"0xffffff8000a02f60": "IOFreeContiguous",
"0xffffff8000a03c40": "IOFreeData",
"0xffffff8000a03840": "IOFreePageable",
"0xffffff8000a03050": "IOFreeTypeImpl",
"0xffffff8000a03cd0": "IOFreeTypeVarImpl",
"0xffffff8000a024b0": "IOFree_internal",

The trace file we obtained from the fuzzer is just a text file containing the addresses of executed instructions. The supporting tools include a symbolize.py script that uses the previously generated symbol store to symbolize a trace. Running it on ipv6.trace results in a symbolized trace:

ip6_input+0x36
ip6_input+0x40
ip6_input+0x4a
ip6_input+0x51
ip6_input+0x56
bzero
bzero+0x3
bzero+0x5
bzero+0x6
bzero+0x8
ip6_input+0x5b
ip6_input+0x66
ip6_input+0x10b
ip6_input+0x127
ip6_input+0x129
ip6_input+0x12e
ip6_input+0x130
m_tag_locate
m_tag_locate+0x1
m_tag_locate+0x4
m_tag_locate+0x8
m_tag_locate+0xa
m_tag_locate+0x37
m_tag_locate+0x4b
m_tag_locate+0x4d
m_tag_locate+0x4e
ip6_input+0x135
ip6_input+0x138
ip6_input+0x145
ip6_input+0x148
ip6_input+0x14a
ip6_input+0x14f
ip6_input+0x151
m_tag_locate
m_tag_locate+0x1
m_tag_locate+0x4
m_tag_locate+0x8
m_tag_locate+0xa
m_tag_locate+0x14
...
lck_mtx_unlock+0x4e
lck_mtx_unlock+0x52
lck_mtx_unlock+0x54
lck_mtx_unlock+0x5a
lck_mtx_unlock+0x5c
lck_mtx_unlock+0x5e
ip6_input+0x1890
ip6_input+0x189b
ip6_input+0x18a2
ip6_input+0x18a5
ip6_input+0x18c0
ip6_input+0x18c7
ip6_input+0x18ca
ip6_input+0x18e3
ip6_input+0x18ea
ip6_input+0x18ed
ip6_input+0x18f1
ip6_input+0x18f7
ip6_input+0x18fe
ip6_input+0x18ff
ip6_input+0x1901
ip6_input+0x1903
ip6_input+0x1905
ip6_input+0x1907
ip6_input+0x1908

The complete trace is longer, but at the end we can easily see, by comparing the function offsets, that the retq instruction was reached.
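For reference, the symbolization step boils down to a nearest-preceding-symbol lookup. The following is not the actual symbolize.py from the repository, just a minimal Python sketch of the idea, assuming the trace contains one hex address per line:

import bisect
import json

# Load the symbol store produced earlier: {"0xffffff...": "name", ...}
with open("symbol-store.json") as f:
    store = json.load(f)

symbols = sorted((int(addr, 16), name) for addr, name in store.items())
addresses = [addr for addr, _ in symbols]

def symbolize(rip):
    # Find the closest symbol at or below the given address.
    i = bisect.bisect_right(addresses, rip) - 1
    if i < 0:
        return hex(rip)
    base, name = symbols[i]
    offset = rip - base
    return name if offset == 0 else "{}+{:#x}".format(name, offset)

with open("ipv6.trace") as trace:
    for line in trace:
        print(symbolize(int(line.strip(), 16)))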

Trace files are also compatible with the Lighthouse plugin for IDA, so we can just load them into it to get a visual coverage overview:

Lighthouse coverage overview: green nodes have been hit.

Avoiding checksum problems

Even without manual coverage analysis, with IPv6 as a target it would quickly become apparent that a feedback-driven fuzzer isn't getting very far. This is due to the various checksums present in higher-level protocol packets, for example TCP checksums. Randomly mutated data invalidates the checksum and the packet gets rejected early.

There are two options to deal with this issue: We can fix the checksum after mutating the data, or leverage instrumentation to NOP out the code that performs the check. This is easily achieved by setting yet another breakpoint in the fuzzing harness that will simply modify the return value of the checksum check:

  // Patch the tcp_checksum check.
  retq = Gva_t(0xffffff80125fbe57);
  if (!g_Backend->SetBreakpoint(retq, [](Backend_t *Backend) {
        g_Backend->Rax(0);
      })) {
    return false;
  }
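The other option, fixing the checksum after mutation, would instead be done in InsertTestcase before the data is written into guest memory. The following is only a rough sketch of what that could look like for the ICMPv6 message in our test case. It is not part of the harness above, and it assumes the buffer layout used earlier (a 40-byte IPv6 header followed by the upper-layer message), operating on a mutable copy of the test case:

#include <cstdint>
#include <cstring>

static uint32_t SumWords(const uint8_t *Data, size_t Len, uint32_t Sum) {
  // Sum 16-bit big-endian words; an odd trailing byte is padded with zero.
  for (size_t i = 0; i + 1 < Len; i += 2)
    Sum += (uint32_t(Data[i]) << 8) | Data[i + 1];
  if (Len & 1)
    Sum += uint32_t(Data[Len - 1]) << 8;
  return Sum;
}

static void FixIcmp6Checksum(uint8_t *Packet, size_t PacketSize) {
  // Pseudo-header per RFC 8200 / RFC 4443: source + destination addresses,
  // upper-layer length, three zero bytes and the next header value (58 = ICMPv6).
  const size_t PayloadSize = PacketSize - 40;
  uint8_t Pseudo[40] = {};
  memcpy(Pseudo, Packet + 8, 32);
  Pseudo[32] = uint8_t(PayloadSize >> 24);
  Pseudo[33] = uint8_t(PayloadSize >> 16);
  Pseudo[34] = uint8_t(PayloadSize >> 8);
  Pseudo[35] = uint8_t(PayloadSize);
  Pseudo[39] = 58;
  Packet[42] = Packet[43] = 0; // zero the ICMPv6 checksum field before summing
  uint32_t Sum = SumWords(Pseudo, sizeof(Pseudo), 0);
  Sum = SumWords(Packet + 40, PayloadSize, Sum);
  while (Sum >> 16)
    Sum = (Sum & 0xffff) + (Sum >> 16);
  const uint16_t Checksum = uint16_t(~Sum);
  Packet[42] = uint8_t(Checksum >> 8);
  Packet[43] = uint8_t(Checksum);
}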

Running the fuzzer

Now that we know that things work, we can start fuzzing. In one terminal, we start the server:

c:\work\codes\wtf\targets\ipv6_input>..\..\src\build\wtf.exe master --max_len=1064 --runs=1000000000 --target .
Seeded with 3801664353568777264
Iterating through the corpus..
Sorting through the 1 entries..
Running server on tcp://localhost:31337..

And in another, the actual fuzzing node:

c:\work\codes\wtf\targets\ipv6_input> ..\..\src\build\wtf.exe fuzz --backend=bochscpu --name IPv6_Input  --limit 5000000
 
The debugger instance is loaded with 0 items
load raw mem dump1
Done
Setting debug register status to zero.
Setting debug register status to zero.
Segment with selector 0 has invalid attributes.
Segment with selector 0 has invalid attributes.
Segment with selector 8 has invalid attributes.
Segment with selector 0 has invalid attributes.
Segment with selector 10 has invalid attributes.
Segment with selector 0 has invalid attributes.
Dialing to tcp://localhost:31337/..

You should quickly see in the server window that coverage increases and that new test cases are being found and saved:

Running server on tcp://localhost:31337..
#0 cov: 0 (+0) corp: 0 (0.0b) exec/s: -nan (1 nodes) lastcov: 8.0s crash: 0 timeout: 0 cr3: 0 uptime: 8.0s
Saving output in .\outputs\4b20f7c59a0c1a03d41fc5c3c436db7c
Saving output in .\outputs\c6cc17a6c6d8fea0b1323d5acd49377c
Saving output in .\outputs\525101cf9ce45d15bbaaa8e05c6b80cd
Saving output in .\outputs\26c094dded3cf21cf241e59f5aa42a42
Saving output in .\outputs\97ba1f8d402b01b1475c2a7b4b55bc29
Saving output in .\outputs\cfa5abf0800668a09939456b82f95d36
Saving output in .\outputs\4f63c6e22486381b907daa92daecd007
Saving output in .\outputs\1bd771b2a9a65f2419bce4686cbd1577
Saving output in .\outputs\3f5f966cc9b59e113de5fd31284df198
Saving output in .\outputs\b454d6965f113a025562ac9874446b7a
Saving output in .\outputs\00680b75d90e502fd0413c172aeca256
Saving output in .\outputs\51e31306ef681a8db35c74ac845bef7e
Saving output in .\outputs\b996cc78a4d3f417dae24b33d197defc
Saving output in .\outputs\2f456c73b5cd21fbaf647271e9439572
#10699 cov: 9778 (+9778) corp: 15 (9.1kb) exec/s: 1.1k (1 nodes) lastcov: 0.0s crash: 0 timeout: 0 cr3: 0 uptime: 18.0s
Saving output in .\outputs\3b93493ff98cf5e46c23a8b337d8242e
Saving output in .\outputs\73100aa4ae076a4cf29469ca70a360d9
#20922 cov: 9781 (+3) corp: 17 (10.0kb) exec/s: 1.0k (1 nodes) lastcov: 3.0s crash: 0 timeout: 0 cr3: 0 uptime: 28.0s
#31663 cov: 9781 (+0) corp: 17 (10.0kb) exec/s: 1.1k (1 nodes) lastcov: 13.0s crash: 0 timeout: 0 cr3: 0 uptime: 38.0s
#42872 cov: 9781 (+0) corp: 17 (10.0kb) exec/s: 1.1k (1 nodes) lastcov: 23.0s crash: 0 timeout: 0 cr3: 0 uptime: 48.0s
#53925 cov: 9781 (+0) corp: 17 (10.0kb) exec/s: 1.1k (1 nodes) lastcov: 33.0s crash: 0 timeout: 0 cr3: 0 uptime: 58.0s
#65054 cov: 9781 (+0) corp: 17 (10.0kb) exec/s: 1.1k (1 nodes) lastcov: 43.0s crash: 0 timeout: 0 cr3: 0 uptime: 1.1min
#75682 cov: 9781 (+0) corp: 17 (10.0kb) exec/s: 1.1k (1 nodes) lastcov: 53.0s crash: 0 timeout: 0 cr3: 0 uptime: 1.3min
Saving output in .\outputs\00f15aa5c6a1c822b36e33afb362e9ec

Likewise, the fuzzing node will show its progress:

The debugger instance is loaded with 0 items
load raw mem dump1
Done
Setting debug register status to zero.
Setting debug register status to zero.
Segment with selector 0 has invalid attributes.
Segment with selector 0 has invalid attributes.
Segment with selector 8 has invalid attributes.
Segment with selector 0 has invalid attributes.
Segment with selector 10 has invalid attributes.
Segment with selector 0 has invalid attributes.
Dialing to tcp://localhost:31337/..
#10437 cov: 9778 exec/s: 1.0k lastcov: 0.0s crash: 0 timeout: 0 cr3: 0 uptime: 10.0s
#20682 cov: 9781 exec/s: 1.0k lastcov: 3.0s crash: 0 timeout: 0 cr3: 0 uptime: 20.0s
#31402 cov: 9781 exec/s: 1.0k lastcov: 13.0s crash: 0 timeout: 0 cr3: 0 uptime: 30.0s
#42667 cov: 9781 exec/s: 1.1k lastcov: 23.0s crash: 0 timeout: 0 cr3: 0 uptime: 40.0s
#53698 cov: 9781 exec/s: 1.1k lastcov: 33.0s crash: 0 timeout: 0 cr3: 0 uptime: 50.0s
#64867 cov: 9781 exec/s: 1.1k lastcov: 43.0s crash: 0 timeout: 0 cr3: 0 uptime: 60.0s
#75446 cov: 9781 exec/s: 1.1k lastcov: 53.0s crash: 0 timeout: 0 cr3: 0 uptime: 1.2min
#84790 cov: 10497 exec/s: 1.1k lastcov: 0.0s crash: 0 timeout: 0 cr3: 0 uptime: 1.3min
#95497 cov: 11704 exec/s: 1.1k lastcov: 0.0s crash: 0 timeout: 0 cr3: 0 uptime: 1.5min
#105469 cov: 11761 exec/s: 1.1k lastcov: 4.0s crash: 0 timeout: 0 cr3: 0 uptime: 1.7min

Conclusion

Building this snapshot fuzzing environment on top of WTF provides several benefits. It enables us to perform precisely targeted fuzz testing of otherwise hard-to-pinpoint chunks of the macOS kernel. We can perform the actual testing on commodity CPUs, which lets us use our existing computing resources instead of being limited to a few cores. Additionally, although emulated execution is fairly slow, we can leverage Bochs to perform more complex instrumentation. Patches to the Volatility and WTF projects, as well as additional support tooling, are available in our GitHub repository.


Only one critical vulnerability included in May’s Microsoft Patch Tuesday; One other zero-day in DWM Core

14 May 2024 at 17:57

After a relatively hefty Microsoft Patch Tuesday in April, this month’s security update from the company only included one critical vulnerability across its massive suite of products and services.  

In all, May’s slate of vulnerabilities disclosed by Microsoft included 59 total CVEs, most of which are considered to be of “important” severity. There is only one moderate-severity vulnerability. 

The lone critical security issue is CVE-2024-30044, a remote code execution vulnerability in SharePoint Server. An authenticated attacker who obtains Site Owner permissions or higher could exploit this vulnerability by uploading a specially crafted file to the targeted SharePoint Server. Then, they must craft specialized API requests to trigger the deserialization of that file’s parameters, potentially leading to remote code execution in the context of the SharePoint Server. 

The Windows Mobile Broadband Driver also contains multiple remote code execution vulnerabilities. 

However, to successfully exploit these issues, an adversary would need to physically connect a compromised USB device to the victim's machine. 

Microsoft also disclosed a zero-day vulnerability in the Windows DWM Core Library, CVE-2024-30051. Desktop Window Manager (DWM) is a Windows operating system service that enables visual effects on the desktop and manages things like transitions between windows.   

An adversary could exploit CVE-2024-30051 to gain SYSTEM-level privileges.  

This vulnerability is classified as having a “low” level of attack complexity, and exploitation of this vulnerability has already been detected in the wild.  

One other issue, CVE-2024-30046, has already been disclosed prior to Patch Tuesday, but has not yet been exploited in the wild. This is a denial-of-service vulnerability in ASP.NET, a web application framework commonly used in Windows.  

Microsoft considers this vulnerability “less likely” to be exploited, as successful exploitation would require an adversary to spend a significant amount of time repeating exploitation attempts by sending constant or intermittent data to the targeted machine.   

A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page.

In response to these vulnerability disclosures, Talos is releasing a new Snort rule set that detects attempts to exploit some of them. Please note that additional rules may be released at a future date and current rules are subject to change pending additional information. Cisco Secure Firewall customers should use the latest update to their ruleset by updating their SRU. Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.  

The rules included in this release that protect against the exploitation of many of these vulnerabilities are 63419, 63420, 63422 - 63432, 63444 and 63445. There are also Snort 3 rules 300906 - 300912.

The May 2024 Security Update Review

14 May 2024 at 17:28

Welcome to the second Tuesday of May. As expected, Adobe and Microsoft have released their standard bunch of security patches. Take a break from your regular activities and join us as we review the details of their latest advisories. If you’d rather watch the full video recap covering the entire release, you can check it out here:

Apple Patches for May 2024

Apple kicked off the May release cycle with a group of updates for their macOS and iOS platforms. Most notable is a fix for CVE-2024-23296 for iOS 16.7.8 and iPadOS 16.7.8. This vulnerability is a memory corruption issue in RTKit that could allow attackers to bypass kernel memory protections. The initial patch was released back in March, but Apple noted additional fixes would be coming, and here they are. This bug is reported as being under active attack, so if you’re using a device with an affected OS, make sure you get the update.

Apple also patched the Safari bug demonstrated at Pwn2Own Vancouver by Master of Pwn Winner Manfred Paul.

Adobe Patches for May 2024

For May, Adobe released eight patches addressing 37 CVEs in Adobe Acrobat and Reader, Illustrator, Substance3D Painter, Adobe Aero, Substance3D Designer, Adobe Animate, FrameMaker, and Dreamweaver. Eight of these vulnerabilities were reported through the ZDI program. The update for Reader should be the priority. It includes multiple Critical-rated bugs that are often used by malware and ransomware gangs. While none of these bugs are under active attack, it is likely some will eventually be exploited. The patch for Illustrator also addresses a couple of Critical-rated bugs that could result in arbitrary code execution. The patch for Aero (an augmented reality authoring and publishing tool) fixes a single code execution bug. Unless I’m mistaken, this is the first Adobe patch for this product.

The update for Adobe Animate fixes eight bugs, seven of which result in Critical-rated code execution. The patch for FrameMaker also fixes several code execution bugs. These are classic open-and-own bugs that require user interaction. That’s the same for the single bug fixed in Dreamweaver. The patch for Substance 3D Painter addresses four bugs, two of which are rated Critical, while the patch for Substance 3D Designer fixes a single Important-rated memory leak.

None of the bugs fixed by Adobe this month are listed as publicly known or under active attack at the time of release. Adobe categorizes these updates as a deployment priority rating of 3.

Microsoft Patches for May 2024

This month, Microsoft released 59 CVEs in Windows and Windows Components; Office and Office Components; .NET Framework and Visual Studio; Microsoft Dynamics 365; Power BI; DHCP Server; Microsoft Edge (Chromium-based); and Windows Mobile Broadband. If you include the third-party CVEs being documented this month, the CVE count comes to 63. A total of two of these bugs came through the ZDI program. As with last month, none of the bugs disclosed at Pwn2Own Vancouver are fixed with this release. With Apple and VMware fixing the vulnerabilities reported during the event, Microsoft stands alone as the only vendor not to produce patches from the contest.

Of the new patches released today, only one is rated Critical, 57 are rated Important, and one is rated Moderate in severity. This release is roughly a third of the size of last month’s, so hopefully that’s a sign that a huge number of fixes in a single month isn’t going to be a regular occurrence.

Two of the CVEs released today are listed as under active attack, and one other is listed as publicly known at the time of the release. Microsoft doesn’t provide any indication of the volume of attacks, but the DWM Core bug appears to me to be more than a targeted attack. Let’s take a closer look at some of the more interesting updates for this month, starting with the DWM bug currently exploited in the wild:

-       CVE-2024-30051 – Windows DWM Core Library Elevation of Privilege Vulnerability
This bug allows attackers to escalate to SYSTEM on affected systems. These types of bugs are usually combined with a code execution bug to take over a target and are often used by ransomware. Microsoft credits four different groups for reporting the bug, which indicates the attacks are widespread. They also indicate the vulnerability is publicly known. Don’t wait to test and deploy this update, as exploits are likely to increase now that a patch is available to reverse engineer.

-       CVE-2024-30043 – Microsoft SharePoint Server Information Disclosure Vulnerability
This vulnerability was reported to Microsoft by ZDI researcher Piotr Bazydło and represents an XML external entity injection (XXE) vulnerability in Microsoft SharePoint Server 2019. An authenticated attacker could use this bug to read local files with SharePoint Farm service account user privileges. They could also perform an HTTP-based server-side request forgery (SSRF), and – most importantly – perform NTLM relaying as the SharePoint Farm service account. Bugs like this show why info disclosure vulnerabilities shouldn’t be ignored or deprioritized.

-       CVE-2024-30033 – Windows Search Service Elevation of Privilege Vulnerability
This is another bug reported through the ZDI program and has a similar impact to the bug currently being exploited, although it manifests through a different mechanism. This is a link following bug in the Windows Search service. By creating a pseudo-symlink, an attacker could redirect a delete call to delete a different file or folder as SYSTEM. We discussed how this could be used to elevate privileges here. The delete happens when restarting the service. A low-privileged user can't restart the service directly. However, this could easily be combined with a bug that allows a low-privileged user to terminate any process by PID. After failure, the service will restart automatically, successfully triggering this vulnerability.

-       CVE-2024-30050 – Windows Mark of the Web Security Feature Bypass Vulnerability
We don’t normally detail Moderate-rated bugs, but this type of security feature bypass is quite in vogue with ransomware gangs right now. They zip their payload to bypass network and host-based defenses, then use a Mark of the Web (MotW) bypass to evade SmartScreen or Protected View in Microsoft Office. While we have no indication this bug is being actively used, we see the technique used often enough to call it out. Bugs like this one show why Moderate-rated bugs shouldn’t be ignored or deprioritized.

Here’s the full list of CVEs released by Microsoft for May 2024:

CVE Title Severity CVSS Public Exploited Type
CVE-2024-30051 Windows DWM Core Library Elevation of Privilege Vulnerability Important 7.8 Yes Yes EoP
CVE-2024-30040 Windows MSHTML Platform Security Feature Bypass Vulnerability Important 8.8 No Yes SFB
CVE-2024-30046 ASP.NET Core Denial of Service Vulnerability Important 5.9 Yes No DoS
CVE-2024-30044 Microsoft SharePoint Server Remote Code Execution Vulnerability Critical 8.8 No No RCE
CVE-2024-30045 .NET and Visual Studio Remote Code Execution Vulnerability Important 6.3 No No RCE
CVE-2024-30053 † Azure Migrate Spoofing Vulnerability Important 7.5 No No Spoofing
CVE-2024-32002 * CVE-2023-32002 Recursive clones on case-insensitive filesystems that support symlinks are susceptible to Remote Code Execution Important 9.8 No No RCE
CVE-2024-30019 DHCP Server Service Denial of Service Vulnerability Important 6.5 No No DoS
CVE-2024-30047 Dynamics 365 Customer Insights Spoofing Vulnerability Important 7.6 No No Spoofing
CVE-2024-30048 Dynamics 365 Customer Insights Spoofing Vulnerability Important 7.6 No No Spoofing
CVE-2024-32004 * GitHub: CVE-2024-32004 GitHub: CVE-2023-32004 Remote Code Execution while cloning special-crafted local repositories Important 8.8 No No RCE
CVE-2024-30041 Microsoft Bing Search Spoofing Vulnerability Important 5.4 No No Spoofing
CVE-2024-30007 Microsoft Brokering File System Elevation of Privilege Vulnerability Important 8.8 No No EoP
CVE-2024-30042 Microsoft Excel Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2024-26238 Microsoft PLUGScheduler Scheduled Task Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30054 Microsoft Power BI Client Javascript SDK Information Disclosure Vulnerability Important 6.5 No No Info
CVE-2024-30043 Microsoft SharePoint Server Information Disclosure Vulnerability Important 6.5 No No Info
CVE-2024-30006 Microsoft WDAC OLE DB provider for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29994 Microsoft Windows SCSI Class System File Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30027 NTFS Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30028 Win32k Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30030 Win32k Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30038 Win32k Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30034 Windows Cloud Files Mini Filter Driver Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-30031 Windows CNG Key Isolation Service Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-29996 Windows Common Log File System Driver Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30025 Windows Common Log File System Driver Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30037 Windows Common Log File System Driver Elevation of Privilege Vulnerability Important 7.5 No No EoP
CVE-2024-30016 Windows Cryptographic Services Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-30020 Windows Cryptographic Services Remote Code Execution Vulnerability Important 8.1 No No RCE
CVE-2024-30036 Windows Deployment Services Information Disclosure Vulnerability Important 6.5 No No Info
CVE-2024-30032 Windows DWM Core Library Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30035 Windows DWM Core Library Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30008 Windows DWM Core Library Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-30011 Windows Hyper-V Denial of Service Vulnerability Important 6.5 No No DoS
CVE-2024-30010 Windows Hyper-V Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-30017 Windows Hyper-V Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-30018 Windows Kernel Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-29997 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-29998 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-29999 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30000 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30001 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30002 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30003 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30004 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30005 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30012 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30021 Windows Mobile Broadband Driver Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-30039 Windows Remote Access Connection Manager Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-30009 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-30014 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 7.5 No No RCE
CVE-2024-30015 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 7.5 No No RCE
CVE-2024-30022 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 7.5 No No RCE
CVE-2024-30023 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 7.5 No No RCE
CVE-2024-30024 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 7.5 No No RCE
CVE-2024-30029 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 7.5 No No RCE
CVE-2024-30033 Windows Search Service Elevation of Privilege Vulnerability Important 7 No No EoP
CVE-2024-30049 Windows Win32 Kernel Subsystem Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-30059 Microsoft Intune for Android Mobile Application Management Tampering Vulnerability Important 6.1 No No Tampering
CVE-2024-30050 Windows Mark of the Web Security Feature Bypass Vulnerability Moderate 5.4 No No SFB
CVE-2024-4331 * Chromium: CVE-2024-4331 Use after free in Picture In Picture High N/A No No RCE
CVE-2024-4368 * Chromium: CVE-2024-4368 Use after free in Dawn High N/A No No RCE

* Indicates this CVE had been released by a third party and is now being included in Microsoft releases.

† Indicates further administrative actions are required to fully address the vulnerability.

 

There’s just one Critical-rated bug this month, and it deals with a remote code execution (RCE) vulnerability in SharePoint server. An authenticated attacker could use this bug to execute arbitrary code in the context of the SharePoint Server. While permissions are needed for this to occur, any authorized user on the server has the needed level of permissions.

Looking at the other RCE bugs, we see a lot of vulnerabilities in rarely used protocols. The Windows Mobile Broadband driver and the Routing and Remote Access Service (RRAS) make up the bulk of this category. More notable are the two bugs in Hyper-V. One of these would allow an authenticated attacker to execute code on the host system. This would result in a guest-to-host escape, but Microsoft doesn’t indicate what level the code execution occurs on the host OS. After a couple of months with many SQL-related fixes, there’s just one this month. As with the previous bugs, you would need to connect to a malicious SQL server. The bug in Cryptographic Services requires a machine-in-the-middle (MITM) but could lead to a malicious certificate being imported onto the target system. The RCE bugs are rounded out with open-and-own style bugs in Excel and .NET and Visual Studio.

Moving on to the elevation of privilege (EoP) patches in this month’s release, almost all lead to SYSTEM-level code execution if an authenticated user runs specially crafted code. While there isn’t a lot else to say about these bugs, they are often used by attackers to take over a system when combined with a code execution bug – like the Excel bug mentioned above. They convince a user to open a specially crafted Excel document that executes the EoP and takes over the system. The lone exception to this is the bug in the Brokering File System component. The vulnerability allows attackers to gain the ability to authenticate against a remote host using the current user’s credentials. The attack could be launched from a low-privileged AppContainer, which would allow the attacker to execute code or access resources at a higher integrity level than that of the AppContainer execution environment.

We’ve already discussed the MotW security feature bypass (SFB), and the only other SFB vulnerability receiving a fix this month is the MSHTML engine. Just when you thought you were safe from Internet Explorer, the Trident engine rears its ugly head. This bug allows an unauthenticated attacker to get code execution if they can convince a user to open a malicious document. The code execution occurs in the context of the user, so this is another reminder not to log on with Admin privileges unless you absolutely need to.

There are only seven information disclosure bugs receiving fixes this month, and we’ve already covered the one in SharePoint. As usual, most of these vulnerabilities only result in info leaks consisting of unspecified memory contents. The bug in Power BI could result in the disclosure of “sensitive information,” but Microsoft doesn’t narrow down what type of “sensitive information” could be leaked. Similarly, the bug in Deployment Services could leak “file contents.” Microsoft provides no information on whether that’s any arbitrary file contents or only specific files, so your guess is as good as mine.

The May release includes four spoofing bugs. The first is a stored cross-site scripting (XSS) bug in Azure Migrate. There’s not a straightforward patch for this one. You need the latest Azure Migrate Agent and ConfigManager updates. More info on how to do that can be found here. There are two spoofing bugs in Dynamics 365, but they read more like XSS bugs. The final spoofing bug addressed this month is in the Bing search engine. An attacker could modify the content of the vulnerable link to redirect the victim to a malicious site.

There’s a single Tampering bug addressed in this release, affecting Microsoft Intune Mobile Application Management. An attacker could gain sensitive information on a target device that has been rooted.

The final bugs for May are Denial-of-Service (DoS) vulnerabilities in ASP.NET, DHCP server, and Hyper-V. Unfortunately, Microsoft provides no additional information about these bugs and how they would manifest on affected systems.

There are no new advisories in this month’s release.

Looking Ahead

The next Patch Tuesday of 2024 will be on June 11, and I’ll return with details and patch analysis then. Until then, stay safe, happy patching, and may all your reboots be smooth and clean!

How Scammers Hijack Your Instagram

14 May 2024 at 15:13

Authored by Vignesh Dhatchanamoorthy, Rachana S

Instagram, with its vast user base and dynamic platform, has become a hotbed for scams and fraudulent activities. From phishing attempts to fake giveaways, scammers employ a range of tactics to exploit user trust and vulnerability. These scams often prey on people’s desire for social validation, financial gain, or exclusive opportunities, luring them into traps that can compromise their personal accounts and identity.

McAfee has observed a concerning scam emerging on Instagram, where scammers are exploiting the platform’s influencer program to deceive users. This manipulation of the influencer ecosystem underscores the adaptability and cunning of online fraudsters in their pursuit of ill-gotten gains.

Brand Ambassador and influencer program scams:

The Instagram influencer program, designed to empower content creators and influencers by providing opportunities for collaboration and brand partnerships, has inadvertently become a target for exploitation. Scammers are leveraging the allure of influencer status to lure unsuspecting individuals into fraudulent schemes, promising fame, fortune, and exclusive opportunities in exchange for participation.

The first step involves a cybercrook creating a dummy account and using it to hack into a target’s Instagram account. Using those hacked accounts, the hackers then share posts about Bitcoin and other cryptocurrencies. Finally, the hacked accounts are used to scam the target’s friends with a request that they vote for them to win an influencer contest.

After this series of steps is complete, the scammer will first identify the target and then send them a link with a Gmail email address to vote in their favor.

Fig 1: Scammer Message

While the link in the voting request message likely leads to a legitimate Instagram page, victims are often directed to an Instagram email update page upon clicking — not the promised voting page.  Also, since the account sending the voting request is likely familiar to the scam target, they are more likely to enter the scammer’s email ID without examining it closely.

During our research, we saw scammers send their targets links to Instagram’s Accounts Center, like the one below: hxxp[.]//accountscenter.instagram.com/personal_info/contact_points/contact_point_type=email&dialog_type=add_contact_point

Fig 2. Email Updating Page

We took this opportunity to gain more insight into how these deceptive tactics are carried out, creating two email accounts (scammerxxxx.com and victimxxxx.com) and a dummy Instagram account registered with the latter (victimxxxx.com) for testing purposes.

Fig 3. Victim’s Personal Details

We visited the URL provided in the chat and entered our testing email ID scammerxxxx.com instead of the email address provided by the scammer, which was “[email protected]”.

Fig 4. Adding Scammer’s Email Address in Victim Account

After adding the scammerxxxx.com address in the email address field, we received a notification stating, “Adding this email will replace victimxxxx.com on this Instagram account”.

This is the point at which a scam target will fall victim to this type of scam if they are not aware that they are giving someone else, with access to the scammerxxxx.com email address, control of their Instagram account.

After selecting Next, we were redirected to the confirmation code page. Here, scammers will send the confirmation code received in their email account and provide that code to victims, via an additional Instagram message, to complete the email updating process.

In our testing case, the verification code was sent to the email address scammerxxxx.com.

Fig 5. Confirmation Code Page

We received the verification code in our scammerxxxx.com account and submitted it on the confirmation code page.

Fig 6. Confirmation Code Mail

Once the ‘Add an Email Address’ procedure is completed, the scammer’s email address is linked to the victim’s Instagram account. As a result, the actual user will be unable to log in to their account due to the updated email address.

Fig 7. Victim’s Profile after updating Scammer’s email

Because the scammer’s email address (scammerxxxx.com) is now associated with the account, the owner (the scam victim) will not be able to access their account and will instead receive the message “Sorry, your password was incorrect. Please double-check your password.”

Fig 8. Victim trying to login to their account.

The scammer will now change the victim’s account password by using the “forgot password” function with the new, scammer email login ID.

Fig 9. Forgot Password Page

 

The password reset code will be sent to the scammer’s email address (scammerxxxx.com).

Fig 10. Reset the Password token received in the Scammer’s email

After getting the email, the scammer will “Reset your password” for the victim’s account.

Fig 11. Scammer Resetting the Password

After resetting the password, the scammer can take over the victim’s Instagram account.

Fig 12. The scammer took over the victim’s Instagram account.

To protect yourself from Instagram scams:

  • Be cautious of contests, polls, or surveys that seem too good to be true or request sensitive information.
  • Verify the legitimacy of contests or giveaways by checking the account’s authenticity, looking for official rules or terms, and researching the organizer.
  • Avoid clicking on suspicious links or providing personal information to unknown sources.
  • Enable two-factor authentication (2FA) on your Instagram account to add an extra layer of security.
  • Report suspicious activity or accounts to Instagram for investigation.
  • If any of your friends ask you to help them this way, contact them via text message or phone call first, to ensure that their account has not been hacked.


Talos joins CISA to counter cyber threats against non-profits, activists and other at-risk communities

14 May 2024 at 12:42

Cisco Talos is delighted to share updates about our ongoing partnership with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to combat cybersecurity threats facing civil society organizations.

Talos has partnered with CISA on several initiatives through the Joint Cyber Defense Collaborative (JCDC), including sharing intelligence on strategic threats of interest.

Adversaries are leveraging advancements in technology and the interconnectedness of the world’s networks to undermine democratic values and interests by targeting high-risk communities within civil society. According to CISA, these communities include activists, journalists, academics and organizations engaged in advocacy and humanitarian causes. Consequently, the U.S. government has elevated efforts in recent years to counter cyber threats that have placed the democratic freedoms of organizations and individuals at heightened risk.

The JCDC’s High-Risk Community Protection (HRCP) initiative is one such measure that brings together government, technology companies, and civil society organizations to strengthen the security of entities at heightened risk of cyber threat targeting and transnational repression.

The HRCP initiative’s outputs — including a threat mitigation guide for civil society, operational best practices, and online resources for communities at risk — aim to counter the threats posed by state-sponsored advanced persistent threats (APTs) and, increasingly, private-sector offensive actors (PSOA).

Our ongoing partnership with CISA and contributions to the JCDC’s HRCP initiative are consistent with Cisco’s security mission to protect data, systems, and networks, and uphold and respect the human rights of all.

Spyware threats persist despite government and private sector measures

As we’ve written about, the use of commercially available spyware to target high-profile or at-risk individuals and organizations is a global problem. This software can often track targets’ exact location, steal their messages and personal information, or even listen in on phone calls. Private companies, commonly referred to as “PSOAs” or “cyber mercenaries,” have monetized the development of these offensive tools, selling their spyware to any government willing to pay regardless of the buyer's intended use.

Commercial spyware tools can threaten democratic values by enabling governments to conduct covert surveillance on citizens, undermining privacy rights and freedom of expression. Lacking any international laws or norms around the use of commercial spyware, this surveillance can lead to the suppression of dissent, erosion of trust in democratic institutions, and consolidation of power in the hands of authoritarian governments.

The U.S. and its partners have taken steps to curb the proliferation of these dangerous tools. These include executive orders banning the use of certain spyware by U.S. government agencies, export restrictions and sanctions on companies or individuals involved in the development and sale of spyware (such as the recent sanctioning of members of the Intellexa Commercial Spyware Consortium), and diplomatic efforts with international partners and allies to pressure countries that harbor or support such firms.

Private industry has also played a substantial role in countering this threat, including by publishing research and publicly attributing PSOAs and countries involved in digital repression. Some companies have also developed countersurveillance technologies (such as Apple’s Lockdown Mode) to protect high-risk users and have initiated legal challenges through lawsuits against PSOAs alleging privacy violations. In March 2023, Cisco proudly became principal co-author of the Cybersecurity Tech Accord principles limiting offensive operations in cyberspace, joining several technology partners in calling for industry-wide principles to counter PSOAs.

Talos intelligence fuels HRCP threat mitigation guide for civil society

Talos has tracked the evolution of the commercial spyware industry and APT targeting of high-risk industries, placing us in a strong position to contribute our knowledge to the HRCP effort. Our research on two key threat actors — the Intellexa Commercial Spyware Consortium and the China state-sponsored Mustang Panda group — informed the HRCP guide’s overview of tactics commonly used against high-risk communities.

Talos has closely monitored threats stemming from the Intellexa Consortium, an umbrella group of organizations and individuals that offer commercial spyware tools to global customers, including authoritarian governments. In May 2023, we conducted a technical analysis of Intellexa’s flagship PREDATOR spyware, which was initially developed by a PSOA known as Cytrox. Our research specifically looked at two components of Intellexa's mobile spyware suite known as “ALIEN” and “PREDATOR,” which compose the backbone of the organization’s implant.

Our findings included an in-depth walkthrough of the infection chain, including the implant’s various information-stealing capabilities and evasion techniques. Over time, we learned more about Intellexa’s inner workings, including their spyware development timelines, product offerings, operating paradigms and procedures.

Our research on Mustang Panda also contributed to the mitigation guide by illustrating how government-sponsored threat actors have targeted civil society organizations with their own signature tools and techniques. This APT is heavily focused on political espionage and has targeted non-governmental organizations (NGOs), religious institutions, think tanks, and activist groups worldwide. Mustang Panda commonly sends spear phishing emails using enticing lures to gain access to victim networks and install custom implants, such as PlugX, that enable device control and user monitoring. The group has continuously evolved its delivery mechanisms and payloads to ensure long-term uninterrupted access, underscoring the threat posed to civil society and others.

What is next for this growing threat?

Threat actors with ties to Russia, China, and Iran have primarily been responsible for this heightened threat activity, according to industry reporting. But the threat is not limited to them. Last year, a U.K. National Cyber Security Centre (NCSC) estimate found that at least 80 countries have purchased commercial spyware, highlighting how the proliferation of these tools enables even more actors to join the playing field.

Yet we are staying ahead of the game. Talos researchers are continuously identifying the latest trends in threat actor targeting which include not only the use of commercial spyware but other tools and techniques identified in the HRCP guide, such as spear phishing and trojanized applications. Our intelligence powers Cisco’s security portfolio, ensuring customer safety.

Talos created a reporting resource where individuals or organizations suspected of being infected with commercial spyware can contact Talos’ research team ([email protected]) to assist in furthering the community’s knowledge of these threats.

We are determined to continue our work with CISA, other agencies, and industry leaders, leveraging the power of partnerships to protect Cisco customers and strengthen community resilience against common adversaries.

A peek into build provenance for Homebrew

14 May 2024 at 13:00

By Joe Sweeney and William Woodruff

Last November, we announced our collaboration with Alpha-Omega and OpenSSF to add build provenance to Homebrew.

Today, we are pleased to announce that the core of that work is live and in public beta: homebrew-core is now cryptographically attesting to all bottles built in the official Homebrew CI. You can verify these attestations with our (currently external, but soon upstreamed) brew verify command, which you can install from our tap.

This means that, from now on, each bottle built by Homebrew will come with a cryptographically verifiable statement binding the bottle’s content to the specific workflow and other build-time metadata that produced it. This metadata includes (among other things) the git commit and GitHub Actions run ID for the workflow that produced the bottle, making it a SLSA Build L2-compatible attestation.

In effect, this injects greater transparency into the Homebrew build process, and diminishes the threat posed by a compromised or malicious insider by making it impossible to trick ordinary users into installing non-CI-built bottles.

This work is still in early beta, and involves features and components still under active development within both Homebrew and GitHub. As such, we don’t recommend that ordinary users begin to verify provenance attestations quite yet.

For the adventurous, however, read on!

A quick Homebrew recap

Homebrew is an open-source package manager for macOS and Linux. Homebrew’s crown jewel is homebrew-core, a default repository of over 7,000 curated open-source packages that ship by default with the rest of Homebrew. homebrew-core’s packages are downloaded hundreds of millions of times each year, and form the baseline tool suite (node, openssl, python, go, etc.) for programmers using macOS for development.

One of Homebrew’s core features is its use of bottles: precompiled binary distributions of each package that speed up brew install and ensure its consistency between individual machines. When a new formula (the machine-readable description of how the package is built) is updated or added to homebrew-core, Homebrew’s CI (orchestrated through BrewTestBot) automatically triggers a process to create these bottles.

After a bottle is successfully built and tested, it’s time for distribution. BrewTestBot takes the compiled bottle and uploads it to GitHub Packages, Homebrew’s chosen hosting service for homebrew-core. This step ensures that users can access and download the latest software version directly through Homebrew’s command-line interface. Finally, BrewTestBot updates the changed formula’s bottle references to point to the latest bottle builds, ensuring that users receive the updated bottle upon their next brew update.

In sum: Homebrew’s bottle automation increases the reliability of homebrew-core by removing humans from the software building process. In doing so, it also eliminates one specific kind of supply chain risk: by lifting bottle builds away from individual Homebrew maintainers into the Homebrew CI, it reduces the likelihood that a maintainer’s compromised development machine could be used to launch an attack against the larger Homebrew user base1.

At the same time, there are other aspects of this scheme that an attacker could exploit: an attacker with sufficient permissions could upload malicious builds directly to homebrew-core’s bottle storage, potentially leveraging alert fatigue to trick users into installing despite a checksum mismatch. More concerningly, a compromised or rogue Homebrew maintainer could surreptitiously replace both the bottle and its checksum, resulting in silently compromised installs for all users from that point onward.

This scenario is a singular but nonetheless serious weakness in the software supply chain, one that is well addressed by build provenance.

Build provenance

In a nutshell, build provenance provides cryptographically verifiable evidence that a software package was actually built by the expected “build identity” and not tampered with or secretly inserted by a privileged attacker. In effect, build provenance offers the integrity properties of a strong cryptographic digest, combined with an assertion that the artifact was produced by a publicly auditable piece of build infrastructure.

In the case of Homebrew, that “build identity” is a GitHub Actions workflow, meaning that the provenance for every bottle build attests to valuable pieces of metadata like the GitHub owner and repository, the branch that the workflow was triggered from, the event that triggered the workflow, and even the exact git commit that the workflow ran from.

This data (and more!) is encapsulated in a machine-readable in-toto statement, giving downstream consumers the ability to express complex policies over individual attestations.
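
The full statement is too long to reproduce here, but a downstream consumer with GitHub’s gh CLI (which brew itself wraps internally, as noted below) can already verify an artifact against the repository whose CI claims to have built it. The bottle filename below is hypothetical:

# Check that this (hypothetical) bottle carries a valid attestation from Homebrew/homebrew-core's CI.
gh attestation verify bash--5.2.26.arm64_sonoma.bottle.tar.gz --repo Homebrew/homebrew-core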

Build provenance and provenance more generally are not panaceas: they aren’t a substitute for application-level protections against software downgrades or confusion attacks, and they can’t prevent “private conversation with Satan” scenarios where the software itself is malicious or compromised.

Despite this, provenance is a valuable building block for auditable supply chains: it forces attackers into the open by committing them to public artifacts on a publicly verifiable timeline, and reduces the number of opaque format conversions that an attacker can hide their payload in. This is especially salient in cases like the recent xz-utils backdoor, where the attacker used a disconnect between the upstream source repository and backdoored tarball distribution to maintain their attack’s stealth. Or in other words: build provenance won’t stop a fully malicious maintainer, but it will force their attack into the open for review and incident response.

Our implementation

Our implementation of build provenance for Homebrew is built on GitHub’s new artifact attestations feature. We were given early (private beta) access to the feature, including the generate-build-provenance action and gh attestation CLI, which allowed us to iterate rapidly on a design that could be easily integrated into Homebrew’s pre-existing CI.

This gives us build provenance for all current and future bottle builds, but we were left with a problem: Homebrew has a long “tail” of pre-existing bottles that are still referenced in formulae, including bottles built on (architecture, OS version) tuples that are no longer supported by GitHub Actions [2]. This tail is used extensively, leaving us with a dilemma:

  1. Attempt to rebuild all old bottles. This is technically and logistically infeasible, both due to the changes in GitHub Actions’ own supported runners and significant toolchain changes between macOS versions.
  2. Only verify a bottle’s build provenance if present. This would effectively punch a hole in the intended security contract for build provenance, allowing an attacker to downgrade to a lower degree of integrity simply by stripping off any provenance metadata.

Neither of these solutions was workable, so we sought a third. Instead of either rebuilding the world or selectively verifying, we decided to create a set of backfilled build attestations, signed by a completely different repository (our tap) and workflow. With a backfilled attestation behind each bottle, verification looks like a waterfall:

  1. We first check for build provenance tied to the “upstream” repository with the expected workflow, i.e. Homebrew/homebrew-core with publish-commit-bottles.yml.
  2. If the “upstream” provenance is not present, we check for a backfilled attestation before a specified cutoff date from the backfill identity, i.e. trailofbits/homebrew-brew-verify with backfill_signatures.yml.
  3. If neither is present, then we produce a hard failure.

This gives us the best of both worlds: the backfill allows us to uniformly fail if no provenance or attestation is present (eliminating downgrades), without having to rebuild every old homebrew-core bottle. The cutoff date then adds an additional layer of assurance, preventing an attacker from attempting to use the backfill attestation to inject an unexpected bottle.
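
A minimal shell sketch of this waterfall, assuming the gh CLI is available; it is deliberately simplified (the real integration also pins the exact signing workflows named above and enforces the backfill cutoff date), and the bottle path is hypothetical:

# Hypothetical path to a downloaded bottle; in practice brew handles this internally.
BOTTLE="bash--5.2.26.arm64_sonoma.bottle.tar.gz"

# 1. Prefer "upstream" provenance from Homebrew/homebrew-core.
if gh attestation verify "$BOTTLE" --repo Homebrew/homebrew-core; then
  echo "verified: upstream CI provenance"
# 2. Otherwise, accept a backfilled attestation from the Trail of Bits tap.
elif gh attestation verify "$BOTTLE" --repo trailofbits/homebrew-brew-verify; then
  echo "verified: backfilled attestation"
# 3. Otherwise, fail hard.
else
  echo "no provenance or backfill attestation found for $BOTTLE" >&2
  exit 1
fi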

We expect the tail of backfilled bottle attestations to decrease over time, as formulae turn over towards newer versions. Once all reachable bottles are fully turned over, Homebrew will be able to remove the backfill check entirely and assert perfect provenance coverage!

Verifying provenance today

As mentioned above: this feature is in an early beta. We’re still working out known performance and UX issues; as such, we do not recommend that ordinary users try it yet.

With that being said, adventuresome early adopters can give it a try with two different interfaces:

  1. A dedicated brew verify command, available via our third-party tap
  2. An early upstream integration into brew install itself.

For brew verify, simply install our third-party tap. Once installed, the brew verify subcommand will become usable:

brew update
brew tap trailofbits/homebrew-brew-verify
brew verify --help
brew verify bash

Going forward, we’ll be working with Homebrew to upstream brew verify directly into brew as a developer command.

For brew install itself, set HOMEBREW_VERIFY_ATTESTATIONS=1 in your environment:

brew update
export HOMEBREW_VERIFY_ATTESTATIONS=1
brew install cowsay

Regardless of how you choose to experiment with these new features, certain caveats apply:

  • Both brew verify and brew install wrap the gh CLI internally, and will bootstrap gh locally if it isn’t already installed. We intend to replace our use of gh attestation with a pure-Ruby verifier in the medium term.
  • The build provenance beta depends on authenticated GitHub API endpoints, meaning that gh must have access to a suitable access credential. If you experience initial failures with brew verify or brew install, try running gh auth login or setting HOMEBREW_GITHUB_API_TOKEN to a personal access token with minimal permissions.
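
For example, either of the following should unblock verification (the token value here is only a placeholder):

gh auth login
# ...or, alternatively, supply a minimally scoped personal access token:
export HOMEBREW_GITHUB_API_TOKEN="<personal-access-token>"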

If you hit a bug or unexpected behavior while experimenting with brew install, please report it! Similarly, for brew verify: please send any reports directly to us.

Looking forward

Everything above concerns homebrew-core, the official repository of Homebrew formulae. But Homebrew also supports third-party repositories (“taps”), which provide a minority, but significant, number of overall bottle installs. These repositories also deserve build provenance, and we have ideas for accomplishing that!

Further out, we plan to take a stab at source provenance as well: Homebrew’s formulae already hash-pin their source artifacts, but we can go a step further and additionally assert that source artifacts are produced by the repository (or other signing identity) that’s latent in their URL or otherwise embedded into the formula specification. This will compose nicely with GitHub’s artifact attestations, enabling a hypothetical formula-level DSL for declaring and verifying source provenance.

Stay tuned for further updates in this space and, as always, don’t hesitate to contact us! We’re interested in collaborating on similar improvements for other open-source packaging ecosystems, and would love to hear from you.

Last but not least, we’d like to offer our gratitude to Homebrew’s maintainers for their development and review throughout the process. We’d also like to thank Dustin Ingram for his authorship and design of the original proposal, the GitHub Package Security team, and Michael Winser and the rest of Alpha-Omega for their vision and support for a better, more secure software supply chain.

[1] In the not-too-distant past, Homebrew’s bottles were produced by maintainers on their own development machines and uploaded to a shared Bintray account. Mike McQuaid’s 2023 talk provides an excellent overview of the history of Homebrew’s transition to CI/CD builds.
[2] Or easy to provide with self-hosted runners, which Homebrew uses for some builds.

CVE-2024-33625

2 May 2024 at 10:50

CWE-259: USE OF HARD-CODED PASSWORD

The application code contains a hard-coded JWT signing key. This could result in an attacker forging JWT tokens to bypass authentication.

Successful exploitation of these vulnerabilities could result in an attacker:

  • bypassing authentication and gaining administrator privileges
  • forging JWT tokens to bypass authentication
  • writing arbitrary files to the server and achieving code execution
  • gaining access to services with the privileges of a PowerPanel application
  • gaining access to the testing or production server
  • learning passwords and authenticating with user or administrator privileges
  • injecting SQL syntax
  • writing arbitrary files to the system
  • executing remote code
  • impersonating any client in the system and sending malicious data
  • obtaining data from throughout the system after gaining access to any device
