
New mobile malware family now also targets Belgian financial apps

11 May 2021 at 15:14

While banking trojans have been around for a very long time now, we have never seen a mobile malware family attack the applications of Belgian financial institutions. Until today…

Earlier this week, the Italy-based security firm Cleafy published an article about a new Android malware family, which they dubbed TeaBot. The sample we will take a look at doesn't use a lot of obfuscation and only has a limited set of features. What is interesting, though, is that TeaBot actually attacks the mobile applications of Belgian financial institutions.

This is quite surprising, since banking trojans typically use a phishing attack to acquire the credentials of unsuspecting victims. Those credentials would be fairly useless against Belgian financial applications, as they all have secure device enrollment and authentication flows that are resilient against phishing attacks.

So let’s take a closer look at how these banking trojans work, how they are actually trying to attack Belgian banking apps and what can be done to protect these apps.


  • Typical banking malware uses a combination of Android accessibility services and overlay windows to construct an elaborate phishing attack
  • Belgian apps are being targeted with basic phishing attacks and keyloggers which should not result in an account takeover

Android Overlay Attacks

There have been numerous articles written on Android Overlay attacks, including a very recent one from F-Secure labs: “How are we doing with Android’s overlay attacks in 2020?” For those who have never heard of it before, let’s start with a small overview.

Drawing on top of other apps through overlays (SYSTEM_ALERT_WINDOW)

The Android OS allows apps to draw on top of other apps after they have obtained the SYSTEM_ALERT_WINDOW permission. There are valid use cases for this, with Facebook Messenger’s chat heads being the typical example. These chat bubbles stay on top of any other application to allow the user to quickly access their conversations without having to go to the Messenger app.

Overlays have two interesting properties: whether or not they are transparent, and whether or not they are interactive. If an overlay is transparent, you will be able to see whatever is underneath it (either another app or the home screen), and if an overlay is interactive, it will register any screen touches, while the app underneath will not. Below you can see two examples of this. On the left, there's Facebook's Messenger app, which has many interactive views but also some transparent parts at the top, while on the right you see Twilight, a blue light filter that covers the entire screen in a semi-transparent way without any interactive elements in the overlay. The controls that you do see with Twilight belong to the actual Twilight app, which is opened underneath the red overlay.

Until very recently, if the app was installed through the Google Play store (instead of through sideloading or third-party app stores), the application automatically received this permission, without even a confirmation dialog for the user! After much abuse by banking malware distributed through the Play store, Google has added an additional manual verification step to the approval process for apps on the Google Play store: if an app wants to receive the permission without requesting it from the user, it needs special approval from Google. Of course, an app can still manually request this permission from the user, and Android's information for this permission looks rather innocent: “This may interfere with your use of other apps”.

The permission is fairly benign in the hands of the Facebook Messenger app or Twilight, but for mobile malware, the ability to draw on top of other apps is extremely interesting. There are a few ways in which you can use this to attack the user:

  1. Create a fake UI on top of a real app that tricks the user into touching specific locations on the screen. Those locations will not be interactive, and will thus propagate the touch to the underlying application. As a result, the user performs actions in the underlying app without realizing it. This is often called Tapjacking.
  2. Create interactive fields on top of key fields of the app in order to harvest information such as usernames and passwords. This requires the overlay to track what is being shown in the app, so that it can correctly align its own buttons and text fields. All in all, this is quite some work, and it is not often used to attack the user.
  3. Instead of only overlaying specific buttons, the overlay covers the entire app and pretends to be the app. A fully functional app (usually a webview) is shown on top of the targeted app and asks the user for their credentials. This is a full overlay attack.

These are just three possibilities, but there are many more. Researchers from Georgia Tech and UC Santa Barbara have documented different attacks in their paper, which also introduces the Cloak and Dagger attacks explained below.

Before we get into Cloak and Dagger, let’s take a look at a few other dangerous Android permissions first.

Accessibility services

Applications on Android can request the accessibility services permission, which allows them to simulate button presses or interact with UI elements outside of their own application. These apps are very useful to people with disabilities who need a bit of extra help to navigate their smartphone. For example, the Google TalkBack application will read out any UI element that is touched on the screen, and requires a double click to actually register as a button press. An alternative application is the Voice Access app which tags every UI element with a number and allows you to select them by using voice commands.

Left: Giving permission to the TalkBack service. Android clearly indicates the dangers of giving this permission
Middle: TalkBack uses text-to-speech to read the description that the user taps
Right: Voice Access adds a button to each UI control and allows you to click them through voice commands

Both of these applications can read UI elements and perform touches on the user’s behalf. Just like overlay windows, this can be a very nice feature, or very dangerous if abused. Malware could use accessibility services to create a keylogger which collects the input of a text field any time data is entered, or it could press buttons on your behalf to purchase premium features or subscriptions, or even just click advertisements.
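To make the keylogging risk concrete, here is a minimal sketch of what such a malicious service effectively does: it receives text-changed events and keeps the latest content per field. The event tuples below are a simplified, hypothetical stand-in for the AccessibilityEvent objects a real service receives from the OS.

```python
# Simplified model of an accessibility-based keylogger.
# Each event is (event_type, field_id, text) -- a stand-in for the
# AccessibilityEvent objects a real Android service receives.

def collect_input(events):
    """Keep the most recent text observed for every input field."""
    captured = {}
    for event_type, field_id, text in events:
        if event_type == "TYPE_VIEW_TEXT_CHANGED":
            captured[field_id] = text
    return captured

events = [
    ("TYPE_VIEW_TEXT_CHANGED", "username", "j"),
    ("TYPE_VIEW_TEXT_CHANGED", "username", "jo"),
    ("TYPE_VIEW_TEXT_CHANGED", "username", "john"),
    ("TYPE_VIEW_CLICKED", "login_button", ""),
]
print(collect_input(events))  # {'username': 'john'}
```

Every intermediate keystroke passes through the service; only the final state is kept here, but a real keylogger could just as well log each event.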

So let’s take a quick look at what kind of information becomes available by installing the Screen Logger app. The Screen Logger app is a legitimate application that uses accessibility features to monitor your actions. At the time of writing, the application doesn’t even request INTERNET permission, so it shouldn’t be stealing your data in any way. However, it’s always best to do these tests on a device without sensitive data which you can factory-reset. The application is very basic:

  • Install the accessibility service
  • Click the record button
  • Perform some actions and enter some text
  • Click the stop recording button

The app will then show all the information it has collected. Below are some examples of the information it collected from a test app:

The Screen logger application shows the data that was collected through an accessibility service

When enabling accessibility services, users are explicitly warned about the dangers of doing so. This makes it harder to trick the user into granting this permission, but definitely not impossible. Applications actually have a lot of control over the information that is shown to the user. Take for example the four screens below, which belong to a malware sample. All of the text indicated in red is under the control of the attacker. The first screen shows a popup window asking the user to enable the Google service (which is, of course, the name of the malware's service), and the next three screens are what the user sees while enabling the accessibility permission.

Tricking users into installing an accessibility service

Even if malware can’t convince the user to give the accessibility permission, there’s still a way to trick them using overlay windows. This approach is exactly what Cloak and Dagger does.

Cloak and Dagger

Cloak and Dagger is best explained through their own video, where they show a combination of overlay attacks and accessibility to install an application that has all permissions enabled. In the video shown below, anything that is red is non-transparent and interactive, while everything that is green or transparent is non-interactive and will let touches go through to the app underneath.

Now, over the past few years, Android has made efforts to hinder these kinds of attacks. For example, on newer versions of Android, it's not possible to configure accessibility settings while an overlay is active, and Android automatically disables any overlays when you go into the Accessibility settings page. Unfortunately, this only prevents a malware sample from giving itself accessibility permissions through overlays; it still allows malware to use social engineering tactics to trick users into enabling them.

Read SMS permission

Finally, another interesting permission for malware is the RECEIVE_SMS permission, which allows an application to read received SMS messages. While this can definitely be used to invade the user’s privacy, the main reason for malware to acquire this permission is to intercept 2FA tokens which are unfortunately often still sent through SMS. Next to SIM-swapping attacks and attacks against the SS7 infrastructure, this is another way in which those tokens can be stolen.
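To illustrate why SMS-based 2FA is such a weak link: once malware can read incoming messages, extracting the token is trivial. A sketch (the message text is invented):

```python
import re

def extract_otp(sms_body, length=6):
    """Pull the first standalone numeric code of the given length out of an SMS body."""
    match = re.search(r"\b\d{%d}\b" % length, sms_body)
    return match.group(0) if match else None

print(extract_otp("Your verification code is 483921. Do not share it."))  # 483921
```

Real malware does exactly this in its SMS broadcast receiver, then forwards the code to the C2 before the user even notices the message.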

This permission is pretty self-explanatory and a typical user will probably not grant the permission to a game that they just installed. However, by using phishing, overlays or accessibility attacks, malware can make sure the user accepts the permission.

Does this mean your device is fully compromised? Yes, and no.

Given the very intrusive nature of the attacks described above, it’s not a stretch to say that your device is fully compromised. If malware can access what you see, monitor what you do and perform actions on your behalf, they’re basically using your device just like you would. However, the malware is still (ab)using legitimate functionality provided by the OS, and that does come with restrictions.

For example, even applications with full accessibility permissions aren't able to access data that is stored inside the application container of another app. This means that private information stored within an app is safe, unless, of course, you access the data through the app and the accessibility service actively collects everything on the screen.

By combining accessibility and overlay windows, it is actually much easier to social engineer the victim into handing over their credentials or card information. And this is exactly what banking trojans often do. Instead of attacking an application and trying to steal its authentication tokens or modify its behavior, they simply ask the user for all the information that's required to either authenticate to a financial website or enroll a new device with the user's credentials.

How to protect your app

Protecting against overlays

Protecting your application against a full overlay is, well, impossible. Some research has already been performed on this, and one of the suggestions is to add a visual indicator on the device itself that can inform the user about an overlay attack taking place. Another study took a look at detecting suspicious patterns during app review to identify overlay malware. While the research is definitely interesting, it doesn't really help you when developing an application.

And even if you could detect an overlay on top of your application, what could your application do? There are a few options, but none of them really work:

  • Close the application > Doesn’t matter, the attack just continues, since there’s a full overlay
  • Show something to the user to warn them > Difficult, since you’re not the top-level view
  • Inform the backend and block the account > Possible, though with many false positives. Imagine customer accounts being blocked because they have Facebook Messenger installed…

What remains is trying to detect an attack and informing your backend. Instead of directly blocking an account, the information could be taken into account when performing risk analysis on a new sign-up or transaction. There are a few ways to collect this information, but all of them can have many false positives:

  • You can detect if a touch event arrived through an overlay by checking it in the onFilterTouchEventForSecurity handler. There are however various edge cases where this doesn't work as expected, which will lead to both false negatives and false positives.
  • You can scan for installed applications and check if a suspicious application is installed. This would require you to actively track mobile malware campaigns and update your blacklist accordingly. Given the fact that malware samples often have random package names, this will be very difficult. Additionally, starting with Android 11 (API level 30), it actually becomes impossible to scan for applications which you don't declare in your Android Manifest.
  • You can use accessibility services yourself to monitor which views are created by the Android OS and trigger an error if specific scenarios occur. While this could technically work, it would give people the idea that financial applications do actually require accessibility services, which would play into the hands of malware developers.
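The blacklist approach from the second bullet could look roughly like this sketch. The blocklist entries are made up; a real list would come from threat-intelligence feeds and, as noted above, would age quickly because of randomized package names:

```python
# Hypothetical blocklist of known-bad package names (illustrative only).
KNOWN_BAD_PACKAGES = {"com.example.fakeupdate", "org.evil.dropper"}

def find_suspicious(installed_packages):
    """Return installed packages that appear on the blocklist, sorted."""
    return sorted(set(installed_packages) & KNOWN_BAD_PACKAGES)

installed = ["com.whatsapp", "org.evil.dropper", "com.android.chrome"]
print(find_suspicious(installed))  # ['org.evil.dropper']
```

On a device, the `installed` list would come from PackageManager, subject to the Android 11 package-visibility restrictions mentioned above.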

The only real feasible implementation is detection through the onFilterTouchEventForSecurity handler, and, given the many false positives, it can only be used in conjunction with other information during a risk assessment.

Protecting against accessibility attacks

Unfortunately, the situation here is not much better than in the previous section. There are many different settings you can set on views, components and text fields, but all of them are designed to help you improve the accessibility of your application. Removing all accessibility data from your application could help a bit, but this will of course also stop legitimate accessibility software from analyzing your application.

But let’s for a moment assume that we don’t care about legitimate accessibility. How can we make the app as secure as possible to prevent malware from logging our activities? Let’s see…

  • We could set the android:importantForAccessibility attribute of a view component to ‘no’ or ‘noHideDescendants’. This won’t work however, since the accessibility service can just ignore this property and still read everything inside the view component.
  • We could set all the android:contentDescription attributes to “@null”. This will effectively remove all the meta information from the application and will make it much more difficult to track a user. However, any text that’s on screen can still be captured, so the label of a button will still give information about its purpose, even if there is no content description. For input text, the content of the text field will still be available to the malware.
  • We could change every input text to a password field. Password fields are masked and their content isn’t accessible in clear-text format. Depending on the user’s settings, this won’t work either (see next section).
  • Enable FLAG_SECURE on the view. This will prevent screenshots of the view, but it doesn’t impact accessibility services.
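For reference, this is roughly what the attributes from the first two bullets look like in a layout file (view names and ids are illustrative; remember that a malicious accessibility service can simply ignore importantForAccessibility):

```xml
<LinearLayout
    android:importantForAccessibility="noHideDescendants">
    <EditText
        android:id="@+id/account_number"
        android:contentDescription="@null"
        android:inputType="textPassword" />
</LinearLayout>
```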

About passwords

By default, Android shows the last entered character in a password field. This is useful for the user as they are able to see if they mistyped something. However, whenever this preview is shown, the value is also accessible to the accessibility services. As a result, we can still steal passwords, as shown in the second and third image below:

Left: A password being entered in ProxyDroid
Middle / Right: The entered password can be reconstructed based on the character previews

It is possible for users to disable this feature by going to Settings > Privacy > Show Passwords, but this setting cannot be manipulated from inside an application.
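The reconstruction shown above is easy to model: every time a character is typed, the service sees the masked field with the last character still previewed in the clear. Concatenating those trailing characters recovers the password (the snapshots below are fabricated):

```python
def reconstruct_password(snapshots):
    """Rebuild a password from masked-field snapshots in which the last
    typed character is still previewed in the clear, e.g. '••c'."""
    return "".join(s[-1] for s in snapshots if s and s[-1] != "•")

# One snapshot per keystroke, as an accessibility service would observe them.
snapshots = ["s", "•e", "••c", "•••r", "••••e", "•••••t"]
print(reconstruct_password(snapshots))  # secret
```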

Detecting accessibility services

If we can't protect our own application, can we maybe detect an attack? Here is where there's finally some good news. It is possible to retrieve all the accessibility services running on the device, including their capabilities, through AccessibilityManager's getEnabledAccessibilityServiceList() method.

This information could be used to identify suspicious services running on the device. This would require building a dataset of known-good services to compare against. Given that Google is really cracking down on applications requiring accessibility services in the Google Play store, this could be a valid approach.
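A sketch of that approach: compare the enabled service identifiers (which on a device would come from getEnabledAccessibilityServiceList()) against a curated known-good set. The allowlist entries below are examples of legitimate services; building and maintaining that list is the hard part:

```python
# Hypothetical allowlist of legitimate accessibility service ids.
KNOWN_GOOD_SERVICES = {
    "com.google.android.marvin.talkback/.TalkBackService",
    "com.google.android.apps.accessibility.voiceaccess/.JustSpeakService",
}

def suspicious_services(enabled_service_ids):
    """Flag enabled accessibility services that are not on the allowlist."""
    return [s for s in enabled_service_ids if s not in KNOWN_GOOD_SERVICES]

enabled = [
    "com.google.android.marvin.talkback/.TalkBackService",
    "com.nasty.malware/.GoogleService",  # fabricated malicious service id
]
print(suspicious_services(enabled))  # ['com.nasty.malware/.GoogleService']
```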

The obvious downside is that there will still be false positives. Additionally, there may be some privacy related issues as well, since it might not be desirable to identify disabilities in users.

Can’t Google fix this?

For a large part, dealing with these overlay attacks is Google’s responsibility, and over the last few versions, they have made multiple changes to make it more difficult to use the SYSTEM_ALERT_WINDOW (SAW) overlay permission:

  • Android Q (Go Edition) doesn’t support the SAW.
  • Sideloaded apps on Android P lose the SAW permission upon reboot.
  • Android O has marked the SAW permission deprecated, though Android 11 has removed the deprecated status.
  • Play Store apps on Android Q lose the permission on reboot.
  • Android O shows a notification for apps that are performing overlays, but also allows you to disable the notifications through settings (and thus through accessibility as well).
  • Android Q introduced the Bubbles API, which deals with some of the use cases for SAW, but not all of them.

Almost all of these updates are mitigations and don’t fix the actual problem. Only the removal of SAW in Android Q (Go Edition) is a real way to stop overlay attacks, and it may hopefully one day make it into the standard Android version as well.

Android 12 Preview

The latest version of the Android 12 preview actually contains a new permission called HIDE_OVERLAY_WINDOWS. After acquiring this permission, an app can call setHideOverlayWindows() to disable overlays. This is another step in the right direction, but it's still far from great. Instead of targeting the application when the user opens it, the malware could still create fake notifications that link directly to the overlay without the targeted application even being opened.

It's clear that this is not an easy problem to fix. Developers have been able to use SAW since Android 1, and many apps rely on the permission for their core functionality. Removing it would affect many apps and would thus get a lot of backlash. Finally, any new update that Google makes will take many years to reach a high percentage of Android users, due to Android's slow update process and the unwillingness of mobile device manufacturers to provide major OS updates.

Now that we understand the permissions involved, let’s go back to the TeaBot malware.

TeaBot – Attacking Belgian apps

What was surprising about Cleafy's original report is the targeting of Belgian applications, which so far had been spared from these attacks. Belgian financial apps all make use of strong authentication (card readers, ItsMe, etc.) and are thus pretty hard to successfully phish. Let's take a look at how exactly the TeaBot family attacks these applications.

Once the TeaBot malware is installed, it shows the user a small animation explaining how to enable accessibility options. It doesn't provide a specific explanation for the accessibility service, and it doesn't pretend to be a Google or System service. However, if you wait too long to activate the accessibility service, the device will start vibrating regularly, which is extremely annoying and will surely convince many victims to enable the service.

  • Main view when opening the app
  • Automatically opens the Accessibility Settings
  • No description of the service
  • The service requests full control
  • If you wait too long, you get annoying popups and vibration
  • After enabling the service, the application quits and shows an error message

This specific sample pretends to be bpost, but TeaBot also pretends to be the VLC Media Player, the Spanish postal app Correos, a video streaming app called Mobdro, and UPS as well.

The malware sample has the following functionality related to attacking financial applications:

  • Take a screenshot;
  • Perform overlay attacks on specific apps;
  • Enable keyloggers for specific apps.

Just like the FluBot sample from our last blogpost, the application collects all of the installed applications and sends them to the C2, which returns a list of the applications that should be attacked:

POST /api/getbotinjects HTTP/1.1
Accept-Charset: UTF-8
Content-Type: application/xml
User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Nexus 5 Build/QQ3A.200805.001)
Connection: close
Accept-Encoding: gzip, deflate
Content-Length: 776

{"installed_apps":[{"package":"org.proxydroid"},{"package":"com.android.documentsui"}, ...<snip>... ,{"package":"com.android.messaging"}]}
HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 2
Date: Mon, 10 May 2021 19:20:51 GMT


In order to identify the applications that are attacked, we can supply a list of banking applications, which returns more interesting data:

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 2031830
Date: Mon, 10 May 2021 18:28:01 GMT

		"html":"<!DOCTYPE html><html lang=\"en\"><head> ...SNIP...</html>",
		"html":"<!DOCTYPE html><html lang=\"en\"><head> ...SNIP...</html>"

By brute-forcing against different C2 servers, overlays for the following apps were returned:


Only one Belgian financial application (be.belfius.directmobile.android) returned an overlay. The interesting part is that the overlay only phishes for credit card information and not for anything related to account onboarding:

The overlay requests the debit card number, but nothing else.

This overlay will be shown when TeaBot detects that the Belfius app has been opened. This way the user will expect a Belfius prompt to appear, which gives more credibility to the malicious view that was opened.

The original report by Cleafy listed at least 5 applications under attack, so we need to dig a bit deeper. Another endpoint called by the samples is /getkeyloggers. Fortunately, this one simply returns a list of targeted applications, without us having to guess.

GET /api/getkeyloggers HTTP/1.1
Accept-Charset: UTF-8
User-Agent: Dalvik/2.1.0 (Linux; U; Android 10; Nexus 5 Build/QQ3A.200805.001)
Connection: close
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Server: Rocket
Content-Length: 1205
Date: Tue, 11 May 2021 12:45:30 GMT

[{"application":"com.ing.banking"},{"application":"com.binance.dev"},{"application":"com.bankinter.launcher"},{"application":"com.unicredit"},{"application":"com.lynxspa.bancopopolare"}, ... ]

Scattered over multiple C2 servers, we could identify the following targeted applications:


Based on this list, 14 Belgian applications are being attacked through the keylogger module. Since all these applications have a strong device onboarding and authentication flow, the impact of the collected information should be limited.

However, if the applications don’t detect the active keylogger, the malware could still collect any information entered by the user into the app. In this regard, the impact is the same as when someone installs a malicious keyboard that logs all the entered information.

Google Play Protect will protect you

The TeaBot sample is currently not known to spread through the Google Play store. That means victims need to install it by downloading and installing the app manually. Most devices have Google Play Protect enabled, which will automatically block the currently identified TeaBot samples.

Of course, this is a typical cat & mouse game between Google and malware developers, and who knows how many samples may go undetected …


It's very interesting to see how TeaBot attacks Belgian financial applications. While the malware doesn't attempt to social engineer a user into a full device onboarding, its developers are clearly starting to identify Belgium as an interesting target.

It will be very interesting to see how these attacks will evolve. Eventually all financial applications will have very strong authentication and then malware developers will either have to be satisfied with only stealing credit-card information, or they will have to invest into more advanced tactics with live challenge/responses and active social engineering.

From a development point of view, there’s not much we can do. The Android OS provides the functionality that is abused and it’s difficult to take that functionality away again. Collecting as much information about the device as possible can help in making correct assessments on the risk of certain transactions, but there’s no silver bullet.

Jeroen Beckers

Jeroen Beckers is a mobile security expert working in the NVISO Software and Security assessment team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff.

How to analyze mobile malware: a Cabassous/FluBot Case study

19 April 2021 at 12:20

This blogpost explains all the steps I took while analyzing the Cabassous/FluBot malware. I wrote this while analyzing the sample, and I've written down both successful and failed attempts at moving forward, as well as my thoughts and the options I considered along the way. As a result, this blogpost is not a writeup of the Cabassous/FluBot malware, but rather a step-by-step guide on how you can examine the malware yourself and what the thought process behind examining mobile malware can look like. Finally, it's worth mentioning that all the tools used in this analysis are open source / free.

If you want a straightforward writeup of the malware’s capabilities, there’s an excellent technical write up by ProDaft (pdf) and a writeup by Aleksejs Kuprins with more background information and further analysis. I knew these existed before writing this blogpost, but deliberately chose not to read them first as I wanted to tackle the sample ‘blind’.

Our goal: Intercept communication between the malware sample and the C&C and figure out which applications are being attacked.

The sample

Cabassous/FluBot recently popped up in Europe where it is currently expanding quite rapidly. The sample I examined is attacking Spanish mobile banking applications, but German, Italian and Hungarian versions have been spotted recently as well.

In this post, we’ll be taking a look at this sample (acb38742fddfc3dcb511e5b0b2b2a2e4cef3d67cc6188b29aeb4475a717f5f95). I’ve also uploaded this sample to the Malware Bazar website if you want to follow along.

This is live malware

Note that this is live malware and you should never install this on a device which contains sensitive information.

Starting with some static analysis

I usually make the mistake of directly going to dynamic analysis without some recon first, so this time I wanted to start things slow. It also takes some time to reset my phone after it has been infected, so I wanted to get the most out of my first install by placing Frida hooks where necessary.

First steps

The first thing to do is find the starting point of the application, which is listed in the AndroidManifest:

<activity android:name="com.tencent.mobileqq.MainActivity">
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.LAUNCHER"/>
    </intent-filter>
</activity>
<activity android:name="com.tencent.mobileqq.IntentStarter">
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
    </intent-filter>
</activity>

So we need to find com.tencent.mobileqq.MainActivity. After opening the sample with Bytecode Viewer, there unfortunately isn’t a com.tencent.mobileqq package. There are however a few other interesting things that Bytecode Viewer shows:

  • There’s a classes-v1.bin file in a folder called ‘dex’. While this file probably contains dex bytecode, it currently isn’t identified by the file utility and is probably encrypted.
  • There is a com.whatsapp package with what appear to be legitimate WhatsApp classes
  • There are three top-level packages that are suspicious: n, np and obfuse
  • There’s a libreactnativeblob.so which probably belongs to WhatsApp as well

Comparing the sample to WhatsApp

So it seems that the malware authors repackaged the official WhatsApp app and added their malicious functionality. Now that we know that, we can compare this sample to the official WhatsApp app and see if any functionality was added in the com.whatsapp folder. A good tool for comparing apks is apkdiff.

Which version to compare to?

I first downloaded the latest version of WhatsApp from the Google Play store, but there were way too many differences between that version and the sample. After digging around the com.whatsapp folder for a bit, I found the AbstractAppShell class, which contains a version identifier. A quick Google search leads us to apkmirror, which has older versions available for download.

Screenshot: the WhatsApp version identifier found in the AbstractAppShell class.

So let’s compare both versions using apkdiff:

python3 apkdiff.py ../com.whatsapp_2.21.3.19-210319006_minAPI16\(x86\)\(nodpi\)_apkmirror.com.apk ../Cabassous.apk

Because the malware stripped all the resource files from the original WhatsApp apk, apkdiff identifies 147 files that were modified. To reduce this output, I added ‘xml’ to the ignore list of apkdiff.py on line 14:

at = "at/"
ignore = ".*(align|apktool.yml|pak|MF|RSA|SF|bin|so|xml)"
count = 0

After running apkdiff again, the output is much shorter with only 4 files that are different. All of them differ in their labeling of try/catch statements and are thus not noteworthy.

Something’s missing…

It’s pretty interesting to see that apkdiff doesn’t identify the n, np and obfuse packages. I would have expected them to show up as being added in the malware sample, but apparently apkdiff only compares files that exist in both apks.

Additionally, apkdiff did not identify the encrypted dex file (classes-v1.bin). This is because, by default, apkdiff.py ignores files with the .bin extension.

So to make sure no other files were added, we can run a normal diff on the two smali folders after having used apktool to decompile them:

diff -rq Cabassous com.whatsapp_2.21.3.19-210319006_minAPI16\(x86\)\(nodpi\)_apkmirror.com | grep -i "only in Cabassous/smali"

It looks like no other classes/packages were added, so we can start focusing on the n, np and obfuse packages.

Examining the obfuscated classes

We still need to find the com.tencent.mobileqq.MainActivity class, and it's probably inside the encrypted classes-v1.bin file. The com.tencent package name also suggests that the application has been packaged with the Tencent packer. Let's use APKiD to see if it can detect the packer:

Not much help there; it only tells us that the sample has been obfuscated but it doesn’t say with which packer. Most likely the tencent packer was indeed used, but it was then obfuscated with a tool unknown to APKiD.

So let’s take a look at those three packages that were added ourselves. Our main goal is to find any references to System.load or DexClassLoader, but after scrolling through the files using different decompilers in Bytecode Viewer, I couldn’t really find any. The classes use string obfuscation, control flow obfuscation and many of the decompilers are unable to decompile entire sections of the obfuscated classes.

There are, however, quite a few imports for Java reflection classes, so the class and method names are probably constructed at runtime.

We could tackle this statically, but that’s a lot of work. The unicode names are also pretty annoying, and I couldn’t find a script that deobfuscates them, apart from the Pro version of the JEB decompiler. At this point, it’s better to move on to dynamic analysis and create some Frida hooks to figure out what’s happening. But there’s one thing we need to solve first…

How is the malicious code triggered?

How does the application actually trigger the obfuscated functionality? It’s not inside the MainActivity (which doesn’t even exist yet), even though that is the first piece of code that runs when the app is launched. This is a trick that’s often used by malware to hide functionality or to perform anti-debugging checks before the application actually starts. Before Android calls the MainActivity’s onCreate method, all required classes are loaded into memory. Once they are loaded, all static initialization blocks are executed. Any class can have one of these blocks, and they all run before the application actually starts.

The application contains many of these static initializers, both in the legitimate com.whatsapp classes and in the obfuscated classes:

Most likely, the classes-v1.bin file gets decrypted and loaded in one of the static initialization blocks, so that Android can then find the com.tencent.mobileqq.MainActivity and call its onCreate method.
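As a rough analogy (illustrative only, this is Python, not Android): code in a class body runs when the class is created, before any method is ever called, much like Java static initializer blocks run during class loading, before onCreate:

```python
events = []

class PackedMainActivity:
    # "static initializer" analogue: this line runs as soon as the
    # class is defined, before any method is called
    events.append("static init: decrypt classes-v1.bin and load it")

    @staticmethod
    def on_create():
        events.append("onCreate: app starts")

PackedMainActivity.on_create()
# events[0] is the static-init step, events[1] is onCreate
```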

On to Dynamic Analysis…

The classes-v1.bin file will need to be decrypted and then loaded. Since we are missing some classes, and since the file is inside a ‘dex’ folder, it’s a pretty safe bet that it would decrypt to a dex file. That dex file then needs to be loaded using the DexClassLoader. A tool that’s perfect for the job here is Dexcalibur by @FrenchYeti. Dexcalibur allows us to easily hook many interesting functions using Frida and is specifically aimed at apps that use reflection and dynamic loading of classes.

For my dynamic testing, I’ve installed LineageOS + TWRP on an old Nexus 5, together with Magisk, MagiskTrustUserCerts and Magisk Frida Server. I also installed ProxyDroid and configured it to connect to my Burp proxy. Finally, I installed Burp’s certificate, made sure everything was working and then performed a backup using TWRP. This way, I can easily restore my device to a clean state and run the malware sample again and again as if for the first time. Since the malware doesn’t affect the /system partition, I only need to restore the /data partition. You could use an emulator, but not all malware will have x86 binaries and, furthermore, emulators are easily detected. There are certainly drawbacks as well, such as the restore taking a few minutes, but it’s currently fast enough for me to not be annoyed by it.

Resetting a device is easy with TWRP

Making and restoring backups is pretty straightforward in TWRP. You first boot into TWRP by executing ‘adb reboot recovery‘. Each phone also has specific buttons you can press during boot, but using adb is much nicer and more consistent.
In order to create a backup, go to Backup and select the partitions you want to create a backup of. In this case, we should do System, Data and Boot. Slide the slider at the bottom to the right and wait for the backup to finish.
In order to restore a backup, go to Restore and select the backup you created earlier. You can choose which partitions you want to restore and then swipe the slider to the right again.

After setting up a device and creating a project, we can start analyzing. Unfortunately, the latest version of Dexcalibur wasn’t too happy with the SMALI code inside the sample. Some lines have whitespace where it isn’t supposed to be, and there are a few illegal constructions using array definitions and goto labels. Both of them were fixed within 24 hours of reporting which is very impressive!

When something doesn’t work…

Almost all the tools we use in mobile security are free and/or open source. When something doesn’t work, you can either find another tool that does the job, or dig into the code and figure out exactly why it’s not working. Even by just reporting an issue with enough information, you’re contributing to the project and making the tools better for everyone in the future. So don’t hesitate to do some debugging!

So after pulling the latest code (or making some quick hotpatches) we can run the sample using Dexcalibur. All hooks will be enabled by default, and when running the malware, Dexcalibur lists all of the reflection API calls that we saw earlier:

We can see that some visual components are created, which corresponds to what we see on the device: the malware asking for accessibility permissions.

At this point, one of the items in the hooks log should be the dynamic loading of the decrypted dex file. However, there’s no such call and this actually had me puzzled for a little while. I thought maybe there was another bug in Dexcalibur, or maybe the sample was using a class or method not covered by Dexcalibur’s default list of hooks, but none of this turns out to be the case.

Frida is too late 🙁

Frida scripts only run when the runtime is ready to start executing. At that point, Android will have loaded all the necessary classes but hasn’t started execution yet. However, static initializers are run during the initialization of the classes which is before Frida hooks into the Android Runtime. There’s one issue reported about this on the Frida GitHub repository but it was closed without any remediation. There are a few ways forward now:

  • We manually reverse engineer the obfuscated code to figure out when the dex file is loaded into memory. Usually, malware will remove the file from disk as soon as it is loaded in memory. We can then remove the function that removes the decrypted dex file and simply pull it from the device.
  • We dive into the smali code, convert the static initializers into normal static methods and call all of them from the MainActivity.onCreate method. However, since the Activity defined in the manifest is inside the encrypted dex file, we would have to update the manifest as well; otherwise Android would complain that it can’t find the main activity, as it hasn’t been loaded yet. A real chicken/egg problem.
  • Most (all?) methods can be decompiled by at least one of the decompilers in Bytecode Viewer, and there aren’t too many methods, so we could copy everything over to a new Android project and simply debug the application to figure out what is happening. We could also trick the new application to decrypt the dex file for us.

But… none of that is necessary. While figuring out why the hooks weren’t called, I took a look at the application’s storage: after the sample has run once, it turns out the decrypted dex file is never deleted and simply stays in the app folder.

So we can copy it off the device by moving it to a world-readable location and making the file world-readable as well.

kali > adb shell
hammerhead:/ $ su
hammerhead:/ # cp /data/data/com.tencent.mobileqq/app_apkprotector_dex/classes-v1.bin /data/local/tmp/classes-v1.bin
hammerhead:/ # chmod 666 /data/local/tmp/classes-v1.bin
hammerhead:/ # exit
hammerhead:/ $ exit
kali > adb pull /data/local/tmp/classes-v1.bin payload.dex
/data/local/tmp/classes-v1.bin: 1 file pulled. 18.0 MB/s (3229988 bytes in 0.171s)
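Before opening the pulled file in a decompiler, a quick (hypothetical) sanity check confirms it really decrypted to a dex file, since dex files always start with a fixed magic value:

```python
def looks_like_dex(path: str) -> bool:
    """DEX files start with the magic bytes b'dex\\n', followed by a
    version string such as b'035\\x00'."""
    with open(path, "rb") as f:
        return f.read(4) == b"dex\n"
```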

But now that we’ve got the malware running, let’s take a quick look at Burp. Our goal is to intercept C&C traffic, so we might already be done!

While we are indeed intercepting C&C traffic, everything seems to be encrypted, so we’re not done just yet.

… and back to static

Since we now have the decrypted dex file, let’s open it up in Bytecode Viewer again:

The payload doesn’t have any real anti-reverse engineering stuff, apart from some string obfuscation. However, all the class and method names are still there and it’s pretty easy to understand most functionality. Based on the class names inside the com.tencent.mobileqq package we can see that the sample can:

  • Perform overlay attacks (BrowserActivity.class)
  • Start different intents (IntentStarter.class)
  • Launch an accessibility service (MyAccessibilityService.class)
  • Compose SMS messages (ComposeSMSActivity)
  • etc…

The string obfuscation is inside the io.michaelrocks.paranoid package (Deobfuscator$app$Release.class) and the source code is available online.

Another interesting class is DGA.class, which is responsible for the Domain Generation Algorithm. By using a DGA, the sample cannot be taken down by sinkholing a single C&C domain. We could reverse engineer this algorithm, but that’s not really necessary, as the sample can just run it for us. At this point we also don’t really care which domain it actually ends up connecting to. We can see the DGA in action in Burp: before the sample is able to connect to a legitimate C&C, it tries various domain names (requests 46 – 56), after which it eventually finds a C&C that it likes (requests 57 – 60):
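This sample’s actual algorithm isn’t reproduced here, but as a generic illustration (seed format, hash, TLD and count are all made up), a date-seeded DGA typically looks something like this:

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 10) -> list:
    """Generic date-seeded DGA sketch; every malware family implements
    its own variant with its own constants."""
    domains = []
    for i in range(count):
        # derive a deterministic hostname from seed + date + index
        h = hashlib.md5(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(h[:12] + ".com")
    return domains
```

Because both the malware and its operator can compute the same list for any given day, the operator only needs to register one of today’s domains; sinkholing yesterday’s domain achieves nothing.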

So the payloads are encrypted/obfuscated and we need to figure out how that’s done. After browsing through the source a bit, we can see that the class that’s responsible for actually communicating with the C&C is the PanelReq class. There are a few methods involving encryption and decryption, but there’s also one method called ‘Send’ which takes two parameters and contains references to HTTP related classes:

public static String Send(String paramString1, String paramString2) {
        HttpCom localHttpCom = new com/tencent/mobileqq/HttpCom;  // raw decompiler output
        paramString1 = Deobfuscator.app.Release.getString(-37585202133604L);
        ...
}

We can be pretty sure that ‘paramString1’ is the hostname which is generated by the DGA. The second string is not immediately added to the HTTP request and various cryptographic functions are applied to it first. This is a strong indication that paramString2 will not be encrypted when it enters the Send method. Let’s hook the Send method using Frida to see what it contains.

The following Frida script contains a hook for the PanelReq.Send() method:

Java.perform(function () {
    var PanelReqClass = Java.use("com.tencent.mobileqq.PanelReq");
    PanelReqClass.Send.overload('java.lang.String', 'java.lang.String').implementation = function (hostname, payload) {
        console.log("payload:" + payload);
        var retVal = this.Send(hostname, payload);
        console.log("Response:" + retVal);
        return retVal;
    };
});

Additionally, we can hook the Deobfuscator.app.Release.getString method to figure out which strings are returned after decrypting them, but in the end this wasn’t really necessary:

Java.perform(function () {
    var Release = Java.use("io.michaelrocks.paranoid.Deobfuscator$app$Release");
    Release.getString.implementation = function (id) {
        var retVal = this.getString(id);
        console.log(id + " > " + retVal);
        return retVal;
    };
});
Monitoring C&C traffic

After performing a reset of the device and launching the sample with Frida and the overloaded Send method, we get the following output:

payload:PING,3.4,10,LGE,Nexus 5,en,127,
Response: 10
Response:648516978,Capi: El envio se ha devuelto dos veces al centro mas cercano codigo: AMZIPH1156020 
Response:634689547,No hemos dejado su envio 01101G573629 por estar ausente de su domicilio. Vea las opciones: 
Response:699579720,Hola, no te hemos localizado en tu domicilio. Coordina la entrega de tu envio 279000650 aqui: 
payload:PING,3.4,10,LGE,Nexus 5,en,197,

Some observations:

  • The sample starts by querying different domains until it finds one that answers ‘OK’ (Line 14). This matches what we saw in Burp.
  • It sends a list of all installed applications to see which ones to attack using an overlay (Line 27). Currently, no targeted applications are installed, as the response is empty.
  • Multiple premium text messages are received (Lines 36, 41, 46, …)

Package names of targeted applications are sometimes included in the apk, or a full list is returned from the C&C and compared locally. In this sample, neither is the case, and we actually have to start guessing. There doesn’t appear to be a list of targeted financial applications available online (or at least, I didn’t find any), so I basically copied all the targeted applications from previous malware writeups and combined them into one long list. This doesn’t guarantee that we will find all the targeted applications, but it should give us pretty good coverage.
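The guessing step above can be sketched as follows (the package names are illustrative; "com.example.bank" is a made-up placeholder, while the first and last entries appear in the actual payload shown later):

```python
def merge_target_lists(*lists) -> list:
    """Deduplicate package names collected from previous malware
    writeups, keeping the original order."""
    merged = []
    for lst in lists:
        for pkg in lst:
            if pkg not in merged:
                merged.append(pkg)
    return merged

# Example entries; the real list combined many writeups
writeup_a = ["alior.banking", "com.example.bank"]
writeup_b = ["com.example.bank", "zebpay.Application"]
packages = merge_target_lists(writeup_a, writeup_b)

# The C&C command format: GET_INJECTS_LIST, then comma-separated packages
payload = "GET_INJECTS_LIST," + ",".join(packages) + ","
```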

In order to interact with the C&C, we can simply modify the Send hook to overwrite the payload. Since the sample is constantly polling the C&C, the method is called repeatedly and any modifications are quickly sent to the server:

Java.perform(function () {
    var PanelReqClass = Java.use("com.tencent.mobileqq.PanelReq");
    PanelReqClass.Send.overload('java.lang.String', 'java.lang.String').implementation = function (hostname, payload) {
        // Our own (very long) list of package names to submit to the C&C
        var injects = "GET_INJECTS_LIST,alior.banking[...]zebpay.Application,";
        if (payload.split(",")[0] == "GET_INJECTS_LIST") {
            payload = injects;
        }
        var retVal = this.Send(hostname, payload);
        console.log("Response:" + retVal);
        return retVal;
    };
});

Frida also automatically reloads scripts if it detects a change, so we can simply update the Send hook with new commands to try out and it will automatically be picked up.

Based on the very long list of package names I submitted, the following response was returned by the server to say which packages should be attacked:


When the sample receives the list of applications to attack, it immediately starts sending GET_INJECT commands to retrieve an HTML page for each targeted application:

Response:<!DOCTYPE html>
    <link rel="shortcut icon" href="es.evobanco.bancamovil.png" type="image/png">
    <meta charset="utf-8">

In order to view the different overlays, we can modify the Frida script to save the server’s response to an HTML file:

if (payload.split(",")[0] == "GET_INJECT") {
    var retVal = this.Send(hostname, payload);
    // Save the returned overlay HTML to the app's data directory
    var file = new File("/data/data/com.tencent.mobileqq/" + payload.split(",")[1] + ".html", "w");
    file.write(retVal);
    file.close();
    return retVal;
}

We can then extract them from the device, open them in Chrome, take some screenshots and end up with a nice collage:


The sample we examined in this post is pretty basic. The initial dropper made it a little bit difficult, but since the decrypted payload was never removed from the application folder, it was easy to extract and analyze. The actual payload uses a bit of string obfuscation but is very easy to understand.

The communication with the C&C is encrypted, and by hooking the correct method with Frida we don’t even have to figure out how the encryption works. If you want to know how it works though, be sure to check out the technical writeups by ProDaft (pdf) and Aleksejs Kuprins.

Jeroen Beckers

Jeroen Beckers is a mobile security expert working in the NVISO Software and Security assessment team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff.

Proxying Android app traffic – Common issues / checklist

19 November 2020 at 09:52

During a mobile assessment, there will typically be two sub-assessments: the mobile frontend, and the backend API. In order to examine the security of the API, you will either need extensive documentation, such as Swagger or Postman files, or you can let the mobile application generate all the traffic for you and simply intercept and modify that traffic through a proxy (MitM attack).

Sometimes it’s really easy to get your proxy set up. Other times, it can be very difficult and time consuming. During many engagements, I have seen myself go over this ‘sanity checklist’ to figure out which step went wrong, so I wanted to write it down and share it with everyone.

In this guide, I will use PortSwigger’s Burp Suite proxy, but the same steps can of course be used with any HTTP proxy. The proxy will listen on port 8080 in all the examples. The checks start very basic, but ramp up towards the end.


Update: Sven Schleier also created a blogpost on this with some awesome visuals and graphs, so check that out as well!

Setting up the device

First, we need to make sure everything is set up correctly on the device. These steps apply regardless of the application you’re trying to MitM.

Is your proxy configured on the device?

An obvious first step is to configure a proxy on the device. The UI changes a bit depending on your Android version, but it shouldn’t be too hard to find.

Sanity check
Go to Settings > Connections > Wi-Fi, select the Wi-Fi network that you’re on, click Advanced > Proxy > Manual and enter your Proxy details:

Proxy host name:
Proxy port: 8080

Is Burp listening on all interfaces?

By default, Burp only listens on the local interface (127.0.0.1), but since we want to connect from a different device, Burp needs to listen on the specific interface that has joined the Wi-Fi network. You can either listen on all interfaces, or listen on a specific interface if you know which one you want. As a sanity check, I usually go for ‘listen on all interfaces’. Note that Burp has an API which may allow other people on the same Wi-Fi network to query your proxy and retrieve information from it.

Sanity check
Navigate to http://127.0.0.1:8080 on your host computer. The Burp welcome screen should come up.

In Burp, go to Proxy > Options > Click your proxy in the Proxy Listeners window > check ‘All interfaces’ on the Bind to Address configuration

Can your device connect to your proxy?

Some networks have host/client isolation and won’t allow clients to talk to each other. In this case, your device won’t be able to connect to the proxy since the router doesn’t allow it.

Sanity Check
Open a browser on the device and navigate to http://<proxy-ip>:8080. You should see Burp’s welcome screen. You should also be able to navigate to http://burp in case you’ve already configured the proxy in the previous check.

There are a few options here:

  • Set up a custom wireless network where host/client isolation is disabled
  • Host your proxy on a device that is accessible, for example an AWS ec2 instance
  • Perform an ARP spoofing attack to trick the mobile device into believing you are the router
  • Use adb reverse to proxy your traffic over a USB cable:
    • Configure the proxy on your device to go to 127.0.0.1 on port 8080
    • Connect your device over USB and make sure that adb devices shows your device
    • Execute adb reverse tcp:8080 tcp:8080 which sends all traffic received on <device>:8080 to <host>:8080
    • At this point, you should be able to browse to http://127.0.0.1:8080 on the device and see Burp’s welcome screen

Can you proxy HTTP traffic?

The steps for HTTP traffic are typically much easier than HTTPS traffic, so a quick sanity check here makes sure that your proxy is set up correctly and reachable by the device.

Sanity check
Navigate to http://neverssl.com and make sure you see the request in Burp. Neverssl.com is a website that doesn’t use HSTS and will never send you to an HTTPS version, making it a perfect test for plaintext traffic.


If the request doesn’t show up in Burp:

  • Go over the previous checks again; something may be wrong
  • Check whether Burp’s Intercept is enabled and the request is waiting for your approval

Is your Burp certificate installed on the device?

In order to intercept HTTPS traffic, your proxy’s certificate needs to be installed on the device.

Sanity check
Go to Settings > Security > Trusted credentials > User and make sure your certificate is listed. Alternatively, you can try intercepting HTTPS traffic from the device’s browser.

This is documented in many places, but here’s a quick rundown:

  • Navigate to http://burp in your browser
  • Click the ‘CA Certificate’ in the top right; a download will start
  • Use adb or a file manager to change the extension from der to crt
    • adb shell mv /sdcard/Download/cacert.der /sdcard/Download/cacert.crt
  • Navigate to the file using your file manager and open the file to start the installation

Is your Burp certificate installed as a root certificate?

Applications on more recent versions of Android don’t trust user certificates by default. A more thorough writeup is available in another blogpost. Alternatively, you can repackage applications to add the relevant controls to the network_security_config.xml file, but having your root CA in the system CA store will save you a headache in other steps (such as third-party frameworks), so it’s my preferred method.

Sanity check
Go to Settings > Security > Trusted credentials > System and make sure your certificate is listed.

In order to get your certificate listed as a root certificate, your device needs to be rooted with Magisk:

  • Install the client certificate as normal (see previous check)
  • Install the MagiskTrustUser module
  • Restart your device to enable the module
  • Restart a second time to trigger the file copy

Alternatively, you can:

  • Make sure the certificate is in the correct format and copy/paste it to the /system/etc/security/cacerts directory yourself. However, for this to work, your /system partition needs to be writable. Some rooting methods allow this, but it’s very dirty and Magisk is just so much nicer. It’s also a bit tedious to get the certificate in the correct format.
  • Modify the networkSecurityConfig to include user certificates as trust anchors (see further down below). It’s much nicer to have your certificate as a system certificate though, so I rarely take this approach.
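For reference, the networkSecurityConfig alternative from the last bullet boils down to a small XML change (generic sketch; the file name and location vary per app):

```xml
<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- trust user-installed CAs (e.g. your Burp certificate)
                 in addition to the system store -->
            <certificates src="system" />
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>
```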

Does your Burp certificate have an appropriate lifetime?

Google (and thus Android) is aggressively shortening the maximum accepted lifetime of leaf certificates. If your leaf certificate’s expiration date is too far ahead in the future, Android/Chrome will not accept it. More information can be found in this blogpost.

Sanity check
Connect to your proxy using a browser and investigate the certificate lifetime of both the root CA and the leaf certificate. If they’re shorter than 1 year, you’re good to go. If they’re longer, I like to play it safe and create a new CA. You can also use the latest version of the Chrome browser on Android to validate your certificate lifetime. If something’s wrong, Chrome will display the following error: ERR_CERT_VALIDITY_TOO_LONG

There are two possible solutions here:

  • Make sure you have the latest version of Burp installed, which reduces the lifetime of generated leaf certificates
  • Make your own root CA that’s only valid for 365 days. Certificates generated by this root CA will also be shorter than 365 days. This is my preferred option, since the certificate can be shared with team members and be installed on all devices used during engagements.

Setting up the application

Now that the device is ready to go, it’s time to take a look at application specifics.

Is the application proxy aware?

Many applications simply ignore the proxy settings of the system. Applications that use standard libraries will typically use the system proxy settings, but applications that ship their own runtime (such as Xamarin and Unity apps) or are compiled natively (such as Flutter apps) usually require the developer to explicitly program proxy support into the application.

Sanity check
When running the application, you should either see your HTTPS data in Burp’s Proxy tab, or you should see HTTPS connection errors in Burp’s Event log on the Dashboard panel. Since the entire device is proxied, you will see many blocked requests from applications that use SSL Pinning (e.g. Google Play), so see if you can find a domain that is related to the application. If you don’t see any relevant failed connections, your application is most likely proxy unaware.

As an additional sanity check, you can see if the application uses a third party framework. If the app is written in Flutter it will definitely be proxy unaware, while if it’s written in Xamarin or Unity, there’s a good chance it will ignore the system’s proxy settings.

  • Decompile with apktool
    • apktool d myapp.apk
  • Go through known locations
    • Flutter: myapp/lib/arm64-v8a/libflutter.so
    • Xamarin: myapp/unknown/assemblies/Mono.Android.dll
    • Unity: myapp/lib/arm64-v8a/libunity.so

There are a few things to try:

  • Use ProxyDroid (root only). Although it’s an old app, it still works really well. ProxyDroid uses iptables in order to forcefully redirect traffic to your proxy
  • Set up a custom hotspot through a second wireless interface and use iptables to redirect traffic yourself. You can find the setup on the mitmproxy documentation, which is another useful HTTP proxy. The exact same setup works with Burp.

In both cases, you have moved from a ‘proxy aware’ to a ‘transparent proxy’ setup. There are two things you must do:

  • Disable the proxy on your device. If you don’t do this, Burp will receive both proxied and transparent requests, which are not compatible with each other.
  • Configure Burp to support transparent proxying via Proxy > Options > active proxy > edit > Request Handling > Support invisible proxying

Perform the sanity check again to now hopefully see SSL errors in Burp’s event log.

Is the application using custom ports?

This only really applies if your application is not proxy aware. In that case, you (or ProxyDroid) will be using iptables to intercept traffic, but these iptables rules only target specific ports. In the ProxyDroid source code, you can see that only ports 80 (HTTP) and 443 (HTTPS) are targeted. If the application uses a non-standard port (for example 8443 or 8080), it won’t be intercepted.

Sanity check
This one is a bit more tricky. We need to find traffic that is leaving the application that isn’t going to ports 80 or 443. The best way to do this is to listen for all traffic leaving the application. We can do this using tcpdump on the device, or on the host machine in case you are working with a second Wi-Fi hotspot.

Run the following command on an adb shell with root privileges:

tcpdump -i wlan0 -n -s0 -v

You will see many different connections. Ideally, you should start the command, open the app and stop tcpdump as soon as you know the application has made some requests. After some time, you will see connections to a remote host with a non-default port. In the example below, there are multiple connections to a remote host on port 8088:

Alternatively, you can send the output of tcpdump to a pcap by using tcpdump -i wlan0 -n -s0 -w /sdcard/output.pcap. After retrieving the output.pcap file from the device, it can be opened with WireShark and inspected:


If your application is indeed proxy unaware and communicating over custom ports, ProxyDroid won’t be able to help you. ProxyDroid doesn’t allow you to add custom ports, though it is an open-source project and a PR for this would be great 😉. This means you’ll have to use iptables manually.

  • Either you set up a second hotspot where your host machine acts as the router, and you can thus perform a MitM
  • Or you use ARP spoofing to perform an active MitM between the router and the device
  • Or you can use iptables yourself and forward all the traffic to Burp. Since Burp is listening on a separate host, the nicest solution is to use adb reverse to map a port on the device to your Burp instance. This way you don’t need to set up a separate hotspot, you just need to connect your device over USB.
    • On host: adb reverse tcp:8080 tcp:8080
    • On device, as root: iptables -t nat -A OUTPUT -p tcp -m tcp --dport 8088 -j REDIRECT --to-ports 8080

Is the application using SSL pinning?

At this point, you should be getting HTTPS connection failures in Burp’s Event log dashboard. The next step is to verify if SSL pinning is used, and disable it. Although many Frida scripts claim to be universal root bypasses, there isn’t a single one that even comes close. Android applications can be written in many different technologies, and only a few of those technologies are typically supported. Below you can find various ways in which SSL pinning may be implemented, and ways to get around it.

Note that some applications have multiple ways to pin a specific domain, and you may have to combine scripts in order to disable all of the SSL pinning.

Pinning through android:networkSecurityConfig

Android allows applications to perform SSL pinning by using the network_security_config.xml file. This file is referenced in the AndroidManifest.xml and is located in res/xml/. The name is usually network_security_config.xml, but it doesn’t have to be. As an example, the Microsoft Authenticator app has the following two pins defined:
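As a generic illustration (placeholder domain and digests, not Authenticator’s actual pins), such a pin definition looks like:

```xml
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">example.com</domain>
        <pin-set expiration="2025-01-01">
            <!-- base64 SHA-256 hashes of the pinned certificates' public keys -->
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```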

Use any of the normal universal bypass scripts:

  • Run Objection and execute the android sslpinning disable command
  • Use Frida codeshare: frida -U --codeshare akabe1/frida-multiple-unpinning -f be.nviso.app
  • Remove the networkSecurityConfig setting in the AndroidManifest by using apktool d and apktool b. Usually much faster to do it through Frida and only rarely needed.

Pinning through OkHttp

Another popular way of pinning domains is through the OkHttp library. You can do a quick validation by grepping for OkHttp and/or sha256. You will most likely find references (or even hashes) relating to OkHttp and whatever is being pinned:

Use any of the normal universal bypass scripts:

  • Run Objection and execute the android sslpinning disable command
  • Use Frida codeshare: frida -U --codeshare akabe1/frida-multiple-unpinning -f be.nviso.app
  • Decompile the apk using apktool, and modify the pinned domains. By default, OkHttp will allow connections that are not specifically pinned. So if you can find and modify the domain name that is pinned, the pinning will be disabled. Using Frida is much faster though, so this approach is rarely taken.

Pinning through OkHttp in obfuscated apps

Universal pinning scripts may work on obfuscated applications since they hook on Android libraries which can’t be obfuscated. However, if an application is using something else than a default Android Library, the classes will be obfuscated and the scripts will fail to find the correct classes. A good example of this is OkHttp. When an application is using OkHttp and has been obfuscated, you’ll have to figure out the obfuscated name of the CertificatePinner.Builder class. You can see below that obfuscated OkHttp was used by searching on the same sha256 string. This time, you won’t see nice OkHttp class references, but you will typically still find string references and maybe some package names as well. This depends on the level of obfuscation of course.

You’ll have to write your own Frida script to hook the obfuscated version of the CertificatePinner.Builder class. I have written down the steps to easily find the correct method, and create a custom Frida script in this blogpost.

Pinning through various libraries

Instead of using the networkSecurityConfig or OkHttp, developers can also perform SSL pinning using many different standard Java classes or imported libraries. Additionally, some Java-based third-party app frameworks, such as PhoneGap or Appcelerator, provide specific functions to the developer to add pinning to the application.

There are many ways to do it programmatically, so your best bet is to simply try various anti-pinning scripts and at least figure out which methods are being triggered. That gives you information on the app, after which you may be able to further reverse engineer it to figure out why interception isn’t working yet.

Try as many SSL pinning scripts you can find, and monitor their output. If you can identify certain classes or frameworks that are used, this will help you in creating your own custom SSL pinning bypasses specific for the application.

Pinning in third party app frameworks

Third party app frameworks ship their own low-level implementations of TLS and HTTP, so default pinning bypass scripts won’t work. If the app is written in Flutter, Xamarin or Unity, you’ll need to do some manual reverse engineering.

Figuring out if a third party app framework is used

As mentioned in a previous step, the following files are giveaways for either Flutter, Xamarin or Unity:

  • Flutter: myapp/lib/arm64-v8a/libflutter.so
  • Xamarin: myapp/unknown/assemblies/Mono.Android.dll
  • Unity: myapp/lib/arm64-v8a/libunity.so
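
This check can be scripted against an apk’s file listing; a minimal sketch using the filenames from the list above:

```shell
# Guess the app framework from an apk file listing, e.g. the output of
# "unzip -l myapp.apk" or an apktool output directory listing.
detect_framework() {
  local listing="$1"
  case "$listing" in
    *libflutter.so*)    echo "Flutter" ;;
    *Mono.Android.dll*) echo "Xamarin" ;;
    *libunity.so*)      echo "Unity" ;;
    *)                  echo "none detected" ;;
  esac
}
# Usage: detect_framework "$(unzip -l myapp.apk)"
```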

Pinning in Flutter applications

Flutter is proxy-unaware and doesn’t use the system’s CA store. Every Flutter app contains its own copy of trusted CAs, which is used to validate connections. So while the app most likely isn’t performing SSL pinning, it still won’t trust the root CAs on your device, and interception will not be possible. More information is available in the blog posts mentioned below.
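
Because the proxy setting is ignored, traffic first has to be forced through your proxy at the network level. On a rooted device this can be sketched with iptables; the proxy address in the usage example is an assumption:

```shell
# Redirect the device's HTTPS traffic to an intercepting proxy, since
# Flutter ignores the system proxy settings. Requires a rooted device.
redirect_https_to_proxy() {
  local proxy="$1"   # e.g. 192.168.1.100:8080 (example address)
  adb shell su -c "iptables -t nat -A OUTPUT -p tcp --dport 443 \
    -j DNAT --to-destination $proxy"
}
# Usage: redirect_https_to_proxy 192.168.1.100:8080
# Note: the TLS validation inside libflutter.so must still be patched or
# hooked before the proxy's CA is accepted.
```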

Follow my blog posts on intercepting Flutter traffic, which cover both 32-bit (ARMv7, x86) and 64-bit (ARM64, x64) builds.

Pinning in Xamarin and Unity applications

Xamarin/Unity applications usually aren’t too difficult, but they do require manual reverse engineering and patching. These applications contain .dll files in the assemblies/ folder, which can be opened using .NET decompilers. My favorite tool is dnSpy, which also allows you to modify the .dll files.

No blog post on this yet, sorry 😉. The steps are as follows:

  • Extract apk using apktool and locate .dll files
  • Open the .dll files using dnSpy and locate the pinning logic
  • Modify logic either by modifying the C# code or the IL
  • Save the modified module
  • Overwrite the .dll file with the modified version
  • Repackage and resign the application
  • Reinstall the application and run
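
The steps above can be sketched as a script. The .dll name and the signer tool are assumptions, and the dnSpy edit itself happens manually on your workstation:

```shell
# Repackaging workflow around the manual dnSpy edit. Tool and file names
# (uber-apk-signer, MyApp.Core.dll) are examples, not from the original post.
patch_xamarin_apk() {
  local apk="$1" patched_dll="$2"
  apktool d "$apk" -o src                          # extract apk; .dll files are in src/unknown/assemblies/
  # ...open the relevant .dll in dnSpy, modify the pinning logic, save the module...
  cp "$patched_dll" "src/unknown/assemblies/$(basename "$patched_dll")"  # overwrite
  apktool b src -o patched.apk                     # repackage
  java -jar uber-apk-signer.jar -a patched.apk     # re-sign (signer tool is an example)
  adb install -r patched-aligned-debugSigned.apk   # reinstall and run
}
# Usage (hypothetical): patch_xamarin_apk myapp.apk MyApp.Core.dll
```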

What if you still can’t intercept traffic?

It’s definitely possible that after all of these steps, you still won’t be able to intercept all the traffic. The typical culprits:

  • Non-HTTP protocols (we’re only using an HTTP proxy, so non-HTTP protocols won’t be intercepted)
  • Very heavy obfuscation
  • Anti-tampering controls

You will usually see these features in either mobile games or financial applications. At this point, you’ll have to reverse engineer the application and write your own Frida scripts. This can be an incredibly difficult and time-consuming process, and a step-by-step guide such as this one will never be able to help you there. But that, of course, is where the fun begins 😎.

About the author


Jeroen Beckers is a mobile security expert working in the NVISO Cyber Resilience team. He is a SANS instructor and SANS lead author of the SEC575 course. Jeroen is also a co-author of OWASP Mobile Security Testing Guide (MSTG) and the OWASP Mobile Application Security Verification Standard (MASVS). He loves to both program and reverse engineer stuff. You can find Jeroen on LinkedIn.
